This requirement again comes from repeated posts in my local user group. Business processes sometimes need to update millions of rows based on some criteria; for that matter, sometimes we need to delete tons of rows as well. Issuing a single DELETE statement does not help, as it consumes a lot of time. Below is a simple and interesting approach to the problem at hand. Our motto is to perform the operation quickly and efficiently, while also limiting the growth of our transaction log.
SET ROWCOUNT 1000

The logic behind this is very simple indeed. SET ROWCOUNT causes SQL Server to stop processing a statement after the specified number of rows has been affected. This technique is useful because it avoids the concurrency hit that large updates incur; the smaller the batch size (the number of rows per update), the less likely the update task will block other users from accessing the data. Combined with frequent transaction-log backups, this method also keeps the transaction-log size to a minimum, since each small transaction can be cleared from the log after it commits. And the whole logic can be applied to a DELETE statement too.
WHILE (1 = 1)
BEGIN
    BEGIN TRANSACTION

    -- Update up to 1000 not-yet-updated rows; the WHERE clause
    -- skips rows already touched in earlier batches
    UPDATE ... SET ..., MyLastUpdate = 'date', ...
    WHERE MyLastUpdate < 'date'

    -- Nothing left to update: commit the empty batch and exit
    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION
        BREAK
    END

    COMMIT TRANSACTION
END

SET ROWCOUNT 0   -- reset, so later statements in the session are unaffected
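The same batching pattern applied to a mass delete can be sketched as follows; the table name MyTable and the purge criterion are hypothetical placeholders, not from the original post:

SET ROWCOUNT 1000   -- delete at most 1,000 rows per batch

WHILE (1 = 1)
BEGIN
    BEGIN TRANSACTION

    -- MyTable / MyLastUpdate are assumed names for illustration
    DELETE FROM MyTable
    WHERE MyLastUpdate < 'date'

    -- No matching rows remain: commit and exit the loop
    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION
        BREAK
    END

    COMMIT TRANSACTION
END

SET ROWCOUNT 0   -- reset the session setting

Note that Microsoft's documentation deprecates SET ROWCOUNT for data-modification statements; on SQL Server 2005 and later, DELETE TOP (1000) FROM MyTable WHERE ... inside the same loop achieves the equivalent batching and is the recommended form.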