
if not IsEmpty() then DeleteAll()

My previous post on good and bad coding practices in terms of performance touched upon one particularly controversial coding pattern - calling IsEmpty before the DeleteAll and ModifyAll functions.

You may have come across code structures like this:

Record.SetFilter(...);
if not Record.IsEmpty() then
    Record.DeleteAll();

This code probes the filtered Record variable to check if there are any records to delete, and skips DeleteAll if there are none. But if there is nothing to delete, DeleteAll won't do anything, right?

Is the additional query really useful? And if yes, when and how does it help?

In this post, I will try to answer these questions.


How Delete query locks


To start looking for the answer, let's first refer to the Transaction locking and row versioning guide and see what it has to say about the sequence of the locks that SQL Server places on resources requested by the query.

... a shared intent lock is requested at the table level before shared (S) locks are requested on pages or rows within that table. Setting an intent lock at the table level prevents another transaction from subsequently acquiring an exclusive (X) lock on the table containing that page. Intent locks improve performance because the SQL Server Database Engine examines intent locks only at the table level to determine if a transaction can safely acquire a lock on that table. This removes the requirement to examine every row or page lock on the table to determine if a transaction can lock the entire table.

Basically, this paragraph says that the database engine requests an intent lock at the table level before placing any locks on lower-level objects (pages, rows, and keys). This ensures that no other process is holding an exclusive lock on the table, and at the same time declares the transaction's intentions to other transactions.

Before deleting a table row, SQL Server places an intent exclusive (IX) lock on the table, another IX lock on the page containing the record, and an exclusive (X) lock on the row itself. If there is a related index which must be updated, additional locks are placed on the index entry and the page.


Note


The same sequence of locks is acquired when an UPDATE query is executed, so I won't be covering both scenarios separately. The same reasoning applies equally to DeleteAll and ModifyAll.


This locking protects table rows being updated by one transaction from other transactions executed at the same time. An exclusive lock means exactly this - no other process can place any other lock on the row while the exclusive lock is active. On the other hand, it doesn't prevent concurrent transactions from updating other rows in the same table. Each transaction places an exclusive lock on the row it is updating, and an IX lock at the table level. Two intent exclusive locks are compatible with each other, which means that multiple transactions can acquire this lock on the table simultaneously.


If there are more records to be deleted or updated, the database engine keeps placing new locks on each individual row until the lock escalation threshold is reached. When the lock is escalated, the IX lock at the table level is changed to an exclusive (X) lock, and all individual row-level locks are released. Because the exclusive lock is incompatible with any other lock type (including IX locks), from this moment on, no other transaction can modify or delete any row in the table.

You can read more on locking and escalations in one of my previous posts on this topic.


Testing tool

To illustrate the locking processes and the mutual impact of escalation between two sessions, I prepared a small demo tool, which, as usual, you can find in my GitHub repository.


The tool boasts a minimalistic UI which consists of one page, Locking Test. It starts parallel background sessions running various data manipulations (one thing at a time, though) on a test table, also called Locking Test.


Before running any test scenario, I need to fill the test table with some data. The page action Initialize Test Table inserts 20,000 records - more than enough to demonstrate lock escalations.
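A minimal sketch of what such an initialization action might do (the field name "Entry No." is my assumption here; the actual code is in the repository):

```al
local procedure InitializeTestTable()
var
    LockingTest: Record "Locking Test";
    I: Integer;
begin
    // Start from a clean slate, then insert sequential test records
    LockingTest.DeleteAll();
    for I := 1 to 20000 do begin
        LockingTest.Init();
        LockingTest."Entry No." := I;
        LockingTest.Insert();
    end;
end;
```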

The other actions on the page are five preset scenarios which I will walk through in this post. These scenarios demonstrate session interactions under different types of locking.


There are no settings for the tool at the moment - all scenarios are hardcoded, so if you want to play with the table size, operation sequence, or timings, it all has to be updated in the app. Maybe one day I will add configuration when I get bored and want to code something for fun.


Test scenarios

Besides the action buttons, the page displays two panes where we can see start/stop events with respective timings for each session, as well as error messages if a session fails. Finally, the Refresh action updates the events view. This is all of the UI. Now let's try to run the scenarios.




Scenario 1


  • Session 1 starts and modifies the first record in the table

  • Session 1 gets suspended for 35 seconds to imitate long processing

  • Session 2 starts and waits for 2 seconds

  • Session 2 modifies a range of values from 5001 to 20000




In all of my scenarios, Session 1, which starts first, imitates a long-running process that acquires a lock on a row or a range of rows. Session 2 is a shorter process which, in its turn, locks another range of rows. These ranges never overlap, so there is no direct update conflict on any particular record.
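In AL terms, the orchestration of such a scenario might look roughly like this (the codeunit names and the ID here are hypothetical; the actual implementation is in the repository):

```al
procedure RunScenario1()
var
    Session1Id: Integer;
    Session2Id: Integer;
begin
    // Session 1: modifies the first record, then holds its locks for 35 seconds
    StartSession(Session1Id, Codeunit::"Long Running Modify");
    // Session 2: waits 2 seconds, then modifies records 5001..20000
    StartSession(Session2Id, Codeunit::"Range Modify");
end;
```

The long-running session codeunit can be as simple as:

```al
codeunit 50101 "Long Running Modify"
{
    trigger OnRun()
    var
        LockingTest: Record "Locking Test";
    begin
        LockingTest.FindFirst();
        LockingTest.Modify();
        Sleep(35000); // hold the row lock; the implicit commit happens when OnRun ends
    end;
}
```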

In the first scenario, both sessions update some records, and both complete successfully. Session 1 runs first, modifies one record, and holds the lock for 35 seconds, committing the changes at the end of the process.

Session 2, which starts later, does its update of 15000 records and finishes in around 3 seconds (including the 2 seconds idle time).


No conflicts here. But if I suspend Session 2 and let it hang a little longer after updating all the records (just enough for me to switch to SQL Server Management Studio and run a query against the sys.dm_tran_locks management view), I can see a very long list of active locks placed on table rows.
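The DMV query itself can be as simple as the following (a generic form, not tied to the demo; it summarises the active locks in the current database by type and mode):

```sql
SELECT resource_type, request_mode, request_status, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
GROUP BY resource_type, request_mode, request_status
ORDER BY resource_type, request_mode;
```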



The active IX lock at the table level held by Session 1 prevents lock escalation in Session 2, but does not stop it from updating all 15,000 records. This way SQL Server achieves better concurrency in exchange for higher memory consumption due to a large number of fine-grained locks.


Scenario 2


  • Session 1 starts and modifies a range of values from 1 to 15000

  • Session 1 gets suspended for 35 seconds to imitate long processing

  • Session 2 starts and waits for 2 seconds

  • Session 2 modifies the last record in the table


Now I am switching the order of the two processes. This time, the longer session, which starts first, updates 15,000 rows and triggers the escalation, whilst Session 2 attempts to modify a single record.


Now the result is very different. In the absence of any concurrent transactions, Session 1 successfully acquires the exclusive lock on the table after the escalation and holds it until the end of the process. Session 2, which attempts to modify one record, now cannot acquire the IX lock on the table and has to wait until it gets killed by the timeout. And we see the error message so much loved by the users:

We can't save your changes right now, because a record in table 'Locking Test' is being updated in a transaction done by another session.

Scenario 3


  • Session 1 starts and modifies the first record in the table

  • Session 1 gets suspended for 35 seconds to imitate long processing

  • Session 2 starts and waits for 2 seconds

  • Session 2 reads values from 5001 to 15000 with RepeatableRead isolation


Scenarios 3 and 4 repeat the previous two, except that the session acquiring locks on a range of rows runs FindSet with RepeatableRead isolation instead of modifying records.

This scenario follows the steps of Scenario 1: Session 1 modifies one record, and Session 2 reads a large range of rows with RepeatableRead.
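In AL, reading under RepeatableRead can be expressed with the ReadIsolation record property (available in recent Business Central versions; the field name is assumed):

```al
local procedure ReadRangeWithRepeatableRead()
var
    LockingTest: Record "Locking Test";
begin
    LockingTest.ReadIsolation := IsolationLevel::RepeatableRead;
    LockingTest.SetRange("Entry No.", 5001, 15000);
    if LockingTest.FindSet() then
        repeat
        // read-only processing; shared locks are held until the transaction ends
        until LockingTest.Next() = 0;
end;
```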



And just like in the first one, both sessions complete successfully without interfering with each other.


Scenario 4


  • Session 1 starts and reads values from 1 to 15000 with RepeatableRead isolation

  • Session 1 gets suspended for 35 seconds to imitate long processing

  • Session 2 starts and waits for 2 seconds

  • Session 2 modifies the first record in the table


This one is a variation of Scenario 2, replacing ModifyAll with FindSet under RepeatableRead isolation.



And once again, the session that attempts to modify a record fails after a timeout - for the same reason. A select with the RepeatableRead hint places locks on the same resources as a delete or an update does, but instead of exclusive or intent exclusive locks, the select acquires shared (S) locks on the table rows, and intent shared (IS) locks on the pages and the table. Shared locks are escalated following the same rules as exclusive locks, so when the number of row locks reaches the escalation threshold, the IS lock on the table is converted to a shared (S) lock and all granular row locks are released.

After this, Session 2 starts its update and requests the table-level IX lock, which, according to the Microsoft transaction locking guide, is incompatible with the shared lock. So Session 2 is left waiting for the shared lock to be released - until the timeout.




Scenario 5


And finally, the scenario that triggered this long and boring deep dive into locking compatibility. What happens if one of the concurrent sessions tries to delete a record which doesn't exist?

Scenario 5 in my locking test tool runs two sessions with the following actions:


  • Session 1 starts and reads values from 1 to 15000 with RepeatableRead isolation

  • Session 1 gets suspended for 35 seconds to imitate long processing

  • Session 2 starts and waits for 2 seconds

  • Session 2 attempts to modify a value with ID = -1 which does not exist in the table


After all the previous explanations, we can go straight to the result - and see that the second session fails, even though there are no records to update.



I think the reason for this error must be clear by now. Like in the previous examples, Session 1 in this scenario escalated its lock to the table level before Session 2 began the update. The first thing the second session does is request the IX lock on the table, even before searching for any records to update. The IX lock is incompatible with the active S lock, so Session 2 has to wait for Session 1 to finish its processing and release the lock.

This example executes a FindSet with RepeatableRead to demonstrate the principle - an escalation which blocks updates can happen even on read operations.
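This is where the IsEmpty guard makes the difference: IsEmpty issues a plain select, which does not request the table-level IX lock, so the guarded version returns immediately when there is nothing to delete. A sketch with an assumed field name:

```al
local procedure SafeDeleteRange()
var
    LockingTest: Record "Locking Test";
begin
    LockingTest.SetRange("Entry No.", -1);
    // IsEmpty reads without requesting an IX lock on the table
    if not LockingTest.IsEmpty() then
        // DeleteAll requests the IX lock even when the filter matches nothing
        LockingTest.DeleteAll();
end;
```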


Conclusion


To conclude, patterns like

if not Record.IsEmpty() then
    Record.DeleteAll()

or

if not Record.IsEmpty() then
    Record.ModifyAll()

can be beneficial under certain conditions:

  • Multiple concurrent transactions are updating or deleting records in the same table simultaneously;

  • Recordsets being updated are likely to be large enough to trigger a lock escalation (roughly 5000 records or more).


Besides that, if some sessions in your environment read a relatively large number of records under the RepeatableRead isolation level, they can also block concurrent updates.
