When the task is deleting multiple rows selected by some WHERE clause condition, you don't do it directly with a single DELETE statement using that condition, especially when a large number of rows is involved. You first isolate the PK values into a separate table, e.g. delete_items, in ascending order, with an identity column that becomes the PK of the delete_items table. You then walk this new table in a loop based on its identity column, taking 1, 10, 100, or 1,000 rows at a time, inner joining back to the target table on its PK using the PK values saved in delete_items, and delete the rows that way. A DELETE with a WHERE clause over a large number of rows will lock the table for the duration of the delete, which causes issues in a live environment. That is why you need to divide and conquer. The same applies to the range case, but there is really no way to delete that avoids the logical reads: the engine has to find the rows in the table, plus all their counterparts in every index that exists on the target table, and remove them all. DELETE is one of the heaviest operations in the SQL world.
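To make that concrete, here is a minimal sketch of the divide-and-conquer loop in T-SQL. The table dbo.Orders, its key OrderID, the OrderDate filter, and the batch size are all hypothetical placeholders, not something from the video:

-- Stage the PK values to delete; the identity column drives the batching.
CREATE TABLE dbo.delete_items
(
    batch_id INT IDENTITY(1, 1) PRIMARY KEY,
    OrderID  INT NOT NULL
);

INSERT INTO dbo.delete_items (OrderID)
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate < '20200101'   -- the original WHERE condition
ORDER BY OrderID;

DECLARE @batch_size INT = 1000;
DECLARE @from_id    INT = 1;
DECLARE @max_id     INT = (SELECT MAX(batch_id) FROM dbo.delete_items);

WHILE @from_id <= @max_id
BEGIN
    -- Delete one small slice at a time so locks stay short-lived.
    DELETE o
    FROM dbo.Orders AS o
    INNER JOIN dbo.delete_items AS d
        ON d.OrderID = o.OrderID
    WHERE d.batch_id >= @from_id
      AND d.batch_id <  @from_id + @batch_size;

    SET @from_id += @batch_size;
END;

Keeping each slice small is the whole point: no single statement holds locks long enough to block the live workload.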
Thank you for the detailed comment. It helps.
💯
Brilliant!
Thank You.
I love this response. Do you mind sharing a link to where you demo this? I have struggled with this before.
We can insert those records (which need to be deleted) into a temp table using a SELECT query with the WHERE conditions, and then delete those records from the main table by joining to the temp table; that will increase the performance. Please try this and share your thoughts. Thanks
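A quick sketch of that temp-table variant, using the same hypothetical dbo.Orders/OrderID names as above:

-- Capture the keys of the rows to remove.
SELECT OrderID
INTO #to_delete
FROM dbo.Orders
WHERE OrderDate < '20200101';   -- the WHERE conditions

-- Remove the matching rows from the main table.
DELETE o
FROM dbo.Orders AS o
INNER JOIN #to_delete AS d
    ON d.OrderID = o.OrderID;

DROP TABLE #to_delete;

Note this is still one large DELETE at the end, so for very big row counts you would combine it with the batching loop from the earlier comment.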
Works.
Another thing to note: it also depends on how many indexes you have on the table. Having one non-clustered index on the column you are filtering by (similar to a SELECT) will often help your delete, since it can do a fast lookup on the value, BUT each non-clustered index increases the number of delete operations that must be performed. If you are deleting 1 row from a table with 1 non-clustered index, it will do 2 delete operations: one from the table (heap or clustered index) and one from the non-clustered index. If you have 100 non-clustered indexes on the table, then it'll need to do 101 delete operations. Having a GOOD clustered index key will help reduce the number of non-clustered indexes you need to make, and thus save you disk space and data manipulation time. INSERTs, UPDATEs, and DELETEs need to jump through each non-clustered index and change the data appropriately. This can apply to filtered indexes as well: even if your data doesn't exist in the filtered index, it may need to scan the whole index anyway. Something I am a little curious about is whether SQL would behave better if you used different syntax in the second delete example. I've read mixed things on BETWEEN and how the query optimizer generates the query plan for it. Would using >= and <= make a difference?
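On the BETWEEN question: BETWEEN is defined as shorthand for the pair of >= and <= predicates, so the optimizer normally produces the same plan for both. An easy way to check against your own table (dbo.Orders and the key range here are hypothetical, and each delete is rolled back so no data is lost):

SET STATISTICS IO ON;

-- BETWEEN form.
BEGIN TRANSACTION;
DELETE FROM dbo.Orders
WHERE OrderID BETWEEN 1000 AND 1999;
ROLLBACK TRANSACTION;

-- Equivalent inequality form; compare the plan and reads.
BEGIN TRANSACTION;
DELETE FROM dbo.Orders
WHERE OrderID >= 1000 AND OrderID <= 1999;
ROLLBACK TRANSACTION;

SET STATISTICS IO OFF;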
Very well said.
Can you also include a large object (LOB) column in the table and explain the deletion with an index and without an index, along with the locking impact?
good topic.
It is interesting. As a general rule I am not deleting data very often, so it is very handy to know, or at least have some insight, that the index(es) on a table you are doing a complex delete on will have an effect. Cheers Pinal
Thanks for the comment!
Query: When deleting records from an indexed table, doesn't it also add extra cost to re-index the remaining rows?
Not always.
Indexes are a nightmare when they go wrong. Got one query at work that is reading 197 GB worth of pages because it does a key lookup millions of times.
I totally agree with you.
cheers Pinal
Thanks!
It is all about the usage pattern of your system. Most of the time the index will help, because you delete in a single small operation. When you delete e.g. thousands or more records, you usually don't do it that often, and you accept the performance cost because of how seldom it happens.
Very true.
Sir, please make a video on Azure Synapse dedicated SQL pool.
Noted
Hello, please make a video on functions, dynamic SQL, and stored procedures.
ok
@PinalDaveSQLAuthority Can you provide me free access to your MySQL course on Pluralsight? I want to learn, as it is required to gain knowledge and also from a professional perspective.
@Knowledgegreenone It is not possible to distribute the course for FREE, however, you can sign up for a free trial on the website.
@PinalDaveSQLAuthority Also make videos on JSON handling and XML handling in SQL.
@Knowledgegreenone Sure.
Thanks Pinal. Are the extra logical reads the result of having to maintain the index (removal of entries)?
The extra logical reads come from finding the records which need to be deleted.
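A simple way to watch those reads yourself, sketched against the same hypothetical dbo.Orders table and wrapped in a rollback so no data is actually lost:

-- Report logical reads per table touched by the statement.
SET STATISTICS IO ON;

BEGIN TRANSACTION;

DELETE FROM dbo.Orders
WHERE OrderDate < '20200101';

ROLLBACK TRANSACTION;   -- keep the data; we only wanted the I/O numbers

SET STATISTICS IO OFF;

The Messages output then shows the logical reads for the base table and for each index the delete had to touch.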
Soft delete of records by having a bit column: update the column when the record is "deleted". Hopefully an update operation on an indexed column won't use many logical reads.
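A rough sketch of that soft-delete pattern, again with hypothetical names; the filtered index is an optional extra that lets queries over live rows skip the soft-deleted ones:

-- Flag column for soft deletes (run as its own batch).
ALTER TABLE dbo.Orders
    ADD is_deleted BIT NOT NULL DEFAULT 0;
GO

-- Optional: filtered index so lookups over live rows ignore deleted ones.
CREATE NONCLUSTERED INDEX IX_Orders_Live
    ON dbo.Orders (OrderID)
    WHERE is_deleted = 0;
GO

-- "Deleting" is now just flipping the flag.
UPDATE dbo.Orders
SET is_deleted = 1
WHERE OrderDate < '20200101';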
Often it does use many reads as well. Indexes can harm your system too.