Indexing for Delete - SQL in Sixty Seconds 197

  • Published: 21 Dec 2024

Comments •

  • @zberteoc
    @zberteoc 2 years ago +16

    When the scope is deleting multiple rows selected by some WHERE clause condition, you don't do it directly with a DELETE statement using that condition, especially when we talk about a large number of rows. You first isolate the PK values in a separate table, called, say, delete_items, in ascending order, with an identity column that becomes the PK of the delete_items table. You then walk this new table in a loop based on its identity column, taking 1, 10, 100, or 1,000 rows at a time, inner joining to the target table on its PK against the PK values saved in the delete_items table, and delete the rows that way. A DELETE with a WHERE clause over a large number of rows will lock the table for the duration of the delete, which will cause issues in a live environment. That is why you need to divide and conquer. It is similar in the range situation, but there is really no way to delete that avoids the logical reads, as it has to look for the rows in the table, and all their counterparts in every index that exists on the target table, and remove them all. DELETE is one of the heaviest operations in the SQL world.
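
    The divide-and-conquer approach described above can be sketched in T-SQL roughly as follows. The table and column names (dbo.Orders, OrderID, OrderDate) and the batch size are illustrative placeholders, not from the video:

    ```sql
    -- Stage the PK values to delete (hypothetical table/column names).
    CREATE TABLE #delete_items (
        row_id  INT IDENTITY(1,1) PRIMARY KEY,
        OrderID INT NOT NULL
    );

    INSERT INTO #delete_items (OrderID)
    SELECT OrderID
    FROM dbo.Orders
    WHERE OrderDate < '2015-01-01'   -- the original WHERE condition
    ORDER BY OrderID;

    -- Delete in small batches so locks stay short-lived.
    DECLARE @from INT = 1, @batch INT = 1000, @max INT;
    SELECT @max = MAX(row_id) FROM #delete_items;

    WHILE @from <= @max
    BEGIN
        DELETE o
        FROM dbo.Orders AS o
        INNER JOIN #delete_items AS d
            ON d.OrderID = o.OrderID
        WHERE d.row_id BETWEEN @from AND @from + @batch - 1;

        SET @from += @batch;
    END;
    ```

    Each loop iteration touches only @batch rows, so other sessions get a chance to acquire locks between batches instead of waiting out one long table-level lock.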

  • @thevadeva3540
    @thevadeva3540 2 years ago +2

    We can insert those records (the ones which need to be deleted) into a temp table using a SELECT query with the WHERE conditions, and then delete them by joining to the temp table; this will increase the performance. Please try this and share your thoughts. Thanks
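
    A minimal sketch of this temp-table variant, with hypothetical table and column names (dbo.Orders, OrderID, Status):

    ```sql
    -- Capture the keys of the rows to remove first.
    SELECT OrderID
    INTO #to_delete
    FROM dbo.Orders
    WHERE Status = 'Cancelled';   -- the original WHERE condition

    -- Then delete by joining on the captured keys.
    DELETE o
    FROM dbo.Orders AS o
    INNER JOIN #to_delete AS t
        ON t.OrderID = o.OrderID;
    ```

    Whether this helps depends on the workload; it mainly moves the expensive predicate evaluation into a read-only step, so the DELETE itself only does keyed lookups.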

  • @briangalebmg002
    @briangalebmg002 2 years ago +1

    Another thing to note is that it also depends on how many indexes you have on the table. Having one non-clustered index on the column you are filtering by (similar to a SELECT) will often help your delete, as it can do a fast lookup on the value, BUT each non-clustered index increases the number of delete operations that must be performed. If you are deleting 1 row from a table with 1 non-clustered index, it will do 2 delete operations: one from the table (heap or clustered index) and one from the non-clustered index. If you have 100 non-clustered indexes on the table, then it'll need to do 101 delete operations. Having a GOOD clustered index key will help reduce the number of non-clustered indexes you need to make, and thus save you disk space and data-manipulation time. INSERTs, UPDATEs, and DELETEs need to jump through each non-clustered index and change the data appropriately. This can apply to filtered indexes as well: even if your data doesn't exist in the filtered index, it may need to scan the whole index anyway. Something I am a little curious about is whether SQL would behave better if you used different syntax in the second delete example. I've read mixed things about BETWEEN and how the query optimizer generates the query plan for it. Would using >= and …
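
    The range syntaxes this comment asks about can be put side by side. In SQL Server, BETWEEN is inclusive on both ends, so for continuous values such as dates a half-open >= / < range is often the safer way to express boundaries. Table and column names here are hypothetical:

    ```sql
    -- Inclusive on both ends: rows exactly at either bound are deleted.
    DELETE FROM dbo.Orders
    WHERE OrderID BETWEEN 1000 AND 1999;

    -- Explicit equivalent; the optimizer generally produces the same
    -- range seek for both forms.
    DELETE FROM dbo.Orders
    WHERE OrderID >= 1000 AND OrderID <= 1999;

    -- Half-open form, often preferred for date ranges so the upper
    -- boundary value (midnight of the next period) is not swept in.
    DELETE FROM dbo.Orders
    WHERE OrderDate >= '2022-01-01' AND OrderDate < '2023-01-01';
    ```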

  • @azeezullasheriff7746
    @azeezullasheriff7746 11 months ago +1

    Can you also include a large object in the table and explain the deletion with an index and without an index, along with the locking impact?

  • @keep_digging
    @keep_digging 2 years ago +1

    It is interesting. As a general rule I am not deleting data very often, but it's very handy to know, or at least have some insight, that the index(es) on a table you are doing a complex delete on will have an effect. Cheers Pinal

  • @maftahur
    @maftahur 2 years ago +1

    Query: When deleting records from an indexed table, doesn't it also add extra cost to maintain the indexes for the remaining rows?

  • @MiningForPies
    @MiningForPies 2 years ago +2

    Indexes are a nightmare when they go wrong. Got one query at work that is reading 197 GB worth of pages because it does a key lookup millions of times.

  • @frankzelazko
    @frankzelazko 2 years ago +2

    cheers Pinal

  • @hovardlee
    @hovardlee 2 years ago +2

    It is all about the usage pattern of your system. Most of the time an index will help, because you delete in a single operation. When you delete, say, thousands of records or more, you usually don't do it very often, and you accept the performance cost because of how rarely it happens.

  • @TheANKUSHVIDEOYOUTUB
    @TheANKUSHVIDEOYOUTUB 2 years ago +1

    Sir, please make a video on Azure Synapse dedicated SQL pool

  • @Knowledgegreenone
    @Knowledgegreenone 2 years ago +1

    Hello, please make a video on functions, dynamic SQL, and stored procedures

    • @PinalDaveSQLAuthority
      @PinalDaveSQLAuthority  2 years ago

      ok

    • @Knowledgegreenone
      @Knowledgegreenone 2 years ago +1

      @@PinalDaveSQLAuthority can you provide me free access to the MySQL course on Pluralsight created by you? I want to learn, as it is required to gain knowledge and also from a professional perspective.

    • @PinalDaveSQLAuthority
      @PinalDaveSQLAuthority  2 years ago

      @@Knowledgegreenone It is not possible to distribute the course for FREE; however, you can sign up for a free trial on the website.

    • @Knowledgegreenone
      @Knowledgegreenone 2 years ago +1

      @@PinalDaveSQLAuthority Also make videos on JSON handling and XML handling in SQL

    • @PinalDaveSQLAuthority
      @PinalDaveSQLAuthority  2 years ago

      @@Knowledgegreenone Sure.

  • @milesforrest6920
    @milesforrest6920 2 years ago +1

    Thanks Pinal. Are the extra logical reads the result of having to maintain the index (removal of entries)?

    • @PinalDaveSQLAuthority
      @PinalDaveSQLAuthority  2 years ago

      The extra logical reads come from finding the records which need to be deleted.

  • @seeniibrahim2280
    @seeniibrahim2280 2 years ago +1

    Soft delete of records by having a bit column: update the column when deleting. Hopefully an update operation on an indexed column won't use many logical reads.
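
    A soft-delete scheme like the one suggested can be paired with a filtered index so that queries over the "live" rows stay cheap. The table and column names below (dbo.Orders, IsDeleted, OrderDate, OrderID) are illustrative assumptions:

    ```sql
    -- Hypothetical soft-delete flag added to the table.
    ALTER TABLE dbo.Orders
    ADD IsDeleted BIT NOT NULL DEFAULT 0;

    -- Filtered index covering only the live rows.
    CREATE NONCLUSTERED INDEX IX_Orders_Live
    ON dbo.Orders (OrderDate)
    WHERE IsDeleted = 0;

    -- "Delete" a row by flipping the flag instead of removing it.
    UPDATE dbo.Orders
    SET IsDeleted = 1
    WHERE OrderID = 42;
    ```

    Note the update is not free: flipping the flag moves the row out of the filtered index, so index maintenance still happens, just on far fewer structures than a physical DELETE across every index on the table.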