Nice summary. Thanks Connor.
Your last method at the end with MOVE ONLINE is elegant. If that's not possible, a fast solution for an in-place DELETE on large non-partitioned tables is to sort the rows for deletion by DB block ID via the ROWID, perhaps using a temp table, and then batch-delete the rows. This reduces db file sequential reads by clustering the deletes by physical location. Linux SD devices, and often SAN devices, will read ahead and pre-cache adjacent data blocks.
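A minimal sketch of the idea, assuming a hypothetical BIG_TABLE with an IS_EXPIRED flag (within a single non-partitioned table, ROWID order is physical block order):

-- stage the ROWIDs of the doomed rows in a temp table
create global temporary table purge_candidates (rid rowid)
  on commit preserve rows;

insert into purge_candidates
select rowid from big_table where is_expired = 'Y';

-- walk the candidates in ROWID (i.e. block) order, deleting in batches
declare
  cursor c is select rid from purge_candidates order by rid;
  type t_rids is table of rowid;
  l_rids t_rids;
begin
  open c;
  loop
    fetch c bulk collect into l_rids limit 10000;
    exit when l_rids.count = 0;
    forall i in 1 .. l_rids.count
      delete from big_table where rowid = l_rids(i);
    commit;  -- per-batch commit keeps undo usage bounded
  end loop;
  close c;
end;
/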
Indeed - an excellent option. I'd love to see the database do this natively
Hi Connor... please make a detailed video on how to effectively read AWR and ADDM reports.
It's on the list.
As always there is a tradeoff: in order to move the table and keep the needed rows (assuming we need to delete 20% of the table), we still need almost double the space. However, it is a great option!
Indeed. Although space is so cheap and plentiful nowadays (I know this is a generalisation :-))
Absolutely agree 💯
Can we use multithreading? I.e., dividing the rows to delete with some logic and distributing them to multiple sessions. Like DBMS_PARALLEL_EXECUTE, where the rows are split into chunks and each chunk is allocated to a session.
Yes you can. DBMS_PARALLEL_EXECUTE is a common option here. That improves response time, but you'll be doing the same amount of work in total.
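A sketch of that approach, reusing the hypothetical BIG_TABLE / IS_EXPIRED names from above (the :start_id and :end_id binds are required by the API):

begin
  dbms_parallel_execute.create_task(task_name => 'purge_big_table');

  -- carve the table into ROWID ranges of roughly 10000 blocks each
  dbms_parallel_execute.create_chunks_by_rowid(
    task_name   => 'purge_big_table',
    table_owner => user,
    table_name  => 'BIG_TABLE',
    by_row      => false,
    chunk_size  => 10000);

  -- run the delete across 8 concurrent scheduler jobs
  dbms_parallel_execute.run_task(
    task_name      => 'purge_big_table',
    sql_stmt       => 'delete from big_table
                        where rowid between :start_id and :end_id
                          and is_expired = ''Y''',
    language_flag  => dbms_sql.native,
    parallel_level => 8);

  dbms_parallel_execute.drop_task(task_name => 'purge_big_table');
end;
/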
If we try this out on a 5 TB table, then since it's online, would the database need an additional 5 TB to create an MV in the background to support the online operation?
Correct. A "move" always needs (temporarily) the old and new data.
Sir, is it true that 'bulk collect' is the best approach to efficiently delete a large number of rows from a table that contains a huge amount of data?
Bulk collect is definitely better than a single row at a time, but native SQL (delete from ...) is implicitly the same or better than bulk collect.
But for a non-partitioned table, when deleting millions of rows, bulk collect with some limit and a commit per batch is good compared to a simple delete, right?
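For reference, the set-based alternative the reply above recommends is a single statement (same illustrative names); the usual reason to batch-and-commit is to bound undo usage, not to go faster:

delete from big_table where is_expired = 'Y';
commit;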
Hi.. can you make a video on varchar2(32k)? I used it in my table and found the row size increased, and the table is now the largest in the DB. Is the 32k varchar taking lots of space in ASM? Should I move to a CLOB?
A varchar2(32k) is just a CLOB under the covers (once the data is larger than ~4k), so you incur all the same performance overheads.
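For anyone trying this out, extended strings have to be enabled first; a minimal illustration with a hypothetical table name:

-- requires the database to be running with max_string_size=extended
create table demo_v32k (txt varchar2(32767));
-- values up to ~4000 bytes are stored inline; longer values go out of line, like a CLOB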
Nice
Seems like a nuke would be faster.
"truncate" has always been very "thorough" :-)