Great lesson, thanks!
You mention it is possible to use dbms_parallel_execute to do an alter index rebuild… can you present an example of that?
Will this apply if I have a bigfile tablespace?
Hi Connor! I don't understand how breaking it down by file helps. We're still doing table access by rowid range, right? Is the objective to ensure that multi-block reads are not interrupted by file breaks?
We guarantee that we won't ever have to scan a range of data that does not apply to this table.
You only get multiblock read breaks for the first, smaller extents, but once they hit 1MB there will not be a break.
And presumably you're only going to use this for tables of some significant size.
@@DatabaseDude why would we be scanning data outside the current table?
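(For anyone wondering what "a range of data that does not apply to this table" looks like in practice: below is a rough sketch, assuming a table called T owned by SCOTT, of deriving one rowid range per extent from dba_extents. It is not the actual script from the video, just an illustration of why ranges built this way can never cover another segment's blocks.)

-- one rowid range per extent of the table; blocks belonging to other
-- segments in the same data file are never included in any range
select dbms_rowid.rowid_create(1, o.data_object_id, e.relative_fno,
                               e.block_id, 0)                     as start_rowid,
       dbms_rowid.rowid_create(1, o.data_object_id, e.relative_fno,
                               e.block_id + e.blocks - 1, 32767)  as end_rowid
from   dba_extents e,
       dba_objects o
where  e.owner        = 'SCOTT'        -- placeholder owner
and    e.segment_name = 'T'            -- placeholder table name
and    o.owner        = e.owner
and    o.object_name  = e.segment_name
and    o.object_type  = 'TABLE'
order  by e.file_id, e.block_id;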
Hi, thanks for this session. Is it possible for you to share the script used in this session?
Yes - it's here: github.com/connormcd/misc-scripts/tree/master/office-hours
Great site and scripts! I just can't find the one for this video.
Your own list of chunks is not better than the parallel chunks; you still have multiple per file. It might only decrease the seeking for a given job, but then it has many more jobs with a less predictable overall size. So I am not sure it's worth it (but the queries are neat; do they translate well to ASM and Exadata?)
The number of jobs is unrelated to the number of chunks - it is governed by the job queue parameters. It is not multiple chunks per file that we are trying to avoid; it is about guaranteeing that we won't ever have to scan a range of data that does not apply to this table.
@@DatabaseDude Ah I see, you mean DBMS_PARALLEL_EXECUTE does not skip over file extents which are not part of the table. That does look like an important possible improvement.
@@DatabaseDude But it produces multiple tasks per file if they have multiple non-consecutive extents (however, I guess it doesn't really matter whether you access a single file in parallel or multiple files; but since you explicitly mentioned that this happens with the standard method, it also happens with yours).
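(To make the chunks-versus-jobs point concrete: below is a minimal sketch, not from the video, of registering a hand-built chunk list with DBMS_PARALLEL_EXECUTE and running it. The task name, the my_chunks table, the update statement and the parallel_level of 8 are all placeholder assumptions.)

begin
  dbms_parallel_execute.create_task('MY_TASK');

  -- register a hand-built chunk list; the query must return one pair of
  -- rowids (start, end) per chunk, e.g. built from dba_extents as sketched above
  dbms_parallel_execute.create_chunks_by_sql(
      task_name => 'MY_TASK',
      sql_stmt  => 'select start_rowid, end_rowid from my_chunks',
      by_rowid  => true );

  -- however many chunks were registered, at most parallel_level scheduler
  -- jobs run them, and those jobs are still subject to the instance's
  -- job queue settings
  dbms_parallel_execute.run_task(
      task_name      => 'MY_TASK',
      sql_stmt       => 'update t set col = col where rowid between :start_id and :end_id',
      language_flag  => dbms_sql.native,
      parallel_level => 8 );

  dbms_parallel_execute.drop_task('MY_TASK');
end;
/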