Even faster than DBMS_PARALLEL_EXECUTE

  • Published: Dec 13, 2024

Comments • 13

  • @praveenkumar-fx5wx • 4 years ago • +1

    Great lesson, thanks!

  • @bzezinahapolania9086 • 2 years ago

    You mention it is possible to use dbms_parallel_execute to do an ALTER INDEX REBUILD… can you present an example of that?
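
    For illustration, a minimal sketch of one way that could look, assuming a locally partitioned index IX_SALES; the task name, index name, and chunking query here are hypothetical, not taken from the video. Each chunk is a range of partition positions, and each job rebuilds the partitions in its range:

      begin
        dbms_parallel_execute.create_task('rebuild_ix_sales');

        -- one chunk per partition, keyed by partition_position
        dbms_parallel_execute.create_chunks_by_sql(
          task_name => 'rebuild_ix_sales',
          sql_stmt  => q'[select partition_position, partition_position
                          from   user_ind_partitions
                          where  index_name = 'IX_SALES']',
          by_rowid  => false);

        -- each execution rebuilds the partitions in its :start_id..:end_id range
        dbms_parallel_execute.run_task(
          task_name      => 'rebuild_ix_sales',
          sql_stmt       => q'[begin
                                 for p in (select partition_name
                                           from   user_ind_partitions
                                           where  index_name = 'IX_SALES'
                                           and    partition_position
                                                    between :start_id and :end_id)
                                 loop
                                   execute immediate
                                     'alter index ix_sales rebuild partition '
                                     || p.partition_name;
                                 end loop;
                               end;]',
          language_flag  => dbms_sql.native,
          parallel_level => 4);
      end;
      /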

  • @laurentiuoprea06 • 4 years ago

    Will this apply if I have a bigfile tablespace?

  • @kaleycrum6350 • 4 years ago

    Hi Connor! I don't understand how breaking it down by file helps. We're still doing table access by rowid range, right? Is the objective to ensure that multi-block reads are not interrupted by file breaks?

    • @DatabaseDude • 4 years ago

      We guarantee that we won't ever have to scan a range of data that does not apply to this table.
      You only get multiblock read breaks for the first few smaller extents, but once they hit 1 MB there will not be a break.
      And presumably you're only going to use this for tables of some significant size.
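
      As a rough sketch of that idea (not Connor's actual script, which is linked further down; the table SCOTT.T and the UPDATE workload are placeholder assumptions): build the rowid chunks yourself from the table's extent map, so every chunk is guaranteed to cover only this table's blocks. For simplicity this makes one chunk per extent, without merging adjacent extents in the same file:

        begin
          dbms_parallel_execute.create_task('update_t_by_extent');

          -- hand-built rowid chunks straight from DBA_EXTENTS: no chunk can
          -- span blocks that belong to another segment
          dbms_parallel_execute.create_chunks_by_sql(
            task_name => 'update_t_by_extent',
            sql_stmt  => q'[
              select dbms_rowid.rowid_create(1, o.data_object_id,
                       e.relative_fno, e.block_id, 0)                    start_rid,
                     dbms_rowid.rowid_create(1, o.data_object_id,
                       e.relative_fno, e.block_id + e.blocks - 1, 32767) end_rid
              from   dba_extents e, dba_objects o
              where  e.owner        = 'SCOTT'
              and    e.segment_name = 'T'
              and    o.owner        = e.owner
              and    o.object_name  = e.segment_name
              and    o.object_type  = 'TABLE']',
            by_rowid  => true);

          -- placeholder workload: any rowid-range-driven DML would do
          dbms_parallel_execute.run_task(
            task_name      => 'update_t_by_extent',
            sql_stmt       => 'update scott.t set flag = 1
                               where rowid between :start_id and :end_id',
            language_flag  => dbms_sql.native,
            parallel_level => 8);
        end;
        /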

    • @kaleycrum6350 • 4 years ago

      @DatabaseDude Why would we be scanning data outside the current table?

  • @SheetalGuptas • 4 years ago

    Hi, thanks for this session. Is it possible for you to share the script used in this session?

    • @DatabaseDude • 4 years ago • +2

      Yes - it's here: github.com/connormcd/misc-scripts/tree/master/office-hours

    • @lizreen9563 • 4 years ago

      Great site and scripts! I just can't find the one for this video.

  • @berndeckenfels • 4 years ago

    Your own list of chunks is not better than the parallel chunks; you still have multiple per file. It might only decrease the seeking for a given job, but then it has many more jobs with a less predictable overall size. So I am not sure it's worth it (but the queries are neat; do they translate well to ASM and Exadata?)

    • @DatabaseDude • 4 years ago

      The number of jobs is unrelated to the number of chunks; it is governed by the job queue parameters. It is not multiple chunks per file that we are trying to avoid; it is about guaranteeing that we won't ever have to scan a range of data that does not apply to this table.
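
      A quick way to see that decoupling, assuming the task name from the sketch above: after RUN_TASK completes, USER_PARALLEL_EXECUTE_CHUNKS shows many chunks serviced by only PARALLEL_LEVEL scheduler jobs:

        -- many chunks, few jobs: each job works through chunk after chunk
        select job_name, count(*) as chunks_processed
        from   user_parallel_execute_chunks
        where  task_name = 'update_t_by_extent'
        group  by job_name;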

    • @berndeckenfels • 4 years ago

      @DatabaseDude Ah, I see: you mean DBMS_PARALLEL_EXECUTE does not skip over file extents which are not part of the table. That does look like an important possible improvement.

    • @berndeckenfels • 4 years ago

      @DatabaseDude But it produces multiple tasks per file if they have multiple non-consecutive extents (however, I guess it doesn't really matter whether you access a single file in parallel or multiple, but since you explicitly mentioned that this happens with the standard method, it also happens with yours).