10 - Sorting & Aggregations (CMU Databases Systems / Fall 2019)

  • Published: 24 Jan 2025

Comments • 18

  • @AndersonSilva-dg4mg 5 years ago +33

    You are the best teacher on the subject of databases

  • @firstlast7086 3 years ago +2

    38:32 the best definition of aggregation I've ever heard

  • @pb25193 4 years ago +7

    The Indian dude asks really good questions. Good job, Andy, handling them masterfully.

  • @hasan0770816268 3 years ago +1

    7:37 agenda (sorting and aggregation)
    8:17 why do we need sorting?
    11:26 external merge sort
    13:01 2-way external merge sort
    23:52 double buffering optimization
    24:27 general k-way merge sort
    29:05 B-tree for sorting
    37:35 aggregation

  • @420_gunna 4 years ago +8

    pavlo possibly the most dangerous man in database systems

  • @patrykuchman5388 3 years ago +1

    25:02 - The general external merge sort. Andy mentions that, in general, if we have B buffer pool pages, we can use B-1 pages to store the sorted runs and 1 as the output. But wouldn't it be better to dedicate more than one page to the output? If B is high and we only have one output buffer page, then every time it fills up we have to write that page to disk and wait until the write finishes before we can proceed with merging.
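
    For what it's worth, the pass arithmetic backs this intuition up: reserving an extra output page or two barely changes the number of passes, which is one reason the double-buffering optimization covered at 23:52 is considered cheap. A rough Python sketch of that arithmetic, with made-up page counts and buffer-pool sizes:

        import math

        def merge_passes(num_pages, buffers, out_bufs=1):
            # Pass 0 sorts `buffers` pages at a time into the initial runs;
            # every later pass merges (buffers - out_bufs) runs into one.
            runs = math.ceil(num_pages / buffers)
            fan_in = buffers - out_bufs
            passes = 1
            while runs > 1:
                runs = math.ceil(runs / fan_in)
                passes += 1
            return passes

        # Made-up sizes: a 1,000,000-page file and a 100-page buffer pool.
        # Total I/O is roughly 2 * num_pages per pass (read + write).
        for out_bufs in (1, 2, 10):
            p = merge_passes(1_000_000, 100, out_bufs)
            print(f"{out_bufs} output page(s): {p} passes, ~{2_000_000 * p:,} page I/Os")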

  • @galaxy_mosaic3587 2 years ago +2

    my only constructive feedback would be that it would be nice for the content to be formatted so that the full slide is visible, with the proportions adjusted so that the recorded video replay doesn't always overlap the lower-right corner of the slide, because sometimes important content gets hidden.

  • @abdelrhmanahmed1378 2 years ago

    1:04:00 In the rehash phase, do we do a sequential scan over the buckets? If so, wouldn't it be better to just do a sequential scan over the original data?

  • @th3sly 5 years ago +8

    was that guy wearing a MongoDB t-shirt? :)))

  • @akshpatel7935 3 years ago

    21:31 how does pass #3 fit entirely in main memory? Wasn't the precondition that the whole dataset can't fit in main memory?

    • @andypavlo 3 years ago +3

      It doesn't fit entirely in memory. You fetch one page at a time per sorted run and then fill up the output buffer. When the output is full, you write it out to disk.
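
      A toy sketch of exactly what the answer describes, with invented run contents and tiny page sizes standing in for fixed-size disk pages:

          import heapq

          def k_way_merge(runs, page_size=4, out_capacity=4):
              # Keep only one "page" (a small slice) of each sorted run in
              # memory, plus a single output buffer flushed when it fills.
              pages = [run[:page_size] for run in runs]
              offsets = [page_size] * len(runs)    # where each run's next page starts
              heap = [(p[0], r, 0) for r, p in enumerate(pages) if p]
              heapq.heapify(heap)
              out = []                             # the single output buffer

              while heap:
                  val, r, i = heapq.heappop(heap)
                  out.append(val)
                  if len(out) == out_capacity:     # output page is full:
                      yield out                    # "write it out to disk"
                      out = []
                  if i + 1 < len(pages[r]):        # more values on this page
                      heapq.heappush(heap, (pages[r][i + 1], r, i + 1))
                  else:                            # page exhausted: fetch the next one
                      pages[r] = runs[r][offsets[r]:offsets[r] + page_size]
                      offsets[r] += page_size
                      if pages[r]:
                          heapq.heappush(heap, (pages[r][0], r, 0))
              if out:
                  yield out

          runs = [[1, 4, 7, 10, 13], [2, 5, 8, 11], [3, 6, 9, 12]]
          for page in k_way_merge(runs):
              print("flush:", page)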

    • @akshpatel7935 3 years ago

      @@andypavlo Thanks a lot, Prof. Pavlo, that clears things up. I was not expecting a response so soon on a 2-year-old lecture. And thank you for making the lectures, assignments, and problem sets public.

  • @alexalexfff 3 years ago

    What textbook is mentioned in the development hints slide?

  • @420_gunna 4 years ago +1

    dj drop tables last heard muttering I aint goin back as he left

  • @陈洋-r1s 4 years ago +1

    In hash aggregation, why do we need the rehash phase rather than just operating on the buckets from phase 1?

    • @AshishNegi1618 4 years ago

      If the first page has 1k key-value pairs (some with duplicate keys), you need some data structure to group the values of the same key together, hence the 2nd hash table. Now, it looks like we could just use the first hash table and the second is redundant, but from this lecture the idea seems to be: first collect all values of a key together using the first hash, then calculate the aggregation using the second hash.

    • @huangwentao 4 years ago

      @@AshishNegi1618 In my opinion, if we just use the 1st hash function we get different partitions, and the following aggregation can be performed by scanning the key-value pairs inside each partition. But that may require a large number of scans, which is not very efficient. If we use the 2nd hash function we avoid those scans: the aggregation is computed in a single scan over each partition with the help of the 2nd hash table.

    • @zhchen6644 3 years ago

      Because you can't: the data is larger than your memory.
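
    To tie the thread together, here is a minimal Python sketch of the two-phase idea, under loud assumptions: COUNT is an arbitrary choice of aggregate, and in-memory lists stand in for partition files spilled to disk. The rehash phase works precisely because each partition, unlike the whole input, fits in memory.

        def external_hash_aggregate(tuples, num_partitions=4):
            # Phase 1 (partition): hash h1 spills tuples into partitions,
            # each assumed small enough to fit in memory afterwards.
            partitions = [[] for _ in range(num_partitions)]
            for key, value in tuples:
                partitions[hash(("h1", key)) % num_partitions].append((key, value))

            # Phase 2 (rehash): a fresh in-memory hash table per partition
            # (a Python dict's own hashing stands in for h2) computes the
            # aggregate in a single scan of that partition.
            results = {}
            for part in partitions:
                table = {}
                for key, _ in part:
                    table[key] = table.get(key, 0) + 1   # running COUNT(*)
                results.update(table)   # a key never spans two partitions
            return results

        data = [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("b", 5), ("a", 6)]
        print(external_hash_aggregate(data))   # counts: a=3, b=2, c=1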