They Enabled Postgres Partitioning and their Backend fell apart

  • Published: 21 Nov 2024

Comments • 56

  • @hnasr
    @hnasr  1 year ago +1

    Check out my backend performance course performance.husseinnasser.com

  • @siya.abc123
    @siya.abc123 1 year ago +90

    As we become more and more senior we realise more and more that every decision in our field is basically a compromise. Nothing is perfect. There is no magic. There is a cost.

    • @ZeeshanAli-nk3xk
      @ZeeshanAli-nk3xk 1 year ago

      deep!

    • @just_A_doctor
      @just_A_doctor 1 year ago +2

      We become!?!
      A senior never calls himself senior like that.

    • @EzequielRegaldo
      @EzequielRegaldo 1 year ago

      Dude that problem is super jr

    • @arcanernz
      @arcanernz 1 year ago +2

      @EzequielRegaldo agreed, this bug negates partitions altogether. Who thought it was OK to lock all partitions?

    • @Atlastheyote222
      @Atlastheyote222 1 year ago +1

      Sums up 99% of cloud development tbh

  • @max0521
    @max0521 3 months ago

    Great video as always! It would be a really good idea to make another video on the most common issues with partitioning in PostgreSQL, like the one you explain in this video.

  • @kushalkamra3803
    @kushalkamra3803 1 year ago +5

    Thank you 🙏. I was wondering why the Postgres team did not anticipate locking issues in a feature that is meant to be used for scaling.

  • @sujeetagrahari2292
    @sujeetagrahari2292 1 year ago +2

    You said that even if the query fetches the data of a specific day, it will still scan all the partitions. But the AWS docs referenced in the blog suggest it reads many partitions only when you want data from multiple days:
    "1. You query many days worth of data, which requires the database to read many partitions.
    2. The database creates a lock entry for each partition. If partition indexes are part of the optimizer access path, the database creates a lock entry for them, too.
    3. When the number of requested locks entries for the same backend process is higher than 16, which is the value of FP_LOCK_SLOTS_PER_BACKEND, the lock manager uses the non-fast path lock method."

    • @LeoLeo-nx5gi
      @LeoLeo-nx5gi 1 year ago

      can you please provide the link to the AWS blog, thanks!!
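
To see the fast-path behaviour described in the AWS excerpt above, pg_locks exposes a fastpath flag per lock entry. A rough sketch, assuming a hypothetical partitioned orders table; once a backend has requested more than 16 relation locks, the extra ones show up with fastpath = false:

    -- Inside one transaction, query the partitioned table, then look at
    -- this backend's relation locks and how many were taken via the fast path.
    BEGIN;
    SELECT count(*) FROM orders WHERE order_date = '2020-01-01';

    SELECT fastpath, count(*)
    FROM pg_locks
    WHERE pid = pg_backend_pid()
      AND locktype = 'relation'
    GROUP BY fastpath;
    COMMIT;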

  • @pajeetsingh
    @pajeetsingh 1 year ago +10

    When they're working on the database for their startup, why do they always start with one big table?
    If the design expects 10 million rows per day, the initial design could have considered partitioning.
    Is the industry standard to start with a naive design and fix things as the bugs/requirements come in?

    • @martinvuyk5326
      @martinvuyk5326 1 year ago +2

      Yep, that's exactly why NoSQL grew in popularity. FrontEnd and BackEnd don't wanna think about db

    • @mishikookropiridze
      @mishikookropiridze 1 year ago +1

      You won't know that you will have 10 mil rows per day until you have it.

    • @DoubleM55
      @DoubleM55 8 months ago +1

      It would be overkill to always engineer everything to support billions of rows, when probably it won't be needed.
      As your data starts to grow, you figure out a way to move forward.
      Works fine in most cases, unless you hit the wall on some weird behavior like shown in this video.

    • @JasminUwU
      @JasminUwU 18 days ago +1

      Premature optimization is the enemy of progress

    • @pajeetsingh
      @pajeetsingh 18 days ago

      @ Accelerate.

  • @diamondkingdiamond6289
    @diamondkingdiamond6289 1 year ago +6

    Also, don’t query times drastically increase the more partitions you have? E.g. if you have 1000 partitions, query times will be extremely slow.

    • @pajeetsingh
      @pajeetsingh 1 year ago

      Source it plox

    • @noir5820
      @noir5820 1 year ago +8

      Depends: are you querying multiple partitions or not? The whole point of partitioning is to speed up queries.

    • @jacob_90s
      @jacob_90s 1 year ago

      It adds a little more overhead but lets you scale further. As with everything, it's all about pros and cons.

    • @codingjerk
      @codingjerk 11 months ago

      Planning will slow down, yes, but 1000 partitions is not a huge number. I used to have 4096 partitions on one huge table and it worked like a charm. Plans for some analytical queries did slow down, by roughly ~100ms of overhead, but that's okay for analytical queries anyway.
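
For what it's worth, whether a query hits one partition or many is easy to check with EXPLAIN; a small sketch using a hypothetical daily-partitioned orders table (only the partitions that survive pruning appear in the plan):

    -- Hypothetical daily-partitioned table
    CREATE TABLE orders (
        order_id   bigint,
        order_date date NOT NULL
    ) PARTITION BY RANGE (order_date);

    CREATE TABLE orders_2020_01_01 PARTITION OF orders
        FOR VALUES FROM ('2020-01-01') TO ('2020-01-02');
    CREATE TABLE orders_2020_01_02 PARTITION OF orders
        FOR VALUES FROM ('2020-01-02') TO ('2020-01-03');

    -- With a filter on the partition key the plan should list only
    -- orders_2020_01_01; without the filter, every partition is scanned.
    EXPLAIN SELECT * FROM orders WHERE order_date = '2020-01-01';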

  • @hossman333
    @hossman333 7 days ago

    Awesome video! Thank you!!

  • @lakhveerchahal
    @lakhveerchahal 1 year ago +3

    Hi Hussein, at 18:10 what was the query that you ran on pg13?
    I tried reproducing it, but it is locking only that specific partition.
    Here's what I did:
    SELECT * FROM orders WHERE order_date = '2020-01-01';
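
One way to check which relations that SELECT actually locked is to keep the transaction open and inspect pg_locks from the same session. A sketch, assuming the same orders table; the commenter saw only the matching partition locked, while the video's pg13 run showed all of them:

    BEGIN;
    SELECT * FROM orders WHERE order_date = '2020-01-01';

    -- Still inside the transaction: list every relation this backend holds
    -- a lock on, and whether the lock went through the fast path.
    SELECT relation::regclass AS rel, mode, fastpath
    FROM pg_locks
    WHERE locktype = 'relation'
      AND pid = pg_backend_pid();

    COMMIT;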

  • @ritwizsinha1261
    @ritwizsinha1261 5 months ago

    In Postgres, reads/SELECTs don't acquire row locks unless the query explicitly asks for one, thanks to MVCC (they still take relation-level AccessShareLocks, though, which is what this case is about).

  • @Atlastheyote222
    @Atlastheyote222 1 year ago

    I was considering doing this for my PVE cluster of development environments to simplify docker swarm deployment. Maybe it isn’t worth the trouble lol

  • @pallavSemwal
    @pallavSemwal 1 year ago +1

    could you please do a course on DevOps?

  • @BryanChance
    @BryanChance 1 year ago +2

    Partitioning is too scary for me unless the data is non-critical and some data loss is acceptable. Often the configuration is so complex and brittle that it becomes a bigger problem than the one you're trying to solve. LOL, ooh, it gives me nightmares..

  • @PtYt24
    @PtYt24 5 months ago +1

    They could have saved themselves a lot of time by using the timescaledb postgresql extension.
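
For reference, a minimal TimescaleDB sketch of that suggestion; the table and column names are made up, and create_hypertable is the call that turns a plain table into one whose time-based chunks are created automatically:

    CREATE EXTENSION IF NOT EXISTS timescaledb;

    -- Hypothetical time-keyed table
    CREATE TABLE orders (
        order_id   bigint,
        order_date timestamptz NOT NULL
    );

    -- TimescaleDB now creates and manages the time chunks on insert
    SELECT create_hypertable('orders', 'order_date');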

  • @pollathajeeva23
    @pollathajeeva23 1 year ago

    Interesting case study. I'm just curious how Postgres-compatible distributed SQL databases like YugabyteDB and CRDB mitigate this issue.

  • @samirzerrouki3153
    @samirzerrouki3153 1 year ago

    Hello Hussein,
    Can you please talk about the action taken by Twitter to prevent scraping (6000 posts/day)? Can we hear your thoughts on this and whether there are any alternative solutions to achieve the goal?
    Thank you :))

  • @haythamasalama0
    @haythamasalama0 1 year ago +1

    Great video ❤

  • @engineerscodes
    @engineerscodes 1 year ago +1

    Nice video 👌

  • @mawesome4ever
    @mawesome4ever 1 year ago

    This is interesting! Do you know if this has been fixed in the newer versions of Postgres? I'm looking for a DB for my game, and I think I'll eventually scale and don't want to run into this same issue.

    • @rayanfarhat5006
      @rayanfarhat5006 1 year ago +2

      Every DB has its own problems when scaling. If your game needs scaling, then you have enough money to hire a DB expert to fix your scaling issues.

  • @mhcbon4606
    @mhcbon4606 1 year ago

    I don't have a wide picture of the industry, but my gut feeling and limited experience tell me that very often in system design people rely on auto-partitioning systems to "make it happen". They are confusing quantitative distribution with qualitative distribution. Partitioning the data in space is one thing; partitioning the data by access and usage is something else. Anyway, not my concern anymore.

  • @pajeetsingh
    @pajeetsingh 1 year ago

    As of latest postgres, does it really create a process per SELECT operation?
    That's crazy thinking.

    • @DoubleM55
      @DoubleM55 8 months ago

      No, it never did. It creates a process per *connection*, and then one connection can perform many queries.
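
A quick way to see the process-per-connection model: pg_stat_activity shows one row (one backend PID) per connected client, regardless of how many queries that connection has issued.

    -- One row, and one server process, per connected backend
    SELECT pid, state, query
    FROM pg_stat_activity
    WHERE backend_type = 'client backend';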

  • @rydmerlin
    @rydmerlin 1 year ago

    Unlike Oracle, Postgres won't automatically create the partition upon insert. In my last assignment I chose to partition with a 7-day interval, and the table had billions of rows over a 10-year period. The table was also subpartitioned. This DBA looks like he had experience with Oracle. What authority could he have used to verify his plan before he took action?

    • @dinoscheidt
      @dinoscheidt 1 year ago

      Well, this kind of smoke testing is a case where human-machine collaboration (i.e. talking it through with the next version of ChatGPT) might be a valid path for the future. Not because the senior DBA doesn't know what he is doing, and he still needs to know what he is doing, but because an LLM is great at filtering down a bunch of documentation, content, case studies and issues one could never have read in full, and pattern-matching across them. Otherwise, no fault; it's just life if one can't afford to do an infinite amount of research and never get to action.

  • @ifyugwumba8120
    @ifyugwumba8120 1 year ago

    How do you do it, sir? Just after creating a course on Udemy you are back on YouTube. Show me thy ways.
    I'm not even done with your Udemy course.

  • @robarnold8377
    @robarnold8377 1 year ago

    Great video - this will sound odd but your accent is so Scottish in tone. I promise it’s not just because lock was said 100 times.

    • @robarnold8377
      @robarnold8377 1 year ago

      Also, I never experienced this issue in Oracle. Balancing global vs local partitions was important, but simple Oracle partitions definitely felt more reliable.

    • @robarnold8377
      @robarnold8377 1 year ago

      All the partitions, wtaf! I just got to that part of the video. Hell.
      That's a bug. It negates the usage of partitions on ANYTHING that gets an update operation.
      Maybe Postgres communicated this in advance, but that does not seem like a good start for a partition use case.

  • @muayyadalsadi
    @muayyadalsadi 6 months ago

    16:58 via the pg_partman or pg_cron extension
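
A hedged sketch of what that combination can look like; it assumes pg_partman 5.x's create_parent signature and a hypothetical orders table partitioned on order_date:

    CREATE SCHEMA IF NOT EXISTS partman;
    CREATE EXTENSION IF NOT EXISTS pg_partman SCHEMA partman;
    CREATE EXTENSION IF NOT EXISTS pg_cron;

    -- Let pg_partman pre-create daily partitions of public.orders
    SELECT partman.create_parent(
        p_parent_table => 'public.orders',
        p_control      => 'order_date',
        p_interval     => '1 day'
    );

    -- Have pg_cron run partition maintenance every night at 03:00
    SELECT cron.schedule('partman-maintenance', '0 3 * * *',
                         $$CALL partman.run_maintenance_proc()$$);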

  • @GebzNotJebz
    @GebzNotJebz 1 year ago +2

    More power to you, engineer.. great content

  • @jp-wi8xr
    @jp-wi8xr 1 year ago

    "Disney magic genie" made me realize, you look like Aladdin.

  • @AkashBanik-sf2dw
    @AkashBanik-sf2dw 1 year ago

    What happened to your discord server??

  • @bilincinontolojikizdirabi
    @bilincinontolojikizdirabi 1 year ago

    Why do they need to use transactions on this huge table? Isn't using data marts or DWHs more effective? This table should not be a transactional table, I believe, especially with 22 indexes???

  • @pawsdev
    @pawsdev 6 months ago

    So the solution for millions of rows per day is to not use SQL at all; better to just not use SQL at all.

  • @yassineoujaa2670
    @yassineoujaa2670 1 year ago

    Recently I asked the AI to generate random Arabic names, and one of the names was Nasser Hussein lol

  • @rmacpie3475
    @rmacpie3475 1 year ago

    Hi,
    my IP is dynamic and it uses internal WAN services. Any solutions for port forwarding? I've tried UPnP port forwarding with no luck, and I've tried all the other solutions. Please help.

  • @jondoe79
    @jondoe79 1 year ago +1

    Only partition data that is to be archived.

  • @pajeetsingh
    @pajeetsingh 1 year ago

    19:14 So you don't know, and we have the liberty to disagree with you?

  • @mrngwozdz
    @mrngwozdz 1 year ago

    You can do partitioning by date by adding an additional table and a rule on it. Here is how to achieve this:

    CREATE RULE autocall_createpartitionifnotexists AS
    ON INSERT TO public.data_temp DO INSTEAD (
        -- make sure the daily partition exists, then insert into the parent
        SELECT createpartitionifnotexists((NEW.value)::date);
        INSERT INTO public.data (value) VALUES (NEW.value);
    );

    And the function createpartitionifnotexists:

    CREATE OR REPLACE FUNCTION public.createpartitionifnotexists(forDate date)
    RETURNS void AS $$
    DECLARE
        dayStart date := date_trunc('day', forDate);
        dayEndExclusive date := dayStart + interval '1 day';
        -- unqualified partition name, e.g. data_20200101
        tableName text := 'data_' || to_char(forDate, 'YYYYmmdd');
    BEGIN
        IF to_regclass('public.' || tableName) IS NULL THEN
            EXECUTE format('CREATE TABLE public.%I PARTITION OF public.data FOR VALUES FROM (%L) TO (%L)',
                           tableName, dayStart, dayEndExclusive);
            EXECUTE format('CREATE UNIQUE INDEX ON public.%I (id)', tableName);
            EXECUTE format('CREATE INDEX ON public.%I USING btree (value)', tableName);
        END IF;
    END;
    $$ LANGUAGE plpgsql;
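
For completeness, a minimal sketch of the setup this rule assumes; the parent table, its columns, and the staging table are guesses based on the snippet above, not something spelled out in the comment:

    -- Hypothetical parent table, range-partitioned on the "value" column
    CREATE TABLE public.data (
        id    bigserial,
        value timestamptz NOT NULL
    ) PARTITION BY RANGE (value);

    -- Staging table the INSERT rule above is attached to
    CREATE TABLE public.data_temp (LIKE public.data);

    -- Inserting through data_temp creates the daily partition on demand,
    -- then redirects the row into public.data
    INSERT INTO public.data_temp (value) VALUES (now());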