5 Secrets for making PostgreSQL run BLAZING FAST. How to improve database performance.

  • Published: 18 May 2024
  • There's an approach in here for everyone! PostgreSQL is one of the most versatile SQL databases, but it often doesn't deliver its best performance out of the box.
    Fortunately, there are some easy ways to speed up PostgreSQL queries. In this video, we look at 5 different approaches you can choose between based on your requirements.
    #postgresql #database #optimization
    Become a better developer in 4 minutes: bit.ly/45C7a29 👈
    Source code: github.com/dreamsofcode-io/po...
    My socials:
    Twitter: / dreamsofcode_io
    Discord Server: / discord
    Please consider supporting me as well!
    Patreon: / dreamsofcode
  • Science

Comments • 74

  • @luispalacios27
    @luispalacios27 1 year ago +13

    I discovered your channel a few days ago; it's been amazing. Keep up the excellent work.

    • @dreamsofcode
      @dreamsofcode  1 year ago +4

      Thank you! I appreciate that a lot

  • @makkusu3866
    @makkusu3866 1 year ago

    Really well made video. Staying here for more!

  • @HrHaakon
    @HrHaakon 10 months ago +66

    Prepared statements should not be used because they're faster. They should be used because they're much safer.
    The speed increase is just a free bonus.

    • @dreamsofcode
      @dreamsofcode  10 months ago +12

      This is a great point.

    • @tipeon
      @tipeon 7 months ago +5

      As a Java developer, I don't even remember the last time I used an unprepared statement.

    • @HrHaakon
      @HrHaakon 7 months ago +5

      @@tipeon I have been reliably informed that Oracle have imprisoned a Night Elf in their compiler and whenever it sees a java.sql.Statement, he screams "YOU ARE NOT PREPARED", teleports to it and starts wailing.
      It may be true, but I haven't used a PreparedStatement since Blizzard released Heroes of the Storm, so...

    • @tizian_heckmann
      @tizian_heckmann 6 months ago +1

      While that is correct, I am pretty sure prepared statements were initially developed for performance.
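
    A minimal sketch of the server-side prepared statement being discussed in this thread, assuming a hypothetical users(id, email) table: the query is parsed and planned once per session, and user input only ever arrives as a bound parameter, never spliced into the SQL text.

      PREPARE find_user (text) AS
          SELECT id, email FROM users WHERE email = $1;

      EXECUTE find_user('alice@example.com');  -- reuses the already-planned statement
      DEALLOCATE find_user;                    -- prepared statements live per session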

  • @HideBuz
    @HideBuz 1 month ago

    Bravo, subbed!

  • @pss_crs
    @pss_crs 9 months ago

    I would like to see content on possible and good ways to implement multi-tenancy on Postgres

  • @n0kodoko143
    @n0kodoko143 10 months ago

    Awesome video

  • @SirJagerYT
    @SirJagerYT 1 year ago +31

    Suggestion for next sql video is "how to vectorize sql database for fast searching"

    • @dreamsofcode
      @dreamsofcode  1 year ago +7

      This is a great suggestion!

    • @wexwexexort
      @wexwexexort 8 months ago +4

      @@dreamsofcode Have you done it?

  • @prathameshjoshi007
    @prathameshjoshi007 8 months ago +2

    A slight addition to COPY: you can use \copy from the client if you don't have access to store input files on the server, i.e. you can stream a CSV from your local machine to the server (see the sketch below).
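
    For illustration, a hedged sketch of the two variants, assuming a hypothetical users(id, email) table: COPY reads a path on the database server, while psql's \copy streams the file from the client machine.

      -- Server-side COPY: the file must be readable by the PostgreSQL server process.
      COPY users (id, email)
          FROM '/var/lib/postgresql/import/users.csv'
          WITH (FORMAT csv, HEADER);

      -- psql meta-command: the same load, but the CSV lives on the client.
      \copy users (id, email) FROM 'users.csv' WITH (FORMAT csv, HEADER)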

  • @Fanaro
    @Fanaro 9 months ago

    Any book recommendations on how to optimize PostgreSQL?

  • @heroe1486
    @heroe1486 1 year ago +1

    Thanks for the video, very good content and well edited, I'd just recommend putting more dynamism in your voice to match the pacing

  • @unitythemaker
    @unitythemaker 6 days ago

    Thanks!

    • @dreamsofcode
      @dreamsofcode  6 days ago +1

      Thank you so much for the support. It's really appreciated!!!

  • @and_rotate69
    @and_rotate69 1 year ago +1

    I want to be among your first subscribers, so when you reach a million next year I will comment that I was here when you were getting started (I was here at 5k)

  • @xucongzhan9151
    @xucongzhan9151 1 year ago +1

    Nice and informative vid.
    Care to share what mic you are using? Sounds very nice

    • @dreamsofcode
      @dreamsofcode  1 year ago

      Thank you! I appreciate the feedback.
      For this video I used the Electrovoice RE20. I also recorded in a sound-treated room this time, which made a difference!

    • @xucongzhan9151
      @xucongzhan9151 1 year ago

      @@dreamsofcode Ah, that explains it! I am looking to upgrade my gear, but the RE20 is really out of my budget 😂 Thanks for the reply

    • @dreamsofcode
      @dreamsofcode  1 year ago +1

      @@xucongzhan9151 It's pricey! I think the Shure SM57 is pretty decent as well and much cheaper; I use that one whenever I travel!

  • @MrAtomUniverse
    @MrAtomUniverse 9 months ago

    What tool are you using? The terminal looks so good

  • @budmonk2819
    @budmonk2819 7 months ago +1

    I'm from the Oracle world, a lot of familiar concepts

  • @torvic99
    @torvic99 7 months ago

    MOOOORE

  • @gjermundification
    @gjermundification 9 months ago

    Which language would I write a PostgreSQL extension in? PL/SQL? ECMA? Python?

    • @dreamsofcode
      @dreamsofcode  9 months ago +1

      SQL and C code are typically used for creating an extension. Mainly SQL, with C if you need something more powerful! (See the sketch below.)
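
    As a rough sketch of the SQL-only case (all names hypothetical): an extension is essentially a control file plus a versioned SQL script placed in the server's extension directory.

      -- myext.control
      --   default_version = '1.0'
      --   relocatable = true

      -- myext--1.0.sql
      CREATE FUNCTION add_one(integer) RETURNS integer
          LANGUAGE sql IMMUTABLE
          AS $$ SELECT $1 + 1 $$;

      -- Once both files are installed on the server:
      -- CREATE EXTENSION myext;

    C extensions follow the same layout but add a compiled shared library, typically built with PGXS.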

  • @thepaulcraft957
    @thepaulcraft957 1 year ago +2

    There is no link to your code in the video description. Very interesting video!

    • @dreamsofcode
      @dreamsofcode  1 year ago +2

      Oh shoot! Thank you for letting me know. I'll fix that now.

    • @dreamsofcode
      @dreamsofcode  1 year ago +2

      @ThePaulCraft Fixed! Thank you again.

  • @thegrumpydeveloper
    @thegrumpydeveloper 7 months ago

    Nice! Now I don’t have to use web3 and store my data on crypto and pay per request and have huge latencies and non acid transactions. 😂

  • @DerClaudius
    @DerClaudius 8 months ago +1

    Why would you use prepared statements instead of stored procedures? They are automatically "prepared" and don't need to be recreated in every session

    • @Lightbeerer
      @Lightbeerer 7 months ago

      Stored procedures are wonderful, but prepared statements have the advantage of being more dynamic in nature. Imagine e.g. a web page displaying a list with multiple columns each with different filters and sorting options. It would be a nightmare to implement with a stored procedure, but using a prepared statement you can dynamically build the necessary query.

    • @DerClaudius
      @DerClaudius 7 months ago +2

      @@Lightbeerer Kinda disagree here. Especially with web pages, the connections/sessions are very short and only exist for the short time the page is rendered. And if you only execute the query once per request or paging request, preparing the statement makes it slower. And you can absolutely implement dynamic filtering/sorting etc. with an SP... and with a lot less SQL injection danger...

    • @DerClaudius
      @DerClaudius 7 months ago

      @@Lightbeerer of course it's all trade-offs but especially for web pages, preparing doesn't make sense if you don't call the query multiple times.

    • @tipeon
      @tipeon 7 months ago

      With connection pooling, prepared statements make sense because the connections are actually long lived.

    • @DerClaudius
      @DerClaudius 7 months ago

      @@tipeon That's not what connection pooling is for, and I would consider this bad design. Connection pooling is for mitigating connection overhead, but you're not supposed to assume that the connection is pooled or in any particular state; you should assume it's a new, or in some sense reset, connection. So you would have to first ask the server whether there's already a prepared statement in the session and, if not, recreate it. That would make things slow again. But it's probably not even possible, because you couldn't reattach to your prepared statement after you reconnect. Which API allows you to reconnect to an already prepared statement on the server once you let go of the statement object you held? I'm not aware of any. So for this to work you'd need to implement your own connection pooling and keep track of the statement, and that's an even worse idea.
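
    To make the trade-off in this thread concrete, a hedged sketch against a hypothetical orders(customer_id, status, ...) table: one way a stored function can handle "dynamic" filtering without the client building SQL strings is to make every filter optional.

      CREATE FUNCTION search_orders(p_status   text   DEFAULT NULL,
                                    p_customer bigint DEFAULT NULL)
      RETURNS SETOF orders
      LANGUAGE sql STABLE AS $$
          SELECT *
            FROM orders
           WHERE (p_status   IS NULL OR status      = p_status)
             AND (p_customer IS NULL OR customer_id = p_customer);
      $$;

      -- Callers pass only the filters they need:
      SELECT * FROM search_orders(p_status => 'shipped');

    How well the planner handles the optional-filter pattern is workload-dependent, which is part of the trade-off discussed above.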

  • @dungeon4971
    @dungeon4971 1 year ago

    I am wondering how you inserted 20 million rows into a table. Where did you get that data from?

    • @dreamsofcode
      @dreamsofcode  1 year ago +1

      I just randomly generated it using a mock data library in Go.
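
    Not what was used for the video (that was a Go mock-data library), but for anyone wanting to reproduce a large test table, generate_series can create synthetic rows entirely in SQL; hypothetical users table for illustration:

      CREATE TABLE users (id bigint PRIMARY KEY, email text NOT NULL);

      -- 20 million synthetic rows, generated server-side with no client round trips.
      INSERT INTO users (id, email)
      SELECT g, 'user' || g || '@example.com'
        FROM generate_series(1, 20000000) AS g;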

  • @rembautimes8808
    @rembautimes8808 3 months ago

    Joined as a sub, excellent content, especially on read replicas

  • @ongayijohnian8787
    @ongayijohnian8787 22 days ago

    Mahn, please do the dadbod plugins for NvChad

  • @apinanyogaratnam
    @apinanyogaratnam 11 months ago +1

    🔥any good resources to learn more?

    • @dreamsofcode
      @dreamsofcode  11 months ago +5

      There's very little out there really on optimizing PostgreSQL. If it's something of interest, I can dedicate some more videos to optimization!

    • @apinanyogaratnam
      @apinanyogaratnam 11 months ago +2

      @@dreamsofcode yes pls!!!

    • @abdu4729
      @abdu4729 11 months ago

      @@dreamsofcode YES that would be really helpful

  • @doce3609
    @doce3609 1 year ago +1

    chef

  • @ujjwalaggarwal7065
    @ujjwalaggarwal7065 24 days ago

    You should add a timestamp for the COPY statement part of the video

  • @SXsoft99
    @SXsoft99 9 months ago

    People are not using indexes in a SQL database?
    Also, here's what people should learn about indexes:
    if you create an index on a column, the DB will search on it faster;
    if you have 3 WHERE conditions, for example, then create an index on the combination of those 3 columns for speed (see the sketch after this thread)

    • @dreamsofcode
      @dreamsofcode  9 months ago

      Definitely at some of the places I've worked.
      Indexes are kind of interesting: they're not very useful for small data sizes, and there's a risk of over-optimizing for them.

    • @mcspud
      @mcspud 6 months ago +1

      >if you have 3 where conditions, for example, then you need to create an index for those 3 colum combination for speed
      Query planners are smart - if you have very large data sets you can do multi-column indexes and make sure the set reduction is in the correct order, but in my experience even up to a few billion records just having b-trees on each column individually is enough.
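
    As referenced above, a hedged sketch with a hypothetical orders table: a single composite index that matches a query filtering on three columns, plus EXPLAIN to check whether the planner actually uses it.

      CREATE INDEX idx_orders_customer_status_created
          ON orders (customer_id, status, created_at);

      EXPLAIN ANALYZE
      SELECT *
        FROM orders
       WHERE customer_id = 42
         AND status = 'shipped'
         AND created_at >= DATE '2024-01-01';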

  • @halfsourlizard9319
    @halfsourlizard9319 29 days ago

    Who considered Mongo 'fancy'!? I thought everyone had got over the NoSQL silliness.

  • @2547techno
    @2547techno 1 month ago +2

    Your index explanation is not entirely correct. Postgres does offer hash-based indexes, which are a lot closer to your explanation, but the default index type (which you used in your creation example) is a B-tree index; the data structure is very different.
    Partitions don't do anything meaningful to speed up writes; they would only speed up reads. Instead of scanning a whole table for a record, you only need to scan the single partition (assuming you're querying a single key) where you know your record lives. It's the same concept as database sharding, but on one machine instead of multiple.

    • @dreamsofcode
      @dreamsofcode  1 month ago

      Thanks for your comment! I'm not sure what you're referencing, but yes, B-tree is the default index, which uses a balanced tree structure for its lookups.
      Partitions can speed up writes, usually when associated with B-tree indexes.
      One such factor is B-tree balancing, which on a partitioned table is usually less intensive than on the entire data set. Another increase in performance comes when performing deletes associated with the partition column, as dropping the partition rather than deleting the rows of a table prevents rebalancing from taking place (see the sketch after this thread).
      This is common with time-series data and dramatically improves write performance.
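
    A hedged sketch of the time-series case described above, using a hypothetical metrics table: each month is its own partition with its own, smaller B-tree, and retiring old data is a partition drop rather than a mass DELETE.

      CREATE TABLE metrics (
          recorded_at timestamptz NOT NULL,
          value       double precision
      ) PARTITION BY RANGE (recorded_at);

      CREATE TABLE metrics_2024_05 PARTITION OF metrics
          FOR VALUES FROM ('2024-05-01') TO ('2024-06-01');
      CREATE INDEX ON metrics_2024_05 (recorded_at);

      -- Retiring a month is a metadata change, not millions of row deletes:
      DROP TABLE metrics_2024_05;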

  • @garm0nb0z1a
    @garm0nb0z1a 9 months ago +2

    Prepared statements STILL don't work with pgbouncer and most other db proxies. No thanks.

  • @LinhLinhBD
    @LinhLinhBD 6 days ago

    A good database should be fast by default. If something requires deep knowledge to make it fast, it's a nerdy database.

    • @dreamsofcode
      @dreamsofcode  6 days ago

      Which databases would you consider "fast by default"?

    • @NysShortCut
      @NysShortCut 1 day ago

      So everything in programming is nerdy then, cuz you need to learn to make things work.
      Wait, that indeed makes some sense...

    • @LinhLinhBD
      @LinhLinhBD 20 hours ago

      @@dreamsofcode MySQL is faster but lacks features, so neither is great. MongoDB is pretty good; unfortunately it's not SQL, which is what most people need. That's why companies should make better databases: something that's plug-and-play and fast.

  • @socks5proxy
    @socks5proxy 1 month ago

    Thanks!