PostgresTV 💙💛
  • 167 videos
  • 137,895 views
Skip scan | Postgres.FM 113 | #PostgreSQL #Postgres podcast
[ 🇬🇧_🇺🇸 Check out the subtitles - we now edit them (ChatGPT + manually)! You can also try RUclips's auto-translation of them from English into your language; try it and share it with people interested in Postgres!]
Michael and Nikolay are joined by Peter Geoghegan, major contributor and committer to Postgres, to discuss adding skip scan support to PostgreSQL over versions 17 and 18.
Here are some links to things they mentioned:
* Peter’s previous (excellent) interview on Postgres TV ruclips.net/video/iAPawr1DxhM/видео.html
* Efficient Search of Multidimensional B-Trees (1995 paper by Harry Leslie, Rohit Jain, Dave Birdsall, and Hedieh Yaghmai) vldb.org/conf/1995/P710.PDF
* Index Skip Scanning in...
Views: 225
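
For readers new to the topic, here is a minimal sketch (hypothetical table, not from the episode) of the query shape skip scan targets: a multicolumn B-tree index whose leading column is not constrained by the WHERE clause.

```sql
-- Hypothetical schema: a multicolumn B-tree index on (region_id, created_at).
CREATE TABLE orders (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    region_id  int         NOT NULL,
    created_at timestamptz NOT NULL
);
CREATE INDEX orders_region_created_idx ON orders (region_id, created_at);

-- This query omits the leading column. Without skip scan the index is of
-- little use here; with skip scan the B-tree is probed once per distinct
-- region_id, reading only the matching created_at range within each group.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM orders
WHERE created_at >= now() - interval '1 day';
```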

Videos

Postgres Emergency Room | Postgres.FM 112 | #PostgreSQL #Postgres podcast
228 views • 1 day ago
Nikolay and Michael discuss PostgreSQL emergencies - both the psychological side of incident management, and some technical aspects too. Here are some links to things they mentioned: * Si...
Get or Create | Postgres.FM 111 | #PostgreSQL #Postgres podcast
302 views • 14 days ago
Michael and Nikolay are joined by Haki Benita, a technical lead and database enthusiast who writes an excellent blog and gives popular talks and training sessions too, to discuss the surp...
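
For reference, a minimal sketch of one common "get or create" shape in Postgres, assuming a hypothetical tags table with a unique constraint on name:

```sql
-- Hypothetical table:
CREATE TABLE tags (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name text NOT NULL UNIQUE
);

-- "Create if missing": returns the new id, or no row if the tag already existed.
INSERT INTO tags (name)
VALUES ('postgres')
ON CONFLICT (name) DO NOTHING
RETURNING id;

-- "Get": a follow-up read is still needed for the already-existing case.
SELECT id FROM tags WHERE name = 'postgres';
```
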
Getting started with benchmarking | Postgres.FM 110 | #PostgreSQL #Postgres podcast
403 views • 21 days ago
Michael and Nikolay are joined by Melanie Plageman, database internals engineer at Microsoft and major contributor and committer to PostgreSQL, to discuss getting started with benchmarkin...
Index-Only Scans | Postgres.FM 109 | #PostgreSQL #Postgres podcast
482 views • 1 month ago
Nikolay and Michael discuss Index-Only Scans in Postgres - what they are, how they help, some things to look out for, and some advice. Here are some links to things they mentioned: * Inde...
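
A minimal illustration (hypothetical table and columns, not from the episode) of what makes an index-only scan possible: an index covering every referenced column, plus an up-to-date visibility map.

```sql
-- Covering index: email is the search key, full_name is carried as payload.
CREATE INDEX users_email_covering_idx ON users (email) INCLUDE (full_name);

-- VACUUM maintains the visibility map; pages not marked all-visible still
-- force heap fetches even during an "index only" scan.
VACUUM (ANALYZE) users;

EXPLAIN (ANALYZE, BUFFERS)
SELECT full_name FROM users WHERE email = 'someone@example.com';
```
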
Why Postgres? | Postgres.FM 108 | #PostgreSQL #Postgres podcast
425 views • 1 month ago
Nikolay and Michael discuss why they chose Postgres - as users, for their businesses, for their careers, as well as some doubts. Here are some links to things they mentioned: * Our episod...
Compression | Postgres.FM 107 | #PostgreSQL #Postgres podcast
321 views • 1 month ago
Nikolay and Michael discuss compression in Postgres - what's available natively, some newer algorithms available in recent versions, some things that would be cool additions, and some ext...
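
One of the native options, shown as a sketch (hypothetical table): per-column TOAST compression, which can be switched from the default pglz to LZ4 on PostgreSQL 14+.

```sql
-- New values written to this column are compressed with LZ4; existing values
-- keep their old compression method until they are rewritten.
ALTER TABLE documents ALTER COLUMN body SET COMPRESSION lz4;

-- Session or instance-wide default for newly created columns:
SET default_toast_compression = 'lz4';

-- Inspect which method each column uses:
SELECT attname, attcompression
FROM pg_attribute
WHERE attrelid = 'documents'::regclass AND attnum > 0;
```
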
Out of disk | Postgres.FM 106 | #PostgreSQL #Postgres podcast
353 views • 1 month ago
Nikolay and Michael discuss Postgres running out of disk space - including what happens, what can cause it, how to recover, and most importantly, how to prevent it from happening in the f...
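
On the prevention side, a few quick size checks using standard catalog functions (the thresholds and alerting are up to your monitoring, not prescribed by the episode):

```sql
-- Total size of the current database:
SELECT pg_size_pretty(pg_database_size(current_database()));

-- Size of the WAL directory (a common culprit when archiving fails or a replica lags):
SELECT pg_size_pretty(sum(size)) AS wal_size FROM pg_ls_waldir();

-- WAL retained by replication slots (inactive slots can quietly fill the disk):
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;
```
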
Postgres startup ecosystem | Postgres.FM 105 | #PostgreSQL #Postgres podcast
391 views • 1 month ago
Nikolay and Michael discuss the Postgres startup ecosystem - some recent closures, some recent fundraising announcements, and their thoughts on where things are going and what they'd like...
Four million TPS | Postgres.FM 104 | #PostgreSQL #Postgres podcast
626 views • 2 months ago
Nikolay talks Michael through a recent experiment to find the current maximum transactions per second single-node Postgres can achieve - why he was looking into it, what bottlenecks occur...
Soft delete | Postgres.FM 103 | #PostgreSQL #Postgres podcast
540 views • 2 months ago
Nikolay and Michael discuss soft deletion in Postgres - what it means, several use cases, some implementation options, and which implementations suit which use cases. Here are some links ...
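
One of the simpler implementation options, as a sketch (hypothetical table): a deleted_at timestamp plus a partial index so queries over live rows stay fast.

```sql
ALTER TABLE items ADD COLUMN deleted_at timestamptz;

-- Partial index: only live rows are indexed.
CREATE INDEX items_live_name_idx ON items (name) WHERE deleted_at IS NULL;

-- "Delete":
UPDATE items SET deleted_at = now() WHERE id = 42;

-- Typical read path sees only live rows:
SELECT * FROM items WHERE name = 'example' AND deleted_at IS NULL;
```
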
Should we use foreign keys? | Postgres.FM 102 | #PostgreSQL #Postgres podcast
686 views • 2 months ago
Nikolay and Michael discuss foreign keys in Postgres - what they are, their benefits, their overhead, some edge cases to be aware of, some improvements coming, and whether or not they gen...
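
For reference, the basic shape with hypothetical tables; part of the runtime cost of foreign keys comes from the extra lookup and the FOR KEY SHARE row lock taken on the referenced row whenever a child row is written.

```sql
CREATE TABLE users (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY
);

CREATE TABLE orders (
    id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    user_id bigint NOT NULL REFERENCES users (id) ON DELETE CASCADE
);

-- Each write of orders.user_id triggers a lookup on users and takes a
-- FOR KEY SHARE lock on the referenced row (related to the multixact
-- traffic mentioned in the comments below).
INSERT INTO users DEFAULT VALUES;        -- gets id 1
INSERT INTO orders (user_id) VALUES (1);
```
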
To 100TB, and beyond! | Postgres.FM 100 | #PostgreSQL #Postgres podcast
1.4K views • 3 months ago
Michael and Nikolay are joined by three special guests for episode 100 who have all scaled Postgres to significant scale - Arka Ganguli from Notion, Sammy Steele from Figma, and Derk van ...
Sponsoring the community | Postgres.FM 099 | #PostgreSQL #Postgres podcast
155 views • 3 months ago
Michael is joined by Claire Giordano, Head of Postgres Open Source Community Initiatives at Microsoft, to discuss several ways to contribute to the Postgres community - from core contribu...
Full text search | Postgres.FM 098 | #PostgreSQL #Postgres podcast
369 views • 3 months ago
Minor releases | Postgres.FM 097 | #PostgreSQL #Postgres podcast
255 views • 3 months ago
Custom vs generic plan | Postgres.FM 096 | #PostgreSQL #Postgres podcast
275 views • 4 months ago
LIMIT vs performance | Postgres.FM 095 | #PostgreSQL #Postgres podcast
449 views • 4 months ago
Buffers II (the sequel) | Postgres.FM 094 | #PostgreSQL #Postgres podcast
298 views • 4 months ago
Massive DELETEs | Postgres.FM 093 | #PostgreSQL #Postgres podcast
467 views • 4 months ago
Logical replication common issues | Postgres.FM 092 | #PostgreSQL #Postgres podcast
1.1K views • 4 months ago
Don't do this | Postgres.FM 091 | #PostgreSQL #Postgres podcast
603 views • 5 months ago
Search | Postgres.FM 090 | #PostgreSQL #Postgres podcast
392 views • 5 months ago
Health check | Postgres.FM 089 | #PostgreSQL #Postgres podcast
333 views • 5 months ago
superuser | Postgres.FM 088 | #PostgreSQL #Postgres podcast
215 views • 5 months ago
transaction_timeout | Postgres.FM 087 | #PostgreSQL #Postgres podcast
285 views • 6 months ago
Rails + Postgres | Postgres.FM 086 | #PostgreSQL #Postgres podcast
328 views • 6 months ago
Why isn't Postgres using my index? | Postgres.FM 085 | #PostgreSQL #Postgres podcast
654 views • 6 months ago
Overhead of pg_stat_statements and pg_stat_kcache | Postgres.FM 084 | #PostgreSQL #Postgres podcast
352 views • 6 months ago
Modern SQL | Postgres.FM 083 | #PostgreSQL #Postgres podcast
937 views • 7 months ago

Comments

  • @easypeasydev179
    @easypeasydev179 2 days ago

    What's the point of inviting a guest and talking yourself all the time?

  • @josemiguelgonzalezayala5957
    @josemiguelgonzalezayala5957 4 days ago

    In most cases indexed accesses via SKIP SCAN in Oracle are not optimal and give problems. In general, there are very few cases where they are a good option and when they are chosen by the optimizer as access method it is usually a mistake and even a multi-block read can be more efficient.

  • @luisweck5285
    @luisweck5285 5 days ago

    This gon' be good!

  • @rosendo3219
    @rosendo3219 17 days ago

    woaaaa freddy mercury came to the podcast!!!!!

  • @thgreasi
    @thgreasi 18 days ago

    WRT the "tag A insert on conflict do nothing & read again" topic: as far as I understand, this should only be an issue with the repeatable read and serialisable isolation levels. In read committed it shouldn't be an issue, since transaction 2 will only resume and attempt the insert (and detect the conflict) after transaction 1 commits and releases the row lock. As a result, on the subsequent read, transaction 2 should be able to find tag A. Am I missing something?
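
A sketch of the scenario being described, assuming a hypothetical tags table with a unique constraint on name (comments mark the interleaving of the two sessions):

```sql
-- Session 1:
BEGIN;
INSERT INTO tags (name) VALUES ('A') ON CONFLICT (name) DO NOTHING;
-- (not committed yet)

-- Session 2:
BEGIN;
INSERT INTO tags (name) VALUES ('A') ON CONFLICT (name) DO NOTHING;
-- ...blocks here until session 1 commits or aborts...

-- Session 1:
COMMIT;

-- Session 2 resumes: the insert does nothing (conflict), then:
SELECT id FROM tags WHERE name = 'A';
-- READ COMMITTED: the SELECT takes a fresh snapshot, so it sees tag A.
-- REPEATABLE READ: the transaction's earlier snapshot is reused, so it may not.
COMMIT;
```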

  • @kirkwolak6735
    @kirkwolak6735 23 days ago

    We used these phrases when using Oracle. Predicate Complete Queries: queries where the WHERE clauses are covered by indexes. Query Complete Queries: where every part of the query is covered by indexes, allowing an Index-Only Scan to happen and be very efficient. When moving to PostgreSQL, we found that there were more variables, like making sure the Visibility Map and statistics were good so the optimizer would choose the index. The problem in PG is that even if it uses the index, it often has to refer back to the table to see if the record is visible. (Again, the Visibility Map helps.) Regardless, the value of indexing is huge. But so is knowing when it is a waste of time. My smallest client has a TINY database, recently moved to PG from SQLite as they grew to multiple users in geographically distant offices. Not even his Customer table will use an index... because it's like 200 records. This table is likely cached. The query runs in a fraction of the planning time. Expecting an index to be used in those cases... kinda crazy. But we all have these tiny lookup tables. No need to index those to death...

  • @prashanttendulkar
    @prashanttendulkar 26 days ago

    Best episode of postgres tv

  • @artasheskhachatryan4804
    @artasheskhachatryan4804 1 month ago

    There is still no wait duration for each wait event in Postgres, which would be very helpful.

  • @artasheskhachatryan4804
    @artasheskhachatryan4804 1 month ago

    There are OLTP systems that need sharding. iGaming industry for example.

  • @michaelbanck367
    @michaelbanck367 1 month ago

    Reorg only needs, temporarily, additional space equal to the amount of active data in the table, so if 90% got deleted, or bloat is 600% or more, then the additional disk space is not 2x but 10-30%.

  • @nickmillerable
    @nickmillerable 1 month ago

    A real cliffhanger.

  • @kirkwolak6735
    @kirkwolak6735 1 month ago

    Wow... 2024 is destroying me. My Bingo card has the following open squares:
    - UFO Lands and Alien sniffs the President of the USA (inappropriately)
    - Nikolay Samokhvalov switches away from PostgreSQL
    - Michael Christofides comes out with a line of Hair Products
    - Tom Lane releases a Thread Based version of PostgreSQL
    If I get ANY one of those... I will have BINGO! LOL. With Love and Humor guys!

  • @DavidPechCZ
    @DavidPechCZ 1 month ago

    Hi, I heard this episode yesterday and I could use it literally today (TOAST compression - a 2TB table with a single JSONB field). Just thanks, guys!

  • @LearningSFR
    @LearningSFR 1 month ago

    Can you guys do an episode on what an appropriate ratio of database administrators to number of servers is that is humanly possible to manage? Especially in cloud environments or startup companies, the database requirements grow fast, but not many new DBAs are hired to maintain these databases at the same pace as the environment grows. I love the podcast. Keep up the good work.

  • @jocketf3083
    @jocketf3083 1 month ago

    Thanks for another great episode! We once ran out of space after a small application change. Because of (...) reasons we needed to have our temp storage limit set high. The application change altered a query in a way where it took a very long time to finish. The query slowly consumed temp storage space as it went along. Since the application kept kicking off new instances of that query we ran out of space pretty fast! Captain Hindsight has a few lessons for us there, but at least the fix was easy. To be safe, we failed over to a standby replica and set the application's account to NOLOGIN. Once the application deployment had been rolled back we unbanned the account. We then took our time to clone the database to our old primary and let it rejoin our Pgpool load balancer as a replica.

  • @kirkwolak6735
    @kirkwolak6735 1 month ago

    Great Stuff as usual. I believe everyone should have something monitoring disk free space, and alerting at some level of low disk, extra early. Would have been nice to hear what "formulas" you guys tend to use. Like at least 3x Daily WAL Max, or some such. We've hit this in the past with another vendor. Because someone left Tracing on. And a TON of logfiles were being produced that filled that disk...

  • @jirehla-ab1671
    @jirehla-ab1671 1 month ago

    If I were to add a message queue for real-time OLTP database workloads, would that also induce latency, making the OLTP workload not real-time anymore? If so, then what's the point of message queues if they're going to be used for real-time OLTP database workloads?

    • @NikolaySamokhvalov
      @NikolaySamokhvalov 1 month ago

      The whole point of a message queue is to "detach" some work, to make it async. This allows responding faster (lower latency) while guaranteeing that the work will be done. And if this work is done very soon - it's almost real time. But you're right, there is a certain trade-off here, and this "detaching" usually makes sense when the system becomes more complex.
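
If the queue lives in Postgres itself, one well-known shape of this "detach the work" idea is a jobs table consumed with FOR UPDATE SKIP LOCKED (hypothetical schema, just to illustrate the async hand-off):

```sql
CREATE TABLE jobs (
    id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    payload jsonb NOT NULL,
    done_at timestamptz
);

-- Producer: returns immediately, keeping the user-facing transaction fast.
INSERT INTO jobs (payload) VALUES ('{"action": "send_email"}');

-- Worker: picks one pending job without blocking other workers.
WITH next_job AS (
    SELECT id
    FROM jobs
    WHERE done_at IS NULL
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
UPDATE jobs
SET done_at = now()
FROM next_job
WHERE jobs.id = next_job.id
RETURNING jobs.id, jobs.payload;
```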

  • @davidfetter
    @davidfetter 2 months ago

    If that unix socket regression is real, it's very likely a bug. Also, the fact that there's a huge difference between the TCP version and the unix socket version suggests that there are improvements to be had in the listener code.

  • @deadok68
    @deadok68 2 months ago

    Hi guys, very proud of you and your creativity. Found this channel through some of Nikolay's podcasts from about 5 years ago and got here; almost 1/3 already listened. Thanks!

  • @RU-qv3jl
    @RU-qv3jl 2 months ago

    It would also be interesting to use different machines and try it out with the different connection pooling options. I imagine that could be interesting too. You would add latency for sure. Sadly I don’t have the credits to try something like that :(

  • @mehmanjafarov5432
    @mehmanjafarov5432 2 months ago

    hi @NikolaySamokhvalov . I regularly listen to your podcasts, and I've been actively researching memory management topics in the documentation as well as various sources on Google. While doing so, I came across several misleading or inaccurate blog posts regarding certain cases. Therefore, I have a specific question: Is Vacuum Buffers considered a form of local memory (similar to work memory) or shared memory? Thanks

  • @RU-qv3jl
    @RU-qv3jl 2 months ago

    Really good content as always, thanks for sharing your knowledge.

  • @Neoshadow42
    @Neoshadow42 2 months ago

    Subtitles are incredible, thanks guys!

  • @poppop101010
    @poppop101010 2 months ago

    great content thnx for the effort!

  • @kirkwolak6735
    @kirkwolak6735 2 months ago

    I loved this one! I love how PG allows the entire record to be easily encoded and stored. We implemented an audit feature like this in Oracle. It was way too much code. We stored the OLD and NEW record. When I saw how easy it was for a single table in PG... I started falling in love... For us, the table and the timestamp were always attached. To answer the question "How do you show this to the Manager?" (the record changes): assuming you stored the table_name and table_id columns with it, you would create a visible link that pointed to that record if an updated record existed. And if it is a deleted record, you will either need to merge it into the results, or show it NEAR the messages with a roughly similar timestamp. You don't have to show them - just show that they exist, with an easy way to get to them. FWIW, on Day 1 in training we showed users that all of their edits were stored, and deletes were stored as well. We only had to recover a couple of times.
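
A minimal sketch of the kind of audit mechanism described above (hypothetical names, plpgsql rather than the Oracle original): store the old and new row images as jsonb from a row-level trigger.

```sql
CREATE TABLE audit_log (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    table_name text        NOT NULL,
    action     text        NOT NULL,
    old_row    jsonb,
    new_row    jsonb,
    changed_at timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION audit_row_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO audit_log (table_name, action, new_row)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(NEW));
        RETURN NEW;
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO audit_log (table_name, action, old_row, new_row)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD), to_jsonb(NEW));
        RETURN NEW;
    ELSE  -- DELETE
        INSERT INTO audit_log (table_name, action, old_row)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD));
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

-- Attach to any table you want audited (messages is hypothetical):
CREATE TRIGGER messages_audit
AFTER INSERT OR UPDATE OR DELETE ON messages
FOR EACH ROW EXECUTE FUNCTION audit_row_change();
```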

  • @drasticfred
    @drasticfred 2 months ago

    I always add a reserve "flag column" to my tables, usually of type int, no matter what the table serves; it comes in very handy and gives flexibility to glue it to any other table, service, or logic, etc.

  • @obacht7
    @obacht7 2 months ago

    Thank you for another nice episode! I like that you started out with a very gentle introduction what the topic is about, why it is important, and what the main issues are related to Postgres. In some of the past episodes, I was sometimes a bit lost because I couldn't follow your deep knowledge quickly while not knowing enough about the postgres-specific challenges/internals myself. So thanks for setting the stage a bit for the beginners and Postgres-"foreigners" (pun intended) 👍

    • @NikolaySamokhvalov
      @NikolaySamokhvalov 2 months ago

      Thanks I needed to hear this. Passed to Michael too. I think we'll do it more - basics in the beginning of an episode

  • @LearningSFR
    @LearningSFR 2 months ago

    Awesome work. I would love to hear more about logical replication on high intensive workloads (master node with hundreds of databases x 1 replication slot per database)

  • @jianhe5119
    @jianhe5119 2 months ago

    At 1:10:35: I use tmux, and when I select text with the mouse it automatically copies the text, so I don't have the "search with Google" option.

  • @chralexNET
    @chralexNET 2 months ago

    In a personal project I am making, I am trying to build a backend and database where foreign keys aren't the default (without thinking) mechanism to use in all cases; I wrote a comment about that for episode 69. I think this video validates a lot of what I am experimenting with, but I definitely think using foreign keys is okay on tables that you know have very low activity, because it reduces the complexity of the application code, which otherwise has to handle relationships not being guaranteed to be valid. In the end, what I'll end up with is something where foreign keys aren't used, where hard deletes without cascades are used on the root ("parent") tables, and where the application will clean up data during regular maintenance and do the full database vacuum during maintenance. It will work well for my project, because it will have daily maintenance with downtime, where I am aiming for that to be 5 minutes or less. It is a personal project, so 99.99% uptime isn't a concern of mine, but performance during the advertised operational hours of the system is important. The backend is basically just infrastructure for some game servers for old games, and the thing about these old games is that they get more unstable the longer they run. So the user-facing application (the game server) will have downtime anyway; I am just using that downtime window for my backend and database as well.

    • @NikolaySamokhvalov
      @NikolaySamokhvalov 2 months ago

      Thanks for sharing your experience. Worth noting, "heavy loads" I mention all the time are quite rare - say, starting at 10k TPS. Before that, I would use FKs without doubts.

    • @chralexNET
      @chralexNET 2 months ago

      ​@@NikolaySamokhvalov What I am getting from this is experience in building a system, that can work without relying on foreign keys, just as one thing. It actually tries to do a lot of the things you have been talking about on your streams, even if it is on a small scale. It will most importantly give me experience for building this sort of system, both the application side and the database side, but the most important things is on the application side because it changes what kind of code should be written.

    • @chralexNET
      @chralexNET 2 months ago

      @@NikolaySamokhvalov Uhm, I had to edit my comment before because I misread what you wrote, I thought you wrote "worth nothing", but you wrote "Worth noting", so I went on a bit of a tangent. Sorry about that, you can just forget about what I wrote before, unless that is actually what you meant. And it is a good point that I should only expect benefits at the higher TPS.

    • @NikolaySamokhvalov
      @NikolaySamokhvalov 2 months ago

      @@chralexNET no worries. My comment was my own worry that when I talk about edge/corner-case problems, I forget to mention that to meet those problems, you need to grow your workloads quite high. So it might provoke false impression like "FKs are really bad" - this I wouldn't want to happen. They are good. It's just, really heavy loads are challenging, and edge cases are not good :)

  • @RU-qv3jl
    @RU-qv3jl 2 months ago

    I mean, I think that the benefits of partitioning are obvious. I also think that there are a lot of people who don't know the internals and won't think about it. I also think that with partitioning it is worth cautioning not to go too far. By default the planner will only re-order, I think, 8 tables or something like that? So too many partitions can lead to worse plans as you run into the genetic optimiser more quickly, right? I think that would also be worth discussing (says me, just part way through the episode) :) Another really nice chat by the way, thanks. I always like hearing your thoughts.
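
For reference, the planner settings the comment is most likely thinking of, with their defaults (easy to check on any instance):

```sql
SHOW join_collapse_limit;  -- default 8: max FROM items the planner will reorder in explicit JOIN lists
SHOW from_collapse_limit;  -- default 8: max FROM items merged from subqueries into one planning problem
SHOW geqo_threshold;       -- default 12: FROM-item count at which the genetic query optimizer takes over
```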

  • @rosendo3219
    @rosendo3219 2 months ago

    gratz boys on episode 100! always listening to you in my car while driving to my boring work

  • @bhautikin
    @bhautikin 2 months ago

    One of the best episodes!

  • @dshukertjr
    @dshukertjr 3 months ago

    Congrats on episode 100! Sorry if this has been covered in past episodes, but I would love to know more about the following. 1. Why do you seem to discourage using foreign keys? 2. It seemed like all three companies rarely perform joins within their databases, but do they perform joins in the application layer instead? Is it common for large-scale databases to not join within the database?

    • @NikolaySamokhvalov
      @NikolaySamokhvalov 2 months ago

      FKs are great and I personally use them everywhere. However, they have risks: 1) perf. overhead required to maintain them (that's ok usually), 2) perf. cliff related to multixact IDs - going to demonstrate soon with the PostgresAI bot.

    • @utenatenjou2139
      @utenatenjou2139 2 months ago

      At large scale, having foreign key constraints makes managing data really difficult. Under a complex structure, imagine when there is data that needs to be corrected. Note: for small data sets, no problem.

  • @anbu630
    @anbu630 3 months ago

    Congrats on your 100th episode !! Watched your 1st one and here the 100th one as well :-)

  • @JamesBData
    @JamesBData 3 months ago

    Congrats on reaching episode 100!

  • @RajanGhimiree
    @RajanGhimiree 3 months ago

    Can you guys make a complete episode on logical replication, from configuration to replicating data from the source server to the replica?

  • @davidcarvalho2985
    @davidcarvalho2985 3 months ago

    Okay, you guys convinced me. I will try pgbadger. Thanks for this interview by the way. Really nice

  • @kirkwolak6735
    @kirkwolak6735 3 months ago

    So, I was wondering... Wouldn't it be nice if there were 2-3 types of plans based on some of the values of the parameters, so you get the most optimal plan, and maybe the optimizer does parameter peeking to determine which of the X plans to choose... And then I realized: wow... the application could do this. Create 3 prepared statements for the same query, and execute against the one TUNED for the query parameter types, forcing the best plan to be used by design... Hmmm... We have this situation. We have a complicated search, but when the value we are searching for is small (lots of hits) vs large (few hits), it wants to choose the wrong plan after a few queries and then a switch. Unfortunately, this is inside of a procedure where the statement is prepared around us. We would have to basically duplicate the complex query just to make the condition so that it executes the right way. But I might still try that.
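
Related machinery, for reference (hypothetical query): how a prepared statement's plan choice can be inspected and overridden, which is the Postgres-side lever closest to the idea described above.

```sql
PREPARE product_search (text) AS
    SELECT * FROM products WHERE name LIKE $1 || '%' ORDER BY name LIMIT 50;

-- The first few executions are planned with the actual parameter value:
EXPLAIN (ANALYZE) EXECUTE product_search('wid');

-- After several executions the planner may switch to a cached generic plan
-- if it does not look more expensive; this choice can be forced either way:
SET plan_cache_mode = 'force_custom_plan';   -- or 'force_generic_plan' / 'auto'
```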

  • @kirkwolak6735
    @kirkwolak6735 3 months ago

    Yes, you should test with your extensions. You should have a few general procedures you run that exercise using all of the extensions. And you should monitor log sizes. In case something is going wrong, and it's only in the log files. I like using htop in linux, and watching how much memory the various threads are using and the total. In case memory consumption has changed... This can lead to issues. Reading the documentation for the release. YES, it is good documentation. But it can feel a bit overwhelming because they document so much...

  • @Marekobi
    @Marekobi 3 months ago

    This is gold !! :)

  • @pdougall1
    @pdougall1 3 months ago

    Can ya'll talk about the best way to think about adding indexes? What is the problem when adding too many on a table for instance. Or when to reach for one when a query is slow. Confounding factors when there are other queries using the same column (not sure that's relevant). I'm sure there is a lot to consider that are just unknown unknowns for me.

    • @NikolaySamokhvalov
      @NikolaySamokhvalov 3 months ago

      hey Patrick - have you listened to episode "068 Over-indexing"?

    • @pdougall1
      @pdougall1 3 months ago

      @@NikolaySamokhvalov I have not, but definitely will. Also looks like there's one on under indexing as well! Might be exactly what I'm looking for, thanks!

  • @kirkwolak6735
    @kirkwolak6735 4 months ago

    Michael, thank you for sticking to your guns to get your explanation out there. There is a subtle difference in the AUDIENCE you two seem to be addressing. Nikolay seems not to worry about launching a long-running query... because when he sits down, he likely either knows he has a problem already, OR he's got such deep experience in PG that he knows to check a few things before he starts pounding out a query. I believe he implies this when he talks about how he adds the LIMIT based on what he is expecting (e.g., when he might be wrong, he will do a LIMIT 2 and let the error guide him). Whereas you were (IMO) coming from a Novice (like me) who *thought* that just adding a LIMIT was *always* a decent safety approach. And my understanding was limited to (LIMIT + Order By = Red Flag). Your point goes deeper than that. So now I realize the correct formula is: (LIMIT + (Order By | Index Range Scan) = Red Flag). Meaning the optimizer might be doing what looks like a simple range scan on some column, but it is orthogonal to the data being found, and can quickly become a semi-seq_scan (find the first row with the index, then seq_scan in reverse until the number of records hits the limit... which may never happen, making it scan to the beginning/end). That's two wildly different target audiences. And I could be completely wrong, it's my guess. Of course I look up to both of you, so I apologize if I misstated your positions!

    • @michristofides
      @michristofides 4 months ago

      Thank you Kirk, for the kind words and the wonderful summary! I think you're spot on, and am glad to hear it was helpful
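
To make the "LIMIT + ORDER BY" red flag from the comment above concrete, a sketch with hypothetical names: when the planner walks an index that matches the ORDER BY and waits for LIMIT rows to pass an unrelated, selective filter, it can read a huge fraction of the index and heap before stopping.

```sql
-- Risky shape: an index on (created_at) matches the ORDER BY, but the filter
-- on customer_id is selective and unrelated to that index, so the scan may
-- walk far more of the index/heap than LIMIT suggests.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM events
WHERE customer_id = 123
ORDER BY created_at DESC
LIMIT 10;

-- A multicolumn index matching both the filter and the ordering lets the
-- scan stop after ten matching rows:
CREATE INDEX events_customer_created_idx ON events (customer_id, created_at DESC);
```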

  • @pdougall1
    @pdougall1 4 months ago

    Y'all are great! It's really important to hear professional db people talking about how all of this works in practice. Beyond a basic explanation that can be found in books (books are also really important btw).

  • @hamzaaitboutou8563
    @hamzaaitboutou8563 4 months ago

    more of this please <3

  • @iury0x58
    @iury0x58 4 months ago

    Great content, guys! Binging the channel

  • @iury0x58
    @iury0x58 4 months ago

    Thank you for this content. Very nice

  • @davidfetter
    @davidfetter 4 months ago

    I just love the way this episode captured the processes that actually go into doing the thing! BTW, the repository for the web site is, as far as I know, also a git repository, and I suspect that rebase requests--NEVER use merge--would be easier to get into it than patches sent to the -hackers mailing list for the core code would be.

  • @keenmate9719
    @keenmate9719 4 months ago

    Looking forward to this one... paging and limits, it's like naming and cache invalidation :-)

  • @nishic1
    @nishic1 4 months ago

    Woww. Excellent video.. Very informative..