Clearly, you're a naturally gifted teacher. Great content.
Love the way you teach. I almost didn't want the video to end.
#savetheducks
I understand this is one of your older videos, but wanted to mention that your content is first class! Thank you!
Most interesting way of teaching I have ever found, learning can't be more fun than this!
Incredible video. I wanted to solidify my understanding of the concept of columnar databases vs row-based, and this video not only made it easy to understand, but enjoyable too!
The content of this channel is superb.
Great video man. I like a mix of column (for logging, source of truth) and table-based RDBMS, and also document-oriented (which could be either row or column) for quick, trashy, dirty data that makes you blush when you look at it too long.
But I've seen column stores used for quick trashy data, where sums or map-reduce are the highest priority, and it blows everything else away. I am digging ScyllaDB lately.
Visually clear, funny and interesting explanations, you are greatly talented.
13:35 Aggregates read more than you need: only if you don't have indexes on the columns you query, and if your core business is querying that data, you will have it indexed anyway. Also: if the amount of reads becomes a problem, the first thing you do is de-normalize that value into a separate table.
This is where database monitoring becomes essential, a nice topic for a ten-part series that will blow your viewers' minds :-)
I'd change the pros and cons to "what kind of applications benefit from this", because every point you mention has some serious caveats, related issues, and known workarounds.
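Roughly what I mean, as a sketch (the orders/customer tables and names here are made up, not from the video):

-- hypothetical schema, just to make the point concrete
CREATE TABLE orders (
    order_id    BIGINT PRIMARY KEY,
    customer_id BIGINT,
    amount      NUMERIC
);

-- option 1: index the columns you aggregate on, so the aggregate can often read the index instead of whole rows
CREATE INDEX idx_orders_customer_amount ON orders (customer_id, amount);

-- option 2: de-normalize the aggregate into its own small table and maintain it on write
CREATE TABLE customer_totals (
    customer_id  BIGINT PRIMARY KEY,
    total_amount NUMERIC NOT NULL DEFAULT 0
);

UPDATE customer_totals
SET total_amount = total_amount + 19.99
WHERE customer_id = 42;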
Correct, that is why I didn't include indexes in the mix. Thanks for the feedback as usual.
Amazing visualisation of the concept, keeping the technicalities agnostic, along with equally simplified narration.
The quality of your material and narration is inversely proportional to the jokes :)
this dude is good, the channel is underrated
Your accent and voiceovers make it more attractive to learn.
Really awesome content, and a great resource for someone like me who is looking to improve their backend concepts. Thank you for such good content. Just subscribed for updates.
Glad it was helpful, and welcome to the community!
You're very entertaining to watch, listen to, and learn from.
❤️❤️❤️
One of the best explanations I have seen. Thanks man.
Gives concrete examples of when column database operations are faster or slower than in a row database. Thank you!
Great information in such a simple way. It clears up the concept in the best possible way 👍. I love all your videos.
Thanks Virendra 🙏
"Lets confused everybody by new names" : hahahaha well said ! great video thanks
I love your explanation, awesome!!! 🎉
I really loved your way of describing this topic.
You opened my eyes :D
Great video, well explained, fun and informative. I loved that, thanks dude!
Funny and effective, loved it 👍
This just shows how much he loves what he does.
Thank you so much for explaining this concept so beautifully and in such great depth... I am a fan of your teaching.
Could you please teach us how to do columnar partitioning in Postgres? It's easy to find lessons on horizontal partitioning, but I can't find anything on how to do vertical partitioning. Thank you!
Splitting by columns is called vertical partitioning; searching for that term will turn up material.
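A rough sketch of the idea in Postgres, assuming made-up table and column names: split the wide or rarely-used columns into a second table that shares the primary key.

CREATE TABLE employees_core (
    id         BIGINT PRIMARY KEY,
    first_name TEXT,
    last_name  TEXT
);

CREATE TABLE employees_extra (
    id    BIGINT PRIMARY KEY REFERENCES employees_core (id),
    bio   TEXT,
    photo BYTEA
);

-- join the two halves back together only when you need the wide columns
SELECT c.first_name, e.bio
FROM employees_core c
JOIN employees_extra e ON e.id = c.id;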
Great video and explanation, thank you.
This was excellent, it cleared up the FUD 👍 thanks!
Thank you very much for making this video with a real-time example. Much appreciated
You're awesome man. Great explanation.
Absolutely amazing video. Thank you
Thank you Hussein. Really simple, good, and funny.
Great vid. I am working with both data structure types :) Using Postgres as a row-based store to prepare the data for transformation into columnar form for GPUs to process :)
Nice! You're going HTAP.
@@hnasr No, not going hybrid transactional, as the columnar data is being used as runtime in-memory data until a bulk update changes it.
@@chunheguo9230 Hi, could you please give me more details on how I can do the same? Please reply.
@@minscj Hi, the solution we went with is proprietary, so I can't really go into details. I can however suggest that you take a look at the concept of Apache Arrow. www.dremio.com/announcements/introducing-apache-arrow/ has a nice diagram. We went very low level and didn't use many of the existing open-source abstraction layers. It all came down to understanding how the GPU's processing cycle works and aligning the columnar data to that cycle.
Great explanation
Amazing content!
Thanks. Good info. I never knew how column DBs work.
Nicely explained
awesome explanation for both row and column oriented db's
another banger of a video
Dude, that was sweet. Any chance of doing a video on file systems and mapping them to DB operations?
Yup. I wonder, if I increase a text value in a column or add a new column, how does that map to disk I/O?
Thanks bro, very useful
Excellent I Love this
But where is the video where you explained how to change the database engine for a specific table?
I think it's this one: Database Engines Crash Course (MyISAM, Aria, InnoDB, XtraDB, LevelDB & RocksDB)
ruclips.net/video/K9Qd3UMHUQ4/видео.html
Really good video bro, I like what you do
nice explanation thank you
@Hussein I love your database videos. Could you create a video on how to alter large tables which have millions or maybe billions of records without downtime in Postgres?
Great explanation.
Which databases store both row-based and column-based structures?
Waiting for your udemy course. Great stuff as usual.
Great video. QQ: for a columnar DB, if the DB stores all the metadata about which block has 1006, won't it also store metadata about social security number 666? So we would need only 2 jumps instead of 3 jumps, right?
1006 is the row ID (internal to the database). The DB only knows in which blocks these internal IDs exist; it doesn't store any such metadata for the other columns.
Thank you! That's very clear !
Cassandra (NoSQL) uses an LSM tree, which makes it a better choice for heavy writes in comparison to SQL databases. Any thoughts on this?
Thank you so much!
hi! can you do a vid with indexes? the visuals are so helpful!
very clear. bravo!
u r the best
Great work here! So many explanations of this are too high level and miss the key differentiator: the way in which the data is accessed. You did a great job and did it at your own pace. Hope you find success with this style.
Great Video!
Thank you so much ! it was so clear
Great Video Hussein.. when are you doing webrtc?
I am working on the slides; once that's done I'll work on the demos, so maybe a week or two.
@@hnasr Thanks
Sir, here you said that when searching for first_name it automatically loads the final block and skips the first block of first_name. How can it find it? Is it because the row_number is indexed in the DB table?
If not, then why not find the final block using ssn?
Hey Hussein, wouldn't it be fair to say that to get the advantages of a column DB in a row DB, we end up creating indexes in the row DB?
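For example, something like this (the emp table is made up) can often be answered in Postgres from the index alone, which is about the closest a row store gets to a columnar scan:

-- hypothetical table
CREATE TABLE emp (
    id     BIGINT PRIMARY KEY,
    name   TEXT,
    ssn    TEXT,
    salary NUMERIC
);

-- a covering index over the column we aggregate
CREATE INDEX idx_emp_salary ON emp (salary);

-- can frequently be satisfied by an index-only scan, without reading the full rows
SELECT sum(salary) FROM emp;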
This column store sounds similar to the inverted indexes that search engines (e.g. Elasticsearch) use. Are there key differences there?
Good video, but a couple of small things. I think the video was slower than it needed to be, with too many tangents and too much repetition. We can pause and go back and forth, so no need to artificially slow it down. Also, I think for this topic leaving out indexes does not make sense; almost no one is going to choose a column-oriented DB before trying indexes.
At 21:58, 1006 was found directly using some "tricks". Then why can't we use the same tricks to find 666:1006 on the first try?
Let's say in a row-oriented DB, where from your explanation the commas don't actually exist and are just for display, how will the engine know where to start looking for first_name, etc.?
For instance, PostgreSQL stores these sequences of values in tuple storage, one value for each column in the table. The values are serialised and packed together to form the tuple. When querying data from a table, PostgreSQL uses the column names stored in the system catalogs to interpret the tuples' content correctly. The column names are used by the query planner and executor to map the data values from the tuple storage to their respective columns based on their positions in the tuple.
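If you're curious, you can see that column-name-to-position mapping yourself in the pg_attribute system catalog (the emp table name here is just an assumed example of an existing table):

SELECT attname, attnum
FROM pg_attribute
WHERE attrelid = 'emp'::regclass
  AND attnum > 0              -- positive attnums are user columns; system columns have negative attnums
ORDER BY attnum;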
Hussein,
Can you please make a short video on the different kinds of DBs, who the providers are, and what the ideal uses are?
Question: is column-oriented the same as a column-family DB?
Yes same name. Columnar and column store are other names.
Why can't we just do SELECT Salary FROM emp? Will that be efficient, or will it result in the entire row being read and then filtered? The table can be indexed on ssn or name.
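As far as I know, a plain row store still fetches whole rows from the page and then projects out Salary; you can check what the planner actually does with EXPLAIN (the emp table and index here are made up):

EXPLAIN (ANALYZE, BUFFERS)
SELECT salary FROM emp;                      -- a sequential scan reads pages that hold the whole rows

CREATE INDEX idx_emp_ssn ON emp (ssn);

EXPLAIN (ANALYZE, BUFFERS)
SELECT salary FROM emp WHERE ssn = '666';    -- the index finds the row quickly, but salary still comes from the full row on the heap page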
Hussein, thanks for the videos. Today I'mma try and figure out how to download a YouTube video with vanilla Node.js; if I don't figure it out, I'mma ask you guys for help.
Thanks bro 🎉
How do you work with a table that has 1 column or fewer?? 🤔
great video!
nice demonstration
Thank you..
"The devil!"
"Save the ducks guys save the ducks"
Now I understand databases.
Hey, a bit of an off-topic question: why did you change your name from igeometry?
Mainly moving from GIS to a personal brand, so I get to cover multiple topics.
Perfect!
Lmao, what's the reference for "every time you write to a disk, a duck dies"? 😹
Hussein, I wanna know how you got that level of curiosity, mashallah? Is it something gained by training?
It is pure curiosity, asking why, and having the humility to learn; it takes time.
We generally want all the columns, that's what a record or document is
love you sir
Interesting video. Shouldn't data in a column-oriented DB be stored sorted?
Not necessarily; table data usually aren't stored sorted, otherwise writing becomes difficult. Indexes, on the other hand, are sorted.
Lesson learnt from the video, Save the ducks :p
I can't imagine how locking works on a column-oriented database; it would be a nightmare unless it has its own different techniques.
You’re hilarious 😂 and offer a great explanation. Thanks!
#savetheducks
🦆🦆🦆🦆 Great video!
Nasser, great video. But one observation: clearly you were high while making this video. 🤣
Hahahahha you're so funny. Good video. Thanks
"...they have all this meta-data, mumbo-jumbo"
-Hussein
Clarification: “column stores” and “wide column stores” are quite different! I watched this expecting to learn about BigTable/Cassandra. But they have key differences so this video doesn’t apply to them. TIL
Correct, wide column is different: it groups columns into a column family. Best of both worlds.
Six Six Six, the devil... SUBSCRIBED
"SAVE THE DUCK", guys, "SAVE THE DUCK".
OMG, why can only you explain complicated problems in easy words!
Tutorials always say "NoSQL is good for fast writes, scalable, not suitable for complicated queries",
but no one explains it as clearly as you! Column-based NoSQL is just for simple data writes and AGGREGATE queries. One example is the number of likes on a video.
Just define a simple table, (video_id, user_like_id), then count(user_like_id); this scenario is the best fit for NoSQL.
Or sensor data: not complicated (can tolerate slow writes), but lots of aggregate queries, like min(), max(), average().
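A rough sketch of those two examples (the table and column names are made up): simple writes, then one aggregate over a single column, which is exactly what a column store is good at.

-- hypothetical tables for the two examples
CREATE TABLE video_likes (
    video_id BIGINT,
    user_id  BIGINT
);

CREATE TABLE sensor_readings (
    sensor_id BIGINT,
    reading   DOUBLE PRECISION
);

-- likes per video: trivial writes, one aggregate over one column
SELECT video_id, count(*) AS likes
FROM video_likes
GROUP BY video_id;

-- sensor data: same pattern, simple writes and column-wide aggregates
SELECT sensor_id, min(reading), max(reading), avg(reading)
FROM sensor_readings
GROUP BY sensor_id;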
Hussein, kindly be more straightforward in the videos. You are very talkative, and I like that, but I am a more information-centric viewer; getting to the point would be appreciated a lot. Second, don't mix up or drag out your words while talking.
Aw man, but ducks are delicious.
In this series, I wanted to watch the previous 2 videos, but they are members-only. So I think this channel is useless for me.
"Lets confused everybody by new names", make them look like a fool who can not understand things, thus makes us more "professional" and "experts"!
666 thank you for the laughter my friend
jack of all trades master of none
Just be a teacher and not Jim Carrey.