Excellent to see the infrastructure and scalability evolution at Google; now it all makes sense: how they got to MapReduce, how they got to commodity hardware, ...
A very enlightening talk!! Especially the patterns discussed at the end. What fun it would have been to go wrong, learn a lesson, think about trade-offs, and then design, implement, and observe the results.
Inspiring. He mentioned that the text web was no longer a challenge, but that video on the phone in your pocket wasn't feasible in the short term, which turned out to be accurate. In 2022 it has become possible for me to watch this video anywhere on earth 😯
Best talk I know of on the history and circumstances of why Google built their systems. Especially crazy that GFS was built because commodity hard disks were unreliable haha.
Informative presentation, but I'm confused by the example @1:01:13 - what does the 30 MB/s represent? I thought he was talking about reading from disk, but from the previous slide that was supposed to be 1 MB/20 ms, which would be 50 MB/s... oh well, maybe it's closer to 30 MB/s, i.e. 1 MB/33 ms. But it seems a weird example to use to illustrate the power of estimation, given that no calculation is needed to know that if you parallelize everything, the result will be 30 times faster - which is all he derived after all that math.
To read 256 KB of data from the disk, you first need to seek to the start of the sector (about 10 ms to move the disk head to the right cylinder, then wait for the correct sector to rotate under the head), and then perform a sequential read. I think the 30 MB/s is the sequential read rate, so reading 256 KB = 0.25 MB sequentially after the seek takes approximately 256 KB / 30 MB/s = 1/120 s ≈ 8 ms, for a total of about 18 ms.
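A minimal sketch of the arithmetic in the reply above, assuming the figures quoted in these comments (a ~10 ms seek and a ~30 MB/s sequential rate); the exact numbers on the slide may differ:

```python
# Back-of-the-envelope disk read estimate, using the numbers quoted in the
# comments above (assumed: 10 ms seek + rotational latency, 30 MB/s sequential).
SEEK_MS = 10.0                # assumed seek + rotational latency, in ms
SEQ_RATE_MB_PER_S = 30.0      # assumed sequential read rate

read_mb = 256 / 1024          # 256 KB expressed in MB (0.25 MB)
transfer_ms = read_mb / SEQ_RATE_MB_PER_S * 1000   # ~8.3 ms of transfer
total_ms = SEEK_MS + transfer_ms                   # ~18.3 ms per 256 KB read

print(f"transfer: {transfer_ms:.1f} ms, total per 256 KB read: {total_ms:.1f} ms")
```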
Good, I'm glad you shared this video; I wish you success always. He talks about how their technology has evolved over time and how their technological infrastructure has allowed them to be so successful.
This is such a great example of a talk that gives you so many takeaways, unlike the majority of conference talks you see nowadays.
It's crazy that someone built Google without the help of Google.. :D
The design of MapReduce, Bigtable, LevelDB, Spanner: that's great.
Nice summary. Everything is still relevant in 2020.
Great talk! Does Jeffrey Dean have a similar talk about the last 10 years of Google technology?
A real master and pioneer in designing systems at such scale. Learned a lot.
Awesome talk, explained so simply and clearly!
Dean really describes the advanced technology behind building software systems. A great mentor.
Great stuff, thanks for sharing this
Absolute genius
Thanks!! I found it very informative. Very nicely explained!!
Fantastic talk. But I wish he would explain each step more, talk slower, and pretend we're not all experts in his field.
Great talk.
52:47 Why disable the CPU cache?
48:07 MapReduce Framework
this guy is probably hella rich by now
Awesome
good
Here is the link to the slides: static.googleusercontent.com/media/research.google.com/en//people/jeff/Stanford-DL-Nov-2010.pdf