This was a fantastic presentation. She covered a huge amount of material in a short time. What they've done and how they've done it is very impressive.
*Timestamps*
0:00 Introduction (Lisa Guo)
2:21 1. Scale out
5:11 1.1 Instagram Stack Overview
5:46 1.2 Storage vs Computing
6:29 1.3 Scale out: Storage
8:13 1.4 Scale out: Computing
8:52 1.5 Memcache + consistency issues
12:05 1.6 DB load problem
14:01 1.7 Memcache Lease
15:12 1.8 Results, Challenges, Opportunities
17:03 2. Scale up
17:57 2.1 Monitor (Collect Data)
20:07 2.2 Analyze (cProfile)
23:06 2.3 Optimize
26:19 2.3a Memory Optimizations
29:06 2.3b Network Latency Optimizations
30:40 2.4 Challenges, Opportunities
31:36 3. Scale Dev Team
33:06 3.1 What We Want
33:30 3.2 Tao Infrastructure
34:33 3.3 Source Control
36:17 3.4 How to ship code with 1 master approach?
37:54 3.5 How often do we ship code?
40:03 Wrap-up
41:15 Q&A
Note: My understanding for Memcache Lease is, you're allowing servers to return stale values with the knowledge of it being stale. This is different from most simple implementations of cache invalidation, which would query the db and update the cache whenever the value is stale. The philosophy here is that the stale value is still useful, and the value difference is not worth the load on the database.
@@zss123456789 That's a very good point I haven't thought about. Thanks!
Not every hero wears a cape, thx!!
InfoQ is doing an excellent job by bringing these talks to us.
"I need to learn about scaling"
*heads to youtube, finds this video*
"Wow, I now know EVERYTHING about scaling".
The best video on scaling infrastructure I've found so far. No jargon, no acronyms, specific detail about exactly how things are balanced, routed, managed, and replicated. Love it.
Hey there! I just wanted to take a moment to remind you how incredible you are. Your kindness, resilience, and unique talents make a positive impact on the lives of those around you. Your smile has the power to brighten the darkest of days, and your words have the ability to uplift and inspire. Never forget the strength and beauty that reside within you. You are capable of achieving great things and making a difference in this world. So keep being amazing, keep chasing your dreams, and never lose sight of the incredible person you are. You've got this, and today is going to be an amazing day for you!
Amazing sharing! Kudos InfoQ❤
That's a great presentation. To the point and not super technical. Even a newbie to the world of architecture like me can understand it.
Every architecture video should be like this, instead of marketing BS.
This is a treasure box! Thank you, Miss/Mrs XYZ, for the super lucid explanation.
Easy to understand presentation. Thanks
Best presentation I have ever seen! Thank you.
One of the best presenters I have ever seen.
Dude, load testing on prod! What a badass move!
Amazing presentation!
Such an insightful presentation from a developer's point of view. Thank you so much.
Kudos to infoQ team for bringing such tech videos.
Thank you so much for sharing 😊😊😊
Git and code shipping approach is mind blowing ❤
Very nice
easily the best presentation i ever came across in these talks
Well thought through presentation. Many takeaways.
Such an awesome video, thank you for sharing
Well delivered talk with clear separation of topics.
She did really well. Also, shout-out to the guy asking the very last question for answering it with his own experience.
Very informative talk
A complex topic explained in a simple way. Thank you!
This has been such an educational video. I feel excited about the problems; everything was so well covered and explained, and so many aspects were touched on without any redundant data. Thanks InfoQ for this video. Super, super interesting.
This has become my go to talk
Great presentation! I'm dealing with many of the scaling challenges Lisa discussed in my organization. Although they vary and Instagram's solutions don't solve my challenges directly, Lisa certainly offers a view of how great companies address them.
Greatest post I've ever seen, thanks.
Excellent presentation. Insight into practical scaling challenges.
Well said: "performance as part of the dev cycle rather than an afterthought".
Gold video! Learned so many aspects of scaling.
Fantastic!!
Brilliant talk👍
Wow. She is brilliant.
Great talk. I have got some new tools and process for my work. Thank you very much.
Happy to hear that.
Great talk thank you
Really fantastic presentation, thanks Lisa and InfoQ!
It was a fantastic presentation. Very clear, easy to understand, and very detailed.
This is a seriously good talk
Excellent talk.
Awesome concise high level presentation.
Great presentation. Great job Lisa !
Awesome talk, thank you!
Super informative. Thank you!
deployment to 20,000+ servers in 10 mins !!!
Yeah, just mic drop moment
TAO is a distributed graph-based database, not a relational database. There are nodes, with links for relations.
Thanks for the great talk, very clear and concise. Interestingly, some of the problem in the "scale up" section can be resolved by using a programming language more suitable for modern machines. The "scale up" section sounds like "hacks that make Python faster".
I am graduating this year, so I don't have a lot experience. I feel from your comment that you have a lot of knowledge from experience. May I ask you which programming languages are more suitable for scalability in modern machines. Thank you in advance
@@MrHades2325 Erlang and Scala - To name two
Developer efficiency > compute efficiency
@@piyh3962 “Move fast, break things” :)
Stupid comment, and I'm not even a Python fan. It's usually academics who make such shallow statements.
awesome to see python scaled to INSTAGRAM LEVEL
Only usable on a large scale when replaced with C, lol. Once again Python has proven that it is a scripting language for toying around.
This talk is like one complaint about Python after the other:
1) Performance is bad.
2) Memory usage is bad. (I lol'd when she said that just the running Python code itself takes up a significant amount of memory.)
3) GC is bad.
Best Talk I have seen! Thank you for sharing!
Nice presentation. There are a bunch of things that could be improved for detecting jumps in the time series: Fourier-transform the series and compare the two spectra against a predetermined delta of difference.
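A minimal sketch of the commenter's idea, in pure-stdlib Python: compare the magnitude spectra of two monitoring windows and flag a jump when they diverge by more than a chosen delta. The window length and delta here are made-up parameters, and the naive DFT is only meant for small windows — this is not how Instagram's detection works, just an illustration of the suggestion.

```python
import cmath

def dft_magnitudes(samples):
    """Naive DFT magnitude spectrum (fine for small monitoring windows)."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        s = sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(samples))
        mags.append(abs(s) / n)
    return mags

def spectra_differ(window_a, window_b, delta):
    """Flag a jump when the spectra of two equal-length windows
    diverge by more than `delta` in total magnitude."""
    a = dft_magnitudes(window_a)
    b = dft_magnitudes(window_b)
    return sum(abs(x - y) for x, y in zip(a, b)) > delta

flat = [1.0] * 8
jump = [1.0] * 4 + [5.0] * 4
print(spectra_differ(flat, jump, 1.0), spectra_differ(flat, flat, 0.1))
```

The DC term alone (the window mean) already catches a level shift; the higher bins distinguish a step change from periodic load patterns.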
Extremely beneficial. Please have more of these
This is pure gold !
this planet will never recover from the Python's environmental impact
Really good talk!
Go Lisa!!
Fantastic. So insightful.
How would you synchronize between different PostgreSQL servers? It would still cause latency issues.
Engineers are so good at optimizations that they ultimately optimize themselves. Great presentation though...
Interesting, thanks
Lisa didn't discuss the Postgres data sharding. Is it possible to store metadata and handle queries for billions of users in just one Postgres instance? Any ideas?
10:20 She mentioned sharding by hash of user id.
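A toy sketch of hash-based sharding as mentioned in the talk. The shard count and the MD5 choice are assumptions for illustration, not Instagram's actual scheme; the usual trick is to use many logical shards so they can be rebalanced across physical Postgres instances without rehashing users.

```python
import hashlib

NUM_LOGICAL_SHARDS = 4096  # assumed count: many logical shards per physical box

def shard_for(user_id: int) -> int:
    """Map a user id to a logical shard deterministically; logical shards
    can later be moved between physical instances without remapping users."""
    digest = hashlib.md5(str(user_id).encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_LOGICAL_SHARDS

print(shard_for(12345))  # same id always lands on the same shard
```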
Does dead code really take up that much memory? It will never be run so it doesn't affect runtime, but how much smaller would your executable be if you removed dead code?
I think here they are talking about RAM consumption. In compiled languages the compiler removes code that will never get called (JS has tree-shaking, something like that), but in Python, if a module is loaded, all of its methods are loaded into memory, and this cascades through imports. I'm not sure how much they gained, but by the look of the improvements it seems they were building really fast and left a lot of dead code behind, which helped a lot once cleaned up. Had they been cleaning from the start, the change would not have been that big.
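You can see the effect with the stdlib `tracemalloc` module: importing a module keeps its code objects live on the heap whether or not anything is ever called. This is just a quick illustration of "loaded code occupies RAM", not the tooling from the talk.

```python
import importlib
import sys
import tracemalloc

def import_cost_kb(module_name):
    """Rough heap cost of importing a module: function and class bodies
    become live objects even if they are never called."""
    sys.modules.pop(module_name, None)  # force a real (re)import
    tracemalloc.start()
    importlib.import_module(module_name)
    current, _peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return current / 1024

print(f"importing 'decimal' keeps ~{import_cost_kb('decimal'):.0f} KB live")
```

Multiply that by thousands of modules and hundreds of worker processes per box and dead-code removal starts to pay real dividends.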
Great talk!
11:36 Today I learned that you can run daemons on a database too (Postgres in this case, as she said).
Yeah, even I was not aware. I did some digging and it seems this is done using PgQ: instagram-engineering.com/instagration-pt-2-scaling-our-infrastructure-to-multiple-data-centers-5745cbad7834 ... under the caching section.
wow! this is cool
wow. this was a great talk!
I started sweating when she talked about the single branch tactic
You got that? If yes, can you please explain?
How do you test the configurations for scale out — is this applied to live running machines? Or are specific test machines carved out from live users?
This convinces me that even Python can be scaled as a global distributed system. Stop saying Python sucks, guys.
Python the best.
Excellent talk!
44:21 You guys are robust!
OMG, the source control part is surprising. It looks like IG is a giant monolithic app with one code base. Why not break it up at an early phase?
Because of the “move fast, break things” philosophy.
At 12:05, if the memcache is invalidated, then why is it needed at all? The read and write operations go to the database server then.
Great talk.
26:04 Is this statement correct? "We run n processes where n is greater than the cpu cores of the system." I thought we should have at most the same number of processes as the number of cores.
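The statement makes sense when workers spend part of their time blocked on I/O (DB, memcache): while one process waits, another can use the core. A common rule of thumb (gunicorn's docs suggest `2 * cores + 1` as a starting point) can be sketched like this; the `io_fraction` model is my simplification, not from the talk.

```python
import os

def worker_count(io_fraction):
    """Heuristic: if each worker spends io_fraction of its time blocked
    on I/O, roughly cores / (1 - io_fraction) processes are needed to
    keep every core busy. One-per-core only fits pure CPU-bound work."""
    cores = os.cpu_count() or 1
    return max(cores, round(cores / (1 - io_fraction)))

# CPU-bound workers: one per core.  Half-blocked workers: twice as many.
print(worker_count(0.0), worker_count(0.5))
```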
Very nice presentation. But I wish she wouldn't use Data Centre and Region interchangeably.
Fantastic presentation, lot was covered in very short span of time.
Can anyone point me to more such content here on YouTube? Thanks.
There is similar content available on infoq.com
Fantastic talk! Learnt a lot.
Can anybody see my comment? Am I trapped on a single Datacenter in SGP?
Wow, 20,000 web servers where the code is deployed with 40-60 rollouts per day
A huge amount of work behind the scenes.
Definitely interested in how they manage to do this.
Instead of having every Django (d1, d2) competing to go to the DB for a cache refresh and causing the 'thundering herd', the d1, d2s should only check for data in memcache. It can be the job of memcache or an external service to refresh the data from the DB, independent of d1, d2. Memcache can continue to serve old or stale data to d1, d2 while, in parallel, it loads the data from the DB and then invalidates the old data in a transactional block. Of course, for a short time until invalidation you may have double the size of the data in your memcache. It is sort of similar to what memcache-lease is doing, but I think the d1, d2s should be kept focused on memcache rather than speaking to the DB and causing the 'herd' problem.
I don't agree, because cache is more expensive than DB. And as the speaker said, data access is often local to a region. If you eagerly update memcache with the entire dataset, you then have to deal with the huge amount of storage you'd require, not to mention that scaling out the memcache cluster (or any hardware change in that cluster) would take forever, because you'd need to prewarm the cache. If you don't do that, you end up with a lazy population strategy, which is exactly what she is suggesting. You also amortize the cost of the first slow query. It's win-win.
Everyone commits on master and it doesn't go wrong... that's impressive haha.
Testing EVERYTHING 😂
@@jimmyadaro but dev at one time is it ?
Very good thanks
Wonder how they do code reviews if everyone works from one branch
Could someone please post a link to the article the speaker mentioned about disabling garbage collection?
This article could be useful: www.infoq.com/articles/Java_Garbage_Collection_Distilled
This was pretty insightful.
Any reason why these images can't be asynchronously processed when the user uploads the image, storing the different sizes in S3 buckets served through a CDN? That way you avoid processing at fetch time whenever a user requests an image. This would further save processing power, right? Any thoughts on this?
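A sketch of the upload-time approach being proposed. The rendition sizes are hypothetical, `resize` is a stand-in for a real image library (e.g. Pillow), and the dict stands in for an S3 bucket behind a CDN — this is an illustration of the idea, not anyone's production pipeline.

```python
from concurrent.futures import ThreadPoolExecutor

SIZES = [(150, 150), (320, 320), (1080, 1080)]  # hypothetical rendition set
bucket = {}  # stands in for an S3 bucket fronted by a CDN

def resize(image_bytes, size):
    # Placeholder for real resizing (e.g. Pillow); just tags the bytes.
    return b"%dx%d:" % size + image_bytes

def on_upload(key, image_bytes):
    """Fan out one resize job per rendition at upload time, so reads
    never pay the processing cost."""
    with ThreadPoolExecutor(max_workers=len(SIZES)) as pool:
        futures = {s: pool.submit(resize, image_bytes, s) for s in SIZES}
        for (w, h), fut in futures.items():
            bucket[f"{key}@{w}x{h}"] = fut.result()

on_upload("photo123", b"<raw jpeg bytes>")
print(sorted(bucket))
```

The trade-off is storage: pre-generating every size multiplies bytes stored per photo, while resizing on demand behind a CDN trades compute for storage, which may be why some of the processing happens at fetch time.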
Can someone say where we can find more information about that git single-master approach?
It's simple. Usually, if you are working on a feature, you create a branch from master, work on it, and after ages merge it back into master. What they did instead: every commit goes straight to master, so your commits have to be stable, but they need not be complete. This way, if someone starts working the next day, they already have the changes you committed, which reduces future merge issues.
@@gsb22 This doesn't sound like the best approach. What about code review? Or reverting only one commit after you've pushed 100 stable commits — imagine that after reverting that commit (for some reason) the feature is crashing! Should you revert all 99 commits? Should you fix, commit, and push in the same day? I mean, this can cause more issues than it may help.
@@tawfiknasser1348 You can cherry-pick to revert a commit. And yes, this method has problems, but this is the tradeoff they went with.
How can they handle conflict when using a single branch?
They push frequently, so merge conflicts are small and easy to fix. If two branches are merged after a month of development, that's a shit storm, whereas if they are regularly updated with master, there are fewer conflicts.
How does it know if it should wait or use stale value?
Exactly. If every Django used the stale data, memcache would never get updated.
[Edit]: I think if a request comes in and no other "fill" request is being processed, that request gets DB access, whereas requests arriving while the previous one is still filling get the stale data. Once the fill is done, the new like is added, and the DB is updated, the cycle restarts.
Example - Request R1 comes in and no other requests are doing the "fill" process, so memcache allows this request to hit the DB and do the fill. Meanwhile, if R2, R3, ..., R100 come in, memcache says there is already a fill in progress, so take this stale value or wait until the "fill" is done, after which you are treated like R1 and get to query the data.
Anyone who didn't get this, feel free to comment, and I'll try a different way to explain it.
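A toy sketch of that lease behavior: the first miss gets a fill token and goes to the DB; everyone else is handed the (possibly stale, possibly absent) cached value with no token, so only one request refills. Real memcached leases use opaque tokens and TTLs, and also support the "wait" option — the class and method names here are mine, not Facebook's implementation.

```python
import threading

class LeasingCache:
    """Toy memcache-lease: the first miss on a key gets a fill token;
    later readers are handed the stale value instead of stampeding the DB."""
    def __init__(self):
        self._lock = threading.Lock()
        self._values = {}    # key -> possibly-stale value
        self._filling = set()

    def get(self, key):
        """Return (value, token). token is non-None only for the single
        caller that should refill this key from the database."""
        with self._lock:
            if key in self._filling:
                return self._values.get(key), None   # serve stale, no DB trip
            self._filling.add(key)
            return self._values.get(key), key        # this caller fills

    def fill(self, token, value):
        with self._lock:
            self._values[token] = value
            self._filling.discard(token)

cache = LeasingCache()
_, token = cache.get("likes:42")   # R1: miss, gets the fill token
stale, t2 = cache.get("likes:42")  # R2..R100: no token, served stale value
cache.fill(token, 17)              # R1 finishes the DB query and fills
value, _ = cache.get("likes:42")   # subsequent readers see the fresh value
```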
"Don't count the servers, make the servers count"
That's easy when you have a multimillion-dollar contract with a cloud computing provider (and/or own your own bare-metal servers).
@@jimmyadaro I think what it meant was: don't say "we have 10k servers, so the load will get handled" — make sure every server is running at full efficiency.
@@gsb22 Sure, that makes sense, but still, they are capable of pay per really-high-scale servers.
this is an awesome talk!!!
This is interesting
20k servers updated in 10 minutes.
I need another talk about just that
Bookmark: 12:00
Maybe I'm crazy, but their git flow is the most stressful thing I've ever seen
Use Rust :)
36:00
In Instagram, Requests = Djangos? @15:02