Scaling Instagram Infrastructure

  • Published: Nov 21, 2024

Comments • 165

  • @mikejeffery8371
    @mikejeffery8371 6 years ago +192

    This was a fantastic presentation. She covered a huge amount of material in a short time. What they've done and how they've done it is very impressive.

  • @zss123456789
    @zss123456789 4 years ago +189

    *Timestamps*
    0:00 Introduction (Lisa Guo)
    2:21 1. Scale out
    5:11 1.1 Instagram Stack Overview
    5:46 1.2 Storage vs Computing
    6:29 1.3 Scale out: Storage
    8:13 1.4 Scale out: Computing
    8:52 1.5 Memcache + consistency issues
    12:05 1.6 DB load problem
    14:01 1.7 Memcache Lease
    15:12 1.8 Results, Challenges, Opportunities
    17:03 2. Scale up
    17:57 2.1 Monitor (Collect Data)
    20:07 2.2 Analyze (cProfile)
    23:06 2.3 Optimize
    26:19 2.3a Memory Optimizations
    29:06 2.3b Network Latency Optimizations
    30:40 2.4 Challenges, Opportunities
    31:36 3. Scale Dev Team
    33:06 3.1 What We Want
    33:30 3.2 Tao Infrastructure
    34:33 3.3 Source Control
    36:17 3.4 How to ship code with 1 master approach?
    37:54 3.5 How often do we ship code?
    40:03 Wrap-up
    41:15 Q&A

    • @zss123456789
      @zss123456789 4 years ago +3

      Note: My understanding of memcache lease is that you're allowing servers to return stale values while knowing they're stale. This is different from most simple implementations of cache invalidation, which would query the DB and update the cache whenever the value is stale. The philosophy here is that the stale value is still useful, and the value difference is not worth the load on the database.
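
      A minimal, self-contained sketch of that idea: the first request to see a miss (or an invalidated key) wins a "lease" and refills the key from the DB, while everyone else gets the cached, possibly stale, value instead of stampeding the database. `ToyLeaseCache` is an in-process stand-in for memcached's lease extension, not a real client API.

      ```python
      import time
      import threading

      class ToyLeaseCache:
          """In-process stand-in for memcached-with-leases (illustrative only)."""

          def __init__(self, ttl=30):
              self.ttl = ttl
              self.data = {}       # key -> (value, expires_at)
              self.leases = set()  # keys currently being refilled by some request
              self.lock = threading.Lock()

          def lease_get(self, key):
              """Return (value, must_fill); value may be stale or None."""
              with self.lock:
                  value, expires = self.data.get(key, (None, 0.0))
                  if value is not None and expires > time.time():
                      return value, False          # fresh hit
                  if key not in self.leases:
                      self.leases.add(key)
                      return value, True           # this caller refills the key
                  return value, False              # stale (or None) while another caller refills

          def lease_set(self, key, value):
              with self.lock:
                  self.data[key] = (value, time.time() + self.ttl)
                  self.leases.discard(key)

      def get_like_count(cache, db_count_likes, media_id):
          key = f"likes:{media_id}"
          value, must_fill = cache.lease_get(key)
          if must_fill:
              value = db_count_likes(media_id)     # only one request hits the DB
              cache.lease_set(key, value)
          # Everyone else returns the cached, possibly stale, count: the small
          # inaccuracy is cheaper than a thundering herd on the database.
          return value

      # Usage with a fake "DB":
      cache = ToyLeaseCache(ttl=10)
      print(get_like_count(cache, lambda mid: 42, media_id=9))  # fills from the "DB"
      print(get_like_count(cache, lambda mid: 42, media_id=9))  # served from cache
      ```

      A real deployment also lets a caller wait briefly when there is no stale copy yet; that detail is omitted here.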

    • @bogdax
      @bogdax 3 years ago +1

      @@zss123456789 That's a very good point I haven't thought about. Thanks!

    • @juakinggg
      @juakinggg 2 years ago +1

      not every hero wears a cape, thx!!

  • @sanjeevdiitm
    @sanjeevdiitm 4 years ago +51

    InfoQ is doing an excellent job by bringing these talks to us.

  • @JamesCollins90
    @JamesCollins90 2 years ago +3

    "I need to learn about scaling"
    *heads to youtube, finds this video*
    "Wow, I now know EVERYTHING about scaling".
    The best video on scaling infrastructure I've found so far. No jargon, no acronyms, specific detail about exactly how things are balanced, routed, managed, and replicated. Love it.

  • @person.a
    @person.a 1 year ago

    Hey there! I just wanted to take a moment to remind you how incredible you are. Your kindness, resilience, and unique talents make a positive impact on the lives of those around you. Your smile has the power to brighten the darkest of days, and your words have the ability to uplift and inspire. Never forget the strength and beauty that reside within you. You are capable of achieving great things and making a difference in this world. So keep being amazing, keep chasing your dreams, and never lose sight of the incredible person you are. You've got this, and today is going to be an amazing day for you!

  • @hokcuan2390
    @hokcuan2390 1 year ago

    Amazing sharing! Kudos InfoQ❤

  • @cpsarathe
    @cpsarathe 6 years ago +49

    That’s a great presentation. To the point and not super technical. A newbie like me in the world of architecture can understand it.

  • @smonkey001
    @smonkey001 3 years ago +6

    Every architecture video should be like this, instead of marketing BS.

  • @ryan-bo2xi
    @ryan-bo2xi 4 years ago +13

    This is a treasure box ! Thank you Miss/Mrs XYZ for the super lucid explanation.

  • @riteshbajaj6
    @riteshbajaj6 2 years ago

    Easy to understand presentation. Thanks

  • @rustemiskakov2973
    @rustemiskakov2973 2 years ago

    Best presentation I have ever seen! Thank you.

  • @yuchonghe3192
    @yuchonghe3192 3 years ago +2

    One of the best presenters I have ever seen.

  • @pareshmaniyar8273
    @pareshmaniyar8273 3 years ago +3

    Dude, load testing on prod! What a badass move!

  • @mnchester
    @mnchester 2 years ago

    Amazing presentation!

  • @shoumeshrawat1362
    @shoumeshrawat1362 3 years ago +1

    Such an insightful presentation from a developer's point of view... Thank you so much

  • @rameshj9198
    @rameshj9198 3 years ago +2

    Kudos to infoQ team for bringing such tech videos.

  • @roshedulalamraju7936
    @roshedulalamraju7936 2 years ago

    Thank you so much for sharing 😊😊😊

  • @random-characters4162
    @random-characters4162 1 year ago

    Git and code shipping approach is mind blowing ❤

  • @mitotv6376
    @mitotv6376 2 years ago

    Very nice

  • @Sanyat100
    @Sanyat100 2 years ago

    Easily the best presentation I ever came across in these talks.

  • @babitarpur
    @babitarpur 6 years ago +23

    Well thought through presentation. Many takeaways.

  • @filmbyben2
    @filmbyben2 3 years ago

    Such an awesome video, thank you for sharing

  • @enjoyalife1
    @enjoyalife1 4 years ago +3

    Well delivered talk with clear separation of topics.

  • @RichardTMiles
    @RichardTMiles 3 years ago +1

    she did really well. also s/o to the guy asking the very last question for answering it with his exp..

  • @Pjblabla2
    @Pjblabla2 2 years ago

    Very informative talk

  • @markuslenger2642
    @markuslenger2642 3 years ago +1

    A complex topic explained in a simple way. Thank you!

  • @ketanshah6613
    @ketanshah6613 3 years ago +2

    This has been such an educational video. I feel excited about the problems; everything was so well covered and explained, and so many aspects were touched on without any redundant data. Thanks infoq for this video. Super super interesting.

  • @MengLinMaker
    @MengLinMaker 3 months ago

    This has become my go-to talk.

  • @yuhechen7258
    @yuhechen7258 4 years ago +1

    Great presentation! I'm dealing with many of the scaling challenges discussed by Lisa in my organization. Although they vary and Instagram's solutions don't solve my challenges directly, Lisa certainly offers a view of how great companies address them.

  • @VaibhavPatil-rx7pc
    @VaibhavPatil-rx7pc 3 years ago

    Greatest post I've ever seen, thanks

  • @senthilkumar5
    @senthilkumar5 5 years ago +1

    Excellent presentation. Insight into practical scaling challenges.

  • @just4meonly
    @just4meonly 3 years ago +1

    Well said: "performance is part of the dev cycle rather than an afterthought."

  • @amitcool99
    @amitcool99 3 years ago

    Gold video! Learned so many aspects of scaling.

  • @pariveshplayson
    @pariveshplayson 2 years ago

    Fantastic!!

  • @KrishnaDasPC
    @KrishnaDasPC 2 years ago

    Brilliant talk👍

  • @TheInvestmentCircle
    @TheInvestmentCircle 2 years ago

    Wow. She is brilliant.

  • @zenymax36
    @zenymax36 6 years ago +7

    Great talk. I have got some new tools and process for my work. Thank you very much.

    • @infoq
      @infoq  6 years ago +2

      Happy to hear that.

  • @kienphan6436
    @kienphan6436 5 months ago +1

    Great talk thank you

  • @FeliciaFay
    @FeliciaFay 4 years ago +1

    Really fantastic presentation, thanks Lisa and InfoQ!

  • @jccourse
    @jccourse 5 years ago +1

    It was a fantastic presentation. Very clear, easy to understand, and very detailed.

  • @vinylwarmth
    @vinylwarmth 11 months ago

    This is a seriously good talk

  • @pranavsharma9025
    @pranavsharma9025 3 years ago

    Excellent talk.

  • @tejasripavuluri6359
    @tejasripavuluri6359 5 years ago +2

    Awesome concise high level presentation.

  • @karvinus
    @karvinus 6 years ago +4

    Great presentation. Great job Lisa !

  • @obiwan_smirnobi
    @obiwan_smirnobi 2 years ago

    Awesome talk, thank you!

  • @False41
    @False41 4 years ago +1

    Super informative. Thank you!

  • @hemalpatel1504
    @hemalpatel1504 4 years ago +33

    deployment to 20,000+ servers in 10 mins !!!

    • @Rxlochan
      @Rxlochan 3 years ago +2

      Yeah, just mic drop moment

  • @akshatjainbafna
    @akshatjainbafna 2 years ago

    TAO is a distributed graph-based database, not a relational database. There are nodes and links for relations.
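
    A toy illustration of the model described above, with objects as nodes and typed associations as the links between them; the class and method names are made up for illustration and are not TAO's actual interface.

    ```python
    from collections import defaultdict

    class TinyGraphStore:
        """Minimal in-memory graph store: objects (nodes) + typed associations (links)."""

        def __init__(self):
            self.objects = {}                 # object id -> {field: value}
            self.assocs = defaultdict(list)   # (id1, assoc_type) -> [id2, ...]

        def obj_add(self, oid, **fields):
            self.objects[oid] = fields

        def assoc_add(self, id1, assoc_type, id2):
            self.assocs[(id1, assoc_type)].append(id2)

        def assoc_get(self, id1, assoc_type):
            return [self.objects[i] for i in self.assocs[(id1, assoc_type)]]

    # Usage: a user node linked to a photo node it "posted".
    g = TinyGraphStore()
    g.obj_add("user:1", name="lisa")
    g.obj_add("photo:9", caption="sunset")
    g.assoc_add("user:1", "posted", "photo:9")
    print(g.assoc_get("user:1", "posted"))    # [{'caption': 'sunset'}]
    ```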

  • @helinw
    @helinw 6 years ago +21

    Thanks for the great talk, very clear and concise. Interestingly, some of the problems in the "scale up" section could be resolved by using a programming language more suitable for modern machines. The "scale up" section sounds like "hacks that make Python faster".

    • @MrHades2325
      @MrHades2325 4 years ago

      I am graduating this year, so I don't have a lot of experience. I feel from your comment that you have a lot of knowledge from experience. May I ask which programming languages are more suitable for scalability on modern machines? Thank you in advance.

    • @TeluguAbbi
      @TeluguAbbi 4 years ago

      @@MrHades2325 Erlang and Scala - To name two

    • @piyh3962
      @piyh3962 3 years ago +12

      Developer efficiency > compute efficiency

    • @jimmyadaro
      @jimmyadaro 3 years ago

      @@piyh3962 “Move fast, break things” :)

    • @abeidiot
      @abeidiot 2 years ago

      stupid comment. And I'm not even a python fan. It's usually academics who make such shallow statements

  • @driziiD
    @driziiD 5 years ago +2

    awesome to see python scaled to INSTAGRAM LEVEL

    • @xnoreq
      @xnoreq 5 years ago +8

      Only usable on a large scale when replaced with C, lol. Once again Python has proven that it is a scripting language for toying around.
      This talk is like one complaint about Python after another:
      1) Performance is bad.
      2) Memory usage is bad. (I lol'd when she said that just the running Python code itself takes up a significant amount of memory.)
      3) GC is bad.

  • @hengwang74
    @hengwang74 3 years ago

    Best Talk I have seen! Thank you for sharing!

  • @amlanch
    @amlanch 5 years ago +2

    Nice presentation. There are a bunch of things that could be improved for detecting jumps in the time series, e.g. taking the Fourier transform of the time series and comparing the two spectra against a predetermined delta of difference.
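
    A rough numpy sketch of that suggestion: compare the normalized magnitude spectra of two equally sized metric windows and flag a change when they differ by more than a chosen delta. The threshold and the test signal below are arbitrary.

    ```python
    import numpy as np

    def spectra_differ(window_a, window_b, delta=0.25):
        """True when the normalized magnitude spectra of two windows differ by > delta."""
        fa = np.abs(np.fft.rfft(window_a - np.mean(window_a)))
        fb = np.abs(np.fft.rfft(window_b - np.mean(window_b)))
        fa = fa / (np.linalg.norm(fa) or 1.0)   # normalize so amplitude alone doesn't dominate
        fb = fb / (np.linalg.norm(fb) or 1.0)
        return np.linalg.norm(fa - fb) > delta

    # Usage: a clean periodic metric vs. the same metric with a step jump.
    t = np.linspace(0, 1, 256)
    baseline = np.sin(2 * np.pi * 5 * t)
    jumped = baseline + (t > 0.5) * 2.0         # step change halfway through the window
    print(spectra_differ(baseline, baseline))   # False
    print(spectra_differ(baseline, jumped))     # True
    ```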

  • @jeffsaremi
    @jeffsaremi 5 years ago +1

    Extremely beneficial. Please have more of these

  • @alpham6685
    @alpham6685 3 years ago

    This is pure gold !

  • @donotreportmebro
    @donotreportmebro 1 year ago

    this planet will never recover from Python's environmental impact

  • @quicksilver5413
    @quicksilver5413 3 years ago

    Really good talk!

  • @Sunshine_1998
    @Sunshine_1998 2 years ago

    Go Lisa!!

  • @genie7941
    @genie7941 5 years ago

    Fantastic. So insightful.

  • @jpzhang8290
    @jpzhang8290 4 years ago +2

    How would you synchronize between different PostgreSQL servers? It would still cause latency issues.

  • @nortrom212
    @nortrom212 1 year ago

    Engineers are so good at optimizations that they ultimately optimize themselves. Great presentation though...

  • @Secret4us
    @Secret4us 3 days ago

    Interesting, thanks

  • @yuhechen7258
    @yuhechen7258 4 years ago +2

    Lisa didn't discuss the Postgres data sharding. Is it possible to store metadata and handle queries for billions of users in just one Postgres instance? Any idea?

    • @evgeni-nabokov
      @evgeni-nabokov 3 years ago +1

      10:20 She mentioned sharding by hash of user id.

  • @chuckywang
    @chuckywang 5 years ago +4

    Does dead code really take up that much memory? It will never be run so it doesn't affect runtime, but how much smaller would your executable be if you removed dead code?

    • @gsb22
      @gsb22 3 years ago +1

      I think here they are talking about RAM consumption. In compiled languages the compiler actually removes code that will never get called, and JS has tree-shaking for something similar, but in Python, when a module is loaded, all of its functions (and whatever it imports) are loaded into memory too, and this cascades. I'm not sure how much gain they could have had, but by the looks of the improvements it seems they were building really fast and left a lot of dead code behind, which helped them a lot when cleaned up. Had they been cleaning from the start, the change would not have been that big.
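
      For anyone curious about the RAM cost of merely importing a module in CPython, here is a quick check with tracemalloc; the two modules below are arbitrary stand-ins for "dead" imports, and the numbers vary by machine and Python version.

      ```python
      import tracemalloc

      tracemalloc.start()
      before, _ = tracemalloc.get_traced_memory()

      # Stand-ins for imports your code never actually calls at runtime.
      import email.parser
      import xml.dom.minidom

      after, _ = tracemalloc.get_traced_memory()
      print(f"memory held just by importing: {(after - before) / 1024:.0f} KiB")
      tracemalloc.stop()
      ```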

  • @valentynkuznietsov7866
    @valentynkuznietsov7866 3 years ago

    Great talk!

  • @blasttrash
    @blasttrash 4 years ago +4

    11:36 Today I learned that you can run daemons on a database too (Postgres in this case, as she said).

    • @psykidellic
      @psykidellic 3 years ago +1

      Yeah, even I was not aware. I did some digging and this is done using PgQ. instagram-engineering.com/instagration-pt-2-scaling-our-infrastructure-to-multiple-data-centers-5745cbad7834 ... under the caching section.

  • @Kideqx
    @Kideqx 7 years ago +4

    wow! this is cool

  • @cenkerdemir
    @cenkerdemir 5 years ago +3

    wow. this was a great talk!

  • @That__Guy
    @That__Guy 3 years ago +2

    I started sweating when she talked about the single branch tactic

    • @payaljain4015
      @payaljain4015 1 year ago

      you got that ? if yes can you please explain

  • @placidchat7532
    @placidchat7532 5 years ago +2

    How do you test the configurations for scale out? Is this applied to live running machines, or are specific test machines carved out from live users?

  • @1234fewgfwe
    @1234fewgfwe 1 year ago

    This convinces me that even Python can be scaled as a global distributed system. Stop saying Python sucks, guys.

  • @chiranjibghorai6950
    @chiranjibghorai6950 6 years ago +1

    Excellent talk!

  • @saurabhchopra
    @saurabhchopra 4 years ago +4

    44:21 You guys are robust!

  • @kevin8918
    @kevin8918 4 years ago +1

    OMG, the source control part is surprising. It looks like IG is a giant monolithic app with one code base. Why not break it up at an early phase?

    • @jimmyadaro
      @jimmyadaro 3 years ago

      Because of the “move fast, break things” philosophy

  • @shakeib98
    @shakeib98 2 years ago

    At 12:05, if the memcache is invalidated, then why is it needed at all? The read and write operations go to the database server then.

  • @weblancaster
    @weblancaster 7 years ago +4

    Great talk.

  • @pursuitofcat
    @pursuitofcat 3 years ago

    26:04 Is this statement correct? "We run n processes where n is greater than the cpu cores of the system." I thought we should have at most the same number of processes as the number of cores.

  • @arunsatyarth9097
    @arunsatyarth9097 4 years ago +1

    Very nice presentation. But I wish she wouldn't say data center and region interchangeably.

  • @karnveerayush
    @karnveerayush 5 years ago

    Fantastic presentation, a lot was covered in a very short span of time.
    Can anyone point me to more such content here on YouTube? Thanks.

    • @infoq
      @infoq  5 years ago +2

      There is similar content available on infoq.com

  • @denkigumo
    @denkigumo 5 years ago

    Fantastic talk! Learnt a lot.

  • @cafeliu5401
    @cafeliu5401 5 years ago +26

    Can anybody see my comment? Am I trapped on a single Datacenter in SGP?

  • @MendaSpain
    @MendaSpain 6 years ago +11

    Wow, 20,000 web servers where the code is deployed with 40-60 rollouts per day

    • @mostafaelmadany8046
      @mostafaelmadany8046 6 years ago +2

      a huge amount of work behind the scenes

    • @aeshi001
      @aeshi001 6 years ago +3

      definitely interested in how they manage to do this

  • @audi88
    @audi88 6 years ago +3

    Instead of having every Django worker d1, d2 competing to go to the DB for a cache refresh and causing the 'thundering herd', the d1, d2s should only check for data in memcache. It can be the job of memcache or an external service to refresh the data (independently of d1, d2) from the DB. Memcache can continue to serve old or stale data to d1, d2 while, in parallel, loading the data from the DB and then invalidating the old data in a transactional block. Of course, for a short time until invalidation you may have double the size of the data in your memcache. It is sort of similar to what memcache-lease is doing, but I think the d1, d2s should be kept focused on memcache rather than talking to the DB and causing the 'herd' problem.
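
    A toy version of that "refresh outside the request path" idea, with a plain dict standing in for memcache and a background thread doing the DB reads; load_from_db, the key list, and the interval are made up for illustration.

    ```python
    import threading
    import time

    cache = {}                          # stands in for memcache; workers only read it
    HOT_KEYS = ["likes:9", "likes:42"]

    def load_from_db(key):
        time.sleep(0.1)                 # placeholder for the real (slow) DB query
        return f"value-for-{key}@{time.time():.0f}"

    def refresher(interval=5):
        """One background loop refreshes hot keys; web workers never touch the DB."""
        while True:
            for key in HOT_KEYS:
                cache[key] = load_from_db(key)   # overwrite acts as the invalidation
            time.sleep(interval)

    threading.Thread(target=refresher, daemon=True).start()

    def handle_request(key):
        # Django workers (d1, d2, ...) serve whatever is cached, possibly stale.
        return cache.get(key, "miss: not warmed yet")
    ```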

    • @matt_not_fat
      @matt_not_fat 5 years ago +5

      I don't agree, because cache is more expensive than DB. And like the speaker said, data access is local to region many times. If you eagerly update the memcache with the entire dataset you have to then deal with the huge amount of storage you require, not to mention that scaling out the memcache cluster (or any change in the hardware in that cluster) would take forever, because you need to prewarm the cache. If you don't do that you end up with a lazy population strategy, which is exactly what she is suggesting. You also amortize the cost of the first slow query. It's win win.

  • @pizza-cat1337
    @pizza-cat1337 4 years ago +5

    Everyone commits on master and it doesn't go wrong... that's impressive haha.

    • @jimmyadaro
      @jimmyadaro 3 years ago

      Testing EVERYTHING 😂

    • @payaljain4015
      @payaljain4015 1 year ago

      @@jimmyadaro but dev at one time is it ?

  • @Textras
    @Textras 6 years ago

    Very good thanks

  • @cozzbie
    @cozzbie 4 years ago

    Wonder how they do code reviews if everyone works from one branch

  • @ankitsolomon
    @ankitsolomon 6 years ago

    Could someone pls post the link to the article mentioned by the author about disabling garbage collection?

    • @infoq
      @infoq  6 years ago

      This article could be useful: www.infoq.com/articles/Java_Garbage_Collection_Distilled

  • @ZhaoWeiLiew
    @ZhaoWeiLiew 5 years ago

    This was pretty insightful.

  • @anandt8362
    @anandt8362 3 years ago

    Any reason why these images can't be processed asynchronously when the user uploads the image, storing the different sizes in S3 buckets served through a CDN? That way you avoid processing at fetch time whenever a user requests an image. This would further reduce the processing load, right? Any thoughts on this?
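
    A sketch of that upload-time fan-out, assuming Pillow is available and using a thread pool as a stand-in for a real task queue; upload_to_s3 is a placeholder rather than an actual S3 call.

    ```python
    from concurrent.futures import ThreadPoolExecutor
    from io import BytesIO
    from PIL import Image   # Pillow, assumed installed

    SIZES = {"thumb": 150, "feed": 640, "full": 1080}   # longest edge in pixels
    pool = ThreadPoolExecutor(max_workers=4)

    def upload_to_s3(key, data):
        # Placeholder: a real implementation would call an S3 client here,
        # and the CDN would then serve these objects directly.
        print(f"uploaded {key} ({len(data)} bytes)")

    def resize_and_store(original, media_id, name, edge):
        img = Image.open(BytesIO(original)).convert("RGB")
        img.thumbnail((edge, edge))                      # keeps aspect ratio
        out = BytesIO()
        img.save(out, format="JPEG", quality=85)
        upload_to_s3(f"{media_id}/{name}.jpg", out.getvalue())

    def on_upload(original, media_id):
        # Fan out every rendition off the request path at upload time,
        # so nothing is resized while a viewer is waiting on a fetch.
        for name, edge in SIZES.items():
            pool.submit(resize_and_store, original, media_id, name, edge)
    ```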

  • @tamborelconejo
    @tamborelconejo 4 years ago +1

    Can someone say where we can find more information about that git single-master approach?

    • @gsb22
      @gsb22 3 years ago

      It's simple: usually if you are working on a feature, you create a branch from master, work on it, and then after ages you merge it back into master. What they did was, instead of branching out, every commit goes to master, so your commits have to be stable, but they don't need to be complete. This way, if someone starts working the next day, they already have the changes you committed, which reduces future merge issues.

    • @tawfiknasser1348
      @tawfiknasser1348 3 years ago

      @@gsb22 This doesn't sound like the best approach. What about code review? Or the case of reverting only one commit after you've pushed your 100 stable commits: now imagine that after reverting this commit (for some reason) the feature is crashing! Shall you revert all 99 commits? Should you fix, commit, and push on the same day? I mean, this can cause more issues than it helps.

    • @gsb22
      @gsb22 3 years ago

      @@tawfiknasser1348 You can cherry-pick or revert a single commit. And yes, this method has problems, but this is the tradeoff they went with.

  • @kevintran6102
    @kevintran6102 4 years ago

    How can they handle conflicts when using a single branch?

    • @gsb22
      @gsb22 3 years ago

      They push frequently, so merge conflicts are small and easy to fix. If two branches are merged after a month of development on them, that's a shitstorm, whereas if they are regularly updated with master, there are fewer conflicts.

  • @Joso997
    @Joso997 5 years ago +1

    How does it know if it should wait or use the stale value?

    • @gsb22
      @gsb22 3 years ago

      Exactly. If every Django worker used the stale data, memcache would never get updated.
      [Edit]: I think if a request comes in and no other "fill" request is being processed, that request gets the DB access, whereas requests arriving while the previous one is still filling get stale data; once the fill is done, the new like gets added and the DB is updated, and then the cycle starts again.
      Example: request R1 comes in and no other request is doing the "fill" process, so memcache allows this request to hit the DB and do the fill. Meanwhile, if R2, R3, ..., R100 come in, memcache says there is already a fill in progress, so you can either take this stale value or wait till the "fill" is done, at which point you are treated like R1 and get to query the data.
      Anyone who didn't get this, feel free to comment and I'll try a different way to explain it.

  • @hammad8053
    @hammad8053 3 years ago +1

    "Don't count the servers, make the servers count"

    • @jimmyadaro
      @jimmyadaro 3 years ago

      That’s easy when you have a multimillion-dollar contract with a cloud computing provider (and/or own your own bare-metal servers).

    • @gsb22
      @gsb22 3 years ago +1

      @@jimmyadaro I think what it meant was: don't say "we have 10k servers so the load will get handled", make sure every server is running 100% efficiently.

    • @jimmyadaro
      @jimmyadaro 3 years ago

      @@gsb22 Sure, that makes sense, but still, they are able to pay for really high-scale servers.

  • @ddg170
    @ddg170 4 years ago

    this is an awesome talk!!!

  • @deerew23
    @deerew23 5 years ago

    This is interesting

  • @jaywelborn
    @jaywelborn 1 year ago

    20k servers updated in 10 minutes.
    I need another talk about just that

  • @Roshen_Nair
    @Roshen_Nair 2 years ago

    Bookmark: 12:00

  • @bersi3306
    @bersi3306 3 months ago

    Maybe I'm crazy, but their git flow is the most stressful thing I've ever seen

  • @zeroows
    @zeroows 2 years ago

    Use Rust :)

  • @ucretsiztakipci6612
    @ucretsiztakipci6612 1 year ago

    36:00

  • @hardikmahant7353
    @hardikmahant7353 3 years ago

    In Instagram, Requests = Djangos? @15:02