What runs ChatGPT? Inside Microsoft's AI supercomputer in 2024 | Featuring Mark Russinovich

  • Published: 1 Aug 2024
  • Microsoft has built the world’s largest cloud-based AI supercomputer, already roughly 30 times bigger than it was just 6 months ago, paving the way for a future with agentic systems.
    For example, its AI infrastructure is capable of training and inferencing the most sophisticated large language models like GPT-4o at massive scale on Azure. In parallel, Microsoft is also developing some of the most compact small language models with Phi-3, capable of running offline on your mobile phone.
    Watch Azure CTO and Microsoft Technical Fellow Mark Russinovich demonstrate this hands-on and go into the mechanics of how Microsoft is able to optimize and deliver performance with its AI infrastructure to run AI workloads of any size efficiently on a global scale.
    This includes a look at: how it designs its AI systems with a modular, scalable approach to run a diverse set of hardware, including the latest GPUs from industry leaders as well as Microsoft’s own silicon innovations; its work to develop a common interoperability layer for GPUs and AI accelerators; and its work on a state-of-the-art AI-optimized hardware and software architecture that runs its own commercial services like Microsoft Copilot and more.
    ► QUICK LINKS:
    00:00 - AI Supercomputer
    01:51 - Azure optimized for inference
    02:41 - Small Language Models (SLMs)
    03:31 - Phi-3 family of SLMs
    05:03 - How to choose between SLM & LLM
    06:04 - Large Language Models (LLMs)
    07:47 - Our work with Maia
    08:52 - Liquid cooled system for AI workloads
    09:48 - Sustainability commitments
    10:15 - Move between GPUs without rewriting code or building custom kernels
    11:22 - Run the same underlying models and code on Maia silicon
    12:30 - Swap LLMs or specialized models with others
    13:38 - Fine-tune an LLM
    14:15 - Wrap up
    ► Unfamiliar with Microsoft Mechanics?
    As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
    • Subscribe to our YouTube channel: / microsoftmechanicsseries
    • Talk with other IT Pros, join us on the Microsoft Tech Community: techcommunity.microsoft.com/t...
    • Watch or listen from anywhere, subscribe to our podcast: microsoftmechanics.libsyn.com...
    ► Keep getting this insider knowledge, join us on social:
    • Follow us on Twitter: / msftmechanics
    • Share knowledge on LinkedIn: / microsoft-mechanics
    • Enjoy us on Instagram: / msftmechanics
    • Loosen up with us on TikTok: / msftmechanics
    GPT-4o is the large language model used behind Apple Intelligence and updates to Siri.
    #AI #AISupercomputer #LLM #GPT
  • Science

Comments • 76

  • @alexpearson415
    @alexpearson415 2 months ago +28

    This is my favorite video that Microsoft makes. So cool

    • @MSFTMechanics
      @MSFTMechanics  2 months ago +3

      Thank you so much! Appreciate your taking the time to comment and glad you liked it.

  • @ThaLiquidEdit
    @ThaLiquidEdit 2 months ago +21

    Mark Russinovich is a legend!

  • @blitzio
    @blitzio 2 months ago +15

    Awesome to see this, especially the hardware, networking and data center breakdown and info.

  • @ds920
    @ds920 2 months ago +13

    That’s why I chose to buy their stock; they know what it means to actually work. It was a long way for me from the early ’90s, when I, a hardcore Unix user, would only mention Windows alongside the words “must die”, to spending my spare money on their stock and finally admitting what this company has really been doing all this time. Thank you guys for keeping that spirit!

    • @Gersberms
      @Gersberms 1 month ago

      They do awesome work, VS Code is basically the best program I've ever used. It's just such a shame Windows 11 is garbage all over again. I just moved to Ubuntu at home and couldn't be happier with it.

  • @Breaking_Bold
    @Breaking_Bold 1 month ago +3

    Very, very informative… sent it to my kid who is in college to watch and keep watching till they understand every word!!!

  • @Daniel-es9dq
    @Daniel-es9dq 1 month ago +3

    I’m so glad people much smarter than I are working on this.

  • @user-gg8we2ot4b
    @user-gg8we2ot4b 2 months ago +5

    Interesting architecture.

  • @drivenbycuriosity
    @drivenbycuriosity 2 months ago +5

    Most fascinating part for me is the Multi-LORA.

    • @MSFTMechanics
      @MSFTMechanics  2 months ago +2

      It is. It's a little like differencing disks with the additional state/data.
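
The differencing-disk analogy above can be sketched in code: one shared base weight matrix stays loaded, and each adapter stores only a small low-rank delta (B @ A) that is applied per request. This is an illustrative pure-Python toy with made-up 2x2 matrices, not Microsoft's serving stack:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def madd(A, B):
    """Add two same-shaped matrices elementwise."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Shared base weight matrix, like a parent disk image.
W_base = [[1.0, 0.0], [0.0, 1.0]]

# Each adapter stores only a rank-1 delta (B @ A), like a differencing
# disk that records changes against the parent instead of a full copy.
adapters = {
    "legal":   ([[1.0], [0.0]], [[0.0, 0.5]]),   # B is 2x1, A is 1x2
    "support": ([[0.0], [1.0]], [[0.5, 0.0]]),
}

def forward(x, name):
    """Run x through the shared base plus the requested adapter's delta."""
    B, A = adapters[name]
    W = madd(W_base, matmul(B, A))  # W_base + B @ A
    return matmul(x, W)

x = [[1.0, 1.0]]
print(forward(x, "legal"))    # [[1.0, 1.5]]
print(forward(x, "support"))  # [[1.5, 1.0]]
```

With large models the savings are dramatic: the base weights are loaded once, while each adapter adds only a few megabytes of delta, so many skills can be served from one GPU.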

  • @BigEightiesNewWave
    @BigEightiesNewWave 2 months ago +9

    Man, Mark is God-status at Microsoft

  • @user-b39z1
    @user-b39z1 1 month ago +3

    With Great Power comes Great Capabilities...
    Microsoft 📲💻🖥🎮

  • @SuperRider-RS
    @SuperRider-RS 2 months ago +4

    Great session, Thank you

    • @MSFTMechanics
      @MSFTMechanics  2 months ago +1

      Appreciate the compliment, thank you!

  • @ABLwAmazing
    @ABLwAmazing 2 months ago +3

    Ah, the Sysinternals guy. I owe half my career to this guy. Thx.

  • @ShpanMan
    @ShpanMan 2 months ago +2

    Underrated video, a lot of cool useful details!

    • @MSFTMechanics
      @MSFTMechanics  2 months ago +1

      Thank you! Happy that it's useful - and it keeps evolving quickly.

  • @LouSpironello
    @LouSpironello 2 months ago +5

    Great info about the architecture! Thank you.

    • @MSFTMechanics
      @MSFTMechanics  2 months ago +1

      Thank you! Glad it helped on the architecture front.

  • @IshaqIbrahim3
    @IshaqIbrahim3 2 months ago +4

    Timeline: 9:00 What happens to the heat energy extracted during cooling? Does it get used to generate electricity to power other devices, or to supply energy to some of the cooling fans, or is it not used for anything?

    • @jamieknight326
      @jamieknight326 1 month ago

      It’s not reused. The heat is distributed across millions of litres of water and it can’t be concentrated back into a single spot. Sadly, we can’t take 2 litres of 50°C water and turn it into 1 litre of 100°C water.
      The water is heated, but not enough to be very useful for much beyond heating offices or nearby buildings.
      I’m curious whether someone will use the heat for some sort of low-energy industrial process like drying cement.

    • @IshaqIbrahim3
      @IshaqIbrahim3 1 month ago

      @@jamieknight326 like keeping the tea, coffee, eggs etc. warm. 🤣

  • @QuantumXdeveloper
    @QuantumXdeveloper 2 months ago +1

    Great session, Mark is as always the best❤

    • @MSFTMechanics
      @MSFTMechanics  2 months ago +1

      Thanks so much! Appreciate your taking the time to comment.

  • @liberty-matrix
    @liberty-matrix 1 month ago +4

    "it's funny you know all these AI 'weights'. they're just basically numbers in a comma separated value file and that's our digital God, a CSV file." ~Elon Musk. 12/2023

  • @GhostyDog
    @GhostyDog 1 month ago +1

    What’s that again..? You’re adding the capacity of the third most powerful supercomputer every month! 😮

  • @sachoslks
    @sachoslks 2 months ago +2

    5 times the Azure supercomputer deployed each month, that’s insane!!! What does that mean for training next-gen frontier models? 30x November 2023: does that mean you can train 30x longer, 30x bigger, or 30x faster? Will this continue through the end of the year, reaching almost 65x compute in one year?

    • @MSFTMechanics
      @MSFTMechanics  2 months ago +2

      Good questions. We have deployed 30x total, or on average 5 additional instances per month, of the November 2023 Top 500 submission with 14k networked GPUs, 1.1M cores and 561 petaflops. These will keep getting bigger, and more instances will be provisioned in the future. And now there are more options for GPUs and AI accelerators too, plus the Nvidia H200 and Blackwell architectures are coming soon with more speed, power and efficiency.
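
Taking the reply above at face value, the aggregate scale is simple arithmetic (a back-of-envelope sketch, not an official capacity figure):

```python
# 30 instances of the November 2023 Top 500 submission,
# each with 14k networked GPUs and 561 petaflops.
instances = 30
total_gpus = instances * 14_000
total_exaflops = instances * 561 / 1000  # petaflops -> exaflops
print(f"{total_gpus:,} GPUs, ~{total_exaflops:.1f} exaflops")  # 420,000 GPUs, ~16.8 exaflops
```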

  • @kylev.8248
    @kylev.8248 2 months ago +3

    This is awesome

    • @MSFTMechanics
      @MSFTMechanics  2 months ago +2

      Glad you liked it and thank you!

  • @RohanKumar-vx5sb
    @RohanKumar-vx5sb 2 months ago +1

    cool stuff!

  • @lifeslooker
    @lifeslooker 2 months ago +1

    What would it take to shrink a 175B model to run on a mobile phone? What are the limitations? The language used in the model? Can compression be used, or could a language be developed that doesn’t take up much space?

    • @MSFTMechanics
      @MSFTMechanics  2 months ago +4

      The closest correlate of size is the parameter count: Phi-3-mini has 3.8bn parameters and is roughly a 2.2GB file to run locally on the phone, as demonstrated by Mark in the video. There are things the larger models will do in terms of reasoning and built-in knowledge, as Mark said. One example we actually hit while planning this show: the slightly larger Phi-3 models could phrase the cookie recipe in the writing style of Yoda from Star Wars, but because mini didn’t have the pop-culture references in its training set, we made the tone sarcasm instead.

    • @lifeslooker
      @lifeslooker 2 months ago +2

      @@MSFTMechanics Funny, I’m watching Star Wars Episode 1 right now on Apple TV+ 😂😂😂😂
      Sarcasm is very rich in style; it would be interesting to see how it is done in different languages, say Italian or French.
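
The parameter-count-to-file-size link in the reply above checks out as a back-of-envelope calculation if the on-device build is quantized to roughly 4 bits per parameter (an assumption; the video doesn't state the precision):

```python
params = 3.8e9        # Phi-3-mini parameter count
bits_per_param = 4    # assumed int4 on-device quantization
size_gb = params * bits_per_param / 8 / 1e9  # bits -> bytes -> GB
print(f"~{size_gb:.1f} GB of raw weights")   # ~1.9 GB
# Tokenizer, embeddings and packaging overhead push the shipped
# file toward the ~2.2 GB quoted above.
```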

  • @Rafael555888
    @Rafael555888 1 month ago

    So they can now run the same LLM on different GPUs (Nvidia vs Maia vs AMD)?

  • @MDFnyny
    @MDFnyny 2 months ago +2

    Thanks, quite impressive!

    • @MSFTMechanics
      @MSFTMechanics  2 months ago +1

      Thanks for watching and commenting!

  • @nestorreveron
    @nestorreveron 2 months ago +2

    Thanks.

  • @bfg5244
    @bfg5244 2 months ago +1

    that's inspiring

    • @MSFTMechanics
      @MSFTMechanics  2 months ago +1

      Glad you liked it. Thanks for taking the time to comment.

  • @Jj-du8ls
    @Jj-du8ls 2 months ago +4

    5 times the Azure supercomputer deployed each month? Is that a typo?

    • @MSFTMechanics
      @MSFTMechanics  2 months ago +8

      It's not. We just announced that 30x have been added since November 2023.

    • @Hashtag-Hashtagcucu
      @Hashtag-Hashtagcucu 2 months ago +1

      What he isn’t saying is how long this rate will keep up

    • @guruware8612
      @guruware8612 2 months ago +2

      @@Hashtag-Hashtagcucu Forever, as long as there are people who think it's a great idea to chat with a machine or have a robot dog.
      Insanity is the new norm.

    • @coreystrait513
      @coreystrait513 2 months ago +2

      @@MSFTMechanics Stargate and quantum computing, hurry up

  • @jamieknight326
    @jamieknight326 1 month ago

    It’s amazing… an impressive budget for buying chips from NVIDIA. But is it worth it? Curious to see whether AI will take off or not.

  • @duran5533
    @duran5533 1 month ago +1

    Did I understand correctly: "Today, 6 months later, we deploy the equivalent of 5 of those supercomputers every month"!?!?

    • @MSFTMechanics
      @MSFTMechanics  1 month ago +2

      That's right. 30+ instances have been built since November 2023

  • @phobosmoon4643
    @phobosmoon4643 2 months ago +2

    Great video. I have a maybe-annoying question: how can we know that cloud AI services are selling us what they say they are? For example, context length could easily be fudged.

    • @phobosmoon4643
      @phobosmoon4643 2 months ago +1

      @@test-zg4hv Yeah, I'm asking how you test it. Is it kind of like an error-checking algorithm?

    • @MSFTMechanics
      @MSFTMechanics  2 months ago +2

      You can stipulate that in code or using Azure AI Studio, and you can test it. We cover that to some extent in this episode: youtube.com/watch?v=3hZorLy6JiA

  • @kyber.octopus
    @kyber.octopus 2 months ago +1

    Nice

  • @jeffreyrh
    @jeffreyrh 2 months ago +2

    Wouldn't it be possible to create a distributed computing system like SETI or that protein-folding project, and use that computing power to train AI systems? Those projects used people's personal computers when they had idle time.

    • @Zreknarf
      @Zreknarf 2 months ago

      It's called a botnet, and yeah, you can do that. These are purpose-built AI chips though; nobody has those at home because they aren't for sale yet.

    • @Zreknarf
      @Zreknarf 2 months ago +1

      Also, from the video: inferencing requires high-bandwidth memory more than raw compute, and a distributed setup would suffer greatly from latency.

  • @synthwave7
    @synthwave7 2 months ago +2

    Glad Microsoft is making sure there is co-existence between all hardware manufacturers, otherwise AI hardware will become chaos.

  • @Crunch_dGH
    @Crunch_dGH 1 month ago +1

    I prefer the much more reliable/resilient iOS. Just replacing my trusty Air with a 2TB M3 iPad Pro.

  • @sceptic33
    @sceptic33 1 month ago +1

    On the subject of cooling and power requirements: I've been saying for ages that the "waste heat" is only waste if you don't use it. Most electricity generators work by using heat to drive turbines. Instead of using burning fuel or nuclear reactions to create heat, we should use the heat generated by compute as the source for generating electricity: pump and compress the heat from the cooling fluid into a reservoir, which a second heat exchanger uses to vaporise a second working fluid that drives the turbines, turning generators that feed electricity back to the GPU clusters. Recycle the power endlessly.

    • @jamieknight326
      @jamieknight326 1 month ago +1

      The physics problem is around concentrating energy/heat into one spot.
      While the total heat energy is in the MW range, it’s distributed across millions of litres of fluid (water/air) which is lightly heated and can’t be concentrated into a single place. Thermodynamics doesn’t allow heat to be added together between working fluids: you can’t use 2 litres of 50°C water to create 1 litre of 100°C water.
      In a nutshell, we can’t take the distributed heat and convert it into the high-pressure, high-volume steam needed to run an electricity turbine.
      The heat may be useful for an industrial process like drying cement, but that ends up being uneconomical, as power from the grid is much cheaper than recovered heat.
      I wish this process worked. It would be amazing, but the physics doesn’t work out. :(

    • @sceptic33
      @sceptic33 1 month ago

      @@jamieknight326 People always say it can't be done. I'm not convinced. Low-grade heat is raised when compressed by a heat pump. Using a multi-stage setup, where a chain of pumps uses the increased temperature from the previous pump as the base to concentrate further, I see no reason why a final reservoir of compressed heat shouldn't be hot enough to drive a turbine and generate electricity. You can generate electricity with a Stirling engine and a cup of tea. A data centre converting 100MW of electricity into 99.9MW of heat should be able to provide 99.9MW of heat to a heat engine.
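
The disagreement in this thread comes down to the Carnot bound, the hard ceiling on any heat engine's efficiency set by the temperature difference. With assumed figures of roughly 50°C coolant and 25°C ambient:

```python
# Carnot limit: eta_max = 1 - T_cold / T_hot, temperatures in kelvin.
T_hot = 50 + 273.15   # assumed coolant temperature
T_cold = 25 + 273.15  # assumed ambient temperature
eta_max = 1 - T_cold / T_hot
print(f"max possible efficiency ~{eta_max:.1%}")  # ~7.7%
```

A heat pump can raise the temperature further, but only by spending extra work, so a chain of pumps cannot net-recycle the power: the electricity recovered is always less than the work spent concentrating the heat.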

  • @Arcticwhir
    @Arcticwhir 2 months ago +1

    13:38 You used the exact same joke a year ago with Mark

    • @MSFTMechanics
      @MSFTMechanics  2 months ago +2

      Yes, that was intentional, because Multi-LoRA would allow Neo to have hundreds or thousands of skills added simultaneously, not just the one like last year.

  • @amg2u
    @amg2u 1 month ago +1

    iPhone?

    • @MSFTMechanics
      @MSFTMechanics  1 month ago

      Yes, iPhone 15 Pro Max in this case.

  • @youturunnyng
    @youturunnyng 1 month ago

    Rubén godoy islas 4:35

  • @Rkcuddles
    @Rkcuddles 2 months ago +2

    This dude AI?

    • @DeployJeremy
      @DeployJeremy 2 months ago +1

      Mark has been trained on at least 175 billion parameters, but he isn't AI 🙂

  • @ArronLorenz
    @ArronLorenz 2 months ago +1

    Solid organic joke.

  • @donelson52
    @donelson52 1 month ago +3

    How much CO2 does this cost? EXACTLY how bad is it now, and EXACTLY HOW will you power this by 2030?

    • @MSFTMechanics
      @MSFTMechanics  1 month ago +3

      Check out the Microsoft sustainability site for details: www.microsoft.com/en-us/corporate-responsibility/sustainability-journey