🎯 Key Takeaways for quick navigation:
00:00 🖥️ Tesla's AI Division has created a supercomputer called Dojo, already operational and growing in power rapidly, set to become a top 5 supercomputer by early 2024.
01:25 💹 Dojo's computing power forecasted to reach over 30 exaflops by Feb 2024, with plans to ramp up to 100 exaflops by Oct 2024.
03:02 💰 Tesla's Dojo, a specialized AI training cluster, equates to a $3 billion supercomputer, offering remarkable AI model training capabilities.
04:00 🚗 Dojo focuses on training Tesla's full self-driving neural network, surpassing standard supercomputer definitions for specialized AI training.
05:38 📸 Dojo processes immense amounts of visual data for AI model training through labeling, aiming to automate a task previously done by humans.
07:01 🧠 Dojo adopts a unique "system on a chip" architecture, like Apple's M1, optimizing efficiency and minimizing power and cooling requirements.
08:10 💼 Dojo operates on tile levels, fusing multiple chips to create unified systems, enhancing efficiency and power in AI training.
10:00 ⚙️ Tesla can add computing power through Dojo at a lower cost, avoiding competition for industry-standard GPUs, potentially leading to a new business model.
11:23 🌐 Future versions of Dojo could be used for general-purpose AI training, enabling Tesla to rent out computing power as a lucrative business model.
12:45 🔄 Renting out excess computing power from Dojo can potentially revolutionize Tesla's profitability, similar to Amazon Web Services.
Made with HARPA AI
You feed a link somewhere and it spits these out?! Please share the secrets of your ways?
Thanks.
In an unprecedented move, Dojo changed its name to Skynet.
Underrated comment
SkynetX to be exact.
@@Tailspin80 Just X :)
Xnet💫
Why Yes lol 😅😆😅😆😅😆😆😅😆😅😆😅
The fundamental unit of the Dojo supercomputer is the D1 chip,[21] designed by a team at Tesla led by ex-AMD CPU designer Ganesh Venkataramanan, including Emil Talpes, Debjit Das Sarma, Douglas Williams, Bill Chang, and Rajiv Kurian.[5]
The D1 chip is manufactured by the Taiwan Semiconductor Manufacturing Company (TSMC) on a 7 nanometer (nm) semiconductor node; it has 50 billion transistors and a large die size of 645 mm² (1.0 square inch).[22]
As an update at Artificial Intelligence (AI) Day in 2022, Tesla announced that Dojo would scale by deploying multiple ExaPODs, in which there would be:[20]
354 computing cores per D1 chip
25 D1 chips per Training Tile (8,850 cores)
6 Training Tiles per System Tray (53,100 cores, along with host interface hardware)
2 System Trays per Cabinet (106,200 cores, 300 D1 chips)
10 Cabinets per ExaPOD (1,062,000 cores, 3,000 D1 chips)
Tesla Dojo architecture overview
According to Venkataramanan, Tesla's senior director of Autopilot hardware, Dojo will have more than an exaflop (a million teraflops) of computing power.[23] For comparison, according to Nvidia, in August 2021, the (pre-Dojo) Tesla AI-training center used 720 nodes, each with eight Nvidia A100 Tensor Core GPUs for 5,760 GPUs in total, providing up to 1.8 exaflops of performance.[24] credit: wiki
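To sanity-check the scaling numbers quoted above, here is a minimal sketch (Python, written for this thread, not taken from the video or Wikipedia) that simply multiplies out the hierarchy:

```python
# Multiply out the ExaPOD hierarchy quoted above to check the core and chip counts.
CORES_PER_D1 = 354
D1_PER_TILE = 25
TILES_PER_TRAY = 6
TRAYS_PER_CABINET = 2
CABINETS_PER_EXAPOD = 10

cores_per_tile = CORES_PER_D1 * D1_PER_TILE                  # 8,850
cores_per_tray = cores_per_tile * TILES_PER_TRAY             # 53,100
cores_per_cabinet = cores_per_tray * TRAYS_PER_CABINET       # 106,200
cores_per_exapod = cores_per_cabinet * CABINETS_PER_EXAPOD   # 1,062,000

d1_per_cabinet = D1_PER_TILE * TILES_PER_TRAY * TRAYS_PER_CABINET  # 300
d1_per_exapod = d1_per_cabinet * CABINETS_PER_EXAPOD               # 3,000

print(f"cores per ExaPOD: {cores_per_exapod:,}")   # 1,062,000
print(f"D1 chips per ExaPOD: {d1_per_exapod:,}")   # 3,000
```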
Elon, with his outrageously audacious visions, attracts the most talented and brilliant people to his companies ❤
Thanks for sharing the actual numbers. Do you know if Tesla's numbers are for reduced precision like the ones used for AI inference (16-bit) or training (32-bit)? Thanks!
The speed of change and "successful change" is going to be staggering....
It should be noted that Tesla is still buying as many Nvidia GPUs as they can get their hands on.
So they had 1.8 exaflops in 2021 and now are building a computer that only has one exaflop?
It’s crazy to see how far ahead Tesla is in the auto industry
Not just auto industry
@@JrbWheaton well said AI too.
@@fredfrond6148 Energy, computing, solar, robotics, mining, the list goes on
@@fredfrond6148and lithium refining.
Yeah, autonomous self driving is working for them... Except there are at least 5 manufacturers that already have Level 3 ... And they scrapped all that work and went from visual to AI powered self driving.... Well said, really... amateurs
Thanks!
Well-presented, with understandable analogies!
@1:55 Too funny. An exaflop is a 1 with 18 zeros behind it... and the video shows 15 zeros. A lot of good info here on Dojo... thanks for the update.
18 zeros would be too small on the display. We don't all have your perfect eyesight. Hehe
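For anyone counting zeros, a tiny arithmetic check (standard SI prefixes only, nothing Tesla-specific):

```python
# SI prefixes for FLOPS, written out so the zero-counting is explicit.
tera = 10**12   # 1 teraflop = 1 followed by 12 zeros
peta = 10**15   # 1 petaflop = 1 followed by 15 zeros (the number shown on screen)
exa  = 10**18   # 1 exaflop  = 1 followed by 18 zeros (the number the narration describes)

assert exa == 1_000 * peta == 1_000_000 * tera
print(len(str(exa)) - 1)   # 18 zeros
```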
Nice pace, good graphics, not too "fanboy", plenty of terminology, and raised a few questions I need to go look up and think about. All around effective RUclips. Well done.
Except for "Artificial Intelligence *traning* cluster" @ 04:12 :/
@@cookasaurus_rex oh man we got an English major in our midst!! I could have watched that a thousand times and not caught it cause that is one superfluous "e" in my estimation and yet we still need to know how to differentiate long and short vowels.
Can you imagine, hundreds of thousands of Teslas are feeding data to this machine every day
That is their main advantage, the limiting factor for AI systems is becoming the amount of training data available.
And still Tesla hasn't done much other than slightly improve FSD, which is still widely ridiculed. Don't even get me started on the bot until it can actually do something useful at a fast pace.
millions
@@codingispower1816 they're still years ahead of everyone. Also, they just switched to AI learning, so the improvements will be big in a short time.
I more than liked this video. It was a wealth of information in less than 15 minutes. 🙂
This
Is out of my mind, amazing ❤❤❤❤❤
That was quite interesting. Thanks.
🌴☀️🌴
Actually, the semiconductor trend for the past few years is moving away from single chip SoC designs to multi-chip packages, which means the SoC is not on a single piece of silicon, but multiple pieces of silicon inside a single “cpu” package. This is what is used in the M1, the chips in the iPhone, and inside AMD and Intel’s latest cutting edge CPUs, etc. Multiple chiplets are placed very close to each other, even stacked one on top of the other inside a “cpu package,” but the SoC is no longer a single piece of silicon in cutting edge products.
The reason this is happening is, of course, economics. The different chips are produced in the process nodes that are most economical. So the I/O hub in an AMD cpu is in one process while the cpu clusters are on cutting edge processes in units of 8 or 16 cores per cluster. Then the cpu package has one or more of these separate cluster chiplets placed around an I/O hub chiplet in the AMD example. In Apple products, the A-series and M1 cpus, separate pieces of silicon for CPU and for memory are stacked inside the CPU package. This is why your M-series computer's system memory can't be upgraded.
Technically they could add additional bus logic to allow external memory for expansion, but that defeats the purpose of being compact.
🎯💯
The reason chiplets work well is also yields: smaller chips mean higher yields per wafer. Large chips can be made useless by one tiny imperfection, whereas with, say, 8 smaller chips covering the same area, that same imperfection only loses one smaller chip, with all the others being fully functional. Interposers are then constructed using very old and reliable techniques to stitch all the chiplets together. Not quite as fast as a single large chip, but considerably cheaper.
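To put rough numbers on that yield argument, here is an illustrative sketch using the classic Poisson yield model; the defect density is an assumed value for illustration, not a TSMC figure, and the "big die" area is just the D1's quoted 645 mm²:

```python
import math

def poisson_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """Fraction of dies expected to be defect-free: Y = exp(-D * A)."""
    return math.exp(-defects_per_mm2 * area_mm2)

DEFECT_DENSITY = 0.001          # defects per mm^2, an assumed illustrative value

big_die = 645.0                 # one large die, roughly D1-sized
small_die = big_die / 8         # eight chiplets covering the same silicon area

print(f"large die yield:     {poisson_yield(big_die, DEFECT_DENSITY):.1%}")    # ~52.5%
print(f"small chiplet yield: {poisson_yield(small_die, DEFECT_DENSITY):.1%}")  # ~92.3%
```

With these assumed numbers roughly half the big dies would be scrapped, while most of the smaller chiplets survive, which is the economic point the comment is making.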
A 'flop' is a floating point operation which is more complicated than a mere computer instruction.
Keep up the great work, Elon & Tesla Team.💯💯
Ready to see the luxury Tesla RVs also, Boss.😉😉
What is a wait if we’ve ever been sursnagged to unforgivable faulty price presumptions👽
Well-presented, with understandable analogies! Thank you for your hard work.
Love your content, thanks for all you do
At 2:11 your big number is missing three more zeros! That number is only 1 quadrillion.
Apart from the inaccuracies and generalizations in this video there were some nice images.
2:11 - that's only 15 zeros, you're 3 zeros short.
If every vehicle on public streets had a "GPS" transmitter giving out data like direction, speed, etc., FSD could take advantage by incorporating this localized data (car to car) to help determine its next action.
In a Futurama episode, when the gang went to the Robot Planet, the robots moved like vehicle traffic but fit between each other at high speeds. Perfect traffic management.
no one wants to put a tracking device in their car ffs, this isn't China
Privacy has left the chat
@@Fastotec9 What privacy are you talking about in this day and age?
@@getsideways7257 trust me, we still have a lot of privacy in this day and age. And although I want technology to improve and would love the sharing of location data and things between cars without intrusive monitoring of people, I would avoid any reduction in privacy.
They will kill us all
Realy a great video! Thanks!
Great description of Dojo.
GREAT VIDEO THANKS FOR EXPLAINING ALL THAT, IT MAKES IT CLEAR ITS A NO BRAINER FANTASTIC 🙂👍
You need to use the tensor core throughput of the A100. Probably even at lower precision (BF16) to have something realistic to compare against
it looks like they have the memory right on the chip to maximize the memory speed
The Dojo compute figure is 8-bit.
The A100 compute figures he uses are 16-bit.
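For context, Nvidia's published dense BF16 tensor throughput for the A100 is 312 TFLOPS per GPU, and at that rating the 5,760-GPU cluster quoted in the excerpt above works out to roughly the 1.8 exaflops Nvidia cited, so the figures being compared are indeed at different precisions. A quick back-of-the-envelope check (the per-GPU figure is Nvidia's spec-sheet number; the rest is just multiplication):

```python
# Back-of-the-envelope check of the 1.8 exaflops figure for the pre-Dojo A100 cluster.
A100_BF16_TFLOPS = 312        # Nvidia's published dense BF16 tensor throughput per A100
NODES = 720
GPUS_PER_NODE = 8

total_gpus = NODES * GPUS_PER_NODE              # 5,760
total_tflops = total_gpus * A100_BF16_TFLOPS    # 1,797,120 TFLOPS
total_exaflops = total_tflops / 1e6             # ~1.8 exaflops

print(f"{total_gpus:,} GPUs -> ~{total_exaflops:.1f} EFLOPS at BF16")
```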
Amazing video for better understanding the implications and functionality of Dojo! Thanks :)
Another great vid. Thanks 👍
@2:10 - You're either missing 3 zeroes, or an exaflop is 15 zeroes.
Dojo is making the matrix!
What’s amazing is that the auto industry is just the beginning. This will be the foundation of advances in gaming, MMO-VR, physics research, simulations, and more.
Thank you for your hard work ❤
super liked the video thank you so much
Fear does not exist in this Dojo!
strike first strike hard
No, sensei!
I take it he is a Mac man. In the old days we called this 'cascading', and we had 27 iMacs connected. No one ever talks about the software needed to use this configuration.
This sounds impressive, but the Hardware is far beyond available Software to run them. They still don't have much to do.
Back then we thought 10 gigaflops was incredible. Working on these things is what I used to do and explains why I garden now.
When you said "money", you showed a paper dollar. That is currency. Gold and silver are money. Money is something of value.
Your BEST video up to now!! Thank you!!😛
This is the beginning of true FSD, and will be an epic win if tesla plays their cards correctly.
which new marketing scam term will it be next? FULL self driving? TRUE FULL self driving? I SWEAR BY GOD THIS IS THE TRUEST AND FULLEST self driving? THIS TIME FOR REAL FULL self driving? I PROMISE NEXT YEAR IT'S READY FULL self driving?
@@L3nny666 The term is just full self driving. Always has been and always will be.
@@Astra2 as we all know, FULL SELF DRIVING is a marketing term as it's not fully autonomous. Now the comment above me said "true" full self driving...which is rather funny considering for how long musk has promised true autonomy...if you don't get a joke and rather be a butthurt tesla fanboy and billionaire boot licker, go ahead.
@@L3nny666 Full self driving means fully autonomous. It's currently in beta, that's why it's not fully autonomous. I understand what you mean but I think it would be unwise to doubt the same person who figured out how to land rockets.
@@Astra2 yeah sure..."beta". tesla is still at SAE Level 2, while mercedes and toyota are already at SAE Level 3.
and you don't really believe musk figured out any of this technology, right? this man is an investor and not an engineer.
Great video
Great stuff! Please turn up the background music a little more in the next videos.
It's crazy to find a company like Tesla in the auto industry
why?
Meaning of your comment?
I think he means crazy great!
Great explanation, thanks!
Huge Thank you 🎉❤
Imagine car insurance companies deciding to only insure driverless cars.
That's so stupid to say. Think about what you just said.
that's going to happen sooner than you think! Governments will refuse to let people drive cars that aren't autonomous! 95% of accidents are due to human error; that's an enormous cost to social security.
@phvaessen it's going to happen but it shouldn't, even if it means a higher mortality rate
@@daviddickey9832 what do you mean higher mortality rate? will driverless cars cause more accidents than human drivers in your opinion?
@@11insertusernamehere what I'm saying is that automated cars have a lower mortality rate, but we shouldn't allow institutions to effectively prevent any person from driving even though people driving has a higher mortality rate
I love your news letter!!
Absolutely professional and detailed explanation, straight to the point. Thanks, and waiting for the next.
AMD deserves the credit for the MCM design, as they were the first to show its benefits at large scale with their Ryzen processors.
Actually, it was Threadripper.
Time will tell; like the Hyperloop and the Tesla truck, it could go either way.
I had a thought. Smartphones fall into basically two camps: iOS and Android. Is it possible that autonomous vehicles would also fall into two camps? Tesla and Apple, instead of Apple and Google. The traditional car companies are too far behind and will likely never catch up to Tesla because they wanted to wait and see, because no one actually believed that Tesla's vision-only approach would work. Everyone was betting that Tesla would fail. Well, it's pretty clear now that Tesla was right. So I believe that Tesla will license their technology to other car companies. I also believe that Apple will license their technology to car companies. I believe that Apple will come out with their autonomous car technology in 2026. That should be about when Tesla perfects their autonomous car technology. In Tesla's case, no later than the end of 2026, and in Apple's case, no earlier than 2026.
Even with massive computing power, you still need as much training data as you can get. I love Apple, and am invested in their stock, but I don't believe it will come close to Tesla's date of achieving FSD.
@@timmuyrers2057 I can't say you are wrong. Apple, like everyone else, is too scared of the cars making mistakes. Allowing the beta process is a huge advantage. I did say in my post that Apple would come out with something maybe AFTER 2026 (later than). But that Tesla would be maybe EARLIER than 2026 (no later than). Or I could be wrong altogether. But I wouldn't count Apple out just yet.
In Apple's case, most likely not earlier than 2028 or even 2030. While I do applaud Apple for what they have done with computer chips, and the new AR headset is impressive, they have reportedly reduced their ambitions in the car space. Even if they went all in, I don't think that FSD is something you can take shortcuts on.
There’s no Apple autonomous car technology lol. They can’t even get Siri right. That’s not their stronghold. Apple won’t be present in auto market.
With the Apple and Android analogy in the auto market it’s indeed interesting. But in that case it looks more like Tesla being Apple, making the few premium products in a huge scale, but also Android, licensing their software and ecosystem to others. Really looks more like a world where Android doesn’t exist and everyone licenses iOS.
M2 Ultra !== the most powerful computer you can buy. Sure, for ARM it is probably the most powerful, but it is far from the actual client (end user) max performance chip.
Not even the most powerful ARM system, not by a long way. There are ARM processors available that have hundreds of cores, not the dozen or so Apple put into their designs.
Well, the channel is catered to Tesla fanboys. No need to check any facts, just join the movement and fly to the stars.
@@cubertmisoThe Boring Company, Neuralink, Starlink, SpaceX, "Twitter", ...
It's not just about cars, it's about EVERYTHING Elon Musk.
Excellent food for thought
Excellent video my man!
Needing AWS servers for basically only Black Fridays is brilliant because no-one else (in the U.S.) needs those resources at that time. And likewise for other holidays and elsewhere.
@4:30 -- that casing makes it look like an ASIC imo.
Thankfully, self-driving cars, like EV's, will never be mainstream.
... Ok, that's a cool name, well done 👏🏻👏🏻👏🏻
Wonderfully done Kudos 😊
infinite possibilities of developments
You said 1 with 18 zeros, but put 15, make sure minor details add up!
Hey, I was the first viewer and first like. Lol. Great video. This is a game changer. One more money maker for Tesla. Here comes FSD and Optimus.
That will come because of Dojo; Dojo is Tesla's secret sauce for FSD and Optimus.
Love it when, while talking about supercomputers, they show HTML and JavaScript, exactly the thing which needs exaflops.
A one with EIGHTEEN zeros, and they show 15 in the video; btw, here we call that trillions.
2:10 Says 18 zeros. Shows 15 zeros.
9:10 These are the most powerful computers that you can buy! 🤣😂🤣
Oh please! What a delirious Apple fanboy statement.
I love your content, though! 😁
Yeye when I see it in action I will believe it
We squandered the train
🤔👍3×3×3=1 , because it's prime number is one. But this current progress is a game changer. It will come too a point where processing power will be as a fluid concept as the cloud!!!
Ps love my DOJO chip merchandise 😁👍💚
Why a prime number: example the human brain ; left hemisphere, right hemisphere, one whole brain (3)=1 just like quantum mechanics, two points coming together for one answer(3)=1.
Nice work thanks
Great video. Keep it up and dont make me hit the new button🤣
I imagine a scenario where Tesla sells Training Tiles and makes more profit from TTs than cars. Your “game changer” is spot on.
Your best video yet.
3:52 Maybe they train the AI of these Tesla robots "Optimus" with the DOJO?
Not maybe, definitely. They've said as much.
Wow I love it🎉😮🎉
09:16 "The most powerful computers you can buy"
far from it.
m2ultra provides 48187 passmarks, and 4038 in single thread
The core i7 12700k provides 34780 passmarks but 4055 in single thread (so a bit faster for some single thread processes)
The AMD Ryzen Threadripper 3990X provides 81228 passmarks and 2,569 in single thread, obliterating the m2ultra.
for those 3, the core i7 is faster for single thread, also cheaper, only 300usd.
I think that the narrator may be referring to the performance-per-watt metric
@@jabulaniharvey he would still be wrong. The Apple M chips are nothing special, just modified ARM chips which they licensed for the ability to modify to their needs. But as far as Apple marketing and PR go, it's spot on. Complete nonsense that the typical Mac user will gobble up lol.
Should also clarify: the video is great, and I think the author was trying to simplify everything to make the points easily digestible. Just take anything he says about Apple performance with a dump-truck-sized grain of salt.
@@bsbllclown Right, remember how Jobs used to claim that their 68K based systems were a 'Supercomputer on a chip'?
Ugh, that level of false advertising shouldn't be allowed to happen. It's basically a form of theft.
@@jabulaniharvey performance per watt is dominated by ASIC chips
@@jabulaniharvey Heard that the same way. Let an M2 surpass a Threadripper or 4090 in rendering or such then we'll talk.
"That is a one with 18 zeroes behind it" and they show 15 zeroes... brilliant.
Thank you very educational
Tesla, forever
Leonardo da Vinci forever !!!
Keep up the good work
Thank you very much
Per Tesla's own internal memos, Dojo does not represent any significant step in compute, merely a reduction in cost.
That's it.
Good content, well presented.
It already knows all the information. It's all in the network already.
Congratulations brother for strong stiff competition in da market sector 👏
If I had none the year before and the next year I have some, I have created a 100% increase in my production...
I wonder what the compute per watt is for Dojo vs A100?
exactly. ^^^^^^^^^^^^^^^^
THIS is what people should be asking and talking about.
As Dojo is more highly optimised for a specific task, it is almost certainly way more efficient than the A100 for that particular task.
They should compare it to the H100; the A100 is last gen, so the comparisons are more favorable.
@@jakubiskra523 For AI the a100 is still the better card as it has hardware specific to deep learning tasks with the H100 being the better option for raw processing scientific workloads. The A100 is also more energy efficient making it a better fit for large multi card systems. They are basically designed for different tasks rather than being different generations of the same thing.
@@schrodingerscat1863 this is why all of the AI companies are using the H100 for their clusters, and the H100 is more energy efficient in every way; your source of information is not trustworthy.
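One rough way to frame the perf-per-watt question, using publicly quoted peak figures only (Tesla's AI Day number of roughly 362 BF16 TFLOPS at about 400 W for the D1, and Nvidia's 312 BF16 TFLOPS at 400 W for the SXM A100). These are marketing peaks, not measured training throughput, so treat the sketch as illustrative:

```python
# Rough peak-spec comparison only; real training efficiency depends on memory,
# interconnect, and utilization, which peak TFLOPS/W does not capture.
chips = {
    # name: (publicly quoted peak BF16 TFLOPS, quoted TDP in watts)
    "Tesla D1 (AI Day 2021 figure)": (362, 400),
    "Nvidia A100 SXM": (312, 400),
}

for name, (tflops, watts) in chips.items():
    print(f"{name}: {tflops / watts:.2f} peak TFLOPS per watt")
```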
350million miles of FSD data 🎉🎉🎉
Awesome video. Funny too how, at about 2 minutes in, while explaining what an exaflop is and this powerful computer, they show some basic HTML and CSS hehe
To put it simply, FSD must produce a set of correct and safe driving responses to a set of situational images created by the car's cameras. That requires some amount of "prediction" of what each object in the image is, and what it is likely to do next. Ignoring inattentiveness, even human drivers get that wrong a lot of the time. If FSD is to be successful it needs to get that right more often than human drivers do. Also, driving responses need to be different under different road surface and weather conditions, and I don't even know if FSD accommodates this. But in any case, the "computational power" required for this probably cannot be "on board" the vehicle. It might resolve to image analysis, object identification within the image, probabilities of what each object will do next, and the driving response to that. That is a lot of possible "image" - "driving response" combinations to be processed in real time. Even if the supercomputer could do it, then there is also the "real time" communication between the computer and the vehicle (the bandwidth).
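A purely conceptual sketch of the loop described above (perceive, identify, predict, respond). The names and logic are hypothetical stand-ins, not Tesla's actual software; note that the video itself describes Dojo as a training cluster, so only the trained network would need to run in the car, not the supercomputer:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectedObject:
    label: str                              # e.g. "pedestrian", "car"
    distance_m: float                       # distance from the vehicle
    predicted_motion: Tuple[float, float]   # rough guess of where it moves next

def perceive(camera_frames: list) -> List[DetectedObject]:
    """Stand-in for the vision network: turn images into labeled objects."""
    # A real system would run a neural network here; we return a fixed example.
    return [DetectedObject("pedestrian", 12.0, (0.5, 0.0))]

def plan(objects: List[DetectedObject], road: str = "dry") -> str:
    """Stand-in for planning: pick a driving response from the predictions."""
    margin = 20.0 if road == "wet" else 10.0   # larger safety margin in the wet
    if any(o.label == "pedestrian" and o.distance_m < margin for o in objects):
        return "slow_and_yield"
    return "proceed"

# One tick of the loop: in practice this must run on the in-car computer in
# real time; Dojo's job is training the networks offline, not driving the car.
print(plan(perceive(camera_frames=[]), road="wet"))   # -> "slow_and_yield"
```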
Well done.
Dojo tesla optimus, can't wait.
Compared to Apple CPUs that still use DRAM for memory, Dojo is using lots of SRAM, which is highly expensive but much faster than DRAM. Most computers use SRAM only for L1 cache in the CPU, and the main memory uses cheaper DRAM tech.
Totally different applications, the Dojo processors only need small amounts of memory because their task is very specific and highly optimised for that single task. Apple CPUs are just general purpose CPUs with a lot of sub systems integrated into a single package to reduce communication power consumption and latency. Dojo is more like a GPU than a CPU.
@@schrodingerscat1863 Dojo also has system-wide DDR4 SDRAM, but it's used as a fast storage device instead of being treated like traditional RAM. Load and store speeds to storage (I would assume SDRAM) are 400 GB/s and 270 GB/s according to the Wikipedia article. If you compare this to modern computers, the Intel i9-13900K has a max memory bandwidth of about 90 GB/s while using all cores in the optimal memory channel configuration.
But yes, SRAM has single-clock latency: Dojo runs at 2 GHz, so that would be 0.5 ns, vs the best available DDR4 SDRAM, which has a latency around CL12, or about 6.7 ns. So obviously you would try to write apps so that you use only the memory that has 13x smaller latency. However, that doesn't mean that Dojo cannot run other apps, too, only that you cannot get optimal performance with apps that cannot fit at least the full inner loop into the available SRAM.
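A quick sketch of the latency arithmetic in that comment (using the quoted 2 GHz clock and the quoted ~6.7 ns DDR4 CL12 figure; the 0.5 ns is just one clock period):

```python
# Latency arithmetic from the comment above.
sram_clock_hz = 2e9
sram_latency_ns = 1 / sram_clock_hz * 1e9   # one cycle at 2 GHz = 0.5 ns

ddr4_latency_ns = 6.7                       # roughly CL12 on fast DDR4, as quoted

print(f"SRAM: {sram_latency_ns:.1f} ns, DDR4: {ddr4_latency_ns} ns, "
      f"ratio ~{ddr4_latency_ns / sram_latency_ns:.0f}x")   # ~13x
```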
The chip will eventually resemble a Rubik's cube. And spin automatically for multiple use combinations.
Elon: "Hey world don't do AI"
Elon: "Welcome Dojo"
I wonder if that would run DCS in VR with full graphics options?
Nvidia can do it
12:23 no, that isn't how AWS started out; that's a myth being spread around. AWS was designed from the ground up.
I am a big fan of this supercomputer race... this subject...
Liked & subbed!!! Ireland,,,
wow i want one!
I loved your video but.......to define an exaflop you show 1 followed by 15 zeros and say the 1 should be followed by 18 zeros. I am just curious which you intended.
I like that Elon is singlehandedly creating the mechanism by which Terminators will hunt humans. Literally no other company is creating a system to replicate the human vision capability for AI… except Tesla
no other company or nation state, that you know of....
Don't forget he co-founded OpenAI and left because of the lack of caution.
99% of humans are born good, able to empathize with others; it's programmed into us by a billion years of evolution. The other 1% we have to incarcerate, or they are responsible for an exponentially greater % of violent crime.
An AI is 0% born good; 100% unable to empathize, and far too little is understood today of the fundamental programming that permits humans and societies to prosper together reliably.
Hollyweird has a lot to answer for.
@@starwaytoinfinity that was a part of it. However, according to Elon (if you take his words at face value):
It was mostly motivated because OpenAI is now closed source and the implications of them being able to manipulate the algos are too great. In that vein, he has expressed major concerns over the implications for free speech, which is his stated main reason for buying Twitter as well.
He goes pretty in depth into his motivations in his interview with Tucker Carlson.
Most of the content on this channel is available before it comes here, but nowhere is it presented so well. I don't mind the review.
It is crazy to see how far behind Tesla is with self-driving.