Arm vs x86 - Key Differences Explained

  • Published: 21 Dec 2024

Comments • 936

  • @GaryExplains
    @GaryExplains  4 years ago +94

    I also recommend this written article about this topic: Arm vs x86: Instruction sets, architecture, and all key differences explained - www.androidauthority.com/arm-vs-x86-key-differences-explained-568718/

    • @dalezapple2493
      @dalezapple2493 4 years ago +1

      Learned a lot Gary thanks 😊...

    • @Yusufyusuf-lh3dw
      @Yusufyusuf-lh3dw 4 years ago +1

      @Z3U5 why would that be a big boon to the industry, considering that Apple wants to create a large proprietary ecosystem that tries to hide everything from everyone, including the CPU specs, capabilities and associated technology? Apple is like the bogeyman trying to steal the future of computing. But I'm sure they will end up back where they were in the 90s.

    • @Yusufyusuf-lh3dw
      @Yusufyusuf-lh3dw 4 years ago +2

      @Z3U5 yes. There are multiple different use cases in the server market and ARM fits into a few of those. If you look at historical data on server market share, it's never been 100% x86 or Power CPUs; there's a small percentage of other architectures. There are even Intel Atom based servers. But the argument that Arm is going to capture a huge share of the server market is totally unrealistic.

    • @MozartificeR
      @MozartificeR 4 years ago +2

      Is it difficult for a company to recompile an app for RISC?

    • @HungryGuyStories
      @HungryGuyStories 4 years ago

      Thank you for the informative video! But why, WHY, *_WHY_* does almost every YouTuber hate their viewers' ears?!

  • @xcoder1122
    @xcoder1122 3 years ago +321

    The biggest problem that x86 has is its strong guarantees. Every CPU architecture gives programmers implicit guarantees. If they require guarantees beyond those, they have to explicitly request them or work around the fact that they are not available. Having lots of guarantees makes writing software much easier, but at the same time it causes a problem for multicore CPUs. That's because all guarantees must be upheld across all cores, no matter how many there are. This puts an increasing burden on the bus that interconnects the individual cores, as well as on any shared caches between them. The guarantees that ARM gives are much weaker (just as those of PPC and RISC-V are much weaker), which makes it much easier to scale those CPUs to a larger number of cores without requiring a bus system that has to become drastically faster or drastically more complex, as such a bus system will draw more and more power and at some point become the bottleneck of the entire CPU. And this is something Intel and AMD cannot change for x86, not without breaking all backward compatibility with existing code.
    Now as for what "guarantees" actually means, let me give you some examples: If one core writes data first to address A and then to address B, and another core monitors address B and sees it has changed, will it also see that A has changed? Is it guaranteed that other cores see write actions in the same order they were performed by some core? Do cores even perform write actions in any specific order to begin with? How do write actions and read actions order relative to each other across cores? Do they order at all? Are some, all, or none of the write actions atomic? Over 99% of the time none of these questions matter to the code you write, yet even though less than 1% of all code needs to know the answers, the CPU has to stick to whatever it promises 100% of the time. So the less it promises, the easier it is for the CPU to keep those promises and the less work must be invested in keeping them.
    When Microsoft switched from x86 in the first Xbox to PPC in the Xbox 360, a lot of code broke because that code was not legit in the first place. The only reason this code had been working correctly was the strong guarantees of the x86 architecture. With PPC's weaker guarantees, this code would only work sometimes, causing very subtle, hard-to-trace bugs. Only when programmers correctly applied atomic operations and memory barriers, and used guarded data access (e.g. mutexes or semaphores), did the code work correctly again. When you use these, the compiler understands exactly which guarantees your code needs to be correct, and knowing which CPU you are compiling for, it also knows whether it has to do something to make your code work correctly or whether it will work correctly "on its own" because the CPU already gives enough guarantees, in which case the compiler will do nothing. Every other day I see code that only works coincidentally because it runs on a CPU that has certain guarantees, but that very same code would fail for sure on other CPUs, at least if they have multiple cores, if there are multiple CPUs in the system, or if the CPU executes code in parallel as if there were multiple cores (think of hyper-threading).
    The guarantees of x86 were no issue when CPUs had a single core and systems had a single CPU, but they increasingly become an issue as the number of cores rises. Of course, there are server x86 CPUs with a huge number of cores, but take a look at their bus diagrams. Notice how much more complex their bus systems are compared to consumer x86 CPUs? And this is an issue if you want fast CPUs with many cores running at low power, providing long battery life and not requiring a complex cooling system, or otherwise running at very high temperatures. And adding cores is currently the easiest way to scale CPUs, as making cores run at higher speed is much harder (clock frequencies are currently usually below what they used to be before all CPUs went multicore), and it's even harder to make a CPU faster without raising the clock frequency or adding more cores, as most of the possible optimizations have already been done. That's why vendors need to resort to all kinds of trickery to get more speed out of virtually nothing, and these tricks cause other issues, like security issues.
    Think of Meltdown, Spectre, and Spectre-NG. These attacks used flaws in the way optimizations were implemented to make CPUs faster without actually making them faster, e.g. by using speculative execution: the CPU is not faster, but it can still perform work when it otherwise couldn't, and if it was doing that work correctly, it appears to be faster. If what it has just done turns out to be wrong, though, it has to roll back what it just did, and this rollback was imperfect: the designers didn't clean up the cache, thinking that nobody can look directly into it anyway, so why would that matter? But that assumption was wrong. By abusing another of these speed tricks, branch prediction, it is possible to reveal what is currently cached and what isn't, and that way software can get access to memory it should never have access to.
    So I think the days of x86 are numbered. It will not die tomorrow, but it will eventually die, as in many aspects x86 is inferior to ARM and RISC-V, which both give you more computational power for less money and less electrical power. And this is not a temporary thing: no matter what trickery and shrinking Intel and AMD apply to make x86 faster, the same trickery and shrinking can also be applied to ARM and RISC-V to make them faster as well. On top of that, it will always be easier to give them more cores, and that's why they keep winning in the long term.
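
    To make the "guarantees" examples concrete, here is a minimal C++11 sketch of the message-passing pattern described above (illustrative only, not from the original comment). With release/acquire ordering the compiler emits whatever barriers the target CPU needs, so the code is correct on x86 and Arm alike; if both atomic operations were relaxed instead, the program would be incorrect, yet on x86's strong memory model it would usually still appear to work, while on Arm or PPC the assert could genuinely fire.

    ```cpp
    #include <atomic>
    #include <cassert>
    #include <thread>

    int payload = 0;                  // "address A": plain data
    std::atomic<bool> ready{false};   // "address B": the flag

    void producer() {
        payload = 42;                                   // write A first...
        ready.store(true, std::memory_order_release);   // ...then publish it via B
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire))  // pairs with the release store
            ;                                           // spin until B is seen
        assert(payload == 42);  // acquire/release guarantees A is visible before B
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
    }
    ```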

    • @tvthecat
      @tvthecat 2 years ago +52

      Did you just type a fucking novel right here?

    • @dariosucevac7623
      @dariosucevac7623 2 years ago +22

      Why does this only have 26 likes? This is amazing.

    • @lachmed
      @lachmed 2 years ago +22

      This was such a great read. Thanks for taking time to write this!

    • @jamesyu5212
      @jamesyu5212 2 years ago +4

      There are so many things wrong in this comment that I'm glad it hasn't gotten a lot of likes. Thank God people learn to do actual research rather than agreeing with unverified claims in the comments section.

    • @xcoder1122
      @xcoder1122 2 years ago +23

      @@jamesyu5212 People especially need to stop listening to comments that simply make blanket claims that something is wrong without naming a single specific point that is supposed to be wrong, or refuting even a single point. Whoever sweepingly says "something is wrong" has ultimately said nothing, and might as well save his know-it-all comment in the future, because apparently he knows nothing; otherwise he would offer something concrete.

  • @ProcashFloyd
    @ProcashFloyd 4 years ago +427

    Nice to hear from someone who actually knows what they are talking about, not someone doing it just for the sake of making YouTube content.

    • @squirlmy
      @squirlmy 4 years ago

      I came specifically for Apple's Mac move to ARM. I've heard theories that are basically "conspiracy" theories! lol

    • @mrrolandlawrence
      @mrrolandlawrence 4 years ago +3

      i bet gary still runs openVMS at home ;)

    • @mayankbhaisora2699
      @mayankbhaisora2699 4 years ago

      @@mrrolandlawrence I don't know what the hell you are talking about but I get what you are saying :)

    • @ragnarlothbrook8117
      @ragnarlothbrook8117 4 years ago +1

      Exactly! I'm always impressed by Gary's knowledge.

    • @RaymondHng
      @RaymondHng 4 years ago +2

      @@mayankbhaisora2699 VMS was the operating system that ran on DEC minicomputers (midrange systems). OpenVMS is the descendant of VMS that is developed and supported by VMS Software Inc. en.wikipedia.org/wiki/OpenVMS

  • @octacore9976
    @octacore9976 4 years ago +166

    There are thousands of useless videos saying "Apple moving to its own silicon!!!"... but this is the only one that is actually informative.

  • @zer0legend109
    @zer0legend109 4 years ago +307

    I stopped midway, asking myself why I felt like I'd been tricked into a college lecture without noticing it until midway through, and I was still liking it.

    • @SH1xmmY
      @SH1xmmY 4 years ago +2

      You are not alone

    • @rhemtro
      @rhemtro 3 years ago

      doesn't even feel like a college lecture

    • @batmanonabike3119
      @batmanonabike3119 2 years ago +7

      Wait… clandestinely educating us? Cheeky

    • @lukescurrenthobby4179
      @lukescurrenthobby4179 8 months ago +1

      That’s me rn

    • @macaroni.ravioli
      @macaroni.ravioli 3 months ago

      @@lukescurrenthobby4179 Me too hahahaha, I'm cracking up at the original commenter above 🤣🤣🤣

  • @Charlie-jf1wb
    @Charlie-jf1wb 4 years ago +89

    I struggled to find someone who could actually explain something as complex as Arm vs x86 in such straightforward terms. Thank you.

  • @goombah_ninja
    @goombah_ninja 4 years ago +17

    Nobody can explain it better than you. Thanks for packing it all under 21 mins. I know it's tough.

  • @TurboGoth
    @TurboGoth 3 years ago +10

    In your slide listing the major differences in RISC vs CISC, one thing that struck ME as relevant, because of my interest in compilers (and which admittedly may not be particularly worth mentioning depending on your audience), is that RISC instructions also tend to be more uniform, in the sense that if you want to do an add instruction, for example, you don't really have to think about which instructions are allowed with which registers. Also, the instruction pointer register is just a regular general-purpose register, while that register is often special-purpose in other architectures and awkward and roundabout to even access. Often you need odd tricks, like using Intel's call instruction as if you're calling a function, for its side effect of pushing the instruction pointer onto the stack; from there, you can pop the value off. But in Arm, you just have it in one of your normal registers. Obviously, you must beware that if you dump random garbage into it you'll start running code from a different place in memory, but that should be obvious. Yet such uniformity can eat space, and so the Thumb instructions don't have it.
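
    As a hedged illustration of that point about accessing the instruction pointer, here is a small sketch using GCC/Clang inline assembly (the exact instruction sequences are assumptions about typical toolchains; note it is classic 32-bit Arm where the PC is literally a general register, r15, while AArch64 dropped that but can still read the PC directly with a single ADR):

    ```cpp
    #include <cstdio>

    void* current_pc() {
        void* pc = nullptr;
    #if defined(__x86_64__)
        // RIP cannot be read like a normal register; a RIP-relative LEA is the
        // modern stand-in for the old "call next-instruction; pop" trick.
        asm volatile("lea 0(%%rip), %0" : "=r"(pc));
    #elif defined(__arm__)
        // Classic 32-bit Arm: the program counter is just register r15.
        asm volatile("mov %0, pc" : "=r"(pc));
    #elif defined(__aarch64__)
        // AArch64: the PC is no longer a general register, but ADR reads it directly.
        asm volatile("adr %0, ." : "=r"(pc));
    #endif
        return pc;
    }

    int main() {
        std::printf("executing near %p\n", current_pc());
    }
    ```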

  • @yogevbocher3603
    @yogevbocher3603 4 years ago +102

    Just to give some feedback: the pronunciation of the name Dobberpuhl was perfect.

    • @seanc.5310
      @seanc.5310 4 years ago +1

      How else would you pronounce it?

    • @zoehelkin
      @zoehelkin 4 years ago +4

      @@seanc.5310 like South Gloucestershire

    • @ivanguerra1260
      @ivanguerra1260 4 years ago

      I understood rubber poop!!

    • @squirlmy
      @squirlmy 4 years ago +1

      Of course, he's British, not an American!

  • @thecalvinprice
    @thecalvinprice 4 years ago +9

    My father was working for DEC at that time, and he told me a lot about the Alpha chip and their foray into 64-bit before AMD and Intel brought their consumer 64-bit chips to market. He also told me about the purchase by Compaq (and subsequently HP) and that a lot of people weren't keen on the purchase, as there was concern all their efforts would disappear.
    These days DEC is barely a whisper when people talk about tech history, but I still remember sitting at an ivory CRT with the red 'DIGITAL' branding on it.
    Edit: Thanks Gary for the little nostalgia trip.

    • @scality4309
      @scality4309 4 years ago +1

      VT-100?

    • @tasosalexiadis7748
      @tasosalexiadis7748 2 years ago

      The lead designer of the DEC Alpha founded a CPU company in 2003 which Apple bought in 2008 (PA Semi). This is where they got the talent to engineer their own CPUs for the iPhone (starting from the iPhone 5) and now the new ARM-based Macs.

  • @ZhangMaza
    @ZhangMaza 4 years ago +10

    Wow, I learned a lot just from watching your video, thanks Gary :)

  • @ricgal50
    @ricgal50 9 months ago +2

    Really great to hear someone who knows about Acorn. In 1986 I lived in north-western Ontario (Canada). Our library put in a desk with 6 computers on it. They were the North American version of the BBC Micro, made by Acorn. It was the first computer I worked on. I didn't find out about the ARM chip and the Archimedes until the mid nineties, and it was shortly after that time that ARM was spun off. BTW, Acorn was owned by Olivetti at this time. When I found out about the Raspberry Pi, I got on board. I had a Model A, and then I bought a Pi 4 just in time for Covid, so I could use my camera module as a second camera for Zoom.

  • @dalobucaram7078
    @dalobucaram7078 4 years ago +37

    Gary is arguably the best person to explain these kinds of topics in layman's terms. Intel will finally get a humility lesson.

  • @reptilez13
    @reptilez13 4 years ago +8

    The possibilities for how this will go, and how it will affect the industry and the other players in it, are endless. Very interesting next few years for numerous reasons, this being a big one. Great vid too btw.

  • @JanuszKrysztofiak
    @JanuszKrysztofiak 3 years ago +13

    I get the impression memory is (relatively) slower and more expensive TODAY than back then. Then - in the early 1980s - the performance of RAM and CPUs was not far apart, so CPUs were not that handicapped by RAM - they could fetch instructions directly and remain fully utilized. Today it is different: RAM is much, much slower than the CPU, so CPUs spend a good part of their transistor budgets on internal multilevel caches. Whereas on modern systems you can have 128 GB of RAM or more, the only memory that actually matches your CPU speed is the L1 cache, which is tiny - a Ryzen 5950X has only 64 kilobytes of L1 cache per core (32 kB for code and 32 kB for data). I would say instruction set "density" is even more important now.
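
    The cache point is easy to demonstrate. In this minimal sketch (array size and stride are illustrative), both calls touch every element of the same array exactly once, but the strided order defeats the cache lines and the hardware prefetcher, so it typically runs several times slower:

    ```cpp
    #include <chrono>
    #include <cstdio>
    #include <vector>

    // Sum all elements of the array, visiting them with the given stride.
    long long sum_with_stride(const std::vector<int>& a, size_t stride) {
        long long sum = 0;
        for (size_t start = 0; start < stride; ++start)
            for (size_t i = start; i < a.size(); i += stride)
                sum += a[i];
        return sum;
    }

    int main() {
        std::vector<int> a(1 << 26, 1);  // 64 Mi ints (~256 MB), far bigger than any cache
        for (size_t stride : {size_t(1), size_t(4096)}) {
            auto t0 = std::chrono::steady_clock::now();
            long long s = sum_with_stride(a, stride);
            auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                          std::chrono::steady_clock::now() - t0).count();
            std::printf("stride %4zu: %lld ms (sum=%lld)\n", stride, (long long)ms, s);
        }
    }
    ```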

    • @mayatrash
      @mayatrash 2 years ago

      Why do you think that is? Because the R&D into memory is worse? Genuine question

    • @okaro6595
      @okaro6595 1 year ago +1

      Very few modern systems have 128 GB of RAM. Typical is about 16. 16 GB costs some $50. In around 1983 the typical amount was 64 KB, and it cost about $137, which inflation-adjusted would be about $410. So no, RAM is not more expensive now.

    • @GreenDayGA
      @GreenDayGA 1 year ago

      @@okaro6595 L1 cache is expensive

  • @handris99
    @handris99 4 years ago +3

    I was afraid in the beginning that all I'd hear was history, but I'm glad I watched till the end; my questions were answered.

  • @jeroenstrompf5064
    @jeroenstrompf5064 4 years ago +6

    Thank you! You get extra points for mentioning Elite - I played that so much on a Commodore 64 when I was around 16.

  • @alexs_thoughts
    @alexs_thoughts 4 years ago +6

    So what would be the arguments in favor of Intel (or x86 in general) in this situation?

    • @tophan5146
      @tophan5146 4 years ago +1

      Unexpected complications with 10nm.

    • @RickeyBowers
      @RickeyBowers 4 years ago +2

      IPC and higher single-core speed.

    • @Robwantsacurry
      @Robwantsacurry 4 years ago

      The same reasons that CISC was developed in the first place: in the 1950s memory was rare and expensive, so complex instruction sets were developed to reduce the amount of memory required. Today memory is cheap, but its speed hasn't improved as fast as CPUs'. A RISC CPU requires more instructions, hence more memory accesses, so memory bandwidth is more of a bottleneck. That is why a CISC CPU can have a major advantage in high performance computing.

    • @snetmotnosrorb3946
      @snetmotnosrorb3946 4 years ago +1

      Compatibility. That's it really. x86 has come to the end of the road. The thing is bloated by all the different design paths taken and extensions implemented over the decades, and is crippled by legacy design philosophies from the 70s. Compatibility is really the only reason it has come this far; that is what attracted investment. PowerPC is a way better design even though it's not much newer, but it came at a time when Wintel rolled over all competition in the 90s, combined with insane semiconductor manufacturing advances that only the biggest dogs could afford, and thus PPC didn't gain the traction needed to survive in the then-crucial desktop and later laptop computer markets. Now the crucial markets are servers and mobile devices, where ARM has a growing foothold and total dominance respectively. Now ARM is getting the funding needed to surpass the burdened x86 in all metrics. The deciding factor is that x86 has hit a wall, while ARM still has quite a lot of potential, and that is what will ultimately force the shift despite broken compatibility. Some will still favour legacy systems, so x86 will be around for a long time, but its heydays are definitely numbered.
      @@RickeyBowers ARM has way higher IPC than x86. No-one has ever really tried to make a wide desktop design based on ARM until now, so total single-core speed is only a valid argument for a few more years.

    • @AdamSmith-gs2dv
      @AdamSmith-gs2dv 4 years ago

      @@Robwantsacurry Single-core performance is also better with x86 CPUs; for ARM CPUs to accomplish anything they NEED multi-threaded programs, otherwise they are extremely slow.

  • @johnmyviews3761
    @johnmyviews3761 4 years ago +10

    I understood Intel was a memory maker initially; however, a Japanese company commissioned Intel to develop its calculator chips, which ultimately developed into microprocessors.

  • @mokkorista
    @mokkorista 4 years ago +84

    My cat: trying to catch the red dot.

  • @mohammedmohammed519
    @mohammedmohammed519 4 years ago +5

    Gary, you’ve been there and done that ⭐️

  • @AkashYadavOriginal
    @AkashYadavOriginal 4 years ago +2

    Gary, I've read on forums that ARM currently is not a true RISC processor. Some of its recent additions, like the NEON instructions, are not compatible with the RISC philosophy and are more like CISC instructions. Can you please explain?
    Also, Intel & AMD's recent additions like SSE actually follow the RISC philosophy.

    • @GaryExplains
      @GaryExplains  4 years ago

      NEON and SSE are both forms of SIMD, so there is some confusion in your statement.
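
      For readers unfamiliar with the term: SIMD (Single Instruction, Multiple Data) means one instruction operates on several values at once, and NEON and SSE are the Arm and Intel flavors of it. A minimal sketch of the same four-lane float addition in both sets of intrinsics (the compile-time guards are an assumption about typical compilers):

      ```cpp
      #include <cstdio>
      #if defined(__ARM_NEON) || defined(__aarch64__)
        #include <arm_neon.h>    // Arm's SIMD (NEON)
      #elif defined(__SSE__) || defined(__x86_64__)
        #include <xmmintrin.h>   // Intel's SIMD (SSE)
      #endif

      int main() {
          float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];
      #if defined(__ARM_NEON) || defined(__aarch64__)
          vst1q_f32(c, vaddq_f32(vld1q_f32(a), vld1q_f32(b)));            // one 128-bit NEON add
      #elif defined(__SSE__) || defined(__x86_64__)
          _mm_storeu_ps(c, _mm_add_ps(_mm_loadu_ps(a), _mm_loadu_ps(b))); // one 128-bit SSE add
      #endif
          std::printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);
      }
      ```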

    • @AkashYadavOriginal
      @AkashYadavOriginal 4 years ago

      @@GaryExplains I've read forums like these that say many of the ARM instructions are actually CISC-like and many instructions on x86 actually follow RISC, so in practice the difference between RISC & CISC processors is meaningless. news.ycombinator.com/item?id=19327276

    • @GaryExplains
      @GaryExplains  4 years ago +1

      CISC and RISC are very high level labels and can be twisted to fit several different meanings. Bottom line: x86 is CISC without a doubt, it has variable instruction lengths and can perform operations on memory. As I said in the video, since the Pentium Pro, all x86 instructions are reduced to micro-ops and the micro-ops are similar to RISC, but that decoding is an extra step which impacts performance and power. Arm on the other hand is a load/store architecture and has fixed instruction lengths. Those are fundamental differences.
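
      A small sketch of what those two differences look like in practice. For the same one-line function, an x86-64 compiler can emit a single variable-length instruction that operates directly on memory, while an AArch64 compiler must emit fixed-size load/modify/store instructions; the assembly in the comments is typical -O2 output quoted from memory, so treat it as illustrative:

      ```cpp
      // Increment a value that lives in memory.
      void add_in_memory(int& x, int y) {
          x += y;
          // Typical x86-64 (CISC, register-memory) codegen:
          //     add dword ptr [rdi], esi   ; one instruction, ALU op on memory
          //
          // Typical AArch64 (RISC, load/store) codegen:
          //     ldr w8, [x0]               ; load from memory
          //     add w8, w8, w1             ; modify in a register
          //     str w8, [x0]               ; store back to memory
      }
      ```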

    • @AkashYadavOriginal
      @AkashYadavOriginal 4 years ago

      @@GaryExplains Thanks Professor, appreciated.

  • @lilmsgs
    @lilmsgs 4 years ago +3

    I'm an ex-DECie tech guy too. Glad to see someone helping to explain their place in history. All these years later, I'm still heartbroken.

  • @TechboyUK
    @TechboyUK 1 year ago +1

    Excellent overview! With the CPU race on at the moment, it would be great to have an updated version of this video in around 6 months from now 😊

  • @aaaaea9268
    @aaaaea9268 4 years ago +4

    This video is so underrated, lol - you explained something I was never able to understand, in just 20 minutes.

  • @reikken69
    @reikken69 4 years ago +1

    What about RISC-V? I heard it is quite different from the ARM architecture...

  • @RoyNeeraye
    @RoyNeeraye 4 years ago +11

    10:52 *Apple

    • @GaryExplains
      @GaryExplains  4 years ago +8

      Ooopss! 😱

    • @hotamohit
      @hotamohit 4 years ago

      i was just reading it at that part, amazing that you noticed it.

  • @ddd12343
    @ddd12343 4 years ago +1

    Talking about porting apps from a programmer's perspective: can the x86 to ARM shift be done automatically? I've read many comments pointing out that Apple's move to ARM might have problems with actually convincing app developers to move to ARM. Apple is also offering special ARM-based prototype computers to programmers just for testing, so they can be ready for the release of ARM Macs. Why is that? Isn't it a matter of simple app recompilation? Can't Apple do that automatically for apps in its store?

  • @cypherllc7297
    @cypherllc7297 4 years ago +25

    I am a simple man. I see Gary, I like the video.

  • @cliffordmendes4504
    @cliffordmendes4504 3 years ago

    Gary, your explanation is smooth like butter!

  • @madmotorcyclist
    @madmotorcyclist 4 years ago +3

    It is interesting how the CISC vs. RISC battle has evolved, with Intel's CISC holding the lead for many years. Unfortunately, Intel rested on its laurels, keeping the 8086 architecture and boosting its operating frequencies while shrinking the production process down to ever smaller nm levels. The thermal limits of this architecture have literally been reached, giving Apple/ARM the open door with their approach, which now, after many years, has surpassed the 8086 architecture. It will be interesting to see how Intel reacts given their hegemonic control of the market.

  • @adj2
    @adj2 3 years ago +1

    Absolutely beneficial; I learned a lot, and I still have a lot more learning to do. Definitely subscribed and will be pulling up other videos that you've done for additional information.

  • @jinchoung
    @jinchoung 4 years ago +21

    Surprised you didn't mention the SoftBank acquisition. All my ARM stock got paid out when that happened and it kinda surprised me. I totally thought I'd just get rolled over into SoftBank stock.

    • @patdbean
      @patdbean 4 years ago +1

      Yes, £17 a share was a good deal at the time, I wonder what they would be valued at today.

    • @autohmae
      @autohmae 4 years ago

      @@patdbean they benefited from the price drop of the Pound because of Brexit.

    • @patdbean
      @patdbean 4 years ago

      @@autohmae SoftBank did yes, but ARM themselves (I think) have always charged royalties in USD.

    • @vernearase3044
      @vernearase3044 4 years ago

      Well, it looks like SoftBank is shopping ARM around ... hope it doesn't get picked up by the mainland Chinese.
      As for stocks, I noticed I was spending waaaayyy too much on Apple gear, so I bought some Apple right after the 7-1 split in 2014 to help defray the cost at, IIRC, about $96/share.

    • @Cheese-n-Cake16
      @Cheese-n-Cake16 4 years ago +1

      @@vernearase3044 The British regulators will not allow the Chinese to buy ARM, because if they did, that would be the death of ARM in the Western world.

  • @falcon81701
    @falcon81701 4 years ago +2

    Best video on this topic by far

  • @1MarkKeller
    @1MarkKeller 4 years ago +37

    *GARY!!!*
    *Good Morning Professor!*
    *Good Morning Fellow Classmates!*

  • @danielho5635
    @danielho5635 4 years ago

    3:03 Small correction - Itanium at launch did not have x86 backward compatibility. Later on an x86 software emulation was made, but it was too slow.

  • @Delis007
    @Delis007 4 years ago +4

    I was waiting for this video. Thanks

  • @prince-op2ff
    @prince-op2ff 4 years ago +1

    Gary, please explain the concept of pipelining in processors.

    • @GaryExplains
      @GaryExplains  4 years ago

      I cover that in the IPC video linked in the description.

  • @numtostr
    @numtostr 4 years ago +11

    Hey Gary, your videos are awesome.
    I am waiting for some more Rust videos. The last Rust video helped me get started with the language.

  • @gaborenyedi637
    @gaborenyedi637 4 years ago +1

    The instruction -> uops translation does not really count. It inevitably takes some transistors, but currently there are so many transistors that it does not really matter (say 1%). Compared to the total power consumption of a real computer (not a phone, but a notebook or a desktop) it is next to nothing. The much higher power consumption comes from many other things, like huge caches, AVX (yes, you use it a lot - check what a compiler produces!), branch prediction, out-of-order execution, hyper-threading and so on. Some of these can be found in some ARM chips, but mostly they don't have them.

    • @GaryExplains
      @GaryExplains  4 years ago

      The difference is that those 1% of transistors are used ALL of the time. It doesn't matter how much space they take relatively, but how often they are used. Like putting a small wedge under a door and saying "this door is hard to open, odd since the wedge is only 1% of the whole door structure".

  • @TheSahil-mv3ix
    @TheSahil-mv3ix 4 years ago +4

    Sir! When will ARMv9 arrive? Are there any dates or assumptions? What will it be like?

    • @MiguelAngel-rw7kn
      @MiguelAngel-rw7kn 4 years ago +2

      Rumors say that the A14 will be the first one to implement it, so ARM should announce it before Apple. Maybe next month?

  • @patdbean
    @patdbean 1 year ago +1

    19:27 Have Intel gone beyond 10nm yet? This video is now 4 years old.

    • @GaryExplains
      @GaryExplains  1 year ago +1

      Yes.

    • @patdbean
      @patdbean 1 year ago

      @@GaryExplains 7nm? They are not at TSMC's 5/4 yet, are they? And what about GlobalFoundries? When AMD moved to TSMC, I think GF were at 12nm.

  • @danielcartis9011
    @danielcartis9011 4 years ago +9

    This was absolutely fascinating. Thank you!

  • @sgodsell2
    @sgodsell2 4 years ago

    Do you think Apple's new Mac ARM SoCs are based on Arm's Neoverse N1, or better yet the E1 architecture? Because all the other Arm chips before were based on cluster sizes of up to a maximum of 4 cores. The E1 can have a cluster of up to 8 cores. The N1 can have clusters from 8 cores up to a maximum of 16 cores.

    • @GaryExplains
      @GaryExplains  4 years ago

      Apple's silicon isn't based on any designs from Arm, it is Apple's own design. Also, since DynamIQ, all Cortex CPUs can have 8 cores per cluster.

  • @taidee
    @taidee 4 years ago +9

    This video is very rich in detail as usual, thank you 🙏🏾 Gary.

  • @BeansEnjoyer911
    @BeansEnjoyer911 4 years ago +2

    Subbed. Just straight up information easily explained. Love it

  • @correcteur_orthographique
    @correcteur_orthographique 4 years ago +4

    Silly question: why haven't we used ARM processors in PCs before?

    • @Rehunauris
      @Rehunauris 4 years ago

      There have been ARM-powered computers (the Raspberry Pi is the best example) but it's been a hobbyist/niche market.

    • @Ashish414600
      @Ashish414600 4 years ago +1

      Because people don't want to rewrite everything! The ARM architecture is dominant in the embedded system market, in SoCs; you won't see the x86 architecture popular there. For PCs, it's the opposite. Intel already dominated the PC market, so it's hard for any company to develop a system for an entirely different architecture. The software developers (especially the cross-compiler designers) will surely have headaches, but if Apple is successful, it will indeed bring a revolution and challenge Intel's monopoly in PCs!

    • @correcteur_orthographique
      @correcteur_orthographique 4 years ago

      @@Ashish414600 ok thx for your answer.

    • @augustogalindo8687
      @augustogalindo8687 3 years ago +1

      ARM is actually a RISC architecture, and there is information about this in Grove's book (the ex-Intel CEO). He says Intel actually considered using RISC instead of CISC (the current Intel chip architecture) but decided against it because there was not much of a difference. Consider that we are talking about a time when most computers were stationary and didn't need to be energy efficient, so it made sense back then to keep on with CISC. However, nowadays energy efficiency has become very important, and that's where ARM is taking an important role.

  • @CaptainCaveman1170
    @CaptainCaveman1170 4 years ago +1

    Hi, great video. I'm wondering, if you ever run low on content, could you cover the very interesting history of the Datapoint 2200? I know pretty much everybody would disagree with this assertion, but in my mind it is the first fully contained "Personal Computer".

    • @RaymondHng
      @RaymondHng 4 years ago +1

      ruclips.net/video/DrJiYysZwxk/видео.html
      ruclips.net/p/PL6A115E759C11E84F

  • @Caesim9
    @Caesim9 4 years ago +12

    The big problem for Intel is that they invested heavily in branch prediction. And it was a wise move: their single-core performance was really great. But then Spectre and Meltdown happened and they had to shut down their biggest improvements. Their rivals AMD and ARM invested more in multicore processors. They aren't affected by these vulnerabilities, and so Intel has a lot of catching up to do.

    • @autohmae
      @autohmae 4 years ago +8

      Actually, lots of architectures were affected by Spectre and Meltdown, but Intel the most.
      They affected at least: Intel, AMD, ARM, MIPS, PowerPC. But not RISC-V and not Intel Itanium. RISC-V at least does things in a similar way, but not all the things needed to run into the problem. In theory new chip designs could have been affected too, but obviously designers know about these things now, so it will probably not happen.
      Different processor series/types/models of each architecture were affected in different ways.

    • @Yusufyusuf-lh3dw
      @Yusufyusuf-lh3dw 4 years ago +2

      Intel did not discard any of their prediction units or improvements because of the Spectre/Meltdown problem.

  • @seeker9145
    @seeker9145 4 years ago

    Nice explanation. However, I would like to know why Apple had to base its processor architecture on ARM if it was designing its own custom architecture. Is it because they needed the ARM instruction set? If yes, couldn't they make their own instruction set? Or is that a very long process?

    • @igorthelight
      @igorthelight 4 years ago +1

      It's much easier to use an already existing architecture than to develop your own.

  • @roybixby6135
    @roybixby6135 4 years ago +5

    And everyone thought Acorn was finished when its RISC-based Archimedes computer flopped...

  • @lolomo5787
    @lolomo5787 4 years ago +1

    It's cool that Gary takes the time to read and reply to sensible comments here, even when his video was uploaded months or even years ago. Other channels don't do that.

  • @sathishsubramaniam4646
    @sathishsubramaniam4646 4 years ago +3

    My brain has been itching for a few years now whenever I hear ARM vs Intel, but I was too lazy. Finally everything is flushed and cleared out.

    • @diegoalejandrosarmientomun303
      @diegoalejandrosarmientomun303 4 years ago +2

      Arm or AMD? AMD is the direct competitor of Intel regarding x86 chips and other stuff. Arm is another architecture, which is the one covered in this video.

  • @binoymathew246
    @binoymathew246 4 years ago +1

    @Gary Explains Very educational. Thank you for this.

  • @yash_kambli
    @yash_kambli 4 years ago +8

    Meanwhile, when could we expect to see RISC-V ISA based smartphones and PCs? If someone were to bring out a high-performance, out-of-order, deep-pipeline, multi-threaded RISC-V processor, then it might give tough competition to ARM.

    • @GaryExplains
      @GaryExplains  4 years ago +6

      The sticking point is the whole "if someone were to bring out a high-performance, out-of-order, deep-pipeline, multi-threaded RISC-V processor" part. The problem isn't the ISA but actually building a good chip with a good microarchitecture.

    • @shaun2938
      @shaun2938 4 years ago +1

      Microsoft and Apple wouldn't be spending billions on changing to ARM if they didn't feel that ARM could keep up with x86 development while still offering significant power savings, whereas RISC-V is still an unknown. That said, by supporting ARM they would most likely make supporting RISC-V in the future much easier.

    • @AnuragSinha7
      @AnuragSinha7 4 years ago

      @@shaun2938 Yeah, I think compatibility won't be a big issue because they can all design and make it backward compatible.

    • @autohmae
      @autohmae 4 years ago

      RISC-V will take a decade or more. I think it will mostly be embedded systems and microprocessors for now, and it could take a huge chunk of that market. And some are predicting RISC-V will be used in the hardware security module market.

    • @carloslemos3678
      @carloslemos3678 4 years ago

      @@GaryExplains Do you think Apple could roll their own instruction set in the future?

  • @Nayr7928
    @Nayr7928 4 years ago +2

    Hey Gary, I'd like to know more about how Metal, Vulkan and OpenGL work differently. Can you make a vid about it? Your vids on how things work are very informative.

    • @TurboGoth
      @TurboGoth 3 years ago +2

      Ha! Wow. The finer points of how an API is designed, and a comparison of APIs that achieve similar goals but are designed by different groups, is a tough one. But I would like to add a perspective on this question since I've toyed with Vulkan and OpenGL. Really, Vulkan is OpenGL 5. Khronos, the creator of OpenGL, wanted a reset in their API, and moving between the OpenGL versions in a somewhat graceful way was getting too awkward, so they reset the API and went with a new name. Now, to bring Metal into the conversation, it should be noted that Vulkan is effectively an open Metal, since Metal is Apple's, but it took the same approach in that both are very rigid about establishing up front how the software is to work with the configuration of the display settings, so that the runtime conditionals inside the API that cope with changing conditions are minimized. This allows for very efficient communication between the application software and the hardware. Also, Vulkan (and perhaps Metal too - I don't know, I've never actually programmed in it - I'm no Apple fanboy) consolidates the compute API with the video API, so that you can run number-crunching workloads on the GPU with a similar API to the one you use to draw images to the screen. This lets you see the GPU as a more general data-crunching device that only happens to crunch data on the same device where results are ultimately displayed. OpenCL is another API that gives this capability (to crunch data), but it is a narrower view of GPU capabilities in that you don't use graphics through it. But Vulkan can be quite complicated because of all the burdensome setup that has to be established in order to simplify the runtime assumptions, and this can be a real nuisance for a learner. Using OpenGL as of version 3.3 or so will make your journey easier, and OpenGL ES 2.0 or 3.0 easier still, letting you avoid the major OpenGL API drift of the programmable-shaders era, which completely changed the game. Before that, there was something referred to as the "fixed function pipeline", and that's ancient history.

  • @gr8bkset-524
    @gr8bkset-524 4 years ago +4

    I grew up using the 8088 in engineering school and upgraded my PCs along the way. I worked for Intel for 20 years after they acquired the multiprocessor company I worked for. I got rid of most of their stock after I stopped working for them and I saw the future in mobile. These days, my W10 laptop sits barely used while I'm perfectly happy with a $55 Raspberry Pi hooked up to my 55" TV. Each time I use my W10 laptop it seems to be doing some scan or background tasks that take up most of the CPU cycles. Old Dinosaurs fade away.

  • @STohme
    @STohme 4 years ago +1

    Very interesting presentation. Many thanks Gary.

  • @denvera1g1
    @denvera1g1 4 years ago +6

    Everyone is comparing ARM and Intel; no one is talking about ARM and AMD, like the Ampere Altra (80 cores) vs the EPYC 7742 (64 cores). The Ampere is 4% more powerful and uses 10% less energy, making it 14.4% more efficient than AMD. But some people might point out that for AMD to compete with an 80-core chip, they have to pump more power into their 64-core chip, which uses disproportionately more energy than the performance it adds. I'll be REALLY interested to see a fully 7nm AMD EPYC, where the space savings from a 7nm IO die instead of a 12nm one make room for two more 8-core CCDs, for a total of 80 cores. Some might argue that if AMD had used a fully 7nm processor with 80 cores, they would be not only more powerful but also more efficient (less energy for that performance).

    • @scaryonline
      @scaryonline 4 years ago +1

      So what about Threadripper Pro? It's 20 percent more powerful than Intel's 58-core Platinum.

    • @denvera1g1
      @denvera1g1 4 years ago

      @@scaryonline Doesn't it use more power than EPYC because of its higher frequency?

    • @gatocochino5594
      @gatocochino5594 4 years ago +3

      I found no independent benchmarks for the Altra CPU. Ampere claims (keyword here) their CPU is 4% more powerful IN INTEGER WORKLOADS than the EPYC 7742. Saying the Ampere Altra is "4% more powerful (...) than AMD" is a bit misleading here.

  • @GaneshMKarhale
    @GaneshMKarhale 4 years ago +1

    What is the difference between mobile processors and ARM processors for desktop?

    • @thebrightstar3634
      @thebrightstar3634 4 years ago +1

      Really good question - next time Gary should do a video on this question you asked 👍👍👍

  • @bigpod
    @bigpod 4 years ago +5

    Nanometer lithography doesn't mean anything when comparing 2 CPUs from different manufacturers, because they define what they measure as lithography differently; i.e. Intel CPUs will have a higher density of components at the same lithography size, and even one size smaller, than another manufacturer like TSMC.

    • @ILoveTinfoilHats
      @ILoveTinfoilHats 4 years ago +1

      Intel's talks with TSMC show that Intel's 10nm is actually similar to TSMC's 6nm (not 7nm as speculated) and Intel's 7nm similar to TSMC's 5nm.

    • @l.lawliet164
      @l.lawliet164 2 years ago

      Not true, the power consumption is different. Intel can have the same density but still use more power, because they have bigger transistors.

    • @bigpod
      @bigpod 2 years ago

      @@l.lawliet164 How does that work if you go by transistor count (that's the density)? Intel can put the same number of transistors in the same space at their 10nm lithography as TSMC's 6nm; they just count different components (a component that consists of multiple transistors).

    • @l.lawliet164
      @l.lawliet164 2 years ago

      @@bigpod That's true, but their transistors are still bigger; that's why you see better power consumption for TSMC even if they both have the same density... This means Intel's packing process is better, but their transistor is worse. Performance can be equal, but consumption can't.

    • @l.lawliet164
      @l.lawliet164 2 years ago

      @@bigpod This actually gives Intel an advantage, because they can get the same performance with a worse component, and more cheaply.

  • @ercipi
    @ercipi 3 years ago

    It's like going to a seminar, but for free! Thanks a bunch.

  • @friendstype25
    @friendstype25 4 years ago +6

    The last school project I did was on the development of Elite. Such a cool game! Also, planet Arse.

    • @klontjespap
      @klontjespap 4 years ago +2

      never heard of that planet, is it close to uranus?

    • @technoman8219
      @technoman8219 4 years ago

      @@klontjespap hah

  • @vamsiprashanth3829
    @vamsiprashanth3829 11 days ago

    @GaryExplains, in the last slide, it is 10nm not 10mn

  • @Flankymanga
    @Flankymanga 4 years ago +3

    Now we all know where Gary's knowledge of ICs comes from... :)

  • @muzhaq5
    @muzhaq5 4 years ago

    Hello Gary! Please clear up my confusion about the iPhone (2007). You mentioned that its SoC was developed by ARM, but the Wikipedia page for the original iPhone says it was actually made by Samsung. I also heard somewhere else that it was made by Samsung.

    • @GaryExplains
      @GaryExplains  4 years ago +2

      Don't confuse design with manufacture.

    • @muzhaq5
      @muzhaq5 4 years ago

      @@GaryExplains Got it. Thank you!

  • @skarfie123
    @skarfie123 4 years ago +19

    I can already imagine the videos in a few years: "The Fall of Intel"... Sad...

    • @WolfiiDog13
      @WolfiiDog13 4 years ago +3

      I don't think it will fall, but it will take a huge hit, especially if the PC market also moves to better architectures.

    • @stefanweilhartner4415
      @stefanweilhartner4415 4 years ago +3

      They need to do some RISC-V stuff, because the x86 world will die. That is for sure; it's just a matter of when. They tried to push their outdated architecture into many areas where they completely sucked. In the embedded world and the mobile world nobody gives a fuck about Intel x86. It is just not suited for those markets, and those markets are already very competitive. Intel just wasted tons of money in that regard. At some point they will run out of money too, and then it is too late.

    • @WolfiiDog13
      @WolfiiDog13 4 years ago +2

      @Rex Yes, legacy support is the reason I don't think they will fail and will just take a huge financial hit instead; nobody is going to immediately change their long-running critical systems just because the new architecture performs better. When stability is key, you can't change to something new like that and expect everything to be fine. But you are wrong to think ARM is just "for people": we have had big servers running on ARM for years now, and it works perfectly fine (performance per watt is way better on these systems). Also, quantum computers will not be a substitute for traditional computing; they're more like an addition with very specific applications, and not everyone will take advantage of them. I think we will never see a 100% quantum system working; it will always be a hybrid. (I could be wrong, but I also can't imagine how you would make a stable system with such a statistics-based computing device; you always need a traditional computer to control it.)

    • @autohmae
      @autohmae 4 years ago +1

      @the1919 coreteks probably already has one ready :-)

    • @autohmae
      @autohmae 4 years ago +1

      @Rex You did see that ARM runs the #1 Top500 supercomputer, right? And Amazon offers it for cloud servers, and ARM servers are being sold more and more.
      Not to mention: AMD could take a huge chunk of the market from Intel. More and more AMD servers are being sold now. All of this reduces Intel's budget.

  • @jonathangerard745
    @jonathangerard745 4 years ago

    Could you please make a detailed video on Moore's Law?

    • @GaryExplains
      @GaryExplains  4 years ago

      I did already: ruclips.net/video/I4yPek19cn8/видео.html

  • @xemtube
    @xemtube 3 years ago

    Skip to 13:30 if you just want to know the main technical difference between Arm and x86; it basically comes down to RISC vs CISC.

  • @lahmyaj
    @lahmyaj 4 years ago

    Gary, can you please explain how virtualization, like running Windows 10 x86-64 through VirtualBox or Parallels, might work on an Apple Silicon Mac? Your mention of x86 microcode makes me think this is actually easier to do than otherwise thought?

    • @GaryExplains
      @GaryExplains  4 years ago +1

      I don't think there will be virtualization of Windows 10 x86 on the new Apple silicon. At best there will be emulation.

    • @lahmyaj
      @lahmyaj 4 years ago

      Gary Explains lol, forgive me, as my knowledge of virtualization and emulation isn't too advanced, but does that basically mean that it'll struggle to run x86-64 Windows 10 or any x86 VMs?

    • @GaryExplains
      @GaryExplains  4 years ago +1

      My current assumption is that it will, as you say, struggle to run any x86/x86-64 VMs because basically it isn't an x86/x86-64 based device. It is designed to run macOS not Windows. If people need Windows then they will need to buy a Windows machine.

    • @lahmyaj
      @lahmyaj 4 years ago

      Gary Explains yeh, interesting. It's more for studying/hobby stuff that I'd wanna run x86-64 VMs, but yeh, it will be interesting to see which Mac model is released first and what info comes out after some reviews are done.

  • @frenchyalicea649
    @frenchyalicea649 4 years ago

    Have you made a vid on Marvell's ThunderX3/latest chip and how it got from XScale to the current design???

  • @cuddlybug2026
    @cuddlybug2026 3 years ago

    Thanks for the video. If I use Windows on ARM on a MacBook (via Parallels), can I download Windows apps from any source? Or do they have to be from the Microsoft Store only?

  • @rene-jeanmercier6517
    @rene-jeanmercier6517 4 years ago +1

    This is an EXCELLENT review for someone who programmed with an Intel 4004 way back when. Thank you so much. Regards, RJM

  • @mr88cet
    @mr88cet 4 years ago

    Hmm... Wasn’t the original IBM PC AT based upon the 80286, not the ‘386?

    • @GaryExplains
      @GaryExplains  4 years ago +1

      Indeed it was. Compaq released the first 386 version.

  • @bigpod
    @bigpod 4 years ago

    Well, if a RISC CPU doesn't have the instructions necessary to do the job, the same job can take 3x or more time to complete. On CISC, more instructions are built in in the first place, they can be more easily emulated, and microcode allows more of them to be loaded in.

    • @jonnypena7651
      @jonnypena7651 4 years ago

      ARM is already preparing a new set of high-performance SIMD instructions; also, Apple's workaround is using dedicated accelerators.

  • @pilabs3206
    @pilabs3206 4 years ago +2

    Thanks Gary.

  • @Ace-Brigade
    @Ace-Brigade 1 month ago

    So if I follow this logic correctly, you could take x86 instructions and compile them down (or decompile them, as it may be) to a RISC instruction set before they run?
    Meaning, couldn't I just compile my x86 software to a RISC instruction set and have complete backwards compatibility with my software?
    Sure, it would take extra compilation up front while the application is being built, but with how fast processors are today I don't imagine that would take more than a few seconds for a large application.

    • @GaryExplains
      @GaryExplains  1 month ago +1

      Yes, that is very roughly how the x86 emulators work on Apple Silicon Macs and for Windows on Arm laptops (i.e. Copilot+ laptops).

    • @Ace-Brigade
      @Ace-Brigade 1 month ago

      @GaryExplains Do they do that at compile time or runtime? Emulators typically do that at runtime, right? I would imagine the overhead would be pretty serious.
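
      For context, such emulators generally do it at runtime rather than when the application is built: each block of guest code is translated once, on first encounter, the result is cached, and later executions reuse the cached translation (Rosetta 2 reportedly also translates ahead of time at install). Here is a toy C++ sketch of that translate-once-and-cache idea; the guest ops and names are invented for illustration, and a real translator emits actual host machine code:

      ```cpp
      #include <cstdio>
      #include <functional>
      #include <unordered_map>
      #include <utility>
      #include <vector>

      enum class GuestOp { Add, Mul };                        // stand-ins for guest (x86) opcodes
      using GuestBlock = std::vector<std::pair<GuestOp, int>>;
      using HostFn     = std::function<int(int)>;             // stand-in for emitted host code

      // "Translate" a guest block once; the returned callable plays the role
      // of the native code a real translator would emit.
      HostFn translate(const GuestBlock& block) {
          std::printf("(translating block of %zu ops)\n", block.size());
          return [block](int acc) {
              for (auto [op, imm] : block)
                  acc = (op == GuestOp::Add) ? acc + imm : acc * imm;
              return acc;
          };
      }

      int main() {
          std::unordered_map<const GuestBlock*, HostFn> cache;  // translation cache
          GuestBlock block = {{GuestOp::Add, 2}, {GuestOp::Mul, 10}};
          for (int run = 0; run < 3; ++run) {                   // translated once, executed thrice
              auto it = cache.find(&block);
              if (it == cache.end())                            // translate only on first sight
                  it = cache.emplace(&block, translate(block)).first;
              std::printf("run %d -> %d\n", run, it->second(5));
          }
      }
      ```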

  • @RonLaws
    @RonLaws 4 years ago +1

    It may have been worth mentioning the NEON extension for ARM in more detail, as it is in essence what MMX was for the Pentium - hardware-assisted decoding of H.264 data streams, among other things (relatively speaking).

    • @az09letters92
      @az09letters92 4 years ago

      NEON is more like SSE2 for Intel. MMX was pretty limited; you couldn't mix in floating point math without extreme performance penalties.

  • @DRFRACARO44
    @DRFRACARO44 4 years ago

    Which is better to learn about if you're trying to become a malware analyst?

  • @elvinziyali5184
    @elvinziyali5184 4 years ago

    Excuse me, I didn't get the point at the end. Is ARM going to design for PCs, or is Apple going to scale it themselves? Thanks for the great video!

    • @Haldered
      @Haldered 4 years ago

      Apple have been designing their own ARM-based chips for a while now for iOS, and will transition macOS and Mac products to run on their own Apple-designed ARM chips. These chips won't be available for non-Apple products, obviously. Meanwhile AMD is outperforming Intel in the rest of the consumer market, and gamers especially are abandoning Intel. Intel are a big company though, so who knows what the future holds.

  • @MarthaFockerMF
    @MarthaFockerMF 3 years ago

    Thanks for the info dude, it's comprehensive, but a little too heavy on the history part. Anyway, it's a great vid! Keep it up!

  • @-zero-
    @-zero- 4 years ago +2

    Interesting video. BTW, can you make a video series on the workings of a CPU? Maybe you made a video about binary, but you never went into full depth on how the adders work their way into making a "CPU".

    • @GaryExplains
      @GaryExplains  4 years ago +1

      Did you watch the videos in my "How Does a CPU Work" series??? ruclips.net/p/PLxLxbi4e2mYGvzNw2RzIsM_rxnNC8m2Kz

    • @-zero-
      @-zero- 4 years ago

      @@GaryExplains Yes, I have watched them. I would like to learn more in depth about what goes on inside the CPU, like how transistors and adders process the binary and work their way up to assembly language.

    • @GaryExplains
      @GaryExplains  4 years ago +1

      That is hardware circuit design and not something I particularly enjoy, I am more of a software person. You would need to find a hardware and/or logic channel for that kind of thing.

    • @GaryExplains
      @GaryExplains  4 years ago +1

      Yes, Ben Eater's channel is a good place to go for that stuff.

    • @ashfaqrahman2795
      @ashfaqrahman2795 4 years ago

      You can take up a course on Coursera called "NAND to Tetris". Basically you build a 16-bit CPU using NAND gates (Part-1) and write minimal software to add life to the hardware using a custom-made programming language (Part-2).

  • @TCOphox
    @TCOphox 4 years ago +2

    Thanks for converting that complex documentation into understandable English for plebeians like me!
    Interesting things I've learnt so far:
    * AMD made direct Intel clones in the beginning.
    * Intel is forced to use AMD's 64-bit implementation because they couldn't develop their own successful one.
    * Intel has made ARM chips and has a license for ARMv6, but sold its ARM division off.
    * Apple had a much longer history with ARM than I expected.
    * Imagination went bankrupt, so Apple bought lots of their IP and developed their own GPUs from there.
    * Apple was the first to incorporate 64-bit into smartphones.

  • @apivovarov2
    @apivovarov2 4 years ago

    What shares should we buy?

  • @stephen7715
    @stephen7715 1 year ago

    Absolutely brilliant video. Thank you for sharing

  • @anandsuralkar2947
    @anandsuralkar2947 3 years ago

    I would be so glad if you made a video about automatic x86-to-ARM porting systems... if there are any.

  • @bruceallen6492
    @bruceallen6492 4 years ago +1

    The 80286 had segmentation, but it did not have paging. I think OS/2 would smoke a cigarette waiting for segment swaps. The 80386 had segment swapping and paging. Paging was what the engineers wanted. Beefier motherboard chips were needed to support DMA while the CPU continued to execute code; 8086 machines smoked during DMA under DOS.

  • @jF-sp8lo
    @jF-sp8lo 4 years ago +2

    Just a quick note on the x86 history: the 8086 was 16-bit and expensive, so in 1979 they released a more affordable 8088 chip that only had an 8-bit data bus (much cheaper than the 8086) and so was more popular. Also, AMD made 8088 4.77 MHz clone chips with a 10 MHz turbo (the first PC I ever built), and clone 8086s (clones started way before your listed 386 clones). I like that you mentioned Cyrix, even though they were so unstable I only built one. AMD64 has to do with patents, not just that they were there first.

  • @Agreedtodisagree
    @Agreedtodisagree 4 years ago +1

    Great job Gary.

  • @galdutro
    @galdutro 4 years ago +1

    If Apple designs a microarchitecture for their higher-TDP devices, by how much do you think they will be able to outperform current Intel CPUs?

    • @GaryExplains
      @GaryExplains  4 years ago

      That is a good question, but unfortunately there are no data points to even make an educated guess.

    • @galdutro
      @galdutro 4 years ago

      Gary Explains does the SPEC2006 used by AnandTech translate well to real-life workloads? Because in that scenario, the A13 core is just 10-15 percent away from the Skylake architecture, while consuming only a fraction of the power.
      I mean... that is a mobile architecture. This makes me kind of hyped for some huge transformation in the PC market with Apple's move to the ARM architecture. Should I be this hyped? I'm hoping that performance increases linearly with the power budget, but there is nothing indicating that this will happen.

    • @galdutro
      @galdutro 4 years ago

      Gary Explains I'm a Windows user. I live in a third-world country and I can't afford a Mac, but I hope the Apple transition to ARM will translate into better cheap ARM-based Windows/Linux machines. All I want is a better user experience. I just hope that it doesn't take too long for the gains the Mac makes to translate into better PCs overall!

    • @GaryExplains
      @GaryExplains  4 years ago

      SPEC2006 is a good data point and at least gives us a direction of travel. I expect the A14 variants to be competitive in whatever Macs Apple releases.

  • @FlaxTheSeedOne
    @FlaxTheSeedOne 4 years ago

    Isn't RISC the acronym for Reduced Instruction Set Complexity, not Computing? It hints at using less complex and therefore less power-hungry instructions (unlike, say, AVX-512 on Intel's side). The whole point of RISC is to reduce instruction set complexity and maybe use more instructions for a given program, but execute them way more efficiently, i.e. use 5 instructions at 1 nW each compared to 1 instruction using 20 nW, further allowing clock speeds to rise as the traces should not be as long, etc.

    • @GaryExplains
      @GaryExplains  4 years ago +1

      The opening paragraph of Patterson's paper on RISC says very clearly, "The Reduced Instruction Set Computer (RISC) Project investigates an alternative to the general trend toward computers with increasingly complex instruction sets..." I also have a whole video on CISC vs RISC here ruclips.net/video/g16wZWKcao4/видео.html

  • @bsipperly
    @bsipperly 4 years ago

    Hi Gary, does this mean that Intel will create a straight-up RISC processor to compete with the Apple M1 chip? Better speed-to-power ratio?

  • @antonio6140
    @antonio6140 3 years ago

    So just to make sure/a few questions:
    - x86 can run 32-bit and 64-bit apps/programs
    - x64 can only run 64-bit apps/programs
    - x86-64 is... what exactly?
    - ARM can run ARM32 and ARM64 apps/programs? Or is there no 32/64 difference in ARM?
    - M1 Apple silicon can run Intel64 apps through Rosetta, however it's impossible to emulate an entire 32-bit/64-bit operating system (either through virtual machines or Boot Camp)? Also, is Intel64 the same as x86-64? What about AMD64?

    • @IskeletuBr
      @IskeletuBr 3 years ago

      x64 isn't another architecture, it's the same x86 but 64-bit.

  • @official_ashhh
    @official_ashhh 1 year ago

    Brilliant explanation of x86 vs RISC architectures.

  • @ashwani_kumar_rai
    @ashwani_kumar_rai 4 years ago

    Hello Gary, can you make some videos on operating systems, but at the lower kernel level - how things work: bootstrap, how drivers interact, how the GUI is built on top of the windowing system?

  • @gabboman92
    @gabboman92 4 years ago

    Gary, I wonder if you could design a speed test, like the one you made for the new and old CPUs, but for the ARM Mac when it's released.

    • @GaryExplains
      @GaryExplains  4 years ago +1

      Yes, I think something like that might be possible, maybe I already started working on it! 🤫

    • @gabboman92
      @gabboman92 4 years ago

      @@GaryExplains do you have a kit? i wonder if gaming on it will be viable *java minecraft*

    • @GaryExplains
      @GaryExplains  4 years ago

      @@gabboman92 No I don't have a transition kit, but if I was theoretically working on such a tool then it would run on Windows, Linux, and macOS, on x86, x86-64, and ARM64. All the pieces necessary for that are available, except for ARM64 macOS.

  • @bibekghatak5860
    @bibekghatak5860 3 years ago

    Nice video, and thanks to Mr Gary for the enlightenment.

  • @josefheinl2797
    @josefheinl2797 4 years ago +1

    Key differences at 11:41

  • @jeffk412
    @jeffk412 4 years ago

    at 11:02... who's doggo in the background?

  • @chillidog8726
    @chillidog8726 4 years ago

    Hey, what do you think about RISC-V macro-op fusion and its importance for RISC-V to eventually replace x86 and ARM for open sauce (maybe)?