Intel Wants to KILL 32-bit Mode in its CPUs - X86-S Explained

  • Published: 4 Nov 2024

Comments • 1K

  • @GaryExplains · 1 year ago · +7

    Get 25% off Blinkist Annual premium! Start your 7-day free trial by clicking here: blinkist.com/garyexplains

    • @Vampirat3 · 1 year ago · +1

      THIS IS A HUGE OVERSTEP OF POWER AND CONTROL. AS AN AMERICAN I AM LOSING SECURITY BY HANDING IT OVER TO INTEL.
      IT'S ALL COMPROMISED, SAVE YOURSELF.

    • @Muhammad-sx7wr · 1 year ago

      What's been holding Intel back is its hidden spying instruction sets, which run below ring 0. If it gets past having to maintain those, it will be able to really rebuild the architecture and make it more energy efficient. Otherwise there's a huge concern whether Intel will be around at all.

    • @JohnPMiller · 1 year ago

      6:36 "You can't get a 64-bit version of Windows 11, for example." Yes, you can.
      Great video, as always! Thanks, Blinkist, for sponsoring!

    • @Vampirat3 · 1 year ago

      @@JohnPMiller I mean, just... but still, all in all, one will be made to replace it, because (1) its structural integrity and privacy are deeply compromised, and (2) the Win 11 UI is just bleh...
      And I base that on full control of settings and the ease of hidden panels and background programs beyond user control, etc., as if you were an OS engineer, not a user or a company...
      I just push for 32-bit support because of developer accessibility.
      Yes, great video, thank you!

    • @petevenuti7355 · 1 year ago · +1

      You said you would have a link explaining real vs virtual memory‽
      And if they do away with 16- and 32-bit modes,
      how is OS/2 supposed to work then? (Half kidding)

  • @Winnetou17 · 1 year ago · +272

    Fun fact: because of the 16-bit modes, you can still boot into MS-DOS and/or FreeDOS on any modern x86 CPU. Well, if I'm not mistaken you still need legacy BIOS boot, the MBR kind, and the keyboard has to be PS/2 or emulated, not strictly on USB. At least for MS-DOS. FreeDOS, I think, has a few extra compatibility features.

    • @marcusk7855 · 1 year ago · +9

      You'd only run DOS on an emulator or a legacy PC anyway, wouldn't you?

    • @craigmurray4746 · 1 year ago · +18

      You definitely can use a USB keyboard with FreeDOS, I have done so myself. But the other requirements are correct, yes: CSM mode on in the firmware, MBR partition and so on.

    • @ryanjay6241 · 1 year ago · +18

      Lol yeah, you can still find people on YouTube doing silly things like using an i9 and an ISA adapter for a Sound Blaster, booting modern machines into DOS to play old games. Although it's interesting, I just prefer to keep a few semi-collectable old machines around for when I want a retro experience.

    • @coder_rc · 1 year ago · +2

      Yep

    • @neonlost · 1 year ago · +5

      @@marcusk7855 I've run FreeDOS to flash RAID cards with a different BIOS. Probably could do it in a UEFI shell somehow

  • @turosianarcher8771 · 1 year ago · +154

    One interesting point that Gary didn't make is the reason that AMD was the one to come out with 64-bit x86 extensions instead of Intel, and that this is not the first time Intel tried to leave compatibility modes (and maybe x86 clone companies?) behind. At the time, Intel and HP were pushing their VLIW (corrected thanks to James Duncan below) architecture called Itanium as a 64-bit solution and letting their x86 lineup remain at 32-bit for the consumer market, and Microsoft created a version of Windows to support the new Itanium 64-bit architecture for Intel & HP. Around the same time, AMD (who had an x86 license but NOT an Itanium license) had no choice but to continue to develop x86 or face irrelevancy. Thus, the AMD64 architecture was born, backwards compatibility was preserved, and Microsoft built the 64-bit version of Windows for this x86 64-bit standard. When Itanium stagnated and AMD64 gained traction in the market as a compatible alternative, Intel had to do something, and was rumored to be coming up with its own extensions to x86. To add insult to injury (if I remember correctly), after the costs of building an Itanium version, Microsoft was reluctant to build *another* x86-64 version for Intel's proposed 64-bit x86 competitor, forcing Intel to make its processors compatible with the AMD64 instructions for 64-bit x86 going forwards.
    I think that removing backwards compatibility mode, so long as the ability to use older programs in emulation is sufficient, will probably be good for the average consumer. If it isn't, though, be sure that AMD will be there again to capitalize on what Intel leaves on the table. If it makes sense, great: we all get better, faster machines. If it doesn't make sense, someone will be there to "retain compatibility because people want it" and gain market share because of it. Either way, it'll work out.

    • @jaimeduncan6167 · 1 year ago · +1

      He did, and Itanium was not RISC. You should watch the video again, review the history of Itanium, and understand what the architecture was about and why it did not perform.

    • @Abeldigital3210 · 1 year ago · +5

      But Itanium wasn't a perfect solution either. If I remember correctly, its ISA had very long instructions, which made it difficult to optimize programs at compile time. And the following I'm not sure about, but I think it included hardware x86 emulation, though so slow and inefficient that it was not worth it.

    • @GaryExplains · 1 year ago · +21

      I clearly said it was AMD that invented 64-bit x86.

    • @jameshogge · 1 year ago · +3

      Itanium was a VLIW architecture. Also, in fairness to Intel, the consumer space at the time really could not use more than a 32-bit address space.

    • @johndoh5182 · 1 year ago · +1

      Yeah, he did.

  • @jameshogge · 1 year ago · +38

    Lots of people are talking about compatibility with older operating systems, but that is already a lost battle:
    With the likes of UEFI, GPT disks, and Secure Boot, it is already impossible to boot the really old OSes (like DOS) in 16-bit mode. With the Management Engine on modern CPUs, it is also often impossible to flash custom firmware and get around this.
    At this point, the legacy boot modes are already mostly useless. The Management Engine was the thing to complain about, as that actually prevents you from running unsigned firmware.

    • @GaryExplains · 1 year ago · +7

      Exactly 👍

    • @edelzocker8169 · 1 year ago · +3

      Most systems still support legacy boot, and many OSes don't support UEFI boot by default

    • @kami4542 · 1 year ago

      "it is already impossible to boot the really old OSes (like DOS) in 16-bit mode." No it's not (not really, to be precise). Just a few examples:
      ruclips.net/video/bS9hiSwL1KY/видео.html
      ruclips.net/video/YAHbJ_S2a6E/видео.html
      ruclips.net/video/WcRtNnd8lFs/видео.html

    • @REAL-UNKNOWN-SHINOBI · 10 months ago

      @@edelzocker8169 Some people forget that this exists.

  • @suki4410 · 1 year ago · +63

    Fun fact: I worked in a small software house just before the millennium. They used Clipper as the software for their video customers. We were in a panic, because at the time the motherboard producers wanted to switch to modern standards with only USB and no PS/2 or serial connection. We used normal desktop computers as billing points.

    • @MrThomashorst · 1 year ago · +3

      yeah... that brings back memories from the old days... reminds me of the great "Nantucket Tools 2" library 😆

    • @ssl3546 · 1 year ago · +3

      bro, you can get a Ryzen 7950X motherboard TODAY with a PS/2 port, I don't know what you're talking about.

    • @p0358 · 1 year ago · +1

      @@ssl3546 and with a COM header too, and you need a card with a ribbon cable to connect to it

    • @richard.20000 · 1 year ago · +3

      ARM Cortex-X3 core: 19% higher IPC than AMD Zen 4, 17% higher IPC than Intel Raptor Lake.
      New Cortex-X4: 33% higher IPC than AMD Zen 4, 24% higher IPC than Intel Raptor Lake.
      How is x86-S going to help with that?
      I'd expect x86-S to get rid of variable-length instruction encoding (to solve the combinatorial explosion that comes from needing to decode multiple instructions in parallel). RISC has a 32-bit fixed instruction length, so the CPU knows exactly where all instructions start. An x86 instruction can be 1 byte up to 15 bytes, so the 10th instruction could begin anywhere from the 10th byte up to the 136th byte = 127 possibilities. This is the worst pain of x86, it hurts CPU efficiency, and apparently it's not being solved? Crazy.
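
The decode problem described above can be sketched with a toy model (hypothetical instruction lengths, not real x86 encodings): with a fixed width, every instruction's start offset is known up front, while with variable widths each start depends on the length of every prior instruction.

```python
# Toy model of the parallel-decode problem (not real x86 encodings).

def fixed_starts(n, width=4):
    """Fixed-width ISA: instruction k starts at k*width -- all starts known at once."""
    return [k * width for k in range(n)]

def variable_starts(lengths):
    """Variable-width ISA: each start depends on decoding every prior length."""
    starts, offset = [], 0
    for length in lengths:
        starts.append(offset)
        offset += length
    return starts

print(fixed_starts(4))                 # [0, 4, 8, 12]
print(variable_starts([1, 15, 3, 7]))  # [0, 1, 16, 19]

# With 1..15-byte instructions, the 10th instruction (index 9) can start
# anywhere from byte offset 9 up to byte offset 135.
assert variable_starts([1] * 10)[9] == 9
assert variable_starts([15] * 10)[9] == 135
```

A real decoder can't scan sequentially at speed, so it speculatively decodes at many candidate offsets and throws most of the work away, which is the cost the comment is pointing at.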

    • @veselinnikolov84 · 1 year ago · +3

      Fun fact: Windows executables still have an MS-DOS header at the beginning of the file! 😀
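
That fun fact is easy to check programmatically. A minimal sketch (parsing a hand-built header blob rather than a real .exe, so it stays self-contained): every PE file begins with a real-mode DOS "MZ" stub, and the 32-bit field at offset 0x3C (e_lfanew) points to the "PE\0\0" signature.

```python
import struct

def pe_offset(data: bytes) -> int:
    """Validate the DOS 'MZ' stub and return the offset of the PE signature."""
    if data[:2] != b"MZ":
        raise ValueError("not a DOS/PE executable")
    # e_lfanew: offset of the PE header, stored at 0x3C in the DOS header.
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("no PE signature")
    return e_lfanew

# Hand-built blob: 'MZ', zero padding, e_lfanew = 0x40, then 'PE\0\0' at 0x40.
blob = b"MZ" + b"\x00" * 0x3A + struct.pack("<I", 0x40) + b"PE\x00\x00"
print(hex(pe_offset(blob)))  # 0x40
```

On a real Windows .exe the DOS stub also contains runnable 16-bit code that prints "This program cannot be run in DOS mode."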

  • @pychang · 1 year ago · +167

    This probably won't affect most end users, but it's kind of big news in computer history.
    Btw, I didn't know that CPUs nowadays still boot into 16-bit mode first and then transition to 64-bit. Great content as always! 😄

    • @AceWing905 · 1 year ago · +20

      You'd be surprised at what sort of apps turn out to still be 32-bit
      Even Steam is 32-bit, for example
      EDIT: Okay, if compatibility mode stays, this won't be a problem

    • @LunaticEdit · 1 year ago · +19

      @@AceWing905 The only reason Steam is 32-bit on Windows is that they don't need to upgrade it. It's been 64-bit on Mac for years and works just fine. The day 32-bit support is dropped, you'll see Steam release a 64-bit client.

    • @AceWing905 · 1 year ago · +1

      @@LunaticEdit Makes sense
      But I misunderstood originally anyway
      If compatibility mode stays, then Steam can remain 32-bit

    • @volodumurkalunyak4651 · 1 year ago · +10

      1. SMM is currently entered in 16-bit real mode. The very first thing it does is switch to 32-bit protected mode and then to 64-bit long mode.
      2. If UEFI boot is used, the CPU starts the firmware in 16-bit real mode; the firmware switches to 32-bit mode, then 64-bit mode, and only then loads the OS. 16-bit mode is used once.
      3. If legacy boot is used (MBR+CSM), the BIOS enters 16-bit mode before loading the OS. MS-DOS or another 16-bit OS can be used, or the OS loader can switch back up to 32- and 64-bit mode.
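
The ladder in points 1-3 can be pictured as a tiny state machine (a toy illustration, not an architectural model); the x86-S proposal would effectively start the walk at the 64-bit end.

```python
# Toy model of the boot-time mode ladder on today's x86 (illustrative only).
NEXT_MODE = {
    "reset":        "real-16",       # the CPU comes out of reset in 16-bit real mode
    "real-16":      "protected-32",  # firmware enables protected mode
    "protected-32": "long-64",       # firmware enables paging and long mode
    "long-64":      None,            # the 64-bit OS runs here
}

def boot_path(start="reset"):
    """Walk the mode ladder from `start` until the final mode is reached."""
    path, mode = [], start
    while mode is not None:
        path.append(mode)
        mode = NEXT_MODE[mode]
    return path

print(boot_path())  # ['reset', 'real-16', 'protected-32', 'long-64']
```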

    • @valenrn8657 · 1 year ago · +3

      @@volodumurkalunyak4651 UEFI Class 3 removes CSM. Intel wanted to enforce UEFI Class 3.
      Starting from the 10th Gen Intel Core, Intel no longer provides Legacy Video BIOS for the iGPU (Intel Graphics Technology).
      My AMD X670E motherboard still has CSM and AMD RDNA2 IGP still has Legacy Video BIOS for CSM.

  • @peteblazar5515 · 1 year ago · +58

    I think the main reason is the "soon" expiring patents on the original x86_64; new patents for x86_64-S will block market entry for other players. All the legacy instructions have probably been executed in microcode for decades anyway.

    • @gregoryreimer869 · 1 year ago · +23

      Doubtful, since x86 and x86_64 are both out of patent already: x86 by a stupid margin, and x64 by, I think, 4 years. There is a ton of newer tech made in the last 20 years that's covered by patent without needing this, though (both Intel and AMD have a bunch of extensions that you couldn't add).
      Still, there's nothing stopping a person from making a processor right now, and there are plenty who make older ones for industrial use, just nobody who wants to put in the billions to only potentially compete in the desktop/server market in the years it would take to try and catch up. Even VIA, with a golden ticket to compete in the market, has all but given up.

    • @EDARDO112 · 1 year ago · +5

      I think that's bullshit. All computers we have now use 64-bit, 32-bit is almost useless, and taking it out of the chip will free space so they can make better chips. Those relying on 32-bit can just buy older chips.

    • @soylentgreenb · 1 year ago · +10

      You don't understand. 64-bit code can use 32-bit instructions and 16-bit instructions. There are advantages such as smaller instruction and data size, which is why 64-bit compilers emit large numbers of 32-bit instructions.

    • @oj0024 · 1 year ago · +4

      I've heard people say that it might make internal validation easier for intel.

    • @CompatibilityMadness · 1 year ago · +4

      @@soylentgreenb And those (probably) can still be used under 64-bit compatibility mode.
      x86-S simply means you won't be able to BOOT into an 8/16/32-bit OS (DOS/Windows) that used those instructions natively.
      Think of it like this: a driver packing stuff onto a bike or into a lorry.
      It doesn't matter what you want to pack; a lorry can take whatever a bike can (I assume here that Intel won't "cut out" 16-bit/32-bit instruction support).
      And from what I understood from the video, Intel simply removes the part where our driver has to hop on a bike to get to his lorry at the start of the work day. x86-S means the lorry is parked outside the driver's house, and no bike is needed. It has nothing to do with what you can pack into the bike or lorry (again, I hope).

  • @mc10guru · 1 year ago · +39

    Ahoy, thanks for the video. Remember, this is bleeding-edge stuff. In the Real World (R), such as banking and industry, 16-bit isn't even dead, as some things still run on it (especially industrial machines). Before I retired in 2011 I was still servicing 32-bit Warp Server computers, mostly at banks, investment firms and retail stores. I suspect I'll be fertilizer before this comes to full fruition. Cheers, daveyb

    • @LordTails · 1 year ago · +2

      Doesn't surprise me since coding is the same. A lot of times you deal with code that is probably nearly as old as the developers charged with maintaining it. COBOL is a great example of old code that is still used, to my understanding a lot more so in banking and finance. It wouldn't surprise me if 32-bit was still a thing when I'm retired.

    • @triadwarfare · 1 year ago · +6

      I don't think they use the same processors. Mainframe apps are powered by and run on a mainframe server. The terminal that you use to interface with the mainframe app doesn't need to run in 16-bit. 32-bit support will still remain.
      If some industries still want to keep their 8- and 16-bit programs, maybe they deserve to go the way of the dodo. They had 30 years to update their programs and chose not to? Just pure laziness.

    • @AFnord · 1 year ago · +2

      @@triadwarfare Say that to every university that needs to keep its legacy machines running because it simply can't afford to upgrade. When I studied chemical engineering, my university still had analysis machines from the early '80s and was desperate to keep them in running order, because a new one would set them back close to €1 million

    • @yuan.pingchen3056 · 1 year ago

      I'm sure my nutrition quality will be better than yours.

    • @verygoodbrother · 1 year ago

      @@AFnord Saying €1 million is meaningless if it cost €2 million to maintain the old gear.

  • @jeffnew1213 · 1 year ago · +48

    Just consider that BIOSes and UEFI firmware will have to change, as will all hypervisors which offer virtualization of and on these new processors. It's going to be a long, long transition.

    • @valenrn8657 · 1 year ago · +1

      UEFI Class 3 has removed CSM.
      Starting from the 10th Gen Intel Core, Intel no longer provides Legacy Video BIOS for the iGPU (Intel Graphics Technology).

    • @jameshogge · 1 year ago · +13

      I'm not sure it will.
      - BIOS is essentially gone with the current generations
      - UEFIs are all specific to the current hardware, i.e. new CPUs will roll out at the same time as new, compatible UEFIs that use these features
      - Hypervisors shouldn't need any more attention than operating systems, and all relevant, modern ones are probably already operating in 64-bit

    • @DanTheMan827 · 1 year ago

      @@jameshogge but will this affect the ability of Windows to run 32-bit apps?

    • @anonymousnearseattle2788 · 1 year ago · +5

      @@DanTheMan827 The table around 3 minutes showed that Long Mode/Compatibility Mode will still exist. 64-bit Windows will continue to run 32-bit applications.

    • @_--_--_ · 1 year ago · +12

      @@DanTheMan827 No, a 64-bit OS can still run 32-bit applications, and a 64-bit hypervisor can still host 32-bit OSes through the normal compatibility mode.
      The only thing that changes is that the CPU can no longer boot into a 32-bit OS natively or boot into a 32-bit hypervisor.
      So it's absolutely irrelevant, considering Win11 doesn't even have a 32-bit edition anymore, and anyone who would want to run a 32-bit hypervisor on modern hardware should maybe see a doctor anyway.
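
A quick way to see which side of that split a program sits on is to check its own pointer width: a 32-bit process reports 32 even on a 64-bit OS, because it runs in compatibility mode (WOW64 on Windows). A minimal sketch:

```python
import struct
import sys

# Pointer width tells you whether *this process* runs as 32- or 64-bit code,
# independent of the OS bitness (a 32-bit process on 64-bit Windows is WOW64).
bits = struct.calcsize("P") * 8
print(f"this process is {bits}-bit")

# sys.maxsize agrees: 2**31 - 1 in a 32-bit process, 2**63 - 1 in a 64-bit one.
assert (sys.maxsize > 2**32) == (bits == 64)
```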

  • @GuruEvi · 1 year ago · +9

    The question is really whether it makes sense to switch to x86-S rather than going straight to ARM. The only reason Intel is still big is Windows, with its 16- and 32-bit-era software still commonly in use (e.g. Excel), which has things like OLE and DDE from Windows 3.0 that much of the software sold today still hasn't moved away from (unlike on the Mac, nobody is forcing them to modernize).

    • @GaryExplains · 1 year ago · +5

      Are you running software today on Windows 10 that uses 16-bit OLE stuff from Windows 3?

    • @volodumurkalunyak4651 · 1 year ago · +10

      1. 32-bit program support on a 64-bit OS is not removed in x86S.
      2. 16-bit program support is not present on any 64-bit Microsoft OS anyway.
      3. 32-bit OS support and legacy boot support (which relies on 16-bit support) are removed in x86S.
      Change any software???
      Anything but the OS itself can stay the same.

  • @Akck67 · 1 year ago · +10

    Gary, what are the benefits of a 64-bit-only CPU? Would it allow them to save on die size by removing some transistors? And with modern SSD-based systems booting pretty fast, would this make a noticeable difference in boot times?

    • @renegade4dio · 1 year ago · +5

      I'd say yes to the first question and probably no to the second. The CPU only spends a couple of seconds in 16-bit mode before it switches to 64-bit. I doubt the difference will be noticeable without a stopwatch.

    • @volodumurkalunyak4651 · 1 year ago · +8

      Removing the 16-bit and 32-bit modes (except the mode for running 32-bit programs on a 64-bit OS) does simplify the instruction decoder. That could help reach higher clock speeds at the same voltage and power.

    • @TheEVEInspiration · 1 year ago · +5

      @@volodumurkalunyak4651 Not only that, it simplifies and streamlines the rest of the processor too, as there is less "state" to keep track of.
      Quite a bit of the old stuff will be handled by microcode and/or extra internal op-code space (read: more wires), so an increase in overall power efficiency can also be expected.
      Beyond the direct impact, I suspect it allows more design freedom too, as certain backward-compatibility issues are no longer things to worry about and thus won't hold back the use of better design options. They must have run into those all the time in the past: figuring out something clever but never getting to use it due to that compatibility demand.

    • @D9ID9I · 1 year ago · +3

      It would allow them to do new marketing.

    • @marcogenovesi8570 · 1 year ago

      It's not about boot times; it's about wasting less silicon on running old instructions. Supporting all the modes means you must have hardware (or instruction translation of some kind) dedicated to that.

  • @billy65bob · 1 year ago · +7

    As long as I can still play 16-bit games from that awkward period where they're too new for DOS and too old for 32-bit Windows, I'm happy.

    • @Longlius · 1 year ago · +1

      You'd probably be better off using something like DOSBox or 86Box for that kind of stuff, tbh; modern computers aren't going to have really good Sound Blaster compatibility

  • @Crossfire2003 · 1 year ago · +3

    I am all for the x86-S mode implementation!
    Apple dropped the 16-bit & 32-bit modes years ago!

  • @altmindo · 1 year ago · +8

    I think x86-S is a good idea without a hidden business agenda. It would ease the design of x86-64 CPUs from scratch (not that there are a lot of new ground-up designs now)

    • @valenrn8657 · 1 year ago · +1

      It's an Intel initiative, not AMD's.

  • @4olovik · 1 year ago · +16

    Interesting: how much die space will this save on x86 CPUs? How much energy is wasted? Intel wants to catch up with the others and is going to redesign its architecture.

    • @UncleKennysPlace · 1 year ago · +3

      Intel certainly knows how to create heat; some efficiency gains may be realized, but I doubt they'll get the power/performance levels of AMD.

    • @Winnetou17 · 1 year ago · +11

      I think it will be very insignificant. It's just cleanup and a few milliseconds faster boot time

    • @DaveVT5 · 1 year ago · +2

      I had a similar question to @4olovik's: is this freeing up die real estate? Also, what implications does this have for the other extensions added over the years, such as MMX and SSE? It seems like the benefits should be more than skipping a few legacy boot modes, as that's a technique honed over nearly 40 years!
      As always, love the content, Gary!

    • @haukikannel · 1 year ago · +5

      Every bit matters!
      Even a 2-cent saving will give millions to Intel!

    • @repatch43 · 1 year ago · +18

      Almost nothing. This isn't about die savings, but more about engineering resources. Every time they 'add' something they have to do a ton of engineering work to ensure all the legacy crap still works. It's a waste of time considering all that legacy crap isn't being used by anyone anymore.
      I used to work for a GPU company. To this day there are legacy blocks in modern GPUs that aren't even enabled in software anymore. Things like video encoders for codecs not used anymore. They remain on the die simply because the engineering effort to remove them is not worth it. The people who wrote that code are LONG gone (either literally, or have moved on to other projects and don't remember what they wrote 20 years ago) so trying to requalify everything after removing those things, and fixing the multitude of unintended consequences just isn't worth it. It was easier to just remove driver support for this hardware, so they don't have to SUPPORT it anymore, and leave it wasting a very small amount of die space.
      Intel obviously has figured the time has come where the effort in removing this legacy crap is actually worth doing. Most of that is because they've never deprecated this legacy crap, they still need to ensure it all works, even though nobody uses it.

  • @davidg5898 · 1 year ago · +3

    I'm all for it. CPU design/manufacture has reached a point where generational differences in power efficiency and speed come from how well every last nanometer of the die is used.
    The opposition is mainly "but what about..." exceptions -- like old industrial machines, retro uses, etc. -- that ignore just how niche and small those markets are, and that the majority of those can already be handled with VMs and emulators/translation layers, and even more such programs will be created during any transition period with even greater compatibility and performance.

    • @BM-jy6cb · 1 year ago · +2

      Other manufacturers will still license and produce 386 and 486 cores for those markets, just as they still produce Z80s and 6502s nearly 50 years on.

    • @D9ID9I · 1 year ago

      It makes no sense unless they do a full instruction set refactoring.

  • @ben-and-maffy · 4 months ago · +1

    It will probably speed things up, and possibly make the chips less expensive? Emulation of the legacy modes is still an option.

  • @MonochromeWench · 1 year ago · +5

    This has been a long time coming, with Intel already getting rid of the UEFI CSM on its boards. No CSM means no MBR booting, so really only modern 64-bit operating systems can be used.

  • @combat.wombat · 1 year ago · +14

    Did Intel talk about how much die space might be saved (if any) from cutting out the older modes?

    • @valenrn8657 · 1 year ago · +3

      Intel P-cores are huge when compared to AMD Zen 4.

    • @jameshogge · 1 year ago · +7

      I doubt there will be anything meaningful at all. These are all relatively small changes to control structures (whereas the big die space consumers are memory structures & accelerators that have many duplicates of the same logic).

    • @rosomak8244 · 1 year ago · +1

      @@jameshogge Wrong. Even leaving the caches aside, it will simplify the instruction decoders significantly.

    • @ericnewton5720 · 1 year ago

      @@jameshogge Yeah, I disagree with that assertion. I bet there's a decent amount of die space built to preserve the old structures, just like when I'm working on 10-year-old software: there are significant amounts of old code superseded by new code, but the old is kept to keep customer X running.

    • @jameshogge · 1 year ago · +3

      @@ericnewton5720 I see two problems with this:
      1. When comparing to software, the things that are large in hardware are small and the things that are small in hardware are large. Take, for example, a section of code with lots of control flow. Each conditional can be broken down into a few gates and you end up with a relatively small logic circuit. Now allocate an array. This is just one line of code but in hardware you have to generate an enormous structure to back it up.
      The big consumers of die space inside processor cores are largely all memory structures: the branch predictor, the reorder buffer, L1 caches, micro-op caches, the TLBs, the register files, load and store queues.
      2. The legacy modes can largely reuse the same hardware. They offer subsets of the 64bit ISA. In the decoder, this may manifest itself as a single control signal that masks off the 64bit instructions when not in that mode. Given that x86 uses variable width instructions, it might not even require that.
      I think the biggest changes are likely to be around address generation for loads/stores as these legacy modes come with different addressing modes. These are, however, a couple of small functional units and all of the large memory structures will either be unused or used but unchanged. Intel/AMD would be mad to build in separate copies of them.

  • @i00Productions · 1 year ago · +10

    I am quite sure that, technically, UEFI is only 32/64-bit at boot. However, a lot of UEFIs also include, or are built upon, traditional BIOSes, partly for compatibility and partly because you can easily get hold of these from certain companies (e.g. AMI) rather than having to make one from scratch. Also, I'm quite sure you can write a boot loader for a UEFI that supports 16-bit, but initially it will have to execute in 32-bit mode. On another note, I think the discontinued Itanium range from Intel was 64-bit only. Also, just a simple mistake at 6:30: you state that "you can't get a 64-bit version of Windows 11" 😛!

    • @alcorza3567 · 1 year ago · +2

      Well, it was probably a slip-up. Gary likely meant you can't get a 32-bit version of Windows 11; there's no separate "64-bit version" to ask for because it's the only version. Semantics... But you're right, I think he actually meant to say 32-bit Windows 11.

    • @i00Productions · 1 year ago

      @@alcorza3567 I realise that; that's why I said it (the slip-up) was a simple mistake. The other stuff was my focus

  • @shaurz · 1 year ago · +31

    It's a step in the right direction, but they should really bite the bullet and introduce a new encoding: drop the segment registers, legacy floating-point instructions and registers, legacy descriptor tables, obsolete instructions, etc. ARM did this with ARM64, which is essentially a completely new instruction set compared to 32-bit ARM.

    • @khalidacosta7133 · 1 year ago · +2

      That's probably because legacy 32-bit ARM products are still being produced... once AMD/Intel go full x64, there will be no more 32-bit processors for legacy products... but then I suppose they could just make legacy products...

    • @andreimiga8101 · 1 year ago · +23

      One of the key selling points of x86 is backwards compatibility. The only reason such an ugly architecture has survived for 45 years is the massive amount of software written for it. The x64 architecture itself is already 20 years old, and still contains a lot (although not all) of the obsolete stuff in the architecture. Changing it would render all existing x86 software inoperable. x86S still does a good job of removing a lot of unused stuff and cleaning up the opcode table.
      The x86 PC is, unlike Android or Apple's ecosystems, open (thank goodness). You can't just come and say: all software will have to update or it won't work on the next CPUs. Something like that can work in an ARM ecosystem, where the device's manufacturer controls what applications can be installed. But not on x86.
      Both the 32- and 64-bit modes are gonna be stuck with us for quite a long time. But this new x86S pretty much does what you point out: segment registers are not used, and descriptor tables are also not used with the new FRED event delivery. The legacy floating-point instructions are actually used quite a lot (even in 64-bit code, because they implement the 80-bit type in hardware), so they can't be removed.

    • @LA-MJ · 1 year ago

      What does this mean for tools like memtest?

    • @Henfredemars · 1 year ago

      I think it's a matter of the golden handcuffs of upward compatibility.

    • @lawrencedoliveiro9104 · 1 year ago · +1

      What you’re asking for is called “RISC-V”.

  • @erascarecrow2541 · 1 year ago · +4

    As a proof of concept they really need to build x86-S as they envision it, but emulate everything they want to take out in software, and see what works and what breaks when you take it away.
    Though how that will work with VirtualBox I don't know, as VirtualBox uses a different permission mode and runs on the hardware, so it might mean having to emulate the older 16/32-bit instructions for older OSes.
    Another option might be to include a JIT recompiler that recompiles 16/32-bit code into native 64-bit code and then just runs it there, much like PCSX and certain chips in Japan did.

    • @cfusername · 1 year ago · +1

      As far as I understand, the CPU will still be able to execute old code in 64-bit long mode without any fundamental changes, though maybe less efficiently (no memory advantages). I think most of these changes just get rid of a lot of legacy stuff that doesn't make sense anymore and/or hasn't been used for decades. I would assume these changes are not that fundamental for CPU or board manufacturers to implement. This is more a "behind the scenes" kind of cleanup operation that won't really change much for the user.

    • @erascarecrow2541
      @erascarecrow2541 Год назад +2

      @@cfusername Yeah, I can see a number of instructions that work the same regardless (the CPU pretends it's 32-bit when it's just 64-bit, because it makes no difference for 32-bit code), unless you wanted to do interesting bswap opcodes to get extra register space.
      The only real issues I can think of would be the immediates used, which can be in [memory] locations, or a value applied to a register. It comes down to how much it can reduce complexity, which saves space, which makes it faster or lets them put more cores on one die.
      Though under the hood with microcode, the raw CPUs likely aren't changing much and are already 64-bit with the microcode doing the work; so being able to rip out a basically unneeded layer would let them save room for more important new instructions.

  • @felicytatomaszewska
    @felicytatomaszewska Год назад +5

    06:37 I guess you wanted to say you can't get 32-bit version of Win-11.

    • @GaryExplains
      @GaryExplains  Год назад +2

      🤪 Ooops!

    • @felicytatomaszewska
      @felicytatomaszewska Год назад

      @@GaryExplains I have a question: what advantage shall we get from it? Will it result in more performance, stability, or more transistors or cache on the CPU?

  • @shaurz
    @shaurz Год назад +9

    Unfortunately they are still keeping the GDT, LDT, IDT and segment registers since they can't get rid of that without breaking existing operating systems.

  • @ChrisM541
    @ChrisM541 4 месяца назад +1

    Yup, maintaining legacy is a pain in the ass, holding back 'real' advancement.

  • @judgewest2000
    @judgewest2000 Год назад +4

    How have they not made this transition like 10 years ago!?

    • @sarowie
      @sarowie Год назад

      Intel wanted to launch a platform called itanium. Ever heard of it?
      Ever wondered why x64 is also called AMD64?

    • @judgewest2000
      @judgewest2000 Год назад

      @@sarowie That's a dumb answer - do you want to delete your response? There's a difference to making a CPU so different it requires compilers and OS's to perform magic tricks to make code work at speed with no compatibility of anything before it versus something that's more of an evolutionary step on the x86 road. And yes I'm aware of the history of AMD64.

  • @AdrianGlowacki
    @AdrianGlowacki Год назад +1

    I have a question. Will it be possible to install 16/32 bit operating systems on a virtual machine (VirtualBox, VMware...) on host with X86-S processors? Will I need to disable Intel VT or AMD-V in a virtual machine to run a 16/32 bit system?

    • @edelzocker8169
      @edelzocker8169 Год назад

      No, because the VM software will emulate the hardware, BUT it will run slower

  • @MCrex007
    @MCrex007 Год назад +59

    The worst part is, if it does result in cheaper CPUs, and ones that might even run better, all consumers will get is either the same performance at the same price, or more expensive for the same performance.

    • @leonro
      @leonro Год назад +19

      I remember that when Samsung removed chargers from phone boxes, they decreased the price for one generation of phones by like $100 to decrease backlash, only to raise it back to normal a year later. We might see a similar thing.

    • @kolotxoz
      @kolotxoz Год назад +6

      It won't result in cheaper CPUs; the 32-bit instructions are small, really small in silicon size, so you won't get any benefit in performance, cooling or price

    • @dannymitchell6131
      @dannymitchell6131 Год назад +1

      It depends on a lot of things. I built my 4790k rig for about $1k without a GPU.
      For $1k now I can get a 12900k rig and $1k has less value now than it did several years ago.

    • @dannymitchell6131
      @dannymitchell6131 Год назад

      I also have a $1k laptop with an OLED screen (13th gen i7 and a 3050) that can out render or at least match my desktop 4790k and 1080.
      That's a full laptop for $1200 after tax beating a desktop from 7 or 8 years ago that cost over $2k, 1k for the PC, $700 for the 1080, $300 for the monitor, and another $150 for kb&m.

    • @mitchjames9350
      @mitchjames9350 Год назад

      @@leonro Apple did the same by using the environment as justification to get more money out of you.

  • @etienne6641
    @etienne6641 Год назад +2

    What will the advantage be of a X86-S for the users? Faster boot?

    • @CakeCh.
      @CakeCh. Год назад +2

      Cores can get slightly smaller. And more cores or more L3 cache can fit in the same amount of space, maybe?

    • @rosomak8244
      @rosomak8244 Год назад +4

      Faster instruction decoders and thus faster CPUs. Fewer shenanigans in system software.

  • @PearComputingDevices
    @PearComputingDevices Год назад +2

    Well, by gutting compatibility modes they'll probably be able to go further and faster with the x86 platform. All this legacy stuff can hold back performance for sure. Do I think it's a good idea? It depends. The potential sounds nice but it comes at a cost. Sometimes legacy stuff is just better. But if you wanted to build something from scratch that didn't include any older software, then perhaps it's the way to go

  • @rhueoflandorin
    @rhueoflandorin Год назад +2

    Didn't really get into the benefits and drawbacks of the changes..but oh well.

    • @GaryExplains
      @GaryExplains  Год назад +1

      There aren't any drawbacks, if there were Intel wouldn't do it. As for benefits, they are as I describe, just removing stuff from 1978 that isn't used any more. I thought I covered all that, but oh well.

  • @petersilva037
    @petersilva037 Год назад +12

    Will they use this opportunity to simplify/reduce the need for the "Intel Management Engine"? If they are going 64-bit only, and therefore changing the boot environment, there are lots of long-standing security objections to the IME, and it would be great if they could clean that up so that people can have a more assured platform to build on.

    • @gregorimacarioharbs5715
      @gregorimacarioharbs5715 Год назад +1

      It's not entirely 64-bit only... it's just that the OS is 64-bit, which is already true if you account for UEFI... because you can't run 32-bit Windows on UEFI64 and everything is UEFI64 (there's no dual UEFI32/64)... just legacy BIOS (which will disappear for real with x86-S machines)

    • @volodumurkalunyak4651
      @volodumurkalunyak4651 Год назад

      Intel ME and AMD PSP are probably left unmodified there. Those do already use separate management cores anyway.

    • @theexplosionist2019
      @theexplosionist2019 Год назад

      The IME and PSP are intelligence agency backdoors and won't be removed.

  • @abhiramshibu
    @abhiramshibu Год назад +1

    So with an x86-S CPU we won't be able to boot DOS even in a VM, unless we disable hardware acceleration for virtualization..

  • @DJaquithFL
    @DJaquithFL Год назад +39

    I would tend to agree with Intel getting rid of everything except 64-bit computing, but also they need to greatly reduce the amount of supported instruction sets. The support reduces efficiency, reduces speed, adds heat (waste) and adds cost.

    • @DripDripDrip69
      @DripDripDrip69 Год назад +3

      AMD dropped 3DNow with Bulldozer, FMA4 and XOP with Zen.

    • @brennethd5391
      @brennethd5391 Год назад

      Intel tigerlake supports some avx512 instruction sets

    • @anlemeinthegame1637
      @anlemeinthegame1637 Год назад +6

      Intel feels ARM breathing down their neck, with a more streamlined, legacy-free ISA.

    • @gregorimacarioharbs5715
      @gregorimacarioharbs5715 Год назад +1

      They didn't remove 32-bit... they just removed 32-bit OS capabilities that were not used... because UEFI forced the OS to be 64-bit already... (if UEFI were required to be both 32/64, then they would not have done that)

    • @ikjadoon
      @ikjadoon Год назад

      >The support reduces efficiency, reduces speed, adds heat (waste) and adds cost.
      Huh?! X86S, as written by Intel, is only reducing the firmware & UEFI size. It's irrelevant to how a CPU actually runs on an OS. You're about 50 levels too deep in abstraction. This is a very low-level, arguably invisible change.

  • @valentinoesposito3614
    @valentinoesposito3614 3 месяца назад +1

    No one needs anything below 32bit

  • @dono42
    @dono42 Год назад +3

    The only problem I can think of is that people learning assembly for the first time even now often start with 32-bit before working up to 64-bit. That said, few people even try 16-bit assembly anymore, so I guess skipping straight to 64-bit may be reasonable.

    • @dsa1979
      @dsa1979 Год назад +1

      you can easily use a virtual machine for these things

    • @mandarbamane4268
      @mandarbamane4268 Год назад

      People around me still have to start with 8 bit or 16 bit as part of curriculum 💀

    • @gregorimacarioharbs5715
      @gregorimacarioharbs5715 Год назад

      you can still run 32bit programs just fine... on 64bit OSes nothing really changed... and when developing OSes people tend to use VMs which still would work just fine to boot 32bit OSes

    • @volodumurkalunyak4651
      @volodumurkalunyak4651 Год назад

      @@dsa1979 x86S removes support of running 16 bit code even inside a VM.

    • @TheEVEInspiration
      @TheEVEInspiration Год назад +1

      To get an introduction to what assembly is like, any mode will do IMO.
      For normal application code, the concept of registers, the type of operations and addressing modes is in the end what it is about.
      Getting to learn how memory works, what the access costs are, how dependencies and predictability impact performance is what develops later.
      Most will never have to code in assembly, but it can be a useful skill to figure out what a compiler is doing and where things do not go as intended and why.

  • @jamegumb7298
    @jamegumb7298 Год назад +1

    IIRC this will also get rid of a bunch of deprecated instructions.

  • @GregMoress
    @GregMoress Год назад +3

    It makes sense. All software can simply be recompiled and will work fine.
    If the software is so old that the code is lost, then just keep using pre-2024 machines, or reverse the code and re-compile it.
    Maintaining backwards portability for 40 year old code is like a 40 year old still sleeping with his Teddy Bear.

    • @sarowie
      @sarowie Год назад

      Recompile is not always an option; but emulation is. 286, 386, 486 ... emulation is a solved problem.

    • @delayed_control
      @delayed_control Год назад +2

      Only actively maintained or open source software can be recompiled. Which is actually the minority.

  • @IsaacSteadman
    @IsaacSteadman Год назад +1

    I find it odd that Intel went with 5-level page tables with 9 bits per level and 12 bits to index into the 4KiB page yielding 57 bits of virtual addressing instead of 4-level page tables 12 bits per level and 15 bits to index into the 32KiB page yielding 63 bits of virtual addressing

    • @altmindo
      @altmindo Год назад +1

      4KB pages need to finally die, large pages should become the default both in sw and hw.

    • @theexplosionist2019
      @theexplosionist2019 Год назад

      2mb pages should be the standard.

  • @annieworroll4373
    @annieworroll4373 Год назад +16

    Probably a good thing in the long run, though the lack of real mode or virtual 8086 mode might cause some compatibility issues for some software. Especially in industrial and business settings, very often, if the software works, they don't replace it and will go to great lengths to keep hardware around to run it. Pure 32 bit protected mode stuff might still run fine, but other stuff, not so much and hardware that will run it natively will eventually become hard to find and hideously expensive.
    Though by the time this tech hits the market, it should be possible to emulate a 32 bit x86 chip well enough to run any of those mission critical legacy apps well enough to get the job done. It's probably doable now, maybe not for software made at the very end of the 32 bit era but for a lot of the older stuff various companies are running at least. The instruction set isn't completely different, so virtualization for at least parts of it might be viable. Someone would make a Rosetta style compatibility layer.
    So all in all, assuming Intel works on a Rosetta style application to cover any mission critical applications that won't play nice in compatibility mode, this should be a good change.

    • @MichaelWerneburg
      @MichaelWerneburg Год назад +6

      > Especially in industrial and business settings
      Let's not forget government. 8)
      Various levels of Japanese government are currently wrangling with ridding themselves of floppy disks at long last.

    • @RaffaelloBertini
      @RaffaelloBertini Год назад

      They will most likely upgrade/change the hardware. Also, this x86S architecture doesn't imply that Intel will stop producing x86 (current) CPUs... if they can sell those CPUs, they probably will. But anyway, if a business has to buy new x86 (not S) CPUs, at that point they could buy x86S CPUs and install virtualization software to run the legacy x86 architecture... if they are not willing to modernize their software. Though I think they should most likely also consider something like ARM CPUs instead, which are more energy efficient, thereby reducing costs in the long run and paying back the investment of modernizing the infrastructure (considering those machines run pretty much 24/7)

    • @jampipe3137
      @jampipe3137 Год назад

      Would having a single legacy core on the chip fix this? Do the efficiency cores support these modes?

    • @Demopans5990
      @Demopans5990 Год назад +1

      @@jampipe3137
      By that point, you just give them purpose made cpus that run 16 and 32bit made on older mature fabs

    • @valenrn8657
      @valenrn8657 Год назад

      @@Demopans5990 There's AMD, VIA and DM&P Electronics (continues SiS' Vortex86 line). X86S in an Intel initiative to enforce UEFI Class 3.

  • @okaro6595
    @okaro6595 Год назад +1

    To clarify somewhat: long mode does not support real mode or virtual-8086 mode, so old DOS programs cannot be run in it. It does support 16-bit protected mode, but Microsoft has chosen not to provide that support in Windows.
    So basically that just means you must use a 64-bit OS. For normal end users nothing changes; they just need a copy of Windows that supports the new CPU.

    • @marcovtjev
      @marcovtjev Год назад +1

      And/or a Linux beyond a certain level where the bootmanager supports it

  • @pamus6242
    @pamus6242 Год назад +4

    I loved playing supertux almost 20 years ago on my laptop having a 32 bit pentium III coppermine core.

    • @OpenGL4ever
      @OpenGL4ever Год назад

      Supertux isn't the problem, it can be recompiled for 64 Bit. The problem is software that requires something like a 16/32 Bit Windows 9x/Me inside a VM.

    • @pamus6242
      @pamus6242 Год назад

      @@OpenGL4ever It's not that. Supertux is a mature game, so it's easy to port onto anything.
      My point was that 32-bit was faster than 64-bit back then: packages were few, libraries were a nightmare, and hardware support was mature for 32-bit. 64-bit used more memory, and most of us had little. I had 384 MB of memory.
      Everything 64-bit had an overhead to it. Now all this is moot.

    • @OpenGL4ever
      @OpenGL4ever Год назад

      @@pamus6242 I disagree. When 64 Bit was available in 2003, 1 GB of memory was already common and i had 2 GB in my PC. Less than 512 MB of memory is a 1999 thing. That was 4 years before the first 64 bit x86 CPU came out.
      And when Intel released its 64 Bit Core2Duo i bought it with cheap 8 GB.
      Also, 64 bit is not slower than 32 bit, but faster because it has more registers and this speeds up program execution. With function calls, for example, more values can be placed in the registers for parameter transfers instead of having to save them slowly via the stack and thus slow main memory.
      Another reason why 64-bit binaries were practically faster is because the base of 64-bit CPUs all had SSE2 support as the lowest common denominator and you could develop and ship the software for it. While with 32-bit you still had to consider CPUs that did not support SSE2. Thus your common denominator was much worse.

    • @pamus6242
      @pamus6242 Год назад

      @@OpenGL4ever
      Most people could not afford computers back then !! Let alone have an internet connection. Most people on budgets weren't buying Intel. AMD was that underdog that got the job done.
      People bought older stock because it was cheaper than buying new and the displacement of new hardware versus old was much longer unless nowadays where it just makes sense to buy newer gen hardware than older. It wasn't until the phenom I came out that 64 bit became etched permanently because those chips were cheap and phenom II's were cheaper!! Couple this with the advent of DDR3 being more than affordable, 32 bit computing would pretty much have been dead by 2010.
      ALso most people could not afford RAM and it wasn't needed unlike today.
      This isn't 64 bit being better than 32 bit, the point being most people preferred 32 bit over 64 bit because one was either using a 32 bit chip or the RAM requirement for running in 64 bit mode for a 64 bit chip was 4GB.
      There is nothing great about 32 bit, its done.

    • @OpenGL4ever
      @OpenGL4ever Год назад

      @@pamus6242 We are talking about 2003, not 1990!
      In 2003 everyone had a PC, including one ore more older PC as second computer.
      Where do you live? In a third world country?
      We were not talking about price. We where talking about RAM.
      And BTW, in 2005 BF2 was released and it required at least 1 GB o RAM.

  • @lazzi-droid1181
    @lazzi-droid1181 Год назад

    That's all great, but what about my games from prehistoric times: will they run, or do I now have to run a VM?

  • @JoeStuffzAlt
    @JoeStuffzAlt Год назад +4

    x86 emulation has gotten so far that I say it's very doable using software

    • @richard.20000
      @richard.20000 Год назад

      Every x86 CPU runs as RISC inside, so we have had hardware x86 "emulation" since the AMD K5.
      This SW emulation of old x86 on a new x86 CPU using CISC-to-RISC hardware translation sounds kind of over-engineered. Why not run just RISC in the first place and save a lot of transistors, energy and effort?

  • @Freddie1980
    @Freddie1980 Год назад +20

    Good overview of the paper but I would have liked it more if you spent some more time explaining the real world implications of such a move.

    • @GaryExplains
      @GaryExplains  Год назад +13

      I don't think there are any real world implications for 99.5% of people.

    • @sjswitzer1
      @sjswitzer1 Год назад +5

      I would expect some gains in terms of recovering die space from the 16-bit modes. But it’s probably not a lot because you could fit hundreds of 386s onto a modern processor chip.
      They can probably reduce gate delays just a tiny bit too since the obsolete modes don’t need to be checked for.
      I suppose it’s in the paper but was hoping to hear a bit about that here.

    • @lawrencedoliveiro9104
      @lawrencedoliveiro9104 Год назад +2

      There is still a lot of Windows code which is 32-bit only. For example, Microsoft Visual Studio itself only moved to 64-bit within the last year or two.

    • @GaryExplains
      @GaryExplains  Год назад +6

      And 32-bit apps will continue to work.

    • @ghyslainabel
      @ghyslainabel Год назад +1

      If someone has old 16 bits applications, they can run in emulators, like DosBox. I do not worry for the software.
      For old hardware that are external to the computer, they communicate with the computer by a serial or parallel cable. There are still motherboards that support those connectors, or one may use a USB adaptor.
      For old cards plugged directly on the motherboard... the motherboards themselves already lost the oldest connectors. If one uses an old motherboard, it cannot support newer processors anyway.
      I do not really see a scenario where the 16bits legacy mode is mission critical for anyone. If someone has an example, I would be happy to learn it.

  • @Garythefireman66
    @Garythefireman66 Год назад +1

    Professor dropping knowledge bombs

  • @charleshines2142
    @charleshines2142 Год назад +15

    32 bit is not terrible, it just has a 4 GB memory limit. I also don't miss being stuck with 4 GB file size limits.

    • @TheHighborn
      @TheHighborn Год назад +9

      4gb file size has nothing to do with it. It's the FAT32 format......

    • @jameshogge
      @jameshogge Год назад +5

      @@TheHighborn This is incorrect. Both have a 4GB limit. This is because 2^32 is approximately 4 billion

    • @dawidvanstraaten
      @dawidvanstraaten Год назад +3

      It’s terrible because it adds a lot of overhead to the ISA, making it use more gates meaning it is less performance per watt.

    • @briansonof
      @briansonof Год назад

      @@jameshogge No, that's because of design limitations of old Windows versions. Other operating systems never had a 4GB file size limit on 32-bit architectures because they used more than 32 bits to represent file sizes and offsets.

    • @canalellis
      @canalellis Год назад +1

      You can certainly run a 32-bit system with more than 4 GB of RAM (Linux already does that, via PAE), but each process can only address 4 GB of memory

  • @D4RKV3NOM
    @D4RKV3NOM Год назад

    That was a bit of a sneaky promotion segment xD Had me thinking for a second, wait, what does this have to do with the video? xD Good video though Gary, as always

  • @shanehebert396
    @shanehebert396 Год назад +3

    I'm for it. I've been on the 64-bit bandwagon since the 90s (SGIs, etc.) ;)
    I think this may have a transition period... they might keep everything for the consumer market while transitioning the server/HEDT to this, then bring the consumer products in after some time. But, I'm all for flipping the switch.

  • @linjeffer3511
    @linjeffer3511 Год назад

    Why x86S? Is it better than ARM or RISC-V? Is it more power efficient?

    • @GaryExplains
      @GaryExplains  Год назад

      Do you use x86 now or something else?

  • @mattbosley3531
    @mattbosley3531 Год назад +3

    This from the people who resisted 64 bit as long as possible. I'm sure they have an ulterior motive, I just don't know what it is yet.

    • @rashidisw
      @rashidisw Год назад

      I believe Microsoft already knew about this, hence they said back then that Windows would be x64-only going forward.

  • @gordonlawrence1448
    @gordonlawrence1448 Год назад

    I have two questions. How much will this reduce the cost of production? Will the CPU need fewer clock cycles per instruction? If the answers are "very little" and "no", then what is the point? There is another question: 256-bit processors. There have been specialist ones for decades. Will migrating to 256-bit mean we go through all this malarkey again?

    • @GaryExplains
      @GaryExplains  Год назад

      Before 256 bit comes 128 bit and there isn't a need for 128 bit at the moment. Most CPUs now support 128 bit SIMD instructions.

  • @marschrr
    @marschrr Год назад +11

    I feel that for integrated circuitry designers this feels like having to maintain an old delphi/pascal runtime while running everything else in modern C++ or Rust.

    • @PaulSpades
      @PaulSpades Год назад +4

      ahhh, yes. delphi - back when desktop software actually worked well.

    • @marschrr
      @marschrr Год назад +6

      @@PaulSpades yeah, back when automatic updates didn't exist as excuse to ship bad code lol

    • @PaulSpades
      @PaulSpades Год назад +3

      @@marschrr Honestly, there's way better tech, tooling and documentation in VB6, Borland 2005 and Lazarus than whatever passes for a GUI kit wrapping MFC these days. Modern devs just got a bad deal, and you don't know what it's like to have a stable platform and well written libraries that make sense (and don't change every other week).

    • @kevinurben6005
      @kevinurben6005 Год назад

      I still use Delphi V5 and V7 - and it's still awesome!

  • @ecmcd
    @ecmcd Год назад

    At about 6:36 you say, "....You can't get a 64-bit version of Windows 11, for example....", but I think you meant "can't get a 32-bit version....".
    Other than that little niggle I found your presentation to be excellent - thanks, it was helpful.

    • @GaryExplains
      @GaryExplains  Год назад

      Yes I misspoke, but you aren't the first to point out the little niggle! 😜

  • @US_Joe
    @US_Joe Год назад +3

    Very ironic, being as AMD started 64 bit . Beware - you may be running 32 bit apps without realizing it. 👍👍👍

    • @GaryExplains
      @GaryExplains  Год назад +6

      32-bit apps will still work. Did you watch the video?

    • @sarowie
      @sarowie Год назад +1

      No, AMD did not start 64-bit. For some reason we now consider x64/AMD64 to be the obvious choice, but it was not.
      Intel pushed Itanium; Microsoft went along.
      There were Sun SPARC CPUs and Sun Solaris.
      There was Apple with PowerPC CPUs.
      Now the end of x86/x64 is coming, either from ARM (look around: you will have more ARM CPUs in your house than Intel) or whatever Intel and AMD agree on.
      RISC-V will also play a role. If you doubt that: ever heard of Acorn RISC Machines in some early home computers?

    • @US_Joe
      @US_Joe Год назад +1

      @@sarowie Nice to know - thanx. I still believe AMD brought 64bit to the global market before anyone else.

  • @johnsmith1953x
    @johnsmith1953x Год назад +1

    *PLEASE! Intel don't get rid of 32-bit mode!*
    Intel can eliminate 16-bit mode, but not BOTH 16 and 32!!
    Nooooooooooooo!!!!

  • @thecount25
    @thecount25 Год назад +4

    While the proposed transition might seem like a simplification at first glance, it's important to recognize that such changes could potentially introduce new issues and failure modes to the ecosystem. This is not merely resistance to change, but a pragmatic outlook informed by our previous experiences.
    Take, for example, the transition from BIOS to EFI. Although EFI has offered some improvements like faster boot times, support for larger drives, and a more sophisticated interface, it's also been accompanied by a variety of challenges. EFI's larger codebase increased the attack surface, making systems more vulnerable to security threats. There were also complexities and compatibility issues that arose, often requiring users to navigate additional steps like disabling Secure Boot during the installation of certain operating systems.
    While some of these issues have been mitigated over time, it's worth noting that they led to complications and headaches for many users, IT professionals, and developers. The simplicity and intuitive nature of BIOS, which allowed users to configure their systems quickly and easily, was lost in some respects in the move to EFI.
    Yes, technological progress is often accompanied by disruption, but it's crucial to ensure the proposed advancements indeed provide tangible benefits that outweigh the potential drawbacks. It's about finding a balance between encouraging innovation and maintaining stability and usability in our computing environment.
    Furthermore, the assertion that x86's days are numbered isn't intended to dismiss potential improvements but rather to bring awareness to the shifting landscape of computing architecture. With ARM gaining prominence in various sectors, it might be prudent to consider where our resources and efforts are most effectively utilized.
    In conclusion, the key to successful innovation lies in careful analysis and cautious implementation. We should strive for a thoughtful approach that minimizes disruption, maximizes benefits, and ensures the continuity of the ecosystem while embracing inevitable changes.

    • @leisti
      @leisti Год назад +2

      Hello, ChatGPT.

    • @thecount25
      @thecount25 Год назад +1

      @@leisti It's just a tool the ideas are still mine.

  • @michaeldeloatch7461
    @michaeldeloatch7461 Год назад

    Pleased to be introduced to your channel -- and here I thought at first blush you were just another blathering brit hahaha... Great info on this topic -- thanks!

  • @IndianaRoy
    @IndianaRoy Год назад +11

    It's a good idea; with virtualization there's no longer a need to have those instructions for niche use cases

    • @lawrencemanning
      @lawrencemanning Год назад +2

      I think you mean emulation.

    • @gregorimacarioharbs5715
      @gregorimacarioharbs5715 Год назад

      @@lawrencemanning No, he means virtualization, like VT-x (which can still be used to run 16-bit and 32-bit OS VMs)

    • @Henfredemars
      @Henfredemars Год назад +3

      ​@@gregorimacarioharbs5715 The hardware still needs to support the instructions to use VTX, and if 32-bit was removed this would not be available.

    • @volodumurkalunyak4651
      @volodumurkalunyak4651 Год назад

      @@gregorimacarioharbs5715 x86S removes the 16-bit modes and some 32-bit modes entirely. Virtualization will NOT bring those back unless the VMM takes on the job of emulating them.

    • @lawrencemanning
      @lawrencemanning Год назад

      @@gregorimacarioharbs5715 no he means emulation. Something like QEMU would be required. It would be the same as running, say, RISC-V code on x86-64 today.

  • @trevoro.9731
    @trevoro.9731 Год назад +2

    In theory, CISC has the potential to be faster than RISC in many applications. But it is stupid to make any changes without a deep revision of the legacy x86 architecture, as this alone won't boost performance much. Many high-performance (CISC) instructions are missing, like a conditional operation prefix, jump prefetch and many others.

    • @totalermist
      @totalermist Год назад +2

      Funny you mention conditional operation prefixes - ARMv8 explicitly removed them, since they are actually detrimental to performance in many circumstances, as ARMv8 introduced branch prediction :)
      Directly from the horse's mouth: "The A64 instruction set does not support conditional execution for every instruction. Predicated execution of instructions does not offer sufficient benefit to justify its significant use of opcode space."

    • @trevoro.9731
      @trevoro.9731 Год назад

      @@totalermist It doesn't mean they are in their right mind. It is impossible to predict branches and achieve low latency within a sane amount of logic and thermal package. They could improve the situation by introducing markup commands, as there is no way for the processor to guess the actual code behavior. They have to do that; if they don't, they are just mentally r******d. Even with all that, a branch misprediction penalty of over 10 cycles is not acceptable by any means, meaning there is no alternative to conditional prefixes when processing complex data.

    • @TheEVEInspiration
      @TheEVEInspiration Год назад

      Many time-critical parts of code can be written in a branchless fashion, or at least unrolled so much that the branch cost approaches zero.

    • @trevoro.9731
      @trevoro.9731 Год назад +1

      @@TheEVEInspiration Tree parsing with data-based conditional branching can't.

    • @TheEVEInspiration
      @TheEVEInspiration Год назад

      @@trevoro.9731 Then do not use a typical tree when it can be avoided.
      Instead use data-structures with a memory layout that still contains the features of a tree, but aren't a actually tree and have not the memory access patterns associated with it.
      This sort of thing is where most speedup comes from these days. Picking the right algorithm for a given job and job-size.
      Linked list used to be king, then balanced trees, then hash tables and then arrays/vectors being scanned in a branchless way.
      Things keep evolving you know, adapting to the new underlying reality of memory speed/distance and CPU processing speeds.
      And be real, predicated execution in the form you suggest would not solve your own example.

  • @robertlawrence9000
    @robertlawrence9000 1 year ago +3

    It would be nice if they still included some sort of emulation mode to run old applications.

    • @BruceHoult
      @BruceHoult 1 year ago

      That is what they've been doing. They're talking about stopping doing it, as Arm has already done.

    • @robertlawrence9000
      @robertlawrence9000 1 year ago +1

      @@BruceHoult No, emulation is different. There can be an emulator running the application.

    • @GaryExplains
      @GaryExplains  1 year ago +1

      @Robert Lawrence I think the confusion here is the word emulation. At an application level there are of course emulators like DOSBox or whatever, but there is also some hardware-level emulation that the CPU can do. For example, the Virtual 8086 mode in the 386 (and later) allows the execution of real-mode applications while the processor is running in protected mode. It is a hardware virtualization/emulation technique.

    • @robertlawrence9000
      @robertlawrence9000 1 year ago

      @@GaryExplains Thanks for the clarification. I hope there will still be ways to run old legacy software, even if it means using emulation software to simulate the legacy modes. I'm sure if it gets abandoned there will be some smart people out there willing to make them.

    • @gregorimacarioharbs5715
      @gregorimacarioharbs5715 1 year ago

      32-bit applications still run fine (and natively); just the OS is forced to be 64-bit, which is basically already the case with UEFI being 64-bit only, so X86-S is basically removing what can't even be used. Only for 16-bit programs will emulation (or VM capabilities) need to be used, which does not matter, because 16-bit programs are so fast that even at 1/10 of the speed they still run faster than JavaScript stuff in browsers hehe

  • @yuan.pingchen3056
    @yuan.pingchen3056 1 year ago

    In Microsoft Windows 3.1 you could install the Win32s subsystem, which allowed you to run certain 32-bit programs, and Win95/98/ME could run true 32-bit programs, but the OS still didn't treat CLI (clear interrupt flag) as a dangerous instruction and allowed all programs to execute it. I don't know what the difference is between Windows NT and Windows 98; maybe programs in Win98 all run at ring 0 privilege level, so it lets you run CLI without triggering any protection fault...

  • @TiFredTheBest
    @TiFredTheBest 1 year ago

    Is this going to have any performance benefits? Is this related to reducing/simplifying the instruction set in any way?

    • @Henfredemars
      @Henfredemars 1 year ago

      Eliminating parts of the instruction set that aren't needed simplifies the design and could indirectly lead to small benefits, but I wouldn't expect the change alone to result in a direct benefit. It's cleaning up.

  • @ericnewton5720
    @ericnewton5720 1 year ago +3

    If this makes the chips cheaper, then I’m for it. I have a feeling Intel will find a way to release a “64-bit only!” CPU for a premium somehow.
    ARM and Apple silicon running 64-bit only is because they have a unique (niche) position in the industry of being able to force recompiles on software, whereas Intel has been the king of backwards compatibility. This became less needed with the hard push to hardware-level virtualization. IMO, preserving 32-bit support will be important going forward, but most likely only accessed in a virtualized container.

    • @arthurmoore9488
      @arthurmoore9488 1 year ago

      I doubt that it will be a premium. More like this means they don't have to support and validate those other modes. That's a stupid amount of testing for something almost no one using modern processors uses. Plus, imagine if there was a massive bug in there. Lots of egg on Intel's face if they didn't do enough validation!

  • @nimrodlevy
    @nimrodlevy 1 year ago

    Can compatibility be achieved via emulation, so we'll be able to run programs and enjoy them for many years to come?

  • @jaimeduncan6167
    @jaimeduncan6167 1 year ago +3

    They really want to compete with ARM. Removing all the old stuff will basically turn X86-S into a kind of RISC engine (I know the microcode engine in x86 is already some kind of "augmented RISC engine", as the instructions are broken into RISC-like micro-ops for dispatch, if we can call POWER RISC). We also know that ARM was blessed by Apple's aggressive stance in that sense. I know Apple should not be mentioned, but one year before Android they moved to 64-bit ARM (A7)
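
The "augmented RISC engine" idea above refers to the front end cracking complex instructions into simpler micro-ops. A hypothetical sketch (the mnemonics and micro-op names here are invented for illustration, not Intel's real internal encoding):

```python
# A toy sketch of how a CISC read-modify-write instruction is cracked
# into RISC-like micro-ops in the decoder, as the comment describes.
# The instruction strings and "uop.*" names are made up for illustration.
def crack_into_uops(instruction):
    table = {
        # A register-register ALU op maps to a single micro-op...
        "add eax, ebx":   ["uop.add"],
        # ...while a memory-destination add becomes load + add + store.
        "add [mem], ebx": ["uop.load", "uop.add", "uop.store"],
    }
    return table[instruction]
```

The out-of-order core then schedules those micro-ops much as a RISC machine would schedule its native instructions.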

    • @GaryExplains
      @GaryExplains  1 year ago +1

      Apple moved to 64-bit using AArch64 as designed by Arm and Arm launched its AArch64 64-bit CPUs a year before Apple, however yes, on Android the actual chip makers were slower to jump over.

  • @someoneyouneverknow7529
    @someoneyouneverknow7529 1 year ago

    So would it be better or worse?

  • @seancondon5572
    @seancondon5572 1 year ago +3

    I think this title is a bit clickbaity. They appear to be killing off 16-bit compatibility. I... guess that's ok. I mean, the major use case for 16-bit compatibility on 64-bit systems is DOS gaming... but, uh... my dad still has a copy of AutoCAD 14 kicking around, and based on the fact it works on 32-bit Windows and not 64-bit, it has SOME ties to 16-bit libraries. As far as why he doesn't upgrade... he paid full price for 14 when he got it. He owns the license to that copy. He's not going to pay for another, let alone rent one through some idiotic subscription model.
    Now, if/when intel DOES kill off 32-bit compatibility... the edge x86 has had for the longest time will suddenly just... evaporate. It will be gone. x86-32 and x86-16 software will require emulation to run. And CPU emulation is slow. Whatever architecture can emulate it fastest will be preferred for those operations. And, of course, the use case for 32-bit lies primarily in gaming. You do NOT want to get gamers pissed off. Bad idea.

    • @repatch43
      @repatch43 1 year ago +3

      CPU emulation is slow, except when the CPU you are running on is SO much faster than the CPU being emulated that it won't make a lick of difference. By the time you've got a 64-bit-only x86 CPU on your desk, its CPU emulation speed will be far faster than any current x86 CPU that supports 16-bit.
      And that's ignoring that current machines won't evaporate. I am typing this on an 11-year-old machine, and it works perfectly for my purposes.
      This is a non-issue.
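
The headroom argument above is easy to quantify. A rough sketch of the budget a modern core has when emulating a vintage one (the clock speeds are illustrative, and real emulation cost is per instruction rather than per cycle, but the ratio makes the point):

```python
def host_cycles_per_emulated_cycle(host_hz, target_hz):
    # How many host clock cycles are available to spend emulating
    # one clock cycle of the target CPU.
    return host_hz / target_hz

# A 4 GHz modern core emulating a 16 MHz 8086/286-era part has a
# budget of 250 host cycles per emulated cycle, which is why
# full-speed emulation of such chips is easy even in interpreters.
budget = host_cycles_per_emulated_cycle(4_000_000_000, 16_000_000)
```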

  • @nngnnadas
    @nngnnadas 1 year ago

    I assume it won't affect DOSBox or anything like that, would it?

    • @vascomanteigas9433
      @vascomanteigas9433 1 year ago

      No. DOSBox's emulation code is CPU independent. Special optimized routines for x86-64 and ARM exist for DOS protected-mode games, but those routines were designed to use user-mode 32- and 64-bit registers, so no problem at all.

  • @dorion9111
    @dorion9111 1 year ago +3

    This is a great idea for people like me that have lots of boards/CPUs that will age like fine wine due to that 32-bit compatibility, for people wanting access to older software, which will be around for at least 8 years after the newer CPUs stop supporting it... Welp, looks like my 8700K, 9900K, and 4 i5 10th-gen CPUs along with 3 Ryzen 3000 systems are being time-capsuled for value gain :D

    • @GaryExplains
      @GaryExplains  1 year ago

      What do you mean by "old software"? An OS like XP? Or a 16-bit DOS program?

    • @GaryExplains
      @GaryExplains  1 year ago

      @@forbidden-cyrillic-handle But that is a different problem, you can't run XP or Windows 95 successfully on a modern PC either (no drivers). So X86-S changes nothing about this. I don't believe that you are buying new Intel processors today and running Windows 95 on them.

    • @volodumurkalunyak4651
      @volodumurkalunyak4651 1 year ago

      Why even bother with 8700k or similar???
      R9 7950x3d or i9 13900ks are still fully 16 and 32bit capable.

    • @GaryExplains
      @GaryExplains  1 year ago

      @@forbidden-cyrillic-handle So you are saying that people in your area are running older versions of Windows on new hardware and they are writing the drivers themselves to do it. These are 16-bit apps for DOS etc that can't run on Windows 10 because of lack of support in Windows for those types of apps?

    • @GaryExplains
      @GaryExplains  1 year ago

      @@forbidden-cyrillic-handle So if it can all run in a VM, what is the worry and why all this G7/non-G7 stuff?

  • @alcorza3567
    @alcorza3567 1 year ago +1

    It would be great to understand what benefits this actually would have over the regular x86 ISA. Would it mean less die area has to be spent on hardware supporting classic x86 ISA? Would security be drastically improved? Would boot times be faster and more secure? Would OSes be less complex and faster due to dropping that same legacy baggage? How would Windows and Linux be affected by this? Would there be performance benefits in applications? What would the effects on memory management mean? What would Intel and AMD do with the extra die space on silicon? Would there be a benefit in power efficiency (especially seeing as we know x86 is behind RISC in the power game)?

    • @GaryExplains
      @GaryExplains  1 year ago +2

      The benefits are mainly just removing legacy stuff from 1978 that no one uses any more. Basically technical debt.

    • @alcorza3567
      @alcorza3567 1 year ago

      @@GaryExplains absolutely, I get that, but I'd be interested to know based on my previous questions what (if any) benefits there would be by removing this legacy stuff, otherwise why remove it if not to get some advantage from it.

    • @GaryExplains
      @GaryExplains  1 year ago +2

      As I said, the main benefit is removing stuff from 1978 that isn't used any more: fixing the technical debt. The answers to all your questions are negative. OSes won't be less complex or faster. There won't be any performance benefits, etc.

    • @D0x1511af
      @D0x1511af 1 year ago

      x86 behind in the power game? In the mobile segment, yes, the M1 excels very well, but for hardcore supercomputers x86 is still ahead of RISC-based CPUs.

    • @GaryExplains
      @GaryExplains  1 year ago +1

      @Mich X Not quite, the second fastest supercomputer in the world, the Fugaku, uses Arm cores, it was the fastest during 2022. The 5th and 6th place computers use POWER9.

  • @Jayrod64
    @Jayrod64 1 year ago +5

    By killing off 16-bit and 32-bit applications, you kill backwards compatibility along with it. Bad idea.

    • @a.thales7641
      @a.thales7641 1 year ago +2

      Why? You don't have to buy 2024 products. Just buy old stuff.

    • @GaryExplains
      @GaryExplains  1 year ago +7

      Interesting comment. What 16-bit applications are you using?

    • @Winnetou17
      @Winnetou17 1 year ago +1

      They still have a compatibility mode (check the top half of, say, 3:25). But even without it, emulators, especially for 16-bit, should be able to do the job of maintaining compatibility.

    • @BruceHoult
      @BruceHoult 1 year ago +2

      16 bit applications probably run fine on a 16 MHz or slower processor. You can fully emulate a 16 MHz 8086 or '286 at full speed in Javascript. Probably in Python for that matter. For my meagre accounting needs (think 1 invoice per month to be PDFd and emailed ) I still use an accounting program I bought in 1991. It is written for the M68000 Mac and is laid out for a 512x342 screen (you can drag the windows bigger). These days I can run it in an emulator on modern Arm (or Intel) Mac, on a PC, on Linux on x86, arm, RISC-V, or in a web browser. At many times the original speed.
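
To illustrate how modest full-speed emulation of such old machines is, here is a toy fetch-decode-execute loop in Python. The instruction set, mnemonics and encoding are invented for illustration; a real emulator such as DOSBox has the same overall shape, just with genuine 8086 semantics and far more state.

```python
def run(program):
    """Interpret a tiny, made-up 8086-flavoured instruction list.

    Each instruction is a tuple; registers live in a dict and are
    kept 16-bit by masking, as on the real chip. This is the same
    fetch-decode-execute shape a full emulator uses.
    """
    regs = {"ax": 0, "cx": 0}
    ip = 0  # instruction pointer
    while ip < len(program):
        op, *args = program[ip]
        if op == "mov":            # mov reg, imm
            regs[args[0]] = args[1]
        elif op == "add":          # add reg, reg (16-bit wraparound)
            regs[args[0]] = (regs[args[0]] + regs[args[1]]) & 0xFFFF
        elif op == "dec":          # dec reg
            regs[args[0]] = (regs[args[0]] - 1) & 0xFFFF
        elif op == "jnz":          # jump to index if reg != 0
            if regs[args[0]] != 0:
                ip = args[1]
                continue
        ip += 1
    return regs

# Sum 5 + 4 + 3 + 2 + 1 = 15 into ax with a counted loop.
result = run([
    ("mov", "ax", 0),
    ("mov", "cx", 5),
    ("add", "ax", "cx"),  # index 2: loop body
    ("dec", "cx"),
    ("jnz", "cx", 2),
])
```

Even an interpreter this naive in a slow language has orders of magnitude more speed than a 16 MHz part needs.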

    • @jamesleetrigg
      @jamesleetrigg 1 year ago +5

      I think emulation has come so far and most backwards compatibility can be done through software

  • @arnoschaefer28
    @arnoschaefer28 1 year ago

    So what does that mean in practice? I understand I cannot run 32-bit Linux (let alone MSDOS) on such a new system anymore (not that I would want to), but does this have any impact on running legacy software under 64-bit Windows? Or is that running in an emulator already today?

    • @GaryExplains
      @GaryExplains  1 year ago +1

      32-bit software works as it did before on a 64-bit OS.

  • @AndersHass
    @AndersHass 1 year ago

    I wish you would link the videos of your own that you mention in the description

  • @laughingvampire7555
    @laughingvampire7555 1 year ago +2

    I support this, because Intel in terms of chip design has always favored complexity

  • @david_allen1
    @david_allen1 1 year ago +1

    Great explanation of what Intel wants to achieve, but no mention of the reason why they want to. What benefits is Intel expecting to gain from the elimination of legacy modes? I'm thinking somehow it must somehow translate to financial gain, but that's not clear. I guess it's obvious that simplifying the design must make designing easier, but I'm curious if there are other reasons or any other motivation?

    • @GaryExplains
      @GaryExplains  1 year ago

      Why must there be a direct financial gain? A technical gain is sufficient.

    • @david_allen1
      @david_allen1 1 year ago +1

      @@GaryExplains I don’t know that there *must* be a financial gain, but in my experience the actions that companies take are typically motivated by financial gain either directly or because of competitive advantage. As a business, that is their primary goal.

    • @GaryExplains
      @GaryExplains  1 year ago

      Not so much for engineering based companies. But technical debt has long term financial ramifications, so removing legacy stuff has a technical and therefore financial gain over the long term.

    • @the_bi11iona1re7
      @the_bi11iona1re7 1 year ago +1

      @@david_allen1 efficiency gains in cpu cores

    • @david_allen1
      @david_allen1 1 year ago

      @@the_bi11iona1re7 …and I’m guessing greater efficiencies would translate to a higher percentage of manufacturing lots binned at higher clock rates-definitely more profitable.

  • @taranagnew436
    @taranagnew436 1 year ago

    what are the advantages/features of all modes?

  • @PrivateUsername
    @PrivateUsername 1 year ago +2

    It will simplify UEFI quite a bit, and will remove a whole lot of garbage that serves no purpose. And I hope they just go with 5-level page tables (5LPT) from the start; we have already progressed well beyond 48-bit addressing, so just go with 57-bit addressing and 5LPT and be done with it. For now.
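
For reference, the arithmetic behind 4-level vs 5-level page tables on x86-64 is straightforward: 4 KiB pages give a 12-bit page offset, and each table level indexes 512 entries, adding 9 bits of virtual address.

```python
def virtual_address_bits(levels, page_offset_bits=12, bits_per_level=9):
    # Each x86-64 page-table level indexes 512 entries (9 bits),
    # on top of the 12-bit offset within a 4 KiB page.
    return page_offset_bits + levels * bits_per_level

four_level = virtual_address_bits(4)   # 48-bit VA: 256 TiB of virtual space
five_level = virtual_address_bits(5)   # 57-bit VA: 128 PiB of virtual space
```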

  • @bensopher1653
    @bensopher1653 1 year ago

    Is there any benefit for this shift?
    Any performance uplift or security improvements?

    • @haukikannel
      @haukikannel 1 year ago +1

      You save space in the CPU, which also makes it cheaper and simpler!
      Aka your CPU now has parts that don't do anything unless you use older programs.
      People who use legacy programs need this feature, but everyone else could use the same space for something more useful: either make the CPU smaller or add new features in the same space!

    • @dsa1979
      @dsa1979 1 year ago

      @@haukikannel people using legacy programs will be able to use a virtual machine to keep using them

    • @soundspark
      @soundspark 3 months ago

      At this point would it be in microcode?

  • @kienhwengtai8113
    @kienhwengtai8113 1 year ago +1

    Given that new machines are UEFI-only and have dropped legacy BIOS support, this makes sense for future x64 CPUs anyway.

  • @zaphhood4745
    @zaphhood4745 1 year ago

    Things are changing.

  • @jn1mrgn
    @jn1mrgn 1 year ago +1

    I wish Intel would come out with this now as I am just about ready to build a new system and it would be nice to ditch the legacy junk.

  • @padidazg.1166
    @padidazg.1166 1 year ago +1

    My question is: what is the benefit of this removal to the customers? Are system boots going to get faster, or what could be the real tangible advantages for a day-to-day user?

  • @Mainyehc
    @Mainyehc 1 year ago

    If the compatibility modes are still there, what is all that cruft still there for, then? 🤨

  • @paulkienitz
    @paulkienitz 1 year ago +1

    I had heard back in the days of, like, the 486 that there was so much backward compatibility in them that if you installed an older floppy drive, they could still boot and run CP/M. I wonder if that was still true in the Pentium or Core eras, and if so, when it became untrue. Probably not for CPU reasons, I'd bet - probably from changes of disk interfaces.

    • @GaryExplains
      @GaryExplains  1 year ago +2

      In 2020 Intel firmware dropped support for running 16-bit/32-bit or non-UEFI operating systems natively. But it may have stopped working before then.

    • @ssl3546
      @ssl3546 1 year ago +2

      486 had pretty poor backwards compatibility actually. I really do not remember the reason but there was definitely operating system software that only went as far as the 386.

    • @GaryExplains
      @GaryExplains  1 year ago

      @@ssl3546 I don't think that is true.

  • @Henfredemars
    @Henfredemars 1 year ago +1

    I think it's misleading to say that the Pixel 7 is the first 64-bit-only device. As I understand it, it's merely the operating system that doesn't have the 32-bit support. The hardware can support it if the software is modified. This is different from removing the hardware's capability of executing 32-bit code.

  • @rondlh20
    @rondlh20 1 year ago

    6:36 No 64 bit version of Windows 11!? The key question is what is the benefit of dropping legacy support? Cheaper and/or faster CPUs?

  • @nakedeye44
    @nakedeye44 1 year ago

    Is there any convincing advantage of 64-bit other than supporting more RAM?!

    • @gregorimacarioharbs5715
      @gregorimacarioharbs5715 1 year ago

      There are some tricks with the increased address space as well, but for the rest, for each advantage there's a disadvantage that can easily nullify it. It does have double the number of registers, but they increase code size, and with proper optimization they end up not being really necessary. Also, with context switches (when you run more programs at the same time) more data has to be copied for 64-bit, which makes such stuff heavier. BTW, 32-bit x86 and 32-bit Windows/Linux can use up to 64 GB of RAM (via PAE), although on Windows (non-server versions) that was disabled and never enabled again (only through patches). I have a Win7 32-bit machine with 8 GB of RAM (the limit is my motherboard hehe)
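
The 64 GB figure mentioned above comes from PAE (Physical Address Extension), which widens physical addresses on 32-bit x86 from 32 to 36 bits:

```python
def addressable_bytes(address_bits):
    # Size of the physical address space for a given address width.
    return 2 ** address_bits

classic_32bit = addressable_bytes(32)  # 4 GiB without PAE
pae = addressable_bytes(36)            # 64 GiB with PAE enabled
```

Each 32-bit process still sees at most a 4 GiB virtual space; PAE only lets the OS place different processes' pages across the larger physical range.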

  • @CypherOzzie
    @CypherOzzie 1 year ago +1

    About time. Simpler chips, less cost!!

  • @ambotaku
    @ambotaku 1 year ago

    With all its backward compatibility and operating modes, x86 got so complicated that a hidden Minix-based OS called the Management Engine (ME) is needed to coordinate processor startup. So the Unix-like Minix OS is the most popular operating system of all. Is that ME still needed after cleaning up all the historic remains of the 8/16-bit era?
    But in computer history it is not unusual to need a small system to start up a big one. The first computer I used was a Digital Equipment PDP-10 36-bit "mainframe". For management, startup and communication a small PDP-11 (16-bit) was used. I think similar architectures appear in IBM mainframes and even supercomputers.

    • @volodumurkalunyak4651
      @volodumurkalunyak4651 1 year ago

      Intel ME runs on its own management cores anyway. It barely uses the main cores at all (it turns them on at system boot-up and only manages DVFS, thermal throttling and things like that afterwards). The only thing to change in Intel ME is how it turns the main cores on (the reset vector would now be 64-bit, and it must also prepare page tables, though that may be done already). SMM (System Management Mode, whose code is part of the BIOS) also requires changes, since the first thing it does is go from 16-bit mode to 32-bit protected mode and further into 64-bit long mode (those changes could also be made now with some silicon changes).

  • @jessstuart7495
    @jessstuart7495 1 year ago

    Old software written for 16-bit and 32-bit instructions can be run in emulators. It's interfacing with legacy 16-bit or 32-bit hardware that will be a nightmare.

  • @RobertSmith-lh6hg
    @RobertSmith-lh6hg 1 year ago +1

    Let's do it.

  • @perfectionbox
    @perfectionbox 1 year ago +1

    "John, why are destroying all the 32-bit technical manuals?"
    "Hey, the industry is going pure 64-bit, and I don't do things halfway."
    "But someone might still need that information."
    "I DON'T DO THINGS HALFWAY"

  • @mikelieberman6924
    @mikelieberman6924 1 year ago

    A couple of points.
    (1) Some Linux distros are actually dropping 32-bit support, so that claim is not quite right.
    (2) There are a large number of devices that are not only still in use but are being sold as new hardware (CCTV cameras and NVRs are a good example) that require Internet Explorer with Active Scripting. Yes, I know it is a security problem, but that's the state of the world when it comes to such devices. Such CCTV cameras include IP WiFi H.265+ 5MP PTZ-capable cameras with 30x mechanical (and digital) zoom. So it's not the old stuff; it's pretty much state-of-the-art CCTV equipment that still depends on IE. How this change in processor architecture will impact these devices that will forever need older OS support (even if run within a VM) is at least something to discuss. [I'd love to see someone come up with a piece of code that imitates IE with Active Scripting and runs natively in a 64-bit OS.]

    • @GaryExplains
      @GaryExplains  1 year ago

      There is a difference between "Linux" as a general term and any specific Linux distro. Linux the kernel and any distro that wants to can support older 32-bit hardware. That was my point. Secondly I don't get your point about CCTV etc. What is the relationship between those and x86-s?

    • @mikelieberman6924
      @mikelieberman6924 1 year ago

      @@GaryExplains As to the second point: applications that rely on older 16-bit code will, I suspect, disappear as OSes follow on with the change in processor architecture.
      It is not only Intel that wants to ditch legacy stuff. Microsoft is desperate to do this as well. (I am thinking back to when they moved to Vista and dumped all the earlier drivers. That is what caused so much blowback. It wasn't really the code; it was that older stuff no longer worked. By the time they replaced Vista with the essentially renamed Vista, Windows 7, the drivers written for Vista worked with 7 and everyone said, what a great new OS! Why couldn't MS do this sooner!) So I'm just guessing that the change in Intel's processors will become the reason given to finally kill MS legacy stuff. Sometimes it's not the real technical limitations, but other factors that use technical issues as an excuse.
      As to distros as opposed to kernels... OK, but we don't run a pure kernel + pure GNU. We run a distro, and most, if not all, current distros will over the next few years dump 32-bit processors. Do I care that all my Debian stuff will no longer run on a 32-bit processor? No. But I was just pointing out that it's coming.

    • @GaryExplains
      @GaryExplains  1 year ago

      But you can't run older 16-bit code on 64-bit Win 10 or 11 already, so no change there.

  • @jimwinchester339
    @jimwinchester339 1 year ago +1

    A *HUGE* mistake, IMO. Many of the 16 & 32-bit instructions helped the processor achieve much greater code density (a good example being the block/string instructions). There was a higher percentage of single-byte instructions in those processors than competitors throughout that era.
    This decision also ignores the ugly truth that many development environments still rely on 32-bit toolsets, or 32-bit execution environments.
    See other comments here about UEFI BIOS boot environments, GPT boot sector code, etc.

    • @GaryExplains
      @GaryExplains  1 year ago +1

      32-bit toolchains and build environments will still work.

  • @andrewlankford9634
    @andrewlankford9634 1 year ago +2

    I wonder if this is too little too late for Intel. I for one wouldn't miss the legacy junk (software emulation is all you need), but are they really going to free up a lot of transistors/chip real estate by doing this? And of course, has Microsoft signed on?

    • @valenrn8657
      @valenrn8657 1 year ago

      Only one CPU core needs legacy support, since 16-bit MS-DOS only supports a single CPU.

    • @jaimeduncan6167
      @jaimeduncan6167 1 year ago

      My guess is yes. ARM is going to be dominated by Linux: Android, Graviton, now Fujitsu, and now Nvidia. Apple is small but significant in both the phone and computer markets. Even for Microsoft's cloud, Linux is super important. Keeping x86 alive could be good for Microsoft.

    • @DanB-0
      @DanB-0 1 year ago

      Microsoft has already discontinued support for 16-bit programs in 64-bit Windows systems, but the system can auto-convert some of them to 32-bit versions on its own.

  • @TheNoodlyAppendage
    @TheNoodlyAppendage 1 year ago

    the x86 can address 4MB of memory, but the IBM PC and PC compatible use fewer chips and overlap all 4 segments and so can only address 1MB
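
Whatever the exact totals, real-mode addressing on the 8086 and the IBM PC follows a fixed formula: segment × 16 + offset, truncated to the machine's 20 address pins, which is why many segment:offset pairs alias the same byte in the 1 MB space. A small sketch:

```python
def real_mode_linear(segment, offset, a20_enabled=False):
    """Compute the linear address for a real-mode segment:offset pair.

    Without the A20 line enabled, addresses wrap at 1 MB (20 bits),
    exactly as on the original IBM PC's 20 address pins.
    """
    linear = (segment << 4) + offset
    return linear if a20_enabled else linear & 0xFFFFF

# Aliasing: 0x0000:0x0010 and 0x0001:0x0000 name the same byte (0x10).
```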

  • @blakespot
    @blakespot 1 year ago

    You forgot to mention that the iPhone 5S, a hugely popular ARM mobile device, went 64-bit in 2013, a year before Android.

    • @GaryExplains
      @GaryExplains  1 year ago

      Apple moved to 64-bit using AArch64 as designed by Arm and Arm launched its own AArch64 64-bit CPUs a year before Apple. I was talking about Arm in that segment of the video, not Apple. For example, the Pixel 7 processor has Arm designed CPU cores.

  • @johansteenkamp9214
    @johansteenkamp9214 1 year ago

    Although 16-bit support was in CPUs until recently, Microsoft stopped supporting running MS-DOS programs from Win7 onwards. So there is not really a way to run 16-bit code on a PC anyway, and one needs to use DOSBox if you want to.
    So it makes sense that Intel wants to remove the legacy stuff from their processors.

    • @gregorimacarioharbs5715
      @gregorimacarioharbs5715 1 year ago +3

      MS didn't stop supporting MS-DOS programs in Win7... it stopped on 64-bit Windows. The Win7 32-bit that I'm using still has NTVDM supported... Win10 32-bit still has NTVDM supported

    • @DanB-0
      @DanB-0 1 year ago

      Microsoft dropped support for Windows 10 32-bit with the May 2020 update, though the whole OS goes end of life on October 14th, 2025. And Windows 11 is 64-bit only.

    • @gregorimacarioharbs5715
      @gregorimacarioharbs5715 1 year ago +1

      @@DanB-0 No they didn't... they just said that the OEM version (the one that comes pre-installed) can't be 32-bit, but you can still buy and upgrade Win10 32-bit to the latest. But yes, Win11 does not have a 32-bit version, so Windows 10 32-bit will be supported till the end of Win10's life in October 2025.

  • @sulimansal2990
    @sulimansal2990 1 year ago +1

    Does this mean that the Playstation 5 at startup works in 16-bit mode?

    • @jort93z
      @jort93z 1 year ago

      Yes.

    • @soundspark
      @soundspark 3 months ago

      Likely yes, but likely a switch to long mode early in the firmware initialization.