Where do we go from here?

  • Published: 2 Oct 2024
  • urcdkeys.com 25%
    code : C25
    Win10 pro key($15):biitt.ly/pP7RN
    Win10 home key($14):biitt.ly/nOmyP
    Win11 pro key($21):biitt.ly/f3ojw
    office2021 pro key($60):biitt.ly/DToFr
    Affiliate links (I get a commission):
    BUY an LG C1 OLED 48'': amzn.to/3DGI33I
    BUY an LG C1 OLED 55'': amzn.to/3TTpQp9
    Support me on Patreon: / coreteks
    Buy a mug: teespring.com/...
    My channel on Odysee: odysee.com/@co...
    I now stream at:​​
    / coreteks_youtube
    Follow me on Twitter: / coreteks
    And Instagram: / hellocoreteks
    Footage from various sources, including official YouTube channels from AMD, Intel, Nvidia, Samsung, etc., as well as other creators, is used for educational purposes in a transformative manner. If you'd like to be credited please contact me
    #7950X #blackfriday #supersale

Comments • 265

  • @vitormoreno1244
    @vitormoreno1244 Год назад +21

    Stagnation on CPUs where? CPUs got a 200+% performance increase in the last 5 years

    • @baoquoc3710
      @baoquoc3710 Год назад

      but that hasn't translated into FPS in many games

  • @j340_official
    @j340_official Год назад +80

    Back in the 90s CPUs used to become obsolete sometimes within 6 months. Now… 😂😂
    but modern CPUs are already so fast at running most tasks that there's really not a big need (for the majority of users) to maintain a rapid increase in perf. Only a subset of users - gamers, developers, data centers, renderers, etc. - needs more compute. Skylake or Kaby Lake CPUs are still relevant for many office users.

    • @fajaradi1223
      @fajaradi1223 Год назад +9

      Yeah, even Nehalem or Sandy Bridge will be just fine.

    • @Dimology
      @Dimology Год назад +6

      Hell, I still play the latest World of Warcraft on one screen while watching YT on the other - on an i5-760, 4C/4T @ 2.4 GHz, in combo with a 1650 Super. I've been delaying the purchase of a new CPU for years lol.
      13 is my lucky number, so I might go for the 13400F next year :P
      But like you said, even an older processor can still do just fine. I even have a P55 motherboard that unfortunately doesn't support the old Xeon X3450 that the guy behind this channel is always talking about on his other channel :D

    • @j340_official
      @j340_official Год назад +5

      @@fajaradi1223 Yep. It's good to see Intel and AMD innovating, but for most users we've hit diminishing returns on CPU compute

    • @BradleyGibbs
      @BradleyGibbs Год назад +1

      Yeah, had each of my CPUs for several years, most notably: the first one for about 10 and my current one for 4.

    • @Real_MisterSir
      @Real_MisterSir Год назад

      It would be nice if games enabled a GPU+CPU accelerated hybrid calculation of complex processes like raytracing. CPU-based calculations actually give more accurate results than GPU-based 3D render calculations when it comes to raytracing tasks - but since Nvidia isn't in the CPU market they probably aren't focused on this aspect for now.
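      For a sense of what the CPU side of such a hybrid could compute, here is a minimal, illustrative C++ sketch of a ray-sphere intersection test (all names invented for the example); this is the kind of kernel that could run on spare CPU cores, and swapping float for double is where the extra precision would come from:

      ```cpp
      #include <cmath>
      #include <optional>

      struct Vec3 { float x, y, z; };

      static float dot(const Vec3& a, const Vec3& b) {
          return a.x * b.x + a.y * b.y + a.z * b.z;
      }

      // Distance along the ray to the nearest hit, if any. 'dir' is assumed normalized.
      // Switching float -> double here is how a CPU path can buy extra accuracy cheaply.
      std::optional<float> intersect_sphere(const Vec3& origin, const Vec3& dir,
                                            const Vec3& center, float radius) {
          Vec3 oc{origin.x - center.x, origin.y - center.y, origin.z - center.z};
          float b = dot(oc, dir);
          float c = dot(oc, oc) - radius * radius;
          float disc = b * b - c;
          if (disc < 0.0f) return std::nullopt;   // ray misses the sphere
          float t = -b - std::sqrt(disc);
          if (t < 0.0f) return std::nullopt;      // sphere is behind the ray origin
          return t;
      }
      ```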

  • @nfineon
    @nfineon Год назад +31

    The future is still the system-on-a-chip, similar to what we have with Apple Silicon and the console chips, all of which AMD makes.
    We want a Ryzen 7600X CPU integrated with a Radeon 7600 XT GPU, all using shared hyper-fast HBM or DDR5.
    A Ryzen 7800X CPU paired with a Radeon 7800 XT on the same chip, with on-die/on-substrate memory and a single massive cooler, would be ideal.
    By integrating memory on die/on substrate, you basically get level-5-cache speeds and latency which can be shared between CPU and GPU.

    • @reappermen
      @reappermen Год назад +7

      Except consoles don't use SoCs, they simply have the CPU and GPU cores together. They still have the RAM split off far more than an SoC has.
      And while SoCs are definitely useful, they have very hard limits on cooling due to being so densely packed, which limits the power draw a lot, meaning you will not be able to get anywhere close to the full performance overall.
      Plus, as performance growth slows down, most people will use their parts far longer. And the longer you use the parts, the higher the chance of one of the components failing. For a classic setup that means you replace a single component of your system and then likely get years more use out of the rest. For an SoC, a single component failing means the entire SoC is e-waste, making SoCs far, far more expensive over longer usage lives.

    • @RonnieMcNutt666
      @RonnieMcNutt666 Год назад

      Agreed, I'd like a huge 10+ lb water-cooled console

    • @wawaweewa9159
      @wawaweewa9159 Год назад

      A cache layer acting as the interconnect, with the CCDs and IO stack on top and a separate GPU chiplet, should be the way to go

    • @Seskoi
      @Seskoi Год назад +1

      I want this!

    • @neliaironwood7573
      @neliaironwood7573 Год назад

      One huge problem is that if they do this, it reduces the repairability. One reason you do not see cryptominers on ARM/Apple Silicon/SoCs or even laptops/jailbroken consoles is that if that SoC breaks down, you'll _brick_ the whole thing.

  • @tanman379
    @tanman379 Год назад +12

    Funny that you and Gamer's Nexus have opposite viewpoints as to which products are stagnating. His claim is more on a performance per cost basis, so I wouldn't necessarily use that as a metric for stagnation, but you say CPUs have stagnated and point to GPU bottlenecked frame rates, so that's not really a good metric either. Honestly, both have made great progress recently (primarily due to node shrinkage) despite all of the "end of Moore's law" limitations.

    • @jintsuubest9331
      @jintsuubest9331 Год назад +5

      From an engineering standpoint, they are doing fantastically.
      But I, and most people, as end users, give zero fucks about the engineering side of things. It is of no use to us outside of an academic brain exercise.
      What matters to the end user is how much more brrrr we get for our money. CPUs are seeing great stuff recently, and GPUs are doing dogshit (the 4080, per GN).

    • @greebj
      @greebj Год назад

      GPUs are only not "bottlenecking" at the top end when you set the threshold at stupid levels like 4K RT ultra, 8K, >120 fps etc. NV and AMD need to push the diminishing returns of finicky eye candy, huge res, and huge fps, at a level beyond the average observer, to have a USP for their new stuff.

  • @duckilythelovely3040
    @duckilythelovely3040 Год назад +4

    Simple: stop focusing on single-core IPC speed, and start making lazy developers actually take advantage of more than one CPU core. Make them take advantage of GPU tech, fast storage, etc.
    The real bottleneck is the clowns who don't optimize a single thing.

  • @LordOfNihil
    @LordOfNihil Год назад +3

    I'm glad someone else is getting tired of stupid "gamer" hardware that I'm most certainly not buying for the number of RGB headers on it.
    I put a small LED strip in one of my builds; it was too bright and distracting, and I took it out the next day, wondering why anyone thought it was a good idea.

  • @rua893
    @rua893 Год назад +4

    👍👍 Bro, plz change the voice actor... the voice in the videos makes me sleepy, it has too much bass... it's bad, can't watch

  • @danielthunder9876
    @danielthunder9876 Год назад +9

    I don't think we are seeing stagnation in CPUs; we are seeing CPUs getting massive increases each gen in the parallel workloads that can use them. Game engines are not keeping up with parallel workload utilization. I remember a 5-6 year stretch where total performance went up 2-3% per gen, during the Sandy Bridge - Skylake era. We are so much better off now.

    • @soylentgreenb
      @soylentgreenb Год назад

      Game engines are already too parallel for their own good. To chase parallelism and increase framerate they "pipeline" a single frame across multiple frames, introducing more latency. You need a framerate of 144 FPS today to feel as good as sprite hardware did in the 1980s. That's because they literally chased the raster beam back then, whereas now one frame can do physics and gameplay while the previous frame is being rendered, while the frame before that is being sent to the graphics card by the driver, while the frame before that is waiting to be displayed, etc.
      This is also why VR games are drastically worse at extracting parallelism; they don't want you to projectile vomit, so they can't.
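      As a rough illustration of that pipelining (names and stages invented for the example), a minimal C++ sketch where frame N+1 is simulated while frame N renders; the overlap raises throughput, but input sampled in simulate() now reaches the screen one frame later:

      ```cpp
      #include <future>
      #include <vector>

      struct FrameState { int frame; std::vector<float> positions; };

      // Placeholder stages; a real engine would do physics/gameplay and GPU submission here.
      FrameState simulate(int frame) { return FrameState{frame, std::vector<float>(1024, 0.f)}; }
      void render(const FrameState&) { /* submit draw calls */ }

      void run(int frames) {
          // Pipelined loop: while frame N is rendered on this thread, frame N+1 is
          // already being simulated on another thread. Throughput (fps) goes up, but
          // anything the player did during simulate(N+1) is displayed one frame later
          // than in a plain input -> simulate -> render loop.
          std::future<FrameState> next = std::async(std::launch::async, simulate, 0);
          for (int n = 0; n < frames; ++n) {
              FrameState current = next.get();                  // finished simulating frame n
              if (n + 1 < frames)
                  next = std::async(std::launch::async, simulate, n + 1);
              render(current);                                  // overlaps with simulate(n + 1)
          }
      }
      ```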

    • @JorgetePanete
      @JorgetePanete Год назад

      CPUs*

  • @FlorinArjocu
    @FlorinArjocu Год назад +3

    Not so sure about the lack of innovation. Apple broke every record for performance per watt with their M1 SoC. If Qualcomm or someone else manages similar performance on PCs (Linux and Windows), AMD and Intel will have huge problems. Of course, on the Windows side you also need a good ARM Windows + software; not sure we are there (Linux is way ahead, it also runs on RISC-V). Not sure, but Qualcomm was already working on such an SoC.

    • @arm-power
      @arm-power Год назад +1

      - Apple's A16 is the best CPU ... about 40% higher IPC (performance per GHz) than the best x86 (AMD Zen 4).
      - The licensable Cortex-X3 has 10% higher IPC than Zen 4 ... not bad for an off-the-shelf core.
      - Qualcomm's Oryon (the former Nuvia Phoenix) will be at Apple's level in 2024.
      It's going to be very interesting; ARM is already much better in terms of CPU architecture (IPC). In terms of instruction set, ARMv9 offers up to 2048-bit SIMD vectors with SVE2 (where x86 only has AVX-512) and also SME2 for matrix instructions. The PC world can only dream about something similar.
      A huge change is also happening in the SBC market - instead of the Raspberry Pi there is a new star, the RK3588 chip. This SoC uses the famous Cortex-A76; an ARM core from 2018 is rather old, but it has double the IPC of the RPi4 (Cortex-A72) and at 2.4 GHz (up from the RPi4's 1.5 GHz) gives basically 3x the single-threaded performance. Total MT performance will be about 4x higher as it uses 4x A76 + 4x A55. Manufactured on the cheap and capable Samsung 8nm process (used by Nvidia's 3000-series GPUs). Up to 16 GB of LPDDR4X RAM, an NVMe port for an SSD, and prices start under $100 with the Orange Pi 5.
      Pretty nice future ahead.
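      For anyone curious what SVE/SVE2's vector-length-agnostic model looks like in practice, here is a minimal, untested C++ sketch using the ACLE intrinsics from arm_sve.h (the axpy kernel is just an example); the same code runs unchanged on 128-bit through 2048-bit implementations:

      ```cpp
      #include <arm_sve.h>
      #include <cstdint>

      // y[i] += a * x[i], written once, runs on any SVE vector length.
      void axpy(float* y, const float* x, float a, int64_t n) {
          for (int64_t i = 0; i < n; i += svcntw()) {          // svcntw() = floats per vector
              svbool_t pg = svwhilelt_b32_s64(i, n);           // predicate masks the loop tail
              svfloat32_t vx = svld1_f32(pg, x + i);
              svfloat32_t vy = svld1_f32(pg, y + i);
              vy = svmla_n_f32_x(pg, vy, vx, a);               // vy += vx * a
              svst1_f32(pg, y + i, vy);
          }
      }
      ```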

    • @FlorinArjocu
      @FlorinArjocu Год назад +1

      @@arm-power Nice info, thanks! That Raspberry Pi alternative sounds quite interesting; that is quite PC-level performance. By the way, we might also see some more powerful RISC-V SoCs in the coming years, not only the current microcontrollers; China especially will want to get rid of Western technologies and dependence, and as they don't have a CPU architecture of their own, RISC-V might be seen as a good and open alternative.

    • @arm-power
      @arm-power Год назад +1

      @@FlorinArjocu Yes, more RISC-V as ARM tries to change its license model now.
      BTW, that Cortex-A76 is the first desktop-class ARM core with performance/GHz similar to AMD Zen 1 or Intel Skylake, while of course being much smaller (1.2 mm² at 7nm TSMC including 512 kB of L2 cache; AMD and Intel cores are 3-5 mm²) and much more efficient than x86 (an A76 core draws 0.75 W = 750 mW at 2 GHz, which is pretty impressive; 64 cores in Graviton 2 have a 100 W TDP at 2.5 GHz).
      My only complaint is that it would be nice to have the A78 core from 2020 in SBCs (it has 25% higher IPC, almost like AMD Zen 3).

    • @FlorinArjocu
      @FlorinArjocu Год назад

      @@arm-power How is ARM's licensing model changing? I have no idea - good or bad? It is just a matter of time until the latest cores get into the Raspberry Pi; they don't have the money to buy everything, and their customers until now have also favored the cheaper versions. But who knows what the market response will be? Maybe we'll have a surprise.

    • @arm-power
      @arm-power Год назад +2

      @@FlorinArjocu AFAIK ARM is trying to move its licensing model from SoC designers to the final device designer.
      No effect for Apple and their custom CPUs, because Apple is also the device designer as a result of their vertical integration. No effect for other SoC designers who use licensed Cortex cores (most of them).
      But it will affect Qualcomm's new custom Oryon cores, which they acquired from Nuvia's ex-Apple engineers. In this case Qualcomm would be forced to make the whole device branded as Qualcomm, or keep using Cortex cores for the mobile market. Or both at the same time. Either way, sales of those new Oryon cores would be suppressed if not completely stopped.
      That's the rumour. And I don't like that Machiavellian power game by ARM to get easy money. It reminds me of how Intel killed the superior 64-bit Alpha EV8 and HP's PA-RISC, destroyed StrongARM, and tried to kill AMD by shifting to the new IA-64 ISA where AMD didn't have a license. All of Intel's evil started once IBM lost leverage over Intel (IBM had forced Intel to license x86 to anybody, hence so many x86 CPU designers - Cyrix, IDT, NexGen - and had even forced Intel to give AMD the 386 design to allow cheap clones), but then Intel stopped giving out x86 licenses to limit competition.
      Competition is always the fastest way forward, and that's why ARM is the dominant ISA in the world. ARM trying to limit competition is not good as a general principle.

  • @Edward135i
    @Edward135i Год назад +24

    I think game engines need to evolve to take advantage of all those cores; most games run just fine on a 6-core CPU, and an 8-core isn't much of an upgrade when it comes to gaming.

    • @ojhuk
      @ojhuk Год назад +6

      They target the average gaming PC, which at the moment is 6-8 cores; once higher core count CPUs are more widespread they will target them. I believe it's got a lot to do with optimisation and probably budget. Some code is also very hard or impractical/impossible to spread across cores, which is why you see high-frequency, low-latency CPUs in such high demand.
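      To make the "hard to spread across cores" point concrete, here is a minimal C++ sketch of the easy case - a per-entity update with no cross-entity dependencies, chopped into one chunk per hardware thread (all names invented for the example); the workloads the comment refers to are the ones where step i needs the result of step i-1 and this trick doesn't apply:

      ```cpp
      #include <algorithm>
      #include <thread>
      #include <vector>

      // Easy case: each entity's update is independent, so the loop can be split
      // into one chunk per hardware thread with no synchronization inside the loop.
      void update_entities(std::vector<float>& entities) {
          unsigned workers = std::max(1u, std::thread::hardware_concurrency());
          size_t chunk = (entities.size() + workers - 1) / workers;
          std::vector<std::thread> pool;
          for (unsigned w = 0; w < workers; ++w) {
              size_t begin = w * chunk;
              size_t end = std::min(entities.size(), begin + chunk);
              if (begin >= end) break;
              pool.emplace_back([&entities, begin, end] {
                  for (size_t i = begin; i < end; ++i)
                      entities[i] += 1.0f;   // stand-in for the real per-entity work
              });
          }
          for (auto& t : pool) t.join();
      }
      ```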

    • @InnuendoXP
      @InnuendoXP Год назад

      I've been shopping around for a new build, and with how rapidly performance turns to diminishing returns for gaming, I still haven't seen a compelling case for why I'd need any more than a Ryzen 5600, which is currently going for about £130.
      The 5800X has come down massively to £230, but even then, why do I need it? I haven't seen a single case with a performance differential over 10% in gaming, and that price difference is a lot more than 10%.

    • @Ferocious_Imbecile
      @Ferocious_Imbecile Год назад +5

      Forget the graphics. Graphics are more than good enough for gaming now. Superb graphics + crappy AI is the curse of modern gaming. How about using all those cores as virtual servers within the game for multiple independent AI players that will astound you with their actions before they severely kick your ass. Imagine Arma IV with a 64 thread CPU and 40 of the threads are working for the AI. Now THAT will make games worth playing again. Maybe even Total War could be salvaged and made worth playing.

    • @defeqel6537
      @defeqel6537 Год назад

      @@tstager1978 IIRC AMD's lead engineer said the difference is around 3% in low power designs, so yeah, not huge

    • @defeqel6537
      @defeqel6537 Год назад +3

      @@Ferocious_Imbecile This. Games are basically the same now as they were during the 7th-gen console era. Apart from a few exceptions, game design and AI have barely progressed.

  • @hiphophead8053
    @hiphophead8053 Год назад +2

    That's not the future, that's already here - it's been here for 2 years now with the M1 chip from Apple. Or am I missing something?

  • @thepro08
    @thepro08 Год назад +2

    Translation: Intel will be bailed out by the government with government-funded fabs, Ngreedia will do the same, and AMD needs to play ball as the USA pushes for its own fabs. It will be the fab wars, and the richest corps will get the best fabs; that will be their competitive advantage, instead of serving a good product at a competitive price.

  • @mylittlepwny3447
    @mylittlepwny3447 Год назад +4

    Everything is built for the lowest common denominator. Games are built to run great on consoles. The need for high end hardware for gaming is over. You can get by upgrading your PC about once every 5-10 years. Games will still run just fine.

    • @defeqel6537
      @defeqel6537 Год назад

      Yup, buy new PC HW 3-4 years after a new console generation and you'll be fine

  • @cyrusesfahani5935
    @cyrusesfahani5935 Год назад +4

    Is there a reason Coreteks doesn't include the MI300 in his heterogeneous compute point? Just speaking about Nvidia and Intel? Its exclusion is sooo odd and suspicious...

    • @jayvee8502
      @jayvee8502 Год назад

      He could be on the paycheck list. Last time, he was disappointed that RDNA3 didn't perform better than Lovelace.

  • @MotoguyX5
    @MotoguyX5 Год назад +5

    I don't think CPUs have stagnated, rather the need for faster CPUs has. The average consumer hardly has a use for 8 cores, and most games can be played at over 100 fps on CPUs from a couple of gens ago, or on low-end current-gen ones.

  • @Ivan-pr7ku
    @Ivan-pr7ku Год назад +5

    In the 90s the prevailing concept for CPU architecture design was the general-purpose model, with strong abstraction layering and moving as much functionality as possible into software. This philosophy peaked with Itanium, sort of. After that came the era of SMP scaling - as many cores and threads under the hood as possible - and we are now reaching the peak of that approach. The future path for advancement is now thought to be domain-specific acceleration, where tightly integrated dedicated HW blocks provide high-performance functionality at much lower TDP, all accessible through low-level system APIs - GPU, video/audio codecs, ISP/DSP, network packet handling, ML accelerators and many more. Apple's silicon has already proven that the SoC concept can scale to desktop performance levels, where the CPU cores are just one of many building blocks.

    • @Freshbott2
      @Freshbott2 Год назад

      In theory this was already proven with the previous gen consoles. Bad as they are compared to PCs, consoles have amazing performance given the wattage and the package size (of the chip). Which is why I don’t understand why AMD hasn’t been pushing this in the PC space. Now they’ve finally done it outside of Microsoft and Sony’s domain, they did it in the Steam Deck of all places which will go nowhere for AMD. But the Steam Deck is still proof of how good the semi custom SoC model is. By the time they get on it, Apple has already beaten them to the punch and Nvidia probably will too.

    • @user-lp5wb2rb3v
      @user-lp5wb2rb3v Год назад

      AMD needs to release massive desktop APUs.
      Maybe 6C/12T + 24 CUs in an AM4 socket.

  • @cern1999sb
    @cern1999sb Год назад +2

    Why do reviewers never compare compilation performance of CPUs? Productivity always seems to mean 7-zip and exporting videos

    • @TJ-hs1qm
      @TJ-hs1qm Год назад

      Jarrod'sTech includes LLVM and Firefox compilation benchmarks in their scores. Josh sometimes also includes rough Java productivity estimates. According to Jarrod, the M1/M2 chips wipe the floor with AMD/Intel in raw compilation power. However, the vast majority of reviewers are neither programmers nor scientists, and neither is their target audience. So there is no money to be made.

  • @cjjuszczak
    @cjjuszczak Год назад +9

    18 years ago this was already predicted by some in the industry....
    "I think CPU's and GPU's are actually going to converge 10 years or so down the road. On the GPU side, you're seeing a slow march towards computational completeness. Once they achieve that, you'll see certain CPU algorithms that are amicable to highly parallel operations on largely constant datasets move to the GPU. On the other hand, the trend in CPU's is towards SMT/Hyperthreading and multi-core. The real difference then isn't in their capabilities, but their performance characteristics.
    "
    "When a typical consumer CPU can run a large number of threads simultaneously, and a GPU can perform general computing work, will you really need both? A day will come when GPU's can compile and run C code, and CPU's can compile and run HLSL code -- though perhaps with significant performance disadvantages in each case. At that point, both the CPU guys and the GPU guys will need to do some soul searching!" - Tim Sweeney - Epic Games Co-founder and Unreal Engine creator - February, 2004
    www.beyond3d.com/content/interviews/18/4

    • @JorgetePanete
      @JorgetePanete Год назад +1

      CPUs*
      GPUs*

    • @user-lp5wb2rb3v
      @user-lp5wb2rb3v Год назад

      It's called an APU; the MI300 exists. We just need to wait for an order-of-magnitude increase in efficiency/density, and then consumers can get a fraction of that power.

    • @cjjuszczak
      @cjjuszczak Год назад +1

      @@user-lp5wb2rb3v
      No, an APU is just closing the connection distance between discrete CPU and GPU silicon.
      What Tim Sweeney foresaw is a SINGLE silicon processor that uses the SAME transistors for BOTH general computing and graphical computing.
      Think of what's happening with "memristors", where we combine compute and memory using the SAME silicon transistors, and NOT TWO separate silicon entities.
      Likewise, combining CPU and GPU processing leads to a processor that targets neither "CPU" nor "GPU" tasks, because the same silicon can process BOTH.
      A close idea to the goal is what we see with AI rendering, such as NeRFs (Neural Radiance Fields), whereby you're not using polygons, or even the traditional raster pipeline; it's a new way of doing graphics that doesn't specifically target a "CPU" or a "GPU" :)

  • @seylaw
    @seylaw Год назад +5

    Conceptually, this sounds a lot like an evolution of what AMD had in mind 10 years ago with HSA, taken to the next level with chiplets and disaggregation. With CXL and other hardware and software technologies arriving around this time for more efficient communication between CPU, GPU and memory, I can see how this could totally disrupt the market. But I am also still waiting for the new SFF-TA-1002 connector to make its debut in the server and consumer space, as it promises to get away from PCIe for more cost-effective motherboards and even more possibilities to connect different devices. As this connector was already developed by the Gen-Z consortium years ago, I wonder why it is taking ages to get commercialized.

  • @xpkareem
    @xpkareem Год назад +11

    CPUs have gotten so fast that, at the consumer level, you really don't need to upgrade very often. The chip is going to do everything you need or want to do for years and years, unless you have very specific, unusual requirements. I always end up upgrading not because of the chip but because the version of Windows I'm using stops doing things I need it to do before the chip gets too slow. Both 7 and 10.

  • @GegoXaren
    @GegoXaren Год назад +1

    > Pronouncing Å as A.
    > Pronouncing Ångström as Angstom.
    Yeah, you should probably look up how those are pronounced.

  • @Variarte_
    @Variarte_ Год назад +1

    The CPUs aren't stagnating; 5.0 GHz and 16 cores is, frankly, absurd, with a ridiculous amount of performance, and the programs that take advantage of it prove it. What has stagnated is software. The only thing AMD and Intel can do is show the market that low-core-count CPUs are in the past, and that multi-threading as much as absolutely possible is the best thing to do moving forwards.

  • @samkim6127
    @samkim6127 Год назад +1

    The URcdkeys store that Coreteks is advertising is a scam. The Visio Pro 2019 product key that I bought did not work. My credit card was charged from a Hong Kong location. The company that charged me did not match the advertised website name. You can be sure any keys that are far below Microsoft's retail prices are grey- or black-market keys. I unfortunately learned the hard way.

  • @supernova874
    @supernova874 Год назад +3

    A startup for an RT accelerator? It won't happen, because people are afraid of the "war" with major companies and the inevitable buyout (anyone remember Ageia? Yes, I had an Ageia PhysX accelerator before Nvidia bought them and stopped supporting it).

  • @tringuyen7519
    @tringuyen7519 Год назад +1

    So you’re just saying that Apple was right about the M1 CPU and its unified memory. Right?

  • @prashanthb6521
    @prashanthb6521 Год назад +2

    I like the Apple M1/M2 design. They will remain the king for some time, I think. I hope they enter the server market.
    CXL will be the biggest upcoming thing, in my opinion.

    • @maxjames00077
      @maxjames00077 Год назад

      An SoC for the server market makes no sense. Repairability and upgradeability are awful with SoCs.

  • @hello_there0
    @hello_there0 Год назад

    0:00 The CPU market is not at all stagnant when compared to the GPU market. Performance per $ is no better today than 3 and a half years ago.

  • @AlexSeesing
    @AlexSeesing Год назад +1

    I have an issue with "XPU" from Intel. What does it stand for? Xenophobic Processing Unit? It's weird and detached from reality if the "x" is meant as a variable to be replaced by anything. They could have used the more logical Z, which is pronounced zeta and in a way means "the end". The perfect conclusion to the era of silicon. Yet Intel isn't taking that advantage at the moment. What is Intel thinking?

  • @jursh3936
    @jursh3936 Год назад +7

    Idk why, but my 7950X scored 37755 in R23 and clocked to 5.2 all-core on its own. Can there really be that large of a difference in clocks between two chips?

    • @christroy8047
      @christroy8047 Год назад +2

      Absolutely. More even. It's atypical though.

    • @ebonysweetroll
      @ebonysweetroll Год назад

      You must have a really good cooler

    • @jursh3936
      @jursh3936 Год назад

      @@ebonysweetroll ya i got a 420mm aio

    • @jursh3936
      @jursh3936 Год назад

      @@madd5 ya its all normal. i just learned that it overclocks itself the more cooling it has the higher it goes and i have a 420mm aio

  • @MatrixJockey
    @MatrixJockey Год назад +3

    How do you know that AMD will not go down the customization route as well? They have the graphics IP and compute IP that they can leverage.

    • @skypickle29
      @skypickle29 Год назад

      AI cores as chipsets/chiplets will enable a third dimension to computing. Software is already leveraging AI, with applications showcasing things like conversational UIs and graphics like Stable Diffusion. AI can enable "automatic customization" of each PC as the user spends more time with it. Gaming will offer smart assistants as well as better opponents, not just better graphics with frame insertion. But clever coding will be needed to hide the increasing latency of individual processes.

  • @MetalGearMk3
    @MetalGearMk3 Год назад

    We can only shrink transistors for so many more years...

  • @0stre
    @0stre Год назад +2

    I'm having a great time playing on my 20 year old computer, I take good care of it and don't plan to upgrade it. Oh... yes, I've been working in IT for 10 years and I'm deep in it, but I really don't need anything faster for personal use. Is something wrong with me?

    • @mapesdhs597
      @mapesdhs597 Год назад +1

      Nothing wrong at all. One of my main systems is an SGI O2 from 1997 with only a 600 MHz CPU, but it does what I need just fine. For gaming, I used a Sandy Bridge PC (i7 2700K) for almost a decade before finally upgrading to a 5600X, but for most ordinary tasks a system from circa 2011 is still perfectly viable. Last week I updated some benchmarking setups (drivers, Afterburner and suchlike), including a Q9550 and an i7 950; the only time I could tell any significant performance difference was when unpacking an archive or something, in all other regards both systems felt entirely normal. Having any kind of SSD makes all the difference.
      Just curious, what is your 20-year-old computer?

  • @tufttugger
    @tufttugger Год назад

    AMD fans have been talking about a salad-bar-like chiplet strategy for years. The MI300 platform has 8 compute dies. Theoretically they could each be any kind of compute (as long as they are the right size and layout for connecting to the package), customized per client need - CPU cores, compute cores focused on any type of integer or floating-point combo, FPGAs, or even dies from clients. Intel has no lead here. What it usually comes down to is volume. Custom packages won't be produced unless there is expected to be a large enough volume (or a high enough price) for the packages made. With the right volume and price, AMD has the lead in chiplets, whether semi-custom or not.

  • @adultlunchables
    @adultlunchables Год назад +40

    I'm not even that old and I remember when a 3 year old computer would be considered a dinosaur. Anyway, love these videos Coreteks, keep up the good work.

    • @peterjansen4826
      @peterjansen4826 Год назад +4

      It is not that extreme now, but a 7700X vs a 3700X makes a huge difference, especially for gaming. Pretty good for 3 years (July 2019 to October 2022); contrast that with the progress between 2011 and 2017. Same for graphics cards: between 2016 and 2019 progress stagnated strongly, between 2019 and 2022 we had big progress. AMD brought the competition back for both CPUs and GPUs, now they need to lower the price a bit.

    • @adultlunchables
      @adultlunchables Год назад +8

      @@peterjansen4826 I get what you're saying, but there was a time when, if your CPU was 3 years old, you had zero chance of playing any new video games. Nowadays, you can still play the games, just maybe at reduced settings or reduced resolution. I'm not saying I expect to ever get back to how things used to be, but it's sad to have seen the golden era of computing and to live through the downfall. This is off topic to some degree, but we should have known something was wrong when they started selling RGB gaming rigs. It used to be that computers were exciting enough on their own; you didn't have to put LED lights on them to make them desirable.

    • @Santiago-sh3cq
      @Santiago-sh3cq Год назад

      @adultlunchables you forgot to take your meds again, didn't you grandpa?

    • @MrMeanh
      @MrMeanh Год назад +4

      @@adultlunchables I remember buying a $2000 (almost $4000 today when adjusted for inflation) PC in the mid 90's only for new games to not even launch 2-3 years later. It was almost as bad in the early 00's, you got 3-4 years at most before your PC was junk. Today I can play and start most new games on my secondary PC (i5 6400/GTX970/8GB memory) that is a bit more than 7 years old.

    • @raptorhacker599
      @raptorhacker599 Год назад +2

      @@MrMeanh lol im playing on my pentium from 2014

  • @MrDebranjandutta
    @MrDebranjandutta Год назад +4

    I think photonic CPUs (Lightmatter) will replace traditional silicon ones within a decade.

    • @realtonaldrum
      @realtonaldrum Год назад +3

      Yes. Are there already prototypes? Would like to see or read things how they can be realized.

    • @MrDebranjandutta
      @MrDebranjandutta Год назад

      @@realtonaldrum Yes there are; this might be a good place to start probing. Right now the tech is limited to neural mesh networks and AI, but general-purpose computing won't be far behind.
      ruclips.net/video/mF4QendzazQ/видео.html

    • @nathangamble125
      @nathangamble125 Год назад

      @@realtonaldrum Yes, there are prototypes. One was built at the University of Oxford in June.

    • @MrDebranjandutta
      @MrDebranjandutta Год назад

      @@Winnetou17 a decade is a lifetime in tech terms. How long did it take smartphones to replace dumb cellular phones?

    • @JorgetePanete
      @JorgetePanete Год назад

      CPUs*

  • @Aranimda
    @Aranimda Год назад +1

    I'm still at 14nm Skylake.

  • @kemsat-n6h
    @kemsat-n6h Год назад +2

    Dude, you should do a whole album of “listen to my nice deep soothing voice, and fall asleep feeling safe” lmao

    • @Shieftain
      @Shieftain Год назад

      Indeed, he should consider starting an ASMR channel.

  • @WXSTANG
    @WXSTANG Год назад

    Hate to say it, but HBM2 is too slow, and the memory bus is not optimized for 64-bit workloads. We miners found out real fast with Vega that the higher clockspeeds you can pull out of the HBM2, the better the performance of the GPU. This all leads me to believe Vega wasn't the bottleneck, the memory was. With SAM being supported on Vega this further reinforces that view, because with Vega, SAM does nothing but slow the performance of the system. Also, using my mining settings with the GPU cores at 1.2 GHz and the memory at 1.1 GHz, I got better frame rates.
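    One way to test the "memory was the bottleneck" theory on any machine is a STREAM-style bandwidth microbenchmark; here is a minimal CPU-side C++ sketch (array sizes are arbitrary) - if the GB/s figure, rather than core clock, is what tracks your hash rate or frame rate, the workload is memory-bound:

    ```cpp
    #include <chrono>
    #include <cstdio>
    #include <vector>

    // STREAM-style triad: a[i] = b[i] + s * c[i]. The bytes moved per iteration
    // are known, so elapsed time gives an effective bandwidth figure.
    int main() {
        const size_t n = 1 << 26;                       // ~64M floats per array
        std::vector<float> a(n), b(n, 1.f), c(n, 2.f);
        const float s = 3.f;

        auto t0 = std::chrono::steady_clock::now();
        for (size_t i = 0; i < n; ++i) a[i] = b[i] + s * c[i];
        auto t1 = std::chrono::steady_clock::now();

        double sec = std::chrono::duration<double>(t1 - t0).count();
        double gb = 3.0 * n * sizeof(float) / 1e9;      // read b, read c, write a
        std::printf("triad: %.2f GB/s\n", gb / sec);

        volatile float sink = a[n / 2];                 // keep the loop from being optimized away
        (void)sink;
        return 0;
    }
    ```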

  • @m4r_art
    @m4r_art Год назад

    I've been looking at computers since 2001. Back then having a Windows XP Packard Bell was the thing. Power wasn't quite on par with today's standards. In that period it was all about Dell, HP and Packard Bell. In 2005-2006 people were very big on playing World of Warcraft as an MMO. But already in 2007-2008, back when I was using PCs for music with DAWs and VSTs, elite composers often had 32GB RAM computers. Sure, it would be slower RAM, but 32GB nonetheless. GPUs weren't usually part of the deal, and most consumers had very basic fanless GPUs with 512MB VRAM at best during the 2000s. Over the next 7-8 years, from 2005 to 2012, GPUs started getting big on VRAM, until around 2016 with the GTX 1080 solidifying the standard at ~10GB. It has mainly been GPUs pushing forward since then, with 24GB VRAM currently being the golden sweet spot. The CPU, on the other hand, wasn't improving as much. Realistically, since 2005 it has improved 7-10% per year, getting about 150%-200% faster in the span of 20 years.

  • @jklappenbach
    @jklappenbach Год назад

    You're fixated on Intel and AMD, and you're missing the bigger picture.
    ARM CPUs are taking over the data center slowly but surely. And neither AMD nor Intel are manufacturing these chips. Amazon is actually making its own, called Graviton, which its internal teams are switching to given the lower power consumption and cheaper price than anything offered by team red or blue.
    Apple has provided the most revolutionary take on the future of personal computing platforms with its M1/2 line of SoCs, ditching the inefficient PCI bus entirely, and making all memory on-die and shared between both CPU and GPU.
    If anything, Intel's future architectures are simply copying Apple's lead.
    Yet you mentioned nothing about Apple or Graviton.

  • @RevoEnerge
    @RevoEnerge Год назад +5

    3 am , time to watch...

  • @The_Chad_
    @The_Chad_ Год назад

    ".. it would seem Intel and Nvidia's strategies seem better..." What a revelation. Maybe AMD will come to terms with this if they ever get rid of all the sleezball accountants running the show over there that are focused on nothing but short-term profits

  • @FrankHarwald
    @FrankHarwald Год назад

    Legacy system memory bus topology and technology are what hold CPUs back - more so than they (also) hold back GPUs, but GPUs can mostly circumvent the efficiency problem because their memory and compute patterns are easy to predict, so the memory can be prefetched easily - which you can't always do with the algorithms that are supposed to run on CPUs.
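    Where a CPU-side access pattern is data-dependent, the programmer can at least hint it; a minimal sketch (assuming GCC/Clang's __builtin_prefetch) of prefetching an indirect gather a few iterations ahead - which helps here, but does nothing for true pointer chasing where the next address isn't known yet:

    ```cpp
    #include <cstddef>

    // Indirect gather: the next address depends on data (idx[i]), so the hardware
    // prefetcher can't predict it. Prefetching a few iterations ahead in software
    // hides part of the DRAM latency for this pattern.
    float gather_sum(const float* values, const int* idx, size_t n) {
        constexpr size_t lookahead = 16;
        float sum = 0.f;
        for (size_t i = 0; i < n; ++i) {
            if (i + lookahead < n)
                __builtin_prefetch(&values[idx[i + lookahead]], 0 /*read*/, 1 /*low locality*/);
            sum += values[idx[i]];
        }
        return sum;
    }
    ```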

  • @livefromhollywood194
    @livefromhollywood194 Год назад +2

    Huh, apparently I won the silicon lottery. I got a 36792 with no tuning.

    • @johnbash-on-ger
      @johnbash-on-ger Год назад

      Congratulations! FYI, the silicon "lottery" with chips is a real thing; variations in production happen. Also, later chips can have better performance due to the "pipe cleaning" effects of producing the previous chips.

  • @Bourboelettah
    @Bourboelettah Год назад

    Foccen Audio is that!?!? Sandpaper audio!???

  • @TheLinkedList
    @TheLinkedList Год назад

    Why pay a premium for a 7950x if you're not getting the highest quality bins... I would RMA that thing, I bet it's a time bomb and will gradually degrade

  • @velo1337
    @velo1337 Год назад +1

    For servers, having around 128GB of memory on the chip itself would be nice :)

  • @OsX86H3AvY
    @OsX86H3AvY Год назад +1

    I want to see a chiplet-based CPU with x86 and ARM packages, big and LITTLE of both. Keep non-cache memory and the GPU separate and it can fit. Make it run ANYTHING.

  • @jamesmetz5147
    @jamesmetz5147 Год назад +1

    Thank you for your insight. I think Nvidia may have killed their golden gaming goose for the future with unrealistic pricing.

  • @MisterRorschach90
    @MisterRorschach90 Год назад

    How crazy would it be if the future of personal computing was using special technology to just unlock your brain and use it as the computer. Or even genetically modified brains that serve as the brains of the computer.

  • @vladioanalexandru4222
    @vladioanalexandru4222 Год назад

    Where do we go from here?
    And should we really care?
    The end is finally here
    God have mercy
    Now we've rewritten history
    The one thing we've found out
    Sweet taste of vindication
    It turns to ashes in your mouth

  • @nikolaiizotov6063
    @nikolaiizotov6063 10 месяцев назад

    You are right that the industry is stagnating... I predict that worthwhile products (really new developments) that are worth purchasing will appear no earlier than 6 years from now (((.

  • @rayhaanomar1200
    @rayhaanomar1200 Год назад

    Good video as usual.
    You blocked me on Twitter and I’ve never said anything negative about you or your opinions. It was probably an accident but I’d like to be unblocked.

  • @sadasd-n2f
    @sadasd-n2f Год назад

    3D cache technology, and the general idea of stacking cores/memory on top of each other, is definitely the future of CPU innovation.

  • @409raul
    @409raul Год назад

    8:26 You think Nvidia will have little impact on the server market?? Please explain, why so?

  • @del46_60
    @del46_60 Год назад

    Latest discrete GPU numbers for Q3 '22: Nvidia's market share is up to 88%, AMD fell to only 8%.

  • @HPISRP
    @HPISRP Год назад

    Using FPS in games in order to quantify CPU stagnation? Wear your clown makeup proudly.

  • @savetheplanet8450
    @savetheplanet8450 Год назад

    How about going from triangles to voxels?
    At high resolutions triangles make no sense (100 triangles per pixel?)

  • @Mateus01234
    @Mateus01234 Год назад

    5:51 Is it just me, or does the graph make it look like AMD is doing better because of a lack of context?

  • @short5stick
    @short5stick Год назад

    It is time to get rid of the X86 and go to ARM processors. We need smaller and faster computers. No more X86. Time to move on.

  • @krimsonsun10
    @krimsonsun10 Год назад

    6:04 YESS someone finally said it. I am so sick of the gamer RGB.. Full steampunk case please

  • @WXSTANG
    @WXSTANG Год назад +1

    AMD is in a unique position to bring to market a motherboard with a CPU / GPU socket, doing similar to what Falcon shores is doing, but better. Think a PS5 with swappable CPU / GPU.

    • @umamifan
      @umamifan Год назад

      Aren't the CPU and GPU in a PS5 just an APU?

  • @zeekmx1970
    @zeekmx1970 Год назад

    Steve at Gamers Nexus isn't going to like this video because he said 7950x is not for gaming.

  • @JayzBeerz
    @JayzBeerz Год назад

    We need powerful APU’s. To eliminate the need for a dedicated GPU.

  • @Maisonier
    @Maisonier Год назад

    I won't change my 7700k until they include 32gb HBM3 cache.

  • @Farren246
    @Farren246 Год назад

    I remember all of this from 20 years ago. You think the tech has finally caught up to the theory?

  • @simplysimon966
    @simplysimon966 10 месяцев назад

    Why is it stagnant? There's plenty of interest in ARM.

  • @starkistuna
    @starkistuna Год назад

    I think 6-core chips should already go the way of the dodo. Having consoles with 8 cores - the Xbox Series X as well as the PS5 - makes developers program to get every ounce of performance out of those, but they fall short when they develop for or port to PC; the baseline should have been 8 cores since at least 2 years ago. Also, it's a shame new tech gets locked behind artificial barriers for price segmentation. I would love to see what tech demos could be made with an EPYC CPU combined with an Instinct MI210 for gaming.

    • @starkistuna
      @starkistuna Год назад

      @@Winnetou17 8-core chips have been lingering at the $400 launch price since 2016; most AMD chips come from cut-down Epyc processors, so there's really nothing stopping them from introducing entry-level 8-cores at the $250 mark. When Threadripper was introduced, their top-of-the-line CPU cost just $999 for 16 cores; today a 64-core TR 3990X costs $4000. Last gen's AMD Ryzen 9 5950X 16-core can be bought for $500; half of that would be $250. They are super cheap for them to make.

  • @jasoncombs3232
    @jasoncombs3232 Год назад

    Maybe one of our children will get quantum computing up and running to the masses.

  • @furythree
    @furythree Год назад

    he did it
    the madlad was right lol. Look where we are. Apple silicon dominating the market

  • @tuckerhiggins4336
    @tuckerhiggins4336 Год назад +1

    Ponte Vecchio is a joke, it has been delayed, again, and again, and again

  • @dinozaurpickupline4221
    @dinozaurpickupline4221 Год назад

    nobody is talking about AI chip that can manage porn in soc,its also the need of the hour

  • @drewwilson8756
    @drewwilson8756 Год назад

    Critical manufacturing system mutations incoming! What an exciting time to be alive.

  • @terribleatgames-rippedoff
    @terribleatgames-rippedoff Год назад +3

    Excited for what 🤡-take this video presents.

  • @FOREST10PL
    @FOREST10PL Год назад

    Poor-quality silicon? I doubt silicon quality would drop clocks by 900 MHz. People are hitting 5.4 all-core, so I think 5.2 all-core should be a safe bet with PBO. There's something wrong with your system.

  • @EthelbertCoyote
    @EthelbertCoyote Год назад +1

    Do you think this would add a niche for a step after the foundry - a specialty chiplet assembly lab? My thinking is that construction and interconnects may have to become a specialty discipline.

  • @BoomTag
    @BoomTag Год назад

    Curious why the audio starts to duck all of a sudden @6:35. Great video either way

  • @dimitrisgakis9206
    @dimitrisgakis9206 Год назад

    its ok if you dont understand almost anything from this video, we are both in this together

  • @JamesFox1
    @JamesFox1 Год назад

    i am thinking more of a board that resembles a gpu based led graphical access with built in cpu compute power for connect and process variables

  • @KeviPegoraro
    @KeviPegoraro Год назад

    I upgraded my CPU on average every 13 years, my GPU every 7 years.

  • @Prenihility
    @Prenihility Год назад

    I think Intel Nova Lake will be nasty and change this trend of stagnation. I'm hoping for Nehalem-like performance at the time of its release. 4.0GHz on air cooling. Ahaha. Those were the days. Unheard of.

  • @lencas112
    @lencas112 Год назад

    my 7600x stays pretty much all the time around 5.5

  • @josephjocson1385
    @josephjocson1385 Год назад

    09:40 "Custom" is unlcok when you subscribed 😅😅

  • @Hjominbonrun
    @Hjominbonrun Год назад

    +1 for the steampunk look.
    Agree that gamer look is not great.

  • @clementechs
    @clementechs Год назад

    The robot voice is the only drawback of this video.

  • @GegoXaren
    @GegoXaren Год назад

    Can you please.... please do something about the echo in your voice recordings?
    Look up Booth Junkie's video on how to mitigate reverb and echoes.

  • @kbubuntu6048
    @kbubuntu6048 Год назад

    ARM IS the future, Apple and Nvidia told us.

  • @TJ-hs1qm
    @TJ-hs1qm Год назад

    ecosystem synergies = monopolization of the market

  • @phrasheekwerk354
    @phrasheekwerk354 Год назад

    There will be no raytracing accelerators, as the PCIe bus is too slow.

  • @meemahmed1804
    @meemahmed1804 Год назад

    We need a 7nm or 5nm hybrid CPU at a low price from AMD.

  • @arjunharikumar7176
    @arjunharikumar7176 Год назад

    ok mr traversal coprocessor and mr 1mill per bitcoin

  • @lilblackduc7312
    @lilblackduc7312 Год назад

    Audio level dropped noticeably after 6:00-minutes...Great video using much Research/Knowledge & Lively Editorial Comment...Thank you! 🇺🇸 😎👍☕

  • @oldskool9783
    @oldskool9783 Год назад +4

    If anyone can screw up a rollout, it's Intel. I have zero trust in Intel making good on Falcon Shores; this will be 10nm all over again.
    I put my money on AMD. There is no way Lisa Su is sitting idle while the market matures.

  • @Mopantsu
    @Mopantsu Год назад

    AI hardware acceleration of Unreal 5's own RT-like methods would be cool.

  • @phrasheekwerk354
    @phrasheekwerk354 Год назад

    I got a 5950X and used a 280mm AIO to cool it. The top speed I could get without the cores going above 82 degrees was a disappointing 4.1 GHz. It was tested with Prime95, which seems to have the most intense combination of instructions to really stress that processor. Every review and site I saw showed 4.7 GHz plus, all testing with Cinebench.

  • @Zerrotox
    @Zerrotox Год назад

    I want Intel to succeed so it keeps AMD in check and innovating. But I think the video is a bit misleading, as Intel already has problems keeping to its own timelines, and AMD's server solutions are simply too OP for AMD to be worried about Intel's future products, not to mention that Nvidia does not even have chiplet tech yet. As I see it, whether we like it or not, after RDNA4 AMD will be king in all markets.

  • @selohcin
    @selohcin Год назад

    I spent so much of this video being confused because I couldn't tell if he was saying "disaggregated" or "desegregated". Those two words sound the same when spoken by most non-native English speakers, yet their meanings are opposite!

  • @ruthlessadmin
    @ruthlessadmin Год назад

    FPGAs fascinate me...the idea that if a CPU or chipset had one integrated, applications could optimize themselves at the lowest level. I don't know if an FPGA could match the performance of an ASIC, but seems like they should far exceed what any general purpose CPU could do by itself.

  • @VictorMistral
    @VictorMistral Год назад

    I wonder if the software requirement for HBM+DDR is just in kernel space or not...
    But I think it could be done transparently by kernel memory tiering. An application could add some logic to "specify" its need for RAM, to reduce its need to sit in the lower tiers, for better memory utilization, but it shouldn't need to...
    Memory tiering is already done to handle NUMA nodes and SSD/HDD caching, and work is being merged into the Linux kernel to handle network-attached and CXL-attached memory...
    Windows is probably doing the same...
    Ever since I saw Haswell with the eDRAM "L4" cache, and how much perf it gave (in certain scenarios), I was looking forward to embedded memory on "APUs"... It really surprised me that it wasn't done in newer generations; I expected them to come up with a line of processors with 4 or 8 GB of internal RAM (and maybe a small fast SSD built in) that would be a full "SoC" and could still be expanded with external RAM. In Intel's case, I expected them to combine one to a few GB of RAM with just a bit of Optane (since, according to Intel, Optane was supposed to be cheaper than RAM) and sell that as a mobile thin-and-light SoC; it would have done great for light-load industrial computers (signage and the like) and Chromebooks...
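    On Linux, the "application specifies its need" idea can already be approximated from user space when the fast tier (e.g. on-package HBM) is exposed as its own NUMA node; a hedged sketch using libnuma follows - the node number is only an example and would normally be discovered at runtime:

    ```cpp
    #include <numa.h>      // libnuma; link with -lnuma
    #include <cstddef>
    #include <cstdio>

    int main() {
        if (numa_available() < 0) {
            std::fprintf(stderr, "no NUMA support\n");
            return 1;
        }
        // Example only: assume node 1 is the fast (HBM) tier and node 0 is plain DDR.
        // The hot working set gets placed on the fast tier; everything else stays
        // wherever the kernel's default policy (or its tiering) puts it.
        const std::size_t hot_bytes = 256UL << 20;
        void* hot = numa_alloc_onnode(hot_bytes, 1);
        if (!hot) return 1;
        // ... use the buffer ...
        numa_free(hot, hot_bytes);
        return 0;
    }
    ```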

  • @thelogicmatrix
    @thelogicmatrix Год назад

    A raytracing card would be amazing