NVidia is launching a NEW type of Accelerator... and it could end AMD and Intel

  • Published: 29 Sep 2024
  • Urcdkeys.Com 25% code: C25 【Mid-Year super sale】
    Win11 pro key($21):biitt.ly/f3ojw
    Win10 pro key($15):biitt.ly/pP7RN
    Win10 home key($14):biitt.ly/nOmyP
    office2019 pro key($50):biitt.ly/7lzGn
    office2021 pro key($84):biitt.ly/DToFr
    MS SQL Server 2019 Standard 2 Core CD Key Global($93):biitt.ly/oUjiR
    Support me on Patreon: / coreteks
    Buy a mug: teespring.com/...
    My channel on Odysee: odysee.com/@co...
    I now stream at:
    / coreteks_youtube
    Follow me on Twitter: / coreteks
    And Instagram: / hellocoreteks
    Footage from various sources, including official YouTube channels from AMD, Intel, NVidia, Samsung, etc., as well as other creators, is used for educational purposes in a transformative manner. If you'd like to be credited, please contact me.
    #nvidia #accelerator #rubin

Comments • 370

  • @CrashBashL · 3 months ago · +203

    No one will end anyone.

    • @Koozwad · 3 months ago

      AI will end the world, in time

    • @pf100andahalf · 3 months ago · +13

      Some may end themselves.

    • @modrribaz1691 · 3 months ago

      Based and truthpilled. This faked-up competition has to continue for as long as possible.
      It's a 24/7 publicity stunt for the entirety of this market, with Nvidia basically playing the big bully.

    • @PuppetMasterdaath144 · 3 months ago · +10

      I will end this conversation.

    • @CrashBashL · 3 months ago · +4

      @@PuppetMasterdaath144 No, you won't, you Puppet.

  • @Siranoxz · 3 months ago · +121

    We are in dire need of a diverse GPU market.

    • @Koozwad · 3 months ago · +7

      yeah what happened to "diversity is strength" 😂

    • @mryellow6918 · 3 months ago · +4

      We have a diverse market. They aren't a monopoly because they're the only ones; they're a "monopoly" simply because they're the best.

    • @oussama123654789 · 3 months ago · +1

      Sadly China still needs at least 5 years for a product worth buying.

    • @Siranoxz · 3 months ago · +1

      @@Koozwad I have no idea what you're trying to convey; it's just about more companies building GPUs, nothing else.

    • @Siranoxz · 3 months ago

      @@mryellow6918 Sure, that is one factor; or these invisible GPU manufacturers don't promote their GPUs and optimize game support the way NVIDIA and AMD do.
      But being the best comes with a nifty price, huh?

  • @K9PT · 3 months ago · +9

    The CEO of Nvidia only said IA... IA... IA a dozen times, and gaming only once... SAD TIMES

    • @Jdparachoniak · 3 months ago · +1

      To me he said money, money, money lol

    • @fabianhwnd6265 · 3 months ago · +1

      Nvidia has outgrown the gaming market; they could abandon it and wouldn't notice the losses.

    • @LukeLane1984 · 3 months ago · +1

      What's IA?

    • @K9PT · 3 months ago · +2

      @@LukeLane1984 lol

  • @awesomewav2419 · 3 months ago · +37

    Jesus, Nvidia gives the competition no rest.

    • @visitante-pc5zc · 3 months ago · +4

      And consumers' pockets.

    • @maloxi1472 · 3 months ago

      @@visitante-pc5zc As long as it aligns...

    • @fred-ts9pb · 3 months ago

      It's over for AMD before it started. Keep throwing $50M a year at a failed AMD CEO.

    • @Acetyl53 · 3 months ago

      What a weird comment. Probably a bot or shill. Creep. Weirdo.

  • @hdz77 · 3 months ago · +34

    I might actually end Nvidia if they keep up their ridiculous pricing.

    • @gamingtemplar9893 · 3 months ago

      Prices are set by the consumers, the market, not the company. If anything, prices would only go down with competition. Value is SUBJECTIVE; there is no intrinsic value in anything. You will pay what the market wants. If the pricing were "ridiculous" as you say, then nvidia would be losing money; it is not, so it is not ridiculous. Learn economics before saying communist shit.

    • @__-fi6xg · 3 months ago · +5

      Their customers, other billion-dollar companies, can afford it with ease; no need to worry.

    • @SlyNine · 3 months ago · +4

      Unfortunately the prices are high because that's what people are paying.

    • @AlpineTheHusky · 3 months ago · +1

      The pricing isn't bad when you actually look at what their products provide.

    • @dagnisnierlins188 · 3 months ago

      @@AlpineTheHusky For business, prosumers, and the 4090 in gaming; everything else is overpriced.

  • @christophermoriarty7847 · 3 months ago

    From what I'm gathering, this accelerator is a type of cache for the GPU, which means it won't be a dedicated card in consumer products; it will probably be part of the video card itself.

  • @AbolishTheInternet · 3 months ago · +1

    10:50 Yes, I'd like to use AI to turn my cat photo into a protein.

  • @Raphy_Afk · 3 months ago · +19

    This is extremely interesting, I hope they will release discrete accelerators for desktop users

    • @hlbjk · 3 months ago · +5

      It's extremely boring actually.

    • @gamingtemplar9893 · 3 months ago

      @@hlbjk Actually, you are boring. Stop spamming your stupidity and go watch cat videos.

    • @visitante-pc5zc · 3 months ago

      @user-et4qo9yy3z yes

    • @Raphy_Afk · 3 months ago

      @@hlbjk For those who only use their PC for gaming.

    • @maloxi1472 · 3 months ago

      @@hlbjk Oh look mommy! The "akshually" guy is real!

  • @ageofdoge · 3 months ago

    Do you think Tesla will jump into this market at some point? Would FSD HW4 be competitive? I don't know if this is a market they are interested in, but it seems like they have already done a lot of the work as far as low-power inference goes.

  • @Sam_Saraguy · 3 months ago

    Super interesting. Will be following developments.

  • @denvera1g1 · 3 months ago

    28nm to 4nm is only a 3x density increase?
    But isn't the 4nm 8700G like 2.5x more transistors than the 7nm 5700G? (Both are around 180mm².)
    I guess clock speed makes a difference, but weren't clock speeds lower on 28nm?

  • @HandyAndyG · 3 months ago

    Wait a minute. If I go back to the archives, aren't you the guy who was saying that Nvidia was finished and AMD would overtake them? You were very, very negative on NVDA not so long ago.

  • @patrikmiskovic791 · 3 months ago

    Because of price I will always buy AMD GPUs and CPUs.

  • @oraz. · 3 months ago · +1

    I don't care about LLM AI assistants. If Gaussian splatting takes over rendering, then OK.

  • @rodfer5406 · 3 months ago

    Video error: it blacks out.

  • @TfearWasHere · 3 months ago · +6

    Is this just AI shit?

    • @gamingtemplar9893 · 3 months ago · +2

      oh sure, AI shit, that simple. You smart.

    • @awindowskrill2060 · 3 months ago

      Doesn't seem like AI slop. Just tabloid-tier garbage to direct clicks to their windows key selling scam.

    • @TfearWasHere · 3 months ago

      @@gamingtemplar9893 Sounds like an AI voice because the inflection never changes.

  • @boynextdoor931 · 3 months ago · +1

    For the sake of the market, please step up, other companies.

  • @davidtindell950 · 3 months ago · +4

    Do you think there will be a competitor or alternative to NVidia within the next 18 months?

    • @PaulSpades · 3 months ago · +1

      There were dozens of startups developing exactly this kind of fixed-function accelerator for inference. Some have already run out of money; some have been poached by the bigger players like Apple, Google, Amazon... Some are developing in-memory computing and analogue logic, which will probably never see the light of day this decade.
      Unless you can get TPUs from Google, there's not much actual commercial hardware you can buy that's more efficient than Nvidia's, if you need horsepower and memory.
      If you want to run basic local inference, any 8-gig GPU will do, or any of the new laptop processors that can do around 40 TOPS.

    • @nick_g · 3 months ago · +1

      Nope. Even if a competitor started now and copied the ideas in the video, it would take about 18 months to design, validate, and produce them AND that would be version 1. NVDA is pushing ahead faster than anyone can keep up with

    • @omnymisa · 3 months ago · +1

      I don't think anyone can take Nvidia's spot, but anyone can enter the market, show what they can bring, and try kicking AMD and Intel. Nvidia looks very secure as the leader, but sure, we would be very glad if there were some other strong competitors around, because it feels like a monopoly, and that's not good.

    • @ps3301 · 3 months ago · +1

      These startups can try to sell, but once they get any traction Nvidia will buy them with one quarter's profit.

    • @004307ec · 3 months ago

      😅 Huawei Ascend, I guess? Though the software side is kind of bad.

  • @sacamentobob · 3 months ago · +1

    One word: meh.

  • @Steamrick · 3 months ago

    I seriously doubt that nvidia will try to sell external accelerator cards to consumers. That didn't work out well for PhysX accelerator cards and isn't likely to work better now.

  • @fred-ts9pb · 2 months ago

    AMD has had 2 corrections in its stock price this year and is now sitting below its January price. It would seem to me AMD is a horrible investment with a horrible, overrated, overpaid CEO.

  • @Wierie_ · 3 months ago · +1

    It could end AMD and Intel if you care enough to buy and play modern-day slop LOL

  • @blueeyednick · 3 months ago

    the coprocessor is here

  • @gamingtemplar9893 · 3 months ago

    Premium quality as always. Great work.

  • @axl1002 · 3 months ago · +1

    After 16 years it seems I'll be forced to buy an Nvidia card again. :(

    • @mryellow6918 · 3 months ago

      You mean you're choosing to buy the better card?

  • @TheModmc · 3 months ago

    Nvidia stuff is so boring; good thing we have you to sweeten it up.

  • @Tential1 · 3 months ago · +1

    That's right. Stop doubting Nvidia. Drink the kool-aid. Buy the stock. Afford the gpu. Easy.
    Not financial advice.

    • @Zorro33313 · 3 months ago · +1

      hell ye!! kool aid!! AI NO BUBBLE!! AI FTW!!

  • @anttikangasvieri1361 · 3 months ago

    AI stuff, yawn.

  • @noanyobiseniss7462 · 3 months ago · +59

    So Nvidia wants to patent order of operations; kinda reminds me of Apple trying to patent the rectangle.

    • @brodriguez11000 · 3 months ago · +5

      Math can't be patented.

    • @GrimK77 · 3 months ago · +4

      @@brodriguez11000 there is a loophole for it, unfortunately, that should never have been granted

    • @Wrek100 · 3 months ago · +2

      Didn't Apple succeed? IIRC the 10.2" Galaxy tablet got squashed because the judge sided with Apphell.

    • @danis8455 · 2 months ago · +2

      Nothing new in Nvidia being scumbags, really.

  • @misterstudentloan2615 · 3 months ago · +123

    Just costs 30 pikachus to do that operation....

    • @dreamonion6558 · 3 months ago · +6

      thats alot of pikachus!

    • @MyrKnof · 3 months ago · +4

      @@dreamonion6558 You want as few pikachus per op as possible! also, "a lot"

    • @alexstraz · 3 months ago · +2

      How many joules are in a pikachu?

    • @handlemonium · 3 months ago

      And one Magikarp

  • @EnochGitongaKimathi · 3 months ago · +63

    Intel, AMD and now Qualcomm will be just fine.

    • @fred-ts9pb · 3 months ago

      AMD is a bottom feeder.

    • @waynewhite2314 · 3 months ago · +9

      @@fred-ts9pb What tech company are you running? Oh yes, Trolltech!

    • @christophorus9235 · 3 months ago · +2

      @@fred-ts9pb Lol tell us more about how you know nothing about the industry...

  • @Johnmoe_ · 3 months ago · +15

    Sounds cool, but all I want is more VRAM under $10k.

  • @avejst · 3 months ago · +22

    5:12-5:44, is there a reason for the blackout in the video?
    Interesting video as always.

    • @Slavolko · 3 months ago · +1

      Probably a video editing or rendering error.

  • @hupekyser · 3 months ago · +6

    At some point, 3D and AI need to fork into dedicated architectures instead of having a general do-it-all GPU.

    • @aladdin8623 · 3 months ago

      Both Intel and AMD have FPGA IP to achieve close-to-bare-metal performance. And in comparison to Nvidia's ASIC plans here, an FPGA is flexible: its logic gates can be "rewired", while Nvidia's ASICs force you to buy more hardware again and again.

  • @PaulSpades · 3 months ago · +65

    It's funny how we now need fixed-function accelerators for matrices after 15 years of turning GPUs (fixed-function FP accelerators) into programmable devices.
    Also, we went from 12/20/36-bit-word computers to 4-bit and 8-bit micros, to 16-bit RISC processors and FP engines, to 32-bit and now 64-bit. Only to discover we now need much less precision: FP32 and FP16, now 8-bit and 4-bit. We could probably go down to ternary for large model nodes, or 2-bit (see the footprint sketch after this thread).

    • @BHBalast · 3 months ago · +12

      AI workloads != classical computers; no one will go back to 8 bits on a consumer device :p

    • @maou5025 · 3 months ago · +1

      It is still 64. Floating point is kinda different.

    • @PaulSpades · 3 months ago · +2

      @@BHBalast Well, yes. But do you need Photoshop if the AI-box-thing can generate the file you asked for and upload it? I'm not saying we won't need general computing like we do now, but most people won't.
      Because most people don't need programmable computers; they need media generation and media consumption devices. Most tasks are filling forms, reading and writing.
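
To make the precision point above concrete: weight memory scales linearly with bits per parameter. A minimal sketch, assuming a hypothetical 7-billion-parameter model; the format list and sizes are illustrative, not from the video:

```python
# Rough weight-memory footprint of a hypothetical 7B-parameter model.
# Illustrative only: ignores activations, KV cache, and the per-group
# scale factors that real quantization schemes add on top.

PARAMS = 7_000_000_000  # assumed model size, not from the video

formats = {
    "fp32": 32,
    "fp16": 16,
    "int8": 8,
    "int4": 4,
    "ternary": 1.58,  # log2(3) bits per weight, the information-theoretic floor
}

for name, bits in formats.items():
    gib = PARAMS * bits / 8 / 2**30  # bits -> bytes -> GiB
    print(f"{name:>7}: {gib:5.1f} GiB")
```

Under these assumptions the same weights shrink from roughly 26 GiB at FP32 to about 3.3 GiB at 4-bit, which is also why the "any 8-gig GPU will do" estimate quoted elsewhere in the comments is plausible.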

  • @pierrebroccoli.9396 · 3 months ago · +25

    Darn, hate to be held hostage by leather jacket man, but it is a good way to go: local AI processing instead of relying on large corporates for AI services on one's data.

    • @Zorro33313 · 3 months ago · +7

      Absolutely the same shit as an encrypted channel between you and a processing datacenter.
      Processing is not local anyway, it seems. Local "AI" only fractures data into some bullshit tokens (just packets as usual) and sends them to a datacenter to get the processed response back. This sounds just like BS channel encryption using AI, because nvidia can't do anything else but AI.

    • @BHBalast · 3 months ago

      ​@@Zorro33313wtf?

    • @awindowskrill2060 · 3 months ago

      @@Zorro33313 what meth are you smoking mate

    • @FlorinArjocu · 3 months ago · +1

      We'd still need cloud AI services, as the same methods will reach their datacenters and enable much more advanced things online, much faster. But the simpler stuff will go local, indeed.

  • @edgeldine3499 · 3 months ago · +2

    "1000x performance uplift": I'm annoyed by people saying "ex" instead of "times"; I've been hearing it more lately, I guess. I know it's been a thing for a while, but maybe I'm getting old?

  • @edgeldine3499 · 3 months ago · +3

    When did we change "times" to "x"? I've been hearing it more and more lately. Gamers Nexus said it earlier (technically yesterday), and a few months ago I remember hearing it too. Maybe it's been standing out more and more to me, but I think it's now a pet peeve. "Ten times as much" sounds like the proper way to say it rather than "ten x as much".

    • @NightRogue77 · 1 month ago

      Haven’t noticed, but offhand I would guess it started in more scientific / communicator circles to avoid mixups. “Times” sounds like many things, “X” doesn’t.
      Total hot take

  • @Erribell · 3 months ago · +6

    I would bet my life Coreteks owns Nvidia stock.

    • @selohcin · 3 months ago · +1

      I assume he owns Nvidia, AMD, Intel, and several other tech companies.

  • @SalvatorePellitteri · 3 months ago · +7

    This time you are wrong! Inference is where AMD and Intel play on a level field with NVIDIA. NPUs have much simpler APIs, so the vertical stack is thin, almost irrelevant; applications are going to support NPUs from Intel, AMD and Nvidia very easily, and AMD and Intel already have NPU-enabled processors and PCIe cards in the wild.

    • @sacamentobob · 3 months ago

      He has been wrong plenty of times.

  • @ragingmonk6080 · 3 months ago · +21

    This is nothing more than a joke!
    "Google, Intel, Microsoft, Meta, AMD, Hewlett-Packard Enterprise, Cisco and Broadcom have announced the formation of the catchily titled "Ultra Accelerator Link Promoter Group", with the goal of creating a new interconnect standard for AI accelerator chips."
    People are tired of Nvidia gimmicks and they will shut them out.

    • @120420 · 3 months ago

      Fanboys have entered the building!

    • @ragingmonk6080 · 3 months ago · +12

      @@120420 I quote tech news and you call me a fanboy because you are a fanboy who didn't like the news. Way to go, champ. Mom must be proud.

    • @gamingtemplar9893 · 3 months ago · +2

      People are not tired; some people are, and they don't have any clue what they are talking about. Same with the people who defended Nvidia back in the day and still do, like Gamers Nexus defending the cable debacle to protect Nvidia. You guys are all fanboys of one side or the other who don't understand how things really work.

    • @ragingmonk6080 · 3 months ago

      @@gamingtemplar9893 We understand how things work. Nvidia used to use the "black box" called GameWorks to add triangles to meshes that were not needed, to increase compute demand. Then they would program their drivers to ignore a certain number of triangles to give themselves a performance edge. They wouldn't give devs access to the black box either.
      G-Sync was a rip-off to make money, because adaptive sync was free.
      They limit which Nvidia cards can use DLSS so you have to upgrade. Then they limit which Nvidia GPUs can use frame generation so you have to upgrade again.
      We know what is going on and how the Nvidia cult drinks the Kool-Aid.

    • @ragingmonk6080 · 3 months ago · +3

      @@gamingtemplar9893 We know Nvidia's gimmicks too well. I cut them off with the GTX 1070.

  • @linuxguy1199 · 3 months ago · +3

    This channel has gone more and more off the rails; frankly, 90% of it is fake news and clickbait. I don't need this in my feed.

    • @ChrispyNut · 3 months ago · +2

      Yea, that's what struck me with the title, given what's been spewed here of late.
      I've joined you with the unsub.

  • @Thor_Asgard_ · 3 months ago · +2

    Let's be clear: I wouldn't buy Nvidia even if they were the last one; I'd rather quit gaming. Therefore no, they ended nothing. They can keep their greedy shit to themselves.

  • @shmookins · 3 months ago · +1

    Thank you for the commentary.
    But why use a fake clickbait image in your thumbnail? You are better than that. Clickbait fake thumbnails are for low-level people who have nothing to offer and try to trick ignorant people into clicking on their rubbish videos.
    You have nice commentary and a number of followers. No need for that low-level disappointing shit like fake thumbnail images.
    Cheers.

  • @WASD-MVME · 3 months ago · +2

    I'm already so tired of AI this AI that AI underwear AI toothbrush AI AI AI AI

  • @jackskalski3699 · 3 months ago · +2

    Either I'm bad at listening or I just didn't understand, but inference, i.e. running neural models locally, is exactly what all the rage about NPUs and TOPS in current SoCs is about, isn't it? Apple with the M3/M4, AMD Strix with 50 TOPS, Snapdragon X Elite, and Windows 12 with Copilot are exactly that use case: running models locally. So why not just cram these NPUs or new accelerators into your CPUs or discrete GPUs and call it a day?
    What's so revolutionary about this new type of accelerator from NV that the chips hitting the market TODAY don't have? It's my understanding that optimisations happen on all fronts all the time: transistor level, instruction level, compiler level and software level.
    When I look at open job positions in IT, it strikes me how many compiler and kernel optimisation roles are opening for drivers, CUDA and ROCm... Don't get me wrong, I love your videos, but I just don't see the NV surprise when everyone is releasing AI accelerators today vs NV promising them in maybe a year. NV was focused on the server market, while AMD was actually present in both server and client.
    Also note that NV was already using neural accelerators for their ray-tracing workloads, which significantly lowered the required budget of rays that needed to be cast, as they could reconstruct the proper signal quite believably with neural networks.
    We'd need to assume that the TOPS/W metric is only understood by NV and that everyone else will sit idle and be blind to it. I doubt that, judging by what is happening right now.
    We also assume that models will keep growing, at least in training cost. There are diminishing returns somewhere, so I expect models to also shrink and be optimised, as opposed to only growing in size. As more people/companies release more models, they really need to think about how to protect the IP, which is the weights in the neurons of these networks, because transfer learning is a "biatch" for them :)
    With progress happening so fast, yesterday's models become commodities. As they become commodities, they are also likely to become open-sourced. As such, you can expect a lot of transfer-learning activity, which will act as a force for the democratization of older but still very good models. So this is a headwind for server HW, as I can cheaply transfer-learn locally...
    For me, local models are mostly important in two areas of my life: coding aid and photography processing. I really follow what fylm.ai does with color extraction and color matching. As NPUs proliferate, more and more cloud-based features can be run locally (for example Lightroom, Photoshop, fylm.ai, or Copilot-like models that aid programmers).

    • @jackskalski3699 · 3 months ago · +1

      I was thinking a bit more, and there is another aspect missing from the analysis: data distance.
      If you are running a hybrid workload and you really care about perf/W, you will actually host NPUs on the GPU and also separately as a standalone accelerator. When you are running a chatbot or some generative local model, you will use the standalone accelerator and throttle down your GPU. That's the dark-silicon concept to conserve energy. If you are running a latency-sensitive workload like 3D graphics aided by neural networks, like the ray-tracing / path-tracing workloads, then you will use the low-latency on-GPU NPUs because you need the results ASAP, and you might throttle down the standalone NPU accelerator.
      There is a catch: if your game uses these rumoured "AI" NPCs, then that workload will run on the discrete NPU accelerator and you're going to be forced to keep it running alongside the GPU.
      Now the Lightroom use case is interesting. Intelligent masking or image segmentation can be done on the discrete accelerator, especially if it means the same results at lower wattage (in Puget benchmarks). However, there might also be hybrid algorithms that use GPU compute along with an NPU neural network for processing, in which case it might be more beneficial to run them on the GPU (with NPUs on board). A toy dispatch sketch follows this thread.
      To prove I'm not talking gibberish, Intel is doing exactly that with Lunar Lake :) There is a discrete NPU with 60+ TOPS, and the GPU hosts its own local NPUs with ~40 TOPS. Thus Intel can also claim 100+ "platform" TOPS, although that last naming is misleading, as you are unlikely to see a workload that uses both to run your copilot. A game, on the other hand, might be different.
      Lastly, I remember years ago AMD's tile-based design was marketed as exactly that: a platform that not only helps with yields (from a certain chip size onwards) but also lets you host additional optimised accelerators like DSPs, GPUs, CPUs and now NPUs on a single chip. So you could argue AMD laid the foundations for this years ago...
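
The placement logic in the reply above can be captured in a few lines. A toy sketch only: the device names, workload flags and routing rule are invented for illustration and do not come from the video or any real driver API.

```python
# Toy scheduler for the comment's "data distance" argument: latency-sensitive
# or GPU-coupled work runs on the on-GPU NPU; sustained background inference
# runs on the standalone NPU so the GPU can throttle down (dark silicon).

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_sensitive: bool  # e.g. per-frame ray reconstruction
    needs_gpu_compute: bool  # hybrid shader + neural-network algorithms

def place(w: Workload) -> str:
    if w.latency_sensitive or w.needs_gpu_compute:
        return "on-GPU NPU"   # keep the data close to the render pipeline
    return "standalone NPU"   # let the GPU drop to a low-power state

for w in [Workload("path-tracing denoise", True, True),
          Workload("chatbot / 'AI' NPC", False, False),
          Workload("image segmentation mask", False, False)]:
    print(f"{w.name:24} -> {place(w)}")
```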

  • @YourSkyliner · 3 months ago · +6

    4:44 oh no, they went from 30 Pikachus to only 1.5 Pikachus 😮 where did all the Pikachus go then??

    • @LtdJorge · 3 months ago

      They didn't go from 30 to 1.5: 30 pJ is how much energy it takes to load the values, and 1.5 pJ is how much it takes to compute one value once loaded (with FMA). With the HMMA instruction it takes 110 pJ to compute an entire matrix of values, so the overhead of loading becomes negligible, while with scalar operations like FMA the loading part dominates the power consumption.
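
Working those figures through makes the amortization explicit. The 30 / 1.5 / 110 pJ numbers are the ones quoted above; the assumption that one HMMA instruction covers a 4x4x4 FP16 tile (64 multiply-accumulates) is added here purely for illustration:

```python
# Back-of-the-envelope energy per multiply-accumulate (MAC), using the
# picojoule figures quoted in this thread.

LOAD_PJ = 30.0      # fetch operands for one scalar op (quoted above)
FMA_PJ = 1.5        # compute one scalar FMA once loaded (quoted above)
HMMA_PJ = 110.0     # one whole HMMA matrix instruction (quoted above)
MACS_PER_HMMA = 64  # ASSUMPTION: one 4x4x4 FP16 tile = 64 MACs

scalar = LOAD_PJ + FMA_PJ         # fetch overhead paid on every single value
matrix = HMMA_PJ / MACS_PER_HMMA  # fetch overhead amortized across the tile

print(f"scalar FMA: {scalar:5.2f} pJ/MAC")  # ~31.5 pJ, dominated by the load
print(f"HMMA tile : {matrix:5.2f} pJ/MAC")  # ~1.7 pJ
print(f"ratio     : {scalar / matrix:.1f}x")
```

Under that tile-size assumption the matrix path spends roughly 18x less energy per MAC, nearly all of it saved on operand fetch.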

  • @Jackpkmn · 3 months ago · +2

    Ah so it's more AI fluff that will amount to more hardware in landfills after the AI bubble bursts.

  • @MrDante10no · 3 months ago · +1

    With all due respect Coreteks, first it was AMD going to destroy nVidia with chiplet design, then Intel was going to destroy AMD with new CPUs, and now nVidia will destroy both! 🤔 Can you please make up your mind? 🙂 PLEASE! 🙏

  • @jonragnarsson · 3 months ago · +3

    Hate them or love them as a corporation, NVidia really has some brilliant engineers.

  • @New-Tech-gamer · 3 months ago · +2

    Isn't that "accelerator" the NPU everybody is talking about nowadays, specialized in local low-power inference? Nvidia may have the best prototype, but Qualcomm and AMD are already starting to ship CPUs with NPUs doing 40-50 TOPS, all backed by Microsoft within W11. So even if Nvidia comes to market in 2025, it may be too late.

  • @Acetyl53 · 3 months ago · +1

    I hate clickbait titles. Absolutely shameless. Unsubscribed, don't know why I was subbed to begin with.

  • @TotalMegaCool · 3 months ago · +1

    If Nvidia is planning to sell another type of card, an "AI accelerator", it would explain the rumors of the RTX 50xx GPUs being dual-slot. If you own a tin-foil hat, you might think the RTX 40xx GPUs were larger than they needed to be to prime users, by encouraging them into buying a bigger PSU and case.

  • @B4nan0n · 3 months ago · +2

    Isn't this accelerator the same thing you said the 40 series was going to have?

    • @NightRogue77 · 1 month ago · +1

      Yeah, that and the holographic VR lens stuff are the two things I recall the most.

  • @pedromallol6498 · 3 months ago · +2

    Have any of Coreteks' predictions ever come true just as described?

    • @NightRogue77 · 1 month ago

      Survey says: "Not many, and especially not lately."
      Still waiting on my holographic VR lens tech I was promised like 2 yrs ago.

  • @cjjuszczak · 3 months ago · +3

    Can't wait to buy a PhysX, i mean, Nvidia-AI PCIe card :)

    • @sirab3ee198 · 3 months ago

      lol, also the 3D Nvidia glasses, the dedicated G-Sync module in monitors, etc...

  • @ctu22 · 3 months ago · +2

    Green clickbait cope, thanks!

  • @GIANNHSPEIRAIAS · 3 months ago · +2

    How is that new, and how will this end AMD or Intel?
    Like, what's stopping AMD from getting their Xilinx accelerators to do the same job?

  • @Ronny999x · 3 months ago · +1

    I think it will be just as successful as the Traversal Coprocessor 😈

  • @big-gloom · 3 months ago · +2

    Unsubscribed

  • @Techaktien · 3 months ago · +2

    Stop hyping

  • @mramd9062 · 3 months ago · +1

    You are too much of an NVIDIA fanboy.

  • @aizensama9141 · 2 days ago

    AMD just maximizes price-to-performance for gaming. It's too affordable, for the quality of gaming it delivers, to be ended. And when Intel refines their GPUs they'll have some solid footing to scale down the outrageous pricing of Nvidia. And AMD's CPU price-to-performance is just too good right now as well.

  • @blackjew6827 · 3 months ago · +2

    I'll pay whoever does not have any AI shit.

  • @AshT8524 · 3 months ago · +26

    Haha, the title reminded me of the 30-series launch rumor. I really wanted it to happen, but all we got was an upgrade in prices lol.

    • @bass-dc9175 · 3 months ago · +4

      I never got why people want any company to destroy its competition.
      If Nvidia had eliminated AMD with the 30 series, we would not just have the current increased GPU prices.
      No, it would be 10 times worse, with Nvidia a monopoly.

    • @Tential1 · 3 months ago · +1

      I wonder how long before you figure out you can benefit from Nvidia raising prices.

    • @FJaypewpew · 3 months ago · +1

      Dude gobbles nvidia hard

    • @Vorexia · 3 months ago

      30-series would’ve been a pretty solid gen if it weren’t for the scalper pandemic.

    • @AshT8524 · 3 months ago · +2

      @@bass-dc9175 I don't want the competition to die; I just want better and more affordable products from both companies, especially compared to previous generations.

  • @kazedcat · 3 months ago · +1

    Fetch and decode do not move data; they only process instructions, not data. Also, instructions are tiny (16~32-bit) vs. 512~1024-bit SIMD vector data.
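
One way to quantify that width mismatch: count instruction bits fetched per bit of data processed. The widths are the ones the comment cites; treating one instruction as driving one full-width operation is an illustrative simplification:

```python
# Control overhead of scalar vs. wide SIMD execution: instruction bits
# fetched and decoded per bit of data actually processed.

INSTR_BITS = 32     # typical instruction word (comment: 16~32 bit)
SCALAR_BITS = 32    # one scalar data word
VECTOR_BITS = 1024  # wide SIMD operand (comment: 512~1024 bit)

print(f"scalar: {INSTR_BITS / SCALAR_BITS:.3f} instruction bits per data bit")
print(f"SIMD  : {INSTR_BITS / VECTOR_BITS:.3f} instruction bits per data bit")
# scalar: 1.000, SIMD: 0.031 -- one reason wide vector/matrix units are
# so much more energy-efficient per operation than scalar pipelines.
```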

  • @memeconnect4489 · 3 months ago · +1

    "And it could end AMD and Intel"... I feel like I have heard this a lot of times now... nice clickbait title.

    • @brodriguez11000 · 3 months ago

      Zero-sum game. Everyone else must lose so one can win.

  • @XfStef · 3 months ago

    So the industry is having YET ANOTHER go at thin-client BS. I hope for them, once again, to fail miserably.

  • @PristineTX · 3 months ago · +1

    It’s an NPU. This is a silly video.

  • @johndinsdale1707 · 3 months ago · +1

    I think the NPU accelerator space is very much an open market. Both Apple and Qualcomm are embedding NPU accelerators into their ARM v9 SoCs. Also, Groq has an alternative approach to inference which is much more power-efficient.

  • @TheDaswilhelm · 3 months ago · +1

    When did this become a meme page?

  • @FlyingPhilUK · 3 months ago · +1

    It's interesting how nVidia is still locked out of the desktop & laptop CPU market, with AMD, Intel and now Qualcomm pushing Copilot PCs & laptops.
    - I know Qualcomm had an exclusive on Windows-on-ARM CPU development, but that ends this year (?)
    - so obviously nVidia should be making SoCs for this market

  • @RoyBrown777 · 3 months ago · +1

    Doubt it mate. No one cares.

  • @gstormcz · 3 months ago

    AI acceleration sounds as good to my pair of ears as 8-channel sound. The presentation looks more eye-catching than RGB, maybe because I like spreadsheets and an informed narrative. (Just saying I viewed it with the will to absorb as much as my brain accepts 🤷🏼‍♀️; when AI makes it to desktop PCs and games, I will understand 2x more.)
    You know better how groundbreaking Nvidia's acceleration could be, but I am sure I will watch it from a distance with my slim wallet 😂
    GG, pretty news on this topic, as usual by Core-news-tech 👍
    Patents legally last only a limited time, right? AMD and all the others will be developing their own acceleration at the law bureau soon.

  • @donutwindy · 3 months ago

    NVidia making $30,000 AI chips that consumers are not going to purchase should not affect AMD/Intel, who aren't currently competing in AI. To "end" AMD and Intel, the NVidia chips would have to be under $500, as that is the limit most consumers will spend on a graphics card or CPU. Somehow, I don't see that happening anytime soon.

  • @jeffmofo5013 · 3 months ago

    Idk, I should be as enthusiastic as you. This may be an investment opportunity. Still waiting for NVidia stock to pull back; technical analysis says it's at its cycle top.
    But I'm also a machine learning expert. While inference is important, it's currently the fastest thing compared to training. The problem is phones, not laptops. Laptops can more than handle inference; a phone, on the other hand, struggles. So Samsung, with its focus on an AI chip, is more important in this arena. Unless NVidia is going to start making phones, I don't see this, as an implementer, as that impactful. And memory is more important than the CPU on phones for this type of work.
    On a side note, I don't even use GPUs for my AI work. I did a comparison and the GPU only gave me a 20% increase in performance at twice the cost. So at scale I can buy more CPUs than GPUs, and each extra CPU is a 100% increase in performance, compared to gaining 20% at twice the cost. So I don't see the NVidia AI hype.
    1x CPU + 1x GPU is 120% of the performance at twice the cost.
    1x CPU is 100% at half that price, so for the cost of 1 CPU + 1 GPU I can buy 2 CPUs and get 200%.
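
For what it's worth, the closing arithmetic holds under its own assumptions (the ~20% speedup and ~2x cost are the commenter's claims, not benchmark data). A minimal sketch:

```python
# The commenter's cost/performance comparison, spelled out. All figures are
# the commenter's claims: a GPU adds ~20% throughput at ~2x system cost.

cpu_cost, cpu_perf = 1.0, 1.0  # normalize one CPU box to 1 unit cost / perf
gpu_box_cost = 2.0             # CPU + GPU box: twice the cost...
gpu_box_perf = 1.2             # ...for 20% more throughput

budget = gpu_box_cost                      # fixed budget = one CPU+GPU box
cpu_only = (budget / cpu_cost) * cpu_perf  # spend it on CPU boxes instead

print(f"CPU+GPU box  : {gpu_box_perf:.1f}x throughput")  # 1.2x
print(f"2x CPU boxes : {cpu_only:.1f}x throughput")      # 2.0x
```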

  • @J4rring · 3 months ago · +1

    I feel like the next big step to properly implement the coming technologies in AI and acceleration would be to integrate such architectures directly into the motherboard, especially in light of how large NVidia's highest-end cards are becoming, how much more space-efficient mobos have gotten over the last half decade, and not to mention the power and efficiency of the APUs and NPUs coming out this year. Physically offloading those calculations onto a dedicated spot on the motherboard could provide an upper hand in computer hardware.
    This also doesn't seem all too far-fetched when you take into account that the industry is planning to implement an "NPU standard" across mobile devices and various OSes, and that mobo manufacturers are already reconfiguring things like RAM from DIMM slots to CAMM2 on desktops. Combine all of this with the fact that these technologies could potentially be tied closer to the CPU on the north bridge, and it feels like a no-brainer to work with mobo manufacturers to further push the limits of computing power.

  • @lamhkak47 · 3 months ago

    Also, in the local AI space, Apple has accidentally (or strategically) made their Mac Studio a rather economical choice for local inferencing.
    That, and Nvidia's profit margin (nearly twice Apple's) making Tim Apple gush, shows how dominant Nvidia is in the market right now.

  • @hikenone · 3 months ago · +1

    the audio is kinda weird

  • @Neonmirrorblack · 3 months ago · +2

    18:39 Truth bombs being dropped.

  • @muchkoniabouttown6997 · 3 months ago

    I wanna geek out about this, but I'm convinced that all the AI advancement and inference refinement is just snake oil for over 90% of buyers. So I hate it. I'm down for new approaches, but for real: can any non-salesman or non-fanboy explain how this will benefit more than 1-3 companies??

  • @El.Duder-ino · 3 months ago

    Nvidia has plenty of cash to continue being aggressive in its goal to disrupt and lead as many markets as possible. It's pretty much confirmed their next goal will be edge and consumer AI, which their unsuccessful acquisition of Arm basically foreshadowed. It will be very interesting to see how the edge/consumer Arm SoC they're working on will compete with the rest of the players.
    Thx for the vid and for shedding more light on their future accelerator👍

  • @jimgolab536 · 3 months ago

    I think much will depend on how aggressively NVIDIA builds and defends its (leading-edge) patent portfolio. First is best. :)

  • @valimalidudu7991 · 3 months ago

    You really didn't understand a thing about what this is... It's for companies, it will cost hundreds of thousands of dollars, and it's designed to replace large AI servers (as big as buildings) with a much smaller solution. It's not for gamers, it's not for consumers, and it does not compete with any other company, because there are no other companies doing this.
    This channel must be reported.

  • @angellestat2730 · 3 months ago

    In the last year you were saying that Nvidia was at a dead end and that their stock price should plunge; instead, their price has risen 200%. Luckily for me, I did not listen to you at the time and I bought, taking into account how important AI was going to be, and made a lot of profit.

  • @hatobeats · 3 months ago

    Nvidia and their CUDA cores already make me sick. Now they will pay developers to use this and close the loop. Yet another monopoly from Nvidia.

  • @ejtaylor73 · 3 months ago

    This won't end AMD for one reason: PRICING!!! Not everyone is related to Bill Gates or wants to take out a loan to be able to buy this.

  • @MrChriss000 · 3 months ago

    It all sounds very complicated, I will stick to Star Trek Online. 1080p, 60.

  • @Greez1337 · 3 months ago

    Return to the Emotion Engine. The AI revolution will help game developers maximise the man jaws and diversity of every movie-slop game.

  • @elmariachi5133 · 3 months ago

    I expect nvidia to soon produce Decelerators and have these slow down any computer immensely unless the owner pays horrible subscription fees...

  • @chuuni6924 · 3 months ago · +1

    So it's an NPU? Or did I miss something?

  • @ShaneMcGrath. · 3 months ago

    The more they push all this A.I., the more likely I am to end up switching off and going back outside.

  • @wrongthinker843 · 3 months ago

    Yeah, given the performance of their last 2 gens, I'm sure AMD is shaking in their boots.

  • @--waffle- · 3 months ago

    When do you think NVIDIA will release a consumer desktop PC product in the style of the GH200 Grace Hopper Superchip? I'd love an Nvidia-ARM all-in-one Linux beast.

    • @--waffle- · 3 months ago

      When Part 2!?!? (Also, New nvidia shield tablet!!....yes please)

  • @cosmefulanito5933 · 3 months ago

    How annoying they are with AI. I have seen many surveys, and no one seems interested in the topic. Yet they continue making products and bombarding the market with something no one seems to be interested in.

  • @ITSupport-q1y · 3 months ago

    Well done again. I want to be able to add several cards to my PC and cluster them. Might need a three-phase plug.

  • @marktackman2886 · 3 months ago

    PCIe hopes are dead; not enough of a market because they have starved I/O on motherboards.

  • @Eskoxo · 3 months ago

    Mostly for the server side; I don't think consumers have as much interest in AI as these corpos make it seem.

  • @velmar123 · 3 months ago

    Is it just me, or is there something off with the voice again? Needs more training?

  • @francisquebachmann7375 · 3 months ago

    I just realized that Pikachu is just a pun on picojoules.