NVidia is launching a NEW type of Accelerator... and it could end AMD and Intel
- Published: 29 Sep 2024
Support me on Patreon: / coreteks
Buy a mug: teespring.com/...
My channel on Odysee: odysee.com/@co...
I now stream at:
/ coreteks_youtube
Follow me on Twitter: / coreteks
And Instagram: / hellocoreteks
Footage from various sources, including official YouTube channels from AMD, Intel, NVidia, Samsung, etc., as well as other creators, is used for educational purposes in a transformative manner. If you'd like to be credited, please contact me.
#nvidia #accelerator #rubin
No one will end anyone.
AI will end the world, in time
Some may end themselves.
Based and truthpilled. This faked up competition has to continue for as long as possible,
It's a 24/7 publicity stunt for the entirety of this market with Nvidia basically playing the big bully.
I will end this conversation.
@@PuppetMasterdaath144 No, you won't, you Puppet.
We are in dire need of a diverse GPU market.
yeah what happened to "diversity is strength" 😂
We have a diverse market; they aren't a monopoly because they're the only option, they're a monopoly simply because they're the best.
sadly china still needs at least 5 years for a product worthy of buying
@@Koozwad I have no idea what you're trying to convey; it's just about more companies building GPUs, nothing else.
@@mryellow6918 Sure, that's one factor; or these invisible GPU manufacturers just don't promote their GPUs and optimize support for games like NVIDIA and AMD do.
But being the best comes with a hefty price, huh?
CEO of Nvidia only Said IA . IA... IA dozen times, only one gaming...SAD TIMES
to me he said money, money money lol
Nvidia has outgrown the gaming market; they could abandon it and wouldn't notice the losses.
What's IA?
@@LukeLane1984 lol
jesus nvidia does not rest for the competition.
And consumers pockets
@@visitante-pc5zc As long as it aligns...
It's over for amd before it started. Keep throwing 50M a year at a failed amd ceo.
What a weird comment. Probably a bot or shill. Creep. Weirdo.
I might actually end Nvidia if they keep up their ridiculous pricing.
Prices are set by the consumers, the market, not the company. If anything, prices would only go down with competition. Value is SUBJECTIVE; there is no intrinsic value in anything. You will pay what the market wants. If the pricing were "ridiculous" as you say, then Nvidia would be losing money; it is not, so it is not ridiculous. Learn economics before saying communist shit.
their customers, other billion-dollar companies, can afford it with ease; no need to worry.
Unfortunately the prices are high because that's what people are paying.
The pricing isn't bad when you actually look at what their products provide.
@@AlpineTheHusky For business and prosumers, and the 4090 in gaming; everything else is overpriced.
From what I'm gathering, this accelerator is a type of cache for the GPU, which means it won't be a dedicated card in consumer products; it will probably be part of the video card itself.
10:50 Yes, I'd like to use AI to turn my cat photo into a protein.
This is extremely interesting, I hope they will release discrete accelerators for desktop users
It's extremely boring actually.
@@hlbjk Actually, you are boring. Stop spamming your stupidity and go watch cat videos.
@user-et4qo9yy3z yes
@@hlbjk For those who only use their PC for gaming.
@@hlbjk Oh look mommy ! The "akshually" guy is real !
Do you think Tesla will jump into this market at some point? Would the FSD HW4 be competitive? I don't know if this is a market they're interested in, but it seems like they have already done a lot of the work on low-power inference.
Super interesting. Will be following developments.
28nm to 4nm is only a 3x density increase?
But isn't the 4nm 8700G like 2.5x more transistors than the 7nm 5700G? (both around 180mm²)
I mean, I guess clock speeds make a difference, but weren't clock speeds lower on 28nm?
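The skepticism above is easy to sanity-check with quick arithmetic. The ~3x figure is what the comment attributes to the video; the 49x figure is what the node labels would imply if "28nm" and "4nm" were literal feature sizes (they haven't been for years):

```python
# If "28nm" and "4nm" were literal feature sizes, ideal area scaling would give:
naive_density_gain = (28 / 4) ** 2        # 49x

# The figure attributed to the video in the comment above is only ~3x,
# showing how far marketing node names have drifted from physical density.
claimed_gain = 3.0

gap = naive_density_gain / claimed_gain   # how much the labels overstate scaling
print(naive_density_gain, round(gap, 1))
```

This doesn't settle which number is right for these specific dies, but it does show why comparing "nm" labels across vendors or generations is unreliable.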
Wait a minute. If I go back to the archives, aren't you the guy who was saying that Nvidia was finished and AMD would overtake them? You were very very negative on NVDA not so long ago.
Because of price I will always buy AMD GPUs and CPUs.
I don't care about LLM AI assistants. If Gaussian splatting takes over rendering, then OK.
Video error-blacks out
is this just ai shit
oh sure, AI shit, that simple. You smart.
Doesn't seem like AI slop. Just tabloid-tier garbage to direct clicks to their windows key selling scam.
@@gamingtemplar9893 Sounds like an AI voice because the inflection never changes.
For the sake of the market, please step up other companies.
Do You think that there will be a competitor or alternative to NVidia within the next 18 months ?
There were dozens of startups developing exactly these fixed-function accelerators for inference. Some have already run out of money, some have been poached by the bigger players like Apple, Google, Amazon... some are developing in-memory computing and analogue logic, which will probably never see the light of day this decade.
Unless you can get TPUs from Google, there's not much actual commercial hardware you can get that's more efficient than nvidia's, if you need horsepower and memory.
If you want to run basic local inference, any 8gig GPU will do, or any of the new laptop processors that can do around 40tops.
Nope. Even if a competitor started now and copied the ideas in the video, it would take about 18 months to design, validate, and produce them AND that would be version 1. NVDA is pushing ahead faster than anyone can keep up with
I don't think anyone will take Nvidia's spot, but anyone can enter the market, show what they can bring, and try kicking AMD and Intel. Nvidia looks very secure as the leader, but we'd be glad to see some other strong competitors around, because right now it feels like a monopoly, which isn't good.
These startups can try to sell, but once they get any traction, Nvidia will buy them with one quarter's profit.
😅Huawei Ascend I guess? Though the software side is kind of bad.
One word: meh.
I seriously doubt that nvidia will try to sell external accelerator cards to consumers. That didn't work out well for PhysX accelerator cards and isn't likely to work better now.
Worked out for tensor.
AMD has had 2 corrections in its stock price this year and is now sitting below its January price. It would seem to me AMD is a horrible investment with a horrible, overrated, overpaid CEO.
it could end AMD and Intel if you care enough to buy and play modern day slop LOL
the coprocessor is here
Premium quality as always. Great work.
After 16 years seems I'll be forced to buy nVidia card again. :(
You mean you're choosing to buy the better card?
nvidia stuff is so boring, good we have you to sweeten it up
That's right. Stop doubting Nvidia. Drink the kool-aid. Buy the stock. Afford the gpu. Easy.
Not financial advice.
hell ye!! kool aid!! AI NO BUBBLE!! AI FTW!!
Ai stuff yawn
So Nvidia wants to patent order of operations, kinda reminds me of apple trying to patent the rectangle.
Math can't be patented.
@@brodriguez11000 There is a loophole for it, unfortunately, that should never have been granted.
Didn't Apple succeed? IIRC the 10.2" Galaxy tablet got squashed because the judge sided with Apphell.
Nothing new in Nvidia being scum bags really.
Just costs 30 pikachus to do that operation....
thats alot of pikachus!
@@dreamonion6558 You want as few pikachus per op as possible! Also, "a lot"
How many joules are in a pikachu?
And one Magikarp
Intel, AMD and now Qualcomm will be just fine.
amd is a bottom feeder.
@@fred-ts9pb What tech company are you running? Oh yes, Trolltech!
@@fred-ts9pb Lol tell us more about how you know nothing about the industry...
Sounds cool, but all I want is more VRAM under 10k
5:12-5:44, is there a Reason for the blackout in the video?
Interesting video as always
Probably video editing or rendering error.
At some point, 3D and AI need to fork into dedicated architectures instead of having a general do-it-all GPU.
Both Intel and AMD have FPGA IP to achieve close to bare-metal performance. And in comparison to Nvidia's ASIC plans here, an FPGA is flexible: its logic gates can be 'rewired', while Nvidia's ASICs force you to buy more hardware again and again.
It's funny how we now need fixed function accelerators for matrices after 15 years of turning GPUs (fixed function FP accelerators) into programmable devices.
Also, we went from 12/20/36-bit word computers to 4-bit and 8-bit micros, to 16-bit RISC processors and FP engines, to 32-bit and now 64-bit. Only to discover we now need much less precision: FP32 and FP16, now 8-bit and 4-bit. We could probably go down to ternary for large model nodes, or 2-bit.
AI workflow != Classical computers, no one will get back to 8 bits on a consumer device :p
It is still 64. Floating point is kinda different.
@@BHBalast Well yes. But. Do you need photoshop if the AI-box-thing can generate the file you asked for and uploaded it? I'm not saying we won't need general computing like we do now, but most people won't.
Because most people don't need programable computers, they need media generation and media consumption devices. Most tasks are filling forms, reading and writing.
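The precision trend this thread describes (FP32 → FP16 → INT8 → INT4/ternary) boils down to one trick: store weights as small integers plus a scale factor. A minimal sketch of symmetric int8 quantization, with toy weight values I made up for illustration (not from any real model):

```python
# Toy weight values; real models have millions, but the math is the same.
weights = [0.81, -0.43, 0.07, -1.20, 0.55]

# Symmetric quantization: map the largest magnitude onto the int8 limit 127.
scale = max(abs(w) for w in weights) / 127.0

q = [round(w / scale) for w in weights]     # integer codes in [-127, 127]
deq = [code * scale for code in q]          # reconstruction for comparison

# Rounding error per weight is bounded by half a quantization step.
max_err = max(abs(w - d) for w, d in zip(weights, deq))
assert max_err <= scale / 2 + 1e-12
```

Lower bit widths shrink memory and energy per operand at the cost of a coarser grid, which is exactly the trade-off inference accelerators are built around.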
Darn - hate to be held hostage to leather jacket man but it is a good way to go - local AI processing instead of relying on large corporates for AI services on ones data.
absolutely the same shit as an encrypted channel between you and processing datacenter.
Processing is not local anyway, it seems. Local AI only fractures data into some bullshit tokens (just packets as usual) and sends them to a data center to get the processed response back. This sounds just like BS channel encryption using AI, because Nvidia can't do anything else but AI.
@@Zorro33313wtf?
@@Zorro33313 what meth are you smoking mate
We'd still need cloud AI services, as the same methods will reach their datacenters and enable more advanced things online much faster. But the simpler stuff will go local, indeed.
"1000x performance uplift." I'm annoyed by people saying "ex" instead of "times"; I've been hearing it more lately. I know it's been a thing for a while, but maybe I'm getting old?
When did we change "times" to "x"? I've been hearing it more and more lately. Gamers Nexus said it earlier (technically yesterday), and a few months ago I remember hearing it too. Maybe it's just been standing out to me more and more, but I think it's now a pet peeve. "Ten times as much" sounds like the proper way to say it rather than "ten x as much."
Haven’t noticed, but offhand I would guess it started in more scientific / communicator circles to avoid mixups. “Times” sounds like many things, “X” doesn’t.
Total hot take
I would bet my life coreteks owns nvidia stock
I assume he owns Nvidia, AMD, Intel, and several other tech companies.
This time you are wrong! Inference is where AMD and Intel play on a level field with NVIDIA. NPUs have much simpler APIs, so the vertical stack is thin, almost irrelevant, and applications are going to support NPUs from Intel, AMD, and Nvidia very easily. AMD and Intel already have NPU-enabled processors and PCIe cards in the wild.
He has been wrong plenty of times.
This is nothing more than a joke!
"Google, Intel, Microsoft, Meta, AMD, Hewlett-Packard Enterprise, Cisco and Broadcom have announced the formation of the catchily titled "Ultra Accelerator Link Promoter Group", with the goal of creating a new interconnect standard for AI accelerator chips."
People are tired of Nvidia gimmicks and they will shut them out.
Fan boys have entered the building!
@@120420 I quote tech news and you call me a fanboy because you are a fanboy that didn't like the news. Way to go champ. mom must be proud.
People are not tired; some people are, and they don't have any clue what they're talking about. Same with the people who defended Nvidia back in the day and still do, like Gamers Nexus defending the cable debacle to protect Nvidia. You guys are all fanboys of one side or the other who don't understand how things really work.
@@gamingtemplar9893 We understand how things work. Nvidia used to use the "black box" called GameWorks to add triangles to meshes that weren't needed, to increase compute demand. Then they would program their drivers to ignore a certain number of triangles to give themselves a performance edge. Wouldn't give devs access to the black box either.
G-Sync was a rip-off to make money because adaptive sync was free.
Limit which Nvidia cards can use dlss so you have to upgrade. Then limit which Nvidia GPU's can use frame generation so you have to upgrade again.
We know what is going on and how the Nvidia cult drinks the kool-aid.
@@gamingtemplar9893 We know Nvidia's gimmicks too well. I cut them off with the GTX 1070.
This channel has become more and more off the rails, and frankly 90% of it is fake news and clickbait. Don't need this in my feed.
Yea, that's what struck me with the title, given what's been spewed here of late.
I've joined you with the unsub.
Let's be clear: I wouldn't buy Nvidia even if they were the last one; I'd rather quit gaming. Therefore no, they ended nothing. They can keep their greedy shit to themselves.
Thank you for the commentary.
But why use a fake clickbait image in your thumbnail? You are better than that. Clickbait fake thumbnails are for low-level people who have nothing to offer and try to trick ignorant people into clicking on their rubbish videos.
You have nice commentary and a number of followers. No need to do that low-level disappointing shit like fake thumbnail images.
Cheers.
I'm already so tired of AI this AI that AI underwear AI toothbrush AI AI AI AI
AI condom.
Either I'm bad at listening or I just didn't understand, but inference, i.e. running neural models locally, is all the rage with NPUs and TOPS in current SoCs, isn't it? Apple with M3/M4, AMD Strix with 50 TOPS, Snapdragon Elite X, and Windows 12 with Copilot are exactly that use-case, running models locally, aren't they? So why not just cram these NPUs or new accelerators into your CPUs or discrete GPUs and call it a day?
What's so revolutionary about this new type of accelerators from NV, that the chips that are hitting the market TODAY, don't have? It's my understanding that optimisations happen on all fronts all the time, transistor level, instruction level, compiler level and software level.
When I look for open job positions in IT it hits me how many compiler and kernel optimisation roles are opening for drivers, CUDA and ROCm... Don't get me wrong I love your videos but I just don't see the NV surprise, when everyone is releasing ai accelerators today vs NV promising them in maybe 1 year. NV was focused on the server market, while AMD was actually present in both server and client.
Also notice, that NV was already using neural accelerators for their ray tracing workloads, which significantly lowered the required budget of rays, that needed to be cast as they could reconstruct the proper signal quite believably with neural networks.
We'd need to assume that TOPS/W metric is only understood by NV and that everyone else will sit idle and be blind to it. I doubt that, judging on what is happening right now.
Also we assume, that models will keep growing, at least the cost of learning. There are some diminishing returns somewhere, so I expect models to also shrink and be optimised as opposed to only grow in size. As more people/companies start releasing more models they really need to think how to protect the IP, which is the weights in neurons of these networks because transfer learning is a "biatch" for them :)
With progress happening so fast, yesterday's models become commodities. As they become commodities they are also likely to become open-sourced. As such you can expect a lot of transfer-learning activity, which will act as a force for democratization of older, still very good models. So this is a headwind for server HW, as I can cheaply transfer-learn locally...
For me local models are mostly important in two areas of my life: coding aid and photography processing. I really follow what fylm.ai does with color extraction and color matching. As NPUs proliferate more and more cloud based features can be run locally.... (for example Lightroom, Photoshop, fylm ai or copilot like models to aid programmers).
I was thinking a bit more and there is another aspect that we're missing from the analysis: Data distance:
If you are running a hybrid workload and you really care about perf/W you are actually going to host NPUs on the GPU and also separately as a standalone accelerator. So when you are running a chatbot or some generative local model you will use the standalone accelerator and throttle down your GPU. That's the dark silicon concept to conserve energy. If you are running a latency sensitive workload like 3D graphics, that are aided by neural networks, like the ray tracing / path tracing workloads, then you are going to utilise the low latency on-GPU NPUs because you need the results ASAP -> you might throttle down the standalone NPU accelerator.
There is a catch. If your game uses these rumoured "AI" NPCs, then that workload will be run on the discrete NPU accelerator and you're going to be forced to keep it running along the GPU.
Now the Lightroom use-case is interesting. Intelligent masking or image segmentation can be done on the discrete accelerator, especially if it means same results but lower Watt usage (in Puget benchmarks). However there might also be hybrid algorithms that utilise GPU compute along with NPU neural network for processing, in which case it might be more beneficial to run that on the GPU (with NPUs onboard).
To prove I'm not talking gibberish, Intel is doing exactly that with Lunar Lake :) There is a discrete NPU with 60+ TOPS, and the GPU hosts its own local NPUs with ~40 TOPS. Thus Intel can also claim 100+ "platform" TOPS, although that naming is misleading, as you are unlikely to see a workload that utilises both to run your copilot. A game, on the other hand, might be different.
Lastly, I remember years ago AMD's tile-based design was marketed as exactly that: a platform that not only helps with yields (from a certain chip size onwards) but also lets you host additional optimised accelerators like DSPs, GPUs, CPUs, and now NPUs on a single chip. So you could argue AMD laid the foundations for this years ago...
4:44 oh no, they went from 30 Pikachus to only 1.5 Pikachus 😮 where did all the Pikachus go then??
They didn’t go from 30 to 1.5, 30 is how much energy it takes to load the values, and 1.5 how much it takes to compute one value once loaded (with FMA). With the HMMA instruction, it takes 110pJ to compute an entire matrix of values, so the overhead of loading becomes negligible, while with scalar operations like FMA, the loading part dominates the power consumption.
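The numbers in the reply above make the amortization easy to check. The 4x4x4 tile size (64 FMAs per HMMA instruction) is my assumption for illustration, not something stated in the video; the picojoule figures are the ones quoted in the thread:

```python
LOAD_PJ = 30.0    # energy to load the operand values (quoted above)
FMA_PJ = 1.5      # one scalar fused multiply-add (quoted above)
HMMA_PJ = 110.0   # one HMMA matrix instruction (quoted above)

# Assumption: one HMMA computes a 4x4x4 half-precision tile = 64 FMAs.
FMAS_PER_HMMA = 64

scalar_pj_per_fma = LOAD_PJ + FMA_PJ                      # loading dominates
matrix_pj_per_fma = (LOAD_PJ + HMMA_PJ) / FMAS_PER_HMMA   # loading amortized

print(scalar_pj_per_fma, round(matrix_pj_per_fma, 2))
```

With scalar FMAs each useful operation carries the full 30 pJ load overhead; with a matrix instruction that overhead is spread across the whole tile, which is the entire argument for fixed-function matrix units.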
Ah so it's more AI fluff that will amount to more hardware in landfills after the AI bubble bursts.
With all due respect Coreteks, first it was AMD going to destroy nVidia with chiplet design, then Intel was going to destroy AMD with new CPUs, and now nVidia will destroy both! 🤔 Can you please make up your mind? 🙂 PLEASE! 🙏
Hate them or love their corporation, NVidia really has some brilliant engineers.
Isn't that "accelerator" the NPU everybody is talking about nowadays, specialized in local low-power inference? Nvidia may have the best prototype, but Qualcomm and AMD are already shipping CPUs with NPUs doing 40-50 TOPS, all backed by Microsoft within W11. So even if Nvidia comes to market in 2025, it may be too late.
I hate clickbait titles. Absolutely shameless. Unsubscribed, don't know why I was subbed to begin with.
If Nvidia is planning to sell another type of card, an "AI accelerator," it would explain the rumors of the RTX 50xx GPUs being dual-slot. If you own a tin foil hat, you might think the RTX 40xx GPUs were larger than they needed to be to prime users into buying a bigger PSU and case.
Isn't this accelerator the same thing you said the 40 series was going to have?
Yea that and the holographic vr lens stuff are the two things I recall the most
Have any of CoreTek's predictions ever come true just as described?
Survey says: “Not many, and especially not lately”
Still waiting on my holographic VR lens tech I was promised like 2 yrs ago
Can't wait to buy a PhysX, i mean, Nvidia-AI PCIe card :)
lol also the 3D Nvidia glasses, the dedicated G-sync module in monitors etc ...
Green clickbait cope, thanks!
How is that new, and how will this end AMD or Intel?
Like, what's stopping AMD from getting their Xilinx accelerators to do the same job?
I think it will be just as Successful as the Traversal Coprocessor 😈
Unsubscribed
Stop hyping
You are too much of a NVIDIA fanboy.
AMD just maximizes price to performance gaming. It's too affordable for the quality of gaming to be ended. And when Intel refines their GPUs they'll have some solid footing to scale down the outrageous pricing of Nvidia. And AMDs cpu price to performance is just too good right now also.
I'll pay whoever does not have any AI shit.
Haha title reminded me of 30 series launch rumor, I really wanted it to happen but all we got was upgrade in prices lol
I never got why people want any company to destroy its competition.
Because if Nvidia had eliminated AMD with the 30 series, then we would not have the current increased GPU prices.
No. It would be 10 times worse with Nvidia at a monopoly.
I wonder how long before you figure out you can benefit from Nvidia raising prices.
Dude gobbles nvidia hard
30-series would’ve been a pretty solid gen if it weren’t for the scalper pandemic.
@@bass-dc9175 I don't want the competition to die, I just want better and affordable products from both companies especially when comparing to previous generations.
Fetch and decode don't move data; they only process instructions, not data. Also, instructions are tiny (16-32 bits) vs. 512-1024-bit SIMD vector data.
"and it could end AMD and Intel"... I feel like I have heard this a lot of times now... nice clickbait title.
Zero-sum game. Everyone else must lose so one can win.
So the industry is having YET ANOTHER go at thin-client BS. I hope, again, for them to fail miserably.
It’s an NPU. This is a silly video.
I think the NPU accelerator is very much an open market. Both Apple and Qualcomm are embedding NPU accelerators into their ARMv9 SoCs. Also, Groq has an alternative approach to inference which is much more power-efficient?
When did this become a meme page?
It's interesting how nVidia is still locked out of the desktop & laptop CPU market, with AMD, Intel, and now Qualcomm pushing Copilot PCs & laptops.
- I know Qualcomm had an exclusive on Windows-on-ARM CPU development, but that ends this year (?)
- so obviously nVidia should be making SoCs for this market
Doubt it mate. No one cares.
AI acceleration sounds about as useful to me as 8-channel sound for my pair of ears. The presentation looks more eye-catching than RGB, maybe because I like spreadsheets and informed narrative. (Just saying I watched it willing to absorb as much as my brain accepts 🤷🏼‍♀️; when AI makes it to desktop PCs and games, I will understand 2x more.)
You know more about how groundbreaking Nvidia's acceleration could be, but I am sure I will watch it from a distance with my slim wallet 😂
GG, pretty news of this topic.. As usual by Core-news-tech 👍
Patents only legally last a limited time, right? AMD and all the others will be developing their own acceleration at the law bureau soon.
NVidia making $30,000 AI chips that consumers are not going to purchase should not affect AMD/Intel, who aren't currently competing in AI. To "end" AMD and Intel, the NVidia chips would have to be under $500, as that is the limit most consumers will spend on a graphics card or CPU. Somehow, I don't see that happening anytime soon.
Idk, I should be as enthusiastic as you. This may be an investment opportunity. Still waiting for NVidia stock to pull back; technical analysis says it's at its cycle top.
But I'm also a machine learning expert. While inference is important, it's currently the fastest thing compared to training. The problem is phones, not laptops; laptops can more than handle inference, while a phone struggles. So Samsung, with its focus on an AI chip, is more important in this arena. Unless NVidia is going to start making phones, I don't see this, as an implementer, being that impactful. And memory matters more than the CPU on phones for this type of work.
On a side note, I don't even use GPUs for my AI work. I did a comparison, and the GPU only gave me a 20% increase in performance while costing twice as much. So at scale I can buy more CPUs than GPUs, and one more CPU is a 100% increase in performance, compared to gaining 20% at twice the cost. So I don't see the NVidia AI hype.
1x CPU + 1x GPU = 120% performance at 2x the cost.
2x CPU = 200% performance for the same price as 1 CPU + 1 GPU.
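The arithmetic above can be written out explicitly, reading "cost twice as much" as: the CPU+GPU box costs 2x a CPU-only box, i.e. the GPU costs about as much as the CPU (my reading of the comment, in normalized "CPU units" of money):

```python
cpu_cost = 1.0
gpu_cost = 1.0                 # CPU + GPU box costs 2x a CPU-only box
budget = cpu_cost + gpu_cost   # spend the same money both ways

perf_cpu_plus_gpu = 1.20                 # GPU adds ~20% over one CPU
perf_all_cpus = (budget / cpu_cost) * 1.0  # two CPUs at 100% each

print(perf_cpu_plus_gpu, perf_all_cpus)  # hybrid vs all-CPU, same spend
```

Under these specific numbers the all-CPU box wins (200% vs 120% of a single CPU); the conclusion flips as soon as the GPU speedup for a workload is larger than its cost premium, which is the usual case for dense training.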
I feel like the next big step to properly implement the coming AI and acceleration technologies would be to integrate such architectures directly into the motherboard, especially in light of how large NVidia's highest-end cards are becoming, how much more space-efficient mobos have gotten over the last half decade, and the power and efficiency of the APUs and NPUs coming out this year. Physically offloading those calculations onto a dedicated spot on the motherboard could provide an upper hand in computer hardware.
This also doesn't seem too far-fetched when you take into account that the industry is planning an "NPU standard" across mobile devices and various OSes, and that mobo manufacturers are already reconfiguring things like RAM from DIMM slots to CAMM2 on desktops. Combine all of this with the fact that the technology could be tied closer to the CPU on the north bridge, and it feels like a no-brainer to work with mobo manufacturers to further push the limits of computing power.
Also, I love that in the local AI space, Apple has accidentally (or strategically) made the Mac Studio a rather economical choice for local inferencing.
That, and Nvidia's profit margin (nearly twice Apple's) having Tim Apple gushing, also shows how dominant Nvidia is in the market right now.
the audio is kinda weird
18:39 Truth bombs being dropped.
I wanna geek out about this, but I'm convinced that all the AI advancement and inference refinement is just snake oil for over 90% of buyers. So I hate it. I'm down for new approaches, but for real, can any non-salesman or non-fanboy explain how this will benefit more than 1-3 companies?
Nvidia has plenty of cash to keep aggressively pursuing its goal of disrupting and leading as many markets as possible. It's pretty much confirmed their next goal will be edge and consumer AI, which their unsuccessful attempted acquisition of Arm basically foreshadowed. It will be very interesting to see how the edge/consumer Arm SoC they're working on competes with the rest of the players.
Thx for the vid and for shedding more light on their future accelerator 👍
I think much will depend on how aggressively NVIDIA builds and defends its (leading-edge) patent portfolio. First is best. :)
You really didn't understand shit about what this is... it's for companies; it will cost hundreds of thousands of dollars; it's designed to replace large AI servers (as big as buildings) with a much smaller solution. It's not for gamers, it's not for consumers, and it doesn't compete with any other company, because no other company is doing this.
This channel must be reported.
In the last year you were saying that Nvidia was at a dead end and that their stock price would plunge; instead, their price has risen 200%. Lucky for me, at the time I did not listen to you and I bought, taking into account how important AI was going to be, and made a lot of profit.
Nvidia and their CUDA cores already make me sick. Now they will pay developers to use this and close the loop. Yet another monopoly from Nvidia.
This won't end AMD for one reason: PRICING!!! Not everyone is related to Bill Gates or wants to take out a loan to buy this.
It all sounds very complicated, I will stick to Star Trek Online. 1080p, 60.
Return to the Emotion engine. AI revolution will help Game developers maximise the man jaws and diversity of every movie slop game.
I expect Nvidia to soon produce Decelerators and have these slow down any computer immensely unless the owner pays horrible subscription fees.
So it's an NPU? Or did I miss something?
The more they push all this A.I. the more likely I am to end up switching off and going back outside.
Yeah, given the performance of their last 2 gens, I'm sure AMD is shaking in their boots.
When do you think NVIDIA will release their consumer desktop pc GH200 Grace Hopper Superchip style product? I'd love an Nvidia-ARM all-in-one Linux beast.
When Part 2!?!? (Also, New nvidia shield tablet!!....yes please)
How annoying they are with AI. I have seen many surveys, and no one seems interested in the topic. Yet they continue making products and bombarding the market with something no one seems to be interested in.
Well done again. I want to be able to add several cards to my PC and cluster them; might need a three-phase plug.
PCIe hopes are dead; not enough of a market, because they have starved I/O on motherboards.
Mostly for the server side; I do not think consumers have as much interest in AI as these corpos make it seem.
Is it just me, or is there something off again with the voice? Needs more training?
I just realized that Pikachu is just a pun for Picojoules.