@@AnalogFoundry Cerny? No, I meant Daniel Owen; the timestamp is there, 22:20. Besides, the PS5 Pro is basically an APU: it uses the 16GB of GDDR6 for graphics and has 2GB of DDR5 for operational purposes.
Yes, I meant Daniel Owen, and because this channel is a PC gamer/gaming channel it can be slightly confusing if he talks of "system" memory and means the GDDR6 VRAM instead. The video is only on the surface about the PS5 Pro; Daniel suggests that AMD and Sony could learn from each other because the next console will very likely have an AMD APU as well. The PlayStation is AMD hardware with Sony's software.
Still doesn't quite matter; Nvidia is the one hindering performance behind their overbudgeted GPU paywall. There's absolutely no reason why games should run worse on GPUs that are way better than their Nvidia counterparts. But it is Nvidia """sponsoring""" those titles after all, so these advancements don't matter outside of consoles as long as Nvidia keeps "suggesting" that the developers they partner with keep giving the competition the short end of the stick when it comes to optimizations and GPU usage.
@@mikelay5360 Yeah, but aren't like 45-50% of those users per the Steam survey still using the 1080 Ti/1650/1660? It isn't like they are taking any advantage of RT, DLSS, or FG.
@@praisetheoak NVIDIA has been at it since those days; it didn't start with the 4000 series. Also, the 4090 is more popular than any current-gen AMD card on that survey, if you choose to believe it. I would say that game devs collect their own data during or after installation; they know where the masses are.
@@praisetheoak The 1080 Ti, 1650 (both variants) and 1660 (all 3 variants) account for 9.38% of all Steam users as of November, according to my quick napkin math. Nowhere near 50% lol. Another bit of quick math had non-RT-capable GPU owners at slightly below 30%, and a lot of those are people who either have integrated graphics or have clearly given up on playing new games years ago (GTX 970, RX 550...)
Congrats on 215k subscribers! Random question: doing the math, how long do you think it would take to hit 1 million at the current rate? 😅 I wish you all the best, and I'm very happy for you and the work you're doing for the community.
The problem with the idea of quantum leaps being available in AI is that AI capability is necessarily tied to hardware advancement. As someone who researches AI, I don't think there are "quantum leaps" available for consumers. There are "quantum leaps" available in cost-cutting. There are "quantum leaps" available for investors. There'll be meaningful improvement for consumers, but I wouldn't hold your breath on "quantum leaps".
I get what you mean on the desktop segment, because it is very general, but you need to think differently when it comes to AMD, and especially Sony/PS imo. They can change stuff for consoles that only really works well for consoles. Or especially well for consoles.
Unfortunately a bit hard to watch. I also play videos at double speed, and then the Sony guy at 4x is way too hard to understand. Trying to switch dynamically between 2x and normal speeds so both are actually 2x was too much work after the first 10 mins... sorry :(
This is marketing. Keeping the fire lit between generations. It's not about names or targets. It's about tiny algorithms and chipsets in the end. Which just aren't there yet.
Is there a market for new hardware for just a few games? Games are taking longer to make and with canceled games and studios we can expect fewer games.
More than 1.5x play speed loses its edge if you are not a native speaker. I'll point to the Spiffing Brit: he uses a lazy 0.25x speed for cheesing YouTube "features." Exploiting the clock of user interaction, which feeds the algorithm; what else did you expect?
Nvidia has dedicated servers to do the DLSS training for games. Does Intel have them for XeSS? Will AMD have them for FSR4? Could end users pool their own PCs for said training, akin to BOINC? You pick the game you like and the hardware you have, and Bob's your uncle!
@PCandTech-tr8xt Yeah I get your point of view but honestly I said that so I wouldn't sound crazy or random. That last part puts it into perspective that it's a joke about some niche and specific thing. I know there's a lot of young people here so maybe after that it could possibly compel them to search micro machines. It's an old commercial 😂
So the PS5 Pro is a hardware/software test platform. But I'm cautiously optimistic AMD will finally leverage their console knowledge for PC. They have full platform access: CPU, GPU, chipset. I've always wondered if they could out-optimize others that only have access to single parts of the chain.
If you haven't addressed that need for speed, there are a number of free browser extensions in the Chrome Web Store, usable on any Chromium browser like Brave, Vivaldi, etc. I use "YouTube Playback Speed Control", and I can speed up videos to 16x lol. YT chokes on anything faster than 8x depending on your connection and YT's servers.
I wish Nintendo were with AMD; since it's portable, they could get something really good in a small package instead of Nvidia. And Nvidia with Xbox and PS5, as they want more performance, DLSS, ray tracing and those kinds of things, lol
Nintendo did get something really, really good for the purpose they wanted, and if leaks are to be believed, the next Switch is also looking really good, with a GPU faster than AMD's 890M and 12 gigs of LPDDR5-7500. As for the other two, Nvidia doesn't offer a comparable SoC design that delivers what Microsoft and Sony expect while maintaining backwards compatibility, as they lack an x86 license.
Considering, RDNA4 is the last of it's lineage, and AMD is supposed to be shifting to UDNA for all GPUs after that, I figure this is Sony's play at getting some personal wish list stuff into the mix before the first successor to the RDNA4 core gets taped out. Custom silicon may still be custom. But, even Sony doesn't ship in numbers strong enough to warrant AMD going very far off their own path. Now, the thing that's bugged me for years is how AMD hasn't enjoyed better gaming uptake on the PC side as a direct result of their dominant presence in the gaming console market. Hooowever, with Sony having bought up so many game developers, that may finally change a bit. Problem is that seems a pretty long ways away, unless the PS6 is closer to ready than we realized. And, to that end, maybe this presentation from Mark was more of a preamble for RDNA4's expected debut at CES. There just isn't enough information to make a confident prediction on. The best I can say is that it makes sense that Sony would take interest in the expanded options and performance potential of UDNA over RDNA. And with as weird and AI-tastic as things are getting, they'll need this to keep up.
Looking at the reasoning behind the collaboration by Sony and AMD, it makes sense. Nvidia has a commanding lead over their competitors in terms of AI technologies and has enough funds to spend on R&D - AMD doesn't have that luxury. If this collab means further ML-based upscaling for consoles and PCs, it's likely going to pay off in the long term.
Further proof that AMD and Sony are co-developing FSR4. I think PSSR is FSR4 Beta/Lite with AMD adding to it for PC. I just hope my 7900xt works with it...
This isn't surprising. (I am not an expert on anything.) Going all in on raytracing for the PlayStation 6 was a pretty obvious move for Sony, even just with the current state of the art, before you consider the possibility of making further advances. PlayStation largely isn't held back by the things that have caused PC hardware and software vendors to go slow with raytracing: no installed base of existing graphics cards to support; less need to support old games; no PC gamers huffing that raytracing sucks. That means that Sony can provide game studios with a low-compromises raytracing machine and encourage them to stop dragging along the boat anchor of trying to support rasterisation and RT in the same game. (There are still portability risks to publishers in pushing _too_ far ahead of what can be replicated on PC or the next Xbox, but those will be fairly manageable by early 2027, and of course true PS exclusives won't have to worry about that.) And the big risk to Sony isn't from going all out for a big generational leap from the PS5; it's from not doing so. Another generational improvement that feels incremental and meh is likely to hurt PlayStation: raytracing is the best and really the only chance to deliver something like the staggering improvements of older console generations.
This is my main issue with the coming generations. It seems that AI has just contaminated every product tier. People do not understand that AI capabilities come at the cost of GPU space, which translates into smaller traditional computational gains and high prices due to process complexity. Cheaper solutions can solve typical GI, shadows and reflections with minimal cost to traditional computational power, and provide similar results for the vast majority of gamers. This is a solution in search of a problem that could be solved with simple smoke-and-mirrors tricks. But nooooooo, let's try to emulate real light behavior on a frame-by-frame basis on single consumer units, for the typical gamer who will be annoyed because it's costing him two thirds of the performance while making a blur of everything.
The reading comprehension of your viewers is insane. None of these are console tech; these are Radeon tech. You guys should be happy, since Radeon R&D is getting help. They have significantly fewer employees and resources compared to Nvidia. This is good news, since they will have more budget and resources to spare. Good thing PlayStation is stepping up for its hardware, forcing Radeon to innovate, or at least improve where it lacks. RDNA 4 will just be a sneak peek of what's to come. I am an Nvidia user, but hey, I am also a tech enthusiast.
A lot of viewers are unfortunately, for the most part, just tribal about PCMR. Being tribal in general immediately makes them focus on specific points here and there without thinking about the big picture, and that's very sad. The most interesting part of that presentation, even if you don't own a console or, I would dare say, hate consoles, is that it gives glimpses of future directions for AMD and Radeon, but also for games in general. PCMR people like to forget that, like it or not, consoles are still there and will still be there for a while. Yes, they are not for everyone, but there is a demand for a simple box that sits in the living room and JUST WORKS. And this means that game development has to take into account both PCs and consoles. The fact that AMD and Sony unite on ML-based models because they can't fight alone, and that even Cerny admits that rasterization has almost reached a dead end and RT/AI is the future, shows glimpses of what the future of gamedev may be made of.
So are we thinking that RDNA4 may be better at path tracing than the 40 series? If the 8800XT is $600, and if FSR4 is good and implemented in a lot of games, and if I can sell my 4070 Ti Super for $600... I might actually swap to AMD just to help the progress of the underdog. Right now I have to use performance upscaling at minimum to run path tracing, and certainly not locked to 60. RDNA 4 may be incredibly compelling. Radeon, it's your time to shine.
@@Dempig It looks to me like AMD is aiming to minimize the performance impact of FSR. I can see a situation where FSR balanced is similar quality to DLSS performance and they both produce a similar frame rate. Example: you run 4K performance DLSS vs native 1080p; they have the same internal resolution, but native 1080p will run much faster, because upscaling to 4K reduces performance quite a bit. You might be at 4K 60 on both GPUs, but when you turn on quality upscaling, the Nvidia GPU goes to 90fps and the AMD GPU goes to 100fps. I believe this is what they're referring to with this simultaneous parallelization of the upscaling algorithm by breaking it into small parts. If it looks 80% as good but has better performance, that is parity. FSR2 and FSR3 frame gen already achieve this to a small degree: slightly better performance. I think this is because they have such low image quality that there must be a performance win or it just looks terrible. Nvidia has the quality and AMD has the higher fps. Win-win.
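[Editor's note: to put toy numbers on the tradeoff this comment describes, here is a minimal frame-time sketch where both GPUs render the same internal resolution but their upscalers cost different amounts per frame. Every figure is invented for illustration, not measured from any real GPU or upscaler.]

```python
# Toy frame-time model: same 1080p internal resolution on both GPUs,
# but the upscale-to-4K pass adds a different fixed cost per frame.
# All numbers are hypothetical placeholders.
render_1080p_ms = 10.0  # assumed raster cost at 1080p internal res
upscalers_ms = {
    "native 1080p (no upscale)": 0.0,
    "fast, lower-quality upscaler": 1.0,
    "slow, higher-quality upscaler": 3.0,
}
for name, cost_ms in upscalers_ms.items():
    fps = 1000.0 / (render_1080p_ms + cost_ms)
    print(f"{name}: {fps:.0f} fps")  # 100 / 91 / 77 fps respectively
```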
ML graphics, in the case of current PSSR, are a double-edged sword. It can look great in places but be totally lacking in others, in the very same game. I wish people wouldn't look for artistic shortcuts that inevitably lead to results no one can improve on, because it is out of their hands. Raytracing, no matter how stunning it looks, is only a fraction of what you see during fast gameplay, but costs too many resources, so JUST SCRAP this idea until it is possible with raw power in every situation, instead of relying on estimations and guesswork from any sort of AI.
Imagine being able to reduce your youtube consumption by half or double the amount of videos watched in the same time by watching at 2x speed… 2x speed gang!
I watch your videos at 1.75x speed. It was annoying to have to switch speeds back and forth so I decided to watch the Sony video first. At 1.5x. The guy talks plenty fast.
I really hope AMD is planning to compete with an AI feature set that can match Nvidia, or even Intel for that matter. Despite Intel's recent failures, I think they are in a better position than AMD with their AI feature set. I won't even consider buying AMD GPUs because I'm not giving up DLSS. Whenever I play on my 7840U handheld, I just wish I had DLSS. These APUs from AMD are really good, but they would be amazing with proper upscaling. Not to mention, a potential shift towards ARM on PC could have them losing market share to Nvidia in the mini PC and laptop markets. If that happens, I wouldn't be surprised if Sony switched to Nvidia and ARM for the PS7.
If Sony is right about the future of the industry going the machine learning way, and AMD instead spends their silicon on ray tracing, they're going to be even further behind Nvidia with all the neural stuff.
I watch YT at 1.25x; any faster than that and too many people end up coming across as unnatural in how they speak (there are enough people who speak quickly enough that normies at 1.25x still sound normal). I'm so fed up with all this AI bullshit, with fake frames and fake resolution, just for the game devs to keep getting away with dogshit optimisation, using the upscalers and fake frames as a crutch for their shitty games. We've not made any real progress since the mid-2010s, and it's really depressing; we should be getting way better performance out of the hardware than we currently do.
So basically, graphics are cooked because the only way for them to sell new hardware was to deliberately cripple rasterized graphics and then rely on AI techniques to fix it? Humanity sure has mastered the art of creating fake problems in order to sell solutions. Like what are we even talking about anymore? It's always AI-generated this, machine learning that. Am I supposed to be impressed that the computer is generating textures on its own? From an artistic perspective, that seems really devoid of meaning. Whenever AI is used by a company, at best it's blatant laziness that should be treated as slop, and at worst it's them looking for ways to fire the talent while the business people get promoted, which is incredibly deplorable and damaging to gaming as a whole. It's one thing if they figured out some way to have AI speed up the optimization process without sacrificing visuals somehow, but using AI to generate more realistic textures is not something that I find interesting because it gets rid of the artistic process. I don't know, maybe this was inevitable, but if the future of gaming is fake frames and AI-generated graphics, I think I'm gonna stick with my 4070 for a very, very long time. Which I guess is a good thing, saves me lots of money.
The problem isn't tech advancement; maybe there are diminishing returns, but it's about budgeting. Cerny even said it in the Digital Foundry interview: PC hardware always brute-forces GFX, etc., but in consoles it's about budget, similar to cell phones. The high end gets everything tech-wise while mid to low has sacrifices; consoles fall on mid tech when new and low when mid-life, etc. Consoles will never be on par with PCs again after the 32-bit era.
I disagree. DLSS and FG are the only reasons why I play at 4k/120 on my TV using a 4070ti Super and a 650watt PSU. If I wanted to do so rasterised I would have needed a 4080 and a PSU upgrade. Both of which cost more and would have generated more heat and noise inside my case.
You can't make that much sense in a single comment in a YT comment section. People who get paid to develop such tech cannot wrap their heads around this.
AMD was never going to catch up to Nvidia by themselves. They just don't have the size and R&D budget to match. Partnerships like this are the best way to fix that. And Sony gets new tech with less R&D investment needed for their products.
They actually do now, but these plans were laid 3-5 years ago. AMD didn't know that Intel would have a short blip with 12th gen and then go back to sucking balls. But AMD split gaming and datacenter GPUs off from one another because one wasn't making money and the other was. AMD absolutely has the grunt now, but they don't see gaming GPUs as ever becoming a "real" cash cow. Nvidia, btw, thinks the same: their margins on gaming GPUs suck compared to what they make on workstation and data center GPUs. Which makes sense. They're gigantic dies and gamers expect to get them cheap. And since Dr. Su is a fiscally responsible lady, she insists that the Radeon division must pull its own weight financially, so trying to partner with as many as possible to gang up on Nvidia makes a lot of sense.
@@andersjjensen this makes complete sense
They can't catch up with DLSS. It has had a few more years to train, and Nvidia supposedly has much bigger capabilities to do so.
PSSR clearly shows that.
The only way it could happen is through incompetence or malice from Nvidia. Or diminishing returns in training.
Expecting anything other than "AMD still not as good / AMD disappoints" clickbait as the end result seems foolish to me.
@@damianabregba7476 Yeah nvidia would have to pull an intel and stagnate and AMD would need a generational leap or two
@@damianabregba7476 I think there are diminishing returns in training though
It's obvious now why RDNA 3 was neglected. The timing and schedule of PS hardware and the co-development of RT saved the underfunded Radeon division's schedule, letting it catch up on the RT feature set, denoisers, math, and engineering that came from the vector math work, etc., and release it with RDNA4.
Radeon's best gens have been those that released alongside new consoles.
I will have to say that I expect little. AMD has a track record of releasing unfinished but promising technologies and not following up well, such as FSR and Anti-Lag 2. If Sony is involved, maybe chances are higher, but Nvidia is a trillion-plus-dollar company. And Sony just released the PlayStation 5 Pro with a less than stellar performance improvement. "Expect little and be gladly surprised if they surpass it" is my motto.
Performance gains with the PS5 Pro are significant. It's what you make of it that matters. Game developers still put all their effort into the base console, as they clearly should.
@@stefanschuchardt5734 An 800-to-900-dollar console delivers a 20%+ average improvement and an extremely controversial raytracing improvement for 2x the price. A pretty bad value proposition for consumers.
Is RDNA4 AMD's Snyder's Director's Cuts?
I just hope the future is not poor optimization, noise, temporal smudges, flicker and games that somehow have less fidelity with 10x the computing power.
Yeah with them emphasizing ML/RT I'm not too hopeful about all this.
As long as that sells more GPUs, you can bet that's exactly what you'll get.
I don’t care if the games are actually good, but about 85% aren’t, so let’s start with that first
Lisa Su used to work for Sony. She was the lead designer/ engineer of the PS3's main chip, as I understand it, so AMD's connections with Sony effectively go back well before the development of the PS4.
Wait really? It's something I didn't know and very interesting!
Dr Su worked for IBM, not Sony. Sony, Toshiba and IBM co-developed the Cell Processor that was used in the PS3.
Cell processor was from IBM, where Su worked.
IBM @@ayac.4998
budget Elton John
Lmaoo
Elton John at home
hahaha
Yoooo is that why he looks so familiar?? Lmao
I thought it was Dana Carvey
When I was doing my computer science degree in the late 90s, ray tracing was something so computationally expensive it was only used in images produced by PhDs in their research into ray tracing. It was something the profs might occasionally mention in conversation outside of lectures, but it wasn’t mentioned in any lecture, even in my computer graphics class. I honestly never thought we’d get to a point where there could be ray tracing in a video game. But I’m not an optimist about technology.
So… given the expected price for the 5090 (based on the leaks), and the middling stats for the 5080 and below (considering the 4000 series), I’m beginning to wonder if consoles will be the future of high-end gaming.
If I buy a new console every 6 to 8 years, that console could contain THE cutting edge video card inside it, and be more affordable than THE cutting edge video card in a PC. There would be manufacturing efficiencies gained through selling more cards, and doing so over 6-8 years, instead of the current 2-year lifespan of video cards. There would even be programming efficiencies gained through a 6-8 year lifespan of programming for that card.
I applaud AMD and Sony for working together on quasi-open standards to improve gaming for both PCs and consoles. But, you know, if the 6090 launches at $3000, and the PS7 is less powerful than a PC, and 4K gaming is still only available via upscaling, maybe the gaming hardware people need to consider a new approach, and the game designers need to reconsider ray tracing.
I’m not entirely sure this was the right video for this comment… it’s just something I’ve been mulling over for the last couple weeks…
The guy who designed the Atari arcade game Marble Madness was Mark Cerny, and he was only 18 when he started working on it. The game came out in 1984 and was super innovative for its time, with those isometric graphics and the whole idea of guiding a marble through these tricky mazes. Cerny went on to have an amazing career in gaming; he worked on tons of iconic games and even became the lead architect for the PlayStation 4 and 5.
This is RDNA 4, or at least a version of RDNA4's raytracing, soon to be UDNA.
Watching Mark Cerny through you at 4x and still understanding him makes me worried..
How do you understand that? I can barely keep up at 2x
That's just a lie lol
@Davinmk Any avid gamer who's taken a course or two in ML/AI should be able to understand this.
I can't really understand Cerny at 4x speed. It's actually really funny. It sounds like he's speaking a foreign language, and then being translated.
@Davinmk They have ADHD.
I will sometimes watch YouTube videos with unnaturally slow dialogue at 1.25x, or even 1.5x in extreme cases, but Daniel, that 2x speed was insane 😅 I can't believe you watch everything like that lol.
I've been watching almost everything at 2x for years, even though English is not my native language and I watch mostly English videos.
It's crazy that you can genuinely listen to videos at 2x speed. I just can't. Can't really understand what they're saying nor do I even want to listen to that in 2x. It sounds.. uncomfortable.
I can't do it, either. Maybe people who do tend to have subtitles on (as Daniel does here) to help them follow along. That kind of prevents you from multitasking in other tabs/apps while you listen, though, so I'm not sure how much time you actually end up saving. Also, if you're rewinding or pausing to read the subtitles because it's going a little too fast, that also eats away at the time savings.
My girlfriend thinks the same. She can't understand what's being said. She can however spell out whole sentences and I'll lose whatever she's saying after like 3 letters lol.
I can listen to Daniel at like 3x lol
I can easily understand what's being said, but it's still uncomfortable. Especially when most of my YouTube watching is listening in the background while doing some other task. Not possible to do that at 2x speed when it requires your complete attention.
I watch videos at 2x speed when I'm not planning on sitting for long or it doesn't require my full attention.
Just realized that the part about free use for the learnings etc. sounds like what UDNA would be like. I think that essentially confirms UDNA, probably in place of RDNA5.
I really hope so, since chiplets shared with data-center is the only way for AMD to be able to take on Nvidia.
Yup
When Ray Tracing is explained like that, I can see why it's so hard to run. The process looks so inefficient and convoluted I'm surprised it works in real time.
But I guess that is the MOST efficient way to do it right now, otherwise someone would've already found a better way.
I imagine the next step would be to embed more properties into the assets themselves to cut some time on checks and all that.
But that would mean that the software isn't doing everything automatically, and we don't do that round here.
It's not the most efficient way. UE5 is anything but efficient. It's the way Nvidia enforced raytracing on all games.
*Coughs in Threat Interactive*
Honestly, it is never going to be fully efficient due to the divergent nature of RT (the full kind of RT, not just sun shadows). To combat the inefficiency, instead of fully resolving an image or RT effects, comes the denoiser. The denoiser alone is not enough for real-time applications, so next comes accumulation, which uses data from multiple frames to help resolve the image of the current frame. What you end up with is a result that is smooth but a bit laggy. A game with full path tracing will have noticeable lag in lighting and reflection resolve (they need time to resolve fully) and will also become less detailed, especially in movement.
Unless someone comes up with a fundamentally different way of doing RT, I don't think this problem can be fully fixed, but it should be reducible with faster hardware and probably more AI to fill in the gaps.
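[Editor's note: a minimal sketch of the temporal accumulation idea this comment describes, assuming a simple exponential blend. Real denoisers add reprojection, variance estimates and spatial filtering, none of which is shown here; the blend factor is an arbitrary illustrative value.]

```python
import numpy as np

def accumulate(history: np.ndarray, current: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Blend the noisy current frame into the accumulated history.

    A small alpha averages many frames (smooth but laggy, as the comment
    notes); a large alpha tracks changes faster but stays noisier.
    """
    return (1.0 - alpha) * history + alpha * current

# Toy usage: a constant signal corrupted by per-frame noise converges
# over several frames instead of resolving instantly.
rng = np.random.default_rng(0)
truth = np.full((4, 4), 0.5)       # the "ground truth" lighting
history = np.zeros((4, 4))         # accumulation buffer starts empty
for frame in range(60):
    noisy = truth + rng.normal(0.0, 0.2, truth.shape)  # low-sample-count noise
    history = accumulate(history, noisy, alpha=0.1)
print(abs(history - truth).mean())  # small residual error after accumulation
```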
@brando3342 But that guy talks about baked light, not Lumen / raytraced GI.
The most efficient way is to implement ray tracing into the game itself and not require an Nvidia-only GPU to do good RT. That is why AMD open-sourced FSR. You can see that modded FSR games are often better implemented than the devs' own versions LOL
Great news, INTC and AMD are making decent strides. We as the consumer need to hope the monopoly gets weakened.
Hoping for the monopoly to be weakened is pointless. We have to actively weaken it. We do that by not buying Nvidia.
@@rangersmith4652 In some use cases that's just impossible; there weren't any reasonable alternatives.
Only if the price remains as competitive as Intel's. Undercutting Nvidia by 50 dollars will not work.
@@mikelay5360 Yea, they just wait for Nvidia to launch/price and then follow. If the 5070 is $699, the 8800XT will be $599/$649, so you might as well buy Nvidia, which is what people do.
@@rangersmith4652 Correct me if I'm mistaken, but I believe most of their consumers are corporate entities and not gamers. We don't really matter to them anymore. However, if Intel (INTC) and AMD can create viable products and features for the corporate space and offer real competition in that market, it may encourage Nvidia to make a greater effort with gamers. We're only approximately 17 percent of their revenue now.
One thing: the improvement is not that the BVH now holds twice the nodes per level; it is that the ray intersector can do double the intersections, thus enabling a wider and shorter BVH. 4→8 parallel ray-box intersections and 1→2 parallel ray-triangle intersections per cycle; that's the key.
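[Editor's note: a toy sketch of why wider nodes matter, not AMD's actual traversal code. The slab test and the depth arithmetic are generic; the point is that testing more child boxes per step lets the tree be shallower, so each ray takes fewer serial steps.]

```python
import math

def ray_box_hits(origin, inv_dir, boxes):
    """Slab-test a ray against a list of AABBs; returns hit flags.

    In hardware these per-child tests run in parallel, so an 8-wide
    intersector resolves a whole 8-child node in one cycle.
    """
    hits = []
    for (lo, hi) in boxes:
        tmin, tmax = -math.inf, math.inf
        for axis in range(3):
            t1 = (lo[axis] - origin[axis]) * inv_dir[axis]
            t2 = (hi[axis] - origin[axis]) * inv_dir[axis]
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
        hits.append(tmax >= max(tmin, 0.0))
    return hits

# Two child boxes of one node: the first lies on the ray, the second doesn't.
print(ray_box_hits((0, 0, 0), (1.0, 1.0, 1.0),
                   [((1, -1, -1), (2, 1, 1)), ((-5, -5, -5), (-4, -4, -4))]))

# Depth of a balanced BVH over 1M primitives: ~10 levels at 4-wide,
# ~7 levels at 8-wide, i.e. fewer serial traversal steps per ray.
for width in (4, 8):
    print(width, "wide:", math.ceil(math.log(1_000_000, width)), "levels")
```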
This says more about how AMD is doing in this space than it does about PlayStation. Cerny is pushing the boundaries of console gaming.
Cerny is a hack that lies constantly lmao of course you’re praising a false idol
missed opportunity not calling it HOLLOW PURPLE
Domain expansion:..... *copyright infringement*
@@panakajack1525lmaoooooo
Dodged a bullet there
And ending like the user of that technique?
@@Proaz15don’t hurt them too bad
Do you know about the "Video Speed Controller" add-on? I always watched YouTube at 2x as well, but sometimes even that is too slow. With the add-on you can speed it up to 16x. I have it bound to speed up / slow down on 2 of my mouse buttons. Makes watching content so much more time efficient. Also, the add-on works on pretty much any video on the internet, not just YouTube.
Cerny looks like the tech-version of Dana Carvey. I don't know about you, but I have never seen them in the same room before.
SNL? 🤭
Cerny (Černý) means "black" in Czech, which is where the name comes from.
Turtle turtle 🐢 🐢 🐢
Not gonna lie, it's hard not getting excited about this lol
This could very well become a transformative step not only for console gaming but also for the overall gaming space and many many devs across basically all levels.
Knowing that Mark Cerny is fully focused on working on stuff like this is great to see, especially since they already have a pretty clear vision and path for how to move forward.
But I agree, the PS6 will be VERY interesting to see; if they can stick the landing with that machine in combination with Amethyst, it might well become an incredibly powerful piece of hardware that could REALLY push the boundaries of console gaming once again.
Well, the strategy used in the PS5 Pro & PS6 is going to be very different with a constrained SoC, compared to a full-fat PC architecture.
It's very clear that Sony does not want to pay for extra silicon acting as a separate frame buffer on the SoC. This is achievable with 128MB of V-Cache acting as a frame buffer, but that would make the SoC extremely expensive and add $300-400 to the base price of the console (see the sizing sketch after this comment).
Using the register buffers is quite smart but comes with other performance considerations, because switching context in the program requires emptying the register buffers and then bringing back the register data for the previous instruction.
Anyway, AMD can steal some ideas from PSSR and Ray Tracing optimisation but the hardware on the PC side will be general, NPU on the CPU for LLM for NPC interaction, RT & Vector processing on the GPU to handle all the graphical AI stuff. GPU using GDDR6/7 also has much greater bandwidth compared to the DDR5 used in consoles.
What this means for PC is, in a less memory constrained environment, AMD can leverage the VRAM to a much higher degree and optimise the driver code to use less ram to perform the same tasks like upscaling, frame gen & ray reconstruction. This will be very helpful for low/mid-range GPUs with 12-16GB of VRAM when FSR4 comes out.
Another thing about AMD AI: ROCm is open-sourced. Sony may be forced to use an open license for the hardware, because the underlying driver for RDNA2 / RT / vector processing is open-sourced. Only the custom silicon can be closed-source, so Sony probably has to yield a lot of the control to AMD. This is great news for everyone involved with graphics, because console developers are well known for being able to squeeze every drop of performance out of the silicon, compared to the laxer developers in the PC space who have the luxury of tons of memory.
Now to be clear, I am NOT knocking PC devs. PC games pioneered rich open-world games because PC devs concentrate on populating the huge world with content, with fewer worries about optimising the game to use less RAM & VRAM. That's why PC games and console games are so different.
Regarding openness of the CNN stuff, it is going to be proprietary to AMD due to the driver. I am not aware that ROCm can run on Nvidia or Intel hardware. However, the mathematical concepts & logical workflow of the BVH (or using register buffers as a render buffer, as discussed in this video) are open, and any GPU maker or game dev can opt to use those concepts in their graphics pipeline.
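[Editor's note: on the 128MB-cache-as-frame-buffer point in the comment above, quick napkin math on how many full 4K render targets such a cache could hold. The two pixel formats are picked purely for illustration.]

```python
# Back-of-envelope check of the "128MB cache as frame buffer" idea:
# how many full 4K render targets fit in it?
W, H = 3840, 2160
formats = {"RGBA8 (4 B/px)": 4, "RGBA16F (8 B/px)": 8}  # illustrative formats
cache_mb = 128
for name, bytes_per_px in formats.items():
    size_mb = W * H * bytes_per_px / (1024 ** 2)
    print(f"{name}: {size_mb:.1f} MiB per target, "
          f"{cache_mb // size_mb:.0f} fit in {cache_mb} MiB")
# -> ~31.6 MiB (4 fit) and ~63.3 MiB (2 fit): enough for a couple of
#    full-frame buffers, which is why the idea is plausible but pricey.
```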
Translation: I'm smart.
The end.
This just means the PS6 with PSSR will cost between $800 and $1000, since it will have some AI.
@@TkoddaLee The AI stuff is part of RDNA4, so I suppose the PS6 will tap into that silicon. It's the V-Cache that's very expensive. I suppose Sony will charge $100 more than the PS5 Pro.
When you build a console, the main cost driver is usually the die size. This is why console makers usually want the rights to have the chip design modified for smaller nodes later on. It's why cache is usually the first victim in the design phase and why they can't clock very high - because a more compact circuit cell means more interference from adjacent chip circuits e.g. Zen4c vs Zen4.
Sony bringing out their DMA expertise from those PS2 days is great, because that thing was designed to not have any memory bottlenecks - 2560-bit fillrate monster.
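[Editor's note: a rough sketch of the die-size economics in the comment above, using a standard Poisson yield model. The defect density, wafer cost and wafer area are placeholder values, not any foundry's real numbers.]

```python
# Why die size dominates console cost: with a Poisson defect model,
# yield falls exponentially with area, so cost per good die rises fast.
import math

def yield_rate(area_mm2: float, defects_per_mm2: float = 0.001) -> float:
    return math.exp(-area_mm2 * defects_per_mm2)  # fraction of defect-free dies

wafer_cost, wafer_area = 17_000, 70_000  # illustrative $ and usable mm^2
for area in (150, 250, 350):
    dies = wafer_area // area
    good = dies * yield_rate(area)
    print(f"{area} mm^2: ~${wafer_cost / good:.0f} per good die")
# -> roughly $42, $78, $121: more than doubling the area nearly
#    triples the cost, which is why cache is cut first.
```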
2:22 I actually watch in 2x. Impossible to understand in 4x 😂😂
same haha
@@neoey2x Makes me anxious AF. Most YouTube videos I watch, I have in the background to relax while driving or whatever. Not worth it to me. I run 1.25x or 1.5x often, depending on the speakers.
@@christophermullins7163 Same. It's not that I couldn't watch most videos at 2x speed, but why? The exception is some certification training videos, which have to be accessible to the lowest common denominator. Some of those have unnaturally slow speech and are basically normal at 2x speed.
@@BlueDrew10 Ijustcan'thandlelisteningtosuchjibberjabberathighspeeds
Also, AMD should remember the cloud server chip it made for Microsoft Azure, the "fully fused network" one; the issue is cost. The PS5 Pro doesn't need 340GB of on-package HBM3, so it could probably cost much less than the chip in the Azure cloud. So they should already know an answer or two here; the real questions are how low-priced AMD is willing to sell it, and what their cloud server clients will say about AMD putting HBM3 on a console before an everyone-can-buy-it server chip.
Why do I say HBM3 and not GDDR7? Yes, GDDR7 is fast, but it has the problem of not having a wide enough bus for the NPU or, on graphics cards that use one, for the tensor processing units. Nvidia had to get creative to get AI upscaling to work on a consumer card. In short, it takes an image that can fit into 1MB to 10MB and streams it to the tensor processors to be upscaled into a much bigger image at the end of post-processing.
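[Editor's note: ballpark math behind the bus-width point above. HBM trades per-pin speed for a much wider interface; the pin rates below are round figures (HBM3 roughly 6.4 Gb/s per pin over a 1024-bit stack, GDDR7 up to about 32 Gb/s per pin over a 32-bit chip), not exact specs.]

```python
# HBM vs GDDR in one line of arithmetic: bandwidth = pins * rate / 8.
# Pin counts are the real interface widths; rates are ballpark figures.
def bandwidth_gb_s(pins: int, gbit_per_pin: float) -> float:
    return pins * gbit_per_pin / 8  # GB/s

print("HBM3 stack, 1024-bit @ 6.4 Gb/s:", bandwidth_gb_s(1024, 6.4), "GB/s")  # ~819
print("GDDR7 chip,   32-bit @ 32 Gb/s:", bandwidth_gb_s(32, 32.0), "GB/s")    # 128
# One HBM stack moves as much data as several GDDR chips combined,
# which is the "wide bus" advantage the comment is pointing at.
```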
I was really concerned we'd never get to see these presentations from Cerny due to all the idiots throwing a tantrum about not seeing the plastic casing in his previous presentation. I really enjoyed it and will probably watch it again.
Ok
@23:35 So all we need is 128MB of cache on die? Well, AMD already has a 96MB solution, and that means we're not far off, boys.
Will any of this RDNA 2.5+ tech for the PS5 pro have any software side benefits for RDNA 3 cards? Or will this only impact RDNA 4 and beyond GPUs with specific hardware modules?
This does not work in RDNA. So the AMD 8800XT does not support it; most likely one generation after the 8000 series.
@haukionkannel Gotcha. Thanks!
"If you're watching me at 2x, watch him at 4x" is the best part.
We're lucky to have PlayStation to motivate AMD to focus and improve their ATI GPU division at least a little bit.
For some reason they go all out on Ryzen only. Unlike Intel, Nvidia is not sleeping around, at least for now. The only way they can have a Ryzen moment with Radeon is if they just... get better - or if Nvidia repeats the same mistakes as Intel, maybe even encountering brain drain.
I think that the main problem with RDNA 2 and RDNA 3 is that the ray and AI (for RDNA 3) accelerators are tied to the number of CUs on each GPU stack, which limits what Radeon can do in ray tracing and upscaling scenarios (AI FSR is not released yet, but RDNA 3 will definitely also get it, given that it already has the cores, just not the task).
If you look at how the PS5 Pro got to 300 TOPS of INT8, and at what it takes to actually feed them, you probably need RDNA4 for AI FSR.
RDNA3 won't have AI upscaling in their HW. Maybe it'll be software based, but definitely not HW like RDNA4.
Sounds like AMD needs Sony more than Sony needs AMD
Actually false. Radeon needs Sony. AMD does not. AMD makes a majority of its profit from data center, and AI.
@ Yea I agree, I should have said Radeon. Good reply and thank you for being mature about it
@@Kapono5150 I get it. I was hoping to clarify for those that don't realize AMD is huge; it's just Radeon that isn't. Lisa Su limits their budget and R&D resources, which is why partnerships like this are even necessary. If Radeon had the same level of resources available to them as Nvidia's GPU development department, it would likely be a lot harder for people to just be team green.
HAHAHA! You being the mouse pointer with your finger pointing is hilarious and brilliant! Love your channel and insight into this sort of stuff, you break down so that an older guy like me can understand. Thank you bro! 👍
Mark Cerny on 2x speed is basically Mordin Solus from Mass Effect
Cool that Dana Carvey works at AMD now
Can anyone who is working on lightweight neural networks tell me why we don't have neural upscaling asics with a dedicated cache yet? Is it bc we aren't sure of the best way to get good quality yet and no one wants to lock hardware in on it or is it simply not doable?
PSSR has had issues in some games, but then videos ignore the ones it does very well in. That out of the way, I hope this means AMD has more fight in them, since I'd rather team green not be the only GPUs left in the game, for my wallet's sake alone.
22:30 The PS5 does not have separate memory for the system and graphics.
It does. The 2GB of DDR5 is for the system; the 16GB of GDDR6 is for gaming. Whereas on a computer you can add DDR to help with the game and leave space for other programs.
It does... even the PS4 has a hidden DDR3 module. The idea is that everything needing frequent access goes there, so it doesn't eat bandwidth. They don't care much about space.
Yes. This is why consoles, given their size, have an advantage over PCs. This could be why AMD is pivoting towards APUs.
Love your channel, keep it up man.
Can you imagine Nvidia upscaling old Nintendo games on Switch 2 with ML?
These projects don't matter when the games are terribly optimized by developers.
Then don't buy unoptimized slop
@DragonOfTheMortalKombat a decent amount of new games fall into the category of "unoptimized slop". If new games are going to run badly anyway there's less reason to upgrade.
So, reading between the lines, I see that Sony wants a unified architecture to make porting their games to PC easier... which is good news; I don't like exclusivity.
It's more that it is cheaper to co-develop the hardware than to port the games…
Always remember. Lisa Su is the cousin of Jensen Huang. There are always secret midnight handshakes between these two at the end of the day. But as a pure Nvidia fanboy, I welcome this collaboration. It is never too late.
37:47 I think he means the AMD UDNA that the PS6 will use.
The fact that you watch youtube in 2x speed leads me to think you may be an actual psychopath lmao
ain't nobody got the time for 1x
Multitask @@aNINETIEZkid
@@DBTHEPLUG multitasking is one way of increasing efficiency. However, you cannot focus on 2 things with 100% attention so it isn't always beneficial to do more than 1 task at a time.
watching 2x and being able to watch 2 hours worth of videos in 1 hour is most efficient and opens up time for other things or learning more
@@DBTHEPLUG there were some studies about that; usually multitasking means you do multiple things poorly, or the switching cost (the time to switch between tasks and gather your thoughts) outweighs any benefits. Humans are poor multitaskers; it's far better to arrange your tasks in a way that lets you complete them efficiently one by one.
I watch at 2.3x you slo mo 😤
I seriously recommend people start watching all videos at 2x. It was a life-changer for me when I first learned of it; I saved so much time and was able to watch so much more. Normal talking pace feels so slow without it.
Garth?
Underrated.
Bro, I usually watch 2x speed, and you watching Cerny at 2x speed while I'm watching you at 2x is kind of cartoonish. 😂
To avoid confusing anyone, make it clear that when you talk about system memory (22:20) you mean the GDDR6(7) on the GPU. It's clearly labeled as GDDR6 in the picture, so it should be clear, but usually when someone says "system memory" they mean the computer's main memory, not the GPU's VRAM. Adding more cache would make the GPU bigger and more expensive. It should be possible to add Infinity Cache (17.2 TB/s of bandwidth) on a separate chiplet, but not with the RDNA4 RX 8000 GPUs, which will use a monolithic design and aren't meant for the high-end market. I guess there will be a tradeoff between cost and cache size. Double the RT speed should be possible with RDNA4, and I hope the new GPUs land in the €400-700 range, but it will depend on how fast the GPUs are in general; at the lower end, Intel's Battlemage may help keep prices down.
The "system memory" Cerny in this video is referring to PS5 Pro's single pool of 16GB GDDR6 memory, not PC's main memory or the dGPU's VRAM.
@@AnalogFoundry Cerny? I meant Daniel Owen; the timestamp is there, 22:20. Besides, the PS5 Pro is basically an APU: it uses the 16GB GDDR6 for graphics and has 2GB DDR5 for operational purposes.
@@MK-xc9to - you mean Daniel Owen? I'm aware of the PS5 Pro specs.
Yes, I meant Daniel Owen, and because this channel is a PC gaming channel it can be slightly confusing if he talks about "system" memory and means the GDDR6 VRAM instead. The video is only superficially about the PS5 Pro; Daniel suggests that AMD and Sony could learn from each other, because the next console will very likely have an AMD APU as well. The PlayStation is AMD hardware with Sony's software.
Still doesn't quite matter; Nvidia is the one hindering performance behind their overbudgeted GPU paywall. There's absolutely no reason why games should run worse on GPUs that are way better than their Nvidia counterparts. But it is Nvidia """sponsoring""" those titles after all, so these advancements don't matter outside of consoles as long as Nvidia keeps "suggesting" that the developers they partner with give the competition the short end of the stick when it comes to optimization and GPU usage.
It's worse than that. Nvidia sends engineers, for free, to "help" with engine modifications that benefit their hardware.
90% market share is very persuasive.
@@mikelay5360 Yeah but aren't like 45/50% of those users per steam survey still using the 1080ti/1650/1660? It isn't like they are taking any advantages out of RT nor DLSS nor FG
@@praisetheoak NVIDIA has been at it since those days; it didn't start with the 4000 series. Also, the 4090 is more popular than any current-gen AMD card on that survey, if you choose to believe it. I would say game devs collect their own data during or after installation; they know where the masses are.
@@praisetheoak 1080 Ti, 1650 (both variants) and 1660 (all 3 variants) account for 9.38% of all Steam users as of November according to my quick napkin math. Nowhere near 50% lol. Another bit of quick math had non-RT capable GPU owners at slightly below 30%, and a lot of those are people who either have integrated graphics or have clearly given up on playing new games years ago (GTX 970, RX 550...)
Congrats on 215k subscribers! Random question: doing the math, how long do you think it would take to hit 1 million at the current rate? 😅
I wish you all the best and very happy for you and the work you’re doing to the community.
I can't absorb the info at 2x on top of your 2x; too many words fuse together. I can follow at 1.5x of 2x, but my brain still isn't fast enough for 2x of 2x.
The problem with the idea of quantum leaps being available in AI is that AI capability is necessarily tied to hardware advancement. As someone who researches AI, I don't think there are "quantum leaps" available for consumers. There are "quantum leaps" available in cost-cutting. There are "quantum leaps" available for investors. There'll be meaningful improvement for consumers, but I wouldn't hold your breath for "quantum leaps".
I get what you mean on the desktop segment, because it's very general, but you need to think differently when it comes to AMD, and especially Sony/PlayStation, imo. They can change things for consoles that only really work well, or work especially well, on consoles.
Unfortunately a bit hard to watch. I also play videos at double speed, and then the Sony guy at 4x is way too hard to understand. Trying to switch dynamically between 2x and normal speed so both are actually 2x was too much work after the first 10 mins... sorry :(
Daniel checking his stream computer every other second like he's obsessed with it. I say he's had problems with it one too many times 😂
This is marketing. Keeping the fire lit between generations. It's not about names or targets. It's about tiny algorithms and chipsets in the end. Which just aren't there yet.
Thanks Sony for supporting AMD :)
PSSR has a bright future that I am excited for
AMD GPUs + playstation combined are still a much smaller number than nvidia's market share, I don't see most PC devs catering towards this over RTX.
2x YouTube supremacy. 4x mode when???
lol no
I have 5x speed option
I watch you in 2x so the Amethyst announcement killed me
Is there a market for new hardware for just a few games? Games are taking longer to make and with canceled games and studios we can expect fewer games.
Plenty of games come out and keep coming out and are slated to come out lol you just play dog shit games or don’t know where to find accurate sources
@@Jyolski Really? Then why have half not upgraded from their PS4 to a PS5?
I feel like we’re getting a real peek at your teaching style here haha
Dana Carvey is that you😁
Ha! And when I said PSSR was a joint collab with AMD due to the hardware, people laughed. Well, who's laughing now? 😂
We just got a glimpse of what Mr. Owen sounds like in class. Makes sense?
I was hoping we'd see some Choppin' Broccoli.
More than 1.5x play speed loses its edge if you are not a native speaker. I'll go with the Spiffing Brit: he uses a lazy 0.25x speed for cheesing YouTube "features."
Exploiting the clock of user interaction which feeds the algorithm, what else did you expect?
Nvidia has dedicated servers to do the DLSS training for games. Does Intel have them for XeSS? Will AMD have them for FSR4?
Could end users use their own PCs to pool said training, akin to BOINC? You pick the game you like and the hardware you have, and Bob's your uncle!
man I just wanna play games at 60fps with 2016-2019 era fidelity
2x speed makes it sound like *Micro* Machine learning.....If you know you know 🏎️
Bro, not everyone likes the whole 'I’m special because I know this' act. Just share the joke without the extra ego boost
@PCandTech-tr8xt Yeah I get your point of view but honestly I said that so I wouldn't sound crazy or random. That last part puts it into perspective that it's a joke about some niche and specific thing.
I know there's a lot of young people here so maybe after that it could possibly compel them to search micro machines. It's an old commercial 😂
@@SmoothSportsGaming got it, thx
@@SmoothSportsGaming The tiny little toy cars??? I still have hundreds of those things lmao
@@lilpain1997 Do you remember the commercials? 😂 man that guy could talk fast!
Man that paused closed caption had me fucked up
So the PS5 Pro is a hardware/software test platform. I'm cautiously optimistic AMD will finally leverage their console knowledge for PC. They have full platform access: CPU, GPU, chipset. I've always wondered if they could out-optimize others that only control single parts of the chain.
The Xbox Series S and X can be switched to Windows. It is possible, just not for the normal consumer.
I've always wanted to watch YouTube at above 2x speed!
forealz. Some people talk so slow. 🤣🤣
[Cough]Gamers Nexus[/Cough]
If you haven't addressed that need for speed, there are a number of free browser extensions in the Chrome Web Store, usable on any Chromium browser like Brave, Vivaldi, etc. I use "YouTube Playback Speed Control", and I can speed videos up to 16x lol. YT chokes on anything faster than 8x depending on your connection and YT's servers.
I have a modded youtube and it has 5x speed option💀
I wish Nintendo were with AMD; since it's a portable, they could get something really good in a small package instead of going with Nvidia.
And Nvidia with Xbox and PlayStation, since they want more performance, DLSS, ray tracing, and those kinds of things, lol
Nintendo did get something really, really good for the purpose they wanted, and if leaks are to be believed, the next Switch is also looking really good, with a GPU faster than AMD's 890M and 12GB of LPDDR5-7500. As for the other two, Nvidia doesn't offer a comparable SoC design that delivers what Microsoft and Sony expect while maintaining backwards compatibility, since they lack an x86 license.
Considering RDNA4 is the last of its lineage, and AMD is supposed to be shifting to UDNA for all GPUs after that, I figure this is Sony's play at getting some personal wish-list stuff into the mix before the first successor to the RDNA4 core gets taped out. Custom silicon may still be custom, but even Sony doesn't ship in numbers strong enough to warrant AMD going very far off their own path. Now, the thing that's bugged me for years is how AMD hasn't enjoyed better gaming uptake on the PC side as a direct result of their dominant presence in the gaming console market. However, with Sony having bought up so many game developers, that may finally change a bit. Problem is, that seems a pretty long way away, unless the PS6 is closer to ready than we realized. And to that end, maybe this presentation from Mark was more of a preamble to RDNA4's expected debut at CES. There just isn't enough information to make a confident prediction. The best I can say is that it makes sense for Sony to take an interest in the expanded options and performance potential of UDNA over RDNA. And with as weird and AI-tastic as things are getting, they'll need it to keep up.
Looking at the reasoning behind the collaboration by Sony and AMD, it makes sense.
Nvidia has a commanding lead over their competitors in terms of AI technologies and has enough funds to spend on R&D - AMD doesn't have that luxury.
If this collab means further ML-based upscaling for consoles and PCs, it's likely going to pay off in the long term.
So they will optimize the games for the developers; that's cool. We need that for UE5 games on PC as well.
2:22 4th wall break inside a 4th wall break…… that’s like 16 walls!
Further proof that AMD and Sony are co-developing FSR4. I think PSSR is FSR4 Beta/Lite with AMD adding to it for PC. I just hope my 7900xt works with it...
I love your stuff but 2x speed is just wrong :)
Once again here to request a look at the history of generational GPU price:performance improvement over the years 🙏
This would definitely be nice to look at. Especially comparing msrp prices
lol... i thought i was the only one who watched youtube videos at 2X speeds or faster
Such a shame it's a custom RDNA chip. I was hoping optimizations for the consoles would benefit gameplay on discrete AMD GPUs.
Mark Cerny and his team are really smart.
Didn't he mean to say Future and Custom 'UDNA'..? 🤔
This isn't surprising. (I am not an expert on anything.) Going all in on ray tracing for the PlayStation 6 was a pretty obvious move for Sony, even just with the current state of the art, before you consider the possibility of further advances. PlayStation largely isn't held back by the things that have made PC hardware and software vendors go slow with ray tracing: no installed base of existing graphics cards to support; less need to support old games; no PC gamers huffing that ray tracing sucks. That means Sony can provide game studios with a low-compromises ray tracing machine and encourage them to stop dragging along the boat anchor of supporting rasterization and RT in the same game. (There are still portability risks to publishers in pushing _too_ far ahead of what can be replicated on PC or the next Xbox, but those will be fairly manageable by early 2027, and of course true PS exclusives won't have to worry about that.) And the big risk to Sony isn't from going all out for a big generational leap from the PS5; it's from not doing so. Another generational improvement that feels incremental and meh is likely to hurt PlayStation: ray tracing is the best, and really the only, chance to deliver something like the staggering improvements of older console generations.
Sony wants to do something that benefit the whole gaming industry? like blocking 180 countries from playing their exclusive PSN games? LMAO
Oh get over it, trying too hard lol.
@@TakaChan569 well hopefully Sony will get over the fact that many people like me will never support them again.
@@n9ne cool, from the sounds of it you never were in the first place and are just looking for internet points...but hey you do you.
@@TakaChan569 What does me not liking certain business practices have to do with virtue signaling, or whatever you're trying to say?
Bro, get off your high horse
Oh great, more garbage AI instead of actually good, powerful hardware 😒
This is my main issue with the coming generations. AI seems to have contaminated every product tier. People don't understand that AI capabilities come at the cost of GPU die space, which translates into smaller traditional computational gains and higher prices due to process complexity. Cheaper solutions can handle typical GI, shadows, and reflections with minimal cost to traditional computational power and provide similar results for the vast majority of gamers. This is a solution in search of a problem that could be solved with simple smoke-and-mirrors tricks. But nooooooo, let's try to emulate real light behavior frame by frame on a single consumer unit, for the typical gamer who will just be annoyed that it costs him two-thirds of his performance while blurring everything.
Ai is the way…
😂
But all in all, it's easier for developers to let AI do the hard job instead of doing it themselves.
The reading comprehension of your viewers is insane. These are all not console tech; these are Radeon tech. You guys should be happy, since Radeon R&D is getting help. They have significantly fewer employees and resources compared to Nvidia. This is good news, since they will have more budget and resources to spare. Good thing PlayStation is stepping up for its hardware, forcing Radeon to innovate, or at least improve where it lacks. RDNA 4 will just be a sneak peek at what's to come. I am an Nvidia user, but hey, I am also a tech enthusiast.
Whose fault is that? AMD dropped the ball with wannabe prices. The gaming market used to be 50/50 back when AMD bought ATI.
A lot of viewers are unfortunately, for the most part, just tribal about PCMR.
Being tribal in general makes them immediately focus on specific points here and there without thinking about the big picture, and that's very sad.
The most interesting part of that presentation, even if you don't own a console or, I would dare say, hate consoles, is that it gives glimpses of future directions for AMD and Radeon, but also for games in general.
PCMR people like to forget that, like it or not, consoles are still there and will be for a while. Yes, they are not for everyone, but there is a demand for a simple box that sits in the living room and JUST WORKS.
And this means that game development has to take both PCs and consoles into account.
The fact that AMD and Sony unite on ML-based models because they can't fight alone, and that even Cerny admits rasterization has almost reached a dead end and RT/AI is the future, shows glimpses of what the future of gamedev may be made of.
These aren't Radeon tech either; a lot of Vulkan/DX12 mesh shader and primitive concepts came from the PlayStation 2 GS pipeline...
@@mimimimeow Ackchyually vibes
So are we thinking RDNA4 may be better at path tracing than the 40 series? If the 8800 XT is $600, and if FSR4 is good and implemented in a lot of games, and if I can sell my 4070 Ti Super for $600... I might actually swap to AMD just to help the progress of the underdog. Right now I have to use performance upscaling at minimum to run path tracing, and it's certainly not locked to 60. RDNA4 may be incredibly compelling. Radeon, it's your time to shine.
Fsr4 will take years to be in enough games to matter, if it's even any good
@Dempig It'll be alright... but I do agree. Unfortunately, DLSS was and still is the reason to have a 40 series.
@@Dempig It looks to me like AMD is aiming to minimize the performance impact of FSR. I can see a situation where FSR balanced is similar quality to DLSS performance and they both produce a similar frame rate. Example: you run 4K performance DLSS vs native 1080p. The internal resolution is the same, but native 1080p runs much faster because upscaling to 4K costs a fair bit of performance. You might get 4K 60 on both GPUs, but when you turn on quality upscaling the Nvidia GPU goes to 90fps and the AMD GPU goes to 100fps. I believe this is what they're referring to with the parallelization of the upscaling algorithm by breaking it into small parts. If it looks 80% as good but performs better, that is parity. FSR2 and FSR3 frame gen already achieve this to a small degree: slightly better performance. I think that's because the image quality is low enough that there has to be a performance win or it just looks terrible. Nvidia has the quality and AMD has the higher fps. Win-win.
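A back-of-the-envelope way to see that argument, as a minimal Python sketch: total frame time is modeled as render cost at the internal resolution plus a fixed upscale-pass cost, so a cheaper upscale pass converts directly into fps. All numbers are made up for illustration, not measurements of any real GPU or upscaler.

```python
# Toy frame-time model: frame = render(internal res) + upscale pass.
# The millisecond costs below are hypothetical, purely illustrative.

def fps(render_ms: float, upscale_ms: float) -> float:
    return 1000.0 / (render_ms + upscale_ms)

render_1080p_ms = 12.0  # hypothetical render cost at the internal resolution

# Same internal resolution, different upscale-pass costs:
print(f"native 1080p, no upscale: {fps(render_1080p_ms, 0.0):5.1f} fps")
print(f"4K upscale pass @ 3.0 ms: {fps(render_1080p_ms, 3.0):5.1f} fps")
print(f"4K upscale pass @ 1.5 ms: {fps(render_1080p_ms, 1.5):5.1f} fps")
# Halving the upscale pass here buys ~11% more fps at the same internal
# resolution — the "slightly lower quality, slightly higher fps" tradeoff.
```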
This is not for the 8800 XT… this is for after AMD moves away from RDNA…
ML graphics, in the case of current PSSR, are a double-edged sword.
It can look great in places but be totally lacking in others, in the very same game.
I wish people wouldn't look for artistic shortcuts that inevitably lead to results no one can improve on, because it's out of their hands.
Ray tracing, no matter how stunning it looks, is only a fraction of what you see during fast gameplay, but it costs too many resources, so JUST SCRAP the idea until it's possible with raw power in every situation, instead of relying on the estimations and guesswork of any sort of AI.
I was indeed watching you at 2x speed lol
Imagine being able to reduce your youtube consumption by half or double the amount of videos watched in the same time by watching at 2x speed…
2x speed gang!
I watch your videos at 1.75x speed.
It was annoying to have to switch speeds back and forth so I decided to watch the Sony video first. At 1.5x. The guy talks plenty fast.
Yeah, I remember my days doing Fluent CFD analysis with 8GB of RAM, so that makes sense. PSSR is gonna be much better on PS6.
I really hope AMD is planning to compete with an AI feature set that can match Nvidia, or even Intel for that matter. Despite Intel's recent failures, I think they are in a better position than AMD with their AI feature set.
I won't even consider buying AMD GPUs because I'm not giving up DLSS. Whenever I play on my 7840U handheld, I just wish I had DLSS. These AMD APUs are really good, but they would be amazing with proper upscaling.
Not to mention, a potential shift towards ARM on PC could have them losing market share to Nvidia in the mini PC and laptop markets. If that happens, I wouldn't be surprised if Sony switched to Nvidia and ARM for the PS7.
If Sony is right about the future of the industry going the machine learning way, and AMD instead spends their silicon on ray tracing, they're going to fall even further behind Nvidia and all its neural stuff.
I watch YT at 1.25x; any faster than that and too many people come across as unnatural in how they speak (enough people already talk quickly enough that normal speakers at 1.25x still sound normal).
I'm so fed up with all this AI bullshit, fake frames and fake resolution, just so game devs can keep getting away with dogshit optimisation, using upscalers and fake frames as a crutch for their shitty games.
We've made no real progress since the mid-2010s, and it's really depressing; we should be getting way better performance out of the hardware than we currently do.
I think you should get round glasses (circle style) to match your face, not the current ones :)
So basically, graphics are cooked because the only way for them to sell new hardware was to deliberately cripple rasterized graphics and then rely on AI techniques to fix it? Humanity sure has mastered the art of creating fake problems in order to sell solutions.
Like what are we even talking about anymore? It's always AI-generated this, machine learning that. Am I supposed to be impressed that the computer is generating textures on its own? From an artistic perspective, that seems really devoid of meaning.
Whenever AI is used by a company, at best it's blatant laziness that should be treated as slop, and at worst it's them looking for ways to fire the talent while the business people get promoted, which is incredibly deplorable and damaging to gaming as a whole. It's one thing if they figured out some way to have AI speed up the optimization process without sacrificing visuals somehow, but using AI to generate more realistic textures is not something that I find interesting because it gets rid of the artistic process.
I don't know, maybe this was inevitable, but if the future of gaming is fake frames and AI-generated graphics, I think I'm gonna stick with my 4070 for a very, very long time. Which I guess is a good thing, saves me lots of money.
Creating fake problems in order to sell solutions? If you are talking about COVID and climate change, then you are spot on.
The problem isn't tech advancement; maybe there are diminishing returns, but it's about budgeting. Cerny even said it in the Digital Foundry interview: PC hardware always brute-forces graphics, but in consoles it's about budget. Similar to cell phones, the high end gets everything tech-wise while mid to low makes sacrifices; consoles sit at mid-tier tech when new and low-tier by mid-life. Consoles will never be on par with PCs again after the 32-bit era.
I disagree.
DLSS and FG are the only reasons I can play at 4K/120 on my TV using a 4070 Ti Super and a 650W PSU.
If I wanted to do that rasterized, I would have needed a 4080 and a PSU upgrade, both of which cost more and would have generated more heat and noise inside my case.
You can't make that much sense in a single comment in a YT comment section.
People who get paid to develop such tech can't wrap their heads around this.
Lol a 4070 which relies heavily on ai upscaling? Lololol 😅😅😅😅
People tell me that's not Dana Carvey, and they are wrong.