Supporting AM4 for the full DDR4 lifecycle is a laudable goal and will reduce e-waste. That is a stunning commitment.
Interested in this, because every notebook I have ever owned has been Intel. Four of them at this moment. It wasn't planned. It just happened that way. But every desktop going back to the 486 clones has been AMD, including my current 5800X which is an amazing CPU. And that was planned. AMD has absolutely owned desktop for me. Now using a Radeon GPU as well. I like the performance I'm getting and it's almost time for a new notebook. Change is definitely on the table.
I kept trying to get a laptop with AMD but the other parts of the system weren't what I wanted, usually having bad displays.
Tech has always had problems with poorly functioning markets. The hype around AI and Intel's Meteor Lake going MIA in Q4 last year have created an opportunity for market disruption.
A CPU is just one laptop component, which is why that question on battery life was unanswerable; battery capacity, for instance, matters too.
My arbitrary-decision shocker was that Dell's new model dropped the backlit keyboard when you selected an AMD CPU. Picking another model was necessary to get 16GB RAM and a 1TB SSD, avoid E-cores, and keep the backlighting my brother-in-law wanted.
Thanks for asking the right questions, even though he left them unanswered or talked around them.
This would play better with consumers if the NPU could offload the CPU by running background apps. Running Spotify and YouTube while upscaling audio would be significant value.
Too bad it's not suited for that. It's really just a specific slice of a GPU's function that runs at even lower power.
The NPU is for AI tasks, not normal tasks. It'd be like saying "Intel Quick Sync would play better with consumers if it could offload the CPU by running background apps." Not the point. Not remotely what it's about.
Right now it's like Quick Sync in a world where no one encodes video yet, so no one uses it. Or at least, that's what they're advertising it as. It does nothing currently.
I missed this question: what is the determining factor for when AMD will launch the X3D CPUs? Is it a technical obstacle, like needing sufficiently well-binned dies that perform well enough at a lower voltage? Is it about countering whatever Intel launches? Is it something else? That would have been illuminating.
X3D is another manufacturing step. First you have to make the Zen 5 chips and know they work well, then attach another chip on top and test that.
The base design can always come earlier because it's made first.
Server X3D chips are probably the better binned ones with low voltage requirements. On a desktop you have lots of available power and only one CCD, not 16.
@@davidgunther8428 Sure, but on the other hand I wouldn't think this is a complicated extra step for AMD. They just add some memory, and they have done this twice before. I don't mean that the technology of stacking that memory is simple, but for the 'base' CPU chiplet it should not be technically complex, though I imagine the firmware might need to be adapted. AMD already knows exactly how high the CPU can clock at a given voltage and how warm it will get at that voltage. It should be fairly straightforward for AMD to make those X3D CPUs once the regular versions are done. That makes me suspect they have another reason: either to launch something new and get media attention and hype when Intel launches their new CPUs (I lean towards this being the reason), or binning (yes, the servers get the best CPUs, then Threadripper, and only then do the best leftovers go to the most profitable Ryzen models).
Though they do indeed bin the strongest dies for server CPUs, great server CPUs are not necessarily great desktop CPUs. For a server you want the lowest possible power at a reasonable clock frequency; for a desktop you want the highest possible clock frequency and you don't care as much about power. But there is no doubt that for Ryzen we get the leftovers: the best dies in one way (very low voltage at a reasonable clock frequency) or another (very high clock frequency at a relatively moderate voltage) go to Epyc and Threadripper.
I never liked the 12-core Ryzen; having 6 cores per chiplet just isn't great because of the chiplet-hopping (having to copy data from the cache on one chiplet to the other, or worse, fetch it from RAM, when a task moves between chiplets), especially on Windows, which has worse thread scheduling than Linux, the BSDs or macOS. For me it is either the 8-core or the 16-core with the stacked cache. I like the stacked cache because I regularly play video games, including RTS games, which benefit a lot from that extra cache. I also prefer the more reasonable power draw of the X3D CPUs (not clocking as high, but also not getting as hot, and the fan not getting as noisy). In my opinion AMD chose poorly by letting the regular Zen 4 CPUs boost until the temperature hits 95 degrees; that is a bad user experience, and you really don't notice the roughly 200 MHz lower clock frequency, so why all that heat and noise?
@@peterjansen4826 Yeah, the 95C temp target makes people nervous. I set my 7700X to an 88W power target and the boost is the same, with all-core clocks only about 300 MHz lower. I got that back using Curve Optimizer. Now it scores 20k in CB23 and doesn't throttle, with a basic air cooler.
@@davidgunther8428 Nice! Did you set that in the UEFI or in AMD's Windows program (Ryzen Master)? I use Linux myself, so I would have to set it in the UEFI; I would probably just enable eco mode if I had a 7700X or 7950X, keeping the high single-threaded boost but with a more reasonable wattage under all-core load. You and I are 'nerds' who can figure it out ourselves; sadly that is not an option for the regular user, who will just get annoyed by the noise. Many users even have a locked motherboard BIOS (OEMs), so they can't change it at all. I guess AMD wanted to win the benchmarks against Intel (reviews) and clocked it too high out of the box. My point of view: set the default a bit lower and let the user overclock if they want.
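A minimal sketch for checking the result of such a change on Linux, assuming the in-tree k10temp driver is loaded (this is not AMD tooling, just the same hwmon data lm-sensors reads); run it under an all-core load to confirm a lower power target keeps the CPU comfortably under the 95 °C limit:

```python
# Minimal sketch: read AMD CPU temperatures from the k10temp hwmon driver
# on Linux (the same data lm-sensors shows). Paths are standard sysfs hwmon.
from pathlib import Path

def k10temp_readings():
    """Yield (label, degrees_celsius) for every k10temp sensor found."""
    for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
        if (hwmon / "name").read_text().strip() != "k10temp":
            continue
        for temp_input in sorted(hwmon.glob("temp*_input")):
            label_file = hwmon / temp_input.name.replace("_input", "_label")
            label = label_file.read_text().strip() if label_file.exists() else temp_input.name
            yield label, int(temp_input.read_text()) / 1000.0

if __name__ == "__main__":
    for label, celsius in k10temp_readings():
        print(f"{label}: {celsius:.1f} C")
```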
What a fantastic interview! 😊
If they are trying to get the hardware to do everything, they need to get their TensorFlow support upstream into the mainline codebase.
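As a rough illustration of what that support looks like from the user side, here is a minimal sketch, assuming a ROCm-enabled TensorFlow build (for example the tensorflow-rocm wheel) is installed; it only checks whether TensorFlow can see the AMD GPU at all:

```python
# Minimal sketch: check whether the installed TensorFlow build exposes an
# accelerator. With a ROCm-enabled build, AMD GPUs show up as "GPU" devices.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print("TensorFlow sees these accelerators:", [gpu.name for gpu in gpus])
else:
    print("No GPU visible to TensorFlow; work will run on the CPU.")
```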
Bruh
APUs are something to watch
Wow! A time traveller from 2016 in the flesh! Welcome to the future!
Marketing departments played their version of "the floor is lava" at Computex...
"If you don't put one AI in every other sentence - you will burn!" 😈
Kinda nice to see execs in suits. Getting real tired of polo shirts or jeans.
It's not really a suit though, he's not got a tie on. It's more of a jacket and trousers. ;) My guess is it's colder there than it looks because the air conditioning is probably full blast - otherwise it'd be odd to wear a t-shirt under a thick long sleeved shirt like that.
@@jonevansauthor Piss off, didn't ask for the pedantry 😊
The staredown was intense.
AHAHAAHH...he almost broke the screen!
this guy is the master of fluff and dodging questions
Odds on, MS will screw up W11 24H2, but for AMD the opportunity as first mover with Hawk and Strix Point against Meteor Lake and Raptor Lake justifies the risk.
Consumers don't understand the 4-digit model numbers with suffixes, and Intel missing the main back-to-school/holiday buying season in '23 gives an opening. Intel have expedited LNL to respond to X Elite and Strix, but another debacle would be very damaging.
I can see the Ryzen AI 9 HX scheme giving OEM sellers a simple narrative, even if AI has become an overused buzzword.
It seems Sierra Forest is on Intel 3 and challenges the 5nm Zen 4c Bergamo, with advantages on high-utilisation shared client cloud systems, so these may prove the last product lines with a significant process advantage.
Mark and Will both just sound like old grumpy men who hate lawn dwellers. Probably the same type of people who hated all new things.
Two ways this can go: bad battery life compared to Intel and Qualcomm, or comparable if not better than the other two, with AMD sandbagging its own numbers and letting the reviewers sing its praises.
It won't have great battery life compared to Intel's LNL in idle or low-workload scenarios, which means if you mostly browse the web or send emails, your battery life won't be as great.
However, I do expect it to outperform the other two under load and on raw performance.
@@lugaidster
And you know that because....?
Let's not jump to conclusions, shall we? We'll have third party reviews soon enough.
Battery life is largely a choice made by the laptop manufacturer: battery capacity, the CPU power config (15-45W) and the other components.
Strix Point is a full 12c/24t part, while LNL is just a 4+4c/8t one. AMD have Kraken coming for lower-power, lower-performance markets.
Intel chose to emphasise tests near idle because that favours them, while testing under load favoured AMD (even Phoenix vs Meteor Lake).
Hawk Point gaming handhelds perform much better than Meteor Lake; LNL demos show it has caught up, but it hasn't launched yet.
@@1idd0kun Just taking a guess. Asus' Zen 5 laptop has a rather large battery and a 16" form factor. IMO this says middling battery life at low power but better under load. This may be a tough cycle for AMD.
Absolutely hate that AMD marketing changed the names of the CPUs and motherboards just to confuse Intel customers; they should be fired.
Make a better APU for Steam Deck 2.
An “NPU” is just a bunch of 8-bit integer and 16-bit float multipliers in a simpler, more compact, less flexible form than a GPU.
There is nothing an NPU can do that a GPU can't. It just might use less power doing it. But it lacks all the flexibility you get from a GPU.
Yep, but that NPU is purposely there to log, by default, everything you do on your PC. That's what the feds want from these companies. The power-savings angle is just BS because they can do that without an NPU. Most people don't want this junk in their systems.
There is nothing a GPU can do that a CPU can't; we would use CPUs for video rendering if power efficiency were irrelevant.
@@relucentsandman6447
The GPU can do what the 40-50 TOPS NPU does, faster and with higher precision. The GPU can be used for training, and an 8-bit NPU can't.
The CPU can do anything a GPU does, but it will be slower because it lacks the parallelism.
@@pweddy1 yes that is what I said
That's the point. It uses less power doing it.
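To make the thread above concrete, here is a tiny illustrative sketch in plain NumPy (no real NPU involved, and the quantization scales are made-up example values): the operation an NPU hard-wires is just low-precision multiply-accumulate, the same math a GPU or CPU can run, only cheaper per operation in dedicated 8-bit hardware.

```python
# Illustrative sketch only: the low-precision multiply-accumulate an NPU
# hard-wires, emulated in NumPy. A GPU or CPU can run the same math;
# dedicated int8 hardware just spends less energy per operation.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are quantized activations and weights (int8, as on most NPUs).
activations = rng.integers(-128, 127, size=(4, 64), dtype=np.int8)
weights = rng.integers(-128, 127, size=(64, 16), dtype=np.int8)

# Accumulate in int32 to avoid overflow, exactly as int8 MAC arrays do.
acc = activations.astype(np.int32) @ weights.astype(np.int32)

# Hypothetical quantization scales; dequantize and compare with a float reference run.
scale_a, scale_w = 0.02, 0.01
dequantized = acc * (scale_a * scale_w)
reference = (activations * scale_a) @ (weights * scale_w)

print("max difference vs float reference:", np.abs(dequantized - reference).max())
```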
pointless interview, I swear he didn't answer a single question.
Sounds like an investor pitch.
"general manager" and i don't think he understand about tech
Battery life is the question AMD dodged all week. Can't be comparable.
@@larryskelly6928 Especially trying to imply that better battery life is somehow something you wouldn't even want.
This is the worst interview ever. Dude talks about absolutely nothing for 30 minutes and the interviewer doesn't interrupt him at all and just lets him ramble off his worthless marketing garbage.
I think the lesson here is blacklist David McAfee from talking on camera. To get a good interview out of him you'd need both a very experienced interviewer and a lot more prep work.
And you watched it. Congrats, you played yourself.
The simple answer to Mark's question "what comes next" is that AMD's disaggregated miniaturization of Intel's E5 EP/MP and E7 Brickland MP, lifted and placed on Feldman's fabric, continues past AMD's original 5-year plan. AMD mobile at Dragon moves from uniprocessor to 2P, and 3D stacking of L3 on TSMC's production capability is now offered on desktop and mobile, although 3D mobile sits on the shelf because few will pony up for the price premium; that is a moot consideration on desktop, but mobile remains pricey. AMD mobile now moves to improve x86 big.LITTLE with Zen + C cores, eyeing what there is to improve on from the Arm and Intel implementations.
NPU? Well, vector SIMD plus FPGA programmability, whether pre- or post-processing, plus DSPs for image and sound, and a NIC, because the NPU needs data in, or what can be inferred? In this space Arm is likely ahead in mobile phones, QCOM knows mobile phones, Apple knows mobile phones, and Intel is ahead in server-accelerated applied 'computer science' experiments; in this race they are all roughly starting from the same blocks on "if you don't invent it, copy it."
Nvidia is basically proximate to the x86 CPU space from their complementary graphics space, likely capable of a leapfrog on the PC side while continuing to carve out and fortify application-segment niche strongholds on the server and applications-engineering, technical and scientific compute side. AMD remains a follower on platform, eyeing what can be systematically targeted and improved for production on a superior fabrication process node, although this 5-year plan has also basically been achieved as others catch up from behind. AMD benefited from leading on the process front over the past 7 years while meticulously lifting from others what could be lifted and miniaturized, and now it also has to enter the applied-science race. mb
AMD says third generation of AI. Well, Phoenix? Hawk? That's two nascent attempts. Ah, CDNA; well, Phoenix and Hawk register as initial-to-intermediate attempts, and the CDNA implementations are proprietary to their commercial platform developers / business-of-compute operators. But still, AMD is off the starting blocks with everyone else on which applications and platforms take advantage of said AI, and on whether end users find utility value in those whole-platform applications on new and evolving use models. mb
I talk about the rest of the world all the time on my channel, and about specific Nvidia back-generation graphics solutions; ROW demand is why Volta and Quadro Pascal are still sought-after products for entry-level learning. mb
Ah, Zen 4c 'dense' as in a slim instruction-set match, for specific application instruction-set optimization, aka calls; got it, subject Siena. mb
@@mikebruzzone9570 Mike, Intel falsely claimed Meteor Lake as the first CPU with a neural engine, so the 300-series naming is marketing to back the 40 TOPS narrative. A lot of people believe outrageous lies; the Phoenix and Hawk Point to Strix Point progression is more spin.
OEMs want a narrative to push their new models.
Naw. They didn't give power efficiency figures, not a grasp, an idea, a ballpark, not even a range. He just avoided the whole question, deflecting to: oh, the previous gen was competitive and great; yeah, it's up to the OEMs to decide and make the blend; can't say because it depends on the apps, etc.
With Qualcomm, AMD already knows how good they are, and it's nowhere near efficient enough. You can tell how much he stuttered, constantly, with so much justification; bet he was pissing his pants. It's like a champ at a press conference before the fight. When have you seen a champ say he's gonna win the match and destroy his opponent while his knees tremble? That guy was sweating.
Anyway, on power efficiency it will definitely be better than last gen, but not as spectacular as the competition. Intel is on a better node and with an architecture that was built for pure power efficiency. When I say built, I mean a near mad-scientist push to save every watt imaginable.
Intel: I want to slap Apple in the face, and this is my 1st swing!
*Qualcomm enters the arena and gets slapped*
Intel: Move, you cheap wannabe Apple knockoff, get out of my freaking way!
Qualcomm: I'm 400% faster and more efficient.
Intel: The f#$% you ain't; I'm preparing to slap Apple, not a cheap knockoff.
Your English still needs a bit of work before you become intelligible. But keep at it, you're almost there!
We found the fanboy.
@mokahless Well, bashing me because you don't like the truth is not my problem. He really avoided the answer. If someone is expecting a node advantage like AMD had previously, it's the opposite this time. And if someone is expecting Qualcomm and Intel not to reach near Apple-like battery life, they are wrong. For thin-and-lights, battery life matters more, as all x86 PC laptops have been attacked in that regard compared to Apple.
The other important metric is TOPS, where Intel has an advantage overall when combining NPU, GPU, and CPU.
Intel has always been a giant, and their bad decisions at the top level changed when Pat Gelsinger entered the arena. It's been 3 years so far since then, and Intel has been steering the boat in the right direction.
Lunar Lake was designed from the ground up, not by scaling down power consumption from a hungry performance core the way mobile SKUs were done previously. LNL was built to perform best at low wattage and then scale up with more power. Heck, they improved the E-core performance so much that the P-cores are turned off most of the time and the E-cores are used instead; the P-cores only kick in when performance is needed.
Qualcomm, on the other hand, enjoys the advantage of Arm being power efficient.
Like I said, AMD in laptops will be a blend of performance and battery life. I expect it to be faster than Qualcomm but not more power efficient. x86 is an energy-hungry beast, hence why it's very impressive that Intel managed to be efficient enough to be called an Arm competitor on battery life. AMD doesn't have that; AMD's weakest point has always been R&D, and most of AMD's power-efficiency gains have come from TSMC node shrinks.
AMD will be great for a mix of gaming and performance, especially Strix Halo.
But on battery life, compared to the other two, they will not be.
@1idd0kun Not really; read up on how efficient Lunar Lake is. If someone fails to accept how things are panning out, it's not my problem. AMD's chips this gen are more performant than ultra power-efficient, and all the reviews when they're out will say the same: great performance, decent battery life, but if your concern is battery life, you're better off with Qualcomm or Intel.
@@MrPtheMan
Nice crystal ball you must have to predict the future like that.
Bullshit aside, stop jumping to conclusions and wait for reviews.
I really can't express enough how much I hate NPUs and AI. I really wish they would invest in things I want in place of that useless AI tech.
Well, what would that be? You do realize that your wants do not affect the technology path of any company unless you are spending billions of dollars. Start your own chip company and address YOUR needs. Problem solved 😂😂😂😂😂
@@Manicmick3069 What the consumer wants is what you should really mean. I bet over 90 percent of people do not want this on their systems.
@@robertlawrence9000 Have you taken a poll to back up that unproven statistic? Are you willing to look past your confirmation bias to see what the average person truly thinks about AI tech?
Like so many people who parrot, "AI sucks," you don't suggest an alternative or a new innovation to supplant it.
It's useless if you don't offer any solutions. You're shouting into the wind.
What should big tech companies invest in instead?
@@notsyzagts7967 It doesn't take much when I look at chats on the subject: an overwhelming number of people are against it. Oh, and there are a lot of ways they can improve processor performance without having "AI" logging all of your information.
Phones have had NPUs for years. There are some really useful functions supported by them, like cutting out portraits in photos, for example. Microsoft already does shady stuff in Windows without AI anyway.
Desktop PCs don't make much money anymore.
I wanna see the evidence of this
You're wrong. You obviously don't do any kind of simulation, 3D, or machine learning workloads.
AMD feels pressured. This marketing-BS guy is just talking in circles. People care about battery life and graphics performance, not TOPS.
Pressured? Phoenix and Hawk Point are AMD's best-selling mobile chips to date, and Strix is looking to be even more successful. It doesn't even matter if the Intel and Qualcomm stuff is somewhat more efficient; AMD is still gonna sell every single chip they make.
Maybe the battery life isn't good enough for AMD to talk about right now.
Yeah, they can't compete with Qualcomm in battery life.
I'm so tired of all these AI buzzwords; it's irrelevant to what users need.
But it's the FUTURE! /s
ask him how to uninstall mcafee
Worthless interview...
Nobody buys AMD laptops
No... well, I have 2.
Yep for sure. 20% of laptop sales as of 2023 is definitely "nobody."
Overheating laptop junk no thanks
@@mokahless He bought one, and he's a nobody, so he said "Nobody buys AMD laptops".
@@potatorigs2155 I mean, any laptop with a powerful x86 CPU can, and in many cases will, get hot if you do a lot of CPU-intensive tasks, which causes the fans to ramp up, which then leads to shorter battery life. Saying it's specific to AMD or Intel isn't correct.