These people in the studio clearly have no idea what they are talking about; these "AI" PCs are just a way to market a new model. Quote: "it's a great machine", etc. This PC has no chance of running a large language model (at the moment). This is just a classic example of people who are consumed by the market industry, riding a trend, and making "classified" hypotheses from it. It is sad to see people in a CNBC studio being so short-sighted. Love CNBC for its work, just get some fresh air into these studios regarding AI/technology topics.
Meanwhile every iPhone and Mac has had neural processing for years, and no one else has anything as fast on any shipping consumer machines. Not MS, not Intel, not Qualcomm.
Every Nvidia GPU is better than any marketing from Apple. In fact, we've had them for years in our PCs, with the same CUDA cores you can find in those supercomputers. That's hundreds of millions of PCs. Apple lost long ago; they are not innovative.
@@milandean Apple's Neural Engine (ANE) is the marketing name for a group of specialized cores functioning as a neural processing unit (NPU) dedicated to the acceleration of artificial intelligence operations and machine learning tasks. They are part of system-on-a-chip (SoC) designs specified by Apple and fabricated by TSMC.
I've heard the unified memory is good for AI inference, but not for training. Even a maxed-out Mac Pro with the M2 Ultra can't train an AI model. I think it needs other GPU features (floating-point units, aka cores, and CUDA).
I like the progression but I still think it's a marketing gimmick at this point and it's still at least 2 years out till it becomes meaningful and useful.
they will be for specifically heavy AI use; this means instead of having a GPU (graphics processing unit) they will have an LPU or similar computing hardware, I assume at least
Depends what is meant by "AI" _(as in whether it includes LLMs, machine learning, and neural networks dependent on large or small amounts of data, as per deep learning having such a tendency towards that size ratio)_ when it is said that only Nvidia is making money. Perhaps it is intended as directly rather than indirectly _(money-making associated with AI)._

Microsoft _(as per Google and so forth)_, as email providers, have long been capable of using machine learning to assist junk filters. When people like _(as in "are fond of")_ that, they buy Outlook from Microsoft, and for that they buy Microsoft Windows _(and perhaps a suite of computers with it, for an office or school or library or whatever)_, and then Microsoft makes money because they provide a worthwhile product to do it. Perhaps it is not considered _(compared to Nvidia)_ as big a direct earning _(whatever direct versus indirect means)_ associated with AI as such.

The junk filter is long overlooked as a 'learning' computing service. You kind of have to force yourself to notice this when looking at open-source mail server software and assisting it with, say, an LLM, maybe with pgvector and pg_graphql (Rust) for Postgres as a temporary workaround for a small-office scenario.

My comment has no hate in it and I do no harm. I am not appalled or afraid, boasting or envying or complaining... Just saying. Psalm 23: giving thanks and praise to the Lord, and peace and love. Also, I'd say Matthew 6.
@02:38 - What did this "expert investor" just say? AI "PC" is something we had not thought of!? Wait until they find out AMD makes laptop PC chips and is going to release GPUs, CPUs, and energy-efficient "AI" laptops later this year. LOL @ CNBC "experts"
Apple's M1 and later already have an NPU, which is why they are faster at running small quantized versions of big models, but I don't see that helping any basic users. Which popular software uses the NPU? NONE... everything nowadays is cloud-based. Until we see a large number of applications adopting the NPU, there is no need for such chips, thus no reason to pay an extra $500. But if you are a newbie developer then maybe go for it, because even pros use cloud farms to train and host their models.
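The "small quantized versions of big models" mentioned above are what make local NPUs usable at all. A minimal sketch of the core idea, symmetric int8 weight quantization, is below; this is illustrative only, and real runtimes (Core ML, llama.cpp, ONNX Runtime) use more elaborate schemes.

```python
# Minimal sketch of symmetric int8 weight quantization -- the trick that lets
# shrunken versions of big models fit in laptop memory. Illustrative only.

def quantize(weights: list[float]) -> tuple[list[int], float]:
    """Map float weights to int8 range [-127, 127] plus one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from the int8 codes."""
    return [x * scale for x in q]

w = [0.12, -0.48, 0.33, 1.0, -0.07]
q, s = quantize(w)
restored = dequantize(q, s)

# Storage drops 4x (1 byte vs 4 per weight); values come back slightly rounded.
print(q)                                                      # [15, -61, 42, 127, -9]
print(max(abs(a - b) for a, b in zip(w, restored)) < 0.01)    # True
```

The trade-off in the comment thread (speed and size versus quality) comes exactly from that rounding error accumulated over billions of weights.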
Quickly shut off the internet and use the "AI" feature. If it's running locally and has the language model downloaded, it should work and give you results on stuff within your ecosystem; then it's more useful. This is how I set up my PrivateGPT. However, if it only works with an internet connection, then it's not running locally, it's in the cloud, meaning all your data is 👋
Apple has had AI chips in their devices for many years… think of the story once they ship the GPT-like features later this year, no need for extra hardware.
Copilot being on screen means questions/searches take you to Bing search results and ad revenue. Microsoft, lagging for years, got a leg up against Google.
Fair point, but this requires people to buy in. Who's going to buy this? I certainly won't. Unless it provides something substantially great, most folks won't bother to use it either.
Office-based work: drafting presentations, memos, and emails, doing research, data analysis, etc. If it allows you to be more productive and have more time for value-added, revenue-generating work, then firms will be interested in this.
You're typing superfluous nonsense. The current co-piloting software delivers zilch as an add-on for productivity; it is merely a tool for fun right now. @@GK-qc5ry MSFT aren't on the path to figuring this out either; in contrast, AAPL only releases stuff they have confidence in. The Vision Pro is actually a pretty decent step in the right direction. Furthermore, you mention things like PPT. Well, guess what: PPT, Excel, et al. are products which shouldn't exist in today's world, nor the next one. They are legacy products. The winner will be the one that can re-imagine the entire experience, just like Steve Jobs did with the original Macintosh.
I feel like AI is just a buzzword at this point. What's an AI PC? They're saying it does AI tasks lol ok?
An NPU, or Neural Processing Unit, is a specialized hardware chip designed to accelerate artificial intelligence (AI) tasks more efficiently than general-purpose CPUs (Central Processing Units) and GPUs (Graphics Processing Units). Unlike CPUs and GPUs that are versatile and can handle a wide range of computing tasks, NPUs are optimized for executing machine learning algorithms and processing AI-related workloads. This optimization allows NPUs to perform AI tasks faster and more efficiently, reducing the computational load on the CPU and GPU and leading to quicker and more energy-efficient processing of AI applications such as voice recognition, image processing, and natural language understanding.
NPUs are becoming increasingly important as AI and machine learning become more integrated into various technologies, from smartphones and personal computers to servers and cloud computing infrastructure. By offloading AI tasks to an NPU, devices can achieve better performance in AI applications, enhancing user experiences with features like real-time language translation, facial recognition, and augmented reality, all while consuming less power.
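The offloading idea described above can be sketched in a few lines. This is a hypothetical, illustrative scheduler, not a real OS API; the device names, TOPS figures, and wattages are invented for the example, but the routing logic (sustained AI work goes to the NPU because it wins on TOPS per watt) is the point.

```python
# Hypothetical sketch of how a runtime might route AI workloads to an NPU.
# All names and numbers are illustrative assumptions, not a real API.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    tops: float     # peak trillions of int8 ops per second (assumed)
    watts: float    # rough power draw under load (assumed)

    def efficiency(self) -> float:
        """TOPS per watt -- the metric NPUs are designed to win."""
        return self.tops / self.watts

CPU = Device("cpu", tops=1.0, watts=30.0)
GPU = Device("gpu", tops=30.0, watts=120.0)
NPU = Device("npu", tops=10.0, watts=5.0)

def pick_device(workload: str, sustained: bool) -> Device:
    """Route a workload: sustained background AI (webcam blur, live captions)
    goes to the NPU for battery life; short heavy bursts may use the GPU."""
    if workload == "ai" and sustained:
        return NPU
    if workload == "ai":
        return max((CPU, GPU, NPU), key=lambda d: d.tops)
    return CPU  # general-purpose code stays on the CPU

print(pick_device("ai", sustained=True).name)    # npu
print(NPU.efficiency() > GPU.efficiency())       # True
```

The GPU still wins on raw throughput in this sketch; the NPU wins on efficiency, which is why laptops use it for always-on features.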
Unless this translates into a truly better product experience [which it won't right now], it is just marketing hogwash. @@RodgerE2472
It's not, but it's going to be a tool that has to be tamed in order to be useful: for example, chatting with your PDF, DOC, and Excel files, which could be useful, or generating new information using your files. It's not a traditional search; it can do a lot more, and it's going to be more useful in the future.
It is branding, nothing more. Add AI to your branding and watch your stock price go up.
AI is already in every phone and PC... garbage take
Isn't Copilot running on the cloud? How does NPU help here?
While it's true that many AI processes run on remote servers in the cloud, having an NPU on a local device can still offer benefits. It can speed up AI-related tasks on the device itself, reduce latency by not having to communicate with the cloud, and potentially improve privacy and security by processing sensitive data locally rather than sending it to the cloud. The specific impact would depend on the type of tasks and how the AI is integrated into the system's operations. If the AI tasks are designed to benefit from local, on-device processing, then an NPU can make those processes more efficient.
I do not expect this to help Copilot. There will likely be local apps, though, that utilize the NPU, like an app that learns your habits and proactively helps with your writing or tells you it's bedtime. And then sends everything it learned about you to Microsoft so they can monetize your data some more.
the rumor says the local Copilot needs a 40 TOPS NPU, which is equivalent to an entry-class RTX card. Intel's current NPUs are not that powerful
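For a sense of scale on that 40 TOPS figure, here is a back-of-envelope sketch. It assumes the common rule of thumb of roughly 2 operations per model parameter per generated token; the model size is an assumption, and real throughput is far lower because decoding is usually memory-bandwidth-bound rather than compute-bound.

```python
# Back-of-envelope: what a "40 TOPS" NPU could sustain on compute alone.
# Assumes ~2 ops per parameter per token (rule of thumb); illustrative only.

def compute_bound_tokens_per_sec(tops: float, params_billion: float) -> float:
    ops_per_token = 2 * params_billion * 1e9  # multiply-accumulates per token
    return tops * 1e12 / ops_per_token

# A 3B-parameter model (Phi-2 class) on a 40 TOPS NPU, compute-only ceiling:
print(round(compute_bound_tokens_per_sec(40, 3)))   # 6667
```

The compute ceiling is huge; in practice memory bandwidth, not TOPS, is what caps local token rates, which is why the raw TOPS number in marketing is a weak predictor of chatbot speed.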
they can run small language models like Phi-2 on your local hardware for simple queries.
Exactly my thoughts! And LLMs on these low-end, very basic NPUs? This is the typical media and bogus marketing they fall for.
If Carrie did not see this coming, she should not have a job as an analyst.
lol legit.
I know some great AI that will do her job better, faster and for less than what she makes.
Fish out of water. She clearly has no idea what she's talking about. "Nobody's been making any money off of AI", or "We haven't considered this aspect of it" [on adding an extra TPU to the system].
It's a mystery how these people get paid to manage money and to appear on TV as experts when they have no idea what they're talking about
CNBC has become a clown show. Occasionally, they have real AI and tech analysts on.
I'll be waiting for the Linux AI PC.
Incompatible drivers and Linus Torvalds flipping the bird on Nvidia as per usual
Linux are the laziest uninnovative company ever.
@@fancy3774 It's a kernel, baby.
That's not far off, open source NPUs are a thing.
@@fancy3774 lol
That new hardware is completely unnecessary. This is a marketing gimmick.
The clap on was AI lol
A gimmick for you, but it's not a gimmick for enterprise/Fortune 500 companies. Every company will have to order brand-new computers, justified by the 10-20% increase in productivity
that's a very bad take
You don't understand this video or you didn't watch it.
Ooh, look at smarty pants here. How does one connect with your general sort of intelligence? Uh, have you ever tried to mow your lawn with a pair of scissors, or did you think the lawn mower was "completely unnecessary"?
None of these will be able to run any large language models locally. What is so AI about these PCs?
That's not true. I have run many open-source LLMs locally on my machines, both Mac and PC; some run better than others.
@@TheGuillotineKing Can you tell me what kind of LLMs you ran using a Surface? I mean, you can barely run toned-down versions of LLMs using current top-of-the-line retail graphics cards like the 4090. We are not talking about top-of-the-line 4090 cards in these Surface PCs.
@@lppoqql Not on a Surface, but on a PC laptop and two Mac laptops, one with Intel and the other an M1. I get between 5 and 15 tokens per second. LM Studio is a good start.
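That 5-15 tokens/sec range matches a simple memory-bandwidth estimate: during decoding, every weight is streamed from memory once per token, so tokens/sec is roughly bandwidth divided by model size. The bandwidth and model-size figures below are illustrative assumptions, not measurements.

```python
# Rough bandwidth-bound estimate of local LLM decode speed.
# tokens/sec ~= memory bandwidth / model size on disk. Figures are assumptions.

def bandwidth_bound_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    return bandwidth_gb_s / model_gb

# A 7B model quantized to 4 bits is ~3.5 GB of weights.
model_gb = 3.5
for name, bw in [("laptop DDR4", 40), ("Apple M1", 68), ("RTX 4090", 1008)]:
    rate = bandwidth_bound_tokens_per_sec(bw, model_gb)
    print(f"{name}: ~{rate:.0f} tokens/sec")
```

Under these assumptions a plain laptop lands around 11 tokens/sec and an M1 around 19, right in the range the commenter reports, while a 4090's huge memory bandwidth is what makes it so much faster.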
Because it's all a new language, it's called woke.
Is running language models the only AI task? Facial recognition and tiktok filters use AI. They’ll be far better graphics in the future.
When you ask Copilot a question, the answer comes from MS servers. If it is taking too long, it is because the MS server is busy; it has nothing to do with your PC hardware. There are AIs that can run off your PC, but Copilot is not one of them.
Boomers don't know that and don't care as long as it has 'AI' in the name it's the FuTUrE !
AI won't run on local hardware anyway. It'll always be cloud-based.
That's not entirely true. Copilot will be rolled out to Windows, and Copilot for Windows will have processes that happen on your machine. The chip improves those processes.
Not true, there are many local LLMs that I can run on my old GTX 1080 @@user-kg1od9es5d
@@cheezey3295 So? That's not what I was referencing. Learn to read before quoting me.
So does the AI work offline???
That's ridiculous. How would it share your data, then?
@@millirabbit4331😂
How would this chip make copilot faster? Copilot wraps Chatgpt and runs in the cloud. Unless they’re somehow getting gpt to run on device this looks like a marketing play
it's marketing, bro. MSFT being MSFT, they aren't masters of the holistic product experience.
It functions as an accelerator, meaning your data doesn't constantly need to travel to MS servers for basic tasks. It's like having a mini ChatGPT within your computer; even when you're offline, it remains operational. And it's entirely driven by software, allowing for frequent updates.
The reporters are not tech enthusiasts; they don't really understand how specs work at a fundamental level, which is why you get that impression. The NPUs are more for things like on-device photo editing, improvements to GPU acceleration, etc. In general, just speed boosts here and there, and helping optimization.
Who is 'offline' these days? Nobody. The world will become more connected; Starlink will make sure of that: gigabit internet even when you're on a plane. @@carlosap78
What if it can run offline. That'd be tight
Every PC builder is launching AI PCs by virtue of using the latest AMD and Intel CPUs with neural engines. This is not a news story.
Intel has been talking about AI NPUs in PCs for 6 months or more, but Microsoft releases their AI-based PC and suddenly it's an amazing new product.
Give it a year before buying. Gotta make sure it actually is worth the time and money
So the ai is running from a chip on my PC and they want me to pay a monthly subscription for it?
open-source AI exists, so this whole thing is a scam for people too lazy to Google it
Yes, just like Word, Photoshop, and other similar programs run on your PC, and you pay for them :)
That chip is useless without the software. Developing and updating AI LLM models costs a lot of money. That's why you're paying for the subscription.
@@tc8557 Who is this "you"? I don't pay for any of these subscriptions. The only subscription I pay for is streaming, and they constantly produce new content. My subscription gives me access to these new episodes once they are released.
Software running entirely on my system is different. If I ask an AI artist to draw me a platypus eating the metaphysical concept of time, I shouldn't need any server connection to do that at all. I should just be able to buy the software, and it will do its best to run on my system. Perhaps you might say that they deserve a monthly fee for their work updating it, and perhaps they could charge money for these updates, but it is unlikely anyone would buy them. Each new version of generative AI has been less useful than the last. The newest version of ChatGPT's only change seems to be a longer list of requests that it will refuse to fulfill, like the example above with the time-eating platypus. The earliest generative AI programs would try their best to draw that, but the new stuff just says no, it's impossible.
It may only be some of the processing load running locally; the "parent" LLM model and associated source code and heavy-artillery processing power will be server-side in the cloud.
M$ should have brought Clippy back but this time powered by Generative AI so it can not only help with your Office docs but also be your friend in case you don't have any.
Do you mean I could have 2 tabs open at the same time on this new PC?😮
Surprised Microsoft are sticking with Intel. AMD are offering over 2x the TOPS performance.
That lady had no idea what she's on about, Josh saved the segment.
So something that Apple has been doing for years now with just their regular computers and phones?
The disturbing thing that might happen is when your AI tool starts taking up more bandwidth than you.
you are now the small stream that will be stepped on :)
Like the MMX was to the Pentium. You had to have it for the cool factor.
Companies : okay what do we have now
Employees: products and services
Companies : "AI " , just add what ever you have
Employees: okay we have water
Companies: YES
AI Bob and Vegene when
Can't believe Microsoft beat both Intel and AMD. AMD was talking about this some months back!!
Is this something new? I heard Huawei's Kirin processors have had an embedded NPU for a long time already, maybe 6 or 7 years. Now, for this Microsoft PC, the NPU is not even embedded in the Intel processor. That means there would be significant chip-to-chip overhead.
Neural Engine anyone? Every Apple Silicon M-series chip has had it for how many years now?
early days. in a couple of years we may be amazed what pcs can do.
Apple has had AI chips for years.
It's beginning
Future PCs:
“I notice you were 5 minutes late coming back from your lunch break, informing your boss now”
“I notice you’re trying to illegally download some movies, contacting the police now”
“I see you’re trying to bypass YouTube’s ads with an adblocker, removing adblocker now”
I wouldn't do anything weird on these PCs. They will probably be able to send your keystrokes, browsing history, and file history back to a server through a terms-of-service clause.
the one thing that'll be faster is the mined data being transferred back to HQ.
I like that the money wranglers can't see the application areas but the engineers can; that information asymmetry is going to make engineers richer than everyone.
Interesting timing with the Apple lawsuit launching.
Apple or Microsoft needs to have the ability to ask your computer to find a file, rename, and generally organize your digital life on-device using AI and it will be a huge showstopper. Not sure if that’s what they’re saying here, biggest issue is privacy so I could see Apple coming in a bit later and packaging it up into something actually useful.
They don't seem to know exactly what those chips really do. You can use Nvidia GPUs to train neural networks because they can do matrix multiplication way faster than CPUs, but as an end user you don't need to train the network, except for the feedback.
Awesome!
Microsoft Copilot helped me learn and better understand the work I do.
PCs are back in vogue. Oh Yeah!
Totally unnecessary to advertise this way. Apple has had the Neural Engine in their CPU for ages at this point.
This is not news: this is an advertisement!
"Her" has finally arrived.
2:10 really? lmao why is this person even there ahahaha
Lets wait for the Mac AI
Good move, but the operating system itself needs to be an AI that gets personalized to the individual, in part by training itself on their habits
Gimmick. These NPUs have been in phones for years.
Apple's so-called secretive Ajax project, i.e. Siri 2.0, will be an edge-AI application. In other words, no round trip to a server in the cloud; hence all of the neural processing space allocated on the M-series chips. Apple is a company that thinks ahead and delivers slowly, at least when it's at its best.
Most AI is done on the server nowadays
Pretty much the idea I came up with and posted in my comments for Apple to do a couple months back. Only difference is I said Apple should sell different personalized updates that make the user experience more personal.
Apple has every right to use my idea too.
especially because I had posted my comment under a video about Apple
Mac has that since the M1, but Copilot is great. Apple is behind.
My Fortigate has NPU on it since way before, Oh wait .... LOL
I find myself constantly having to fact check or correct Co-Pilot. It's often faster and easier to do my own research.
AI fixed some of their spaghetti code 😂
The AI capabilities in the data center and to a large degree the large language models, are already commoditized.
Nah fam be keeping on trucking on windows 10 till the aliens show up or windows gets sued for anti competitive stuff.
If the PC is running on Windows, is it AI??😂
Its return of the talking paperclip
HEY AI 🤖 Computer 💻 ! Do my Homework 📚! 😂
I launch AI dumps everyday in my state of the art toilet bowl. Beat that Microsoft.
These people in the studio clearly have no idea what they are talking about; these "AI PCs" are just a way to market a new model. Quote: "it's a great machine," etc. This PC has no chance of running a large language model (at the moment). This is just a classic example of people who are consumed by the marketing industry, riding a trend, and making "qualified" hypotheses from it. It is sad to see people in a CNBC studio being so short-sighted. Love CNBC for its work, just get some fresh air into these studios regarding AI/technology topics.
Meanwhile every iPhone and Mac has had neural processing for years and no one else has anything as fast on any shipping consumer machines. Not MS, not Intel, not Qualcomm
Every Nvidia GPU is better than any marketing from Apple. In fact, we have had them for years in our PCs, with the same CUDA cores you can find in those supercomputers. That's hundreds of millions of PCs. Apple lost long ago; they are not innovative.
I luv MS NotePad.
Shills, CNBC are yes men for Microsoft. Bought and paid for.
but AI will save market...
"Zoom has opted to add artificial intelligence features into its premium video-calling plans at no additional cost."
Apple has had NPUs in its products for years now with Apple silicon
If I recall correctly, they have had Neural Engines in their laptops for a couple of years now, but that's different than a Neural Processing Unit (NPU).
@@milandean Apple's Neural Engine (ANE) is the marketing name for a group of specialized cores functioning as a neural processing unit (NPU) dedicated to the acceleration of artificial intelligence operations and machine learning tasks. They are part of system-on-a-chip (SoC) designs specified by Apple and fabricated by TSMC.
The market doesn't price in Apple's chip designs.
Windows and Intel are playing catch up on this front.
I've heard the unified memory is good for AI inference, but not for training. Even a maxed-out Mac Pro with an M2 Ultra cannot train an AI model. I think training needs other GPU features (floating-point units, aka cores, and CUDA).
I like the progression but I still think it's a marketing gimmick at this point and it's still at least 2 years out till it becomes meaningful and useful.
This explains nothing. How are these different than a normal PC? Architecturally?
They will be for specifically heavy AI use; this means that instead of having only a GPU (graphics processing unit) they will have an LPU or similar computing hardware, I assume at least
Mr Clippy resurrected!
Depends what is meant by "AI" _(as in if it includes LLM, Machine-Learning, Neural-Networks dependent of large amounts or small amounts of data as per deep-learning having such a tendency towards that size ratio)_ when it is said that only NVidia is making money. Perhaps it is intended as directly instead of indirectly _(money making associated with AI)._ Microsoft _(as per Google and so forth)_ as electronic-mail providers have long been capable of using machine-learning to assist junk filters. When people like _(as in "are fond of")_ that, in Microsoft they buy Outlook, and for that they buy Microsoft Windows _(and perhaps a suite of computers with it for an office or school or library or whatever)_ and then Microsoft makes money because they provide a worthwhile product to do it. Perhaps it is not considered _(compared to NVidia)_ as big a direct earning _(whatever direct versus indirect means)_ associated with AI as such. Junk filter is long overlooked as a 'learning' of computing services. You kind of have to force yourself to notice this when looking at open-source mail server softwares and assisting them with say an LLM, maybe with pgvector and pg_graphql (RUST) for postgres as a temporary workaround for a small office scenario.
My comment has no hate in it and I do no harm. I am not appalled or afraid, boasting or envying or complaining... Just saying. Psalms23: Giving thanks and praise to the Lord and peace and love. Also, I'd say Matthew6.
I hope the robots stay on our side
And now we really know why MS is restricting next years W11 to new computers AGAIN.....Hello Linux!
@02:38 - What did this "expert investor" just say? AI "PC" is something we had not thought of!? Wait until they find out AMD makes laptop PC chips and is going to release GPUs, CPUs, and energy-efficient "AI" laptops later this year. LOL @ CNBC "Experts"
The Apple M1 and later series already have an NPU, which is why they are faster at running small quantized versions of big models, but I don't see that helping out any basic users. Which popular software uses the NPU? NONE... everything nowadays is cloud-based. Until we see widespread software adoption of NPUs there is no need for such chips, thus no reason to pay an extra $500. But if you are a newbie developer then maybe go for it, because even pros use cloud farms to train and host their models.
Finally Clippy got some brain?
Its just Bing AI folks. Bing. thats it. BING BING BING playing Bing Bong with ChatGPT. No lambo.
Actually iPhone did it first with neural engine which will be activated later on classic apple
neural engine ≠ neural processor
@@fancy3774 What do you mean? They're both AI accelerators.
@@arelyx_ trillions of operations per second, tops. It's insane to think that they made that thing before 2020
Quickly shut off the internet and use the "AI" feature. If it's running locally and has the language model downloaded, it should still work and give you results on stuff within your ecosystem; then it's more useful. This is how I set up my PrivateGPT. However, if it only works with an internet connection, then it's not running locally, it's in the cloud, meaning all your data is 👋
It's AT not AI.
OpenAI made AT, Google made AT, not AI.
Apple has had AI chips in their devices for many years… think of the story once they ship the GPT-like features later this year, no need for extra hardware.
Copilot being on screen means questions/searches take you to bing search results and ad revenue. Microsoft lagging for years got a leg up against Google.
yes that's true, and It's a gold mine
Fair point, but this requires people to buy in. Who's going to buy this? I certainly won't. Unless it provides something substantially great, most folks won't bother to use it either.
Office based work, drafting presentations, drafting memos, emails, doing research, data analysis etc. If it allows you to be more productive and have more time for value added revenue generating work then firms will be interested in this.
You're typing superfluous nonsense. The current co-piloting software doesn't deliver anything as an add-on for productivity; it is merely a tool for fun right now. @@GK-qc5ry MSFT aren't on the path to figuring this out either. In contrast, AAPL only releases stuff they have confidence in. The Vision Pro is actually a pretty decent step in the right direction.
Furthermore, you mention things like PPT. Well, guess what: PPT, Excel et al. are products which shouldn't exist in today's world nor the next one. They are legacy products. The winner will be the one that can re-imagine the entire experience, just like Steve Jobs did with the original Macintosh.
this screams "Operating Systems as a Service" and I hate it
BLEH
OPEN SOURCE OR BUST
CATTLE will follow this hype - they always do
choo-choo. Hype train!
When they say intel we know that microsoft is lagging behind apple on chips
Except Apple hasn't really leveraged their superior chips in the AI space. Microsoft is miles ahead.
Who needs a powerful chip anymore when an AI can help you do 80% of your job in minutes? Apple is stuck in the past lmao.