Oh please, it needs to improve, not be cancelled. If you make your life decisions by reading headlines on your phone, you shouldn't be near tech.
And which headlines should people read, the back of cereal boxes? You may be more cautious about the news you read, but you're acting like news headlines don't have an impact on millions (if not billions) of people, regardless of the medium. Cancellation vs improvement is one thing, but AI or not, corporations should be held accountable for false headlines.
@icyveins-24 I understand where you're coming from
But Apple said it's still in beta
This is not True AI capable of self-learning: it’s generative and relies on reading thousands of things written by humans. So if it’s being fed garbage, it will spit out garbage.
Exactly, I don't understand where the AI overhype came from. It's just another kind of scam.
Tell us you know nothing about AI....
@commentorcommentor-s6l Yes! It's a term slapped on anything to sell it.
This is referred to as AI "hallucination" within the industry.
This is a known weakness of the current generation of generative AI tech: it simply can't reliably create headlines from content, so the job should be left to human curators, i.e. the media companies themselves.
@naomieyles210 The reason corporations keep overhyping AI at us and slowing down real technological progress is that they've found we're easily fooled.
Corporations' ability to replace employees with AI is far more important to them than the adverse impact on society.
Our willingness to accept the cheap facsimile of human behavior that AI presents is the natural guardrail to that. Oops, turns out we have zero standards, love lies, and are easy suckers for a perceived deal. Cattle?
Oh please, how many times have news outlets put out misleading headlines that are completely different from the story itself!!
Name one time the BBC has ever done that?
Anyone with common sense would know that AI is not completely accurate, so read the actual article, or turn the feature off if you don't like it. I find the feature useful even if it isn't always accurate. Or they could turn it off just for controversial things.
People read the garbage from the AI as fact. They don't use critical thinking for anything anyway, so if a tool gives them the information they won't check it, and they won't deactivate the feature by themselves.
Exactly this
The AI-generated summary of notifications on the iPhone Lock Screen does indeed have problems. Last week I had a summary notification saying "Liam Payne to undergo heart surgery". Thinking "wtf, surely he can't have risen from the dead", I opened it up to see it had incorrectly merged two news articles into one summary.
But no one says anything when the BBC purposely uses misleading headlines
Name one time they have, please
Getting your news from AI is like getting your news from Fox News: it's unreliable and made up. 😂
All news is, home. Wake up.
@brianhopson2072 Stop telling people to be woke
Same as CNN, CBS, BBC etc
It just appears on your phone
And Newsmax and GB News and TalkTV and Sky News Australia
Apple are probably asking ChatGPT how to respond 😂
Executives are probably asking ChatGPT for advice and strategies, and probably for what they'll do over the next few months. Maybe that's why their 2024 was pure shit for both hardware and software (except for one product: the M4 Mac mini).
FYI the summarizing feature isn't powered by OpenAI (ChatGPT), it's entirely an Apple Intelligence feature
Get the self-protecting BBC to assign some real bots to the task
Google AI calls on BBC for falsely reporting on Palestine 😂
Funny how the BBC made tons of content about it but never told people how the notification that got summarized was actually delivered. That's important.
Secondly, if you click on the summary, you see the full notification.
They're calling for a ban and don't even say how the original was delivered. Seems like this was made for kids. Even if it was summarized wrong, which we don't know, you can just click on it and see the original notification and news. It could only be misinformation if you're a kid who doesn't know how to use the feature.
@KelvinF. Surely if you click on the summary and the notification and story are correct, the issue lies with how it is summarised? And if you have to click to get a correct version, what's the point of even having the summary in the first place? It just adds unnecessary confusion?
By enabling this software you agree info may not be true. Read the policies!
If I'm not mistaken, isn't this feature in beta as well?
@iamhoracio Correct
Wow, it's almost like they didn't know the problems of AI from the beginning. Don't pretend Apple has the only problematic implementation of AI. It's an inherent characteristic of AI, regardless of what content filters attempt to solve. Datasets are biased, people are biased, and AI is the embodiment of just that. If we act like this is something novel, we're in for quite the decade.
AI learns from the information it takes in; if anything, AI is just a reflection of us.
It's crazy how people who don't like something want to ban it for all of us just because they don't like it
I wonder why they don't urge Meta, OpenAI or Google...
I've had nothing but good experiences with Apple Intelligence! One minor error doesn't mean it should get removed lmao, these people are insane
That’s how these people think though.
Nothing is perfect from the start. Just report the bugs to them so they can improve it in the future.
The error here isn’t the AI, it’s the absence of human fact-checking.
Sure, while this may apply to fact-checking large news agencies like the BBC, you can't expect someone to fact-check summarized messages from friends and family!
Thought the idea of AI was to mean less work for humans...
It's both.
Another interesting angle is autonomous EVs, where the argument is that, sure, their programming will have to decide to kill people from time to time and that seems weird, but the end result is much, much safer than human driving, and yet we have a problem with it. We think of that as the AI being fallible, and that's not okay. Yet we're okay driving on roads with other people, which is actually psycho. So when AI headlines are wrong once and we freak out, or AI is systematically biased, or whatever the particular legitimate complaint is, it's interesting to ask: yes, but how were humans doing? We do it on purpose, for malicious reasons; at least I think the AI can't be accused of having intentions.
These software models are totally artificial but will never be intelligent. We can't even answer the questions "What is intelligence?" and "How are creatures intelligent?" If we can't solve those, how can we create intelligence?
This is so ridiculous. How is this making news!
Because it says something the BBC never claimed. What if the AI told the world you were a rapist and you weren't?
You wouldn't respond?!
Bruh, it's literally one notification; it could very easily be faked
Right ? 😂
😂😂😂😂😂😂
The BBC should never be able to use the word News
Lmao, "scrap"? Who do they think they are? They think they can completely halt technology? Lmao
Typical: people like this are developing AI themselves, but since the competition made a mistake it should be banned 😂
So, if a car crashes, everyone stops driving?
Nice try, AI.
That seems to be the logic here..
No, but if a particular type of vehicle is proven to have a flaw that is the cause of the crashes, it is withdrawn while they address it. I don't think your analogy quite fits the scenario.
You mean like, if the handbrake fails while it's on a hill? Or do you actually mean "when a human crashes a car..."
You missed the point. Like it's literally explained in the video, you can't predict what the AI will do, but you can predict one thing: the AI will hallucinate more than 50% of the time...
Does the BBC sack its reporters when they make 1 mistake? That is the standard they want, right?
Nice try ai
BBC urged to axe its headlines after false features.
Where is this headline? Also, a call for removal is just nonsense.
I think this video has a fake headline. There is no Apple fake headline; Apple's AI hallucinated news when asked for a summary, according to the BBC.
Found the apple glazer
Was not here 1 month ago, no one has died before 🤔
@KennyakaTI Found the moron that thinks because he doesn't like something it should be banned. Typical UK peasant mind.
@KennyakaTI You are the stupid one here 😂😂😂😂
Classic AI hallucination
AI is basically shite. It's the most pointless thing ever
Did anyone write a paper about AI models performing genuine logical reasoning? You brought this on yourself. Looks like that 10 bits per second showed up at the end.
It’s still in beta 🙄
AI is not going anywhere, and that's for multiple reasons I won't fully dive into here. The main point I want to make is that when we come up with a good solution to a problem, and there are cons to that solution, we don't scrap it; we improve the environment around it. A perfect example of this is cars. When cars first came around, they caused a lot of harm. People died in crashes, pedestrians were run over, and accidents disrupted daily life. Death rates increased significantly with the introduction of cars. Did we get rid of them? No. Instead, we adapted. We added seatbelts, zebra crossings, traffic lights, and legal requirements for manufacturers. We improved roads and designed infrastructure to fit cars into society while reducing their risks.
I feel like right now, people are scared of AI to the point where a single issue leads to calls to scrap it entirely. One mistake, and it's: "See? AI is dangerous. Get rid of it." I'll acknowledge that AI, especially advanced systems like AGI or super AGI, is inherently more dangerous than cars, because once it reaches a certain level, mistakes could be catastrophic. But we're not at that level yet, probably not for 4-8 years. Take a current example: AI is a great summarizing tool, but sometimes it gets things wrong. In certain contexts, like news reporting, those mistakes can have serious consequences. Does that mean we throw the whole idea out? No. We figure out how to address the issues, whether that's improving accuracy, adding oversight, or creating safeguards. I don't have the solution, but my point is that scrapping AI isn't the answer.
The same logic applies to Tesla and self-driving cars. One video of a Tesla making a mistake, and suddenly it's: "This was a terrible idea. Get rid of it." But self-driving cars aren't going anywhere, just like AI isn't going anywhere. Once a solution solves a major problem and has massive profit potential, it's inevitable. You don't have a say in whether AI or self-driving cars stick around; they're here to stay. AI is transformative. It can revolutionize industries like healthcare, transport, gaming, development, everything. The reality is, if we don't develop it, someone else will. If they don't, someone else will. It's not a matter of if AI reaches its full potential, but when. And whoever perfects it first will have an unmatched advantage: economically, militarily, and socially. Wars will change. Healthcare will change. Everything changes with AI.
The best thing we can do is accept AI's inevitability and focus on making it as safe and effective as possible. The conversation shouldn't be about stopping AI; it should be about addressing its problems and figuring out how to work around them. Thinking we can stop AI is naive. It's not happening.
Sorry, but it's the inevitable truth
I could live without AI. I haven’t installed the latest iOS yet. Not interested in AI.
Rushed half baked apple pie.
How much do you guys bet these journalists were paid by competitors to go at Apple? Just a theory
Didn't Apple say this was a beta, and there's small print that says information may not always be accurate?
Pretty much all of the Generative AIs say this.
And btw Apple Intelligence is not being rolled out to the EU due to regulatory uncertainty.
AI makes a mistake shock!
AI, as fascinating as the tech is, is the lamest technology, basically being touted as replacing work. People will get stupider.
Grok is the only AI I use & trust!
Apple Intelligence is so good I would miss it if they took it down
Fake news will be at an all-time high
Best news of the week
Requesting the removal of an entire software feature from the ecosystem is like demanding the closure of a BBC venue every time they make a mistake.
Oh my god, and you never checked the source. That's very bad on the part of the BBC; I thought you were journalists. Cut, copy, paste is not journalism.
I do not agree with this, as leeway should be given for tech development, not a complete ban. Half-baked tech products like Apple's current AI systems should be thoroughly checked for privacy, performance, encryption and other issues by independent cybersecurity firms designated by the government, to validate the legal aspects of all their digital products. However, I think there should be stricter legal oversight, checking that all companies adhere to all legal limits, including business law, corporate law, AI and computing law, digital law and especially privacy law, by IT administration panels in various places. If the servers are in the USA, oversight should be demanded from whatever government is present there.
Apple wants the headlines and the news to be real and trusted, and the artificial intelligence to work without mistakes. They want to be profitable.
We should only have the BBC tell us our news lol
😆😂
So we need an AI to fact-check the AI news, problem solved.
🙃🙃🙃
Toss artificial intelligence in the rubbish bin
Just don't use it lmfao. Why would you take it away from users that do use it? Makes no sense.
Toss you in the rubbish bin
That "AI-powered summary' just showed how people usually read the news, living humans too, not only artificial intelligence.
Remember when Apple fans be like *Apple implements new tech once it's perfected* 😂
You answered your own comment: it is new tech
@edicarlos4704 How does Mr. Cook's rooster taste?
The first plane was also not perfect; to be fair, they're still not perfect
“Be like”???
I seem to recall from some article that Apple normally waits for tech to mature before adopting it, but their investors pushed them to release AI features.
Yeaaaaaaahhh for sure!!! because only Apple devices use AI!
Where tf did you find this guy?
Omg I can't even listen to him, his voice omg 😮‍💨
0:44 is this guy ok? 😢
Like you really care
Train a model if you want to do AI, or just don't release it, or, like most apps, don't release a beta app until it's properly tested
lol you can’t ask ONE tech company to remove this kind of feature. Maybe ALL tech companies
Maybe set a constraint so that it doesn't summarize from news apps
This has already been pointed out by techtubers lmfao
"AI" is a marketing term for what it really is "machine learning"
Please folks, I'll keep imploring people to research and understand what machine learning is and exactly how it works. Please 🙏. If you understand the current state of the art in AI, you could save a life from the sad greed that has gripped these tech companies advertising machine learning as some kind of renaissance.
Until the core infrastructure of the state of the art in machine learning is completely changed, it is nothing better than a pretty witty prediction machine [which may look like magic] until its not.
Please folks 🙏
They shouldn't release half-arsed features
Programmers in the EU, get your visa and come to America.
A bad workman blames his tools.
So the BBC has never had to retract a false story? Got it
It is clearly labelled artificial, so what's all the fuss! As it's artificial, don't use it if you want perfection!
Let's be realistic here: that screenshot of the summary had 22, yes TWENTY TWO, notifications from the BBC News app behind it.
One of two things is happening: either somebody is not looking at their phone for an absurdly long time, to be able to rack up 22 news notifications, OR BBC News are abusing notifications and sending out excessive amounts, which has actually happened before with them abusing the "BREAKING NEWS" notification.
Whatever the reason, the AI is formulating the summary from all 22 of those notifications; if it had just been summarising the latest 3 or 5, this wouldn't have happened (a rough sketch of that idea is below).
Also, let's not forget these media outlets have historically misled people with headlines to get them to engage with the articles. It has been suggested that as many as 70-80% of people ONLY read the headlines, which often leads them to come up with their own story that isn't entirely accurate.
The BBC need to get off their high horse here, acting like they're perfect.
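Purely as a hypothetical sketch of that "latest 3 or 5" idea, in Swift: none of these names are real Apple APIs, and nobody outside Apple knows how the summariser is actually fed, so this only shows what capping the input might look like.

```swift
// Hypothetical sketch only: cap how many stacked notifications feed one summary.
// `notifications` is assumed to be ordered newest-first; the model call Apple
// actually uses is not public, so this just builds the text it would be given.
func summaryInput(from notifications: [String], keepingLatest limit: Int = 3) -> String {
    // Join only the most recent `limit` notification texts for summarisation.
    notifications.prefix(limit).joined(separator: "\n")
}
```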
What a ridiculous idea to axe it, because it gave a false summary! I’ve heard of a slow news day but c’mon guys.
Even if 50% of the headlines AI spits out are false, it's still more accurate than most media today
It's a teething problem with a new feature. Just report the issue, and lodge a complaint if it isn't resolved in a reasonable amount of time. This is an overreaction.
In much the same way that Apple Intelligence can summarise emails, I think Apple needs to create a developer API for third-party development teams that gives Apple Intelligence full access to the content within the application, so it can read and understand the article and then provide a synopsis. But either way, you shouldn't be reading only the headlines, let alone have a new form of AI summarise the article and make your decision based on that...
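For what it's worth, here's a rough sketch of what such a third-party API could look like in Swift. These names are entirely hypothetical and don't exist in Apple's SDKs; it just illustrates the idea of handing the full article body to the system summariser instead of letting it guess from headlines.

```swift
import Foundation

// Hypothetical protocol an app could adopt to give the system summarizer
// the full article text instead of letting it guess from headlines alone.
// None of these types exist in Apple's SDKs; this is purely illustrative.
protocol ArticleSummarizing {
    func summarize(articleBody: String, maxSentences: Int) async throws -> String
}

struct Article {
    let headline: String
    let body: String
}

// The news app submits the whole article and displays whatever synopsis
// the system hands back, rather than the system compressing the headline.
func synopsis(for article: Article, using summarizer: some ArticleSummarizing) async throws -> String {
    try await summarizer.summarize(articleBody: article.body, maxSentences: 2)
}
```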
The BBC is so consistent at posting fake news and misleading propaganda headlines, that it confused and gaslit the Generative AI. 😂😂😂😂
No, they shouldn't remove the tech, people should just not be lazy and read the articles.
This whole video dumb lol 😂 it’s not even fully developed yet lol even the text and email summaries don’t be accurate half the time lol choose another story, this not it lol
don't give Apple such a hard time, because the SVP obviously is so cool with his electric guitar in the keynote event, and well, he has got to collect his $200M paycheck too
That's not how it works though. You improve something, not just scrap it 😂😂
You improve upon something that works properly but could work better. Clearly, Apple's AI does not work at all. It's not meant to create false headlines. It needs to be fixed first.
I was just about to say all of this. They should fix it, not scrap it. Once it’s fixed, then improvements could be made where they’re needed. Plus, it clearly lets you know that it is still in beta so it could’ve been reported as a concern, and a huge one that needs urgent fixing.
It's all just a big gimmick for them to sell more, retain relatability, and show face in the new "age of AI" or whatever
They're not claiming they want Apple to abandon it altogether though. They're talking about not releasing it to the general public until it's more reliable. If this were a vehicle or whatever that could do physical harm, it would be taken off the market until it was made safe. I'm not saying that is necessary in this case, just saying the idea that you leave potentially harmful things out there while you make them better isn't really true.
The user has the responsibility to ensure that any information they acquire is accurate. You cannot read a summary of information and then remain complacent.
I really do think this feature is unnecessary and can have negative impacts.
I'm so happy with my Samsung Galaxy. Can't understand how people are still putting up with Apple crap and emptying their wallets for them
Apple explicitly states that the accuracy of the AI model is not guaranteed, and it may generate false or inaccurate results. Users have the option to choose not to use the AI. In fact, your device does not run Apple Intelligence by default; you must actively enable it. Therefore, I believe there is no obligation on Apple to remove or disable their AI features at this time.
Cellphone sales slowed down, suddenly AI "is necessary". They're lying, give us faster processors instead.
The feature is still in beta; "it obviously doesn't work" is disingenuous. It obviously doesn't work perfectly, but it's not a finished product.
Wow, your organization has a super big ego to expect one of the most successful companies in the world to scrap tens of billions of dollars of work after one mistake. That's ridiculous. I googled and saw a number of articles the BBC has made retractions for. Maybe you should close those divisions of your news service... See how ridiculous this story is? Maybe a polite letter to an Apple executive pointing out their error would better serve the public.
The question to ask is: how did Apple's AI come up with this erroneous answer? Was it summarized by ChatGPT?
A little rich coming from the BBC, known for misinformation and lies...
Bit rich coming from robdotreynolds, actually misinforming and telling lies
@Will-bh7kg
Why are the BBC refusing to play the Christmas song, "it will be freezing this Christmas"??? 🐑
@@pessi6185How can they refuse to play something that doesn’t exist.
@@davecooper3238
The video of it is on YouTube, keep up 👍🏿
Remove BBC instead
😂😂
I agree
Given that generative AI is well known to "hallucinate" (make up stuff), allowing it to summarize news is asking for trouble. Though maybe "buyer beware" is the better approach rather than banning tech that doesn't work all that well yet. We should all be somewhat skeptical about what we read.
😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂
Human professionals before billion dollar AI businesses. Or we are all in peril.
And human professionals are infallible? Honestly, I can’t believe how stupid some of the comments here are.
@kevinmcfarlane2752 Infallibility is not the point; the point is that nothing is infallible. Especially for peasants with stupid comments.
This is a tool, why would you complain about what is essentially a tool that was effectively being used without supervision?
Lol no, they just need to fix it. Gawd, non-tech people are like ultra-religious zealots
For those who don't know, you CAN go into Settings and turn notification summarization on or off on a per-app basis, so if you don't want your news summarized because of errors like this, you can take action and stop it. Plus, for beta testers like me, there is a way to report errors, so I recommend more beta testers do that. I don't think the feature needs to be removed, but action needs to be taken to improve it, or, if people don't wish to use it, they should be made aware they can toggle it off on a per-app basis.
Yes, and to emphasize: this feature is actually still in beta even after being released. Apple has nothing to do with people's ignorance, and it's sad that journalists would be so misinformed as to rely on it.
People do know, right, that Apple gives you the option to turn it off?
It's a feature, shut up.
Seems facts are not profitable enough these days
Haha - BBC got a taste of fake news, karma! lol
Artificial Intelligence makes artificial news. It is artificial. AI must think being artificial is a good thing.
I have been waiting for AI to come out
The BBC Verify law bot does not compute; the chances of influencing Apple's AI progress are practically nil
You're complaining for no reason
False headline. It may sound off topic, but I chanced on this cool translator that does everything and more of what a translator should; the name is Immersive Translate, and one thing that can really help is its new feature, which lets you create a custom AI expert for translating anything. Thank me later, it's gold.
I turned these summaries off the moment my device updated. 😂 NOPE.
Luigi Mangione is my hero
A murderer is your hero? Says a lot about you
@Weaselszone I think this act told the 1% that the governments and the law may look the other way, but the people will not tolerate it.
AI sounds like it's copying The Sun
ffs...
I urge them to axe the BBC for doing the same, but by humans