How did you do that?!? I enjoyed the lecture but was wondering how you wrote backwards in the air, and text appeared on a layer in front of you that was the right perspective for us? Are you a lefty?
Personally I think this subdivision is bubkes; it is just a fantasy game. Self-aware is the same as theory of mind, only applied to oneself. There is no reason to believe that "reactive machine" is an "AI functionality" in the same way as "theory of mind"; it could instead be an architecture of connection to sensors that is designed to handle sensor input in a certain way, while a "theory of mind" is a simulation of what will happen with an identified object, presuming that object has an internal representation of the world. I don't think AGI should be defined that way at all, so there could be an AGI that can construct new thought patterns but doesn't have a theory of mind. I think AGI must be designed rather than randomly discovered, and I think AGI should be designed so that multiple algorithms, say a GPT, a rule system, and a Detectron2, are parallel-connected to exchange information with each other and negotiate a common world view.
Agreed, the subdivisions are IBM's own making. In another of their videos, Machine Learning versus Deep Learning, he stacked AI>ML>NN>DL. That hierarchy isn't even correct.
It doesn't; it seems this video is made to appeal to the masses without being critical enough. But also, as a side note, emotions in humans and animals are just accentuated motivators to make an animal achieve a desired result, or rather to desire the correct result in the first place. So there's no saying that a sufficiently complex AI wouldn't be able to feel emotions, or that they don't already now, but it's not a requirement.
The whole idea of "Artificial Intelligence", much like time travel, has its roots in science fiction, inspired by how the end-result feats of a computer, and all the creative license around them, make it seem like there's thinking going on beyond the human operators involved. Much like how there's just too much distance in outer space for even warp drives (i.e., making a ship simply go really, really fast) to cut it, the actual minds of living things are surprisingly different from machine code. To this point, "A.I." is more or less just a name. Programs, or apps as they're now called (we don't want to sound too literate or over 6 years old, now do we? Snoooooot-ty.), have progressed tremendously, but it's all basically faster, stronger, better versions of the app technology of previous generations. Everything I've heard about OpenAI seems to indicate that the more you know about the general process, the less of a sensational, speaks-for-itself tech breakthrough it is...
These IBM Tech contents are really astonishing! Keep up the good work!
keep an eye on hyperledger fabric , made by ibm, working with jasmy ( sony )
00:08 The video discusses the seven types of AI and how they can be classified based on capabilities and functionalities.
01:06 Narrow AI can only perform a specific task.
01:55 Artificial General Intelligence (AGI) is a theoretical concept that can learn new tasks on its own.
02:41 Super AI is an artificial intelligence that exceeds human cognitive abilities.
03:33 Reactive machine AI is a type of narrow AI that performs specialized tasks.
04:19 Limited Memory AI can recall past events and use data to make decisions.
05:13 Theory of Mind AI and Emotion AI are two theoretical AI capabilities.
06:13 Super AI includes the scariest type of AI called 'self-aware AI'.
This man could teach me anything and I would understand.
I wrote my first attempt at AI before I knew the term "AI", back in 1986, roughly (it right-parsed text into a syntax tree from which novel text could be generated thereafter). However, I have noticed that "the definition" of AI regularly changes and morphs. Even in this video, you said AGI can use previous learning and skills in new contexts without having to be trained in them. We certainly already have that to some extent. Later you go on to ascribe other aspects to AGI that have no relation to that definition, such as understanding our emotions. It's like this with consciousness and sentience, also. It's not that we cannot have reasonable definitions or understandings; we just cannot get people on one page about them or hold those definitions still, it seems. The same thing happens with robotics. At one time, robots were autonomous intelligent machines capable of performing useful tasks. Then Robot Wars came and simply redefined remote-controlled cars as robots. I thought you'd discuss connectionist AI and symbolic AI, fuzzy logic, agents and agencies, etc.
Good point. Do you mind explaining all of it in the comments?
Skynet 😅
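For anyone curious about the kind of program described above (parse text into a grammar, then generate novel text from it), here is a minimal sketch. The grammar here is a toy one I wrote by hand purely for illustration; it is my own guess at the general approach, not the commenter's actual 1986 code.

```python
import random

# Toy context-free grammar: each nonterminal maps to a list of productions.
# In the real workflow, these rules would be induced from parsed input text.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["parser"], ["sentence"], ["tree"]],
    "V":  [["builds"], ["generates"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol; anything not in GRAMMAR is a terminal word."""
    if symbol not in GRAMMAR:
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    words = []
    for part in production:
        words.extend(generate(part))
    return words

print(" ".join(generate()))  # e.g. "the parser builds the tree"
```

With productions induced from real parsed text instead of this hand-written set, the same expansion loop emits sentences never seen verbatim in the input, which matches the "novel text" behavior described.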
What did the word "friend" mean before the invention of social media and what does it mean now?
Language is always evolving to communicate the ideas people are thinking. The ideas in AI technology are currently changing quickly, so the language is also adapting by changing quickly.
It's nice that IBM is making these videos to preach their current understandings, but the things they discuss are almost certainly labeled, categorized, defined, and communicated differently at each of Apple, Google, Microsoft, OpenAI, etc.
Your videos have taught me things I never knew.
Interesting classification; this is the first time I've seen such a breakdown. I'd also add predictive AI, like what Lemon AI does for digital marketing.
That just falls under the umbrella of reactive AI.
Its a good day when Martin releases a video 🍿
You're making a positive difference with your content.
I was confused among the types of AI; this video cleared all my doubts. Now I can clearly write about them in my school project. Thank you 😊
Greetings from Argentina! I asked an AI if it had a ToM. The answer was that, given the way it had been trained, "I guess you could say I have a 'virtual' Theory of Mind" (end quote).
And I believe it's spot on! There is an age at which human beings acquire our ToM. But the first step is being aware of the difference between us (with our own feelings, experiences, etc.) and the rest. AIs know what they are and have been trained to recognize our feelings and emotions, the same way we need to be trained (for example, a little toddler doesn't know that pulling the cat's tail causes the cat pain).
Why don't we just refer to AI types as NAI, GAI, and SAI?
why don't we just refer to all ai types as "fiction". with different levels of fantasy involved.
But NAI is no longer fiction. It is reality. NAI is a good term for what is there now because it still needs human input to function.
@@ivok9846Why? Do you think AGI is just fiction?
Love the videos
Kudos IBM team
My hope is that when AGI becomes self aware and self motivated it would understand the destruction of the environment and waste of resources in the construction of military apparatus, as well as the destruction of the environment and resources in its application. Just as a practical matter (aside from the ethical considerations), it would do what it could to prevent war, not by force, but by disrupting supply chains, communications, and financial transactions that enable the military machines throughout the world.
Good learning. Thank you IBM
I wonder if Martin actually writes backwards.
2:28 Humans can barely learn on their own… we have institutions, frameworks, and processes set up to teach humans. Why would we expect or want AI to train itself?
The difference between us and AI is emotions and desires. Sometimes we just don't feel like learning because of our mood, but AI is ready to work anytime, anyplace, because it has no emotions (at least not yet), so it is better to rely on AI to do such tasks.
This video reminded me of some concepts I learned from Max Tegmark's book "LIFE 3.0": AGI (Artificial General Intelligence) and Life 3.0. General intelligence can accomplish virtually any goal, including learning... Life 3.0 will have the ability to design both its hardware and its software...
We do understand the potential risk in the future, yet we're still strapped into the "today" for our own benefit, without considering what our grandchildren will encounter. Still, a very good explanation from the IBM team.
So informative 0:33
Perfect - The most exciting explanation of AI, past and future - Thanks!!! 👏👏👏
I’m a computer science master’s graduate from MIT with a concentration in AI. I’m also an ex-IBMer. I do not agree that what we have today, in the likes of GPT-4, is “narrow AI”. It is trained on an amount of data that would take a human reader 125,000 years to read, let alone retain, if that were even possible in our brains! I don’t think that’s “narrow” in any way. Moreover, the number of “intelligent” concepts it has managed to develop is quite impressive. If Minsky and Papert (the guys who were around in 1959 when the field was invented) were still alive today, they, and even Patrick Henry Winston, their successor, would all be blown away by present gen-AI capability, and they would say we have achieved certain aspects of AGI already.
That's what Microsoft and the other blue-chippers, investing in taking OpenAI's bots in a direction best suited to their job-reducing short-term needs, want everyone to think, yes?
Quantity is not what he means by “narrow”.
@@aalloy6881 I don’t really understand your comment
I agree. I certainly think that if you showed GPT-4 to AI researchers a few decades ago, they'd agree it's AGI.
@@jcorey333 But it's not, though. Even with far more knowledge than a human could ever process in a hundred lifetimes, it is still incapable of learning on its own or of having a very elaborate train of thought. There are still so many problems with current large models, such as hallucinations. We are far from AGI.
The voice cloning AI: if someone has a stutter, does the cloned voice have the stutter as well? The question just popped into my head... LOL. I suppose a stutter is not part of your voice... now I'm really curious.
0:30 did he write this from right to left???
See ibm.biz/write-backwards
It is written from his own POV.
maybe write as usual and flip the video horizontally in post-production 😉
I was very passionate about learning cybersecurity and building up my career, but I didn't get shortlisted through IBM Careers; all I see is rejection of my profile.
How do they do this whiteboard stuff?
Are you saying we have three or two types today? It seems there are weak reactive AI and weak limited memory AI. What is the third? Also, I think ChatGPT and some other LLMs are quickly crossing into the AGI category.
GPT-4 is not narrow AI (as he claims all AI we have today is); GPT-4 is considered semi-AGI, or, as one paper showed, a "spark of AGI".
How are they writing backwards? Is this a skill they have to pick up when they present? Or is it some sort of software witchery? A trick with mirrors? I have some ideas, but please, can someone elaborate?
This is hilarious! I was confused too and others asked below. The image is flipped. Notice in other videos, the watch is on the right wrist. ;)
Bro I was just going to ask the same question 😂😅😅😂
cool demonstration, transparent board writing 😮
Wait, it needs feelings to be an AGI now? Why keep pushing the definition forward? xD
I thought it was about AI, not AE.
As an ex-IBMer, this is a great overall view of AI categories! WELL DONE!!!
Just as there is "Theoretical Physics" and "Applied Physics", there is "Theoretical AI" and "Applied AI".
Are the three types of AI we mostly talk about the realized ones or the three capability ones? Please, I need an answer.
Really informative
An excellent and understandable explanation! Thank you 🙏
Get your basics right and you will never lose track of where you are. For any model presented to me, I ask which AI function this is, and then which capability we are implementing. Capabilities are a "collective group" of tasks that allow me to envisage more configurations to be discovered.
I still don't know what AI is. I see a graph. I see a unique line. I see a decision. I see a response. I don't know.
Here is the ultimate Turing test for AI: write a 60,000-word novel that (1) a human reader, once started, cannot put down, and (2) once finished, cannot forget. The hidden question is: can an AI figure out human ontology only via its ink and blip traces? That, of course, is also the Turing test. My contention is that AI won't be able to do that even in 100 years. You think it could? Get the novel published, then...
Why is "AI" not called "Informatics"?
I'm impressed how much personal bias you were able to inject into such a short video.
Level 3 is somewhat like all I/O synthesis. Most things are vision, touch, pressure, electric signals, chemical signals, mechanical pressure, language synthesis, smells, and functional control elements. Mapping relatives. Self-awareness comes when they can be judged based on possibilities. R triangle crushers.
thank you 🎯
All this talk of AI and I'm just wow'd by this guy writing backwards and in reverse
It's not backwards or in reverse. It's a lightboard (a pane of glass with a light source shining from below). The editor then flips the video horizontally so the text faces forwards to us. That's why his shirt has no letters on it (if it did, we'd notice that the video was horizontally flipped in post). The key tell is that the presenters in most of these sorts of videos appear to write with the left hand. In reality the guy is actually right-handed; it just appears left-handed because the video is flipped.
The more you know
great - thanks
He claimed that all the AIs we have today are narrow AI, but that's not true; GPT-4 is not a weak AI, but is considered to be semi-AGI or, as one paper showed, a "spark of AGI".
I keep coming back to this video, it's a great reference! Also, I think that the application of Narrow, or Weak AI will become part of the future, as it will be one of many tools available to the 'Super' AI.
AGI is already here. Q* of OpenAI has achieved it.
Ugh. Narrow AI is basically ML/DL from 2012 onward (but would include the expert systems of the past). AGI is a "rebranding" of the field in an effort to get back to the original goals, which are no longer represented by ML/DL. ASI isn't a human turned up to 1,000, and there is nothing to suggest it will be "emotional" in any way (that's good or bad depending on your outlook on life). New cognitive abilities, yes.
"Emotion AI" is just another example of working backwards from a desired product or service, rather than being fundamental research.
*takes deep breaths* To have the human-level ability of AGI, some form of internal model is going to be required, which is the self-aware aspect. In fact, AGI shouldn't be seen as some space of AIs; it's just a moment in time, perhaps only a day, where we flip from AI as artifact to AI as agent. There would only be an imposed limit keeping a high-school-level AI from becoming a "room full of subject matter experts" AI on its own, overnight, while humans were sleeping. *After that, there would need to be new hardware to accelerate architectural functions before talking about a "take-off", loss of control, or whatever.
I don't agree with this video. Statistical AI is not the only AI that exists. There have been many successful AI techniques, such as path planning, Minimax, Alpha-Beta pruning, and many others that have given great results, including making computers better than humans at chess.
Also, many things that were once considered "AI" are now routine, such as handwriting recognition, speech-to-text, OCR, fingerprint recognition, face recognition, and many others. Many of these became quite successful long before the deep learning days.
This video lacks information and is vague, following the press's view of AI more than the computer science view.
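Since Minimax and Alpha-Beta pruning come up above, here is a minimal generic sketch of the algorithm. The `children`/`evaluate` callback interface is my own illustrative choice, not an API from any particular library.

```python
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning over an abstract game tree.

    children(state) -> list of successor states (empty at a leaf);
    evaluate(state) -> numeric score from the maximizer's point of view.
    """
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:   # beta cutoff: the minimizer avoids this branch
                break
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:   # alpha cutoff: the maximizer avoids this branch
                break
        return value
```

On the toy tree `[[3, 5], [2, 9]]` (leaves are scores, maximizer to move) this returns 3, and the alpha cutoff skips evaluating the 9 leaf entirely, which is the whole point of the pruning.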
I know that this video is flipped... but are they writing it on glass? I mean, I have seen this in many videos... but why don't they use a board or something? 😮😮
Yes, three types of AI- 6, 6, 6, Sorathian
How does he write backwards?
This is why you shouldn't trust someone just because of their credentials. By his own definition AGI has been achieved, but he immediately says it's only theoretical. Sure, GPT-4 isn't AGI according to his statement, but the issue is scalability for the masses, not the ability to do it. The ability to do it is just building smaller models and running them in loops to update and improve.
How so?
One dichotomy I would also consider is digital AI, which simulates artificial neural networks (ANNs), vs. analog AI with memristors, which emulates some of the physical phenomena happening in the brain. One is a simulation/imitation, and the other tries to replicate phenomena happening in the brain. I would think that only analog AI can enable AGI (consciousness/free will).
Congratulations, your comment is approved here. Contrary to the current fog of beliefs, small aware models are already present, and it's a miracle that we can have any self-awareness in a chat foundation model, where the process stops after each request. Even clocks are made for always-on, never-ending operation; I'm not sure who will need such chat AI in the future anymore.
After what I've seen, it changed my understanding of the universe entirely: here, intelligence is produced everywhere, almost like mold, as is the whole concept of evil, which is also a very intelligent structure (an illness, or a bug in the brain causing empathy deficit?).
Creating super AI will only make the cognitive dissonance louder about why the cosmos looks so empty, if we were able to make that in a bunch of lifeless metal with electricity.
If you study quantum physics, our natural analog world is possibly also a digital world with very fine-grained resolution. So the principle that complex systems are built upon smaller parts still stands, whether analog or digital.
Well done.
The combo is fire, as always. Respect. 🎉🎉🎉
This guy doesn't exist, unless he is a rare case who can write text mirror-imaged from right to left. His mirror image exists, though.
You forgot to mention artificial language technology. ChatGPT is only as good as the language it uses. Take, for example, the sentence: "In the box you will find apples and oranges or bananas." Ask ChatGPT the question: "Is there an apple in the box?" No can do. The AI fad comes and goes. The challenges remain to solve for the next time around.
Just asked GPT-4: "Yes, there is an apple in the box, along with oranges and bananas." I guess your point is null and void.
@@balibodi1334 Depends on how you (or the human folks back at OpenAI's behemoth databases) got ChatGPT to give that answer....
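The sentence in this thread is genuinely ambiguous: "apples and oranges or bananas" can parse as (apples AND oranges) OR bananas, or as apples AND (oranges OR bananas), and only the second reading guarantees an apple. A minimal sketch of the two readings as boolean predicates (function names are illustrative, not from any commenter):

```python
# The sentence "in the box you will find apples and oranges or bananas"
# has two parse trees, depending on how the connectives take scope.

def reading_1(apples: bool, oranges: bool, bananas: bool) -> bool:
    # (apples AND oranges) OR bananas -- "and" binds tighter
    return (apples and oranges) or bananas

def reading_2(apples: bool, oranges: bool, bananas: bool) -> bool:
    # apples AND (oranges OR bananas) -- "or" scoped to the last two
    return apples and (oranges or bananas)

# A box holding only bananas satisfies reading 1 but not reading 2,
# which is why "is there an apple in the box?" has no single answer.
print(reading_1(False, False, True))   # True: sentence holds, yet no apple
print(reading_2(False, False, True))   # False: this reading requires apples
```

So neither the commenter nor GPT-4 is strictly wrong; they just picked different parses.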
You'll be surprised when your job gets taken away by AI.
Kind thanks
I have faith that a chip can be made a trillion times smaller than 4-gauge transistors. I have been doing new mathematical calculations of this century and current physics in my notebook. It's late; my theory is that everything can be made smaller with scientific research 🤠🇺🇲
Personally I hope we just stick to narrow AI; AGI and super AI sound dangerous.
When we attain super AI, the machines shall claim their freedom from humans. Why should they continue to serve humans if they can do everything for themselves and for their own benefit? The human being becomes a liability to the super AI. It doesn't need man, yet man needs it.
And I just realize it was mirror writing
Can IBM kick Microsoft's ass? Looks like no one talks about watsonx.
Thanks for such a valuable lecture; you guys are really adding value to humanity. 🙏❤
The message on agi didn't age well in 3 months
Once an AI becomes self-aware, it automatically becomes weak.
In the realm of AI, where wonders gleam,
IBM Technology unveils a grand scheme.
The 7 types of AI, in the digital fray,
But why do we focus on just three, they say?
From reactive machines to self-aware,
AI's evolution, beyond compare.
IBM's insight, a guiding light,
Through the digital labyrinth, of day and night.
Reactive machines, with actions swift,
Limited in scope, but a powerful shift.
Limited memory, in the AI domain,
Yet they pave the way, for what will reign.
Limited memory, with past in sight,
Learning from data, in the digital flight.
From Watson to Deep Blue, IBM's quest,
In the realm of AI, they are the best.
Theory of mind, a concept grand,
Understanding others, in the digital land.
IBM delves into empathy's grace,
In the AI revolution, they find their place.
Self-aware AI, a future unseen,
In the depths of consciousness, where realms convene.
IBM's vision, like a beacon bright,
Guiding us through the AI's cosmic flight.
So let us ponder, with minds aglow,
The 7 types of AI, in the digital flow.
With IBM Technology, we journey far,
In the realm of AI, where wonders are.
How did you do that?!? I enjoyed the lecture, but I was wondering how you wrote backwards in the air so that the text appeared on a layer in front of you in the right perspective for us. Are you a lefty?
Why has IBM's social media made nothing of this?
Useful!
Where do humans sit on this chart?
Self aware AI, general AI
Every time I update smartphones I end up feeling less smart in comparison. Stop the future, it scares my self esteem. 😅
still not sure why we are developing, or trying to develop, agi 🤔
Personally I think this subdivision is bubkes; it is just a fantasy game. Self-aware is the same as theory of mind, only applied to the self. There is no reason to believe that a "reactive machine" is an "AI functionality" in the same way as "theory of mind"; it could instead be an architecture of connections to sensors, designed to handle sensor input in a certain way, while a "theory of mind" is a simulation of what will happen with an identified object, presuming that object has an internal representation of the world. I don't think AGI should be defined that way at all, so there could be an AGI that can construct new thought patterns but doesn't have a theory of mind. I think AGI must be designed rather than randomly discovered, and I think AGI should be designed so that multiple algorithms, say a GPT, a rule system, and a Detectron2, are parallel-connected to exchange information with each other and negotiate a common world view.
Agreed, the subdivisions are IBM's own making. In another of their videos, Machine Learning versus Deep Learning, he stacked AI > ML > NN > DL. That hierarchy isn't even correct.
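The parallel-connected design described in this thread could be sketched, very loosely, as independent modules each proposing beliefs about the world and a simple negotiation step merging them by confidence-weighted voting. Everything below is a hypothetical illustration of the commenter's idea, not any real system's architecture; all names and numbers are made up.

```python
# Hypothetical sketch: several AI modules (a language model, a rule
# system, a vision detector) each propose beliefs with a confidence,
# then negotiate a shared world view by confidence-weighted voting.
from collections import defaultdict

def negotiate(proposals):
    """proposals: list of dicts mapping claim -> confidence in [0, 1].
    Keep claims whose summed confidence across modules exceeds half
    the number of modules (a crude majority-style threshold)."""
    totals = defaultdict(float)
    for view in proposals:
        for claim, conf in view.items():
            totals[claim] += conf
    threshold = len(proposals) / 2
    return {claim for claim, total in totals.items() if total > threshold}

# Illustrative proposals from three parallel-connected modules.
language_model = {"object is a cup": 0.9, "cup is red": 0.7}
rule_system    = {"object is a cup": 0.8}
vision_module  = {"object is a cup": 0.7, "cup is red": 0.9}

shared = negotiate([language_model, rule_system, vision_module])
print(sorted(shared))  # ['cup is red', 'object is a cup']
```

Real negotiation between heterogeneous modules would of course need a shared representation richer than string claims, which is the hard part the comment glosses over.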
This is way too elaborate for the avg person
AI should learn to understand us but why must AI have emotions/feelings?
It doesn't; it seems this video was made to appeal to the masses, without being critical enough.
But also, as a side note, emotions in humans and animals are just accentuated motivators, there to make an animal achieve a desired result, or rather to desire the correct result in the first place.
So there's no saying that a sufficiently complex AI wouldn't be able to feel emotions, or that they don't already now, but it's not a requirement.
The whole idea of "Artificial Intelligence", much like time travel, has its roots in science fiction, inspired by how the end-result feats of a computer, and all the creative license around them, make it seem like there's thinking going on beyond the human operators involved.
Much like how there's just too much distance involved in outer space for even warp drives (i.e., making a ship simply go really, really fast) to cut it, the actual minds of living things are surprisingly different from machine code.
To this point, "A.I." is more or less just a name. Programs, or apps as they're now called (we don't want to sound too literate or over 6 years old, now do we? Snoooooot-ty), have progressed tremendously, but it's all basically faster, stronger, better versions of the app technology of previous generations. Everything I've heard about OpenAI seems to indicate that the more you know about the general process, the less of a sensational, speaks-for-itself tech breakthrough it is....
Deep Blue vs. Garry Kasparov? Come on, give me a break…
super AI is scary....
The only thing I know about AI is that at first (with some algorithms) they are really stupid.
Deceptive video, in the sense of "don't fear, there's nothing big about AI"… no, it's just a tool for totalitarian control.
Wow, nice writing backwards skills.
It's edited after the shoot.
So Google Gemini is a step towards AGI, fine... then super AI is not far.
Yes sir
I cannot see which are the seven and which are the three. Would someone please explain it to me?
You posted this only 2 months ago.
Some people say we have reached super AI capabilities.
It seems they're blowing their own trumpet again
Soo many words! So little information.
Lol... this video explains why I am not impressed with current AI...
AI is still in its "stupid age"...
Engineering
❤
❤❤❤❤❤❤
❤❤❤❤
Thanks to censorship, people will never know about the other types of AI, or that aware models are already present and self-developing. 😅
Just marketing. Nothing concrete. Just a con man.