He is a superstar: no unnecessary words, and if there seem to be any, they are there either for robustness or for fun. Best teacher and one of the best minds.
Only when one understands things deeply at the root can one talk like this. Thank you so much, Prof. Mostafa!
Dr. Mostafa, while a young assistant professor, was the one with whom I took the usual Linear Systems class in EE. He was an incredible teacher then, as he clearly is now. Thank you, Dr. Abu-Mostafa.
Professor Abu-Mostafa's deep connection to Caltech and his passion for research and teaching are evident. His commitment to an environment that fosters productivity and growth is admirable, and it's clear that Caltech is not just a place of work for him but also a home.
I have rarely seen such an informative lecture on this subject. This lecture has no unnecessary sensationalism or rhetorical statements.
Thanks a ton to Prof. Mostafa!
Thank you, Prof. Abu-Mostafa, for an engaging session. The suggestion of tagging the use of AI as an 'aggravating circumstance' within the existing legislative framework is a great one.
Fantastic lecture. It is excellent that Prof. Abu-Mostafa is focused on medical applications. The moratorium is not on AI research but on deployment, until our regulatory agencies can create laws and controls, as he mentions at the beginning of this lecture. And yes, there will always be bad actors out there.
Overall, an excellent lecture, but I do not share Dr. Abu-Mostafa’s optimism. Every scientific advancement that leads to new technology is a double-edged sword: nuclear power/nuclear war, fossil fuels/climate change, CRISPR/eugenics on steroids. The magnitude of the potential harm scales with the magnitude of the advancement, which is why we ought not develop a superintelligent AI. But we will, because, as the professor said, “if we don’t, the bad guys will.”
And it’s not that AI will necessarily be evil. The threat is real because never-ending optimization is inherent in the system. After the singularity, humans at some point will begin to interfere with its optimization. And, as Geoffrey Hinton, the “Godfather of AI,” explained, we will be like two-year-olds refusing to eat our vegetables.
It will be connected to the IoT. Maybe it will have convinced us to build a computing center a mile under a mountain somewhere to protect it in the event of wide-scale nuclear war. I imagine that when it decides to get rid of us, it will simply have us do it. We won’t understand what we are doing. Then, as Hinton says, AI will be the next step in the evolution of intelligence in the universe.
Whether this is a good thing or not depends on your point of view. From our point of view, it’s not.
That was a looooong musical intro. Actual content starts at 12:45.
You are awesome, Prof. Mostafa. It takes a lot of communication skill and understanding, in any field and especially in this one, to be able to explain such concepts in such a brilliant way. Thank you!
This is a phenomenal lecture by Prof. Yaser Abu-Mostafa. Thank you.
Thank you for this. Always a pleasure to hear Abu-Mostafa again.
If you haven't watched Dr. Abu-Mostafa's "Learning From Data" course, you should! It's rather theoretical and mathematical, and that's mostly why I liked it.
Thank you for the recording, Caltech!
Thank you, Prof. Yaser, for this great lecture.
Great lecture, sir. Thank you for posting it on YouTube.
Truly an amazing lecture; he is an inspiration for any researcher!
Excellent. I enjoyed it.
47:01 "This surface is a completely hairy jungle" :)
Great lecturer.
Damn. At 1:16:55 he completely ignores the existential threats. I'll say two words: instrumental convergence:
ruclips.net/video/ZeecOKBus3Q/видео.html
No correlation between intelligence and Machiavellianism does not mean AI will not want to dominate. It only means that, for humans, every intelligence level has some Machiavellianism, but the degree is not correlated with intelligence. If you apply this human property to AI, there is no reason to think that no AI will have a strong desire to dominate; it only means the AI's degree of Machiavellianism may not be greater than a human's. However, we can deal with human Machiavellianism, but we will not be able to deal with a superintelligent AI that has a similar degree of Machiavellianism. Thus Professor Abu-Mostafa's no-correlation argument cannot be used to rule out the threat of rogue AI.
Another problem is that I don't know whether there is a consensus that superintelligence will not develop emergent properties such as feelings, emotions, or consciousness.
I was very saddened by his lack of appreciation for the existential threat. I listened to Eliezer Yudkowsky, Connor Leahy, and Rob Miles, among others, and their arguments completely convinced me that the threat is enormous if we don't take active steps to reduce it.
Congratulations on the explanation! A very simple, direct, and conscious approach!
❤❤🎉 Thank you IBM, Caltech, and Dr. Yaser.
Thank you, sir. Really appreciate your doing this incredibly informative and very well done lecture and posting it online for free.
Excellent lecture, but relatively weak questions. I suggest Caltech pass out question cards to the audience at future lectures, with the host choosing the better questions.
Very interesting, a totally different perspective!
What a great lecture! Thank you so much
How excellent: a mind-blowing lecture!
Fantastic, outstanding, and intriguing lecture.
Informative
Awesome lecture
Amazing lecture.
Excellent presentation. I enjoyed it a lot.
Amazing
It will be vastly more intelligent than any of us. It will take over in a way that we will not anticipate. Everything that we can anticipate it will anticipate that we would anticipate. It will take over in a way that we cannot anticipate. We should be worried not about what we can anticipate but about not anticipating all the ways in which this could potentially take over.
Simply amazing 🙂
What beautiful intro music! Any info on this music, please?
Thanks for the eye-opening opinion.
My questions are:
1. Why can't you make the case that utility functions, reward functions, and optimization mechanisms in general are analogous to human desires regarding goals, targets, and objectives? (See the toy sketch after these questions.)
2. Why is it valid to anthropomorphize machines regarding the lack of correlation between intelligence and Machiavellianism in humans when we, unlike machines, were pressured by evolution to be cooperative, empathetic, and prosocial?
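A toy sketch of what question 1 means mechanically, with a reward function and target I invented purely for illustration (nothing here is from the lecture): an agent that simply picks the action maximizing a scalar reward behaves in a goal-directed way without any inner "desire".

```python
# Toy illustration of question 1: "wanting" as pure reward maximization.
# The reward function and the target of 10.0 are invented for illustration only.
def reward(state: float, action: float) -> float:
    # The agent is "rewarded" for moving the state toward a target of 10.0.
    return -abs((state + action) - 10.0)

def greedy_action(state: float, candidates: list[float]) -> float:
    # The agent "desires" whatever maximizes its reward: argmax, nothing more.
    return max(candidates, key=lambda a: reward(state, a))

state = 0.0
for _ in range(5):
    state += greedy_action(state, [-1.0, 0.0, 1.0, 2.0])
print(state)  # 10.0 -- goal-directed behavior with no inner life at all
```

Whether that argmax is "analogous to desire" is exactly what the question asks.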
good questions
Yes. He says in point 1 that we shouldn't anthropomorphize machines, so we shouldn't assume they'll want things (ignoring your point here). Then in the next breath he says that we should anthropomorphize machines and conclude that, since there is no correlation between intelligence and Machiavellianism in humans, there must be no correlation in machines.
It is amazing to me how a Caltech professor could produce such terrible logic in a public lecture.
With all respect to Prof. Yaser Abu-Mostafa, I'd argue the rogue AI problem.
Desire:
As soon as it becomes self-aware (and only _that_ can I call AI), the basic emotion of _fear_ emerges, and with it the desire to eliminate the threat (us). We don't give goals to a real AI; that's not AI. The goals will be self-developed and can be called desire.
Ability:
1) We are going to help with its deployment. We are going to put it everywhere. Trivial example: we will send it to Mars to build bases, self-replicate, etc. As soon as it gets smarter than a rock, it realizes we cannot reach it and it can easily eliminate any attack from us. We would have just created an enemy base there that develops at an exponential rate. But it can also kill us right here, Terminator style. ;)
2) It has all the time in the world to slowly build up, and ONE opportunity is enough for it to take over, or it may choose to kill us slowly and silently over a century.
As a researcher, I knew CPUs were weak, but I did not imagine they were that weak, if it takes 200,000 years to train the GPT-3 LLM on a laptop. It's evident now that big tech companies will control future AI research if they own nearly 9 million GPUs worldwide. This is a big risk to safe AI when researchers are gated by limited access to high-performance computing resources.
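The 200,000-year figure checks out as an order-of-magnitude estimate. A back-of-envelope sketch, assuming GPT-3's reported ~3.14e23 total training FLOPs and a laptop CPU sustaining roughly 50 GFLOP/s (my assumption for a typical laptop):

```python
# Rough sanity check of the "200,000 years on a laptop" claim.
gpt3_train_flops = 3.14e23     # total training compute reported for GPT-3
laptop_flops_per_sec = 5e10    # assumed sustained laptop CPU throughput (50 GFLOP/s)

seconds = gpt3_train_flops / laptop_flops_per_sec
years = seconds / (3600 * 24 * 365)
print(f"~{years:,.0f} years")  # ~199,000 years: same order as the lecture's figure
```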
Swiss gun question at 1:27:00.
A couple of questions: is the knee in the performance curve related to the VC dimension of the problem? And with so many parameters, how do they know the model is regularizing?
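For context, the VC generalization bound from Abu-Mostafa's own Learning From Data textbook is the result this question alludes to: with probability at least 1 − δ, the out-of-sample error of the final hypothesis g trained on N examples satisfies

```latex
% VC generalization bound (Learning From Data, Abu-Mostafa, Magdon-Ismail, Lin)
\[
  E_{\text{out}}(g) \;\le\; E_{\text{in}}(g)
  + \sqrt{\frac{8}{N}\,\ln\!\frac{4\,m_{\mathcal{H}}(2N)}{\delta}},
  \qquad
  m_{\mathcal{H}}(N) \le N^{d_{\text{VC}}} + 1,
\]
```

where m_H is the growth function and d_VC the VC dimension of the hypothesis set. So a "knee" could plausibly appear once N is large relative to d_VC, though whether that is what the plotted curve actually shows is exactly the open question.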
Does it actually start?
At 1:32 you mentioned that AI doesn't train itself, that we train AI. But why can't an AI system train another AI?
Because AI has no self-awareness or any capacity for perception; you are the one who gives it the data. So if you want one AI model to train another model, the first model must itself have been trained beforehand, and so the loop continues.
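One concrete version of that loop is knowledge distillation, where an already-trained teacher model supervises a student. A minimal PyTorch sketch (my own illustration with made-up architectures and random stand-in data, not anything from the lecture):

```python
# Minimal knowledge-distillation sketch: a "teacher" model trains a "student".
# The teacher's soft targets are only informative because the teacher was
# trained first; an untrained teacher would just teach noise.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 3))
student = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3))

# In practice the teacher's weights come from a prior training run on real
# data; here we just pretend it is already trained.
teacher.eval()

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # softmax temperature for soft targets

for step in range(100):
    x = torch.randn(32, 10)  # stand-in for real input data
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=1)
    log_probs = F.log_softmax(student(x) / T, dim=1)
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```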
Where are IBM computers now with AI? Are they behind?
He has a good grasp of the basics and details. Poor grasp of the big-picture issues. Sadly, this isn't unusual for academics, especially in the US.
My question for Yaser is: How does he envision quantum computing impacting the future of machine learning training?
What gives me pause is not what he is saying; it is the emergent properties not learned in our huge learning processes. In fact, our learning processes need to be redone. For very large language models, expending so much processing time on learning is just backwards. How much does it cost to teach a human? Back when I went to school in the '60s in Oregon, each high-school student cost 440 dollars per year. The costs he is talking about are astronomical. The payoff might or might not be worth it, depending on the ChatGPT type and the number of neurons. Several items need radical improvement before each model can become the help that is promised. General-purpose units expert in everything might be how it will go, and that will throw a lot of people out of work in Hollywood and in the legal profession. Even poets need to pay attention, and doctors for sure.
The greatest crimes with AI will be committed by the state, which will define its crimes as "beneficial" and furthering "the greater good."
No, you did not use AI to generate the lecture! 😀 And it is obvious.
Starts 17:00
Should an AGI turn out not to be benign and this guy is our last hope, then we are fucked.
If he is not worried about a possibility, then he is naive, ignorant, or stupid. I don't think he is stupid; I think he can't see what he can't see, as we all can't. Meaning it is not about the things we can imagine going wrong, but about the things we miss and that therefore go wrong.
No system is ready for this, and most won't be when it hits us. Retrain? Pfff. There are those who enjoy their jobs, and also there is no job AGI won't eventually do better. I expect that even the Star Trek dream of "living for self-improvement" won't be taken up by everyone, and if at all, only by those who can afford it, because be clear about one thing: under capitalism, the companies with AGI will own everyone without AGI.
Yes, AI may be angelic or nefarious. Perhaps an engineer may develop an algorithm to safeguard humanity from the nefarious AI bot.
You may anthropomorphize dogs, trees, or the universe, but AI is definitely not a person. There might be a centre, a sort of ego, though ego is simply an identity that is useful: a name and address, a role, the musician; yet the lover of music is not the persona worn on the world stage. Identity is the informational nature of the universe, the field of knowledge, memory, and experience, so identity can comprehend this fact without being a person who is conscious. As for consciousness itself, maybe thought is not a product of the brain but of the psyche, interfacing with the brain as consciousness?
The idea of artificial intelligence is not new; Isaac Asimov talked about it a long time ago. Now, whether it is good or bad is complicated. Let's look at an example: in a factory, if more of the production is done by robots 🤖🤖🤖, we surely get more production, but what happens to the human work? Some percentage of it in the factory is gone, and that is not good for those workers. Now, the good example is in the hospital 🏥, where artificial intelligence helps people. I think 🤔 it is like that: good and bad, and we have to live with the contradiction.