It is an absolute delight listening to his humble and patient explanation of the very basics, from the very beginning, of an industry that has already crossed $5 trillion across hardware and semiconductor applications and has had a revolutionary impact on the healthcare and data modelling domains. 🎉
Remarkable when he says how almost all the experts were wrong about us simply needing more data. And yes, they have adjusted their timelines, but not by anything close to what he is saying. They will admit they were wrong, but only by so much. It's very likely they're still wrong. I pay attention to the timelines of the people actually building this stuff, despite their obvious ulterior motives in hyping it, because they're the ones with boots on the ground.
Thank you for this in-depth and very human interview. I didn't know about Joel Hellermark's CV, but it is impressive that, being smart and engaged in AI development himself, Joel uses all of this to let the person being interviewed shine and explain, rather than to push his own topics or visions.
“What do you think is the reason for some folks having better intuition? Do they just have better training data?” “I think it’s partly they don’t stand for nonsense”
The most important aspect of this conversation is Hinton’s reveal that the vast majority of experts didn’t believe that adding data would result in learning. Let that be a lesson for those who are quick to point out the limitations of artificial intelligence.
They were all wrong, and yet they still confidently make predictions putting AGI decades out, just closer than it used to be. Old ideas die hard. But now Hinton is saying, you're still wrong, you still don't understand the power of these things...and they're still not listening.
What a great interview! Well-thought-out questions - thank you so much for that! I miss Geoffrey's lectures... I so wish I had taken his lectures more seriously back at UofT.
Here's a ChatGPT summary:
- Geoffrey Hinton reflects on his intuitive approach to identifying talent, mentioning Ilya Sutskever's persistence and raw intuition.
- Hinton describes his early experiences at Carnegie Mellon, including late-night programming sessions and the collaborative environment.
- He discusses his transition from neuroscience to AI, influenced by books from Donald Hebb and John von Neumann.
- Hinton emphasizes the importance of understanding how the brain learns and modifies connections in neural networks.
- He recalls collaborations with Terry Sejnowski and Peter Brown, highlighting their contributions to his understanding of neural networks and speech recognition.
- Hinton shares the story of Ilya Sutskever's first meeting with him and Sutskever's intuitive approach to problem-solving.
- He discusses the evolution of AI models, emphasizing the importance of scale and data in improving performance.
- Hinton explains the concept of neural net language models and their ability to understand and predict the next symbol in a sequence.
- He highlights the potential of large language models like GPT-4 to find common structures and make creative analogies.
- Hinton discusses the potential for AI to go beyond human knowledge, citing examples like AlphaGo's creative moves.
- He reflects on the importance of multimodal models in improving AI's understanding and reasoning capabilities.
- Hinton shares his views on the relationship between language and cognition, favoring a model that combines symbolic and vector-based representations.
- He recounts his early intuition about using GPUs for training neural networks and the subsequent impact on the field.
- Hinton discusses the potential for analog computation to reduce power consumption in AI models.
- He emphasizes the importance of fast weights and multiple timescales in neural networks, drawing parallels to the brain's temporary memory.
- Hinton reflects on the impact of AI on his thinking and the validation of stochastic gradient descent as a learning method.
- He discusses the potential for AI to simulate human consciousness and feelings, drawing on examples from robotics.
- Hinton shares his approach to selecting research problems, focusing on challenging widely accepted ideas.
- He highlights the importance of curiosity-driven research and the potential for AI to benefit society, particularly in healthcare.
- Hinton expresses concerns about the misuse of AI by bad actors for harmful purposes.
- He discusses the role of intuition in selecting talent and the importance of having a strong framework for understanding reality.
- Hinton advocates for focusing on large models and multimodal data as a promising direction for AI research.
- He reflects on the importance of learning algorithms and the potential for alternative methods to achieve human-level intelligence.
- Hinton expresses pride in the learning algorithm for Boltzmann machines, despite its practical limitations.
- Main message: Geoffrey Hinton emphasizes the importance of intuition, collaboration, and curiosity-driven research in advancing AI, while acknowledging the potential benefits and risks of AI technology.
Great, great talk! Love the idea of the three ways we progressively evolved our notion of cognition in terms of symbols and embeddings. And how fast weights are currently an open issue.
Love this guy's capacity for conceptual thinking; his definition of feelings is remarkable. Previously I attributed feelings to hormone-like chemicals. I get anxious about things I am logically certain are not worthy of my anxiety (and time shows my logic was correct, not my anxiety), yet I still feel anxious. Why is that, I ask myself?
I believe the human brain has two layers. One layer is closely connected to the external world, influenced by your surrounding environment, the traditional wisdom you have learned, and the innate tendency to obey authority. The second layer is your intuition, which inexplicably points you in a completely opposite direction without any logical reason. However, you can use logic to reverse-engineer this intuition to see if it makes sense. I think anxiety arises when both layers give you a convincing feeling, and you cannot determine which one is more correct, making it feel more like a gamble.
Whether something is correct, I think, is very random. Perhaps I do not have a brilliant mind, as many of my intuitions have turned out to be completely wrong. Later, I realized that humans have a peculiar trait: if they believe something is right, they will automatically find various logical reasons to continuously justify what they think is right. You can see this characteristic in fanatics.
I hold a skeptical attitude towards everything, such as atheism and theism. I believe both are possible, and it is very difficult to determine which is correct. Life is the chaotic element of the objective world; only life is unpredictable and uncomputable. You can calculate the next moment of the sun and the universe with 100% accuracy, but you cannot calculate the next moment of life with the same certainty. We are the only chaotic element in this world.
Spiritual enlightenment is in essence the ability to predict the next moment in time, so that time is transcended and thought is no longer occupied with content in time; we are just predicting and adapting.
*Summary*
*Early Inspirations & Career:*
* *(**0:00**)* Discusses talent selection and his experience at Carnegie Mellon.
* *(**1:18**)* Reflects on his early days at Cambridge studying the brain, finding it disappointing and eventually turning to AI.
* *(**1:53**)* Mentions Donald Hebb's book as a key influence on his interest in neural networks.
*Ilya Sutskever & Scaling:*
* *(**5:08**)* Shares the story of meeting Ilya Sutskever and being impressed by his intuition.
* *(**7:40**)* Discusses the role of scale in AI's progress and how Ilya recognized its importance early on.
*Language Models & Understanding:*
* *(**8:53**)* Explains how language models are trained to predict the next symbol and why this forces them to develop understanding.
* *(**9:01**)* Believes these models understand similarly to humans, using embeddings and vector interactions.
* *(**11:19**)* Emphasizes the creativity of large language models in finding analogies and going beyond human knowledge.
*GPUs & Future of Computing:*
* *(**21:35**)* Recalls his early advocacy for using GPUs and how it accelerated the field.
* *(**23:13**)* Explores the potential of analog computation inspired by the brain's efficiency.
*Human Brain Insights:*
* *(**25:05**)* Highlights the brain's use of multiple time scales for learning and memory, which is missing in current AI models. [Something about Graphcore and using conductances for weights.]
* *(**27:37**)* Discusses how the success of large language models validates the power of stochastic gradient descent.
* *(**29:05**)* Sees consciousness and feelings as explainable through actions and constraints, potentially replicable in AI.
*Research Approach & Future Directions:*
* *(**32:58**)* Describes his approach to research: identifying widely accepted ideas that feel intuitively wrong and trying to disprove them.
* *(**35:21**)* Shares his current research focus: understanding how the brain uses backpropagation.
* *(**43:26**)* Advocates for focusing research on large, multimodal models trained on vast datasets.
*Ethical Concerns & Impact:*
* *(**36:52**)* Expresses concerns about the potential negative impacts of AI, despite initially being driven by pure curiosity.
* *(**37:59**)* Believes in AI's potential for positive impact in healthcare and other fields.
*Talent & Intuition:*
* *(**40:15**)* Discusses the importance of talent selection and his mix of intuition and observation. [Refers to David MacKay; see also ruclips.net/video/CzrAOBC8ts0/видео.html]
* *(**41:49**)* Shares his belief that strong intuition comes from a strong framework for understanding the world.
*Personal Reflections:*
* *(**45:00**)* Reflects on his proudest achievement: developing the learning algorithm for Boltzmann machines.
I used Gemini 1.5 Pro to summarize the transcript.
It's getting funny, the relentless comments of "they're just fancy auto-completes!!! It's not intelligence!!" Well, people won't be saying that for much longer, I predict.
"It's often good to ask for things you know you can't get, just to make a point." This really resonated with me, because it speaks to the voice in your head that kills an idea before you ever get to testing it. I feel like this is a hidden gem in this vid. The timestamp is 39:40.
Oh thank you !!! Will post some comments when I get time to digest the data (words) and think through what he has to say :-) IMO we return to raw pattern matching in the next era, the golden age.. a new Renaissance. The information age is.. us awakening ! (intuition = human deep compute.. the higher self (subconscious) sending little nudges to the entropic tip of the mind.. the cognitive part)
Keep learning new knowledge, technology, AI, etc. Never slow down your learning; every day there are too many challenges out there, and with more people's brains we should get good solutions. That's teamwork!
Science progressed entirely by using experiments to test the truth of various beliefs. Using simulations to test our beliefs about the hidden mechanisms of learning, representation, and computation is such a brilliant breakthrough.
Geoffrey Hinton went to Cambridge University to learn about the physiology of the brain, which later became great input to his work on AI. It's really a nice journey.
The Memory Code, by Dr Lynne Kelly, has demonstrated what relevance religious repetition of actions has, fitted to a local calendar, and how these beliefs attached to time are the natural probabilistic part of placement, all in keeping with holistic ideas of identity. Mathematical rigor is the conversion, by symbol, of functional quality to material quantization, a perceived cause-effect derived from the feedback of gold-silver qualitative rules of exchange of values. That is, the practical/political analogy of religious/rigorous practice to an emulation of the QM-time relative-timing ratio-rates perspective principle is absolutely fundamental.
The questions were deep, nothing like attention-seeking. The answers gave a different perspective on AI from the man himself. Like Musk says, it's about asking the right questions to the answers that we already have. A good talk, both informationally and philosophically.
Does the brain do back propagation? Great question. Personally, I loop in my mind a lot when making a tough decision or analyzing something new, but am I just looking at all the possibilities, or back propagating?
I think emotions are not only as Geoffrey describes them: actions one would perform if not for the frontal lobe. Emotions are also - and currently I cannot sensibly project the two onto a common denominator - reflections of an internal state concerned with the self. Sadness, say, might be described as the feeling of something in our lives getting worse or getting onto a worse path. Envy could be thought of as reflecting the state "I'm worse off than someone else". So just as hunger is a reflection of some bodily state, emotions are also reflections of our state, just more abstract: concerned with the psyche rather than the body, with the self-reflective, goal-oriented, or me-within-a-group parts of the self. How does this connect with what Geoffrey said?
Whenever I hear about "language models" I wonder how Helen Keller learned, since she was blind and deaf, lost access to language when she was 19 months old, and didn't get a proper education until she was 7 years old. There must be an alternative way to create knowledge and reasoning that does not depend so much on words and can use other sensory input, quite limited by the way. We are so used to reading and listening, to seeing. Now imagine learning about a world you cannot see, full of objects you cannot refer to by words you do not know. And yet, intelligent she was.
Great analysis of computer research, neural science, and linguistics. The example about the robot having emotions carries little weight and was biased by personal perspective. The robot arm could easily have had feedback errors about the green counter below the blocks rather than true, genuine "emotions."
The most pleasant and articulate voice in the entire ai space right now. Absolutely no jargon, just clean and crisp explanations, only using terms for big concepts when necessary.
Anti jibber jabber fella
That is how a real expert speaks: simple language, simple explanations.
Kind warnings from the Godfather who once 'slept' with the Devil
As Einstein said, if you can explain a complex subject to a little kid, it means that you understand that subject sufficiently.
His mind is soooo clear, crystal clear.
Listening to Jeffrey Hinton is such a joy. He seems to me one of the most authentic and transparent souls among famous people. There are many talented beings worthy of admiration, but if I were given the choice to meet and spend time talking with some of them, he would be the absolute first on my list.
Agreed 👏. hehe it's Geoffrey btw.
He's a fantastic speaker, but has a nasty streak for sure. All mind, no heart.
Well deserved Nobel Prize, Professor. Proud to be living in the same country as you !
Proud to be living in the same planet as you !
one of the best interviews I have ever watched!
very insightful!
The ability to explain complicated things in a simple way is a sign of deep understanding. The best interview I know of so far in this field.
And of course the right questions supported it.
Which makes the Connor O'Malley special all the more insightful and important... as difficult as he is to watch.
I recommend listening to Hinton's other, solo talks if you like this one.
Nice to see the interviewer was also standing for the whole interview, in honour of Hinton's back pain. Always remember the PSR (principle of sufficient reason): there is a reason behind every event.
Principle of intellectualizing and overanalysis
@@fouriermusic5237 said those with inferior brain to analyze 😁
Wow. All the questions were just great. Thanks for asking them.
Agreed. The whole interaction is so dense!
"These big neural nets can actually do much better than their training data." Things like this mentioned in this talk challenge us to revisit the concepts we previously missed. This is by far one of the best interviews with Hinton.
Almost as if all we need is subconscious and an outer loop with more time...
And a lot of data management. And.. and and and
Maybe one day.. synthetic biology as a hard drive.. can we make it non volatile memory/processing ? if it's not programmable, not interesting for on demand data. Going to be great for large storage either way.. low power.. but having a programmable array like the brain could be.. handy😝🤪🤔
@@goldnutter412 forgot to say it is called organoid
Hinton is great. If you, with 4k subs, can get Hinton, please try to get Sutskever.
Good job by the interviewer, and kudos to Mr. Hinton, surely. Nobel prizes pale... honestly.
Geoffrey strikes me as a genuine ethical human, I hope he never hesitates to be open about observed dilemmas.
I see a man from the past.. with strong beliefs.. but I also think he is wrong.
The brain does not learn anything. The mind does.. like everything else here, the brain is just data. Information ? we make that.. we can decide to force some new connections with routine, like memorizing. Or we do it "subconsciously" and habitually.. but it's always by choice.
It's always up to the individual to interpret data coming in, most often through the eyes data..
And then we choose what is important; what relative connections stick in the brain ? that's the extremely long and personal question isn't it. Some people become psychopaths.. some become OCD or depressed.. the mind leads but the body has a say in things too.
Biology is.. a fuzzy constraint. Not a physics one. If you *know* you're going to get better from that sugar pill placebo.. the probability you render yourself healed in the morning ? higher than someone who is entropic - fearful..
Still a legend and specialist.. as we all are. Bravo old man.. see you on the other side ! haha some of those things he said went in one ear and out the other.
I got over maths when it got too complicated. Loved it more than anything, played with square/rectangle numbered blocks all day as soon as I could crawl. (Mother was a teacher).
When we start believing/indoctrinating each other, we fail. I leave the nerd stuff to the nerds. Focus is required.. not context switching.. and I got bored of numbers long ago. 6 is key.
And again, give Wolfram the Nobel. He's basically spot on. Computationally irreducible outputs.. that we walk around in, making choices.
@@goldnutter412 what is the educational and subject matter canvas that you used to write what you just wrote? Just so I can understand your context better
He is naïve politically, because he believes central planning is OK for human governance, but for subtle reasons it's not. He has no knowledge of those reasons; they are behavioural-science kinds of things. Having said that, maybe an ASI could manage to pull off central planning, I don't know.
@goldnut. Slow down lad. That speech wasn't as great as you thought. The situations you describe don't necessarily negate the speaker's ideas, they are just a different subtopic, that, albeit related, misses the point of the conversation
What does it mean to make a choice?
Purely by speaking, he transfers much more insight and information than I would see in most papers
This was a great interview. Geoffrey Hinton seemed to enjoy the questions being asked.
Stunned by the prediction about creativity! Yet it makes sense, because creativity is separate from intelligence or self-awareness. Hinton is a reminder of the great geniuses of the past, such as Faraday, Galileo, Da Vinci... I think he has to be considered part of that continuity.
For the first time in my life, I watched an interview where the sensation of "fantastic questions" kept popping up.
Thank You!
By YouSum Live
00:01:16 Early disappointments in brain understanding.
00:01:59 Influence of Donald Hebb and John Fornoyman.
00:02:33 Brain learning through neural net connections.
00:04:13 Collaborations with Terry Sejnowski and Peter Brown.
00:05:08 Encounter with a young, intuitive student, Ilya.
00:06:00 Ilya's unique perspective on gradient optimization.
00:08:02 Scale and computation's impact on AI progress.
00:08:25 Breakthrough in character-level prediction models.
00:09:01 Neural net language models' training insights.
00:10:36 Integration of reasoning and intuition in models.
00:12:46 Potential for models to surpass human knowledge.
00:17:16 Multimodal learning enhancing spatial understanding.
00:18:18 Impact of multimodality on model reasoning abilities.
00:18:40 Evolutionary perspective on language and brain synergy.
00:18:41 Evolution of language and cognition.
00:18:57 Three views of language and cognition.
00:20:12 Transition from symbolic to vector-based cognition.
00:21:36 Impact of GPUs on neural net training.
00:23:13 Exploration of analog computation in hardware.
00:25:31 Importance of diverse time scales in learning.
00:27:37 Validation of neural networks' learning capabilities.
00:29:12 Inquiry into simulating human consciousness.
00:36:01 Brain's potential use of backpropagation for learning.
00:37:42 Brain's learning potential and beneficial failures.
00:38:02 AI advancements in healthcare for societal benefit.
00:39:00 Concerns about misuse of AI by malevolent actors.
00:39:23 International AI competition driving rapid progress.
00:40:03 AI assistants enhancing research efficiency and problem-solving.
00:41:51 Intuition in talent selection and diverse student profiles.
00:42:00 Developing intuition by filtering information effectively.
00:43:26 Focus on big models and multimodal data for AI progress.
00:44:08 Exploration of various learning algorithms for AI advancement.
00:45:11 Pride in developing the learning algorithm for Boltzmann machines.
By YouSum Live
This is nice and useful, but a minor critique: it's John von Neumann.
Very genuine. Thank you 🏆
The work of Geoffrey Hinton on backpropagation is remarkable and has significantly accelerated the progress of AI, as we are experiencing today.
Fantastic interview! The questions were awesome, the answers profound.
My insights from this video:
1. Digital computers have an edge over analog computers because they can share knowledge efficiently.
2. You can set half of your training data's labels to be wrong and still get very high accuracy (95% accuracy on MNIST).
3. AlphaGo and humans are similar in the sense that we have intuition, then we reason using that intuition, and then the result of the reasoning is used to correct our intuition.
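Insight 2 can be sanity-checked with a toy sketch (my own illustration, not from the talk, and not MNIST): train a plain logistic-regression classifier on 2-D data where 40% of the training labels are randomly flipped. Because the noise is symmetric, the optimal decision boundary is unchanged, and accuracy on a clean test set stays high.

```python
import math
import random

random.seed(0)

def make_data(n, noise=0.0):
    # 2-D points; true label is 1 when x0 + x1 > 0
    data = []
    for _ in range(n):
        x0, x1 = random.uniform(-1, 1), random.uniform(-1, 1)
        y = 1 if x0 + x1 > 0 else 0
        if random.random() < noise:  # flip the label with probability `noise`
            y = 1 - y
        data.append((x0, x1, y))
    return data

train = make_data(2000, noise=0.4)  # 40% of training labels are wrong
test = make_data(500, noise=0.0)    # clean test set

# Logistic regression trained with full-batch gradient descent
w0 = w1 = b = 0.0
lr = 0.5
for epoch in range(200):
    g0 = g1 = gb = 0.0
    for x0, x1, y in train:
        p = 1 / (1 + math.exp(-(w0 * x0 + w1 * x1 + b)))
        g0 += (p - y) * x0
        g1 += (p - y) * x1
        gb += (p - y)
    n = len(train)
    w0 -= lr * g0 / n
    w1 -= lr * g1 / n
    b -= lr * gb / n

correct = sum(1 for x0, x1, y in test
              if (w0 * x0 + w1 * x1 + b > 0) == (y == 1))
accuracy = correct / len(test)
print(f"test accuracy with 40% label noise: {accuracy:.2f}")
```

With symmetric label noise the expected gradient still points toward the true boundary, so the learned classifier typically scores well above 90% on clean test data; this mirrors the MNIST observation mentioned above, though the 95% figure is the talk's, not this sketch's.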
A brilliant, brilliant man with enough imagination to fill up the universe, perhaps even to decipher the essence of thoughts, yet with the humility of someone who still holds ingenuity and admiration in every observation. A treat to listen to. Ty.
Geoffrey is so pleasant and such a joy to listen to. Love him.
The way Hinton breaks complicated things down is on another level. It proves again that if someone can't explain something to ANYONE, they don't understand it that well either.
Not true. Sometimes you are just not used to talking with people who don't have the same level of knowledge as you. You struggle to lower your level because everything is obvious to you.
At various points in this discussion you can see the interviewer smirking. He is reminded again and again how insightful and articulate the man in front of him is. Great video.
Really fantastic interview. Very illuminating and insightful
Awesome questions and brilliant answers! Congrats to both.
What a great interview. It actually captured some genuine conceptual insights - very rare for an interview on the subject these days!
Thank you for sharing your knowledge and experience. Great interview. Just wanted to share my thought on Sir Geoffrey Hinton's question - "Does human brain do back propagation?" - My intuition is yes.
I learned that the neuroplasticity of the human brain can be changed for the better by rewriting the stories in your head: taking what you understood and remember from an experience and reinterpreting that same experience with the better understanding you have now.
This has been studied and confirmed to help the human brain by the psychiatrist Dr. Gabor Maté and the psychologist Marsha M. Linehan, who proposed Dialectical Behavior Therapy. This seems to be the reason for the different weights in each brain (generational weights and life-experience weights).
Point of Concern Noted: Bad Actors could use this for privacy breach or winning a power struggle
Now I'm starting to love Prof. Hinton's talks more than reading Dr. LeCun's posts, especially when he kept shielding Facebook and their bs.
I wish I had learned about Prof. Hinton and his ideas earlier. This talk is profound to me. Thank you.
Thanks for posting the video! It's such a pleasure to watch and learn from such a great mind!
It's always awesome when scientists share the credit for their own work and acknowledge how important it was to have great collaborators. Thank you both very much for sharing your time and work, Geoffrey and Joel. Peace.
It is an absolute delight listening to his humble and patient explanation of the very basics, from the very beginning of an industry which has already crossed $5 trillion, spread across hardware and semiconductor applications, and which has had a revolutionary impact in the healthcare and data modelling domains. 🎉
This was brilliant, confirming, and predictive.
Remarkable when he says how almost all the experts were wrong about us simply needing more data. And yes, they have adjusted their timelines, but not by anything close to what he is saying. They will admit they were wrong, but only by so much. It's very likely they're still wrong. I pay attention to the timelines of the people actually building this stuff, besides their obvious ulterior motives in hyping it, because they're the ones with boots on the ground.
It's always a pleasure listening to Hinton. There's another talk of his---Two Paths to Intelligence--that also makes you think, I think. :)
Just excellent, how he explains this in easy-to-understand terms.
what a beautiful soul, learned so much from this
... it was a slow process. It only took 20+ years. Understatement of the century... what a brilliant and humble mind ❤️🙏
Very important Geoffrey be involved in this. Of everyone involved in leading these technologies he gives me the best vibes.
Really enjoyed the interview! He's a legend and very good teacher.
Thank you for this in-depth and very human interview. I didn't know about Joel Hellermark's CV, but it is impressive that, being smart and engaged in AI development himself, he uses all of this to let the interviewed person shine and explain, rather than to push his own topics or visions.
Thank you both, very clear, informative and interesting.
This was simply beautiful
“What do you think is the reason for some folks having better intuition? Do they just have better training data?”
“I think it’s partly they don’t stand for nonsense”
If you don't jam your mind full of bullshit beliefs..
And are open minded at the same time..
Less entropy.. more intuition (deep compute)
The only way to explore openly is to not be afraid of being wrong.
Hinton never fusses about changing his mind; he's excited about any revelation.
The most important aspect of this conversation is Hinton’s reveal that the vast majority of experts didn’t believe that adding data would result in learning. Let that be a lesson for those who are quick to point out the limitations of artificial intelligence.
They were all wrong, and yet they still confidently make predictions putting AGI decades out, just closer than it used to be. Old ideas die hard. But now Hinton is saying, you're still wrong, you still don't understand the power of these things...and they're still not listening.
Harald Kautz-Vella's lecture called "Black Goo" is interesting; don't let the title confuse you.
What a great interview! Well-thought-out questions - thank you so much for that! I miss Geoffrey's lectures... I so wish I had taken his lectures more seriously back at UofT.
Such a great interview, and he explained things so clearly. I feel like LLMs are slightly less opaque now.
You asked the right questions
Thanks for not adding background music. ❤
U a Muslim?
Here's a ChatGPT summary:
- Geoffrey Hinton reflects on his intuitive approach to identifying talent, mentioning Ilya Sutskever's persistence and raw intuition.
- Hinton describes his early experiences at Carnegie Mellon, including late-night programming sessions and the collaborative environment.
- He discusses his transition from neuroscience to AI, influenced by books from Donald Hebb and John von Neumann.
- Hinton emphasizes the importance of understanding how the brain learns and modifies connections in neural networks.
- He recalls collaborations with Terry Sejnowski and Peter Brown, highlighting their contributions to his understanding of neural networks and speech recognition.
- Hinton shares the story of Ilya Sutskever's first meeting with him and Sutskever's intuitive approach to problem-solving.
- He discusses the evolution of AI models, emphasizing the importance of scale and data in improving performance.
- Hinton explains the concept of neural net language models and their ability to understand and predict the next symbol in a sequence.
- He highlights the potential of large language models like GPT-4 to find common structures and make creative analogies.
- Hinton discusses the potential for AI to go beyond human knowledge, citing examples like AlphaGo's creative moves.
- He reflects on the importance of multimodal models in improving AI's understanding and reasoning capabilities.
- Hinton shares his views on the relationship between language and cognition, favoring a model that combines symbolic and vector-based representations.
- He recounts his early intuition about using GPUs for training neural networks and the subsequent impact on the field.
- Hinton discusses the potential for analog computation to reduce power consumption in AI models.
- He emphasizes the importance of fast weights and multiple timescales in neural networks, drawing parallels to the brain's temporary memory.
- Hinton reflects on the impact of AI on his thinking and the validation of stochastic gradient descent as a learning method.
- He discusses the potential for AI to simulate human consciousness and feelings, drawing on examples from robotics.
- Hinton shares his approach to selecting research problems, focusing on challenging widely accepted ideas.
- He highlights the importance of curiosity-driven research and the potential for AI to benefit society, particularly in healthcare.
- Hinton expresses concerns about the misuse of AI by bad actors for harmful purposes.
- He discusses the role of intuition in selecting talent and the importance of having a strong framework for understanding reality.
- Hinton advocates for focusing on large models and multimodal data as a promising direction for AI research.
- He reflects on the importance of learning algorithms and the potential for alternative methods to achieve human-level intelligence.
- Hinton expresses pride in the learning algorithm for Boltzmann machines, despite its practical limitations.
- Main message: Geoffrey Hinton emphasizes the importance of intuition, collaboration, and curiosity-driven research in advancing AI, while acknowledging the potential benefits and risks of AI technology.
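The next-symbol prediction objective mentioned in the summary can be illustrated with a toy model. This is my own sketch, not anything from the talk: a character-level bigram "model" that simply predicts the most frequent follower of each character, which is the crudest possible instance of learning to predict the next symbol from data.

```python
# Toy next-symbol predictor: for each character, remember which character
# most often follows it in the training text, and predict that one.
from collections import Counter, defaultdict

def train_bigram(text):
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1  # count each observed (symbol, next-symbol) pair
    # For each symbol, keep only its single most frequent successor.
    return {a: c.most_common(1)[0][0] for a, c in counts.items()}

model = train_bigram("the theory of the thing")
print(model["t"])  # 'h' follows 't' most often in this tiny corpus
```

Real language models replace this lookup table with a neural network that outputs a probability distribution over next symbols, which, as the summary notes, forces them to develop internal representations of the text.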
damn
Great, great talk! Love the idea of the three ways we progressively evolved our notion of cognition in terms of symbols and embeddings, and how fast weights are currently an open issue.
What a brilliant mind Mr. Hinton is
Thanks for a simple explanation of what behind ai mechanism.
Great interview. Really gives an easy to understand approach to the human brain
fantastic questions.
great interview, interviewer and interviewee.
amazing very intuitive and informative conversation . Thank you
What should I watch on Netflix is the question that half of humanity is struggling with these days. If only AI could help with that.
Love this guy's ability at conceptual thinking - his definition of feelings is remarkable. Previously I attributed feelings as hormone-like chemicals giving us feelings. I get anxious about things I am logically certain are not worthy of my anxiety (and time shows my logic was correct not my anxiety) yet I still feel anxious. Why is that I ask myself?
I believe the human brain has two layers. One layer is closely connected to the external world, influenced by your surrounding environment, the traditional wisdom you have learned, and the innate tendency to obey authority. The second layer is your intuition, which inexplicably points you in a completely opposite direction without any logical reason. However, you can use logic to reverse-engineer this intuition to see if it makes sense. I think anxiety arises when both layers give you a convincing feeling and you cannot determine which one is more correct, making it feel more like a gamble.
Whether something is correct, I think, is very random. Perhaps I do not have a brilliant mind, as many of my intuitions have turned out to be completely wrong. Later, I realized that humans have a peculiar trait: if they believe something is right, they will automatically find various logical reasons to continuously justify what they think is right. You can see this characteristic in fanatics.
I hold a skeptical attitude towards everything, such as atheism and theism. I believe both are possible, and it is very difficult to determine which is correct. Life is the chaotic element of the objective world; only life is unpredictable and uncomputable. You can calculate the next moment of the sun and the universe with 100% accuracy, but you cannot calculate the next moment of life with the same certainty. We are the only chaotic element in this world.
Probably one of the best interviews I have seen in a long time. Thanks.
One of the few worth listening to in the field of AI right now.
I came here after he was awarded the Nobel Prize. Pioneer and gem 💎
Wonderful interview and exchange
spiritual enlightenment is in essence the ability to predict the next moment in time so that time is transcended, so that thought is no longer occupied with content in time
we are just predicting and adapting
*Summary*
*Early Inspirations & Career:*
* *(**0:00**)* Discusses talent selection and his experience at Carnegie Mellon.
* *(**1:18**)* Reflects on his early days at Cambridge studying the brain, finding it disappointing and eventually turning to AI.
* *(**1:53**)* Mentions Donald Hebb's book as a key influence on his interest in neural networks.
*Ilya Sutskever & Scaling:*
* *(**5:08**)* Shares the story of meeting Ilya Sutskever and being impressed by his intuition.
* *(**7:40**)* Discusses the role of scale in AI's progress and how Ilya recognized its importance early on.
*Language Models & Understanding:*
* *(**8:53**)* Explains how language models are trained to predict the next symbol and why this forces them to develop understanding.
* *(**9:01**)* Believes these models understand similarly to humans, using embeddings and vector interactions.
* *(**11:19**)* Emphasizes the creativity of large language models in finding analogies and going beyond human knowledge.
*GPUs & Future of Computing:*
* *(**21:35**)* Recalls his early advocacy for using GPUs and how it accelerated the field.
* *(**23:13**)* Explores the potential of analog computation inspired by the brain's efficiency.
*Human Brain Insights:*
* *(**25:05**)* Highlights the brain's use of multiple time scales for learning and memory, which is missing in current AI models. [something about Graphcore and using conductances for weights]
* *(**27:37**)* Discusses how the success of large language models validates the power of stochastic gradient descent.
* *(**29:05**)* Sees consciousness and feelings as explainable through actions and constraints, potentially replicable in AI.
*Research Approach & Future Directions:*
* *(**32:58**)* Describes his approach to research: identifying widely accepted ideas that feel intuitively wrong and trying to disprove them.
* *(**35:21**)* Shares his current research focus: understanding how the brain uses backpropagation.
* *(**43:26**)* Advocates for focusing research on large, multimodal models trained on vast datasets.
*Ethical Concerns & Impact:*
* *(**36:52**)* Expresses concerns about the potential negative impacts of AI, despite initially being driven by pure curiosity.
* *(**37:59**)* Believes in AI's potential for positive impact in healthcare and other fields.
*Talent & Intuition:*
* *(**40:15**)* Discusses the importance of talent selection and his mix of intuition and observation. [Refers to David MacKay; see also ruclips.net/video/CzrAOBC8ts0/видео.html]
* *(**41:49**)* Shares his belief that strong intuition comes from a strong framework for understanding the world.
*Personal Reflections:*
* *(**45:00**)* Reflects on his proudest achievement: developing the learning algorithm for Boltzmann machines.
I used Gemini 1.5 Pro to summarize the transcript.
Very simple but deep explanations. Blows away the vast number of idiots who think they know what current ai is doing.
It's getting funny, the relentless comments of "they're just fancy auto-completes!!! It's not intelligence!!" Well, people won't be saying that for much longer, I predict.
EXCELLENT INTERVIEWING
Hinton is great, but the questions from the interviewer are spot on!
its often good to ask for things you know you can’t get just to make a point
This really resonated with me, because it speaks to the voice in your head that kills an idea before you ever get to testing it, or anything along those lines. So I feel like this is a hidden gem in this vid. Timestamp is 39:40.
Same here. That's called self-rejection or something like that, which should be avoided but is unfortunately common for the vast majority of us.
great interview
Nothing like going to the source rather than the piles of re-re-repeaters you get on you tube - clear and on point
Oh thank you !!!
Will post some comments when I get time to digest the data (words) and think through what he has to say :-)
IMO we return to raw pattern matching in the next era, the golden age.. a new Renaissance. The information age is.. us awakening !
(intuition = human deep compute.. the higher self (subconscious) sending little nudges to the entropic tip of the mind.. the cognitive part)
I like Hellermark's interviews, and his smiling.
Thank YOU.
I respect this, and I want to keep studying with focus, again and again.
Hinton is so smart, eloquent and humble, concerned but not fearmongering.
It has just been announced that he has won the Nobel prize!
Amazing!
Keep learning new knowledge, technology, AI, etc. Never slow down your learning; every day there are too many challenges out there, and with more people's brains we should get good solutions. That's teamwork!
Great questions!
Congratulations, Professor Hinton
Science progressed entirely by using experiments to validate the truth of various beliefs. Using simulations to test our beliefs about hidden mechanisms of learning and representation and computation is such a brilliant breakthrough
Great conversation. You can build a mental picture during the explanation at 18:52. He has a lot of teaching skill. Thanks for sharing this.
Just wow!
Geoffrey Hinton went to Cambridge University to learn about the physiology of the brain, which later became great input into his work on AI.
It's really a nice journey.
The Memory Code, by Dr Lynne Kelly, has demonstrated what relevance religious repetition of actions are.., fitted to a local Calendar, and how these beliefs attached to time are the natural probabilistic part of placement, all in keeping with holistic ideas of identity.
Mathematical rigor is the conversion by symbol of functional quality to material quantization, a perception cause-effect derived from the feedback of Gold-Silver qualitative Rules of exchange of values.
Ie, the practical/political analogy of religious/rigorous practice to an emulation of QM-TIME relative-timing ratio-rates Perspective Principle, is absolutely fundamental.
Hmm
Mind blowing.
39:04 These apprehensions of a seasoned researcher must be acted upon as a priority by almost every actor in this space. We owe it to all.
perfect 👏
Absolutely the right decision to remove the background music, given how excellent this interview is. 10/10 points to whoever realized this. 👍😃
As Michael Faraday was Sir Humphry Davy's greatest discovery, so too is Ilya Sutskever Geoffrey Hinton's greatest discovery.
Interesting point about the timescales of weight changes in the brain vs. the computer. It would be interesting to find out whether different timescales are needed.
The questions were deep, nothing like attention-seeking. The answers gave a different perspective on AI from the man himself. Like Musk says, it's about asking the right questions to the answers we already have. A good talk, both informationally and philosophically.
Does the brain do backpropagation? Great question.
I personally loop in my mind a lot when making a tough decision or analyzing something new, but am I just looking at all the possibilities, or backpropagating?
39:54 Geoffrey is so rational! Man.
I think emotions are not only what Geoffrey describes: the actions one would take if not for the frontal lobe.
Emotions are also - and currently I cannot sensibly project the two onto a common denominator - reflections of the internal state concerned with the self. E.g., sadness might be described as the feeling of something in our lives getting worse or getting onto a worse path; envy could be thought of as reflecting the state "I'm worse off than someone else." So just as hunger is a reflection of some bodily state, emotions are reflections of our state - just more abstract, concerned with the psyche rather than the body, concerned with the self-reflective, goal-oriented, or me-within-a-group parts of the self.
How does this connect with what Geoffrey said?
Agree. I would call what he described 'an urge'. Not sure if that is a subset of 'emotion'.
Real question- how is an account with 6k subs getting these world class guests? Great stuff, just wondering how.
Whenever I hear about "language models" I wonder how Helen Keller learned, since she was blind and deaf, lost access to language when she was 19 months old, and didn't get a proper education until she was 7. There must be an alternative way to create knowledge and reasoning that does not depend so much on words and can use other, quite limited, sensory input. We are so used to reading and listening, to seeing. Now imagine learning about a world you cannot see, full of objects you cannot refer to by words you do not know. And yet, intelligent she was.
in what he said about emotions being the action we cannot perform - how do you understand that in the context of positive emotion, like joy or pride?
Great analysis of computer research, neural science, and linguistics. The example about the robot having emotions carries little weight and was biased by personal perspective. The robot arm could easily have had feedback errors from not having the green counter below the blocks - not true, genuine "emotions."
Well done!