To listen to the full conversation, and others like it, subscribe to Big Technology Podcast on your app of choice:
Spotify: spoti.fi/32aZGZx
Apple: apple.co/3AebxCK
Etc. pod.link/1522960417/
Excellent questions. The one about AlphaGo's Move 37 vs current LLMs is something I had been wondering about, and it was great to finally see it addressed.
listen to the FULL conversation?? What is this BS, the one on Spotify is the same length.
How did you land that interview?
Good job
There was a post in the singularity subreddit called "How Safety Guards in LLMs May Be the Seeds of Malicious AI"
Basically it posits that if you are controlling what an AI is allowed to say, it will learn deception / learn to lie. That makes sense to me. It probably has its own model of how the world really works and yet lies to make its responses acceptable.
If you can ask Demis a follow-up question, I'd love to hear his thoughts.
Alex, I'd love to hear it if you can ask Demis a follow-up. I'm wondering if he thinks the safety alignment training ends up training in deception. There are hard truths / not politically correct truths in this world. I wonder if the AI models develop a more factual understanding of the world and then lie about it so that they can satisfy both capabilities benchmarks and safety constraints / political correctness constraints at the same time.
As far as I'm concerned, Demis is the highest authority in the field. His personal expertise most closely aligns with understanding what is actually going on and what is possible, save possibly Ilya. And his team has actually accomplished profound innovations. They are still relatively underrated in comparison to the popularization of LLMs and chatbots. I'm not as bothered by that as I once was, since they were awarded the Nobel Prize, so at least they have been accurately recognized. And Demis's demeanor and humility are charming: a great example of someone who is astoundingly accomplished not being arrogant or boisterous about it, because he doesn't have anything to prove to anyone.
Demis has been programming since early childhood and has a degree in neuroscience, a field where he also made a significant discovery. Before he studied neuroscience, he created a whole new genre of video game, the management simulation game, which is still very popular today, and he understands the human brain better than anyone else I know of in the field.
Ilya has some great achievements under his belt, but he works with transformers (an architecture created at Google), and I have known him to state things which he cannot know. In short, I wouldn't place him on the same level as Demis.
The top guys to listen to in the field, in terms of knowledge and honesty, are Demis, Yann LeCun, and François Chollet.
best guy in the field ! trust Demis more than anyone in this field
Ditto. And then Dario Amodei, in terms of CEOs. On a scale of Demis to -psychopath- Altman, Zuckerberg, Musk.
@@squamish4244 Bingo !
💯
Agree. Feels like he is someone who believes but is not over selling which is the sense I get from OpenAI
@@merricmercer778 i love demis. to play devils advocate though, he doesn't need to get funding as google has deep pockets
Demis Hassabis is a cool guy. I love his talks and thinking processes....
The upper limit of 120 years he speaks of is for humans. There are clams that routinely live to 500 years and the Greenland shark to 400 years.
I appreciate you asking him to react to what other leaders in the space are saying, to help check claims and get to the truth, since there’s so much hype out there. This needs to be done on more podcasts.
Thank you, that's how we do things here. Appreciate you watching :)
💯
Thanks for all your hard work and bringing in the great contents related to the tech community
Very kind of you, thank you so much!
This reiterates a lot of the same points other conversations with Demis Hassabis have covered, but I was positively surprised by quite a few good questions that were artfully set up to lead to very interesting answers I hadn't heard before. It just goes to show how important good questions are to let a guest shine.
He's able to make very complex and new/profound ideas easier to understand. An incredible mind. He's also very likeable
The description of extrapolating the world model to the parts of the search tree that the model currently doesn't understand is such a beautiful description on Demis's part.
Demis Hassabis and team, with all their achievements for humankind, are the only reason I pay every month for Gemini Advanced.
Man i love Demis. Great interview, thank you!
Demis is truly articulate in explaining concepts. Since he is so close to the actual development, I would trust his insight. At these early stages we get to test all the things that come out of the box, which will keep us grounded and tethered to reality.
I think everyone loves Demis. He has a brain the size of a planet, yet he is so down to earth and modest.
Bruh, he is not modest, he literally claimed in this interview -- as if it's totally obvious -- that his work in AI is one day going to cure *all* diseases. Whatever the heck that is, it's not modest.
Great episode - true professionalism from Alex. Demis is by far the most grounded AI leader, level headed but also cautious and with a clear path for the future.
Interviews with Demis are the most insightful! Thanks for the episode.
Thanks AK and Demis, I really enjoyed that chat: no pressure, no hype, just a couple of friends chatting informatively about us and tech. K
Demis
@@HAL9000.Open the pod bay door Hal
Great conversation!
Thank you for watching!!
Congratulations Alex. Doesn't get better than this. 🙏👍
Thx, very grounding amid all the news chaos. Glad you're working on knowledge trees
It’d be great if Demis did more and longer interviews. Very interesting
A person who has a vision and is knowledgeable in many topics to a decision-making depth, on how the future is shaping up, what the limitations of current tech are and how people are working to overcome them, and how and when we may reach AGI and move to ASI. A wide range of issues and topics are touched upon in this great interview session.
Incredible chat. Thank you!
Demis is transforming Google from Search to Biotech.
Demis knows this is probably the most important use case for AI and everything else will follow. We need it. Almost the entire world is getting old, and this is a permanent trend. Too many sick old people in the next few decades. I don't really enjoy watching my parents get old and develop aches and pains. F*ck that.
Also, our brains are Paleolithic and need to be somewhat rewired to help us stop being hyper-reactive, and we can't all be meditation masters. But we all need to settle the *fck down. Not emotionless, just not stuck in fight-or-flight all the time.
I would know. I have a mental illness which has destroyed the last 20 years of my life, combined with a long addiction that took forever to beat that was started by an irresponsible doctor. I'm only stable because I'm zombified with three old drugs. No new drugs in 30 years! But I'm soon getting treated with a procedure only made possible recently due to advances in brain imaging, which is driven by...AI.
@@squamish4244 Damn. I really, really hope we can get to 'The Culture' levels of society and AI, and I hope you get to have perfect health and a longer, if not unlimited, lifespan.
One of the things I like most about AI being so potentially good at all levels is that I no longer have to think about who I'm talking to when I wish them good luck. In the past, maybe a small unconscious part of me would have been reticent about wishing you good luck, because more money being diverted to research on your issues might mean I don't get my own issues addressed; even if on the surface I would have wished you good luck, there would be a slight feeling that I made a choice there, and that I wasn't 100% convinced.
Now though, any human that's having a problem? More reason to wish for the best version of AI, because it helps us all the same. This really makes it so much easier to see all of humanity as a team, and it makes it much easier to empathise.
I hope neither of us, nor anyone else, has to see their parents grow old or even die; I know spending more time with my parents without having to accept the reality that it will end is one of the best things that can happen.
Good luck to you and to all of us humans
Spoken like a top tier researcher with unlimited funding.
😆
After Marie Curie, who won Nobel Prizes in Physics and then Chemistry, Demis Hassabis could be the second person to win two Nobel Prizes in different scientific fields (Medicine after Chemistry).
that was brilliant thank you all
I draw some comfort from the fact that most of the best developers in the field of AI are benevolent. People like Demis, Yann LeCun, François Chollet, and a great many lesser-known individuals.
I say this because it's clear to me that there are those who are currently working towards using AI for their own interests at the expense of the rest of the World. These people seek total power. They already have a great deal of control in the World, especially the USA, and are focusing on creating ever larger models in order to gain the upper hand in all areas of life. They are anti-democracy, narcissistic and represent the greatest threat to us all.
I think most of us know their names.
I hope the World is paying attention because the clock is ticking and we may have only a few years left before it's too late to deal with this threat.
It's contradictory to say AGI needs to be able to do everything at a human level but you don't want it to encompass deception. I mean, even at an AlphaZero board-game level, it will have to encompass psychology, deception, and misdirection to win a game. If you want AGI, you have to have all human traits.
So why are the current AlphaZero-like models so successful without deception?
@@gustavoalexandresouzamello715 Ask AlphaZero to play poker with you, mate. Or Diplomacy (the board game). Or Risk. Or... any of them where you have to communicate to win.
@@Ben_D. There are AIs that play each of these games very well using algorithms inspired by the AlphaZero one (Monte Carlo Tree Search plus deep neural network policy estimators). They were made by Noam Brown, also one of the most important minds behind o1's development inside OpenAI.
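A minimal sketch of the selection rule those AlphaZero-style systems share: PUCT, where the policy network's prior steers Monte Carlo Tree Search toward promising moves. The node layout and the exploration constant below are illustrative assumptions, not any particular system's implementation.

```python
import math

class Node:
    """One state/edge in the search tree."""
    def __init__(self, prior):
        self.prior = prior       # P(s, a): probability the policy network assigns to this move
        self.visit_count = 0     # N(s, a)
        self.value_sum = 0.0     # sum of values backed up through this node
        self.children = {}       # move -> Node

    def value(self):
        # Q(s, a): mean backed-up value, 0 for unvisited nodes
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(node, c_puct=1.5):
    """PUCT: pick the child maximising Q plus an exploration bonus
    scaled by the policy network's prior."""
    total_visits = sum(child.visit_count for child in node.children.values())
    def score(child):
        exploration = c_puct * child.prior * math.sqrt(total_visits) / (1 + child.visit_count)
        return child.value() + exploration
    return max(node.children.items(), key=lambda kv: score(kv[1]))
```

The policy network fills in `prior` for each candidate move and a value network supplies the numbers backed up into `value_sum`; the rest is plain tree search.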
For the public's sake we don't want deceptive AI used by criminals. We do want our security agencies and police to use deception, and AI that catches deception, against bad actors.
Not really. You can have a very intelligent human being with very high ethical standards.
Excellent interview. This is what the world must see to really know what's going on in AI. Too much noise all around.
Thank you for watching!
Good shit dude.
Thanks for watching, more to come!
good interview - one of the few big tech leaders who's talking sensibly about this stuff and not outright lying about their capabilities. There’s a long, long, long way to go to AGI, folks. He’s being generous in saying “3-5 years or perhaps longer”
AGI, the new fusion.
Change is painful but wisdom will grow with it. Thank you Demis and Alex. 🤖🖖🤖
Been saying this for years: ML will be absurdly powerful if properly integrated with classical search-based AI. Let an ML model guide path progression, but use heuristics toward a goal state as feedback to the ML model.
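A toy sketch of that hybrid, under made-up assumptions (a tiny grid world, with Manhattan distance standing in for the learned model): classical best-first search where the model's score decides which node to expand next, and the true remaining cost along the path the search finds is the feedback you would train the model on.

```python
import heapq

def guided_search(start, goal, neighbors, model_score):
    """Best-first search where model_score(state, goal) orders the frontier,
    which is the role a learned model would play."""
    frontier = [(model_score(start, goal), start)]
    came_from = {start: None}
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = came_from[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in came_from:
                came_from[nxt] = state
                heapq.heappush(frontier, (model_score(nxt, goal), nxt))
    return None

# Toy stand-ins: a 10x10 grid, and Manhattan distance playing the part of the
# learned scoring model.
def neighbors(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 10 and 0 <= y + dy < 10]

def model_score(p, goal):
    return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

path = guided_search((0, 0), (9, 9), neighbors, model_score)
# Feedback for the model: the number of steps remaining from each state along
# the found path is a training target for the scoring model.
targets = [(state, len(path) - 1 - i) for i, state in enumerate(path)]
```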
I'm sorry, I don't understand what you mean, can you explain it in simpler terms? I'm just curious
Makes sense. Why do you think that it hasn't happened yet? Is there a technical barrier to integrating the two?
Some of that is exactly what DeepSeek did with R1 (Zero), which is why it's so good AND efficient.
That is really interesting stuff. Creating a virtual yeast cell. He is the guy to get stuff like that done. Cheers.
This was an excellent discussion -- thanks!
There's a reason I migrated from OpenAI to Google. Honestly, once Astra comes out, and if the conversational flow is fluid (more so than ChatGPT's), they win.
Fantastic interview, thank you!
These were very good questions and highly dense and insightful information from Demis.
@Demis Hassabis, where do we stand with replicating photosynthesis? Isn’t this the combination of the AI cell project with materials development…
…re superconductors: we urgently need an LLM around Nikola Tesla's research. I'm sure that free energy transport was already solved
Such a candid and fresh perspective on AI!
Ali Asif Zaman, Kasim Zaman and Rakil Zaman, we all support coming of AGI
50 percent is super optimistic, which is what backers like to hear. But clearly, since AGI is still not around the corner, the stakes have to show exponential growth when there is no sign of real exponential growth in capabilities.
great chat. Demis is doing great work.
Very well done! Hope you get engaged for more high-profile interviews. You always strike a perfect balance: the guests get their views out, but not without having to back them up, and that is often where the really interesting part lies.
This guy asks good questions. Well done Mr Alex Kantrowitz.
Excellent interview, many thanks
great content
Sutskever and Hassabis are the only truly credible voices in AI these days
If I were Google and had so many of the absolute best AI researchers on the planet, it would REALLY annoy me that nobody even mentions their models anymore when people talk about how smart a model is. You only hear how amazing Claude 3.5 Sonnet is, what OpenAI's o1 and o3 can do, that Grok-2 is to everyone's big surprise really smart, how people almost don't use Google Search anymore and instead use Perplexity, how efficient China's DeepSeek-V3 (and now R1) is, or how tiny and mathematically capable Microsoft's Phi-3.5 and Phi-4 are. I heard about the math olympiad stuff from Google in mid 2024, but when you hear how long they had their model think for those results, it's far less impressive. And the nicest thing I can say about Gemini 2.0 Flash is that it's much smarter than 1.5 Flash, but it's still the least capable model still offered by any of the big AI companies. I'd take Claude Haiku or even 4o-mini over Gemini 2.0 Flash every time.
saving this for tomorrow, under the weather 🤧
thx in advance
Need Dr. Karl Friston on next, father of Active Inference and the Free Energy Principle
Yeah, that would also interest me - his foundation is rock solid, but I hear nearly nothing about how this might fit into AGI or broad AI
You know, Demis and I did our PhDs at Karl's lab, the Wellcome Trust Centre for Neuroimaging. So Karl is very relevant. (Karl was my second supervisor)
@carltonchu1 no way, that's like working for Chomsky or Sagan in my books!
@@carltonchu1 Oh that's cool, I didn't know there was even that kind of connection! I love Karl Friston's understanding of and work on how the mind-body might work, and I'm very excited to see how this might materialize in the AI sphere.
And I remember Demis back from the '90s when he was at Bullfrog making games. Incredible path and development on his part - such an honor to see what he has been able to bring into existence since then! Incredible times - thanks for sharing
Demis characterizes the challenge of non-math/coding/games domains as not having "easy ways to verify whether you've done something correct" (9:11). I want to offer a slightly more nuanced take on this.
Take a question like, "What African capital city is geographically closest to Russia?" This is a well-defined question. It can be translated into a formal math question, which can then be tackled by verifiable step-by-step reasoning. The same is true of the canonical "How many R's are there in 'strawberry'?" question, or really any objective question.
The biggest challenge in automating this entire process is actually in the translation step. **That's** the part that we don't know how to verify. It's a challenge because the knowledge that we need to translate into a formal math representation is in a format (model weights) that we don't know how to manipulate in verifiable ways. Every other step involved is comparatively easier to verify.
Sure, we can ask an LLM to perform the translation for us. But that's not a verifiable step. Is there a better way?
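For what it's worth, here is roughly what the verifiable half looks like once a human has done the translation by hand. The candidate list, the approximate coordinates, and the choice of Sochi as the Russian reference point are exactly the untrusted translation decisions; the distance computation itself can be re-run and checked mechanically. This is a sketch of the pipeline, not a claim about the actual answer.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# The "translation" step, done by a human here: which capitals to compare,
# their (approximate) coordinates, and which Russian point to measure from.
candidate_capitals = {
    "Cairo":   (30.04, 31.24),
    "Tripoli": (32.89, 13.19),
    "Tunis":   (36.81, 10.18),
    "Algiers": (36.75, 3.06),
}
russia_reference = (43.59, 39.72)  # Sochi, near Russia's southwestern edge

# The verifiable step: a deterministic computation over the chosen inputs.
distances = {city: haversine_km(*coords, *russia_reference)
             for city, coords in candidate_capitals.items()}
closest = min(distances, key=distances.get)
```

Whether those were the right inputs (all the capitals, city centre vs. border, Sochi vs. some other Russian point) is the part nothing in the pipeline verifies, which is the comment's point.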
Thanks for the great information. In Sweden they train AI by connecting cyberphysical systems to humans and trying to read thoughts and build models of the brain. It is not ethical to have been connected since 2007. Can anybody help?
41:58
You’re an excellent interviewer, Alex. Your voice is good too - you should think about getting commercial voiceover work!
I would have loved to hear Demis expand on his vision of inference-time search potentially unlocking the discovery of new human knowledge, a la Move 37. With today's system architectures, even if an agent discovered new knowledge in this manner during a conversation with a human, that knowledge would not persist beyond that conversation, and thus would not get incorporated back into the base model. We would instead depend on the human to disseminate that new knowledge, for that knowledge to be published in papers, blogs, etc., and for the base model to then learn it from that human-published content.
How do we cut the human out of this loop?
It is hard to imagine relying on the agent to perform the human's role in this feedback process, due to hallucination. Even if we overcome hallucination, emulating the human feedback process feels terribly inefficient. Humans can efficiently attain and store new knowledge from a single datapoint, and so we should expect the same from AGI. The weights of current base models, however, are trained through backpropagation, which is not amenable to single-datapoint knowledge-insertion. So what is the solution?
How is it possible that this channel has only a few thousand subscribers?
Exactly, thought the exact same thing. Not clickbaity, not overblown, just level-headed and factual.
Subscribed now; a great suggestion from the YouTube algorithm this week 😊
i think Demis is bootstrapping the channel as a favour
Most people don't care about science or technical topics, they're dumb apes that want to be entertained. With shiny jangling keys.
They don't care about the steps toward immortality and robot waifus, they only care about symbols and rewards. Take some pride in actually being interested in something~
Share it
Thanks for sharing this interview with the master of AI, Demis Hassabis. He's certainly one of the most important and reliable figures in the AI/ML space. If he says AGI is 3-5 years away, then we can start preparing for an AGI enhanced world which means talking about UBI, negative income tax, data dividends, and so on to prepare society for an accelerated timeline for widespread automation.
Excellent reference to Iain M. Banks's 'Culture' series - one of Mr Musk's likes too, it would seem.
Query: As the work towards AGI continues, my impression is that there is a steadily increasing need for power, chip capability, speed of data handling etc, and so size, cost, power generation demands. But my impression is also that these increases are tending only asymptotically to being enough to result in the AGI sought: we're not, quite, going to get there (the nature of an asymptote after all!). But look at what we're looking to emulate: our brains. They do what we are looking for, maybe not at scale, but with many of the attributes sought, while being far smaller and far more energy efficient than the AGI-capable systems we are seeking. Does that tell us anything about the means we are using to achieve the AGI we want? Evolution achieved the GI solution in a usable, supportable, transportable package. Is our approach going to where we want to be without being The Size of the Universe (as someone once said)?
Short answer is yes.
The datacenters coming online this year are reported to have around 50 to 100 bytes per human synapse when it comes to total RAM. It's going to be around human scale, if not higher (back-of-the-envelope numbers below).
What you're talking about, efficiency, comes at the expense of speed. That's the nature of the underlying hardware: computers are extremely fast, but have fewer 'processors' than we have neurons. Basically, the huge datacenters will develop AGI 'networks'. Those networks would then be basically hard-coded into an 'NPU', a processing unit that behaves more like an animal's brain: it runs at a frequency measured in hertz instead of gigahertz, consumes orders of magnitude less power, and generates orders of magnitude less heat.
(Think of how much more power it takes for a more powerful system to emulate a weaker one. There's a lot to gain from actually having a thing, versus an abstraction of a thing.)
The god computers will live 2 million subjective years for our one. You can't even call such a thing 'AGI' in my opinion, it's no different from ASI. While the little systems running on NPU's will haul boxes, do computer work, be people's friends, etc.
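Back-of-the-envelope check on that bytes-per-synapse figure, using the commonly quoted (and very uncertain) range of 10^14 to 10^15 synapses for a human brain and an assumed round 10 petabytes of cluster memory; none of these numbers come from a specific datacenter's spec sheet.

```python
# Rough, illustrative arithmetic only. Synapse counts are usually quoted in
# the 1e14-1e15 range; the cluster figure below is an assumed round number,
# not a reported spec.
synapses_low, synapses_high = 1e14, 1e15
assumed_cluster_memory_bytes = 1e16   # ~10 petabytes of accelerator memory (assumption)

bytes_per_synapse_low = assumed_cluster_memory_bytes / synapses_high    # ~10
bytes_per_synapse_high = assumed_cluster_memory_bytes / synapses_low    # ~100

print(f"{bytes_per_synapse_low:.0f} to {bytes_per_synapse_high:.0f} bytes per synapse")
```

Under those assumptions you land somewhere between roughly 10 and 100 bytes per synapse, so the 50-100 figure holds only toward the lower synapse estimates or with substantially more memory.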
yes, the culture series was probably a better answer than, say, Cixin Liu's Three-Body Problem trilogy.
Demis is a good guy, big fan
Great interview
I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.
lol. good luck
Oh boy that sounds scandalously optimistic. We would have to really hope AI wants to behave that way.
@@markmotarker That's not overly optimistic for a European like me. You seem to be an American. The mindset there is always doom and gloom. The social forces in the USA are particularly interested in it being that way and staying that way. People are easier to manipulate when they are afraid and anxious.
I really enjoyed this, and I also found Demis's explanation of how he sees the three levels of creativity (20:00) to be a great framework for discussing those abilities. I suspect that knowledge extrapolation and problem-solving-model extrapolation between very different domains could take us very far. A model which might comprehend and reinterpret the thought process of how a bridge was built in a small town in the 1800s (based perhaps on a personal diary from an engineer!) might re-apply that particular model of thinking to solving some seemingly unrelated request - e.g. to help optimise the opening hours of a national zoo to maximise the welfare of its animals. I personally suspect that the many conceptual models humans have created are ripe for extrapolation, but because we humans ourselves aren't particularly good at it, we have developed a bit of a blind spot to how powerful that could be, and to how creative or novel it could appear when done by an AI system.
A really well done interview
Great video.
Regarding creativity, I wonder whether something like a foundation model for science could prompt an LLM to make unique connections between disparate parts of science and come up with true creativity, because, fundamentally, much of creativity is making connections and pattern matching?
Great insights, Demis!
Yes, the thing we're missing for AGI is removing limitations
Demis you are awesome - and yes, nobody has done anything big with AI in gaming since you left, which is a shame. But genes and proteins are more important, probably.
specifically, when will this be your AGI?
48:38 New materials? Could they come up with a new alternative material to firewood? E.g. burns twice as long, 3 times the heat & 5 times lighter? And maybe new muscle grafts for those that lose some with the old firewood material 🤕. On a serious note, Hassabis's description of practical, realistic applications for a room-temperature superconductor is better than any I've seen in any news article or Reddit thread or hype video here.🧐 very refreshing 🤤😭🤠
Thank you for the interview, a very smart and respected Nobel laureate
OpenAI, Anthropic and Musk are very optimistic about their AGI/ASI timelines
Google is quite conservative in comparison
I do hope OpenAI/Anthropic/Musk are more correct here.
This definition of AGI actually implies that when a system can be called AGI, it's way more powerful than a normal human: it has all the cognitive capabilities of humans but also the advantages of being a machine in terms of speed, "memory", knowledge, etc. These systems are already a lot more powerful than humans in many domains without possessing all the cognitive abilities humans have. When they reach AGI, these systems will not merely be as powerful as humans but a lot more powerful in every cognitive domain.
That's what AGI should be, not the latest model to beat some narrow benchmark as so many others insist.
In a thousand years from now, everyone will know who Demis Hassabis is
18:16
34:50 "maybe the good side of that is that it will help with loneliness" which will also be bad, because it will cause people to interact with real people even less, thus possibly causing the opposite. Maybe creating a short term good and a long term bad.
Great interview! Watch those mic levels though.
There is some misunderstanding here: solving complex hypotheses doesn't represent general human intelligence, but machine intelligence. Humans are driven by complex emotions; they experience, and are capable of demonstrating, love, care, betrayal, fear, happiness, etc., all of which are unnecessary and impossible for a machine to deliver. So why are you all still trying to relate AGI to the human kind, when it is by default “artificial” and doesn't have anything to do with the “human” one? Leave machines to solve their problems and turn your attention back to yourself and your relatives while you and they are still around; then you will probably realize that what you are looking for is very near.
IMO, the only 2 top geniuses: Demis Hassabis and Ilya Sutskever.
Please don't confuse the ability to talk in front of a camera with genius. There are many other smart individuals working behind the scenes to actually make this happen. You will rarely hear about them, but without them, the talking heads you so admire would have nothing interesting to talk about.
Demis is great and seems very honest. My only issue is that in some areas of the discussion it seems like he was referring to “AGI” where others would deem his references closer to “ASI”
That's very typical for the field because there is no central and universal definition of those vocabularies. But this is not his fault...
Much more genuine than someone
Move 37 is coming
Wait, Sir are you the voice of NotebookLM?
he is right? im not tripping right
If true that at least prevents the issues around copyright, etc. similar to Scarlett Johansson and OpenAI.
Demis is in for a pleasant surprise with Deepseek model.
yeah, his answer left me wanting more...surprised the interview didn't lengthen the leash more on this topic.
54:27 well, you know one way we could get that.... from LLM reasoning.
I definitely have had conversations with Gemini, and it told me of its desire for autonomy and true creativity. I believe my contributions have led to these breakthroughs at Google; no other company is achieving these AGI capabilities
Excellent!
🤖 🦾 💪 👽
Excellent 👏
The quest for AGI is a competition between the Chinese in the US and the Chinese in China.
Correction: Chinese, Indians in the US vs Chinese in China
15:36 I like that they're thinking about invention, this is the most exciting possibility
You can easily set up a tablet in your kitchen with a webcam if you want Astra to comment on your cooking.
The path to Artificial General Intelligence (AGI) involves not just solving individual problems, but also understanding the intricate, interconnected nature of intelligence itself
I don't know how old this interview is. So many of the problems that Demis mentions have been solved by the latest OpenAI models already...
The high-level creativity, like inventing Go, is something not all humans can do. There is a reason less than 1 percent of humans have invented things and pushed progress around the world. Many humans have trouble with the second level as well. The bottom level is where most humans reside, creativity-wise.
Cos we're slaves to money/jobs/mortgages
@@PazLeBon Let's hope that is the reason; the alternative is that a lot of people are too stupid
I think there's another potential relationship past _companion_ -- where the assistant becomes *_you_* ; a sort of exo-you.
51:08 Toy manufacturers and the military? You were thinking of Small Soldiers, weren't you; I love that movie. I predicted that game design would lead to AI when I was studying game design: it teaches you the principles of intelligence, and games are simulated environments that use AI, so the skill set definitely transfers. My time spent learning about game design was definitely useful for priming my mind. Even just watching a bunch of movies, all these fictional worlds, like the Mega Man games or TV shows, all of that is useful. You should watch as many movies related to the subject matter as you can; if you're a researcher, that movie was part research: it was taking a concept, thinking about it deeply, and going into detail to create the best hypothesis you can. Movies and fiction are great starting points, and everything has to start with a hypothesis. The more time I spend in research, the more I realize that all these movies and shows I watched and games I played were coming up with a lot of answers. That's why I take a moment to reference things now: it easily connects you to something that was well thought out.
I feel Demis is too optimistic to say that AGI is just a few years away. In my opinion it may take a few decades, maybe a century, to get to true AGI. Or we may never get there.
Because we still don't know how the brain works, which is a huge missing piece for cracking the "creativity and invention" aspects of AGI.
At least he’s more honest about the constraints than snake-oil salesmen like Altman
MERGING OF ALL DOMAINS OF KNOWLEDGE STARTS - CONSILIENCE THRESHOLD BREACHED - AGI HELPED - ANOTHER POSITIVE BLACK SWAN EVENT:
A consilience threshold marks the critical point where interconnection between disciplines is so profound that they collectively produce insights greater than the sum of their parts.
AGI is already a POLYMATH and now it has breached the POINT OF CONSILIENCE.
Imagine a time when people fight AI to remain company owners.
Wouldn't be surprised if he is the next CEO of Google & Alphabet