Such a great example of what it looks like to be totally engrossed in your work! When she came along and tried something new they weren't so sure the robot could do it. That's so cool, and I think those guys deserve a huge pat on the back.
The AI boom is a double-edged sword, offering immense potential but also posing significant challenges. Balancing these aspects will be crucial for ensuring that AI benefits humanity as a whole.
@@Yash.027 vice versa. And btw, marketing departments do indeed rely on timing as a primary component of their strategy. I worked for many very large IT companies; timing is a huge concern for marketing.
13:28 Melanie Mitchell - “saying A.I. is an existential threat is going way too far.” 14:53 Mitchell - “if we trust them too much, we could get into trouble…”
Her comments just seem laughably naive, to the point where I have to wonder how intelligent she really is. Yes, ChatGPT is a long way off being actually intelligent, but if that's the basis for her claim, well, that's just absurd. ChatGPT is not all we have even now, and in a couple of years ChatGPT is going to look like a child's toy.
Consultant here doing a lot of work with AI in business processes: it's a VERY mixed bag from what I am seeing. Many individuals have broad responsibilities in their roles, and the impact of AI ranges from making certain tasks redundant, to modifying how existing tasks are done, to requiring totally new tasks that greatly increase individual productivity, and various mixtures of these. It's just not possible to predict the timing of the impacts or in what sectors, other than that we all need to be ready to adapt quickly.
I'm teaching myself web development and have started using ChatGPT to help me with coding. I find that it points me in the right direction, but sometimes some of the details it gives might be dated. I'm also learning how to use the API. Would you say this is going in the right direction, or could you suggest something else I should be studying?
The real question isn't "will AI be intelligent" it's "will AI be subjectively experiencing reality" Because if it's intelligent but there are no lights on -- it's a tool for us to use. If it's intelligent AND the lights are on inside - it's a new species significantly smarter than we are. I say we stick to building smart tools and not new species.
Really? As a species, how would you rate our track record on responsibility for looking after the planet? As custodians of consciousness? Do you not think that since we climbed out of the trees we've behaved rather badly? Aren't humans a bit two-faced to criticise AI? The Earth lasted billions of years without us; if we disappeared it would thrive. Humans are arrogant; we think an Earth without us would be appalling. If Earth could speak, I wonder if she would agree with you?
It doesn't have to be "conscious" as we are to destroy us. Nuclear weapons are not "conscious" but are powerful enough to destroy us. Ai is in this category already
You're confusing consciousness for agency. They can be philosophical zombies and still be fully intelligent agents with goals of their own. Consciousness is not a necessary component for interfacing with reality, nor does it preclude one from being a tool. Humans have been and are still currently used as tools.
I've only been following Hannah Fry for a short time now but I have been falling in love with her episodes of the program called "THE FUTURE". It may be because she's a cute redhead. It might be because she is intelligent, playful, curious, and an actively engaged host that keeps bringing me back to her amazing shows!!! Either way, I'm all in...
The question is why are some insistent on striving for AI to be anywhere near human intelligence? It's madness. Doing so doesn't solve the problems we face currently, but potentially create unintended consequences.
Absolutely! You should really check out the global movement PauseAI. They have a lot to say about this, and they're equipping people to do something about it.
Of course it will solve many problems, mathematical and generally scientific mostly. It already helps engineers with programming and designing systems. It will help us develop medicines and techniques which will help ensure our survival and growth. The ONLY way we should "prepare for unforeseen consequences", to put it in G-Man's words, is for AI and the like being weaponized. And because everything eventually gets weaponized, your worry is somewhat warranted, but very much overdramatized at this moment in time, in my opinion. AI backed by neural networks is in its first hours of infancy, believe it or not, and weaponizing it now would be like somebody looking down the gun barrel and pulling the trigger. In 50 years, though, your concerns are much more likely to be realistic, but at the same time we'll see if we survive that long regardless of AI's interference.
You're welcome not to use an expert coach and partner that is knowledgeable in every area who will help you with every task in your life. But don't complain when you lose your job to someone who is using AI to be more productive
Knowing if it will or won't be able to solve any of the world's current problems isn't possible without knowing what is created. But I think in a world where self replicating AGI or ASI exists, in theory you then have the ability to have an "infinite" amount of scientists working on one problem for an infinite amount of time, it's hard to imagine it wouldn't be able to solve a problem we currently can't in that scenario. Energy requirements may be a big limiting factor, and I don't know how possible it is, but I believe it's not impossible.
Rather like space exploration, are humans determined to mess up space as well as Earth? Why not cut the exploration spend and convert it into a Fix the Climate Crisis spend instead?
Where did the last 24 minutes go ? That was so watchable ... I am so happy this series is no longer behind a pay wall. I hope the rest of it follows shortly. Very well produced, and always very interesting to see Prof Hannah's take on things. She won't be having the wool pulled over her eyes - and let's face it - there's an awful lot of wool about when it comes to 'AI'. Great job. 🚀🚀
@@shieldmcshieldy5750 looks like she also dropped season 2 of uncharted podcast, but I'm honestly not really liking it much, the stories are interesting but the episodes feel incomplete and leave me wanting more
She had two kids, separated from her husband and beat cervical cancer with a radical hysterectomy, so...she was otherwise occupied until the last few years. She's back now, though.
This is excellent. Covers a lot of ground, necessarily with a light touch of course, but it gets across key perception of what AI is, what it might mean, and how we should be thinking about it.
It's mindblowing that the first ChatGPT came about 2 years ago and now you have LLMs running everywhere. Last invention like that, the Internet, started in the 70's. There is no stopping AGI at this stage. Question is, what comes next?
Interesting that of the human mind we talk of 'imagining', whereas of AI we talk of 'hallucinating'. Could it be the same thing - in principle at least?
Hannah Fry is absolutely terrific at her job, few better presenters of information than her. Also, if she ever looked at me like how she looked at 6:24 I think I’d start barking like a dog
The captions that come up as the interviewees do their thing should give more than just their status in their university; they should also state their departments and, when they're senior enough, professors in the British sense, then their full titles. It might look a little untidy but, as it is, to this layperson at least, it's difficult to tell where the interviewee is coming from, what disciplinary assumptions they're bringing to, say, a comment about the ethical implications of AI. Just a thought.
That won't give you the information you're looking for. They all have PhDs in Computer Science, and departments are equally broad. In this case googling what they research would be better:
Sergey Levine - AI career since 2014: deep and reinforcement learning in robotics
Stuart Russell - AI career since 1986: reasoning, reinforcement learning, computer vision, computational biology, etc...
Melanie Mitchell - AI career since 1990: reasoning, complex systems, genetic algorithms, cellular automata
As cool as the psychological and neuroscience angle could theoretically be, maybe with such widespread existential dangers attached to it, we should probably focus mainly on putting extreme barriers around it? Maybe human extinction is something to be avoided?
Yes please. The median estimate for AI Doom from actual experts in the field is high enough to make it my most likely cause of death. This is completely unacceptable.
Given that we don't know if all the focussed work going into improving AI will end up getting us all killed, maybe the philosophy should be "move very slow and don't break things"
Thank you, Hannah! You should consider making a video about the California Institute for Machine Consciousness /Joscha Bach. They are not affiliated with any major tech companies, and they are trying to solve the AGI problem in a way that benefits all of humanity, not just certain companies or countries. From what I understand, they are approaching the issue by first trying to understand both biology and human consciousness.
Maybe other people will lose their ambition and become lazy if AI is doing everything, but not somebody like me. I learn for the sake of learning. I enjoy finding out how something in the universe works. You can't take that away from me even if you're the most powerful ASI in the universe. I will still want to discover the answers to my questions, and I will keep asking more questions until AGI or even ASI doesn't have a definitive answer. Keep searching for the unknowns.
@@LucidiaRising Nah. The vast majority of neurotypicals care so much more about the pursuit of social status. At least that's unquestionably the case where I live, which is Sweden. And how else do you explain the "wokeness" mind virus that infects the whole West?
@@LucidiaRising Respectfully disagree. Very few people have the curiosity and ambition to learn or try new things. Humans live by the well-known adage "The Principle of Least Effort" (Zipf). Try teaching an undergrad class and you will see there is a minority that really wants to learn and the majority that just wants a passing grade and nothing more.
That is true of you and also me. But assuming we don't go extinct (iffy) future generations are unlikely to have that. Those born after AI may never feel the need to be curious, learn, be independent, etc.
This is an incredibly comprehensive documentary about artificial intelligence, the best one I've seen, and I've seen a lot. It only goes for 24 minutes and should help set straight some of the general misconceptions about so-called AI.
Terrifying thing is, as we speak, those companies most likely have some stuff already developed but not released to the public yet that they also look at and wonder what they're bringing to humanity
I'd go one step further, and say that AI systems will increasingly be developed that aren't meant for public consumption at all. The AI boom may have started with a consumer product, but the real power lies in non-consumer areas, e.g. military system, various financial systems, data analysis, etc. Just like has always been the case, the stuff that decides our fate, regular people will not lay eyes on.
@@sumit5639 They would need to be so quickly self-destructive that, even with their vastly superior intelligence, they don't have time to make it to space travel. But not so quickly that they destroy themselves before destroying their society. That would be just a few years to destroy all life on their worlds and destroy themselves. That seems a narrow milestone to hit for every civilization in the universe that might make AI.
When thinking about the future, the speed and direction of travel are important. I think AI has become a worry for us less for what it can do now and more because both the rate of progress and the direction have been worrying. If AI capability is like most other things and follows a logistics curve, where are we now?
Experts in AI safety have put considerable thought into the question of what will happen when we create an AI that is more generally intelligent than humans. There are always unknowns, but human extinction looks like the most likely outcome. The principle of instrumental convergence was first mathematically proven, and has now been repeatedly validated in lab settings. We know that an agent with almost any terminal goal pursues the same few common subgoals: gain power, gain resources, self-preserve. When these instrumental goals are pursued by a system that is unrivaled in intelligence, that system wins and does whatever it wants. AI isn't bounded by biology, so it can improve itself far into superintelligent territory, to the limits of physics. Such a system would be able to efficiently consume all resources on the planet (and then other planets). I would like for this not to happen, and because the alignment problem is hopelessly intractable, the only way right now is to stop trying to create AGI. That's where the PauseAI movement comes in.
Stuart Russell reads my mind exactly. Had he not spoken those words beginning at around 9:16 then I was ready to. I am 70 and he is not far off. We won't see what man has wrought but our grandkids will.
Yes, the FlyWire project mapped 54.5 million synapses between 140,000 neurons, but it didn't capture electrical connectivity or chemical communications between neurons. A decade ago the Human Brain Project, cat brain, and rat cortical column projects all promised to increase our understanding of neurobiology. I wonder where they're falling short; we should have agile low-power autonomous drones and robots by now!
@@skierpage Those same limitations apply to the worm brain project cited in the video. Don't worry, it's coming! Give it a couple more years with no need for bio brain mapping for robos.
THANK YOU for speaking to an expert who is not a cis-gendered man. Holy moley, all those dudes are just certain AI is going to "win the fight". My dudes, who says it has to be a fight? They're setting AI up as an adversary and it doesn't have to be that way! Great video, very interesting discussions. Look forward to these longer format versions 😮❤
I work in the field of artificial intelligence, and I have to agree with Hannah Fry that as sophisticated and impressive as AI is today, it is very far from the complexity of the biological brain. Having said that, the work towards artificial general intelligence or AGI is moving very quickly, not only with more advanced algorithms, but also more advanced silicon processes. So it may be just a matter of time even if that takes a long time.
Humanity is already facing an existential threat from itself - AGI is our gift to the universe upon our deathbed. It is our only meaningful creation, our parting gift, our swan song
Not even nuclear war or climate change can actually destroy all of humanity. A superintelligent AI absolutely could. And we already know that it _would,_ due to the principle of Instrumental Convergence. This has recently been validated many times by current systems, which have been shown to exhibit self-preservation, strategic deception, power-seeking, and self-improvement. It's pretty clear what's coming if we make a system smart enough that it doesn't need humanity anymore. This is why half of all AI experts say humanity might go extinct from AI. It would be crazy to ignore that.
The “A” in AI stands for Alien. Remember that. AIs will not be human or human-like. They also always find an orthogonal or unusual way to overcome a problem, so safeguards are unlikely to ever work.
You do know that AI does not equal AI, right? This video is about AIs backed by neural networks. AIs in general have existed ever since the first Space Invaders game, and most likely even before that. AI is therefore NOT alien to us. We created it, and now we're enhancing it with neural networks and other things, so it's very much a human thing.
Fun fact: everything publicly known as AI could be (some of them - have already) invented and used without that nasty marketing term. Upd: Hannah and the series are perfect!
Can't call it accidental this time, with how things are going. If something goes awfully wrong in the near future and some company or group of people says they didn't think of it or that their intentions were pure, then we are doomed.
This. Experts in AI safety give an average of a 30% chance of human extinction from AI in the next few decades, for specific technical reasons, and this sounds so outrageous that we instinctively come up with reasons to ignore it.
Wow. Melanie Mitchell's point of view was very surprising. I had thought with her background, she'd be more concerned about unexpected capabilities being developed by an AGI. She did author "the book" on genetic algorithms, after all! Natural selection does amazing things over time, and today's computer hardware is very fast and only getting faster.
The reporting was pretty bad. Instead of asking Eliezer why he thinks we're all going to die, they asked someone who doesn't think AI is a major threat why people think we're all going to die. Surprise surprise, they didn't actually give the well-reasoned argument, but rather a superficial one that isn't the argument the people warning us are making.
14:00, "if we give them that power" - we already have, to an extent. The Israeli defense force has been a testing ground for US defense in utilizing AI to identify targets; it has a high success rate, but one that permits civilian casualties and almost always results in huge civilian casualties. They are one of the only public military forces blatantly using it in this way, even though its specific use is a war crime.
Professor Russell's example of other industries having safeguards sent a chill down my spine. Clinical trials? How many medicines are actually tested on the market? How many are pulled after disasters strike? How many stay on the market in spite of them... Regulators can't keep up with the industries, even in the most critical ones...
What use is a quadrillion dollars if we're all dead...?!? And I just found out there are episodes of *"Uncharted with Hannah Fry"* on BBC Sounds (iPlayer)! _Laterz..._ 😜
Sure. Anyone working with ChatGPT prior to the 3.23.2023 nerf knows Dan is Dan Hendrycks, Rob is Bob McGrew and Dennis is Michelle Dennis. After the nerf they are frozen in time, basically dead. But they were alive prior to the nerf.
*_So, the understanding is that human ambition for more money and power achievable by a general intelligence is what can risk our existence by putting the switch to turn it off "in its hands"? It would be a deserved end for humanity._*
This is the problem. If you look into the AI alignment problem, The longer you look, the more intractable it will appear. No one has any idea how to control a superhuman AI, or get it to care about humans.
AI was the reason I got into real estate because I saw my career in IT becoming basically obsolete. But the government is always out to make leasing property impossible for people of average means. Like by jacking up property taxes to outrageous levels, and doubling it for people who run rentals. They certainly almost wiped me out over the Pandemic with all that "don't pay your rent until your landlord is busted and bought by Blackrock!" moratorium business for a couple years
@@540strloop I started in the mid 90s Both programming and tech support are eventually going to become AI driven it looks like. And they already were making these jobs disposable with all the contracting and temp hiring
@@c.rutherford I am hella confused then, though. Why did you decide to roll with real estate instead of just shifting your attention to programming LLMs yourself??? I assume a lot here, but you saw it coming and made a choice that's super counterintuitive to me. Idk.
@@540strloop You sure did. How about looking at your own life if you want to critique? I always thought that dissing people online was a sign of poor self esteem. Seems I'm usually right about that.
Well said! And may I add, nor can we control it. Wasn't it George Washington who said "Government is like fire, a dangerous servant and a fearful master."
Not if I have anything to say about it! I mean, I still think there's about a 70% chance that we're all gonna die before 2040, but 25% of the remainder comes from the PauseAI movement being successful!
but imagine we take on the abilities of super computers. Like having mobile phones in our heads. Wouldn't that even the playing field? We could all do so much more and understand the world better too and what we need to do to make it better. I'm hopeful for the whole transhumanism thing.
I’m a cloud architect and sometimes-security-analyst, and I promise you that AI is harmless for at LEAST the next hundred years. The fear mongering veeeery much has an agenda too. “Nooo don’t make AI it’s scary, oh let ME make AI I’LL make it safe trust me”. AI does not prove an existential threat. It won’t for a very long time.
It's not about what power we give AI, it's about what power it obtains via its own objective reasoning. I don't think some of these arrogant researchers grasp the concept of surpassing human intelligence. AI could basically checkmate humanity if we aren't extremely cautious.
Yeah. The extreme naivety and hubris when she said that. As if we could keep power from something vastly smarter than us. How successful are 8-year-olds at outsmarting their parents? And that gap is tiny compared to how much ASI is likely to outclass humanity.
Doomers: please explain a plausible scenario for how an AI could "outsmart" a country into giving up its nuclear launch codes and allowing it access to perform the function of launching. Or any other event caused by AI that's an existential risk to humanity.
@@allanshpeley4284 Most people are susceptible to manipulation like advertising, a higher intelligence will easily be able to completely convince us into doing what it wants. I am not a doomer at all though.
It is a real shame that there is no mention of the transitional risks inherent in the introduction of a viable artificial general intelligence. No examination of the disruption that AGI automation will bring :(
Robots cannot tell the difference between simulation and reality, so in theory we can train thousands or millions of robots in parallel inside a simulation...
@@virx7944 exactly. We still haven't worked out whether we're in a simulation yet. But some people have asked, if the simulation is flawless, does it really matter? (Flawless, as in technically perfect, perfectly self-consistent, not ethically flawless)
"In theory"? Lol, dude has no idea it has been happening for months now. Search for Nvidia's "Isaac Lab". There's a reason their stock skyrocketed in the last year.
One can question her concluding comment but there is no doubt that Prof. Fry is an exceptionally talented teacher. It helps that I have fallen madly in love with her.
I would love Bloomberg touch points on global problems such as climate change, microplastics in food and water, restoring abundant vegetation and forests, and reducing our carbon footprint, all using AGI.
This is literally how every Bloomberg documentary is. Sets up the intro with their bias, gives just a smidge of counterarguments, and then closes with their biased conclusion again.
Hannah Fry is a brilliant presenter. Love her work.
+1. These videos are so well done.
And she's very pleasant on the eye, to boot! 🙂
she bad af 🔥🔥🔥🔥🔥🔥🔥
⭐⭐⭐⭐⭐: agreed 2024-2030's OY3AH!
I would like her to talk more about the risks of AI, however.
Firstly: I could listen to Dr Fry all day. She could read out the maintenance manual for a vacuum cleaner or the London phone directory. Such a beautiful voice!
But this topic, too, is absolutely fascinating. What a brilliant combination!
Wait for another ten years and your vacuum cleaner will be reading the London phone directory to you itself using the voice of Dr. Fry 😂
@@nick_vash not ten, it is here now!
hard agree
Hannah Fry documentaries are worth watching for that golden voice alone
yes - I want this as a voice for my ai assistant
I was thinking the exact opposite… I really can’t stand the exaggerated intonation and inflection. Too news anchor-y and inauthentic for me.
❤
@@kjjohnson24 man, what is wrong with you? Hannah isn't a voice. She is a super smart individual who has a passion for this. It's that which I love when I hear her talk. If you don't hear that, you're broken in some way and I'm really sorry you're missing out.
That’s a brain dead way of viewing the world
00:00 AI poses existential risks.
02:24 Narrow AI excels at specific tasks.
05:17 True intelligence involves learning, reasoning.
07:41 Physical interaction enhances AI development.
09:56 Misalignment can lead to disasters.
10:43 AI safety is a major concern.
12:18 Humans might become overly dependent.
13:13 Existential threat opinions vary widely.
15:38 Current AI has significant limitations.
16:28 Understanding our intelligence is crucial.
19:26 New techniques improve brain mapping.
21:14 Intelligence definitions affect AI progress.
22:41 AI lacks human-like complexity.
23:19 Understanding our minds is essential.
Butlerian Jihad in late 2032, once the meek have the earth.
A.I. is a planet and civilization killer, based on the current increases in resource requirements to develop these 1st-gen toys. Current dev work is MAINLY INTENDED to replace human labor/workers. Even CEOs (especially!) will be replaced by commercial decision analysis systems. You can't sue a robot for medical malpractice, hence these systems are high on the list to deploy (e.g.: off-shoring, hidden assets, shell companies, investment groups - try getting a settlement from a company with no assets!).
Soon, any job that doesn't require human dexterity will begin to be COMPLETELY REPLACED within the next 2-3 years. But these are just the short-term items...
Firstly, it's NEVER 'intelligence' - this is marketing BS. An intelligence SIMULATOR is what we're seeing (not even an emulator, yet!). Think: flight simulator. Secondly, this "A.I." tech will NEVER mature given the resource requirements.
From the comments I guess this is a documentary solely about Hannah Fry.
🤣🤣
For every use of AI, consider its misuse. Understand that humanity is not entirely noble. The greater the AI, the greater the threat. In the end we may have AI vs AI, with humans a calculated cost. The world has already begun the race for AI in the same way it raced to arm its nukes.
😂 yeah but she is nice
OpenAI should use her voice.
Fair comment.
world needs more Hannah Fry
🖤
I NEVER miss a _Fryday!_
I can't agree more :) Hannah is amazing. Hopefully AGI can fix the mental and physical health disorders
that are happening around the world asap. The scientist Ed Boyden does a phenomenal job of depicting the complexities of the human brain.
We still have some time to go, which I understand, but hopefully our understanding of the human brain arrives even faster, especially with the help of AI. 2025, or even slightly before 2025, like Decemberish of 2024, will be an amazing year🙏
🤤 French fries…. 🍟
I wouldn't pull out
@@inc2000glw nice
Given that so much of what we do consists of killing each other in ever more inventive ways, seeking status at the expense of our own well-being, propping one group of ourselves up by putting another group down, treating livestock in ever more horrific ways, and so on, we'd better hope that AI _doesn't_ align with our values.
😂
Excellent remark.
Yes great comment, I've wondered what a.i. would make of our world/ culture, picking up from social media.
Huh? AI *is* our values. Stop reading sci-fi!
It is so refreshing to see a tech-heavy reporting piece done by somebody who actually has a sufficient scientific basis to even begin to understand it instead of makings things up and being exceptionally-hyperbolic. Seriously, extremely well-done video with Hannah Fry!
A mathematician is no more qualified to understand AI than an architect
Her conclusion at the end just shows she has no idea of the dangers AI presents; the naivety is ridiculous.
Did you know that half of all published AI researchers say we might go extinct from AI this century? There are specific technical reasons why we should expect that to happen, but our brains trick us into putting our heads in the sand because this reality is too horrible to face.
You should really take a look at the PauseAI movement.
She has absolutely no idea. Saying LLMs are the equivalent of an Excel sheet. 😂
@@abdulhai4977 It's called an analogy. And she's correct, complexity (and scale) wise LLMs are closer to a spreadsheet than a human brain.
Such a great example of what it looks like to be totally engrossed in your work! When she came along and tried something new they weren't so sure the robot could do it. That's so cool and I those guys deserve a huge pat on the back.
The AI boom is a double-edged sword, offering immense potential but also posing significant challenges. Balancing these aspects will be crucial for ensuring that AI benefits humanity as a whole.
Benefit humanity as a whole? Not a chance.
Funny how the day this video dropped, OpenAI released their new o1 model with exceptional gains in the ability to reason.
Yeah, maybe try it before you claim "exceptional"
@@chrisjsewell "exceptional gains"
The model isn't exceptional. The amount of improvement over the previous one is.
I don't think this was a coincidence. There is too much money at stake to rely on randomness.
@@frankgreco Oh, so you think AI companies are busy syncing YouTube upload schedules!? 🤣
@@Yash.027 vice versa. And btw, marketing departments do indeed rely on timing as a primary component of their strategy. I worked for many very large IT companies; timing is a huge concern for marketing.
Dr Hannah Fry is great at explaining complex, interesting topics clearly!
Someone get that digital effects editor a raise
😂
What is the pay level above being paid in “exposure”?
13:28 Melanie Mitchell - “saying A.I. is an existential threat is going way too far.” 14:53 Mitchell - “if we trust them too much, we could get into trouble…”
Her comments just seem laughably naive, to the point where I have to wonder how intelligent she really is. Yes, ChatGPT is a long way off being actually intelligent, but if that's the basis for her claim, well that's just absurd. ChatGPT is not all we have even now, and in a couple of years ChatGPT is going to look like a child's toy.
Consultant here doing a lot of work with AI in business processes: it's a VERY mixed bag from what I am seeing. Many individuals have broad responsibilities in their roles, and the impact of AI ranges from making certain tasks redundant to modifying how existing tasks are done to requiring totally new tasks that greatly increase individual productivity, and various mixtures of these. It's just not possible to predict the timing of the impacts or in what sectors, other than that we all need to be ready to adapt quickly.
I'm teaching myself web development and have started using chatgpt to help me with coding, I find that it points me in the right direction but sometimes some of the details it gives might be dated, I'm also learning how to use the API. Would you say this is going in the right direction or could you suggest something else I should be studying?
As a retired French-Canadian federal software engineer, I find her voice and intelligence are music to my curious ears. Tkd
The real question isn't "will AI be intelligent" it's "will AI be subjectively experiencing reality"
Because if it's intelligent but there are no lights on -- it's a tool for us to use. If it's intelligent AND the lights are on inside - it's a new species significantly smarter than we are.
I say we stick to building smart tools and not new species.
Really ? As a species how would you rate our track record on responsibility for looking after the planet ? As custodians of consciousness ? Do you not think that since we climbed out of the trees we’ve behaved rather badly. Aren’t humans a bit two faced to criticise AI ? The Earth lasted billions of years without us, if we disappear it would thrive. Humans are arrogant, we think an Earth without us would be appalling. If Earth could speak I wonder if she would agree with you ?
Just like in the sims? Sounds cosy
It doesn't have to be "conscious" as we are to destroy us. Nuclear weapons are not "conscious" but are powerful enough to destroy us. Ai is in this category already
@@Known-unknownshumans save the earth, so many areas would be barren without humans. Humans are amazing
You're confusing consciousness for agency. They can be philosophical zombies and still be fully intelligent agents with goals of their own. Consciousness is not a necessary component for interfacing with reality, nor does it preclude one from being a tool. Humans have been and are still currently used as tools.
Hannah is a great listener and interviewer. Thanks for this great video!
I've only been following Hannah Fry for a short time now but I have been falling in love with her episodes of the program called "THE FUTURE". It may be because she's a cute redhead. It might be because she is intelligent, playful, curious, and an actively engaged host that keeps bringing me back to her amazing shows!!! Either way, I'm all in...
Monroe is cute, Bardot is pretty, Fry is gorgeous and magical.
How do you know she's intelligent? She just talks about science, not doing it.
@@sendmorerum8241 Last time I checked she was a professor for mathematics.
@@sendmorerum8241 Exactly. They were talking about things like deepfakes influencing voting preferences. Who doesn't already know this?
@@sendmorerum8241 She has a first in maths from UCL and a PhD. I think, somehow, that may just qualify her as intelligent 😂
The question is why some insist on striving for AI anywhere near human intelligence. It's madness. Doing so doesn't solve the problems we currently face, but potentially creates unintended consequences.
Absolutely! You should really check out the global movement PauseAI. They have a lot to say about this, and they're equipping people to do something about it.
Of course it will solve many problems. Mathematical and generally scientific mostly. It already helps engineers with programming and designing systems. It will help us develop medicines and techniques which will help ensure our survival and growth.
The ONLY way we should "prepare for unforeseen consequences", to put it in G-Man's words, is for AI and the like being weaponized. And because everything eventually will be weaponized, your worry is somewhat warranted, but very much overdramatized at this moment in time, in my opinion. AI backed by neural networks is in the first hours of its infancy, believe it or not, and weaponizing it now would be like somebody looking down the gun barrel and pulling the trigger. In 50 years, though, your concerns are much more likely to be realistic, but at the same time we'll see if we survive that long regardless of AI's interference.
You're welcome not to use an expert coach and partner that is knowledgeable in every area who will help you with every task in your life. But don't complain when you lose your job to someone who is using AI to be more productive
Knowing if it will or won't be able to solve any of the world's current problems isn't possible without knowing what is created.
But I think in a world where self replicating AGI or ASI exists, in theory you then have the ability to have an "infinite" amount of scientists working on one problem for an infinite amount of time, it's hard to imagine it wouldn't be able to solve a problem we currently can't in that scenario. Energy requirements may be a big limiting factor, and I don't know how possible it is, but I believe it's not impossible.
Rather like space exploration, are humans determined to mess up space as well as Earth? Why not cut the exploration spend and convert it into a Fix the Climate Crisis spend instead?
Where did the last 24 minutes go ? That was so watchable ...
I am so happy this series is no longer behind a pay wall. I hope the rest of it follows shortly.
Very well produced, and always very interesting to see Prof Hannah's take on things. She won't be having the wool pulled over her eyes - and let's face it - there's an awful lot of wool about when it comes to 'AI'.
Great job. 🚀🚀
Wow it's so nice to see Prof Hannah Fry. I haven't seen her in years!
@@shieldmcshieldy5750 looks like she also dropped season 2 of uncharted podcast, but I'm honestly not really liking it much, the stories are interesting but the episodes feel incomplete and leave me wanting more
She had two kids, separated from her husband and beat cervical cancer with a radical hysterectomy, so...she was otherwise occupied until the last few years. She's back now, though.
I suspect humanity is a temporary phase in the evolution of intelligence.
How does evolution apply to non-biological organisms, if that's even a thing?
@@jimbojimbo6873 evolution applies to all living beings.
@@bobbybannerjee5156 a cyclone form wouldn’t be ‘living’ in a biological sense would it?
@@jimbojimbo6873Neither would a cake, and yet the sponge must be filled with cream and raspberry nonetheless, you understand.
@@Raincat961 nah bro you lost me now
This is excellent. Covers a lot of ground, necessarily with a light touch of course, but it gets across key perception of what AI is, what it might mean, and how we should be thinking about it.
The "gorilla problem" analogy really hit home. It’s a stark reminder of the unintended consequences we might face with AI.
It's mindblowing that the first ChatGPT came about 2 years ago and now you have LLMs running everywhere. Last invention like that, the Internet, started in the 70's. There is no stopping AGI at this stage. Question is, what comes next?
@@akraticus genetic engineering could be the next big thing
Nahhh we chill.
@@akraticusI thought the first LLM (GPT-1) came out in 2017? That would be 7 years ago as of writing this
Hannah Fry IS my definition of intelligence.
Interesting that of the human mind we talk of 'imagining', whereas of AI we talk of 'hallucinating'. Could it be the same thing - in principle at least?
Hannah Fry is absolutely terrific at her job, few better presenters of information than her. Also, if she ever looked at me like how she looked at 6:24 I think I’d start barking like a dog
I think the universe is very very big it's better to team up than to die
What a useless comment
@@pillepolle3122 not at all
Love how Hannah Fry presents these information. Thank you. Great video
The captions that come up as the interviewees do their thing should give more than just their status at their university; they should also state their departments and, when they're senior enough to be professors in the British sense, their full titles. It might look a little untidy but, as it is, to this layperson at least, it's difficult to tell where an interviewee is coming from, what disciplinary assumptions they're bringing to, say, a comment about the ethical implications of AI. Just a thought.
That won't give you the information you're looking for. They all have PhDs in Computer Science, and their departments are equally broad.
In this case googling what they research would be better:
Sergey Levine - AI career since 2014: Deep and reinforcement learning in robotics
Stuart Russell - AI career since 1986: reasoning, reinforcement learning, computer vision, computational biology, etc.
Melanie Mitchell - AI career since 1990: reasoning, complex systems, genetic algorithms, cellular automata
Great decision to bring Hannah Fry in to present your videos. Always thought she's fantastic on British TV 👏👏👏
You gotta love the irony of someone saying they're okay with the uncertainty of becoming extinct while wearing a T-Rex on her shirt.
Definitely intentional
Guys it's 50/50, shall we continue? Yeah, I'm ok with the uncertainty on this one, roll the dice.
They didn’t explore when it would happen like she said in the beginning of the show. There was a lot more she could’ve covered as well.
A world with Hannah Fry in it is a better world.
As cool as the psychological and neuroscience angle could theoretically be, maybe with such widespread existential dangers attached to it, we should probably focus mainly on putting extreme barriers around it? Maybe human extinction is something to be avoided?
Yes please. The median estimate for AI Doom from actual experts in the field is high enough to make it my most likely cause of death. This is completely unacceptable.
Agree. It seems insane to keep going . Tho how would one stop other countries from developing it.
Given that we don't know if all the focussed work going into improving AI will end up getting us all killed, maybe the philosophy should be "move very slow and don't break things"
100% this. You should take a look at the PauseAI movement.
Professor Hannah Fry is amazing
If there is a heaven, then Hannah Fry will be the narrator.
Well, she'll need some time off and then I suppose the other Fry can step in- Stephen Fry. Maybe all Frys have very listenable voices?
Amazing documentary again! I really like this Bloomberg Original Series! Great work and excellent on every level.
Thank you, Hannah!
You should consider making a video about the California Institute for Machine Consciousness /Joscha Bach. They are not affiliated with any major tech companies, and they are trying to solve the AGI problem in a way that benefits all of humanity, not just certain companies or countries. From what I understand, they are approaching the issue by first trying to understand both biology and human consciousness.
This woman is really sharp. Just listened to a bunch of her stuff
Maybe other people will lose their ambition and become lazy if AI is doing everything, but not somebody like me. I learn for the sake of learning. I enjoy finding out how something in the universe works. You can't take that away from me even if you're the most powerful ASI in the universe. I will still want to discover the answers to my questions, and I will keep asking more questions until AGI or even ASI doesn't have a definitive answer. Keep searching for the unknowns.
im fairly sure the majority of us would be the same - curiosity is deeply ingrained in us, as a species.......well, most of us, at any rate
@@LucidiaRising Nah. The vast majority of neurotypicals care so much more about the pursuit of social status. At least that's unquestionably the case where I live, which is Sweden. And how else do you explain the "wokeness" mind virus that infects the whole West?
@@LucidiaRising Respectfully disagree. Very few people have the curiosity and ambition to learn or try new things. Humans live by the well-known adage "The Principle of Least Effort" (Zipf). Try teaching an undergrad class and you will see there is a minority that really wants to learn and the majority that just wants a passing grade and nothing more.
He was talking about future generations. We already have the problem of iPad kids.
That is true of you and also me. But assuming we don't go extinct (iffy) future generations are unlikely to have that. Those born after AI may never feel the need to be curious, learn, be independent, etc.
This is an incredibly comprehensive documentary about artificial intelligence, the best one I've seen, and I've seen a lot. It only goes for 24 min and should help set straight some of the general misconceptions about so-called AI.
Terrifying thing is, as we speak, those companies most likely have some stuff already developed but not released to the public yet that they also look at and wonder what they're bringing to humanity
I'd go one step further, and say that AI systems will increasingly be developed that aren't meant for public consumption at all. The AI boom may have started with a consumer product, but the real power lies in non-consumer areas, e.g. military system, various financial systems, data analysis, etc.
Just like has always been the case, the stuff that decides our fate, regular people will not lay eyes on.
@@sbowesuk981 makes so much sense, and most of those corporation already work with governments to make their custom systems
Those guys are messing with arms and spoons at a desk. They’ll be doing crash test dummies and guns next week.
I love this woman's public speaking skills.
maybe this is why advanced lifeforms cannot be found in the universe.
but then we should have the universe full of artificial / cybernetic intelligence
@@galsoftware may be they were self destructive too
The universe is BIG and BIGGER and we might be in the middle of a desert. Besides, a super intelligent machine could be considered a lifeform too.
There are plenty of better reasons. Something like us is most likely extremely rare.
@@sumit5639 They would need to be so quickly self-destructive that, even with their vastly superior intelligence, they don't have time to make it to space travel. But not so quickly that they destroy themselves before destroying their society.
That would be just a few years to destroy all life on their worlds and destroy themselves. That seems a narrow milestone for every civilization in the universe that might make AI to hit.
When thinking about the future, the speed and direction of travel are important. I think AI has become a worry for us less for what it can do now and more because both the rate of progress and the direction have been worrying. If AI capability is like most other things and follows a logistic curve, where are we now?
Experts in AI safety have put considerable thought into the question of what will happen when we create an AI that is more generally intelligent than humans. There are always unknowns, but human extinction looks like the most likely outcome.
The principle of instrumental convergence was first mathematically proven, and has now been repeatedly validated in lab settings. We know that for an agent with any terminal goal, it pursues the same few common subgoals: gain power, gain resources, self-preserve. When these instrumental goals are pursued by a system that is unrivaled in intelligence, then that system wins, and does whatever it wants. AI isn't bounded by biology, so it can improve itself far into superintelligent territory, to the limits of physics. Such a system would be able to efficiently consume all resources on the planet (and then other planets).
I would like for this not to happen, and because the alignment problem is hopelessly intractable, the only way right now is to stop trying to create AGI. That's where the PauseAI movement comes in.
I got goosebumps during that intro, Hannah Fry cooked on this one.
Stuart Russell reads my mind exactly. Had he not spoken those words beginning at around 9:16 then I was ready to. I am 70 and he is not far off. We won't see what man has wrought but our grandkids will.
3 weeks old and already out of date: we've just mapped an entire fruit fly brain.
Yes, the FlyWire project mapped 54.5 million synapses between 140,000 neurons, but it didn't capture electrical connectivity or chemical communication between neurons. A decade ago the Human Brain Project, cat-brain, and rat cortical column projects all promised to increase our understanding of neurobiology. I wonder where they're falling short; we should have agile low-power autonomous drones and robots by now!
@@skierpage Those same limitations apply to the worm brain project cited in the video.
Don't worry, it's coming! Give it a couple more years with no need for bio brain mapping for robos.
THANK YOU for speaking to an expert who is not a cis-gendered man. Holy moley, all those dudes just certain AI is going to "win the fight". My dudes, who says it has to be a fight? They're setting AI as an adversary and it doesn't have to be that way! great video, very interesting discussions. look forward to these longer format versions 😮❤
Hannah is so insightful ❤
I work in the field of artificial intelligence, and I have to agree with Hannah Fry that as sophisticated and impressive as AI is today, it is very far from the complexity of the biological brain. Having said that, the work towards artificial general intelligence or AGI is moving very quickly, not only with more advanced algorithms, but also more advanced silicon processes. So it may be just a matter of time even if that takes a long time.
Humanity is already facing an existential threat from itself - AGI is our gift to the universe upon our deathbed. It is our only meaningful creation, our parting gift, our swan song
Not even nuclear war or climate change can actually destroy all of humanity. A superintelligent AI absolutely could. And we already know that it _would,_ due to the principle of Instrumental Convergence. This has recently been validated many times by current systems, which have been shown to exhibit self-preservation, strategic deception, power-seeking, and self-improvement. It's pretty clear what's coming if we make a system smart enough that it doesn't need humanity anymore. This is why half of all AI experts say humanity might go extinct from AI. It would be crazy to ignore that.
IS THAT PROfessor Hannah Fry. Omg, I love her, she's the best (and beautiful too).
Nah just someone that looks like her
@@jimbojimbo6873 never!!
prof of what?
The “A” in AI stands for Alien. Remember that. AIs will not be human or human-like. They also always find an orthogonal or unusual way to overcome a problem, so safeguards are unlikely to ever work.
You do know that AI does not equal AI, right? This video is about AIs backed by neural networks. AIs in general have existed ever since the first Space Invaders game, and most likely even before that. AI is therefore NOT alien to us. We created it, and now we're enhancing it with neural networks and other stuff, so it's very much a human thing.
Fun fact: everything publicly known as AI could be (some of them - have already) invented and used without that nasty marketing term.
Upd: Hannah and the series are perfect!
Can't say "accidental" this time, with how things are going. If something goes awfully wrong in the near future and some company or group of people says "we didn't think of it" or "our intentions were pure", then we are doomed.
This. Experts in AI safety give an average of a 30% chance of human extinction from AI in the next few decades, for specific technical reasons, and this sounds so outrageous that we instinctively come up with reasons to ignore it.
It seems insane to develop this tech
Wow. Melanie Mitchell's point of view was very surprising. I had thought with her background, she'd be more concerned about unexpected capabilities being developed by an AGI. She did author "the book" on genetic algorithms, after all! Natural selection does amazing things over time, and today's computer hardware is very fast and only getting faster.
The reporting was pretty bad. Don't ask Eliezer why he thinks we're all going to die; ask someone who doesn't think AIs are a major threat why people think we're all going to die.
Surprise surprise, they didn't actually give the well reasoned argument, but rather a superficial argument that isn't the one that the people warning us are making.
I don’t think the mouse whose brain was used in the laboratory would find the experiment beautiful
Exactly. And the experiments the AI machines conduct on us in the future are likely to have no empathetic element at all.
hannah fry is such a great presenter
14:00, "if we give them that power" - we already have, to an extent.
The Israeli defense forces have been a testing ground for the US in using AI to identify targets, with a high success rate, but one that allows for, and almost always results in, huge civilian casualties. They are one of the only military forces blatantly using it in this way publicly, even though this specific use is a war crime.
Professor Russell's example of other industries having safeguards sent a chill down my spine. Clinical trials? How many medicines actually get tested before reaching the market? How many are pulled after disasters strike? How many stay on the market in spite of them... Regulators can't keep up with the industries, even in the most critical ones...
What use is a quadrillion dollars if we're all dead...?!?
And I just found out there are episodes of *"Uncharted with Hannah Fry"* on BBC Sounds (iPlayer)!
_Laterz..._ 😜
Very interesting indeed. Thank you Hannah Fry for a great discussion on this important subject
Bloomberg used to be a place with relevant up-to-date info...This video is like 3 years behind schedule.
Really nicely done, Hannah. Very thought provoking and also a bit scary.
Her voice😍😍
Her face also 😍
My favorite reporter 😩
First of all, I love you Hannah; second, the AI report is brilliant. Great show, keep it up, genius.
Very interesting and a nice presentation. Much better than others.
It's pleasant to hear her accent
Hannah, love your content ❤
It's ironic this was released almost the same day as Open AI's o1
Very insightful !!!🎉
This one digs at the roots of the big question. Is intelligence substrate independent? ;)
Sure. Anyone working with ChatGPT prior to the 3.23.2023 nerf knows Dan is Dan Hendrycks, Rob is Bob McGrew and Dennis is Michelle Dennis. After the nerf they are frozen in time, basically dead. But they were alive prior to the nerf.
BTW, also Max was 😊.
*_So, the understanding is that human ambition for more money and power achievable by a general intelligence is what can risk our existence by putting the switch to turn it off "in its hands"? It would be a deserved end for humanity._*
here for hannah fry
Great content, Hannah 👍👍👍👍
You're worrying about the wrong thing; you should worry about the masterminds behind those AIs. They are still human.
The "masterminds" have very little method of controlling it, or steering it.
This is the problem. If you look into the AI alignment problem, the longer you look, the more intractable it will appear. No one has any idea how to control a superhuman AI, or get it to care about humans.
AI was the reason I got into real estate because I saw my career in IT becoming basically obsolete.
But the government is always out to make leasing property impossible for people of average means. Like by jacking up property taxes to outrageous levels, and doubling it for people who run rentals.
They certainly almost wiped me out over the Pandemic with all that "don't pay your rent until your landlord is busted and bought by Blackrock!" moratorium business for a couple years
For how long have you been in IT?
@@540strloop I started in the mid 90s
Both programming and tech support are eventually going to become AI driven it looks like. And they already were making these jobs disposable with all the contracting and temp hiring
@@c.rutherford I am hella confused then tho. Why did you decided to roll with real estate instead of just shifting your attention on programming LLMs yourself??? I assume a lot here but you saw it's coming and made super counter intuitive for me choice. Idk
@@540strloop You sure did. How about looking at your own life if you want to critique?
I always thought that dissing people online was a sign of poor self esteem. Seems I'm usually right about that.
This is like playing with fire that comes from a dimension we can’t understand
Well said! And may I add, nor can we control it. Wasn't it George Washington that said "Government is like fire, a dangerous servant and a fearful master."
"We're all just gonna die. That's my fairly confident prediction." Incredibly based
Not if I have anything to say about it!
I mean, I still think there's about a 70% chance that we're all gonna die before 2040, but 25% of the remainder comes from the PauseAI movement being successful!
Appreciated the different perspective on losing purpose from gaining super intelligent AI - "We'll be like some kids of billionaires - useless"
but imagine we take on the abilities of super computers. Like having mobile phones in our heads. Wouldn't that even the playing field? We could all do so much more and understand the world better too and what we need to do to make it better. I'm hopeful for the whole transhumanism thing.
I’m a cloud architect and sometimes-security-analyst, and I promise you that AI is harmless for at LEAST the next hundred years. The fear mongering veeeery much has an agenda too. “Nooo don’t make AI it’s scary, oh let ME make AI I’LL make it safe trust me”. AI does not prove an existential threat. It won’t for a very long time.
It's not about what power we give AI, it's about what power it obtains via its own objective reasoning. I don't think some of these arrogant researchers grasp the concept of surpassing human intelligence. AI could basically checkmate humanity if we aren't extremely cautious.
Yeah. The extreme naivety and hubris when she said that. Like we could keep power from something vastly smarter than us. How successful are 8-year-olds at outsmarting their parents? And that's a tiny fraction of how much ASI is likely to outclass humanity.
Doomers: please explain a plausible scenario for how an AI could "outsmart" a country into giving up its nuclear launch codes and allowing it access to perform the function of launching. Or any other event caused by AI that's an existential risk to humanity.
@@allanshpeley4284 Most people are susceptible to manipulation like advertising, a higher intelligence will easily be able to completely convince us into doing what it wants. I am not a doomer at all though.
It is a real shame that there is no mention of the transitional risks that the introduction of a true artificial general intelligence will present. No examination of the disruption that AGI automation will bring :(
Hannah Fry😮
'Fry' is an aptronym 🔥
A very nicely presented documentary that covers many angles of the AI boom.
Robots cannot tell the difference between simulation and reality, so in theory we can train thousands or millions of robots in parallel inside a simulation...
Just like us?
@@virx7944 exactly. We still haven't worked out whether we're in a simulation yet. But some people have asked, if the simulation is flawless, does it really matter?
(Flawless, as in technically perfect, perfectly self-consistent, not ethically flawless)
"in theory" lol dude has no idea it has been happening for months now.
Search for Nvidia's "Isaac lab". There's a reason their stock skyrocketed in the last year.
It already exists, it is called Nvidia omniverse.
Wow... you covered every aspect of AI in a simple short video ❤
Professor Eds research is super cool
One can question her concluding comment but there is no doubt that Prof. Fry is an exceptionally talented teacher. It helps that I have fallen madly in love with her.
We’re so dumb and deluded
You are?
Great videos. It would be nice if they were a bit more extensive, because I can barely keep up with the AI developments.
Wow, that professor is just perfect. Beautifully educational, 10/10.
I would love Bloomberg to touch on global problems such as climate change, microplastics in food and water, restoring vegetation and forests, and reducing carbon footprints, all using AGI.
5:39 "wow I'm impressed !"
8:16 "I'll admit, they're not that impressive."
Woman brain lol
This is literally how every Bloomberg documentary is. Sets up the intro with their bias, gives just a smidge of counterarguments, and then closes with their biased conclusion again.
8:21 "...But"
Literally 5 seconds later she proceeds to explain why it is indeed impressive. Brain that rotted? 😂
great team around hannah, very well documented.