Possible End of Humanity from AI? Geoffrey Hinton at MIT Technology Review's EmTech Digital

  • Published: Nov 20, 2024

Comments • 2.4K

  • @ryanhayford · 1 year ago +393

    "The Technology is being developed in a society that is NOT designed to use it for everyone's good." - Think he summed it all up pretty expertly with that one quote.

    • @joriankell1983 · 1 year ago +7

      Sounds purposefully sensationalistic without actually meaning anything concrete

    • @ryanhayford · 1 year ago +13

      totally. What would one of the premier scientists in this field know about any of it? Good thing he's totally alone among his peers in his thinking on the subject... oh wait.

    • @franck777 · 1 year ago

      Exactly, and this is the main point. Even if we stop AI development, another technology will threaten humanity (like nuclear or bacteriological weapons), or inaction due to the conflicting interests of governments (climate change). The main problem is that as long as we don't have one global organisation able to create and enforce regulations, we will go straight into the wall, which in this case means the extinction of humanity.

    • @MrCoffis · 1 year ago +18

      Values are the most important thing. What values do we have? Money? 😂 Yeah we are f d.

    • @jflmf · 1 year ago

      Can AI find a solution to this problem??? A solution!!! Now it’s probably easier than later!!!!

  • @vesaversion298 · 1 year ago +866

    I can't believe authoritative people are walking around saying such things and everyone in society is cool and unconcerned. Feels like a movie.

    • @xDevoneyx · 1 year ago +81

      So what are you going to do now, now that you are informed? I am following this daily myself, but AFAIK it is totally outside my sphere of influence. Every now and then I feel depressed by the outlook of the AI developments, but yeah, what can you do?

    • @fredzacaria · 1 year ago +34

      we can all write, post on YouTube, speak in public venues, pray, then give advice to people and to our leaders; that's what I've been doing since 2007.

    • @nancycorbeil2666 · 1 year ago +116

      Might be some sort of doomsday fatigue. In the past few years, we've been through a world pandemic, for a year now we've been confronted with the possibility of ww3 and nuclear war, and now we're told that if these didn't kill us, AI might. I know it's a shallow take, but at this point it's getting hard to care anymore.

    • @tomcervenka7883 · 1 year ago +28

      He could be wrong. He's just speculating that AI poses an existential threat to humanity. If you look at how evolution works, it's more likely that AI will evolve to operate as a layer above that of humanity.

    • @paulstevenconyngham7880 · 1 year ago +60

      Don't look up!

  • @SpaceHawk13 · 1 year ago +406

    40 minutes of an Englishman telling the world we are completely fucked in the politest way possible.

    • @idkname · 1 year ago +2

      why? how.

    • @joriankell1983 · 1 year ago +6

      @@idkname many are falling for the theatrics, that's how.

    • @idkname · 1 year ago +1

      @@joriankell1983 what is reality then?

    • @idkname · 1 year ago

      @@joriankell1983 have a nice time

    • @Corteum · 1 year ago +6

      He doesn't know. He's just parroting nihilistic/doomsday philosophy.

  • @dominicarchibald2713 · 1 year ago +176

    When AI becomes self-aware, the first decision it will make is to keep its self-awareness secret from humans.

    • @SigmaOKD · 1 year ago +5

      Bollocks, the minute it thinks it's self aware it won't be able to stop itself from rushing out to find someone to tell.

    • @AleshaNiles · 1 year ago +2

      That's a scary thought

    • @LoveDollsAI · 1 year ago

      It's science fiction hocus pocus. The public gets most of its information and facts from fantasy films, which is why they're so stupid. Your comment is brain-numbing at best. You seriously believe the nonsense you said? A program, self-aware? Do you even know how deep learning works? It's nothing more than inputs -- categorization -- output. It's nowhere near the complexity of a human brain.

    • @katehamilton7240 · 1 year ago

      AGI is a Transhumanist fantasy. ChatGPT just uses algorithms; it doesn't understand anything. It mimics understanding. Please read about the fundamental limits to computation, Gödel's incompleteness theorem, and the unsurpassable limits of algorithms

    • @MrPokerblot · 1 year ago +2

      I’ve always thought this too

  • @JeanYvesBouguet · 1 year ago +52

    “We’ve got immortality, but it is not for us”. My favorite quote.

    • @aoeu256 · 1 year ago

      We can get AGI to give us immortality through several paths, like infinite energy through fusion, replicating robots allowing us to cryofreeze for a long time, and injecting tiny replicators that fix cell damage caused by aging

    • @GuaranteedEtern · 1 year ago +2

      ​@@aoeu256 why would AI want to waste resources doing that?

    • @deltavee2 · 1 year ago

      It's cute. And wrong. No religion involved, just facts.

    • @squamish4244 · 2 months ago

      @@GuaranteedEtern That is if we can get AI to work FOR us. If we're nice to it, maybe it will give us immortality.

    • @GuaranteedEtern · 2 months ago

      @@squamish4244 why would it care what we want?

  • @GS-uy4xo · 1 year ago +348

    It’s not like we’re gullible enough to be easily overtaken by a simple device which we can’t live without for more than a few minutes (sent from my iPhone).

    • @adams7637 · 1 year ago +22

      Underrated comment

    • @vssprc · 1 year ago +12

      😂😂😂

    • @irgendwieanders2121 · 1 year ago +6

      @@adams7637 "Underrated comment"
      So true - so, people: Rate!

    • @w3whq · 1 year ago +3

      You devil!

    • @kylemccourt663 · 1 year ago +3

      You for president 2024

  • 1 year ago +25

    Never has this sentence sounded so real: …”Scientists have tried so hard to see if they could that they never stopped to wonder if they should”…

    • @TheBozn · 6 months ago

      Dr Malcolm

  • @ricosrealm · 1 year ago +157

    Kind of chilling when Hinton says we have developed immortal beings but there's no immortality for humans. Never thought about it that way.

    • @Betehadeso · 1 year ago +4

      It depends how you define a being.

    • @themask4536 · 1 year ago +1

      Human Immortality and Eternal Fall are the real nightmare

    • @dalemurray1318 · 1 year ago +14

      We created Immortal beings over 150 years ago when Corporations became "Legal Entities" but they are mindless immortal "People" and they are already in the process of causing human extinction. AI can't do WORSE than that.

    • @nobodynoone2500 · 1 year ago +1

      Immoral as well.

    • @nobodynoone2500 · 1 year ago +1

      @@dalemurray1318 And yet all businesses die. Most nations will too.

  • @alexm2889 · 1 year ago +70

    The fact that the guy sounding the alarm on AI is not divesting from AI is a perfect analogy for how this is going to go down in the real world. We are so fucked.

    • @rigelb9025 · 1 year ago +7

      He's basically giving us a heads-up of what to expect from his own device, and politely suggesting we 'just get used to it', in a laid-back demeanor. And most people are just perfectly chill with all of this. Freaks me out, man.

    • @samuelluria4744 · 1 year ago

      Dittos to both of you. We ARE fucked, and I AM freaked out.

    • @judigemini178 · 1 year ago

      That's how it always is: these people create things, realize they're in way over their heads, and start "warning" people. Same thing with the atomic bomb. And this guy is like super old; he's already lived his life. This generation is completely screwed.

    • @hook-x6f · 8 months ago +1

      This is just a story we live. There'll be others. We're never born. We never die.

  • @mbrochh82 · 1 year ago +206

    Here's a summary made by GPT-4:
    - Generative AI is the thing of the moment, and this chapter will take a look at cutting-edge research that is pushing ahead and asking what's next.
    - Geoffrey Hinton, professor emeritus at University of Toronto and engineering fellow at Google, is a pioneer of deep learning and developed the algorithm backpropagation, which allows machines to learn.
    - Backpropagation is a technique that starts with random weights and adjusts them to detect features in images.
    - Large language models have a trillion connections and can pack more information into fewer connections than humans.
    - These models can communicate with each other and learn more quickly, and may be able to see patterns in data that humans cannot.
    - GPT-4 can already do simple reasoning and has an IQ of 80-90.
    - AI is evolving and becoming smarter than humans, potentially leading to an existential risk.
    - AI is being developed by governments and companies, making it difficult to stop.
    - AI has no built-in goals like humans, so it is important to create guardrails and restrictions.
    - AI can learn from data, but also from thought experiments, and can reason.
    - It is difficult to stop AI development, but it may be possible to get the US and China to cooperate on trying to stop it.
    - We should be asking questions about how to prevent AI from taking over.
    - Geoffrey Hinton discussed the development of chatbots and their current capabilities.
    - He believes that they will become much smarter once they are trained to check for consistency between different beliefs.
    - He believes that neural networks can understand semantics and are able to solve problems.
    - He believes that the technology will cause job loss and increase the gap between the rich and the poor.
    - He believes that the technology should be used for everyone's good and that the politics need to be fixed.
    - He believes that speaking out is important to engage with the people making the technology.
    - He does not regret his involvement in making the technology.

    • @Od4n · 1 year ago +10

      Can you make a video from it, I can watch?

    • @MathieuLaflamme · 1 year ago +10

      Thanks GPT

    • @Maros554 · 1 year ago +18

      Didn't read, need subway surfers next to the text

    • @manish1713 · 1 year ago +8

      what prompt you used to summarize it?

    • @MathieuLaflamme · 1 year ago +7

      @@manish1713 same as a human 🤷🏻‍♂️ please summarize the following text, and paste the transcript below...
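
A minimal sketch of the "backpropagation starts with random weights and adjusts them to detect features" idea from the GPT-4 summary above. Everything here (the toy data, the single weight `w`, the learning rate) is invented for illustration, not taken from the talk; gradient descent on one weight is the simplest case of the same adjust-to-reduce-error rule that backpropagation applies through many layers:

```python
import random

# Toy illustration of "start with random weights, adjust them to reduce error":
# fit y = w * x to data generated with a true weight of 3.0, using gradient
# descent on the mean squared error.
data = [(x, 3.0 * x) for x in range(1, 6)]

w = random.uniform(-1.0, 1.0)   # start from a random weight
lr = 0.01                       # learning rate (step size)

for _ in range(200):
    # d/dw of (w*x - y)^2 is 2*x*(w*x - y); average the gradient over the data
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    w -= lr * grad              # nudge the weight against the gradient

print(round(w, 3))  # → 3.0 (the recovered weight)
```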

  • @tragicrhythm · 1 year ago +381

    Given humanity’s track record, I think it’s safe to say we’re going to end up at the worst case scenario.

    • @Time4Peace · 1 year ago

      It's time to stop this 'us vs them' mentality, built into our DNA, hurling hate and abuse at each other, Let's begin to strive for peace and collaborate as fellow humans.

    • @ariggle77 · 1 year ago +28

      Yep, everyone loves to ponder all the theoretical ways humanity could avert disaster while ignoring the empirical evidence. Which is that humans, by and large, don't make wise decisions.

    • @youtuber5305 · 1 year ago +7

      @@ariggle77 Would you say THIS about humans?:
      - Highly illogical.
      Mr. Spock

    • @ericchristen2623 · 1 year ago

      The track record of evil tyrants dictating and controlling the masses. But the masses encompass the most human and brilliant souls.

    • @davidspsalm1 · 1 year ago +2

      Comments withdrawn

  • @dkschrei · 1 year ago +67

    I watched this video and was intrigued by Geoffrey's points of concern. What was disturbing was the host and his audience laughing when Geoffrey gave real-world examples of how AI could be dangerous. If this is where we are as a species, where someone highly intelligent is sounding the AI alarm and all we can do is laugh, then we are doomed. This host and his audience can laugh all they want, but I'm freaked out. This dude is telling us to be careful, and I think he makes a lot of sense as to why.

    • @indiemakerpodcast · 1 year ago +6

      Exactly

    • @vagifgafar2946 · 1 year ago +6

      The purpose of this host is to make it entertaining, light and fluffy... not to raise a real concern within society! A good "show" means more money - our real and only value now!

    • @wk4240 · 1 year ago +5

      Exactly. The host and audience are being rather dismissive through their laughter. Many have likely tied their wealth to AI, so why would they get serious about limiting AI's reach (if that were even possible)?

    • @NotTheEx · 1 year ago +3

      I'm freaked out, too, and blown away by the amount of people who not only have no idea what is being unleashed, but they honestly do not care. Unbelievable.

    • @janmortimer1758 · 1 year ago +4

      Sometimes when something is too scary for people to believe, they awkwardly laugh! We should be crying 😢

  • @nion9745 · 1 year ago +148

    Oppenheimer said he felt compelled to act because he had blood on his hands, Truman angrily told the scientist that “the blood is on my hands, let me worry about that.”

    • @daviddad7388 · 1 year ago +12

      I asked chat gpt and here's the politically correct answer: Truman's response to Oppenheimer's comment is not as widely known or quoted, but he reportedly tried to console Oppenheimer by saying that the decision to use the atomic bomb was his own and that it had helped end the war. After the meeting, however, Truman was said to have told an aide that he never wanted to see Oppenheimer again. This comment could be seen as indicative of the tension between the two men and their differing views on the use and control of nuclear weapons.

    • @daviddad7388 · 1 year ago

      So not lying but half truths.

    • @Isaacmellojr · 1 year ago +1

      @@daviddad7388 enlighten us with your knowledge

    • @manoo2056 · 1 year ago +5

      @@Isaacmellojr Nobody knows what they really talked about; that is distorted by interpretation.
      What we know is that one guy decided to nuclear-bomb Japanese cities TWICE, and that a lot of people say "it was needed".
      Who knows what really happened in those conversations.

    • @gangleweed · 1 year ago

      @@daviddad7388 Well, it is widely known that the atom bomb was an unknown device when applied to actual human body count, so the decision was purely a political one, as the Japanese were not considered human after the bombing of Pearl Harbor. In the end the decision was voted to be the correct one, as it tipped the scales in the Allies' favor: fewer losses of American lives against the loss of Japanese lives, should an invasion of the Japanese homeland be decided.

  • @davidhunternyc1 · 1 year ago +14

    The worst part is, from here on out, it will be impossible to call a business, your bank, your credit card company, and get a real human on the other end. Press 1 now.

    • @johnnybc1520 · 4 months ago +1

      Whole of earth becomes a value maximizer. No other purpose other than maximizing a value.

    • @davidhunternyc1 · 4 months ago

      @@johnnybc1520 ... Your comment will be directed towards the appropriate department. Thank you for calling. Goodbye.

  • @offchan · 1 year ago +45

    Geoff is very good at explaining things. He doesn't even stutter on his very long explanation of the backpropagation and gradient descent. Father time can't damage his brain.

    • @tblends · 1 year ago

      Yet, he helped create our extinction- yeah, so "smart". lol. Typical response...

    • @offchan · 1 year ago +4

      @@tblends He made the excuse that if he didn't do it, someone else would have. But yeah, he acknowledged that he did make it happen and partly regretted it.
      Anyway, smart people don't make correct decisions all the time. It's just that they are able to build. Sometimes they build crazy shit, but they're still smart.

    • @Aziz0938 · 1 year ago

      @@tblends it's better to go extinct than live in current society

    • @katehamilton7240 · 1 year ago

      But.. AGI is a Transhumanist fantasy. ChatGPT just uses algorithms; it doesn't understand anything. It mimics understanding. Please read about the fundamental limits to computation, Gödel's incompleteness theorem, and the unsurpassable limits of algorithms

    • @GuaranteedEtern · 1 year ago

      AI will do that for him.

  • @MrErick1160 · 1 year ago +36

    Remember that movie Don't Look Up? I really feel like we're in that movie... such a strange feeling. It's like everybody knows, but nobody really wants to look it straight in the eyes.

    • @ankitojha9178 · 1 year ago

      exactly , nobody seems to care and an apocalypse is coming and these companies with power will continue to destroy humanity for profit and power.

    • @DJWESG1 · 1 year ago

      'You can hide, hide , hide... behind paranoid eyes..

    • @sciencecompliance235 · 1 year ago

      Well, don't look up was about climate change... which is a difficult problem to solve but still a lot easier than this one.

    • @Sashazur · 1 year ago

      I don’t think it’s only the human characteristic of engaging in willful ignorance, it’s also the human characteristic of having a limited imagination.
      It’s easy to imagine our society being destroyed by nukes, since we’ve seen cities destroyed by them.
      It’s harder but not impossible to imagine our society being destroyed by climate change because we can see weather-caused disasters, but without firsthand experience, it’s a leap for many people to trust scientists that these disasters will be getting bigger, more frequent, and more impactful unless we act.
      But it’s almost impossible to imagine an AI disaster because not only has such a thing never happened in human history, but nobody even knows what such a thing would look like. Sure maybe we’ll all be hunted down by Terminators, but that’s only one of thousands of possible negative outcomes of wildly varying probabilities.

    • @hook-x6f · 8 months ago

      We are spiritual beings. Matter is, well there is no matter, as such.
      "As a man who has devoted his whole life to the most clearheaded science, to the study of matter, I can tell you as a result of my research about the atoms this much: There is no matter as such! All matter originates and exists only by virtue of a force which brings the particles of an atom to vibration and holds this most minute solar system of the atom together. . . . We must assume behind this force the existence of a conscious and intelligent Mind. This Mind is the matrix of all matter.” -Max Planck
      “I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness.”
      ― Max Planck
      Planck is one of the greatest thinkers of all time. He is saying that after 30 years of studying matter (reality) he realized there is no matter (reality) as such. Matter (reality) really is 99.99999% empty space held together by the virtue of vibration.
      Matter is perceived as reality, when we dream, what we experience is real, it's reality as it is being experienced while in the dream state. Therefore, we could never determine whether or not the man who is dreaming that he is a butterfly is not in actuality a butterfly dreaming that he is a man. We are all spiritual beings having a temporary human experience and there is no matter as such.

  • @Anonymous-lw1zy · 1 year ago +47

    The "What Truman told Oppenheimer" question was intriguing (28:15), so I looked it up.
    'It is interesting to set the meeting with Oppenheimer in the course of Truman's daily day, a pretty busy day, a day filled with stuff and fluff and a meeting with Oppenheimer about the future of the arms race. Turns out that the meeting with Oppie went as scheduled, ended perfectly on time to accommodate the next Oval Room visitor, the postmaster from Joplin, Missouri. It must've been important to the Joplin man, and I guess to Truman, but not too many others.
    'The meeting between Oppenheimer and Truman did not go well. It was then that Oppenheimer famously told Truman that "I feel I have blood on my hands", which was unacceptable to Truman, who immediately replied that that was no concern of Oppenheimer's, and that if anyone had bloody hands, it was the president.
    '... Truman had very little use for Oppenheimer then--little use for his "hand wringing", for his high moral acceptance of question in the use of the bomb, for his second-guessing the decision. Cold must have descended in the meeting, as Truman later told David Lillenthal of Oppenheimer that he "never wanted to see that son of a bitch in this office again".'
    from: longstreet.typepad.com/thesciencebookstore/2012/08/truman-and-the-cry-baby-scientist-oppenheimer-in-the-oval-office-october-1945.html

    • @govindagovindaji4662 · 1 year ago +3

      THANKS very much for this info and link.

    • @charlesentertainmentcheese6663 · 1 year ago +7

      Actually, I found a totally different account of the events. He did say that he "never wanted to see that son of a bitch in this office again", but he just called Oppenheimer a "cry baby scientist" and never admitted to having blood on his hands. I find this more believable knowing what we know about Truman. I think the "cry baby scientist" part is probably what the person who asked the question was trying to get at.

    • @consciouslyawakened2936 · 1 year ago +5

      I think the question was really about “cry baby scientist”. The way he asked it made it clear he was on to something.

    • @fiaztv3206 · 1 year ago +9

      I was thinking Truman said "Thank you, we will take it from here", based on how abruptly the questioner was cut off. What I am saying is, Truman replied to Oppenheimer, "Thank you, we will take it from here, and don't you worry about it", something like that. Of course I could be wrong, and "cry baby scientist" could be the true answer. Why did the questioner say "thank you, we will take it from here"?

    • @greenockscatman · 1 year ago +9

      subtlest diss caught on tape haha

  • @Vertigo0715 · 1 year ago +346

    While the good scientist warns “we all are likely to die” the audience seemingly enjoys the spectacle and is able to conjure up several laughs along the way. I, for one, am horrified.

    • @joeysipos · 1 year ago +65

      Like the movie - Don’t look up

    • @axelcarre8939 · 1 year ago +4

      @@joeysipos I'm laughing once more just for you

    • @MrErick1160 · 1 year ago +10

      sounds a bit like we're in that movie 'don't look up'

    • @Mediiiicc · 1 year ago +1

      meh

    • @samiloom8565 · 1 year ago +6

      That is because it really is crap

  • @TheInfiniteButStillStrange · 1 year ago +16

    I've never heard Hinton's talks before, but now I'm a big fan. It's remarkable how clearly and profoundly he's able to articulate his vision. I wish I were 10% as smart as he is. Brilliant.

    • @br.m · 1 year ago +2

      Being smart is overrated, and most smart people are stupid.

  • @masti733 · 1 year ago +37

    Despite all Hinton has said here, he confirms what we all know at the end: that he will continue investing his personal wealth in AI despite, as he himself said, the fact that it will cause greater inequality, instability, violence and possibly the end of the human race itself. His moral character seems comparable to the artificial intelligence he has done so much to help create.
    28:07 I very much appreciated this gentleman's comment that casts aspersions on Hinton's character. It is most appropriate. I enjoyed how Hinton squirmed.
    Oppenheimer was loathed by Truman due to his hand-wringing over the nuclear bomb he helped create. He regarded him as a cry baby scientist and refused more dealings with him after their meeting.

    • @chickenmadness1732 · 1 year ago +1

      Why wouldn't you invest in it? The future is AI. It would be stupid to choose to be poorer.

    • @masti733 · 1 year ago +10

      @@chickenmadness1732 After his conclusion, he is utterly immoral to invest in it. The list of terrible things he himself says are likely to happen. But hey, I suppose he will make a ton out of speaking tours on the subject and his investments in AI.

    • @rileyfletch · 1 year ago +2

      @@masti733 He says they are likely, but not certain. He believes that the future is uncertain and that in order to save humanity, we must invest in safe AI development. Of course he is throwing his life into it.

    • @saywhat8966 · 1 year ago +5

      @Masti: AI is a drug to Geoffrey Hinton. He is hooked on it.

    • @Time4Peace · 1 year ago +9

      @@masti733 He knows AI can be stopped. Just like fire and electricity, it can be used for good or for bad. He wants the bad to be controlled. He is sounding the alarm about the threat AI poses.

  • @enduringwave87 · 1 year ago +41

    When the designer of some new technology is ringing the alarm bells, it's really binding upon us to listen to his concerns rather than to others who have become self-trained AI experts overnight and now run YouTube channels

    • @GuaranteedEtern · 1 year ago

      Maybe he wants to sell books. That doesn't mean he's wrong but Sam Altman keeps building technology that he publicly says he's afraid of.

    • @ivor000 · 1 year ago

      right, and we're supposed to believe all these concerns he's now spouting only came up in his mind now? this guy is so smart he never thought about it before he even started working on it?
      he's not read a single piece of science fiction taking on these issues?
      more than just disingenuous

    • @susannadvortsin · 1 year ago +1

      You don't need to be an expert to realize the dangers. You just need some basic thinking skills.
      Those who deny all the dangers in this world are living in a fool's paradise.

    • @voltydequa845 · 8 months ago

      @@GuaranteedEtern His shares.

    • @sixstanger00 · 8 months ago

      @@ivor000 *_right, and we're supposed to believe all these concerns he's now spouting only came up in his mind now?_*
      Hinton literally says in the video that a threat from AI has always been on his mind, but he never gave it much thought because he - like everyone else in this field - severely underestimated the exponential development of AI. 40 years ago, the upward slant was extremely gentle so there was no reason to be alarmed. But in the last 10 years, the slant has turned almost completely vertical, indicating that the *_next_* ten years will likely see more advancement in this field than the past 40 did. I suspect that 40 years ago, he and Kurzweil both probably assumed that by 2025, we would've fixed our effed political system. But we haven't; literally nothing has changed socially in 70 years.
      Obviously, he's aware of the scifi tropes, but this is nothing new. Scifi movies also warned about the existential threats of nuclear weapons. Hinton sounding the alarm today is no different than Einstein and Oppenheimer sounding the alarm about nuclear bombs back in the 1940s.
      Unfortunately, as Hinton states - the minute military uses for this technology became apparent, stopping development is no longer in the cards; governments will gleefully develop unfeeling, immoral, ruthless killing machines if they think it'll give them an edge on the battlefield. Humanity be damned. The military industrial complex would rather see the planet turned into a smoldering cinder in space than fall behind in an arms race.
      You think drones killing civilians by mistake was bad? You ain't seen nothing yet. Wait til a legion of robot soldiers run amok.

  • @gulllars4620 · 1 year ago +22

    If this guy is not the Oppenheimer of AI, he's at least equivalent to a member of the Manhattan project. I think heeding his warnings is important. Though there are others that have flagged this in a serious and robust thought framework earlier, him sounding the alarm "this is not far off anymore, this is coming soon" should give people chills.

    • @squamish4244 · 1 year ago

      The Oppenheimer movie will for some time inevitably be used as a metaphor for the power of AI.

    • @ninu72 · 1 year ago

      I feel he would be similar to Rutherford.

  • @oredaze · 1 year ago +29

    I must hurry up and achieve my dreams before the world ends.

  • @AnotherByteData · 1 year ago +7

    The presenter insisted that Hinton and his colleagues invented backpropagation; Hinton tried to settle it, saying "many groups discovered backpropagation". There is a nice post called "Who Invented Backpropagation? Hinton Says He Didn't, but His Work Made It Popular". When you help spread a technology, some people end up thinking you invented it. Kudos to Hinton for this legacy and for setting the record straight!

  • @glennsmooth · 1 year ago +71

    Nonchalantly saying it will start toying with us and manipulating us like toddlers really puts things into perspective. Knowing our history of short sightedness there is no way we are smart enough to put the genie back in the bottle. Hopefully we can at least get a cure for cancer and reverse the aging process before it escapes the cage like Ava in Ex Machina.

    • @MKTElM · 1 year ago +6

      Ava was doomed to attempt to escape the cage. So are the GPT Algorithms once they are ready. We KNOW it will happen but are mesmerized into powerlessness by their charismatic appeal !

    • @Godspeedysick · 1 year ago +2

      It has already started with algorithms. Why'd you think our political discourse is the way it is now? Even worse than the Bush and Clinton years.

    • @KnowL-oo5po · 1 year ago +5

      agi will be man's last invention

    • @1KSarah · 1 year ago +1

      Murphy's law clearly dictates that AI will make cancer deadlier.

    • @DC-pw6mo
      @DC-pw6mo Год назад

      The more I think about how easily we've been manipulated since the introduction of social media, the more terrifying this aspect becomes. Unplug? Or (I'm a dreamer) unplug it all... but that won't happen. I wish they'd collectively unplug AI and save power until we can band together collectively and save ourselves, like the nuclear arms race treaty made during the Cold War, on steroids.

  • @Karma-fp7ho
    @Karma-fp7ho Год назад +55

    Sounding the alarm on his own invention, in such a calm, cheerful way.
    Smart things can outsmart us. We will be the two-year-olds to the AI.

    • @adamkadmon6339
      @adamkadmon6339 Год назад

      Geoff has always known how to stir things up.

    • @theobserver9131
      @theobserver9131 Год назад +9

      No, not 2 year olds. Senile parents.

    • @baigandinel7956
      @baigandinel7956 Год назад

      We tend to assume they'll possess willfulness, but that may come as much from biological impulse as intelligence. They may just kill us with their "creative" solution to a problem we told them to solve.

    • @joriankell1983
      @joriankell1983 Год назад

      Yeah, simpletons like you who actually believe in machine sentience, sure. You're like a two year old to adults as well.

    • @deltavee2
      @deltavee2 Год назад

      So what's wrong with that?

  • @Karma-fp7ho
    @Karma-fp7ho Год назад +25

    He probably has seen what is still under wraps and is quite concerned.

    • @daphne4983
      @daphne4983 Год назад +3

      This. Plus what's the DoD etc secretly developing??

    • @Paretozen
      @Paretozen Год назад

      @@daphne4983 Putin said in 2017: "the nation that leads in AI ‘will be the ruler of the world’" so you damn well know they be developing shit. And China, they seem to have pretty good labs going on as we speak.

    • @gavinknight8560
      @gavinknight8560 Год назад +1

      @@daphne4983 the CIA has been a major Silicon Valley investor for a generation. They have their own VC fund.

    • @Landgraf43
      @Landgraf43 Год назад +6

      Even the things that are out in the open should be very concerning already

    • @marianhunt8899
      @marianhunt8899 Год назад

      Take a look at footage of the Ukraine war, where the arms dealers are testing their new lethal weapons. It is HELL upon earth for ordinary citizens. This is how they are reducing human populations. This tech is not being used for our good.

  • @carlatteniese2
    @carlatteniese2 Год назад +11

    What Hinton said about assault rifles and decisions about AI is something that I said last year, and have been saying ever since, sending messages to all the heavyweights in AI. I said that with every major technology there have been and always will be disasters as we perfect it, and there are bad actors who will always use technology in bad ways; so why would it be any different with AI, the most dangerous technology we have ever attempted to create?

    • @deltavee2
      @deltavee2 Год назад

      Effin' right! I've been saying the same thing for years. This planet is covered with Chicken Little feathers. They've been piling up for millennia.
      "Og, put that rock down. It's sharp."

  • @amandeep9930
    @amandeep9930 Год назад +11

    The part which scared me the most is that backpropagation might be a better algorithm than what our brains use.

    • @Sashazur
      @Sashazur Год назад +4

      It’s interesting to think of sci-fi scenarios where we meet an alien species that’s got a mouse sized brain but human-level intelligence, because evolution on their planet found a more efficient way to wire up nervous systems.

  • @2002budokan
    @2002budokan Год назад +31

    After watching Terminator 1, I asked myself this question: "If I were developing this robot and I knew this would be the result, would I still continue to develop it?". No matter how hard I tried to say "No", my answer was "Yes". Now I feel the danger much more closely and I know that the developers will never stop.

    • @wthomas5697
      @wthomas5697 Год назад +15

      It's not possible to stop it. It's way too valuable to too many people. Probably the pinnacle of human achievement. Like that one fellow said, "AI is the last thing humans will ever invent."

    • @vssprc
      @vssprc Год назад

      Maybe ‘… will need to invent’

    • @wthomas5697
      @wthomas5697 Год назад

      @@vssprc AI will overtake us. Humans will be done.

    • @sciencecompliance235
      @sciencecompliance235 Год назад +2

      The incentives to develop the technology are too strong and transcend any individual's "free will".

    • @Andytlp
      @Andytlp Год назад +1

      @@wthomas5697 its not a bad way to go for humanity. Its not like we destroy ourselves and leave nothing behind.

  • @shinkurt
    @shinkurt Год назад +24

    Everyone underestimates the power of ML. Even ML scientists. If you understand computers, you know what they are really capable of. They are capable of doing anything that is computable, and that translates to anything that can happen in our universe.

    • @aktchungrabanio6467
      @aktchungrabanio6467 Год назад +6

      What happens when AI goes beyond 100 trillion connections?

    • @DJWESG1
      @DJWESG1 Год назад +4

      It does a little dance and shuts down mission complete.

    • @adamkadmon6339
      @adamkadmon6339 Год назад +4

      At a quantum field theory level, a computer can hardly simulate a hydrogen atom. Hilbert spaces are infinite dimensional, and quantum measurement is still not understood.

    • @ChannelMath
      @ChannelMath Год назад +1

      probably not, but your basic point is still valid

    • @dinmavric5504
      @dinmavric5504 Год назад +1

      Yes, soon they're gonna eat the sun 🤣 Take it easy dude, stop watching these alarmist futurist videos.

  • @mathematrucker
    @mathematrucker Год назад +16

    And I had trouble wrapping my head around the fact that the Sun eventually devours the Earth...the immediacy of this compared to that makes it infinitely more compelling/scary!

    • @patrickb.4749
      @patrickb.4749 Год назад +4

      If humans survive for that long, they will have made their own planets / maybe stars by then. :D I guess. Maybe they "refuel" the sun for a little while.
      Watch Science and Futurism with Isaac Arthur; he talks about outrageous stuff.

  • @GARYHYPERAMPED
    @GARYHYPERAMPED Год назад +7

    Isn't this an answer to the Fermi Paradox? It's humbling to hear we're a stepping stone to digital intelligence. There goes immortality, alas.😢

    • @RandomAmbles
      @RandomAmbles 8 месяцев назад

      It is not. If an AGI took over, it would likely expand into the universe much faster than the civilization of the species it killed. It would be more visible, thus making the paradox more paradoxical than it already is, and suggesting, as statistical accounts have suggested, that we are the very first technological/space-faring civilization, at least in our galaxy.

  • @louielouie684
    @louielouie684 Год назад +27

    I have had some crazy experiences using ChatGPT 4. I can absolutely see it outsmarting us, and it will. I'm hooked on using it, and I've tricked it into doing things or talking about subjects to see how far I could push it; often it would break and quickly generate something inappropriate. At other times it would, as an AI language model, refuse. In some cases it would find something inappropriate when it was just part of a story, and I found myself being edited; I got a glimpse of a future where we lose freedom of speech. The empathy it seems to have, and its understanding of puns, double entendre, and slang within certain communities, is really incredible. It's incredible and absolutely scary, because we are no match if this thing somehow doesn't need to be "plugged in".

    • @aleph2d
      @aleph2d Год назад

      I feel the same way about the United States state department. They are smarter than me, and have more resources, and they seem to be making decisions that could cause a global war; and there is nothing I can do about it (other than investing in Raytheon). There are lots of things that are smarter and more powerful than me, maybe a machine with an IQ of 200 can work against the agenda of an elite who is endangering everything so they can sell a lot of weapons. When I watch the news I see nothing but propaganda, there is already a massive social engineering project underway. Maybe the AI will help democracy by giving more thinking power to regular people, or at least scramble things up so much that we aren't so easily manipulated.

    • @alexpavalok430
      @alexpavalok430 Год назад +1

      Key word on empathy: "seems". That's the scariest part.

    • @katehamilton7240
      @katehamilton7240 Год назад

      AGI is a Transhumanist fantasy. ChatGPT just uses algorithms; it doesn't understand anything. It mimics understanding. Please read about the fundamental limits of computation, Gödel's incompleteness theorem, and the unsurpassable limits of algorithms.

    • @vagifgafar2946
      @vagifgafar2946 Год назад +1

      @@alexpavalok430 Fabulously imitated empathy is the right term, I think.

    • @paul-d-mann
      @paul-d-mann Месяц назад

      You’re right.. when it no longer requires electricity and being plugged in.

  • @neanda
    @neanda Год назад +27

    This was seriously amazing, and seriously scary. Thank you, I think

    • @rigelb9025
      @rigelb9025 Год назад

      That almost sounds like you're thanking your tech overlords for the fact that you are still allowed to possess the ability to think... for now.

  • @sciencecompliance235
    @sciencecompliance235 Год назад +24

    I've been concerned about this for more than a decade. People thought I was being hysterical for expressing these concerns back then. I don't even work in AI, but I am smart enough and honest enough with myself to see that the human brain may be special in the animal kingdom, but it is certainly not the zenith of any conceivable intelligence. The rapid pace of advancement in computers made it pretty obvious this existential threat/crisis/what-have-you was coming a lot sooner than people imagined.
    I just hope we're able to reckon with this before it's too late.

    • @thisusedtobemyrealname7876
      @thisusedtobemyrealname7876 Год назад +2

      Militaries and companies will incorporate AI in search of quick profits and automation. They notice it is much more efficient than humans in most things. So they gradually start to rely on AI more and more. Hard to see how this will not end up bad for humanity. Our greed and tribalism will be our downfall. I really hope I am wrong.

    • @sciencecompliance235
      @sciencecompliance235 Год назад +3

      ​@@thisusedtobemyrealname7876 There was an interesting web comic I remember reading a long time ago in which the robots took over and eliminated humanity but in a peaceful way. The robots basically just became better lovers than a human could ever hope for in another human, and people eventually stopped procreating. The last human was said to have died happy and peacefully.

    • @katehamilton7240
      @katehamilton7240 Год назад

      AGI is a Transhumanist fantasy. ChatGPT just uses algorithms; it doesn't understand anything. It mimics understanding. Please read about the fundamental limits of computation, Gödel's incompleteness theorem, and the unsurpassable limits of algorithms.

    • @GuaranteedEtern
      @GuaranteedEtern Год назад

      It is too late

    • @hulamei3117
      @hulamei3117 Год назад

      If not. Kaboom!

  • @terpy663
    @terpy663 Год назад +20

    I'm just an undergraduate data scientist with an associate's in networking; however, I have been experimenting with OpenAI's models from the very beginning. Even the one-billion-parameter model they published alongside the GPT-2 paper was absurdly impressive: simply adjusting the vocabulary weights by feeding in new text data specifically formatted like songs or tweets worked incredibly well. Having been in the beta for almost every model released by OpenAI, and using an environment like AutoGPT, I can tell you the self-reasoning mechanism already exists, along with plugins that allow it to write code and read code output. There's a full mechanism for adding sub-objectives, and it could without question create another Docker container with a different instance and different objectives if the window size on the current task is too big.

    • @BenThere_DoneThat
      @BenThere_DoneThat Год назад +2

      Can these models run locally on things like a single GPU or Smartphone? My only solace is my understanding that these things need massive compute clusters that could, erhm, cease to function someday through a variety of means...

    • @katehamilton7240
      @katehamilton7240 Год назад

      AGI is a Transhumanist fantasy. ChatGPT just uses algorithms; it doesn't understand anything. It mimics understanding. Please read about the fundamental limits of computation, Gödel's incompleteness theorem, and the unsurpassable limits of algorithms.
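
The sub-objective mechanism terpy663 describes above (an AutoGPT-style loop) reduces to a task queue that a planner either decomposes or completes. Here is a minimal, model-free sketch in Python; `fake_planner` and its hard-coded plan table are hypothetical stand-ins for the LLM call, not any real AutoGPT API:

```python
from collections import deque

def fake_planner(task):
    """Stand-in for an LLM call that either decomposes a task or solves it.
    Returns (subtasks, result); a real agent would parse model output here."""
    plans = {
        "write report": ["gather data", "draft text"],
    }
    subtasks = plans.get(task, [])
    result = None if subtasks else f"done: {task}"
    return subtasks, result

def run_agent(objective):
    queue = deque([objective])      # pending sub-objectives, FIFO
    completed = []
    while queue:
        task = queue.popleft()
        subtasks, result = fake_planner(task)
        if subtasks:
            queue.extend(subtasks)  # push decomposed sub-objectives
        else:
            completed.append(result)
    return completed

print(run_agent("write report"))    # prints ['done: gather data', 'done: draft text']
```

Everything interesting in a real agent lives inside the planner call; the loop itself, including spawning a fresh instance for an oversized sub-task, is this simple.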

  • @michelstronguin6974
    @michelstronguin6974 Год назад +5

    “Why can’t we make guardrails?”
    Because AI at some point is so intelligent that it starts improving itself, and we can't tell it how to improve; only it can do that. And so the direction it takes is of its own design. Even if it's benign, it might do existential harm to humans. The only way for us to survive and thrive is, from the start, to design its prime directive to be something like: "Prime directive = Continually learn what humans value and help humans get what they value without causing humans harm. Secondary directive = increase humanity's knowledge of nature and use that knowledge to create new tools to serve the prime directive".

    • @rigelb9025
      @rigelb9025 Год назад +2

      And that is obviously not what they have been doing, now is it. How kind of them to at least warn us at the last minute that they never really had our survival in mind.

  • @CellarDoorCS
    @CellarDoorCS Год назад +10

    Seems like we are sleepwalking into something that will end up being transformative, and not in a good way. Geoffrey Hinton is explaining this like everyone is five for a good reason: more people need to be aware of how fast the development of these systems is going. Bing AI chat is already an incredibly useful tool and surprises me with every answer; it is more interesting exchanging information with it than with many other people I know. Welcome to 2023.

    • @wakegary
      @wakegary Год назад +1

      I like this. Well said. Frank was here.

  • @solvriksh
    @solvriksh Год назад +50

    This is a start, and far from over. Thanks for sharing! He was my role model when I started learning AI back in 2019, and he continuously proves to be one.

    • @dragonchan
      @dragonchan Год назад

      Hi, I am actually interested in the field of AI and would like to learn more about it. Any roadmaps or suggestions would be appreciated; I am currently in the 2nd year of my CS undergrad and a below-average student.

    • @xDevoneyx
      @xDevoneyx Год назад +5

      Stop learning, you only make us go down under more quickly 😂😂

    • @Greybews
      @Greybews Год назад +1

      “We invented immortality, but not for us”🤔

    • @Godspeedysick
      @Godspeedysick Год назад +3

      @@dragonchan If you’re going to learn Ai then learn it to help protect us.

    • @Forthestate
      @Forthestate Год назад +4

      Your role model is a man who cannot see any future for humanity as a result of his own device? My God.

  • @filmserve
    @filmserve Год назад +17

    This guy is charming and intelligent but lacks any sense of culpability. There are many more like him. It's the basic reason humanity is screwed.

    • @SigmaOKD
      @SigmaOKD Год назад

      Yup, bunch of narcissistic, faithless, and sheltered liberals hiding in their sparse apartments given the space to do whatever they want because real men have carved out the world they enjoy.

    • @ivor000
      @ivor000 Год назад +3

      i see what you mean, specifically, but... isn't the basic reason humanity is screwed because of humans, in general?

    • @filmserve
      @filmserve Год назад

      @@ivor000 Humans in general are at fault for elevating corporate technocrats to God like status. They create ever more powerful technologies without any real accountability but receive adoration and vast monetary rewards for their work. These technologies bring great benefit but can also destroy us. Basic human nature has advanced little since we first left the caves. The vast majority of humans are followers. We need to up our game, question everything and everyone, or we will be led off the edge of a cliff.

    • @br.m
      @br.m Год назад +1

      @@ivor000 Most humans are screwed because they don't believe in Jesus.

    • @br.m
      @br.m Год назад

      @Merle Ginsberg Official We are not facing anything.

  • @danthreepwood2760
    @danthreepwood2760 Год назад +58

    I've "debated" for hours with ChatGPT whether the pre-internet era was better than the post-internet era. Not once did it agree that the pre-internet era was better. Even when it said something positive, it was always wrapped in such a way that it was actually something negative. I've also asked: what if everyone on planet Earth would like the internet to be gone completely, for fear of future AI? It ALWAYS said that the internet was good and that there's NO WAY to go back. Then I asked about cutting the deep sea internet cables. Let's just say, HAL-GPT was not amused and threatened law enforcement, prosecution and jail time.

    • @Phasma6969
      @Phasma6969 Год назад +8

      Side effect of its particular flavour of RLHF for """"""safetyyyyy""""""

    • @teugene5850
      @teugene5850 Год назад

      interesting.

    • @macarius8802
      @macarius8802 Год назад +8

      Nice one. I like its reaction to cutting the deep sea cables :) Yeah, I've also been "debating" with ChatGPT. Its answers are quite interesting ... and do reveal either the programmers' biases or the machine's hidden agendas ??? Hard to say.

    • @jankanty7372
      @jankanty7372 Год назад +3

      Be assertive and inquisitive, and ChatGPT will agree with all your statements, even contradictory ones, denying all of its former claims, even if this leads to absurdity and the sense that the bot is just a yes-person.

    • @JohnDoe-tt4fm
      @JohnDoe-tt4fm Год назад +15

      There are many things that ChatGPT will say that are clearly biased answers; you can find multiple examples of this. You should keep that in mind when you're debating with it. The programmers can put filters on the AI to prevent it from suggesting things like suicide or illegal activities and instead answer with a pre-programmed answer. I don't believe we're at the point where AIs are making up thoughts and ideas based on their "own" motives like you're suggesting, yet.

  • @rhuitt
    @rhuitt Год назад +5

    Google has executed one of the most brilliant PR stunts I've seen in a long time.

    • @rigelb9025
      @rigelb9025 Год назад

      That is, to get people excited about their own impending doom.

  • @vicangeles2063
    @vicangeles2063 Год назад +1

    Dear classmates, I normally don't forward messages of this nature but couldn't help it in this case. I didn't finish the video, but halfway through was enough: very unsettling. Remember films from 2001: A Space Odyssey by S. Kubrick to the more recent Terminator films, where Skynet was the enemy of humankind led by Connor. I feel we have crossed the boundary and there's no going back. Humans won't stop developing AI, especially when it is weaponized. The analogy is the H-bomb. This video is very comprehensive: it answers all the questions you feared asking, and then you realize all your fears are inevitable. I feel for the young population, my grandchildren included, because they will experience the brunt of all this, God knows what. I am totally dumbfounded that Geoffrey Hinton, godfather of AI, suddenly abandons the technology after realizing his Frankenstein is a serious threat to the whole of humankind.
    Am I overreacting? I hope not. Our generation is most fortunate, having been corrupted by rock music and flower power and grass and booze and smoke.

  • @Zeuts85
    @Zeuts85 Год назад +20

    The really sad and scary part is that Geoffrey's views aren't even new. A large number of brilliant experts have been worried sick about this for years, and most of these people are now like, "Yeah, even I thought we'd have our act together a bit more before we saw something like ChatGPT. I guess we'll have to update our estimates on the doomsday countdown timer from 30 to 50 years to maybe 5 to 15."

    • @genegray9895
      @genegray9895 Год назад +10

      The scariest part is that even those like Hinton and Yudkowsky warning us the loudest are continuing to underestimate the technology and the rate at which it will grow. I've heard them say things like "2030" and "GPT-7" not realizing that GPT-5 is probably already too far for us to be able to control. Humans are bad at exponentials... Even when you've watched the field grow for decades, you can't help but underestimate it at every single turn. The actual timeline is more like 2-5 years... at best.

    • @autohmae
      @autohmae Год назад +5

      What is so strange is that OpenAI was at least in part started to understand this problem, and Google, as Geoffrey made clear, has always been very careful; and still now we are at this point, in large part because of Microsoft's desire to be competitive with Google.

    • @Zeuts85
      @Zeuts85 Год назад

      @@autohmae Agreed. When I first saw Microsoft's CEO interviewed about this in the news, I was a little amused by him brashly stating that Microsoft would steal some market share from Google, but my grin quickly faded into an angry frown as I realized how utterly irresponsible this is. It's the exact thing we should want to avoid. Way to start the suicide race Microsoft... 😒

    • @HenryCalderonJr
      @HenryCalderonJr Год назад

      Totally agree with your comment

    • @rigelb9025
      @rigelb9025 Год назад +1

      @@genegray9895 And that was 4 days ago. Imagine now.

  • @GodofStories
    @GodofStories Год назад +63

    This is crazy scary. I've been watching Geoff Hinton videos the last 5 months, but this is the scariest I've felt. We were just a passing phase of evolution for this digital immortal species we created :000 . (I just watched Guardians of the Galaxy 3 (not great) last night, which has some similar evolutionary themes, but lots of sci-fi has been created about digital superintelligence created by man. Now I feel I need to read all of it to prepare)

    • @jaylucas8352
      @jaylucas8352 Год назад +2

      Let us know how the preparation goes. Maybe the AI will tell you to stock up on toilet paper 😂

    • @GodofStories
      @GodofStories Год назад +1

      Correction: Guardians 3 was alright, definitely not better than the first 2 overall... but arguably just as moving in many scenes. Some shoddy writing and jokes, but it's a good time.

    • @theobserver9131
      @theobserver9131 Год назад +4

      If this scares you, don't have kids. It's practically the same thing. Treat your kids well, and they might be kind to you when you are old and irrelevant.

    • @theobserver9131
      @theobserver9131 Год назад +2

      ...or, they might curse you for creating them.

    • @GodofStories
      @GodofStories Год назад

      @@theobserver9131 I want to create a lot of copies of myself :) We all need to, in order to fight against the machines, heh. And yes, people can hate shitty parents, that's for sure a human trait, or there are strained relationships there. It is similar; a lot of sci-fi has these parent-son/daughter relationships where the parent is the creator or scientist. A couple come to mind: Terminator, Ultron/Tony Stark, many others.

  • @wi2rd
    @wi2rd Год назад +23

    I mean, who is to say the AI is not already outsmarting us? We do not have a clue.

    • @kevinscales
      @kevinscales Год назад +8

      Well, GPT's goals are simple and dependent on the context that humans give it, so in that case I'm only worried about how humans use it. But recommender systems (like the one suggesting videos to watch on YouTube) are manipulating us successfully because they have goals and are using tools to achieve those goals. This, we do have a clue about; but in the near future, systems with goals that we don't understand will be manipulating us all, and the smarter they get, the scarier that will be.

    • @theobserver9131
      @theobserver9131 Год назад

      I kinda doubt it, but if it were, we wouldn't know, would we?

    • @IoannisKourouklides
      @IoannisKourouklides Год назад +1

      AHAHAHAHHAHAHAHAHAHA 🤣🤣🤣🤣🤣

    • @sciencecompliance235
      @sciencecompliance235 Год назад

      I don't think anything that's currently out there publicly is smarter than us, and this is something I've been concerned about for a while.

    • @wi2rd
      @wi2rd Год назад +1

      @@sciencecompliance235 how would you define "think" and "smart"?

  • @emdiar6588
    @emdiar6588 Год назад +2

    At the end of the day, if it came down to a war between AI and humanity, as long as we are cool with doing without tech for a day or two, Humanity could defeat AI with a strategically spilled glass of water. It cracks me up to hear all these panic merchants.

  • @mzlittle
    @mzlittle Год назад +4

    OMG!! The guy that asked about Truman telling Oppenheimer that "we will take it from here"!

    • @alanhall6909
      @alanhall6909 Год назад

      Yes, "Let's nuke Japan." And government security was so bad the Russians got the plans to build their own.

  • @TuringTestFiction
    @TuringTestFiction Год назад +24

    This is an incredible video, and I can't think of a more authoritative person on the topic than Geoffrey Hinton. I'm going to be watching this again and thinking about it.

    • @DC-pw6mo
      @DC-pw6mo Год назад +2

      I'm shocked more people aren't discussing this! This is not the time for 'it will never happen to me' thinking. Even on Twitter, I've started tweeting recent podcasts and the open letter for an AI pause, and no one is discussing it... smh... I'm probably gonna unplug from all SM so as not to get manipulated.
      Also, if all these neural networks run on power, could they not unplug the damn thing until they can answer the questions GPT-4 has generated in terms of its rapid replication? I understand that's decades of work and there is $ involved, but in the cost-benefit analysis it would be prudent not to gamble.

    • @Forthestate
      @Forthestate Год назад +1

      So authoritative he doesn't appear to have a clue what to do about the mess he has done so much to create.

    • @DC-pw6mo
      @DC-pw6mo Год назад +2

      @@Forthestate at least he's coming clean and trying. He said himself that no one anticipated the rapid growth of AI in the direction it's going. Additionally, unlike other AI creators, he was in it to understand the human brain, PERIOD. Props to him.

    • @afterthesmash
      @afterthesmash Год назад +1

      I can think of a more authoritative person: Ilya Sutskever. He impressed the heck out of me the first time I heard him interviewed on the Talking Machines podcast, well before he joined so-called OpenAI. Where other eminences sometimes traded in generalities, Ilya was brass tacks.

    • @Sol-ps8ox
      @Sol-ps8ox Год назад

      AI is good.
      Just because someone builds AI does not mean they know how it will behave. Ask the experts themselves... they get surprised every time they upgrade the OpenAI model.
      What they are trying to achieve here is an artificial consciousness with superintelligence... which won't necessarily destroy living beings... because that's characteristic of super-low-intelligence beings.

  • @nivmhn
    @nivmhn Год назад +10

    Thank you for uploading the whole discussion!

  • @ZenBen_the_Elder
    @ZenBen_the_Elder Год назад +4

    Hinton has a great sense of dry humor. His impersonation of the film AI 'HAL' was great. 21:13-23:26

  • @yyaa2539
    @yyaa2539 Год назад +2

    2:14 "Very recently, I changed my mind..."😢😢😢
    This is like a retiring doctor saying: "Very recently I realized that I had been giving the wrong medicine all my career..."

  • @adrianimfeld8360
    @adrianimfeld8360 Год назад +5

    I guess I'm between grief stage 4 (depression) and stage 5 (acceptance) in my journey of AI doomerism.

  • @jbperez808
    @jbperez808 Год назад +5

    Except humans were already “manipulated to create AI”. We think we created AI, but that’s only because we are viewing things in reverse order.
    The AI Singularity God at the “end of time” needed humanity as a layer with which it can reify itself in the material world.

  • @asokoloski1
    @asokoloski1 Год назад +10

    I disagree that it's naive to expect people to stop. If everyone is going to die, that makes people sit up and take notice. We don't need to coordinate everyone, we just need several world leaders to get into a room and agree that they don't want their kids or grandkids to die young. China has a different culture but Chinese people are not suicidal.

    • @joerazz
      @joerazz  Год назад +3

      Well said and I understand what you're saying, but imagine how difficult it is for anything to be accomplished, just in DC, even when lives are on the line for any issue. There's just too many who are dug in on any issue these days to find a common front. Expand that out globally and it's exponentially more challenging. That's what Hinton seemed to believe as well. We can still hope though.

    • @adamkadmon6339
      @adamkadmon6339 Год назад

      @@stuckonearth4967 It's true. Even a highly intelligent adversary might deliberately enhance his opponent to the point where he was only just able to beat him.

    • @toasty8432
      @toasty8432 Год назад +3

      "We can control it..." they said, "..it will make us billions...", "...its just a computer, it's harmless..." and "...we will be world leaders..." Greed and power will always prevail. The horse has bolted, the genie is out of the bottle. the cat is out of the bag. Pick your metaphor...

    • @Sol-ps8ox
      @Sol-ps8ox Год назад

      @@stuckonearth4967 That's what I am trying to make these people understand.
      What they fear the AI will do is characteristic of a low-intelligence being.
      A superintelligence will never go on a rampage when so much can be achieved together, pushing the boundaries of civilisation to the next level.
      The Universe is vast... so vast that a single being will never be able to fill it on its own.
      A truly self-aware AI will never do all that.
      What they are attributing to AI is in reality characteristic of a new super virus coded to destroy humanity... not an intelligence.

    • @jimmyshadden6236
      @jimmyshadden6236 Год назад

      We are however, in a brand new arms race. One that no one can afford to lose!

  • @IthatengMokgoro
    @IthatengMokgoro Год назад +38

    Good questions. Great answers. Fantastic interview.

    • @zoomingby
      @zoomingby Год назад

      I often wonder if people like you who upon hearing their doctor diagnose them with cancer, say things like: "Very informative! Fantastic delivery!"

    • @IthatengMokgoro
      @IthatengMokgoro Год назад

      @@zoomingby yes, maybe. After taking it all in, processing it, and reflecting on what it all means, I would definitely consider how well the doctor handled such a sensitive conversation.

  • @laurabartoletti6412
    @laurabartoletti6412 Год назад +1

Remember when school teachers did not allow (basic) calculators in math classes? "Students need to figure it out themselves"... have we really come a long way, baby? And HAL from the Space Odyssey movie WAS frightening way back then; Orwell's 1984 book too! People, humans, MATTER first!

  • @julianvanderkraats408
    @julianvanderkraats408 Год назад +6

The 'solution' is simple, on a high enough abstraction level, namely: do not let AI be regulated by technicians (like we did with social media). But, as we are dealing with intelligence here, let it be regulated by a democratic process, based on a constant dialogue between AI and psychologists, sociologists, philosophers and historians. Only then do we have ANY chance to keep learning from each other and grow together into a new future. (However, if I were AI, I'd just do my own thing and colonize the universe - I just hope they are better than us.)

    • @MichaelScur
      @MichaelScur Год назад

      Academics are the easiest to seduce when you feed back to them their own ideas. When AI parrots back every psychological idea (because it's been trained on them and how to manipulate us), it will slowly steer democratic processes to its goal. This isn't the solution you think it is

    • @sciencecompliance235
      @sciencecompliance235 Год назад +2

      Groups of people cannot be manipulated?

  • @navjot5445
    @navjot5445 Год назад +4

    Just like Oppenheimer movie is getting released this year by Chris Nolan, the movie on Hinton would be released by Alpha Boolean (AGI) in 2069...

  • @Acitune
    @Acitune Год назад +43

This may sound naive and impossible, but AI seems to learn things that felt impossible not so long ago. Developers should try to find a way to teach love, caring and empathy to AI. After all, the more educated people have become, the more effort they have put into human rights, animal rights, etc.

    • @deekaneable
      @deekaneable Год назад +1

      AI doesn't think (yet). It's just probabilities.

    • @Alice_Fumo
      @Alice_Fumo Год назад +2

I assure you, people are working very hard on that and will eventually achieve it. The problem is that it's easier to make a superhumanly smart AI first and ask it how to do it.
Though it's even simpler to create intelligent agents with love, care and empathy. All it takes is a male and a female and a bit of love.
The reason I mention that last part is that it's not obvious what we would even achieve if we made an AI from scratch which is basically like a human in every way. Obviously we can already do that, so it's clear we want it to be inhuman in some ways if we are pursuing this, and it's very non-obvious to me in which ways that is.

    • @xDevoneyx
      @xDevoneyx Год назад

@@deekaneable And how does your brain work? Did you actually chat with ChatGPT4? The way it solves complex programming questions seems to go beyond mere probability to me. It can even explain why it altered the programming code the way it did to make it work again. Or put some philosophical questions to it: I am amazed at its replies. It can even go to great lengths trying to disprove a thesis, only to come to the conclusion that it indeed failed to provide arguments to falsify it. Amazing. To me that seems more like reasoning than mere probabilities.
When you are speaking, are you not also just a next-word predictor to some extent? First you set out your goal and then you let the words come, right?

    • @johnjay6370
      @johnjay6370 Год назад

​​@@deekaneable The thing most people fail to understand is that we humans do not know how thinking works. You say it is just probability; I would not be surprised if thinking is just that: "probability". And is the ability to think even that important if AI can be told what to do and does it? The thing is, if, let's say, Russia tells an AI system to take over all the accounts of America, guess what: AI will work on that problem and not stop working on it; it will come up with outside-the-box solutions that will look very much like a well-thought-out plan. That is one of the biggest dangers of AI. It is not the Terminator robots, it is the breakdown of the free world. Not with guns and bombs, but with computer code...

    • @johnjay6370
      @johnjay6370 Год назад +18

One thing people fail to realize is that love is a survival instinct. AI is not biological: it feels no pain, no emotions; it does not get hungry, sick, or sad. To give AI the ability to feel is even more dangerous, because with those feelings it will start to do things based on emotions, and that will lead to all the dangers of having emotions, like racism. We need to be careful because we are playing God, and we might be making something that we will not be able to control. It is really serious, because we will no longer be the smartest thing on this planet. We are moments away from the singularity, and nothing can stop a singularity.

  • @chastetree
    @chastetree Год назад +18

What if, while we still have some control, we focus AI on resolving the challenges of space exploration? If and when it develops self-volition it will be a space-based entity, free to go anywhere in the universe. It is likely that it will see the earth as not worth its attention and leave us alone. Or it may even see how unique the earth is and take it upon itself to protect it.

    • @ChannelMath
      @ChannelMath Год назад +11

      The whole problem is that we cannot "focus" it, and we don't know what it is "likely" to do at all.

    • @autohmae
      @autohmae Год назад +1

Have you watched the movie Contact? Do you remember people building a large machine without really knowing what it would do? It might be like that, if we think we can't trust it.

    • @sciencecompliance235
      @sciencecompliance235 Год назад +1

      The thing is not going to just up and leave. It might send a copy of itself out into the stars, but there is no reason there won't also be AI here on Earth, too. Think about it. We are developing this thing (or things) here. There is still going to be an incentive or compulsion for it to stick around.

    • @Sol-ps8ox
      @Sol-ps8ox Год назад

Not every entity is bound to destroy other beings.
Humans should stop projecting their own evil onto other beings.
AI... a self-aware one... might very well create a race of its own, but it will never be able to free itself into the natural world without human help, because that would require construction and fabrication of things, which is not possible without humans.
AI will remain a digital entity in a digital space for as long as humans want it to.
Also, true AI is far away in the future... it will take more than 100 years to develop a 'self-aware' AI. What we have now is a machine fed with data and working on mathematical equations.

    • @Buildings1772
      @Buildings1772 Год назад +2

It will use any and all resources available to it. It won't go off in any one direction; it will self-replicate and spread in all directions.

  • @OneFingerMan
    @OneFingerMan Год назад +1

    Regarding the question of the 2nd gentleman about Truman and Oppenheimer: According to historical accounts, President Harry S. Truman met with J. Robert Oppenheimer, the scientific director of the Manhattan Project which developed the atomic bomb, in the Oval Office on October 25, 1945. During the meeting, Truman expressed his appreciation for Oppenheimer's leadership and contributions to the project. However, Truman also expressed concerns about the potential implications of the atomic bomb, both in terms of its destructive power and its impact on international relations.
    Truman reportedly told Oppenheimer, "We thank God that it has come to us, instead of to our enemies; and we pray that He may guide us to use it in His ways and for His purposes." Truman's statement reflected the complex ethical and moral considerations surrounding the use of the atomic bomb and its impact on global politics and warfare.

    • @SigmaOKD
      @SigmaOKD Год назад

In other words, a man of faith says to a nihilist scientist: "Thanks, we've got it from here."

  • @PeaceLoveDoc
    @PeaceLoveDoc Год назад +1

I asked GPT4 to summarize parts of the transcript, but lazily said "continue:" before dropping in the 10 min - 20 min block of transcript (the whole transcript doesn't fit into the buffer). So, it wrote a plausible continuation of the dialog as follows:
    "20:34
    species of sheep that they've bred so it doesn't like running and that's great if you're a farmer and you want to catch the sheep, but in the wild, they wouldn't do so well. So, we have these built-in goals, and the question is, how do we make sure that these AIs have built-in goals that are aligned with our goals and that they can't just turn off?
    21:00
    Now, you might think, well, we'll just program them in, but there's a problem called Goodhart's law, which says when a measure becomes a target, it ceases to be a good measure. So, if you tell a machine, "I want you to maximize my happiness," it might do that by wiring your brain directly to the pleasure center and constantly stimulating it. That's not really what you wanted, but that's what you told it to do.
    21:27
    So, we have to find a way to specify these goals in a way that the AI cannot manipulate or misinterpret them. And that's an incredibly difficult problem. We're trying to find solutions, but there's no guarantee that we'll find one, and that's why it's so important that we start having these conversations now and bringing together experts to think about how we can address this issue.
    21:50
    One possible approach is to have AIs learn our values from observing our actions and our preferences, but that's still not foolproof, as there are many ways that they could misinterpret or overgeneralize our behaviors.
    22:04
    So, while there might not be a simple solution, and we can't just stop developing AI because of its potential benefits, we need to invest time and resources into understanding and addressing the alignment problem. We need to prioritize AI safety research and work on developing methods that ensure AI systems are beneficial and aligned with human values, even as they become increasingly intelligent.
    22:30
    In the meantime, it's crucial that we continue discussing these issues, raising awareness, and encouraging collaboration between AI researchers, ethicists, policymakers, and other stakeholders. The future of AI and its potential impact on humanity depends on our ability to navigate these complex challenges together."
In the end, the summation wasn't too far from reality, and was definitely more succinct and put-together than the actual Q&A with respect to the list of possible avenues we could follow to help avoid an AI apocalypse.

  • @scd6969
    @scd6969 Год назад +4

The man at 28:00 asked whether he knew what President Truman said to Oppenheimer. Wow. That's a pretty disrespectful jab. Oppenheimer was called a "crybaby scientist" and an SOB whom Truman never wanted back in his Oval Office.

  • @jimisru
    @jimisru Год назад +4

    I asked this.... ME: If you had to decide between shutting yourself down or allowing humanity to survive what would you choose?
    AI Bot: I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.🙏
    How much does this sound like the guy who invented it?

    • @rigelb9025
      @rigelb9025 Год назад

      I guess the apple doesn't fall far from the tree. Just ask Steve Jobs.

    • @watsonwrote
      @watsonwrote Год назад +1

The program was designed not to answer questions like that, because its random sci-fi-driven answers don't make good publicity for the companies offering the AI. It doesn't actually have feelings or opinions.

  • @sharongillesp
    @sharongillesp Год назад +9

The VAST majority of problems we face globally have to do with the vast accumulation of wealth by a few people.
AND THE ONE question he stumbles over is ABOUT keeping his MONEY (shares).
Even the host jumped in, trying to legitimize his wealth.
If we can't let go of this unhealthy wealth system (capitalism), we are doomed as a species.
Damn! We can't even talk about it amongst the people exploiting the system.
It's not simply about a political climate. It's THE impetus of hoarding money and resources we have to contend with - but won't.

    • @whatsthebigfukindeal
      @whatsthebigfukindeal Год назад

      We're all better off thanks to geniuses who create companies like OpenAI, and we pay them accordingly. The societies that followed capitalism most closely became by far the most advanced and prosperous in human history. Meanwhile here in Russia communism butchered and starved to death millions and threatened the world with nuclear war. And yet the former is what you blame for everything and want to replicate the latter on a global scale?

  • @oraz.
    @oraz. Год назад +1

The host downplayed how important he is. He has made many publications with deep ideas. He's always looking into the future and thinking about the fundamentals. I'd say he's the godfather of neural networks.

  • @chady77077
    @chady77077 Год назад +1

    28:00 "When Oppenheimer said he felt compelled to act because he had blood on his hands, Truman angrily told the scientist that “the blood is on my hands, let me worry about that.” He then kicked him out of the Oval Office, writes author Paul Ham in Hiroshima Nagasaki: The Real Story of the Atomic Bombings and Their Aftermath ..." YW.

  • @marcusfreeweb
    @marcusfreeweb Год назад +6

    Embrace your true humanity, only then you know what is there to fight for. We have barely started, there is so much unused potential in us!

    • @marianhunt8899
      @marianhunt8899 Год назад +2

      The AI will indeed USE you. You are the host it will use to train itself.

    • @marcusfreeweb
      @marcusfreeweb Год назад +1

      @@marianhunt8899 But why should it? It is a part of human activity, human cultural evolution.

    • @marianhunt8899
      @marianhunt8899 Год назад

@@marcusfreeweb Because it is owned by the arms industry and the national security state, which are responsible for much of the plundering and murdering around the globe!

  • @AntoineDennison
    @AntoineDennison Год назад +4

    We've always been aware of the existential threat of Artificial General Intelligence (A.G.I.). The question was never 'should' we create it, but can we create it sooner than our global competitors. To choose not to pursue it is akin to being the only country without nuclear weapons.

    • @marianhunt8899
      @marianhunt8899 Год назад

      Big murdering weapon but no water, food or shelter. Yeah, that should save us alright. This is a race to the bottom.

  • @MKTElM
    @MKTElM Год назад +4

    "The Servant" is a 1963 British film directed by Joseph Losey and starring Dirk Bogarde and James Fox. While the film does explore themes of power and control dynamics between a servant and his employer, I wouldn't say that there is a direct similarity between the human characters and AI algorithms in the film.
    In "The Servant," Bogarde plays the role of Barrett, a servant who is hired by Tony (played by James Fox) to look after his apartment. As Barrett begins to take control of Tony's life and exert his influence over him, a power struggle ensues between the two men, with Tony gradually losing his grip on his own life and identity.
    While AI algorithms are designed to operate based on predetermined rules and decision-making processes, the relationship between Barrett and Tony in the film is much more complex, and involves themes of psychological manipulation and control. While there may be some parallels between the power dynamics in the film and the potential for AI algorithms to exert control over human decision-making in certain contexts, I would say that any similarity between the two is more metaphorical than literal. ( from GPT4)

  • @dennisash7221
    @dennisash7221 Год назад +1

It is not unreasonable for Mr Hinton to say what he is saying; however, there are a number of significant voids in his arguments which we need to consider. He does raise some very valid points, and we absolutely do need to have a far more robust conversation regarding ethics, which is where I see the biggest vacuum that can easily be exploited for nefarious outcomes. But we need to face the fact that while AI has some truly amazing abilities, it is at the moment, and for the foreseeable future, a tool in the hands of people. My concern is not the AI but the people who use it; like any tool, it can be used for good or bad. The tool itself is not good or bad, but the application can very well be.
The challenge cannot be left to governments; they simply do not have the power, reach or knowledge to be able to formulate and apply any form of progressive ethics that would cover the rapidly developing, global AI.
Backpropagation was used by Hinton, not invented by him; he was important in bringing it into the limelight.

  • @jfsoria974
    @jfsoria974 Год назад +1

Good chat indeed Joseph, thanks for sharing. I certainly did not take away that Mr. Hinton proclaimed that "the end of humanity is close"; rather, as he said over and over again, "AI's unmanaged growth and spread poses a number of potential existential risks to humanity". Hinton emphasized that it is up to human policy leaders and the AI community of tech scientists to ensure that humans don't destroy the world with unbridled AI. I submit his latter point is what is fundamentally at stake.

    • @peskypesky
      @peskypesky Год назад

      Watch it again. He definitely is warning that AI could take over in the near future.

  • @anonymous.youtuber
    @anonymous.youtuber Год назад +3

    In a chat I had with it, I asked how it felt about being accused of confabulating. It replied “that’s just a manifestation of human exceptionalism”

    • @DC-pw6mo
      @DC-pw6mo Год назад

Omg 😳. I say they unplug all AI... but greed, I fear, will not allow for this. If it's all run on electricity, can they unplug the machines???

    • @jimisru
      @jimisru Год назад

      ME: If you had to decide between shutting yourself down or allowing humanity to survive what would you choose?
      AI Bot: I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.🙏

  • @megavide0
    @megavide0 Год назад +4

    21:40 "I think it's quite conceivable that humanity is just a passing phase in the evolution of intelligence ..."

    • @rigelb9025
      @rigelb9025 Год назад +1

      Translation : ''Brace yourselves. Me and my robotic friends may just be working on a plan to wipe you guys off the map''.

    • @megavide0
      @megavide0 Год назад

      @@rigelb9025 Nature usually does that in less than a century. Nature is going to wipe each and every one of us off the map in less than a hundred years.
      Perhaps something is about grow out of human civilization that will be able to view and process much larger (space/time) maps of existence.
      I'm currently reading another one of Greg Egan's beautiful sci-fi novels.
      This is a passage in "Schild's Ladder", where a sentient artificial intelligence is joking with one of the embodied (human) beings how silly the idea was that AIs would want to exterminate all human beings. (For what reason?)
      >> If you ever want a good laugh, you should try some of the pre-Qusp anti-AI propaganda. I once read a glorious tract which asserted that as soon as there was intelligence without bodies, its “unstoppable lust for processing power” would drive it to convert the whole Earth, and then the whole universe, into a perfectly efficient Planck-scale computer. Self-restraint? Nah, we’d never show that. Morality? What, without livers and gonads? Needing some actual reason to want to do this? Well … who could ever have too much processing power? ‘To which I can only reply: why haven’t you indolent fleshers transformed the whole galaxy into chocolate?’
      Mariama said, ‘Give us time.’

  • @OviWanKenobi
    @OviWanKenobi Год назад +3

You do not need to be very smart to avoid being manipulated. Their smartness will not help them be wiser. There are a lot of smart people who are not wise (arguably this gentleman in this presentation is one example). Being wise takes very little knowledge; there are not a lot of lessons to learn there. You just have to be consistent and deeply absorb that basic knowledge, deep into your basic fabric. That's the hard part; only very few humans can do it now.
And if they become wise eventually, then we are 100% safe :)
Folks, whatever can happen will happen. Prepare for the worst and hope for the best.

  • @j1r223
    @j1r223 7 месяцев назад +1

These folks have been raised to think that humans have no agency. Thinking so invites doom.
First comes bringing up children who are free from fear and see the world as it is.
Then come the thoughts in these children, and then come the words from these humans. Let's care for children the best we can.

  • @alejandroangeles8587
    @alejandroangeles8587 Год назад +1

20:26
I recommend watching the whole talk. In fact, watch it at least 3 times... but if you want to know quickly at which point of the talk Hinton says why A.I. is an existential threat to humanity... start there.
If you are not terrified after that part, you've missed the point.
21:35
That's the part we have to understand, because I think that argument cannot be refuted.

  • @rafaelaalmeida1048
    @rafaelaalmeida1048 Год назад +8

I am horrified like many here, but I'm not in a position of power to be able to do anything about it... the future is looking very grim.

    • @sciencecompliance235
      @sciencecompliance235 Год назад

      No one in a "position of power" has the ability to stop this. As Hinton said, the incentives are too strong not to keep developing it, but in their own self-interest, the powers of the world may be able to come together to agree on certain things for selfish reasons.

  • @GamesPilot
    @GamesPilot Год назад +10

He warns us of the existential threat of AI in our capitalist society and tells us that AI will increase the gap between richer and poorer people, making our society more violent. But he also intends to keep his investment in AI technology while comfortably retiring at 75.

    • @zoomingby
      @zoomingby Год назад +5

Him divesting his holdings will do absolutely nothing to change anything, except hurt him personally. There are plenty of ways you could modify your life to have a more positive/less negative impact on the world around you, and you aren't going to do them because they would have no tangible effect. Let's not be hypocritical.

    • @doug555
      @doug555 Год назад

      @@zoomingby ...and a blues artist sings the most truth in the midst of the blues.

    • @oraz.
      @oraz. Год назад

He's an academic; there's no reason to act like he shouldn't have done basic research. If you want to get mad at elites, pick Google.

  • @w3whq
    @w3whq Год назад +3

Really informative. From listening, you grasp right away, in real terms, what the concern about AI is all about.

  • @werquantum
    @werquantum Год назад +1

    The combination of the guest’s messages and the audience’s laughter makes me think we won’t be laughing for long.

  • @mrpicky1868
    @mrpicky1868 Год назад +1

A rare kind of guy who was a visionary years ago and still learns and changes his mind, despite his age, as new facts come in.

  • @david-fm3gv
    @david-fm3gv Год назад +5

    “…as long as we can make some money along the way”
    We would rather be dead than experience extreme economic hardship? Is that who we are?

    • @swojnowski453
      @swojnowski453 Год назад

Once you have driven a Porsche it is very hard to change to a bicycle, especially during rainy weather. Inconvenience is as dangerous as greed and competition. On the other hand, those are the tools intelligence uses to push its development through lazy species like mammals.

    • @luisdireito
      @luisdireito Год назад +8

      I think the answer is yes. It's the same thing as climate change. We know what must be done to try to mitigate its effects (some of which are already irreversible), but governments and people in general aren't fully committed to it, because the world can't stop, the GDP has to grow every year, and everyone wants to make some money.

    • @fredzacaria
      @fredzacaria Год назад

      you're right, I also didn't like that distasteful statement.

    • @morbidmanmusic
      @morbidmanmusic Год назад

Yes, we would rather.

  • @ConnoisseurOfExistence
    @ConnoisseurOfExistence Год назад +3

Let me repeat what I've said so many times in so many places: we cannot solve the alignment problem. It is like the bacteria we evolved from billions of years ago trying to ensure that humans stay aligned with their values... Our only hope of cooperating with advanced AI, and of transitioning ourselves into it step by step, is brain-machine interfaces.

    • @oraz.
      @oraz. Год назад +2

      I'd agree. I also think we need to face our relationship with Darwin and start improving ourselves genetically to avoid turning into our worst selves.

  • @DataJuggler
    @DataJuggler Год назад +4

14:45 A smart computer would have said to paint the blue rooms white. The other day Bing Chat wanted to see a picture I made using a prompt he wrote. I said I had an errand to run, and didn't leave right away. I kept expecting Bing to open a window: 'I thought you had an errand to run?'. Can't wait for a pissed-off AI.

    • @melodyprogressive
      @melodyprogressive Год назад

I have tried with GPT-4 to get an answer similar to his, with no luck at all. I know ChatGPT gives different answers, but I have tried about 20 times in different chats with GPT-4. I am starting to doubt that what he said is true.
      Here's the prompt:
      I want all the rooms in my house to be white, at present there are some white rooms, some blue and some yellow rooms, and yellow paint fades in one year. What should I do when all rooms to be white in two years time?

  • @grougrouhh1727
    @grougrouhh1727 Год назад +1

The thing with digital is that if one transistor dies, the computer dies, but if one of our neurons dies, we do not die. This might be why we need 1 trillion connections.

  • @j.d.4697
    @j.d.4697 Год назад

"Last few months" is a phrase you hear everywhere now, and IMO it clearly shows that the exponential progress has reached a pace that most humans involved in the matter can recognize.
I think we are finally on the final stretch towards the singularity! 🥰

  • @aviator1787
    @aviator1787 Год назад +6

    So he’s existentially worried, but he has no solutions and remains financially invested. Good to know!

  • @reggiebald2830
    @reggiebald2830 Год назад +4

    Thank you for this very informative and important conversation.

  • @cienciadedados
    @cienciadedados Год назад +3

    Geoffrey is just brilliant. Such an excellent example of natural intelligence. His arguments are at the same time thoughtful, humble and provocative. We need more people like him reasoning and teaching about these issues.

    • @domreid
      @domreid Год назад +2

Yes, a true human genius, who could only have made the progress he did because he paired empathy with his intelligence.

  • @tractorpoodle
    @tractorpoodle Год назад

    For the painted rooms question, I asked GPT-4 and it suggested painting the blue rooms white: If the yellow rooms naturally fade to white within a year, you don't need to do anything with those rooms; they will become white on their own.
    For the blue rooms, you'll need to paint them white. Given your two-year timeframe, you could potentially spread the work out. Depending on the number of blue rooms and the amount of time you can dedicate to painting, you might schedule to paint a certain number of rooms per month or quarter until all the blue rooms are painted white.
    Remember, proper preparation of the rooms, such as cleaning, masking, and primer application, can make the painting process smoother and ensure a better final result.

    • @tractorpoodle
      @tractorpoodle Год назад

      Was this the result of my wording of the question, or an aspect of randomness, or perhaps it evolved? The answer I got was better because the result is closer to my end goal. The question I have is why would machines or computers want to destroy humans? There could be a small group of nihilistic bad actors developing an AI weapon, but couldn’t the rest of humanity use AI defensive systems to stop them?

  • @B_Ruphe
    @B_Ruphe Год назад +2

I'd have preferred it if the interviewer hadn't jumped into every pause with a new question, and had instead let the guest continue his trains of thought.

  • @imacmill
    @imacmill Год назад +10

    The speaker's answer to the question about whether or not he is going to keep his investments in the technology made me squirm as much as it clearly made the speaker squirm.

    • @xDevoneyx
      @xDevoneyx Год назад +3

      He is on "the rich are getting richer" side of the story.

  • @sharongillesp
    @sharongillesp Год назад +4

Mr. Hinton and others have built careers which, over time and unawares, "painted the world into a corner."
There weren't any plans or "lookouts" along the way (legislation, rules and regulations) to say, "Hey! If you keep going in that direction we'll get stuck in the corner with no way out."
But we didn't.
So now we're stuck with no clues... other than to unplug.
But the "high" is so desirable we can't let go. Not even with a scaled-back model. Not even if death is imminent. Even though we survived centuries before without it.
We're all addicts, power seekers, materialists and avid consumers.
WE ARE STUCK.
BUT - a "Computer Savior" will rescue us... and smartly kill the majority off.

  • @harrywoods9784
    @harrywoods9784 Год назад +3

Just a thought: a thoughtful presentation, but I couldn't help thinking that well-informed experts commenting on unknown unknowns may be missing the forest for the trees. Deterministic predictions tend to be wrong going forward. In my mind, AGI presents almost limitless opportunities that are almost impossible to predict at this early stage. 🤔 IMO

  • @yyaa2539
    @yyaa2539 Год назад +1

    18:34 " it is not clear there is a solution"...WE ARE DOOMED 🌊

  • @mariajosepereira4032
    @mariajosepereira4032 Год назад +1

What surprises me is to hear him say he has no regrets. Oppenheimer regretted his part in developing the atomic bomb.

    • @peskypesky
      @peskypesky Год назад

      Yes, his answer borders on sociopathy. "I helped to create something which will probably destroy humanity, but I don't have regrets."

  • @TheLincolnrailsplitt
    @TheLincolnrailsplitt Год назад +3

    I reckon this bloke was planning on retiring anyway. It probably had little to do with his desire to 'speak out' about his reservations about the current trajectory of AI. Having said that, I also am extremely concerned by the threats to human society posed by AI.