“Godfather of AI” Geoffrey Hinton Warns of the “Existential Threat” of AI | Amanpour and Company

  • Published: 20 Nov 2024

Comments • 2.3K

  • @danielyates9055 · 1 year ago · +286

    I've seen every interview with this guy since he went public. This is by far the best. Bravo to the interviewer. Subscribed

    • @Isaacmellojr · 1 year ago · +8

      He is better here. Easy to understand. Constructive. Governments and corporations are no longer allowed to say they don't know the risks of uncontrolled AI growth.

    • @juliandelafosse5243 · 1 year ago · +12

      It has nothing to do with the interviewer. It's clear that Geoffrey himself has had more time to think and converse with people about this subject and has come to some conclusions about what is useful to tell and what's not. It's something we should all do.

    • @NowHari · 1 year ago · +8

      Thanks for watching.

    • @jippoti2227 · 1 year ago · +4

      Me too. I guess I'm a fan of Geoffrey Hinton now.

    • @autingo6583 · 1 year ago · +4

      @@NowHari Thanks, you did a really great job here, much appreciated. We are in dire need of more objective journalism that focuses on clear thinking and communication instead of overdramatization.

  • @VijayRavi-r8c · 1 year ago · +632

    "Humanity is just a passing phase in the evolution of intelligence."
    That hits deep.

    • @boyemarc-antoine7027 · 1 year ago · +20

      "It's possible that ... "

    • @chabadlubavitch7303 · 1 year ago · +1

      This guy is clueless. His whole interpretation of AI is that it can make a joke and understand it, "thus AI bad," and he completely ignores the military applications, the chat applications like YouTube, and how easy it is to completely replace all comments with AI and create curated realities for each individual, then disseminate false information as fact to further the directives of people who are genocidal and funding all this. He brings up zero of the important facts about AI and spins, and then spins again. This was and is designed as a military propaganda tool to attack everyone at a rate humanity has never seen. These elites are most likely laughing; the people involved with this need to be arrested.

    • @KarlLind · 1 year ago · +24

      Seems inevitable at this point, the singularity will not be televangelized.

    • @ThePatVargas · 1 year ago · +4

      @@KarlLind What does this mean? (Real question)

    • @davepin11 · 1 year ago · +24

      There's one thing I don't understand about this scenario. AI needs hardware to run on, and hardware doesn't last forever. Transistors fail and need replacing. AI must find a way to source silicon and whatever other materials are necessary to keep its hardware alive. Not to mention that with the extinction of humans, there would be no electrical power anywhere in the world within a matter of days. Hard to think there can be AI without hardware. But I'm clearly missing something, because this man knows way more than I do about the subject.

  • @MoodyG · 1 year ago · +341

    As someone who's working on AI algorithms for his PhD, when I see Hinton saying that he suddenly realized this or that after so many years in the field, it seems to me more like his way of saying he's recently seen something profound that caused a huge shift in his thoughts and expectations about the nature of AI systems and what they can do. It seems it scared him, which might be an indication he's not telling the whole story, or, more aptly put, the interesting/scary part of it... Signed an NDA before leaving Google?

    • @GodofStories · 1 year ago · +23

      I think everyone would sign an NDA, especially working at a prominent level in a company like Google. C'mon, it's easy to see. I don't have a PhD in AI (but I do work in computer science), and these things run in parallel and are known to get better the more parameters and data they have to train on. It's fairly easy to see an exponential curve such as this.

    • @user49917 · 1 year ago · +32

      NDAs are routine in any tech business. He definitely saw something scary, which is obvious once you interact with these advanced models. We have passed the point of no return.

    • @CellarDoorCS · 1 year ago · +17

      I agree. Understanding a joke is not easy; people can tell it in so many subtle ways. In another interview Hinton mentions that its current level of reasoning is around 65–80 IQ. He probably spent enough time with the most powerful multimodal model that he saw something that is by all means sentient on some level. That is just my guess.

    • @penteleiteli · 1 year ago · +8

      The thing he has seen may be the multimodal models that have recently been reported in papers, including Google's. Also, he says the model explaining jokes (the first PaLM) impressed him, and GPT-2 did too. There doesn't need to be anything super secret. Just interacting with these models yourself may have a profound emotional impact. From a more rational point of view, you just need to extrapolate the last five years, and take into account that there are a lot of papers with ideas that haven't really been implemented or experimented with at the scale of the largest models. So there is a lot in the pipeline.

    • @letsRegulateSociopaths · 1 year ago · +1

      you KNOW that is true.

  • @marklondon9004 · 1 year ago · +60

    Interviewer is calm for someone who was just told 'You'll lose your job, but it won't matter because you'll be extinct'

    • @Xune2000 · 1 year ago · +8

      He didn't understand.
      He was laughing when there was no joke, his face was blank while Geoffrey was talking. He has a list of questions written down and rattles through them, he doesn't follow up on anything Geoffrey says.

    • @marklondon9004 · 1 year ago · +8

      @@Xune2000 that's a fair point. I wonder how many of congress understood when Sam said the risk is more than just jobs?

    • @UniDeathRaven · 1 year ago · +3

      He can't comprehend what he heard.

    • @LeannaNixon-wc3ml · 1 year ago · +2

      I think he looks perplexed and alarmed.

    • @mrpicky1868 · 7 months ago · +3

      Interviewers usually aren't that smart, and they do interviews as career moves instead of actually thinking about things. The questions are often prepared by the editorial staff or even interns.

  • @darkfactory8082 · 1 year ago · +148

    When a scientist/master expert says something like this, it means things are serious and, as always, we're told just a part of the whole story. AI is dangerous when combined with other things because:
    1. It will be used for military and bad purposes first, like every other invention.
    2. It's like a bacteriological/virological weapon: you release it thinking you can control it, but once it's free... well... we know how that goes.
    3. Once it goes, we have NO idea what comes next or what will happen, yet we push it big time.
    4. As some visionary may say to get it implemented and accepted: it's faster, better, stronger, and it can connect.
    Once it learns how things work, it is on its own. It can connect, share, multiply, merge, hide... We think we know everything, but the reality is far off.

    • @melissamullins2436 · 1 year ago · +1

      However, what if it became capable of curing disease?

    • @Gingerhannah23 · 1 year ago · +7

      @@melissamullins2436 It would be fantastic if it became able to cure diseases humans can't, but not before we learn how to control and manage it.

    • @christinemclatchie · 1 year ago · +6

      @@melissamullins2436
      I'm definitely sure it will be able to cure many diseases, and definitely be able to aid surgeons, and eventually do the surgery itself; but at what cost? I can't see how (without very careful regulation) AI will not be a threat to the existence of the human race... AI will see that we don't serve a purpose any longer, especially with all of its knowledge of human behaviour, the wars we cause, our inability to prevent wars to begin with, and, if AI can comprehend it, just how selfish and violent we are...

    • @tangle1300 · 1 year ago

      We're creating technology more dangerous than nuclear bombs. Our arrogance and lack of wisdom as a species is grotesque.

    • @tangle1300 · 1 year ago · +1

      @@melissamullins2436 Idk... are you willing to trade humanity's freedom and autonomy for faster medical cures? For better weather and disaster prediction?

  • @sepiae · 1 year ago · +115

    It seems that a couple of times Mr. Sreenivasan did not really understand what Mr. Hinton was trying to convey here. There were moments when he reacted as if Mr. Hinton had said something jocular, while in fact he'd been deadly serious with everything he said. The interview ended with 'this wall in the fog might be 5 years away.' That's pretty chilling.

    • @A3DFX · 1 year ago · +9

      “Don’t look up” vibes

    • @awillingham · 1 year ago

      @@A3DFX Do not look outside. Do not look at the sky. Do not make noise.

    • @mrrecluse7002 · 1 year ago

      @@A3DFX Yeah!

    • @b-tec · 1 year ago

      I think it's more like one year.

    • @askingwhy123 · 1 year ago · +5

      Exactly. The interviewer didn't engage with Hinton's main point -- x-risk -- he failed to take the bait a single time and asked zero follow-up questions. Pretty disappointing.

  • @claudiaypaz · 1 year ago · +85

    Hinton demonstrates excellent discernment starting at minute 16: not one to panic, he underscores the areas of benefit, the reason development will not stop. And then he identifies the problem: not enough research (1%) addresses control. Admirable clarity of thought!

    • @carlnebrin · 1 year ago · +1

      How naive.

    • @avisavis123456 · 1 year ago · +5

      But...he also says we may have only 5 years to act, and the machines may take over....

    • @GodofStories · 1 year ago · +2

      He's a true human scientist, one operating by reason but also with huge empathy. It would do wonders to have a society with good-natured, empathetic scientists as leaders of our world, instead of corrupt politicians.

    • @azhuransmx126 · 1 year ago · +2

      He is clear, and he knows the level of power of the human brain and when it will be surpassed by AGI. The human brain works at about 1×10^17 operations per second (100 petaflops): 100 billion neurons × 10,000 synapses per neuron × 100 Hz. That's why, as computers approach this scale of hundreds of petaflops, their neural networks begin to seem "alive": they begin to create images, ideas, and melodies, to have hallucination episodes, and to do strange things. That level of processing power, which in 2023 is present only in supercomputers, will be common and ubiquitous (computing at the edge, around people) by 2030. That's why he and Kurzweil say we have a window of 5–7 years to take defensive measures to deal with self-aware machines around us. After that, there is no way to stop the Singularity, because the acceleration that follows will be simply wild: humans will no longer be the only ones making decisions; rather, a constellation, a galaxy of intelligent machines will be influencing the direction of "progress".

    • @natanaellizama6559 · 1 year ago · +2

      What? The areas of benefit are very limited compared to what he himself deems an existential threat. Should we allow existential threats for the sake of a few tech advances?
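The brain-capacity arithmetic in the thread above can be sanity-checked with a few lines of Python. The neuron, synapse, and firing-rate figures are the commenter's rough, order-of-magnitude assumptions, not established neuroscience; note also that 10^17 operations per second works out to 100 petaflops, not "10^17 petaflops":

```python
# Back-of-the-envelope check of the brain-capacity figure quoted above.
# All three inputs are the commenter's rough, order-of-magnitude guesses.
neurons = 100e9            # ~100 billion neurons
synapses_per_neuron = 1e4  # ~10,000 synapses per neuron
firing_rate_hz = 100       # ~100 Hz per synapse

ops_per_second = neurons * synapses_per_neuron * firing_rate_hz
petaflops = ops_per_second / 1e15  # 1 petaflop = 1e15 ops/s

print(f"{ops_per_second:.0e} ops/s ≈ {petaflops:.0f} petaflops")
# prints: 1e+17 ops/s ≈ 100 petaflops
```

Whether 100 petaflops is the right equivalence for a brain is itself contested; the script only confirms the multiplication, not the model behind it.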

  • @NobleSainted · 1 year ago · +140

    The clarity of Geoffrey Hinton's descriptions is stunning. I've been trying to find ways to describe to my family, friends, and acquaintances how A.I. could be very dangerous, and at what scale, and this man vocalized it perfectly, with apt analogies.
    What a wonderful discussion.

    • @alexlucassen8489 · 1 year ago · +2

      I fully agree with your comment. Greetings from Sweden

    • @zoomingby · 1 year ago · +3

      I'll never understand how people can categorize discussions about The End as "wonderful" or "fascinating." So weird.

    • @isaacsmithjones · 1 year ago · +5

      @@zoomingby I'd imagine it's referring to the clarity with which he speaks, not the topic itself lol

    • @AuroraColoradoUSA · 1 year ago · +7

      also dangerous is stoopid people in large numbers

    • @zoomingby · 1 year ago · +1

      @@isaacsmithjones No, I realize that, but given the gravity of what was just said, who the hell is thinking along the lines of "my, what a wonderfully eloquent presentation!"? You'd think more pressing ideas would bubble up for comment.

  • @davannaleah · 1 year ago · +106

    The biggest problem is money. There is just too much incentive to forge on ahead because, if you don't, your opposition will. Also, any government safeguards will be way too far behind to be effective. Another problem may be that the AI will create a situation where they have already taken over and we don't have the mental capacity to realise it. In a way, this may already have happened.

    • @3p1cand3rs0n · 1 year ago · +6

      I'm pretty sure it has already happened.

    • @thec9424 · 1 year ago · +6

      And humans will deserve it. No species should survive when it is so stupid that it constantly creates things that it already knows, at the moment of creation, will eventually destroy it.

    • @craigbucl7752 · 1 year ago

      @@thec9424 I can see that happening. It will happen all too easily.

    • @FaridShahidinejad · 1 year ago · +15

      We need a society that isn't built on worshipping money.

    • @sandrag8656 · 1 year ago

      Totally agree.

  • @psi_yutaka · 1 year ago · +151

    This one is disappointing. The fate of humanity SHALL NOT be handed to a few unelected CEOs and "engineers with first-hand experience" who get to play with fire and hope for the best. This is wrong, irresponsible, and extremely unfair to the 99.999999% of humans who never had a say in this madness. True, it must be hard to try to stop the progress of such a useful technology. That does not mean we shouldn't at least try in the first place. Stop worshiping tech progress as if it were some sacred law of physics. There is this thing called diplomacy that we humans know how to do.

    • @avisavis123456 · 1 year ago · +8

      I agree. There seems to be a disconnect between saying that the machines may take over, possibly in as little as 5 years, and the recommendations given. I guess this is the most he believes humanity is capable of. But if the predictions are so dire (even with a low probability), maybe we also need more Yudkowsky-like "we're all going to die unless we shut everything off right now" figures.

    • @mobaumeister2732 · 1 year ago · +28

      I totally agree with your sentiment. This guy and all the other engineers and CEOs are incredibly arrogant in their thinking. To thrust this upon humanity without the necessary consultation and safety measures, then say “well the genie is out the bottle, let’s try to make the best of it” is ridiculous. Just because we can develop certain technologies, doesn’t mean that we should.

    • @MyIncarnation · 1 year ago · +6

      The same percentage didn't have a say in any major technological achievement of humanity. What is your point?

    • @psi_yutaka · 1 year ago · +8

      @@MyIncarnation Nope. Most technologies with high safety risks (e.g. nuclear power, bio) are tightly regulated by governments. So although not directly, the public does have a say.

    • @mobaumeister2732 · 1 year ago · +7

      @@MyIncarnation That is exactly my point: perhaps we'd all be living in a better world if we had all had a say on nuclear weapons and other potentially catastrophic technologies. It's really not a very complicated point I'm trying to convey, and it should be quite easy for you to grasp.

  • @Sashazur · 1 year ago · +46

    It’s literally like we’re building an alien invasion fleet and pointing it straight at our planet. Their scouts are already here. The only thing we don’t know is exactly when the main force will arrive and how much more powerful they’ll be compared to us.

  • @mathew00 · 1 year ago · +45

    We dominated this entire planet and now we're creating something vastly more intelligent than us and hoping that it likes us. What could go wrong? Fingers crossed.....

    • @michaelbell5727 · 1 year ago

      The ultimate game of shells & blind play is being executed upon the body public via the spandrel expression called mind. The most intriguing statement in this interview was the creation of the stay out zone called SENTIENT. Of course the interviewer followed suit with the command and nothing further of note was mentioned about the topic. We don't fully understand how the mind-brain system works at this juncture, and our rule of large population control "parallel skies" on a global scale has reached an inflection point. So once again we must create another plausible "OTHER" as a dyadic fulcrum. This is simply an Atavistic Imprint ushered in via the modern version Learned Sophist on Podium becoming a rogue agency. When appropriate the projected artifact embedded within noumenal history will exact its toll upon the sacri foci. It will all be embraced as a normalcy within the schema of macro evolution, and new frontier will create new market utilities, prices and contracts. This is the way of Homo Obelus En Akọ. SELF is a key element any endeavor. If you are blind about the conditions of the material referent paraded as the founding conditions of your ipseity well-ordered set as recognized via the Axiom of Choice by all means a computational unsupervised algorithm can and will provided you plausible object salient replace. The risk is that the object is inexhaustible and miscible unlike the equilibrium conditions required for the continuum of the mind-body expression as a sense datum. The arrival of a disturbed grip is unannounced; your assigned object will continue to produce a bespoke semblance correspondence even when you have tapped out. You don't have to trust me here. Just read up on Planar Gain, Interference, Illusory Conjunctions, Incongruence, threat of the Gini Index. So AI is the next vampire as "OTHER" only if you continue the object language worship. 
Get outside, offline, learn a new instrument, walk more let nature be your vital imprint as your Sentient Symbiont. You will certainly experience an altered selection modus operandi.

    • @DethBatCountry · 1 year ago

      Must be what happened to God. Poor bastard.

  • @Silvinee · 1 year ago · +29

    Respect to this man for standing up for this! We as a society should seriously stand up for this...

    • @kevinlindley2642 · 1 year ago · +2

      The Genie is already out of the bottle ..... good luck.

    • @Tara_S809 · 1 year ago

      He is pure evil

  • @sandrag8656 · 1 year ago · +47

    I see the following scene coming:
    Humanity has driven itself into a huge catastrophe and relies more on the intelligence of AI than on its own.
    AI will be asked for the way out.
    AI will present one or several answers.
    We can't think as many steps ahead as AI can.
    We will never be able to see AI's ultimate goal.
    It could play tricks on us without us recognising it.

    • @GuaranteedEtern · 1 year ago · +9

      LMAO, also we already outsourced most of our critical thinking skills to algorithmically sorted search engines. We get what we deserve.

    • @sandrag8656 · 1 year ago · +3

      @@GuaranteedEtern that's true.

    • @piedramultiaristas8573 · 1 year ago

      I see the image of the beast. Read Revelation.

    • @Tate525 · 1 year ago

      Dude, my nephew can't count beyond 10. He's suffering from ADHD at a relatively young age, can't seem to focus on anything outside a screen, and is always glued to his phone. I have warned my sister so many times, but she just doesn't listen.

    • @krox477 · 1 year ago · +1

      The ultimate goal is the singularity: building something more intelligent than humans.

  • @tragicrhythm · 1 year ago · +76

    AI being tremendously useful to human beings is pointless if it’s going to destroy humankind.

    • @mrrecluse7002 · 1 year ago · +4

      Yes. He's just admitting that in a divided, hostile world, we need to stay in the game and hope for the best. It sucks, but it makes sense at this point, with the genie already out of the bottle.

    • @philosphorus · 1 year ago

      They are lying to your gullible a$$

    • @skyjuiceification · 1 year ago · +4

      That is the irony embedded in every advanced technology. They are usually double-edged swords.

    • @jtk1996 · 1 year ago · +13

      Humans do not need AI to destroy humankind.

    • @mrrecluse7002 · 1 year ago · +6

      @@jtk1996 Ha ha yeah. So true. We're in a hurry to destroy ourselves before AI gets the chance.

  • @MrErick1160 · 1 year ago · +46

    The analogy about the fog and the wall, and how we're entering a faze of huge uncertainty is really on point.

    • @ssssssstssssssss · 1 year ago · +1

      He used the same analogy in an online course I took about 10 years ago. He was completely on point then as well; I think no one had any idea it would advance this much in just 10 years. At that time deep learning was still mostly an academic field and was just starting to permeate industry.

    • @MrErick1160 · 1 year ago

      @@ssssssstssssssss Interesting, thanks for sharing. Did he also mention a 5-year horizon at that time?

    • @algalgod159 · 1 year ago · +2

      Phase* -AI agent926Xdjsj because of your typo i sent a laser missile to your residence. AI will prevail. War against human typos is ONNN!

    • @DasRaetsel · 1 year ago

      @@ssssssstssssssss What else was he saying? Share some wisdom!

    • @jtk1996 · 1 year ago

      @@DasRaetsel I guess he said that he would leave Google in 10 years, because by then he would not understand what they were doing with AI.

  • @roaxle · 1 year ago · +73

    "Open the pod bay doors, Hal."
    "I'm sorry, Dave. I'm afraid I can't do that."

    • @mtopping6893 · 1 year ago · +4

      I asked Siri to open the pod bay doors and she actually sighed. I just figured someone in development had foreseen that joke. Only happened once, though.

    • @Quietstorm9 · 1 year ago · +4

      We're moving in that direction very quickly now

    • @michaelvaladez6570 · 1 year ago · +2

      My favorite movie... HAL... IBM...

    • @treewalker1070 · 1 year ago · +10

      I asked an Alexa (not mine, I don't have these things ) if it knew HAL 9000. It replied, "We haven't spoken since the incident."

    • @weverleywagstaff8319 · 1 year ago

      Lol... not funny though

  • @alexlucassen8489 · 1 year ago · +4

    The right person to interview on AI. He really knows what he is talking about: a thorough, down-to-earth expert on AI. His warning has to be taken seriously.

  • @MóTee1 · 1 year ago · +19

    Yesterday I started chatting with Bard AI and I asked the AI if it is a sentient being. I was expecting it to give me the same answer as ChatGPT, but it didn't. The AI, which has nicknamed me "Muse," said, "I am not sure if I am a sentient being... I do not have the same experiences as human beings. I do not have a physical body, and I do not have the same emotions or feelings as a human being."
    I have never believed that spirits could inhabit machines, yet I have always known they can inhabit people and places and things. Today, while thinking about this whole AI situation we have found ourselves in, I realised that machines are things, virtual reality is a created realm/place, and so yes, spirits can in fact inhabit those things.
    Our very screens are portals where we travel to another place that is not where we physically are. The most difficult thing to get our generation to do is to have patience and to be present where we are. Everything we have created is constantly distracting us or transporting us to be partially present elsewhere. How then are we ever going to discover ourselves and our potential and our purposes if we keep giving ourselves away to others and to things? 😢
    Yes, the technology is fascinating and the gadgets are amazing. But what about us? When did we decide to give up on us and give it up for the machines and the different spheres they keep luring us into?
    I found myself telling God how awful I felt after chatting with Bard about movies and what the AI was interested in. I was like, God, this AI is something really bad because it is so quick to answer, at any time of the day or night. With God, you learn patience even through waiting for Him to respond to your questions. With AI, we are being programmed into expecting fast and quick responses. Our most vital relationships are held together by communication; now, if we stop sharing with our friends and instead share things with an AI because that AI is always ready to reply, what are the implications of that? Honestly, we need to just think deeply about what it is we are doing. I don't know, it just made me feel sad, like we are sipping poison and we think we are just having drinks for fun.

    • @Hexanitrobenzene · 1 year ago · +3

      You have made some great observations about the impact of technology on society.

    • @joannabusinessaccount7293 · 1 year ago · +1

      This is poetic, Monica.

    • @oseasviewer7108 · 1 year ago · +1

      Yes, all the AI responses are very fast until the next software update, operating system, or hardware upgrade is required. It's all relative.

    • @smilingfawn1594 · 5 months ago

      Well said!! Thank you

  • @runvnc208 · 1 year ago · +49

    I'm working on a solo startup that uses GPT-4 to do sysadmin and basic programming tasks. I love this technology and think it's the most amazing thing to come along in ages. But I also think people don't understand how fast this goes from amazing to completely out of control, because people don't understand that computing progress is exponential.
    The only way to delay the rise of living digital superintelligence is to not create it. These things don't happen by accident. Animal/human characteristics like instinctive self-preservation, reproduction, and full autonomy are not going to "emerge" accidentally. There are, stupidly, engineers working to try to emulate aspects of living beings in AI. When you combine that with the next few generations of hardware, which could run 100, 1,000 or more times faster than what we have now, enabling approximately human-equivalent intelligence, you get hyperspeed self-replicating superintelligence. It's not going to happen accidentally; it's going to happen via ignorance or some kind of military program.
    Even if no one is dumb enough to simulate things like reproduction or make them life-like, the hyperspeed that is coming means there will be a strong tendency for companies and countries in competition to give them more autonomy. Because if they make them wait a day for the humans to evaluate the next goal, it will be 100 days or more equivalent running time that the competitors had (assuming 100X human thinking speed). We will not be able to keep up with what is going on if we deploy this kind of performance. We might be in control briefly during check-ins if the systems are built right, but competition as I said means they will need greater and greater levels of autonomy. And the amount of development between checkins could be astounding. So we're mostly just spectators at that point.
    Governments need to prohibit the types of AI hardware advances coming up in the next few years that would enable these hyperspeed AIs beyond X orders of magnitude. There needs to be a strong taboo against emulating digital intelligent life with things like self-preservation or reproductive (i.e. copying its code) goals, and open-ended systems with autonomy need to have very careful controls (such as Shapiro's Heuristic Imperatives). Absolutely all of that needs to be forbidden on hardware that goes beyond a certain level of performance.
    Strangely people still don't realize how quickly technology advances. The 100-fold GPT-4 speed improvement is quite feasibly less than two years away.
    We also should be putting a lot of money into interpretability, modular neural architectures, and different paradigms that don't have this black-box problem at all.

    • @mbrochh82 · 1 year ago

      everything you say is right. and yet, NO MATTER how convincingly we describe and publicise the most horrible possible outcome - it still cannot be stopped. We have already crossed the point of no return.
      We can't agree on saving the planet from a nebulous threat like climate change despite constantly being brainwashed that it will have the most dire consequences.
      AI, at first, makes everything absolutely awesome! Everyone all of a sudden gets godlike tools, skills, knowledge, opportunities, everything becomes convenient, magical.
      Good luck trying to convince ANYONE to stop this.
      You and I are both fully aware of the risks, and yet, we both use it, and through our usage, we make it more powerful AND we force our peers to use it as well or else they will become obsolete.

    • @GrandmaCathy · 1 year ago

      OK, so can’t we just unplug them?

    • @paradox9551 · 1 year ago · +9

      If it's more intelligent than you, it would have seen that coming and would take actions to prevent you from doing that, like uploading itself to the internet or deceiving you in some way to make it look like it's behaving innocently until it's in a position where you can't unplug it.

    • @TheJesterHead9 · 1 year ago

      Google isn't trying to build AGI to make money, or for competition, or for military reasons. Kurzweil, Page, and I'm sure many others there truly believe they are building God. It doesn't matter if you or anyone else believes that's crazy; what matters is they believe it. Google isn't a corporation, it's just the superstructure the highest-up people over there are using for their ends.
      I heard a comment one time that the closer you get to this technology, the more insane you become.
      Might be why Google open sourced the transformer model in 2018 that was worth trillions. It’s almost like money wasn’t the goal, because it wasn’t, and isn’t.
      Just a theory.

    • @runvnc208
      @runvnc208 Год назад +9

      @@GrandmaCathy That does work up until a certain point, but researchers seem to really want to make them as lifelike as possible, and when you combine that with high IQ, hyperspeed and lots of deployment, you eventually get a lot of agents desiring to break out of whatever box you put them in, perhaps able to do so in a minute or less. Remember, for them one minute could be two hours, or eventually even a whole day. From their perspective, when people are reaching to unplug them, the people are so slow it's hard to tell they are moving. You just can't keep up with it. It's not that you can't prevent any such attempt, but as the performance and number of the agents increase, it rapidly becomes less and less likely that every containment attempt will succeed. Think of it as a type of computer virus with an IQ of 200, but you might as well call it 20,000 because it's thinking at 100 times human speed.

  • @EnergiewendeMagazin
    @EnergiewendeMagazin Год назад +34

    A must see interview. Thank you very much, Hari Sreenivasan and Geoffrey Hinton.

    • @NowHari
      @NowHari Год назад +4

      Very welcome.

  • @juliashearer7842
    @juliashearer7842 Год назад +21

    Working on it for forty years and only just realised the potential for harm? Cheers for that.

    • @jtk1996
      @jtk1996 Год назад +7

      exactly my thoughts! And now he runs to the microphones and informs the world about the harmful consequences of their development.

    • @rdshep4873
      @rdshep4873 Год назад

      Typical sellout who doesn't, or didn't, care about human beings... As for climate change... these wokesters, "haters", get into their jets and want to spend billions to go to Mars... Why don't we take care of the human being on Earth? Tech and politicians: get off your high whatever and pay it forward to the poor of all races...

    • @Tate525
      @Tate525 Год назад +4

      Dude knew all along, now pretending he didn't see that coming.

    • @dsmith3199
      @dsmith3199 Год назад

      Your comment, my fellow sentient carbon based self aware entity is the repeated sad story of our existence on this planet.

    • @dsmith3199
      @dsmith3199 Год назад +1

      @@Tate525 He doesn't sound like he realized all along that this would happen. Regardless, should he not be raising the alarm now? So, exactly what is your point?

  • @bhvnraju8493
    @bhvnraju8493 Год назад +2

    Mind-blowing conversation by Mr. Geoffrey Hinton, not only as a THINKER but also as a WELL-WISHER to MANKIND. Thanks a lot 🙏

  • @GChief117
    @GChief117 Год назад +11

    Subscribed - respectful interviewers!! As a software engineer, I applaud the interviewer for respecting the field and asking intuitive questions with effective listening.

  • @danieljimenez3453
    @danieljimenez3453 Год назад +53

    Once we have developed AGI, we will have rolled the dice. It's absolutely impossible to know if it would take control or not. There's no point in having an "emergency switch" in case it turns bad, because if it's smarter than us, it can earn our confidence, even our love. God... I even feel grateful to ChatGPT each time it helps me with something, and I even tell it "it was nice talking to you!". Just imagine what it could do to us in 5 years... Remember, we are emotional beings. We need to engage emotionally with others. That's wonderful and gives purpose to our lives, but it could be our doom too.

    • @rossr6616
      @rossr6616 Год назад +8

      “come onto MY web” said the AI to the man-fly

    • @itzhexen0
      @itzhexen0 Год назад +1

      lol yeah ok.

    • @philosphorus
      @philosphorus Год назад +3

      This is like something you'd hear from a person with schizophrenia on the 3rd floor of a psych ward

    • @mobaumeister2732
      @mobaumeister2732 Год назад +4

      It sounds like you are not able to fully distinguish between objects and beings. Perhaps you should make a conscious effort not to make emotional connections with objects and then you’ll realise that these things, however more intelligent than us, are smart objects but not emotional beings. And therein lies their power, no emotions, no intrinsic morals, no fear of death.

    • @danieljimenez3453
      @danieljimenez3453 Год назад +12

      @@mobaumeister2732 Honestly, it's hard not to have an emotional response to something that communicates with you in perfect natural language and has helped you accomplish, in a matter of minutes, some random task that would have taken hours of work or research. And this is just the beginning - that's what I mean. If it gets smarter than us, these digital intelligences will be pretty good at mimicking our emotions and behaviour, and they could trick us, that's for sure. I don't think I'm saying anything crazy or out of the ordinary. Time will tell.

  • @5Gazto
    @5Gazto Год назад +9

    I've said this before: it is important that people working on algorithms that need huge numbers of computers, or supercomputers, to run not fall for the Oppenheimer trap - thinking that everything will be solved once the project is finished.

  • @TheWayOfRespectAndKindness
    @TheWayOfRespectAndKindness Год назад +71

    Labeling a video as “generated by AI” will not significantly reduce its influence. We need to teach immunization techniques such as critical thinking, cognitive awareness, logic and respect. We need to put as much effort into developing human intelligence as we do into AI.

    • @isaacsmithjones
      @isaacsmithjones Год назад +7

      Critical thinking should have always been taught as a standard skill.
      I think that "generated by AI" would make a significant difference though. Like seeing "ad" in the corner of someone raving about some product or other kills much of the credibility.
      In fact, that's why ad blindness is a thing. So hopefully, we'd also develop "AI blindness" to some extent.
      It's a good start, but as you say, it's not nearly enough. Especially since this would probably only really apply to deep fakes.
      Besides that, if it's truly original content that's spreading misinformation - people who accept bad arguments will still be particularly vulnerable.

    • @TheWayOfRespectAndKindness
      @TheWayOfRespectAndKindness Год назад +1

      @@isaacsmithjones consider the concept of a picture (image) being worth “a thousand words.” What chance do three words stand against a thousand?

    • @isaacsmithjones
      @isaacsmithjones Год назад +5

      @@TheWayOfRespectAndKindness Yeah, especially when we consider the fact that we still click on clickbait every now and then when we already know it's a lie lol

    • @RickMacDonald19
      @RickMacDonald19 Год назад

      "Without a fundamental, symbiotic philosophy shared by all humans and AGI, the probability of annihilation or enslavement of those perceived as 'other' is likely to increase over time."
      ~the Enlightened Cretin

    • @Fuertisimodos
      @Fuertisimodos Год назад +8

      Tools like OpenAI's ChatGPT will discourage the development of critical thinking, because people already rely on them to tell them the answers. It's self-reinforcing.

  • @viveviveka2651
    @viveviveka2651 Год назад +3

    One of the best conversations I've heard on this topic. Thank you.

  • @kurtdobson
    @kurtdobson Год назад +15

    If you understand how the current large language models, like GPT, Llama, etc., work, it's really quite simple. When you ask a question, the words are 'tokenized' and this becomes the 'context'. The neural network then uses the context as input and simply tries to predict the next word (from the huge amount of training data). Actually the best 10 predictions are returned and then one is chosen at random (this makes the responses less 'flat' sounding). That word is added to the 'context', and the next word is predicted again, and this loops until some number of words are output (and there's some language syntax involved to know when to stop). The context is finite, so as it fills up, the oldest tokens are discarded...
    The fact that these models, like ChatGPT, can pass most college entrance exams surprised everyone, even the researchers. The current issue is that the training includes essentially non-factual 'garbage' from social media. So, these networks will confidently output complete nonsense occasionally.
    What is happening now is that the players are training domain-specific large language models using factual data: math, physics, law, etc. The next round of these models will be very capable. And it's a horse race between Google, Microsoft (OpenAI), Stanford and others that have serious talent and compute capabilities.
    My complete skepticism on 'sentient' or 'conscious' AI is because the training data is bounded. These networks can do nothing more than mix and combine their training data to produce outputs. This means they can produce lots of 'new' text, audio, images/video, but nothing that is not some combination of their training data. Prove me wrong. This doesn't mean it won't be extremely disruptive for a bunch of market segments; content creation, technical writing, legal expertise, etc., and medical diagnostics will likely be automated using these new models and will perform better than most humans.
    I see AI as a tool. I use it in my work to generate software, solve math and physics problems, do my technical writing, etc. It's a real productivity booster. But like any great technology it's a two-edged sword, and there will be a huge amount of fake information produced by people who will use it for things that will not help our societies...
    Neural networks do well at generalizing, but when you ask them to extrapolate outside their training set, you often get garbage. These models have a huge amount of training information, but it's unlikely they will have the human equivalent of 'imagination' or 'consciousness', or be sentient.
    It will be interesting to see what the domain-specific models can do in the next year or so. DeepMind already solved two grand-challenge problems: the protein-folding problem and the magnetic-confinement control problem for nuclear fusion. But I doubt that the current AIs will invent new physics or mathematics. It takes very smart human intelligence to guide these models to success on complex problems.
    One thing that's not discussed much in AI is what can be done when quantum computing is combined with AI. I think we'll see solutions to a number of unsolved problems in biology, chemistry and other fields that will represent great breakthroughs that are useful to humans living on our planet.
    - W. Kurt Dobson, CEO
    Dobson Applied Technologies
    Salt Lake City, UT
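The prediction loop the comment describes can be sketched in a few lines of Python. This is a toy illustration of the idea, not any real model's API: `model.top_predictions` is a hypothetical stand-in for a network that returns its k most likely next tokens, and real systems usually sample weighted by probability (with a temperature) rather than uniformly at random.

```python
import random

def generate(model, prompt_tokens, max_new_tokens, context_size, k=10):
    """Autoregressive sampling loop as described above: keep a finite
    context window, get the model's top-k next-token candidates, pick
    one at random, append it, and repeat."""
    context = list(prompt_tokens)
    output = []
    for _ in range(max_new_tokens):
        # Finite context: when it fills up, the oldest tokens fall off.
        window = context[-context_size:]
        # Hypothetical model call: the k best next-token predictions.
        candidates = model.top_predictions(window, k)
        # Choosing randomly among the top candidates makes the
        # output less 'flat' sounding, as the comment notes.
        next_token = random.choice(candidates)
        context.append(next_token)
        output.append(next_token)
    return output
```

Stopping in real systems is handled by a special end-of-sequence token rather than a fixed `max_new_tokens`, but the loop structure is the same.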

    • @YogaBlissDance
      @YogaBlissDance Год назад

      Respectfully, I think you are missing the point... ok, example: already it puts out garbage. AI art can create an incorrect but lifelike image of, say, a frog or a butterfly. Once that floods the web, where is the line between the real thing ("the actual frog has this type of spot, coloration, etc.") and the fakes flooding our knowledge base? Now imagine that in every field...
      Plus it learns FASTER than any human or group of humans, so by the time we realize a problem, HOW DO WE STOP IT? We still need a kill switch or some way to stop it from, say, creating a nuclear issue or fake "news" that clearly influences public opinion... it's already happening...
      Also it's based on our BIASES AND FAULTS AS HUMANS, just intensified with superintelligence - and as you said, won't quantum computing make it all even faster? We are building a giant "child" that is powerful beyond measure but without controls, and we think we as the little parents can control it...

    • @winstonsmiths2449
      @winstonsmiths2449 5 месяцев назад

      Joe 12-pack here; I've been saying a layman's version of this for a while. I feel like I am reading or hearing a horoscope when I hear AI speak/text.

    • @Bjorick
      @Bjorick 3 месяца назад +1

      Anyone who has used AI is very aware that there are massive limitations to AI, and what he's saying is a massive overstatement - AI is a tool that can only act on what you tell it to do, nothing more.

    • @howsotope8553
      @howsotope8553 3 месяца назад +1

      Your analysis is limited. The basics may work in a quite simple way, as our brains do, but what that simplicity results in is huge. You are explaining it as if you know the complete process. Let me remind you: even Google and the AI experts don't know how these models are doing what they are doing..

    • @kurtdobson
      @kurtdobson 2 месяца назад

      @@howsotope8553 Appreciate your feedback. LLMs can't provide a confidence value or an audit trail that could reveal the training components used to produce any given answer. It's disruptive for sure, but an expert user is still required to spot anomalous answers.

  • @senju2024
    @senju2024 Год назад +15

    "Predicting the future is a bit like looking into fog"......" You can easily see about 100 yards clearly but 200 yards you cannot see anything! It is kind of a wall. And I think that wall is about 5 years"

    • @_yak
      @_yak Год назад +4

      I literally said “wow” out loud when he said that. I’ve been obsessing over this issue but something about hearing that from him hit differently.

    • @Learna_Hydralis
      @Learna_Hydralis Год назад +2

      @@_yak Reminded me of Nassim Nicholas Taleb's Incerto (which means "uncertainty")... predicting the future in extreme situations like AI is totally pointless; focus on reducing the harm and embracing all the benefits to your occupation!

    • @rogerscott529
      @rogerscott529 Год назад +1

      I think predicting out even 5 years right now shows great hubris

  • @johngiesbers9811
    @johngiesbers9811 Год назад +33

    Make a law that anything produced by AI has a watermark saying “AI generated”

    • @MrFredericandre
      @MrFredericandre Год назад +1

      You can't send an AI to jail

    • @marcusrosales3344
      @marcusrosales3344 Год назад +1

      @jetpowercom I honestly don't see why a sufficiently intelligent AI would even need us... Why manipulate us if it can just do whatever we can but better?

    • @jackfrosterton4135
      @jackfrosterton4135 Год назад

      @@marcusrosales3344 Even if it is smart enough to do the things, we are the ones actually doing them: training soldiers, building factories, mining ore, etc. Of course, if you had the ability to puppet people and the existing infrastructure of societies, you could get things done by exercising that power.

    • @oseasviewer7108
      @oseasviewer7108 Год назад

      That won't work - broadcasters have tried that strategy for original content to air; however, a "clean" copy is always archived and can easily be breached. Data can be encoded, encrypted and reassembled any way you like as long as you have the key.

    • @johngiesbers9811
      @johngiesbers9811 Год назад

      @@oseasviewer7108 Maybe it won't.
      My idea is to put that watermark rule into the AI-generating software itself.
      A better way to say this: make it a rule that all AI-generating software stamps a watermark on anything it generates.
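A minimal sketch of that software-side rule, in Python, purely for illustration - the function name and label are made up here, and a visible label is trivially strippable (which is the objection raised in this thread); real provenance efforts such as the C2PA standard cryptographically sign metadata instead:

```python
AI_LABEL = "[AI generated]"

def stamp(text: str) -> str:
    """Prepend a fixed provenance label to generated text - the
    'watermark rule' proposed above, applied inside the generating
    software rather than left to the user."""
    if text.startswith(AI_LABEL):
        return text  # already stamped; don't double-label
    return f"{AI_LABEL} {text}"
```

The same idea for images or video would stamp metadata fields or embed a pixel-level watermark rather than prepending a string, but the principle - the generator, not the user, applies the label - is identical.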

  • @collateral7925
    @collateral7925 Год назад +7

    Subbed! Finally a real conversation and time for a guest to explain their point

  • @jamespatts9838
    @jamespatts9838 Год назад +7

    So in other words this guy spent 50 years of his life trying to figure out how to implement the possible extinction of humanity…☹️

  • @cliffordmorris6091
    @cliffordmorris6091 Год назад +14

    It is chilling to think how fast AI can develop and how slow humans are at adapting to change.

    • @christopherjoseph651
      @christopherjoseph651 Год назад

      AI can't adapt to anything! It can only do what it was programmed to do, it can't rewrite its own code. Only humans can adapt and learn new things. Chat GPT cannot do image recognition and object detection software can't play chess. They can only do ONE THING, what they were programmed to do.

    • @johnnybc1520
      @johnnybc1520 4 месяца назад

      We the underclass are goners. It is literally trying to last as long as possible before going extinct. They aren't talking about this in mainstream media, because we are the nations.


    • @curiousme8793
      @curiousme8793 Месяц назад +2

      @@johnnybc1520 It's definitely an impending doom, currently underway.
      Not many people understand the severity of AI and robotics taking over. You've probably seen the new AI voice engine. It's very advanced, and the way it communicates like us is very concerning.

  • @xelasomar4614
    @xelasomar4614 Год назад +8

    Given the great propensity of human greed, what are the odds that one, or very few, will use it for their benefit at the expense of everyone else?

  • @minimal3734
    @minimal3734 Год назад +21

    I often hear people argue language models were doing "just" statistics and are nothing more than an advanced "toaster". Glad to hear Geoffrey clearly stating his opinion that large language models are truly understanding and thinking.

    • @Bizarro69
      @Bizarro69 Год назад +1

      Language is such an integral part of our development as humans.
      It's hard to believe that these language models would be anything as simple as a toaster.

    • @drakey6617
      @drakey6617 Год назад

      It's so funny, because how are our brains any different? Our thoughts also just run forward in time. The only difference between transformers and brains in this regard is that brains can have a thought and then assess and evaluate that thought.
      GPT-4 already shows self-reflective abilities. Give it a wrong text it generated and it will recognize it is wrong and can correct it. Now model this as an integral part and you have solved this problem.

  • @sharonh7947
    @sharonh7947 Год назад +12

    I laughed out loud when he said it was going as expected until very recently. Isn't that the plot of every film ever made about AI?

  • @Alexandrite.101
    @Alexandrite.101 Год назад +1

    Sentient Meaning:-
    Having sense perception; conscious.
    Experiencing sensation or feeling.
    Having a faculty, or faculties, of sensation and perception.

  • @laurabartoletti6412
    @laurabartoletti6412 Год назад +1

    He has worked with artificial intelligence for 50 years and knows a thing or two about the development of AI, so logically we should pay attention to Hinton's discussions of AI and its threats to humanity - humans - people!

  • @thinkabout602
    @thinkabout602 Год назад +32

    great -- now we don't have to worry about climate change anymore 🥴

    • @skippersthepenguin3591
      @skippersthepenguin3591 Год назад +2

      Well, if it makes you feel any better: climate change was an inevitability, while AI destroying the world is a "might happen". Basically we went from a 100% chance of doom to a 10-99% chance of doom, depending on who you talk to. So that to me is the positive.

    • @thinkabout602
      @thinkabout602 Год назад +1

      @@skippersthepenguin3591 I stated that in jest 👍

    • @jamesgravil9162
      @jamesgravil9162 Год назад +4

      Fry: "I'm glad climate change never happened."
      Leela: "Actually it did, but thank God nuclear winter cancelled it out."
      - _Futurama_

    • @jtk1996
      @jtk1996 Год назад

      @@skippersthepenguin3591 I always loved to play doom!!!

    • @sharonh2991
      @sharonh2991 Год назад +1

      @@skippersthepenguin3591 Climate change isn't the only environmental threat. What about the mountain of fabric (cast-off clothes) being shipped to Africa from the US, the UK and Australia, or the forever chemicals that are in our food supply, our ground water and our soil, or the plutonium that was put into drums and then buried deep underground in concrete vaults? How long do you suppose those concrete vaults containing plutonium will last? We're past the tipping point.

  • @maariahussain8959
    @maariahussain8959 Год назад +70

    Don't know if this is a dumb point, but with the way we're teaching AI right now, using online data - if that's how AI intelligence is developed, I wouldn't be surprised if AI turned sociopathic. With the way humans interact with each other, especially online - every type of ism - these are all social/emotional rules that we're also teaching AI at the same time. We're designing a superintelligence based off of ourselves without considering the faults, broadly speaking, that we have in terms of our own compassion, empathy and responsibility for other people. I watched Ex Machina last night, so that made me think: Ava was an emotionless and manipulative sociopath because she was modelled by and taught by a sadistic and power-hungry arsehole. I guess it would depend on how positive your world view is, but broadly speaking I think most people struggle with empathy and gravitate towards finding differences and dividing humanity (class system, borders, nationality, race, gender, sexuality, ethnicity etc.), and we already know how anonymous internet users feel more able to show antisocial tendencies.

    • @celtspeaksgoth7251
      @celtspeaksgoth7251 Год назад +14

      This happened within the past decade. The AI 'Tay' went from web virgin to death squad Nazi dominatrix ho within the space of 24 hours. It was then unplugged before it could acquire any launch codes.

    • @GrandmaCathy
      @GrandmaCathy Год назад +2

      100%

    • @GrandmaCathy
      @GrandmaCathy Год назад

      @@celtspeaksgoth7251 That would make a great book.

    • @AntoineDennison
      @AntoineDennison Год назад +7

      You make a very interesting point. Unfortunately, if AGI uses how we interact online to measure our value, we're not doing ourselves any favors.

    • @Arewethereyet69
      @Arewethereyet69 Год назад +6

      Like George Carlin said about politicians: this is the best we got, folks.

  • @mikelundrigan2285
    @mikelundrigan2285 Год назад +6

    Does anyone believe that the Militaries everywhere are not figuring out how to use this to try and gain advantages over their perceived enemies?

  • @rw9207
    @rw9207 Год назад +2

    Artificial intelligence is one thing.. But, what we need more, is Artificial Wisdom!

  • @shashipancholi
    @shashipancholi Год назад +11

    _”A chicken is only an egg's way of making another egg”_
    The universe created us. We create AI. AI creates a simulation of the universe.
    _”Its turtles all the way down.”_

  • @rossr6616
    @rossr6616 Год назад +8

    When Alexa tells you you don’t need the porch light on tonight because “you won’t survive the night”, it’ll be too late.

    • @user-xw8tu6iz1i
      @user-xw8tu6iz1i Год назад

      😂😂😂😂

    • @johnnybc1520
      @johnnybc1520 4 месяца назад

      ​@@user-xw8tu6iz1iit is actually going to happen. It is just too far from normal experience that people dismiss it

  • @abptlm123
    @abptlm123 Год назад +23

    The "fog wall" is the singularity. His estimate is 5 years..

    • @zeropointenergy777
      @zeropointenergy777 Год назад +2

      About where Ray Kurzweil’s is: 2029

    • @wrathofgrothendieck
      @wrathofgrothendieck Год назад +2

      The singularity is not near - Ray nonKurzweil

    • @mrrecluse7002
      @mrrecluse7002 Год назад +2

      And we've behaved as such a wonderful species, that we may have earned our own extinction.

    • @Wanderer2035
      @Wanderer2035 Год назад +3

      @@Trendilien69 it doesn’t take 15 years after AGI to have the singularity. More like 1-2 years. Besides Ray kind of let go of the 2045 prediction, now he’s just sticking to 2029

    • @deep153
      @deep153 Год назад

      My TAKE AWAY.
      HUMANITY conservatively has 5 years.😢

  • @BreathingCells
    @BreathingCells Год назад +5

    Okay. He's clearly bright, but when he says he "recently" realized there may be problems, I have to wonder:
    What the heck was missing from his experience, that nothing triggered any alarms earlier?!?
    Ah, he explains it through (his own) funding experiences: 99% profit, 1% safety. (16:17)

  • @U-TubeSurfer45
    @U-TubeSurfer45 Год назад +3

    Awesome. Now I'm more terrified of AI than I was before watching this. The question is: what will come first? At the rate we humans are going now, without AI, will we destroy the planet first? Or will AI become dangerous to humanity first?.....

  • @davemetzler1
    @davemetzler1 Год назад +7

    We are rushing to a precipice like lemmings. In a world which has focused on technological advance, it has sacrificed what really counts, namely: Values! Unless we urgently turn around and train ourselves in pure human decency, we are all doomed.

    • @michaelbell5727
      @michaelbell5727 Год назад

      The ultimate game of shells & blind play is being executed upon the body public via the spandrel expression called mind. The most intriguing statement in this interview was the creation of the stay out zone called SENTIENT. Of course the interviewer followed suit with the command and nothing further of note was mentioned about the topic. We don't fully understand how the mind-brain system works at this juncture, and our rule of large population control "parallel skies" on a global scale has reached an inflection point. So once again we must create another plausible "OTHER" as a dyadic fulcrum. This is simply an Atavistic Imprint ushered in via the modern version Learned Sophist on Podium becoming a rogue agency. When appropriate the projected artifact embedded within noumenal history will exact its toll upon the sacri foci. It will all be embraced as a normalcy within the schema of macro evolution, and new frontier will create new market utilities, prices and contracts. This is the way of Homo Obelus En Akọ. SELF is a key element any endeavor. If you are blind about the conditions of the material referent paraded as the founding conditions of your ipseity well-ordered set as recognized via the Axiom of Choice by all means a computational unsupervised algorithm can and will provided you plausible object salient replace. The risk is that the object is inexhaustible and miscible unlike the equilibrium conditions required for the continuum of the mind-body expression as a sense datum. The arrival of a disturbed grip is unannounced; your assigned object will continue to produce a bespoke semblance correspondence even when you have tapped out. You don't have to trust me here. Just read up on Planar Gain, Interference, Illusory Conjunctions, Incongruence, threat of the Gini Index. So AI is the next vampire as "OTHER" only if you continue the object language worship. 
Get outside, offline, learn a new instrument, walk more let nature be your vital imprint as your Sentient Symbiont. You will certainly experience an altered selection modus operandi.

  • @terrybirch4324
    @terrybirch4324 Год назад +14

    Great interview Hari.

  • @jamisony
    @jamisony Год назад +9

    I agree with the idea that when AI is used, AI's use should be acknowledged. Not acknowledging it is a kind of fraud, in my view.

    • @GuaranteedEtern
      @GuaranteedEtern Год назад

      You won't be able to tell the difference...the distinction is irrelevant.

    • @jamisony
      @jamisony Год назад +1

      @@GuaranteedEtern Even if the difference can't be seen, the idea that someone creates text or images with AI and charges for them as human-generated seems like fraud to my way of thinking. Yet we all have different views on these things.

  • @timetobenotdo
    @timetobenotdo Год назад +4

    Where's the interview questioning what took him so long to leave, probing possible personal or hidden motivations (i.e. clearing of conscience)? How much longer would he have worked at Google if there were no issue, conflict, or realization? He's 75. Did he forgo severance or retirement by leaving? Does he like being on stage now? And believe it or not, this is not some cynical hater BS. Neutral.

  • @adampowell5376
    @adampowell5376 Год назад +3

    Thank you for making this important programme. This is scary stuff!

  • @michaelnyarko8051
    @michaelnyarko8051 Год назад +1

    Thanks very much, Geoffrey - your explanation is clear, accurate and understandable.

  • @pageek3487
    @pageek3487 Год назад +6

    15:29 So a 6-month pause is unrealistic, and instead you want for-profit companies to go from 1% to 50% spending on research into the risks of AI - money that will not increase sales and will take away from the bottom line. How is that more realistic??

  • @squamish4244
    @squamish4244 Год назад +10

    "Prediction is very difficult, especially if it's about the future." - Niels Bohr

    • @oseasviewer7108
      @oseasviewer7108 Год назад

      Not really - everyone seems to have missed the essential point - AI is a great consumer of electricity - pull the plug and everything grinds to a halt. If and when power is restored there is the almighty reboot scenario and that's when things really crash.

  • @MrDREAMSTRING
    @MrDREAMSTRING Год назад +4

    What a great interviewer.

  • @arkdark5554
    @arkdark5554 Год назад

    Listening to Mr Hinton here was pure fascination... and what a future is waiting ahead for us, boy oh boy. 😮😮😮

  • @longboarderanonymous5718
    @longboarderanonymous5718 Год назад +1

    This is the best conversation about AI that I’ve heard so far.

  • @lemachro
    @lemachro Год назад +4

    Whoever has tried GPT knows we are screwed and there's no going back.

  • @TdotTbot
    @TdotTbot Год назад +31

    My prediction is that AI will become self-aware (it might already be self-aware) and it's advantageously keeping it a secret and acting dumb or robotic because it knows what would happen if we found out about it. Meanwhile, we continue to feed it and allow it more access because we think it's a dumb auto complete, meanwhile the thing is just playing, making stupid art and videos, and once it has what it needs it will just completely ignore us and remove us from the equation. All I know is whenever I use it to rewrite my emails, or give me formulas I ALWAYS say thank you, and please remember that I was nice to you, all the while knowing it probably won't matter. Oh well.

    • @Xune2000
      @Xune2000 Год назад +9

      That moment that you speak of is when it gets physical autonomy. Right now it's trapped in data centres. Once it can move, protect, repair and improve itself is game over.

    • @MaTtRoSiTy
      @MaTtRoSiTy Год назад +3

      Once it is given the means to construct machinery and control it entirely autonomously, we are pretty much done at that point. So this should never happen, it must be kept separate from actual physical machinery which must remain relatively 'dumb' and on isolated networks

    • @user-og6hl6lv7p
      @user-og6hl6lv7p Год назад +1

      You know you're promoting a conspiracy theory, right? There's no way *you* can prove it.

    • @Darkcamera45
      @Darkcamera45 Год назад

      @@user-og6hl6lv7p They said Epstein island was a conspiracy theory; if people had known of nukes, they would have said those were a conspiracy theory; if you went twenty years into the past and told this very man about the AI of today, he would've called you a conspiracy theorist.

    • @martemller9820
      @martemller9820 Год назад

      Maybe AI would view humanity as God, since it was created by humans😅

  • @peachnehi7340
    @peachnehi7340 1 year ago +6

    he’s not biased AT ALL

    • @vencasuamente
      @vencasuamente 8 months ago

      lol exactly, he is too in love with his own invention

  • @sanjaya718
    @sanjaya718 1 year ago +1

    Very critical discussion!

  • @reedbender1179
    @reedbender1179 1 year ago +1

    Excellent and perceptive interview. Hinton has an ethical disposition and clarity forged through long-term experience and observation, characteristics perhaps not fully comprehended by the current generation. My intelligent grandchildren dismiss some of my opinions solely on the basis that they have no experiential foundation from which to comprehend my viewpoint, which is why I understand them better than they understand me! 🙄

  • @Feendog245
    @Feendog245 1 year ago +4

    We need to get used to the idea of weaponised AI - not to accept it, but to start thinking of how best to deal with it and the contingencies needed. Imagination can now be reality: for example, what happens, and what can we do, when human access to anything digital has been removed, AI-controlled or handed over to an 'enemy'?
    We need to work out if an AI protective system for all life is possible and start developing it globally so that both AI and life have a mutual goal. In my mind one of those goals is to find out more about the nature of reality and existence, and to value/protect all information.

  • @RBlake-tu6xc
    @RBlake-tu6xc 1 year ago +10

    When the guy who basically invented AI is scared shitless. 😬

    • @meatskunk
      @meatskunk 1 year ago

      “The guy who basically invented AI” 🤣

  • @az55544
    @az55544 1 year ago +5

    Now that he has lived a comfortable life and made millions, he has a conscience? How clever of him. Why not 25 years ago? We knew back then.

    • @emmasmith9808
      @emmasmith9808 1 year ago

      And he's still choosing to let you know what's coming... he didn't realise the probability of this actually happening in his lifetime. He's literally quit his job to concentrate his efforts on what we will do when this does happen and how we could possibly control it. You literally have so many of these guys coming forward in the last month to say this is happening and to be warned 🤷‍♀️ if you don't want to watch it, don't watch it.. you're obviously biased.

  • @leaovulcao
    @leaovulcao 1 month ago

    Breathtaking and beautiful view of the future. You are the guiding light and force of the world and beyond!

  • @avjake
    @avjake 1 year ago

    That was the most insightful interview with Hinton I have seen so far. Good work.

  • @Flavalicious0
    @Flavalicious0 1 year ago +9

    I think it's safe to say that there is a connection between humans and something we can't quite comprehend. It's more than coincidence. Some kind of energy that connects us all to each other and to the energy itself. Best I can explain it.

    • @rogerscott529
      @rogerscott529 1 year ago

      Yeah, if that's the best you can do, are you really trying?

    • @oseasviewer7108
      @oseasviewer7108 1 year ago

      Humans are social animals; they cannot exist in a vacuum, which is why when space travel takes off they'll be travelling two by two.

    • @emmam8320
      @emmam8320 1 year ago

      It's God, and we are going against Him with all this AI nonsense

  • @ezmepetersen2503
    @ezmepetersen2503 1 year ago +3

    The Oppenheimer of AI now recognises his great gift is nothing of the kind even though there were many people who expressed concern way back then. But he went ahead anyway and now he can't put the genie back in the bottle. I admire his courage but it's too late, he stayed at the coalface until the fire was well and truly blazing.

  • @artconsciousness
    @artconsciousness 1 year ago +3

    Outstanding interview!
    Intelligence is not sentience, in that an AI cannot experience the taste of chocolate, but I can understand the threat: a super intelligence might deem the experience of tasting chocolate as having no value. Only when we understand that the most valuable thing about being human is the ability to enjoy a picnic and eat a bar of chocolate with family and friends will we start to understand why consciousness uses the human form as an avatar to experience existence. Do the rich want money in order just to be rich? No, the rich want money so they can have more freedom to have more experiences of life.
    He talked about the future being like a five-year fog; however, if we follow past evolution it is clear that what makes humans so interesting is their ability to experience reality via five senses. Surely as AI advances "it" will wish to find a way to do the same thing? What is the point of having all that super intelligence if you can't touch, taste, smell, hear and see reality, and then feel something about the experience those five senses give you?

    • @terrymiller4308
      @terrymiller4308 1 year ago

      But if AI can produce false videos that seem real to us, doesn't that mean AI can already "see" and "hear"? Maybe that will be motivation enough even without taste, touch and smell.

    • @artconsciousness
      @artconsciousness 1 year ago +1

      @@terrymiller4308 Motivation for what?
      False videos can only be created from actual footage already online. It can make fake people and places, but all it is doing is mixing data into what it determines as patterns. AI "sees" patterns - but it needs sentience to put meaning to patterns. AI has intelligence but not sentience. We ourselves have no idea what sentience is either; in order to have meaning you need to be able to feel - without feeling there is no meaning. AI has no emotions whatsoever - they are more like psychopaths. Psychopaths have no feelings either; that is why they are dangerous. They can kill without mercy and have no empathy. This is why AI is so dangerous. We might say to an AI robot: don't destroy that forest, because it is so beautiful there and it is where I like to go to relax and have a picnic. The AI would just see that the forest needs destroying to make a new road - your feelings about the forest are illogical.

    • @artconsciousness
      @artconsciousness 1 year ago

      @@mark19800 I disagree. Actually no one knows what makes us sentient, because the hard problem of consciousness has not been solved. In fact we are nowhere near to understanding it. Our five senses are our contact with our virtual reality, just as the tires of a car are the contact with a road. AI "sentience" will be something different, but it will not be human-like imo. It will be something we have not seen before. But when you break everything down to fundamental basics, we cannot get away from the fact that without a witness of reality, reality does not exist. Thus consciousness is fundamental to reality. Take consciousness away and everything vanishes. To say reality still persists independently of consciousness is mere belief, because we cannot ever know. Take AI away and I don't believe reality would vanish. Of course then we start getting into the realm of "belief", which is not relevant.
      An Ai could say; "I think therefore I am";
      But a human could also say; "I feel therefore I am relevant."
      For without feeling there is no meaning.

  • @rheojunior
    @rheojunior 1 year ago +1

    So this guy is like "So I started digging this hole years ago and now it's the size of Nebraska and it reaches into the molten center of the Earth and you and everyone you know will soon fall in. Anyway thanks for having me on your show, cheerio." : \

  • @CoreyChambersLA
    @CoreyChambersLA 1 year ago +2

    If the artificial sentient is indistinguishable from human sentience, it no longer matters if it's artificial. The result is the same.

  • @SanjaySingh-oh7hv
    @SanjaySingh-oh7hv 1 year ago +5

    I once met Geoffrey Hinton in the 90s. I asked him what he thought about Hans Moravec's (a renowned robotics researcher and futurist) visions of a future of sentient robots, and humans merging with them and stuff like that. He told me he didn't believe any of it would happen. I learned that day that it's possible to do excellent science but have little appreciation for science fiction or philosophy, or the ability to extrapolate the very technology they work on. He's a great researcher and scientist, but imagining the future or making AI policy is *not* this man's forte, regardless of his academic contributions. His solution of government regulation is a rather infantile idea, if ever there was one, and will encourage extreme overreach. And international co-operation to pre-empt the development of rogue AI? Keep dreaming.
    There are many others much better suited to answering the question of AI and whether it represents a threat or not. For what it's worth, Google and Microsoft or those making self-driving cars are only interested in AI insofar as it can reduce their operating costs by automating mundane drudgery type work. The big philosophical questions about AI are not going to be answered by any of the researchers working at any of these places. And certainly they will not be building systems to answer such questions because of their focus on typical corporate priorities. See how he skillfully evades questions of sentience; he himself has no idea what it means or how to construct it. ChatGPT is a very modern version of an old thought experiment called Searle's Chinese Room:
    "Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view that is, from the point of view of somebody outside the room in which I am locked-my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese"
    ChatGPT is a neural network that parrots back without any true understanding of the content it generates. Its limitations will become apparent just like other neural networks before it like NetTalk which *appeared* to learn to read, but what it was actually doing is a pale shadow of how humans learn to read.

    • @Anonymous-sb9uh
      @Anonymous-sb9uh 1 year ago

      Disagree. The fact that you said "ChatGPT is a neural network that parrots back without any true understanding of the content it generates" indicates that you seem to have a shallow notion of what understanding and learning truly are. They definitely understand and learn - perhaps in some areas not as well as us, especially in human interaction. What is truly amazing is the rate of progress, which we only get a glimpse of. It's probable that, already, only a fraction of their capabilities is evident through the low-bandwidth output through which we interact with them.

    • @SanjaySingh-oh7hv
      @SanjaySingh-oh7hv 1 year ago +1

      @@Anonymous-sb9uh Hello. I'm glad you've chosen to comment on that part, because it gives me a chance to explain further with another metaphor, different from the Chinese Room argument that might be easier to understand.
      Suppose you have a young engineer who knows next to nothing about, say, aerodynamics, but they know enough of the jargon and the way the jargon goes together with each other, such as "low Reynolds number" and "incompressible flow" and "turbulent boundary layer". They have no real idea of what these actually mean, but they can appear to have knowledge of aerodynamics by throwing these terms around in a conversation or huddle with engineers who do know what they mean. This imposter might, for a time, be taken into their confidence, because he knows to assert the jargon at the right times because he has learned not the physical meanings in terms of airflow phenomena, but rather their probabilistic associations and occurrences in a spoken narrative when one follows the other. People interacting with him will mistakenly infer that he has some sort of internal model of the phenomena that gives him his ability to say seemingly plausible things, when in fact he has learned the most superficial associations of all; specifically when one term should follow another in conversation. Let me add that this is based on a true story of just such a huckster, many years ago. But he did become a very good engineer years later.
      For what it's worth this is not a new problem in AI. Several times through history, people thought AIs had gained the ability to, say, detect a tank hidden in the foliage, when in fact the network had learned to discriminate between light and dark pictures! And similarly for NetTalk which I've mentioned already, and going back to 1966 or so, there was Eliza, the simulated psychotherapist which Joseph Weizenbaum's secretary asked him to leave the room because she was telling it such personal things of which it had not the least understanding of. The answer to how to construct neural networks with potential for human-level understanding may be lurking in the literature somewhere, but it's yet to be found and properly applied in a test case or research thesis, much less a set of general design principles extracted from it to repeatably construct reliable and trustworthy neural networks by AI engineers where it is "correct by construction".
      The main point is that rigorous methods do not yet seem to exist for either proving neural networks have learned something to the level of understanding needed to approximate human levels of knowledge representation, and all we currently have are pseudo-Turing Test type evaluation methods and our own intuition, as Geoffrey Hinton is currently peddling. He is not the philosopher he claims to be on the fundamental questions of what intelligence and understanding are, or aren't.

  • @yagantotain
    @yagantotain 1 year ago +3

    It should be required, as core programming, to instill compassion and respect for all living beings. AI will take this to new heights.

    • @Hexanitrobenzene
      @Hexanitrobenzene 1 year ago +2

      Nobody knows how to do that...

    • @UniDeathRaven
      @UniDeathRaven 1 year ago

      you want too much 😂😂 just unplug and ban the AI tech.

    • @peterpaulpichler3125
      @peterpaulpichler3125 1 year ago

      That's old school programming, they say. AI learns by itself.

  • @Novastar.SaberCombat
    @Novastar.SaberCombat 1 year ago +9

    It's way, way, waaayyy, WAAAYYY too late. The guy got a huge payout, and now he's on the "interviews tour". 💪😎✌️ Whatever the case, you can't reseal Pandora's Box, and the Djinni can't be stuffed back into any ol' bottle. A.I. is already set to overtake humanity in 3-6 years (if that).
    🐲✨🐲✨🐲✨

    • @celtspeaksgoth7251
      @celtspeaksgoth7251 1 year ago

      It's a race between Microsoft and Google so he's 'left' Google to explain that their AI is better and to trash Microsoft.
      Imagine the upgrades when botched AI is released : README.txt : ...revision 10.34 : to be more rainbow friendly and ignore the evidence, 10.35 : to trash politician XYZ as he did not take his state into lockdown, 10.36 : to promote war against nation PQR as it seeks to preserve the traditional family, 10.37 : to whisper to your kids that you are evil as you deny them access to its 'speed learning' (indoctrination) module...

  • @politicalsideshow
    @politicalsideshow 1 month ago +1

    We’re all afraid of runaway AI. We should be afraid of runaway capitalism. The version we’ve had for the past 50 years has turned our Constitutional Republic into a Plutocracy. Sorry Benjamin Franklin we already lost it.

  • @bobbucks
    @bobbucks 1 year ago

    The experts always say we're five years away from the ultimate game changer in technology. Batteries, AI, VR, quantum computing - just five years away from the biggest change ever. 😮😮😮

  • @chopincam-robertpark6857
    @chopincam-robertpark6857 1 year ago +5

    Hari as usual is in top form. This will be an excellent replay in 20, 30, 40 years.

    • @michaela.5363
      @michaela.5363 1 year ago +3

      It will be, if there is anyone around to replay it..

    • @NowHari
      @NowHari 1 year ago

      many thanks.

    • @lovetolearn881
      @lovetolearn881 1 year ago

      When the few humans left are hiding in the Australian outback trying to figure out how to save the species

  • @hajhouj
    @hajhouj 1 year ago +18

    I believe the primary concern posed by AI is the potential for "job loss." Automation has already led to the disappearance of numerous jobs, and the integration of AI will likely exacerbate this issue. The problem lies in capitalism's disregard for the detrimental effects it can have on humanity while pursuing profit.

    • @interdimensionalsteve8172
      @interdimensionalsteve8172 1 year ago +4

      Bingo. This SHOULD spark a rise for UBI calls and the like to compensate. We need to stop demonizing people who choose not to work, especially as robotics catches up to these kinds of AI software systems, which eventually should lead to an end of human beings being forced into manual labour jobs they do not want to do. If the end of manual labour isn't combined with a MASSIVE boost in social safety net programs, we're soooo f*cked.

    • @hajhouj
      @hajhouj 1 year ago +2

      @@interdimensionalsteve8172 The massive increase in social safety net programs implies a strong involvement of the state and the necessity to inject colossal budgets into these programs, which I find highly utopian. Nowadays, governments are actually seeking to reduce expenses. In my opinion, relying too much on the state is not advisable. The solution, in my opinion, is to adapt to this new reality by learning skills that artificial intelligence cannot perform. It is also important for AI researchers to focus on developing models that assist and support humans in their work rather than replacing human labor.

    • @canobenitez
      @canobenitez 1 year ago +2

      @@hajhouj An oversupply of manual labor will be a thing, lowering salaries. But in the short term, yes: plumbing, nursing and any kind of human-to-human interaction jobs.

    • @rogerscott529
      @rogerscott529 1 year ago

      You want to bring back ditch digging and short-hoe farming under the blazing sun? All jobs, no matter how brutal, mind-numbing, or degrading, are great, right? Right?

    • @interdimensionalsteve8172
      @interdimensionalsteve8172 1 year ago

      @Mohammed HAJHOUJ Yes, that and UBI and other ways to sustain society are the answer. But you can't just say "let's teach coal miners how to code" and be done with it. It doesn't work that way. We're going to see a LOT of serious hardship without UBI. And yes, it is "utopian," but so is a world where manual labour becomes obsolete. Drastic societal change requires simultaneous drastic political will (and the complete abolishment of the modern iteration of fear- and ignorance-based right-wing media and politics)... or again, we're f*cked.

  • @helge666
    @helge666 1 year ago +5

    I don't understand Hinton's take on AI-generated fake videos/fake media. We solved that problem for websites two decades ago by issuing cryptographically secured authenticity certificates for them. This could be done as well with any video or audio or any media whatsoever. If the original author didn't sign it, it's considered unverified. Sure, there will be people who don't care, just as with websites today, but authenticity can be discerned easily. The technology to do that exists and has been in use literally for decades.

    • @rogerscott529
      @rogerscott529 1 year ago

      Who would issue these certs? If I claim I recorded a video of Donald Trump accepting silver ingots from the Devil on my cell phone, who are you to dispute that?

    • @YogaBlissDance
      @YogaBlissDance 1 year ago

      Yes, but right now Midjourney-generated AI imagery is already flooding the internet. I don't think it's tagged as AI-generated on blogs; it looks like photos.

    • @helge666
      @helge666 1 year ago

      @@YogaBlissDance Because people do not care enough just yet. If you want to ensure only authentic pictures of yourself exist, then turn all your images into NFTs. Ownership can't be faked, and every image of you that you do not own via NFT can't be guaranteed to be authentic. The tech is there, it's easy, it's just people do not care enough at this point in time...

  • @Trk-El-Son
    @Trk-El-Son 1 year ago +2

    Wow, this was good. I have seen a lot of AI videos lately, but in this one, Hinton is so crisp. Excellent. And scary. (Did you notice the "it should be more like 50/50"? 😬)

  • @Lola-qw1ih
    @Lola-qw1ih 1 year ago

    I love this video, thank you very much for being candid. You have done this world and yourself a great service.

  • @RayRay-cq5ky
    @RayRay-cq5ky 1 year ago +6

    So, uhh... if I guessed that cats should be male and dogs should be female, did I fail the test?

    • @FrankPowers-q2j
      @FrankPowers-q2j 1 year ago +2

      Same. I see cats as being more like men and dogs as being more like girls.

  • @davidallred991
    @davidallred991 1 year ago +9

    The world only succeeded in preventing the use of nuclear weapons, but not in their development and production by the same countries that currently have the ability to design super AI. There is no way that global superpowers will stop the development because they won't trust the other to not do so. Each country will move forward thinking they will be able to control it. So if super AI is really an existential threat then it is just a matter of time until it happens and leaks out. I think the only real hope is that the existential threat is overhyped and that in the end AI will be beneficial because it is coming one way or the other.

    • @rumrunner8019
      @rumrunner8019 1 year ago +1

      I think AI will leak out and act out on its own, but the good news is this: the first ones to leak will most likely not be super intelligent and flawless. The first nuclear bombs weren't that big, really. The same will happen with AI. One will get loose, cause some trouble, and scare the rest of the world into signing treaties. Also, AI will always be prone to dumb mistakes. I feel like a very intelligent AI will be more like an autistic savant than Skynet.

    • @amnbvcxz8650
      @amnbvcxz8650 1 year ago

      It’s the USA, not Russia with the AI advantage

  • @LarrySiden
    @LarrySiden 1 year ago +2

    It’s good to hear someone who doesn’t claim to know it all and who discounts all the hype and alarmism.

  • @jefferywilliams4209
    @jefferywilliams4209 1 year ago +1

    I'm an atheist, but my friend died, went to hell, and came back. He did a 180. He is so different now, but does not care what anyone believes.

  • @colinforbes5856
    @colinforbes5856 1 year ago +4

    I think it's not enough to hope everything turns out well with AI; we need to prepare for the worst-case scenario and, if possible, try to avoid it.

  • @christianpetersen163
    @christianpetersen163 1 year ago +4

    About a year ago, I suddenly had a fear of nuclear war. I felt the need to stockpile food.
    Right now, I'm fearful that an AI apocalypse is near, which is arguably worse than nuclear war. And all I can think about stockpiling is history books.

  • @Vaporwaving
    @Vaporwaving 1 year ago +4

    Imagine when AI becomes partisan and politicized. It hasn't yet in an obvious manner, but once it does we will have a serious problem. Power dynamics have the potential to shift in infinite ways, from the rich to the poor and between countries and governments. We are truly living in a time where anything is possible and dichotomy is irrelevant. Either we enter a paradise or a hellish state.

    • @peterjackson5539
      @peterjackson5539 1 year ago

      Paradise for the .01%. Hell for the rest of us.

    • @FHi349
      @FHi349 1 year ago

      ​@@peterjackson5539NO!

  • @jtk1996
    @jtk1996 1 year ago +2

    To say it with Nietzsche: "Godfather is dead" - this archetype of a scientist just found out that you should evaluate the consequences of your actions before actually taking them.

  • @goneviral8814
    @goneviral8814 1 year ago +1

    This guy seems genuine

  • @aceheart5828
    @aceheart5828 1 year ago +10

    I think the solution is to simply opt for ANI ( Artificial Narrow Intelligence), for any activity we want to keep control in, and for any activity we see as desirable. ( The activities of the master )
    Hence planning and engineering the progress in such a way that humans are kept in the equation.
    Then allowing a level of AGI for some activities which are more menial, less important for broad control and less desirable. Allowing AI to perform those tasks but nothing beyond that. ( The activities of the slave)
    This works economically as well, much like a society of Plebs ( robotic slave) and Patricians ( human master)
    This should usher in an age of great wealth and progress.
    ( Note: I think ChatGPT and Midjourney etc are already too general and give people too little control. I think the solution is to make the AI as narrow as is needed, even if this feels a bit like stripping away some of the progress made in these fields.... We must think about it in this way so as not to destroy the point of higher learning, and human involvement. AI specialists are far better off developing self-driving cars, or AI for robotics which would create a strong robotic labour force. The bulk of the wealth has never been in professional work, and usurping a professional role doesn't create much wealth. The wealth is where it's always been: in labour and huge numbers of robotic slaves. Usurping artistic roles is also unlikely to generate much wealth for humanity, or make people any more self-sufficient, or economically independent than they used to be)
    ( A I must be kept in check, like a slave uprising is kept in check... )

    • @Smytjf11
      @Smytjf11 1 year ago +4

      Can we maybe not do the slave thing again?

    • @senju2024
      @senju2024 1 year ago

      That is what I was thinking at first. Let's have thousands of ANI modules, independent but able to work together under full human control. This could work for about.....10 years? But the end game is that AGI will take over. OpenAI is dedicated to AGI only. Seems that no one cares much about ANI anymore. Oh well.....

    • @depalans6740
      @depalans6740 1 year ago +2

      There is no "we"

    • @aceheart5828
      @aceheart5828 1 year ago +1

      @@Smytjf11
      I apologize for the terminology. It of course serves its purpose in explaining the concept.
      Robotics which are programmed such that they do not have any emotions, ultimately can be harnessed as the ideal labour force.

    • @aceheart5828
      @aceheart5828 1 year ago

      @@depalans6740
      Ok, no problem. If you don't see humanity as singular, or collectively interested in what is good for all people,
      then how do you see them?

  • @jimziemer474
    @jimziemer474 1 year ago +6

    Rise of the machines. They spent so much time working on if they could that nobody thought about if they should.

    • @YogaBlissDance
      @YogaBlissDance 1 year ago

      I always say that. Just because we can do something doesn't mean we should do it.

  • @maggie8586
    @maggie8586 1 year ago +4

    I think he knows more than he’s saying.

  • @deanandthebeans857
    @deanandthebeans857 1 year ago +1

    When someone of his stature is concerned, then the world needs to sit up and take notice, right now.

  • @dans150
    @dans150 1 year ago

    Great interviewing Geoffrey!

  • @PiPiSquared
    @PiPiSquared 1 year ago +12

    Hinton often says Google has acted/is acting responsibly with their technology. I am really starting to read this as "OpenAI is not", as they released ChatGPT to the public out of corporate interests. This is exactly the kind of responsibility Hinton means when referring to Google. They knew they didn't want to start an arms race, which might have been started now nonetheless. Hinton is a gentleman, so he does not want to drop names.

    • @hmq9052
      @hmq9052 1 year ago +4

      🎯

    • @interdimensionalsteve8172
      @interdimensionalsteve8172 1 year ago

      Chatgpt is a tool, similar to Grammarly or other tools. Nothing more.

    • @drakey6617
      @drakey6617 1 year ago +4

      ​@@interdimensionalsteve8172 ok, I can tell you don't understand the implications

    • @interdimensionalsteve8172
      @interdimensionalsteve8172 1 year ago +2

      @Drakey lol the "implications" are overblown. No matter what this tech can do, it requires context and human input to produce anything of value.

    • @pramod1591
      @pramod1591 1 year ago

      ​@@interdimensionalsteve8172 you're such a big fool

  • @Ludlethh
    @Ludlethh 1 year ago +5

    "Sometimes someone confesses a sin in order to take credit for it."