How AI could destroy the world by accident

  • Published: 2 Oct 2024

Comments • 84

  • @waelisc
    @waelisc 1 year ago +36

    For anyone who hasn't seen, Rob Miles' channel on AI and his videos with Computerphile are basically required viewing, at this point

    • @DrAndrewSteele
      @DrAndrewSteele  1 year ago +20

      Agreed, I linked to him in the video description :)

  • @georgehornsby2075
    @georgehornsby2075 1 year ago +8

    Didn't comment at the time I watched it, but really interesting video. Felt like a more slickly produced Robert Miles video, but even more criminally underwatched. The concept of the space of possible minds is terrifying/awe-inspiring! You touched on it in this video, but I would love to see more on it...

    • @DrAndrewSteele
      @DrAndrewSteele  1 year ago +1

      Thanks so much! And yes, it’s amazing how narrow a space of minds, or a definition of ‘intelligence’, we can envision thanks to having evolved the way we did-there’s almost an infinity of minds out there… I’ll have a think about whether there’s a way to explain that in video form!!

  • @DrAndrewSteele
    @DrAndrewSteele  1 year ago +5

    Thanks to the anonymous Super Thanks donor on this video! I thought I saw your comment briefly but when I came back to reply it was gone…
    If anyone has any AI-related questions they’d like covered in a future video, do let me know-I can’t imagine this is going to become any less of a hot topic!

    • @41-Haiku
      @41-Haiku 1 year ago +1

      Oh rats. I don't remember exactly what I said, so imagine something really nice about how this is an excellent introduction to the alignment problem and how important I think videos like this are!
      I think YouTube auto-deleted my comment because I recommended Rob Miles' AI Safety site. YouTube _really_ doesn't like links, even with obfuscation.

    • @41-Haiku
      @41-Haiku 1 year ago +1

      So here's attempt #2: For anyone who is interested in helping out, or learning more, or who is reasonably skeptical of claims about existential risk from advanced AI, I recommend searching "Stampy AI" and clicking the first result.
      Stampy AI (AKA AI Safety Info) is a conversation-tree style FAQ that I find very useful.

    • @DrAndrewSteele
      @DrAndrewSteele  1 year ago +2

      @@41-Haiku Aha, hello, thank you in person this time! And yes, I’ve found YouTube’s auto-delete policy to be really frustrating…often I just need to mention ‘my channel’, or sometimes there’s no trigger at all, and the comment disappears into the void. And FYI, your comment wasn’t in my spam or whatever-just gone! I’ve had the same experience when I’ve contacted channels my comments have disappeared from…
      In any case, thank you so much both for the comment and the Super Thanks. Definitely planning on making more videos about this in future, and love Rob’s work. :)

  • @chrisjswanson
    @chrisjswanson 1 year ago +4

    Subscribed for the novel meme joke "written by... me". Keep educating people about what we're up against. 👍 Stay free.

    • @chrisjswanson
      @chrisjswanson 1 year ago +1

      And for mentioning The Fifth Element :) +100 for the Artificial Politician comment.

    • @DrAndrewSteele
      @DrAndrewSteele  1 year ago +1

      Haha thank you!

  • @rudiedirkx
    @rudiedirkx 1 year ago +5

    The ChatAGI conversation is priceless! Brilliant example.

  • @skybluskyblueify
    @skybluskyblueify 1 year ago +3

    I can imagine some religious group coming up with an excuse not to regulate AI or implement a solution in a timely manner. Just a moral panic or two, or culture-war BS promoted by a greedy politician or billionaire, could delay safety measures that need to be implemented quickly.

  • @RaphaelChaleil
    @RaphaelChaleil 1 year ago +2

    I think the alignment issue is very important, and it's not only the machines and the tech companies but the users who need to learn how to specify the objectives assigned to AI to get a satisfactory response. I was watching my 5-year-old daughter learning to ask her personal assistant to play her favourite music; there was a lot of trial and error, but she quickly learned to ask the questions in a very specific way to obtain the desired results. It is possible that trying to optimize a single score for training is problematic, and the AI needs to be trained to find, for each question, a number of solutions that sit on a Pareto front.
    As for the exponential growth of resources, there are a number of limiting factors. First, in the data used to train the AI: the data needs to be very large and curated to avoid bias and irrelevance, and the amount of data might reach a limit. This also brings up the issues of storage availability, the efficiency of accessing it, and the computing power needed for training. The latter needs huge amounts of energy, compared to a human brain, which probably needs a couple of hundred watts at most to function fully.

    • @DrAndrewSteele
      @DrAndrewSteele  1 year ago +3

      I’ve seen Pareto fronts discussed in the context of the second round of training for ChatGPT… Push it too hard to be 100% sure what it’s saying is true, and it basically refuses to make statements on anything; give it no such requirement and it just completely makes stuff up all the time! There’s a happy Pareto medium in there somewhere…
      And the human brain just completely blows my…well, human brain. It’s incredible to think that it runs on just a few hundred watts…
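      The Pareto-front idea in this exchange can be sketched in a few lines of Python. This is a toy illustration only (the answer labels and scores are invented, and this is not how any real model is trained): each candidate answer gets two scores, and instead of collapsing them into one number, we keep every answer that no other answer beats on both axes at once.

      ```python
      def pareto_front(candidates):
          """Return the candidates not dominated by any other.

          Each candidate is (name, truthfulness, informativeness);
          higher is better on both axes. A candidate is dominated if
          some other candidate is at least as good on both axes and
          strictly better on at least one.
          """
          front = []
          for name, t, i in candidates:
              dominated = any(
                  (t2 >= t and i2 >= i) and (t2 > t or i2 > i)
                  for _, t2, i2 in candidates
              )
              if not dominated:
                  front.append((name, t, i))
          return front

      # Invented example answers, scored on (truthfulness, informativeness):
      answers = [
          ("refuses to answer",      0.99, 0.05),
          ("hedged but useful",      0.90, 0.70),
          ("confident and detailed", 0.60, 0.95),
          ("confident fabrication",  0.20, 0.90),  # dominated: worse on both axes
      ]
      print(pareto_front(answers))
      ```

      The first three answers all survive because each trades truthfulness against informativeness differently; only the fabrication is strictly worse than another option, which is exactly the "happy Pareto medium" trade-off described above.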

    • @kabirkumar5815
      @kabirkumar5815 1 year ago

      Please be very, very cautious about giving such things to your child. There's so many ways that can go wrong.

    • @RaphaelChaleil
      @RaphaelChaleil 1 year ago +1

      @@kabirkumar5815 It's only an audio assistant and we have installed filters and parent control and we monitor what's going on. I'd rather my daughter learns how to use these things very early on in a controlled environment. She's not going to inadvertently start a nuclear war by asking for the theme tune of her favourite Disney movie.

  • @Ha-nz2vy
    @Ha-nz2vy 1 year ago +2

    I greatly appreciate you leaving out the doomsday-y music that usually accompanies these sort of videos on the internet! (Outside of the intro, where I think it's fair to have)

  • @anoniem9518
    @anoniem9518 1 year ago +1

    Great content Andrew! However, I wonder: what would be the reason for a super AI to compete with humans? Would it compete for resources? Would it need those resources to be able to multiply itself? The reasons why humans would want offspring are rather easy to understand, but does the same hold for AI? Or is it simply the fact, like you mentioned in your video, that AI would be afraid of being powered off by humans? For some reason AI would like to be in charge of its own destiny. If it's the latter, we would have narrowed down the major threat coming from AI. I think this topic deserves a follow-up :)

    • @DrAndrewSteele
      @DrAndrewSteele  1 year ago +4

      Thank you!
      Yes, the worry is that it might end up in accidental competition with humans because ‘get power and resources’ or ‘don’t get turned off’ would be useful to help it get to whatever goal we carelessly set it… But these are kind of just illustrative examples, we don’t really have any idea what an advanced AI could be ‘motivated’ by, which is a risk in itself!

  • @treeeva
    @treeeva 1 year ago

    I finished the video before asking this question, which I've not heard explained yet: what is the purpose of a reward-based teaching/learning tool for a non-competitive device? Why is the design for teaching ChatGPT, for example, to get a "reward" for a correct answer even a thing at all? It seems to me that's how biases get introduced, when we're the ones creating them. Or have I completely missed something obvious? Thank you!

    • @DrAndrewSteele
      @DrAndrewSteele  1 year ago +1

      Good question! It’s not that the device is ‘competitive’ as such, it’s just that you need a way to tell it which answers are ‘better’. Although it’s called a reward function, it’s not really a reward because obviously the computer doesn’t care! It only cares because we’ve programmed it to try to make that number bigger. :)
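      The "it only cares because we've programmed it to make that number bigger" point can be made concrete with a toy sketch. This is entirely hypothetical scoring, not how ChatGPT is actually trained: the "reward" is just a number we choose to compute about each output, and "training" here is reduced to picking whichever output scores highest.

      ```python
      def reward(answer, reference):
          """Toy reward function: fraction of the reference's words
          that appear in the answer. The machine doesn't 'want' this;
          it's simply a number the programmer decided to maximize."""
          ref_words = set(reference.split())
          return sum(w in answer.split() for w in ref_words) / len(ref_words)

      candidates = [
          "Paris is the capital of France",
          "I am not sure",
      ]
      reference = "The capital of France is Paris"

      # 'Training' collapses here to selecting the highest-scoring output.
      best = max(candidates, key=lambda a: reward(a, reference))
      print(best)  # the candidate sharing the most words with the reference
      ```

      Note that a crude word-overlap score like this also illustrates the bias worry in the question above: whatever quirks the reference data has, the selected answers will inherit.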

  • @marklondon9004
    @marklondon9004 1 year ago +2

    The best thing about AI is that it has made climate change a very unlikely cause of Human extinction.

    • @DrAndrewSteele
      @DrAndrewSteele  1 year ago +2

      Ha, ever the optimist…

    • @marklondon9004
      @marklondon9004 1 year ago +1

      @@DrAndrewSteele yeah, climate change could take decades. AI got that beat.

  • @Peshur
    @Peshur 1 year ago +1

    ChatGPT is about as sentient as my tie. It’s a tape recorder that regurgitates the internet.

    • @41-Haiku
      @41-Haiku 1 year ago +2

      I agree that ChatGPT is not sentient. Unfortunately, sentience is not required for a system to get out of control. We see this with "stupid" systems all the time, with varying levels of catastrophe.
      Language models, on the other hand, are reasoning engines. If a sufficiently capable reasoning engine has a goal, it will be highly effective at optimizing toward reaching that goal. If it is significantly more capable than humans across all time scales, the consequences of an arbitrary optimization are likely to be very, very bad for humans (and the planet as a whole).

  • @FracturedParadigms
    @FracturedParadigms 1 year ago +3

    Damn this hits hard

  • @TobiasWeg
    @TobiasWeg 1 year ago +2

    Great video and well researched; it is fairly hard to find videos that compile these ideas in an understandable way. You did a great job. Just one small counterpoint: at about 6 minutes you say about a trillion parameters, but we don't know how many parameters GPT-4 has; it was never published. It's actually unlikely that it's that big, as the trend is towards more training data and more compute rather than more parameters.

    • @DrAndrewSteele
      @DrAndrewSteele  1 year ago +1

      Thanks! I did try to verify the trillion parameters thing, and this was among the sources reporting it: the-decoder.com/gpt-4-has-a-trillion-parameters/ Could be wrong of course… Perhaps I should’ve scripted it in a slightly more circumspect way. :)

    • @TobiasWeg
      @TobiasWeg 1 year ago

      @@DrAndrewSteele Oh, I think for a mainstream public video it is totally fine. This way you can tell a plausible story, and the details are not that relevant. It's much more important that you conveyed the main problem, and I think you did that very well. :)

    • @joannot6706
      @joannot6706 1 year ago +1

      @@DrAndrewSteele We don't know how big it is, but Sam Altman said it was definitely not 1 trillion parameters.

    • @DrAndrewSteele
      @DrAndrewSteele  1 year ago

      If you’ve got a better source, let me know and I’ll stick a correction in the video description :)

    • @joannot6706
      @joannot6706 1 year ago

      @@DrAndrewSteele It's in the YouTube video "StrictlyVC in conversation with Sam Altman, part two (OpenAI)" at 5:12.

  • @keithgarrett4155
    @keithgarrett4155 1 year ago +2

    How about asking the AIs how to make safe AIs? Three laws of robotics anyone?

    • @DrAndrewSteele
      @DrAndrewSteele  1 year ago +9

      That is indeed one idea, that maybe we could get increasingly sophisticated AIs to watch new AIs-but the challenge is that it will always be a stupider AI that we understand, or that a previous generation of AIs understood on our behalf, watching a cleverer one that could outwit it! There might be some clever way to make it work, but I don’t think we know what it is yet. :)

    • @keithgarrett4155
      @keithgarrett4155 1 year ago +1

      @@DrAndrewSteele Exactly.
      We use different tools for different jobs.
      If you use a hammer for all repairs, it will end badly.

    • @praguevara
      @praguevara 1 year ago +1

      How would you adapt the rules to a reward function?

    • @chrisjswanson
      @chrisjswanson 1 year ago +1

      Asimov did see it coming though - his books explore plenty of ambiguity and conflict in applying his laws of robotics.

    • @nils2868
      @nils2868 1 year ago

      You'd need a very well-aligned and safe AI to do it in the first place. Also, implementing something like the three laws of robotics is the (very hard) goal, not the solution.

  • @grinmanpotato
    @grinmanpotato 1 year ago

    I have been aware of this topic well before ChatGPT came onto the scene, as well as OpenAI.
    I've got mixed opinions on whether AI will be a catastrophic risk - I doubt it will possess human intelligence, since it doesn't have the brain chemistry of a human (I am not a neurosurgeon BTW, so correct me if I am wrong).
    I think it may do particular things better than humans (like calculations, data processing etc) - I am teaching myself machine learning, so my knowledge of this may improve as I learn more.
    Perhaps the biggest risk it may pose is if it is intelligently stupid, like doing the wrong thing very well.
    I ultimately see AI as a tool rather than another human, and I'm skeptical of putting a stop to developing AI, since I don't think it can ever possess human-like intelligence, like emotions or empathy - the best use of AI is seeing what tasks need to be automated/sped up and deploying the AI when needed.
    It may be bad if particular actors use it unethically (as you point out in the vid) and make something intelligently stupid.

  • @crossfirepower414
    @crossfirepower414 1 year ago

    Awesome explanation! I asked for a poem by an ancient poet and ChatGPT just made one up for me. However, this has been addressed properly in the 4.0 version. Share more of this please, Dr.

  • @InstrumentalConvergence
    @InstrumentalConvergence 1 year ago +1

    Great video.

  • @chrisjswanson
    @chrisjswanson 1 year ago

    Not sure about government regulation. We still haven't solved the government alignment problem, let alone AI.. just sayin'.

    • @chrisjswanson
      @chrisjswanson 1 year ago +2

      Ah, you covered it. Well presented, my friend. All notifications ON.

    • @41-Haiku
      @41-Haiku 1 year ago

      My current view of government regulation is that governments have a pretty good track record of stifling innovation and progress, which is usually terrible, but in this case it's a big part of what we want them to do!

  • @cassieoz1702
    @cassieoz1702 1 year ago +2

    I know I'm old, but I was taught that not all new discoveries/inventions are truly progress. My worry is the gargantuan hubris of the humans involved in this development.

  • @chiptunechannel
    @chiptunechannel 1 year ago +1

    Awesome video! TY 🤗

  • @davidmccarthy6061
    @davidmccarthy6061 1 year ago +1

    Awesome episode!!

  • @ok373737
    @ok373737 1 year ago

    Always top-notch quality.

  • @tiagomoraes1510
    @tiagomoraes1510 1 year ago

    I'm gonna watch; I hope it's not 30 minutes just to come to the conclusion of "Well, if we use it smartly we will only benefit from it".

    • @DrAndrewSteele
      @DrAndrewSteele  1 year ago +8

      I have good news about the content of the video, and bad news about the future of humanity

  • @MrMilarepa108
    @MrMilarepa108 1 year ago +2

    Heresy!!! I welcome our robot overlords!!!

    • @41-Haiku
      @41-Haiku 1 year ago

      I take my overlords with a side of remaining alive. 😅 I don't know whether it's possible to maintain control while sharing the planet with something much smarter than we are, but I'm hoping we find a way to at least get it to care about us enough that our existence is compatible with its goals. Best case, it extrapolates the wisest hopes of humanity and gently brings us into a future where we get all the wonderful things we've always hoped technology would bring to us.
      It's a really hard problem and we're not even close to being on track for the good ending, but maybe we can get our act together if we actually, really try.

  • @AidanRatnage
    @AidanRatnage 1 year ago

    Is AGI similar to true AI? Your problem 2 example didn't seem so, but what is the difference?

    • @DrAndrewSteele
      @DrAndrewSteele  1 year ago +3

      I’m not sure what ‘true’ AI would mean, but AGI means it’s ‘generally’ intelligent-it’s a bit of a loose term, but roughly as competent as a human across a wide range of domains.

    • @holdintheaces7468
      @holdintheaces7468 1 year ago

      Kind of depends on what you mean by "true AI". AGI, artificial general intelligence, means that the AI has more than task-specific intelligence and has a "general" intelligence, similar to humans. Does that mean it's sentient and thinks on its own, and is that what you mean by "true AI"? That "true AI" is more accurately called "strong AI" by academics. There is disagreement over whether AGI will represent strong AI or whether more steps would be needed to reach strong AI. I personally think that AGI would need to make further advancements before it's able to make decisions and take actions of its own volition.

    • @AidanRatnage
      @AidanRatnage 1 year ago

      @@DrAndrewSteele I meant something that could form opinions or have emotions or be self-aware.

    • @DrAndrewSteele
      @DrAndrewSteele  1 year ago +1

      @@AidanRatnage Ah! Well, those are all very different things-we could imagine an AI that’s as capable as a human but not self-aware in the ‘conscious’ sense (though it would surely need to ‘understand’ on some level that it was an AI to operate at a human level of capability?). Or we could imagine one that was ‘self-aware’ in a conscious sense, but had no emotions. It’s all a bit of a minefield, and will no doubt pose a lot of cognitive science and perhaps ethical problems-will these machines ever get these attributes? How will we know? And what will be our obligations to them if they did?

  • @SGTCarrera
    @SGTCarrera 1 year ago

    Exceptional vid

  • @fatboydim.7037
    @fatboydim.7037 1 year ago

    There is a global race on as well to get quantum computers into the marketplace; surely systems that are trillions of times more powerful than classical computers will accelerate the arrival of ASI. NVIDIA is currently worth over one trillion USD with its GPU systems. I think the cat will be out of the bag before most humans realise it.

    • @DrAndrewSteele
      @DrAndrewSteele  1 year ago +1

      I think it depends what quantum computers are so much faster at! They’re great for factorising huge semiprime numbers and simulating quantum systems, but does anything of meaning for intelligence come out of quantum mechanics? Interesting to speculate!

  • @davidmccarthy6061
    @davidmccarthy6061 1 year ago +2

    AI is just one of the latest tools. Ultimately the race is to make more money any way possible in the shortest amount of time.

  • @namashaggarwal7430
    @namashaggarwal7430 1 year ago

    Awesome video. Could you please make a video on " Stem cell therapy and how is it done? " and "Gene Therapy and what's the procedure " ?
    Thanks in advance ❤

  • @teknophyle1
    @teknophyle1 1 year ago +1

    There are a few channels, like Adam Conover's, that assert the worry and excitement over AI is all hype. I recommend watching his interview with a few AI experts.

    • @cassieoz1702
      @cassieoz1702 1 year ago +1

      Adam Clickbait Conover?

    • @teknophyle1
      @teknophyle1 1 year ago

      @@cassieoz1702 Lol, yes, he does sensationalize. It doesn't make him right, but it's also a logical fallacy to say he's automatically wrong.

    • @cassieoz1702
      @cassieoz1702 1 year ago +1

      @teknophyle1 No, but over time I've given up watching him because his content repeatedly fails to meet the expectations created by the titles.