AIs will not take over the world

  • Published: 6 Aug 2024
  • ----------------------------------------------------------------------------------------------------
    Get a slice of pie for yourself:
    makit.wtf/charity/
    (In case the site doesn't work, use the direct link)
    www.gofundme.com/f/a-slice-of...
    In case you'd like to support me:
    patreon.com/sub2MAKiT
    my discord:
    / discord
    ----------------------------------------------------------------------------------------------------
    Once again I'd like to thank sodaguyz for allowing me to use their music in my video, here are the links:
    their main channel: / @sodaguyzmusic
    their second channel: / channel
    ----------------------------------------------------------------------------------------------------
    Intro: 00:00
    The narrowness of this video: 00:30
    AI is a function: 01:10
    Function output is safe: 02:15
    Safe example: 02:41
    It ain't bad: 04:20
    Outroduction: 06:00
    Outro: 06:49
  • Science

Comments • 5

  • @brandonhan9201
    @brandonhan9201 6 months ago +3

    your content is too underrated man 😭

  • @stickispowerful
    @stickispowerful 10 months ago

    1:43 handsome MAKiT lol

  • @matthewkhoriaty9700
    @matthewkhoriaty9700 8 months ago +2

    What about: you also can’t read the output if the AI understands human psychology because it could manipulate you or convince you to do what it wants. Deny it that, and you have an AI that can’t hurt you because it can’t do anything. That isn’t useful.
    So if you want AI output you can read, don’t make it understand humans. That limits its usefulness, and invalidates the current training paradigm of unsupervised learning on human text.

    • @MAKiTHappen
      @MAKiTHappen  8 months ago +3

      Yes, that is a very valid concern.
      My point in this video is not exactly that "the output of AIs can be controlled, no matter how smart they are", because that would be a lie; a smart enough AI will always find a way to exploit the system.
      The point is more like "Yes, AIs are incredibly dangerous due to just how powerful they are, but if research is done correctly, we should be able to bring them onto our side, and we should let professionals work on creating AGI, because the reward would be great."
      The key word is "professionals", so it is important to mention just how easy it would be for the AI to exploit the system. That's why your concern is extremely valid.