AI Safety…Ok Doomer: with Anca Dragan

  • Published: 1 Dec 2024

Comments • 33

  • @justinparnell9572 3 months ago +37

    We need more resources put towards alignment and safety.

  • @brianhershey563 3 months ago +4

    This was pure GOLD top to bottom... wow, thank you! 🙏

  • @daspradeep 3 months ago +5

    Awesome interview by Hannah

  • @sthompson18 3 months ago +2

    Fantastic discussion. It seems to me that the missing piece in the safety efforts is a serious and capable public contribution. How can we have smart oversight from smart policymakers and lawmakers? How fast can we get that?

  • @En1Gm4A 3 months ago +7

    Let's go. The podcasts are great, keep going

  • @aiforculture 3 months ago +2

    Always really enjoy these interviews, thank you. (Also, loving the robot behind Hannah!)

  • @neilclay5835 3 months ago +4

    Fascinating interview

  • @Tititototo 3 months ago +1

    Well, that is one of the truly 'deep mind' dialogues on LLMs I've seen since the boom :)

  • @ivanrodriguezc 3 months ago

    I'd love to see someone from the robotics team, keep up the great work

  • @joseperez-ig5yu 3 months ago +1

    Quite interesting scenarios posed that seem somewhat futuristic, but are here already. Stephen Hawking warned us many years ago of the dangers of engineering AI. Hopefully steps will be taken so that things don't get out of control.😅

  • @swagger7 3 months ago +11

    I've been following AI for years and I still can't find the exact point in time where pulling the plug was no longer an option.

    • @olivrobinson 3 months ago +8

      Multiple copies all over the internet, the ability to self-copy and instantiate, etc.

  • @emc3000 3 months ago +2

    Oh my GOD I love this woman as a content host. I see her popping up all over and it's always an interesting listen.

  • @ajadrew 3 months ago +2

    Really fascinating to imagine Anca going through thought processes that are almost unimaginable. Re the driverless car scenario and creating the ability to consider everything in relation to the supposed human interaction, something popped into my head which I'm quite certain isn't original: would these AI systems only ever interact with humans in the ultimate way if all of us were fitted with chips?
    As said, the concept is ancient, and I for one would fight to the end before allowing a chip inside of me! But it's easy to see how management of AI systems coupled with chipped humans would work. Killing two birds with one stone, as they say.

  • @lequi7 2 months ago

    Guest suggestion: Joscha Bach

  • @Steelwatercube 3 months ago +4

    Crazy time to be alive. 😭😂😑

    • @Ofer_Davidi 3 months ago +2

      It is always crazy times for humans. True, maybe not for people who build bridges, but certainly for the people who invented plastic; they did not dive deep into understanding what making plastic would mean. The list of things no one spent time analyzing before creating or inventing them is very long, and it creates many crazy moments 😂

  • @CStefan77 3 months ago +1

    Alignment can be done with a load of Symbols based on the environment/parties involved (the company vs. personal data assessment), which stay shared, open, and extensive for a consensus, not postulates. That and XAI can work better if we can identify the Symbols and the Symboliad subset (of that shared interaction). AI systems are not quite there yet, due to black-box issues, framing issues, incomplete Symbols, and no real causality. Tractable (accounts for "free will" tracing) and consensus constriction to a Symboliad for an AI system (counts as a World Limit). EM AI Lab.

  • @henrikbergman4055 3 months ago +26

    Eh... "Ok, Doomer" Wouldn't that better describe someone who is exaggerating the dangers of AGI? As in: "doom(er) and gloom(er)". Or did I miss something?

    • @AdityaMehendale 3 months ago +10

      Did you watch the interview?

    • @henrikbergman4055 3 months ago +1

      @AdityaMehendale A bit early to say 'yes' as I'm still watching it, but it's not the first one I've seen. I can revise my comment if I did miss something.

    • @AdityaMehendale 3 months ago +2

      @henrikbergman4055 I guess the discussion was meant to point out the short-sightedness (or at least the rationale) of people responding with "okay *oomer", and a thought-out reasoning of how else one could go about it.

    • @henrikbergman4055 3 months ago +2

      @AdityaMehendale Ok. Listened again and I think I misunderstood 'expression for' vs. 'description for'. It made more sense the second time I heard it.

    • @0og 3 months ago +4

      Yes, they stated that the people who want to diminish AI's risks are the ones who use the phrase.

  • @MeBornAgin 3 months ago +7

    Ok Google ❤

  • @jorgerangel2390 3 months ago +1

    What a pleasant surprise to find you here doctor

  • @karen-7057 3 months ago +7

    Ok evil corporation. Chill

  • @vishnuprasadkr5541 2 months ago

    Bring AlphaZero to play chess

  • @WillyB-s8k 3 months ago +3

    Huh? Why isn't Timnit Gebru today's guest on Hannah Fry's podcast? Did I miss something?? What's going on???
    ✌️

  • @chriscross7671 3 months ago +1

    Ok Doomer

  • @filosofiahoy4105 3 months ago +2

    In short, make AI right-wing.

  • @Reach41 3 months ago +2

    Computers will become actually intelligent when pigs fly.