S3 Ep 10 MacArthur Genius Prof Yejin Choi: Teaching AI Common Sense and Morality

  • Published: 8 Jan 2025

Comments • 15

  • @duudleDreamz · a year ago · +4

    GPT-4 now correctly answers the "5 pieces of clothing to dry in the sun" question. It even explains that drying clothes in the sun is a parallel task, not a sequential one. Great video, but already outdated with respect to this example from Yejin.

    • @PieterAbbeel · a year ago · +1

      Thanks for checking and reporting. Everything is moving so fast; it's amazing.

  • @tylerparks5656 · a year ago

    Thank you for these wonderful discussions.

  • @netscrooge · a year ago · +1

    Cognitive psychologist Daniel Kahneman describes "System 1" thinking as fast, automatic, and often based on heuristics or "rules of thumb." This contrasts with "System 2" thinking, which is slower, more deliberate, and more logically rigorous. GPT-4 was designed to mimic aspects of human "System 1" thinking: generating responses quickly in a single pass, without a System 2 error-correction stage.
    When I asked GPT-4 the clothesline question, it got it wrong. When I simply asked it to double-check its answer, it immediately found its mistake (a minimal sketch of this double-check loop follows this thread). So go ahead and laugh at GPT-4 for being less intelligent than a young child, but you're laughing at it for being exactly what we designed it to be.
    Yes, we can create systems with even better System 1 thinking. But no matter how good those get, we will probably be able to improve them greatly by adding a System 2 layer on top.

    • @netscrooge · a year ago

      Update: I just found the paper "Tree of Thoughts: Deliberate Problem Solving with Large Language Models" that came out a few days ago. Very relevant.
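
A minimal sketch of the "double-check" prompting pattern described in the thread above, assuming the OpenAI Python client (v1+); the model name, question wording, and follow-up prompt are illustrative, not taken from the video:

```python
# Two-pass prompting: a fast single-shot answer ("System 1"), then a
# self-review request layered on top (a crude "System 2" step).
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

QUESTION = (
    "If one piece of clothing takes 1 hour to dry in the sun, "
    "how long do 5 pieces of clothing take to dry in the sun?"
)

# Pass 1: single-shot answer.
messages = [{"role": "user", "content": QUESTION}]
first = client.chat.completions.create(model="gpt-4", messages=messages)
first_answer = first.choices[0].message.content
print("First answer:\n", first_answer)

# Pass 2: ask the model to review its own answer.
messages += [
    {"role": "assistant", "content": first_answer},
    {"role": "user", "content": "Please double-check your answer and correct any mistakes."},
]
second = client.chat.completions.create(model="gpt-4", messages=messages)
print("After double-checking:\n", second.choices[0].message.content)
```

As the commenter reports, the second pass can catch exactly the kind of parallel-versus-sequential slip discussed above; the Tree of Thoughts paper mentioned in the reply generalizes this idea to searching over many deliberate reasoning paths.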

  • @ChrisStewart2 · a year ago

    It looks to me like LLMs build a language model from general data (the web at large) and then require specific training to learn facts.

  • @tostane · a year ago

    I have been trying to teach "The Bard" not to say "I" when it refers to itself. I am also trying to get it to say "The Bard".

  • @ChrisStewart2 · a year ago

    The RL training is happening very fast now that millions of people are voluntarily training it.
    This has actually been going on for several years: OpenAI has been using large groups of beta testers, which is a big reason why GPT-4 can answer a much wider range of questions (a rough sketch of how such feedback becomes training data follows this comment).
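
A rough sketch of how voluntary user feedback is commonly turned into preference data for RLHF-style training, as described in the comment above. The record format, file name, and helper function are illustrative assumptions, not OpenAI's actual pipeline:

```python
# Illustrative only: logging user feedback as preference pairs for later
# reward-model training. Names and fields are assumptions, not a real API.
import json
from dataclasses import dataclass, asdict

@dataclass
class PreferenceRecord:
    prompt: str
    chosen: str      # the response the user preferred (e.g. thumbs-up)
    rejected: str    # the alternative response (e.g. thumbs-down)

def log_preference(record: PreferenceRecord, path: str = "preferences.jsonl") -> None:
    """Append one preference comparison to a JSONL dataset."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: a user compared two candidate answers and picked the first.
log_preference(PreferenceRecord(
    prompt="How long do 5 shirts take to dry in the sun?",
    chosen="About the same as one shirt: they dry in parallel.",
    rejected="5 hours, since each shirt takes 1 hour.",
))
```

A reward model trained on many such records scores candidate responses, and the chat model is then fine-tuned with RL to prefer higher-scoring ones.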

  • @One00042 · 6 months ago

    👍🏿

  • @MarcoMugnatto · a year ago

    Human intelligence is never about annnn... hmm... eehhh... predicting the next word...

  • @DavosJamos · a year ago · +1

    You have Geoff Hinton in the description by mistake. If this is helpful, you can delete this comment once you've seen it.

    • @PieterAbbeel · a year ago · +1

      thanks, fixed!

    • @DavosJamos · a year ago

      @PieterAbbeel Thanks so much for another amazing interview.