Geoffrey Hinton: Large Language Models in Medicine. They Understand and Have Empathy

  • Published: 23 Dec 2024

Comments •

  • @Corteum 10 months ago +12

    How has the claim that "LLMs understand and have empathy" been verified? What is the single biggest piece of evidence to substantiate it?

  • @u2b83 10 months ago +2

    11:13 Prompt of the day: compare these two newspapers in three or four sentences, and in your answer demonstrate sarcasm, a red herring, empathy, and metaphor.

  • @johnblake4523 10 months ago +4

    Thank you, Geoffrey Hinton.

  • @Infinifiction 10 months ago +4

    Symbiosis will win: silicon and brain combined, augmentation not obsolescence. I'm really interested to see progress in visualising multi-dimensional topological manifolds in both human and machine networks. Being able to see into these systems will be crucial to understanding how they work; if we can isolate abilities, LLMs can become jigsaws of interpretable components instead of black boxes.

  • @shawnvandever3917 10 months ago +3

    LLMs reason very well on in-domain or similar data, but they start to struggle on out-of-domain data. The reason is that, as of now, they are a one-and-done type of machine, whereas the brain makes continuous prediction updates in order to generalize. LLMs will be able to do the same soon. While they do understand and reason, it happens through a process different from that of a conscious human.

    • @kemalbey271 10 months ago

      Causality

    • @jamesdewane1642 10 months ago +3

      Can AI define a domain?
      Only humans can ask a question or set a goal. Goal setting is about prioritizing one's values. AI cannot establish values or decide between compassion and security, for instance.