You're Not Hallucinating: Demystifying Generative AI

  • Published: 4 Dec 2024

Comments • 4

  • @akirachisaka9997 3 months ago +1

    Also, I feel like I disagree with the conclusion of "not anthropomorphizing Non-Human Intelligent Systems", but I feel like I'm actually pointing in the same direction.
    As in, while "anthropomorphizing" them, I'm actually less "treating them like humans" and more "treating humans like intelligent systems"…
    I feel like I'm rambling and I don't know what I'm talking about. But I think that while it's important to understand the Shoggoth is a Shoggoth, and quite different from your conventional humans, I do believe "pretending they are human" is an important part of advancing the phenomenon until the Shoggoth can behave and function more like a human.
    I guess what I want to say is, "haha eliza effect go brrrrrr". And that human-to-human interactions are fundamentally based on Eliza effects anyway.

  • @akirachisaka9997 3 months ago +3

    I'm in the camp that considers "human consciousness is a type of controlled hallucination that significantly benefits the survivability of both the individual and the tribe", so LLMs being able to hallucinate seems like a positive trait to me lol

  • @danscieszinski4120 3 months ago

    Isn't it time to start using the term "volition" a lot more in these discussions? AI never leads the conversation in its current form.

  • @brulsmurf 4 months ago

    LLMs interpolate. They are almost never trained on the exact input you give them. Humans do this too, but are far better at it.