The Surprising Reason Bots & AI Are BOTH Geniuses and Fools

  • Published: Dec 16, 2024

Comments • 20

  • @brianbeasley7270 • A month ago +1

    Nice episode. On the internet, there are very few people who have a clue how neural nets and LLMs work and most people are mired in a "computer programming" model.

  • @MichaelDeeringMHC • A month ago +1

    I can't wait until a robot asks me if I want fries with that.

  • @robertboudreau8935 • A month ago

    Elon needs to train an Optimus robot to do his own job. This is hard but it gets directly to the issue you bring up here. This is critical for Elon because he could then have an apprentice robot to help him run his companies, help with information at meetings, and maybe someday take over his job when Elon gets old. It would be exciting to see what Optimus Elon’s inference computer would say.

  • @paulboyle5794 • A month ago

    John, this video was right in your swim lane. Very insightful. Well done.

  • @davidb7381 • A month ago +1

    "LLMs know everything and understand nothing." Wish I could give credit to the person who said this. I share this with individuals who want me to help them use AI in their work and personal life.

  • @sol3citizen847 • A month ago

    Loved your excellent video today. So much to think about with what remains of my mental acuity! A spot of wisdom: before you reach the big 6-oh it seems like a big deal; afterwards… not so much. 😂

  • @Tquadpod • A month ago

    You really nailed it at the end of this video.

  • @louisstanwu • A month ago

    I find myself agreeing with much of what you say. Thanks

  • @tfragia1 • A month ago

    Pick any word, like "refrigerator." Does an LLM have a physical model of a refrigerator in its mind? A model whereby heat is removed from the inside to the outside using a compressor? A model of its size, and the fact that it keeps food from rotting? Does it have personal experience with a refrigerator and/or heat or the lack of it? If I ask it, it will tell me. But when I ask it, I use words like "how does it work?", which it uses to look up information. IMO, that doesn't prove understanding. It proves it is really good at looking things up and responding with correct language. So it seems to me that LLMs lack complex models of the real world, not acuity. 🤷‍♂️

  • @IntoTheFray.58 • A month ago

    Could dolphins and whales communicate in sonar pictures? Or did their languages start there and perhaps become abstracted from there? I would think that would fit the way they perceive the world better than words like ours.

  • @Rolyataylor2 • A month ago

    You didn't mention the problem with giving chatbots memory: remembering the hallucinations. My latest video touches on that.

  • @marc0443 • A month ago

    My guess is it's because they depend on probability. They are correct most of the time, but not always.
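
    The probabilistic point in this comment can be sketched in miniature. The probabilities below are made-up toy numbers, not from any real model; they only illustrate how sampling from a next-token distribution is usually right but occasionally wrong:

    ```python
    import random

    # Toy next-token distribution (assumed values, for illustration only):
    # after "The sky is", a model might assign these probabilities.
    next_token_probs = {"blue": 0.90, "green": 0.07, "loud": 0.03}

    def sample_token(probs):
        """Draw one token at random according to the model's probabilities."""
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        return random.choices(tokens, weights=weights, k=1)[0]

    # Sample 1000 times: "blue" dominates, but wrong tokens still appear.
    random.seed(0)
    counts = {t: 0 for t in next_token_probs}
    for _ in range(1000):
        counts[sample_token(next_token_probs)] += 1
    ```

    The point is that even a well-calibrated distribution produces occasional off-target tokens by design; correctness is a matter of odds, not guarantees.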

  • @blengi • A month ago

    What bot has a trillion-parameter tuned motor system controlling 600 muscles, with hundreds of eccentric tension-storing and compensating tendons, so it can walk anything like the human mammal, shaped by hundreds of millions of years of evolution, which still takes years to learn to walk and even longer to write its name legibly? How can you reasonably compare the two?

  • @juliahello6673 • A month ago

    Conscious means having subjective awareness. It doesn't mean thinking in certain ways. I think professionals need to be very careful about how they use the word "conscious." As soon as robots perfect imitating emotions and subjective awareness (facial expressions, etc.), people will start anthropomorphizing them, and this will be very destructive. Although it's probably inevitable, we shouldn't encourage it with sloppy use of terminology that implies they have human qualities that they will never have.

  • @NO3V • A month ago +1

    Did an LLM write the video title? ;-)
    *are

  • @andrasbiro3007 • A month ago

    I don't think that's true. I'm using LLMs for work, and they often feel as intelligent as a human. And it's not just knowledge, but reasoning and common sense.
    I think these are probably the main issues:
    - LLMs learn on internet data, which is mostly garbage, and nowhere near complete. LLMs have weaknesses exactly in areas that are rarely put in writing, if at all. For example, things like stacking various everyday items on top of each other.
    - LLMs don't have personal experience. They can't validate and patch their knowledge; they can only find patterns in the data we give them. It's like learning only from books but never practicing.
    - LLMs are fine-tuned with human feedback, which probably injects human cognitive biases into them, like giving BS answers instead of admitting they don't know something. I'm pretty sure this is because human trainers accept answers that feel right instead of rigorously checking them.
    - LLMs have to answer without thinking, which drastically reduces the quality of answers. It's like when you have to answer complex questions instantly, or make split-second decisions: of course your error rate is much higher. This one is being solved right now, and it does improve the quality of answers a lot.

  • @michaelbartell1166 • A month ago +1

    Forgive me, I'm trying to figure out what this video is about. What are you actually conveying? Maybe you should step back and watch the video maybe three times. I don't understand. I'm trying to be helpful. I don't know; I have Asperger's. Is your audience eating this up? I don't know ❤

  • @AdamSWolff • A month ago

    First

  • @rays2506 • A month ago

    When you read your post off the screen, it's annoying. You don't need that crutch. Face the camera and speak your piece. Personally, I turn off the audio and just read what you wrote.