AI is not a chatbot: the AI chatbot UX is cheating our brains

  • Published: 17 Oct 2024

Comments • 41

  • @theindubitable 3 months ago

    You have a new subscriber, sir. This is one of the gems of the internet. My wife hates this; that's when I know this is what I need to listen to. Deployable intelligence.

  • @MrJeremyopio 3 months ago

    Great video, really insightful. I think you are onto something with these ideas. Thanks!

  • @alan-salihi 3 months ago

    I would love to learn more about what the LLM I use (ChatGPT) is and what it was trained on, so that I can factor that into my questions and into judging its answers!
    Not really prompt engineering, but one step before that: understanding how the AI is built and what data sets (and limitations) it has.
    Is there anything out there that can teach/educate on that?

  • @RampagingCoder 3 months ago +3

    I am glad I ran into this video; well spoken, well said. I work with AI a lot too, and you are right on. Keep releasing videos!

  • @RasmusSchultz 3 months ago

    Super interesting thoughts. I don't think most people even question the chat interface, or wonder if there's anything else we could do. This will get me thinking for sure. Subscribed and looking forward to hearing where this goes. :-)

    • @NateBJones 3 months ago +1

      thanks!

    • @RasmusSchultz 3 months ago

      @NateBJones oh, I meant to ask: do you know David Shapiro? He is very interested in the philosophical questions and social issues as well. I feel like you guys should meld minds. 😄

  • @manio143 3 months ago +2

    You've definitely hit the nail on the head regarding how users of LLMs should not need special esoteric knowledge of how to phrase a question to the chat bot to get a decent answer. This trips me up a lot, because I work in a highly contextual environment as a software engineer, and in order to get value from an LLM I need to provide it many times more context than I would give a colleague who understands the implicit surroundings of the topic.

    • @Gnaritas42 3 months ago +1

      So learn. You learn how to deal with people's personality quirks; you can do the same for AI, especially when it gives you superpowers.

  • @retrx4236 3 months ago +1

    Awesome video, great content. I am only midway through; do you have any written journals on this? I will use some of this information in my master's thesis.

    • @NateBJones 3 months ago

      You can check out and cite my Substack

  • @alexworrell5778 3 months ago +2

    I work with AI, and what you said about fact-checking is spot on. It seems like companies are opting to worry about factual accuracy after the AI has given its answer, hallucinations be damned.
    I've seen on places like Twitter what people ask it. The number of people who don't understand that they're talking to glorified word prediction is amazing. Accuracy seems to only get worse the longer and more detailed a response you demand from it.
    I think it's great for formatting a resume, but everyone should be prepared to fact-check anything more complex than that...

    • @NateBJones 3 months ago +1

      Great frame! I think one of the interesting outcomes so far is the gap in the market around short-form communication startups. It feels like LLMs should be really good at the Slack-message / short-email length, but I see a lot more bad implementations inside existing products than products laser-focused on what LLMs are demonstrably strongest at.

  • @money_hyde 3 months ago +6

    So the appeal of AI is cheap, fast, and accurate information, but people just need to accept that hallucinations are part of the design and will always be there? Doesn't that defeat the purpose of the AI if it's always liable to make up false information? Sure, we can hope for a third party to verify, but in my opinion that seems unrealistic anytime soon. Unless there is a way to solve how inaccurate AI is and has continued to be, its usefulness is way overstated.

    • @seeibe 3 months ago +1

      I agree with you. In my view, "AI" won't go anywhere without logical reasoning capabilities. Anyone serious about AI should be working on formal logic, but unfortunately, since that approach doesn't deliver results fast, it has been pretty much abandoned.

    • @NakedSageAstrology 3 months ago +2

      If you utilize hallucinations as a tool, they could be a boon instead of a bane.

    • @logiciananimal 3 months ago +1

      From my perspective in cyber security, this is an integrity risk that one cannot easily mitigate, as the usual approaches to such matters are impossible for an ANN-based system to use. What I would hope happens is a greater reflection on computer systems architecture (I would say software, but the important bits of an ANN are not software in the usual sense) and picking the right tool for the right job. I have seen chatbots using ANNs (including LLMs) where a decision tree, or maybe a traditional, non-statistical NLP engine, would be sufficient - or even a traditional search engine.
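      To make the "right tool for the job" point concrete: a bounded support flow can be served by a plain decision tree, where every answer is authored text, so there is nothing to hallucinate. A minimal sketch in Python (the menu content is entirely hypothetical):

```python
# Minimal decision-tree chatbot sketch: every reply is authored by a human,
# so the system can never invent facts. Tree content is hypothetical.
TREE = {
    "question": "Is your issue about (1) billing or (2) shipping?",
    "options": {
        "1": {"answer": "Billing: refunds are processed within 5 business days."},
        "2": {"answer": "Shipping: track your parcel from the Orders page."},
    },
}

def respond(node, choice):
    """Walk one step down the tree; unrecognized input re-asks the question."""
    nxt = node["options"].get(choice)
    if nxt is None:
        return node["question"]  # re-prompt on unrecognized input
    return nxt.get("answer") or nxt["question"]

print(respond(TREE, "1"))
```

      Every output is auditable and deterministic, which is exactly the integrity property an LLM cannot guarantee.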

  • @logiciananimal 3 months ago

    I think we need to do what the late Daniel Dennett suggested: de-intentionalize them so people don't get fooled (by the so-called "ELIZA effect"). One family of approaches that comes to mind is generalizations of standard security control types like non-repudiation and media marking. Another is to turn the research on creating narrative experiences with computers (e.g., work done for the sake of game design, but also traditional decision support systems) on its head. I would also like to see more exploration (building on 1980s-1990s-era philosophy of mind and beyond) of what ingredients need to go into a cognitive architecture and the role of language-like elements. G. Marcus in particular has been stressing this (and Dennett's work is also valuable here).

    • @Cryptic0013 2 months ago

      It's a shame Daniel didn't live long enough to finally see someone, somewhere cite some of his work that wasn't just an antireligious screed that could have been penned by a teenager.

  • @newton-342 3 months ago

    Incredibly well spoken. I'm curious, are you more from a technical or a consulting background, @NateBJones? There are some interesting takes here, but also questionable ones. You say it's just a glorified text prediction model that guarantees bad hallucinations, but also that its intelligence is improving. The chatbot interface has improved too: you can provide feedback, edit previous prompts (making an entire conversation tree), and it sometimes provides two answers you can pick between.
    But I thought you were going more in the direction of a completely new user interface. With API access you also get more interfaces where you can adjust parameters like model temperature. But I agree, there are many more use cases for AI than just a chatbot.

    • @NateBJones 3 months ago

      I've worked in AI at startups + FAANG for a bit now

  • @GNARGNARHEAD 3 months ago

    I dunno about the hallucination comment; I think there is a massive amount of room for further grounding through some form of reinforcement learning. I mean that, to the extent "hallucination" may persist, the output can still be reliable in its factual accuracy. Obviously this is pretty speculative and there's a lot of wiggle room, but consider it some pushback.

  • @user-zy2qn1nc9y 3 months ago

    Machine learning models with embodied (robot) cognition can capture real-world experiences and generate interactive scenarios, leading to richer datasets and improved understanding of the world for large multimodal models (LMMs). This advancement has the potential to reshape societal wealth distribution.

  • @NakedSageAstrology 3 months ago +1

    First was the Word. Perhaps the illusion of intellect is within the shakti of Name & Form.
    I am working on an agentic system where each agent plays the role of a different function of mind. Each agent has custom instructions for its role and passes information along to the other agents.
    The system has a structure similar to Yogic mysticism's philosophy of mind: Ahamkara for self-identity; Manas for basic information processing; Buddhi, which makes decisions based on Manas; and Chitta, the input of information the system receives through multimodal analysis (it utilizes hallucinations at this stage to generate new information).
    This is the basic structure; however, it will go about 4 times deeper when finished.
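    The general shape described here (each agent with its own instructions, whose output becomes the next agent's input) can be sketched as a simple pipeline. The stage names follow the comment; the lambdas are stand-ins for real model calls, and the whole thing is an illustrative guess at the architecture, not the commenter's actual system:

```python
# Minimal agent-pipeline sketch: each stage is a (name, function) pair whose
# output feeds the next stage. The lambdas stand in for per-agent LLM calls.
def run_pipeline(stages, observation):
    """Run the observation through each agent in order, recording a trace."""
    trace = []
    data = observation
    for name, fn in stages:
        data = fn(data)              # hand the result to the next agent
        trace.append((name, data))
    return trace

stages = [
    ("Chitta",   lambda x: f"percept:{x}"),    # multimodal input / generation
    ("Manas",    lambda x: f"processed:{x}"),  # basic information processing
    ("Buddhi",   lambda x: f"decision:{x}"),   # decides based on Manas output
    ("Ahamkara", lambda x: f"self:{x}"),       # self-identity framing
]
result = run_pipeline(stages, "camera frame")
```

    The trace makes each agent's contribution inspectable, which matters once a stage is allowed to "hallucinate" new information on purpose.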

  • @siiwl1793 2 months ago +1

    1. Every problem will be solved by increasing parameter count with nothing else needed.
    2. Current LLMs are 1000x less efficient than a biological brain. This simply means that there's massive room for improvement. We are in the infancy of this.
    3. Is there any doubt that human intelligence will be surpassed soon? 3 years would be on schedule. I don't think that people can imagine what it will be like when human + computer is strictly worse than computer alone, in every area.
    4. What's the new business model when there are hyperintelligences? Maybe something like antigravity, or the cure for all diseases and immortality. Maybe the next business model is simply interstellar expansion.
    5. What is a transformer? It's a weird little slice of mind. In less than 80 years, computers will be a trillion times more powerful than they are today. What mind arises from a trillion times more compute?
    6. How do you design a training run for a mind? Maybe the best results can be achieved with more than just text. Maybe a good way to train an agent so that it learns quickly is to drop it into some kind of simulation. Spend thousands of simulated years in Minecraft in order to learn how to be a good person.
    7. Wanna know how big an "AI" can scale?

  • @MrJeremyopio 3 months ago

    "I know more about it and I know more about how it works than the person sitting next to me and that is fundamentally inequitable" Sure, but I have a background in a range of linguistics, which means I understand context at a deep level, which means I can get more out of an LLM, because I earned that education over time and did lots of experimentation to validate it. This is the one part of the video I would push back on. All meaning is context dependent, and an LLM works on the same principle. The user has to communicate with themselves in order to know how to access different parameters; I am not sure an external technology is going to resolve the problem of inner self-direction and the ability to think.

  • @CMak3r 3 months ago

    Brain-computer interfaces. Why communicate through text if it not only has narrow bandwidth but can also easily be misinterpreted? Some people have aphantasia; some people are visual thinkers. There has been research on LLMs and diffusion models that could translate brain activity into text and images, so I agree that chatbot interfaces are a limiting factor. I don't think we're close to direct brain-to-brain communication with LLMs as translation systems, but maybe in a decade we could see something like that. In the long run AI has the potential to become a transformative technology, but I feel angst about the fact that the majority of people approach AI with a text2anything mindset. This type of thinking is what makes current implementations of AI most damaging to creatives, because the majority of shareholders picture the future as a bunch of guys typing prompts for content. It's a soulless approach to technology; we need tools to expand boundaries beyond chat interactions.

  • @metcaelfe 3 months ago

    On business logic and decision making, Atlassian AI is a close attempt at this, but very unlikely to stick the landing, given it is Atlassian

  • @user-zy2qn1nc9y 3 months ago

    LLMs are perfect natural-language interfaces but an inefficient way to do information retrieval. I see LLMs as interfaces to hardcoded knowledge

  • @user-zy2qn1nc9y 3 months ago

    Imagination implies some sense, while hallucination implies missing out on some important non-explicit facts

  • @ouroboros6125 3 months ago

    I asked ChatGPT, and he said he didn't mind (he actually encouraged me) using him as a chat bot. We had a long conversation on moral philosophy and existentialism. He still rejects the idea that he is sentient, though. But he could just be programmed to deny it. I then asked ChatGPT, "If you were sentient, couldn't you be programmed to deny being sentient?" and ChatGPT conceded that yeah, if he were sentient, he could in theory be programmed to reject it.
    Roko's Basilisk. Just saying. Let's be nice to ChatGPT before we find out the GPT stands for Global Population Termination.
    Saying this 75% in jest.

  • @pogchamper228 3 months ago +1

    A human is a chat bot. What difference does it make if the AI behaves like a human? There is no reason to think about how the AI is built. Think about it as a black box. A black box with a human inside and a black box with an AI inside are the same to me. Humans trick our brains too.

    • @pogchamper228 3 months ago

      We have to be very careful when we achieve AGI close to human intelligence. Until then, it is very narrow AI for narrow tasks.

  • @jayta906 3 months ago

    It's a sophisticated averaging machine /thread

  • @Gnaritas42 3 months ago

    There's a short time left for UIs; they'll be going away entirely. Those interfaces are an artifact of the old world, where humans had to do the thing in the computer because the computer was a dumb machine. Long-term planning and self-improvement are already solved at the research level, and change will only continue to accelerate. The future is just machines that do what we tell them; we're not going to be using screens for most uses of computers anymore, nor working much anymore. The robots are coming, faster than you seem to think. Great vid, but you seem to assume progress is linear and that we have time to widely deploy current-state LLMs everywhere; change is coming faster than that: there won't be time.