Why AI products suck

  • Published: 3 Aug 2024
  • In today's video blog post, Austin looks at why the current crop of AI products all kind of suck, discussing GPT-4o, Rabbit R1, Google AI Overviews, Humane Ai Pin, and Microsoft Copilot.
    Chapters
    00:00 Intro
    02:25 Why do all these AI products suck?
    03:07 Why the Humane Ai Pin sucks
    04:03 Latency & GPT-4o
    11:19 Inaccuracy & Bing, Google AI Overviews
    20:28 Limited Functionality & Rabbit R1
    24:50 Closing

Comments • 33

  • @MarcoMugnatto
    @MarcoMugnatto 2 months ago +6

    One thing you may not have noticed is that there is strong resistance toward AI from the current crop of influencers. They are influencers of the smartphone era, have grown accustomed to it, and find it difficult to detach themselves. If none of them are able to, the solution will come from a new generation without this constraint. That resistance has weighed far more heavily than the supposed deficiencies of the products.

    • @uncoverage
      @uncoverage 1 month ago

      you’re definitely right that the influencers of the smartphone era are now entrenched in a smartphone-centric world, but i’m not convinced that a smartphone-centric world is going anywhere anytime soon

  • @techsuvara
    @techsuvara 1 month ago

    Thanks for your contribution to shining a light on the realities of this product. By the way, you can screen-record the iPad; that makes it easier to present than recording the screen with a camera. Doing it that way might take away from your style, though. :)

  • @Timlockwood8818
    @Timlockwood8818 2 months ago +4

    1:50 I don’t think that’s entirely true. Most LLMs have some level of “reason” and would acknowledge that putting glue in your pizza is a bad idea. I think Google was just using a much smaller model meant only to summarize.

    • @bornach
      @bornach 1 month ago +1

      Its reasoning capability is limited by the examples of reasoning in the training data. OpenAI scraped Stack Overflow questions and their worked-example answers to help its GPT models generate output that resembles a person reasoning out a solution. But the training data also contains a lot of shitposts from Reddit comments. That's why they need annotation gig workers in Africa and Asia to clean up the data and provide human feedback during InstructGPT fine-tuning.
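
      For a sense of what that human feedback looks like, here's a minimal sketch of the pairwise comparisons annotators produce and the reward-model loss trained on them (per the InstructGPT paper's description; the field names, prompt, and scores here are made up for illustration):

          # Sketch of RLHF-style preference data and the pairwise
          # (Bradley-Terry) reward-model objective. All names and
          # numbers are illustrative.
          import math

          # An annotator reads two model responses and picks the better one.
          comparison = {
              "prompt": "Why does glue not belong on pizza?",
              "chosen": "Glue is not food-safe; it's toxic and inedible.",
              "rejected": "Use about 1/8 cup of non-toxic glue for extra tackiness.",
          }

          def reward_model_loss(score_chosen: float, score_rejected: float) -> float:
              """Pushes the reward model to score the human-preferred
              response higher than the rejected one."""
              return -math.log(1 / (1 + math.exp(-(score_chosen - score_rejected))))

          # If the model already prefers the chosen answer, loss is small:
          print(reward_model_loss(2.0, -1.0))  # ~0.049
          print(reward_model_loss(-1.0, 2.0))  # ~3.049 (strongly penalized)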

    • @uncoverage
      @uncoverage 1 month ago

      data quality is a huge piece of the puzzle. unfortunately, so is data quantity!

    • @freecivweb4160
      @freecivweb4160 12 days ago

      Google was good at search. Am I the only one who thinks everything else good they did was merely acquisitions or copying things others had done? As their search deteriorates, their only remaining value will be in their monopolistic acquisitions, such as YouTube.

  • @2phonesbabyken
    @2phonesbabyken 1 month ago +3

    Do I gatekeep this channel or hope this video blows up?

    • @uncoverage
      @uncoverage 1 month ago

      how kind of you :) thanks for watching!!

  • @sneedchuck5477
    @sneedchuck5477 1 month ago +2

    If the only thing you can reliably ask AI is questions with a very clear-cut (and thus probably well-known) answer, what does AI offer over just looking it up anywhere online?

    • @justinbaker2883
      @justinbaker2883 1 month ago

      Problem is, internet search sucks now with SEO, Google ads everywhere, and video being pushed. AI coming from Bing made it sound like we'd get back to the glory days of search, where the LLM could sift through all the SEO nonsense and actually bring back the answer. But with hallucination we're back to square one. It's demoralizing: things get worse, we're shown a fake future where it's fixed, then it turns out the fix is an even worse solution than the original problem. I'll just dig through old Google search, thx

    • @uncoverage
      @uncoverage 1 month ago +1

      agreed!

  • @freecivweb4160
    @freecivweb4160 1 month ago

    Our start-up is now introducing a wearable AI called Rabbit-hole. Stay tuned for upcoming announcements.

  • @stevensonrf
    @stevensonrf 2 months ago +5

    Is that a new iPad M4 I see before me? 😄

    • @elwire
      @elwire 2 months ago +1

      I think the camera placement on the iPad tells you it's not an M4.

    • @uncoverage
      @uncoverage 1 month ago +1

      that’s right! still an old iPad Air!

  • @marklsimonson
    @marklsimonson 1 month ago +2

    I've often wondered about the fact that AI (at least so far) is built on language models and processing. But human language is just the means we use to convey ideas and concepts to each other. As such, I think it is never going to capture human reasoning (and meaning and understanding) as long as it's built around language. It's a bit like thinking a parrot can think like a human because it's able to mimic human speech. Language and speech are just the outer layer of what's happening in our brains, not the core of thinking and reasoning.

    • @bornach
      @bornach 1 month ago +1

      I wouldn't say never. It's more a case of token-sequence data being a very inefficient format for providing enough training data for the machine learning model to fully generalize the skill it is supposed to learn. For example, large language models are famous for making human-like errors in arithmetic. That could be solved by providing many more examples of arithmetic being applied in all the different problem domains the AI is being trained for, but this rapidly becomes an exponential explosion of data required for training, in order to chase down all the edge cases where an insufficiently trained AI fails. Solving a quadratic expressed with 2-digit numbers, then 3-digit numbers, then 4... Now solve a cubic with 2 digits, etc
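
      A rough back-of-the-envelope sketch of that blow-up (purely illustrative: it just counts equation instances by coefficient size, ignoring signs and duplicate roots):

          # Sketch: how fast the space of "solve this polynomial" training
          # examples grows with coefficient size and degree. All numbers
          # here are illustrative, not from any actual training set.

          def n_digit_values(d: int) -> int:
              """Count of positive integers with exactly d digits."""
              return 9 * 10 ** (d - 1)  # e.g. d=2 -> 90 values (10..99)

          def distinct_problems(degree: int, digits: int) -> int:
              """Distinct degree-n polynomials whose coefficients all
              have exactly `digits` digits (signs ignored)."""
              return n_digit_values(digits) ** (degree + 1)

          for degree, name in [(2, "quadratic"), (3, "cubic")]:
              for digits in (2, 3, 4):
                  print(f"{name}, {digits}-digit coefficients: "
                        f"{distinct_problems(degree, digits):,}")

          # quadratic, 2-digit coefficients: 729,000
          # quadratic, 3-digit coefficients: 729,000,000
          # quadratic, 4-digit coefficients: 729,000,000,000
          # Every extra digit multiplies the space by ~1000, and every
          # extra degree multiplies it again, so covering the edge cases
          # by example alone quickly becomes infeasible.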

    • @uncoverage
      @uncoverage 1 month ago

      @marklsimonson i love the idea that language is the outer layer of our brains, and it makes me wonder if that’s why it’s a good UI layer for the computer (as long as it’s implemented well).
      thanks for the great comment!

    • @uncoverage
      @uncoverage 1 month ago

      @bornach do you have an example of a paper or something that talks about how LLMs make human-like errors in arithmetic? i hadn’t heard of that but it sounds fascinating

    • @marklsimonson
      @marklsimonson 1 month ago

      @uncoverage Considering how easily we misunderstand each other through spoken and written language, yeah, could be a problem.

  • @BenjiManTV
    @BenjiManTV 2 months ago +1

    F*+€ Smith??? 😂

  • @samvirtuel7583
    @samvirtuel7583 2 months ago +1

    You are on the wrong track; you want to regress to the days of expert systems.
    LLMs are perfectly capable of reasoning; they simply lack precision in their weights, which requires a lot of resources to fix.

    • @oxygenkiosk
      @oxygenkiosk 1 month ago

      Exactly, and those resources are a) getting cheaper and more efficient and b) being invested in. It's a revolution that is in its infancy; you can't blame companies for seeking to make it early with half-baked products, but the big picture is way more important.

    • @uncoverage
      @uncoverage 1 month ago +1

      the major resource that is not getting cheaper is high-quality data!

    • @bornach
      @bornach 1 month ago

      @uncoverage Hence the pressure on Scale AI and its competitors to race to the bottom, chasing the cheapest data-annotation gig workers who still do a reasonable job of sorting good LLM responses from the bad ones mimicking Reddit shitposts. There's a boom in data-annotation gigs in India, where a microtask can be answered a lot more cheaply than by a domestically based Amazon Mechanical Turker. But with that come growing stories of abuse of the humans working in the training-data gig economy.