GPT-4o: What They Didn't Say!

  • Published: 13 May 2024
  • While yesterday's GPT-4o announcement has been covered in detail in lots of places, I want to not only cover that but also talk about some of the things they didn't say and what the implications are for GPT-5
    openai.com/index/hello-gpt-4o/
    openai.com/index/spring-update/
    🕵️ Interested in building LLM Agents? Fill out the form below
    Building LLM Agents Form: drp.li/dIMes
    👨‍💻Github:
    github.com/samwit/langchain-t... (updated)
    github.com/samwit/llm-tutorials
    ⏱️Time Stamps:
  • Science

Comments • 70

  • @Aidev7876
    @Aidev7876 14 days ago +49

    So what was it that they didn't tell us? This is the only reason I listened...

    • @JG27Korny
      @JG27Korny 14 days ago +13

      clickbait

    • @rikschoonbeek
      @rikschoonbeek 14 days ago +2

      I think from 8:00 you'll hear it

    • @pluto9000
      @pluto9000 14 days ago +5

      rikschoonbeek
      I didn't hear it. But you saved me from watching it all.

    • @Merlinvn82
      @Merlinvn82 13 days ago +2

      GPT-4o is not actually a fine-tune of GPT-4; it's a new model trained on the same GPT-4 datasets.

    • @fellowshipofthethings3236
      @fellowshipofthethings3236 13 days ago +2

      Congratulations on being baited...

  • @winsomehax
    @winsomehax 14 days ago +38

    Why free? I think it's the same reason they removed the login. The more people using it, the more data they get to train on. They couldn't stay ahead of Google in the data game otherwise; Google has gigantic amounts of people's data. This explains why Google has been so stingy with their bots. They don't need more of your data.

    • @ondrazposukie
      @ondrazposukie 14 days ago +8

      I think they just want to be as open as possible to make many people use their AI.

    • @neoglacius
      @neoglacius 14 days ago

      Exactly. Why are Facebook and Google free? Because YOU'RE THE PRODUCT, now including logistical and operational data from all the companies on the planet.

    • @rikschoonbeek
      @rikschoonbeek 14 days ago +5

      Can't say there is a single motive. But data seems extremely valuable, so that's probably a big motive

    • @kamu747
      @kamu747 14 days ago

      That's not the reason. Well, not the main one; there might be something there, but...
      1) It's a competition. Meta changed the pace when they decided to provide their AI for free. Others will need to offer something better for free to stay in the game.
      2) It has always been part of their mission to provide free services. There are altruistic reasons behind the intentions of those involved. OpenAI isn't really a company as you know it; it is a movement. A changed world is their ultimate goal. If they don't do this, the global implications are catastrophic: AI risks creating an irreversibly massive divide between classes, because believe it or not, a lot of people can't afford to pay $20.
      This is why OpenAI started as an NGO, but their mission was too expensive; they needed to placate some investors and monetise a little in order to be sustainable for the time being while compute was expensive.
      Compute just got cheaper with Nvidia's new H200, which allows them to afford to offer services to more people. However, there are certainly more advanced capabilities that paid users will benefit from later on.
      4) As for user data, they no longer need it as much as you think they do.

    • @4l3dx
      @4l3dx 14 days ago +7

      Their actual product is the API; ChatGPT is like the playground

  • @kai_s1985
    @kai_s1985 14 days ago +4

    It makes sense that this model is based on a different transformer (or tokenizer), because they were calling it gpt2 (gpt2-chatbot or something like that).

  • @BiztechMichael
    @BiztechMichael 14 days ago +2

    Re the 1.5 models and the new tokenizer but still GPT-4, I see this as comparable to the Intel “tick-tock” pattern of CPU upgrades - you’ve got a new process node, first you port the old CPU architecture to run on it - that’s the tick - and once that’s proved out, then you get your new CPU architecture running on it, and that’s the tock. Then repeat. This let them split the challenge into two different phases, and gave them something good to release at each phase.

  • @hemanthakrishnabharadwaj6127
    @hemanthakrishnabharadwaj6127 14 days ago +2

    Great content as always, Sam! Excited by how this could be a teaser for GPT-5; totally agree with what you said.

  • @micbab-vg2mu
    @micbab-vg2mu 14 days ago +6

    Great update. I am waiting for the audio version in GPT-4o; so far I use it for coding and image analysis.

  • @lucianocontri239
    @lucianocontri239 14 days ago +1

    Don't forget that GPT depends on deep-learning advancements in the scientific field to deliver something better; it's not like a regular company.

  • @chrisconn5649
    @chrisconn5649 14 days ago +5

    I am not sure they were ready for launch: "Sorry, our systems are experiencing high volume". Shouldn't that have been expected?

  • @MeinDeutschkurs
    @MeinDeutschkurs 14 days ago +1

    GPT-4o was able to refactor complex JavaScript code. I was impressed.

  • @ChrisBrennanNJ
    @ChrisBrennanNJ 9 days ago

    Great reviews. Love the channel! Voice IN! Voice OUT! Human doctors learning (better) bedside manners from machines! (Film @ 11.)

  • @mickelodiansurname9578
    @mickelodiansurname9578 14 days ago +2

    The default setting I saw in their API docs for GPT-4o was 2 FPS... however it can be increased... I'm thinking there's a sweet spot, but I hope it's not 2 FPS! Also, the audio API controls are not integrated yet and you have to use the old 'Whisper' rigmarole of TTS and ASR.
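
    A minimal sketch of that 'Whisper rigmarole' (ASR, then chat, then TTS), assuming the openai Python client v1.x; the file names and the "alloy" voice are illustrative choices, not anything stated in the video:

        # Round-trip audio via separate endpoints while GPT-4o's native
        # audio I/O is not yet exposed in the API (assumes openai>=1.0).
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        # 1) Speech to text with Whisper (ASR)
        with open("question.mp3", "rb") as audio_file:
            transcript = client.audio.transcriptions.create(
                model="whisper-1", file=audio_file
            )

        # 2) Text completion with GPT-4o
        reply = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": transcript.text}],
        )

        # 3) Text back to speech (TTS)
        speech = client.audio.speech.create(
            model="tts-1", voice="alloy",
            input=reply.choices[0].message.content,
        )
        speech.stream_to_file("answer.mp3")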

  • @willjohnston8216
    @willjohnston8216 14 days ago

    Wow, what an interesting time to be alive. I think it's an improvement in many ways, but only around the edges and not the core 'intelligence'. I'm seeing very similar answers to previous versions and other LLMs. Also, I see that it now does more web searching to include in the results, and it is telling me that it can store persistent information from our sessions, which seems like a big enhancement. I don't see the 10x improvement that 3.5 to 4 showed, and I suspect that they are quite a ways from achieving that in a version 5, but I'd love to be proven wrong.

  • @Luxcium
    @Luxcium 14 days ago

    This is very interesting and informative for anyone who has not seen the presentation, and I am also watching just because your voice is so calming and engaging… Thanks for bringing this to the AI community 🎉🎉🎉🎉

  • @SierraWater
    @SierraWater 14 days ago +1

    Been playing with it all last night and this morning... this is a world changer.

  • @jondo7680
    @jondo7680 14 days ago +2

    It's simple: if GPT-3.5 got replaced by this, GPT-4 will most likely be replaced by something better for paid users.

  • @user-me7xe2ux5m
    @user-me7xe2ux5m 12 days ago

    Something that I haven't seen largely discussed yet is the opportunity for __personalized tutoring__ that was demoed at OpenAI's GPT-4o announcement event.
    Imagine a world where every student struggling with a subject like math or physics has a personal tutor at hand to help them grasp a difficult subject. Not solving a homework problem for a student, but guiding them step by step through the solution process, so they can derive the solution on their own with minimal help.
    IMHO this will make the entire (on-site as well as online) tutoring industry to some degree obsolete.

    • @samwitteveenai
      @samwitteveenai  12 days ago

      I agree this is huge. I know there are people working on it, but agree it is going to be one of the biggest areas for all these models.

  • @bennie_pie
    @bennie_pie 10 days ago

    I found it incredibly quick with a simple text completion, but it didn't actually read or do what I asked. It needed reminding to visit the URL I gave it (tool use), which I had to do several times, and it still seemed to prioritise its own out-of-date knowledge over the content it had just fetched. I need to try out all the features fully (limit hit after a few messages), but it came across as a bit too quick to churn out code without reading the initial prompt properly... it felt a bit lazy. Perhaps I just need to learn how to prompt it to get the best from it (as was the case with Claude).

  • @Emerson1
    @Emerson1 13 days ago

    Are you going to any fun events in the Bay Area?

  • @seanmurphy6481
    @seanmurphy6481 14 days ago +2

    To me it would seem OpenAI is using a multi-token prediction method with this new model, but I could be wrong. What do you think?

  • @samvirtuel7583
    @samvirtuel7583 13 days ago

    If the audio and image are integrated into the model and use the same neural network, how did they manage to dissociate them in the version currently available?

    • @samwitteveenai
      @samwitteveenai  12 days ago

      The model will have different output heads, and they can just turn one off, etc.
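
      An illustrative sketch of that idea, a shared backbone with separate per-modality output heads that can be toggled at inference time; this assumes PyTorch and is a generic pattern, not OpenAI's confirmed architecture:

          # Generic multi-head pattern: one backbone, per-modality output heads
          # that can be enabled or disabled independently (illustrative only).
          import torch.nn as nn

          class MultiHeadModel(nn.Module):
              def __init__(self, d_model=512, text_vocab=32000, audio_codes=1024):
                  super().__init__()
                  layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
                  self.backbone = nn.TransformerEncoder(layer, num_layers=4)
                  self.text_head = nn.Linear(d_model, text_vocab)
                  self.audio_head = nn.Linear(d_model, audio_codes)

              def forward(self, x, enabled=("text",)):
                  h = self.backbone(x)                  # shared representation
                  out = {}
                  if "text" in enabled:
                      out["text"] = self.text_head(h)   # text logits
                  if "audio" in enabled:                # e.g. held back until audio output ships
                      out["audio"] = self.audio_head(h)
                  return out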

  • @Decentraliseur
    @Decentraliseur 14 days ago

    They understood the ongoing competition for market share.

  • @maciejzieniewicz4301
    @maciejzieniewicz4301 13 days ago

    I have a feeling GPT-4o was trained using a knowledge-distillation (teacher-student) framework, with 4o being the student and Arrakis or whatever else as the multimodal teacher. 😅 I have no proof of it anyway. Also good to mention the optimized tokenization process.
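
    For readers unfamiliar with the term, a generic knowledge-distillation loss looks like the sketch below (PyTorch); it only illustrates the technique the comment speculates about and says nothing about how GPT-4o was actually trained:

        # Student mimics the teacher's softened output distribution while also
        # fitting the ground-truth labels (standard distillation setup).
        import torch.nn.functional as F

        def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
            soft = F.kl_div(
                F.log_softmax(student_logits / T, dim=-1),
                F.softmax(teacher_logits / T, dim=-1),
                reduction="batchmean",
            ) * (T * T)                                     # soft-target term
            hard = F.cross_entropy(student_logits, labels)  # hard-target term
            return alpha * soft + (1 - alpha) * hard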

  • @ahmad305
    @ahmad305 14 days ago

    I wonder if Dall-E will be available for free users?

  • @darshank8748
    @darshank8748 14 days ago

    Actually, Ultra 1.0 is able to do image in and out. But as usual with Google, we will witness it next year.

  • @carlkim2577
    @carlkim2577 14 days ago

    That's it, I think. It's a midpoint to version 5. Sam talked before about how multimodal would lead to better reasoning. They were running out of text data, so they are clearly shifting focus to video plus audio. That resulted in Sora. Now the audio gets us this.

    • @Anuclano
      @Anuclano 13 days ago

      If they ran out of text data, why does it have no idea what Pushkin wrote?

  • @buffaloraouf3411
    @buffaloraouf3411 14 days ago

    I'm a free user, how can I try it?

  • @Evox402
    @Evox402 14 days ago

    I tested it with the mobile app. It's quite amazing how fast it can respond. But the whole thing with different emotions, sounding sad, happy, excited, did not work at all. The voice was using the same tone and "emotion" every time.
    Did anyone have a different experience with it? Could anyone recreate what they showed in the live demo?

    • @XerazoX
      @XerazoX 14 days ago

      the voice mode isn't updated yet

  • @ps3301
    @ps3301 13 days ago

    The world doesn't have enough chips to make AI cheap. The technology still requires a lot more innovation.

  • @digzrow8745
    @digzrow8745 13 days ago

    The free 4o access is pretty limited, especially for conversations.

  • @nyambe
    @nyambe 14 days ago

    My GPT-4o does not do any of the new things. It is just like GPT-4.

  • @trixorth312
    @trixorth312 14 days ago

    When does it roll out in Australia?

  • @74Gee
    @74Gee 14 days ago

    Can't build much on the free tier: 16 messages per 3 hours.

    •  14 days ago

      Why would you build something on a free tier?

    • @74Gee
      @74Gee 14 days ago

      Well I wouldn't, but at 1:20 that's the suggestion.

  • @CaribouDataScience
    @CaribouDataScience 14 days ago

    So is it only free on the desktop?

    • @markhathaway9456
      @markhathaway9456 14 days ago

      An Apple machine first, and others over a couple of weeks. They said the API is also free, so we'll see some apps for iOS and Android.

  • @user-ff6cs5em7f
    @user-ff6cs5em7f 8 days ago

    One eye, just an Illuminati thing it is.

  • @clray123
    @clray123 14 days ago

    Who are these guys? Free Ilya!

  • @yomajo
    @yomajo 14 days ago +1

    Spoiler: they told everything. Go on to the next video.

  • @J2897Tutorials
    @J2897Tutorials 11 days ago

    One thing they didn't say is that you can only ask 'GPT-4o' about 5 questions before being blocked for the day unless you pay up.

  • @OnigoroshiZero
    @OnigoroshiZero 13 days ago

    I would bet that GPT-4o is 3-5 times smaller than the original GPT-4, if not smaller still.
    There have been so many advances in the field since GPT-4 was released, especially from Meta, which it would be stupid not to take advantage of. And the model being completely free backs this up: if it were similar to the original, going free would completely bury the company financially.
    And I would guess that GPT-5 will be similar in size to GPT-4, but taking advantage of every new known innovation in the field, plus dozens more that OpenAI has most likely made internally, will make it a couple of times better; having true multimodality and better memory will likely make it the first glimpse of AGI by the summer of next year.

  • @user-ix1je3sp4k
    @user-ix1je3sp4k 14 days ago

    The deal announced by Perplexity and SoundHound ($SOUN): the platform is being used by GPT-4o.