Hey ChatGPT, Summarize Google I/O

  • Published: Jun 6, 2024
  • This was a week full of AI events! First, Marques gives a few thoughts on the new iPads since he missed last week and then Andrew and David bring him up to speed with all the weirdness that happened during Google I/O and the OpenAI event. Then we finish it all up with trivia. Enjoy!
    Chapters
    00:00 Intro
    01:17 Marques iPad Thoughts
    16:49 OpenAI GPT-4o
    43:05 Trivia Question
    44:05 Coda.io (Sponsored)
    45:04 Google I/O Part 1
    01:14:06 Trivia Question
    01:14:54 Ad break
    01:14:59 Google I/O Part 2
    01:46:49 Trivia Answers
    01:52:44 Outro
    Links:
    MKBHD iPad Impressions: bit.ly/3WzFFWk
    MacStories iPadOS: bit.ly/3V1G0Qq
    The Keyword: bit.ly/4blfFm5
    OpenAI GPT-4o Announcements: bit.ly/3V3Sabv
    9to5Google I/O 2024 Article: bit.ly/3V2rDLv
    Merch tweet: bit.ly/4bnhNcV
    Shop products mentioned:
    Apple iPad Air: geni.us/SsXTRLt
    Apple iPad Pro M4: geni.us/HXDlXo
    Shop the merch:
    shop.mkbhd.com
    Socials:
    Waveform: / wvfrm
    Waveform: www.threads.net/@waveformpodcast
    Marques: www.threads.net/@mkbhd
    Andrew: www.threads.net/@andrew_manga...
    David Imel: www.threads.net/@davidimel
    Adam: www.threads.net/@parmesanpapi17
    Ellis: / ellisrovin
    TikTok:
    / waveformpodcast
    Join the Discord:
    / discord
    Music by 20syl:
    bit.ly/2S53xlC
    Waveform is part of the Vox Media Podcast Network.
  • Science

Comments • 960

  • @RyanMorey1
    @RyanMorey1 21 день назад +620

    going to predict the number of times "AI" appears in this podcast: 34

  • @MarsOtter
    @MarsOtter 21 день назад +606

    david’s “daaaammmnnn” in the intro needs to be on the soundboard

  • @Wade2003
    @Wade2003 21 день назад +244

    The natural human response to "How tall is the Empire State Building?" should be, "Uhh... I don't know bro, why don't you google it."

  • @melissa.deklerk
    @melissa.deklerk 20 дней назад +72

    Gmail's search function is ABSOLUTELY Google's worst search function. 100% correct, Andrew. Thank you for saying that.

    • @ssnjr1299
      @ssnjr1299 20 дней назад +6

      YouTube search is even worse

    • @digheanurag
      @digheanurag 18 дней назад +1

      God, I put it on Outlook just to get some good search going.

  • @gundmc13
    @gundmc13 20 дней назад +96

    The idea of the "Where did I leave my glasses" was not to suggest that you would actually ask the AI assistant where you left things as a use case - it was a flex to show the assistant could recall a detail that wasn't explicitly discussed from a previous image a minute ago that wasn't directly in its current field of view. It's another example of a huge context window and how it's helpful.
    A 2 million token context window isn't just for writing a really long prompt. It means everything in that 2 million tokens can be retrieved with perfect recall, whether that's a super long conversation dialogue, or if it's a 2 hour video file. Honestly I think people are sleeping on how huge of a deal that can be and Google isn't doing a good job of telling people why they should care about a context window.
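    A rough back-of-the-envelope sketch of that point, in Python; the per-second and per-character token rates below are illustrative guesses, not Google's published figures:

```python
# Back-of-the-envelope context budget. The per-item token rates are illustrative
# assumptions, not official figures.
CONTEXT_WINDOW = 2_000_000        # tokens (the advertised 2M window)
CHARS_PER_TOKEN = 4               # rough average for English text
VIDEO_TOKENS_PER_SECOND = 260     # assumed rate for tokenized video frames + audio

def text_tokens(num_chars: int) -> int:
    return num_chars // CHARS_PER_TOKEN

def video_tokens(seconds: int) -> int:
    return seconds * VIDEO_TOKENS_PER_SECOND

# A long novel (~1.5M characters) plus a 90-minute screen recording:
used = text_tokens(1_500_000) + video_tokens(90 * 60)
print(f"{used:,} of {CONTEXT_WINDOW:,} tokens used; fits: {used <= CONTEXT_WINDOW}")
```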

    • @TreesPlease42
      @TreesPlease42 20 дней назад +7

      How many tokens is your context window? Seriously though, the massive increase in tokens directly addresses the issues with coherent conversations.

    • @AlricoAmona98
      @AlricoAmona98 20 дней назад

      Nah, some people are just dense. You don't need to be a genius or have revolutionary marketing fed to you to understand a larger context window.

    • @T_Time_
      @T_Time_ 19 дней назад

      It's not that impressive. On your own computer you could run a Python script that collects everything seen over a short time frame: run an object-recognition script, and once objects are recognized, build a list of where they are relative to each other. Then, when you ask the AI where your glasses are, it just searches that list. The list would probably be no more than 100 items in this scenario of panning around a room, once you remove duplicates across frames. As a coder who works with vision (OpenCV in particular), this was underwhelming, in the sense that they tried to package it as something from the future. They could have at least stress-tested it; the object wasn't surrounded by clutter or other anomalies that would have shown a performance gain over what you can get on a normal computer. OpenAI is probably going all-in on the product now, but they'll probably ship some shoddy robot next, and GPT-5 is going to take a long time to make.
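      For the curious, a minimal sketch of the kind of script described above: an off-the-shelf detector over video frames, a deduplicated memory of what was seen where, and a lookup to answer "where are my glasses". The model weights file and video path are placeholders, and this is the naive pipeline the commenter proposes, not how GPT-4o or Astra work:

```python
# Naive "where did I leave it" pipeline: detect objects per frame with a generic
# pretrained model, remember where each label was last seen, answer from memory.
import cv2
from ultralytics import YOLO   # generic pretrained detector, not OpenAI's model

model = YOLO("yolov8n.pt")     # placeholder weights file
last_seen = {}                 # object label -> frame index it was last detected in

cap = cv2.VideoCapture("room_scan.mp4")   # placeholder video path
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for result in model(frame, verbose=False):
        for cls_id in result.boxes.cls.tolist():
            last_seen[result.names[int(cls_id)]] = frame_idx
    frame_idx += 1
cap.release()

def where_is(label: str) -> str:
    if label in last_seen:
        return f"'{label}' was last seen around frame {last_seen[label]}"
    return f"never saw a '{label}'"

print(where_is("glasses"))  # note: COCO-pretrained detectors have no 'glasses' class
```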

    • @ApertureLabs
      @ApertureLabs 18 дней назад +5

      ​@@T_Time_ The entire point of Transformers, though, is how generally capable they are compared to other methods. You can't ask a narrow vision model how to cook a meal with the ingredients in front of you, and then ask that same model to sing a lullaby about the ingredients you used 5 minutes ago. Sure, you might be able to code your way to that very specific use case with some rudimentary ML models and a little python code, but it would lack the flexibility and generalization that Transformer models provide.
      Traditional techniques like OpenCV are fundamentally limited in their ability to understand and reason about the content of an image in a flexible, context-aware manner. When a Transformer model "looks" at an image, it doesn't just recognize objects; it builds an understanding of the scene. The self-attention mechanism at the heart of Transformers allows the model to consider the relationships between different parts of the image and encode information about the scene in a highly structured way; it's able to understand what's actually happening in a scene and reason about it. These long context lengths allow these models to reason across video as well, which opens up thousands of use-cases.
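      As a toy illustration of the self-attention mechanism mentioned above, here is single-head scaled dot-product attention in plain NumPy (random weights, no training; just the "every position attends to every other position" computation):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project inputs to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # every position scores every other position
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ V                        # each output mixes information from all positions

rng = np.random.default_rng(0)
seq_len, d = 6, 8                             # e.g. 6 image patches with 8-dim embeddings
X = rng.normal(size=(seq_len, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (6, 8): same sequence, contextualized
```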

    • @T_Time_
      @T_Time_ 18 дней назад

      @@ApertureLabs you are overestimating what the transformer is doing in that scene; I outlined a script that easily replicates what happened in the "find my glasses" demo. It's cool that you have an app that can do this, but it is not impressive or groundbreaking for simple tasks. If I can think up the simple Python code (find objects, create a list, Google search "meals" + list) to use narrow vision models to find a meal based off the ingredients in front of you, that's not impressive, since it is probably searching Google as well. It's not trying to understand the chemistry of the food. Lol

  • @vishnuvardhan.s
    @vishnuvardhan.s 21 день назад +191

    Marques is back!!

    • @none_the_less
      @none_the_less 21 день назад +4

      He never left.

    • @tayt_
      @tayt_ 21 день назад +1

      This is clone.

    • @cmaysautoyt
      @cmaysautoyt 20 дней назад +2

      @@none_the_less you must've missed the last episode...

  • @kaytieanddreambreen4554
    @kaytieanddreambreen4554 21 день назад +339

    Make ChatGPT-4o talk to Gemini. That would be the most awkward meeting ever

    • @dplj4428
      @dplj4428 6 дней назад

      Yep. OpenAI is in talks with Apple, and Apple is in talks with Google.
      7:11 the camera is in the way so they had to move the charging magnets further away from that 2.25mm center strip.

  • @carrieonaccessibility
    @carrieonaccessibility 21 день назад +137

    As a blind person, having these models have vision is super important and could be really, really helpful. It already is; just look up Be My Eyes. I can't wait until it can help in real time with visual things. And actually be right about things. LOL

    • @thajunglelibrary
      @thajunglelibrary 20 дней назад +23

      thank you!! it’s frustrating to hear them say nobody will use these features

    • @bullymaguire8380
      @bullymaguire8380 20 дней назад +10

      @@thajunglelibrary For people so deep in tech, their views on these events are surprisingly shallow.

    • @chevy_21
      @chevy_21 20 дней назад +2

      @@bullymaguire8380 Lol they're not deep into tech... They're just the crew. Only trust things coming from Marques and David. 😂
      I love them all tho but this is just the truth 😂😂

    • @FirestormX9
      @FirestormX9 19 дней назад +3

      @@bullymaguire8380 it shouldn't be surprising at all. They're way too deep in a life full of privilege to even fathom anything beyond that. Especially Ellis and Adam. Just straight up facts, no hate or something. Shouldn't even need to mention that but interpretation can be hard in online comments.

    • @CyanAnn
      @CyanAnn 17 дней назад

      @@bullymaguire8380 unfortunately, accessibility is not at the forefront of a lot of tech people's minds. There have been efforts made obviously, but it's often either half-assed or abandoned. Here's to a better future though!

  • @melissa.deklerk
    @melissa.deklerk 21 день назад +25

    Marques tapping the mic to trigger the lights was low-key the funniest moment of the episode.

    • @abdullahemad9457
      @abdullahemad9457 18 дней назад +3

      the scream when they were confused about what the name of the google chat thing was in the past

    • @melissa.deklerk
      @melissa.deklerk 18 дней назад

      @@abdullahemad9457 true. The exasperation of how many things Google has made and killed. Can relate.
      I used to use gChat all the time

  • @rosetheblackcat
    @rosetheblackcat 21 день назад +46

    I appreciate the longer episodes of the pod. This is what podcasts are for! Getting into the nitty gritty of the products and chopping it up, letting your personalities show.

    • @hamza-chaudhry
      @hamza-chaudhry 14 дней назад

      Yet they still did a terrible job at covering the 4o

  • @MrKevinPitt
    @MrKevinPitt 21 день назад +38

    Love this show, listen/watch every week! But sometimes I think they are so immersed in the field of tech they kinda miss the wonder of some of these innovations. I watched the OpenAI event and was absolutely blown away. Yes, it was silly at times and the use-case demos were a bit contrived, but where we are in contrast to where we were 15 years ago just absolutely amazes me. Wish the fellas took a step back sometimes and just appreciated that for a nanosecond. Love ya! We truly live in an age of wonders ;-)

    • @Aryan214T
      @Aryan214T 21 день назад +11

      Yeah they really downplayed it

    • @tiagomaqz
      @tiagomaqz 20 дней назад +6

      This is exactly how I feel. It's like they see so much tech that nothing impresses them anymore, and they forget there's a world outside theirs where tech such as ChatGPT-4o is absolutely groundbreaking. I use it very often for conversation and it was already impressive before the "o" update.

    • @KishoreKV
      @KishoreKV 20 дней назад +7

      Yeah.. This episode was painful to hear. Sometimes I feel the tech reviewers feel the need to criticize every new device/app/service/feature.. Of course it’s not as complete as possible!

    • @AlricoAmona98
      @AlricoAmona98 20 дней назад +1

      @@KishoreKV I'm sorry when it comes to AI it's clear they have no idea what they are talking about

    • @saxon6621
      @saxon6621 20 дней назад +3

      Why contrast to 15 years ago? This is an update to an existing product. Of course it can look impressive in a tech demo that doesn’t mean that much in itself.
      They’re being critical and asking important questions which is good.

  • @jonathanvu769
    @jonathanvu769 20 дней назад +8

    David is really writing off the extended context window 😂 this is a huge step toward the potential for a personal assistant who can know everything about you. It’s also a big divergence from OpenAI, as Google is moving toward an infinite context window whereas OpenAI seems to be maximizing vector stores and RAG.
    A specific use case for my industry - the eventual possibility of having individualized GPTs trained on patient data, meaning that physicians can have a model that is queryable via natural language that can give clinical summaries of a patient history.
    I agree a lot of new AI features are overhyped but I don’t think we should write off the underlying advancements in these models - very exciting stuff on the horizon!
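    For context, a tiny sketch of the retrieval step in a RAG setup like the one described: embed the documents, embed the query, return the closest matches to paste into the prompt. The embed() function here is a random placeholder standing in for a real embedding model, so it only demonstrates the mechanics, not actual semantic search, and the patient notes are invented examples:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: a real RAG setup would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

notes = [
    "2019: appendectomy, no complications",
    "2022: diagnosed with type 2 diabetes, started metformin",
    "2024: follow-up, HbA1c improved",
]
index = [(n, embed(n)) for n in notes]          # the "vector store"

def retrieve(query: str, k: int = 2):
    q = embed(query)
    scored = sorted(index, key=lambda item: -np.dot(q, item[1]) /
                    (np.linalg.norm(q) * np.linalg.norm(item[1])))
    return [text for text, _ in scored[:k]]     # top-k notes to paste into the prompt

print(retrieve("diabetes history"))
```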

  • @PrajwalDSouzaCrazyTalks
    @PrajwalDSouzaCrazyTalks 20 дней назад +42

    The podcast didn't do GPT-4o justice. Singing. Text to music. Music to music. Perfect text in image generation... soundscape generation...

    • @harbirsingh7266
      @harbirsingh7266 18 дней назад +9

      Don't worry. They'll all be EXTREMELY impressed by the exact same model the moment Apple gets ChatGPT on iPhones.

    • @Jaden378
      @Jaden378 16 дней назад

      @@harbirsingh7266 literally watch that be the case lol...

    • @rzuku97TV
      @rzuku97TV 16 дней назад

      @@harbirsingh7266 now it's weird and nonsense, later it will be amazing

  • @EkoFairy
    @EkoFairy 20 дней назад +8

    Are we getting mad at AI generating sympathy messages when there’s a greeting card industry that does the same thing 🤔

  • @anonimous__user
    @anonimous__user 21 день назад +6

    I definitely agree with Marques' take on the very simple example that OpenAI used to showcase their new model. I can say that the moment I saw their demo of how GPT-4o can read math problems on a piece of paper, and especially their YouTube video showing how it can even understand things like geometrical objects on a screen, I immediately thought "Oh! Maybe it can help me with my work!". And sure enough, I tested it and I can say that it's very, very good (much better than before) as an assistant helping you figure out what kind of statistical analysis you can run on a dataset, guiding you through all steps of the process from testing assumptions, to suggesting alternative steps such as transformations or different types of analyses, to checking graphs of residuals distribution and so on. Up until the very end of the process. It can even guide you on how to perform each step in specific software (as long as it's popular enough, for example SPSS). It really is great! I cannot wait for their desktop app to be released for Windows, because it would make the experience even smoother!

  • @azaelandy04
    @azaelandy04 21 день назад +40

    Maybe I’m out of the loop but GPT4o is the most impressive tech I’ve seen in a while.

    • @tiagomaqz
      @tiagomaqz 20 дней назад +14

      I know right?! What are these guys talking about?! It’s like they see so much tech that nothing is impressive to them anymore. Like take a break guys.

    • @juanstevens873
      @juanstevens873 20 дней назад +6

      Sometimes I don't know if they're just trying to be funny. I see a lot of tech and this was still impressive. It just seems like they talk so much and maybe don't think things out much before having a podcast. Simple math problems? Do they understand how much money and effort this will save parents? I dunno. Maybe I'm just trying to find a podcast that gives serious thought and commentary. They didn't talk about the really impressive parts.

    • @AlricoAmona98
      @AlricoAmona98 20 дней назад +15

      You aren't out of the loop, they are just out of their depth when it comes to AI and have no idea what they are talking about.

    • @jcolonna12
      @jcolonna12 20 дней назад +8

      No they completely missed the mark in this episode

    • @user-pu1kw2pq8i
      @user-pu1kw2pq8i 19 дней назад +1

      @@AlricoAmona98 it’s because AI doesn’t have practical applications for most people yet including them

  • @josephhodge5387
    @josephhodge5387 20 дней назад +9

    Just wanted to say, from a person who is blind, the visual aspects of what OpenAI is doing are pretty exciting for me. I know that you guys would've glossed over the facial expression thing, but imagine going through life not being able to see people's facial expressions. There are a lot of things that I miss not having nonverbal communication. For example, most times conversations are started up by talking with your eyes.

    • @AlricoAmona98
      @AlricoAmona98 20 дней назад +2

      Love this example. It's a shame they downplayed the technology. This is my biggest gripe with tech reviewers. They never consider people with disabilities nor do they shed light on the features that bridge better accessibility.

    • @kavishbansal
      @kavishbansal 18 дней назад +3

      This channel lacks the ability to recognise the problems physically challenged people face in their day to day life. Which is why having a diverse range of people work with you is a great way to understand the perspectives that people would never think about.

  • @menithings
    @menithings 20 дней назад +14

    The new iPad Pro is thinner (really, lighter) so its center of gravity can be lower when docked to the Magic Keyboard. This weight shift means that the iPad can be suspended further back on the keyboard (check out the hinge's new 90-degree angle), and therefore frees up more space on the case for a larger trackpad and a function key row. The Magic Keyboard is an almost ubiquitous accessory for the Pro, so the iPad's lighter weight now resolves the Magic Keyboard's two major flaws - making it a more attractive upsell.

    • @dplj4428
      @dplj4428 6 дней назад

      But the keyboard combo leans top-heavy, so it's awkwardly not going to work well on your lap.

  • @sachoslks
    @sachoslks 21 день назад +34

    I think you are underselling GPT-4o. The fact it does all the "understanding" in audio-to-audio form is such a big leap vs. the previous way it worked. You lose so much detail when doing audio to text -> text to text -> text to audio.
    I think of the new model as kind of like the ChatGPT moment but for audio instead of text. The thing can whisper, laugh, sing, "breathe", snicker, giggle, talk fast or slow, be sarcastic, and do different voices, all while significantly reducing latency.
    Not to mention all the multimodal examples they showed in their blog post, like the SOTA text generation in an image, text-to-3D capability, and sound FX generation (although I'm not sure about that example yet).
    All of this is happening in a single neural network; think how amazing that is. Plus it is much cheaper and faster than regular GPT-4 to begin with. It seems they managed to get GPT-4-level intelligence in a much smaller model that runs cheaper and faster while unlocking new modal capabilities, so I think it is fair to expect that when they scale it up to a much bigger model we could see some big improvements, although that may negatively affect the speed/response time of conversations.
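    To make that contrast concrete, a sketch of the old cascaded pipeline versus a natively multimodal call. All four helper functions are hypothetical stubs, not a real API; the point is where information gets lost:

```python
# All four helpers below are hypothetical placeholders standing in for real models;
# what matters is the information each stage can see, not the stub implementations.

def transcribe(audio: bytes) -> str:        # ASR stub
    return "hello there"                    # tone, laughter, background sounds already gone

def chat(text: str) -> str:                 # text-only LLM stub
    return f"You said: {text}"

def synthesize(text: str) -> bytes:         # TTS stub
    return text.encode()                    # flat voice; can't react to how the user sounded

def audio_chat(audio: bytes) -> bytes:      # natively multimodal model stub
    return b"reply-audio"                   # in reality: audio tokens in, audio tokens out

def cascaded_reply(audio_in: bytes) -> bytes:
    """Old pipeline: detail is lost at every text bottleneck."""
    return synthesize(chat(transcribe(audio_in)))

def native_reply(audio_in: bytes) -> bytes:
    """GPT-4o-style: one model, no text bottleneck, so prosody and emotion can survive."""
    return audio_chat(audio_in)

print(cascaded_reply(b"fake-audio"), native_reply(b"fake-audio"))
```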

    • @AlricoAmona98
      @AlricoAmona98 20 дней назад +9

      Yeah they did a terrible job with this podcast episode

    • @harbirsingh7266
      @harbirsingh7266 18 дней назад +7

      They have no idea how LLMs actually work, just like any other average joe. Only those who know how conventional RNNs worked and how the invention of the Transformer fundamentally changed the AI models know how impressive the latest advances are.

    • @heinzerbrew
      @heinzerbrew 16 дней назад +1

      @@harbirsingh7266 Can't expect much from people that don't know how small/big a millimeter is when they were taught in school and it is on the vast majority of our rulers and measuring devices.

    • @BaroloBartolo
      @BaroloBartolo 15 дней назад

      It’s the coolest thing I’ve ever used in my entire life. Becomes a mentor when I’m randomly wondering specifics of technical topics. The benefit of someone speaking to you, especially if you’re more of an auditory learner, is spectacular

  • @fooey88
    @fooey88 21 день назад +14

    Crazy that Google no longer has pages. They changed the search results to endless scrolling.

    • @josh9418
      @josh9418 20 дней назад +6

      Exactly, they've done this for a while now. I was surprised when they said they go to the next page like that's still how it is

  • @BonJoviBeatlesLedZep
    @BonJoviBeatlesLedZep 21 день назад +33

    Ellis says "what a horrendous reality where you need an AI to write your condolence letter" but Marques made a great point that that's one of the primary functions of regular assistants. Just writing those sorts of letters for you. Very morbid to think about

    • @TreesPlease42
      @TreesPlease42 20 дней назад +1

      Like MLMs sending you fancy hand written letters. It's an appeal to authority, a gift, a personal touch for their target audience that used to receive letters but no longer do.

    • @OperationDarkside
      @OperationDarkside 20 дней назад

      Life writes the best dramas. We just automate it.

  • @markmuller7962
    @markmuller7962 20 дней назад +20

    They didn't "had" to interrupt 4o... They did interrupt a lot because that's a new feature

    • @FlowingLifeAlchemist
      @FlowingLifeAlchemist 19 дней назад +5

      I feel like they haven't used the voice option that's currently available by OpenAI, so they didn't get exactly why a new update where you can interrupt is a feature and not a flaw.

  • @acelovesit
    @acelovesit 21 день назад +42

    I think this version of ChatGPT-4 is kinda revolutionary.
    It can work as a personal tutor to assist with homework or troublesome topics you don't really understand very well. In my experience, kids don't like to put their hand up for fear of being called dumb. Also, parents don't know much about their kids' school topics, so to sit there and learn with your kid would be a game changer.
    Don't sleep on this. I do think this is the real start of practical uses, and guess what? No extra hardware to buy.

    • @TreesPlease42
      @TreesPlease42 20 дней назад +1

      Welcome to the Diamond Age

    • @harbirsingh7266
      @harbirsingh7266 18 дней назад +3

      I can't believe these guys are more impressed by a stylus than the next step towards AGI

  • @dotintheuniverse4637
    @dotintheuniverse4637 19 дней назад +10

    I think what that dude said about AI ending up not being able to collect valuable information about certain issues or problems from the Internet is actually a valid concern and was pretty much glossed over. Without online discussions about new topics across different media and forums, can we really trust that the data it collects about said new topics is correct? What would it end up basing its conclusions on? Pretty interesting stuff.

  • @BecauseBinge
    @BecauseBinge 20 дней назад +5

    There are papers out on LLMs having an internal monologue, giving AI the ability to think before it speaks. It literally revises what it is about to say before it says it.
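    A rough sketch of that "think before it speaks" loop; the llm() function is a hypothetical stand-in for any chat-completion call, not a specific paper's method:

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion call."""
    return f"[model output for: {prompt[:40]}...]"

def answer_with_monologue(question: str, revisions: int = 2) -> str:
    draft = llm(f"Think step by step and draft an answer to: {question}")
    for _ in range(revisions):
        # The model critiques and rewrites its own draft before anything is shown to the user.
        draft = llm(f"Here is a draft answer:\n{draft}\n"
                    f"Find mistakes or gaps and produce an improved answer to: {question}")
    return draft

print(answer_with_monologue("How tall is the Empire State Building?"))
```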

  • @awauser
    @awauser 20 дней назад +11

    1:16:46 "Imagen" and "Veo" are both Spanish words. They mean "Image" and "View/See" respectively

    • @dplj4428
      @dplj4428 День назад

      Mirar, ver?

    • @awauser
      @awauser День назад

      @@dplj4428 Yes

  • @TPGReddo
    @TPGReddo 21 день назад +25

    The unintended references to Her with the AI and ghostwriters convo is hilarious.

    • @enisity
      @enisity 21 день назад +2

      I thought the same 😂

  • @divyz1010
    @divyz1010 21 день назад +9

    We need a podcast once Marques is brought up to speed... I'd really be curious about his thoughts. The intonations/emotions GPT-4o displays and infers are really a leap (technically) ahead of anything we have right now, and they deserve a better review than "it's generic and of little use".

    • @AlricoAmona98
      @AlricoAmona98 20 дней назад +1

      Agreed, I couldn't even finish the podcast.

  • @BrightPage174
    @BrightPage174 21 день назад +8

    Ngl the wow moment at I/O for me was mostly the ai overview stuff being able to take the really long and weirdly specific searches that my mom always does and actually give her a proper response. Huge for the people who don't realize search isn't a person with contextual knowledge of your situation
    56:35 Exactly this. Being able to ask the computer questions like you would a regular person instead of keyworded queries to me is the real maturation point. People grow up learning how to talk to humans, not search engines

  • @sameerasw
    @sameerasw 21 день назад +22

    Can confirm: at work, Google Chat is one of the most-used tools for me. Especially since we use the whole Google Workspace, the chat's integration with Meet and such is awesome. It's very helpful in projects/teams for discussion, and also sometimes as a less formal alternative to an email. But I hate the fact that it's in the Gmail app as well as in its own app. I prefer the separate app so it's isolated from my email browsing.

    • @AgentNix
      @AgentNix 21 день назад +4

      You can disable the chat in Gmail so that you don't get double notifications. It's in the settings.

    • @sameerasw
      @sameerasw 21 день назад +1

      @@AgentNix yeap... But I prefer having it on the same app for school Gmail and separate for work but I guess Google doesn't want us to do so...

    • @saliljnr
      @saliljnr 20 дней назад +1

      Came to comment section to say exactly this. I run a startup company and we use Google Workspace and especially Chat and it's awesome! It does just enough. No clutter, no excesses. Love it.

    • @sameerasw
      @sameerasw 20 дней назад

      @@saliljnr yeap... It's like Skype in Microsoft work ecosystem.... (We use Skype too ☠️)

  • @ole4983
    @ole4983 21 день назад +16

    I kinda like the name ChatGPT 4o, because there are 4 modalities being combined (text, audio, photo, video).
    So '4 omni' kinda fits.

    • @IreshSenadeera
      @IreshSenadeera 11 дней назад

      Not technically true, it's still only 3 modalities. There's no fundamental difference between the way video and images are processed by the LLM; it's processing multiple images to form a video.

  • @sanjaygoopta
    @sanjaygoopta 21 день назад +37

    The conversation around kids not learning how to navigate search results feels reminiscent of how adults felt about kids not learning how to use the Dewey Decimal System to find the books they're looking for at a library.

    • @jzzsxm
      @jzzsxm 20 дней назад +6

      Except they pivoted to search results. Now the pivot is to an answer from your AI uncle Larry

    • @dahstroyer
      @dahstroyer 20 дней назад +13

      The concern is not being able to have the skill of fact checking information

    • @sanjaygoopta
      @sanjaygoopta 19 дней назад +1

      @@dahstroyer I think the concern is valid, but even with search you have the "Don't believe everything you read on the internet" issue. I feel like the AI fact checking is something that can for sure be solved to be at least as reliable as the average person's ability with search. I don't think it's quite ready and their critiques are valid. I just think it's interesting to see if these concerns are going to be seen in the same light as the book vs internet argument years ago

  • @macro_concepts
    @macro_concepts 18 дней назад +4

    I don't think OpenAI did a good enough job of conveying how important and impressive it is to have a natively multimodal model that works this well. 4o is a very significant accomplishment. It's far ahead of GPT-4 in capabilities for half the price, and its design is far more scalable. Their event marked a major step forward for the field.

  • @Daniean
    @Daniean 21 день назад +65

    I asked GPT-4o "Will a turkey really drown if it looks up during the rain?" and in the answer it used this podcast as one of its sources. That's really meta.

  • @DolisterEric
    @DolisterEric 19 дней назад +4

    This feels like our parents complaining how we don't know how to read physical maps but for us, it's complaining to our kids how they don't know how to sift through Google links lol

  • @sshkeys
    @sshkeys 21 день назад +21

    BANGER TITLE

  • @godminnette2
    @godminnette2 21 день назад +10

    Adam, there are Discord servers for what you want with books; fantasy books in particular. Typically there will be a channel where you can talk about a book, and you just state the chapter then spoiler tag the rest of the message, and people can respond to you and do the same; maybe even making a thread for discussing spoilers only up to that chapter.

  • @jakebj
    @jakebj 21 день назад +184

    Not a great description of 4o

    • @nicolasnino9781
      @nicolasnino9781 21 день назад +43

      agreed, they downplayed it

    • @MobikSaysStuff
      @MobikSaysStuff 21 день назад +56

      @@nicolasnino9781 Yeah, they missed a lot of stuff, usually I love how they go over stuff, I guess it's because Marques didn't watch the stream, he usually has more insightful things to say

    • @RajithAki
      @RajithAki 21 день назад +7

      True..

    • @Nivolon
      @Nivolon 20 дней назад +30

      Yeah, they didn't talk about the vocal flexibility it has (whispering, robot voice, dramatic voice, laughter, singing, etc.). They said it has filler lines while it processes information, which is not true at all (that's just the effect of system prompts they added to make it extra friendly/flirty). They didn't talk about the image generation capabilities it has, and how it's miles ahead of any other image model when it comes to consistency (for characters/objects). It's so good that you can generate multiple images of a single object and then make a 3D object out of it. It also has the best text output quality of any image model, and it can edit existing images with a text prompt and keep the consistency in the image.

    • @WigganNuG
      @WigganNuG 20 дней назад +19

      @@Nivolon Yeah, the "it uses filler, that's why they can claim it's faster" take is such lazy research, or just plain not paying attention. It's obviously WAY faster, as evidenced by the TEXT OUTPUT FLYING, like, 10x faster than 4.0 Turbo. Insane. Plus all the other stuff you said :)

  • @davodernstberger7895
    @davodernstberger7895 21 день назад

    With Marques back, the studio looks really perfectly set. Welcome back!

  • @EmmaMoshood-qb8tt
    @EmmaMoshood-qb8tt 19 дней назад +1

    Each time I watch your videos, especially the main channel (MKBHD), The Studio, and Waveform, I wish I could subscribe 200 times.

  • @simonvutov7575
    @simonvutov7575 19 дней назад +4

    Andrew, tokens are a way of converting strings (sentences and words) into lists of numbers. These numbers are fed into the transformer. Tokenization is important in converting any medium into a list of numbers, because that is what transformers understand. Similarly, audio files, images, and other forms of information are tokenized to be lists of numbers for the transformer.
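    As a concrete example of text tokenization, using the open-source tiktoken library (the cl100k_base encoding is the one used by GPT-4-era models; exact token IDs vary by model):

```python
import tiktoken  # pip install tiktoken

text = "Tokens are just numbers the transformer can read."
enc = tiktoken.get_encoding("cl100k_base")   # BPE vocabulary used by GPT-4-era models
ids = enc.encode(text)
print(ids)                                   # a list of integers the model actually consumes
print(enc.decode(ids))                       # round-trips back to the original string
print(f"{len(ids)} tokens for {len(text)} characters")
```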

  • @Kromface
    @Kromface 21 день назад +4

    Been waiting for this since the GPT-4o reveal

    • @WigganNuG
      @WigganNuG 20 дней назад +4

      yea and they FUCKED IT UP BAD...

  • @WPPatriot
    @WPPatriot 21 день назад +1

    I like it when the podcast is this long. I was honestly excited when I saw how long it was. At the same time, I can see how clocked out Marques was at the end so I won't be salty that you're not going to make it a regular thing 😂

  • @lightpohl
    @lightpohl 21 день назад

    It's ridiculous how much I look forward to trivia every week!

  • @julesm1434
    @julesm1434 21 день назад +4

    Waveform time!! The cherry on top of my Friday 🎉

  • @CreaminFreeman
    @CreaminFreeman 21 день назад +4

    The spark in Andrew's eye when Marques said he was watching Drive to Survive... I felt like I was right there with you in that moment!
    This is exactly me trying to get my friends into F1, so we'll have more nerdy things to talk about, haha!

    • @gabmano4877
      @gabmano4877 21 день назад +1

      DTS is kinda one of the worst things that happened to F1.

    • @CreaminFreeman
      @CreaminFreeman 21 день назад

      @@gabmano4877 I respectfully disagree. It's got a lot of faults but it's definitely brought in more fans, that's for sure. At least, with this most recent season, they've gotten back to more behind the scenes stuff instead of race highlight feeling episodes.
      They need to cut out the made up storylines though. There's so much good real drama!

    • @gabmano4877
      @gabmano4877 21 день назад +1

      @@CreaminFreeman then we agree to disagree

    • @CreaminFreeman
      @CreaminFreeman 21 день назад

      @@gabmano4877 I can agree with that, friend

  • @GustavKampp
    @GustavKampp 21 день назад +1

    Oh the confusion about the Mandelbrot question 😂😂😂😂
    Thank you for including that at the end!

  • @nadia6579
    @nadia6579 20 дней назад +1

    Honestly loved the length of this episode, didn’t find it boring at all!

  • @strafanich
    @strafanich 21 день назад +75

    Google is slowly but surely turning into Hooli from Silicon Valley.

    • @sasmit.9846
      @sasmit.9846 21 день назад +8

      Could be, but OpenAI != Pied Piper.
      Altman does not appear to be nearly as principled.

    • @vigneshraghunathan1537
      @vigneshraghunathan1537 21 день назад +9

      Google always was Hooli lol.

  • @LightspeedLad
    @LightspeedLad 21 день назад +61

    GPT-4o showing groundbreaking technology:
    Andrew & David: “It’s not as good as it should be 😡”

    • @RevelsCat
      @RevelsCat 21 день назад +7

      Me "that woman`s voice is super annoying"

    • @PrajwalDSouzaCrazyTalks
      @PrajwalDSouzaCrazyTalks 19 дней назад

      Yeah. It's the most powerful model that we have right now. But why was it called gpt2-chatbot?

  • @agut5587
    @agut5587 9 дней назад

    This is my first time visiting your podcast and I really enjoyed listening to you all.

  • @bayandamsweli2005
    @bayandamsweli2005 18 дней назад +1

    The book club idea is on point. I've had the same desire for years now. I want to talk about the book but with someone who's experiencing the book at the same time and possibly same pace as I am.

  • @eksperiment6269
    @eksperiment6269 21 день назад +5

    We need a The Studio video with Marques and Andrew watching Her and discussing it and AI :D

  • @aimilist
    @aimilist 20 дней назад +2

    I'd love to see more long podcasts... I think it's the perfect format for these kinds of discussions!

  • @ColonelGrande
    @ColonelGrande 20 дней назад

    Been waiting for this one

  • @MilesRCH
    @MilesRCH 21 день назад +8

    As an artist, I can say the Apple Pencil Pro 'squeeze' and 'barrel-roll' features (especially squeeze) are attractive, as well as the nano, matte, textured screen thing.

    • @tiagomaqz
      @tiagomaqz 20 дней назад +3

      I know right?! They talk about it from their pov but forget people like me and you actually need/want those features.

    • @tanmayjaiswal5935
      @tanmayjaiswal5935 20 дней назад +1

      The shitty part was that those were apple pencil features but you need to buy a new iPad to use them.

    • @networkrage
      @networkrage 17 дней назад

      Don't get the nano-texture screen: it worsens the contrast and makes it look way dimmer, and the nano-texture can get ruined so easily. If you use the wrong microfiber cloth, it's over.

  • @ThoughtfulAl
    @ThoughtfulAl 21 день назад +3

    It's so fun playing with 4o

  • @CharlieBasta
    @CharlieBasta 21 день назад

    Very excited to watch this one.

  • @lilliegreenlaw
    @lilliegreenlaw 21 день назад +2

    A feature I've wanted for a long time is being able to see the weather along a road trip. I'd love to be able to plan my route, or see what the best time to leave is, based on the weather forecast (as unpredictable as that is).

  • @mattjgraham
    @mattjgraham 21 день назад +29

    It's the weekend starter in the UK and it is time for the great Waveform! :)

    • @none_the_less
      @none_the_less 21 день назад

      Matt!!!

    • @dbkarman
      @dbkarman 21 день назад +1

      Same! Just finished work and popped this on. Makes the tram ride so much faster

    • @none_the_less
      @none_the_less 21 день назад

      @@dbkarman I saw you in the tram.

  • @ayoubsalhi8191
    @ayoubsalhi8191 21 день назад +3

    we missed you marques ❤

  • @mohammedmujammilshaikh4546
    @mohammedmujammilshaikh4546 21 день назад +2

    That Gems feature is a system prompt tailored to get a specific response, a common prompt-engineering technique. There are many different styles of prompts, and one of the best is the 'COSTAR' method, which recently won a competition for eliciting the best responses. Other styles include 'TAG' and 'RTF'.
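    For anyone curious, a sketch of what a COSTAR-style system prompt looks like once assembled; the field wording is just an example, not Google's Gems template:

```python
def costar_prompt(context, objective, style, tone, audience, response_format):
    """Assemble a system prompt from the COSTAR fields mentioned above."""
    return (
        f"# CONTEXT\n{context}\n"
        f"# OBJECTIVE\n{objective}\n"
        f"# STYLE\n{style}\n"
        f"# TONE\n{tone}\n"
        f"# AUDIENCE\n{audience}\n"
        f"# RESPONSE FORMAT\n{response_format}\n"
    )

print(costar_prompt(
    context="You are a running coach inside a fitness app.",
    objective="Build a 4-week 5K training plan for the user.",
    style="Concise and practical.",
    tone="Encouraging.",
    audience="A beginner runner.",
    response_format="A week-by-week bulleted plan.",
))
```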

  • @boabmatic
    @boabmatic 21 день назад +1

    Stems are game-changing for DJing. Software like Serato DJ Pro has had it for a while, which allows you to live-remix tracks by using the separate stems to instantly replace/blend the individual track elements during the mix.
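    A minimal sketch of doing that kind of split yourself with the open-source Spleeter library; file paths are placeholders, and Serato/Logic presumably use their own models:

```python
# Minimal stem-splitting sketch with the open-source Spleeter library
# (pip install spleeter). File paths are placeholders.
from spleeter.separator import Separator

separator = Separator("spleeter:4stems")           # vocals / drums / bass / other
separator.separate_to_file("track.mp3", "stems/")  # writes stems/track/{vocals,drums,bass,other}.wav
```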

  • @CreaminFreeman
    @CreaminFreeman 21 день назад +8

    Let. Us. Go!
    Happy Friday my dudes.

    • @nuvjoti
      @nuvjoti 21 день назад

      Lettuce* Go!

  • @kindofanmol
    @kindofanmol 20 дней назад +13

    Crazy how these "tech people" downplayed the OpenAI event. It left me and everyone else open-mouthed multiple times during the presentation.

  • @mommyofceos
    @mommyofceos 21 день назад

    Glad to see Marques back looking well rested. Also, his frisbee team sounds like a group of superheroes! Their backstories are so good lol

    • @cooliipie
      @cooliipie 19 дней назад

      He's so out of loop

  • @RYN988
    @RYN988 21 день назад

    This is becoming my favorite tech podcast.

  • @anillgupta
    @anillgupta 21 день назад +4

    It's so good to see that Andrew is worried about critical issues in the tech world, i.e. camera cutout symmetry in Apple products. It's a good example of when you haven't done any homework but you have to shoot the podcast anyway. Don't mind me, dude... just pulling your leg.

  • @nikkoXmercado
    @nikkoXmercado 21 день назад +19

    How are these guys not MINDBLOWN by GPT-4o? I'm upset 😂
    1. There are no fillers to make the illusion of faster responses. In the stream, 4o could literally respond with a simple yet very human answer in 300 ms, which, per their website, is a bit faster than human response time.
    2. This is a HUGE turning point in history. This is already an AGI. Have you heard Scarlett Johansson in Her? The humanness & naturalness of not only the voice, but the tone, empathy, and range of voice the AI is capable of. This is nothing like the previous GPT voice call. The reason this GPT is super fast now is that it doesn't transcribe to text anymore in order to understand you. It actually understands the AUDIO. It hears dog barks, people laughing.
    3. It can tell between different people talking and talk to each one separately, even addressing them by their name. It can sing; it can talk however you want it to. It's almost as if the voice were malleable, where it isn't just a text-to-speech bot but rather an actual voice that is dynamic and ridiculously impressive. Don't tell me you haven't seen how it reacted to that puppy.
    4. You can literally video call with it, where the app feeds video frame by frame to the AI, which means it can basically see you in real time. It can even help a blind person (watch that clip) in real time. Everything is just so seamless.
    Marques should've watched the event; I'm sure he would've been blown away. I'm sure he would've seen what the other guys here didn't.

    • @craxy890
      @craxy890 21 день назад +5

      Probably because it's just another tech demo with big promises, and possible best case scenario scripting. Until they get hands on, they (and many of the rest of us that have been around tech announcements a while) will save the excitement until the proof is in hand and is the final reviewable / generally available version.

    • @CharlieQuartz
      @CharlieQuartz 21 день назад +4

      Disagree, we're not even close to AGI, even if the current model might be able to pass the Turing test with 10% of people.

    • @shivashankar28
      @shivashankar28 21 день назад

      Disagree dude, the original videos seemed scripted

    • @daledavies_me
      @daledavies_me 21 день назад

      Google demoed Duplex a bazillion years ago and it still isn't a thing. I think they're pretty used to impressive demos that don't go anywhere at this point that it's hard to stay hyped.

    • @oli.2844
      @oli.2844 20 дней назад +3

      Because what am I going to do with it? Idk why people are so pumped about what is basically another search engine.
      None of this is “AI”

  • @DemetriusWren
    @DemetriusWren 21 день назад +1

    The Logic plugin is also great for editing videos when you want to drop the vocals so it doesn't compete with the dialogue but then bring the vocals back on the b-roll sections, etc.

  • @Mummyfier87
    @Mummyfier87 21 день назад +2

    I saw a video talking about AI becoming a multi-agent model, which means that you have multiple AIs fact-checking before the result is returned. It may be a bit slower initially but will get faster eventually.
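    A rough sketch of that multi-agent idea: one model drafts, several checker agents vote, and the answer only goes out once they agree. The llm() function is a hypothetical stand-in for any chat-completion call:

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion call."""
    return "OK"   # placeholder so the sketch runs end to end

def multi_agent_answer(question: str, checkers: int = 3, max_rounds: int = 2) -> str:
    answer = llm(f"Answer the question: {question}")
    for _ in range(max_rounds):
        # Each checker independently verifies the draft; "OK" means it found no problems.
        verdicts = [llm(f"Fact-check this answer to '{question}':\n{answer}\n"
                        "Reply OK if correct, otherwise list the errors.")
                    for _ in range(checkers)]
        if all(v.strip().startswith("OK") for v in verdicts):
            return answer                                   # consensus reached
        answer = llm(f"Rewrite the answer to '{question}' fixing these issues:\n"
                     + "\n".join(verdicts))
    return answer

print(multi_agent_answer("How tall is the Empire State Building?"))
```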

  • @talktorobi
    @talktorobi 21 день назад +3

    If Google doesn't release Project Astra as a product, OpenAI will eat their lunch with GPT-4o.

  • @viruscat385
    @viruscat385 21 день назад +8

    What's funny to me was the part at Google I/O where they talked about the risks and responsibility of AI (like misuse and misinformation). Not once did they mention copyright, which IS the biggest ethical problem of generative AI. It basically works because it steals from all the hard work other people are doing and tries to replace these people more and more

    • @TreesPlease42
      @TreesPlease42 20 дней назад

      Biggest ethical problem is the war. Next is bad actors using custom AI. Third is opening AI to the public. Fourth is content theft and its effect on the market.

    • @TreesPlease42
      @TreesPlease42 20 дней назад +1

      Biggest ethical problem is the international conflict

  • @REVIEWSONTHERUN
    @REVIEWSONTHERUN 21 день назад

    Thanks for sharing it. ✌️

  • @jameswhitaker4357
    @jameswhitaker4357 21 день назад +1

    22:12 yes, the newest idea is ReFT models, which are supposed to fine-tune results iteratively. Not sure how tough the implementation is with an LLM though!

  • @gauravmukherjee2678
    @gauravmukherjee2678 21 день назад +3

    Just an important note: watch the OpenAI event fully, and all the other videos they released alongside it, before you all debate. Did no one spend any time doing some research?
    Now I am starting to doubt other reviews and things I have watched to date..!
    Sorry guys, but I expected better than this from you..!

    • @acelovesit
      @acelovesit 21 день назад +2

      Did you get the same impression, that they were just immediately dismissive.

    • @jakebj
      @jakebj 21 день назад +3

      Would be interested to hear Marques’s take after him watching the demos vs that poor explanation

    • @gauravmukherjee2678
      @gauravmukherjee2678 21 день назад

      @@acelovesit yes, absolutely. By the way, just looking at the timestamps already gives a partial confirmation of this.

    • @gauravmukherjee2678
      @gauravmukherjee2678 21 день назад

      @@jakebj true ..!

  • @Andres_Acosta
    @Andres_Acosta 21 день назад +7

    FYI Gemini Flash costs $0.35 per 1 million tokens used, so yes, Microsoft should be scared; it's cheap for GPT-4 levels of performance. Also, for Project Astra, people did get to use it live, and some developers have used Flash to recreate the demo. Y'all spent more time criticizing the presentation than discussing the tech, c'mon now.
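    To put that pricing in perspective, a quick cost calculation using the $0.35 per million input tokens figure quoted above (output-token and competitor pricing left out for simplicity):

```python
PRICE_PER_MILLION_TOKENS = 0.35          # USD, the Gemini Flash input price quoted above

def cost(tokens: int) -> float:
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# e.g. stuffing a 700k-token codebase into the context ~50 times a day:
print(f"${cost(700_000) * 50:.2f} per day")   # about $12.25/day at that rate
```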

    • @Ricky-cn2io
      @Ricky-cn2io 19 дней назад

      Definitely not enough time between the keynotes and this pod. It’s all very surface value

  • @deRykcihC
    @deRykcihC 20 дней назад

    The number of new names introduced at this event is hilarious. Good luck to the media folks who need to track all these parent and child variations of the names.

  • @levzowk
    @levzowk 21 день назад +1

    “I’d rather talk with AI” 😂😂 nice shot Andrew, Reddit do be like that sometimes

  • @Faust-Hell
    @Faust-Hell 21 день назад +3

    Marques ipad, not amazed. 🗣WHAT ABOUT THOSE STUDIO L.E.D.s DURING TRIIIIIVIIIIAAAAAA!

  • @rickiek
    @rickiek 21 день назад +12

    So, as a regular tech user, I was fascinated by the iPad, OpenAI and the Google IO presentations. I felt the progress was so remarkable, and I am going to be thoroughly inquisitive about the new prospects these tools and services open up.
    Meanwhile, the hosts here:
    New iPads - meh
    GPT4o - meh
    GoogleIO - meh
    Am I that easy to please, or are the tech reviewers getting too hard to please?

    • @JaceKeller
      @JaceKeller 20 дней назад +1

      This comment was written by chatGPT

  • @strawhatsmanager
    @strawhatsmanager 21 день назад

    kudos to the title this week it made me laugh !!

  • @NicholasClooney
    @NicholasClooney 20 дней назад +1

    I agree with what Adam said about how being able to talk to AI at any moment when reading a book / watching a movie or TV show can be a great feeling. I use ChatGPT for that too and it has been an awesome companion for this purpose, whether you just want to be able to express yourself or have a spoiler-free "Wikipedia" about the characters or lore of the book, movie, or show.
    I was watching Zack Snyder's Justice League, but I am also pretty new to the DC cinematic universe. I know some but not a lot. ChatGPT did a perfect job of not spoiling any stories while filling in my knowledge gaps.
    Having it right there for me to talk to, and the immediate response (instant gratification), felt so so good!

    • @WigganNuG
      @WigganNuG 20 дней назад

      THIS! This is what I can't wait for! Our own Jarvis / Friday personal assistant. I'm gonna use it to wake me up in the morning and be my ass-kicker to get my lazy ass going and keep me on schedule! 🤣🤣😂😃😀🙂😐😑

  • @joe5head
    @joe5head 21 день назад +3

    @45:00 as a company, Google is entering its midlife-crisis phase. They are offering investors dividends for the first time, and the change you are seeing at I/O is just a reflection of that next phase in company maturity. Expect to see this phase shift with Meta in the coming years; it's already started with Zuck walking back the metaverse stuff.

  • @JiiruKoga
    @JiiruKoga 20 дней назад +3

    ChatGPT-4o is like a huge step for mankind. It's scarily good, and listening to these bros talk about how dumb it is feels like a Mandela effect. Am I in the same dimension, using the same ChatGPT-4o as these bros?

  • @arun279
    @arun279 21 день назад +2

    36:23 i have been doing the same thing with Gemini! now there’s too many rate limits im even considering paying for Gemini advanced. it’s really good even while reading nonfiction to get a better perspective, opposing viewpoints, context, things i may have missed, etc

  • @systoxity
    @systoxity 19 дней назад +1

    No, the pencil cannot go on the opposite side because of the keyboard. The pencil can't go at the top or bottom because of the speaker grilles, power button, and charging port.

  • @6bthedevil
    @6bthedevil 21 день назад +6

    Watching non-artists review the newest iPad (which is made specifically for digital artists, finally giving us the barrel-roll movement desperately needed to actually use the iPad professionally like all the other high-end stylus art tablets) is like watching dentists review heart surgery tools.
    Artists around the world are rejoicing and ready to form parades over this feature, and everyone else is shrugging their shoulders.
    Same with the hover feature, the most important feature for digital artists, allowing any screen you use to become the iPad drawing surface... every non-artist shrugged at that too.
    Look, y'all, there are iPads for all of you non-artists, but now there is finally one made just for us artists.
    Have a successful artist review the iPad Pro.

    • @tiagomaqz
      @tiagomaqz 20 дней назад

      I agree 100%. I wish they did more research before talking about tech. They only talk about impressions instead of real-world use and feedback.

    • @tanmayjaiswal5935
      @tanmayjaiswal5935 20 дней назад +1

      Hard disagree. Even if they did nothing to the iPad and just released a new Apple Pencil, they could have accomplished the same thing. The iPad itself added no value except for moving the camera to a slightly better spot. In fact, they have forced you to buy a new iPad just because you want the new Apple Pencil. If anything, this was a really shitty move by Apple.

    • @6bthedevil
      @6bthedevil 20 дней назад

      I draw with parallel pens as do MANY comic book artists.
      We can finally do digitally what was only possible traditionally.
      Your opinion means nothing to me.

  • @mallow610
    @mallow610 21 день назад +18

    You guys really need an AI specialist on your panel. Some of the stuff said was pretty dated and showed a lack of understanding of the underlying tech and implications of last week's announcements. None of GPT-4o's other emergent capabilities were mentioned, and nothing about how significant audio-in to audio-out is. Being able to give an audio interpretation of an image without it being converted to text is groundbreaking.

    • @mallow610
      @mallow610 21 день назад +3

      It just comes off as no research was done on this topic.

    • @jakebj
      @jakebj 21 день назад

      this.

    • @acelovesit
      @acelovesit 21 день назад +1

      I feel like they're drained from it, so it comes off negative because of bias. But they really do need to pull their finger out with the AI stuff, or just not cover it.
      Kinda disappointed in this episode. I was expecting some excitement about these achievements, but all I felt and saw was ridicule of the users and the technology.

    • @mallow610
      @mallow610 21 день назад +3

      @@acelovesit I would argue it’s because people who do not know the ins and outs of AI tech are not able to actually understand what it is doing and what it is capable of. I think we are at the point where the general public will see something and be like “oh that’s cool”, then brush it off. For example, I showed some family members shocking Sora video and they barely responded cuz to them it’s just a video. They don’t get how the system is actually working and how insane it is. I agree with not covering it, or be like “we are covering this from the viewpoint of the general public seeing this as presented”

  • @ZenCodingStudio
    @ZenCodingStudio 15 дней назад +1

    Hey MKBHD and Team,
    I'm Sai Harshith, and I've got a strong reason regarding the positioning of the Apple Pencil on the iPad that might convince you.
    Apple likely maintained the position of the Apple Pencil because many users have a habit of holding the iPad in landscape mode. Just picture yourself holding the iPad with your left hand, and the pencil is on the left side of the iPad in landscape mode, while you're also holding a coffee with your right hand during a meeting with the camera on. Managing everything could be quite challenging. Although one could suggest swapping the positions of the iPad and coffee, some people are either left-handed or right-handed, and there's also the issue of not having the option to carry the Apple Pencil separately. If you unintentionally bought the pencil without considering its position, your only option is to attach it to the iPad.
    So, imagine how you would feel.
    Some might wonder why Apple changed the position of the Pencil, right? That's precisely why Apple likely didn't change the position.
    If you find this explanation helpful, please consider responding or even pinning this message so that iPad users can understand the reason behind it.
    Thank you for taking the time to read this message.

  • @valeriallapiz
    @valeriallapiz 20 дней назад +1

    Arrived home after school, saw this, and thought about how well AI works now, because today my physics teacher asked ChatGPT to give him some exercises on momentum and did that the whole class.

  • @midnight_yota
    @midnight_yota 21 день назад +2

    I feel like sick Marques sounds way more like David than normal.

    • @dobo1044
      @dobo1044 21 день назад

      Is it just me or has Marques's voice gotten deeper?

  • @oimrqs1691
    @oimrqs1691 21 день назад +13

    You guys should really be more informed than the average viewer. Just watching the event and spilling it out without actually digging around to really understand GPT4o is a bit frustrating.

    • @nikkoXmercado
      @nikkoXmercado 21 день назад +5

      Exactly. Everything here is poor & inaccurate understanding of the impressiveness of GPT-4o, which is basically now an AGI.

    • @dosesandmimoses
      @dosesandmimoses 10 дней назад

      Interesting. Where do you live? Because .. they are. And the format is actually calming for people (that must be “lower” in compute and tech knowledge than you) .. so congrats- you are officially ahead of the curve. What do suggest they do? How would you run your channel? And remember.. this might be in perpetuity.. no pressure. But come play and show us what you got! Mystikal’s show us what you’re working with is playing in my head now.. and that song.. Danger! Danger.. great song

    • @dosesandmimoses
      @dosesandmimoses 10 дней назад

      Well said Nikko. Thanks for dissing with no suggestions. So helpful. I wish you could see through my eyes and then .. well. You would probably find something unacceptable with my content as well. No judgement. You do you.. just be conscious of what you are saying bec the world is about to get a lot smaller.

  • @emilegouba7817
    @emilegouba7817 21 день назад +1

    I could wake up to the waveform opening/break song!

  • @eyeamwema
    @eyeamwema 20 дней назад

    19:43 the audio was cutting out because of the connection to the servers; it happens on the current/old voice chat too. That's why they used a wired internet connection vs. Wi-Fi, so they could minimize it. There were also moments where the audience made noise and it thought it was being interrupted. That part is understandable, but it leads me to wonder how perfectly quiet an environment one needs to be in. Like, would background music cut it off, or someone yelling to a friend outside your window, that type of thing. I imagine it's hard to have it figure that out at this point (doable, but still). But a lot of the times it was cutting out it was just server lag, if I'm not incorrect.

  • @chickendinner6456
    @chickendinner6456 21 день назад +4

    They are roasting all the AI-related products being released, and then Apple will release the same half-baked AI crap and they will do a first-impressions video, 2 test videos, a livestream dedicated to it, and a bunch of Twitter posts.

    • @yaboipookiepook
      @yaboipookiepook 21 день назад +1

      The hypocrisy is unreal. You just said what I was thinking and you wrote it aloud, my HERO. Bravo to you 🫣🤦🏾‍♂️✨🙌🏾

    • @acelovesit
      @acelovesit 21 день назад +2

      Powered by the same Open AI they've just been dismissive of.

  • @chrissmith635
    @chrissmith635 20 дней назад +5

    David is so condescending to people who trust or connect with ai at all

  • @tristansoucy655
    @tristansoucy655 20 дней назад +2

    Logic Pro was on Logic Pro X, where the X stood for ten, just like how Apple jumped to the iPhone X! Logic for iPad 2 has that new Stem Splitter; it might be using similar technology to what is found in Apple Music Sing. I always welcome the audio stuff, Ellis!

  • @jerrydmaya
    @jerrydmaya 18 дней назад +1

    For years YouTubers (who don't use the iPad as their primary device, by the way) have asked that the orientation of the front-facing camera be changed, knowing full well that it's going to go on the same side as the Apple Pencil charging dock.
    It was bound to bring changes to the charging system of the pencil, which is why the baseline iPad can't charge from the same dock, forcing the Apple Pencil USB-C to be introduced.
    Changing the Apple Pencil dock was out of the question from the beginning for those of us who use the iPad as our primary device. Its current dock just makes sense.
    I think the comment about the Apple Pencil Pro being a money-making scheme is a bit of a naive statement from Marques, who has worked on products of his own and must know design changes can compromise function.

  • @Serifinity
    @Serifinity 21 день назад +3

    Seriously, you guys liked that DJ at Google I/O? 🤦‍♂️ I think whoever hired him should be fired instantly. I do not see how screaming at your audience would ever encourage them to enjoy the experience. Also, after using GPT-4o all week, I'm not sure you are right about the filler words; it's so fast they are not needed. I think it was more to keep the conversation moving along, since in the live demos not every response had filler words.

    • @nouxcloete3129
      @nouxcloete3129 21 день назад +2

      Bro, I'm with ya on this one; that was extremely uncomfortable to watch. I'm sure the Google peeps are doing the opposite of promoting that DJ portion of the show. It was, well, an experience.

    • @kevinweyrauch4875
      @kevinweyrauch4875 20 дней назад

      Educate yourself.