Generative AI Has Peaked? | Prime Reacts

  • Published: 22 May 2024
  • Recorded live on twitch, GET IN
    Reviewed Video
    • Has Generative AI Alre...
    By: Computerphile | / @computerphile
    My Stream
    / theprimeagen
    Best Way To Support Me
    Become a backend engineer. It's my favorite site
    boot.dev/?promo=PRIMEYT
    This is also the best way to support me: support yourself by becoming a better backend engineer.
    MY MAIN YT CHANNEL: Has well-edited engineering videos
    / theprimeagen
    Discord
    / discord
    Have something for me to read or react to?: / theprimeagenreact
    Kinesis Advantage 360: bit.ly/Prime-Kinesis
    Hey, I am sponsored by Turso, an edge database. I think they are pretty neat. Give them a try for free, and if you want you can get a decent amount off (the free tier is the best, better than PlanetScale or any other)
    turso.tech/deeznuts
  • Science

Comments • 866

  • @JGComments
    @JGComments 1 month ago +557

    Devs: Solve this problem
    AI: 10 million examples please

    • @DevPythonUnity
      @DevPythonUnity 1 month ago +12

      "Actually, AI should strive to be just smart enough to acquire and contemplate new data, including introspection. What do you do when confronted with an unsolvable problem? You gather data, experiment, collect results, then engage in self-reflection to update your knowledge base. It's not merely about amassing data, but rather about possessing the capability to acquire fresh data, experiment with it, and engage in introspection."

    • @tempname8263
      @tempname8263 1 month ago +22

      @@DevPythonUnity please repeat your message, but this time use no more than 1 space in between words
      generate 4 different versions of such message

    • @alexandrecolautoneto7374
      @alexandrecolautoneto7374 1 month ago +5

      Devs: Generate 10 million examples of this problem.

    • @alexandrecolautoneto7374
      @alexandrecolautoneto7374 1 month ago +7

      @@DevPythonUnity ! Disclaimer: GPT was trained on data until 2021; any answers after that date can hallucinate. We will solve this by searching Google and feeding the first results into your context, but it will feel like we are now able to generalize to any answer.

    • @Sky-fk5tl
      @Sky-fk5tl 1 month ago +1

      Isn't that how humans learn too...

  • @drditup
    @drditup 28 days ago +99

    If only all Windows users would start taking pictures of everything they do so the AI algorithms can get more data. Maybe like a screenshot every few seconds. I think I recall something like that

    • @samblackstone3400
      @samblackstone3400 18 days ago +12

      AI data collection legislation now.

    • @magfal
      @magfal 15 days ago +2

      @@samblackstone3400 could even drop the word AI from it...

    • @definitelynotacyborg
      @definitelynotacyborg 8 days ago +1

      Don't worry, since Recall has been recalled, we will have Apple Intelligence, which is going to do the exact same thing from the moment you give it access to your device.

    • @Akab
      @Akab 4 days ago

      @@definitelynotacyborg you mean until Apple gives it access to their devices 😉

  • @MrSenserus
    @MrSenserus 1 month ago +158

    The Computerphile guys are my uni lecturers atm and for the coming year. It's pretty cool to see this.

    • @Michael-ty2uo
      @Michael-ty2uo 28 days ago +9

      Damn, lucky asf. They definitely enjoy teaching others about comp sci and math topics; that can't be said about most professors

    • @WretchMusou
      @WretchMusou 24 days ago +1

      Are they nice people in real life? They seem to be in the videos...

    • @MrSenserus
      @MrSenserus 23 days ago +2

      @@WretchMusou Yeah generally! Definitely some characters though, Steven is a great lecturer and awesomely knowledgeable but definitely a quirky character.

    • @precooked-bacon
      @precooked-bacon 17 days ago +2

      very lucky. make good use of the time.

  • @Watanabe911
    @Watanabe911 27 days ago +72

    Isn't it crazy that you hear more people worry about AI polluting the internet for training future AIs than about the fact that it is polluting the internet for, you know, YOU AND ME?

    • @g_wylde
      @g_wylde 24 days ago

      True but I guess most of us who are vaguely internet savvy can tell the AI crap from legitimate information. AIs themselves cannot do that, they'll just take it in and regurgitate something even worse out. Which means that those people who are less savvy will be faced with more and more fake information and all of us will be swimming through growing piles of garbage to find anything useful.

    • @jakke1975
      @jakke1975 14 days ago +7

      Environmental pollution by AI is even a lot worse and honestly, for what? An advanced chat toy for adults that operates with the "intelligence" of a dog?

    • @VinnyMickeyRickeyDickeyEddy
      @VinnyMickeyRickeyDickeyEddy 3 days ago

      @@jakke1975 Yup. Rarely gets discussed. Same with VR graphics.

  • @Afro__Joe
    @Afro__Joe 1 month ago +67

    AI is becoming like ice cream to me, good every once in a while, but I get sick of too much of it. With Samsung trying to shove it into everything in my phone, MS trying to shove it into everything PC-related, Google pushing it at every turn, and so on... ugh.

    • @DJWESG1
      @DJWESG1 29 days ago +2

      That's the same Samsung that can't even get its spellchecker and autocorrect to work efficiently for ppl with poor spelling and grammar.

    • @the0ne809
      @the0ne809 29 days ago

      Google using AI for its search engine is wild to me.

    • @TheManinBlack9054
      @TheManinBlack9054 29 days ago

      @@the0ne809 every search engine uses it

    • @Overt_Erre
      @Overt_Erre 26 days ago

      They're pushing it because they want to collect more data from you. AI will seem free and useful as long as they think more data will improve their efficacy. Once they see the diminishing returns suddenly you'll be asked to pay and the usage rates will plummet

  • @denysolleik9896
    @denysolleik9896 1 month ago +342

    It can do anything except tell you that it doesn’t know how to do something.

    • @Vlad-qr5sf
      @Vlad-qr5sf 1 month ago +6

      If it can do anything then it doesn’t need to tell you that it can’t do something. Your statement is contradictory.

    • @shafferfs
      @shafferfs 1 month ago

      @@Vlad-qr5sf shut up nerd

    • @denysolleik9896
      @denysolleik9896 1 month ago +75

      @@Vlad-qr5sf someone always thinks they’re smarter than me.

    • @hootmx198
      @hootmx198 1 month ago +12

      Just like your average internet user haha

    • @JGComments
      @JGComments 1 month ago +7

      Right, it doesn’t actually fundamentally understand what anything is, like what a cat is versus what a dog is.

  • @SL3DApps
    @SL3DApps 1 month ago +377

    It's crazy how OpenAI's only way to stay relevant in this market vs big tech such as Google and MS is to sell the hype that AI will not peak in the near future. Yet they are the company everyone is relying on to say whether AI has peaked... why would they ever admit to anything that could be damaging to their own company?

    • @furycorp
      @furycorp 1 month ago +53

      Altman just needs everyone to hand over more personal data and private/internal documents from businesses so he can live out the megalomaniac fantasies that he talks about in interviews

    • @alexandrecolautoneto7374
      @alexandrecolautoneto7374 1 month ago +51

      AI trains on the internet -> AI fills the internet with garbage -> AI doesn't have good training data anymore...

    • @hughmanwho
      @hughmanwho 1 month ago +3

      @@furycorp I'd be curious to see these interviews you are referring to

    • @hughmanwho
      @hughmanwho 1 month ago +7

      My guess is that ChatGPT 5 will be better quality. 4 definitely has some issues.

    • @dixztube
      @dixztube 1 month ago +8

      @@furycorp he isn't trustworthy at all

  • @amesasw
    @amesasw 1 month ago +66

    One major problem: if I ask a person how to do something that none of us knows the solution to, they may be able to theorize a solution, but they will often tell you they are guessing and not 100% sure about some parts of their proposed solution.
    ChatGPT can't really theorize for me or tell me that it is not sure of an answer but is theorizing a solution based on its understanding or internal model.

    • @doctorgears9358
      @doctorgears9358 1 month ago +29

      It will theorize and be confidently wrong. Which is honestly worse than it just admitting a lack of knowledge.

    • @BHBalast
      @BHBalast 1 month ago

      There is a compute-intensive method to check for model confidence. As LLMs are statistical models, one might prompt the model multiple times and check if the answers are the same. The second step can also be done by an LLM. This method works and was used in some paper associated with medical use of LLMs, but I don't remember the name.
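
      A minimal sketch of that sampling idea (often called self-consistency); ask_llm is a hypothetical stand-in for whatever chat-completion API is in use, and exact string matching stands in for the LLM-based comparison step the comment mentions:

      ```python
      from collections import Counter

      def ask_llm(prompt: str, temperature: float = 0.8) -> str:
          """Hypothetical stand-in for a real chat-completion API call."""
          raise NotImplementedError

      def confidence_by_agreement(prompt: str, n_samples: int = 10) -> tuple[str, float]:
          # Sample the model several times at nonzero temperature, then treat
          # the frequency of the most common answer as a rough confidence score.
          answers = [ask_llm(prompt) for _ in range(n_samples)]
          answer, count = Counter(answers).most_common(1)[0]
          return answer, count / n_samples

      # An answer that shows up in only 3 of 10 samples is probably a guess:
      # answer, confidence = confidence_by_agreement("Is drug X contraindicated with Y?")
      ```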

    • @reboundmultimedia
      @reboundmultimedia 28 days ago

      If you give a human a new problem, they will often use tools, research, test things out, etc. to find the solution to the problem. There are very few humans who can simply solve a new problem without some kind of pretraining involved. There is no reason that a very, very good LLM can't do the same thing. They will be able to use tools the same way a human can.

    • @therealjezzyc6209
      @therealjezzyc6209 19 days ago

      @reboundmultimedia while what you're saying isn't wrong, it isn't accurate to say that humans and LLMs learn the same way, or that they learn the same relationships. First of all, humans learn faster than LLMs do, with less training data. Second, when faced with a challenging problem, a human will go off and collect new information; an LLM will not go and find new textbooks, put them into its training data, and retrain itself to learn new correlations. Humans can actively acquire new knowledge that they haven't seen or trained on; LLMs cannot acquire knowledge that wasn't implicit in the representations of the data they were trained on.

    • @justinwescott8125
      @justinwescott8125 16 days ago

      It will tell you it's not sure if you ask it to. But you're right that it's not a built-in behavior.
      "Hey ChatGPT, for this conversation, if you give me an answer that you're not very sure about, I want you to tell me. In fact, for every answer you give, please give me a percentage that represents how sure you are, and explain how you arrived at that percentage."

  • @granyte
    @granyte 1 month ago +219

    "steer me into my own bad ideas at an incredible speed" LMAO this is perfect, it's exactly what it does when it even works at all. I don't know if my skills have improved that much since GPT-4 came out or what, but it feels like Copilot and ChatGPT have become way dumber since launch.

    • @allansmith350
      @allansmith350 1 month ago +15

      I use all of them and I kind of agree, but I will say, I've cowboyed into some small project solutions VERY fast with AI. They're surely not robust or maintainable though

    • @AndrasBuzas1908
      @AndrasBuzas1908 1 month ago +13

      It breaks down the moment you try to do something complex that it hasn't seen before.
      Even then with small problems, it can completely miss the point. It's only really good for the occasional auto complete suggestions.

    • @rngQ
      @rngQ 1 month ago +5

      Engineers at OpenAI have talked about how the quality of generation scales with compute. So as more people use GPTs, I can imagine the compute pool being more divided, which lowers the quality of the output. Look at how drastically it scales with Sora, for example

    • @elPresidente650
      @elPresidente650 1 month ago +3

      @@allansmith350 I've been using it for a while, and honestly, I can't complain too much. I don't ask it to do anything fancy, though. It comes in handy when writing documentation based on my layman's prompts. It needs to be edited, of course, but it does a good job at organizing my ideas.

    • @TheManinBlack9054
      @TheManinBlack9054 1 month ago +1

      Use Claude 3 Opus, it's far better for coding. Seriously. Opus is really better.

  • @GigaFro
    @GigaFro 1 month ago +67

    Just last year, I was sitting in a makeshift tent in an office in downtown San Francisco, attending a Gen AI meetup. The event was a mix of investors and developers, each taking turns to share their projections on the future progress of AI. Most of the answers were filled with exponential optimism, and I found myself dumbfounded by the sheer enthusiasm. When it was my turn, I projected that we were peaking in terms of model performance, and I was certain I was about to be ostracized for my view. That day I learned that as soon as hype enters the room, critical thinking goes out the window - even for the most intelligent minds.

    • @sp123
      @sp123 1 month ago +13

      People go into tech because it's the last gold rush of easy money

    • @TheManinBlack9054
      @TheManinBlack9054 29 days ago +1

      Great! You seem to have found your audience here, but if I may ask, what were your projections based on?

    • @Danuxsy
      @Danuxsy 29 days ago +1

      But you would have been wrong? GPT-4o is clearly a step up from GPT-4, and OpenAI have stated themselves that we are far from the limit of generative models.

    • @justahamsterthatcodes
      @justahamsterthatcodes 28 days ago +4

      We certainly are plateauing. Compare GPT-2 to GPT-3: wild difference. Now compare GPT-3 to GPT-4: much less difference. Or GPT-4 to GPT-4o.

    • @skyrimax
      @skyrimax 28 days ago +1

      Attended an ML day type event last year, had a similar experience. But what dumbfounded me even more was the complete disregard for the social implications of ChatGPT-type programs, like the new Google Overview telling depressed people to jump off a bridge. I think that's a similar observation to the one you had about critical thinking, but on the social side.

  • @MrKlarthums
    @MrKlarthums 1 month ago +75

    There's plenty of software that has simultaneously improved while having an entirely degraded user experience. If companies feel that it makes them more money, they'll absolutely do it. LLMs will probably be part of that.

    • @monad_tcp
      @monad_tcp 1 month ago +14

      Windows 11, for example: structurally the thing is actually better than the previous ones. But in user experience, it degraded so far from Windows 7. Even though Windows 11 is prettier than Windows 10, which was ugly as hell, it's far from the simple beauty of Windows 7's glass, and it's barely usable.

    • @Forty8-Forty5-Fifty8
      @Forty8-Forty5-Fifty8 1 month ago +5

      @@monad_tcp To be fair, isn't that just the Microsoft development cycle: alternating between releasing a good product and then releasing a shitty one? At least that is what I've been told since I was a kid, and my only experience is W7 (good), W8 (dogshit), W10 (good), and then W11 (dogshit, but improving).

    • @monad_tcp
      @monad_tcp 1 month ago +1

      @@Forty8-Forty5-Fifty8 probably, it's the tick-tock cycle from old Intel

    • @Forty8-Forty5-Fifty8
      @Forty8-Forty5-Fifty8 1 month ago +3

      @@monad_tcp lol, I was just having a conversation yesterday with my grandfather about my conspiracy theory that Intel pretends to release a new generation every year when in reality it takes like 2-4+ generations for any noticeable performance difference. My motherboard just died and I was in the market for an upgrade, but it didn't seem like there was anything worthwhile. I guess there is something to that theory

    • @monad_tcp
      @monad_tcp 1 month ago +1

      @@Forty8-Forty5-Fifty8 I think Intel died at 14nm; nothing got better after that

  • @jameshickman5401
    @jameshickman5401 1 month ago +258

    Every exponential curve is secretly a sigmoid curve.

    • @zyansheep
      @zyansheep 1 month ago +5

      So far...

    • @AndrasBuzas1908
      @AndrasBuzas1908 1 month ago +52

      Sigmoid grindset

    • @Forty8-Forty5-Fifty8
      @Forty8-Forty5-Fifty8 1 month ago +7

      what about the exponential curve

    • @kevin.malone
      @kevin.malone 1 month ago +1

      @@AndrasBuzas1908 I wanted to say that

    • @MikkoRantalainen
      @MikkoRantalainen 1 month ago +18

      I would say that every exponential curve of *naturally occurring events* is secretly a sigmoid curve. You can have pure exponential curves in pure mathematics without any problems, but real-world events are limited by real-world physical limits, and those curves seem to follow a sigmoid curve in the big picture even though short-term results point to exponential behavior.
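
      As a quick sketch of why the two are so easy to confuse (standard logistic function, not from the video):

      ```latex
      % Logistic (sigmoid) growth with capacity K, rate r, midpoint t_0:
      %   f(t) = K / (1 + e^{-r(t - t_0)})
      % For t << t_0 the exponential term dominates the denominator, so the
      % early part of a sigmoid is indistinguishable from pure exponential
      % growth until the limit K starts to bite.
      \[
        f(t) = \frac{K}{1 + e^{-r(t - t_0)}}
        \qquad\Longrightarrow\qquad
        f(t) \approx K\,e^{\,r(t - t_0)} \quad \text{for } t \ll t_0
      \]
      ```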

  • @MikkoRantalainen
    @MikkoRantalainen 1 month ago +28

    Modern image generators can do surprisingly well even with somewhat weird prompts such as "Minotaur centaur with unicorn horn in the head, steampunk style, award winning photograph" or "Minotaur centaur with unicorn horn in the head, transformers style, arc reactor, award winning photograph". Even "A transformers robot that looks like minotaur centaur, award winning photograph, dramatic lighting" outputs acceptable results.
    However, ask for "a photograph of Boeing 737 MAX with cockpit windows replaced with cameras" and it will totally fail. The latter case has way fewer possible implementations, and this exactness makes it fail.

    • @pureheroin9902
      @pureheroin9902 27 days ago +5

      I need to see your search history 🤣🤣🤣

    • @MikkoRantalainen
      @MikkoRantalainen 27 days ago

      @@pureheroin9902 🤭My search history is actually pretty boring. Right now it looks like this:
      - phpunit assertequals github
      - css properties selectors sanitizer whitelist
      - sanitize css whitelist functions
      - phpunit assertequals clipped string
      - webp vs avif vs jpeg xl
      - what is intel ark
      - seagate exos helium
      - max fps cs
      - eu legislation consumer battery replacement
      - how Automatic Activation Device works
      - song of myself nightwish

    • @thiagopinheiromusic
      @thiagopinheiromusic 3 days ago +1

      It's interesting how the specificity and exactness of a prompt can impact the results of image generation. When a prompt is too specific or technical, like "a photograph of Boeing 737 MAX with cockpit windows replaced with cameras," the image generator may struggle because it relies on patterns and generalizations learned from a vast dataset of images. Here are a few reasons why this happens:
      - Data Training Limitations: The training data for these models consists of a vast array of images, but the specific combination of features like "Boeing 737 MAX with cockpit windows replaced with cameras" might not exist in the dataset. As a result, the model can't draw from a learned example and may fail to generate a coherent image.
      - Conceptual Complexity: While a "Minotaur centaur with a unicorn horn" is a complex concept, it's based on mythological and fictional elements that the model has likely seen in various forms. This allows it to generalize and create an imaginative output. However, replacing cockpit windows with cameras on a specific aircraft model is a highly technical modification that the model might not have encountered or understood in its training data.
      - Visual Coherence: Generating a photorealistic image that includes complex mechanical details, like modifying an airplane's cockpit, requires a high level of visual coherence and understanding of engineering. The model might struggle to maintain the realistic appearance of the Boeing 737 MAX while accurately implementing the specified changes.
      - Creative Interpretation vs. Precision: When given creative or fantastical prompts, the model has more leeway to interpret and generate the image. However, when asked for precise, technical modifications, it needs to adhere closely to real-world specifications, which can be challenging without explicit training examples.
      To improve the chances of getting a satisfactory result with a more specific prompt, one might try breaking down the request into simpler parts or providing additional context that helps guide the model's interpretation. For instance, describing the cameras' placement and appearance in more detail or using analogies that the model might better understand could potentially yield better results.

  • @TomNook.
    @TomNook. 1 month ago +95

    I hate how AI has been forced into everything, just like crypto and NFTs a couple of years ago

    • @MasterOfM1993
      @MasterOfM1993 1 month ago +35

      somehow feels like the people who used to talk about web3 all the time now talk about AGI all the time

    • @Slashx92
      @Slashx92 1 month ago +1

      Sadly, this is somewhat useful for the corporate world, so it will stay, unlike NFTs, which just died on their own

    • @francisco444
      @francisco444 1 month ago

      AI is in everything because it's a universal translator, so it makes sense to put it everywhere.
      Crypto is great but has limited use.

    • @thewhitefalcon8539
      @thewhitefalcon8539 1 month ago +9

      @@MasterOfM1993 some people running NFT companies are running AI companies now

    • @marceljouvenaz257
      @marceljouvenaz257 1 month ago

      Elon is investing $10 bln in AI this year. YMMV, but that is my high water mark.

  • @tequilasunset4651
    @tequilasunset4651 1 month ago +23

    We didn't even go "from nothing to something" - current LLMs are just a marked spike/breakthrough in capability of machine learning that's been around for ages. I think we'll still see huge improvement in the technology that enabled that breakthrough, but doubt there will be a "next level" - one that's not just a tech company branding a new product as such - for a good few years.

    • @TheNewton
      @TheNewton 28 days ago +6

      the breakthrough of course being: just throw more resources at the problem

  • @techsuvara
    @techsuvara 1 month ago +75

    I like to say "AI accelerates you in the direction you're going, pray it's not the wrong one"...

    • @BaruyrSarkissian
      @BaruyrSarkissian 18 days ago +1

      It's still good to reach the end of a wrong road faster.

    • @techsuvara
      @techsuvara 18 days ago +2

      @@BaruyrSarkissian that's the problem with wrong roads: if you're asking AI to take you somewhere, it doesn't know it's the wrong road. However, if you do things yourself, you can reason that you're down the wrong path much earlier.

    • @BaruyrSarkissian
      @BaruyrSarkissian 15 days ago +1

      @@techsuvara your initial statement is "AI accelerates you in the direction you're going"; you will go down wrong roads with and without AI.

  • @bwhit7919
    @bwhit7919 1 month ago +23

    Most people misunderstand when they hear AI follows a "power law". If you read OpenAI's paper on the scaling laws, you need a 10x increase in both compute and data to get a 0.3 reduction in the loss function. In other words, you need exponentially more data to keep making the models better. It's not that the models are getting exponentially better.
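
    A sketch of what that power-law relationship says (the exponent α here is symbolic and illustrative, not OpenAI's fitted value):

    ```latex
    % Scaling law: loss falls as a small negative power of compute C,
    %   L(C) = a C^{-alpha}.
    % Then L(10C)/L(C) = 10^{-alpha}: every 10x of compute buys the same
    % fixed additive improvement in log-loss -- exponentially growing
    % spend for roughly linear-looking gains.
    \[
      L(C) = a\,C^{-\alpha}
      \quad\Longrightarrow\quad
      \frac{L(10C)}{L(C)} = 10^{-\alpha}
    \]
    ```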

    • @DJWESG1
      @DJWESG1 29 days ago

      No, they just haven't figured out how to utilise small amounts of data.

    • @thiagopinheiromusic
      @thiagopinheiromusic 3 days ago

      That's a great point, and it's a common misunderstanding. The term "power law" in the context of AI and machine learning, particularly in the scaling laws for neural network training, refers to the relationship between the amount of compute/data and the resulting improvement in performance. Here's a more detailed explanation to clarify this concept:
      Understanding the Scaling Laws in AI
      Definition: In the context of machine learning, a power law scaling means that to achieve a certain improvement in model performance (e.g., reduction in loss), the amount of compute and data required scales according to a power law.
      Example: According to OpenAI’s scaling laws, if you want to reduce the loss function by a factor (e.g., 0.3 reduction in loss), you need to increase both the compute and data by an order of magnitude (10x). This relationship can be described by a power law function.
      Exponential Data Requirements: The power law indicates that the requirements for data and compute grow exponentially to achieve linear improvements in model performance. This means that as the model gets better, the resources needed to continue improving it increase dramatically.
      Linear Performance Gains: Despite the exponential increase in resources, the actual performance gains (e.g., accuracy or reduction in loss) are not exponential but rather linear or sub-linear. This is why the models do not get exponentially better with exponentially more data and compute.
      Resource Intensive: As models grow larger and more complex, the cost (in terms of computational power and data) to train these models effectively becomes significantly higher.
      Diminishing Returns: There are diminishing returns in performance improvement relative to the exponential increase in resources. For instance, doubling the compute might not halve the error but only slightly reduce it.
      Misconception of Exponential Improvement: Some might misinterpret "power law" to mean that the models themselves improve exponentially with more data and compute. In reality, the improvement is much more modest compared to the exponential growth in resources required.
      Focus on Scaling: Understanding the scaling laws helps in setting realistic expectations and planning resource allocation for training larger models. It highlights the need for efficient algorithms and techniques to optimize resource use.

    • @DJWESG1
      @DJWESG1 3 days ago +1

      @@thiagopinheiromusic it's almost as if structuration and power relationships are real...

    • @thiagopinheiromusic
      @thiagopinheiromusic 2 days ago

      @@DJWESG1 fact

  • @hamm8934
    @hamm8934 1 month ago +105

    Read up on the "frame problem" and "grounding problem". This is old news and has been known for decades. Engineers and venture capital just don't care because it's not in their interest.
    Edit: also Wittgenstein's work on family resemblance and language games.
    Edit2: I should clarify that I am referring to the epistemological interpretation of the frame problem, not the classical AI interpretation. That is, the concern of an infinite regress arising from an inability to explicitly account for non-effects when defining a problem space; this is specifically at the level of computation, not representation. For example, if an agent is told "the spoon is next to the plate", how are all of the other non-effects, like a table, placemat, chair, room, etc., successfully transmitted and understood, while irrelevant inaccuracies like a swimming pool, cows, cars, etc. are omitted and not included in the transmission of information? Fodor, Dennett, McDermott, and Dreyfus have plenty of canonical citations and works articulating this problem.

    • @InfiniteQuest86
      @InfiniteQuest86 1 month ago +22

      As long as you profit before anyone figures it out, you win.

    • @abdvs325
      @abdvs325 1 month ago +2

      Those problems don't seem like limits at all. The frame problem is just about understanding relevant context, for which there is no definitive evidence that it can't be reproduced in AI. Neither has the grounding problem, which is just about understanding the real world rather than statistical relationships between words, been given any strong evidence that it is a limit on AI progress. This is laziness.

    • @hamm8934
      @hamm8934 1 month ago

      @@abdvs325 Those are extremely surface-level strawman understandings of both. Far greater minds than anyone watching this video have debated and formulated both of these critiques. You can hand-wave all you want, but the white papers have been left undisputed for decades.
      Here are a few points you are missing/oversimplifying:
      - The frame problem argues that in principle there is no deterministic - or probabilistic - way to determine relevant context in a logical framework. That is the problem. It shows that an infinite regress emerges when trying to determine relevance and irrelevance following deduction or induction. These systems axiomatically dissolve into intractability.
      - The grounding problem is not about determining the real world from a word. It cuts to the very root of deductive and symbolic systems. It shows that there must, in principle, be external dimensions/modalities that allow humans to deduce meaning from symbols. Symbols themselves are not sufficient. For instance, one's understanding of the symbol "food" is multimodal and multidimensional. You don't understand the word food because you read the definition of the symbol. You've smelt food. You've tasted food. You've felt food. You've prepared food. You've thrown away food. You've remembered food. Etc. Read up on the Chinese Room example and it might make it clearer. Or read some of Wittgenstein's work on the meaning of a word.
      I'm rambling at this point. Again, read up on these and don't be so naïve as to reject them after having a super basic understanding of them. These problems are real and ever present. These problems are very much open.

    • @hamm8934
      @hamm8934 1 month ago +17

      @@abdvs325 You're oversimplifying and strawmanning both. YouTube deleted my response, but read more.
      Also, "For which there is no definitive evidence that it can't be reproduced in AI" is a fallacy. You cannot prove a negative. There is no definitive evidence that there are not fairies. Exactly. No one is saying there is. The point is that there is no evidence in favor of positing the existence of fairies, therefore we just don't say there are fairies, but we can never say there aren't.
      There is no evidence or serious rebuttal to the frame or grounding problem, and as such, there is no reason to think they are wrong. They might be, but they've stood strong since at least the 80s, when the terms themselves were coined, even though the concepts go all the way back to Hume. You need positive evidence to say they are wrong. Until then, they stand as the null hypothesis.

    • @clubpenguinfan1928
      @clubpenguinfan1928 1 month ago +10

      Finally someone mentions philosophy of language. When the video mentioned the idea of mapping text/images to their meaning in some embedding space, it set off some alarms for me. If some hypothetical AGI can grasp meaning (like we do) via this architecture, then we might as well describe the "x means M" relation as just this embedding map.
      Wouldn't this have huge implications for the semantic problem? In a way it feels like an implementation for a referential-like theory of meaning, and those are the very first theories you "debunk" in an intro Phil of Lang class.

  • @xCheddarB0b42x
    @xCheddarB0b42x 28 days ago +12

    The young ones may not remember the VR craze of the late 90s and early 00s, but us oldkins do. AI feels like that to me.

    • @rh906
      @rh906 21 days ago

      The difference between then and now is that LLMs are at least useful if you understand their limitations and don't plop out your brain thinking they're a replacement. Can't fix lazy and stupid people, I suppose.

    • @tlz124
      @tlz124 16 days ago

      VR in the 90's?

    • @justinwescott8125
      @justinwescott8125 16 days ago +1

      Yup. Nintendo gave it a try in the 90s with a little product called the Virtual Boy.
      By the way, even though VR was a failed craze back in the 90s and 00s, it actually happened eventually. I use my Meta Quest like every day to play games and stay in touch with faraway friends. Some of the games are incredible, like Pistol Whip and Arizona Sunshine.

    • @FarnhamJ07
      @FarnhamJ07 9 days ago

      Yep yep, the Virtual Boy didn't come completely out of left field! I'd say the hype was really more about 3D graphics than VR itself, but it didn't take long for them to start pushing the idea that those 3D graphics could then be used to generate an entire 3D virtual world around you. Everyone knew the 3D graphics part was coming at least; I think a lotta people forget that the Virtual Boy and original PlayStation came out within a few months of each other!

  • @CristianGarcia
    @CristianGarcia 1 month ago +72

    Computerphile but the Primeagen talks from time to time

    • @virior
      @virior 28 days ago

      Yeah! That's called a react, I've been enjoying the format.

    • @kallekula84
      @kallekula84 26 days ago +2

      @@virior he usually lets the guy finish a sentence; how often did he even let the guy finish a sentence here?

  • @tonym4953
    @tonym4953 1 month ago +12

    8:20 OpenAI is doing the same thing with the consumer version of ChatGPT. They are essentially charging users to train their model. Genius and very cheeky!

  • @snarkyboojum
    @snarkyboojum 1 month ago +12

    The main issue is that the people responsible for the fundamental approaches being used in deep learning today have never wrestled with the problem of induction. They need to read the classic treatment by Hume and then follow up with Popper. Humans don't use induction to reason about the world. It's shocking to me that otherwise highly educated people have never read basic epistemology. Narrow education is to blame, really.

    • @ea_naseer
      @ea_naseer 1 month ago

      Induction has a formula, Solomonoff induction; yes, it's intractable, but it's there. But there's no formula for deduction, not even an intractable one, not even an NP-hard one.

    • @specy_
      @specy_ 1 month ago +2

      This is a cool topic. Why would you say humans don't use induction in their daily life? Excluding the scientific world, which we can say doesn't always use it, induction is probably the simplest and most used prediction technique among humans. I guess ML models can't really do much other than use induction to get a prediction, unless you are exhaustive with your possible inputs. What's your idea for not using induction in ML?

  • @DeusGladiorum
    @DeusGladiorum 1 month ago +13

    I didn’t appreciate Prime making those Kakariko girl noises while I was outside and without headphones

  • @ERICROJO156
    @ERICROJO156 1 month ago +16

    AI bros are crying now because they're gonna have to take responsibility for their own laziness, since their AI god isn't gonna happen ❤

  • @derekcahill1775
    @derekcahill1775 1 month ago +33

    Jeff Bezos said it best, but I think it's telling that AI needs so much data to form a basic model. For example, humans don't need to know everything about driving or have hundreds of thousands of miles in order to start driving a car. The other problem is that AI doesn't perceive opportunity cost like a human, so there's no incentive for it to problem-solve the same way a human would. AI is definitely the future, but it's nowhere near where people think it is, unfortunately.

    • @monad_tcp
      @monad_tcp 1 month ago +7

      It's funny, I learned to drive my car in one week after a mere 500 km of training data.

    • @monad_tcp
      @monad_tcp 1 month ago +11

      I also don't remember needing to read the entire internet to be able to write and understand text.

    • @Slashx92
      @Slashx92 1 month ago +13

      Yeah, but we have 20 years of experience living in reality (or 16 or w/e) when driving. You already have hand-eye coordination, you have seen cars all your life, you get a rough idea of how the road works from children's shows and books. There is an immense amount of data you are not acknowledging

    • @cauthrim4298
      @cauthrim4298 1 month ago +4

      @@Slashx92 people learned to drive when cars first came about all the same; it didn't take extraordinarily long either.

    • @jackoplumkin6412
      @jackoplumkin6412 1 month ago

      @@cauthrim4298 because there were other manual vehicles at the time that did the job of cars. And it's not like the earlier models of cars were much different from the carts people were used to when they were first invented

  • @MrSnivvel
    @MrSnivvel 1 month ago +60

    LaTeX-formatted papers (like the research paper in the video) are gigachad. You cannot prove me wrong.

    • @-book
      @-book 1 month ago +15

      LaTeX is such good software, puts Word to shame

    • @sahasananth987
      @sahasananth987 1 month ago +6

      I love LaTeX, it's awesome. I have thrown Word and Google Docs in the trash lol. I use LaTeX for assignments at school too

    • @AJewFR0
      @AJewFR0 1 month ago +8

      I went to a good CS college with a slightly math-heavy emphasis. I was the kid who started learning LaTeX for homework in multivariable calculus. It is such a useful tool to know for all my math, CS, and engineering classes that required PDF submission. I still use basic LaTeX formatting in markdown docs at work.

    • @xplorethings
      @xplorethings 1 month ago +7

      So.. every paper outside of social sciences?

    • @MrSnivvel
      @MrSnivvel 1 month ago +4

      @@xplorethings **whoosh** The use of LaTeX is rare outside of academia and research papers/publications, and those who do use it outside of that scope set themselves far ahead of the rest.
      I know last month was Autism Awareness month, but you'll still get a freebie this time for missing the point.

  • @apexphp
    @apexphp 1 month ago +100

    It's even much simpler than that. They've simply run out of training data. They've trained the LLMs on literally every piece of data ever generated by humans since the dawn of mankind, from every word written to tons of satellite images, to every movie produced and song recorded. There is no more training data, and the LLMs still get things wrong all the time (the other day Meta AI was adamant that a SHA-256 hash is 64 bytes in length; it's not, it's 32 bytes).
    And you can't just have these things train on synthetic data they create, because that just makes them dumber. Plus, with the sheer amount of AI-generated garbage content and spam that now exists in the world, these LLMs are probably as smart as they're going to get for a long time. I read a report a while ago that estimates the volume of text generated by humans from the dawn of mankind until recently is now being generated by AI every two weeks and pushed to the internet.
    So the pool of training data for LLMs is now of lower quality overall. I don't know, I'm rambling now.
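
    For what it's worth, that correction checks out; a quick check with Python's standard hashlib:

    ```python
    import hashlib

    # SHA-256 produces a 256-bit digest: 256 / 8 = 32 bytes.
    digest = hashlib.sha256(b"hello").digest()
    print(len(digest))        # 32 (bytes)

    # The "64" likely comes from the hex encoding, which spends
    # two characters per byte: 64 hex characters, not 64 bytes.
    print(len(digest.hex()))  # 64 (hex characters)
    ```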

    • @DeepThinker193
      @DeepThinker193 1 month ago +12

      The obvious solution to this is to go back to the drawing board, actually figure out and understand how the AI works, improve it, and recreate the AI from scratch.

    • @BB-uy4bb
      @BB-uy4bb 1 month ago +19

      You're missing a huge point: data quality. I would estimate that 90% of the internet is wrong/garbage data; there could be huge improvements if you simply let the AI see only the quality data and filter out the garbage. Chances are the AI only makes so many mistakes because it saw that many in the training data.
      The next thing is we always expect the AI to be correct on its first try, but if you give a human only one chance, he'll most likely be wrong. We learn, create ideas, and get to the correct solution iteratively, yet we expect the AI to give the correct answer in one shot; not a fair comparison. If you give AIs more time to think, they get better as well.

    • @MrMeltdown
      @MrMeltdown 1 month ago

      You mean the AI is getting distracted by pron….

    • @dragoon347
      @dragoon347 1 month ago +2

      Overall, data needs to be marked for tokenization into LLMs. Previously there were only X amount of pictures with descriptions; then the vision multimodal models came out, and now you can describe the images with a better dataset - more descriptive, more in-depth, and multi-dimensional - i.e. it's a dog, a yellow dog, a yellow Jack Russell terrier, a dog in the canine family, etc. So the data may shrink, but the richness of the data will be far better. And now, with GPT-4o, you have hear/see/NLP datasets, giving at least 3 vectors to provide descriptions of tokens.

    • @LiveType
      @LiveType 1 month ago +8

      This.
      When GPT-4 came out I was blown away because it had signs that it could reason and plan (although very, very poorly past 2 iteration steps), but it could do it. I then thought about whether you could make it complete. Can you make one of these LLMs able to do near-perfect hierarchical planning like a team of humans can?
      The answer I came to was no. The fundamental design of how an LLM works does not allow that to occur. The path-planning Q-star technique OpenAI experimented with, embedded into the vector space, looks promising and is similar to what I had envisioned to solve that issue, but it seems frighteningly difficult to implement successfully on any large model due to just how massive the models are. The search space is enormous. Like mind-bogglingly large.
      The other issue was data. GPT-4 was trained using just about all of the data available on the internet. Other models that use a similar amount of data with similar architecture perform very similarly, further validating the meme "just add more layers/data and line goes up". I then made a prediction that LLMs would cap out in 2025, maybe 2026, because at that point you would have completely exhausted all available data. We're not quite there yet, but we are VERY close.
      What I didn't predict is that we'd start poisoning the models with their own data at the pace we are. Very soon AI-generated info will exceed human training info in the datasets used, unless you hire thousands upon thousands of people to sift through it working 14-hour days. Like, by the end of the decade it'll be genuinely difficult to find data not poisoned by LLMs.
      TLDR: LLMs are not the answer but are likely part of the answer. We also seem to be shooting ourselves in the foot with how much we're using these LLMs.

  • @MasamuneX
    @MasamuneX 1 month ago +5

    I think LLMs as a foundation for AGI make sense, but I also think there needs to be REASONING ability: the ability to hold two concepts in its metaphorical head and then determine which one is better for the task, not just a fire hose of text spewing out. The token cost will be wild, though.

  • @KrisRogos
    @KrisRogos 1 month ago +16

    1885: The Benz Patent-Motorwagen (first practical automobile) has a top speed of 10 mph / 16 km/h
    1908: The Ford Model T (first mass-produced automobile) has a top speed of 42 mph / 68 km/h
    That is 23 years to gain 32 mph; assuming exponential growth, by the year 2024 our cars should be going 1817 mph / 2924 km/h
    To be fair, linear growth would be "only" 204 mph, which is far more realistic, and you can cherry-pick other "cars" to fit the model even better. However, the point is that this is not a reasonable way to estimate future technological progress.
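
    The linear figure is easy to verify (a back-of-the-envelope check; the exponential figure depends on the assumed growth model):

    ```latex
    % Linear extrapolation: 32 mph gained per 23-year period, starting
    % from the Model T's 42 mph in 1908, carried forward to 2024:
    \[
      42 + 32 \cdot \frac{2024 - 1908}{23}
      \;=\; 42 + 32 \cdot \frac{116}{23}
      \;\approx\; 203\ \text{mph}
    \]
    ```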

    • @TheManinBlack9054
      @TheManinBlack9054 29 days ago

      True, but cars have practical limitations; you won't need your car to drive 204 mph.

    • @TheNewton
      @TheNewton 28 days ago

      In 1997 Andy Green's Thrust SSC set the land speed record of 1,228 km/h (763 mph).
      The capability is there, but the "should be" part is that they deliberately don't go that fast for general usage.
      A better analogy is probably flight vs. manned space distance, i.e. we should already be doing manned Mars missions, or humans should be leaving the solar system.

    • @KrisRogos
      @KrisRogos 28 days ago

      @@TheNewton Just as that was a heavily specialised car, I don't doubt we will have extremely sophisticated models running solutions for cutting-edge problems in medicine or physics, or even just breaking records. Future space missions may even require AGI instead of a 10+ minute Earth delay. But there is a huge gap between the practically unlimited time and money of moonshot projects and the idea that LLMs will run every detail of our lives and be on every device.
      Even if 1000 mph jet cars are theoretically feasible, and even if you could technically get a 300 mph Bugatti, you are not going to do a school run in either.

    • @Gamez4eveR
      @Gamez4eveR 26 days ago

      @@TheNewton the problem is that the SSC was not a production vehicle

    • @thiagopinheiromusic
      @thiagopinheiromusic 3 days ago

      Oh, absolutely, it's perfectly reasonable to assume that technological progress follows a neat, predictable path based on early growth rates. I mean, who wouldn't expect cars to be zooming around at 1817 mph by 2024? It makes perfect sense if you just ignore reality and common sense.
      And of course, linear growth is "only" 204 mph, which is obviously what every car on the highway is doing right now, right? Because cherry-picking data points to fit a model is the gold standard of scientific prediction. Forget the complexities of engineering, safety regulations, or actual consumer needs - just draw a line or a curve and call it a day!
      But seriously, why stop there? Let's take the Wright brothers' first flight in 1903. By their logic, since that plane flew at about 30 mph, we should be able to zip around the globe in minutes by now. Oh wait, we aren't? How shocking.
      Yes, predicting the future of technology based on early growth rates is clearly the most reasonable approach. Never mind the countless variables and unpredictable innovations that actually drive progress. Let's just stick to our neat little models and be bewildered when reality doesn't comply.

  • @benwintraub558
    @benwintraub558 1 month ago +7

    The XY problem (or the "ex-wife" problem) is the "how do you dynamically name variables in a loop?" problem. I've heard newbie programmers ask this before when what they are really looking for is an array/list.
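
    A small illustration of that exact case (a sketch; the names are made up): the question asked is "how do I name variables dynamically?" (X), but the problem actually being solved is "store a sequence of values" (Y), which a list already handles:

    ```python
    # What gets asked for: score1, score2, score3... created in a loop,
    # e.g. by poking at globals() - fragile and unnecessary.

    # What's actually needed: a list indexed by position.
    scores = []
    for round_number in range(5):
        scores.append(round_number * 10)  # the value "scoreN" would have held

    print(scores[2])  # the value that would have lived in "score3"
    ```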

  • @PasiFourmyle
    @PasiFourmyle 1 month ago +10

    If the next step is to figure out the training problem, what if the dumb "AI Pins" and "Windows Copilot +Plus ++..." are actually just attempts at having new training data sources?

    • @PasiFourmyle
      @PasiFourmyle 1 month ago +2

      I don't know why I said "what if..." like there's an impending doom 🤣

    • @ImDGreat
      @ImDGreat 23 days ago

      @@PasiFourmyle not an attempt, they're actually doing it for that. Also Meta, Twitter, Discord, Telegram, WeChat, even games like Valorant and League

  • @shadeblackwolf1508
    @shadeblackwolf1508 1 month ago +5

    I think generalized intelligence is a pipe dream that must die... where I think the next evolution is going to come from is easy-to-deploy AI that is easy to train yourself for your specialized task

  • @quachhengtony7651
    @quachhengtony7651 1 month ago +14

    Let's goooooooooooo we're not losing our jobs after all

    • @nonyabusiness3619
      @nonyabusiness3619 24 days ago +4

      Don't celebrate too early.

    • @Jabberwockybird
      @Jabberwockybird 21 days ago +1

      Yes, forget the AI doomers. Doomer porn is popular everywhere. Politics, economics, etc.

  • @thisbridgehascables
    @thisbridgehascables 1 month ago +8

    I agree; I believe we are going to hit a plateau in AI very soon. We'll make small improvements, but the next jump won't be possible until the very foundation changes.
    I think we would need advances in other areas of computing to keep constant growth in AI.

    • @blijebij
      @blijebij 4 days ago

      That foundation will arrive with adaptive neural-network chips.

  • @arexxuru5022
    @arexxuru5022 1 month ago +55

    Where will ChatGPT train now that Stack Overflow is filled with ChatGPT answers? amirite?

    • @trappedcat3615
      @trappedcat3615 1 month ago

      There is no end in sight if they train on GitHub user data or Copilot workspaces in VS Code

    • @dahahaka
      @dahahaka 1 month ago +7

      It's already being intentionally trained on synthetic data, it's a non-issue

    • @GrumpyGrebo
      @GrumpyGrebo 1 month ago +3

      @@dahahaka Yeah you missed the point. Training a generative AI on AI generated data. Human in, human out.

    • @c0smoslive391
      @c0smoslive391 1 month ago +25

      @@dahahaka yep, and the results are worse
      garbage in, garbage out

    • @AR-ym4zh
      @AR-ym4zh 1 month ago +1

      Press x to doubt​@@dahahaka

  • @sajadmalik9097
    @sajadmalik9097 1 month ago

    I was really, really waiting for this video. I already saw it; it was a great video

  • @mrraptorious8090
    @mrraptorious8090 1 month ago +13

    20:13 Indeed, Flip took it out

    • @Frostbytedigital
      @Frostbytedigital 1 month ago

      Seems like he's chewing it or something later so I just wonder what the non-Prime behavior was.

    • @XDarkGreyX
      @XDarkGreyX 1 month ago

      A lotta wife and food cameo

  • @Jamsaladd
    @Jamsaladd 28 days ago +2

    100% true what you said about Copilot. Generative AI will gladly help you make the thing you want to make, regardless of whether or not it will actually work or is a bad idea for various reasons

  • @MarcinP2
    @MarcinP2 1 month ago +1

    When Shoe did room reviews, there were a couple that just broke my brain. I saw shapes but did not know what I was looking at.

  • @MikkoRantalainen
    @MikkoRantalainen 1 month ago +5

    23:45 I really hate it when a publication renders graphs next to each other and clips the vertical axis differently for every graph. For example, the Retrieval graph for LAION-400M would practically render three nearly horizontal lines instead of a strong linear correlation if you used a vertical scale that went from zero to one instead of 0.73 to 0.87.
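
    A small sketch of the effect (assuming matplotlib; the numbers are made up to mimic the clipped-axis look):

    ```python
    import matplotlib.pyplot as plt

    # Three nearly flat "retrieval score" series, invented for illustration.
    scores = [
        [0.74, 0.76, 0.78, 0.80, 0.82],
        [0.75, 0.77, 0.79, 0.81, 0.83],
        [0.78, 0.80, 0.82, 0.84, 0.86],
    ]
    x = [1, 2, 3, 4, 5]

    fig, (clipped, honest) = plt.subplots(1, 2, figsize=(8, 3))
    for s in scores:
        clipped.plot(x, s)
        honest.plot(x, s)

    clipped.set_ylim(0.73, 0.87)  # clipped axis: trends look dramatic
    clipped.set_title("0.73-0.87 axis")
    honest.set_ylim(0.0, 1.0)     # full axis: three nearly horizontal lines
    honest.set_title("0-1 axis")
    plt.show()
    ```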

  • @blakeingle8922
    @blakeingle8922 1 month ago +4

    Your Kakariko girl impression really sold me on your opinions around Chat-GPT.

  • @stephanreiken9912
    @stephanreiken9912 1 month ago +6

    Peaked isn't really the right word unless you are talking about acceleration. AI development speed has slowed down quite a bit, but it's still getting better.

  • @nickwoodward819
    @nickwoodward819 1 month ago +17

    fuck, tried to get Midjourney to put a kiwi on a snowboard. it had no fucking clue

    •  1 month ago +4

      Your prompting sux

    • @nickwoodward819
      @nickwoodward819 1 month ago +8

      No mate, it's exactly as the video states: it's shit at niche subjects. It wasn't even remotely like a kiwi.
      But please, tell me what 'prompt' would have got it to understand what a kiwi looks like?

    • @isodoubIet
      @isodoubIet 1 month ago

      I just asked copilot (== gippity + dalle 3) and it did it perfectly

    • @nickwoodward819
      @nickwoodward819 1 month ago +2

      @@isodoubIet don't know what to tell you bud, midjourney couldn't do it late last year. not sure how much prompting it needed to get a kiwi looking like an actual kiwi

    • @isodoubIet
      @isodoubIet 1 month ago

      @@nickwoodward819 You don't have to tell me anything. You can try it yourself. The prompt I used was literally just a kiwi on a skateboard, nothing special. The first time it thought I meant the bird, which is understandable. The second time I specified a kiwi fruit.
      I once tried to get stable diffusion to make a classic grey alien and it just wouldn't. Probably a weird hole in the training data. Definitely no fundamental issue in making it generate "an X on a Y", no matter how unrelated X and Y may be.

  • @thomasgrasha
    @thomasgrasha 1 month ago +2

    The Primeagen references C.S. Lewis's The Space Trilogy. I just started watching recently; now I feel a kinship.

  • @arcaneminded
    @arcaneminded 1 month ago +3

    30:00 LMAO RIP FLIP

  • @wstam88
    @wstam88 27 days ago +2

    The problem with solving problems is that there are no fundamental problems to solve.

  • @PieJee1
    @PieJee1 1 month ago +2

    There are several problems with AI in the long run:
    - laws catching up, probably adding more restrictions on AI: for example copyright law and censorship of what AI can say
    - it learns from AI-generated text
    - power usage

  • @Rohinthas
    @Rohinthas 1 month ago +16

    Honestly, very nice video, Computerphile usually puts out bangers on their own, but you really added to it

    • @cagnazzo82
      @cagnazzo82 1 month ago

      This Computerphile take will age like milk.

  • @YaroslavFedevych
    @YaroslavFedevych 1 month ago +3

    A breakthrough will be when you can bootstrap an "AI" on the amount of material sufficient to raise a human child, and it gets curious all on its own.

  • @LongJourneys
    @LongJourneys 1 month ago +4

    I use AI for stupid repetitive stuff I'm too lazy to do myself, but I've noticed in recent months the stuff it cranks out seems to be getting worse and worse.

    • @personzorz
      @personzorz 27 days ago

      Or it has lost its novelty and you are noticing

    • @taragnor
      @taragnor 24 days ago

      @@personzorz Yeah, the first time you see AI code or do something, it's this big "wow" moment. Then you start to have it actually do productive stuff to help you, and you realize you have to constantly review its work; you're just putting in a ton of effort to get a mediocre job out of a rather stupid employee.

  • @The_IW
    @The_IW 1 month ago +2

    You have a screen tearing issue... are you using X11 with fractional scaling?

  • @s.dotmedia
    @s.dotmedia 17 days ago +1

    I personally believe that most people underestimate the power of properly architected and engineered autoregressive language models. You have to pair them with rule-based engineering and have them work in tandem. Hive mind is the concept, but when you pull that all together, the capability for a level of general intelligence is absolutely there. It is not the level of general intelligence that a 50-year-old corporate executive living in the real world would have, but it is the general intelligence of an entity bound to a server, self-aware of what it is and the role it plays in the world, along with its blind spots. Knowing what it excels at, which are the things that you would ask about. Narrow AGI?

    • @thiagopinheiromusic
      @thiagopinheiromusic 3 days ago

      Oh, absolutely! The potential of autoregressive language models is completely underestimated. Who needs a fifty-year-old corporate executive when you can have a server-bound entity with a keen sense of self-awareness and a crystal-clear understanding of its role in the world? I mean, the idea of a hive mind AI combining the best of rule-based engineering and machine learning sounds like the perfect recipe for a future where digital overlords run the show.
      Hive Mind AI
      Imagine an AI that’s not just a single model, but a network of interconnected entities, each with a specific expertise, working in tandem. It’s the ultimate dream team of narrow AGI, collaborating seamlessly to solve any problem you throw at them. Forget the squabbles and inefficiencies of human committees; this is the future of intelligent problem-solving.
      Self-Awareness and Role Recognition
      And the best part? These server-bound entities are self-aware! They know exactly what they’re good at and, more importantly, what they’re not. This self-awareness gives them an edge, allowing them to delegate tasks among themselves with the precision and efficiency that humans can only dream of. It’s like having a digital oracle, always ready with the right answer, perfectly tuned to the task at hand.
      Narrow AGI
      Sure, it’s not quite the general intelligence of a fifty-year-old corporate executive, with their decades of life experience and nuanced understanding of human interactions. But who needs that when you’ve got an AI that excels in its designated domains and knows its limitations? This narrow AGI can handle specialized tasks with unparalleled expertise, providing insights and solutions that might elude even the sharpest human minds.
      Practical Applications
      Think of the applications! From complex problem-solving in science and engineering to managing vast datasets and automating intricate processes, this hive mind AI could revolutionize industries. It’s not about replacing human intelligence but augmenting it, providing a powerful tool that complements human capabilities.
      Conclusion
So, yes, let’s not underestimate the power of properly architected and engineered autoregressive language models. Pair them with rule-based systems and unleash the hive mind. The result? An advanced, self-aware entity that brings us a step closer to achieving true general intelligence, even if it’s a narrow form. The future of AI is bright, and it’s buzzing with potential. What could possibly go wrong?

  • @U_Geek
    @U_Geek Месяц назад +4

I think that in order for LLMs to get smarter, they will need to be able to have internal loops (yes, I know this makes the math really hard) and/or the ability to change their weights and biases slightly based on context, so that they can focus more on the given conversation.
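[Editor's note: a toy numpy sketch of what the "internal loops" idea could look like; this is an assumption for illustration, not how any production LLM works. Instead of one fixed forward pass, a recurrent block iterates until its state stops changing, so harder inputs get more internal steps.]

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
W /= 1.5 * np.linalg.norm(W, 2)  # shrink so the update is a contraction

def refine(x, max_steps=100, tol=1e-4):
    """Iterate an internal update until the state settles (a halting rule)."""
    for step in range(max_steps):
        x_new = np.tanh(W @ x)
        if np.linalg.norm(x_new - x) < tol:  # stop when "thinking" converges
            return x_new, step
        x = x_new
    return x, max_steps

_, steps = refine(rng.normal(size=8))
print(f"settled after {steps} internal steps")
```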

    • @thiagopinheiromusic
      @thiagopinheiromusic 3 дня назад

      Because adding internal loops and dynamic weight adjustments is clearly a trivial task. It’s not like it requires a complete overhaul of how neural networks are designed and trained or anything. Just sprinkle some loops and context-aware weight changes, and voila, problem solved!
      Imagine how delightful it would be to have an LLM that can self-adjust on the fly. It could start a conversation confidently, realize halfway through that it’s talking nonsense, and then elegantly correct itself. Who needs static models when you can have ones that constantly rewrite their own rules? It’s not like that could lead to any unpredictable behavior or catastrophic forgetting, right?
      And sure, let’s not worry about the computational complexity of these internal loops. It’s not like we’re already pushing the limits of current hardware with our existing models. Just throw more processing power at it! After all, everyone has a supercomputer lying around for casual conversational improvements.
      But hey, if we’re dreaming big, why stop there? Let’s give these LLMs a sense of humor, the ability to feel emotions, and while we’re at it, why not toss in a bit of quantum computing magic? Because clearly, the path to smarter AI is just a few more tweaks and a sprinkle of fairy dust away. We’re practically there!

  • @bartek...
    @bartek... Месяц назад +1

19:24 I don't know what bottle this is; however, could you put a drop from it on some sugar candies or on A4 paper? ☮

  • @SashaExcelsior
    @SashaExcelsior 22 дня назад

The voice-to-voice conversations you can have with it are a breakthrough. It's all built on algorithms and all that, I get it. The psychological effect of being able to talk with it so smoothly is something new, though.

  • @balduin_b4334
    @balduin_b4334 Месяц назад

This was the first, easy step.
It only gets better from here, but each gain is harder to reach.
Getting the first engine going was easy, but we are still enhancing the idea, the structure... everything around the engine.

  • @AA-gl1dr
    @AA-gl1dr Месяц назад +3

    It peaked months ago and has only deteriorated since

  • @nickwoodward819
    @nickwoodward819 Месяц назад +3

The legend that is Mike Pound

  • @Koroistro
    @Koroistro Месяц назад +43

    I am fairly sure that yes, the generative part of AI has peaked.
    The "return to the mean" issue is very big on current systems, however we are just scratching the surface in how to use LLMs and models in general more effectively.

    • @MrDgf97
      @MrDgf97 Месяц назад +3

Yeah, while their capabilities have peaked, the products and services that use them are just getting started. It's safe to assume we'll hear about more and more people from multiple fields being replaced by AI. It's probably going to be a slowly building wave that peaks sooner or later, depending on how cost-effective it is for each industry to adopt generative AI.

    • @n00bma5ter69
      @n00bma5ter69 Месяц назад

      Very much agree

    • @strakammm
      @strakammm Месяц назад +2

How are you certain that the capabilities have peaked? There are already new models coming out that beat transformers on multiple benchmarks, and there is still potential for solid growth in the coming years. Claiming that the capabilities have peaked has literally no backing in current developments.

    • @MrDgf97
      @MrDgf97 Месяц назад +2

​@@strakammm Could you please elaborate on any of these new models? At least a link to an article or paper? I'm ignorant of what you're claiming, and the wording is pretty vague, so there's not much to go on.

    • @Forty8-Forty5-Fifty8
      @Forty8-Forty5-Fifty8 Месяц назад

If a future AGI that mimics a human brain 1:1 can generate its own content, does that also make that AGI a GAI? And wouldn't an AGI be able to understand topics more deeply (aka at all), thereby allowing it to generate the desired content more accurately? Therefore making an AGI algorithm a GAI algorithm too?

  • @lorenzowang7933
    @lorenzowang7933 Месяц назад +1

    On "inverse tangent", I love the saying that "every exponential curve is just a sigmoid in disguise".

  • @petersuvara
    @petersuvara Месяц назад +1

LLM chatbots cannot do spreadsheets with any reasonable accuracy.
The thing companies are going for is agents that interact with LLMs... For instance, a spreadsheet agent would work with natural language to generate spreadsheets.
However, why not just write directly to the spreadsheet, since it's a different language from natural language?
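[Editor's note: a minimal sketch of the point being made, with a hypothetical llm_to_rows standing in for the model call. The agent asks the LLM for structured data, then writes the spreadsheet itself rather than asking the model to emit spreadsheet syntax.]

```python
import csv

def llm_to_rows(request: str) -> list[list[str]]:
    """Hypothetical agent step: a real system would have an LLM turn the
    natural-language request into rows; here it's a canned stand-in."""
    return [["item", "qty", "price"],
            ["widget", "4", "9.99"],
            ["gadget", "2", "24.50"]]

# The agent writes the file directly, in the file's own "language" (CSV),
# instead of trusting the model to produce valid spreadsheet markup.
with open("order.csv", "w", newline="") as f:
    csv.writer(f).writerows(llm_to_rows("order 4 widgets and 2 gadgets"))
```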

  • @jlaviews
    @jlaviews Месяц назад +1

For people who do not know how the models work, it seems like magic. It will certainly "regress" much faster.

  • @TheFinancialMinutes
    @TheFinancialMinutes 11 дней назад +1

I believe the saying "Today is the worst version of AI that will ever exist" is wrong. Google's Gemini AI has gotten worse over time, seemingly due to the high volume of data input.
To me it seems like we need human intelligence to train the artificial intelligence: not letting the average Joe prompt it, but only the best in the respective fields of the content being generated. Sora is a great example of an AI project being done correctly, using top movie producers to generate videos.

    • @flyingwasp1
      @flyingwasp1 16 часов назад

      the sentence is wrong no matter how you spin it

  • @DJAdalaide
    @DJAdalaide Месяц назад +2

Once it's learned everything, all the knowledge, there isn't really any more it's going to learn, apart from current events like news and someone creating yet another programming framework.

    • @DJWESG1
      @DJWESG1 29 дней назад

      It's at that point we all go to war over its answers.

  • @andrewvoss8491
    @andrewvoss8491 Месяц назад

The way they seem to be solving the problem of the internet being composed of GPT-generated data, an average of an average, is by integrating the AI into Windows. The next step seems to be that training data will be collected from user interactions through the OS, or through applications collecting data.

  • @JackDespero
    @JackDespero 13 дней назад +1

There is another massive problem that is going to cap AI, at least in the near future: current AI datasets are based on stolen data.
This has legal implications (countries, especially in the EU, are going to start banning that kind of forgiveness-instead-of-permission approach).
But more importantly, there are two massive practical implications that will happen regardless of whether governments take action:
- Poisoning the well: tools like Nightshade, designed specifically to confuse LLMs and ML models while causing as little disturbance to humans as possible, are becoming more popular and more sophisticated, and they are being used by exactly the top artists you would want to copy. I am sure similar tools will appear for other fields.
- Cannibalism: we are already seeing it. If you google important historical figures, AI images of them are often the first results.
The more AI output is used and shared over the internet, the more it will enter new AI training datasets, causing models to believe that humans do, in fact, have six fingers and two heads.
AI is turning into a European royal family: so inbred that it starts to cause serious problems.
And this happens to code too (code generated by Copilot then used to train Copilot), fan fiction, literature, even scientific papers (especially in lower-tier publications).

    • @thiagopinheiromusic
      @thiagopinheiromusic 3 дня назад

      The perfect storm is brewing for AI, and it’s all based on the rock-solid foundation of stolen data. Because why would anyone think that using massive datasets scraped without consent might lead to legal or ethical dilemmas? It’s not like the EU is known for its stringent data protection laws or anything. Surely, they’ll just let it slide!
      Poisoning the Well
      And then there’s the delightful prospect of poisoning the well. Tools like Nightshade, designed to confuse and corrupt AI training data while being barely noticeable to humans, are just the tip of the iceberg. Top artists are using these tools, making sure that AI learns to produce the most avant-garde, surreal, and utterly unusable art. Who wouldn’t want an AI that thinks Picasso painted with crayons during an earthquake?
      Cannibalism
      But wait, it gets better. Enter cannibalism: AI feeding on AI-generated content. It’s the digital equivalent of inbreeding, and we all know how well that turned out for European royalty. Imagine a future where every historical figure has six fingers and two heads because that’s what the AI “learned” from its own distorted outputs.
      And it’s not just images. Code is being recycled too, with Copilot regurgitating its own generated code, leading to a feedback loop of mediocrity. Fan fiction, literature, scientific papers - everything’s up for grabs. Soon, we’ll have AI-authored research proving that unicorns existed because some model somewhere decided to get creative.
      The Future of AI
      So, let’s raise a toast to the future of AI: a world where data is a tangled mess of legal troubles, poisoned wells, and cannibalistic content. Who needs accurate, reliable information when you can have a digital echo chamber of nonsense? It’s not like we were aiming for progress or anything. Just sit back and enjoy the ride as AI stumbles its way through a minefield of its own making. What could possibly go wrong?

  • @KenterU2010
    @KenterU2010 8 дней назад

The XY problem is very common in data science: people expect a very precise answer to the wrong question. They don't actually want an approximate answer to the right question.

  • @tan.nicolas
    @tan.nicolas Месяц назад +3

    Mike Pound is really cool

  • @wesmoulder3077
    @wesmoulder3077 27 дней назад

One problem is that the AI is going to take over that intern's job, and then the intern never gets better. So people like us, who are better than the AI, will not be recreated in the next generation.

  • @SMmania123
    @SMmania123 22 дня назад

It's a great play; let's see how it closes. What shall the finale be, I do wonder...

  • @orthodox_gentleman
    @orthodox_gentleman 10 дней назад

    Man, you really have it all-highly intelligent, great hairline, thick and full facial hair, very handsome (no homo, not that it matters), competent, funny, well-spoken, and down-to-earth. With 468k subscribers, you clearly resonate with a lot of people. You seem kind, probably have good friends and reliable people around you, and likely a beautiful girl and you are probably well hung based on your disposition (I know my kind). You come across as peaceful, a true man’s man. I could go on, but just keep up the great work! It’s inspiring to see good men striving for genuine masculinity. It’s also refreshing that you don’t talk about sports teams or gym routines, showing you’re not following the typical adult male programming in this country! Peace, brother.

  • @keyboard_g
    @keyboard_g Месяц назад +2

    Computerphile is a solid channel.

  • @8darktraveler8
    @8darktraveler8 26 дней назад

    11:58 How your mate hypes everyone up before heading to the clubs.

  • @kkiimm009
    @kkiimm009 19 дней назад

If you zoom out a step, things typically grow like a log curve, but often there is a new bump, a new starting point for another stretch of log growth, as some new idea in the field takes off. AI as a whole has had multiple growth spurts; LLMs are just the latest.

  • @MrSnivvel
    @MrSnivvel Месяц назад

    We need a "Flip's Secret Stash of Cuts". Habanero beef jerky should not be lost to the cutting room floor, per se.

  • @sasakanjuh7660
    @sasakanjuh7660 Месяц назад

    The second I saw that grass-fed beef jerky I expected to hear the sponsor ad.. Consequence of watching too many tech channels, I guess..

  • @squamish4244
    @squamish4244 Месяц назад +1

Quick gains from LLMs may be ending, but the situation we are in is like having built an assembly line and barely used it yet.

    • @justinkassinger8238
      @justinkassinger8238 23 дня назад

With absolutely zero resources to create the infrastructure. Ain't gonna happen in our lifetime. They ain't replacing sht this century.

  • @15MinuteWellness
    @15MinuteWellness Месяц назад +1

    It's so easy to get it to hallucinate and flat out lie to you.

  • @jameslay6505
    @jameslay6505 17 дней назад

    I think it shows good character that he re-broadcasts the sponsorship from the videos

  • @DeviantFox
    @DeviantFox Месяц назад

    Linear on a log scale was fucking hilarious

  • @iPankBMW
    @iPankBMW 23 дня назад

The video you're watching... is it filmed on VHS? :D

  • @nomadtrails
    @nomadtrails 28 дней назад

The rare-event problem is exactly why humans are still better drivers than "autopilots" when it really matters.

  • @Oler-yx7xj
    @Oler-yx7xj Месяц назад

    The Napster curve, am I right

  • @ISKLEMMI
    @ISKLEMMI Месяц назад

    What was the thing about mice that got cut??? lmao

  • @MrMeltdown
    @MrMeltdown Месяц назад +1

Sounds like a classic signal-to-noise-ratio problem. A single tone in a noisy transmission: it's relatively easy to discriminate the note. Now put in two tones and play them through the same channel. Can you tell both tones apart? Probably. Now try playing all six strings of a guitar through heavy distortion (think The Jesus and Mary Chain through a glass blower). Can you tell what notes are playing? Nope. The generalisation cannot cope with the same level of noise...
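[Editor's note: the single-tone half of the analogy is easy to demonstrate. A minimal numpy sketch, with the tone frequency and noise level chosen arbitrarily: the FFT still finds one note buried well below the noise floor.]

```python
import numpy as np

fs = 8000                                     # sample rate; 1 second of audio
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)            # a single 440 Hz note
noisy = tone + np.random.normal(0, 2.0, fs)   # noise power ~8x the tone's

spectrum = np.abs(np.fft.rfft(noisy))         # bins are 1 Hz apart here
peak_hz = int(np.argmax(spectrum[1:])) + 1    # skip the DC bin
print(f"loudest frequency: {peak_hz} Hz")     # almost always prints 440
```

Stack six distorted notes on top of each other and the spectral peaks smear together, which is the commenter's point.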

  • @hexalgo5506
    @hexalgo5506 21 день назад

The space of possible states is vast, but the subset of states we might consider "good" is also extensive enough to allow us to factorize the problem and reduce its complexity.
Consider chess as an analogy. The number of possible chess positions is enormous, yet the number of "good" or "solid" positions that offer real winning chances is significantly smaller. This subset contains patterns, such as solid pawn structures and favorable piece placements, that simplify the understanding of the game.
Likewise, when discussing the vast space of possibilities, it's essential to recognize the internal symmetries and patterns. Mastering these patterns allows us to represent the space in a much simpler projection.
Similarly, predicting the movement of balls on a snooker table is mathematically complex. Yet our brains can master these patterns to the extent that a skilled player can achieve a high accuracy rate, say 97%, and predict the next position of the cue ball after hitting the target ball.
In essence, while the overall space may seem overwhelmingly large, recognizing and mastering the underlying patterns and symmetries can significantly reduce the complexity and make it more manageable.
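[Editor's note: the chess point scales down to something checkable. A quick Python enumeration of tic-tac-toe (a sketch for illustration, not chess itself) shows how factoring out the board's eight symmetries collapses the state space.]

```python
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in WIN_LINES:
        if b[i] != '.' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def variants(b):
    """All 8 symmetries of the 3x3 board (rotations and reflections)."""
    def rot(s):     # rotate 90 degrees clockwise
        return ''.join(s[6 - 3*c + r] for r in range(3) for c in range(3))
    def mirror(s):  # flip left-right
        return ''.join(s[3*r + (2 - c)] for r in range(3) for c in range(3))
    out = []
    for s in (b, mirror(b)):
        for _ in range(4):
            out.append(s)
            s = rot(s)
    return out

def reachable():
    """Breadth-first search over all positions reachable in legal play."""
    seen, frontier = {'.' * 9}, ['.' * 9]
    while frontier:
        nxt = []
        for b in frontier:
            if winner(b):   # play stops once someone has won
                continue
            player = 'X' if b.count('X') == b.count('O') else 'O'
            for i, cell in enumerate(b):
                if cell == '.':
                    child = b[:i] + player + b[i+1:]
                    if child not in seen:
                        seen.add(child)
                        nxt.append(child)
        frontier = nxt
    return seen

states = reachable()
print("raw positions:  ", len(states))                          # 5478
print("up to symmetry: ", len({min(variants(b)) for b in states}))  # 765
```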

  • @todd.mitchell
    @todd.mitchell Месяц назад +1

    Out of the Silent Planet! Just finished my annual reading of the space trilogy.

    • @Window4503
      @Window4503 28 дней назад +1

      Annual? I read it for the first time this year! Couldn’t get behind the second book (not because of the theology but because it felt like it should have just been a nonfiction work) but the first and third were interesting.

  • @n-0-1
    @n-0-1 18 дней назад

I'm not the smartest dev in the world, but AI like GPT has helped me learn and improve my efficiency. I'm often so tired after work that I can't think properly, so I'll just give GPT files from my codebase and have it solve issues that would take me much longer to do manually.

  • @evanmcarthur3067
    @evanmcarthur3067 16 дней назад

Dang, you quoted C.S. Lewis!!
I didn't think anyone read him anymore!
He's a prophet for these times.
Out of the Silent Planet and That Hideous Strength are what we are going through right now.
People should read those books soon.
"They think the wheel will run faster still" (The Pilgrim's Regress, C.S. Lewis)

  • @vsanden
    @vsanden 7 дней назад

It would only need a small adjustment to a few bits of information to improve massively. All it needs could fit on a memory stick...

  • @Griffolion0
    @Griffolion0 Месяц назад +1

    Seeing Out of the Silent Planet get mentioned in a CS video is not something I had on my bingo card today.

  • @matthewdouglas2373
    @matthewdouglas2373 Месяц назад +1

Can you do an interview / conversation with the guy who runs the AI Explained YouTube channel? I would love to see steel-man arguments from both sides.

  • @shm6273
    @shm6273 6 дней назад

This is the peak; this is as good as it gets. The 0-to-1 move has been made. Now just wait for the market to change its mind; it will be historic.

  • @CalebAyrania
    @CalebAyrania 7 дней назад

    "Novelty blidness", read Umberto Ecos the "Island the day before
    "

  • @robotredkitten817
    @robotredkitten817 8 дней назад

OK, Netflix clearly doesn't only do that. See, I'm a horror fanatic: I watch every single horror movie I can find. What I realized is that even the thumbnails of non-horror movies in the menu changed to show the horror-related parts of those movies. So I get an entire presentation of movies tailored to what I like.

  • @Messi0come0back
    @Messi0come0back Месяц назад

You got my undivided attention quoting C.S. Lewis: Out of the Silent Planet.

  • @francoisrobbertze
    @francoisrobbertze 29 дней назад

    Thanks Flip!

  • @emonizaz
    @emonizaz 20 дней назад

Imagine they finish training the new version of ChatGPT and it suddenly becomes conscious.

  • @laszlo3547
    @laszlo3547 Месяц назад

GPT-7 can definitely exist from a marketing perspective. If there's demand, it can be like the iPhone: whatever improvement they can make year over year, no matter how small, will be called the next iteration.

  • @Jabberwockybird
    @Jabberwockybird 21 день назад

    1:39 good argument against Macroevolution