Generative AI Has Peaked? | Prime Reacts

  • Published: May 22, 2024
  • Recorded live on Twitch, GET IN
    Reviewed Video
    • Has Generative AI Alre...
    By: Computerphile | / @computerphile
    My Stream
    / theprimeagen
    Best Way To Support Me
    Become a backend engineer. It's my favorite site
    boot.dev/?promo=PRIMEYT
    This is also the best way to support me: support yourself by becoming a better backend engineer.
    MY MAIN YT CHANNEL: Has well-edited engineering videos
    / theprimeagen
    Discord
    / discord
    Have something for me to read or react to?: / theprimeagenreact
    Kinesis Advantage 360: bit.ly/Prime-Kinesis
    Hey, I am sponsored by Turso, an edge database. I think they are pretty neat. Give them a try for free, and if you want you can get a decent amount off (the free tier is the best (better than PlanetScale or any other))
    turso.tech/deeznuts
  • Science

Comments • 831

  • @JGComments 24 days ago +529

    Devs: Solve this problem
    AI: 10 million examples please

    • @DevPythonUnity 23 days ago +12

      "Actually, AI should strive to be just smart enough to acquire and contemplate new data, including introspection. What do you do when confronted with an unsolvable problem? You gather data, experiment, collect results, then engage in self-reflection to update your knowledge base. It's not merely about amassing data, but rather about possessing the capability to acquire fresh data, experiment with it, and engage in introspection."

    • @tempname8263 23 days ago +18

      @@DevPythonUnity please repeat your message, but this time use no more than 1 space in between words
      generate 4 different versions of such message

    • @alexandrecolautoneto7374 23 days ago +3

      Devs: Generate 10 million examples of this problem.

    • @alexandrecolautoneto7374 23 days ago +6

      @@DevPythonUnity ! Disclaimer: GPT was trained on data until 2021; any answers after that date can hallucinate. We will solve this by searching Google and feeding the first results into your context, but it will feel like we are now able to generalize to any answer.

    • @Sky-fk5tl 23 days ago +1

      Isn't that how humans learn too...

  • @Watanabe911 17 days ago +53

    Isn't it crazy that you hear more people worry about AI polluting the internet for training future AIs than about the fact that it is polluting the internet for, you know, YOU AND ME?

    • @g_wylde 15 days ago

      True, but I guess most of us who are vaguely internet-savvy can tell the AI crap from legitimate information. AIs themselves cannot do that; they'll just take it in and regurgitate something even worse. Which means that those people who are less savvy will be faced with more and more fake information, and all of us will be swimming through growing piles of garbage to find anything useful.

    • @jakke1975 4 days ago +3

      Environmental pollution by AI is even a lot worse, and honestly, for what? An advanced chat toy for adults that operates with the "intelligence" of a dog?

  • @drditup 19 days ago +71

    If only all Windows users would start taking pictures of everything they do so the AI algorithms can get more data. Maybe like a screenshot every few seconds. I think I recall something like that.

    • @samblackstone3400 8 days ago +6

      AI data collection legislation now.

    • @magfal 6 days ago

      @@samblackstone3400 could even drop the word AI from it.....

  • @Afro__Joe 24 days ago +63

    AI is becoming like ice cream to me, good every once in a while, but I get sick of too much of it. With Samsung trying to shove it into everything in my phone, MS trying to shove it into everything PC-related, Google pushing it at every turn, and so on... ugh.

    • @DJWESG1 20 days ago +2

      That's the same Samsung who can't even get its spellchecker and autocorrect to work efficiently for people with poor spelling and grammar.

    • @the0ne809 20 days ago

      Google using AI for its search engine is wild to me.

    • @TheManinBlack9054 19 days ago

      @@the0ne809 every search engine uses it

    • @Overt_Erre 17 days ago

      They're pushing it because they want to collect more data from you. AI will seem free and useful as long as they think more data will improve their efficacy. Once they see the diminishing returns, suddenly you'll be asked to pay and the usage rates will plummet.

  • @MrSenserus 24 days ago +152

    The Computerphile guys are my uni lecturers at the moment and for the coming year. It's pretty cool to see this.

    • @Michael-ty2uo 19 days ago +8

      Damn, lucky as f. They definitely enjoy teaching others about comp sci and math topics; that can't be said about most professors.

    • @WretchMusou 15 days ago

      Are they nice people in real life? They seem to be in the videos...

    • @MrSenserus 14 days ago +1

      @@WretchMusou Yeah, generally! Definitely some characters, though. Steven is a great lecturer and awesomely knowledgeable, but definitely a quirky character.

    • @precooked-bacon 8 days ago

      Very lucky. Make good use of the time.

  • @denysolleik9896 24 days ago +330

    It can do anything except tell you that it doesn’t know how to do something.

    • @Vlad-qr5sf 24 days ago +6

      If it can do anything then it doesn’t need to tell you that it can’t do something. Your statement is contradictory.

    • @shafferfs 24 days ago

      @@Vlad-qr5sf shut up nerd

    • @denysolleik9896 24 days ago +73

      @@Vlad-qr5sf someone always thinks they’re smarter than me.

    • @hootmx198 24 days ago +12

      Just like your average internet user haha

    • @JGComments 24 days ago +7

      Right, it doesn’t actually fundamentally understand what anything is, like what a cat is versus what a dog is.

  • @SL3DApps 24 days ago +364

    It's crazy how OpenAI's only way to stay relevant in this market vs big tech such as Google and MS is to sell the hype that AI will not peak in the near future. Yet they are the company everyone is relying on to say whether AI has or has not peaked... why would they ever want to admit to anything that can be damaging to their own company?

    • @furycorp 24 days ago +49

      Altman just needs everyone to hand over more personal data and private/internal documents from businesses so he can live out the megalomaniac fantasies that he talks about in interviews

    • @alexandrecolautoneto7374 24 days ago +46

      AI trains on the internet -> AI fills the internet with garbage -> AI doesn't have good training data anymore...

    • @hughmanwho 24 days ago +2

      @@furycorp I'd be curious to see these interviews you are referring to

    • @hughmanwho 24 days ago +7

      My guess is that ChatGPT 5 will be better quality. 4 definitely has some issues.

    • @dixztube 24 days ago +8

      @@furycorp he isn't trustworthy at all

  • @granyte 24 days ago +215

    "steer me into my own bad ideas at an incredible speed" LMAO, this is perfect; it's exactly what it does, when it even works at all. I don't know if my skills have improved that much since GPT-4 came out or what, but it feels like Copilot and ChatGPT have become way dumber since launch.

    • @allansmith350 24 days ago +15

      I use all of them and I kind of agree, but I will say, I've cowboyed into some small project solutions VERY fast with AI. They're surely not robust or maintainable though.

    • @AndrasBuzas1908 24 days ago +12

      It breaks down the moment you try to do something complex that it hasn't seen before.
      Even then with small problems, it can completely miss the point. It's only really good for the occasional auto complete suggestions.

    • @rngQ 23 days ago +4

      Engineers at OpenAI have talked about how the quality of generation scales with compute. So as more people use GPTs, I can imagine the compute pool being more divided, which lowers the quality of the output. Look at how drastically it scales with Sora, for example.

    • @elPresidente650 23 days ago +3

      @@allansmith350 I've been using it for a while, and honestly, I can't complain too much. I don't ask it to do anything fancy, though. It comes in handy when writing documentation based on my layman's prompts. It needs to be edited, of course, but it does a good job at organizing my ideas.

    • @TheManinBlack9054 23 days ago +1

      Use Claude 3 Opus, it's far better for coding. Seriously. Opus is really better.

  • @MrKlarthums 24 days ago +73

    There's plenty of software that has simultaneously improved while having an entirely degraded user experience. If companies feel that it makes them more money, they'll absolutely do it. LLMs will probably be part of that.

    • @monad_tcp 24 days ago +13

      Windows 11, for example: structurally the thing is actually better than the previous ones, but in user experience it degraded so far from Windows 7. Even though Windows 11 is prettier than Windows 10, which was ugly as hell, it's far from the simple beauty of Windows 7's glass, and it's barely usable.

    • @Forty8-Forty5-Fifty8 24 days ago +5

      @@monad_tcp To be fair, isn't that just the Microsoft development cycle: alternating between releasing a good product and then releasing a shitty one? At least that is what I've been told since I was a kid, and my only experience is W7 (good), W8 (dogshit), W10 (good), and then W11 (dogshit, but improving).

    • @monad_tcp 24 days ago +1

      @@Forty8-Forty5-Fifty8 Probably; it's the tick-tock cycle from old Intel.

    • @Forty8-Forty5-Fifty8 24 days ago +3

      @@monad_tcp Lol, I was just having a conversation yesterday with my grandfather about my conspiracy theory that Intel pretends to release a new generation every year when in reality it takes like 2-4+ generations for any noticeable performance difference. My motherboard just died and I was in the market for an upgrade, but it didn't seem like there was anything worthwhile. I guess there is something to that theory.

    • @monad_tcp 23 days ago +1

      @@Forty8-Forty5-Fifty8 I think Intel died at 14nm; nothing got better after that.

  • @amesasw 24 days ago +61

    One major problem: if I ask a person how to do something that none of us know the solution to, they may be able to theorize a solution, but they will often tell you they are guessing and not 100% sure about some parts of their proposed solution.
    ChatGPT can't really theorize for me or tell me that it is not sure of an answer but is theorizing a solution based on its understanding or internal model.

    • @doctorgears9358 23 days ago +28

      It will theorize and be confidently wrong. Which is honestly worse than it just admitting a lack of knowledge.

    • @BHBalast 22 days ago

      There is a compute-intensive method to check for model confidence. As LLMs are statistical models, one might prompt one multiple times and check whether the answers are the same. The second step can also be done by an LLM. This method works and was used in some paper associated with medical use of LLMs, but I don't remember the name.
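
A minimal sketch of that sampling idea (often called self-consistency), assuming a hypothetical `ask_llm` callable that samples the model at nonzero temperature so repeated calls can differ:

```python
from collections import Counter
from typing import Callable

def confidence_estimate(ask_llm: Callable[[str], str], prompt: str,
                        n_samples: int = 10) -> tuple[str, float]:
    """Sample the model several times; treat answer agreement as confidence."""
    answers = [ask_llm(prompt) for _ in range(n_samples)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / n_samples  # 0.9 means 9 of 10 samples agreed
```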

    • @reboundmultimedia 19 days ago

      If you give a human a new problem, they will often use tools, research, test things out, etc. to find the solution. There are very few humans who can simply solve a new problem without some kind of pretraining involved. There is no reason that a very, very good LLM can't do the same thing. They will be able to use tools the same way a human can.

    • @therealjezzyc6209 10 days ago

      @reboundmultimedia While what you're saying isn't wrong, it isn't accurate to say that humans and LLMs learn the same way, or that they learn the same relationships. First of all, humans learn faster than LLMs do, with less training data. Second, when faced with a challenging problem, a human will go off and collect new information; an LLM will not go and find new textbooks, put them into its training data, and retrain itself to learn new correlations. Humans can actively acquire new knowledge that they haven't seen or trained on; LLMs cannot acquire knowledge that wasn't implicit in the representations of the data they were trained on.

    • @justinwescott8125 7 days ago

      It will tell you it's not sure if you ask it to. But you're right that it's not a built in behavior.
      "Hey ChatGPT, for this conversation, if you give me an answer that you're not very sure about, I want you to tell me. In fact, for every answer you give, please give me a percentage that represents how sure you are, and explain how you arrived at that percentage."

  • @jameshickman5401 24 days ago +252

    Every exponential curve is secretly a sigmoid curve.

    • @zyansheep 24 days ago +5

      So far...

    • @AndrasBuzas1908 24 days ago +51

      Sigmoid grindset

    • @Forty8-Forty5-Fifty8 24 days ago +6

      what about the exponential curve

    • @kevin.malone 24 days ago +1

      @@AndrasBuzas1908 I wanted to say that

    • @MikkoRantalainen 23 days ago +18

      I would say that every exponential curve of *naturally occurring events* is secretly a sigmoid curve. You can have pure exponential curves in pure mathematics without any problems, but real-world events are limited by real-world physical limits, and those curves seem to follow a sigmoid in the big picture even though short-term results point to exponential behavior.
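
In symbols: a logistic curve is indistinguishable from an exponential early on and only reveals its ceiling later:

```latex
\sigma(t) = \frac{K}{1 + e^{-r(t - t_0)}},
\qquad
\sigma(t) \approx K\,e^{\,r(t - t_0)} \;\;\text{for } t \ll t_0,
\qquad
\lim_{t \to \infty} \sigma(t) = K .
```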

  • @MikkoRantalainen 23 days ago +26

    Modern image generators can do surprisingly well even with somewhat weird prompts such as "Minotaur centaur with unicorn horn in the head, steampunk style, award winning photograph" or "Minotaur centaur with unicorn horn in the head, transformers style, arc reactor, award winning photograph". Even "A transformers robot that looks like minotaur centaur, award winning photograph, dramatic lighting" outputs acceptable results.
    However, ask it for "a photograph of Boeing 737 MAX with cockpit windows replaced with cameras" and it will totally fail. The latter case has far fewer possible implementations, and this exactness makes it fail.

    • @pureheroin9902 18 days ago +5

      I need to see your search history 🤣🤣🤣

    • @MikkoRantalainen 17 days ago

      @@pureheroin9902 🤭My search history is actually pretty boring. Right now it looks like this:
      - phpunit assertequals github
      - css properties selectors sanitizer whitelist
      - sanitize css whitelist functions
      - phpunit assertequals clipped string
      - webp vs avif vs jpeg xl
      - what is intel ark
      - seagate exos helium
      - max fps cs
      - eu legislation consumer battery replacement
      - how Automatic Activation Device works
      - song of myself nightwish

  • @bwhit7919 22 days ago +21

    Most people misunderstand when they hear AI follows a "power law". If you read OpenAI's paper on the scaling laws, you need a 10x increase in both compute and data to get a 0.3 reduction in the loss function. In other words, you need exponentially more data to keep making the models better. It's not that the models are getting exponentially better.
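
For reference, this is the dataset-size power law from the Kaplan et al. scaling-laws paper (exponent quoted from memory, so treat it as approximate); the point is that a 10x data increase only shaves about 20% off the loss:

```latex
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \quad \alpha_D \approx 0.095
\quad\Longrightarrow\quad
L(10\,D) \approx 10^{-0.095}\, L(D) \approx 0.80\, L(D).
```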

    • @DJWESG1 20 days ago

      No, they just haven't figured out how to utilise small amounts of data.

  • @GigaFro 23 days ago +57

    Just last year, I was sitting in a makeshift tent in an office in downtown San Francisco, attending a Gen AI meetup. The event was a mix of investors and developers, each taking turns to share their projections on the future progress of AI. Most of the answers were filled with exponential optimism, and I found myself dumbfounded by the sheer enthusiasm. When it was my turn, I projected that we were peaking in terms of model performance, and I was certain I was about to be ostracized for my view. That day I learned that as soon as hype enters the room, critical thinking goes out the window - even for the most intelligent minds.

    • @sp123 23 days ago +11

      People go into tech because it's the last gold rush of easy money.

    • @TheManinBlack9054 19 days ago +1

      Great! You seem to have found your audience here, but if I may ask, what were your projections based on?

    • @Danuxsy 19 days ago +1

      But you would have been wrong? GPT-4o is clearly a step up from GPT-4, and OpenAI have stated themselves that we are far from the limit of generative models.

    • @justahamsterthatcodes 19 days ago +3

      We certainly are plateauing. Compare GPT-2 to GPT-3: wild difference. Now compare GPT-3 to GPT-4: much less difference. Or GPT-4 to GPT-4o.

    • @skyrimax 19 days ago +1

      Attended an ML day type event last year and had a similar experience. But what dumbfounded me even more was the complete disregard for the social implications of ChatGPT-type programs, like the new Google Overview telling depressed people to jump off a bridge. I think that's a similar observation to the one you had about critical thinking, but on the social side.

  • @TomNook. 24 days ago +92

    I hate how AI has been forced into everything, just like crypto and NFTs a couple of years ago

    • @MasterOfM1993 24 days ago +34

      Somehow it feels like the people who used to talk about web3 all the time now talk about AGI all the time.

    • @Slashx92 24 days ago

      Sadly, this is somewhat useful for the corporate world, so it will stay; not like NFTs, which just died on their own.

    • @francisco444 24 days ago

      AI is in everything because it's a universal translator, so it makes sense to put it everywhere.
      Crypto is great but of limited use.

    • @thewhitefalcon8539 24 days ago +8

      @@MasterOfM1993 some people running NFT companies are running AI companies now

    • @marceljouvenaz257 24 days ago

      Elon is investing $10 bln in AI this year. YMMV, but that is my high water mark.

  • @techsuvara 23 days ago +73

    I like to say "AI accelerates you in the direction you're going, pray it's not the wrong one"...

    • @BaruyrSarkissian 9 days ago +1

      It's still good to reach the end of a wrong road faster.

    • @techsuvara 8 days ago +2

      @@BaruyrSarkissian That's the problem with wrong roads: if you're asking AI to take you somewhere, it doesn't know it's the wrong road. However, if you do things yourself, you can reason that you're down the wrong path much earlier.

    • @BaruyrSarkissian 6 days ago +1

      @@techsuvara Your initial statement is "AI accelerates you in the direction you're going"; you will go down wrong roads with and without AI.

  • @tequilasunset4651 24 days ago +22

    We didn't even go "from nothing to something" - current LLMs are just a marked spike/breakthrough in the capability of machine learning that's been around for ages. I think we'll still see huge improvement in the technology that enabled that breakthrough, but doubt there will be a "next level" - one that's not just a tech company branding a new product as such - for a good few years.

    • @TheNewton 19 days ago +5

      The breakthrough of course being: just throw more resources at the problem.

  • @snarkyboojum 24 days ago +12

    The main issue is that the people responsible for the fundamental approaches being used in deep learning today have never wrestled with the problem of induction. They need to read the classic positioning by Hume and then follow up with Popper. Humans don't use induction to reason about the world. It's shocking to me that otherwise highly educated people have never read basic philosophy of epistemology. Narrow education is to blame, really.

    • @ea_naseer 24 days ago

      Induction has a formula, Solomonoff induction; yes, it's intractable, but it's there. But there's no formula for deduction, not even an intractable one, not even an NP-hard one.

    • @specy_ 23 days ago +2

      This is a cool topic. Why would you say humans don't use induction in their daily life? Excluding the scientific world, which we can say doesn't always use it, induction is probably the simplest and most-used prediction technique among humans. I guess ML models can't really do much other than use induction to get a prediction, unless you are exhaustive with your possible inputs. What's your idea for not using induction in ML?

  • @hamm8934 24 days ago +103

    Read up on the "frame problem" and "grounding problem". This is old news and has been known for decades. Engineers and venture capital just don't care, because it's not in their interest.
    Edit: also Wittgenstein's work on family resemblance and language games.
    Edit2: I should clarify that I am referring to the epistemological interpretation of the frame problem, not the classical AI interpretation. That is, the concern of an infinite regress from an inability to explicitly account for non-effects when defining a problem space; this is specifically at the level of computation, not representation. For example, if an agent is told "the spoon is next to the plate", how are all of the other non-effects, like a table, placemat, chair, room, etc., successfully transmitted and understood, while irrelevant inaccuracies like a swimming pool, cows, cars, etc. are omitted and not included in the transmission of information? Fodor, Dennett, McDermott, and Dreyfus have plenty of canonical citations and works articulating this problem.

    • @InfiniteQuest86 24 days ago +22

      As long as you profit before anyone figures it out, you win.

    • @abdvs325 24 days ago +2

      Those problems don't seem like limits at all. The frame problem is just about understanding relevant context, for which there is no definitive evidence that it can't be reproduced in AI. Neither has the grounding problem, which is just about understanding the real world rather than statistical relationships between words, been given any strong evidence that it is a limit on AI progress. This is laziness.

    • @hamm8934 24 days ago

      @@abdvs325 Those are extremely surface-level strawman understandings of both. Far greater minds than anyone watching this video have debated and formulated both of these critiques. You can hand-wave all you want, but the white papers have been left undisputed for decades.
      Here are a few points you are missing/oversimplifying:
      - The frame problem argues that in principle there is no deterministic - or probabilistic - way to determine relevant context in a logical framework. That is the problem. It shows that an infinite regress emerges when trying to determine relevance and irrelevance following deduction or induction. These systems axiomatically dissolve into intractability.
      - The grounding problem is not about determining the real world from a word. It cuts to the very root of deductive and symbolic systems. It again shows that there must in principle be external dimensions/modalities that allow humans to deduce meaning from symbols. Symbols themselves are not sufficient. For instance, one's understanding of the symbol "food" is multimodal and multidimensional. You don't understand the word food because you read the definition of the symbol. You've smelt food. You've tasted food. You've felt food. You've prepared food. You've thrown away food. You've remembered food. Etc. Read up on the Chinese Room example and it might make it clearer. Or read some of Wittgenstein's work on the meaning of a word.
      I'm rambling at this point. Again, read up on these and don't be so naïve as to reject them after having a super basic understanding of them. These problems are real and ever present. These problems are very much open.

    • @hamm8934 24 days ago +17

      @@abdvs325 You're oversimplifying and strawmanning both. YouTube deleted my response, but read more.
      Also, "For which there is no definitive evidence that it can't be reproduced in AI." This is a fallacy. You cannot prove a negative. There is no definitive evidence that there are not fairies. Exactly. No one is saying there is. The point is that there is no evidence in favor of positing the existence of fairies, therefore we just don't say there are fairies, but we can never say there aren't.
      There is no evidence or serious rebuttal against the frame or grounding problem, and as such, there is no reason to think they are wrong. They might be, but they've stood strong since at least the 80s, when the terms themselves were coined, even though the concepts go back far earlier, to Hume. You need positive evidence to say they are wrong. Until then, they stand as the null hypothesis.

    • @clubpenguinfan1928 24 days ago +9

      Finally someone mentions philosophy of language. When the video mentioned the idea of mapping text/images to their meaning in some embedding space, it set off some alarms for me. If some hypothetical AGI can grasp meaning (like we do) via this architecture, then we might as well describe the "x means M" relation as just this embedding map.
      Wouldn't this have huge implications for the semantic problem? In a way it feels like an implementation of a referential-like theory of meaning, and those are the very first theories you "debunk" in an intro Phil of Lang class.

  • @CristianGarcia 24 days ago +72

    Numberphile but the Primeagen talks from time to time

    • @virior 19 days ago

      Yeah! That's called a react, I've been enjoying the format.

    • @kallekula84 17 days ago +2

      @@virior He usually lets the guy finish a sentence; how often did he even let the guy finish a sentence here?

  • @tonym4953 24 days ago +11

    8:20 OpenAI is doing the same thing with the consumer version of ChatGPT. They are essentially charging users to train their model. Genius and very cheeky!

  • @derekcahill1775 24 days ago +31

    Jeff Bezos said it best, but I think it's telling that AI needs so much data to form a basic model. For example, humans don't need to know everything about driving or have hundreds of thousands of miles behind them in order to start driving a car. The other problem is that AI doesn't perceive opportunity cost like a human, so there's no incentive for it to problem-solve the same way a human would. AI is definitely the future, but it's nowhere near where people think it is, unfortunately.

    • @monad_tcp 24 days ago +7

      It's funny, I learned to drive my car in one week after a mere 500 km of training data.

    • @monad_tcp 24 days ago +10

      I also don't remember needing to read the entire internet to be able to write and understand text.

    • @Slashx92 24 days ago +12

      Yeah, but we have 20 years of experience living in reality (or 16 or whatever) when we start driving. You already have hand-eye coordination, you have seen cars all your life, you get a rough idea of how the road works from children's shows and books. There is an immense amount of data you are not acknowledging.

    • @cauthrim4298 23 days ago +4

      @@Slashx92 People learned to drive when cars first came about all the same, and it didn't take extraordinarily long either.

    • @jackoplumkin6412 23 days ago

      @@cauthrim4298 Because there were other manual vehicles at the time that used to do the job of cars. And it's not like the earlier models of cars were much different from the carts people were used to when the car was first invented.

  • @KrisRogos 23 days ago +16

    1885: Benz Patent-Motorwagen (first practical automobile) has a top speed of 10 mph / 16 km/h.
    1908: Ford Model T (first mass-produced automobile) has a top speed of 42 mph / 68 km/h.
    That is 23 years to gain 32 mph; assuming exponential growth, by the year 2024 our cars should be going 1817 mph / 2924 km/h.
    To be fair, linear growth would be "only" 204 mph, which is far more realistic, and you can cherry-pick other "cars" to fit the model even better. However, the point is that this is not a reasonable way to estimate future technological progress.
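
Spelling out the linear case (the exponential figure depends entirely on the assumed growth rate, so treat the 1817 mph number as illustrative):

```latex
v_{\text{linear}}(2024) \approx 42 + \frac{42 - 10}{1908 - 1885}\,(2024 - 1908)
= 42 + \tfrac{32}{23} \times 116 \approx 203\ \text{mph},
```

which matches the "only 204 mph" figure up to rounding.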

    • @TheManinBlack9054 19 days ago

      True, but cars have practical limitations; you won't need your car to drive 204 km/h.

    • @TheNewton 19 days ago

      In 1997 Andy Green's Thrust SSC set the land speed record of 1,228 km/h (763 mph).
      The capability is there, but the "should be" part is that they should not go that fast on purpose for general usage.
      A better analogy is probably flight versus manned space distance; i.e., we should already be doing manned Mars missions or have humans leaving the solar system.

    • @KrisRogos 19 days ago

      @@TheNewton Just as that was a heavily specialised car, I don't doubt we will have extremely sophisticated models running solutions for cutting-edge problems in medicine or physics, or even just breaking records. Future space missions may even require AGI instead of a 10+ minute Earth delay. But there is a huge gap between the practically unlimited time and money of moonshot projects and the idea that LLMs will run every detail of our lives and be on every device.
      Even if 1000 mph jet cars are theoretically feasible, and even if you could technically get a 300 mph Bugatti, you are not going to do a school run in either.

    • @Gamez4eveR 17 days ago

      @@TheNewton The problem is that the SSC was not a production vehicle.

  • @MasamuneX 24 days ago +5

    I think LLMs as a foundation for AGI make sense, but I also think there needs to be REASONING ability: the ability to hold two concepts in its metaphorical head and then determine which one is better for the task, not just a fire hose of text spewing out. The token cost will be wild though.

  • @benwintraub558 24 days ago +7

    The XY problem (or the "ex-wife" problem) is the "how do you dynamically name variables in a loop?" problem. I've heard newbie programmers ask this before when what they are really looking for is an array/list.
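
A minimal sketch of the X versus the Y here:

```python
# X, the question as asked: "how do I dynamically name variables in a loop?"
# for i in range(10):
#     globals()[f"value_{i}"] = i * i    # technically possible, a bad idea

# Y, the problem actually being solved: store several values and index them.
values = [i * i for i in range(10)]        # a plain list does exactly this
print(values[3])                           # prints 9; no invented names needed
```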

  • @xCheddarB0b42x 19 days ago +9

    The young ones may not remember the VR craze of the late 90s and early 00s, but us oldkins do. AI feels like that to me.

    • @rh906 12 days ago

      The difference between that and now is that LLMs are at least useful if you understand their limitations and don't plop out your brain thinking they're a replacement. Can't fix lazy and stupid people, I suppose.

    • @tlz124 7 days ago

      VR in the 90's?

    • @justinwescott8125 7 days ago +1

      Yup. Nintendo gave it a try in the 90s with a little product called the Virtual Boy.
      By the way, even though VR was a failed craze back in the 90s and 00s, it actually happened eventually. I use my Meta Quest like every day to play games and stay in touch with faraway friends. Some of the games are incredible, like Pistol Whip and Arizona Sunshine.

    • @FarnhamJ07 19 hours ago

      Yep yep, the Virtual Boy didn't come completely out of left field! I'd say the hype was really more about 3D graphics than VR itself, but it didn't take long for them to start pushing the idea that those 3D graphics could then be used to generate an entire 3D virtual world around you. Everyone knew the 3D graphics part was coming at least; I think a lotta people forget that the Virtual Boy and original PlayStation came out within a few months of each other!

  • @DeusGladiorum 24 days ago +13

    I didn’t appreciate Prime making those Kakariko girl noises while I was outside and without headphones

  • @thisbridgehascables 24 days ago +7

    I agree; I believe we are going to hit a plateau in AI very soon. We'll make small improvements, but the next jump won't be possible until the very foundation changes.
    I don't think we would need advances in other areas of computing to keep constant growth in AI.

  • @MrSnivvel 24 days ago +58

    LaTeX-formatted papers (like the research paper in the video) are gigachad. You cannot prove me wrong.

    • @-book 24 days ago +14

      LaTeX is such good software, it puts Word to shame.

    • @sahasananth987 24 days ago +6

      I love LaTeX, it's awesome. I have thrown Word and Google Docs in the trash, lol. I use LaTeX for assignments at school too.

    • @AJewFR0 24 days ago +8

      I went to a good CS college with a slightly math-heavy emphasis. I was the kid who started learning LaTeX for homework in multivariable calculus. It is such a useful tool to know for all my math, CS, and engineering classes that required PDF submission. I still use the basic LaTeX formatting in markdown docs at work.

    • @xplorethings 24 days ago +6

      So.. every paper outside of social sciences?

    • @MrSnivvel 24 days ago +4

      @@xplorethings **whoosh** The use of LaTeX is rare outside of academia and research papers/publications, and those who do use it outside that scope set themselves far ahead of the rest.
      I know last month was Autism Awareness Month, but you'll still get a freebie this time for missing the point.

  • @ERICROJO156 22 days ago +14

    AI bros are crying now because they're gonna have to take responsibility for their own laziness, since their AI god isn't gonna happen ❤

  • @PasiFourmyle 23 days ago +9

    If the next step is to figure out the training problem, what if the dumb "AI Pins" and "Windows Copilot +Plus ++..." are actually just attempts at having new training data sources?

    • @PasiFourmyle 23 days ago +2

      I don't know why I said "what if..." like there's an impending doom 🤣

    • @ImDGreat 14 days ago

      @@PasiFourmyle Not just an attempt, they're actually doing it for that; also Meta, Twitter, Discord, Telegram, WeChat, even games like Valorant and League.

  • @arexxuru5022 24 days ago +55

    Where will ChatGPT train now that Stack Overflow is filled with ChatGPT answers? Amirite?

    • @trappedcat3615 24 days ago

      There is no end in sight if they train on GitHub user data or Copilot workspaces in VS Code.

    • @dahahaka 24 days ago +6

      It's already being intentionally trained on synthetic data; it's a non-issue.

    • @GrumpyGrebo 24 days ago +2

      @@dahahaka Yeah, you missed the point. Training a generative AI on AI-generated data. Human in, human out.

    • @c0smoslive391 24 days ago +24

      @@dahahaka Yep, and the results are worse.
      Garbage in, garbage out.

    • @AR-ym4zh 24 days ago +1

      Press X to doubt @@dahahaka

  • @Photoshop729 20 days ago +6

    Netflix - why have 10 or 12 genre experts making recommendations when you can spend a billion developing an AI to recommend Adam Sandler movies to paying customers because the movie was produced by Netflix?

    • @ci6516 19 days ago

      The Netflix AI was incredibly revolutionary and effective. Same with YouTube's. How many hours are you on here?

    • @chrisfrank5991 19 days ago +2

      @@ci6516 I'm here for the comments. You are implying that AI is responsible for the watch time on YouTube and Netflix. I'm saying: what would the watch time be if instead there was some dude named "Tom" who picked out and ranked the AI-type videos that we are watching, or which comedies on Netflix get top placement? More interestingly, I wonder if that isn't already the case: that 1000 AIs ("A" bunch of "I"ndians) are actually tagging and ranking a lot of this so-called AI content. I'm not making this up; this was revealed to be the "AI" behind the Amazon stores with no checkout counters. It's hilarious to think about!

  • @apexphp 24 days ago +99

    It's even simpler than that. They've simply run out of training data. They've trained the LLMs on literally every piece of data ever generated by humans since the dawn of mankind, from every word written to tons of satellite images, to every movie produced and song recorded. There is no more training data, and the LLMs still get things wrong all the time (the other day Meta AI was adamant that a SHA-256 hash is 64 bytes in length; it's not, it's 32 bytes).
    And you can't just have these things train on synthetic data they create, because that just makes them dumber. Plus, with the sheer amount of AI-generated garbage content and spam that now exists in the world, these LLMs are probably as smart as they're going to get for a long time. I read a report a while ago that estimates the volume of text generated by humans from the dawn of mankind until recently is now being generated by AI every two weeks and pushed to the internet.
    So the pool of training data for LLMs is now of lower quality overall. I don't know, I'm rambling now.
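
That one is checkable in a few lines; the likely source of the confusion is that the 32-byte digest is conventionally displayed as 64 hex characters:

```python
import hashlib

digest = hashlib.sha256(b"hello").digest()
print(len(digest))        # 32 -- SHA-256 output is 256 bits = 32 bytes
print(len(digest.hex()))  # 64 -- the hex encoding doubles the length
```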

    • @DeepThinker193 24 days ago +11

      The obvious solution to this is to go back to the drawing board, actually figure out and understand how the AI works, improve it, and recreate the AI from scratch.

    • @BB-uy4bb 24 days ago +18

      You're missing a huge point: data quality. I would estimate that 90% of the internet is wrong/garbage data; there could be huge improvements if you simply let the AI see only the quality data and filter out the garbage. Chances are the AI only makes so many mistakes because it saw that many in the training data.
      The next thing is we always expect the AI to be correct on its first try, but if you give a human only one chance he'll most likely be wrong. We learn, create ideas, and get to the correct solution iteratively, yet we expect the AI to give the correct answer in one shot; not a fair comparison. If you give AIs more time to think, they get better as well.

    • @MrMeltdown 24 days ago

      You mean the AI is getting distracted by pron….

    • @dragoon347 24 days ago +2

      Overall, data needs to be marked for tokenization into LLMs. Previously there were only X amount of pictures with descriptions; then the vision multimodal models came out, and now you can describe the images with a better dataset: more descriptive, more in-depth, and multi-dimensional... i.e., it's a dog, a yellow dog, a yellow Jack Russell terrier, a dog in the canine family, etc. So the data may shrink, but the richness of the data will be far better. And now, with GPT-4o, you have hear/see/NLP datasets giving at least 3 vectors to provide descriptions of tokens.

    • @LiveType 24 days ago +7

      This.
      When GPT-4 came out I was blown away because it had signs that it could reason and plan (although very, very poorly past 2 iteration steps), but it could do it. I then thought about whether you could make it complete. Can you make one of these LLMs able to have near-perfect hierarchical planning like a team of humans can?
      The answer I came to was no. The fundamental design of how an LLM works does not allow that to occur. The path-planning Q* technique OpenAI experimented with, embedded into the vector space, looks promising and is similar to what I had envisioned to solve that issue, but it seems frighteningly difficult to implement successfully on any large model due to just how massive the models are. The search space is enormous. Like mind-bogglingly large.
      The other issue was data. GPT-4 was trained using just about all of the data available on the internet. Other models that use a similar amount of data with similar architecture perform very similarly, further validating the meme "just add more layers/data and line goes up". I then made a prediction that LLMs would cap out in 2025, maybe 2026, because at that point you would have completely exhausted all available data. We're not quite there yet, but we are VERY close.
      What I didn't predict is that we'd start poisoning the models with their own data at the pace we are. Very soon AI-generated info will exceed human training info in the datasets used, unless you hire thousands upon thousands of people to sift through it working 14-hour days. By the end of the decade it'll be nontrivially difficult to find non-LLM-poisoned data.
      TL;DR: LLMs are not the answer, but are likely part of the answer. We also seem to be shooting ourselves in the foot with how much we're using these LLMs.

  • @stephanreiken9912 24 days ago +6

    "Peaked" isn't really the right word unless you are talking about acceleration. AI development speed has slowed down quite a bit, but it's still getting better.

  • @The_IW 24 days ago +2

    You have a screen tearing issue.... Are you using X11 with fractional scaling?

  • @MikkoRantalainen 23 days ago +5

    23:45 I really hate when a publication renders graphs next to each other and clips the vertical axis differently for every graph. For example, the Retrieval graph for LAION-400M would practically render as three nearly horizontal lines instead of a strong linear correlation if it used a vertical scale that went from zero to one instead of 0.73 to 0.87.
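
A quick way to see the effect (synthetic numbers, not the actual LAION results):

```python
import matplotlib.pyplot as plt

x = [1e6, 1e7, 1e8, 4e8]      # pretend training-set sizes
y = [0.75, 0.79, 0.83, 0.86]  # pretend retrieval scores

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
for ax in (ax1, ax2):
    ax.plot(x, y, marker="o")
    ax.set_xscale("log")
ax1.set_ylim(0.73, 0.87)      # clipped axis: looks like dramatic growth
ax2.set_ylim(0.0, 1.0)        # zero-to-one axis: a nearly flat line
ax1.set_title("Clipped y-axis")
ax2.set_title("Zero-to-one y-axis")
plt.show()
```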

  • @U_Geek 23 days ago +4

    I think in order for LLMs to get smarter they will need to be able to have internal loops (yes, I know this makes the math really hard) and/or the ability to change their weights and biases slightly based on context, so that they can focus more on the given conversation.

  • @nickwoodward819 24 days ago +16

    Fuck, tried to get Midjourney to put a kiwi on a snowboard. It had no fucking clue.

    •  23 days ago +4

      Your prompting sux

    • @nickwoodward819 23 days ago +7

      No mate, it's exactly as the video states: it's shit at niche subjects. It wasn't even remotely like a kiwi.
      But please, tell me what 'prompt' would have got it to understand what a kiwi looks like?

    • @isodoubIet 22 days ago

      I just asked Copilot (== gippity + DALL-E 3) and it did it perfectly

    • @nickwoodward819 22 days ago +2

      @@isodoubIet Don't know what to tell you bud, Midjourney couldn't do it late last year. Not sure how much prompting it needed to get a kiwi looking like an actual kiwi.

    • @isodoubIet 22 days ago

      @@nickwoodward819 You don't have to tell me anything. You can try it yourself. The prompt I used was literally just a kiwi on a skateboard, nothing special. The first time it thought I meant the bird, which is understandable. The second time I specified a kiwi fruit.
      I once tried to get Stable Diffusion to make a classic grey alien and it just wouldn't. Probably a weird hole in the training data. Definitely no fundamental issue in making it generate "an X on a Y", no matter how unrelated X and Y may be.

  • @Jamsaladd 19 days ago +2

    100% true about what you said with Copilot. Generative AI will gladly help you make the thing you want to make, regardless of whether or not it will actually work or is a bad idea for various reasons.

  • @sajadmalik9097 24 days ago

    I was really, really waiting for this video. I already saw it; it was a great video.

  • @quachhengtony7651 24 days ago +14

    Let's goooooooooooo we're not losing our jobs after all

    • @nonyabusiness3619 15 days ago +3

      Don't celebrate too early.

    • @Jabberwockybird 12 days ago +1

      Yes, forget the AI doomers. Doomer porn is popular everywhere. Politics, economics, etc.

  • @bartek... 24 days ago +1

    19:24 I don't know what bottle this is; however, could you drop from it onto sugar candies or A4? ☮

  • @shadeblackwolf1508 23 days ago +4

    I think generalized intelligence is a pipe dream that must die.... Where I think the next evolution is gonna come from is easy-to-deploy AIs that are easy to train yourself for your specialized task.

  • @PieJee1 23 days ago +2

    There are several problems with AI in the long run:
    - laws catch up, probably adding more restrictions on AI: for example, copyright laws and censorship of what AI can say
    - it learns from AI-generated text
    - power usage

  • @Rohinthas 24 days ago +16

    Honestly, very nice video, Computerphile usually puts out bangers on their own, but you really added to it

    • @cagnazzo82 24 days ago

      This Computerphile take will age like milk.

  • @MarcinP2 24 days ago +1

    When Shoe did room reviews there were a couple that just broke my brain. I saw shapes but did not know what I was looking at.

  • @mrraptorious8090 24 days ago +13

    20:13 Indeed, Flip took it out.

    • @Frostbytedigital 24 days ago

      Seems like he's chewing it or something later so I just wonder what the non-Prime behavior was.

    • @XDarkGreyX 24 days ago

      A lotta wife and food cameos

  • @JoshZigler-kr9mg 24 days ago +16

    Every hype cycle, people can't help themselves but diverge to the extremes. No, we probably won't have AGI in 12 months. Yes, generative AI still has a long runway and will continue to make steady improvements.

    • @jk-pc1iv 24 days ago +2

      But it’s oveeeeerrrr! Wake up every one! 🤡

    • @marceljouvenaz257 24 days ago

      Nuclear fusion is just 20 years into our future, and has been since the 1960s.

  • @petersuvara 24 days ago +1

    LLM chatbots cannot do spreadsheets with any reasonable accuracy.
    The thing companies are going for is agents that interact with LLMs... For instance, a spreadsheet agent would be able to work with natural language to generate spreadsheets.
    However, why not just write directly to the spreadsheet, as it's a different language from natural language?

  • @blakeingle8922 24 days ago +3

    Your Kakariko girl impression really sold me on your opinions around Chat-GPT.

  • @wstam88 18 days ago +2

    The problem with solving problems is that there are no fundamental problems to solve.

  • @JackDespero 4 days ago +1

    There is another massive problem that is going to cap AI, at least in the near future: current AI datasets are based on stolen data.
    This has legal implications (countries, especially in the EU, are going to start to ban that type of forgiveness-instead-of-permission approach).
    But more importantly, there are two massive practical implications that will happen regardless of whether governments take action:
    - Poisoning the well: tools like Nightshade, designed specifically to confuse LLMs and ML while causing as little disturbance to humans as possible, are becoming more popular and more sophisticated, and they are being used by the top artists that you want to copy. I am sure that similar tools will appear for other fields.
    - Cannibalism: we are already seeing it. If you google important historical figures, AI images of them are often the first results.
    The more AI output is used and shared over the internet, the more it will enter new AI training datasets, causing models to believe that humans have, in fact, six fingers and two heads.
    AI is transforming into a European royal family: so inbred that it starts to cause serious problems.
    And this also happens to code (code generated by Copilot then used to train Copilot), fanfics, literature, even scientific papers (especially in lower-tier publications).

  • @LongJourneys 24 days ago +4

    I use AI for stupid repetitive stuff I'm too lazy to do myself; but I've noticed in recent months the stuff it cranks out seems to be getting worse and worse.

    • @personzorz 18 days ago

      Or it has lost its novelty and you are noticing

    • @taragnor 15 days ago

      @@personzorz Yeah, the first time you see AI code or do something it was this big "wow" moment. Then you start to have it actually do productive stuff to help you, and you kinda realize you have to constantly review its work, and you're just putting in a ton of effort to get a mediocre job from a rather stupid employee.

  • @jlaviews 24 days ago +1

    For people who do not know how models work, it seems like magic. It will certainly repeat "regress" much faster.

  • @thomasgrasha 23 days ago +2

    The Primeagen references C.S. Lewis' The Space Trilogy. I just started watching recently; now I feel a kinship.

  • @andrewvoss8491 23 days ago

    The way they seem to be solving the problem of the internet being composed of GPT-generated data, an average of an average, is by integrating it into Windows. The next step seems to be that training data will be collected from user interactions through the OS or from applications collecting data.

  • @SashaExcelsior 13 days ago

    The voice to voice conversations you can have with it are a breakthrough. It’s all built on algorithms and all that I get it. The psychological effect of being able to talk with it so smoothly is something new though.

  • @iPankBMW 14 days ago

    The video you are watching - is it filmed onto VHS? :D

  • @matthewdouglas2373 24 days ago +1

    Can you do an interview / conversation with the guy who runs the AI Explained YouTube channel? I would love to see steel-man arguments from both sides.

  • @YaroslavFedevych 24 days ago +2

    A breakthrough will be if you can bootstrap an "AI" on the amount of material sufficient to raise a human child and it gets curious all on its own.

  • @s.dotmedia 8 days ago +1

    I personally believe that most people underestimate the power of properly architected and engineered autoregressive language models. You have to pair them with rule-based engineering and have them work in tandem. Hive mind is the concept, but when you pull that all together, the capability for a level of general intelligence is absolutely there. It is not the level of general intelligence that a 50-year-old corporate executive living in the real world would have, but it is the general intelligence of an entity bound to a server, self-aware of what it is and the role it plays in the world, along with its blind spots. Knowing what they excel at, which are the things that you would ask about. Narrow AGI?

  • @lorenzowang7933 24 days ago +1

    On "inverse tangent": I love the saying that "every exponential curve is just a sigmoid in disguise".

  • @ISKLEMMI 23 days ago

    What was the thing about mice that got cut??? lmao

  • @SMmania123 13 days ago

    It's a great play; let's see how it closes. What shall the finale be, I do wonder...

  • @balduin_b4334 24 days ago

    This was the first, easy step.
    It is only getting better, but harder to reach.
    Getting the first engine going was easy, but we are still enhancing the idea, the structure... everything around an engine.

  • @kkiimm009 10 days ago

    If you go one step out, then things typically grow as a log, but often there is a new bump with a new starting point for new log growth as some new idea in the field starts growing. AI as a whole has had multiple growth spurts; LLMs are just the latest.

  • @AA-gl1dr 24 days ago +3

    It peaked months ago and has only deteriorated since

  • @arcaneminded 23 days ago +3

    30:00 LMAO RIP FLIP

  • @brod515 24 days ago

    @ 0:11 Wow, so it's not just me. YouTube (and browsers in general) is so freaking laggy. I've been wondering whether it's my PC.
    What's going on with browsers the past few years? I can barely load a page without watching it churn and then do nothing.

  • @DJAdalaide 21 days ago +2

    Once it's learned everything, all the knowledge, there isn't really any more it's going to learn - apart from current events like news and someone creating yet another programming framework.

    • @DJWESG1 20 days ago

      It's at that point we all go to war over its answers.

  • @nickwoodward819 24 days ago +3

    The legend that is Mike Pound

  • @MrMeltdown 24 days ago +1

    Sounds like a classic signal-to-noise ratio problem. A single tone in a noisy transmission: it's relatively easy to discriminate the note. Now put in two tones and play them through the same transmission channel. Can you tell both tones? Probably. Now try playing six strings of a guitar through heavy distortion (think The Jesus and Mary Chain through a glass blower). Can you tell what notes are playing? Nope. The generalisation cannot cope with the same level of noise...

  • @Koroistro 24 days ago +43

    I am fairly sure that yes, the generative part of AI has peaked.
    The "regression to the mean" issue is very big in current systems; however, we are just scratching the surface of how to use LLMs, and models in general, more effectively.

    • @MrDgf97 24 days ago +3

      Yeah, while their capabilities have peaked, the products/services that use them are just getting started. It's safe to assume that we'll be hearing about more and more people from multiple fields being replaced by AI. It's probably going to be a slowly incrementing wave that peaks sooner or later, depending on how cost-effective it is for each industry to adopt generative AI.

    • @n00bma5ter69 24 days ago

      Very much agree

    • @strakammm 24 days ago +2

      How are you certain that the capabilities have peaked? There are already new models coming out that beat transformers on multiple benchmarks, and there is still potential for nice growth in the upcoming years. Claiming that the capabilities have peaked has literally no backing in current developments.

    • @MrDgf97 24 days ago +2

      ​@@strakammm Could you please elaborate on any of these new models? At least a link to an article or paper? I'm ignorant to what you're claiming, and the wording is pretty vague, so there's not much to go from.

    • @Forty8-Forty5-Fifty8 24 days ago

      If a future AGI that mimics a human brain 1:1 can generate its own content, does that also make that AGI a GAI? And wouldn't an AGI be able to understand topics more deeply (aka at all), thereby allowing it to generate the desired content more accurately? Therefore making an AGI algorithm a GAI algorithm also?

  • @Griffolion0
    @Griffolion0 24 days ago +1

    Seeing Out of the Silent Planet get mentioned in a CS video is not something I had on my bingo card today.

  • @squamish4244
    @squamish4244 22 days ago +1

    Quick gains from LLMs may be ending, but the situation we are in is like having built an assembly line and barely used it yet.

    • @justinkassinger8238
      @justinkassinger8238 14 days ago

      With absolutely zero resources to create the infrastructure. Ain't gonna happen in our lifetime. They ain't replacing sht this century.

  • @keyboard_g
    @keyboard_g 24 days ago +2

    Computerphile is a solid channel.

  • @testsubjectzero8918
    @testsubjectzero8918 13 days ago

    Given these models are no longer really LLMs but multi-modal, isn't the amount of data they can train on vastly larger, i.e. a picture is worth a thousand words?

  • @user-ow2im7os8k
    @user-ow2im7os8k 24 days ago

    How bad was that jerky?

  • @MrSnivvel
    @MrSnivvel 24 days ago

    We need a "Flip's Secret Stash of Cuts". Habanero beef jerky should not be lost to the cutting room floor, per se.

  • @Rob-gx7rx
    @Rob-gx7rx 24 days ago +3

    The difference between accuracy and precision is interesting. A precise process is very detailed and involves large volumes of data and an exhaustive effort; if the instrumentation is calibrated incorrectly, you will get a very inaccurate answer, but it is still a precise answer. Accuracy is simply a process/result/statement (or whatever) which is a correct interpretation of reality. In theory someone can make an accurate statement without any precision: someone blurting out "the universe is a cheese sandwich", and if it then turns out to be true, with no experimentation involved whatsoever, you have an accurate yet imprecise statement. It was great to hear someone discussing this with regards to a realm I know nothing about (computing). I don't actually know shit about any realm, but I like to dabble in a lot of worlds.

    • @MrMeltdown
      @MrMeltdown 24 days ago +1

      At university our lecturer asked us to do some tests on circuits; whoever got the most correct results would win something. We all went up and grabbed the multimeters with the most digits, being incredibly miffed if we only got the cheap 3-digit ones... Of course no one picked the ancient analogue meter still sat on the desk.
      Of course the analogue one won. Not as precise, but far more accurate... Precision is not equal to accuracy. Everything needs to be calibrated, and there is a limit to how closely that can match the supposed precision.
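
      The anecdote is easy to reproduce numerically. A minimal Python sketch (all figures invented): a many-digit meter with a calibration bias versus a coarse but unbiased one.

          import numpy as np

          rng = np.random.default_rng(1)
          true_voltage = 5.0

          # Precise but miscalibrated: tiny spread, constant +0.2 V offset.
          digital = true_voltage + 0.2 + rng.normal(0, 0.001, 1000)

          # Imprecise but well calibrated: larger spread, no systematic offset.
          analogue = true_voltage + rng.normal(0, 0.05, 1000)

          for name, r in [("digital", digital), ("analogue", analogue)]:
              print(f"{name}: mean error {abs(r.mean() - true_voltage):.3f} V, "
                    f"spread (std) {r.std():.3f} V")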

    • @Rob-gx7rx
      @Rob-gx7rx 24 days ago

      @@MrMeltdown Then you have the whole vinyl/mp3 argument. There is a lot to be said for analogue and old-school mechanisms instead of the digital world. Yes, modern computers etc., but what is quality of life? Subjective question, I guess. This is by no means a pro-Unabomber argument, but I do often wish we lived in a simpler world!

  • @15MinuteWellness
    @15MinuteWellness 23 days ago +1

    It's so easy to get it to hallucinate and flat out lie to you.

  • @sasakanjuh7660
    @sasakanjuh7660 24 days ago

    The second I saw that grass-fed beef jerky I expected to hear the sponsor ad... A consequence of watching too many tech channels, I guess...

  • @uaQt
    @uaQt 24 days ago +1

    I think one reason that AI art could possibly never be the same as real art is that it's not like humans, when they make art, are just projecting a visualization in their head onto paper. I mean, that's kinda the goal, but it's not how it works.

  • @tan.nicolas
    @tan.nicolas 24 days ago +3

    Mike Pound is really cool

  • @Oler-yx7xj
    @Oler-yx7xj 24 days ago

    The Napster curve, am I right

  • @LukeMXack
    @LukeMXack 23 days ago

    I think the only way you could get an exponential curve is by somehow letting the AI system train itself based on what it deems it does not have enough knowledge of.
    In other words, I have never seen GPT be "curious". It never tries to purposefully get information from anyone or anything. Whenever it suggests that you give it some info, it always seems like a means to simply fulfill your prompt rather than also remedying its own ignorance; it doesn't expect that information to change anything, or know how to judge it.

  • @danielsan901998
    @danielsan901998 24 days ago

    Instruct models are fine-tuned on datasets created for that purpose, not just random internet noise; it's what makes all the chatbots work. Companies will continue to generate more of that type of data, because it has been shown to be more efficient than the old AI strategy of trying to hardcode knowledge into a series of facts and rules, like Cyc. Even if it is a computationally inefficient process, modern computers allow it to be used for quick knowledge retrieval, and since that knowledge is human-created there is no problem of retraining on AI-generated content. So for now the only limit is going to be Moore's law, until a more efficient algorithm is created.
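
    For reference, a single record in an instruction-tuning dataset often looks something like the sketch below (the exact fields vary from lab to lab; this shape is only illustrative, in the style of Alpaca-like datasets).

        import json

        # A hypothetical instruction-tuning record: human-written prompt and answer.
        example = {
            "instruction": "Summarize the following paragraph in one sentence.",
            "input": "Large language models are trained on huge web corpora ...",
            "output": "LLMs learn language patterns from large amounts of web text.",
        }
        print(json.dumps(example, indent=2))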

  • @agedvagabond
    @agedvagabond 20 days ago

    4o is pretty crazy. I gave it a bunch of data sets to analyse and it was writing algos in the background to run tests on my data, normalising, etc. It wasn't much better than 4, but it has a bunch of extra capability based on things that OpenAI have realised make the existing model more useful; just adding new capability to the existing model will make a big difference. At some point, though, having 100x more of the exact same type of data won't have much benefit. Accuracy can only get so high; 100% probability is impossible.

  • @travistarp7466
    @travistarp7466 24 days ago +1

    Rarely does anything grow exponentially forever. Even the world population will peak at around 10 billion. If you look at almost all other fields, like math, farming, or medicine, they all have exponential gains at the beginning and then slow down.

  • @8darktraveler8
    @8darktraveler8 17 days ago

    11:58 How your mate hypes everyone up before heading to the clubs.

  • @orthodox_gentleman
    @orthodox_gentleman 1 day ago

    Man, you really have it all-highly intelligent, great hairline, thick and full facial hair, very handsome (no homo, not that it matters), competent, funny, well-spoken, and down-to-earth. With 468k subscribers, you clearly resonate with a lot of people. You seem kind, probably have good friends and reliable people around you, and likely a beautiful girl and you are probably well hung based on your disposition (I know my kind). You come across as peaceful, a true man’s man. I could go on, but just keep up the great work! It’s inspiring to see good men striving for genuine masculinity. It’s also refreshing that you don’t talk about sports teams or gym routines, showing you’re not following the typical adult male programming in this country! Peace, brother.

  • @jameslay6505
    @jameslay6505 8 days ago

    I think it shows good character that he re-broadcasts the sponsorship from the videos

  • @luisgentil
    @luisgentil 23 days ago

    The way I see it is: suppose Spotify or Netflix only had one shot at recommending something spot-on for the taste of the user. If the user rates it meh or worse, they unsubscribe from the service forever. The way these recommendation systems work, maybe they get it right 20% of the time; if we don't like the recommended thing, we just skip it and try the next with little to no hassle. So how much data would it need to be 80% accurate? I don't know; maybe even if it could probe our brains it wouldn't be enough.
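
    The arithmetic behind that skip-and-retry forgiveness is simple. A back-of-envelope Python sketch (the 20% hit rate comes from the comment above; the try counts are arbitrary):

        # Chance of at least one good pick in k independent tries,
        # when each individual pick lands only 20% of the time.
        p = 0.20
        for k in (1, 3, 5, 10):
            print(f"{k} tries: {1 - (1 - p) ** k:.0%} chance of a hit")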

  • @hexalgo5506
    @hexalgo5506 12 days ago

    The space of possible states is vast, but the subset of states we might consider "good" is also extensive enough to allow us to factorize the problem and reduce its complexity.
    Consider chess as an analogy. The number of all possible chess positions is enormous, yet the number of "good" or "solid" positions that provide good chances of winning is significantly smaller. This subset contains patterns, such as solid pawn structures and favorable piece placements, that simplify the understanding of the game.
    Likewise, when discussing the vast space of possibilities, it's essential to recognize the internal symmetries and patterns. Mastering these patterns allows us to represent this space in a more straightforward projection.
    Similarly, predicting the movement of balls on a snooker table is mathematically complex. Yet our brains can master these patterns to the extent that a skilled player can achieve a high accuracy rate, such as 97%, and predict the next position of the cue ball after hitting the target ball.
    In essence, while the overall space may seem overwhelmingly large, recognizing and mastering the underlying patterns and symmetries can significantly reduce the complexity and make it more manageable.
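
    A tiny combinatorial sketch of that shrinkage in Python (the "no two adjacent 1s" rule is just a stand-in for any structural constraint): the structured subset grows like the Fibonacci numbers and becomes a vanishing fraction of the full space.

        def constrained_count(n):
            """Binary strings of length n with no two adjacent 1s (Fibonacci growth)."""
            a, b = 1, 2  # counts for lengths 0 and 1
            for _ in range(n - 1):
                a, b = b, a + b
            return b

        for n in (10, 20, 40):
            good, total = constrained_count(n), 2 ** n
            print(f"n={n}: {good:,} structured vs {total:,} total ({good / total:.3%})")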

  • @ErazerPT
    @ErazerPT 24 days ago

    My take on this is that, like all other ML-related tasks, LLMs aren't at the point of exhaustion yet. But from the increase in the numbers (datasets, size, compute power), they might as well be (for now). Artificial neural networks aren't anything new; take CNNs as an example. You can trace them back to the 80's or so, but their use was limited by available resources. Stuff like LeNet and AlexNet looks "primitive" these days, but you can start to see how they grow: VGG, ResNet, etc. The size of the model EXPLODES. But that growth has diminishing returns. Sure, work is still being done on different model architectures, but you're at the point that if you want a classifier you just look at a table, pick a resources/accuracy tradeoff, and you're happy to suck up the errors.
    Which brings us to an important point: errors. There is no such thing as an error-free neural network, artificial or otherwise. It's part of the game. And when it comes to "producing source code", at best we can hope for "s**t that doesn't build", but at worst we'll get some subtle bug that only triggers once in a blue moon and goes happily undetected until it rears its ugly head. Which is still better than your average code monkey so... ;)

  • @StigBSivertsen
    @StigBSivertsen 23 days ago

    What about real-time data?

  • @laszlo3547
    @laszlo3547 23 days ago

    GPT-7 can definitely exist from a marketing perspective. If there's demand, it can be like the iPhone: whatever improvement they can make year over year, no matter how small, will be called the next iteration.

  • @avram202
    @avram202 6 days ago

    Isn't that "bit more data" just creativity? Like an in-context randomizer/combiner functionality?

  • @CoDLuluser
    @CoDLuluser 24 days ago

    Just using all the new models, it's quite clear that there is a plateau right now.