Beyond the Hype: A Realistic Look at Large Language Models • Jodie Burchell • GOTO 2024

  • Published: 23 Dec 2024

Comments • 158

  • @GOTO-
    @GOTO-  1 month ago +3

    Looking for books & other references mentioned in this video?
    Check out the video description for all the links!
    Want early access to videos & exclusive perks?
    Join our channel membership today: ruclips.net/channel/UCs_tLP3AiwYKwdUHpltJPuAjoin
    Question for you: What’s your biggest takeaway from this video? Let us know in the comments! ⬇

  • @tiger0jp
    @tiger0jp 5 months ago +45

    Excellent talk. The USING LLMs section was great, though quite a few people would already be familiar with it given the massive focus on GenAI; it was the first 30 minutes that was really useful, especially regarding our responsibility as technologists. The categorisation of intelligence, and where current LLMs sit within it, set the context for where they should be used and where they shouldn't, e.g. fighting wars.

  • @etunimenisukunimeni1302
    @etunimenisukunimeni1302 4 months ago +26

    This is the best talk I've seen on any subject in months. Very well put together, super informative even in this short time. You can tell there would've been more where that came from, and the knowledge and experience of the speaker shows. No over/underhyping, just how, where and why LLMs work the way they do.

  • @TechTalksWeekly
    @TechTalksWeekly 5 months ago +19

    This is an excellent introduction to LLMs and Jodie is a brilliant speaker. That's why this talk has been featured in the last issue of Tech Talks Weekly newsletter 🎉
    Congrats!

  • @seanendapower
    @seanendapower 5 months ago +17

    This is the clearest explanation of how this works I’ve come across

  • @dwiss2556
    @dwiss2556 4 months ago +20

    This has been one of the best, if not the best, demonstrations of what AI is actually capable of. Thank you for a great talk and, most of all, for keeping it at a level that is understandable even for non-gurus!

    • @Crawdaddy_Ro
      @Crawdaddy_Ro 4 months ago +1

      She isn't taking exponential growth into consideration. It's the same reason many researchers didn't see the current AI boom coming yet others did. If you understand exponential growth, you'll also understand that, even though true AGI is a long way off, it will only take a few more years.
      Look up the Law of Accelerating Returns.

    • @dwiss2556
      @dwiss2556 4 months ago

      @@Crawdaddy_Ro Exponential growth is as limited as any other growth. Adding more transformers does not automatically increase the actual quality of the outcome, which is still the major reason why the label 'hype' is very apt in this context.
      We can make cars drive insanely fast, but that doesn't mean we can actually get them to be driven safely on public roads at that speed. This very much translates to AI, as the energy consumption at the current stage doesn't correlate with the outcome it provides. Any advantage in time saving is currently eaten up by other negative factors.

    • @desrochesf
      @desrochesf 3 months ago

      @@Crawdaddy_Ro Which exponential growth is she not considering? Even training counts haven't been exponential. Transistor counts / Moore's law haven't been exponential since 2010, if not sooner, and are about to run out as the returns from transistor size shrinks come to an end. Current LLMs aren't much different from the first 'big L' versions pre-2000. The AI "boom" currently taking place is largely consumer grift and investor marketing.

  • @mikemaldanado6015
    @mikemaldanado6015 5 months ago +21

    ChatGPT measured its performance on the bar exam against a set of people who had taken and failed the exam at least once. Research shows that people who have failed once have a high probability of failing again, i.e. do not trust research-funded results; independent research disproves a lot of the GPT claims. For example, independent research found that GPT-3.5 gave better answers than 4.0, just not as fast. Thank you for this "no hype" talk; it should be the norm when it comes to discussing LLMs.

  • @samsonabanni9562
    @samsonabanni9562 3 months ago +2

    She's a great teacher

  • @alaad1009
    @alaad1009 5 months ago +20

    Jodie, if you're reading this, you're amazing !

  • @InsolentDrummer
    @InsolentDrummer 4 months ago +3

    17:04 Jodie, your remark about scientists being well established in physics, mathematics, computer science etc. but not in psychology is rather important, but a bit incorrect. I've been following the development of LLMs loosely, and still, everyone seems to be missing the most important point. How many linguists were involved in such endeavours? Natural language does not boil down to just learning strings of characters by heart and generating new strings from them. Unless we consult those who study natural language as it is, LLMs are doomed to be just T9 on steroids.

    • @mortenthorpe
      @mortenthorpe 4 months ago +1

      You are correct in what you write, but this is a mere single subset of the main issue with AI… it needs to know the context if it is required to actually solve a problem (the outcome being useful, somewhat correct, and repeatable). Since no one can communicate context exhaustively, if tasked to input this into an AI generator, this is impossible… AI for generating solutions remains impossible.

  • @ManuelBasiri
    @ManuelBasiri 5 months ago +21

    I wish we could mandate watching this talk for all of those over excited business decision makers.

  • @apksriva
    @apksriva 4 months ago +2

    Woah! Brilliant talk. Very well constructed; I was immersed in the talk till the end.

  • @jamesreilly7684
    @jamesreilly7684 5 months ago +3

    All of this can be summarized in the statement that AGI will not exist until AI systems can learn by the Socratic method as well as they can teach it.

  • @kehoste
    @kehoste 4 months ago +1

    Excellent talk, thanks for recording and sharing!

  • @VahidMasrour
    @VahidMasrour 5 months ago +2

    Great talk! The best introduction to LLMs I've seen so far.

  • @santhanamss
    @santhanamss 4 months ago +1

    Excellent talk, very concise.

  • @Syntax753
    @Syntax753 3 months ago +1

    The most insightful AI talk I've heard. Thank you for all the hard work putting this together in such an engaging manner! Fantastic

  • @ankurbrdwj
    @ankurbrdwj 3 months ago +1

    Thank you for such a great talk, the best I have seen as an introduction to the current state. Really great, no-BS, no-hype talk; that's why everyone should study psychology, not moral science.

  • @aishni6851
    @aishni6851 4 months ago +3

    Jodie you are a great speaker! Amazing talk, very insightful ❤

  • @MaciejHajduk
    @MaciejHajduk 4 months ago +1

    She makes it so clear. Best introduction to LLMs and "AI" I have seen ❤

  • @fabiodeoliveiraribeiro1602
    @fabiodeoliveiraribeiro1602 5 months ago +14

    Last year I created an ancient philosophy quiz ("To pithanon ecypyrosis apeiron") and submitted it to ChatGPT. OpenAI's AI did very badly, because it calculated the answer by giving too much weight to one word in the sentence, and this led it to attribute the sentence to the philosopher Heraclitus, whose work emphatically rejects another philosophical concept present in it ("apeiron" was coined by Anaximander and used by Xenophanes, a philosopher ridiculed by Heraclitus). Some time later, I applied the same test to another AI and the result was surprising. The same mistake was made, but this time the AI cited the ChatGPT test result that I had previously published on the Internet. People in the field of philosophy do not make similar mistakes, especially if they intend to maintain their credibility. And yes, generative text AIs don't just infringe copyright; they do so by mixing good content with bullshit they invent themselves and inappropriate responses provided by other AIs.

    • @antonystringfellow5152
      @antonystringfellow5152 5 months ago +3

      Yes, this is what's referred to as "contamination".
      It's a growing problem for models that are trained on publicly available data from the internet.

    • @trinleywangmo
      @trinleywangmo 4 months ago

      @@antonystringfellow5152 And in a day and age when facts and truth mean so little.

  • @stevensvideosonyoutube
    @stevensvideosonyoutube 4 months ago +1

    That was very interesting. Thank you.

  • @dp29117
    @dp29117 4 months ago +2

    Thank you, Jodie, for the very nice explanations.

  • @NostraDavid2
    @NostraDavid2 4 months ago

    Note that gpt-3.5-turbo has been replaced by gpt-4o-mini, its successor.
    The latter was likely not yet available when this talk was given.

  • @trapkat8213
    @trapkat8213 4 months ago +1

    Great presentation.

  • @sayanmukherjee1216
    @sayanmukherjee1216 3 months ago +1

    Loved it! She is such a wonderful narrator and presenter.
    I have a question regarding #generalintelligence #ai #agi #llms - what if we link AIs, and one with “regional agency” forms a network with the others?

  • @AnthatiKhasim-i1e
    @AnthatiKhasim-i1e 4 months ago +1

    "As a curious AI enthusiast, I'm fascinated by the potential of SmythOS to make collaborative AI accessible to businesses of all sizes. The ability to visually design and deploy teams of AI agents is a game-changer. What use cases are you most excited about?"

  • @jmonsch
    @jmonsch 4 months ago +1

    Great talk! Thank you!

  • @andrewprahst2529
    @andrewprahst2529 4 months ago +4

    I like when she says "so"
    I would be sad if Australia stopped existing

  • @Flylikea
    @Flylikea 4 months ago

    15:51 I genuinely believe this part is very clear and eloquent, but it will still confuse a lot of people (partially due to these people's difficulty coping with the notion of raw ability in other humans).
    Train an LLM on dictionaries and grammar books and then ask it to write The Odyssey. If that cannot help people understand intelligence and how it differs from how ML/AI models (statistical models on steroids) work, I don't think we can help anyone here.
    It's great tech. It's not revolutionary (though I can see how it could be used to trigger a revolution), and it is helpful for moving faster through repetitive or repeatable components of a task.

  • @nteasushz
    @nteasushz 5 months ago +2

    Absolutely lovely talk

  • @Anna-mc3ll
    @Anna-mc3ll 4 months ago +1

    Thank you very much for sharing this interesting and detailed presentation!
    Kind regards,
    Anna

  • @mioszdaek1583
    @mioszdaek1583 4 months ago +1

    Great talk. Thanks, Jodie.

  • @ytdlgandalf
    @ytdlgandalf 4 months ago +1

    Such a clear talker!

  • @GregoryMcCarthy123
    @GregoryMcCarthy123 5 months ago +7

    Excellent talk! Would like to see more from Jodie

  • @emralcanan9556
    @emralcanan9556 4 months ago +1

    Nice talk

  • @prasad_yt
    @prasad_yt 5 months ago +13

    Great presentation - concise and loaded.
    Removes the hype and captures the essence.

    • @yeezythabest
      @yeezythabest 4 months ago

      The bare mention of hallucinations is the weak point of this presentation, especially in the RAG part, but it was very interesting.
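      For readers who want to see what the RAG step actually does: the idea is to retrieve relevant documents first and ground the model's answer in them, which narrows (but does not eliminate) the room for hallucination. A minimal sketch of the retrieval step in Python, assuming a toy bag-of-words retriever; the documents, names and similarity measure are all illustrative, not from the talk:

      from collections import Counter
      from math import sqrt

      DOCS = [
          "Our refund policy allows returns within 30 days of purchase.",
          "Support is available Monday to Friday, 9am to 5pm CET.",
          "Premium accounts include a 60-day return window.",
      ]

      def bag_of_words(text):
          return Counter(text.lower().split())

      def cosine(a, b):
          # Cosine similarity between two sparse word-count vectors.
          dot = sum(a[w] * b[w] for w in a)
          norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      def retrieve(query, k=2):
          q = bag_of_words(query)
          return sorted(DOCS, key=lambda d: cosine(q, bag_of_words(d)), reverse=True)[:k]

      question = "How many days do I have to return a purchase?"
      context = "\n".join(retrieve(question))
      # The augmented prompt, not the bare question, is what the LLM sees.
      prompt = f"Answer using ONLY the context below.\n\n{context}\n\nQuestion: {question}"
      print(prompt)

      Even with perfect retrieval the model can still misread or ignore the supplied context, which is the limitation the comment above points at.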

  • @rflorian86
    @rflorian86 5 months ago +1

    God damn... I will have to listen multiple times. Thank you.

  • @ahmedeldeeb6893
    @ahmedeldeeb6893 4 months ago +1

    Good talk all in all, but I found the section on "are LLMs intelligent" to be less than coherent. The placement of current LLMs on the chart is completely subjective, and the classification of generalization levels was relevant but wasn't really brought to bear. The method of generating a "skill program" is only a preferred way of designing a system and by no means the only way, so why bring it up?

  • @jonchicoine
    @jonchicoine 4 months ago

    If you're new to AI and Python, good luck getting the example notebook working on Windows. It appears to me that more than one package doesn't support Windows.

  • @serakshiferaw
    @serakshiferaw 5 months ago +46

    Fantastic speech. Now I think AI is at a stage where kids do what parents do without knowing why: just imitating.

    • @samvirtuel7583
      @samvirtuel7583 5 months ago +7

      Disagree.
      Humans also simply obey the functioning of their network of neurons.
      Reflection and understanding are emergent properties; these properties also emerge from LLMs and will become more and more precise.

    • @sUmEgIaMbRuS
      @sUmEgIaMbRuS 5 months ago +6

      @@samvirtuel7583 Counter-disagree. Human neurons are non-linear, which makes them way more versatile than digital neurons. And a human brain also constantly evolves and adapts its own structure to the problems it encounters. These are both fundamental properties that will never emerge from simply scaling up the number of parameters in linear pre-trained NNs.

    • @samvirtuel7583
      @samvirtuel7583 5 months ago +1

      @@sUmEgIaMbRuS This is why I talk about precision; this precision makes it possible to make the LLM less myopic.
      Hallucination is linked to this myopia, which is due to the lack of precision.
      But I remain convinced that LLMs 'understand' in the same way that we understand.

    • @sUmEgIaMbRuS
      @sUmEgIaMbRuS 5 months ago +9

      @@samvirtuel7583 GPTs are pre-trained, i.e. they never learn, they're completely static. Reasoning is sequential. You can't get sequentiality out of a static system by just making it bigger (or more "precise" as you prefer to say).
      They are also transformers, i.e. their entire thing is taking some text as input and pushing out some other text as output. Compilers do the same, they transform C code to x86 assembly for example. They even "optimize" their output by applying certain transformations that don't affect the observable behavior of the program. But this doesn't mean they "understand" the program in any way.
      I'm not saying we'll never make an AGI. I'm saying that if we do, it will probably be very different from today's LLMs.

    • @samvirtuel7583
      @samvirtuel7583 5 months ago

      @@sUmEgIaMbRuS This is precisely the magic of these systems, and this is why even scientists do not fully understand them; we simply know that properties or behaviors emerge that go beyond what the system is supposed to do.
      It is not a programmed expert system based on statistics and a knowledge base; it is more a sort of holographic database formed by a network of information fragments.
      If you understand what these systems actually do, you will understand that LLMs will soon be able to reason like us.
      I agree with Geoffrey Hinton and Ilya Sutskever: it's just a question of scale.

  • @dimitriostragoudaras8682
    @dimitriostragoudaras8682 4 months ago +1

    OK, the content is superb and the presentation is top notch, but OMG I would like to be able to replicate her accent (especially when she says DATA).

  • @sirishkumar-m5z
    @sirishkumar-m5z 4 months ago

    Exciting news about META's new open-source model! SmythOS is perfect for integrating and experimenting with the latest AI models. #SmythOS #OpenSourceAI

  • @campbellmorrison8540
    @campbellmorrison8540 4 months ago

    Excellent talk, thank you. My primary concern is the suggestion that extreme generalization is at the human level. While I'm sure it is, I'm equally sure it doesn't apply to a very large percentage of humanity. A very large amount of humanity needs to be trained, and they too would fail when given problems they have not seen before. That suggests to me that a very large percentage of the population, in employment terms, is only at or below the level of current LLMs and hence very likely to be replaced by AI. As a result I don't think it's unreasonable to think that AI as it stands is going to be a threat to human work and hence income.
    However, my real fear is that as the models get larger and the volume of information input becomes wider, the less anybody will be able to predict what the output from AI will be. While that may not be a problem in a fixed role, the more we give AI control of infrastructure, and especially military and scientific realms, the less we will actually be able to control these, as we will not be able to predict problems before they happen. From what I am seeing, LLMs are not my real concern, as I agree they are about natural language and so suit applications that require language manipulation. But what about systems that really appear to have little to do with language, such as things that manipulate images along with geographic data, the sort of thing you might need for a self-driving car or missile? Tell me I'm wrong, please.

  • @NuncNuncNuncNunc
    @NuncNuncNuncNunc 4 months ago +2

    As long as LLMs get the answer to questions like "three towels dry on a line in three hours; how long will it take for nine towels to dry on three lines?"* wrong, I am not too worried about AGI.
    LLMs are basically cliché machines that happen to know a lot of clichés in a lot of different domains.
    * Gemini provides this reasoning:
    If it takes 3 hours to dry 3 towels, it means it takes 1 hour to dry 1 towel (assuming consistent drying conditions).
    If you have 9 towels, and each towel takes 1 hour to dry, then it will take 9 hours to dry all 9 towels.

    • @SaPe-k6s
      @SaPe-k6s 3 months ago

      Copilot answers correctly. Tested right now "If three towels dry on a line in three hours, each towel takes three hours to dry. With three lines, you can hang three towels on each line. Since all the lines work simultaneously, the nine towels will dry in the same amount of time as three towels on one line: three hours."
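      For what it's worth, the puzzle above is about parallelism, not proportionality, and a few lines of Python make the intended reasoning explicit. A minimal sketch, assuming (as the puzzle does) that a line holds three towels and a batch takes three hours:

      import math

      def drying_time(towels, lines, towels_per_line=3, hours_per_batch=3):
          # Towels hanging at the same time dry in parallel, so the time
          # depends on the number of sequential batches, not the towel count.
          batches = math.ceil(towels / (lines * towels_per_line))
          return batches * hours_per_batch

      print(drying_time(towels=3, lines=1))  # 3 hours: the original setup
      print(drying_time(towels=9, lines=3))  # 3 hours, not 9: one parallel batch

      Gemini's answer fails precisely because it turns a parallel process into a proportional one; Copilot's answer matches the calculation here.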

  • @charlessmyth
    @charlessmyth 4 months ago +1

    Good talk :-)

  • @shikida
    @shikida 4 months ago +1

    Great presentation

  • @SimonHuggins
    @SimonHuggins 4 months ago

    Hmmm. But you can encode lots of data as though it was language - more efficient tokenization of symbolic representations of different modalities will most likely get us a long way too. And finding generalizations from this may well help LLMs speed further towards AGI. I think the origins of LLMs hide their potential outside this space. But yeah, there's a lot of… ahem, attention on this problem.

  • @davidsault9698
    @davidsault9698 4 months ago +2

    She's scarily intelligent, combined with the ability to speak well - which is not necessarily intelligence-based.

  • @AIhyp
    @AIhyp 4 months ago +1

    Just woww

  • @toreon1978
    @toreon1978 5 months ago

    30:17 did you forget Embedding, Context and Prompt Engineering?

  • @Hank-ry9bz
    @Hank-ry9bz 4 months ago

    42:50, still it's impressive in its own way. Admittedly not AGI though (whatever that means)

  • @davidporter6041
    @davidporter6041 5 months ago +7

    Jodie rocks generally, but also this is exactly the kind of talk we need.

  • @generischgesichtslosgeneri3781
    @generischgesichtslosgeneri3781 4 months ago

    Don't call it hallucinating, it fabulates.

  • @pmiddlet72
    @pmiddlet72 5 months ago +2

    Generating the ultra-generalized model-of-everything doesn't even match the variation in humans that could remotely constitute these vague notions of AGI. So more domain-specific models would appear, to an extent, more solve-worthy. Generating the 'Renaissance SuperIntelligence' isn't IMO a reasonable goal, for so many reasons - a large part of which are philosophical/ethical.
    What's the point, unless what we're generating is a reasonable model for better understanding ourselves - specifically, how the human brain works and processes the world around it (this is a notoriously complicated area of study)? Conducting scientific research simply 'because we can' (or more likely, because it brings in the Benjamins), while it may generate some new insights, has more often than not driven historical bad actors (and I'm being QUITE nice here) to engage in some ethically horrendous activities.
    So the hype in this regard, and the droves of misunderstanding created around it, are important to look at with healthy skeptical eyes. Sometimes 'excitement' over an idea, and over-promising its various facets of value, can run rampant enough to drive this seeming dichotomy between 'accelerationists' and 'doomers', as if there's NO SUCH thing as a spectrum of thought or a middle ground. Is it important to have watchdogs over big tech? You bet. We wouldn't expect any less of watchdogs over big finance, big oil, big watermelon growers - you get the idea.

  • @vasvalstan
    @vasvalstan 4 months ago

    They should have added the new research from DeepMind and what they did, not just Kasparov and chess.

  • @BoomTechnology
    @BoomTechnology 4 months ago

    AI solving problems that are irrelevant to us 42:49 > Like bad humans, = Delete human genetic error 🤖;

  • @BasudevChaudhuri
    @BasudevChaudhuri 5 months ago +3

    That was a fantastic presentation! Absolutely loved it!

  • @hcubill
    @hcubill 5 months ago +3

    Jodie is awesome. Really cool presentation!

  • @Darhan62
    @Darhan62 5 months ago +3

    I think there is reasoning going on in LLMs, something that can only be called a form of reasoning, and by that token (no pun intended) a form of intelligence. It may be just reasoning based on language, but it is reasoning. Also, what about multi-modal models that can look at a photo and give you a text description of what's in it, or can analyze a piece of music and tell you the genre, or give you a text description of it?

  • @dennisestenson7820
    @dennisestenson7820 5 months ago +3

    Artificial general intelligence will be built from components that are algorithmic, systematic, and not intelligent at all. Same as us.

    • @TheRealUsername
      @TheRealUsername 5 months ago +2

      I think you should learn biology. Our brain is incredibly complex, even more so the neocortex; it couldn't be further from mathematical algorithms. All ML models are statistical pattern learners: they can only learn patterns rather than actual data, because that's what is mathematically possible, and it requires the whole dataset to be mathematically readable.

  • @richardnunziata3221
    @richardnunziata3221 5 months ago

    Test data in the training data is a rookie mistake... I have to wonder if that is true or if there is a misunderstanding here.

    • @aaabbbccc176
      @aaabbbccc176 5 months ago

      This is what I think. You might try asking GPT-4o who won the gold medal RIGHT AFTER the 100m dash at the Paris Olympic Games. If it answers "I do not know," or it answers wrong, then you know whether there was a rookie mistake. The answers to MOST of the questions people ask GPT are indeed in the training data, somewhere. GPT is just smart enough to extract (probabilistically) the right answer and put it in well-organized natural language sentences.

    • @suisinghoraceho2403
      @suisinghoraceho2403 4 months ago

      @@richardnunziata3221 When you have bots automatically crawling internet data and OpenAI being very opaque about how their models are trained, this is actually quite difficult to avoid. You can only try to validate whether the test data was in the training data after the fact.

  • @soulsearch4077
    @soulsearch4077 5 months ago +2

    I really enjoyed this. I actually gained some extra knowledge, and it kind of aligned with my suspicions about the current state of AGI.

  • @seanys
    @seanys 4 months ago

    “FORTY TWO!”
    Universal intelligence solved.

  • @lancemarchetti8673
    @lancemarchetti8673 4 months ago +1

    Facts

  • @MaxMustermann-vu8ir
    @MaxMustermann-vu8ir 5 months ago +15

    Today I asked MS Copilot aka GPT4 to return a list of African capitals starting with a vowel. It returned 20 results, some of them being repetitions, and 13 out of 20 did NOT start with a vowel. I'm sure AGI is near 😀

    • @jan7356
      @jan7356 5 months ago +1

      I asked GPT-4o to do the same. It gave back 7, all starting with a vowel, all different. Only one of them wasn't a capital (but a former capital and the biggest city in the country). It needed 2 seconds for something I couldn't have done. I'm sure AGI is near 😀

    • @MaxMustermann-vu8ir
      @MaxMustermann-vu8ir 5 months ago +1

      @jan7356 I will try it out. But it's still not correct. And you could have done it by yourself, not in 2 seconds, but you would have checked whether the result provided by the LLM was correct.

    • @Manwith6secondmemory
      @Manwith6secondmemory 5 months ago

      @@MaxMustermann-vu8ir It doesn't break words down into individual letters like we do, so it will struggle on tasks like that.
      Copilot's GPT-4 also is not good compared to the normal ChatGPT GPT-4, and normal GPT-4 is now surpassed by Claude 3.5 Sonnet.

    • @TheRealUsername
      @TheRealUsername 5 months ago

      Yeah, of course AGI is near, thanks to mathematical algorithms, which don't have any of the components required for intelligence. But I'm sure statistical patterns within a data distribution are sufficient to outperform the human brain, never mind that the data has to be mathematically readable (tokenization). You can surely get AGI from text and (encoded) images, even though it isn't building a unified representation of the world, even though it isn't capable of extrapolation, abstraction and extreme generalization and is therefore unable to create novel patterns or be creative, even though the model can't detect its own mistakes while inferencing. AGI is near? Thanks to mathematical algorithms.

    • @markmonfort29
      @markmonfort29 4 months ago

      It doesn't do math, so getting an AI model to do math or counts or sums etc. is not great. It's why it can't properly count how many Rs there are in the word strawberry, etc.
      However, it could if it's told to use function calling... that's how ChatGPT can pull in an Excel file and work on it. It turns your query into code and then runs that.
      Not sure if Copilot can do function calling, but if you type the following into ChatGPT:
      "Using function calling tell me all the African capitals that start with vowels"
      the response is:
      The African capitals that start with vowels are:
      Abuja
      Accra
      Addis Ababa
      Algiers
      Antananarivo
      Asmara
      Ouagadougou
  • @sdmarlow3926
    @sdmarlow3926 5 months ago +2

    Is anyone else distracted by the person taking a pic of every slide? *pro tip: announce where slides can be found online before starting a talk

  • @marccawood
    @marccawood 4 months ago

    Sorry? At 11:20 she says you can use an LLM to generate training data?? I'm gonna call BS on that claim. If you're not learning from real-world data you're pissing in the wind.

    •  4 months ago +1

      @@marccawood What she meant was that LLMs are really good for getting the parameters you need from raw texts/reports, something that can be very time-consuming when setting up and training ML models.

  • @shoeshiner9027
    @shoeshiner9027 5 months ago +4

    Agree. This video's contents match what I had thought before. 😊

  • @InfiniteQuest86
    @InfiniteQuest86 5 months ago +9

    Thank you. She's one of the few rational people left on Earth. You would not believe the kind of hate-filled, pure-rage, angry arguing I get when I mention that a language model should be used for language, and that if we want to do something else we should use whatever tool is suited to that task. How are we living in a time where people think this is a controversial idea?

    • @tsilikitrikis
      @tsilikitrikis 5 months ago

      Bro, saying that GPT-4, a human-level language understanding system, has to work only on translation-like tasks is like saying that a man has to work only as a translator 🤣🤣

    • @TheRealUsername
      @TheRealUsername 5 months ago +1

      @@tsilikitrikis "Human-level"?? Do you think GPT-x can think like you? Can it reason?

    • @tsilikitrikis
      @tsilikitrikis 5 months ago

      @@TheRealUsername Why do you ignore the rest of the sentence? It can understand language at a human level. The other things are the results of this understanding. If you cannot distinguish it from a human and it can do work like a human... what is it?

  • @seenox_
    @seenox_ 5 months ago +3

    Very comprehensive and informative, thanks.

  • @kevinamiri909
    @kevinamiri909 4 months ago

    GPT-3.5 is not 355B.

  • @inthemidstofitall
    @inthemidstofitall 5 months ago +2

    Really wonderful talk. Thank you!

  • @nikjs
    @nikjs 4 months ago

    Sentience is not a prerequisite for the destruction of civilization; for that, the more primitive the better.

  • @Theodorus5
    @Theodorus5 4 months ago +2

    OK for folks who know something about the subject, but a woefully inadequate introduction for those who may not.

    • @NostraDavid2
      @NostraDavid2 4 months ago +4

      GOTO is a software development conference, so the target audience was developers, which makes sense.

  • @mortenthorpe
    @mortenthorpe 4 months ago +1

    Notice that you can literally substitute the term AI with statistics, and the content and message remain the same… what does this mean, semantic fun aside? Well, for starters it means that generative AI delivers statistically predictable results, which is the crux of the matter and the reason to write off AI for generative work - it will never deliver correct solutions! The solutions rely completely on the quality of the data input and on knowing the context - neither is trivial, or achievable in any meaningful sense… and you don't even have to be a programmer or technical to know this - the mere foundation of AI as a concept relies on these factors… in brief, for anything truly meaningful, AI is and remains useless, forever!

    • @stratfanstl
      @stratfanstl 4 months ago

      @mortenthorpe EXACTLY. There is no "intelligence" in such systems, only statistical probabilities concerning the next likely token given the "context" of a set of prior tokens ("the prompt"), based on everything the model has been supplied to calculate its statistics ("its training"). If you prompt such a system for "energy as a function of mass," it might spit out e = mc^2. But since it is only representing the likelihood of next tokens based on prior tokens, if the world were filled with a million idiots who all believed e = mc^3, blogged about it 20 times per day, and responded to other blogs on the topic five more times each day reiterating their belief that e = mc^3, that scientifically INCORRECT content would eventually distort the probabilities in an LLM to the point where the INCORRECT formula would become increasingly likely to appear as output. These models have zero means to weight probabilities based on TRUTH. They are solely capable of weighting based on frequency of appearance. That's not intelligence.
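      The "frequency, not truth" point above can be made concrete with a toy next-token model. A minimal bigram sketch in Python, with a made-up corpus in which the wrong formula is simply more frequent:

      from collections import Counter, defaultdict

      # Invented corpus: the incorrect "cubed" appears more often than "squared".
      corpus = "e equals m c squared . e equals m c cubed . e equals m c cubed".split()

      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1  # count observed continuations

      def predict(prev):
          # Return the most frequent continuation ever seen after `prev`.
          return following[prev].most_common(1)[0][0]

      print(predict("c"))  # 'cubed': the popular answer wins, not the correct one

      Real LLMs are vastly more sophisticated than a bigram table, but the training objective is still driven by what appears in the data, which is the commenter's point.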

  • @tatyanamamut3174
    @tatyanamamut3174 3 months ago

    I would think that she would know that the primary way humans develop the ability to complete tasks is through mimesis, i.e. by having access to training data from other humans. This is exactly how LLMs work as well. They can't do what they've never seen, just like humans. After all, a person given raw coffee beans will not be able to make coffee unless they have seen another person make coffee before.

  • @askurdija
    @askurdija 4 months ago

    GPT's performance was measured on various intelligence benchmarks and tasks outside of its training data. She doesn't explain what's wrong with these measurements, and she doesn't give a concrete proposal for how to measure intelligence in a better way. Human intelligence is measured with tests that are similar or identical to the ones given to LLMs. Instead of engaging with all this body of research, she just shows an anecdotal counterexample (the Codeforces problems).

  • @afterthesmash
    @afterthesmash 4 months ago +1

    "I actually got my PhD in psychology." Immediate translation, from her next comment: I'm a widget produced by the Concern Industrial Complex. It's sad the world we now live in automatically translates "watching with a lot of concern" into "oh, you must have a recent humanities degree" but there it is.

    • @afterthesmash
      @afterthesmash 4 months ago

      Having now finished the video, she did a good job factoring the world as a practicing data scientist after this brief, but worrying moment.

  • @afterthesmash
    @afterthesmash 4 months ago

    Intelligence is _not_ controversial. It's divergent. It's the same as religion. There is no controversy between Christianity and the Muslim faith. But there are definitely major points of divergence. To call perspectives on IQ controversial rather than divergent gives far too much voice to the squeaky wheels.

    • @afterthesmash
      @afterthesmash 4 months ago

      What I just did there is a mode of generalization, that of rising above brainless cliche, that I would dearly _love_ to see manifest in my future chatbot companions.

  • @millax-ev6yz
    @millax-ev6yz 5 months ago +1

    Excellent video... although I had to stop my mind from thinking about what would happen if I mixed Foster's and Victoria Bitter together, because of the accent. That's my own neural net working against me.

  • @Klayhamn
    @Klayhamn 5 months ago +3

    Anyone who has spent enough time with GPT-4 and GPT-4o would easily know the presenter is wrong.
    Good enough LLMs are capable of REASONING and not just "text generation".
    I have myself crafted arbitrary problems that require math and logic to solve, and had GPT-3.5 fail at them and GPT-4 SUCCEED in solving them,
    and there is no way it had "seen this problem before" because I made it up on the spot.
    So I think the title of "emerging AGI" is perfectly fine.
    You have no idea what the path to AGI would be, and just like we "unintentionally" discovered LLMs by trying to solve something else, we might end up creating AGI out of LLMs without directly trying to instill some kind of "general intelligence" into them.
    The main thing I believe LLMs are currently missing is the ability to LEARN, i.e. dynamic plasticity in real time. They also lack memory, for the very same reason.
    So I think the KEY to achieving AGI would be to grant them memory and plasticity (in some form), and this would probably be the stepping stone that takes us to AGI levels.
    Even if they don't START with AGI capabilities from the outset, they might EVOLVE to have AGI capabilities, just like babies grow up and learn more about the world and gain skills and mental faculties.

  • @MarkArcher1
    @MarkArcher1 5 months ago +1

    I enjoyed the talk but a bit of a red flag that the speaker isn't familiar with the difference between AGI and ASI.

  • @afterthesmash
    @afterthesmash 4 months ago

    "It's been an overwhelming flood." Really? An overwhelming flood of sensationalist headlines that you could tune out completely if you wished to, and it has barely impacted anything in your day-to-day life? At least not yet. And possibly not ever.

    • @afterthesmash
      @afterthesmash 4 months ago

      Having now finished the video, to Jodie's credit, this was a passing turn of phrase and the rest of the talk never went here again.

  • @rob99roy
    @rob99roy 4 months ago

    This presentation is going to age very badly. Let's not underestimate how quickly AI will progress. I suggest you revisit this talk in a year.

  • @janicewolk6492
    @janicewolk6492 3 months ago

    Do you think the emergence of these ideas is leading to the significant drop in birth rates worldwide? As in, who wants to expose their child to neural nets? Maybe this partially explains the rise of right-wing anti-intellectual political movements? As in, who benefits from all of this? It certainly seems as if this is a lovely intellectual game that, like social media, has significantly serious consequences. I have a Master's degree in Slavic Linguistics. Am I now extraneous? Am I just supposed to say, oh well? Is the speed worth the social upheaval? It is no doubt the response of espousers of these ideas that determining outcomes isn't their responsibility. By the way, who watches computer chess games? I love the way the speaker refers to "humans". Who is her constituency?

  • @pristine_joe
    @pristine_joe 5 months ago

    Just thinking: LLMs have been a subject of research for the last few decades and are limited by the computing capacity our technology has thus far produced. Human intelligence is backed by training through evolution and has perfected the art of passing it down to the next generation through DNA, the community, etc. Could we be in the very early stages of trying to replicate our consciousness, and might it eventually emerge if we overcome the limitations we currently face 🐝🌻

    • @TheRealUsername
      @TheRealUsername 5 months ago +2

      LLMs and all ML models are pattern learners and only work when the data is mathematically readable (tokenization) whereas biological intelligence is firstly omnimodal, secondly it relies mostly on abstraction, constant reasoning, intuition and continual learning (neuroplasticity), LLMs aren't a form of intelligence, they're sophisticated mathematical algorithms.

  • @irasthewarrior
    @irasthewarrior 4 months ago

    AI is sophisticated degeneracy.

  • @nikjs
    @nikjs 4 months ago

    where's the code

  • @PACotnoir1
    @PACotnoir1 4 months ago

    It's interesting to see that she can't accept that compression of information into a trillion parameters constitutes an elaborate form of intelligence, alien to us but still with "cognitive abilities", and that reducing it to simple mathematical correlations is like reducing human intelligence to chemical reactions. It just forgets that emergent properties arise in complex systems.

  • @Peter.F.C
    @Peter.F.C 4 months ago +1

    What we have here is a lazy person who doesn't even do basic research and doesn't know what they are talking about.
    Take for example her description of the 1997 Kasparov match against Deep Blue.
    In that match, Kasparov did lose the second game. At least she got that right.
    But he did not lose the third game and he did not lose the fourth game. Both those games were draws. He did lose the match, and that was despite, at that point in time, still being stronger than the chess engine. This is information on the match that she could easily have checked if she'd bothered. The chess engine's play had had a psychological effect on him, which is why he lost the match despite being the stronger player.
    But it was very well understood at the time that the chess engine that had defeated him had the intelligence of a cockroach.
    As it is now, chess engines are far beyond the strongest humans, but they still possess only a cockroach level of intelligence.
    And they are irrelevant anyway in a discussion of the capabilities of these LLMs.
    This talk sheds no light on the subject matter.

  • @tsilikitrikis
    @tsilikitrikis 5 months ago +2

    It got 10 problems from its training period all right, and then got 10 SAME-difficulty problems from after... all wrong?? No way this is right, guys.

    • @tsilikitrikis
      @tsilikitrikis 5 months ago

      Also, understanding language is much broader than cracking the game of chess. You learn something of the real world, so it brings you closer to a general entity!

    • @jan7356
      @jan7356 5 months ago +1

      I am sure this is completely outdated. This was done on the first version of GPT-4. Coding abilities and generalization abilities have gotten much better with later versions.

    • @InfiniteQuest86
      @InfiniteQuest86 5 months ago +1

      Lol I hope you are being sarcastic. That's always been my experience. These companies are lying about training on the test data. It's actually pretty sad that they don't score perfectly having trained on it. That's pretty pathetic actually.

    • @InfiniteQuest86
      @InfiniteQuest86 5 months ago

      @@jan7356 On anything I've asked GPT-4 and GPT-4o, the original 4 was far better. So this statement doesn't really hold.

    • @tsilikitrikis
      @tsilikitrikis 5 months ago +1

      Bro, you understand nothing of this technology. Read the paper "Sparks of AGI". You say that they train them on test data 🤣🤣. I hope you have no relation to software.

  • @sblowes
    @sblowes 4 months ago

    Wow, this is so off! AGI didn't become ASI because it's sexier; it's a different class of AI. LLMs have a base level of reasoning properties already at the ChatGPT-4 level, and the _general consensus_ among current leading AI researchers is that we expect a higher level of reasoning once we multiply the size again.

  • @odiseezall
    @odiseezall 5 months ago +2

    The speaker presented 0 (zero) evidence regarding the timeline of increasing generalization of AI. She's saying "there's a long way to go" but offers no proof to support that conclusion.

    • @larsfaye292
      @larsfaye292 5 months ago +6

      @@odiseezall because it's self evident...

    • @TheRealUsername
      @TheRealUsername 5 months ago

      I believe LLMs can mimic a certain form of understanding of certain parts and aspects of their training data. It won't achieve AGI, but it can still be useful for certain tasks.

  • @ggrthemostgodless8713
    @ggrthemostgodless8713 4 months ago

    GROK will rule them all. ... Elon has been at it for two years only... and look!!

  • @jpphoton
    @jpphoton 4 months ago

    hmmm

  • @MisterDivineAdVenture
    @MisterDivineAdVenture 3 months ago

    Another "DOTTA SCIENTIST". (No offense.)

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 5 months ago +2

    Basic info

  • @tyc00n
    @tyc00n 4 months ago

    First half was great, second half was just terrible.

  • @raiumair7494
    @raiumair7494 5 months ago +1

    Nothing new at all - in fact boring to some degree - except a shot at why current LLMs are not on their way to AGI, which I agree with.