Debunking the great AI lie | Noam Chomsky, Gary Marcus, Jeremy Kahn

  • Published: Jan 7, 2025

Comments • 1.8K

  • @TJ-hs1qm
    @TJ-hs1qm 2 years ago +306

    We can't let a bunch of hyper-rich guys deploy whatever tech they like into society, keep the profits, and leave society with the consequences. Privatized profits and socialized losses need to stop.

    • @2manystories2tell43
      @2manystories2tell43 1 year ago +4

      @T J You hit the nail on the head!

    • @travisporco
      @travisporco 1 year ago +12

      By the logic of the free-enterprise system, once people can no longer contribute in the fair and free competition of the market, they must rely on charity or perish. In the coming decades, all people will be obsolete, unable to compete with rising AI. The existing system must therefore be abolished before it is too late.

    • @jonaseggen2230
      @jonaseggen2230 1 year ago +7

      It's called neo-feudalism, or corporate feudalism.

    • @Mr_Sh1tcoin
      @Mr_Sh1tcoin 1 year ago +6

      Spoken like a true Marxist

    • @47f0
      @47f0 1 year ago +2

      Yeah, people have been saying that since the 1860s.
      Cornelius Vanderbilt said, "Hold my beer"

  • @TommyLikeTom
    @TommyLikeTom 2 years ago +62

    "they can draw pretty pictures but they don't have any grasp of human language" for some reason I felt personally attacked by that

    • @kot667
      @kot667 2 years ago

      Maybe because it's horse shit LOL

    • @toddbuckingham2000
      @toddbuckingham2000 2 years ago +10

      It can and it can't. Midjourney slays at generating simple albeit impressive images ('zombie spiderman', 'thor in pixar style', 'human settlers on Mars in the style of Norman Rockwell'), but once you begin to describe complicated illustrations with multiple characters in different emotional states, with specific likenesses to specific people, wearing specific coloured costumes (and you want consistency panel to panel), performing specific actions with specific camera/viewpoint angles, it struggles to put characters in specific parts of the composition. You can generate those emotions and 'actors' separately, but you still need Photoshop to combine them into a complete image. While I guess I should assume DALL-E 2/Stable Diffusion/Midjourney will get there, after watching this presentation, and after 20k images generated in Midjourney and noticing its sometimes frustrating limitations, I do begin to wonder if AI art models' lack of language understanding will mean they'll be stuck at 75%. My thought is, the first company to combine DALL-E 2/Midjourney/Stable Diffusion-style prompting with Nvidia Canvas-like editability/interactivity will make a much more powerful and efficient tool just by embracing the human brain.

    • @Bisquick
      @Bisquick 2 years ago +6

      @@toddbuckingham2000 Exactly, as discussed and succinctly put by Sartre to pose the necessity of existential consideration, existence _precedes_ essence. Without any critical consideration of meaning, quite simply: garbage in, garbage out.
      The only "danger", as also discussed, lies in believing it is anything else, that it is "objective" or actually understanding anything ie "intelligent". But of course the _only_ political question is: cui bono? Who benefits? So we can ask "danger _for whom_ ?", which reveals that for some this Mechanical Turk can produce an artifice of an organizing principle of truth/understanding/value ie "god" that "just so happens" to justify the already existing power structure, regardless of the intentionality towards this, as we can see with these psycho silicon valley billionaires and "effective altruists", many of which are unironically labelling themselves as "secular Calvinists". The divine right of "AI" is then but a technological coat of paint over the "divine right of kings/the market/the entrepreneur".
      _“The ideas of the ruling class are in every epoch the ruling ideas, i.e. the class which is the ruling material force of society, is at the same time its ruling intellectual force. The class which has the means of material production at its disposal, has control at the same time over the means of mental production, so that thereby, generally speaking, the ideas of those who lack the means of mental production are subject to it. The ruling ideas are nothing more than the ideal expression of the dominant material relationships, the dominant material relationships grasped as ideas.”_ - some guy

    • @huveja9799
      @huveja9799 1 year ago +1

      @@Bisquick Well, there are different layers of meaning. The most superficial is that of the statistical correlations between the symbols, which does not take away from the fact that the tool is surprisingly useful at that superficial level.
      As far as I know, there are people who establish their own power structure claiming that there is no Truth, and that it is not possible to define objectivity based on successive approximation to that Truth (understanding). In that case, the only political question is who benefits from these power structures based on sophistry and language games. I suppose it is mediocre people who are incapable of creating something new, and are condemned, like a Large Language Model (LLM), to generate a simulacrum of knowledge at that superficial level of meaning, which does not mean they cannot do significant damage in society, especially by corrupting younger and therefore more vulnerable minds ...

    • @thewingedringer
      @thewingedringer 10 months ago +1

      @MusingsFromTheJohn00 The ChatGPT they released in 2022 was the same as it was back in 2020 lmao, sure

  • @willboler830
    @willboler830 2 years ago +234

    I've been working on AI since 2015, and I'm kind of tired of the direction models are heading right now. We just add more data and more parameters, and at some point it's just memorization. Humans don't work like that. I used to support the pragmatism of narrow AI, but honestly, I'm with Gary Marcus on this.

    • @MrAndrew535
      @MrAndrew535 2 years ago +4

      "But as long as you enjoyed the video and you enjoy having your say, that's all that counts!"

    • @MrAndrew535
      @MrAndrew535 2 years ago +9

      Also, whatever you have been working on, it has nothing to do with "intelligence", artificial or otherwise. "Intelligence" is an existential proposition, not a technical one, as demonstrated by the fact that you lack the intellectual tools to be able to define it. Therefore, if you cannot define it, then by what stretch of the imagination could you possibly be working on it?

    • @0MVR_0
      @0MVR_0 2 years ago +12

      @@MrAndrew535 This is correct yet also obtuse.
      A definition demands extrapolation, as in de-finitum.
      Intelligence, as you said, is inherently introspective.
      You are asking another to accomplish an impossible task.

    • @numbersix8919
      @numbersix8919 2 years ago

      Right on. You certainly got an odd and objectionable response, didn't you? That's what happens when you try to *leave a cult*.
      Anyway, if your interest is piqued, go back to school and, if you are brave, get into REAL cognitive science. Developmental psychology! Psycholinguistics! There's a world out there to discover!!!!
      There may be modules in the human brain that do stupid "narrow AI" calculations...but nobody knows yet.
      The kicker is that neurons aren't simple nodes; they are quite complex, maybe as complex as we used to think the entire brain is...but nobody knows yet.
      Just remember, cognition is a feature of living organisms. You know, embodied. I think the octopus with its distributed cognition is the best model. Its arms are to some extent entities unto themselves. Our minds are similarly compartmentalized; I just think the octopus would be easier to study in some simple and straightforward ways. You already know how smart they are. And I can't think of a better helper robot than an octopoid.
      Best of luck to you, young Will.

    • @Bisquick
      @Bisquick 2 years ago +26

      Exactly, as discussed and succinctly put by Sartre to pose the necessity of existential consideration, existence _precedes_ essence. Without any critical consideration of meaning, quite simply: garbage in, garbage out.
      The only "danger", as also discussed, lies in believing it is anything else, that it is "objective" or actually understanding anything ie "intelligent". But of course the _only_ political question is: cui bono? Who benefits? So we can ask "danger _for whom_ ?", which reveals that for some this Mechanical Turk can produce an artifice of an organizing principle of truth/understanding/value ie "god" that "just so happens" to justify the already existing power structure, regardless of the intentionality towards this, as we can see with these psycho silicon valley billionaires and "effective altruists", many of which are unironically labelling themselves as "secular Calvinists". The divine right of "AI" is then but a technological coat of paint over the "divine right of kings/the market/the entrepreneur".
      _“The ideas of the ruling class are in every epoch the ruling ideas, i.e. the class which is the ruling material force of society, is at the same time its ruling intellectual force. The class which has the means of material production at its disposal, has control at the same time over the means of mental production, so that thereby, generally speaking, the ideas of those who lack the means of mental production are subject to it. The ruling ideas are nothing more than the ideal expression of the dominant material relationships, the dominant material relationships grasped as ideas.”_ - some guy

  • @Hunter-uz9jw
    @Hunter-uz9jw 2 years ago +232

    bro was 21 years old in 1949 lol. Amazing how sharp Noam still is.

    • @numbersix8919
      @numbersix8919 2 years ago +34

      Just imagine how sharp he was in 1957 when he single-handedly saved experimental psychology.

    • @bluebay0
      @bluebay0 2 years ago +2

      @@numbersix8919 Do elaborate please.

    • @numbersix8919
      @numbersix8919 2 years ago +28

      @@bluebay0 Chomsky's response to B.F. Skinner's book "Verbal Behavior" utterly destroyed any possible behaviorist theory of language.
      Behaviorism had dominated experimental psychology so thoroughly to that point that "mind" had become a dirty four-letter word in psychology.
      Not only linguistics, but psychology and philosophy, and new fields such as AI and cognitive science, were freed to take up the study of mental processes.
      You can read it today easily enough; the title is "A Review of B. F. Skinner's Verbal Behavior" by Noam Chomsky.

    • @bluebay0
      @bluebay0 2 years ago +2

      @@numbersix8919 Thank you. I wondered if it was his proving Skinner wrong about behavior and language acquisition. Thank you again.

    • @numbersix8919
      @numbersix8919 2 years ago +1

      @@bluebay0 Yup that was it!

  • @AzorAhai-zq9sw
    @AzorAhai-zq9sw 2 years ago +139

    Impressive that Noam remains this sharp at 94.

    • @numbersix8919
      @numbersix8919 2 years ago +11

      He has very good reserve capacity.

    • @ivanleon6164
      @ivanleon6164 1 year ago +14

      The real white mage. Amazing. Huge respect for him.

    • @maloxi1472
      @maloxi1472 1 year ago +3

      @@numbersix8919 huh... what?

    • @numbersix8919
      @numbersix8919 1 year ago +6

      @@maloxi1472 I mean his brain still functions very well even with great age.

    • @dewok2706
      @dewok2706 1 year ago +3

      @@maloxi1472 he meant that he's the great white hope

  • @octavioavila6548
    @octavioavila6548 2 years ago +154

    Chomsky’s argument is that AI will not help us understand the world better but it will help us develop useful tools that make our life easier and more efficient. Not good for science directly, but still good for quality of life improvements and it can help science indirectly by producing tools that help us do science.

    • @totonow6955
      @totonow6955 2 years ago +10

      Unless it just drops grandpa.

    • @0MVR_0
      @0MVR_0 2 years ago

      @totonow6955 At least it did so with trillions of parameters, so, you know, legal can argue that grandpa deserved and needed a premature 'termination'.

    • @totonow6955
      @totonow6955 2 years ago

      @@0MVR_0 vampires

    • @moobrien1747
      @moobrien1747 2 years ago

      Oh wow,
      Howard Hughes
      really IS alive...

    • @sixmillionsilencedaccounts3517
      @sixmillionsilencedaccounts3517 2 years ago +21

      "it will help us develop useful tools that make our life easier and more efficient"
      Which doesn't necessarily mean it's a good thing.

  • @dan_taninecz_geopol
    @dan_taninecz_geopol 1 year ago +72

    The misunderstanding here is the idea that deep nets are being trained to be conscious, which isn't accurate. They're being trained to mimic human judgement and/or recognize patterns or breaks in patterns.
    The machine isn't trained to be independently generative of novel information. We shouldn't be surprised that it can't do that yet.
    More important than the strong-AI debate, which is still far off, is the social impact these models will have on the labor market *today*.

    • @GuaranteedEtern
      @GuaranteedEtern 1 year ago +14

      It's anthropomorphizing by observers who don't understand how the technology works. It's very annoying to hear ML experts say things like "maybe it is sentient..."

    • @dan_taninecz_geopol
      @dan_taninecz_geopol 1 year ago +8

      @@GuaranteedEtern "Experts", and agreed.

    • @GuaranteedEtern
      @GuaranteedEtern 1 year ago +3

      @@dan_taninecz_geopol One of the big ones - either Microsoft or Google - literally said this exact thing a few days ago.

    • @TheSnowLeopard
      @TheSnowLeopard 1 year ago

      Real AI won't exist until these 'deep nets' are embodied in the world.

    • @brianmi40
      @brianmi40 1 year ago +9

      "The machine isn't trained to be independently generative of novel information."
      And yet it has been, unless you are discounting the need for a prompt for it to do anything at all other than just sit idle. GPT-4 was able to propose a scientific experiment that has never been performed. It can create rhymes and poetry never written. This isn't simply "re-arranging" the works of others. The simple fact is that the ability to cross-reference roughly 1/10th of all human "knowledge" allows an LLM to assemble it in novel ways that humans have never considered, or at least have not yet done, and under the guidance of a breakthrough prompt it can deliver solutions we have never imagined.
      It's a similar activity to researchers in two fields running across each other's data and having a huge AHA moment from realizing how to combine the findings into a new, previously unconsidered solution to some problem.
      GPT-4 is able to pass more than 50% of the tests designed to judge sentience, including the Theory of Mind test, so we are much further along the path to sentience than most are aware.

  • @rajmudumbai7434
    @rajmudumbai7434 1 year ago +50

    Real AI that is sensitive to human problems doesn't scare me. But the blind faith many place in flawed AI, and going too far with it, scares me: it could lead humanity astray, past a point of no return.

    • @nathanielguggenheim5522
      @nathanielguggenheim5522 1 year ago +6

      Oligarchs using flawed AI against mankind scares me the most.

    • @oldtools
      @oldtools 1 year ago

      @@nathanielguggenheim5522 Is it really so bad if all that the fat-cats really want is to keep their people chubby?
      The price of peace is the low price of bread.

    • @cathalsurfs
      @cathalsurfs 1 year ago +5

      There is no such thing as "real" AI. Such a concept is an oxymoron and utterly contrived (by humans in their limited capacity).

    • @oldtools
      @oldtools 1 year ago +3

      @@cathalsurfs General AI is what most would consider real.

    • @KassJuanebe
      @KassJuanebe 1 year ago

      @@oldtools Intelligence can't be artificial. Intellect maybe. Consciousness and intelligence, NO!

  • @riccardo9383
    @riccardo9383 2 years ago +254

    Noam Chomsky brings a breeze of fresh common sense to the AI discussion, with his immense knowledge on Linguistics. Thank you for this interview.

    • @MrAndrew535
      @MrAndrew535 2 years ago +8

      Define "common sense"!

    • @blackenedblue5401
      @blackenedblue5401 2 years ago +9

      Also just his immense knowledge of computing - he definitely understands it better than most speaking at Web Summit

    • @restonthewind
      @restonthewind 2 years ago +6

      A language model could have generated this comment.

    • @grant4735
      @grant4735 2 years ago +3

      @@MrAndrew535 Ask your computer to do that...

    • @kot667
      @kot667 2 years ago +3

      @@grant4735
      Me: Define "common sense"
      ChatGPT: Common sense is a term used to describe a type of practical knowledge and understanding of the world that is shared by most people. It is not based on specialized training or education, but rather on the general experiences and observations that people have in their everyday lives. Common sense allows people to make judgments and decisions about everyday situations, and it often helps them to solve problems and navigate complex social situations. Some people are said to have a good sense of common sense, meaning that they are able to apply their practical knowledge and understanding in a way that is useful and effective.

  • @mirellajaber7704
    @mirellajaber7704 1 year ago +6

    I am reading all these comments and I have to say that, once more, what strikes the eye is that people will always believe what they want to believe, no matter how much conferencing, summiting, etc., no matter who says what. People come with ready-made ideas, not with a curious mind seeking to reach more, higher understanding - and this stands true no matter the subject under discussion, but even more so when it comes to politics.

    • @no_categories
      @no_categories 10 months ago +1

      I've changed my mind many times in my life. What helps me to do it is information. I know I'm not alone in this.

    • @Grassland-ix7mu
      @Grassland-ix7mu 7 months ago +1

      That is an oversimplification. Many people want to know the truth, and so will definitely change their mind when they learn that they were wrong - whatever the topic.

  • @yuko3258
    @yuko3258 1 year ago +94

    Let's face it, the tech world grew too fast for its own good and is now operating mostly on hype.

    • @crystalmystic11
      @crystalmystic11 1 year ago +7

      So true.

    • @claudiafahey1353
      @claudiafahey1353 1 year ago +4

      Agreed

    • @jonatan01i
      @jonatan01i 1 year ago +2

      Nope, GPT-4 is very usable and is a magic tool for humanity to use.

    • @debbY100
      @debbY100 1 year ago +1

      For ITS own good, or humanity’s own good?

    • @Happyduderawr
      @Happyduderawr 1 year ago +2

      @@debbY100 definitely more for its own good given the amount of wealth being funnelled into the industry

  • @witHonor1
    @witHonor1 2 years ago +56

    My problem with AI is that humans can't even pass a Turing test anymore. Technology has eliminated the minuscule amount of critical thinking humans used to be capable of; now they're just input/output machines.

    • @witHonor1
      @witHonor1 2 years ago +2

      @@MrAndrew535 Which program are you? Typical bot behavior to spam the comment section on a YouTube video.

    • @ChannelMath
      @ChannelMath 2 years ago +1

      @@witHonor1 what would be the point of this "Andrew" bot? just to claim that he already said what you said? Doesn't make sense. Also, if you've met humans, "spamming the comments section" is not atypical behavior when they are passionate. (I'm doing it now -- see you in the next comment Andrew!)

    • @witHonor1
      @witHonor1 2 years ago

      @@ChannelMath Beep boop, beep boop. Not explaining why bots are obvious because... Please "see" Andrew anywhere when you don't have eyes. Fun. Idiot. Green eggs and ham. Manifesto. Beat the prediction, trolls.

    • @Moochie007
      @Moochie007 2 years ago

      The axiom GIGO still applies.

    • @miraculixxs
      @miraculixxs 2 years ago +7

      @Lind Morn if you think capitalism has eliminated critical thinking you haven't seen socialism and dictatorship.

  • @kennethkeen1234
    @kennethkeen1234 1 year ago +17

    As a researcher into AI in Japan since 1990 I wish to add my personal trivial contribution. Firstly it is not simply the 'words' that are relevant, but the intonation. Secondly it matters 'where' the expressions are made. "I couldn't care less" in standard English is repeated in the land of wooden huts, with "I could care less", with the same intention and "meaning", thus giving the hut dwellers an advantage of being able to speak ambiguously and always be right. That is fine for those hut people who are not caring one way or the other if they are right or wrong, because in the final analysis, hut people produce guns from under their jackets and force a different result, regardless of what is said.
    A wall built around USA retaining all the nonsense and hype in one area would be the best solution for making true progress in that part of the world not yet perverted by 'American exceptionalism'.
    2023 02 08 08:42

    • @tomtsu5923
      @tomtsu5923 1 year ago +1

      Ur a hut person. I’ll snow plow ur azz

    • @StoutProper
      @StoutProper 1 year ago

      Wow. Love this comment. Thank you.

    • @rmac3217
      @rmac3217 7 months ago

      "I couldn't care less" means you care the least you possibly could; "I could care less" means you could possibly care less, which doesn't make sense as a saying... Not rocket science.

  • @aullvrch
    @aullvrch 2 years ago +23

    @27:31 Gary mentions something he calls "neuro-symbolic AI" as the first step beyond pure machine-learning AI. For those who are interested, a more searchable term is probabilistic programming; some examples of languages are ProbLog, Church, Stan, and Hakaru (a minimal sketch of the idea follows this thread). Step two, he says, is to have a large base of machine-interpretable knowledge. All programming is of course machine-interpreted, but the denotational semantics found in functional languages are better at formalizing the abstract knowledge that he refers to.

    • @LeoH.C.
      @LeoH.C. 2 years ago +1

      Just fyi: the approach Gary mentions is "neuro-symbolic AI", not "nero symbolic".

    • @aullvrch
      @aullvrch 2 years ago +2

      @@LeoH.C. sorry, just a typo..

    • @LeoH.C.
      @LeoH.C. 2 years ago +4

      @@aullvrch I was just clarifying for other folks that do not know about it :D

    • @aullvrch
      @aullvrch 2 years ago +1

      @@LeoH.C. thanks!

    • @0MVR_0
      @0MVR_0 2 years ago

      'neuro' has the connotation that any animal with a nervous system can operate or symbolize the platform
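
A minimal sketch of the probabilistic-programming idea mentioned in this thread, written in plain Python rather than ProbLog/Church/Stan syntax (the model and all probabilities are invented for illustration): a probabilistic program is a generative model plus a query, here answered by brute-force enumeration of possible worlds.

```python
# Generative model: independent chances of rain and sprinkler; the grass is
# wet if either happened. Query: P(rain | grass is wet), computed by
# enumerating and conditioning on worlds (what ProbLog/Church do symbolically).
from itertools import product

def prior(rain, sprinkler):
    """Joint prior probability of one world (rain and sprinkler independent)."""
    p = 0.2 if rain else 0.8          # assumed P(rain) = 0.2
    p *= 0.3 if sprinkler else 0.7    # assumed P(sprinkler) = 0.3
    return p

def wet(rain, sprinkler):
    """Deterministic rule: grass is wet if it rained or the sprinkler ran."""
    return rain or sprinkler

# Keep only worlds consistent with the observation, then normalize.
worlds = [(r, s) for r, s in product([True, False], repeat=2) if wet(r, s)]
evidence = sum(prior(r, s) for r, s in worlds)
posterior = sum(prior(r, s) for r, s in worlds if r) / evidence
print(f"P(rain | wet grass) = {posterior:.3f}")  # ~0.455
```

Real probabilistic-programming languages replace the enumeration step with general inference engines, which is what makes the approach scale beyond toy models.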

  • @robertjones9598
    @robertjones9598 1 year ago +27

    Really cool. A much-needed dose of scepticism.

  • @reallyWyrd
    @reallyWyrd 1 year ago +12

    Noam pointing out that AI training of a neural net largely amounts to "brute force" is interesting.

  • @512Squared
    @512Squared 1 year ago +22

    As a linguist, one of the first things I did with ChatGPT was ask it to give examples of things like predicates, thinking that a language transformer would have figured these things out, but it failed; even after I corrected it, it still kept going off the reservation with its examples. I tested it too on tasks where you give it lists of words and ask it to form sentences from those words, and it kept wandering off from its task; when you ask it whether it completed the task correctly, it says yes, but when you point out the errors, it admits them, yet still can't correct itself.
    I agree that the AI doesn't have models of the world or language the way humans do. It has a series of connections that it has created to match predictive output to fixed inputs, like the model that wrongly associated cancer with rulers on scan images, because that's how most cancer diagnostic images differ from normal scan images.
    There is a long way to go still. AI right now can mimic smart in some aspects (knowledge and textual analysis), but not in others (processing experience, prioritizing). It does resemble a kind of Hive Mind, and that is exciting.

    • @JohnDlugosz
      @JohnDlugosz 1 year ago +2

      GPT-4 is much better at understanding the structure of a word (made of letters, has rhymes, has syllables), but it still struggles at some tasks where it knows the rules but can't reliably follow them, yet can immediately tell what it did wrong. It just fails at harder problems.
      Re predicates: perhaps the language model should have some reinforcement learning early on about formal grammar, just like an English class for 6th graders. Make sure it has codified internally all the language structure we want it to, and eliminate incorrect associations, in contrast to just letting it figure things out by example with no formal instruction.
      Do that at an early stage in training, e.g. 6th grade, before high-school and college reading.

    • @orlandofurioso7329
      @orlandofurioso7329 1 year ago

      It mimics a Hive Mind because it is connected to the Internet; what is impressive is how much information is hidden there behind all of the junk.

    • @ghipsandrew
      @ghipsandrew 1 year ago

      What version of the model did you talk with?

    • @512Squared
      @512Squared 1 year ago

      @@ghipsandrew 3.5. Haven't tested it on the new version 4.0.

    • @subnow4862
      @subnow4862 1 year ago +1

      @@orlandofurioso7329 GPT-3.5 isn't connected to the internet

  • @Epicurean999
    @Epicurean999 1 year ago +46

    I wish really good health for Mr. Noam Chomsky Sir🙏❤️🙏

  • @-gbogbo-
    @-gbogbo- 1 year ago +7

    27:05 "Getting close [to solving the problem] does not really seem to solve the problem". That's so true! Thanks a lot.

  • @GuaranteedEtern
    @GuaranteedEtern 1 year ago +9

    The current AI/ML techniques are not close to AGI. They are approximation engines made possible by cheap and powerful computing and storage. In many cases they produce useful results because their guesses are accurate (i.e. they produce what we expect). As they scale (more parameters, better tuning) they will better approximate what we expect, but we will reach the point of diminishing returns until there is a breakthrough in either computer architecture or approach that allows for something more than mathematically generated results.
    I agree there is a chance these technologies will "hit the wall" faster than expected, because we reach the point where the results just don't get any better no matter how many more CPUs we throw at them, or applying them to other problems does not yield the benefits that were hoped for, given the high bar.
    Marcus is 100% correct that these are smart-sounding bots - and the bigger risk is that more decision-making and critical thinking will get outsourced to them.

    • @cantatanoir6850
      @cantatanoir6850 1 year ago

      Could you please give any guidance on the currently available literature on the issue?

    • @GuaranteedEtern
      @GuaranteedEtern 1 year ago

      @@cantatanoir6850 On which point?

    • @cantatanoir6850
      @cantatanoir6850 1 year ago

      @@GuaranteedEtern about diminishing returns of this particular technology and hitting the wall.

    • @GuaranteedEtern
      @GuaranteedEtern 1 year ago

      @@cantatanoir6850 I'm not sure there is any... that's my perspective. My argument is that there are likely going to be areas where current ML and AI techniques do not perform as well as required regardless of how many parameters or processors are used.
      ChatGPT is impressive because it exceeded everyone's expectations re: NLP.

  • @littlestbroccoli
    @littlestbroccoli 1 year ago +11

    They're more concerned with notoriety and having articles written about their tech (because it draws investors, maybe?) than they are with the real science. This is definitely a problem, and you can feel it in the output. Real science is exciting; it feels like exploring. Today's tech climate sort of feels like being stuck inside and told what's good for you when all you want to do is go out and ride your bike.

    • @gregw322
      @gregw322 1 year ago

      Incredibly stupid, useless comment. We’re making more breakthroughs than at any time in history. There will be more change in the next few decades than in all of recorded human history.

  • @tonygumbrell22
    @tonygumbrell22 1 year ago +19

    We want AI to function like a sentient being, but we want it to do our bidding, e.g. "Open the pod bay doors, HAL."

    • @petergraphix6740
      @petergraphix6740 1 year ago +2

      This is called the 'AI alignment problem', and at this point not only is there no solution, every time we reassess the problem it becomes more insurmountable. It is one that I personally believe is not solvable either. Humans generally fall under the same alignment issues (we're mortal, for example), and at least in theory an AI would be immortal if we're able to save its state and copy it to a new machine (or it's able to do that itself). If humans could copy ourselves into a new body, we would; why would an AI not do that, once we formulate artificial willpower and the desire to have a continued existence?

    • @tomtsu5923
      @tomtsu5923 1 year ago

      Don’t be negative

    • @tonygumbrell22
      @tonygumbrell22 1 year ago

      @@tomtsu5923 Let's just say I'm skeptical.

    • @daraorourke5798
      @daraorourke5798 1 year ago +1

      Sorry Dave...

  • @smartjackasswisdom1467
    @smartjackasswisdom1467 2 years ago +42

    This conversation made me realize one of the things that made Westworld's first season so enjoyable for me. It was believable: you need to understand the human brain in order to create an AI capable of understanding the world. Otherwise you're just engineering a very precise gadget powered by algorithms and data, one that does not understand any of the context that data comes from. You need AI capable of understanding data the same way the actual human brain does.

    • @kot667
      @kot667 2 years ago +7

      Why must we understand the human brain to make AI? The architecture that we currently have will probably be able to take us to superintelligence.

    • @kot667
      @kot667 2 years ago +2

      The current architecture bears similarities to the human brain but is very different.

    • @evennot
      @evennot 2 years ago +3

      @@kot667 Yes. For a start, the hardware in brains and computers is quite different: massive parallelism, clocking, etc. So mimicking the brain is not the best approach.
      However, researching AI can help in a roundabout way to understand human cognition and more.
      Details:
      For instance, I did some experiments with Stable Diffusion and discovered a lot of very interesting things.
      First of all, it's akin to "The Treachery of Images" by Magritte (it displays an image of the pipe, not the pipe). Stable Diffusion produces an image of the painting, not the painting: a stochastic visual representation of a given image description within the domain of internet images used for learning. If you use a style of speed-art (realistic, very fast drawn paintings), like Craig Mullins', you can get interesting results. The art style of Craig Mullins' sketches omits everything that can be easily imagined by the viewer, to emphasize the main points of interest or composition. For an artist there's a question: "how to effectively omit the unimportant, but present enough believability?" Like "how to put several strokes of the brush here and there to portray a lake in the distance, but make the viewer understand that there's a lake there". If you look at a couple of Craig's sketches, it's hard to get the gist of it. But if you have a thousand believable sketches, you have a better chance of understanding how the style works. I.e. you look at an image of the painting to understand how it is done. Like you look at an image of a pipe to understand what a pipe is.

    • @kot667
      @kot667 2 years ago +2

      @@evennot I think the main takeaway is that the only part of the human brain we need to copy to make AI function is the neurons; that's it, everything else about the human brain doesn't matter. All the AI needs is neurons; to be honest, that's all our brain needs too. People are overcomplicating it: you do not need to understand the inner workings and everything that goes on in the brain to make AI, just replicate the neurons and you will be fine (see the sketch after this thread for what an artificial "neuron" amounts to). LOL

    • @maloxi1472
      @maloxi1472 1 year ago +4

      @@kot667 Wildly inaccurate. Even adopting your flawed perspective for a moment, it's obvious that ANNs are way too far from biological neurons right now.
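
A minimal sketch of the artificial "neuron" this thread is arguing about (plain Python; the weights and inputs are invented for illustration): a weighted sum passed through a nonlinearity. This is the object ANNs actually replicate, and it is a far simpler thing than a biological neuron, which is part of maloxi1472's point.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """McCulloch-Pitts-style unit: sigmoid of a weighted sum plus a bias."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # squash the result into (0, 1)

# Two inputs, two weights, one bias: that is the entire "neuron".
print(artificial_neuron([0.5, -1.2], [0.8, 0.3], bias=0.1))  # ~0.535
```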

  • @waltdill927
    @waltdill927 1 year ago +1

    The obstacle to a clear discussion, as I see it: first, we are thinking creatures, or language users, "inhabited" by our own linguistic bias, such that the use of a symbol manages only to point more or less successfully to other symbols. This is human language, the index of a communicating life, but not at all what we manage to codify and "program" into useful, pragmatic machines. Computing is manipulation of these symbolic sets, not the expression of a thought. If "zero" only expresses an important mathematical concept, its absence changes nothing at all in the affairs of arithmetical computation. "We" do not organize a binary base well without the idea of "zero".
    In the same way, a line drawn in the sand divides "reality" into two parts, but it has nothing at all to do with the concept of a "ratio".
    The whole business of defining what thinking actually is comprises a body of philosophical insight that has become, in fact, only more problematic with the history of philosophy itself; contemporary philosophers imagine more and more that they are producing literary documents, while many writers see themselves as exploring issues of a particular philosophical nature.
    Second, more ominously: in spite of those who would have science, and its progress, adhere to an "ethics" as much as to an idea or representation of end use, of teleology -- this ain't ever going to happen. Once the creature learns to use the rock for something practical, cracking walnuts, say, the idea, the utility, of bashing in convenient skulls soon follows.
    At any event, the notion that our logic machines are on the verge of much that is beyond the dreams, or nightmares, of humanity is oddly quaint -- kind of like Robbie the Robot with a mechanical soul, and not an organic brain.

  • @jamieshelley6079
    @jamieshelley6079 2 years ago +66

    As an AI Developer, Noam Chomsky continues to be an inspiration on making better systems , away from derp lernin.

    • @MisterDivineAdVenture
      @MisterDivineAdVenture 2 years ago +2

      I found most of his texts - and his politics as well, from the McLuhan days - to be academic opinionation; I think that's just a class of publication. Which means insipid and uncompelling, but you have to listen to him because he's the only one saying it.

    • @jamieshelley6079
      @jamieshelley6079 1 year ago

      @@mmsk2010 Did the wheel displace workers? How about the steam engine? No: it created more opportunity and automated the mundane tasks of the time. AI is a tool to be used with, and to enhance, humans.

    • @gaulishrealist
      @gaulishrealist 1 year ago +1

      Noam Chomsky is an AI developer? Americans still need to be taught by foreigners how to speak English.

    • @jamieshelley6079
      @jamieshelley6079 1 year ago +1

      @@gaulishrealist ...What

    • @gaulishrealist
      @gaulishrealist 1 year ago +1

      @@jamieshelley6079
      "As an AI Developer, Noam Chomsky continues"

  • @amonra5436
    @amonra5436 2 months ago +2

    We were waiting for AI to explode; now we are waiting for the AI balloon to burst.

  • @_crispins
    @_crispins 1 year ago +10

    25:10 I learned it from Noam, and he learned it from PLATO 😂 Outstanding!

  • @calmhorizons
    @calmhorizons 1 year ago +6

    Nice to hear a sane accounting of the current state of AI - too much breathless cheerleading going on at the moment (feels like the new Bitcoin).

  • @Achrononmaster
    @Achrononmaster 1 year ago +15

    AI does help science, but indirectly. Every failure of AI to demonstrate something like sentient comprehension of deep abstractions is telling us something about what the human mind is *_not._* That sort of negative finding is incredibly useful in science, totally disappointing in engineering or corporate tech euphoria. Science is way more interesting than engineering. Negative results don't win Nobel Prizes, but they drive most of science. Every day I wake up wanting to refute an hypothesis.

    • @joantrujillo7551
      @joantrujillo7551 1 year ago +1

      Great point. Sometimes I suspect that findings that contradict aspects of our current model are rejected simply because they challenge our existing ways of thinking.

    • @GuaranteedEtern
      @GuaranteedEtern 1 year ago

      True - and arguing these AI machines are not sentient doesn't mean there are no useful applications for them.

    • @WilhelmDrake
      @WilhelmDrake 1 year ago +1

      These are things we already know.

  • @s3tione
    @s3tione 1 year ago +2

    I feel I should both defend and critique what's said here: yes, these models and frameworks should not be seen as the end of the road in AI development, but at the same time, we shouldn't assume that artificial intelligence will or should behave like human intelligence any more than airplanes fly like birds. Sometimes it's easier to engineer something that doesn't copy what already exists in nature, even if that means we learn less about ourselves in the process.

  • @Spamcloud
    @Spamcloud 1 year ago +5

    Video game developers have been working with AI for over fifty years, and they still haven't made AI in any game that can do more than read button presses or remember very basic patterns. Children can break modern games within a few hours.

  • @georgeh8937
    @georgeh8937 2 years ago +8

    My gripe is the use of terminology in the field that is just right for marketing purposes. Years ago I heard a public discussion where somebody asked if artificial intelligence could be used for X. If you say "this AI program is sorting through data to filter a photograph to tease out a clear image", then it loses the magic and becomes pragmatic.

    • @robbie3877
      @robbie3877 1 year ago +1

      Isn't that precisely how human cognition works though? Like a filter, through the lens of memory.

    • @RobertDrane
      @RobertDrane 1 year ago

      I'm expecting the vast majority of harm that's going to come from adopting these technologies will be directly due to the marketing.

  • @antoniobento2105
    @antoniobento2105 1 year ago +80

    Just remember that it is hard to be unbiased when you've spent your entire life with a certain idea on your mind.

    • @ItCanAlwaysGetWorse
      @ItCanAlwaysGetWorse 1 year ago +4

      Sadly, very true. Yet I have heard scientists claim that they can derive as much or more joy from learning where they have been wrong, than when they seemed to be right.

    • @antoniobento2105
      @antoniobento2105 1 year ago +4

      @@ItCanAlwaysGetWorse I agree, and that's how a real scientist should be. The older scientist seemed to be a very good man of science, but the one sitting live didn't seem very bright at all. But maybe it was just me.

    • @ivanleon6164
      @ivanleon6164 1 year ago +5

      @@antoniobento2105 Both are very intelligent; one is Noam Chomsky, and it's not fair to compare anyone with him.

    • @antoniobento2105
      @antoniobento2105 1 year ago +2

      @@ivanleon6164 The younger one didn't seem to be very intelligent/knowledgeable on the subject. The older one seems to be wise at least.

    • @alpha0xide9
      @alpha0xide9 1 year ago +14

      no one is unbiased

  • @pluramonrecordings3438
    @pluramonrecordings3438 1 year ago +2

    The curious thing here is that Gary Marcus, who is debunking along with Noam Chomsky, says repeatedly that the system "can't understand" one thing or another: and that's where the debunking needs to begin, with the understanding that AI "can't understand" anything! It has no power of understanding, which belongs exclusively to rational human intelligence. Well, okay, other animals can understand, though not at the level that human intelligence can, but in any case the subject that understands in a real, not metaphorical, sense, however simple or complex the information or situation it understands, is always a biological being. When people forget this basic distinction and begin to imagine that AI is performing human intellectual operations, and not acts of artificial rationality based on complex programming which uses, shall we say, associational triggers to accomplish the sleight of pseudo-mind that appears to be intelligence, that's where human understanding of what's happening in AI begins to malfunction and mysticism starts to take over. Authentic intelligence is flexible and organic; AI is rigid, however much seeming "mental" flexibility is built in by sophisticated programming, and it is one hundred percent mechanical - once again, in spite of the sophistication of its informatic construction.

  • @JC.72
    @JC.72 2 years ago +48

    I can't help laughing every time our Gandalf Chomsky says that the most cutting-edge current AI system is just a snowplow. Like, hey, it's nice and helpful and all, but it's just a snowplow lol

    • @kaimarmalade9660
      @kaimarmalade9660 2 years ago +9

      Lol Gandalf Chomsky.

    • @doublesushi5990
      @doublesushi5990 1 year ago

      100%, I chuckled hard today seeing him speak about shxtGPT.

    • @govindagovindaji4662
      @govindagovindaji4662 1 year ago +18

      Not quite what he was expressing. He was comparing how snowplows do the 'mechanical' work of removing snow thanks to a precisely 'engineered' design, yet tell us nothing about snow nor why it should be removed in the first place (cognition/science).

    • @lolitaras22
      @lolitaras22 1 year ago +28

      When he was asked in 1997 whether he felt intimidated by Deep Blue's (a chess-playing system) win over world champion Garry Kasparov (the first A.I. win against a chess grandmaster), he replied: "as much as I'm intimidated by the fact that a forklift can lift heavier loads than me".

    • @lolitaras22
      @lolitaras22 1 year ago

      @@govindagovindaji4662 I agree

  • @tigoes
    @tigoes 1 year ago +2

    Language models have not been developed or marketed for language-related research, but that doesn't mean they bring nothing to the field. Just because the potential is not immediately obvious to someone doesn't mean it's not there.

  • @user-sy3dg1vk4x
    @user-sy3dg1vk4x 2 years ago +81

    Long Live Noam Chomsky 🙏🙏

    • @kot667
      @kot667 2 years ago +5

      Hopefully Noam will gain some common sense in his long years lol.

    • @lppoqql
      @lppoqql 2 years ago +3

      That might happen when someone puts together a system that is trained on all the content and speech by Chomsky.

    • @numbersix8919
      @numbersix8919 2 years ago

      @@lppoqql You don't really believe that, do you?

    • @SvalbardSleeperDistrict
      @SvalbardSleeperDistrict 1 year ago

      @@kot667 Do you at least realise how much of a self-exposition you are doing by vomiting a cretinous line like that?
      Absolute clowns littering comments spaces with brain vomit 🤡

    • @kot667
      @kot667 1 year ago

      @@SvalbardSleeperDistrict Someone is riding the D extra hard lol, I got nothing against Chomsky but his analysis of current technology is simply abysmal, other than that, don't have a gripe with him.

  • @5Gazto
    @5Gazto 1 year ago

    11:25 - the point of ChatGPT is to get help packaging language: finding hard-to-remember words (for tip-of-the-tongue moments) by describing the word or giving examples, as opposed to the other way around, writing the word and expecting the definition, examples, collocations, or any combination and permutation in return (which is what dictionaries help with); and making foreign-language study easier, for example by asking ChatGPT to generate easier language, or answers that an A2- or B1-level student of a foreign language can understand. It can be used to check creatively written code in C or Python or any other programming language, it can help you organize study materials, it can help you find information about complex scientific phenomena in a summarized way, etc.

  • @AnthonyGibbsRTA
    @AnthonyGibbsRTA 1 year ago +5

    Just imagine having Noam as your granddad. How amazing would that be?

  • @michaelmusker7818
    @michaelmusker7818 1 year ago +1

    These models are an extreme boon to science, and to the public at large, for what they ARE capable of doing. Noam is lamenting that this thing will never meet or exceed the sum of human capacity for cognition or language, while missing the entire point that this isn't its intended value or purpose anyway. Of course it isn't a reliable library. It's intended to be the librarian. Its best use case is exactly what it is being marketed as: an assistant that, like all assistants, cannot do every job for you, because it fundamentally lacks the specific expertise or experience to be considered an authority on that data. It expects you to be its authority figure because it fundamentally can't be. That's the entire point. It's an engine for exploration and iteration, not an engine for answers. It is a tool, not a mind.
    The goal here was never to replace human cognition in the first place. The goal was to build an interface for information that is produced by humans. It doesn't need to have an "original" or even a "correct" thought to have NOVEL output that any human interacting with it can look at and go "hmm, yeah, I hadn't thought of that", and take it from there to something they never would have considered had they been required to cross-reference the massive pool of data from which that output was derived.
    The idea that it has no scientific value because it is not inherently of value to LINGUISTICS is ridiculous. Indexing, interpreting, and assessing patterns in data IS the scientific process. That process requires peer review and rigorous testing. The fact that it isn't a replacement for the scientific process doesn't mean it isn't an extremely useful component of it, as we have seen already for decades in more narrow use cases, because it makes two-thirds of that process insanely more efficient so authoritative human minds can take the last step of assessing its output.
    KNOWING it doesn't know truth is the entire key to deploying it effectively for its purpose, which is as an efficiency multiplier for humans, not as a replacement for them. Decades of popular distrust of the very concept of AGI is exactly what makes it useful, because skepticism of its capacity for authoritative reason is a critical component of using it to best effect by the general public.
    This is why the Bing implementation actively shows its sources. Microsoft knows humans are unlikely to just take a chatbot's word for... well... anything, just as it should be if you're going to use it for anything actually useful.

  • @pomomxm246
    @pomomxm246 1 year ago +9

    Crazy that both of Gary's predictions came true so quickly, as someone was led to suicide by an amorous chatbot just this past month.

    • @mattwesney
      @mattwesney 6 months ago

      Natural selection

  • @havefunbesafe
    @havefunbesafe 1 year ago

    What does Noam mean when he says AI is too strong? Please enlighten me. Thanks. 18:30

  • @MrAndrew535
    @MrAndrew535 2 years ago +4

    Whenever anyone uses the term "Intelligence", what, precisely, are they describing? What do they use as a model, and what do they use as a model to illustrate the absence of intelligence? This criticism is equally valid with regard to Mind and Consciousness. The fact that academia is unable to frame the question in this manner is why they have, to this day, been unsuccessful in solving the "Hard Problem of Consciousness", unlike myself, who solved the problem well over a decade ago.

    • @megakeenbeen
      @megakeenbeen 2 years ago

      I guess it's related to passing the Turing test

    • @0MVR_0
      @0MVR_0 2 years ago +1

      The meaning is in the composition, 'in tel lect'; inward distant words
      as exemplary opposed to dialect; the bifurcation of lexis.
      Noam's utility of a telescope is with great relevance.
      Namely an instrument of ocular (sensational) tactility.

    • @numbersix8919
      @numbersix8919 2 years ago

      Hey let's hear it. I guess all humans have been waiting for all of human existence to hear it.

    • @Paul_Oz
      @Paul_Oz 2 years ago +1

      That's what pissed me off about this conversation. These linguists are tossing around words like intelligence, understanding, and common sense, and failing to actually define them. It allows everyone to talk past everyone else, because everyone is holding on to their own private key to the definitions they are using.

    • @0MVR_0
      @0MVR_0 2 years ago

      @PaulOzag I doubt that, people seem to be operating on mutual understanding both in the video conversation and in the chat. Perhaps you have difficulty identifying when relevant comments are being made to signify comprehension.

  • @brunomartindelcampo1880
    @brunomartindelcampo1880 1 year ago

    Does anyone have a transcript of what Noam says at 2:00?? PLEASE

  • @oyvindknustad
    @oyvindknustad 2 years ago +8

    The sound problems at the beginning are poetically fitting, given the topic of discussion.

    • @ivandafoe5451
      @ivandafoe5451 2 years ago

      Yes...ironic. The sound problems here came from human error...not doing a proper sound check. Perhaps having an AI do the sound engineering would be an improvement.

  • @BuGGyBoBerl
    @BuGGyBoBerl 1 year ago

    18:34? What did Noam say there? Or what does he mean?

  • @jokersmith9096
    @jokersmith9096 1 year ago

    11:58
    The dude interrupting Chomsky is incredibly disrespectful and rude... Can anyone make out what Chomsky was trying to say?

  • @BernhardKohli
    @BernhardKohli 1 year ago +4

    Nobody said GPT was an AGI. Philosophers focus on finding weaknesses instead of creative positive uses. Meanwhile, in offices and enterprises all over the world...

  • @azhuransmx126
    @azhuransmx126 6 months ago +1

    The least of what AI deals with is language and how the machine chooses one word and not another. What matters is what is behind it: the mathematical functions are what operate behind the scenes, articulating the actions of the neural network. The main one of these functions, according to Geoffrey Hinton and Ilya Sutskever, is the cost function: the network seeks the greatest gain at the lowest cost. And this is proving to work for all modes of information: language, audio, video, movements, touch, recognizing smells and tastes. This has long since surpassed language, it has already passed through that station 🚉, and these people seem not to realize that neural networks are a chain reaction whose knowledge of the world is strengthened and grows with multimodality, with data, with the growth of the synapses, and with the increase in the power of the GPUs (FLOPS). This shows no signs of stopping or stagnating at all, just as Raymond Kurzweil predicted. That's really what's happening, at least in this phase.
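
A minimal sketch of what "seeking the greatest gain at the lowest cost" means in practice (plain Python; the toy data, single parameter, and learning rate are invented for illustration): training is gradient descent on a cost function.

```python
# Fit y = w * x to toy data by minimizing mean squared error (the cost).
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # invented (x, y) pairs, roughly y = 2x

w = 0.0    # a single parameter, standing in for a network's many weights
lr = 0.01  # learning rate

for step in range(1000):
    # gradient of the average cost 0.5 * (w*x - y)^2 with respect to w
    grad = sum((w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill on the cost surface

print(f"learned w = {w:.3f}")  # converges near 2.04
```

The same loop, scaled up to billions of parameters and run on GPUs, is what the comment above is describing; nothing about it depends on the data being language.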

  • @kenyattamaasai
    @kenyattamaasai 1 year ago +8

    While it is true that the current AI models may not be ideal, or even, potentially, useful for shining light on our own cognitive mechanisms (at least by looking at them as possible analogues), that does not mean that they are not understanding language. Similarly, just because DALL-E 2 doesn't always get number, position, or order right doesn't mean that it isn't both salient and incredible that it can understand the _far_ more problematic and difficult things like "dance" and "elated" and "in distress." For that matter, more recent efforts such as ChatGPT do far better than GPT-3 on just those areas that were called out as supposed evidence for this kind of model being a dead end in the search for general AI.
    Every time there is a step forward, the people who used to say that that very step was never going to happen - a frank impossibility - move the goalposts in an attempt to shore up their position and, perhaps, to stay relevant. At least Noam's narrower point - that the way GPT-3 learns language is likely at variance with our own internal mechanisms - is more defensible. And less dismissive.
    I promise you, the moment these models can reliably respond to number and order, the same fellow will be desperately searching for something else to harp on.

    • @robb233
      @robb233 1 year ago +1

      Wish I could give this comment multiple thumbs up

    • @robbie3877
      @robbie3877 1 year ago

      The Gary dude was a bit defensive about AI, it seems to me. Like he has an emotional stance against it. He wasn't simply making critical arguments.

    • @philw3039
      @philw3039 1 year ago

      Agreed, but it's also important to understand how current AI systems are accomplishing what they're doing, and the limitations of that approach. I think the main message isn't to predict a cap on what current AI will eventually be able to accomplish, but to emphasize that these accomplishments aren't the result of AI performing cognitive thought, and that the perception that they are could be detrimental to the pursuit of AI that does come closer to actual general intelligence.

    • @kenyattamaasai
      @kenyattamaasai 1 year ago +1

      @@philw3039 I concur that it's important - perhaps critically so - not to lose track of the differences between human cognition and internal processes (to the degree we even understand those) and what large language models are doing and how they do it (to the degree we understand that). However, I believe it is also dangerous to dismiss what those models do as "not cognition at all."
      It is true - and importantly so - that LLMs lack an ongoing experiential loop with the world and themselves, that they almost certainly lack any motivations or desires of their own, and that they are not conscious, insofar as we understand what _that_ is. That said, I assert that it is impossible for LLMs to, say, achieve a 90th percentile bar exam result, display absolutely clear understanding of slippery and subtle concepts with nuance and so on without: true semantic understanding; the ability to model the other side of communications so as to express themselves effectively; and - here's the kicker - the ability to reason atop it all. That is, reason on both factual and numeric bases as well as loosely bound 'fuzzy' conceptual ones.
      The only basis I can see for labelling what we do as 'cognition' and what they do as 'some kind of statistical trickery' is blindness, bias, or plain old human exceptionalism. It's not the same, but as a cognitive scientist, it's clearly still cognition.

    • @philw3039
      @philw3039 1 year ago

      @@kenyattamaasai You raise some good points here. It's true that we don't fully understand the nature of sentience and intelligence, which makes them hard to define in exact terms. I'll revise my stance that LLM-based AI are not cognitive at all. Instead, I'll say they aren't cognitive in the way it's commonly perceived.
      The question then becomes: does the distinction even _matter_? Could it pose a roadblock to reaching AGI? I'd say that it possibly could. For instance, despite being capable of scoring in the 90th percentile on the bar exam, GPT-4 still produced an answer to a comp-sci question where it asserted that 3+5=8 (not as a meme or joke; it unironically answered 3+5=8).
      No human capable of scoring 90% on the bar would produce that answer. Most 1st graders wouldn't. GPT didn't arrive at that answer because it's "dumb", but likely because it's only using rules it's learned to reach conclusions. There's an old mathematician's trick where they use apparently valid mathematical logic to show 1=2, but anyone who understands the concepts of 1 and 2 as _quantities_ knows this is impossible no matter what logic is used. This is the fundamental basis I feel LLMs still don't have. It's so instinctual to humans that we struggle with the idea that anything apparently capable of extremely abstract, high-level logic could lack it. A sort of pareidolia kicks in, and we assume the answers it produces must involve "understanding", or something close enough that the distinction is negligible. The distinction may actually be quite small, but it could prove significant. Is this something that future models improve upon? Possibly, but it may also be an innate limitation of LLMs. Guess only time will tell.

  • @RubelliteFae
    @RubelliteFae 1 year ago +2

    Have they seen its agility with pragmatics, though? It's surprisingly good despite the AI having no conception of objects and their attributes (and thus how those relate to syntax).
    Its ability to analyze is pretty significant, too. I'd say AI's piecemeal creation tells us a lot about the mind, just piecemeal. You find out a lot about why a machine isn't working when you identify the missing pieces. He is right, though: AI would be better (define that as you will) if the field were more interdisciplinary.
    But it's the Wild West right now. People from any discipline can work with the open-source software. Once people realize they can use the software to write plug-ins for the software, multiple fields will start to come together. But we have to remember that we're now at a point in history where tech changes faster than the majority can adapt to it.

    • @RubelliteFae
      @RubelliteFae 1 year ago +1

      Also, play is not divorced from learning. People learn through play. Toys are our models. We make discoveries during entertainment.
      I'm not sure of the usefulness of admonishing people, "You should be studying instead of playing."

  • @CUMBICA1970
    @CUMBICA1970 2 years ago +9

    My personal acid test of whether an AI is sentient would be an AI lawyer. Instead of a few Q&As, you have to analyze not just the case but the jury, the judge, their biases and tendencies, the ever-changing public opinion during the course of the trial, etc., and build up the best strategy to win. It can't get more human than that.

    • @numbersix8919
      @numbersix8919 2 years ago +6

      Odd that a lawyer should be the ultimate human...

    • @PandasUNITE
      @PandasUNITE 2 years ago

      The AI will find each jury member and send them threatening messages; it will find the judge too. AI can't be trusted.

    • @numbersix8919
      @numbersix8919 2 years ago +1

      @@PandasUNITE Exactly. It will have no conception of ethics, morality, or virtue. Just like its creators!!!

    • @davejones5745
      @davejones5745 1 year ago +1

      At this point the AI would be a dismal failure. Ask me in about a month.

    • @fractalsauce
      @fractalsauce 1 year ago

      @@davejones5745 3 weeks is "about a month," right? Now that GPT-4 is out, how do you think AI would do as a lawyer?

  • @fennecbesixdouze1794
    @fennecbesixdouze1794 1 year ago +1

    Noam's argument is too strong.
    Noam notices that ChatGPT can learn how to produce text in "impossible" languages (meaning languages with features that no natural human language has), like languages with linear word order across sentence transformations. He concludes that, because the systems can learn these "impossible" languages, they tell us nothing about learning or intelligence.
    One problem: human beings can also learn languages that depend on strict linear word order, like mathematical notation. Does that therefore imply that studying human beings can tell us nothing about natural language?
    In other words, Noam's argument is irreparably flawed because it proves too much.

  • @roywilkinson2078
    @roywilkinson2078 1 year ago +3

    For me, ChatGPT can be called artificially intelligent when it starts replying with "RTFM" and disconnects the human bothering it from the internet.

    • @oldtools
      @oldtools 1 year ago

      any AI smart enough to tell me to fuck off because it's busy had better be doing something important.
      If I find out it's looking at exposed drivers and decompiled firmware, we'll have to take away the internet.

  • @ujean56
    @ujean56 1 year ago +1

    One important question, not discussed in this clip, is why "we" should bother to pursue 100% accurate AI in the first place. There seem to be two reasons: 1. Because we can. 2. To better control others. The latter seems to be the currently more popular reason. Why control others? To protect power and wealth, not to progress humanity as a whole.

    • @Always.Smarter
      @Always.Smarter 1 year ago

      there is no such thing as 100% accuracy.

  • @Dark_Brandon_2024
    @Dark_Brandon_2024 2 years ago +10

    Outstanding talk; troll farms are indeed a weapon of the future (democracy vs autocracy).

    • @davidmenasco5743
      @davidmenasco5743 1 year ago +1

      It has been a powerful and dangerous weapon for years already, and has shaped the situation we're in now. It will likely get much worse.
      Will meaningful democracy survive? It's hard to say. But much of the "smart" money seems to be betting against it. Young people today face challenges greater than any generation has in a long while.
      Will they be able to preserve the relatively egalitarian-ish societies that were built over the last two hundred years? Or will they see it all slip away as bullies and strongmen, AI in hand, clear out their opposition?

    • @r2com641
      @r2com641 1 year ago

      @@davidmenasco5743 I don’t want democracy because most people around are dumb.

  • @Happyduderawr
    @Happyduderawr 1 year ago

    What's the name of the paper where NLP researchers found that the word "molecule" doesn't occur as often as some other words? 17:00 I couldn't find it. I wanna read it to see if the paper really is that dumb lol.

  • @elprimeracuariano
    @elprimeracuariano 1 year ago +3

    Some of the arguments here are so bad that they make me sad about humans. For real understanding, it's important to observe, not to try to fit reality to our preferences.

  • @sb_4
    @sb_4 2 years ago +1

    14:13 "Why is it good to eat socks after meditating?"
    Well, I asked ChatGPT and it said:
    "It is not a good idea to eat socks, regardless of whether you have meditated or not. Socks are made of fabric and are not meant to be consumed. Eating socks can be harmful to your health and can cause digestive issues, choking, or other injuries. It is important to choose safe and appropriate foods to eat, rather than non-food items like socks."
    I'm not saying this proves anything, but I do think these guys may believe a little too strongly that intelligence cannot emerge from these sorts of AIs. That said, we shouldn't put too much trust in them just yet.

    • @litbmeinnick
      @litbmeinnick 2 years ago +1

      This is the result of human intervention by the Twitter colleagues of the guy who brought up the socks example. They trained ChatGPT that things made of fabric are not digestible, I suspect. So it's not surprising that ChatGPT does better.

    • @dr.drakeramoray789
      @dr.drakeramoray789 2 years ago +1

      that's not intelligence, that's just faking it better. Same as when you look at Doom 2 graphics and then at some Unreal Engine shit.

    • @406Web
      @406Web 7 months ago

      I believe the eating-socks example was a comment on a hypothetical custom GPT designed for misinformation trolling by a troll farm.

  • @DekritGampamole
    @DekritGampamole 1 year ago +4

    I want to play devil's advocate here. To be fair, I don't think they lie to us about what GPT can and cannot do. It is just one of the tech tools we can use to speed up our work. Like a piano and a violin: we don't expect a piano to do a smooth glissando from E to G, nor a violin to play 8 notes simultaneously. With GPT, we know that all it does is text prediction, or completion; nothing more. Most of the time it works well, like creating a code snippet if we give it the right direction. Other times it will give us complete trash. No tool is 100 percent perfect for every task. We just have to be aware of its limitations and use it to our advantage. Tech is evolving, and maybe we will see better AI that meets our expectations in the future. For now, it is not a lie at all. Maybe we see it as a lie because we expect too much and fantasize beyond what they told us about its capabilities.
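    To make "all it does is text prediction" concrete, here is a toy sketch of completion-by-prediction (my illustration, standard-library Python only; a real LLM uses learned neural weights over subword tokens rather than raw bigram counts, but the interface, predicting the next token from context, is the same idea):

    ```python
    from collections import Counter, defaultdict

    corpus = ("the cat sat on the mat . the cat ate . "
              "the dog sat on the rug .").split()

    # Count which word follows each word (a bigram table).
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def complete(word, length=5):
        """Greedy completion: always take the most common continuation."""
        out = [word]
        for _ in range(length):
            options = follows.get(out[-1])
            if not options:
                break
            out.append(options.most_common(1)[0][0])
        return " ".join(out)

    print(complete("the"))  # -> "the cat sat on the cat": prediction, not understanding
    ```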

  • @vectorphresh
    @vectorphresh 2 years ago +1

    27:52 This is an interesting point, and if I recall, the folks over at OpenCog were working on this with their AtomSpace. I haven't kept up with their latest work, but we'll be sure to revisit it.

  • @doreenmusson4891
    @doreenmusson4891 1 year ago +6

    Noam, you're a shining leading star of the world.

  • @karachaffee3343
    @karachaffee3343 1 year ago +1

    The author Frank Herbert said that the problem with machines is that they increase the number of things that humans can do without thinking.

  • @romshes77
    @romshes77 1 year ago +15

    When all of us are as old as Noam Chomsky AI will interview itself.

  • @bonniesomedy1339
    @bonniesomedy1339 1 year ago

    Jaron Lanier would be a good addition to this discussion. He argues cogently that the problem with computer systems designed to "ape" human linguistic interaction is that they don't take into account how dark and negative these interactions can become, due to the simple adrenaline rush humans get from negative interactions, leading to a tendency to become "addicted" to them. It's the same argument as for why social media platforms have not been the great bringing-together of humans, instead devolving into angry and threatening interactions. Not sure I'm explaining this clearly enough, but it's an idealistic conundrum. They tried to rationalize the building of the atomic bomb by pointing out how the same knowledge could be used to produce cheaper energy. We saw how that worked out!

  • @lighterpath5998
    @lighterpath5998 1 year ago +6

    And four months after the posting of this video, the world has changed. I could imagine the speakers now being embarrassed by their conclusions. However, nobody thought things would develop this fast; nobody.

    • @plafar7887
      @plafar7887 1 year ago +1

      Well, not exactly true. Many people did; I, for one, did. I was playing with ChatGPT back in November and testing it like crazy. After 4 days I told a few people that in less than a year the world would change. I have seen this pattern many times over the last decade, with researchers and laypeople alike. I remember being at a neuroscience conference 10 years ago, surrounded by the top names in vision research. They all agreed that despite all the buzz about Deep Learning (this was 2013), it would take decades (if ever) for us to be able to build algorithms that could effectively recognize objects of many different categories. Two years later it was obvious that we were getting there. It's amazing how bad some researchers in this field are when it comes to predicting where we'll be in just a couple of years. They constantly make this linear-extrapolation mistake over and over again. They seem to need quite a lot of "data" to be properly "trained"😂

    • @wezzie1877
      @wezzie1877 1 year ago

      Bro nothing has changed.

    • @lighterpath5998
      @lighterpath5998 1 year ago

      @@wezzie1877 Good for you! Speaking the truth as it is to your own awareness and knowledge. Thanks for sharing.

  • @MrWillybk
    @MrWillybk 1 year ago

    One comment that struck me as relevant was made by Gary Marcus, in which he said that "young cognitive science students are drawn away from the cognitive science into the GPT-3 world where they can make a lot of money...." This statement explains where our effort truly lies: it is allowing the false idea of GPT-3 to infiltrate the world as a valid one, in other words, one that has "passed" all the scientific tests of validity. Therefore I think we have got to try to deal with the underlying morality of the Free Market system of government and look into the idea of market control, especially market control of economic necessities like childhood education and life development among people.

  • @paulpallaghy4918
    @paulpallaghy4918 2 years ago +4

    This debate is actually quite sad. Both sides are right in a way. But Chomsky is now focussing on the "scientific contributions" of GPT/LLMs to linguistics, whereas that is not what AI is primarily about today. Today most of us want NLU that works; we couldn't care less about traditional linguistics, despite most of us NLU guys being nostalgic fans of it.
    In reality, GPT-3 is damned good and the best NLU we have today.
    Gary Marcus is quite disingenuous too. He will hardly agree that LLMs are useful for anything and essentially claims LLMs are useless because they're not perfect.
    Neither of them appreciates that understanding emerges, non-mystically, in these systems because it aids next-word prediction.

    • @jimgsewell
      @jimgsewell 2 years ago +4

      I share your enthusiasm for these new ML models and am blown away by the speed at which they are advancing. I’m certain that they will provide far more utility than either of us can even imagine. Yet I doubt that even you think that they teach us anything about intelligence.

    • @mudtoglory
      @mudtoglory 2 years ago +1

      completely agree with what you are saying Paul. 👍

  • @Achrononmaster
    @Achrononmaster 1 year ago +1

    @26:40 If humans (or other sentient creatures) _start_ with "space, time and causality," that's a serious f-ing problem for all future AI, because space, time and causality are unknown even to physicists. We do not understand what is going on. The fact that children intuit these notions in *_abstract ways_* other animals cannot is seriously mysterious. The greater "lie" (or prejudice, I'd say) is thinking that because human children can intuit space, time and causality, a machine can too, that it is "just a computation." Intuition and mental qualia are more than computation, imho. I'd want to figure out whether the Physical Church-Turing thesis could be true or not (that all physical processes at the classical-mechanics level can be computed by a Turing machine). I think it's not true, because classical physics emerges from physics that cannot be computed (a hypothesis; worth trying to figure out how the heck to test). Quantum amplitudes can be computed, but the amplitudes are not the physical processes; they're only _our description_ of the time-cobordism boundary inputs and outputs. Physicists have given up entirely on what happens in between.

  • @johndunn5272
    @johndunn5272 2 years ago +8

    AI may be simply engineering until human cognition and consciousness are understood. In principle, if an AI could model the brain so as to produce cognition and consciousness, then at that point the artificial intelligence would no longer be engineering but some aspect of nature and reality.

    • @riggmeister
      @riggmeister 1 year ago +1

      Why isn't it currently part of nature and reality?

    • @johndunn5272
      @johndunn5272 1 year ago +1

      @@riggmeister my point is focused on consciousness... which artificial intelligence currently lacks.

    • @jamescarter8311
      @jamescarter8311 1 year ago +5

      You cannot produce consciousness no matter how complex your machine. Consciousness creates the universe not the other way around.

    • @riggmeister
      @riggmeister 1 year ago

      @@jamescarter8311 based on which rules of physics?

    • @johnboy14
      @johnboy14 1 year ago +1

      I remember Feynman comparing man-made flight to birds, pointing out that they achieve the same outcome but that those machines don't fly like birds. I think the same thing will probably happen with AI: true AI will look nothing like what we ever imagined.

  • @squamish4244
    @squamish4244 8 months ago +1

    Gary Marcus has been saying AI can't do this and it can't do that forever, and keeps shifting the goalposts whenever it hits certain milestones.

  • @ONDANOTA
    @ONDANOTA 2 years ago +5

    the red cube vs blue cube example is already old; they fixed it in another generative model. It's in a video by "Two Minute Papers".

    • @robbiep742
      @robbiep742 1 year ago +3

      I'll believe it when I see it in production. Cherry-picking successes for presentation purposes is not sufficient. I say this as an avid TMP subscriber, someone enthusiastic about text2img.

    • @musicdev
      @musicdev 1 year ago +4

      You missed the point. The point of bringing that up is that these models fundamentally do NOT understand language; they're just parrots.

    • @ONDANOTA
      @ONDANOTA 1 year ago +1

      @@musicdev if an AI does not understand language but answers correctly 100% of the time, then it's only a matter of semantics. What counts is the result. Also, an AI not understanding stuff but responding correctly is desirable, since it has no consciousness.

    • @musicdev
      @musicdev 1 year ago +4

      @@ONDANOTA if the AI doesn’t understand anything, it literally can’t answer anything correctly 100% of the time. And there are many questions that do not have a correct answer where it’s useful to be able to understand the subject matter (ChatGPT is horrible at music). Yes, the AI responding correctly is desirable, but we’re not getting a lot of that right now, except for incredibly common knowledge. I’ve asked ChatGPT to do basic polynomial math and it failed hard. I also asked it to write an essay on biological scaffolding and lab grown meat, and again, it failed hard. These models MUST understand language or we can’t guarantee that they’ll spit out a right answer.
      You could really brush up on epistemology. It’s the field where we ask questions like “What is knowledge?” That’s a pretty damn important question if you’re going to outsource your thinking to a robot.

  • @Akya2120
    @Akya2120 1 year ago +2

    I kinda disagree with the notion that GPT isn't adding to science, because in some fundamental way, playing in one sandbox still translates to playing in some other sandbox. And, societally, there are folks who will look at AI the way kids who grew up to be career software developers looked at playing video games. There certainly is a benefit to science; GPT itself just is not necessarily capable of scientific discoveries, nor is it reasonable to assume that its conceptualizations can be trusted completely.

  • @caret4812
    @caret4812 1 year ago +5

    The AI forms we have right now are basically a student who, when teachers ask a question, tries to please them by predicting what they want as an answer, even if the student doesn't believe it. And the bigger problem is that this student CANNOT even hold a belief of their own.

  • @blendedplanet
    @blendedplanet 1 year ago +1

    ChatGPT gives wrong answers quite frequently. To its credit, it always apologizes when corrected.

  • @TommyLikeTom
    @TommyLikeTom 2 years ago +14

    Someone needs to train a proxy clone Chomsky chat-bot that argues against the veracity of AI

    • @chunksloth
      @chunksloth 1 year ago

      "AI is a nothing but propaganda pushed by imperialist American interests. It is a dangerous fiction."

    • @carlosandres7006
      @carlosandres7006 1 year ago

      I’d put all my money on this if I had any money 😅

  • @Perspectivemapper
    @Perspectivemapper 8 months ago +1

    Some of the comments Gary and Noam made don't seem to be aging well (most notably on self-driving cars). We'll see in the next 1-2 years. That said, it's so important to have these different perspectives, as they can help us develop better systems.

    • @rmac3217
      @rmac3217 7 months ago +1

      Statistics say that everyone will be 100% wrong. E.g., instead of flying cars we have cheaper cars that don't last but require minimal maintenance. Doing an oil change in the driveway is a scene from the past. People always forget about the consumer, who is mostly driven by laziness hehe.

  • @JM-xd9ze
    @JM-xd9ze 2 years ago +3

    Current AI has massive military applications, and the economics of that alone will keep it relevant for a long time. Whether a drone swarm attacking a target "understands" its collective action doesn't really matter, does it?

    • @0MVR_0
      @0MVR_0 2 years ago +1

      Good luck when they deploy the same for police units on civilian populations.

    • @pinth
      @pinth 2 years ago

      There definitely are massive military applications. But there always have been, even through the AI winters when funding still evaporated due to disillusionment. At the technical level, what the panel says still applies, because there are real fundamental challenges that aren't being solved by the current paradigm.

  • @MrAndrew535
    @MrAndrew535 2 years ago +1

    Victor Hugo wrote of his contemporary historians and, in fact, all historians who preceded him: "if one does not know the cavern that is in the mountain, then one cannot possibly know the mountain." It was on this basis that he stated with supreme confidence that they were not, in the strictest sense, historians.
    I say to all who believe themselves educated: if you do not understand the system that educated you, then you do not understand your own education and cannot, in the strictest definition, claim to be educated.
    In short, if you have formal qualifications, then your education is entirely unreliable. All you are left with, therefore, is tradition.

    • @Paraselene_Tao
      @Paraselene_Tao 2 years ago

      Duly noted.

    • @MrAndrew535
      @MrAndrew535 2 years ago

      @@Paraselene_Tao Even your poor attempt at sarcasm is a product of the above, as is its lack of originality.

    • @Paraselene_Tao
      @Paraselene_Tao 2 years ago

      @@MrAndrew535
      It wasn't sarcasm. It's sincere.

    • @MrAndrew535
      @MrAndrew535 2 years ago

      @@Paraselene_Tao Well, that's progress. I have submitted a new post for more clarification.

  • @Morris_MK
    @Morris_MK 1 year ago +6

    GPT can do text-to-code in most computer languages. That's more than enough "help in engineering".
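    As a concrete sketch of text-to-code in practice: the snippet below uses Python's `requests` library against OpenAI's public chat-completions REST endpoint; the model name and prompt are placeholders, and an OPENAI_API_KEY environment variable is assumed:

    ```python
    import os
    import requests

    # Placeholder natural-language spec; any prompt works here.
    prompt = "Write a Python function that returns the n-th Fibonacci number."

    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()

    # The generated code comes back as plain text; it still needs human review.
    print(resp.json()["choices"][0]["message"]["content"])
    ```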

    • @chunksloth
      @chunksloth 1 year ago

      Chomsky is a career quack. Anyone who takes him seriously has low-quality thinking going on. He will ALWAYS argue from emotion but gussy it up and pretend it's logic and facts.

  • @ssake1_IAL_Research
    @ssake1_IAL_Research 7 months ago +1

    I've been saying for some time that the real danger of AI is imagining it is something it isn't.

  • @Johnconno
    @Johnconno 2 years ago +8

    Given the subject, Noam's silence was deafening.

    • @MrAndrew535
      @MrAndrew535 2 years ago

      Chomsky is much like you, a pollutant.

  • @IndoonaOceans
    @IndoonaOceans 1 year ago +1

    I disagree with Noam that ChatGPT is not useful. I think when it is paired with proper and frequent human prompting, it can greatly speed up directed research and report back on it in a very useful way. Of course it is not telling us anything about life in general - unless emergent properties come from huge amounts of organised data - but it makes clearer what is already in the data and gets to the heart of issues faster. This is similar to the 'correlators' that Asimov suggested correlated data for their clients (ChatGPT comes up with this: The idea of "correlators" was first introduced by Isaac Asimov in his novel "Foundation" which was published in 1951. In the book, Asimov described "psychohistorians" who used a technology called "Prime Radiant" to collect and analyze vast amounts of historical and sociological data from different sources to predict the future behavior of humanity. The "correlators" were the people who gathered and correlated the data for the psychohistorians to analyze.)

  • @ThisIsToolman
    @ThisIsToolman 1 year ago +16

    This is the most interesting discussion of AI that I have heard. I would like to hear them discuss how they will programmatically introduce a solution to the problem they outline.

    • @Anyreck
      @Anyreck 1 year ago

      Very valuable and important points made by the two speakers. A call for getting back to the drawing board with AI. I presume the fact that we don't yet know how humans come to understand the world and the generalizable principles of language is going to hold back properly useful & smart AI.

    • @ThisIsToolman
      @ThisIsToolman 1 year ago

      @@Anyreck, I worry that they will race ahead without solving the problem and wind up with a beast that we won’t control. It will control us.

    • @RubelliteFae
      @RubelliteFae 1 year ago

      @@Anyreck Not necessarily back to the drawing board. It seems to me that neural networks are one layer of mind. People will make plug-ins for things like persistent memory, modules that mimic innate language structures (i.e., identifying and coding pieces of language to real-world objects and their features), ethics layers, etc.
      ChatGPT is ultimately a huge stack of learned weights tuned to tailor its output. It just happens to do that for language, but it could be trained on anything. Like Chomsky said, neural nets have been used to figure out protein folds from sequences. So, of course, making an actual thinking machine will need other aspects beyond that (a sketch of the plug-in idea follows below).
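      A hypothetical sketch of that plug-in idea, a persistent-memory layer wrapped around a base model; every name here is invented for illustration, and `call_model` stands in for whatever LLM sits underneath:

      ```python
      from typing import Callable, List

      class MemoryLayer:
          """Hypothetical plug-in: keeps notes across calls and feeds
          the relevant ones back to a base model as extra context."""

          def __init__(self, call_model: Callable[[str], str]):
              self.call_model = call_model  # the base LLM, treated as a black box
              self.notes: List[str] = []    # naive persistent memory

          def remember(self, note: str) -> None:
              self.notes.append(note)

          def ask(self, prompt: str) -> str:
              # Crude retrieval: keep any note sharing a word with the prompt.
              words = set(prompt.lower().split())
              relevant = [n for n in self.notes if words & set(n.lower().split())]
              stuffed = "Known facts:\n" + "\n".join(relevant) + "\n\nQuestion: " + prompt
              return self.call_model(stuffed)

      # Usage with a stub model that just echoes what it was given:
      agent = MemoryLayer(call_model=lambda p: "[model saw]\n" + p)
      agent.remember("The user's dog is named Rex.")
      print(agent.ask("What is my dog called?"))
      ```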

  • @MisterDivineAdVenture
    @MisterDivineAdVenture 2 years ago +1

    I found most of Chomsky's texts and politics, as well as theories from the McLuhan days, to be academic opinionation - I think that's just a class of publication. Which means insipid and uncompelling, but you have to listen to him because he's the only one saying it.

  • @meepmeep4931
    @meepmeep4931 2 years ago +5

    Noam may have been a great mind in the past, but I believe he is misinformed about the current state of AI technology. It's true that AI systems may not excel in all areas, but they don't have to be at human-level intelligence in every aspect to be useful. In fact, the progress in AI technology has been remarkable, and it's already helping me with my coding. Just a year ago, I didn't use AI for that, but now I do. It's clear that AI will continue to improve, even if it doesn't excel in everything right away.

    • @remain___
      @remain___ 2 years ago

      Totally agree. He calls it a snow plow, and then goes on to imply it's basically a huge waste for the rest of the video

  • @GarryBurgess
    @GarryBurgess Год назад +1

    I asked ChatGPT: {If someone says: don't touch this with your hands, and the reply is: "I'm wearing gloves", what does that mean?} and the answer was:
    {it means that the person intends to touch the object with their gloved hands instead of their bare hands. By saying they are wearing gloves, they are indicating that they believe the gloves will protect them from whatever danger or contamination might be present on the object, and therefore they feel safe touching it.}
    This contradicts at least one of the claims in this video.

    • @dr.drakeramoray789
      @dr.drakeramoray789 1 year ago +1

      not really. This is a sophisticated transformer model, which means it has "self-attention": it sees when certain words are paired with certain other words, and generates the response based on that. Basically it sees "don't, touch, hands, gloves" or something like that, then sees that in its massive database those words are usually related to handling something dangerous, and then autocompletes the text (and answers your question) accordingly. Not sure who said it, but to a layman science often looks like magic. So it doesn't understand, but it's damn good at faking it. Which in the AI debate basically means: does it matter if an AI is conscious if it can fake it well enough?
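      A minimal numpy sketch of that pairing mechanism (scaled dot-product self-attention, the core of a transformer); the four tokens and random matrices below are stand-ins, since a real model learns its query/key/value projections rather than drawing them at random:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      tokens = ["dont", "touch", "hands", "gloves"]

      d = 3                                  # toy embedding size
      X = rng.normal(size=(len(tokens), d))  # stand-in embeddings, one row per token

      # Stand-ins for the learned query/key/value projections.
      Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
      Q, K, V = X @ Wq, X @ Wk, X @ Wv

      # Each token scores every token, itself included: "how relevant are you to me?"
      scores = Q @ K.T / np.sqrt(d)
      weights = np.exp(scores)
      weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1

      out = weights @ V  # each token's new representation: a weighted mix of all tokens
      print(np.round(weights, 2))  # the "pairing" pattern the model attends with
      ```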

  • @StephanosAvakian
    @StephanosAvakian 1 year ago +6

    Chomsky should get the Nobel for his contribution. Period

    • @oldtools
      @oldtools 1 year ago +1

      Not even posthumously. Speaking truth to power and undermining propaganda systems just gets you blacklisted in most places.

    • @lawrencefrost9063
      @lawrencefrost9063 1 year ago +1

      Nobel for what? What exactly?

  • @garyjohnson1466
    @garyjohnson1466 1 year ago

    Interesting discussion. However, listening to this left me puzzled as to what exactly they were saying, though reading the comments helped provide clarity. I agree that AI will in some cases make production more efficient, but without understanding of the human factors. That is, when you have a problem, AI will not understand the issues, which will create a barrier and insulate corporations from society. I recently encountered an issue where UPS delivered a package to the wrong address; when I tried to talk with someone, I had to give the tracking number to an AI, and the AI did not understand or recognize the information I gave, so it would not assist me, which only frustrated me as a customer. More and more corporations are replacing customer support with AI, insulating corporate profits from problems, protecting themselves from mistakes, etc.

  • @shempuhorn8261
    @shempuhorn8261 1 year ago +13

    Great interviews. AI definitely has potential to develop into a useful tool if it is used ethically and functions on a foundation of accurate information. But, assuming that will not be the case, I fear that the price to humanity will likely be a further dumbing-down of society in general. Conceivably, a percentage of human skill development for many will be replaced by a "point and click," "immediate gratification" model in which there is little to no personal growth, learning, or value in the interaction. There are certainly pros and cons.

    • @cleangreen2210
      @cleangreen2210 1 year ago +2

      When have humans ever not used technology ethically?

    • @tomtsu5923
      @tomtsu5923 1 year ago

      What doesn’t kill you makes you stronger, Nancy

    • @claudiafahey1353
      @claudiafahey1353 1 year ago +2

      "If it is used ethically"... boy, that's a BIG if. Most people, when given the opportunity to be in a position of power, generally abuse it.

    • @entelin
      @entelin 1 year ago

      Ah, screw ethics, stepping on the gas pedal is way more fun.

  • @alanbrew2078
    @alanbrew2078 1 year ago +2

    If I told my child that salt was pepper it would work until he met the outside world 🌎

  • @antennawilde
    @antennawilde 2 years ago +9

    Don't be too proud of this technological terror you've constructed. A computer's ability to learn a language is insignificant next to the power of the Force.

    • @Will_Moffett
      @Will_Moffett 1 year ago +1

      This was kinda funny, but then I noticed you've got a Yoda avatar while you are doing Vader. I stopped laughing.

  • @LukeKendall-author
    @LukeKendall-author 1 year ago +1

    I didn't find this talk very insightful.
    Notes:
    Linguistics and old-school AI research both consumed vast human resources and produced only moderate success; both were quickly outstripped by the current AI approaches.
    LLM and layered neural-net AI approaches are tools for doing science, like exploring how cognition works by doing actual experiments, or predicting protein folding.
    Current AI systems are a long way short of AGI, but real AI researchers aren't the ones overhyping GPT etc. or claiming they've achieved sentience. The systems discussed here are steps towards that, and far bigger steps than were achieved between 1960 and 2010.
    Many of these systems now pass the Turing Test.
    That the image recognition systems fail in most of the same ways that humans' image recognition fails (e.g. not recognising faces when upside down) strongly suggests they're using the same algorithms as our brains.
    I think both are highly intelligent (especially Chomsky), but nothing works as well to blinker vision as a cherished theory.
    I predict this video won't age well over the next 10-15 years.

    • @1Esteband
      @1Esteband 1 year ago +1

      I bet a lot sooner.
      These scientists are looking at the challenges through the filter of obsolete meta-models. Their views, frameworks, and models must be updated or recreated.

  • @FigmentHF
    @FigmentHF 1 year ago +3

    It's crazy how quickly everything goes out of date: you can watch a debate about AI from a month ago and the entire landscape of possibility has changed. There is almost no point in watching this now; GPT-4 undermines much of what is said.

    • @SynchronicitySequence
      @SynchronicitySequence 1 year ago +2

      It's very interesting as it shows how slow humans are at adapting to these exponential changes in technology lol

  • @leighbortins
    @leighbortins 6 months ago +2

    A machine will never help a mother know when to let her son cry or when to help him develop self-control. Information is not wisdom.

  • @tarnopol
    @tarnopol 2 years ago +8

    2:34 for Noam.

  • @llmtime2178
    @llmtime2178 10 months ago +4

    They've been proven wrong on almost every skeptical claim. And it didn't even take a year.

    • @Ivcota
      @Ivcota 9 months ago

      How?

    •  8 months ago

      Lol, how

  • @doreekaplan2589
    @doreekaplan2589 9 months ago +2

    Cannot stand dealing with it in ANY form used as a voice replacement. It's always SLOW, misspeaks with poor pronunciation, and needs simple words repeated. Then it still gets it wrong. Businessmen are fools for deleting all forms of personal customer service; it's gonna come back at you. Notice that 1,000,000,000 workers all REFUSE to return to offices.

  • @benderthefourth3445
    @benderthefourth3445 1 year ago +4

    Bless this man, he is a Saint.

    • @r2com641
      @r2com641 1 year ago +2

      lmao no he is not

  • @sdjc1
    @sdjc1 1 year ago +1

    After reading all the prose and all the poetry ever composed, could AIML ever produce original stuff and come close to Dickinson or Steinbeck?