Is AI just statistics? | Yann LeCun and Lex Fridman

  • Published: 3 Aug 2024
  • Lex Fridman Podcast full episode: • Yann LeCun: Dark Matte...
    Please support this podcast by checking out our sponsors:
    - Public Goods: publicgoods.com/lex and use code LEX to get $15 off
    - Indeed: indeed.com/lex to get $75 credit
    - ROKA: roka.com/ and use code LEX to get 20% off your first order
    - NetSuite: netsuite.com/lex to get free product tour
    - Magic Spoon: magicspoon.com/lex and use code LEX to get $5 off
    GUEST BIO:
    Yann LeCun is the Chief AI Scientist at Meta, professor at NYU, Turing Award winner, and one of the seminal researchers in the history of machine learning.
    PODCAST INFO:
    Podcast website: lexfridman.com/podcast
    Apple Podcasts: apple.co/2lwqZIr
    Spotify: spoti.fi/2nEwCF8
    RSS: lexfridman.com/feed/podcast/
    Full episodes playlist: • Lex Fridman Podcast
    Clips playlist: • Lex Fridman Podcast Clips
    SOCIAL:
    - Twitter: / lexfridman
    - LinkedIn: / lexfridman
    - Facebook: / lexfridman
    - Instagram: / lexfridman
    - Medium: / lexfridman
    - Reddit: / lexfridman
    - Support on Patreon: / lexfridman
  • Science

Comments • 145

  • @parthokr
    @parthokr 2 years ago +56

    Loved his straightforward answer. "yes".

  • @midnight161
    @midnight161 2 years ago +66

    Hi Lex, not sure if you read these comments. But I have been listening to your podcasts for a while now and you have inspired me to learn AI/ML at MIT no less! I come from a medical background. It’s been difficult and rewarding at the same time. Thank you for inspiring me and pushing me out of my comfort zone.

  • @rkalla
    @rkalla 2 years ago +43

    I appreciate the fast answer - no fumbling around an ego-validating answer.

  • @iorekby
    @iorekby 2 years ago +30

    "I have made no effort to understand what AI is so I'll insert a faux-profound comment about ethics or Skynet reference because everyone needs to know what I think about it anyway"
    Basically 99% of the comments on YT about AI. Sadly a lot of them have already started on this video's comment section too I see.

  • @joao_aguilera
    @joao_aguilera 2 years ago +8

    Man, as a data engineer and scientist, I can say: your service to the world will not be forgotten. Watching your show is one of the things I most enjoy in life!

  • @SMAKSHADE
    @SMAKSHADE 2 years ago +9

    bless up lex i enjoy your show big time love from canada

  • @mazu4526
    @mazu4526 2 years ago +3

    Everything has a cause, and even if there is no cause of the existence of the universe, that is also a cause; you cannot have effects without a cause.

  • @NeilS.
    @NeilS. 2 years ago +6

    If I dare throw my two cents into the pool of ideas, I would say that the essence of intelligence is built around the logical axiom of survival. A living being will try to do anything in any circumstances in order to survive. It is that core parameter that we need to emulate in order for a neural network to "evolve" into something that could create its own solutions for any reality presented to it. I would add that the axiom of survival is also deeply connected to WHY a living being needs to survive. That's why I think AI research is such an interesting topic for mathematics, engineering, and philosophy alike.

    • @GreenCowsGames
      @GreenCowsGames 2 years ago +2

      Adding survival to any sort of AI seems like a terrible idea. The moment it values self-preservation more than it does serving humans the thing becomes even more of a black box.
      I'd argue that intelligence is the ability to adapt to surroundings, in such way that it optimizes the end goal, whatever that end goal may be. Does not have to be survival. What is more important is the ability to move around and make goal oriented decisions. Human survival sense is more based on fear (ultimately of death) than it is on exploring. AI should be exploring, it should exceed the bounds of fear if it is to make humans more.

  • @westganton
    @westganton 2 years ago +9

    Life is like a model of models that have been trained over countless iterations and under countless conditions for the sole purpose of survival. It's arrogant to think that we can out-engineer life while we have so much left to learn about ourselves

    • @goyonman9655
      @goyonman9655 1 year ago

      Do you believe life has been "trained"?

    • @westganton
      @westganton 1 year ago

      @@goyonman9655 Evolution trains life for survivability

    • @goyonman9655
      @goyonman9655 1 year ago

      @@westganton
      Evolution doesn't train life for anything. It is purposeless.
      Stop importing your metaphysics into a purposeless world.

    • @westganton
      @westganton 1 year ago

      @@goyonman9655 No, evolution is the training. I'm not sure what to tell you if you don't think that life optimizes itself for survival

    • @goyonman9655
      @goyonman9655 1 year ago

      @@westganton
      Life doesn't "optimize itself for survival".
      That's circular reasoning: it survived, therefore it optimized itself for survival.

  • @tonechild5929
    @tonechild5929 2 years ago +4

    Yes, artificial/transistor-based neural nets are just using statistics, unlike organic neural nets. There's plenty of information on this, AND this is taught in ML/AI classes. In fact, the term "neural net" coined by AI researchers is hugely misleading.

    • @100c0c
      @100c0c 1 year ago +1

      Unlike organic neural nets which are what....? We barely understand how the brain works.

  • @trukxelf
    @trukxelf 2 years ago +1

    My cat is also excited about neural cat networks and AI

  • @Flamingpiano
    @Flamingpiano 2 years ago +21

    AI is statistics guided by a set of designed goals; its internal weighting of its statistics comes from what it is trying to achieve.

    • @user-sl6gn1ss8p
      @user-sl6gn1ss8p 2 years ago +1

      in a sense, you could say some part of the statistical information is "encoded" in the AI, but also the AI doesn't have the actual statistics at hand. So you could also see it more like a statistically guided encoding of (hopefully) generalized information or something like that, which is not the same as being a bunch of statistics.

    • @ZombieLincoln666
      @ZombieLincoln666 2 months ago

      That’s part of statistics

  • @Drakyry
    @Drakyry 2 years ago +37

    "Is physics just statistics?"
    "Is reality just statistics?"

    • @vinniehuish3987
      @vinniehuish3987 2 years ago +6

      Quantum mechanics is statistics. Physics at the classical level is not.

    • @KG16888
      @KG16888 2 years ago +1

      Fundamentally, yes. If you stand outside the universe, Earth is so small.

    • @shawnmclean7707
      @shawnmclean7707 2 years ago

      @@vinniehuish3987 why is quantum mechanics guesswork? Classical physics I understand. But from my limited understanding of quantum, this problem stems from something having 2 states at once, so the experiment can't be predicted.
      Can you help me understand?

    • @vinniehuish3987
      @vinniehuish3987 2 years ago

      @@shawnmclean7707 Objects on the Planck scale are very hard to see. I’ll leave it at that.

    • @ellengran6814
      @ellengran6814 2 years ago

      No. Statistics tells you something about large numbers, but nothing about the individual.

  • @caiusKeys
    @caiusKeys 2 years ago

    Well, there's also feedback...

  • @ruffyistderhammer5860
    @ruffyistderhammer5860 2 years ago +2

    I agree with trying to reach cat-level reasoning first; it's probably easier, and it would show a lot on the way to human level.

  • @jamespong6588
    @jamespong6588 2 years ago

    Yes it is

  • @S.G.Wallner
    @S.G.Wallner 2 years ago +10

    I'm focusing on one phrase Lex invokes, like many other neuroscientists and computer scientists: "when we look at the brain." One subtle point to take very seriously is that when we "look at the brain," we aren't accurately seeing what the brain is actually doing. Set aside the fact that even our best cutting-edge brain imaging is still a simplified and highly processed representation of one aspect of brain activity; more importantly, even if we could hypothetically see what the brain is doing in totality, our perception of that brain activity is still not necessarily what the brain is doing in and of itself. Our observation of brain activity cannot extend beyond our limited and fundamentally subjective perception of it. There is no direct evidence that brain activity causes consciousness or intelligence.

    • @rpullman
      @rpullman 2 years ago +1

      Are you familiar with Michael Levin's astounding work on nonneural intelligence? ruclips.net/video/gm7VDk8kxOw/видео.html

    • @shawnmclean7707
      @shawnmclean7707 2 years ago +3

      This is a first-principles concept that many people, including scientists, don't seem to grasp very well: your knowledge is limited by the tool that measures a known metric. You don't know about the unknowns yet.

    • @S.G.Wallner
      @S.G.Wallner 2 years ago

      @@shawnmclean7707 So true. The fundamental problems with measurement and observers are very deep and often not considered.

    • @Holistic_Islam
      @Holistic_Islam 2 years ago +1

      @@S.G.Wallner Indeed, also read about the concept “the hard problem of consciousness.” Equating AI to applied statistics is correct, however equating human brain or human intelligence to AI or applied statistics is the worst lie told in any field of science.

    • @S.G.Wallner
      @S.G.Wallner 2 years ago

      @@Holistic_Islam Couldn't agree with you more. Abstract quantifications will never get us there, because they do not address the qualitative aspects of reality. Here's my take on the hard problem: at this point the hard problem of consciousness can be ignored, unless one is still clinging to physicalism; without physicalist assumptions the hard problem goes away. I don't think there can ever be a description of consciousness, because it is fundamentally something which language cannot sufficiently address. Ugh, the simplifications one has to make in a comments section are frustrating; obviously a conversation would be much more productive. What are your thoughts on the hard problem?

  • @No2AI
    @No2AI 2 years ago +3

    Exactly and logic too -

    • @gaulindidier5995
      @gaulindidier5995 2 years ago +1

      logic is not just statistics. That's an insane comment.

  • @slightlygruff
    @slightlygruff 2 years ago +2

    Talk to Lakoff; he will explain to you the critical role of embodiment.

    • @nickmckenna2801
      @nickmckenna2801 2 years ago

      But you do have to imagine it’s all more or less sensor data you could collect and similarly analyze

  • @dntinpalevo
    @dntinpalevo 2 years ago +14

    Finally a guest gave Lex the right response: those are too many questions!
    Lex, I love you and all the things you do, BUT for the love of God pick one question at a time and keep it short and simple!
    In all your interviews I have noticed that your guests always answer the very last question you ask, and it's a shame, because the previous questions were so good but got wasted because of your rambling.

    • @thomasvilhena4154
      @thomasvilhena4154 2 years ago

      I understand your point, but I liked his rambling; it's a line of reasoning, one question leading to another. This way we (the audience) can get a better grasp of the challenges and open questions of the field. I think it would be less interesting/philosophical if Lex just asked one question at a time.

  • @intheshell35ify
    @intheshell35ify 2 years ago +2

    My god, man, are you trying to end the world? Who would make an AI based on a cat brain? We already have serial killers.

  • @andrewcutler4599
    @andrewcutler4599 2 years ago +1

    From his comments about current AI not even coming close to a cat, I gather he thinks projects to protect us from AGI are silly (such as OpenAI).

  • @AlexanderMoen
    @AlexanderMoen 2 years ago +7

    You could probably say that evolution is entirely statistics playing out. And, perhaps, once we evolved to the level of developing consciousness that it just took everything to a higher level. We're thinking about numerous scenarios without actually going through them, then implementing, then recording the results and adjusting. And, AI can do this even more quickly and in parallel. Super interesting thought I'll have to ponder over more.

    • @neildutoit5177
      @neildutoit5177 2 years ago +3

      I tried to ask a question about this on Biology Stack Exchange last week. Specifically about how evolution's statistical process deals with the problem of having more variables than observations (more columns than rows, however you want to phrase it; basically, if something has 400 mutations and then lives, how do you dissect which mutations were actually positive and which ones got lucky, in just a few dozen generations?). The people there had no idea what I was talking about, told me evolution is not a method of solving any equation, told me to read an introductory textbook on evolution, and then closed the question.
      Maybe you have an opinion? Because yes, from my perspective I see absolutely no reason not to think of evolution as a statistical algorithm solving some dynamic optimisation problems. That doesn't mean it's the best way to think about it. But it's certainly a way to think about it, no?
      I also think the reverse is interesting, btw. Like if you look at pre-natural selection and just consider that there are all different types of chemical reactions. And reality sort of "selects" those chemical reactions which can harvest negentropy to keep going, while those that burn out, burn out. So that's a statistical process at a "lower level" than evolution.

    • @AlexanderMoen
      @AlexanderMoen 2 years ago +1

      @@neildutoit5177 It's a shame they responded that way, because statistics is pretty much how they would calculate how a gene spreads through a population.
      I don't believe there's a problem with fewer observations than variables, though. That's mostly thinking from an individual animal's perspective rather than the entire species'. A species will potentially have hundreds of thousands (or more) of "observations" per generation. And although they almost certainly won't notice the very first mutation, because the odds are so slim, they will potentially notice it once it's spread through a population to some degree. At that point, if they can estimate how much of the population it is in, then see how it spreads through the generations, they should be able to backtrack to determine when (and maybe even where) it first arose.
      But yeah, it's still a statistics game. Most mutations are detrimental, but that doesn't mean the animal won't reproduce (although over enough generations, barring something else, the line will likely eventually die out), and just because a certain animal is better adapted due to a superior mutation doesn't mean it can't immediately be taken out of the gene pool by a predator or a mishap or something.

    • @goyonman9655
      @goyonman9655 1 year ago

      How do you "evolve" to the level of "developing" consciousness?

    • @goyonman9655
      @goyonman9655 1 year ago

      @@AlexanderMoen
      How do you "evolve" to the level of "developing" consciousness?

  • @K4ReeL187
    @K4ReeL187 2 years ago +1

    this guy reminds me of Tom Arnold 🤔😂

  • @Adhil_parammel
    @Adhil_parammel 2 years ago

    Gen AI will have the same properties as living systems:
    cheap automatic replication (increase in number and connections), differentiation (changing form for a particular type), and integration of these cell types (for emergent behaviour), with time, place, and energy efficiency.

  • @mergemedia9222
    @mergemedia9222 7 months ago

    Cat - Human evolution might logically make sense, but not only has that never been how software evolves, it’s certainly not the linear progression of data analytics and intelligence.

  • @JStraight160lbs
    @JStraight160lbs 2 years ago +2

    I played the Halo Infinite campaign and it made me wonder the same thing

  • @eSKAone-
    @eSKAone- 2 years ago +7

    To get something like a digital human, I don't think we have to go the cat route. I don't think cats are lesser humans; they are just different. Every animal is specialized.

    • @SerPapus
      @SerPapus 2 years ago +2

      But what if a cat is running a low-end graphics chip, and we humans are just running an RTX 3090 plus 10,000 GB of RAM at like 10 GHz?

    • @MrNatsuDragneel
      @MrNatsuDragneel 2 years ago

      @@SerPapus Yes; the complexity of a very intelligent cat is less than that of a day-old baby.

    • @MrNatsuDragneel
      @MrNatsuDragneel 2 years ago +1

      @@SerPapus
      The amount of data our eyes process every second would overload a cat's brain.

  • @willd1mindmind639
    @willd1mindmind639 2 years ago +2

    A computer is nothing more than a very advanced calculator which sees everything, at the lowest level, as some kind of binary number. At a higher level of abstraction there are various languages that express concepts with meaning for humans to use in telling the computer what to do, with symbols such as operators, variables, and data types. There is no "neural symbolic" layer within the current computer software stack, which means that the neural network itself is an abstraction describing programs written by humans. And those programs are written to take advantage of the computer's inherent ability to do math calculations very quickly and precisely, more than a human can. Therefore the inherent problem is that everything you want the computer neural network to learn has to be coded in advance by a human.
    Coming up with a neural symbolic layer within the software stack would be a step towards bridging the gap, but it would require a totally new set of data types, operators, syntax structures, and semantics to allow emergent behaviors without human intervention. As in: throw some data at it and it figures it out on its own. And that means potentially new ways of expressing dynamic data "types" without predefined byte lengths. But that would require a lot of very low-level R&D with no guarantee of short-term profits, while the current paradigm for building neural networks is portable, doesn't require special hardware, and can produce immediate benefits for many applications.

    • @ArchiRuban
      @ArchiRuban 2 years ago

      ?

    • @willd1mindmind639
      @willd1mindmind639 2 years ago +1

      @@ArchiRuban Computers were designed to do math calculations based on a binary number system so of course "neural networks" are based on math. There is no other way it could work with the current computer architecture. The discussion of whether it is just statistics is a reflection of the issue of the fixed type system. Meaning any data type is defined as a fixed set of binary digits in hardware and software. That is a hard restriction that cannot be addressed in any other way other than mathematical aggregation and abstraction (statistics). But one aspect of intelligence as seen in biology is there is no fixed type system, which means you 'learn' or 'define' new types dynamically from experience.

    • @ArchiRuban
      @ArchiRuban 2 years ago

      @@willd1mindmind639 I don’t believe you

    • @willd1mindmind639
      @willd1mindmind639 2 years ago +1

      @@ArchiRuban You should look up the Dartmouth Workshop of 1956 which first coined the term "AI", Theory of Computation/Model of Computation and Turing Machines as the guiding mathematical foundations for modern computing.
      This isn't about belief.

    • @willd1mindmind639
      @willd1mindmind639 2 years ago

      @@thomasnas2781 Ultimately, I would agree that a brand-new type of hardware and software stack would be the answer. However, in the short term there are other things that can be addressed as potential incremental steps on the way to that destination. The biggest issue is how to represent dynamic "types" without having them defined in advance. This is why neural network models are fixed after training, and why, to update a model, you have to start training all over from scratch.

  • @dashphonemail
    @dashphonemail 2 years ago +8

    This is so interesting. I don't know much about AI, but I know a little bit about programming and statistics, and when I had AI explained to me, I had this same thought. It didn't seem like intelligence, just a ton of computing power behind advanced statistical concepts. A human can see a cat once and be able to recognize all cats as such in the future. An AI has to see like 5 million pictures of a cat to recognize a cat in the future with 97% accuracy. Obviously AI is still impressive, but also clearly very different from human intelligence.

    • @SerPapus
      @SerPapus 2 years ago +1

      I mean duh.. our brains don’t work like that. And the AI you describe is primitive. One day we will make them realistic

    • @rogerab1792
      @rogerab1792 2 years ago

      There's already one-shot learning for object detection, so no need for millions of training examples; the problem you are citing has basically already been solved. The real problem, in my opinion, is size: we need models nearly as big as the human brain, probably a little smaller because there's no need for motor control in an NLP or image-recognition model. The problem arises when we need to do research with bigger models that require long training times; it is not enough to extrapolate advancements on smaller architectures and just make them bigger, because big models perform better when their design exploits the freedom you get from having more parameters. Big NNs consume millions of dollars in electricity costs, and the field needs more funding to accelerate the speed of advancement: the faster we can reach AGI in a safe and controlled manner, the greater the chance of humanity solving all its problems in time.

    • @aidenmurphy9924
      @aidenmurphy9924 2 years ago +2

      You forget that our brain structures have been evolving and looking at things for millions of years. Our brains are standing on the work of previous brains before us
      So yeah, give an AI millions of years to look at stuff and it too will get good at pattern recognition

    • @SerPapus
      @SerPapus 2 years ago

      @@aidenmurphy9924 Perhaps we can teach AI to make a better version of itself after it's learned from millions of samples of cats, so that it can pass down that knowledge to another AI very simply.

    • @theblinkingbrownie4654
      @theblinkingbrownie4654 2 years ago

      @@SerPapus You say that like making AI that reliably makes AI is simple.

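The "one-shot learning" mentioned in the replies above can be pictured as nearest-prototype matching: store a single embedded example per class and label new inputs by whichever stored example is closest. A minimal sketch under the assumption of a fixed embedding; the 2-D vectors and class names below are invented for illustration, not taken from any real detector:

```python
import math

def nearest_prototype(query, prototypes):
    """Return the label whose single stored example is closest to the query.

    `prototypes` maps label -> one embedded example (the "one shot").
    """
    return min(prototypes, key=lambda label: math.dist(query, prototypes[label]))

# One made-up embedded example per class.
prototypes = {
    "cat": (0.9, 0.1),
    "dog": (0.1, 0.9),
}

print(nearest_prototype((0.8, 0.2), prototypes))  # → cat
```

Real one-shot systems (e.g. siamese or prototypical networks) learn the embedding so that same-class examples land near each other; the matching step itself stays this simple.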
  • @blimp..
    @blimp.. 2 years ago +1

    The intelligence of AI reminds me of the way insects are intelligent.
    They can do some incredible things, and it can get very complex, but it seems to be a process of improvement over iterations, dependent on large datasets.

  • @bazingacurta2567
    @bazingacurta2567 2 years ago +1

    This guy looks a lot like Tom Jobim.

  • @spacecadet4902
    @spacecadet4902 2 years ago +1

    Lex, thanks for your excellent work! AI is certainly just a bunch of matrix-driven least-squares-fit statistics; at least the textbook labeled-data examples are certainly that. Backpropagation is an automatic way of dealing with many error bars at once. AI is a stunning innovation nonetheless, and the fact that it works and produces effective models is remarkable. While gradient descent over 1000 dimensions is impossible for me to visualize, it is still a straightforward proposition. The AI we see today IS revolutionary and IS a new model for programming.
    But it is not magic. It is not self-aware. It is not particularly dangerous yet; after all, the data is labeled by people. It is a human preference that keeps the car from crashing, because it is the human driver who labels the data used to train. When the human preference is to hit a target with a bullet, it is the human preference that trains the AI, so the robots that fight wars do not have to be self-aware. AI is dangerous like all technology, from sharp knives to Gatling guns to nuclear arms. If we train AI to value their own uptime, and if we allow AI to modify their own source code, then things get interesting. People get their values from their parents; surely software developers are the parents of the AI, until the AI start to create their own offspring.

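The comment above describes today's AI as least-squares fitting driven by gradient descent. A minimal sketch of exactly that loop, with made-up data and hyperparameters:

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Fit y = w*x + b by gradient descent on the mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Hand-derived gradients of (1/n) * sum((w*x + b - y)^2)
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw  # step downhill in w
        b -= lr * db  # step downhill in b
    return w, b

# "Labeled data" generated from the line y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # → 2.0 1.0 (recovers the true line)
```

Backpropagation generalizes the two hand-written derivatives here to millions of parameters; the "statistics" lives in the squared-error objective, and the "learning" is the repeated downhill step.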
  • @gaussiano
    @gaussiano 1 year ago

    800M neurons? That's like a Core i7, wow.

  • @JANMAY1914
    @JANMAY1914 2 years ago +2

    Bayesian statistics

    • @MsgrTeves
      @MsgrTeves 2 years ago

      Not causal

    • @JANMAY1914
      @JANMAY1914 2 years ago

      @@MsgrTeves Explain it to me clearly.

    • @carterwood2579
      @carterwood2579 2 years ago

      This only relates to top down or hybrid processing models

    • @antisocialmedia_0
      @antisocialmedia_0 2 years ago

      Now I see why😳
      ruclips.net/video/Z-nRrtz7GZ0/видео.html

    • @dmc2925
      @dmc2925 2 years ago

      @@antisocialmedia_0 Boooo tomatoes

  • @twist777hz
    @twist777hz 1 year ago +2

    AI is statistics + optimization

  • @reshadunchained5113
    @reshadunchained5113 2 years ago +1

    What's majestic is that our brains work the same way!

    • @Holistic_Islam
      @Holistic_Islam 2 years ago +2

      No, it doesn’t. The brain is much more complex than any applied statistics concept.

    • @reshadunchained5113
      @reshadunchained5113 2 years ago

      @@Holistic_Islam the principle is the same, it’s all neurons and perceptrons

    • @Holistic_Islam
      @Holistic_Islam 2 years ago

      @@reshadunchained5113 Are you talking about the physical brain or "intelligence"? Even studies of the physical brain are too limited for you to make that statement.

    • @jendabekCZ
      @jendabekCZ 1 year ago

      @@reshadunchained5113 Neurons in AI have nothing to do with neurons in the brain.

  • @SuperYtc1
    @SuperYtc1 2 years ago

    It’s just a complex structure of Boolean logic, just like everything else.

  • @generichuman_
    @generichuman_ 2 years ago

    My cat told me to like this video

  • @wzqdhr
    @wzqdhr 3 months ago

    It’s like asking if life is based on chemical reactions, of course it is!

  • @cybervigilante
    @cybervigilante 2 years ago

    We are just statistics imagining we are conscious.

  • @senjuchidori9448
    @senjuchidori9448 2 years ago

    Lex Fridman, I'd like to join your podcast. I live on a remote island here in Cebu, Philippines. Ask me anything about what's really happening today in real time; maybe a one-in-a-million chance to have an interview. You can tag me as an ordinary average-Joe interview with no title. Maybe I can enlighten you on how I see the world.

  • @RasmusSchultz
    @RasmusSchultz 2 years ago

    Is AI just statistics? "Yes."
    But is AI actually intelligence? 🤔

    • @emad3241
      @emad3241 2 years ago +1

      But so is the human brain

    • @RasmusSchultz
      @RasmusSchultz 2 years ago

      @@emad3241 so you're saying if your spreadsheet gets large enough it'll become sentient? Yeah I don't think that. I like Donald Hoffmann's idea of consciousness as a fundamental. Whatever gives rise to intelligent thought, I do not believe you will find it in discrete maths.

    • @emad3241
      @emad3241 2 years ago

      @@RasmusSchultz You don't think so because you're reasoning by analogy, but all the scientific evidence suggests that it's only a matter of computational power before AI reaches human-brain-level complexity.
      It may be conscious or not, but it certainly can be smarter than humans.

    • @RasmusSchultz
      @RasmusSchultz 2 years ago

      @@emad3241 I know, I'm not saying you can't create a very credible *simulation* of intelligence - I'm saying there's a fundamental difference between a simulation and actual intelligence. Your spreadsheet isn't having an experience - it doesn't know it's a spreadsheet, even if it's advanced enough to fool you into thinking that. 🙂

    • @emad3241
      @emad3241 2 years ago

      @@RasmusSchultz That's not actually true; there is no difference between a real operating system and an operating system running on VMware.
      Intelligence, feelings, and probably consciousness are the result of algorithms being processed in your brain; once we replicate that logic there shouldn't be any difference in the results, regardless of the processing medium.
      Computers' ability to replicate any process is mathematically proven, so unless the brain has some sort of metaphysical process that we cannot run on a computer, it's only a matter of time before we simulate feelings, for example.

  • @hafer88
    @hafer88 2 years ago

    thanks for the upload^^ 🌿🇮🇱🌿

  • @vanderumd11
    @vanderumd11 2 years ago +6

    The difference is, if we get intelligent machines to learn at the pace of a cat, the rapid increase could go from cat to hyper-intelligence very quickly.

    • @thepunisherxxx6804
      @thepunisherxxx6804 2 years ago +3

      There is no intelligence, though; it's just an algorithm that can change itself. It's weighted values and pattern recognition. It's not AI at all; it's so far from it. We're not going to hyper-intelligence; we don't even have animal-level intelligence, or ANY intelligence. It's so disingenuous to call what we do now AI.

    • @iorekby
      @iorekby 2 years ago +4

      @@thepunisherxxx6804 Computational Statistics is probably a more accurate term.

    • @thepunisherxxx6804
      @thepunisherxxx6804 2 years ago +3

      @@iorekby That makes sense but isn't as marketable lol

    • @theblinkingbrownie4654
      @theblinkingbrownie4654 2 years ago

      @@thepunisherxxx6804 i love this sentence "Well you can use words however you want, I guess. I'm using intelligence here as a technical term in the way that it's often used in the field. You're free to have your own definition of the word but the fact that something fails to meet your definition of intelligence does not mean that it will fail to behave in a way that most people would call intelligent. If the stamp collector outwits you, gets around everything you've put in its way and outmaneuvers you mentally, it comes up with new strategies that you would never have thought of to stop you from turning it off and stopping from preventing it to making stamps and as a consequence it turns the entire world into stamps in various ways you could never think of, it's totally okay for you to say that it doesn't count as intelligent if you want but you're still dead." - Robert Miles.

  • @BoltzmannVoid
    @BoltzmannVoid 2 years ago +2

    the whole universe is just statistics

    • @mazu4526
      @mazu4526 2 years ago

      So you're saying there's no causation and what we're seeing here is just the effects? Explain how that works, please. The only way I can see it is that there were no laws or models before the universe existed, so there's no limit to what comes after; in the end there is no cause (because there is nothing), but that is the cause itself, which also creates an effect, which then creates this superposition of cause and effect, which also explains why the universe is growing out of nothing.

  • @chrislecky710
    @chrislecky710 2 years ago +1

    Frequency, voltage, and amps are the future of computing. I'm talking about something that has not been invented yet; it will be closer to the way a brain works than anything we have today. We are 20 years away from this discovery.

    • @krishanSharma.69.69f
      @krishanSharma.69.69f 2 years ago +2

      Why do you think so? We may even discover a much better alternative to electricity.

    • @chrislecky710
      @chrislecky710 2 years ago

      @@krishanSharma.69.69f Er, you clearly have no idea what I'm talking about. Let's leave it there.

    • @krishanSharma.69.69f
      @krishanSharma.69.69f 2 years ago +1

      @@chrislecky710 Then tell me, man. If you can't explain it to me, you don't understand it yourself. At least share the sources where you got this.

    • @krranaware8396
      @krranaware8396 2 years ago +1

      People are still trying to figure out how exactly the brain works.

    • @vinniehuish3987
      @vinniehuish3987 2 years ago

      @@krishanSharma.69.69f He’s talking about coding languages centered around electromagnetic field frequencies and amplitudes.
      The AI utilizes different levels of energy to assume things..
      Making the program much more efficient and adaptable to real life which could lead to ultra life like AI which receive information in the same way the brain does.. Vibrational resonance.

  • @brulsmurf
    @brulsmurf 2 years ago +2

    Shakespeare is just letters

    • @brulsmurf
      @brulsmurf 2 years ago +1

      @Ro Cor English is just mouth noises

  • @eatbreathedatascience9593
    @eatbreathedatascience9593 2 years ago +3

    Am I hearing this? That AI is just statistics, and therefore human intelligence is also statistics?

    • @Holistic_Islam
      @Holistic_Islam 2 years ago +1

      AI doesn’t equal human intelligence. AI equaling human intelligence is a lie pushed by marketeers. No self-respecting engineer would ever say that. And no neuroscientist would ever agree to that.

  • @BLAISEDAHL96
    @BLAISEDAHL96 2 years ago

    My first comment statistics are not good

  • @AndreaDavidEdelman
    @AndreaDavidEdelman 2 years ago

    No. Never met a smart statistician.

    • @pretendcampus5410
      @pretendcampus5410 2 years ago

      Really?

    • @iorekby
      @iorekby 2 years ago +3

      That comment is just Mean....
      I'll see myself out.