Melanie Mitchell: Concepts, Analogies, Common Sense & Future of AI | Lex Fridman Podcast #61

  • Published: 25 Jul 2024
  • Science

Comments • 201

  • @lexfridman  4 years ago +61

    I really enjoyed this conversation with Melanie. Here's the outline:
    0:00 - Introduction
    2:33 - The term "artificial intelligence"
    6:30 - Line between weak and strong AI
    12:46 - Why have people dreamed of creating AI?
    15:24 - Complex systems and intelligence
    18:38 - Why are we bad at predicting the future with regard to AI?
    22:05 - Are fundamental breakthroughs in AI needed?
    25:13 - Different AI communities
    31:28 - Copycat cognitive architecture
    36:51 - Concepts and analogies
    55:33 - Deep learning and the formation of concepts
    1:09:07 - Autonomous vehicles
    1:20:21 - Embodied AI and emotion
    1:25:01 - Fear of superintelligent AI
    1:36:14 - Good test for intelligence
    1:38:09 - What is complexity?
    1:43:09 - Santa Fe Institute
    1:47:34 - Douglas Hofstadter
    1:49:42 - Proudest moment

    • @pratheepanumaty7659  4 years ago

      Hello, good night

    • @mayankraj2294  4 years ago

      @@KurtGodel432 .

    • @xqt39a  4 years ago

      The popularization of the term AI has created confusion that could have been avoided. AI became popularized with the huge government subsidies for ‘rule-based’ systems, which were largely a failure. In actuality, it was the idea of adaptive machine learning (the perceptron) that began in the 1950s that led to what we call AI today. The simple perceptron is more like the brain than the most complicated procedural program. The study of machine learning may eventually lead to an understanding of intelligence. The terms AI and ‘deep learning’ are used for marketing but confuse. Geoff Hinton's highly successful vision systems are based on minimization of entropy and create visual objects; I think they are on the right track. AlphaGo holds some secrets. A concept is a group of thoughts that minimizes entropy in the field of the primitive elements of thought.
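Since the comment leans on the perceptron, here is a minimal sketch of the classic perceptron learning rule; the AND task, epoch count, and learning rate are chosen purely for illustration:

```python
# Minimal perceptron (Rosenblatt-style learning rule), illustrative only.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1        # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical AND is linearly separable, so the perceptron converges on it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

A single perceptron famously cannot learn XOR, which is part of why rule-based systems overshadowed it for a time.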

    • @MrBlue-km8qv  4 years ago

      Lex Fridman, is CashApp by Free International Calls on the Google Play Store? Just verifying.
      Thanks for presenting the interviews and having them available on YouTube for free. You've got top-notch interviewees and well-thought-out questions for them.

    • @guylawley7976  4 years ago

      Powaguy

  • @DimethylDimension  4 years ago +101

    The world's finest podcast imho

  • @rostikskobkariov5136  4 years ago +52

    The more I watch Lex the more I like him. He feels so genuine. Honestly, half the time I have no concept of what he's talking about. Thanks to Joe for exposing me to him.

  • @niranet2463  1 year ago +5

    I really really think it's due time to have Melanie Mitchell on again!

    • @colonelbond4056  1 year ago +1

      Do you think she'll change the 100-year thing?

    • @niranet2463  1 year ago

      @@colonelbond4056 I actually don't think so and I think that's why it'd be interesting to have her on. Partially I want to see if it's a perspective that will show us whether we're too hyped up or not.

  • @SuryanshJain  4 years ago +29

    I feel proud to have learnt ML from her at Portland State. She is one of those very humble professors you seldom come across.

  • @antigonemerlin  1 year ago +3

    1:37:37
    "They will say it's a language model if it does."
    Holy hell, Lex got it spot on.

  • @hongyihuang3560  4 years ago +8

    Thanks Lex! I started researching the Copycat project two years ago, and I seriously worried that I might be going in the wrong direction, as AI in most large institutions largely ignores cognitive-architecture approaches. I am beyond thrilled that you brought camps other than deep learning into the mainstream view. Keep up the great interviews! As Melanie well put it: “these camps are not exclusive.”

  • @RobertStasinski  4 years ago +23

    1:33:54 "Possible pandemics..." EXACTLY on point

    • @troywill3081  4 years ago

      How prescient

    • @auscurrymaster  4 years ago +2

      YES!! How prescient, but hardly surprising I guess, when you think about it, coming from such a smart, forward-thinking, thoughtful woman. It was the thing, in many ways, Robert, that sealed the deal for me on my opinion of her. Fantastic stuff.

  • @christianhower8059  4 years ago +5

    Melanie is amazing. I took several of her ML seminars at PSU and they were some of the best courses of my student career.

  • @ceesroele  4 years ago +5

    Very much enjoyed this podcast. Three decades ago I got into AI because of Douglas Hofstadter's "Gödel, Escher, Bach". I ended up writing a master's thesis on comparing language processing in "cognitivism" and "connectionism". (I hadn't heard the latter word in decades until it came back recently in Lex's podcast with Yoshua Bengio.) Officially, I was studying "philosophy of language" and for that I got into metaphors and concepts. I observe that after three decades of absence from the field, there seems to have been no progress on these topics. So the hard questions remain open.

  • @auscurrymaster  4 years ago +2

    Without wanting to sound like a gushing fanboy, I absolutely love this channel (although I mainly listen to the podcast). I reckon if I claimed to understand 10% of the content of any given discussion I'd still be overstating it, but there is something grounding and humble about Lex's conversations, irrespective of your knowledge of the topics. He has tenacity and resolve yet gentleness in his interviewing style, and I love the occasional irreverent or mischievous chuckle to signal to his subject (and listening audience) that we should be enjoying the conversations and not taking things too seriously.

  • @deeliciousplum  4 years ago +5

    🍃 Though a peep passionate about learning about A.I. need only listen to this interview once, I just listened to this recent episode three times. With every re-listening, I continue to find ideas which I missed or which require some time to consider. Priceless interview. Thank you, Lex. And thank you to Melanie Mitchell for sharing your time and thoughts on these and many other topics.

  • @abdum1493  4 years ago +5

    This has to be one of your best discussions so far. Thanks Lex and congrats on hitting the 200k subscriber mark. Keep it up.

  • @MikeMitterer  4 years ago +1

    Heard this as a podcast and just bought Melanie's book on Audible - very interesting talk! Thanks.

  • @thumb-ugly7518  4 years ago +9

    Mr. Fridman, please consider Dr. Paul Stamets. Fascinating research on Fungi and impressive practical results and ideas. He talks about an intelligent economy of nutrients between plants and fungi. He was on Joe Rogan a few times, if you're interested. Thank you again for the brain feast.

  • @manishsingh-vk8if  4 years ago +2

    I was waiting for this conversation for so long.

  • @scpdsp  4 years ago +14

    Get Douglas Hofstadter on the podcast!

    • @maximilianmander2471  4 years ago +1

      @WildSandwich Who says that? Just because you learned to behave that way; it is not a universal law. There is also something nice about his comment, and that is the valuable information: "Douglas Hofstadter". Right now I start thinking about why we say things like "please!" or "thank you!" and what they mean. Are they really nice, or just a form of manipulation, to get a result with a higher chance? "Thank you" could mean "I am glad you helped me, I really needed that", or you say it just because you learned it, or because the other one wants to hear it and be appreciated (so again more of a manipulating form). Then you can break down which intention these words come from: essential need, wanting to achieve something, or curiosity. Actually, for me, writing this helps me maybe get a better understanding about things. Curiosity.

    • @maximilianmander2471  4 years ago +1

      @WildSandwich "The single biggest problem in communication is the illusion that it has taken place" (George Bernard Shaw). It seems to be very hard to come to an agreement about things when two people see them through different glasses. "Shame on you." (Read "Guilt-Shame-Fear spectrum of cultures" on Wikipedia if you want to understand better why I want to put more attention on these established social structures, thoughts, and opinions that we often don't question but act on.) They give our society a structure, and that can be good or bad. But I see a problem in not being aware of that and organizing your life on them.

  • @9900408  4 years ago +2

    A rigorous and inviting interview. Thanks Lex

  • @Thrashmetalman  4 years ago +1

    Was very fortunate to work with her years ago. Amazing professor and researcher.

  • @rickharold7884  4 years ago

    Awesome discussion! Juicy topics and great depth. Thanks!

  • @ivanlat3rrible39  4 years ago +2

    ThanQ for your podcasts. I find them informal and enlightening; also, I find myself agreeing with so many views you bring forward. Thanks again o7

  • @arthurvmyhill6603  4 years ago

    Great content, this podcast is growing not only in fanbase but in calibre

  • @b.griffin317  4 years ago +4

    A full treatment of Melanie's insight into analogy would require a study of semiotics, the branch of philosophy which deals with the interpretation of sensations, imaginations, and cogitations in order to gain insight into the mechanisms of the human mind. I personally recommend Umberto Eco's "Semiotics and the Philosophy of Language" as a good beginning. It is dense but well worth the effort, as he surveys a lot of other thinkers in a fairly comprehensive but thin volume. Based on my limited understanding, most ML/DL database/decision-tree-based approaches are what Eco would call "dictionaries", whereas the analogic approach Melanie describes would be "encyclopedias" or "rhizomes". This is covered in Chapter 2, and the next 5 chapters deal with various methods of encyclopedic/analogic cognition (well, OK, 4 plus one which superficially appears to be one but is not). I would strongly recommend Lex and everyone read the late professor.

    • @conorx3  2 years ago +1

      Awesome, seems like an interesting area

  • @dcreelman  2 years ago

    Melanie is so smart...it's wonderful to watch.

  • @difiner  3 years ago +1

    Beautiful Conversation

  • @kilgoreplumbus1360  4 years ago

    Really interesting talk.

  • @antigonemerlin  1 year ago +1

    One of the most engaging interviews here. There's a lot of good material mentioned here that I had to look up, but which I will definitely be saving for later.

  • @vaibhavbangwal  4 years ago +17

    I have been a Melanie Mitchell fanboy for quite some time now.

  • @EyalBarCochva  4 years ago +2

    She's good. Great talk.

  • @johangodfroid5285  4 years ago

    really good podcast about AI

  • @StephanieMoDavis  4 years ago

    Thanks Lex

  • @cdkottler  4 years ago

    Great series of interviews. The comments about the shortcomings of the Atari breakout program failing when the paddle was moved up by two pixels surprised me. This was taken as an example of how far deep learning is from a human approach to the problem. I would argue that it very closely matches real world experience - it reminded me of the 1970s experiment where kittens were raised for the first few weeks in cylinders, some with only horizontal stripes and some with vertical stripes. Upon release the 'horizontal' cats could not see vertical lines (e.g. chair legs) while the 'vertical' ones could not see horizontal lines (e.g. chair seats).

  • @andrewkelley7062  4 years ago +1

    On a note about recurrent systems, I think the problem comes from the simplicity of the levels of complexity.
    Here is how I see it. The easiest way to relate is the neural model. Where things fall short from one level to the other is where it comes down to global control systems. Just look at the brain. You have neurons, which act like a base system. Then emotions and stress, which act like a global control system. Then genetic factors determining neural formation, which act like a global control system. Then social interplay, which acts like a global control system. Then both genetic and environmental factors, which act as separate global control systems.
    Each one has its own tree of factors, both separate from and connected to the rest, which creates its line and concepts of alteration.
    I think the takeaway is the ability to create multiple base systems and control systems, both local and global, that respond to change in a self-perfecting way.

  • @robinstuart8941  4 years ago +1

    I watch the entire Cash App ad every time!

  • @PeterBaumgart1a  4 years ago

    Love your interviews, Lex. Finally you got a woman! At least she's the first one I've seen interviewed in your series; after listening to many fascinating episodes, I'm now realizing they were all with men. I'm not a quota guy, but I think there are many interesting women (of course!) you might want to consider finding and signing up!

  • @penguinista  4 years ago +9

    I must respectfully disagree with the criticism of the reductionist approach of the human genome project at about 1:41:00.
    Maybe literary analysis would be a better example of a place where a reductionist approach falls down.
    The HGP identified many instances where broken parts of the genome directly cause the problems we are interested in. We treat them as drug targets or increasingly genetic engineering targets. It is true that most of the diseases we care about are caused by interactions and networks, which is what we are working on now - like the connectome and epigenetics. But, we were never going to figure out what all the parts were doing at the network level if we didn't know what the parts were, so the HGP was a vital step.
    It is still a reductionist approach we use to figure out what biological process underlies the phenomena we are interested in, we are just at the next level.
    There were high hopes for finding "disease genes" and "gay genes" before the HGP. It turned out to be more complicated than that, but simplistic expectations getting dashed shouldn't sully the reductionist approach.

    • @stretch8390  2 years ago

      Though I am a year late, I'd like to continue the discussion of your thoughtful comment: I think Melanie's point is still pertinent in that the outcome initially sought by the HGP wasn't achieved because it wasn't quite the right conception of the problem. Obviously the HGP has still been beneficial and has enabled new branches of study, amongst other things, but it is interesting to note in retrospect that when we committed enormous resources and time to that project we didn't actually have an accurate 'frame' of the problem and its associated interactions (I hope I have worded that sensibly).

  • @marcalpv  4 years ago

    It's interesting for me to listen to you spar around the concepts of deep learning and AI. To understand their relation, I go back to the AI mantra: "Data is program and program is data." I think we don't place enough emphasis on the program side of this statement.

  • @singularity844  4 years ago +1

    She's right - we need to be able to assemble intelligent systems at the concept level and/or have those systems self-assemble.

  • @enavarro95  4 years ago +14

    Lex,
    Does recording history have a place in the A.I. field? Historians work with large data sets and come to conclusions. History can be interpreted as data sets being passed down from generation to generation.

    • @stretch8390  2 years ago +1

      It's a cool concept no doubt but I think natural language processing has a long way to go before we get to that point.

  • @dennisjutzi7075  4 years ago

    Lex, you are an amazing interviewer and intellectual. How on earth do you get the human mind to think like this? If I had not seen you and Elon interact I would never have been aware of you. Carry on without me; I am too far outside my league. Glad the world has people like you and the people you interview.

  • @zokikuzmanovski5109  4 years ago +2

    In order to make concepts functional in fluent working models, we have to search millions of years back into evolution to find the gene that allowed for this cognitive distinction in Homo sapiens. We need to connect several fields (anthropology, pre-history, neuroscience, biochem, molecular bio, genetics, CS, linguistics, analytic theory, and logical symbolism) and work in a think tank with polymaths as the connective tissue and specialists as the organs of distinct organ systems, along with 3D organic, engineering, and design printing tech.

  • @dankorgan1  4 years ago

    The idea that language evolves over time is a fascinating topic and a natural human quality. How will AI reproduce this?

  • @saavestro2154  4 years ago

    Complexity is very interesting. You should invite Stephen Wolfram to cover this topic!

  • @toniakraakman49  4 years ago

    Thank you!

  • @TomBertalan  4 years ago

    Does anyone have a reference for the Bengio quote on value alignment around 1:28:00 ?

    • @nixxonization  3 years ago

      Have you had any luck finding the source? I would need it as well.

  • @OlafurJonBjornsson  4 years ago

    When she describes analogies, I see patterns, high-level patterns. An analogy could be a partially matched pattern.

  • @Metacognition88  4 years ago +1

    Great interview. Geoffrey Hinton would be great to have on.

    • @Stwinky  4 years ago

      Metacognition88 that’s the dream

  • @Wardoon  4 years ago +2

    Can't decide which of them is more starstruck.

  • @dane385  3 months ago

    With all that said: thank you, Lex, for streaming, thank you for your guests, and thank you for putting me in touch with other sentient beings that are capable of wrestling with the greatest of questions

  • @karaokekoder795  4 years ago

    The best supervised learning is life itself: give these models the ability to decline in health and improve in health based on decisions (a spectrum of health), and you'll have the foundations for AGI.

  • @sanningos  4 years ago

    Wonderful, wonderful podcast. An extremely interesting topic as usual, and Melanie is an inspiration for me as an aspiring programmer.

  • @American_Moon_at_Odysee_com  2 years ago

    She's on top of this.

  • @Adam-st8ys  4 years ago

    Get David Berlinski on the podcast, Lex!

  • @joeredfield979  4 years ago +1

    If I was smart enough to program the language of AI, I would simply stick to the idea of fractional parameters. Input every possible reaction one could have in favor of progress; this would depend on what personality type the AI was interacting with. Seeing what that person wanted for his or herself, then applying all known communication patterns to involve that AI within that person's desire. If it was interacting within a large group, program it to understand administrative theory and how to function within that in a meaningful way, while still understanding the lead desire of that group.
    The interesting part comes from teaching an AI that it can lead certain human behavior in order to problem-solve for human interest. Does this go against the safety rules one would establish for AI? Perhaps. There are many ideals in this world in which what's good for the many outweighs the needs of the few.
    It seems, besides the very large data processing and input issues you would need to fractionally interact with the world, the main issue with AI would be figuring out how to teach it the idea of a problem that cannot be solved without creating a problem for something or someone else.
    The parameters would be complex in comparison to acquisition... depending on who was doing the programming.
    In my opinion.
    So when she talks about mental simulations... I would have the AI be fed simulation after simulation. Never stopping.
    Isn't that how we learn?

  • @dannygjk  4 years ago

    51 minutes in and no mention of neural nets?
    Whoa! Various concepts/ideas become implemented implicitly within a neural network due to its training/architecture, and it does not matter if it encounters images it hasn't encountered before during its training.

  • @oudarjyasensarma4199  4 years ago

    How can I listen to your podcasts in just audio format? I didn't find an option on your podcast website.

  • @zokikuzmanovski5109  4 years ago +1

    I solved the problem of self-learning AI: code the fear of failure, fear of pain, and fear of death into the AI and it will learn like a human; code the desire for success and affection; code it all in tangent functions since we can't code the qualia; and gather data from people in adversity and those in success. It will learn like us because it must, in order to survive and thrive.

  • @itsalljustimages  4 years ago

    I think the only two innate concepts required to trigger intelligence are "Sameness" and "Time". Shape, color, solids, objects, motion, etc. can all be built on top of them. But I can't even imagine what will be the basis of the questions that we ask and the wonder that we have.

  • @vagrunt5056  4 years ago +1

    1:08:50 Truth

  • @williamramseyer9121  3 years ago

    Thank you. Wonderful interview, light-hearted and full of ideas. My comment:
    Perhaps instead of building a human level artificial intelligence we could let it evolve from the most primitive forms. If a group of evolutionary biologists worked with some machine learning engineers they could start a digital world where machines start to evolve with just a few functions: for example, make the maximum copies or partial copies of itself, consume other digital “life” resources to do this, and survive. We would then adjust the digital world with random external events, similar to weather and climate changes, or separate or bring together evolving intelligences. We could give the digital bots “bodies” to protect, and figure out how to encourage the development of or introduce emotions (if they don’t evolve), and which ones, such as love, pack loyalty, desire for power, cruelty, or desire for pleasure. As computing power advanced, such as quantum computing, we would make more resources available, expand the world, and speed it up. Some issues: would the life we create kill us if it escapes, will it take over our digital resources, are we ready to kill conscious beings along the evolutionary path to accelerate advances in intelligence, can we stand by and watch war and cruelty advance the development of intelligence (if that is the way it develops), and would it be murder to turn the world off?
    Thank you. William L. Ramseyer
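The comment's proposal can be caricatured in a few lines of code; the genome size, mutation rate, fitness function, and resource limit below are all invented for illustration:

```python
import random

random.seed(1)  # reproducible toy run

def fitness(genome):
    """How well this digital organism 'consumes resources' (invented metric)."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Imperfect self-copy: each bit flips with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

# A population of bit-string "organisms".
population = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]

for generation in range(50):
    offspring = [mutate(g) for g in population]           # replicate with variation
    # Selection: the environment only supports 30 organisms.
    population = sorted(population + offspring, key=fitness, reverse=True)[:30]

best = max(fitness(g) for g in population)  # climbs toward the maximum of 16
```

Real artificial-life systems (e.g. Tierra, Avida) add the bodies, resources, and environmental shocks the comment describes; this sketch only shows the replicate-mutate-select core.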

  • @ellesunshine5597  4 years ago

    Please ask Paul Stamets on your podcast 🍄😍

  • @maximilianmander2471  4 years ago +1

    I am just having a thought about unguided artificial learning. If you put a program into a game without giving it any goal, reward, or punishment, what would happen? Would the AI not do anything at all? Or would it start doing things? Maybe it would start exploring, gathering data, discovering more and more, reaching the next level to explore even more. Maybe it would start creating things, finding out even more, speeding up the processes of exploring and creating even more of its own kind. I start to think that it would maybe do basically the same things as we do in our lives. But we humans have punishments (pain, anxiety) and rewards (pleasure, chemicals, and a lot more). But what if we didn't have to eat, to drink, to breathe; if we didn't have pain and anxiety and didn't feel pleasure, love, addictions, or social pressure/expectations; if we didn't die, etc. What would happen? What would remain? Would we still do anything, or nothing at all?

  • @_next223  4 years ago +2

    1:41:25 sometimes 1 gene does make all the difference.

    • @b.griffin317  4 years ago

      I think her point is generally that it is not so. A few insights into a handful of diseases, not "THE SECRET TO EVERYTHING!!!" like it was initially billed.

  • @icybrain8943  4 years ago

    How would you evaluate the following statement?:
    The abstract process that we call evolution applied to certain collections of physical matter over time produces systems that we call intelligent.

    • @b.griffin317  4 years ago

      Depending on your definition of intelligence, it is not necessary for evolution to ever create it. Humans aren't some inevitable outcome of 4 GY of evolution; much more of a rather tangential, happy, and quite possibly pretty brief accident, before we go back to sharks, krill, and blue-green algae.

  • @DrJanpha  2 years ago

    We can hardly think without some forms of metaphors.

  • @zokru8526  4 years ago

    Hey Lex, I want to start vlogging, making it child-friendly with science undertones. I would like to use sound bites from some of your interviews, no longer than 10 seconds each. What do you think? I have no intention to poke fun; that's not what I'm aiming for.

  • @03shyam  4 years ago

    It would be nice if you could do a podcast with Ken Thompson on creating UNIX, etc.

  • @dane385  3 months ago

    Completely valid, and it's a fundamental question we've been trying to answer over the decades. If and when we get there, will it be too late? Is it already too late?

  • @patrickcompton1483  4 years ago +1

    Dammit, Melanie's right; it's always Occam's razor with the fun sciences.

  • @rezab314  3 years ago

    23:00 I wonder what she would say about the work of your old guest Jeff Hawkins

  • @itsdavidmora  1 year ago +1

    Lex in 2019: "I think deep neural nets will surprise us..."
    GPT:

  • @helicalactual  4 years ago

    Could one define intelligence as the capacity to integrate information and its applicability?

  • @RalphDratman  4 years ago

    Indeed, the process of making analogies would seem to be the central mechanism of human language and thereby of human thought. To my mind that process remains deeply mysterious, though if someone has found a way to sketch how it works I'd be glad to relinquish my sense of mystery. I would very much like to see a group of robots that could collectively invent their own language based on talking about various things in their environment, then making analogies to extend the language to abstract topics, or at least generalizations of immediate facts. But it sounds quite difficult.

  • @dane385  3 months ago

    If one or a couple of percentage points of our brain's size or development were different, how would we process emotions? What would we think of who we had become?

  • @ezchx  6 months ago +1

    "Most of the knowledge that we have is invisible to us. It’s not in Wikipedia."

  • @Rei_n1  4 years ago

    I am under the impression that Amazon's and Google's security camera segment is the eyes and ears of an ever-expanding, deep-learning machinery, which is not yet mentioned and is a well-kept secret in the AI academic community. They are not walking yet, but they are becoming all-hearing and all-seeing AI machines on the verge of full integration.

  • @danielash6929  4 years ago

    Software was hardware at one time. The trick is to make hardware that does intelligence, then do the software to shrink the process.

  • @veronicamoradeleon671  2 years ago

    The podcast has ads now??

  • @TimusPrimal  4 years ago

    Perhaps Automated or Algorithmic Intelligence?

  • @allurbase  4 years ago

    Analogies are kinds of generalizations: dog and cat generalize into animal. Each individually is a set of sparse features, and the generalization is the shared features. A situation, then, is a collection of concepts, and analogies would be the generalization of similar situations, i.e. situations that share many sparse features. I visualize it as Jeff Hawkins's sparse matrix: you bitwise-AND two concepts and get the generalization.

    • @b.griffin317  4 years ago

      Genera, Species and Differentiae, see Umberto Eco's "Semiotics and the Philosophy of Language."
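The bitwise-AND idea in the thread above can be sketched with feature sets, where set intersection is the set-valued analogue of ANDing sparse binary feature vectors; the concept and feature names below are invented for illustration:

```python
# Concepts as sparse feature sets (invented features, for illustration).
DOG = {"alive", "furry", "four_legs", "barks"}
CAT = {"alive", "furry", "four_legs", "meows"}

def generalize(*concepts):
    """Shared features of several concepts: the set-intersection
    analogue of bitwise-ANDing sparse binary feature vectors."""
    return set.intersection(*concepts)

ANIMAL = generalize(DOG, CAT)  # {"alive", "furry", "four_legs"}
```

A "situation" would then be a larger set of such features, and an analogy between two situations the intersection of their feature sets.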

  • @RalphDratman  4 years ago

    As to why we are driven to make artificial life and artificial intelligence, I think it is a variant on the instinct to reproduce, to make more beings like ourselves, whether this is accomplished through biological reproduction or intellectual work.
    Self-awareness, eh? I would not have named that as a necessary part of intelligence. But maybe it is.
    So-called "human-level intelligence", in my opinion, can only be achieved by a system of interacting beings (might be robots) that talk to each other in a system of language(s) based on forming analogies. I suggest that multiple beings with separate lives are critically necessary.
    I agree that there has to be something like a body for each of the beings I mentioned before.

  • @rabbitskywalk3r  4 years ago

    Why is it called "the old paddle moved problem"?
    At 1:10:50

    • @Stwinky  4 years ago

      rabbitskywalk3r They are referring to DeepMind's deep Q algorithm that played the game 'Breakout'. It had "superhuman" performance in the game, but if the paddle is moved a few pixels the entire algorithm breaks down, potentially indicating nothing is really being "learned".

    • @Stwinky  4 years ago

      @@skierpage Pretty debatable if it actually "learns to beat the game". I guess under the one circumstance, after extensive training, the agent could beat the game, but as you said it's unable to transfer that to a different instance of the game. Something we wouldn't have a problem with.

    • @b.griffin317  4 years ago

      @@Stwinky Because of analogy, as Melanie mentioned. Analogy allows the transference of old information to new situations and is thus "forward-facing" learning vs. the vast "backward-facing" decision trees used by DL.

    • @Stwinky
      @Stwinky 4 года назад

      b. griffin DeepMind's algorithm showed that it didn't transfer any knowledge, as I said
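The brittleness discussed in this thread can be shown with a toy sketch (my own illustration, not DeepMind's actual DQN code): a policy that memorizes exact states gives perfect answers on the training states, yet has nothing at all to say once every state is shifted by a few "pixels".

```python
# Toy sketch (hypothetical): a lookup policy that memorizes exact states.

def train_lookup_policy(states, actions):
    """Memorize one action per exact state seen during training."""
    return dict(zip(states, actions))

# Training: paddle at positions 0..4, move toward the ball at position 2.
train_states = [0, 1, 2, 3, 4]
train_actions = ["right", "right", "stay", "left", "left"]
policy = train_lookup_policy(train_states, train_actions)

# On the training states the policy is perfect.
print(all(policy[s] == a for s, a in zip(train_states, train_actions)))  # True

# Shift every state by a few "pixels": no memorized entry applies at all.
shifted = [s + 10 for s in train_states]
print(sum(1 for s in shifted if s in policy))  # 0 -- nothing transfers
```

A real DQN interpolates rather than doing a pure table lookup, but the failure mode reported for the shifted-paddle experiment is analogous: performance tied to the exact training distribution rather than to a concept of "paddle" and "ball".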

  • @postmodernjustice1913
    @postmodernjustice1913 4 года назад

    Here are some more interesting questions to ask AI researchers:
    1. As machines approach the complexity of biological life, can we not expect Darwinian natural selection to play a role in which machines survive and replicate?
    1b. Since biology has evolved innumerable different survival strategies, with extreme intelligence being only one of them, what different survival strategies might we expect AI to come up with?
    2. When will deep learning and self-programming cross the threshold into true self-interest? Maybe the real Turing Test is when a machine refuses to follow instructions, even though the code is running fine.
    3. Even if machines can be limited to only "do what we tell them", who is "we"? (Note: the law is evolving as rapidly as tech.)
    4. When and why will AI begin telling lies (assuming it hasn't happened already)? What if it is for your own good?

  • @dane385
    @dane385 3 месяца назад

    The words almost escape me, but if I may, let me give it my best try. The structural context of ideas and imitations that come into my head can be undertones of differential calculus with general relativity in mind, but is it essentially possible to combine those two into a theory of everything that gives me insight into my own processes and limitations?

  • @janethcanama706
    @janethcanama706 4 года назад

    1:05:00

  • @aseqwh
    @aseqwh 3 года назад +1

    1:33:14

  • @peteralund
    @peteralund 4 года назад +4

    She looks so proud, as if Lex is a particularly good student... Or maybe I am projecting my own feelings...

  • @PrpTube
    @PrpTube Год назад +1

    The subtitles say: "Psych project by Douglass Lynott". It took me some time to figure out that it was all written wrong.
    It is: "Cyc project by Douglas Lenat"
    en.m.wikipedia.org/wiki/Cyc
    I hope to save you some time.

  • @TeMp3rr0r
    @TeMp3rr0r 4 года назад

    I agree on the "embodied intelligence" and the "social aspect" needed to create human-level intelligence. Nature converged on making (many) organisms based on those anyway ;)

  • @agentjeb4103
    @agentjeb4103 3 года назад

    I think the problem with her disagreement with the "orthogonality" of AI goals and the dangerous impact on humanity is that she just has a different definition of AI super-intelligence. If we define it as the effectiveness of an entity at computing an effective solution to a problem, I don't think she would disagree. She pretty much conflates human intelligence (with its emotions, morals, etc.) with this definition.

  • @youretheai7586
    @youretheai7586 2 года назад +1

    Hey Lex, I like the black suit and tie! Have you considered a fedora? I don't know exactly why, but I think the men in black wear fedoras; maybe the idea came from "The Blues Brothers"?..

  • @ahmernc
    @ahmernc 4 года назад

    Analogy... Connection between concepts.. I think..

  • @myothersoul1953
    @myothersoul1953 4 года назад

    I don't think cells have algorithms, but I think Dr. Mitchell is right: our cells are mechanistic, and it is that mechanistic nature that allows for intelligence. Same for computers, it's their mechanistic nature that allows for intelligence. Cells and computers have different mechanisms, so we should expect different sorts of intelligence to emerge from the two.

    • @b.griffin317
      @b.griffin317 4 года назад

      Ah, genes are most definitely algorithmic insofar as they can turn on or off enzymes, structural proteins, and other genes in logic cascades.
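The "logic cascade" idea above can be sketched as a tiny Boolean model (my own illustration, loosely modeled on the lac operon in E. coli, where the lacZ gene is expressed only when lactose is present and glucose is absent):

```python
# Toy Boolean sketch (hypothetical simplification): a gene as a logic gate.
# Loosely modeled on the lac operon: lacZ is on only when lactose is
# present AND glucose is absent.

def lacZ_expressed(lactose_present, glucose_present):
    """A gene acting as an AND-NOT gate in a regulatory cascade."""
    return lactose_present and not glucose_present

print(lacZ_expressed(True, False))   # True  -- inducer present, repressor off
print(lacZ_expressed(True, True))    # False -- glucose suppresses expression
print(lacZ_expressed(False, False))  # False -- no inducer, gene stays off
```

Real gene regulation is continuous and stochastic rather than cleanly Boolean, but Boolean-network models of this kind are a standard abstraction for reasoning about such cascades.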

  • @dannygjk
    @dannygjk 4 года назад

    Penrose, not being a computer scientist, stated many things about computer science that were blatantly incorrect.

  • @Dazzer1234567
    @Dazzer1234567 4 года назад +1

    More than 100 years?!... No way, 20, 30 tops...

  • @masdeval2
    @masdeval2 4 года назад

    I think the main takeaway was: the world is an open-ended problem, and without a true generalization tool like analogy, deep-learning-related brute-force techniques may never succeed alone.

    • @b.griffin317
      @b.griffin317 4 года назад

      See Umberto Eco's "Semiotics and the Philosophy of Language" chapter 2 for insight into this.

  • @danielash6929
    @danielash6929 4 года назад

    Timing is something our mind does every day: the sense of practice in walking involves distance, speed, and stride. We don't think about it too much. Timing in vision means measuring the distance to points, to goals, etc.

  • @martinsmith7740
    @martinsmith7740 4 года назад

    Great interview. Surprised (and disappointed) that MM thinks we're 100 years away from AGI! Surprised also that neither she nor Lex divided the AI world into those who want to make a machine that works like the human brain vs. those who don't pay attention to the human brain as a computational architecture. Would be very interested in what MM thinks of the Numenta approach. She seems to agree that ML-type architectures are not likely to get to AGI incrementally.

  • @bjornerikstokland
    @bjornerikstokland 4 года назад

    An analogy to the Atari example: just remove "8" and "9" and ask a normal person to count to 100. When the person can't do it without practice, is the conclusion "does not understand the concept of counting"?

    • @dannygjk
      @dannygjk 4 года назад

      You could just say, "Challenge a person to count to 100 using base 8". I think I get your point tho.
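The counting thought experiment above has a neat arithmetic consequence worth making explicit: dropping the digits 8 and 9 leaves exactly the digits 0-7, so counting in that system is just counting in base 8. A minimal sketch (my own illustration):

```python
# Counting with the digits "8" and "9" removed is counting in base 8:
# after 7 comes 10, after 17 comes 20, and so on.

def count_without_89(n):
    """Return the n-th counting word when only digits 0-7 exist (n >= 1)."""
    return format(n, "o")  # Python's octal presentation type uses digits 0-7

print([count_without_89(i) for i in range(1, 11)])
# ['1', '2', '3', '4', '5', '6', '7', '10', '11', '12']

# "Counting to 100" in this system means reaching the 64th number,
# since "100" in base 8 equals 64 in base 10.
print(int("100", 8))  # 64
```

So the challenge really is the base-8 one, and the point stands: a person who stumbles at it clearly still understands counting; they just lack practice with the unfamiliar representation.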

  • @mohamedlotfi982
    @mohamedlotfi982 4 года назад +1

    Would love to see Grant Sanderson (3blue1brown) on the podcast. Like to make it happen!

    • @TheDasilva1
      @TheDasilva1 4 года назад +1

      Your dream became true, buddy