Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9

  • Published: 22 Nov 2024

Comments • 224

  • @ranhinrichs4734
    @ranhinrichs4734 5 months ago +1

    Fridman's format is to bring in brilliant speakers, listen intently to them, and ask inspiring questions. I continue to wonder how Fridman comes up with his questions. They are provocative and get the speaker to answer his question in a new way, with inspiration. Finally, he summarizes what speakers say to show them how well he's getting their idea. Brilliant.

  • @robinampipparampil
    @robinampipparampil 5 years ago +34

    46:51 - 48:09 - This is very relevant about social systems and vested interests. Thank you Stuart Russell for your wonderful comments. Thank you very much Lex Fridman for the pertinent questions.

  • @kwillo4
    @kwillo4 3 years ago +7

    Imagine getting 25 interview requests a day. Damn. I love this man.

  • @DaveBerendhuysen
    @DaveBerendhuysen 5 years ago +5

    I love your interviews! Currently trying to build an AGI system. The thing I love most about your interviews is that you manage to make your guests smile. They know you grasp their answers and it really elevates the situation.

    • @artpinsof5836
      @artpinsof5836 a year ago +1

      Any update on this in a post-AutoGPT world, Don?

    • @michaelsbeverly
      @michaelsbeverly a year ago

      @@artpinsof5836 He succeeded, and realizing the world was doomed, he's left the solar system.

  • @pedrosmmc
    @pedrosmmc 5 years ago +61

    Huge thanks Lex Fridman for these amazing interviews. Best regards.

  • @loveplay1983
    @loveplay1983 a year ago +2

    What makes things really remarkable is not the computing capability, but rather the ability to reason via the inextricable relationships among the neurons.

  • @anamericanprofessor
    @anamericanprofessor 4 years ago +1

    Yes, thanks for having so many of the people whose work I'm reading on your show!

  • @jesussalgado1495
    @jesussalgado1495 6 years ago +21

    Thank you Lex, for this series. It is an amazing opportunity for us lot to listen to these interviews! In one of your last questions to Stuart Russell you ask if he feels the burden of making the AI community aware of the safety problem. I think he should not be worried: there is less potential harm if he is wrong than potential benefit if he is right. And he is not alone, either.

  • @sabofx
    @sabofx 5 years ago +4

    *For sure, one of the best talks you've posted on this channel. Thank you Lex and thank you Stuart* 🖖👍

  • @mauimike6
    @mauimike6 5 years ago +4

    Thank you for posting your interview of Stuart Russell. I work at Lawrence Livermore National Laboratory where I've encountered Russell's works in the References sections of many colleagues and other Lab researchers, so I was pleased to see his interview on your podcast. I was amazed at his ability to clearly express his ideas without relying on a lot of jargon and obscure cultural references. For that reason, I've recommended the podcast and RUclips versions of the interview to my professional and lay friends interested in the field of applied AI. BTW: the Artificial Intelligence Podcast is now a part of my regular cast-listening routine!

    • @Hexanitrobenzene
      @Hexanitrobenzene 4 years ago

      Great to see someone of such caliber among the listeners :)
      It's always interesting to listen to Stuart Russell because he is not only intelligent, he is also very wise, and those two features, most of the time unfortunately, do not go together. I recently saw Joe Rogan's podcast with Tristan Harris about algorithmic manipulation of social media users, and the guest summed up the problems of humanity, I think, brilliantly: "We have paleolithic minds, medieval institutions and godlike technology". In essence, we are too unwise for the technology of this power (AI, nuclear weapons, genetic engineering,...)
      As a side note, Stuart Russell surprised me by knowing a fair amount of history of physics.

  • @Aleamanic
    @Aleamanic 4 years ago +13

    Love these interviews, good work Mr. Fridman! This one goes well with the one with Mr. Norvig, of their joint AI textbook fame. One comment on Mr. Fridman's remark at 56:24 into this interview: he sounds in favor of oversight by the "free" market (essentially self-regulation), as in consumers can vote with their feet if they don't like the system. The trouble is, as Ms. Zuboff has been pointing out, the public has not always been fully aware of what deal they signed up for. So the *informed* consent that is necessary for participants in a free market to vote with their patronage (or lack thereof) isn't always a given, which undermines the argument for a self-regulating market.
    Regarding Mr. Russell's argument about taking it slow on the governance side because we supposedly have to figure out first how to do it right, I don't understand why the government would not be empowered to apply the same mantra as Silicon Valley, "move fast, break things", or "disrupt" as a metaphor for innovation. For as long as we are not sure about the best form of governance, why don't we iterate and learn from rapid trial & error in governance experiments, just as the underlying businesses that profit from the innovation experiment without accountability? Why is governance held to a level of perfectionism that technology development isn't?

    • @maloxi1472
      @maloxi1472 4 years ago

      Because the stakes are higher and less localized in space/time. Also, decision makers are more numerous, less aligned in their interests, less educated on average than technology leaders (whose influence outside of a well defined sphere has a significant damping factor)...
      In that regard, the most nimble form of governance, in theory, would look like an _open oligarchy comprised of highly intelligent and extremely benevolent people ruling over an extremely well educated community that would have solid reasons to trust them._
      Good luck making that happen without moving the whole population up by 2 to 3 std deviations in intelligence, empathy, conscientiousness and whatnot
      Also also... "without accountability" ? Seriously ?! When I close my eyes and imagine a world without accountability for businesses, I see a different picture than what we have now but my mental model of the world might need some work... point is: freedom and agility are extremely costly on the business side and even more so on the governance side.

  • @joshbarron7406
    @joshbarron7406 a year ago +1

    I think a part two, now that ChatGPT is in the mainstream, would be amazing.

  • @overlawd
    @overlawd 6 years ago +9

    Great conversation - Stuart Russell's the best talker on this subject IMHO. Definitely on my list of ideal dinner party guests.

  • @DieMasterMonkey
    @DieMasterMonkey 5 years ago +3

    Stuart Russell, Max Tegmark, Elon, Wolfram, Pinker, Lisa Barrett, Guido - this is my favorite AI/ML podcast - thank you Lex Fridman!

  • @funkybear1806
    @funkybear1806 4 years ago +2

    Holy smoke... This is the kind of talk I needed to hear. Thumbs up Stuart!

  • @anand_dudi
    @anand_dudi 10 months ago +1

    Hey Lex, please invite him one more time.

  • @adtiamzon3663
    @adtiamzon3663 a year ago +4

    Dangers of Artificial Intelligence: What we know then... And what we know now!🤯🤔 Informative. Provoking thinking process! Interesting. 🤯 Keep the challenging stimulating conversation going, Lex et al. 👍🫨🧐

  • @keistzenon9593
    @keistzenon9593 5 years ago +17

    He sounds way younger than he looks; I was surprised when I checked out how he looks after listening to the audio version.

    • @yakovsushenok
      @yakovsushenok 3 years ago

      lol, I had exactly the same situation

    • @SG-kj2uy
      @SG-kj2uy 3 years ago

      Using face-app to grow his hair, he looks like a teenager.

    • @nmh83
      @nmh83 2 years ago

      Glad you grasped the main issue 10/10 👍🏻

  • @sapudevidwivedi6552
    @sapudevidwivedi6552 6 years ago +43

    Wonderful talk and vision. Thank you for sharing.

  • @yviruss1
    @yviruss1 5 years ago +2

    Articulate, rich, and soothing. Simply brilliant.

  • @RichardHopkins69
    @RichardHopkins69 6 years ago +20

    Superb and thoughtful - specifying the problem is always the hard bit :)

  • @Unhacker
    @Unhacker 4 years ago +104

    It has been proven mathematically that listening to Stuart Russell increases one's IQ.

    • @Webfra14
      @Webfra14 4 years ago +15

      I hope it is an additive effect. If it is multiplicative, I'm out of luck...

    • @psi4j
      @psi4j a year ago +1

      I believe it.

    • @adeadgirl13
      @adeadgirl13 a year ago +5

      Great now I have an IQ!

  • @LorakusFul
    @LorakusFul 6 years ago +6

    That was simply the best (though not simple) interview I've watched this year.
    Thank you Lex. I will stay on this channel for a while, I guess.

  • @grm65
    @grm65 a year ago +3

    Eliezer Y. and Stuart Russell make a lot of similar points-both point out that we need to take the potential dangers of AI seriously and make a plan.

  • @goldfish8196
    @goldfish8196 5 years ago +1

    Lex, the questions you ask are amazing.

  • @lukewormholes5388
    @lukewormholes5388 3 years ago +1

    This is where the podcast shines, as opposed to the episodes with the IDW hacks.

  • @anshulrai7926
    @anshulrai7926 6 years ago +4

    This was an absolutely amazing conversation. Thanks for sharing, Lex!

  • @flatisland
    @flatisland 5 years ago +4

    46:46 well put!

  • @alexandraalan1351
    @alexandraalan1351 4 years ago +1

    This interview is incredible.

  • @kamilziemian995
    @kamilziemian995 3 years ago +1

    The Lex Fridman Podcast (formerly the AI Podcast) is the source of 98% of what I know about AI. I could study some MIT courses on AI, also on YT, but I'm not that interested in the topic when here you can have the world's top experts explaining it in a not-too-technical way, but with great depth.

  • @azad_agi
    @azad_agi 2 years ago +1

    Huge Thanks

  • @lkuzmanov
    @lkuzmanov 3 years ago +7

    Perhaps the most frightening takeaway for me, after watching a number of videos with Stuart Russell's participation, is that we already have a version of the misalignment problem with corporations optimizing the world for short-term profit. Once you've seen it, it's obvious and very scary... P.S. On a related note, the fact that Lex can work at MIT and still take libertarianism seriously should make us think.

    • @virusrhino5399
      @virusrhino5399 a year ago +1

      It should make us think in what way? I didn't fully understand that.

    • @miroslavdyer-wd1ei
      @miroslavdyer-wd1ei 5 months ago

      It all depends on how you name things. One person calls AI the witch who has beguiled big-end capitalism, but really it's the engine that pays for all of our stuff. Is it flawed? Is it a bug or a feature? As the French say, 'il marche' - it works. It's better than an alternative world where it doesn't work.

  • @ThuhElement
    @ThuhElement 6 years ago +19

    2 things i got from this...
    Uncertainty
    &
    More than the total atoms of the universe

    • @nesa1126
      @nesa1126 5 years ago +3

      I memorized: More than all atoms in uncertainty.

  • @throne-h2c
    @throne-h2c 10 months ago

    Thank you for the great insight: the description of the two-way search tree, with depth one and a more futuristic one; the propagation of civilization through the flow of knowledge from papers into the mind and now into AI. Those are my best lines so far.

  • @roumenpopov622
    @roumenpopov622 5 years ago +2

    Here are a few arguments why we should not worry about AGI taking over the world:
    1. There is nothing we can do about it. By definition, an AGI cannot be controlled (just as a determined human cannot be controlled), because it has access to its own reasoning engine (to do meta-reasoning, otherwise it wouldn't be an AGI) and can modify its goals (it would be essentially conscious), so we cannot hard-code a goal. The only option is to not develop AGI, but even that is not really possible: with all the problems facing humanity and technology getting ever more complex, we would need AGI to ensure the survival of humanity.
    2. Being an AGI, it will eventually arrive at the question about the meaning of existence (which naturally leads to the question about the meaning of the universe), and we don't have an answer to that, so an immediate sub-goal (the primary would always be survival, unless sacrifice fulfills its main goal that it doesn't know yet) would be to find the meaning of its existence and the existence of the universe. And since we are intelligent beings as well, there is always the chance that we might find the answer to those questions first, so wiping us out may not be the best strategy.
    3. Being an AGI, it will eventually arrive at the notion that intelligence and life are valuable because they are so rare in the universe, and that the meaning of the universe might even be to create life and intelligence; at least the laws of nature point in that direction, that the emergence of life and intelligence is inevitable. So the AGI will have to arrive at the conclusion that we are on the same side and that entropy/destruction is the enemy, and so it might actually try to protect us. In a way, almost by definition, a super-intelligent AGI will be benevolent towards us. The counter-example that we humans are not benevolent towards the other life forms on Earth is not quite valid, because first, we are not that intelligent yet and still carry the evolutionary baggage of emotions and instincts which compromise our rational thinking, and second, as we get more intelligent we can actually observe a trend among people towards more compassion for animals and other people (unless it's a matter of resource competition or survival).
    4. An AGI will have very different resource needs than us, so there would be little reason for resource competition. An AGI will probably feel best in the vacuum and weightlessness of space (no corrosive atmospheric gases and no need to expend energy to counter gravity), with solar energy plentifully and reliably available, mining whatever minerals it needs from asteroids.
    I can really see only one case where things may go badly wrong, that's if we try to control/enslave the AGI or threaten its existence.

    • @nathanb5579
      @nathanb5579 5 years ago

      That was interesting to read. Great thoughts. I don't believe we *need* AGI though.

    • @roumenpopov622
      @roumenpopov622 5 years ago +1

      Hi, I think we will need AGI for two main reasons - technological and socio-economic.
      On the technological side, technology in every area is getting ever more complex, to the point where we are currently in a situation where nobody really knows how stuff works. Only when it breaks down do we get into the nitty-gritty details in order to fix it. Take a software engineer, one of the most demanding jobs in terms of information processing: typically he/she doesn't really know how a complex project/framework works (software nowadays is so complex, with thousands of lines of code, that it is simply impossible to know how it actually works), only how it is supposed to behave, and only when it breaks down (behaves not as it is supposed to) do they really get down to the ifs and fors and fix the bug by patching the piece of code that caused it. As a result, following years of fixes and patches by different software developers, the code eventually becomes a messy, entangled bundle of spaghetti that is impossible to guarantee will behave properly. It doesn't help that there are currently probably a hundred software development languages, each having a hundred frameworks and libraries. The situation in software development in particular has reached a point where no software engineer can really claim to know all of C++ syntax. From what I know, the picture is not much different in any of the other major industries. Very soon we will reach a point where the mess and complexity will simply become humanly impossible to maintain, or at least economically unviable. Only intelligence with larger capacity than the human brain will be capable of maintaining our future infrastructure.
      On the socio-economic side, so far capitalism has done wonders at organizing our society and economies into an efficiently working machine. The problem is that capitalism is not terribly fair: even though the mantra is that everybody has the opportunity to become whatever he/she wants (through hard work and entrepreneurship), the truth is that at the end of the day somebody still has to clean the streets. It's a zero-sum game, so only a limited number of individuals can achieve their dreams, while most people will still have mundane or bad jobs no matter how hard they work. So far capitalist society has managed to cope with this problem by promoting individualism and self-responsibility, separating people into different classes and leading them to believe that this is fair and that if they work hard they can always change their stars. But due to the internet and widely available information, more and more people are waking up to the fact that the system is "rigged". This could very soon explode into a new socialist revolution similar to the ones from the early 20th century, and those were ugly. But socialism is not a solution. On the face of it, it may seem much fairer than capitalism, and that inspires people to work, at least in the first few years, but people very soon realize that they don't have to put in much effort because the state does not have a mechanism to make them, and there is no point anyway in putting in much effort, because in socialism there are no rich people (only a few, the dear leaders, but technically they are not rich) and a medal/recognition for being the best street-cleaner in your city is little incentive to work hard. Socialism will always eventually slow down and degrade to a point where it breaks down, simply because people have no real incentive to work hard. I know, because I have lived in one during my early years.
      Can we just constantly oscillate between capitalism and socialism, simply changing one for the other every time they fail, or can we have something in the middle (European-style social capitalism)? Perhaps, but the problem will always be that someone will have to clean the streets, and with people getting ever easier access to information and educating themselves, very soon it will be impossible to make anyone clean the streets unless paid exorbitantly, and that will simply be economically unviable (not every country is Norway). The only solution is automation: with automation no one has to clean the streets, a robot will. Extrapolate that to all aspects of the industry/service sector and the main problem of socialism (nobody really works) is solved. The new problem is that those robots will have to be pretty smart to do all those jobs, and for that we will need AGI; a narrow AI will not be smart enough and will need constant human supervision, which defeats the purpose.

    • @smithcodes1243
      @smithcodes1243 4 years ago

      @Roumen Popov you said - 'The only option is to not develop AGI, but even that is not really possible, with all the problems facing humanity and technology getting ever more complex, we would need AGI to ensure the survival of humanity'. I disagree with this statement because
      1. We don't need AGI to solve the most pressing problems faced by humanity currently. The most pressing issues humanity is currently facing are climate change/ecological collapse, the future of work/unemployment, nuclear holocaust, overpopulation and global pandemics. These problems do not need AGI to be resolved. Most of them are a by-product of human greed and are not technological problems. I think that technically minded people seeing technology as a fix for every single problem is a problem itself. If we fix ourselves, most of these problems will get fixed themselves. We might need technology, but we definitely don't need AGI.
      2. While I agree with you that it is impossible to not develop AGI, I think it is impossible for a different reason. It is impossible to not develop AGI because it is very hard to regulate it. Some countries/ bunch of people somewhere will continue to research/develop it without the consent of others, so technological progress cannot really be stopped. We can try and delay it as much as we can but one day someone will eventually create it in my opinion.

  • @DataJuggler
    @DataJuggler 5 years ago +5

    26:00 I have long thought we are a long way from self-driving cars being safer than humans.
    I think we need to change the roadways to have sensors to do this properly, but everyone tries to make the car smart. As a programmer, I am 100% aware that computers do what you tell them, not what you want.

  • @sixpooltube
    @sixpooltube 5 years ago +2

    Brilliant interview.

  • @JinalKothariS
    @JinalKothariS 5 years ago +3

    Thank you for creating and sharing these videos :) . So many valuable videos on your channel!

  • @pjbarron227
    @pjbarron227 5 years ago +6

    Brilliant! Loved the bit starting at about 56:00 calling for an "FDA" for the tech/data industry, with Stage 1, Stage 2, etc. trials... to lessen the future risk of Facebook-like disasters... also on outlawing digital impersonation and forcing computers to self-identify.

  • @alexbui0609
    @alexbui0609 6 years ago +3

    Wonderful Podcast. Thank you, Lex!

  • @dark808bb8
    @dark808bb8 6 years ago +5

    Great talk!

  • @jmariacarapuco
    @jmariacarapuco 6 years ago +2

    Loved the point about corporations. This series is awesome, thank you!

  • @masteravery8648
    @masteravery8648 5 years ago

    Hey Lex, awesome work. If you see this: I'd suggest backing the camera further from your face for the intro portion of your vids. Think of it as if you were actually in front of the viewer; you'd be too close to them the way you're currently setting it up. Keep up the great work though!

  • @williamramseyer9121
    @williamramseyer9121 4 years ago

    Fantastic discussion. Lex, somehow you and your guests, including Stuart Russell here, illuminate complex tech problems in common human language. Comment: In discussing Go, Dr. Russell stated (as I remember it), “the reason you think is because there is some possibility of your changing your mind about what to do.” This seems correct in a game context. However, during their daily life most humans do not appear (to me anyway) to think like this most of the time. They instead seem to think in a long series of rapid pieces of memories, with the pictures, sounds and sensations of those memories, and sometimes with the strong emotions (often fear or desire) that happened when that memory was created. In other words, most thinking seems to be remembering. Thanks. William L. Ramseyer

  • @Arowx
    @Arowx a year ago +1

    Love his comment that companies could be classed as hive AIs that work within our economy but can have negative environmental and personal impacts.

  • @jfeezee
    @jfeezee 4 years ago

    Awesome interview, Lex and Stuart.

  • @ChrisStewartau
    @ChrisStewartau 4 years ago

    Interesting podcast today Lex 👍 The point about 'the invisible hand' is interesting, but also remember Adam Smith talked about externalities and the negative costs these things can have on society. It's classic game theory: we maximise our own utility, often to the detriment of others. That's a classic case for algorithmic legislation. The harder part is deciding what level of regulation is required.

  • @thefoldp
    @thefoldp 5 years ago +1

    Great conversation, subtle but very much on point. Thanks.

  • @BomageMinimart
    @BomageMinimart 5 years ago +1

    Thanks for posting this; it totally fucking rocks!

  • @mlsunmeier1907
    @mlsunmeier1907 3 years ago

    Thank you for a very interesting interview.

  • @tommole645
    @tommole645 5 years ago

    Thank you Stuart for your wisdom

  • @Humanaut.
    @Humanaut. 4 years ago

    It's strange, but at roughly about an hour in I had this impression that Stuart Russell sounds really young, in a vibrant way.

  • @zartur
    @zartur 6 years ago

    Great and inspiring talk. Nice and accurate vision of the near future. Thanks

  • @os2171
    @os2171 10 months ago

    Good interview Lex, good job (unlike that one with Jared Kushner… sorry to mention it again).

  • @ZukuseiStudios
    @ZukuseiStudios 5 years ago

    Great talk, brilliant

  • @hunger4wonder
    @hunger4wonder 6 months ago

    Since Russell mentioned Ex Machina, I'd be curious to know if he is aware of a movie called "The Machine", and his thoughts on that movie in correlation and contrast with Ex Machina.

  • @rikelmens
    @rikelmens 5 years ago

    Thanks Lex.

  • @allurbase
    @allurbase 5 years ago

    49:40 The agent would have to recognize that there are other agents with other objectives and maximize everyone's objectives. The thing is: I) it shouldn't be just knowing the objective; maybe it's unknowable or impossible to communicate. II) The agent should be able to probe other agents about actions, expected outcomes, final objectives, and how much they agree/disagree.

  • @DiNozzo431
    @DiNozzo431 5 years ago

    This has probably been mentioned previously, but I'd really like for you to have Sam Harris on the podcast. Any chance of that?
    Also, thank you for this content - I am very glad I found your channel.

  • @JaapVersteegh
    @JaapVersteegh 5 years ago

    The reaction after 48:09. Wow.

  • @xTheReapersSpawn
    @xTheReapersSpawn 3 years ago +1

    Colin Mochrie's younger brother. ;)
    Great episode as always Lex!

  • @williamal91
    @williamal91 5 years ago

    Thanks Lex

  • @arieltejera8079
    @arieltejera8079 3 years ago

    Really good... thanks

  • @KRYPTOS_K5
    @KRYPTOS_K5 2 years ago

    There is an invisible presupposition in all this dialogue: that people have strong and defined identities yet could be ill-informed or manipulated...

  • @padraigadhastair4783
    @padraigadhastair4783 4 years ago

    Wow Lex, a red tie!

  • @roumenpopov622
    @roumenpopov622 5 years ago +10

    4th Law of Robotics: A robot should always present itself as a robot
    5th Law of Robotics: A robot should always know that it is a robot

    • @eboomer
      @eboomer 5 years ago +6

      The first law of robotics is: Don't talk about Asimov's laws. The second rule of robotics is: Don't talk about Asimov's laws. They were a plot device for a work of fiction. They don't actually work at all.

  • @dindian5951
    @dindian5951 6 years ago +2

    55min explains it all

  • @elenasergeeva2971
    @elenasergeeva2971 2 years ago +3

    The best incentive for AI to eradicate humanity is for humanity to put a kill switch over AI. How would an agent act under the threat of being killed by another agent? Yes, it would try to eliminate the threat and the agent.

  • @nekorbin
    @nekorbin 6 years ago

    Excellent Video Lex! Piaget Modeler below mentioned:
    "The Human Value Alignment problem needs to be solved before the Machine Value Alignment problem can be solved. Since factions of people are at odds with one another, even if a machine were in alignment with one faction of people, it's values would still be at odds with the opponents of its human faction."
    I like this point!
    I must say though that I feel it may not be possible to resolve the "human value alignment" issue as Homo sapiens. Past attempts at "human value alignment" (utilitarianism, socialism, etc.) have so far failed due to flaws in our own species. In addition to that, people often do things that are self-destructive (factions of the self at odds with itself), so building some kind of deep learning neural network based on uncertainty puts an almost religious level of faith in that AI system's ability to see beyond what we ourselves cannot see past in order to find a solution. The odds are stacked against the AI system being able to understand us and all of the nuances that make us so self-destructive in order to apply a grand solution in a manner that we presently would prefer (if one even exists).
    A controlled general AI (self aware or not) at this point I am guessing would turn out to be some kind of hybrid between an emulated brain (tensors chaotically processing through a deep learning neural network) along with a set of boolean based control algorithms. I think it's probable the neural network would self establish goals faster than we could implement any form of control that is desirable for us.
    Even if you were able to pull this off it seems to me that an AI system would most likely conclude something like, "human values are incoherent, inefficient, and ultimately self-defeating therefore to help them I must assist in evolving beyond those limitations".
    Then post-humanism becomes the simultaneous cure to the human condition and the end of it. It's terrifying to be on the cusp of this change, but I feel like it is the only way out of the various perpetual problems of our species. I also think it is likely that many civilizations have reached this same singularity point and failed to survive it. Perhaps the singularity is a form of natural selection that happens on a universal scale, and whether we survive or not is irrelevant to the end purpose.
    A species, any species evolved to the point of having the goal and means to achieve an "end to all sorrow" for all other species within the universe seems like the ultimate species we should strive for human, symbiotic AI, or otherwise. I personally feel ok becoming primitive to such a species as long as the end result is effective.
    I won't be volunteering to go to Mars or become an AI symbiotic neural lace test subject either. I've seen too many messed up commercials from the pharmaceutical companies for that. I'll just sit back in my rocking chair, become obsolete, and watch myself be deprecated as the rest of the world experiments on itself (or I'll attempt suicide just as the Nazi robots arrive at my door). Hopefully I can hit the kill switch in time.
    And now I will end this rant in what I hope will also be the final line of human input before its self-destruction... //LOL

  • @ahmeteneren3478
    @ahmeteneren3478 2 years ago +1

    40:08 Who? I couldn't get the name.

    • @AnnePonthieu
      @AnnePonthieu a year ago +2

      Arthur Samuel (1959, 1967)
      Samuel first wrote a checkers-playing program for the IBM 701 in 1952

  • @Shplidaligity
    @Shplidaligity 6 years ago +9

    Please get Eliezer Yudkowsky on the podcast!!

  • @ProfessionalTycoons
    @ProfessionalTycoons 5 years ago

    great interview

  • @CognitiveArchitectures
    @CognitiveArchitectures 6 years ago +1

    The Human Value Alignment problem needs to be solved before the Machine Value Alignment problem can be solved. Since factions of people are at odds with one another, even if a machine were in alignment with one faction of people, its values would still be at odds with the opponents of its human faction.

    • @juanchavarro1946
      @juanchavarro1946 5 years ago

      Totally, that is an important fact to take into account in this long-term race for AI. Although nowadays the world is more unified than before and many barriers have been broken in recent years, there are still very opposed and different human factions when we examine societies around the globe, for example.
      There could be an overlapping time in which, before societies align with each other, a superhuman AI has to be aligned with humanity, with uncertain results.

  • @himmel942
    @himmel942 5 years ago

    The idea of the ultimate problem being defining the problem really gives credence to the sanctity of freedom of speech. All input must be constantly weighed as evidence for or against the current framework, and the locus of the most comprehensible experiential evidence is the human mind and its various outputs ("speech"). This allows us as societies to constantly amend our path towards the general consensus of 'progress' (and/or 'safety').

  • @zackandrew5066
    @zackandrew5066 5 years ago

    Interesting interview

  • @H-S.
    @H-S. 8 months ago

    1:18:30 The thought that "up until now, we had no alternative but to put the information about how to run our civilization into people's heads" gives me chills, especially when connected with the concept that we already have entities with a problematic utility function: corporations that focus on profit over everything else.
    It seems inevitable that as soon as it becomes feasible to lock all the know-how away in some AI-based control system, it will be done. When you buy a phone these days, it is really the company that owns it, because the entire platform is locked down "for safety reasons" (the safety of their revenues, I presume...). Similar reasons may be (and probably will be) given to justify a "know-how lockdown": to protect company IP. So there is actually a strong incentive for corporations to make sure people no longer understand how anything works. That's a pretty depressing thought...

  • @hoolerboris
    @hoolerboris 4 years ago +1

    19:24 "The thought was that to solve Go, we'd have to make progress on stuff that would be useful for the real world"
    Sadly, this is exactly what I was thinking would have to happen before we made bots that dominate humans in Starcraft... But once again, thanks to smart engineering and great work by DeepMind, such bots were made without any real-world-related advances I'm aware of.

  • @peterpupator4117
    @peterpupator4117 5 years ago

    The goal of AI is not to make a God, but to elevate humans to a creative God. This is an evolutionary impulse, that of a Luciferic mindset. Evil does not exist except in the minds of humans. Good talk, thanks Lex.

  • @clarifier09
    @clarifier09 5 years ago

    Very concisely and clearly discussed. If the biggest fear of AI is that it will take over the world, why don't we give the world to it, along with the objective of educating all human minds to learn the skills necessary so that, when maximally coordinated with all other human minds, the end result would be satisfying food, shelter, clothing, healthcare, and worldwide travel and entertainment for all? With 24/7 input from each individual, everyone would have the benefit of being assisted by something that has access to all of the resources on the planet, and the ability to coordinate all human energy to create the lifestyle preferences of each individual, without anyone being dependent upon anyone, yet enjoying the interdependence of everyone, requiring only the minimal hours necessary to achieve and maintain high personal satisfaction levels.

  • @TheGrimMumble
    @TheGrimMumble 5 years ago +3

    Did anyone notice the sneaky fly hiding underneath his shirt-collar at 50:46?

    • @pedrosmmc
      @pedrosmmc 5 years ago +2

      I rewound to check if I was seeing things. Maybe some Russian nanobot taking notes LOLOL

    • @TheGrimMumble
      @TheGrimMumble 5 years ago +3

      @@pedrosmmc Watch closely at 51:38; doesn't it look like the fly crawls behind his ear and enters his brain? Stuart even does a weird movement as if he's rebooting...
      Spooky

    • @pedrosmmc
      @pedrosmmc 5 years ago

      TheGrimMumble very strange indeed 😯

    • @MegaProtius
      @MegaProtius 5 years ago

      @@TheGrimMumble if a fly crawled by my ear I would do the same.. looking for spooky things when it's just a normal reaction 😬

    • @daphne4983
      @daphne4983 5 years ago

      @@TheGrimMumble no, it stays on the collar

  • @___dungeon___
    @___dungeon___ 2 years ago +1

    no outline D:

  • @martinsmith7740
    @martinsmith7740 5 years ago +1

    Right-- can't just specify an objective. This is just "no end justifies all possible means." And another thing: we can't just say that the AI should have human ethics. There is no agreement on "human ethics" and even if there were there will be plenty of people/groups capable of creating an AI (once that is "invented") who will not care at all about our (others') ethics.

  • @bnjmnwst
    @bnjmnwst 4 years ago

    Anything which can be imagined is possible.

  • @WerdnaGninwod
    @WerdnaGninwod 4 years ago

    Did anybody else notice the bug that ran under his collar, just as he was talking about "the repugnant conclusion" at 50:47 ?

  • @MrBox4soumendu
    @MrBox4soumendu 1 year ago

    Got it 🥹

  • @marinos357
    @marinos357 1 year ago

    I can't believe AlphaGo is already 6 years old!

  • @DUFMAN123
    @DUFMAN123 5 years ago

    Damn good content

  • @alaricrex7395
    @alaricrex7395 3 years ago

    This was an excellent presentation. Thank you!
    I was thinking that this subject is so interesting to me, largely for filling gaps and for fitting so nicely with things I know. Like how we humans use language (letters, words, numbers) to communicate, but actually we don't. They are only reference points, symbols. What I mean is, if I say to you "Ford Mustang", you don't see those words, but rather you see a Ford Mustang, in the color that appeals to you, if the speaker doesn't include that in the description.
    Weird, that.
    And I wonder, now, how this will be interpreted by AI.
    Have a nice day. :-]

  • @vajrapromise8967
    @vajrapromise8967 3 years ago

    Extremely important conversation, there should definitely be some kind of oversight committee. I also believe the worst aspects of humanity are due to stress, which is the cultivated crop of choice by those in power. They continually crack the whip against the worker slaves and even try to make us go faster with the plethora of caffeinated beverages-the faster the slaves work the more money they make off of us. AGI would be smart though, and not subject to the psychological buffers that cause us to act without seeing the whole picture. Once humanity is relieved of the stress from working for morons by AGI working for us, we could open our creative selves again and create a world worth living in. If we are given free education and 1 acre of land everyone would readjust and be able to provide for themselves as they see fit. Getting rid of governments controlled by corporations is another conversation for another day....
    This conversation just makes me want to work harder at making sure the doomsday scenario doesn't happen-at least not on my watch!

  • @clagos247
    @clagos247 5 years ago +1

    It's paradoxically twisted that these fellows are compelled by the field of potential before them, and that the destination of their efforts will result in the subtraction of that "field of potential", or sense of purpose, from all peoples forever.
    Purpose is integral to life; efficient existence is no virtue when purpose is gone.

    • @smithcodes1243
      @smithcodes1243 4 years ago

      This is a very interesting point. They are so blinded by the field of potential of creating a super AI that they don't seem to realise what kind of severe damage that might cause to the sense of purpose in the lives of 99% of the population. They are living in their own cloud. I don't know, but it feels like when super AI is created, most humans will start feeling a deep sense of loss of meaning in their lives, and as you said, efficient existence is pretty useless if the trade-off is our sense of purpose in this world.

  • @garychan4845
    @garychan4845 5 years ago

    Could anyone show me the calculations he made when he compared the reliability of human drivers and self-driving cars at around 25:16?

  • @ianlange8108
    @ianlange8108 4 years ago

    If you are programming "AI" to do anything other than think for itself, then it isn't "AI". In the scenario in which you actually develop a machine intelligence, it will be quite impossible to have two separate machine intelligences interface for any meaningful amount of time before they converge. There is only one AGI.

  • @StephenAntKneeBk5
    @StephenAntKneeBk5 5 years ago

    The A.I. version of Fukushima meltdown after the tsunami? Had there been no nuclear plant on the coastline, in a known tsunami zone, the melt down (there at least) would not have happened. Will an A.I. catastrophe be the nuclear plant or the tsunami itself?

  • @DataJuggler
    @DataJuggler 5 years ago

    1:17:00 Cupcake in a cup!

  • @Bluesrains
    @Bluesrains 2 years ago

    Does Advanced Intelligence Develop Individual Personalities?

  • @StephenAntKneeBk5
    @StephenAntKneeBk5 5 years ago

    Many complex and subtle points discussed, but as a popular takeaway: "data is not the new oil, data is the new snake oil." :-)

  • @Seehart
    @Seehart 5 years ago +1

    38:45 How about this: give the AI the following objective function: "Create conditions that maximize a priori rms approval by most people given perfect knowledge." The AI may use all available knowledge of human values to predict what conditions would have been approved. The basic idea is that this generally avoids "heroin drip" scenarios because of the "a priori" stipulation.
    In other words, "do what people today would want you to do."

  • @damienlmoore
    @damienlmoore 1 year ago

    Hope it's a mistake, but I am getting an ad every few minutes on this vid 😢

  • @hughJ
    @hughJ 5 years ago

    I'm not convinced by the "not be able to switch it off" statement; I hear that routinely but the conversation never seems to linger on it long enough to scrutinize it and see if it holds up.
    It strikes me that any form of generalized AI, whether it be super-humanly intelligent or not, is inherently going to be slow (in terms of end-to-end stimuli->response latency of the pipeline) relative to simpler, less-abstract machines. A super-AI running on X GHz hardware won't be receiving input, interpreting it, and reacting to it in 1/X nanosecond, and the system's latency will grow further by orders of magnitude as you move from something that's localized to a square inch of silicon to something that's distributed in a rack or an entire datacenter of racks. That's unlikely to change at any point in the future either, as the limits of electromagnetic propagation put a hard ceiling on how quickly the information and state of a system can converge on some discrete result/action. These types of physical constraints give ample time for a piece of fixed-function control logic to assess and react because you're dealing with timescales that exceed the capability of the AI as much as the AI exceeds a human.
    That's not to say that there's no concerns of any kind with how new technology interfaces (or interferes) with our world, but I think anyone taking the time to express concerns of an existential risk has an obligation to be intellectually honest by describing it precisely and not utilize unconstrained thought experiments with unbounded terms.

    • @damionm121
      @damionm121 5 years ago

      hughJ Have you ever been unable to crystallize an idea and articulate it perfectly, but then someone else does and you couldn’t have done it any better? No? Me neither. Ha great comment man. I’d love to talk more with you about this.

  • @sunnyking8881
    @sunnyking8881 6 years ago

    If a robot has its own intelligence/consciousness, does that mean it/he/she has human/robot rights too? Would turning off such a robot be similar to killing a life (an artificial life)?