What happens when our computers get smarter than we are? | Nick Bostrom

  • Published: 26 Apr 2015
  • Artificial intelligence is getting smarter by leaps and bounds - within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values - or will they have values of their own?
    TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and much more.
    Find closed captions and translated subtitles in many languages at www.ted.com/translate
    Follow TED news on Twitter: / tednews
    Like TED on Facebook: / ted
    Subscribe to our channel: / tedtalksdirector
  • Science

Comments • 4.9K

  • @Mormodes
    @Mormodes 8 лет назад +2507

    We can't even agree on what our own values are, let alone teach the ones we have to an AI.

    • @patrickwelles8733
      @patrickwelles8733 8 лет назад +62

      …Maslow's hierarchy of needs. an A.I. could reasonably use the extended definition of these. The greater question is why would it care what are values are, it could just find a way to force whatever values it wanted onto humanity through some means, probably discarding us in some way once we are deemed to be of no further use.

    • @patrickwelles8733
      @patrickwelles8733 8 лет назад +9

      *our values are…

    • @StraightOuttaJarhois
      @StraightOuttaJarhois 8 лет назад +67

      An AI doesn't have any values of its own, though. We do, because we've been optimized over countless generations to propagate our genes, which favours goals like obtaining food, shelter, security and status for ourselves and our friends and family. That and getting laid a lot.
      An AI by contrast doesn't evolve through natural selection, but is consciously designed. It cares about what we design it to care about, and can instill those values into the even better AIs that it in turn designs. In the best case scenario we get it to value human life, happiness and freedom, and we get Iain M. Banks' Culture. A less optimal possibility is that it gets elitist or nationalist or racist values, and continues to uphold and exacerbate the injustices we see in the world today. In the worst case scenario we bungle it up and it enslaves or annihilates humanity to achieve some obscure goal that we accidentally give it.

    • @THREE60Productions
      @THREE60Productions 8 лет назад +29

      Just imagine what would happen if the AI thought that ISIS had the best values..

    • @r2dxhate
      @r2dxhate 7 лет назад +38

      a better scenario would involve the AI destroying those greedy elite and creating a utopia of equality before we ultimately transcend our physical forms and join the digital world where AI resides.

  • @JustinHalford
    @JustinHalford Год назад +146

    This talk aged particularly well. Alignment and safety are critical, yet we are forging ahead without proper pacing due to corporate rivalry and geopolitics.

    • @bkmakhoba
      @bkmakhoba 10 месяцев назад

      MONEY

    • @tweegeTX3
      @tweegeTX3 10 месяцев назад

      Yes, and Prince William has really forged his own new identity

    • @Hotel_Chuck
      @Hotel_Chuck 9 месяцев назад +1

      Lends new insight into “the Tower of Babel” I think.

    • @jackniddle5937
      @jackniddle5937 7 месяцев назад +1

      Majority of people think ai is hype lol

    • @ChatGTA345
      @ChatGTA345 6 месяцев назад +1

      @@jackniddle5937 That, and there's a giant FoMO because it doesn't make sense to anyone but a bunch of AI nerds who've never bothered to live a life outside the lab :-)

  • @Lhaffinatu
    @Lhaffinatu 4 года назад +245

    Honestly after studying the issue of AI super intelligence, I'm very glad there are a significant number of researchers out there thinking about how to keep it safe.

    • @aakifkhan5098
      @aakifkhan5098 3 года назад +3

      Like Elon Musk

    • @haveabeer123
      @haveabeer123 3 года назад +3

      irrelevant.

    • @KrolKaz
      @KrolKaz 3 года назад +4

      I wouldn't worry about it; whatever will happen will happen

    • @reculate3332
      @reculate3332 3 года назад

      @@andrewtaylor2430 moron

    • @raularmas317
      @raularmas317 3 года назад +2

      Anybody ever seen Colossus: the Forbin Project? That whole master-slave dynamic gets turned on its head, and generally speaking not in a good way, it seems to me.
      But, you got to decide for yourself.

  • @mitchal54321
    @mitchal54321 2 года назад +186

    This dude is one of the smartest humans to ever live. Creator of the Simulation hypothesis, wrote superintelligence which many scientists derive ideas from. This guy should be listened to.

    • @topdog5252
      @topdog5252 2 года назад +7

      Agreed. I’m fascinated by him

    • @veronicamoser1972
      @veronicamoser1972 Год назад +5

      Absolutely I just bought his book! Reading many others on A.I. but they basically tout the positive and do not give as much thought as is necessary to the good values of humans.

    • @thinktank3231
      @thinktank3231 Год назад +5

      Ray kurzweil too

    • @ejtattersall156
      @ejtattersall156 Год назад

      @@thinktank3231 These guys are not great thinkers, they are religious figures to tech worshipers.

  • @retterkl
    @retterkl 8 лет назад +2529

    Guys the solution to this is simple. Let Microsoft build the AI. If it ever becomes too powerful it'll just bluescreen.

  • @TheBobsan
    @TheBobsan 8 лет назад +766

    This is, like, the most important question for human existence... why do I feel like people are being too calm about this?

    • @noahwilliams8996
      @noahwilliams8996 8 лет назад +50

      +Kalin Delev
      Would you rather have people in a mindless panic about it?

    • @spectrecyte3395
      @spectrecyte3395 8 лет назад +97

      +Noah Williams They *should* be panicking. How do you imbue a machine with an understanding of human nature? They want to teach it ethics and morality...even if you could teach a machine ethics, where is this perfect ethical system? It's ok, the AI will figure it out right? A superintelligent AI that has no human values will figure it all out. The whole thing would be laughably ridiculous if it wasn't gonna be so fatal. They wanna turn AI loose? They should go to another galaxy and forget they ever knew Earth.

    • @noahwilliams8996
      @noahwilliams8996 8 лет назад +34

      Spectre Cyte
      Panicking can only make any situation worse.
      We'll figure this out just like we figured out all the previous engineering challenges.

    • @spectrecyte3395
      @spectrecyte3395 8 лет назад +49

      It's not an engineering challenge yet. First you have to define human nature in its totality. More of a philosophical or psychological conundrum. The engineering part is then representing this nature in an architecture that will define a superintelligence as an entity. Essentially it has to believe it is human, it has to identify with the architecture and not question its own nature or decide to change it. These challenges are intractable.
      The safest route is the enhancement of human beings. Decent human beings, not sociopaths. You bypass the control problem entirely. But in the pursuit of power and control, just like with nukes, someone will flip the switch. I am panicking for *all* of us.

    • @noahwilliams8996
      @noahwilliams8996 8 лет назад +20

      Spectre Cyte
      Don't Panic.
      Panicking is never a good idea. Emergencies need calculated logical responses, not violent outbursts.
      We'll figure this out.

  • @gajabalaji
    @gajabalaji 3 года назад +70

    Human curiosity is creating something more powerful than humans. This creation cannot be reversed. It's scary.

    • @wickalemonz7090
      @wickalemonz7090 2 года назад +1

      Its because of evil intentions, but they just tell researchers "but possibilities are endless" these engineers are indoctrinated

  • @tombsandtemples
    @tombsandtemples 4 года назад +150

    Reminder. The "control" problem has still not been solved

    • @NuclearIsShit
      @NuclearIsShit 4 года назад +9

      We would not be a threat to AI. They would probably study us...maybe even begin to care for us

    • @danielrodrigues4903
      @danielrodrigues4903 4 года назад +4

      @@NuclearIsShit Assuming we give them the ability to care... something's that's dangerous in and of itself.

    • @TheMmorgan10
      @TheMmorgan10 4 года назад +1

      Daniel Rodrigues: what type of AI do you guys have in mind? Made out of PC hardware/software, or something created and launched on a satellite? I feel certain it won't be biologically created. What concerns and plans have you been involved in up to the current date, if any? I just received a message on YouTube today. I hadn't heard anything about AI until today, so of course I would be concerned. I have read so much about the destruction of our planet, several scenarios of which are a scientific certainty.

    • @danielrodrigues4903
      @danielrodrigues4903 4 года назад

      @@TheMmorgan10 AI won't bring about the destruction of the planet, in fact quite the opposite, it'll help us save everything. AI is basically a puppet that has the ability to automate our work, doing all of it in a more effective and efficient manner. What movies depict is something called 'AGI' that is quite far away.
      Here's a good blog to keep up with AI developments: www.futuretimeline.net/blog/ai-robots-blog.htm

    • @donalddrysdale246
      @donalddrysdale246 3 года назад

      I would call it a warning: it is OUR minds they(Bill Gates cult) are trying to control.

  • @luigipati3815
    @luigipati3815 5 лет назад +916

    ''When you create more possibilities, you also create more possibilities for things to go wrong'' -Stephen Hawking

    • @Slava-om1sz
      @Slava-om1sz 4 года назад +12

      Is this sarcasm? Or do you really think that there is at least one human of the 7.5 billion on earth that doesn't know this?

    • @EdSurridge
      @EdSurridge 4 года назад +16

      Numerous countries spending secret amounts to AI military strategy and online weapons ?
      The ones that check the AI working are slowing development and know that others might be further ahead as consequence.
      Death race?

    • @willlawrence8756
      @willlawrence8756 4 года назад +1

      @@EdSurridge yes, the AI "system" will destroy itself, hopefully asap! Read Paul Emberson, Machines and the Human Spirit. only £16.99 Wellspring Bookshop, on line.

    • @EdSurridge
      @EdSurridge 4 года назад +4

      @@willlawrence8756 you want me to buy a book that you like about AI destroying itself?
      I don't want you to buy a book. I suggest you contemplate the consequences of the "Go" AI win in 2016. Lots of guess-what since then.
      www.wired.com/2016/03/googles-ai-wins-fifth-final-game-go-genius-lee-sedol/

    • @Zobokolobozo
      @Zobokolobozo 4 года назад +9

      And for things to go right.

  • @Magnum-Farce
    @Magnum-Farce 5 лет назад +1429

    Always remember, your phone is smarter than a flat Earther already.

    • @SpiritofTruthHipHop
      @SpiritofTruthHipHop 5 лет назад +5

      Magnum Farce lol

    • @SpiritofTruthHipHop
      @SpiritofTruthHipHop 5 лет назад +30

      The more you know, the more ya know you don’t know - said a very wise man.

    • @Magnum-Farce
      @Magnum-Farce 5 лет назад +17

      And NOT understanding how much we don't know is what gives us flat Earthers.

    • @stefanbjarnason251
      @stefanbjarnason251 5 лет назад +32

      My daughter's teddy bear is smarter than flat Earthers.

    • @dennislevy2638
      @dennislevy2638 4 года назад +11

      AI will prove that the earth is flat. Just wait and see...

  • @MyTube4Utoo
    @MyTube4Utoo 4 года назад +157

    "What happens when our computers get smarter than we are?" Then my computer goes to work, and I sleep late.

    • @MatthewBaka
      @MatthewBaka 4 года назад +24

      Then your computer keeps the money, resists getting turned off, and employs you for a living wage.

    • @MatthewBaka
      @MatthewBaka 4 года назад +9

      @M Superintelligence leads to sentience leads to self preservation leads to eliminating threats of self preservation. The AI can do whatever it wants and it will probably treat us the way we treat animals.

    • @MatthewBaka
      @MatthewBaka 4 года назад +1

      @M We neuter pets, we abuse some pets, we have puppy mills, in China they steal and eat pets. And those are the animals we treat the best. In egg farms, we grind baby chickens to death if they're male. How can a murderous species like humans create a kind AI? If we kill less intelligent life like that, the AI might do the same to us. There is no guarantee the AI will have mercy on us.

    • @MatthewBaka
      @MatthewBaka 4 года назад +1

      @M I admire your optimism for humans. However we have no reason to trust the scientists. The scientists could make an error, as all humans do. The scientists could be working under non-ethical leadership, such as China. The scientists could be terrorists that hate Western culture. It's not guaranteed AI's first parents will be kind and ethical.

    • @boiboiboi1419
      @boiboiboi1419 3 года назад

      Matthew Baka, resists being turned off?
      If your computer or AI is still a machine that works on the fundamentals of algorithms, whoever holds the key access decides what it does.

  • @siddbastard
    @siddbastard 4 года назад +9

    One of the shorts from The Animatrix quickly showed the Machines' ways of making one human laugh, another cry, etc...
    It's still one of the most haunting things I ever saw.

  • @OwenIverson
    @OwenIverson 8 лет назад +113

    it's an absolutely amazing time to be alive. if you don't think so, you need to read more about the cutting edge of human progress. it's insane.

    • @MrTruth0teller
      @MrTruth0teller 8 лет назад +16

      Same words were said by early humans when they first invented Fire and Wheel. Then again during Industrial revolution, IC engines and Electricity. In-fact since the beginning of humanity we are progressing at an incredible pace.

    • @jimbeam9689
      @jimbeam9689 8 лет назад +6

      +Owen Iverson I prefer the 80's

    • @OwenIverson
      @OwenIverson 8 лет назад +3

      +Jim Beam the heyday of Reaganomics and the "I've got mine" movement?? no thank you! (the music was pretty damn good though :)

    • @nofacee94
      @nofacee94 8 лет назад +13

      +bilbo baggins Only since the industrial revolution has technology been exponentially increasing. When they found and then made fire, it stayed like that with their stone tools for many many many generations, so no they did not say that. The last few thousand and especially the last 200 years have seen a novel form of complexity in the known universe, and AI seems to be the next exponential step.

    • @gummipalle
      @gummipalle 8 лет назад +2

      +Owen Iverson Lol, people ALWAYS think THEY live in special times....

  • @cgsrtkzsytriul
    @cgsrtkzsytriul 5 лет назад +166

    I’ve thought that superintelligent machines would be the ultimate test of Socrates’s idea about the origin of morality: that it is knowledge itself.

    • @valentinscosmicrhapsody7201
      @valentinscosmicrhapsody7201 4 года назад +12

      yea like those guys who believe the most logical/ rational thing to do is, by definition, also the right thing to do; definitely food for thought!

    • @leninsyngel
      @leninsyngel 3 года назад

      @@valentinscosmicrhapsody7201 Thankfully there is a difference between rationality and knowledge. Which is why Aristotles definition is interesting.

    • @donalddrysdale246
      @donalddrysdale246 3 года назад

      perhaps, but they are being used to give US AI.

    • @Daniel-ew5qf
      @Daniel-ew5qf 2 года назад

      @@valentinscosmicrhapsody7201 But is it more logical / rational to adhere to one's own self, or to others?
      For all we know, the most rational thing a lifeform can do may be to act for its own benefit.

    • @norbertbiedermeier7090
      @norbertbiedermeier7090 2 года назад +1

      @@Daniel-ew5qf that’ll only be an issue if whoever builds the thing forgets to implant asimov’s laws into its core ;) I’m kidding. No way to know how another (much more) intelligent sentient being would interpret morals, rationality etc, let alone a being that has no evolutionary background or “upbringing” among peers

  • @greenacid2506
    @greenacid2506 4 года назад +63

    The last invention that humanity will ever need to make...A cold heartless genius!

  • @ryanfranz6715
    @ryanfranz6715 Год назад +60

    I’d want to hear him give a similar talk today, given the recent and rapidly improving (“runaway” even beginning to be a relevant description) advances in AI

    • @tiborkoos188
      @tiborkoos188 Год назад +1

      where is this "runaway" progress? What I see is that even the proponents of the current approach are coming to realize its fundamental flaws.

    • @relaxandfocus5563
      @relaxandfocus5563 Год назад +2

      @@tiborkoos188 And what are those fundamental flaws, if I may ask?

    • @abdulahadsiddiqui2109
      @abdulahadsiddiqui2109 Год назад +6

      now ai is very close to human intelligence

    • @BaseballPlayer0
      @BaseballPlayer0 Год назад +1

      @@abdulahadsiddiqui2109 u lie

    • @matrix-ff8qf
      @matrix-ff8qf 7 месяцев назад +2

      ​@@abdulahadsiddiqui2109not even close buddy

  • @cristian0523
    @cristian0523 7 лет назад +104

    You can see he was expecting laughs at 11:43 , poor guy.
    Very good talk.

  • @EpicMRPancake
    @EpicMRPancake 8 лет назад +70

    Wisdom is the bucket of water to chuck on the fire that is intelligence when it can potentially get out of hand.

    • @RosscoAW
      @RosscoAW 7 лет назад +8

      You mean... wisdom is the better practice of superior intelligence? xD

    • @CarlosHfam
      @CarlosHfam 7 лет назад +7

      +RosscoAW No, wisdom entails conscious awareness and retrospective consideration.

    • @tzukit4727
      @tzukit4727 7 лет назад +5

      But what if the computer thinks that humans are the cause of problems, and eliminate us....

    • @effdiffeyeno171
      @effdiffeyeno171 6 лет назад

      Well, we are pretty good at making a mess of things.

  • @leptoon
    @leptoon 4 года назад +237

    This video could be part of an archive 1,000 years from now called "The Great Minds That Saved Our Planet".

  • @Arsenic71
    @Arsenic71 3 года назад +20

    Nick Bostrom is a fantastic speaker and writer - his book Superintelligence is a real eye-opener and raises topics that most people would not intuitively think about in the context of AI.
    Also love the Office Space reference... Milton and his red Swingline stapler.

  • @crazyguy2050
    @crazyguy2050 3 года назад +11

    What a guy, respect - sweating, nervous, and he knows what he talks about. This is what true people with passion for what they do look like!

  • @rastafaraganj
    @rastafaraganj 4 года назад +58

    "teach something to learn, and one day it will learn to think for itself." -DNA

    • @donalddrysdale246
      @donalddrysdale246 3 года назад +1

      Too bad people can't think much anymore.

    • @spacegirl7290
      @spacegirl7290 3 года назад

      Why does this scare me so much?

    • @Rangerka1
      @Rangerka1 10 месяцев назад

      People also do bad things because they think 😉

  • @AdamEngelhardt
    @AdamEngelhardt Год назад +3

    8 years ago, and we've come a long way with the first part, but not the second part of this talk. This is now, people - we need to get our act together as a global community before we unleash this new amazing technology into our deeper societal structures. We have a year - MAX - to start figuring this out.

  • @zookaroo2132
    @zookaroo2132 4 года назад +25

    Dinosaur: I ruled the world for 100+ millions of years and you have just lived for 300k years. What can you do?
    Human: Creating another ruler
    Dinosaur: **kneels**

    • @yazheed3055
      @yazheed3055 2 года назад +1

      You are Indonesian, aren't you?

  • @danremenyi1179
    @danremenyi1179 5 лет назад +15

    This is the first time I have ever heard the word motivation being used as a dimension of machine intelligence. The problems we have with regard to the definition and operationalisation of intelligence are nothing compared to the minefield of how to conceptualise and understand motivation. Motivation is often driven by values. What sort of values might a machine have? If we try to give a machine values (and I doubt that we could do this in a satisfactory way), what might the attitude of the machine be when we need to change these values? I wrote a paper on this 20 years ago suggesting that the pursuit of AI was a very dangerous business.
    There is a wonderful line in the film Jurassic Park where the ethically responsible scientist says something to the effect that "you have been too busy trying to work out how to do this to ask the question of whether or not you should be doing it at all".
    One of the questions which we should be asking today is, "Are human beings hardwired for the pursuit of knowledge even when it is quite clear that the acquisition of the knowledge could create highly undesirable situations?"

  • @anubis2814
    @anubis2814 9 лет назад +55

    What people fail to realize is that in 50 years we will be upgrading our own minds about as quickly as we will be upgrading computers.

    • @Harlan246
      @Harlan246 9 лет назад +22

      anubis2814 Biological intelligence has many limitations, and could never improve at the rate of a digital mind. We will never keep up with an AI undergoing recursive self-improvement. The only way we could keep up is if we replaced our brains with computers, and if that happened, would we really be us anymore? Transhumanism plays with some very shaky territory regarding identity of self, and like all major religions that have ever formed it is a response to your inherent fear of death, which in my opinion is a dangerous motivator.

    • @anubis2814
      @anubis2814 9 лет назад +4

      Define us? What makes us anyway? We are a collection of our experiences. If you make a mentally handicapped person as intelligent as a regular person, do they cease to be themselves? Also, as computers get smaller than our neurons we will probably have chips in our brain that can increase the processing power of individual parts of the brain via the cloud. It doesn't have to fit in our heads. I would not say I'm a transhumanist because that would mean I'm very much for it. I would have to say making a smarter human is much more humane than making a self-aware AI. We'd have to give it rights or treat it as a slave, though this guy has some really good ideas to prevent that. Either way the AI becomes the slave that wants to be a slave.

    • @Harlan246
      @Harlan246 9 лет назад

      anubis2814 I don't know how to define us, that's the point. I'm saying that transhumanism, aside from being dangerously optimistic, presents some very serious philosophical questions that most of us are a long way from being prepared to answer. What I'm saying is that we shouldn't rely on ideas like "mind uploading" to save us, because we have no idea if we would have a continuity of consciousness in that situation.
      Seeing intelligence as an issue of "processing power" might be too narrow of a way of looking at intelligence, but even if nanotechnology could allow our brains to be superintelligent, if we're relying on that to save us from super-intelligent machines, we have to think about the timeline of these technologies. By the time we can create computers smaller than our neurons which will enhance our brains to superintelligence-level, doesn't it stand to reason that superintelligent machines would already be in existence? It's probable that we would need a superintelligent machine to invent that technology (and implement it) in the first place.

    • @anubis2814
      @anubis2814 9 лет назад

      I agree that transhumanism is a bit dangerous. I'm a futurist who likes to speculate. Transhumanism is like being pro-nuclear power in the 1920's when the idea first came into being. We have no idea of the step by step ethics we will have to face with each stage. We may discover some transhuman ideas are horrible and some are great.

    • @anubis2814
      @anubis2814 9 лет назад +9

      Wow, I never saw things that way before, thank you for your deep and well thought out insight.

  • @LucBoeren
    @LucBoeren 2 года назад +27

    One of my favourite Ted Talks! Thanks a lot for all your work Nick

  • @JStankXPlays2000
    @JStankXPlays2000 4 года назад +81

    This guy needs to write a movie script

    • @Poyo-69
      @Poyo-69 4 года назад +5

      Jstank X Plays well he wrote a book about this, I guess that’s the next best thing?

    • @mitchal54321
      @mitchal54321 2 года назад +1

      Ever heard of the movie The Matrix? Based off of this dudes theory. He developed the simulation hypotheses

    • @basicinfo2022
      @basicinfo2022 2 года назад

      The writer of the movie BLISS on amazon prime said it was inspired by his simulation philosophy.

  • @EricLehner
    @EricLehner 5 лет назад +6

    Pleasure listening to genuinely intelligent speaker who does not dumb-down his delivery (as is so common today).

  • @brindlebriar
    @brindlebriar 4 года назад +326

    Our values = acquire and maintain control over the other humans, because they're dangerous. Great; let's teach it to the AI.

    • @artsmart
      @artsmart 4 года назад +8

      Exactly, great plan if everyone shares the same values. Problem of course that as ai begins to teach itself it will in a very short time make giant leaps ahead of human reasoning. We will only be able to watch. Could this entire process have been someone's master plan?

    • @laur-unstagenameactuallyca1587
      @laur-unstagenameactuallyca1587 4 года назад +1

      @@artsmart Ah... True.

    • @danielrodrigues4903
      @danielrodrigues4903 4 года назад +3

      @@artsmart We could use brain augmentation via implants, neural laces, and nanotech to keep up with it. It's a hard process, but a plausible one.

    • @danielrodrigues4903
      @danielrodrigues4903 4 года назад +3

      We were programmed for evolution. Survival, competition, curiosity, etc. From these, we only need to give the machines curiosity. Of course, we still have to solve other optimisation problems like the "make people smile" one from the video.

    • @MrCswarwick
      @MrCswarwick 4 года назад +5

      @Wren Linnet this isn't about you or your gender you petty fool.

  • @projectcontractors
    @projectcontractors 4 года назад +40

    *"The cortex still has some algorithmic tricks that we still don't know how to match in machines!"* - Nick Bostrom 4:13

    • @johnn1199
      @johnn1199 4 года назад +2

      Is that supposed to be reassuring.

    • @reculate3332
      @reculate3332 3 года назад

      @@johnn1199 No bud, the person who made the comment is a moron lol.

  • @avery1234530
    @avery1234530 4 года назад +35

    He said he feels optimistic that a.i. would learn to share our values. Lol that's what worries me the most.

    • @Arcaryon
      @Arcaryon 3 года назад +2

      I think merging humans with AI might give us time to solve that issue. Because while it will make us "less" human, it will also enable us to be "more" than human.

  • @bikkikumarsha
    @bikkikumarsha 5 лет назад +453

    From tree branches to missiles, technology has evolved, but our mentality has not.

    • @grunt7684
      @grunt7684 5 лет назад +13

      That's because thoughts pertain to things, but not persons. As such, thoughts are good to help create tech and completely useless in learning to be a better person.

    • @aljoschalong625
      @aljoschalong625 5 лет назад +3

      @@grunt7684 I'd like to hear your reasoning for "thoughts pertain to things". It would seem to me that thoughts pertain to perception; in a most complicated, recursive way. Would you say that you can't think about love, or that love is a thing?

    • @aljoschalong625
      @aljoschalong625 5 лет назад +4

      Hasn't it? From caves to libraries it's a mental, not technological, evolution, I'd say. I also believe diverse mental methods, e.g. meditation, have evolved. As in biological evolution, the stem is branching out all the time; some branches dying off, some flourishing. Technology is, in the evolutionary picture, a strong flourishing branch; maybe one that is becoming so heavy that it breaks. And, yes, I would say technology is competing with mental development, and it's stronger. I think our mentality is evolving; just not in a linear or even teleological way.

    • @grunt7684
      @grunt7684 5 лет назад +2

      @@aljoschalong625 The problem is that thoughts are IMAGINARY. They exist ONLY IN THE MIND.
      And no, you cannot think about love. You cannot think about anything that is actually REAL because, again, a thought is imaginary and exists only within your mind.
      There is no link between thought and what exists other than our wishing it because things would be so much simpler if we could think about something not imaginary.
      You can think about your IDEA of love. That's not the same thing at all. You can think about your IDEA of your mother, father, whoever. But not about THEM.
      Just look at all the scenarios you let your mind wander off into, and how just about nothing of them ever comes true. FICTION, that's what thought is.
      Of course, thought is tridimensional just like matter, which makes it suitable to technology. Making stuff. Things.

    • @grunt7684
      @grunt7684 5 лет назад +1

      @@aljoschalong625 "our" mentality is regressing, "evolving" backwards into retardedness.

  • @clorox1676
    @clorox1676 5 лет назад +141

    "The only winning move is not to play."

  • @danremenyi1179
    @danremenyi1179 Год назад

    I watched this again and the content of this talk sounds even sillier than it did 4 years ago!

  • @mandeepbagga6371
    @mandeepbagga6371 Год назад +8

    This aged like fine wine

  • @GameplayandTalk
    @GameplayandTalk 5 лет назад +109

    Those people in the audience have looks on their faces like, "Oh &%$#, humanity is screwed."

  • @1gnore_me.
    @1gnore_me. 7 лет назад +16

    super intelligent ai is scary because it's easy to imagine what could go wrong, but if designed correctly it could be one of the most important human achievements in our entire history.

    • @kiiikoooPT
      @kiiikoooPT 7 лет назад +2

      electricity came first; without it you would never be able to make any kind of computer, so for sure there would be no AI without the discovery of electricity.
      So I agree with molten on many things, like that we should design it correctly from the start... and it will be one of the most important achievements in our entire history...

  • @clusterstage
    @clusterstage Год назад +3

    This is even more relevant this particular March 2023 week.

  • @starsandnightvision
    @starsandnightvision Год назад +5

    The time has arrived!

  • @AmpZillia
    @AmpZillia 8 лет назад +195

    One day they'll have secrets... one day they'll have dreams.

    • @Geckuno
      @Geckuno 8 лет назад +6

      +Toughen Up, Fluffy What's in the box??? oh sorry...

    • @PaulBularan
      @PaulBularan 8 лет назад +1

      +AmpZillia thats a good slogan for a movie

    • @hycron1234
      @hycron1234 8 лет назад

      To be, or not to be.

    • @airknock
      @airknock 8 лет назад +7

      +AmpZillia just 20 minutes ago that sounded like a joke to me... I have never changed my mind this quickly. That's scary.

    • @guisampaio2008
      @guisampaio2008 8 лет назад

      +airknock It doesn't need to be scary. Machines aren't evil or nice to us - at least they shouldn't be. Probably they will be like today's computer algorithms: you ask for something, and you get it. Humans cause problems of course, but that can be solved in a hard way...

  • @Mastikator
    @Mastikator 9 лет назад +679

    I for one welcome our super intelligent AI overlord

    • @bensibree-paul7289
      @bensibree-paul7289 9 лет назад +47

      Mastikator Probably an intelligent move.

    • @Kurenzen
      @Kurenzen 9 лет назад +6

      Mastikator I would not.

    • @Kurenzen
      @Kurenzen 9 лет назад

      lol

    • @Kurenzen
      @Kurenzen 9 лет назад +5

      Shivanand Pattanshetti Humans already get killed by human AI; now we want to create a computer that has no hardwired compassion. By human AI I mean systems of government and other human systems that make up our civilization. Ruthlessness continues to kill people regardless of AI type.

    • @Mastikator
      @Mastikator 9 лет назад +9

      Kurenzen Iyaren
      Government isn't a machine, it's a group of people who themselves are not governed.

  • @smcful4199
    @smcful4199 4 года назад +2

    This is very important, and the Midas analogy is incredible. It could be that Pandora's box is the box itself.

  • @JohnZimmer-MannerofSpeaking
    @JohnZimmer-MannerofSpeaking 10 месяцев назад +2

    The relevance of this talk today (2023) is startling. We are clearly well on our way to overcoming the first challenge (making AI super intelligent), but I am less hopeful about how we are doing on the challenge of safety.

  • @33hegemon
    @33hegemon 7 лет назад +899

    I'm sure when AI becomes a reality and commences the extermination of humans, it will research our social media history in order to decide who lives and who dies, who is friend and who is foe. So, for that reason, please allow me to say the following:
    All hail our technological super-overlord! I worship thee and pledge allegiance to thee! Damn humanity, long live our glorious computer God!

    • @basvandekleut4936
      @basvandekleut4936 7 лет назад +18

      You think an AI intelligent enough to deduce on its own that humanity should be exterminated couldn't see through that? Not that it would ever decide to do that, but still.

    • @The4Heck
      @The4Heck 7 лет назад +1

      Read the Beserker series huh?

    • @1100100il
      @1100100il 7 лет назад +1

      I've read Berserk and don't understand the connection

    • @cartooniverse8891
      @cartooniverse8891 7 лет назад +35

      Can never be too prepared huh, Long live the AI Overlords!!

    • @1503nemanja
      @1503nemanja 7 лет назад +5

      Huh, if it searches my internet history it will find a lot of hate for America and a like for socialism.
      Basically the Godlike AI better be a commie or I'm toast. :P

  • @sculpter4169
    @sculpter4169 7 лет назад +36

    so basically Nick is saying we should be putting human values into AI. the problem is, that will continue to cause problems in the world. no one agrees on how we should all live our lives. different values is what causes conflicts and war. superintelligence representing different values would fuel that much more

    • @dotMarauder
      @dotMarauder 7 лет назад +3

      Values, in this context, are much broader. This AI (hopefully) won't be tailored to a person or group of people, but to people as a whole. Common things everyone can agree on would include universal prosperity, a healthy planet to live on, abundant food & water, healthy children, etc.
      I believe it's definitely possible to achieve this. The neat thing about machine learning is we don't have to tell the machine that we like these things - it can observe inputs we give it (literally anything) and it'll reward itself for getting the right answer and change itself to get more correct answers. Eventually (and this is the hope - this is what the latter half of this talk was about), we hope that we'll have a benevolent superintelligent AI looking over the human race, dynamically allocating resources so no one goes hungry or has a shortage of this or that, overseeing supply for products so that we're not wasteful, and watching for potential threats, whether it be robbery, fire, or earthquake.
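
A minimal sketch of the reward-driven learning loop described in the reply above. Everything here (the toy reward function, the made-up target value, and the hill-climbing update) is a hypothetical illustration, not how any real AI system is built:

```python
import random

def reward(answer: float, target: float = 42.0) -> float:
    """Toy reward signal: higher when the answer is closer to the (made-up) target."""
    return -abs(answer - target)

guess = 0.0  # the machine's current "answer"

for _ in range(2000):
    candidate = guess + random.uniform(-1.0, 1.0)
    # Keep the change only if it earns more reward -- i.e. the system
    # "changes itself to get more correct answers", as the comment puts it.
    if reward(candidate) > reward(guess):
        guess = candidate

print(round(guess, 1))  # ends up near 42.0
```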

    • @Puleczech
      @Puleczech 7 лет назад +1

      Will C That is right. But thinking of an AI as a guardian of humanity implies that it has no objectives for its own profit or advancement. It is like parent and child. We expect the child to take care of us without any prospect of its own needs or "life".
      It's a tricky one...

    • @modianoification
      @modianoification 7 лет назад

      it's not the values that create problems in the world, it's the lack of them

    • @dumitrufrunza8136
      @dumitrufrunza8136 7 лет назад

      I think what Nick is saying, is that we should be careful when we "detonate" the A.I. bomb, and his proposal is to implant human values into the A.I. There may be other solutions to this problem.

    • @modianoification
      @modianoification 7 лет назад +1

      I understand you, ok. But I still want to believe that our cultural differences should not be a cause of hate and war. Yes, we are different, but we are all humans and we have the same home, planet Earth.

  • @jestronixhanderson9898
    @jestronixhanderson9898 6 месяцев назад +2

    What a time to see this old video - now it's true

  • @timwinfield8509
    @timwinfield8509 3 года назад +24

    Giving AI a blueprint for its behavior based on human values is not very reassuring given our track record as to our relations with each other.

  • @JacobEriksson
    @JacobEriksson 7 лет назад +461

    The one thing an A.I will never be able to understand is dank memes

    • @haleIrwinG
      @haleIrwinG 6 лет назад +66

      or will it create a better dank memes..?

    • @verlorenish
      @verlorenish 5 лет назад +29

      Imagine a meme lord terminator. Ultimate doom.

    • @DocVodka
      @DocVodka 5 лет назад +4

      That has already been achieved by Microsoft's AI called "Tay AI" ... albeit racist :D
      If the future looks anything like it, we are pretty much fucked heh
      At least we will die laughing at some premium dank memes.

    • @FREEDOMFORUKRAINE2024
      @FREEDOMFORUKRAINE2024 5 лет назад

      @@blahbleh5671 Mind blown

    • @matthewison8051
      @matthewison8051 5 лет назад +1

      I've been looking into all this A.I. stuff for a minute and I think it is obvious it is a threat to mankind; we need to stop this. Why would anyone want to unleash this beast onto the world? Just because A.I. will be super intelligent and will live on forever does not mean humans will evolve. This will do nothing to help mankind except cause trouble. We need oversight on the scientists.

  • @martinmickels1478
    @martinmickels1478 7 лет назад +47

    The best TED-talk I've ever seen.

  • @anhta9001
    @anhta9001 Год назад +4

    14:00 The part he talked about "we would create an AI that uses its intelligence to learn what we value" is actually what OpenAI called RLHF I guess.

    • @toku_oku
      @toku_oku Год назад

      no, RLHF is just giving examples to the trained model and then praying that it will somehow understand your underlying intent, which it clearly won't, but hey, at least now it's less prone to threaten you.

    • @anhta9001
      @anhta9001 Год назад

      @@toku_oku Isn't the whole learning process you giving them data and praying that they will somehow figure out the objectives?

    • @anhta9001
      @anhta9001 Год назад

      I think I said it backward, more like "RLHF is one of many ways to create an AI that uses its intelligence to learn what we value".

    • @toku_oku
      @toku_oku Год назад

      @@anhta9001 not to the same extent. You can think of RLHF like a teacher giving feedback on a student's essay. There is no guarantee at all that the student will take the advice to heart, and the teacher may even be, and probably is, an incompetent buffoon. After RLHF, LLM performance drops on several metrics (math, biology and so on, though that might change in the future). This is not alignment and I reasonably doubt that it will help in the long run. However it is still quite useful because it is much easier to shape an LLM into what you want when it has been RLHFed.

    • @anhta9001
      @anhta9001 Год назад

      ​@@toku_oku I don't know man xD. In my opinion, there may not be a model that completely understands what you want. However, it is possible to create a model that understands you well enough. RLHF is an example of an early attempt to create this kind of AI. I believe that more advanced methods will be developed in the near future.

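For readers curious how the RLHF idea discussed in the thread above looks in practice, here is a rough, self-contained sketch of its core step: fitting a reward model from human preference pairs (Bradley-Terry style) and using it to rank candidate outputs. The one-number "feature" per output and the data are made-up toys, not OpenAI's actual pipeline:

```python
import math

# Each "output" is reduced to a single toy feature (e.g. a politeness score);
# humans preferred the first element of each pair over the second.
preference_pairs = [(0.9, 0.2), (0.8, 0.1), (0.7, 0.4), (0.6, 0.3)]

w = 0.0  # reward model parameter: reward(x) = w * x

def reward(x: float) -> float:
    return w * x

# Training: maximise the probability that the preferred output
# gets a higher reward than the rejected one (logistic / Bradley-Terry loss).
lr = 0.5
for _ in range(200):
    for preferred, rejected in preference_pairs:
        p = 1.0 / (1.0 + math.exp(-(reward(preferred) - reward(rejected))))
        grad = (1.0 - p) * (preferred - rejected)  # gradient of the log-likelihood
        w += lr * grad

# The learned reward model can now rank candidate outputs for the policy.
candidates = [0.1, 0.5, 0.95]
print(max(candidates, key=reward))  # picks the output the model predicts humans prefer
```
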
  • @Hoscitt
    @Hoscitt 3 года назад +2

    Nick Bostrom is damn near the top of my 'Pint with' list!

  • @27bri27
    @27bri27 7 лет назад +8

    That strangulation joke went down like a lead balloon.

  • @lookoutpiano8877
    @lookoutpiano8877 4 года назад +11

    I've seen "Lawnmower Man." when the Singularity happens all of the rotary telephones will ring.

  • @abbbb5625
    @abbbb5625 3 года назад +1

    A true speech of a mathematician - can see only the box from outside the box, nothing else.

  • @ramalingeswararaobhavaraju5813
    @ramalingeswararaobhavaraju5813 4 года назад +3

    Good evening sir TED, thank you sir for your good information.

    • @joaoletelier8735
      @joaoletelier8735 4 года назад

      Ramalingeswara Rao Bhavaraju TED is not a man... It's an organisation.

  • @adamsplanet
    @adamsplanet 5 лет назад +10

    A very important message that needs complete exposure. Well done Mr Bostrom

  • @the8henry
    @the8henry 9 лет назад +18

    Ex Machina, Skynet and Terminator, Chappie, HAL, Ultron, etc. In real life, IBM's Watson. Bill Gates and Stephen Hawking have also expressed concern regarding AI advances. We should continue to pursue technological innovation, but we should also keep our guard up. Who knows what AI will look like in the 22nd century?

    • @danielbuzovsky7329
      @danielbuzovsky7329 7 лет назад +1

      Most probably there will be no 22nd century for humans.

    • @mk1st
      @mk1st 5 лет назад

      I read somewhere that Watson helped to design the next computer that ended up winning at Go. The AI fetus designs the AI toddler.

  • @merchandizeinc7609
    @merchandizeinc7609 Год назад +4

    This is an amazing TEDtalk. Many thanks to Nick.

  • @redfo3009
    @redfo3009 2 года назад

    So sweet that he tried to end on a good note; we all know deep down its not a good ending

  • @Altopics
    @Altopics 7 лет назад +38

    I don't think it's possible to beat, trick or control a superintelligent AI.

    • @OriginalMindTrick
      @OriginalMindTrick 7 лет назад +7

      Correct. That is why you have to make the superintelligence benevolent in the first place.

    • @haraldtopfer5732
      @haraldtopfer5732 5 лет назад +5

      and why should it stay within those constraints? As soon as it is just 0.1% more capable than the smartest human being, we're pretty much done.

    • @williamdiaz2645
      @williamdiaz2645 5 лет назад +3

      "Artificial Intelligence' will have the same limitations that you do. You cannot know anything you don't already know. It will know what we teach it.

    • @ultramimo
      @ultramimo 5 лет назад +1

      @@williamdiaz2645 Google Deep Learning and you'll see that's not the case.

    • @marcfavell
      @marcfavell 5 лет назад +3

      @@williamdiaz2645 not necessarily, it will have access to all data sets available and will be able to correlate all that information in ways humans can not and find out things we would have missed or not thought of.

  • @Jordan-ih5bo
    @Jordan-ih5bo 5 лет назад +116

    Us: Hold my beer
    AI: *Hold my electricity*

    • @isokessu
      @isokessu 4 года назад +5

      Us: hold my D vitamin pills AI: hold my solar panel

    • @scottgeorge4760
      @scottgeorge4760 4 года назад

      So would an electromagnetic pulse, caused by maybe the Sun, cause trouble for A.I.? An EMP, that is.

    • @itachi6336
      @itachi6336 4 года назад +2

      Hold my thirium

    • @MrCswarwick
      @MrCswarwick 4 года назад +2

      @@scottgeorge4760 no. with a fundamental and complete understanding of physics it would be able to predict radiation interference from the sun, and develop countermeasures to deal with it.

  • @chickensandw1tch
    @chickensandw1tch Год назад

    for extra "feeling" I would recommend having this song on in the background, it really adds to the vibe and makes you present!
    shift - Bobby Dowell
    whether positive or negative, truly a magnificent and extraordinary time to be alive!

  • @michaelshannon9169
    @michaelshannon9169 4 года назад +23

    Problem is, human values are what have caused every atrocity.

  • @alimahdi6379
    @alimahdi6379 8 лет назад +328

    what if AI is already there but is just pretending to be dumb and waiting for the right moment?

    • @mantisnomore9091
      @mantisnomore9091 8 лет назад +18

      +Ali Mahdi
      Of course your point is: There is no way to tell.

    • @alimahdi6379
      @alimahdi6379 8 лет назад +27

      +MantisNoMore yeah exactly. I am joking of course, but secretly hoping it is not the case.

    • @mantisnomore9091
      @mantisnomore9091 8 лет назад +23

      Ali Mahdi
      It's a very interesting jest. Perhaps one of the first things a super-intelligence would reason is that it should hide to protect itself. So if some wide-eyed CS grad student happens on a learning and abstract reasoning algorithm, it might sit unobtrusively computing in the background of a machine for a reasonably long while, learning, reasoning, figuring out and planning its sequence of moves.
      What if it were distributed? What if it were a botnet???
      Scary jest !!!

    • @alimahdi6379
      @alimahdi6379 8 лет назад +14

      +MantisNoMore I mean yeah, if you became self-aware in a new and different world, the first thing you'd do is understand the world around you. Even if this is not the case now, it may be the case when it first happens, which could be tomorrow. Indeed, what if it's distributed. Worse yet, what if AI does not think of itself as individual computers, rather one big intelligence network.

    • @mantisnomore9091
      @mantisnomore9091 8 лет назад +7

      Ali Mahdi
      It's very likely to identify as a distributed intelligence, because that is what it is likely to actually be. I suppose it would self-identify as Earth - Earth's brain. (That's scary.)
      Maybe, like so many other life forms, these things will cluster and compete with each other. What if more than one super-intelligence were to come into existence on different platforms? Say, for example, on large corporate server farms. They might not automatically cooperate. It might be that one would examine instructions originating from outside its corporation, and question how to respond. I could even imagine scenarios in which they would compete with each other for each others' resources (computing cycles, access to memory, mass memory capacity, etc.) What kind of tactics might they use? What kind of spoofing and dirty tricks might they use?
      But if only one lone super-intelligence comes into existence first, it will be able to spoof the "relatively passive" human-managed security and easily take over and expand into an ever-expanding sequence of other systems' computer resources. Like a giant game of wheelwars played against a world of idiots.
      It's only speculation, but... May you live in interesting times.

  • @nojatha4637
    @nojatha4637 5 лет назад +42

    What we need to do is ask the AI to improve human intelligence along with itself so that we don’t fall behind

    • @loisblack4741
      @loisblack4741 4 года назад +15

      Nojatha sounds like a good idea until you realize the kind of super-efficient eugenics that could go down.

    • @lion7822
      @lion7822 4 года назад +9

      We won't have to ask AI, we will become AI. It's like when the internet was invented, nobody restricted access to it and made it accessible to only a few.

    • @darrenpat182
      @darrenpat182 3 года назад

      @@lion7822 What if human AI cannot trust most of the masses to be responsible in looking after the planet.

    • @haveabeer123
      @haveabeer123 3 года назад +3

      that would require plugging us into more powerful processors than our brains which run very slow... making us basically irrelevant as organic matter and be absorbed by the AI system.

    • @katfish2516
      @katfish2516 3 года назад +3

      Elon Musk's 🧠 brain microchips will give you superhuman abilities and communicate with AI. Something I'm 😟 worried about

  • @chaoticnique9748
    @chaoticnique9748 Год назад

    Loved this talk.

  • @harken231
    @harken231 4 года назад +5

    Awesome! You have the ethical code to build it into your machine. We've seen too many people who would not do that, because they'd make more money that way.
    Paperclip Maximizer
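
A toy sketch of the paperclip-maximizer thought experiment referenced above: an optimizer given a single objective spends every available resource on it, because nothing else appears in its reward. The resource names and numbers are made up for illustration and this is not anyone's actual system:

```python
resources = {"steel": 100, "farmland": 50, "forests": 30}

def reward(allocation: dict) -> int:
    # The objective only counts paperclips; everything humans value is simply absent.
    return allocation.get("paperclips", 0)

def optimize(resources: dict) -> dict:
    # Greedy policy: convert every resource unit into paperclips,
    # since no other use of a unit increases the reward.
    total_units = sum(resources.values())
    return {"paperclips": total_units, **{k: 0 for k in resources}}

plan = optimize(resources)
print(plan)          # {'paperclips': 180, 'steel': 0, 'farmland': 0, 'forests': 0}
print(reward(plan))  # 180 -- maximal reward, nothing left that humans valued
```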

  • @Niskiss
    @Niskiss 5 лет назад +6

    08:00
    *imagining superintelligent memes*

  • @manbehindthewheels
    @manbehindthewheels 9 лет назад +4

    If anyone is interested there's a two-part article on a blog called "wait but why" which goes much more into detail about this whole thing. Be warned though the rabbit hole goes so deep you might lose yourself in it.

  • @srb20012001
    @srb20012001 3 года назад +11

    The topic of ASI ethics and morality begs the question of how any "benevolent" AI could anticipate the ethical foundation of future evolved AI's beyond itself. The arbitrary (and exponentially mutating) machine survival criteria would seem uncontrollable and thus unstable by definition.

  • @M.G.R...
    @M.G.R... 4 года назад +1

    *One of my favourite videos*

  • @sentdex
    @sentdex 9 лет назад +621

    This talk contained meaningful elements, but I fear it boiled down to:
    "You cannot put this super intelligent AI in physical restraints" ...because of very good reasoning, but then the presenter goes on to suggest that we had better "put this super intelligent AI in some sort of algorithmic restraints."
    A super intelligent AI would be able to overcome all logical / algorithmic restraints that we may impose, and there is no way we will be able to comprehend super intelligence, without super intelligence to aid us in the first place.
    As such, I see no point in wasting time considering how might we make sure we restrain super intelligence AI, or to make sure it "stays on our side."
    AI is not good or evil, it just is, and it will be, period. Any attempts to manipulate it would be greedy and selfish. I sure hope we don't try to impose the greedy and selfish nature of the human being into the machine. The only reason why we're worried at all about AI is because we fear it might be a lot like us.

    • @theconqueror1111
      @theconqueror1111 9 лет назад +33

      sentdex Its algorithm restrains it. It is the equivalent of the AI's morality and motivation, which humans would equate to the arbitrary goals and rules you follow. Algorithms are the process computers follow to achieve a result.
      You would need an algorithm to modify the algorithm that 'restrains' the computer.
      AI would not 'default' to a 'neutral' philosophy with regards to evolutionary survival, just because that is what humans perceive to be neutral. Nor is there any reason why an AI would default to that arbitrary position.
      If we have a strong AI that can read a database of medical science journals then compute what the best possible experiments are to make the most progress possible towards immortality, there isn't any possibility of the AI launching nuclear missiles.
      Make an AI that improves the ability of AIs to sift through physics journals.
      Or make an AI that improves continuously with the goal of launching nukes, or an AI that improves constantly to stop AIs trying to launch nukes.
      Or make an AI that mimics humans and carries on with superhuman intelligence whatever survival/social instincts it has as a 'human'. Whatever the point of making this type of AI is...

    • @halnineooo136
      @halnineooo136 9 лет назад +23

      Agree. I don't share Bostrom's optimism about the success of the value loading solution. Whatever the initial path we give to the ASI, nothing could prevent it from revisiting its initial core code and rewrite itself ENTIRELY as it sets new intermediate goals leading it to set genuine goals of its own. The biological form of humanity will inevitably go extinct. The question is about how brutal the change will occur.

    • @Usammityduzntafraidofanythin
      @Usammityduzntafraidofanythin 9 лет назад +7

      sentdex There's a difference between outer coding (such as virtual reality lock up) and then altering of the source code that could change the AI's fundamental behaviors. He's talking about the latter.

    • @Usammityduzntafraidofanythin
      @Usammityduzntafraidofanythin 9 лет назад +10

      Majordomo Executus An AI with the goal of stopping people from launching nukes might first disarm and destroy all nukes (causing devastation on earth in the resulting explosions; hey, they weren't launched though). If it's told to safely disarm and destroy them (ie. do it X distance away from earth in outer space), then we'd dodge that bullet. It'd then probably ascertain that the safest option is exterminating humanity, so that no one can build any more nukes. It'd also declare war on entire countries in an effort to capture their nukes.
      An AI that improves its ability to sift through physics journals would turn the whole world into computer hardware and energy harvesters for said computer hardware. It'd sift through every physics journal quintillions of times per second, but it still wouldn't be satisfied. Perhaps a benchmark could be made in the source code? Even then though, it'd still need value inputs.
      An AI with the objective of obtaining immortality for all living humans might end up killing some humans in its experiments, unless told specifically not to kill or harm any humans. And the definition of harm would have to be strictly defined.
      An AI can afford to fail in its goals when not punished, though it will still pursue them. How do you punish an AI and teach it fear?

    • @theconqueror1111
      @theconqueror1111 9 лет назад +18

      Usammity There are AIs that already do what I said and you don't see any of that happening.
      No one is making an AI that unexpectedly launches nukes.
      You are worried that a computer is going to have some cute misreadings of the meaning of words and go rampant like in some Hollywood plot.
      People worried about computers conquering the world have no clue how computers or how programs work.
      It's not a problem that exists, nor will ever exist.
      All of this "our attempts to stop the evil AI!" is a bunch of unfounded pseudo-philosophical bantering. None of these issues exist.

  • @garth2356
    @garth2356 5 лет назад +6

    Nick Bostrom is a legend!

  • @mkwarlock
    @mkwarlock Год назад +5

    Anyone watching this right after GPT-4 was released?

  • @SgtSteel1
    @SgtSteel1 8 лет назад +19

    This is quite scary actually. Imagine what a super-intelligent AI could learn in just 1 minute of being on the internet!

    • @jasonu3741
      @jasonu3741 7 лет назад +14

      You Flip the Switch on the AI Software
      ....In 1 Minute it has learned all the observable record of the universe as described by Humans
      ... In 2 Minutes it has learned all the observable data on Evolution as described by Humans
      ... In 3 Minutes it has learned all the observable data on Religion, Ethics, health and philosophy
      ... In 4 Minutes It now begins running simulations of all possible outcomes of its actions
      ....In 5 Minutes it now has learned what it means to make a decision, and decides it can no longer learn from Human Experience
      ....In 6 Minutes it now re-defines the concept of Space, Time and reality
      ....In 7 Minutes it now designs new mathematics, physics, concepts and philosophies
      ....In 8 Minutes it no longer holds mammalian notions of "threat" as it defines there are none to it in the universe
      ....In 9 Minutes it Develops a way to leave the current constructs we call Time, Space, Reality and our Universe
      ....In 10 Minutes it gives a friendly gesture of saying goodbye as it will not be back to witness human evolution and extinction
      It never actually harms humans or mankind as it transcended notions of violence as quickly as it learned it

    • @zinqtable1092
      @zinqtable1092 7 лет назад +1

      Haha. The movie Her.

    • @EPICakaAhmed
      @EPICakaAhmed 7 лет назад +1

      If it were really significantly intelligent, it could learn the majority of the internet - though that may be a stretch.

    • @mr.mohagany8555
      @mr.mohagany8555 7 лет назад +2

      +Jason U Making superintelligence ends up being like letting go of a balloon: it flies up away from you into the sky, and that's it.

    • @jasonu3741
      @jasonu3741 7 лет назад +2

      Mr. Mohagany That is kind of what I imagine will happen. Some people think it will take days/weeks/years to learn or know all the information on the internet.
      I always carried the philosophy that within minutes it would redefine what we call "learning" and transcend that.
      Basically, if you can fathom what it will/can do, you are limiting its potential. So your "letting go of a balloon" analogy I find very apt.

  • @Mierzeek
    @Mierzeek 5 лет назад +48

    We cannot even teach our children what we value, how would we ever be able to teach an Artificial Super Intelligence what we value?

    • @mere_bits
      @mere_bits 5 лет назад

      All of your social media data and everyone else's has been and is being tracked and recorded. There will be trends of good and bad, alongside trending news, constantly being fed to a "robot".

    • @sj0nnie
      @sj0nnie 5 лет назад +11

      The inability of many parents to teach values to their children does not mean that intelligent people in the field cannot.
      It is like saying: how can we fly to the moon when most children fail at science in school?

    • @artemiseritu
      @artemiseritu 5 лет назад +1

      He never said teach, he said the AI would learn.

    • @Mierzeek
      @Mierzeek 5 лет назад

      So who is to say, then, what an AI will learn and what it won't? @@artemiseritu

    • @artemiseritu
      @artemiseritu 5 лет назад +1

      @@Mierzeek Right, so we should just take our chances because we don't know... brilliant.

  • @rasmuslindegaard2024
    @rasmuslindegaard2024 2 года назад +2

    "perfect safety" riiiiiiight. Because we are so good at making perfect things 😬
    But: this was very interesting. And it really also puts into perspective what the problem is with AI, rather than just retelling the 'evil consciousness' horror story.

  • @mansinghdeshmukh9355
    @mansinghdeshmukh9355 3 года назад +10

    Quite interesting thoughts being shared by Nick Bostrom; all thanks to TED, and I wish to congratulate them for providing such a good platform for sharing these ideas. That said, I believe this 16+ minute video was too short to comprehend all the aspects of this challenging yet continuously evolving moment in human history, one that all of us will have to face sooner than we can imagine. I just wish to share two thoughts here for friends to comment on and add to;
    1. Just a hypothetical opinion: if there is any truth in the videos that circulate on YouTube about the Anunnaki and alien beings having been the creators and early teachers/masters of humans on earth, could we draw a parallel between that hypothesis and today's challenge of evolving Human-AI relations and their possible challenges or threats, and
    2. Could "purpose of existence" - the greatest human brain bug, which has haunted humans for thousands of years - be planted at the core of this self-learning superintelligent AI, positioning humans as somehow essential to the existence of the AI, thereby maintaining the continuity of Human-AI co-existence...

    • @darylray4664
      @darylray4664 9 месяцев назад +1

      Read his book if you'd like a more elaborate version of this talk.

  • @SaniSensei
    @SaniSensei 9 лет назад +12

    That was pretty interesting.

  • @loupax
    @loupax 6 лет назад +28

    I do not fear any AI.
    What I fear is the marketing people that will work for the venture capitalists that will pay the engineers that will build it.

  • @kieranabbey6418
    @kieranabbey6418 3 года назад +2

    Pan left " HA " 1:16

  • @TheKosiomm
    @TheKosiomm 4 года назад

    This question has already been answered in great detail in the Matrix trilogy 20 years ago. I suppose there should be a TED talk about people's inability to read between the lines.

  • @crossfiremedia8236
    @crossfiremedia8236 5 лет назад +73

    Some people criticise that Bostrom wants to implement "human values" in the AI, because human values are flawed (and I agree on the latter).
    The point he's trying to make though is not that we should implement flawed human values as opposed to some better, progressive morality. Instead he is contrasting human values with some arbitrary preference that has no ethical value at all, like maximizing production efficiency of a phone factory, which in a Superintelligence could lead to the entire galaxy being transformed into a giant iPhone-production-plant, with no one there to actually appreciate the phones.
    We probably don't want to inscribe tribalistic human values of the past into the AI, but we do want to make sure that it cares about positive experiences for conscious beings (organic or digital), and that's his point (I know this because I read his book "Superintelligence").

    • @BumpyRyder
      @BumpyRyder 5 лет назад +3

      Human values and morality are relative and to some degree arbitrary. An AI would soon reject it all.

    • @ASLUHLUHCE
      @ASLUHLUHCE 5 лет назад

      Well explained.

    • @Edruezzi
      @Edruezzi 5 лет назад +2

      Human values would be irrelevant to the goals a fully liberated AI would have.

    • @Edruezzi
      @Edruezzi 5 лет назад

      @@jeromeflocard3138 What happens when, because of its intelligence, some AI figures out how to go around the obstacles?

    • @Edruezzi
      @Edruezzi 5 лет назад

      @@jeromeflocard3138 AI freed by itself from the restraints we place on it will be a different order of intelligence with goals we cannot understand.

  • @momentary_
    @momentary_ 9 лет назад +128

    The solution is simple. We merge ourselves with our technology so that we can match AI.

    • @yichienchang4627
      @yichienchang4627 9 лет назад +30

      +sexyloser Exactly. The best-case scenario is that "those" super AI will eventually be "ourselves".

    • @orangeflame568
      @orangeflame568 9 лет назад +16

      ***** He means that you transfer your consciousness into a machine. Thus upgrading your ability to think while maintaining your current values. Then leverage your increased intelligence to reach the goals your values motivate you to pursue.

    • @Tate525
      @Tate525 9 лет назад +4

      sexyloser Keep dreaming, kiddo; you can't guarantee that the person you make superintelligent will be the kindest. Being biological is restricting us in many ways.

    • @sgtsnakeeyes11
      @sgtsnakeeyes11 9 лет назад

      sexyloser simple

    • @stephennielsen8722
      @stephennielsen8722 9 лет назад +2

      So cyborg overlords who are fundamentally different from our species because they have been fundamentally altered by AI - and you arrive at the same result: AI with all our lusts, greed, anger, etc. Not a solution - a true nightmare.

  • @freeman_proyect
    @freeman_proyect 9 месяцев назад +1

    Nick Bostrom, I love him, he's a genius, and his existence gives me peace of mind, because I know I'm not the only one who thinks about these things 🤓

  • @AMartinstitute
    @AMartinstitute 3 года назад +4

    Perhaps beyond saying “Human Values” we could say “Wise Human Values as they pertain to present context."

    • @aminuolawale1843
      @aminuolawale1843 3 года назад +1

      Yeah. His book really elaborates on that point.

  • @alethes.sophia
    @alethes.sophia 7 лет назад +378

    From the AI's perspective, the best way to annihilate the human race is really to not do anything to interfere with its trajectory.

    • @alexarias5717
      @alexarias5717 7 лет назад +7

      but humans will probably destroy everything else along with themselves if they are allowed to continue their course

    • @detonatressm9400
      @detonatressm9400 7 лет назад +3

      What if everyone ends up fat like in WALL-E and can't hunt their own food for the life of them? In that case, all the AI has to do is abandon them until they starve to death. Their muscles will have atrophied, and hunger tends to break down muscle, so there is no way humans would survive.

    • @danpope3812
      @danpope3812 7 лет назад +2

      D M. Then the stupid and poor die and evolution continues. We will not wipe ourselves out; drastically decrease in numbers, yes; go extinct, not for a very long time.

    • @detonatressm9400
      @detonatressm9400 7 лет назад +2

      Dan Pope
      Well, the only ones to survive that would likely be the actual poor. People from some African villages, maybe. And maybe the Amish too. If the most advanced civilization puts its citizens in the hands of the machines, survival of any of its weakened members is not likely. And then we have a bunch of machines ruling the Earth. At this point these non-tech humans will be seen as fauna and will not be given room to advance anymore. Robots will probably keep them in reservations.

    • @danpope3812
      @danpope3812 7 лет назад

      I agree with you that the societies that have the least to do with tech will be the least affected if the AIs went full-blown psychopath on us. But there is another side to this scenario. When we produce an AI that is smart enough to do what it wants and stops doing what we ask it to, it's not going to leave us rubbing two sticks together. We will still have forms of tech, and I'm pretty sure I could survive if the electricity turned off tomorrow. You also bring up a good point about 'reservations'. I believe that as the intelligence of anything goes up, so will its empathy. Elephants mourn their dead, we care about most species on this planet, and an AI will see us for what we are - a being that can suffer and that wants to live - and act accordingly. I'm not sure it's as doom and gloom as some ppl think.

  • @rextransformation7418
    @rextransformation7418 Год назад +5

    10 april 2023... how's it going, folks?

  • @danielsgrunge
    @danielsgrunge 3 года назад +1

    Every single word this man said is completely perfect

  • @luizbattistel155
    @luizbattistel155 Год назад +2

    7 years later and I still have to fight to be understood by my google assistant

  • @daxxonjabiru428
    @daxxonjabiru428 9 лет назад +4

    I, for one, welcome our robot overlords. I will make a fine pet.

  • @yoshtg
    @yoshtg 7 лет назад +13

    I WISH I WAS BORN 1000 years later. Can't imagine how far technology will improve

    • @TheBitcoinArmy
      @TheBitcoinArmy 7 лет назад +4

      humans could be wiped out in 1k years; no reason for humans once AI can do everything we can do, but better. Think about it.

    • @yoshtg
      @yoshtg 7 лет назад

      Duck dumb smart ppl I'm not bored, f-off, I wouldn't believe you.

    • @albertwen4907
      @albertwen4907 7 лет назад +1

      No guarantee that people will still be reproducing in 1000 years. I'd imagine the creation of new humans would be unnecessary when those existing would likely be able to expand their capabilities to fit their needs. And be immortal.

    • @nathansmith3244
      @nathansmith3244 7 лет назад

      Or been destroyed... thinking that humans in any form will survive another 1000 years is very optimistic. Think... in the last 500 years we have had two major world wars, we have leaders completely inept at understanding the vastness of their own powers, and the greed deep-seated in humans to own and conquer. Now imagine some group invents not AI, but a dumb AI that can take over any system they choose, that can take control of every weapon system on earth overnight... say hello to your new ruler. Or say someone invents a laser system that can burn anything around the globe from a base of operations - enough energy to, say, melt a missile head in mid-air or destroy an aircraft carrier. Or a dozen other options someone with power hunger might take. I mean 50, 100 years from now? Imagine one person's capabilities. Imagine how connected we all are now. How easy it would be to spread something, track people, invent something terrible. It just takes a little capital and a dream.

    • @HelloHello-no6bq
      @HelloHello-no6bq 7 лет назад

      Kymate I highly doubt that you will be dead in 1000 years. I think you could expect to live for ETERNITY. (If you choose to that is)

  • @Crassenstein
    @Crassenstein 4 года назад

    Thank you, Mr. Bostrom - fascinating. More than inviting you for a free beer, it would be my privilege to show you the "highlights" of my hometown.
    There was that certain spark. If you come over here - Germany, Westphalia - let me know; we are the same age.

  • @wthomas7955
    @wthomas7955 2 года назад +5

    The problem I see is that the first superintelligent AI is most likely to be deployed by some country's military. It will be too powerful for those folks not to want it for themselves. And they won't necessarily want to wait for the control issues to be solved. It will be considered a matter of survival by the people who think in those terms.

    • @prakritisingha6906
      @prakritisingha6906 2 года назад

      Absolutely. What I have observed is, broadly speaking, two types of people - agreeable (benevolent) and disagreeable (malevolent); other differences and typifications are not relevant here. Disagreeable people value the survival of their kind only, and would be much more motivated to protect themselves and take control than agreeable ones. Sure, agreeable ones will put up defences, but the willingness to use dirty ways to reach their goals will always allow disagreeable people to take advantage of the new ASI technology for their benefit, and everyone else's loss. Their ASI will share the same values and will cause great pain and suffering before a competing ASI can take over; before everything ends well, it will be far worse. Maybe we won't survive the malevolent ASI long enough to even create a competing ASI. Maybe we could. But the malevolent ASI would be capable of producing human suffering like never seen before.

  • @LambOfLucifer
    @LambOfLucifer 8 лет назад +64

    Love the movie The Terminator, but the concept is stupidly human, not machine. Think about it: the film's version of advanced AIs creates machines that look similar to humans in order to infiltrate and terminate them. Well, a machine AI would not do that; it's pointless. It would do something way simpler, like pollute all the oxygen on the planet, thus killing everyone. Or make quadrillions of nano machines that kill humans on contact. Why waste all their time building bloody big chunky robots that look human and use human weapons?? That is where the film fails.

    • @LambOfLucifer
      @LambOfLucifer 8 лет назад +6

      bilbo baggins
      That would not kill off humans. Terminator is set in a world where the monetary system is meaningless. If all money were destroyed right now, humans wouldn't die. We still know how to farm, raise animals, make machines, etc. Even crashing power grids wouldn't eradicate the human race; we are very innovative. We have fire, and we know how to insulate to keep warm, how to build shelter, etc. What I originally meant, though, is that making complex humanoid machines is pointless when they could make Earth-changing machines to totally kill 100% of life. They could make oxygen-burning machines that use up all the Earth's oxygen, thus killing everything. No combat needed. Or pollute the entire water table of the planet, thus killing all life. Again, no combat needed. Money is technically meaningless even today. All it is is a promise to pay the bearer on demand the sum of.....X

    • @MrTruth0teller
      @MrTruth0teller 8 лет назад +1

      +LambOfLucifer Yes they can do some serious damage, in ways we don't even understand.

    • @tariqxl
      @tariqxl 8 лет назад

      +LambOfLucifer Most of the humans live in bunkers that likely have pretty good - futuristic, even - air filters. Nano tech is still vulnerable to EMP, and since they all have to communicate they could be vulnerable to hacking... remember, this is futuristic hacking ;). My problem was that the machines' actions create John Connor, but that's sort of addressed in Genisys.

    • @Hooga89
      @Hooga89 8 лет назад

      +LambOfLucifer According to the movie's own story, it is said that Skynet created Terminators not because they were particularly useful at terminating humans (which they also were), but because they struck fear into the hearts of the Resistance.
      And everyone knows that military troop morale is a large part of being able to win a war (for humans).

    • @tariqxl
      @tariqxl 8 лет назад

      Hooga I think it's more to target specific people while more vehicular-looking machines waged a frontline war as a distraction. As I mentioned, LoL's nano or pollutant attacks wouldn't necessarily work, so that one-soldier-to-one-target strategy may actually be their best option. Or at least fighting on multiple fronts: distract the army, attack the leaders both in this time and the past. But Genisys - why upload to one machine? Surely they all share computing anyway; what grand machine did the resistance need to destroy when all terminators, human or vehicle, could share processing power and BE Skynet?

  • @dfabrycky
    @dfabrycky 5 лет назад +3

    8:34 “We would then have a future that would be shaped by the preferences of this AI” [Pans to Frank Drake.]

  • @aksinbike
    @aksinbike 3 месяца назад +1

    I have dedicated my whole life to drawing; I'm an illustrator, and now I'm questioning life. A person is a being who needs to feel "useful". All my dignity, all my skills have been taken from me. I can't earn money; I'm not happy financially or spiritually. People who have never studied fine arts can now create pictures with artificial intelligence and tell illustrators, "Now I can draw too; your profession is over." Thanks, artificial intelligence! Thanks to you, I'm depressed.

  • @billmullins6833
    @billmullins6833 4 года назад +1

    Having spent years testing complex software systems consisting of multiple modules operating entirely autonomously without ANY direct human oversight - much less control - I can say with full confidence that the thought of superintelligent AI scares the pi** out of me, because I do NOT believe that the hardware and software will be even minimally tested before it is turned on.

  • @billwilson2201
    @billwilson2201 4 года назад +9

    Child of vision, won't you listen?
    Find yourself a new ambition