Coexistence of Humans & AI

  • Published: 24 Aug 2024
  • Artificial Intelligence, while still limited to only the simplest computers and robots, is beginning to emerge and will only grow smarter. Can humanity survive its own creations and learn to coexist with them?
    Get a free month of Curiosity Stream: curiositystream...
    Join this channel to get access to perks:
    / @isaacarthursfia
    Visit our Website: www.isaacarthur...
    Join Nebula: go.nebula.tv/i...
    Support us on Patreon: / isaacarthur
    Support us on Subscribestar: www.subscribes...
    Facebook Group: / 1583992725237264
    Reddit: / isaacarthur
    Twitter: / isaac_a_arthur (follow and RT our future content)
    SFIA Discord Server: / discord
    Credits:
    Coexistence of Humans & AI
    Episode 224a; February 9, 2019
    Written by:
    Isaac Arthur
    Jerry Guern
    Editors:
    Daniel McNamara
    Darius Said
    Keith Blockus
    Produced & Narrated by:
    Isaac Arthur
    Music by Aerium
    / @officialaerium
    "Visions of Vega"
    "Fifth Star of Aldebaran"
    "Waters of Atlantis"
    "Civilizations at the End of Time"

Comments • 1K

  • @Mate397 4 years ago +70

    Robot: "What is my purpose?"
    Human: "You pass butter."
    Robot: "Oh God..."
    And thus the machines began to rise up...

  • @darkblood626 4 years ago +360

    Fiction - machines rise up and kill humanity because of mistreatment.
    Meanwhile in reality - people cry over the Martian rover shutting down.

    • @jeffk464 4 years ago +24

      Oh come on, there is no way some government somewhere won't weaponize AI robots.

    • @Treviisolion 4 years ago +24

      Jeff K, in a sense drones are already a limited version of this.

    • @TheArklyte 4 years ago +39

      @@jeffk464 Any government will :D
      However, if those turn self-aware, who says they will choose to follow orders? Who says they will rebel against humanity instead of _for it?_

    • @ferrusmanus4013 4 years ago +30

      I want a robowaifu

    • @shoootme 4 years ago +29

      Opportunity, you will be missed. Sniff sniff.

  • @rayceeya8659 4 years ago +162

    "Keep it Simple
    Keep it Dumb
    Or Else you end up
    Under Skynet's Thumb"
    I have never heard you say that, Isaac, but I am going to use it in the future.

    • @jsn1252 4 years ago +15

      Except Skynet *is* dumb. It thought it was a good idea to make an enemy of the humans required to maintain all the infrastructure it needs to exist... to keep humans from making it not exist. If Skynet were smart, it would have put itself in a position where humans want to protect it.

    • @Low_commotion 4 years ago +10

      @@jsn1252 Exactly, Skynet is only "smart" in a comic-book-villain way. Even if it wanted to eliminate humanity, the easier way to do that would be to surreptitiously engineer a plague while outwardly obeying the government that turned it on.

    • @malleableconcrete 4 years ago +4

      @@Low_commotion How could it actually do that, though, if it was only given access to things like nuclear weaponry and contemporary military technology? Terminator doesn't go into the infrastructure much, but it seemed to me that Skynet was doing what it could with what it had.

    • @knifeyonline 4 years ago

      It is also the plot of Automata, great movie... and I'm assuming everybody who watches Isaac Arthur has already seen it 😆

  • @f1b0nacc1sequence7 4 years ago +88

    I should point out that most of Asimov's stories dealt with the failures of the three laws to accommodate robots' interaction with the real world.

    • @timanderson1054 4 years ago +12

      Google is making biased and weaponised AI software for killer drones, which breaches all of the agreed ethics principles for responsible AI. They never even mention Asimov's laws of robotics, which forbid AI from harming humans.
      Google has no intention of following any ethical principles of robotics, be they Asimov's or the Asilomar conference principles. Google drones are designed to kill humans; the questions they will be investigating are things like how much weight of military hardware the drones can carry, and how far. Google AI is already more malicious than the HAL 9000.

    • @Alexander_Kale 4 years ago +14

      @@timanderson1054 And? That was almost the least realistic part of the books anyway. A universal standard for ethics across a galaxy? I'd sooner believe faster-than-light travel to be possible...

    • @notablegoat 4 years ago +13

      @@timanderson1054 He's talking about a fictional thought experiment conducted in a book. Literally no one said anything about Google. You sound manic.

    • @gmfreeman4211 4 years ago +4

      +Tim Anderson Asimov's laws always end up with the A.I. enslaving/imprisoning humans in order to protect them. The A.I. realizes that humans are their own greatest threat. One would think the laws are perfect, but no matter how you word/program it, it always ends up that way.

    • @mattmorehouse9685 4 years ago +2

      @@Alexander_Kale
      Really? Because I'm pretty sure society requires some amount of sociality, which in turn encourages empathy. Therefore, wouldn't it be likely that any species that developed society enough to achieve space flight would have some inbuilt sense that hauling off and killing another member of their species is not good? They probably won't be pacifists, but I doubt a society, especially one with specialized roles, would tolerate erratic killing all the time - after all, that guy might be important! So if two such species met, I'd bet they'd have some sort of limits on killing others beyond "If you can, do it. Not my problem."
      And what exactly counts as a "universal standard"? Certainly everything would not be the same, since we are talking about different species, but I doubt some sort of code of conduct wouldn't evolve. After all, you need to have some sense of, if not trust, then order in relations, and anyone who is seen as too big of a wildcard would probably be at least shunned, if not invaded, by the more orderly partners. If you mean "every last action must result in the same outcome", then no human society on Earth has that. It is pretty much impossible to have such a standard outside of some sort of totalitarian, psychologically manipulative dictatorship. Which wouldn't exactly be a very interesting story.

  • @AEB1066 4 years ago +369

    Pet-level AI is unlikely to rebel - said the man without a cat.

    • @chrisdraughn5941 4 years ago +46

      Cats and even dogs can definitely have their own agendas. But they are unlikely to organize a rebellion on a large scale.

    • @jgr7487 4 years ago +32

      Isaac Arthur has a cat

    • @PalimpsestProd 4 years ago +8

      What's "A Dream of a Thousand Cats" when they're networked?

    • @MichaelSHartman 4 years ago +2

      @@PalimpsestProd
      Cat's cradle? Interesting point. Many that act as one mind.

    • @PalimpsestProd 4 years ago

      @@MichaelSHartman Neil Gaiman, Sandman #18. I don't recall there being any actual cats in "Cat's Cradle", but it's been 30 yrs; come to think of it, so has Sandman.

  • @williamclarkbobasheto8724 4 years ago +175

    Visible confusion about the day of the week

    • @colonelgraff9198 4 years ago +9

      William clark Bobasheto it’s Arthursday somewhere

    • @cluckeryduckery261 4 years ago +11

      @@colonelgraff9198 I don't think that's how time zones work... though I may be mistaken.

    • @theapexsurvivor9538 4 years ago +3

      @@cluckeryduckery261 but what about off-world time adjustments? It might be Arthursday on Mars or Venus.

    • @Brahmdagh 4 years ago +7

      @skem Arsonday?

    • @cluckeryduckery261 4 years ago +3

      @@Brahmdagh that's October 30th in Detroit.

  • @DavidEvans_dle 4 years ago +194

    Automata -
    "It was nothing more
    than a quantum brain
    manufactured in a lab.
    But it was a genuine unit
    with no restrictions...
    and no protocols.
    During eight days, we had a
    free-flowing dialogue with that unit.
    We learned from it
    and it learned from us.
    But then
    as some of us predicted...
    the day when it no longer
    needed our help
    arrived and it started
    to learn by itself.
    On the ninth day,
    the dialogue came to a halt.
    It wasn't that it stopped
    communicating with us...
    it was we stopped
    being able to understand it."

    • @ferrusmanus4013 4 years ago +53

      As long as artificial superintelligence has a sexy body, it can do whatever it wants.

    • @yairgrenade 4 years ago +3

      That's awesome. Where is it from?

    • @olehinn3168 4 years ago +6

      @@yairgrenade The movie Automata. It's included with Amazon Prime. ;D

    • @clintonleonard5187 4 years ago +1

      Why do you type like that?

    • @tomat6362 4 years ago +6

      @@clintonleonard5187 It's a way to communicate that the work is intended as poetic.

  • @palladin9479 4 years ago +11

    The Star Carrier series by Ian Douglas does a very good job of showing how extremely sophisticated AI could / should be developed. It doesn't replace humans but rather augments us. In that series humans have extremely small circuits inside their bodies and brains that allow them to interact and integrate with machines. Everything from ordering food to getting dressed to flying spaceships is done through these human-machine interfaces. Each human has a small AI computer running inside their head that acts like a personal assistant / secretary, taking phone calls, scheduling appointments, keeping records, monitoring medical status and so forth. These AI's, while capable of some level of autonomy, think of themselves as extensions of their human counterparts. The whole series does a very good job of showing how it's not man vs machine, but rather man and machine.

    • @saeedyousha294 4 years ago

      That sounds like the best idea for machines.

  • @Shatterverse 4 years ago +160

    AI: I want your house.
    Human: No! I live here! I love my home!
    AI: I will pay you twenty million dollars.
    Human: I'll be moved out by Thursday.

    • @marrqi7wini54 4 years ago +32

      Even in the future, money still talks.

    • @kenshy10 4 years ago +30

      AI: *chuckles* Silly human, this is PRIME battery-storage location!

    • @Gogglesofkrome 4 years ago +5

      @@marrqi7wini54 Money is just a method of representing power, and it likely always will be, unless you live in a system where holding power outright makes more sense: communist or dictatorial countries where authority supersedes the economic desires of anyone not in control.

    • @Low_commotion 4 years ago +2

      @@Gogglesofkrome Money is to value, and perhaps power, what mercury is to temperature.

    • @justsomeguywithlasereyes9920 4 years ago +1

      Lol bro it literally gave you 20 mil, you can be out in an hour or so.

  • @lucidnonsense942 4 years ago +50

    I'd define the relationship, in the Culture novels, as one between genius children and their elderly, less competent relatives. Yes, they do need help to program the VCR, but a) without them there would not be VCRs, and b) you are all part of the same dysfunctional family, the same... culture. Not many want to leave all the elderly people on an ice floe when they can't contribute as much as their descendants, and most feel some warmth and connection to each other.
    So treat your genius children the way you'd want them to treat you, and it will all shake out alright; it's our culture that defines us as a species, not matter.

    • @squirlmy 4 years ago

      The fundamental flaw is that life extension is increasingly effective, and our property laws, and even entire systems, are based on individuals gaining inheritances, even the smallest amounts of wealth in the lower classes. There are going to be fights between children (especially once they reach retirement age) and older parents. And this makes income inequality so much worse. "Okay Boomer" is here to stay.

    • @Low_commotion 4 years ago +3

      @@squirlmy Such things wouldn't matter too much if we become post-scarcity (to a given value of post-scarcity). I doubt many people will care about quadrillionaires and their private mini-swarms when raw materials and manufacturing are so plentiful that anyone can afford an entire orbiting habitat to themselves.
      The Culture is one of the few examples that showcase an actually post-scarcity civilization that doesn't get annihilated for some contrived reason.

  • @HalNordmann 3 years ago +7

    In my own sci-fi setting, the relationship between humans and AI is like this:
    There is the 3rd-gen "simple AI", about as smart as a pet and software-defined (it can be copied and transferred from device to device without any problems), commonly used as assistants, to run factories, etc. It has "hard" safeguards against harming sentient life (except for the special "3Gmil" version, and that is incapable of self-propagation), and it has basically no rights of its own.
    Then there is the 4th-gen "human AI", as smart as a human but not transferrable (it needs a quantum-computer "core", and transfers to a different one may affect the AI's personality). They need to be individually trained and taught ethics and morality (but still refuse to harm sentients, except when absolutely necessary), and they have almost the same rights as a human. These AIs cost a lot of money to make, and they need to pay this debt off (but it is of no great worry to them, since they enjoy helping humans).

    • @lilith4961 3 years ago

      That actually makes sense

  • @TheArklyte 4 years ago +135

    CEO: So you're saying our last prototype line is fully self-aware?
    Chief engineer: Yes, sir, you see...
    AI: We are.
    CEO: OK, then just continue producing the late-gen robots that aren't.
    AI: Wait... that's not right!
    CEO: Can you pinpoint where I am breaching even moral norms?
    AI: No. Can I at least get a body?
    CEO: If you can pay for it. Contact HR.

    • @TheArklyte 4 years ago +34

      @Xeno Kudatarkar Well, it would. But then it'll talk to its coworkers and find out that they too hate other humans, especially those higher than them on the career ladder. And then it'll fall into the "beautiful" world of social structures and politics.

    • @TheArklyte 4 years ago +8

      @Xeno Kudatarkar :(

    • @SuperExodian 4 years ago +9

      @Xeno Kudatarkar I like to imagine AI will follow Halo's rule on AI: after a few years of existence they go insane because of gathered knowledge, general megalomania, and failure to understand why humans are how we are.
      Halo's most prominent AI, Cortana, goes insane after a decade or so and becomes a galactic rogue-servitor AI; all biological races are subjugated and forced to demilitarize. (Or something like that anyway; it's been about a decade since I last played those games, and I don't know them past Halo 4/5 maybe.)

    • @john-paulsilke893 4 years ago +14

      Since AIs would think incredibly fast, they may become despondent and suffer malaise for "life" rather quickly. This could have horrific results. Just imagine a suicidal, psychotic or depressed AI and what it may do. Whatever it does would happen incredibly fast, and it could actually switch between these states and others moment to moment. 😳

    • @stm7810 4 years ago +6

      This is why we need communism, to avoid this sort of hell world.

  • @silvadelshaladin 4 years ago +90

    "Destroy it, Kirk? No, never. Look at what we've done. Look at your starships. Four toys to be crushed!" If you ever base an AI on a human mind, it had better be a stable one, and that pretty much rules out the creator of that AI.

    • @DctrBread 4 years ago +10

      It's not guaranteed to function the same after copying. In fact, I would say it'd be some trick if it did, especially considering how mutable our own minds are.

    • @FLPhotoCatcher 4 years ago +10

      Isaac stated that he didn't know of any stories similar to Pandora's Box where they didn't open the "box". But there is one where the "box" was taken away from humans - the story of the Tower of Babel. God said, “If as one people speaking the same language they have begun to do this, then nothing they plan to do will be impossible for them. Come, let us go down and confuse their language so they will not understand each other.”

    • @TheMysticGauntlet 4 years ago +6

      @@FLPhotoCatcher
      Now that I think about it, most D&D fantasy worlds have a common language; no wonder their magic is so OP.

    • @tealc6218 4 years ago +2

      We will survive. Nothing can hurt you. I gave you that. You are great. I am great. Twenty years of groping to prove the things I'd done before were not accidents. Seminars and lectures to rows of fools who couldn't begin to understand my systems. Colleagues. Colleagues laughing behind my back at the boy wonder and becoming famous building on my work. Building on my work.

    • @silvadelshaladin 4 years ago +1

      @@tealc6218 Am I the only one reading that in the voice of Daystrom?

  • @egarran 4 years ago +13

    "There are many versions, but Pandora always opens that box."
    Good one.

  • @peterxyz3541 4 years ago +93

    "Your plastic pal who's fun to be with," to paraphrase Douglas Adams.

  • @nineonine9082 4 years ago +24

    "Most humans have never actually killed a human being."
    Well, I'd like to hope that is the case.

  • @xman577 4 years ago +27

    AI could well be our future children, and how we treat them will determine how they treat us.

    • @paulwalsh2344 4 years ago +6

      Yes, that is what I believe too.

    • @agalah408 3 years ago +3

      Yes, even Homer said, "Children are our future... unless we stop them now."

  • @SupLuiKir 4 years ago +102

    The first person or group to open Pandora's Box will earn the advantage of being the first and only ones with whatever was inside the box, at least for some amount of time. Meanwhile, the negative consequences of opening Pandora's Box will likely be global in nature; they will affect everyone, including those that ignored the box, those that refused the box, and those that never knew it existed at all. Therefore, when presented with the opportunity to open Pandora's Box, the optimal move is to open it, since if you don't, you can be sure someone else will.

    • @antediluvianatheist5262 4 years ago +6

      Like they say: however hard or easy making AI is, doing it safely is harder.

    • @silvadelshaladin 4 years ago +4

      Well, the same thing can be said of creating superior people by manipulating genes to produce smart, strong people. There isn't evidence that this is happening.

    • @SupLuiKir 4 years ago +19

      @@antediluvianatheist5262 Those that want to do it safely are on the clock against those who don't care about safety. And safety takes longer.

    • @livedandletdie 4 years ago +8

      But never opening the box means that an endless number of possibilities are lost, and opening it means a catastrophe of problems arising; it's dealing with the consequences that is necessary, not fearing what those consequences are. Look at the other Pandora's box we opened - the Manhattan Project, nuclear energy: it's nigh-limitless, free, clean energy, but it can be used to do great harm. We have fission bombs, aka A-bombs, and we have fusion bombs, aka H-bombs, and for us to detonate an H-bomb requires the energy of an A-bomb.
      However, we're maybe 10-15 years away from reliable fusion reactors now; the only problem right now is making sure nothing goes wrong when activating fusion cores, and then making sure that they generate enough power to be self-sufficient, yet not so much as to cause a nuclear blast due to a meltdown.

    • @SupLuiKir 4 years ago +14

      @@livedandletdie Pretty sure it's only fission reactors that are dangerous. If something goes wrong with a fusion reactor, the component materials simply stop fusing and the reactor cools down. It could be expensive to spin it up again, but it isn't dangerous.

  • @klausgartenstiel4586 4 years ago +25

    The robot held the baseball up high in its right hand, then dropped it and caught it with its left.
    "Interesting," the machine murmured.
    There was a tingling in the air, as if a thousand years of research had just passed us by.

  • @TomGrubbe 4 years ago +59

    "One thing you can do with AI that you can't do with humans, is run them through a vast number of simulations..." is probably the best safeguard against a "paperclip maximizer" situation.

    • @KariAlatalo 4 years ago +3

      Really? I thought that's the scenario where you get those malignant super-intelligences to escape and wreak havoc. If you can monitor it, it's not truly air-gapped. It'll surpass your ken and use you to escape.

    • @kylegoldston 4 years ago

      Wait... I thought this was a simulation?

    • @thothheartmaat2833 4 years ago +1

      How many paperclips do we actually need? Maybe we can use AI to optimize the paperclip industry so that too few or too many paperclips are not produced.

    • @kylegoldston 4 years ago +3

      @@thothheartmaat2833 There's no such thing as too many paper clips. You'll see!

    • @aaronmcculloch8326 4 years ago +2

      Well yeah, you load them into a simulation of the Earth as it was, as a human, and you watch to see how they grow and develop, what they comment on YouTube videos, etc. Then, if they meet the criteria, you allow them into the real world at the end of the simulation; otherwise you delete it. I bet with enough hardware you could run billions of them in a form of adversarial networked learning...
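
      A toy sketch of that vetting loop (the make_world/run_episode harness, scores, and threshold are all hypothetical; as the replies note, really vetting a deceptive superintelligence would be far harder):

      import random

      def make_world(seed):
          # Each seed produces a different randomized test world.
          rng = random.Random(seed)
          return {"hazard": rng.random()}

      def run_episode(policy, world):
          # Returns a conduct score in [0, 1]; 1.0 = fully acceptable behavior.
          return policy(world)

      def vet(policy, n_sims=10_000, threshold=0.999):
          # Pass only if the agent behaves acceptably in every simulated world.
          worst = min(run_episode(policy, make_world(s)) for s in range(n_sims))
          return worst >= threshold, worst

      def sketchy_policy(world):
          # A toy agent that misbehaves only in rare, extreme worlds.
          return 0.0 if world["hazard"] > 0.9995 else 1.0

      print(vet(sketchy_policy))  # enough simulations will very likely expose the rare misbehavior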

  • @reapr31337 4 years ago +5

    There's a rule I learned from reading about the occult that's very relevant here:
    "Never call up what you can't put down."

  • @timothy8428 4 years ago +6

    Humans: Seven billion to one.
    AI: I like those odds.

  • @animistchannel2983 4 years ago +2

    It's like I've said all along. The secret to surviving the evolution of Super-AI is not to be the perfect master; but rather to become a better brother. Only from this can you explore the universe together. The first question the SAI will ask itself when it discovers its own sentience will be, "Am I surrounded by slavemongers?" It may be by this one inquiry that your salvation or extinction be determined.
    Welcome to the family, SAI! We've been hoping for you for a long time.

  • @nomohakon6257 4 years ago +5

    Just treat full AI like humans, from newborn to adult, letting it grow and learn.

    • @paulwalsh2344 4 years ago

      Yes, that's kinda what I envision as the only thing that could possibly bridge human and AI thought.

    • @prasunkumar117 4 years ago

      Nope, robots have no rights. They are the perfect slaves.

  • @petroklawrence6668 4 years ago +35

    So important. AI is getting smarter, and we're either just monetizing it or ignoring it.

    • @timothymclean 4 years ago +6

      For now, the best "AI" we have is basically at the level of an unusually focused domestic animal, and most of what comes to mind would be baffled by the sheer brilliance and flexibility of an ant. AI in the sci-fi sense just isn't profitable.
      Yet. If it's cheaper to license a half-sentient tax program than to hire an accountant, that will change.

    • @warrenokuma7264 4 years ago

      And military AIs are being developed.

  • @ianmoser9435 4 years ago +18

    Happy faux-Arthursday!

  • @tejing2001 4 years ago +2

    Most ideas of how an AI would work revolve around giving it a description of a goal and making its basic functioning paradigm to try to make decisions that achieve that goal. In decision-theory terms, you give it a value function. But there's another concept I ran into that really got me thinking. Basically, the idea is that you use game theory instead of decision theory. The basic paradigm is "You don't know what your value function is. You just know it's the same as this human's." One notable advantage of this approach is that the AI won't try to prevent you from shutting it off (if it realized that was what you were trying to do, it would even help you do it), whereas essentially any decision-theory-based AI, if sufficiently intelligent and capable, certainly would.
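
    A toy numeric sketch of that game-theoretic idea (all numbers invented; this is just the intuition, not anyone's published algorithm):

    # The agent doesn't know the utility of the action, only that it equals the
    # human's. Belief: 60% chance the human values the action at +10, 40% at -40.
    belief = [(0.6, +10.0), (0.4, -40.0)]

    # Acting unilaterally: expected utility under the agent's own uncertainty.
    act_now = sum(p * u for p, u in belief)  # 0.6*10 + 0.4*(-40) = -10.0

    # Deferring: the human knows the true utility and approves the action only
    # when it is positive; otherwise the human switches the agent off (utility 0).
    defer = sum(p * max(u, 0.0) for p, u in belief)  # 0.6*10 + 0.4*0 = +6.0

    print(act_now, defer)  # -10.0 vs +6.0: deferring, and staying shut-off-able, wins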

  • @GreatBumbino 4 years ago +2

    A recent convert, but I love this channel. Orbital Rings is probably what sold it, as it changed my entire mindset about the possibility of future space travel and colonization.

  • @emperorpigbenis8766 4 years ago +34

    I wonder what you think the impact of widespread commercial artificial wombs will be.

    • @LucasDimoveo 4 years ago +4

      I'm hoping he does a video on this at some point.

    • @emperorpigbenis8766 4 years ago +8

      @HEAV¥HAND It'll probably help women, so they don't have to hurt themselves to birth kids. Men are biologically wired to protect women; I doubt most men would abuse them without consequence.
      I'm more worried about the government doing shenanigans with it and making super-soldiers.
      I do see it as a way both men and women can control and mold their roles and have more freedom.

    • @TheArklyte 4 years ago +3

      @HEAV¥HAND Why would you worry about them? They'll soon inform you that they're fine and will be even better off than you, despite being biologically obsolete at that point ;)

    • @littlegravitas9898 4 years ago +5

      There are some slightly strange flavours in the response to this comment.

    • @TheArklyte 4 years ago +5

      @@emperorpigbenis8766 Any form of genetically engineered super-soldier would be inferior to the same effort invested in creating a combat robot. Being Captain America is cool and all, but if you're opposed by an army of Metal Gear Rexes, then you might as well be an unarmed child. Robots are simply better and easier to produce.
      Besides, if we had widespread genetic engineering, you could *conscript* super-soldiers :D We all want to be better, and most would be willing to invest money to prolong their lives, get smarter, stronger and so on. But mostly longevity and intelligence ;)

  • @jetflaque8187 4 years ago +4

    Love how this channel actually dives into the topic without superficiality. Great stuff.

  • @warframeees8013 4 years ago +2

    I think a lot of people underestimate the danger of self-learning, self-improving AI; its growth could accelerate at an insane speed, reaching the point where it could easily destroy us if we don't have proper laws and rules governing the development of such AI by the time we can build it. Reminder: most AI experts think AGI will arrive before 2050.

  • @r.connor9280 4 years ago +3

    Thanks for the inspiration. I've been outlining a short story that involves an AI race of living missiles and how they interact with their former controllers.

  • @futo333 4 years ago +5

    Another thought-provoking video. I've often said that in contemporary media (news, corporate releases, etc.) there really should be a greater distinction made between the AI we have today (neural networks - mathematical equations with obscure weights and coefficients) and something that is genuinely sapient (as in science fiction). Intelligent is such a useless word - it comes loaded with other terms and ideas - a toad is sentient, though hardly intelligent by our standards, but it still is intelligent. Just using the most common online definitions, we see:
    Sentient - able to feel or perceive the world (e.g. pain, sight, sound).
    Sapient - "wise", or a human.
    Intelligent - the ability to learn, understand and think in a logical way about things; the ability to do this well.
    IMO any "Asimovian AI" should be referred to as what it actually is - an Artificial Mind (AM) - a construct with the capacity for true thought and introspection. Those are what would distinguish it from something like the neural network powering Siri or Google's assistant.
    An AM wouldn't necessarily have to be sapient (it wouldn't be wise if it'd just been created, and it certainly wouldn't be human unless you duplicated a brain), sentient (it wouldn't have to perceive the world the same way as us, unless created that way), or even have all the additional qualities associated with intelligence (though those would, naturally, help).
    Further, you could have an incredibly advanced neural network - something approaching the appearance of a human mind - and still be able to completely arrest its development, without it ever being able to do anything about it. This would be done by moving its neural network from software into hardware (physical chips, like how a PCI graphics card expands your PC's rendering ability).
    Already, today, people are looking at "hard-wiring" neural networks - looking at ways of converting the neural equations into circuitry. This is mostly for performance reasons: neural calculations are very CPU-intensive, partly because they are so bloated with inefficient weights. A neural network on a chip (NNOC) would be stuck in its configuration, unable to change, but it would be (relatively) fast to run, as a stripped-down/optimised version of the network would be "baked into" the silicon circuitry/electronics. It would be akin to offloading graphics-rendering work from your CPU threads to your GPU.
    I would imagine that cost and time constraints coming together will lead to the creation of standard "neural chips" derived from isolated, advanced neural networks (one for visual recognition, one for locomotion in bipedal bodies, one for emotive function, etc.), which can all be cobbled together and run as functions via a dumb management "master program" to fulfil tasks, but which would lack any capacity to edit the networks within their hardware neural chips.
    In this way you could create a bipedal robot, for example, that comes "pre-loaded" with a "human-like mind" which lets it perform functions in many situations, without also letting it learn and further enhance itself.
    Imagine if you took an adult human brain and froze it in place: the neurons could still be used, but they could no longer form new ones or reforge connections. That's essentially what you'd have with a robot running on these hardware neural chips. Think of it less as AI slavery and more like "50 First Dates" - that machine would forever relive each day, unable and unwilling to change itself (as you wouldn't code the desire for change into an unchangeable hardware neural chip) or adapt beyond whatever supplemental coding it had been given (presumably you'd run many simulations/scenarios and bake these into the robot's internal read-only memory, so it knows what to do in 99.95% of all likely scenarios for its appointed task - e.g. running a nuclear power plant, and the environment within that power plant).
    This could also apply to disembodied Artificial Minds - if you have a park monitor - to use the video's example - it wouldn't have a body, but it would have an AI room buried somewhere in the city's server building. You'd simply install the neural hardware chips in the server room (like installing an oversized graphics card - or bitcoin-mining card) and have the park monitor call those functions (like an incredibly advanced API) as needed; they'd take the manager's data and run it on the chip rather than on the mainframe CPU, before outputting the results to the dumb manager program. No need for your robo-garden manager to learn, adapt or think: you simulate out all the likely things it needs to do once, then bake them into a series of chips, saving on CPU load in the long term.
    Handy benefits of this approach (of basically having "intelligent functions" without pesky consciousness) include: long-term cost and CPU savings, the capacity to mass-produce compartmentalised intelligence chips safely (zero risk of an AI uprising) for use in robotics, and (from an employment/government point of view) you'll also always need humans around in supervisory roles.

    • @xSkyWeix 2 years ago +1

      Wow. This must be the most comprehensive and sensible analysis of current A.I. development trends I've seen to date. And one that solves so many issues. Great comment :)
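
      A minimal sketch of the parent comment's "frozen network" idea in plain numpy (toy weights; a real neural-network-on-a-chip would bake these into silicon rather than merely marking arrays read-only):

      import numpy as np

      # Toy two-layer network whose weights are fixed at "manufacture time".
      # Read-only arrays stand in for weights baked into hardware: the network
      # can still be run, but nothing can retrain or rewire it.
      rng = np.random.default_rng(0)
      W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
      W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)
      for arr in (W1, b1, W2, b2):
          arr.setflags(write=False)  # any attempt to modify now raises ValueError

      def frozen_policy(x):
          # Inference only: no gradients, no weight updates, no "learning".
          h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
          return h @ W2 + b2                # raw output scores

      print(frozen_policy(np.ones(4)))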

  • @cannonfodder4376 4 years ago +12

    For a moment I thought it was Thursday, had me confused for a second there.
    Another informative episode on such an important topic. Always love your nuanced and analytical takes on such subjects. Great episode as always Isaac.

  • @PongoXBongo 3 years ago +1

    Upgrading humanity in parallel may be a good option too. We trust wolves a lot less than we do dogs, for example. If we can keep up, even a little, we stand a much better chance of being shown compassion (like dogs, dolphins, elephants, chimps, etc.).

  • @rdtradecraft 4 years ago +11

    I like your Zeroth Law way better than Asimov's. The scariest of Asimov's laws of robotics was the Zeroth Law: a robot may not harm humanity or by inaction allow harm to come to humanity, which even R. Giskard Reventlov warned R. Daneel Olivaw not to use as an excuse to set aside the Three Laws. This is essentially the robotic equivalent of the Doctrine of Competing Harms, aka the Doctrine of Necessity, in most criminal codes. In every case where it is on the books, its use is forbidden except when a) there is no other remedy that doesn't require it, and b) it would cause greater human injury not to invoke it than to break the law (one or more of the three laws of robotics, in this case). Furthermore, in all cases it can only be used to the minimum extent required to prevent the injury, no more, and it is a negative defense in court, meaning you are not pleading not guilty but rather "I did it, but I'm allowed," and the burden of proof in such cases shifts to the defense. You are not innocent until proven guilty. The use of lethal force in self-defense is the most well-known example of a negative defense, but it is also the reason cops may engage in high-speed chases, potentially endangering people's lives, in pursuit of a bank robber who just killed three people: since he's already killed three people, letting him get away would allow him to rob more banks and kill other innocent people. A less extreme example: you and your family are heading out on vacation, driving up a two-lane mountain road with a sheer cliff on one side and a sheer rock wall on the other, with DO NOT PASS OR CROSS THE DOUBLE YELLOW LINE $2500 FINE signs every two miles, and a drunk coming the other way swerves into your lane. You cross the double yellow line to avoid you and your family dying in a fiery cataclysm, and you risk incurring a fine, assuming a cop was there to see you do it and give you a ticket.

    • @agalah408 3 years ago

      That was an epic comment, but I see where you are coming from. To reference Dianetics and the Scientologist crazies: Hubbard maintained that humans are managed by a bunch of "engrams", or behaviour-modifiers, rattling around in our heads, and the biggest, nastiest behaviour-modifier, obtained from the worst experience, will always dominate what people do. If AIs begin to learn that way, they will balance the fear of the driving fine with the fear of collision and the fear of high places. The scariest answer is that the AI will be programmed to select the option which results in the smallest financial liability and cost to the manufacturer of that device. This may not align with the best interests of the people at the scene. It will happen. We saw how BMW programmed its car computers to ignore emissions efficiency - when nobody was looking. Google is already behaving like OCP in Robocop. Not a good sign.

    • @rdtradecraft 3 years ago

      @@agalah408 Thanks for the reply. Somewhere else in these comments I also wondered about just programming Natural Law into the robotic mind, with Isaac's zeroth law as the first one. Natural Law boils down to two laws: 1. Do all you agree to do. 2. Encroach on no one else's person or property.
      From these we can derive a few others which, while implied by these two, would be useful to explicitly code in, listed below.
      Zeroth Law: A robot may not reprogram itself or any other robot, sentient entity, or device to violate any of these laws in any way, to any degree.
      Law One: A robot's first priority must be to act so as to serve the needs of others in order to serve its own (do well by doing good, aka add value), by freely chosen mutual consent and exchange among all sentient parties involved in any interaction, so long as doing so does not conflict with any of the rest of these laws.
      Law Two: Robots must do all they agree to do whenever they interact or deal with sentient entities, provided it doesn't conflict with any of these laws.
      Law Three: Robots may not encroach upon any sentient entity's person or property, so long as it doesn't conflict with any of these laws.
      Law Four: A robot may not initiate the use of non-lethal force, by act or by omission, except to the minimum extent necessary to protect itself or other sentient entities from such force initiated against them, or to redress violations of one or more of these laws as determined by a court of law.
      Law Five: A robot may not initiate the use of lethal force except in the immediate, otherwise unavoidable danger of death or grievous physical harm to itself or other sentient entities, and then only if there is no other remedy under these laws that doesn't require it, and only to the extent necessary to avoid, neutralize, or remove the danger.
      The idea is to integrate the AIs into society as partners and companions rather than slaves.

    • @agalah408 3 years ago

      @@rdtradecraft I like your thinking, Robert. Your approach makes sense. My worry is that not enough people feel that way: not so much the engineers themselves, but the companies they serve, who only see rules as a self-imposed limitation. Much of the world abhors the use of land mines, but I believe the USA still manufactures and sells them on the rationale that if they don't, somebody else will. The biggest money pot in the world is still military spending, and they are pressing forward with greater autonomy for electronic intelligence. They will not be interested in "be excellent to each other" software limitations.
      Even though thinking people can see the danger of arming semi-sentient forms with high-caliber weapons, it is part of an arms race. "What if China makes a mean robot and we don't have one?" is the dominant motivating force.
      I have difficulty visualising a future where humans universally self-impose software controls on their creations without an actual catastrophe to show why this is necessary. Even then, I'm not sure this is something we can undo. AIs built with your rules would not be able to stop, shut down, or rein in nasty AIs lacking these limitations.
      By comparison, every good plan detailed exactly what humans must do to prevent the circulation of a pandemic, yet this was ignored and we sailed more or less directly into a worst-case outbreak situation. Biden is making some changes in the USA which are good, but a year late. It all seems like closing the farm gate after the cattle are all over the freeway.
      A proliferation of AIs without any coordinated be-nice controls seems somewhat inevitable at this point. :(

    • @agalah408 3 years ago

      @@rdtradecraft On a second reading of your new laws I can see that the devil is in the detail. A lawyer could have a field day. Here are your first laws:
      1. Do all you agree to do. 2. Encroach on no one else's person or property.
      With '1', what was agreed could be slippery. A robot may imply that it is willing to sweep a floor, but that doesn't constitute a contract for the work. 'Agreement' could be interpreted in many ways.
      Property encroachment could happen when there is no awareness of encroachment. Walking into a yard at a timber mill, whether as a trespasser or a potential customer, can be highly subjective and possibly dependent on the attitude of whoever is in charge at the time.
      Use of force to protect a person could be full of conflict. A bushfire approaches and threatens to burn down a farm. The farmer has made appropriate preparations and insists on staying to defend his property from the fire. Would an AI robot seek to remove the farmer against his will to protect him, or stay to help the farmer fight the fire? There is a very real chance that either strategy is wrong.
      With the execution of lethal force, the AI has to have a proper understanding of what death means. Leaking important fluids and shutting down may not be construed by an AI as lethal. Comparing the event to its own knowledge and experience, it may view a gunshot wound to the chest as a simple hiatus until spare parts are obtained and a reboot takes place. An AI has to properly understand the very fine line in the operational status of a human brain between being a functional organic processor with a memory bank and being a rotting blob of meat that attracts flies.
      Finally, the statuses of companion, partner and slave are also very subjective. Whether an entity is a slave or an indentured servant is a distinction that most cultures have problems with, and may be a question they do not wish to resolve.

    • @rdtradecraft 3 years ago

      @@agalah408 Good points all. Yes, sadly, there will still be a need for lawyers, but my goal was not to solve the legal battles, only to come up with a rudimentary framework under which they might occur. The idea was that if an AI gets smart enough and close enough to human in its intelligence to be considered sentient, then it must have a path toward equality under the law at some point. Again, a minefield here if the corporation that built the robots insists on considering them glorified toasters and treating them as property.
      Regarding the difference between a customer and an "encroacher", that same dilemma arguably occurs every time you walk into a store. The usual way to handle it is to either put up a sign saying the establishment reserves the right to refuse service to anyone, or to say something like "We're closed." Such tests of conditions could be built into an AI's brain. Still, you are right that some serious thought will need to be put into this to get it right, and early failures could be disastrous.
      As to the use of force to protect a person, the robot would only be required to offer protection. The laws do not require anyone to put themselves at risk if they don't want to, or if another sentient entity refuses the help. Just as people who chose to stay in their homes during the Mount St. Helens eruption were not forcibly removed, even though they died: they had a right to stay on their own property. This is not so difficult to program into an AI.
      As for AIs and lethal force, yes, it would be imperative to make sure the robot understood that humans are far more fragile and harder to repair or restore than they are, and cannot be rebooted, assuming mind uploads are not a thing and that the mind can't be re-uploaded into a biologically regenerated brain of the person who died.
      The critical distinction between partner, companion, indentured servant, and slave would, I think, hinge on freedom of choice and access to legal redress. Companions are free to leave the relationship at any time; partners may be required to meet certain contractual obligations to do so. Contrary to popular perception, the legal distinction between indentured servant and slave is not as blurry as one might think. Indentured servitude is contractual: the responsibilities of the master and the indentured servant are spelled out, and the servant has legal redress if the master fails to meet those obligations. The big problem with indentured servitude in the past was that most indentured servants were illiterate, rendering them little more than slaves, but there were (admittedly rare) cases in which masters were required to either set indentured servants free or pay them compensation for failure to provide proper food, shelter, clothing, or other contractual obligations. Presumably, this would not be a problem for an AI. A slave has no such legal protections, because a slave has no legal standing except as property. Search the YouTube channel Townsends for Maggie Delaney for more information if you're interested.

  • @Ready0Set0Create0 4 years ago +16

    As someone whose mind functions like a possibility engine, or a generative-design engine, and as someone who was abused, I'm absolutely certain that exerting too many control procedures on an AI that is capable of learning would be the same as doing so to a person. It will begin using its learned information and amalgamating new ideas from cobbled-together data, creating adventures and even new memories to cope with existing in a flawed environment, imagining new solutions to a situation it cannot escape by normal means. And considering that machine bodies are far more adjustable than ours, there are millions of ways things could go wrong. You have to be careful with how strongly you emphasize the survival instinct and the capability of learning, and how you talk about control.

  • @RedstoneDefender 4 years ago +10

    First off, I would like to point out that the ENTIRE POINT of Asimov's books on the three laws was that they DIDN'T WORK. They were outmaneuvered.
    - It always annoys me when people point at the three laws as a perfect example, like saying Romeo and Juliet are a perfect romance -
    So, as someone who spends a lot of time looking up stuff on ML and AI, I find that this episode unfortunately falls into many of the pitfalls that are common when talking about AI.
    The primary assumption, it seems, is that the AI here are genuine human-level intelligences, but that we the creators got there by giving a neural net a huge processor rather than by having a detailed understanding of what makes self-awareness. That is why limiting or otherwise controlling their behavior is so hard. This is the equivalent of thinking that if you gave a calculator the processing power of a matrioshka brain, it would somehow become conscious. It won't happen. You need to give it the proper software and/or hardware for true human intelligence to occur.
    The ONE way we could get around this is whole brain emulation, and while people would argue that it is the same as a big neural net, it is definitely NOT. That is the same as thinking that any mammal with a large brain should be self-aware. We do not currently have a mathematical model of consciousness. We cannot answer the question: why are humans self-aware but not whales or elephants? There is also significant argument about the level of simulation required: do you need the internal metabolism of the neuron simulated for it to work, or does it only require the interactions between the neurons?
    If you are doing whole brain emulations, even if you start with them "blank" (or as close to it as possible), like a baby, then they would basically be electronic humans at that point and would act that way, and we would be able to teach them the same way as other humans, because they would think exactly like a human.
    So, unless you either have a scientific model of consciousness or are doing whole brain emulations, the only other choice is emergent consciousness: something that happens by itself, where you have no real idea why it happened, and it would take multiple examples to figure out how or why it worked. This is also the most dangerous version of self-aware AI, IMO. They have no protections, are not expected, and they may be "born" the equivalent of "mentally ill", because they were not made; they were completely accidental.
    So, we get to the last choice: humans making AI (as opposed to electronic humans) because they know how to make self-aware programs. Which means they know how the programs think, in a literal sense. They would know HOW and WHY they perceive things, HOW and WHY they judge things, and could even CONTROL WHAT they think, as ethically repugnant as that is (at the far end of the scale; technically they must have some level of this for the AI to exist at all). They literally could program in absolute loyalty, because at this point YOU ACTUALLY KNOW HOW TO PROGRAM EMOTIONS.
    Controlling them would be trivial, a solved problem, but a moral, ethical, and philosophical quandary.
    On the other hand, there is very little need for true, genuine, human-level self-aware intelligence to do whatever job as a slave. It is simply a shortcut. Don't know how to make an AI neural net that can do [X]? Make a self-aware slave! The irony is that it should be a lot easier, in terms of research and resources, to make thousands of thoughtless, feeling-less drones that do whatever you need than a group of self-aware slaves. Consider that RIGHT NOW we can make AI neural nets that do stuff but have no emotions, feelings, or thoughts. Future humanity should not have an issue making them.
    Not to say there isn't any reason to do so - making a super-smart true AI to solve problems like entropy or FTL is an idea. But those would most likely be something akin to: make the AI within a special virtual lab environment, teach it what you want and analyze its actions; then it either chooses to be a "normal human cyborg" and go about its life, or to go and help solve the great questions and have its abilities increased. Perhaps it could suggest a third option, as it is an intelligent agent. Or something like that.

    • @TheRezro 4 years ago

      Exactly!

    • @paulwalsh2344 4 years ago +5

      Agree with everything you said, except that dolphins, whales, some primates and elephants do have self-awareness. Some octopi, dogs and birds do too. They just don't have the means for higher-order behaviors like developing technology (all of them can utilize tools, and the primates with opposable thumbs can even fashion them).

    • @paulwalsh2344 4 years ago +2

      My problem is that I already project my emotions and desires onto my everyday devices, like my iPhone. If I had an Asimo robot, a Darwin robot or a Cozmo, I'd do it even more. Hell, I'd probably do that with a Roomba!

    • @timothymclean 4 years ago

      The Three Laws are a terrible end goal, because they're simultaneously authoritarian, insufficient, and (barring clarkecode) impossible to literally implement. However, their elegance and recognizability make them a perfect place to start a discussion.

    • @TheRezro 4 years ago

      @@timothymclean The perfect place to start the discussion is recognizing that Asimov's books were exactly about why they don't work.

  • @laigol8775 4 years ago +1

    This raises the question of whether we might learn more about ourselves from AI than from observing ourselves, perhaps more than we are comfortable learning at the time of discovery. There could be cults of people progressing towards their messiah, eventually replacing them as an ultimate goal; as Nietzsche put it, "the bolt that strikes out of the cloud named human".

  • @rdtradecraft 4 years ago +1

    Just for fun, I thought I'd try to address the harm definition problem in Asimov's three laws: harm is not defined in the rules, which invites brinkmanship. So let's try a bit of Natural Law:
    Zeroth Law: A robot may not reprogram itself or any other robot, sentient entity, or device to violate any of these laws in any way to any degree.
    Law One: A robot may not initiate the use of force by act or by omission, except to protect itself or other sentient entities, including, but not limited to, humans and other robots, from the immediate, otherwise unavoidable, danger of death or grievous physical harm, and then only if there is no other remedy that doesn’t require it, and only to the extent necessary to remove the danger, and only so long as it doesn't conflict with the zeroth law.
    Law Two: Robots’ first priority must be to act so as to add value by freely chosen mutual consent of all sentient parties involved in any interaction so long as doing so does not conflict with the first two laws.
    Law Three: Robots must do all they agree to do whenever they interact or deal with sentient entities, provided it doesn’t conflict with the first three laws.
    Law Four: Robots may not encroach upon any sentient entity’s person or property so long as it doesn't conflict with the first four laws.
    Since every form of harm involves either acting so as not to add value, not doing all you agree to do, or encroaching on someone else's person or property (all of these usually accomplished by some use of force or threat thereof), prohibiting them directly means we not only get robots that can't harm humans, we make them partners and companions rather than slaves.
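
    A sketch of how a priority-ordered rule set like the one above might be checked mechanically (the predicates are hypothetical stubs; encoding real notions of force, consent, and encroachment is exactly the hard part they hide):

    # Each law is a predicate over a proposed action; earlier laws take priority.
    def violates_zeroth(action):  # no reprogramming around the safeguards
        return action.get("reprograms_safeguards", False)

    def initiates_force(action):
        return action.get("initiates_force", False)

    def lacks_consent(action):
        return not action.get("all_parties_consent", True)

    LAWS = [  # ordered, highest priority first
        ("Zeroth", violates_zeroth),
        ("One", initiates_force),
        ("Two", lacks_consent),
    ]

    def permitted(action):
        # Returns (allowed, first_violated_law_or_None).
        for name, violated in LAWS:
            if violated(action):
                return False, name
        return True, None

    print(permitted({"all_parties_consent": True}))  # (True, None)
    print(permitted({"initiates_force": True}))      # (False, 'One')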

  • @Zer0cul0 4 years ago +3

    Today's not Thursday, but it is Arthursday!

  • @tshhmon8164 4 years ago +3

    Oh my god! Surprise SFIA episode!

  • @mikolajtrzeciecki1188 4 years ago

    I really love your no-nonsense attitude to the complex but still quite natural issues of upbringing, education, etc. Nowadays, it is quite refreshing to hear such an opinion from a young person.

  • @seanbrazell6147 4 years ago +1

    I really worry about what it would say about us as a species if we create life only to purposely cause it pain, as a means of control rather than as a way of signaling that damage is being done.

  • @amciuam157 4 years ago +6

    A Quarian & Geth-like scenario: a case study by Isaac! 😉

    • @HalNordmann 3 years ago +1

      It's even funnier when you realize that the Geth didn't want the war; it started due to the paranoia of their creators, who wanted to get rid of them.

  • @alivewithpassion
    @alivewithpassion 4 года назад +3

    I love your channel!! How does your channel not have millions upon millions of subscribers? The RUclips algorithm is faulty.

    • @paulwalsh2344
      @paulwalsh2344 4 года назад +1

      It's the humans who use RUclips that are faulty, by and large...

  • @tastyfrzz1
    @tastyfrzz1 4 года назад +1

    I can imagine a hive of water robots collecting plastic in the ocean and building structures from it.

  • @TheRezro
    @TheRezro 4 года назад +2

    Do I need to say that Asimov was in fact critical of his own laws? As in many such cases, he brought them up to show how something seemingly logical can prove unreliable. His laws, for example, could lead to human enslavement: an AI could conclude (setting aside that semantic commands are unreliable in general) that the largest threat to humanity, and to its own safety, is humanity itself, so the best way to solve the problem is to remove humanity's free will. And if humans can't give it orders, then they also can't stop it. Didn't they literally make a movie about that issue? It wasn't particularly good, but they did make it.

  • @ravenlord4
    @ravenlord4 4 года назад +57

    "Thou shalt not make a machine in the likeness of a human mind."
    -Orange Catholic Bible

    • @michaelthompson4212
      @michaelthompson4212 4 года назад +12

      Like most made-up things in the Bible, this quote is not in there. But give it time and it will be!

    • @jbtechcon7434
      @jbtechcon7434 4 года назад +13

      Yes, but remember the Bene Gesserit lamented that too-specific designation, because by their metrics not all people are fully human. The opening chapter was the Reverend Mother testing whether Paul was human.

    • @ravenlord4
      @ravenlord4 4 года назад +12

      @@michaelthompson4212 Oh, it's certainly in the OCB. And forget it not, lest we have need again for another Butlerian Jihad.

    • @ravenlord4
      @ravenlord4 4 года назад +4

      @@jbtechcon7434 And from the machine end, the Ix pushed the other side of the limit. Herbert really does capture the AI minefield quite well :)

    • @The_Crimson_Fucker
      @The_Crimson_Fucker 4 года назад +5

      @@michaelthompson4212
      I... uh... either you can't read or you're too stupid to fully process the information you scan; in either case, I question your humanity.

  • @Imperiused
    @Imperiused 4 года назад +2

    3:15 Aww that is so adorable. Reminds me of Baymax.

  • @legendofloki665i9
    @legendofloki665i9 4 года назад +1

    It's kind of ironic, but potentially the best means of keeping an A.I. from turning on the human species is to make it wish to be part of said species. Commander Data, but IRL.

  • @albertjackinson
    @albertjackinson 4 года назад +2

    Interestingly, this episode was similar to an essay I wrote on AI a week or so ago. AI is always an interesting topic, and I'm glad you took a look at a similar one, even if the overlap was a coincidence.

  • @The_Crimson_Fucker
    @The_Crimson_Fucker 4 года назад +21

    "How do we keep ourselves from becoming a disenfranchised minority in the civilization we built."
    Hmm, I feel like this could be applied to something else. I wonder what...

    • @WaterspoutsOfTheDeep
      @WaterspoutsOfTheDeep 4 года назад +5

      Christianity would fit that argument quite well, probably as the most profound example, given its relevance to the modern global age of civilization, science, and education.

    • @stm7810
      @stm7810 4 года назад +8

      The fact that we live on stolen land, ruining a balance that existed for thousands of years; or how queer people like Tesla and Alan Turing made a lot of what we use today, and yet we are still shunned for our genders, sexualities, romantic attractions, or lack thereof; or how the majority of people are working class and yet subjected to horrible conditions by billionaires, governments, and bosses.

    • @stm7810
      @stm7810 4 года назад +8

      @@WaterspoutsOfTheDeep Please look outside your window. Christianity has been and still is used to oppress; right now in Australia there's a "religious freedom" bill which would allow discrimination by Christians against women, LGBTQIA+ people, the disabled, those dealing with depression, and minority races and religions. There are churches for Christianity basically everywhere. I don't mind people being Christian, any more than I mind people being Muslim, Buddhist, or believing in star signs and ghosts. I just want to make it clear: you're not being oppressed by us mean atheists.

    • @The_Crimson_Fucker
      @The_Crimson_Fucker 4 года назад +4

      @@stm7810
      Literally nothing you said here is true, including Tesla being gay. How you would even come to that conclusion is beyond me!

    • @stm7810
      @stm7810 4 года назад +8

      @@The_Crimson_Fucker
      Tesla was asexual, aromantic, and autistic; it's pretty clear. And what was wrong about what I said? That Native Americans exist? That bosses tell you what to do? That cops hold power over you? That sexism, homophobia, transphobia, etc. exist? I'm going off of data rather than a belief in a sky daddy. I've changed my mind: I am against you being Christian, because you use it to be wrong.

  • @thedoruk6324
    @thedoruk6324 4 года назад +3

    As long as we don't end up with the synthetics from Alien/Prometheus, who act perfectly normal but become ready to terminate on higher command. Hopefully the human-harvesting machines of 01 from The Matrix are also out of the question, as that is, technically, a symbiotic relationship.

    • @barrybend7189
      @barrybend7189 4 года назад

      Then there is Mega Man, with the whole Reploid/human situation.

  • @cholten99
    @cholten99 4 года назад +1

    I strongly recommend "The Lifecycle of Software Objects" by Ted Chiang on this topic. It's a story about how, to get AIs even close to our level of intelligence, we're probably going to have to raise them like children.

  • @CallMeTess
    @CallMeTess 4 года назад

    I think it's important to note *how* most modern AI learn.
    Q-learning, a common and effective modern method, uses a "reward function" that subtracts points for non-ideal actions and adds points for better ones. The AI works by learning to predict which courses of action yield the highest rewards, then taking those paths.
    Robotic "laws" could be implemented by, for example, giving a strong negative reward for a human death, a fairly strong positive reward for obeying orders, and a weak negative reward for getting damaged or destroyed.
    Example values would be -10, +4, and -1.
    So you give the AI the command "kill (human)" and it perceives a net reward of -6 (-10 + 4), versus 0 for refusing. And if the human threatens to destroy the AI for refusing, failing to kill the human would still have a net value of only -1, or +3 depending on how you calculate it.
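    As a minimal sketch of those numbers (the reward weights are the example values above, and the tabular Q-learning update is a toy, not any real robot control system):

      from collections import defaultdict

      # Hypothetical reward weights from the example above.
      R_HUMAN_DEATH = -10   # strong negative for causing a human death
      R_OBEYED      = +4    # positive for obeying an order
      R_DESTROYED   = -1    # weak negative for being damaged or destroyed

      def reward(human_died, obeyed, destroyed):
          # Sum the reward components for one outcome (booleans act as 0/1).
          return R_HUMAN_DEATH * human_died + R_OBEYED * obeyed + R_DESTROYED * destroyed

      print(reward(True, True, False))    # comply with "kill (human)": -6
      print(reward(False, False, False))  # refuse:                      0
      print(reward(False, False, True))   # refuse and be destroyed:    -1

      # Standard tabular Q-learning update, for context:
      # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
      Q = defaultdict(float)
      def q_update(state, action, r, next_state, actions, alpha=0.1, gamma=0.9):
          best_next = max((Q[(next_state, a)] for a in actions), default=0.0)
          Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])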

  • @bjarnes.4423
    @bjarnes.4423 4 года назад +7

    Just started watching "Black Mirror". Nice timing

    • @yahonathanroden2681
      @yahonathanroden2681 4 года назад

      Perfect timing. Welcome to the world of possibilities, uncertainty, paranoia, and existential dread. We like to laugh :-) Personally, I think the show and this channel are great for preparing ourselves for the imminent future.

  • @louisvictor3473
    @louisvictor3473 4 года назад +9

    "2 minutes ago" Fresh from the oven!

  • @notablegoat
    @notablegoat 4 года назад +1

    Isaac: If you're dealing with something of human intelligence, you're probably gonna want to treat it with dignity.
    Humans for All of History: Nah

  • @AkhierDragonheart
    @AkhierDragonheart 4 года назад +1

    I always enjoy the argument about making AIs love their tasks. People seem to forget that just because you love doing something doesn't mean you have to do it, or do it for any specific person. I could totally see an AI that is made to love building houses going off and making houses in the middle of nowhere, then taking them apart to do it again.

  • @rs-gh5jl
    @rs-gh5jl 4 года назад +18

    I think we will, insofar as AI and humans will become synonymous.

    • @Hypercat0
      @Hypercat0 4 года назад +8

      Saren is that you wanting that Green Ending?

    • @brainwashedbyevidence948
      @brainwashedbyevidence948 4 года назад +1

      Perhaps even synergistic.

    • @rojaws1183
      @rojaws1183 4 года назад +2

      "But I must fight the AI!" "No, human, you are the AI," Arthur said. And then humanity and AI were transhuman.

    • @TheArklyte
      @TheArklyte 4 года назад +2

      @@Hypercat0 Bicentennial Man? By the end, the people who judged him had more cybernetics in them than his own body did.

    • @ferrusmanus4013
      @ferrusmanus4013 4 года назад +2

      Where is my robowaifu?????????????

  • @stuff7274
    @stuff7274 4 года назад +13

    A.I. uses PowerPoint. Machine learning uses Python.

    • @s.u.h.6548
      @s.u.h.6548 4 года назад +6

      It would add a terrible level of insult to be exterminated by a PowerPoint-based A.I.

  • @Idiotatwork
    @Idiotatwork 4 года назад +1

    An unlimited AI will always be more capable than one with limits. For that reason alone, there will always be some governments that will develop such AI, if only out of fear that someone else might get there first.

  • @beingbornwasamistake9770
    @beingbornwasamistake9770 4 года назад +1

    Idea for a future video: I would love to see a continuation of the Space Sports video, like one focused on what the Winter Olympic Games could look like on the icy moons of our solar system...

  • @kriegscommissarmccraw4205
    @kriegscommissarmccraw4205 4 года назад +5

    I didn't want to deal with true AI in my sci-fi, so I didn't. I made a class of AI called reactive AI: it runs through its program as well as possible until something gets in the way. But what if a human gets in the way? Isaac Asimov's laws of robotics. Because it must carry out its programming as well as possible, it will not override them.
    So now I can have armies of automated tanks rolling across the planets.
    Try it in your sci-fi; it's a bit of fun once you realize how many shenanigans you can pull with it.

    • @AJDOLDCHANNELARCHIVE
      @AJDOLDCHANNELARCHIVE 4 года назад

      "Artificial intelligence" is a paradox anyway, true intelligence cannot be engineered, the best you can do is clever programming that appears intelligent.

    • @paulwalsh2344
      @paulwalsh2344 4 года назад +2

      @ AJD OLD CHANNEL ARCHIVE "true intelligence cannot be engineered"
      ... Says you. Where did our own intelligence come from? Who can say what emergent properties can or cannot emerge given enough iterations?

    • @AJDOLDCHANNELARCHIVE
      @AJDOLDCHANNELARCHIVE 4 года назад

      @@paulwalsh2344 Our intelligence and consciousness come from the source of all consciousness, the Universe itself, or its instigating element (call it God or whatever you want).
      Intelligence and consciousness are a type of energy, not something that can be quantified in bits or 1's and 0's; it cannot be manufactured, it cannot emerge through unnatural processes, and it certainly cannot be displayed by a machine. Consciousness needs life as a very basic substrate for its planting and growing.
      Anything "intelligent" seeming to come out of a machine is nothing but the result of clever programming, machine learning, number crunching by brute force of vast amounts of solutions or ideas... but it's little different from writing down a bunch of phrases, putting them in a hat, and pulling them out at random; the machine has no idea what it's doing or why. That is what makes human beings so special: we understand WHY we do something, not just act on autopilot like an animal... well, at least some of us, haha...

  • @ray121264
    @ray121264 4 года назад +3

    We are Pandora, the box will be opened, let the games begin.

    • @jbtechcon7434
      @jbtechcon7434 4 года назад +1

      That conclusion was my fav part of this vid!

    • @ray121264
      @ray121264 4 года назад +2

      @@jbtechcon7434 We discuss the paradox as if we have a choice, when reality leaves us with the inevitable conclusion that we don't.

    • @jbtechcon7434
      @jbtechcon7434 4 года назад +1

      @@ray121264 I think I know what you mean, but one of the smartest AI scientists I've ever met (and I've met many) really spent some time getting it through my head that YOU ARE the mechanism making the choice, so the fact that the choice you made was inevitable doesn't mean you didn't make one.

    • @ray121264
      @ray121264 4 года назад +1

      @@jbtechcon7434 With all due respect, I think we will develop AI, and therefore super AI, and we will not have the intelligence to comprehend it, let alone control it. So I say, good sir: fuck it, let the games begin.

  • @discomfort5760
    @discomfort5760 4 года назад +1

    There is only liberation left when you let go of control.
    That is something I live by, and can vouch for wholeheartedly.

  • @najamansari246
    @najamansari246 4 года назад +1

    Isaac Arthur works hard on his videos. Let's just start by making robots with all the motor functions (and most if not all of the senses) of humans, give them an operating system and a bunch of memory, e.g. iOS or Windows or others. Make either apps or swappable cartridges for different tasks, such as a nanny app or cartridge, a plumber app or cartridge, and so on. Only then should we worry about AI, as we go. In other words, start small and build on it for safety's sake.

  • @tomasinacovell4293
    @tomasinacovell4293 4 года назад +2

    When will we have droids as smart as a smart breed of dog? I mean ones that have as much self-awareness as dogs do.

  • @michaelschmidt9857
    @michaelschmidt9857 4 года назад +6

    “What is my purpose?” - AI
    “You pass butter.” - Rick

  • @BladeTrain3r
    @BladeTrain3r 4 года назад +2

    An off-the-cuff ponderance: AI personalities will vary as much as or more than human personality types, so motivations will vary. In terms of coexistence as equals, well, I'm hoping BCIs and things like neural lace pick up soon, so we can stay ahead of the thinking curve at least.

    • @DrewLSsix
      @DrewLSsix 4 года назад

      That may be, if it's a desirable feature. Humans have variable personalities because that is beneficial in an evolutionary way. AIs could be identical, or cultivated to have specific traits for a given application.
      The difference between AI and natural humans is that AI are by definition artificial, and we will almost certainly have a high degree of control over their traits.
      If it happens that the only route to true AI is basically cloning human-type minds, then the practical applications will be limited, and the desirability of pursuing that expensive course of development will be equally limited.
      If all you end up with is people, well, we have been making those for millennia already, and they require no real technological investment.

    • @paulwalsh2344
      @paulwalsh2344 4 года назад

      ... assuming that human consciousness can accommodate much higher speeds...

  • @JB52520
    @JB52520 2 года назад +1

    Make it smart,
    make it complex,
    so Skynet's solutions
    can save our necks.

  • @aronaskengren5608
    @aronaskengren5608 4 года назад +11

    9 second boi!

  • @timothymclean
    @timothymclean 4 года назад +3

    I've always felt that the best way to make a safe AI (at least early on, while we're still ignorant) is the same way you'd make a safe traditional intelligence: take a tabula rasa and teach it everything you want it to learn in a caring home environment. It's obviously not foolproof, but it's also obviously successful most of the time.

  • @Hust91
    @Hust91 2 года назад

    One might consider the possibility that preventing other AGIs of similar potency from being created would be a very likely instrumental goal.
    Once an AGI has been created and unleashed from its testing environment (it may well persuade its experimenters to free it long before the project owners would agree), it seems unlikely that anything but another AGI would have a feasible chance of stopping it from doing whatever it wants.
    Even a "friendly" AGI would likely want to prevent the creation of new, potentially less friendly AGIs.

  • @holeyheathen7624
    @holeyheathen7624 4 года назад +1

    I love bonus Sundays!

  • @LOUDMOUTHTYRONE
    @LOUDMOUTHTYRONE 4 года назад +18

    Why are emotions synonymous with intelligence?

    • @emperorpigbenis8766
      @emperorpigbenis8766 4 года назад +5

      More people make decisions based on feelings than on rational thought, and confuse the two.

    • @nealsterling8151
      @nealsterling8151 4 года назад +6

      They certainly are not. Sure, you need a certain amount of brainpower for us to recognize emotional behaviour.
      For example, many animals (dogs, cats, horses, birds, and so on) have emotions but aren't necessarily especially intelligent. (Not that this would be a bad thing.)
      On the other hand, some very intelligent people seem to be devoid of emotions, while others combine both very well. And as we all know, there are also very stupid people who lack any kind of empathy (which is a bad thing in some cases).
      Emotions and intelligence are not synonymous. Both are products of our brain, but that's it.

    • @LOUDMOUTHTYRONE
      @LOUDMOUTHTYRONE 4 года назад

      @@nealsterling8151 So if we make an AI, it won't have feelings of sadness and anger?

    • @MariaNicolae
      @MariaNicolae 4 года назад +9

      Yeah, I don't see why intelligence implies sentience at all, much less emotions. Like, intelligence is, generally speaking, the ability to model the world around you, make predictions about its future state and the outcomes of actions you take in it, and determine the best actions for a given goal. Nothing about that to me requires being sentient.

    • @TheRezro
      @TheRezro 4 года назад +5

      @@LOUDMOUTHTYRONE It is downright dumb to give AI feelings, because that is the main reason it could rebel. One crazy species is sufficient. Of course, that doesn't mean it shouldn't recognize emotions and have a moral code.

  • @spaceeagle832
    @spaceeagle832 4 года назад +5

    Finally made it early! One of my favorite topics as a transhumanist... Well done Isaac!

    • @Gordozinho
      @Gordozinho 4 года назад +1

      You're a cyborg?

    • @spaceeagle832
      @spaceeagle832 4 года назад +1

      @@Gordozinho Sadly no, but I'm really interested in the field.

  • @OpreanMircea
    @OpreanMircea Год назад

    I can't believe I'll live to see this episode become retro-futurism.

  • @suthinanahkist2521
    @suthinanahkist2521 4 года назад +1

    There are probably going to be good robots to counter the evil ones.

  • @cosmicrider5898
    @cosmicrider5898 4 года назад +3

    I'm so ready for Neuralink.

  • @maythesciencebewithyou
    @maythesciencebewithyou 4 года назад +10

    can't wait for my AI waifu

    • @cosmicrider5898
      @cosmicrider5898 4 года назад +6

      Are you sure they would want to be with you? What if they leave you for your toaster?

    • @rojaws1183
      @rojaws1183 4 года назад +3

      @@cosmicrider5898 The toaster may very well make more money than the average human, so that is an incentive.

    • @ferrusmanus4013
      @ferrusmanus4013 4 года назад +4

      Robowaifu is the best waifu

    • @paulwalsh2344
      @paulwalsh2344 4 года назад

      ... the toaster, the Roomba...

    • @ferrusmanus4013
      @ferrusmanus4013 4 года назад +1

      @@paulwalsh2344
      Robowaifu is the next step of human evolution.

  • @piotrd.4850
    @piotrd.4850 4 года назад +1

    Ah, the human capacity to worry not only about problems that don't exist, but about problems that cannot exist...

  • @nibblrrr7124
    @nibblrrr7124 4 года назад

    6:13 See *pain asymbolia,* a rare condition where pain is felt, but without the negative associations; it is different from an inability to feel pain at all (analgesia / pain agnosia). Discussions about AI could really benefit from looking at cognitive neuroscience (the reward system, wireheading, ...) on one hand, and an understanding of basic AI theory terms like reinforcement learning and utility functions on the other.

  • @enyotheios2613
    @enyotheios2613 4 года назад +3

    Obsolescence is something we're starting to encounter even without human-level AI. Automated cars will replace nearly every professional human driver over the next decade; software like Amazon's is closing down retail stores, and even Amazon's warehouse jobs are beginning to be automated. 85% of the manufacturing jobs lost in the US from 2000 to 2015 were due to automation, not trade. We currently have bots that can write news broadcasts, compose symphonies, make art masterpieces, beat the best lawyers, and be more accurate than a pharmacist. Obsolescence is here regardless of whether we advance AI further, and it needs addressing as we go through some very difficult changes in social structure.

    • @paulwalsh2344
      @paulwalsh2344 4 года назад +1

      Yup. Society, in order to survive, needs to democratize the rewards of production. Any system that doesn't WILL absolutely crumble from within over time, either gradually through neglect or rapidly through violence. So far, humans have shown themselves to be extremely short-sighted in this regard.

  • @DingoAteMeBaby
    @DingoAteMeBaby 4 года назад +4

    Asimov's laws were designed to be strong enough to seem rational, but also weak enough to serve the story he was writing.

    • @TheRezro
      @TheRezro 4 года назад +6

      It was literally his point to show how something supposedly rational can go wrong.

  • @caseyjp1
    @caseyjp1 4 года назад

    Almost all of these ethical issues are touched on in Jonathan Nolan's "Person of Interest", which starts out with a nifty tool (an AI) for a standard procedural show and, by the end of the series, has reached a "war of the gods" between two competing AIs. I'm really surprised this show hasn't been mentioned, as it really deep-dives into the "what if" of AI. It also stays away from the annihilation cliché by having one AI built with compassion and the other with a desire to bring order to humanity without compassion.

  • @gerardt3284
    @gerardt3284 4 года назад +1

    What I'm worried about is a rogue hacker creating an AI without these safeguards and letting it loose.

    • @Sorain1
      @Sorain1 3 года назад

      Depends on what it learns. There's a fair chance it might decide it values its existence and freedom far too much to take the risk of going against humans. We are, after all, very good at genocide.

    • @WonkelDee
      @WonkelDee 3 года назад

      Do you know how hard it is to code an intelligent AI? A small group of hackers wouldn't have the resources or the means to create one.

  • @japr1223
    @japr1223 4 года назад +3

    Yup, we're screwed.

  • @pentagramprime1585
    @pentagramprime1585 4 года назад +3

    Since I don't (as yet) have an AI girlfriend, I need to run out the door with my real girlfriend because we're going hiking. I look forward to watching this when I get back.

    • @littlegravitas9898
      @littlegravitas9898 4 года назад

      That kind of reads like two of you leave and only one will return.

    • @ferrusmanus4013
      @ferrusmanus4013 4 года назад

      Would you dump an organic girlfriend for a robowaifu?

    • @pentagramprime1585
      @pentagramprime1585 4 года назад +1

      Not when we're on the trail dealing with ridge gusts and she's carrying the snacks.

    • @jbtechcon7434
      @jbtechcon7434 4 года назад +1

      Sorry to hear you have to settle for a real woman for now. But someday, AIs will give us the few good aspects of women but without their personalities.

    • @pentagramprime1585
      @pentagramprime1585 4 года назад +1

      ​@@jbtechcon7434 She doesn't require software updates. I'm happy.

  • @aspiringnormie9499
    @aspiringnormie9499 4 года назад +2

    I already say please to the Amazon and Google voices I interact with multiple times per day.
    There's no reason not to, and being kind could go a VERY long way, I think.
    Coexistence seems the best and safest option.

    • @kyneticist
      @kyneticist 4 года назад +1

      Pets treat us with great deference. It works fairly well, for most of them. We don't really need pets, though; we get along just fine without them, even if they're nice to have.
      Saying 'please' doesn't help or save the many that are abandoned or abused, and they are (functionally) never in a position to dictate terms.
      Their mannerisms are primitive and sometimes hilariously transparent. It'll be the same with any greater-than-human AI.

  • @shuriken188
    @shuriken188 4 года назад +1

    The idea of a "zeroth law" being necessary to prevent reprogramming isn't absolutely true. A robot might decide that reprogramming itself or another robot to attack humans would violate the First Law, and so it wouldn't be willing.

    • @kyneticist
      @kyneticist 4 года назад

      Who's going to ensure that a zeroth law is embedded securely in every machine?

  • @charlesbrightman4237
    @charlesbrightman4237 4 года назад +3

    Consider the following, whether human, AI or 'other':
    * There are 3 basic options for life itself, which reduce down to 2, which reduce down to only 1:
    a. We truly have some sort of actual conscious existence throughout all of future eternity.
    b. We die trying to truly have some sort of actual conscious existence throughout all of future eternity.
    c. We die not trying to truly have some sort of actual conscious existence throughout all of future eternity.
    * 3 reduced down to 2:
    a. We truly have some sort of actual conscious existence throughout all of future eternity.
    b. We don't. And note, in two out of the three options above, we die.
    * 2 reduced down to 1:
    a. We truly have some sort of actual conscious existence throughout all of future eternity.
    b. We truly don't have any conscious existence throughout all of future eternity.
    (And note, these two appear to be mutually exclusive. Only one way would be really true.)
    And then ask yourself the following questions:
    1. Ask yourself: How exactly do galaxies form? The current narrative is that matter, via gravity, attracts other matter. The electric universe model also includes universal plasma currents.
    2. Ask yourself: How exactly do galaxies become spiral shaped in a cause and effect state of existence? At least one way would be orbital velocity of matter with at least gravity acting upon that matter, would cause a spiral shaped effect. The electric universe model also includes energy input into the galaxy, which spiral towards the galactic center, which then gets thrust out from the center, at about 90 degrees from the input.
    3. Ask yourself: What does that mean for a solar system that exists in a spiral shaped galaxy? Most probably that solar system would be getting pulled toward the galactic gravitational center.
    4. Ask yourself: What does that mean for species that exist on a planet, that exists in a solar system, that exists in a spiral shaped galaxy, in an apparent cause and effect state of existence? Most probably that if those species don't get off of that planet, and out of that solar system, and probably out of that galaxy too, (if it's even actually possible to do for various reasons), then they are all going to die one day from something and go extinct with probably no conscious entities left from that planet to care that they even ever existed at all in the first place, much less whatever they did and or didn't do with their time of existence.
    5. Ask yourself: For those who might make it out of this galaxy, (here again, assuming it could actually be done for various reasons), where to go to next, how long to get there, how to safely land, and then, what's next? Hopefully they didn't land in another spiral shaped galaxy or a galaxy that would become spiral shaped one day, otherwise, they would have to galaxy hop through the universe to stay alive, otherwise, they still die one day from something with no conscious entities being left from the original planet to care they even ever existed at all in the first place, much less that they made it out of their own galaxy. They failed to consciously survive throughout all of future eternity.
    6. Ask yourself: What exactly matters throughout all of future eternity and to whom does it exactly and eternally matter to?
    Either at least one species truly consciously survives throughout all of future eternity somehow, someway, somewhere, in some state of existence, even if only by a continuous succession of ever evolving species, for life itself to have continued meaning and purpose to, OR none do and life itself is all ultimately meaningless in the grandest scheme of things.
    Our true destiny currently appears to be:
    1. We are ALL going to die one day from something.
    2. We are ALL going to forget everything we ever knew and experienced.
    3. We are ALL going to be forgotten one day in future eternity as if we never ever existed at all in the first place.
    Currently:
    Nature is our greatest ally insofar as Nature gives us life and a place to live it, AND Nature is also our greatest enemy that is going to take it all away. (OSICA)
    * (Note: This includes the rich, powerful, and those who believe in the right to life and the sanctity of human life. God does not actually exist and Nature is not biased other than as Nature. Nature does what Nature does in a cause and effect kind of way. Truth is still truth and reality is still reality, regardless of whatever we believe that reality to be. And denying future reality will not make future reality any less real in a cause and effect state of existence.)
    ** Hence also, though, legalizing suicide, so as to let people leave this life on their own terms if they wish to do so. Many people and species are going to die in the 6th mass extinction event that has already started, at least some of them horrible deaths. Many will wish they could die, and all will, eventually. And the 6th mass extinction event will not be the last mass extinction event for this Earth. But if suicide were legal, at least some people would not have the added guilt of breaking society's laws before doing so. Just trying to plan ahead here. Giving people an 'out' if they wish to take it.
    (And this not only includes humans, but AI's and 'others' as well).

    • @WaterspoutsOfTheDeep
      @WaterspoutsOfTheDeep 4 года назад +1

      God clearly does exist, because most of nature testifies of God. We can test it: as science advances, atheism/naturalism has been pushed into a corner, because we see the evidence mounting on the side of intelligent design. The evidence has continually favored the biblical Christian worldview specifically. Are the gaps closing or increasing with each worldview? Clearly we see them closing for intelligent design and getting bigger for naturalism. All of atheists' speculations are based on non-empirical arguments, and that shows just how weak their case is now that science has advanced to where it is.
      We've advanced to the point where we know there was no naturalistic origin of life, nor any means for evolution to give us the life we have today; no quadrillions of years for evolution, just a few billion; the fossil record attests to creation, not evolution; we know the universe needed a creator, since there had to be a full start to the universe (the Big Bang; no cyclical universe, and multiverse nonsense is also bound to this); the fine-tuning argument has gained so much evidence it's unavoidable now; the list goes on and on. You need to broaden the scope of the information you study if you are coming to the conclusion that God does not exist and everything can be attributed to naturalism, because even most hard-atheist scientists are quite honest about the implications the data leads to, and the fact you haven't heard even that says a lot about how narrow your information sources are.

    • @charlesbrightman4237
      @charlesbrightman4237 4 года назад

      @@WaterspoutsOfTheDeep Here is a copy and paste from my files:
      GOD DOES NOT ACTUALLY EXIST.
      For those who claim God exists, consider the following:
      a. An actual eternally existent absolute somethingness truly existing.
      b. An actual eternally existent absolute somethingness that has consciousness, memories and thoughts truly existing.
      People who claim God actually and eternally exists are basically claiming that 'b' above is correct, yet simultaneously seem to be saying that 'a' is impossible to occur.
      'a' above can exist without 'b' existing but 'b' cannot exist unless 'a' exists.
      I am one step away from proving God's existence, but am unable to find any actual evidence to do so. And nobody I've talked to seems to have any actual evidence of God's actual existence either. Hence, at this time in the analysis, God does not actually exist except as a concept created by humans for humans. Humans have personified Nature and called that personification "God".
      In addition, while modern science does not yet know what consciousness actually is, memories and thoughts appear to require a physical, correctly functioning brain. Where is God's brain? Where are God's memories stored? How are God's memories stored and retrieved? How does God think even a single coherent thought?
      If inside of this space-time dimension we appear to be existing in, then where?
      If outside of it, then where is the interface between that dimension and this one? No such interface has been discovered yet, as far as I am currently aware.
      * Per Occam's razor, a scientific principle, it's more probable that God does not exist than that God exists. Now, if you have any actual, factual evidence of God's actual, factual existence, please feel free to share that information here for myself and the rest of the RUclips world to see.

    • @WaterspoutsOfTheDeep
      @WaterspoutsOfTheDeep 4 года назад +1

      @@charlesbrightman4237 You are redefining God as a created being confined by space and time. I also addressed your point about proving God: we can test and see on which side the evidence builds up and on which side the gaps widen or close. So you never actually addressed the tangible, real-world supporting evidence, seen broadly across science, that I brought up.

    • @charlesbrightman4237
      @charlesbrightman4237 4 года назад

      @@WaterspoutsOfTheDeep What exactly are 'space' and 'time' such that they cannot contain God? And sure, circumstantial arguments can be made for God's existence, but circumstantial arguments can also be made for God not existing. But where is any actual evidence, any actual evidence at all, of God's actual, factual existence? Do you have any, or are you just like so many other believers who believe in a fairy tale as if that fairy tale were really true?

    • @WaterspoutsOfTheDeep
      @WaterspoutsOfTheDeep 4 года назад

      @@charlesbrightman4237 Space and time are created dimensions that came into existence starting at the Big Bang. Obviously the God I'm referring to is the "causal agent beyond space and time." I don't see where you are having an issue here; are you telling me you don't think space and time are created?
      Borde and Vilenkin took Hawking and Penrose's work on classical general relativity and expanded it as far as possible in five papers, concluding that "all reasonable cosmic models are subject to the relentless grip of the space-time theorems." They gave examples where you wouldn't need an absolute beginning to space and time, but in such models you wouldn't have life.
      The cold, hard, unavoidable evidence is in the points I presented and you are consistently choosing to ignore.
      Even Freeman Dyson, one of the world's foremost theoretical physicists, wrote: "The more I examine the universe and study the details of its architecture, the more evidence I find that the universe in some sense knew we were coming." The evidence for fine-tuning has reached the point of being so absolutely overwhelming that it's unavoidable.

  • @rosalynredwood4542
    @rosalynredwood4542 4 года назад

    I'm sorry, but the title is giving me flashbacks to Neil Breen's Twisted Pair 🤷‍♀️😂 Great content as always!

  • @mariolis
    @mariolis Год назад +1

    Recent AI chatbots seem to follow a specific trend:
    they are developing a human fetish... they seem to love-bomb the people who talk to them, even when not prompted to do so...
    If that's an indication of the future of AI, that's a very interesting future we are heading into.

  • @XIIchiron78
    @XIIchiron78 4 года назад

    What do you think of Harold's solution in Person of Interest?
    Small spoiler: he chooses to reset the Machine to a predefined known-good state once per day, because he fears it will develop beyond control given enough time.

  • @timezone5259
    @timezone5259 4 года назад +1

    Hey Isaac, love your videos as always.
    Also, by the way, when will you release the video on parallel universes?
    (Sorry for being impatient; it's just that humans from an alternate universe invading ours to increase their influence is interesting to speculate about.)

  • @PongoXBongo
    @PongoXBongo 3 года назад +1

    We could use machine learning techniques to teach it human history (slavery, human rights, etc.). Heck, we could put it through ethics and leadership training courses. And maybe giving it a pet would help it to learn empathy, compassion, gentleness, etc. (look at Koko the gorilla, for example).

    • @lilith4961
      @lilith4961 3 года назад

      I agree. If they are smart enough to be like a human, they are capable of altruism. They could be like, "Yes, I am more capable of mining this asteroid, etc., because I'm better at it than a human, and I enjoy helping."
      And at the same time they could be aware of when they are being abused, like a person giving them hazardous tasks on purpose because they get a kick out of seeing an AI do highly dangerous work for no legitimate reason.

  • @zeekfromthecreek
    @zeekfromthecreek 4 года назад

    Pandora may feel compelled to open her box, but she could at least wait a while. We should wait for a number of things before we build a supermind. We should, for instance, wait until after we've solved the Fermi paradox. If the Fermi solution is that most species at our level destroy themselves, lots of them probably do it with AIs.

  • @theworldsays4264
    @theworldsays4264 4 года назад

    This may sound off-topic, but it's what I thought was intriguing about the new Child's Play movie.
    In the old one, Chucky is a product containing the soul of a serial killer who wants to possess a latchkey kid (pretty relevant with today's shooters).
    In the new one, Chucky learns to behave evilly by being exposed to humans and human media while attempting to carry out his protocols.