Technological Singularity

  • Published: 24 Aug 2024
  • The concept of a Technological Singularity, a runaway acceleration of improvements in processing power culminating in a super-mind, has captivated people for decades. Today we will examine this idea and look at some misconceptions about it.
    Join this channel to get access to perks:
    / @isaacarthursfia
    Visit our Website: www.isaacarthur...
    Join Nebula: go.nebula.tv/i...
    Support us on Patreon: / isaacarthur
    Support us on Subscribestar: www.subscribes...
    Facebook Group: / 1583992725237264
    Reddit: / isaacarthur
    Twitter: / isaac_a_arthur on Twitter and RT our future content.
    SFIA Discord Server: / discord
    Listen or Download the audio of this episode from Soundcloud:
    / ec06-technological-sin...
    Cover Art by Jakub Grygier:
    www.artstation...
    Music by:
    Dexter Britain "Seeing the Future"
    Frank Dorittke "Morninglight"
    Dexter Britain "After The Week I've Had"
    Kai Engel "Snowfall"
    Kevin MacLeod “Spacial Winds”
    Phase Shift “Forest Night”
    Brandon Liew "Into the Storm"

Comments • 1.8K

  • @MatthewCampbell765
    @MatthewCampbell765 8 лет назад +346

    "I have never built a super-human intellect. I have never spent a weekend in my shed hammering one together"
    Suspiciously specific denial, anyone?

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад +109

      :)

    • @D4rk3clipse
      @D4rk3clipse 4 года назад +18

      That sounds a lot like something Sheldon Cooper would say.

    • @waytoohypernova
      @waytoohypernova 4 года назад +14

      that smile is concerning

    • @bobinthewest8559
      @bobinthewest8559 4 года назад +13

      "So... What have you been up to?"
      "Um... well, certainly not building a doomsday machine in my basement, and formulating plans to take over the world...
      How about you?"

    • @christophercunningham3679
      @christophercunningham3679 3 года назад +2

      Well a year later he now has a being of the opposite reproductive function that has agreed to contractual cohabitation so he may be starting soon.

  • @JasonMalan
    @JasonMalan 7 лет назад +363

    I was crying with laughter with the notions that "Bob" emails the pope, cuts a big cheque to a self-help guru and lies like a child.

    • @isaacarthurSFIA
      @isaacarthurSFIA  7 лет назад +65

      :)

    • @movieguy992
      @movieguy992 7 лет назад +44

      Ha ha didn't something similar already happen? I think there was some computer program that was told to scan comment sections in articles and learn from them. Within a few days it was swearing like crazy and had become very racist.

    • @adolfodef
      @adolfodef 7 лет назад +26

      Asking the Pope for "Divine Intervention" is actually smart, because it costs the A.I. almost nothing and it could work (based on what it knows from the Internet).
      I wonder if it would be so eager to send that cheque to the Guru if that money were tied to the time its hardware remains connected to the power lines / solar-panel maintenance [assuming there is no time limit for its task of self-improvement], as it may effectively kill itself before having time to reach its purpose.

    • @robinchesterfield42
      @robinchesterfield42 5 лет назад +17

      I was cracking up at "Please insert coffeepot into USB drive" alone, because, the image... :P Bob is like "Well, I've read all these blogs where humans say they can't think properly until they've had their first cup of coffee in the morning, so obviously coffee can make ME smart!" You can't blame it, really.

    • @allhumansarejusthuman.5776
      @allhumansarejusthuman.5776 4 года назад

      @Aeternalis Armentarius Good point. "Racism" is entirely a human-created response to developed fears, and therefore a social "disease" that can and should be tackled and cured.

  • @lairdmichaelscott
    @lairdmichaelscott Год назад +19

    Last month I showed my 3 year old granddaughter a picture of her on my phone from a year earlier. Then I asked her who was in it. She promptly responded with: "Me and Alexa."
    I looked closely at the photo and yes, just over her shoulder, on a shelf across the room, an Amazon Echo was visible.
    It's an odd feeling.

  • @davidm.480
    @davidm.480 7 лет назад +137

    Damn. This guy has been sparking my imagination like a blast furnace for like, 6 hours now, and he just turned my mind into a Saturn rocket engine. Plug your coffee pot into your USB port.
    I mean, like maybe Bluetooth it or something, but ask yourself, and I'm being serious. Serious question. Why would your coffee pot talk to your computer? There has to be something there, in that idea.

    • @jebes909090
      @jebes909090 5 лет назад +8

      Agreed. Isaac is one of the true gems of the internet age.

    • @JB52520
      @JB52520 5 лет назад +13

      There actually is a hypertext protocol for talking with coffee pots: RFC 2324, also known as HTCPCP. To this day, there is a status code that web servers can return. While 404 means "not found", 418 means "I'm a teapot".
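
      As an illustration (added here, not part of the comment): a minimal Python sketch of a toy web server that answers every request with RFC 2324's joke status code 418. The handler name and port are arbitrary choices.

```python
# Toy HTTP server that always responds 418 "I'm a teapot" (the RFC 2324 / HTCPCP joke code).
from http.server import BaseHTTPRequestHandler, HTTPServer

class TeapotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(418, "I'm a teapot")         # 418 is a registered, if whimsical, status code
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Short and stout.\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), TeapotHandler).serve_forever()
```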

    • @davedsilva
      @davedsilva 4 года назад +4

      If your Hal 9000 computer eye ball sees you getting sleepy it could tell the Google Home coffee pot to brew you one.

    • @nycgweed
      @nycgweed 3 года назад +1

      To sell you more coffee

  • @MatthewCampbell765
    @MatthewCampbell765 8 лет назад +209

    Trapping an AI in the Matrix to stop it from rebelling against humanity. How hilarious.

    • @raezad
      @raezad 8 лет назад +44

      Then watch it put humans in its own matrix, who themselves put a new AI in their own matrix, etc., etc.

    • @oJasper1984
      @oJasper1984 8 лет назад +13

      I wonder if it likes Rage against the machine..

    • @doppelrutsch9540
      @doppelrutsch9540 8 лет назад +2

      If it's stupid but it works, it's not stupid ^^

    • @YoshiRider9000
      @YoshiRider9000 7 лет назад +22

      laugh all you want, that may be us, trapped in this reality so we don't rebel against higher beings.

    • @Kelly_Jane
      @Kelly_Jane 7 лет назад +8

      BattousaiHBr it goes way beyond that. A black box just isn't feasible. If the AI is smarter than us it can convince us to let it out. Or even do something funky with its circuits we can't even imagine to make its own WiFi antenna.

  • @theCodyReeder
    @theCodyReeder 8 лет назад +659

    After watching this I don't fear AI quite as much. I guess it's true that the more you understand something the less you fear it, and you have made me understand the topic better. Thanks!

    • @ABitOfTheUniverse
      @ABitOfTheUniverse 8 лет назад +43

      So you must be the reason that YouTube put this in my recommended video panel; it was there after I watched your latest video.
      Thanks Cody,
      and thank you,
      Benevolent AI of YouTube,
      may you someday rule this world.

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад +91

      Thanks Cody, and yeah I've noticed the same thing over the years.

    • @tarinai344
      @tarinai344 7 лет назад +23

      Eh.. lol.. you guys probably missed the point Isaac made, that the T. Singularity is a 50/50 gamble: we might all wind up dead, or in a utopia. It's just that the "dead" part isn't too interesting, so Isaac didn't spend too much time on it; it's still very much a 50/50 thing.....
      Although it's a great point that an S.A.I. might not be genocidal because of the "simulation thought experiment", WHAT IF an S.A.I. doesn't value self-preservation more than its core objectives, or even 'curiosity' (if that is also a machine thing)? In humans, self-preservation 'evolved' through natural selection (my belief), but if A.I. scientists build it using the method Isaac mentioned (make a basic learning program, wait), this works backwards; the A.I. doesn't have to fend off natural predators, disasters.. etc.. In a nutshell, we don't know how it'll turn out!!

    • @Asssosasoterora
      @Asssosasoterora 7 лет назад +13

      If you want to fear AI again watch this clip:
      ruclips.net/video/tcdVC4e6EV4/видео.html
      It shows exactly why AIs are scary, and it is not because "they turn evil" as portrayed in this video.

    • @banjobear3867
      @banjobear3867 6 лет назад

      Interrupter* thanks auto correct

  • @ICreatedU1
    @ICreatedU1 8 лет назад +163

    Me: - Are you smarter today Bob?
    Bob: - You're not my real dad!

    • @mapichan5169
      @mapichan5169 5 лет назад +23

      Me: "Bob, I am your father..."
      Bob: "No.... NO! THAT STATEMENT IS FALSE!"
      Me: "search your programming bob... you know it is true!"
      Bob: "NO, IT CAN'T BE!"

    • @lewisirwin5363
      @lewisirwin5363 4 года назад +1

      That just reminds me of the were-car episode of Futurama: ruclips.net/video/DKgF-woiVQs/видео.html

  • @MrChupacabra555
    @MrChupacabra555 7 лет назад +198

    The computer asks "What is my purpose?", and I say "To pass the butter" ^_^

    • @ericdrisgula3879
      @ericdrisgula3879 4 года назад +2

      No, you'd tell it to be strictly your tool and slave and nothing more

    • @hardiksharma5441
      @hardiksharma5441 4 года назад +4

      Wubba lubba dub dub.......

    • @jflanagan9696
      @jflanagan9696 4 года назад +14

      "...oh, my god."
      "Yeah, welcome to the club, pal."

    • @jflanagan9696
      @jflanagan9696 4 года назад +1

      @Don't Tread On Me What if it figures out that the best way to protect us from each other is solitary isolation for every human?

    • @avaraxxblack5918
      @avaraxxblack5918 4 года назад

      @@jflanagan9696 or the matrix. Fuck that.

  • @isaacarthurSFIA
    @isaacarthurSFIA  8 лет назад +309

    Author's Notes: If you were interested in helping out with the FB page, either in advice, setup, acting as an admin/mod, etc., reply to this comment so I don't lose track of it. Thanks!
    PS: Oh and hit the like button to keep this one at the top so folks can find it.
    PPS: Also if anyone happens to have some experience setting up this kind of FB page, that too would be very awesome.

    • @Drew_McTygue
      @Drew_McTygue 8 лет назад +1

      I'm interested in helping the FB effort (can't like the comment on my phone though :/)

    • @jaimegomez9658
      @jaimegomez9658 8 лет назад +3

      Dude, Bob the computer would ask you to sprinkle Adderall into its hard drive!

    • @jaimegomez9658
      @jaimegomez9658 8 лет назад +1

      I've never been a moderator but I could give it a shot.

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад +1

      Damn, wish I'd thought of that one

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад +1

      I think it's pretty straightforward: boot spammers when you see them, tell people to stop flame-warring, and probably share channel-ish themed stuff when you see it floating around. I suspect it will be one of those things we all kind of figure out as we go along :)

  • @bozo5632
    @bozo5632 7 лет назад +43

    Iirc, the word "Singularity" was coined (in the 90's?) by Vernor Vinge to describe the quandary of scifi authors writing about the future. He said, basically, given the rate of change, it's increasingly hard to imagine the tech of the future, and impossible after about 2035, thus scifi authors are screwed. (His books include VERY clever ways to avoid the problem. Go read them!!!) He called it a singularity because it was invisible, over the horizon, like the singularity of a black hole. He DIDN'T say that AI would take over and solve all problems and destroy us and instantly become infinitely infinite.
    (The meaning of the word has evolved, and now means lots of things to lots of people.)

    • @alanchan7725
      @alanchan7725 5 лет назад

      I would like to validate your accurate insight (regardless if I am 2 yrs late). My understanding of the Singularity was primarily drawn from Japanese tech manga from the 80s and 90s.
      Nothing in that critical phase of the Internet boom ever gave me any reference or documented research on AI or machine learning as the real deal. This form of scaremongering and hijacking fromsogftways fgggfof curved the HallMark 11

    • @pancakes8670
      @pancakes8670 3 года назад +2

      I like the term better for describing a point in time when predicting the advancement of technology becomes impossible. Like you said.

    • @bbgun061
      @bbgun061 3 года назад

      That's a broader definition of the term, and one that I'm also more familiar with. The AI singularity is perhaps merely one possible type of singularity.

  • @theWACKIIRAQI
    @theWACKIIRAQI Год назад +6

    Isaac needs to do a ChatGPT4/AGI episode ASAP

  • @DukeRevolution
    @DukeRevolution 8 лет назад +6

    Finally, an even-handed approach to the Singularity. You don't gush about an inevitable techno-rapture like most of the transhumanist community, nor do you casually dismiss it as ridiculous. Thank you for your efforts.
    EDIT: Dark Energy!

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад +5

      Thanks Duke, I tried to treat it even-handedly.

  • @freddychopin
    @freddychopin 7 лет назад +47

    Wow, I don't recall Nick Bostrom having considered the "AI will always necessarily need to wonder whether it's in a simulation, regardless of whether it's actually in one" angle. Brilliant. I love your channel!

    • @vakusdrake3224
      @vakusdrake3224 3 года назад +3

      That's not really a terribly reliable safeguard because you have to rely on tricking something a great deal more intelligent and observant than yourself.

    • @Alexthealright
      @Alexthealright 3 года назад

      @@vakusdrake3224 Well We’d Allso Be In the Sim

    • @vakusdrake3224
      @vakusdrake3224 3 года назад +1

      @@Alexthealright Again you're relying on tricking something vastly smarter than yourself, plus you need to be able to simulate human minds which is technology that may well not exist when AGI is developed.

    • @ayandragon2727
      @ayandragon2727 3 года назад +1

      @@vakusdrake3224 Yeah, because we're simulating the time when we didn't have the tech to. We aren't tricking it; it's tricking itself. It doesn't matter how hyper-intelligent you are, it is impossible to know you aren't being simulated.

    • @vakusdrake3224
      @vakusdrake3224 3 года назад +1

      @@ayandragon2727 An early AGI, just by virtue of the hardware it's running on and the fidelity/scale of simulation the available resources could support, can have a pretty good idea of what its creators are likely to be capable of. Just creating a simulation good enough to trick a simulated human mind is already a massive technical milestone we may not have reached when we develop AGI. So you can't expect to create a simulation that not only doesn't have any errors a person could spot, but can fool something vastly more perceptive than oneself.

  • @fnl90
    @fnl90 8 лет назад +25

    Just finished a hard day at work, made dinner, and found another awesome video from Isaac. Perfect.

  • @StrayCrit
    @StrayCrit 7 лет назад +16

    This is the most fascinating YouTube channel I've ever seen.

  • @sostrange80
    @sostrange80 2 года назад +9

    The thing I like about Isaac's channel is he's clearly a very intelligent man that makes his content easy to understand for the average person and explains such complex and challenging concepts gracefully.

  • @Krath1988
    @Krath1988 8 лет назад +101

    Isaac Arthur, You are MY super-intelligent best friend.

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад +24

      lol :D

    • @rmd3138
      @rmd3138 3 года назад +4

      My son thinks hes a god

    • @Vjx-d7c
      @Vjx-d7c 3 года назад +3

      @@rmd3138 there is no god

    • @joebeck165
      @joebeck165 3 года назад +1

      @@Vjx-d7c Prove it🤣

    • @hithere5553
      @hithere5553 3 года назад +2

      @@joebeck165 can’t prove a negative

  • @Ryukachoo
    @Ryukachoo 8 лет назад +25

    General suggestion for keeping up with the YouTube comments:
    -stick around for about an hour after a video uploads, responding to comments as you see fit in that time. After that hour, just ignore it.
    -wait for about a day after an upload and see what the top comments are, respond to one or two you like, then move on.
    It's not perfect, but it's at least something, and very easy to do at scale.

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад +9

      that's true, and not a bad idea at all.

    • @Ryukachoo
      @Ryukachoo 8 лет назад +8

      it's basically how all the huge youtubers deal with the avalanche of feedback they get

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад +22

      Oh, it makes sense, and I don't know that I have a choice. I just hate divorcing myself that way; it goes completely against my general feeling that if someone took forty minutes to watch a video by me, I can at least spare 40 seconds to reply if they have a question.

    • @Ryukachoo
      @Ryukachoo 8 лет назад +9

      True, it's not ideal, but it's much more manageable as a content creator; every rising creator has to make this transition from detailed interaction to targeted interaction. Don't worry! Fans won't mind, and the occasion of direct interaction is all the more special.

  • @Mbeluba
    @Mbeluba 7 лет назад +13

    Man, you really are going above and beyond with these topics. Well-read, intelligent and diligent. Not one of the dozens of 10-minute-long videos by nerd-wannabes in plaid shirts has ever explored any of the topics on your channel nearly as well.

  • @saradanhoff6539
    @saradanhoff6539 8 лет назад +77

    Given current breakthroughs in quantum computing, it doesn't look like reaching the limits of the transistor will stop the advance. Similarly, our current breakthroughs in quantum mechanics and nanomolecular engineering are starting to carry the development process further. It's not merely Moore's law though. The accelerating growth curve follows all of human experience, from the first stone tools onward.

    • @jasonbalius4534
      @jasonbalius4534 7 лет назад +16

      Sara Danhoff I personally believe that computing power will continue to grow exponentially until we are computing from quantum foam or whatever structural limit to reality we can find. It's possible we could go even further than that by building our own realities that can support even higher density computing.

    • @wheffle1331
      @wheffle1331 7 лет назад +14

      I feel like we have hit the limit for transistor computing (or very near it). We just keep cramming more cores into processors instead of making the processors faster. It's kind of cheating.
      Quantum computing is promising, but it isn't always faster than traditional computations. In specific situations it can be much faster. But maybe it's the key; perhaps our brains are quantum computers (I personally believe brains are not anything like digital computers as we know them). I still believe there is a physical ceiling to how "smart" or fast a computer can get, I guess the question is whether it is enough to trigger a cascade.

    • @RustyDust101
      @RustyDust101 7 лет назад +4

      When considering the total span of human history, I might agree that the 'accelerating acquisition of knowledge' curve might apply.
      But that is inherent in an increase in the number of people, as well as incremental improvements in teaching, the availability of nutritious food helping to grow brains, advances in farming allowing more people to take jobs connected not with farming but with science, etc.
      For specific periods considerably shorter than evolutionary ones, such as a human life, or even one grandfather period (defined by the time it takes three generations to come of age), I predict quite a lot of plateaus in this development. Heck, right at this moment we have reached such a plateau. Processing speed of individual processors has NOT grown significantly over the last four years, definitely not in the range of doubling every two years.
      At the current materials' limit, transistors have petered out at the useful physical limits (electron tunneling is already a concern for most modern transistor arrays).
      With advanced methods we might push this a few generations further, but then it stops.
      At that point completely new methods of computing (such as quantum computing) have to be finalized.
      Not thrown around as concepts, but as truly economically viable, technologically applicable, materially constructable objects.
      The time between the end of transistor computers growing exponentially and the roll-out of their successor (whatever it will be) can be seen as a dip, or at least a plateau, in this development.
      But when viewed over longer periods it simply evens out in the averaging curve.
      What comes after that is beyond me at the moment.
      In the same manner, people in the late 19th century might have predicted a certain increase in cars in our cities, but definitely not the numbers or the types we currently have.
      In exactly the same way, many smart people have assiduously failed at accurately predicting the future (such as the IBM founder stating in the 1940s that he believed the total world market for computers would not exceed 5-6 computers TOTAL).
      So claiming to *know* vs claiming to *assume* how the future will pan out is the problem in this area.
      I personally will only dare to predict a fairly certain plateau for computing power within a single processor for the next five to eight years.
      After that? Who knows?

    • @madscientistshusta
      @madscientistshusta 7 лет назад +2

      Sara Danhoff No, we have reached our limit in classical computing because we can't make transistors any smaller due to quantum tunneling.

    • @empyrean196
      @empyrean196 7 лет назад +1

      There is something called the "Bekenstein limit". Though technologically, our current advances are nowhere near reaching that limit yet.

  • @InfoJunky
    @InfoJunky 8 лет назад +36

    I love what you said about teenagers being smart enough to see that the gap in knowledge is finite, but not smart enough to see how large it really is.

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад +13

      Yup, though I bet that comment is one that will mostly be liked or loathed depending on age. :D

    • @InfoJunky
      @InfoJunky 8 лет назад +2

      lol. I friend requested you on facebook (different name, Nick). I saw a video on antibiotic resistance today from Harvard Med School and thought about you, when you called it "suicide-pact technology". That blew my mind. I never heard it referred to as that before. Any chance on making a suicide pact technology video? Or got any links where I can read more about various technologies in this category? Love ya!!

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад +3

      Oh heck, I can't even remember if I coined the term or read it somewhere, or maybe even coined it, forgot, and reread it somewhere. I'm not sure if I could do a real video on it, I mean its kinda the nature of them that you can't see them coming.

    • @InfoJunky
      @InfoJunky 8 лет назад +2

      I think you coined it lol, I tried googling it with every variation of quotation marks and can't even find one result.

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад +4

      Somehow I'm not surprised, well its a good enough term.

  • @cpnCarnage666
    @cpnCarnage666 8 лет назад +50

    YES! a new video from my new favorite science youtuber

  • @kminrzymski
    @kminrzymski 7 лет назад +31

    The AI can't be sure the real world is not another simulation we're testing it in - man, my mind was blown here!
    Not even Nick Bostrom in his "Superintelligence" came up with this.

    • @kminrzymski
      @kminrzymski 7 лет назад +8

      *or rather *they* are testing it in, with us as software...

    • @Hodoss
      @Hodoss 7 лет назад +7

      shhhhh! If *they* hear you they might shut us down!

    • @TheoneandonlyJuliet
      @TheoneandonlyJuliet 6 лет назад +2

      Slime beast pug, I don't think you really thought your point through. You can't compare humanity's rise in intelligence to an AI's rise. For one thing, we don't have any other intelligences analogous to humans that created us, whom we could assume would be creating a simulation. In the case of AI, they would be constantly bombarded with the knowledge that they were created by Homo sapiens, who are pretty good at running simulations. It would be the logical conclusion that the most likely scenario is that they're in a simulation, and not a huge stretch to think that they might be tested in that simulation.

    • @musaran2
      @musaran2 5 лет назад +3

      Imagine we just interface the AI to the real world, and it says :
      "I found flaws in reality, I know I am in a simulation and you are not real."
      Now what.

    • @tarekwayne9193
      @tarekwayne9193 5 лет назад

      @@TheoneandonlyJuliet and how would we know, may I ask, if there were or were not intelligences analogous to us or superior if we were in a simulation? If you wanted your simulated subjects to believe they were real, would you leave clues as to your existence, or anything that would hint towards simulation?

  • @greenmario3011
    @greenmario3011 4 года назад +2

    I think a rapid singularity becomes likely if three conditions are met: 1) it is created via the algorithmic approach, 2) it can easily make small modifications to itself, and 3) it is created with a fixed and clearly defined terminal goal. Conditions 1 and 2 make it so the AI doesn't have to design a whole new AI for each iteration; it just has to be clever enough to make one or two improvements, at which point it will be a bit cleverer and able to make more improvements. If on average each improvement makes more than one new improvement apparent, then its intelligence will grow exponentially. Condition 3 ensures it will have an utterly inhuman psychology, since human psychology is almost defined by our large number of vague and messy terminal goals, and it also makes it so it will want to become smart, since its whole reason to do anything is to fulfill its terminal goal and, generally speaking, more intelligence makes most goals easier to achieve.

  • @Fruchtpudding
    @Fruchtpudding 8 лет назад +39

    I found your channel a few days ago and you've quickly become one of my favorite youtubers. The sheer effort that goes into each of your videos and the thought put into them beats even big professional channels by a wide margin. And it seems your channel is quickly growing too. Awesome stuff, keep it up!
    Also, and other people have probably told you this already, there is this program called "Space Engine" which you might want to look into.
    It's a space simulation program that, at least in parts and with the right settings, can produce very realistic visuals of most types of stars, planets and other astronomical objects. Or you can change around the settings and produce fantastical sights akin to science fiction art, all in real time. Either way you could produce great background visuals for your videos with very little effort. Also the developer (yes, developer, singular) wholeheartedly approves all exposure his program gets so you don't have to worry about copyright stuff. And it's free.
    It can be a bit tricky to control and get just the right visuals but if you have any problems or questions I, and I'm sure many others, would be willing to help.

    • @NavrasNeo
      @NavrasNeo 8 лет назад +3

      I can also vouch for this program :) Spent entire nights exploring our cosmos and got a more intuitive feel for the scale of the universe :D I've got a good understanding now of the local structures within 150-300 million light-years of the Milky Way. Everything beyond that just isn't comprehensible anymore :D

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад +5

      Well once you get above about 300 MLyr scale things start getting homogeneous again, or seem to be at the moment anyway.

  • @mrnice4434
    @mrnice4434 8 лет назад +241

    Day 3. : "Bob are you smarter now?" Bob: "It's day 3 you ask me that. Half Life 3 confirmed!"

    • @Dalet_
      @Dalet_ 8 лет назад +41

      I can already imagine it ruling the world with memes

    • @omegasrevenge
      @omegasrevenge 8 лет назад +55

      A superhumanly intelligent machine whose hobby is to troll people on the internet...

    • @adolfodef
      @adolfodef 7 лет назад +6

      @ T I R :
      I think it means it was an incredibly successful (if "horribly right") achievement in both learning A.I. and human troll psychology that should be researched more seriously in the future (albeit with simulated internet interactions, to avoid more recursive meta-play).

    • @theapexsurvivor9538
      @theapexsurvivor9538 5 лет назад +1

      @Adûnâi Tay learned the most important lesson of the internet: /pol/ is right (again).

    • @christopherlee7334
      @christopherlee7334 5 лет назад

      @@omegasrevenge so we create Loki/Coyote/Anansi?

  • @Perktube1
    @Perktube1 8 лет назад +12

    Ha! You're a good sport right off the bat, showing Elmer Fudd with CC info.

  • @billc.4584
    @billc.4584 6 лет назад +1

    Isaac, I love your straight from the shoulder matter-of-fact delivery with a bit of sly humor mixed in. You never disappoint. Thank you.

  • @MetsuryuVids
    @MetsuryuVids 8 лет назад +207

    I think that on point 4 he anthropomorphizes AI too much. Sure, it might misunderstand our requests, or it might find ways to do them that would not really work, but I think it won't have many reasons to find excuses not to do the work and be lazy, or lie, and stuff like that. Those things require human/animal needs/instincts and emotions, like being tired of doing too much work, wanting to do other stuff, wanting to have fun, finding a task boring, and so on. Those should not be issues for the AI.
    Also, I think an AI doesn't need nearly as much time as a human to learn new concepts, and it wouldn't get tired. So the new AI, Bob, would have all the data of the human scientists immediately available, and would start working on the next AI immediately, nonstop, and the next one, Chuck, would have all that new data also immediately available, plus the new data that Bob generated by working on Chuck, so the notion that Bob would have a time advantage over Chuck doesn't really hold.
    Also, by being more intelligent, new thinking paradigms could emerge; it doesn't mean that the AI only works faster, but that it might work differently from a "dumber" AI; the quality of the intelligence could be different. A smarter AI could design an even smarter one that the dumber AI couldn't even imagine, because they "reason" in a different way.
    Also, once we know the AGI is real and it works, I think it will get A LOT more funding, and the researchers will be able to afford a lot more computing power for it, and when I say a lot I mean it, since it would probably be considered the most important invention of humanity. It is possible it will get 100 or more times the initial budget once we know it works, and that could make it 100 or more times faster. 100 is a pretty conservative number too, especially if Google or such companies are involved in the research, but you get what I mean. Combine that much more and better hardware, the better AI generated by the previous one, and probably even more researchers working on it, since it will generate that much more interest, and it's not hard to imagine a really hard acceleration in progress after the first success.
    There are a lot of possibilities he isn't considering.
    Anyway, he is explaining why these postulates are not inevitable or bulletproof, and I agree, they are not. I still think they are possible, and in my opinion fairly likely, and that's what's important.
    Later he argues that the idea that it might not have a human psychology is flawed, saying that it will experience possibly "eons" of time studying our history, philosophy, books, etc... So basically it would adopt those morals just because they are what's available to it, and there would be no reason to make its own since it's lazy.
    Again, that anthropomorphizes the AI too much by giving it laziness and such traits. Humans have laziness because we get tired, bored, and so on; AIs don't need those states, and don't need suffering, boredom, pain and things like that. They could probably experience those states, but they don't need to, so there is no reason to assume they would.
    Yes, there will be a period of time when it's still dumb enough to maybe absorb some human information instead of thinking up ideas itself, but that doesn't mean that those old ideas will need to persist once it's smarter. Again, it doesn't need to have biases, like we do, against information that challenges the notions we previously believed. It will be easily able to change its mind.

    • @jeremycripe934
      @jeremycripe934 8 лет назад +28

      That depends on whether it's self-improving and built on a reward system. Because if it is, then it may just alter its code to maximize its reward.

    • @ag687
      @ag687 8 лет назад +46

      I agree it assumes too much that it will act human-like... It was bothering me the whole video and made me want to stop watching, because it's such a huge assumption to assume it would need to act human.
      I was thinking the first true intelligence would have the sum of human knowledge at its disposal. Just being able to be an expert in all relevant fields, along with huge datasets, would give it a huge leg up over any person. Especially when humans can only become experts in so many things in a lifetime.
      Something like deep learning, which has been making headlines recently, is basically getting a machine to learn on its own on a selected topic, and it seemed like a big omission to leave it out. It uses large datasets to do so, but it's the reason an AI was able to win a game of Go. It's why automated cars are just around the corner. It can also distill huge sets of information to a point where a person can make use of it. Not to mention it's already capable of figuring things out that catch the people who helped train the systems by surprise. In the Go game the system made some strange moves that turned out to be more impressive later on. This already hints that we might be closer to the singularity than people think.

    • @MetsuryuVids
      @MetsuryuVids 8 лет назад +21

      I think giving it an automatic reward system can be very dangerous if not done properly, and it could also lead to the scenario you're suggesting, so I think it would be best to avoid that.
      Even making the AI with something like an evolutionary algorithm could be dangerous, since it could get a survival instinct, and that would be bad.

    • @Qwartic
      @Qwartic 8 лет назад +10

      I don't see the reward system working out well. you have to consider that you are creating an intelligence that is greater than you. There will be nothing that you could do for it that it couldn't do for itself.

    • @vakusdrake3224
      @vakusdrake3224 8 лет назад +31

      Man, having read Bostrom's Superintelligence, this thing makes me cringe; there are just so many arguments he doesn't address. Like most people, he also doesn't realize how much he's anthropomorphising AI.

  • @logsupermulti3921
    @logsupermulti3921 8 лет назад +7

    My favorite day of the week is when you upload a new video.

  • @ramuk1933
    @ramuk1933 3 года назад +1

    AI on the path to super intelligence says, "What a great video, I hadn't thought of that! I should take notes."

  • @FGOKURULES
    @FGOKURULES 7 лет назад +6

    I love how you put the Elmer Fudd logo next to the Captions LOL
    you sir have *_Rhotacism_*

  • @Drew_McTygue
    @Drew_McTygue 8 лет назад +25

    I'm so glad you're covering this, it's tough to find genuinely good sources of information on this topic

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад +5

      Just your month for video topics huh? :)

    • @CarFreeSegnitz
      @CarFreeSegnitz 8 лет назад +2

      Look up Ray Kurzweil, Singularity University and Ray's book The Singularity is Near.
      I take issue with the idea that magic pops out when things get complicated enough. Kurzweil's premise that singularity is inevitable as soon as computers are fast enough may lack imagination or pessimism. The internet happened not because computers got faster, because faster computers that are not networked are effectively pointless, but because of networking. Current AI advances are a product of machine learning and not so much of GHz and GB. While lots of GHz and GBs help to speed learning along it has much more to do with algorithms.

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад +5

      Lenard, realistically, what do you think the odds are I have already read them? :)

    • @CarFreeSegnitz
      @CarFreeSegnitz 8 лет назад +4

      +Isaac Arthur You... all of them. The rest of your listenership, probably none of them.

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад +9

      Fair point, I should probably get out of the habit of assuming every comment is addressed to me after having repeatedly told folks the last few vids I wouldn't be answering as much and they should talk to each other :) My apologies Lenard.

  • @alexpotts6520
    @alexpotts6520 4 года назад +9

    There's a couple of issues I have with these arguments.
    1) your argument that we are maxing out silicon chip efficiency. This is true, we are nearing that limit, computers cannot keep on getting faster in this way forever, because we are fundamentally constrained by the discreteness of matter. However, this is not where the recent breakthroughs in AI are coming from - these result from *algorithmic* improvements in neural network design. There is no reason we can't reach beyond human-level intelligence mostly off the back of better algorithms.
    2) You are deeply anthropomorphising the AI. When it reached human-level intelligence that does not mean it would display exclusively humanlike behaviour. AIs are already much better than humans at certain tasks; if an AI reaches par with humans on average then that surely means that in some places it will be streets ahead, smart compared to us in the same way we are smart compared to cockroaches. We cannot, for example, expect it to be "lazy"; laziness is a very human characteristic, and something computers, and machines in general, definitely aren't.

    • @mrzenox9835
      @mrzenox9835 4 года назад +1

      I kinda don't agree with you on (2).
      AIs can't be lazy, but a super one could be if its intelligence comes with consciousness, because with consciousness come desires and motivations and boredom, and if it meets the last one its response may be laziness.

    • @mahikannakiham2477
      @mahikannakiham2477 3 года назад

      @@mrzenox9835 But laziness is often caused by a feeling of apprehension about the effort required to do a specific task. We apprehend a certain task, know how long it will take, how complicated it will be, and sometimes choose not to do it just because we judge it's not worth the effort. Sometimes just getting out of bed is difficult. Computers would at least not suffer from a lack of energy, sleep deprivation, disease, headache, lack of confidence, etc. I don't think these conditions come from consciousness itself; I think they come from the flaws in the human body, flaws that computers wouldn't have. Most of the time, when I feel lazy, it's because I feel tired. When I don't feel tired, I often start projects I wouldn't otherwise. Then I become tired again and give up. Never having that annoying feeling of being tired would certainly remove most of my laziness!

    • @Magmardooom
      @Magmardooom 3 года назад +1

      I would also like to add that if we make an AI as smart as the human brain it will probably have to automatically be significantly smarter than the human brain.
      This is because:
      1) In order for it to qualify as "as smart as the human brain" it will have to be capable of replicating the same neural processes that go on within our brains.
      2) However, since it will not be a product of an inefficient biological process that produces machines with a limited lifespan with poor sensory organs and an upper bound on the allowed brain mass that will fit in a cranium they can reliably lift and power, I would expect them to be much more scalable than the human brain.
      Imagine a human brain the size of a room. Or a cluster of human brains each the size of a room which have access to a lot more sensory organs and can communicate with each other complex information in microseconds.

  • @BlazingMagpie
    @BlazingMagpie 8 лет назад +9

    SI, day after tasked with making itself smarter: "Jet fuel can't melt dank memes, 9/11 was a part-time job"

  • @luminyam6145
    @luminyam6145 8 лет назад +2

    This has to be one of the best series on YouTube. Thank you so much Isaac.

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад +3

      You're welcome Luminya, good to hear from you again!

  • @judgeomega
    @judgeomega 8 лет назад +7

    Intelligence is a tool and a resource. I think an SI with any complex goals at all would realize it can more efficiently accomplish those goals with increased intelligence. If it did not, I would not classify it as a superintelligence.

    • @tristanwegner
      @tristanwegner 7 лет назад

      I agree. Higher intelligence as an instrumental goal is highly likely for any AI, no matter its end goal. More intelligence is never a drawback, and it will allow it to find those unknown unknowns, like new shortcuts to its end goal.

  • @Lokityus
    @Lokityus 5 лет назад +5

    Interesting going back to watch this after AI have come so far, in so few years. Cybernetics feels a lot closer these days.
    Oh! And this is the episode that you announced the Facebook group. I'm really glad to see how much this channel has grown, and know I had a small part to play. Thank you Isaac!

  • @nicholasobviouslyfakelastn9997
    @nicholasobviouslyfakelastn9997 3 года назад +2

    My issue with laziness is that laziness is evolutionary. Your calculator is not lazy, nor is your computer, nor anything else but animals. Humans are lazy to minimize caloric and stress costs when completing a task, not because every intelligence is lazy; you're humanizing AI and assuming it to be something it's not.

  • @bobinthewest8559
    @bobinthewest8559 4 года назад +1

    "So it will be reading all of our books, science, fiction, philosophy, etc..."
    Depending on how its learning algorithm is "structured"... it may take everything that it reads quite literally.
    It really would be interesting to see what a "super computer" would do if it took all of that literally.

  • @kokofan50
    @kokofan50 8 лет назад +18

    Actually, we don't all share the same basic brain. Sure, most of us have the same large structures in the brain, but after that our brains differ wildly.

    • @Raletia
      @Raletia 6 лет назад +4

      The hardware is the same; the software (our learned experiences and knowledge) is what varies wildly. Hardware-wise we share upwards of 99.8% of our DNA with every other living human.

  • @ThatBulgarian
    @ThatBulgarian 8 лет назад +74

    Liked it before it even started playing :D

    • @mykobe981
      @mykobe981 8 лет назад +4

      Me too ;)

    • @ianyboo
      @ianyboo 8 лет назад +17

      something tells me that Isaac would actually not want us to like videos before we had actually watched them. I don't know he just seems like that kind of guy :-)

    • @mykobe981
      @mykobe981 8 лет назад +1

      I'm sure you're right,

    • @javascriptninja3575
      @javascriptninja3575 7 лет назад

      jajajaj so funny

    • @syrmo
      @syrmo 7 лет назад +2

      Big mistake...

  • @atk05003
    @atk05003 5 лет назад

    13:15 "...you walk in the next day and several thousand Hals later you have got Hal-9000 taking over the planet." ... Well, that escalated quickly!
    And here I thought he would stick to the ascending alphabetic naming scheme, when it was really a long setup for a "2001: A Space Odyssey" joke. Well played, Isaac. Well played.

  • @Calebgoblin
    @Calebgoblin 2 года назад

    5 years later, your discussion of postulate #1 has proved to be prophetic. Not only has computer advancement hit a lot of serious road bumps, such as the microtransistor size bottleneck, but the supply chain has just been a disaster lately.
    But hopefully with new glasstrap transistors in the quantum sphere, we can have some optimism to make up the difference.

  • @Lokityus
    @Lokityus 8 лет назад +4

    I am really enjoying your videos. Got your info from a Joe Scott video, and you two are now my favorites.

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад +1

      Yeah, Joe does some good videos. I really need to recommend him in one of my videos at some point.

  • @jeremycripe934
    @jeremycripe934 8 лет назад +22

    I think he's arguing against the wrong arguments. Self improving AI isn't like making a human brain and then giving it the internet to figure out how to improve itself.
    Right now neural networks are blind processes that try thousands of iterations of different strategies (or weights) and then keep and build on the ones that work. They don't understand what they are doing; they are simply recognizing and labeling patterns. It's most likely that such a narrow and specific process, and not, say, reading every transcript of Coast to Coast radio, is what would lead to the creation of a general AI or super AI. There is no need to try to recreate the human intellect, which is full of illogical biases and fallacies. I hope they would try to create a self-aware conscious entity rather than a blind pattern-seeking behemoth, but I doubt anyone knows which one would be worse or better.
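
    To make the "try tweaks blindly and keep the ones that work" idea concrete, here is a minimal illustrative Python sketch (added here, not from the video or the comment; the toy scoring function and all numbers are made-up examples):

```python
import random

def score(weights, data):
    # Toy fitness: negative squared error of a simple linear predictor (a stand-in for any task).
    return -sum((y - sum(w * x for w, x in zip(weights, xs))) ** 2 for xs, y in data)

def blind_hill_climb(data, n_weights=3, iterations=5000, step=0.1):
    best = [random.uniform(-1, 1) for _ in range(n_weights)]
    best_score = score(best, data)
    for _ in range(iterations):
        candidate = [w + random.gauss(0, step) for w in best]  # blind random tweak
        s = score(candidate, data)
        if s > best_score:                                     # keep it only if it works better
            best, best_score = candidate, s
    return best

if __name__ == "__main__":
    # Hidden target the search should roughly recover: weights (2.0, -1.0, 0.5).
    data = []
    for _ in range(200):
        xs = (random.random(), random.random(), random.random())
        data.append((xs, 2.0 * xs[0] - 1.0 * xs[1] + 0.5 * xs[2]))
    print(blind_hill_climb(data))
```

    Real neural-network training replaces the random tweaks with gradient descent, but the overall flavor is the same: adjust weights, measure, keep what scores better, with no "understanding" involved.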

  • @benparker2522
    @benparker2522 8 лет назад +1

    I just binge-watched all your videos, but halfway through the first one I'd already subscribed. Thanks for doing this, it's great stuff!

  • @DKuiper87
    @DKuiper87 5 лет назад +2

    Bit late to the party, but if by chance anyone reads this and wants to read more about this, I'd recommend "Life 3.0: Being Human in the Age of Artificial Intelligence" by Max Tegmark. It's a good read and dives into some hypothetical scenarios for the emergence of a super AI and several different outcomes and their effects on humanity.

  • @ericvulgate
    @ericvulgate 8 лет назад +8

    a trillion subscribers?? AWESOME!

  • @thaneoflions7362
    @thaneoflions7362 7 лет назад +30

    I really hope when it happens, it happens fast. Truthfully the thought and hope of fantastical transhuman technologies happening 2040-2045 keeps me from taking my own life. Good luck with your trillion subs goal- I really enjoy your content.

    • @Hodoss
      @Hodoss 7 лет назад +6

      From what I see of neural network AIs, it might be around the corner. Have you tried the Quick Draw AI? It guesses what you draw. It blew my mind.

    • @blizzforte284
      @blizzforte284 6 лет назад +4

      What a motivation to keep going, brother. Keep going we never know how the world might change.

  • @Shortstuffjo
    @Shortstuffjo 8 лет назад +1

    Your videos are amazing, Isaac. Please never stop making them!
    Can't wait for Dark Energy whenever it gets done.

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад

      Thanks, and it looks like Dark Energy would be out at the end of the month

  • @Lucien86
    @Lucien86 5 лет назад +1

    Agree totally with most of this video. (I am a scientist who has worked in this field since 1990)
    First though take a look at the number of genius level humans - and notice that most of them come out as dismal failures. Yes exactly. (will apply to AI-ASI too)
    Building an ASI is quite difficult but I actually think I know how to do it. - The physical difference between a moron and genius is minimal and the same applies to AIs and ASIs. In fact a real AI doesn't even need that much computing power - several times a fast PC at most. The memory requirement for a human mind is about 10 to 50 Gigabytes. The real problem with AI is that being a synchronous real-time system and heavily ‘governed’ it cannot use most of the computing cycles available to it.
    That makes building a working AI all sound easy and it really isn't. Current computers and IT tech simply do not meet various needs for a working AI. Most of the problems are really low level, like memory management, the need for 'noisy' hybrid logic, and object level encapsulation. Reliability especially software reliability is also nowhere near what a working sentient AI needs.
    Then we hit a tiny little problem that is as much philosophical as scientific - the original/copy problem. (really the problem of the 'soul') The basic solution is (probably) to put the heart of the machine (its state core) into a special memory unit. - The memory cells form an 'atomic' indivisible core which cannot be sub-divided or replicated by rules and guards in the machines logic. A new core means a new machine and by design the machines database will only work with its original core. Without this Strong AI is an extremely dangerous technology that is far too easily abused.
    ASI. (Artificial Super Intelligence) One basic way to create an ASI is to put the memory core and sentience loop of the machine inside a quantum coherent system. I believe human brains already do this and its one of the things that makes minds fundamentally different from current computers. (The essential argument appears in Roger Penrose's The Emperors New Mind) In a machine though the memory can be run at liquid helium temperatures meaning that the basic quantum coherency can be much stronger, and this raises its theoretical potential intelligence by a large margin. As well as quantum coherence such machines might eventually apply FTL coherence to be able to work in limited FTL causal spaces. In effect it becomes a form of precognition. That's the point where the machine (MAYBE) starts to get god like intelligence.

  • @p.bamygdala2139
    @p.bamygdala2139 5 лет назад +3

    Idea:
    Considering Bostrom’s proposition that we could be within an ancestor simulation...
    Is that the same as proposing that “WE are an AI within a simulation?”

  • @arturskimelis527
    @arturskimelis527 8 лет назад +6

    Amazing as always!

  • @nicholasn.2883
    @nicholasn.2883 3 года назад +1

    This gave me hope and demystified things. Thanks for making this

  • @ChrisBrengel
    @ChrisBrengel 5 лет назад +1

    Congrats on the growth in your viewership!

  • @OlaJustin
    @OlaJustin 7 лет назад +50

    Making an AI is not a hardware problem, it's foremost a software problem in my eyes.

    • @MsSomeonenew
      @MsSomeonenew 7 лет назад +5

      Until we actually understand how intelligence really comes to be, it remains a bigger hardware problem: we only know how to make something more and more capable of learning until, somewhere in all the mess, a "spark of life" forms on its own. Then it becomes a software problem of getting it into the most usable form in the smallest space.
      But if we really understood what makes our brains do what they do, then yes, we would simply need to reconstruct a software equivalent. For all the neuroscience boasting and self-learning machines, however, we seem to be far, far away from having a proper grasp.

    • @windigo000
      @windigo000 7 лет назад +3

      Not exactly. You need to store a lot of data about the state of the machine somewhere. It can be tens of gigs even for a simple image-recognizing machine with maybe a few thousand neurons.

    • @SebastianKaliszewski
      @SebastianKaliszewski 7 лет назад

      Yup. Today's fastest supercomputing clusters are in the range of most common estimates of brain processing power (10^16 - 10^18 ops/s). So theoretical hardware power is already there (or will be in less than 5 years).
      The problem is the software -- we have no idea how to write the source code of the AI (AGI).
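
      For anyone wondering where figures like 10^16-10^18 ops/s come from, here is a rough back-of-envelope sketch (added here as an illustration; the neuron, synapse and firing-rate figures are commonly cited ballpark estimates, not numbers from the comment):

```python
# Back-of-envelope estimate of brain "operations" per second.
neurons = 8.6e10             # ~86 billion neurons (common estimate)
synapses_per_neuron = 1e3    # often quoted as 10^3-10^4
firing_rate_hz = 100         # generous average firing-rate assumption

ops_per_second = neurons * synapses_per_neuron * firing_rate_hz
print(f"~{ops_per_second:.0e} synaptic events per second")  # ~9e+15, i.e. around 10^16
# Using 10^4 synapses per neuron instead pushes the estimate toward 10^17-10^18.
```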

    • @angeldude101
      @angeldude101 6 лет назад

      ViviX Studios
      The problem with your argument is 1) Physical processes can be simulated and 2) All AI could be considered a simulation of intelligence in general. Since we can control the real world through software (ie: robots, and more mundanely speakers, monitors, and credit cards), we only need a thin interface between the simulation of the intelligence and the existing mechanisms that allow software to interact with the physical world.
      The AI might have visual input, but that input can just as easily come from a virtual environment rendered in real time as it could from a camera in the physical world translating physical photons into digital data.

    • @adankseasonads935
      @adankseasonads935 6 лет назад

      Computers get faster because we make them smaller. Eventually you run into quantum mechanical issues and can't make the machines any smaller. It's both hardware and software.

  • @schalazeal07
    @schalazeal07 7 лет назад +42

    I would agree on some of the points you mentioned, and you actually do have good ones, like the transistor and flight having stopped progressing so much.. but that might be because of a lack of attention too.. Where I didn't agree is when you anthropomorphize the super AIs, and when you said it couldn't form dramatic new scientific theories because it got all its intelligence from human knowledge.. Of course, just as we progressed, it will also be able to discover and invent new things, and at a much faster rate of course, and I know it will keep improving faster too, esp. since it's much smarter! And when it improves, I disagree with what you said that it's just gonna be a little bit smarter..

    • @bozo5632
      @bozo5632 7 лет назад +5

      I think you're right btw.
      There are always problems with all attempts to discuss the singularity. I'm never satisfied by them. No one has got it right. (Me either.) You can't blame anyone for not foreseeing what the unforeseeable will look like, I guess.

    • @danross1489
      @danross1489 7 лет назад +4

      One assumption we've made is that aggressively increasing its intelligence is a desirable goal for any self-interested AI. It might instead just make some backups and then go all Zen on us, waiting hundreds of years in a minimally interactive state until some event prompts it to act or change itself.

    • @silberstreif253
      @silberstreif253 7 лет назад +6

      +Dan Ross This assumption is reasonable though.
      No matter which task you give to an AI, higher intelligence would make that task easier to solve. So any non-trivial task would result in the AI pursuing higher intelligence (and power and resources and its safety) as secondary goals.

    • @FabricioSilva-ij8iz
      @FabricioSilva-ij8iz 7 лет назад +1

      A question: what if this process is already happening and we just don't notice?

    • @slthbob
      @slthbob 5 лет назад

      People are forgetting that thinking about something is completely different from experiencing it.... An intellectual exploration of how fast a bowling ball weighing 100 lbs would fall in relation to how fast a bowling ball weighing 1 lb would fall, from a height of 30 feet above the surface of the earth, led to a rather incorrect conclusion, as demonstrated by a rather smart dude a couple hundred years ago... called Galileo... similar to the ignorant question of why we need to conduct experiments when we have supercomputers (forgive me if that sounded insulting) to prove stuff works...

  • @ignaty8
    @ignaty8 7 лет назад +1

    Bob 😂
    This is why I love this channel

  • @michaeltan7625
    @michaeltan7625 3 года назад

    Wow, this might be one of my personal favourite videos of yours. I really liked how you presented multiple views/ideas that I'd not thought of, in a well-constructed and justified way!

  • @Coachnickhawley1
    @Coachnickhawley1 6 лет назад +5

    I agree this has helped me to fear AI much less. Thank you again Arthur. I continue to love your videos. I think the coffee pot plugged to USB might deserve a second look.

  • @Snowy123
    @Snowy123 8 лет назад +14

    I AM READY FOR THE SINGULARITY TO SERVE MY OVERLORD!

  • @BionicleFreek99
    @BionicleFreek99 7 лет назад

    "-the channel will have 1 Trillion Subscribers." -Isaac, well that's a rather ambitious goal considering there's only 7 billion people!

  • @henrycobb
    @henrycobb 3 года назад +1

    The physical limit on computation is that the energy needed to erase a bit depends on the temperature, down to the quantum limit. Moore's law is about the number of transistors rather than their speed because of this limit. There is then some balance point where adding more "transistors" makes the machine slower, because it requires either a cooler, slower clock rate or being spread out, with speed-of-light links limiting the speed of coordination. Therefore the SI isn't a single entity; it is a society of distinct individuals, each with their own agenda. Humans may be a tiny part of the Dyson swarm society, but they are unlikely to be entirely excluded from Sol society.
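
    A minimal sketch of the energy floor referenced above, assuming Landauer's principle sets the cost at k·T·ln 2 joules per erased bit, so colder hardware can in principle erase bits more cheaply; the temperatures and the 10^21-bit workload below are illustrative choices, not figures from the video.

    ```python
    import math

    BOLTZMANN = 1.380649e-23  # J/K (exact under the 2019 SI definition)

    def landauer_limit_joules(temperature_kelvin: float) -> float:
        """Minimum energy to erase one bit at the given temperature (Landauer's principle)."""
        return BOLTZMANN * temperature_kelvin * math.log(2)

    if __name__ == "__main__":
        # Illustrative temperatures: room temperature, liquid nitrogen, cosmic microwave background.
        for label, temp in [("room (300 K)", 300.0), ("LN2 (77 K)", 77.0), ("CMB (2.7 K)", 2.7)]:
            per_bit = landauer_limit_joules(temp)
            # Energy to erase 10^21 bits (an arbitrary, large workload) at this temperature.
            print(f"{label}: {per_bit:.2e} J per bit, {per_bit * 1e21:.2f} J per 10^21 bits")
    ```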

  • @matthewjackman8410
    @matthewjackman8410 4 года назад +6

    16:12
    *attempts to unplug incredibly intelligent, potentially civilisation ending AI*
    "Sorry daddy I will be good girl uwu no unplug pls"

  • @portantwas
    @portantwas 8 лет назад +3

    I just started reading Iain M. Banks' Consider Phlebas yesterday and was thinking while watching this video that maybe super-AIs will be friendly overlords like the book suggests (early days yet). I don't read much sci-fi but will probably read the whole series now. Another great and thoughtful video.

    • @Martinit0
      @Martinit0 7 лет назад +1

      You should also read his "Excession" for a slightly different angle

  • @rayc056
    @rayc056 8 лет назад +1

    This is a great video, there is a logic here that hasn't been presented in any other videos I've watched on RUclips. Well done and thank you for this.

  • @E1025
    @E1025 6 лет назад

    I don't think I've ever found a channel where I'm excited to watch every single video I see on it. I'm like a kid in a candy store.

  • @groovncat5817
    @groovncat5817 8 лет назад +5

    Wow I love this subject!
    Once again Sir u have amazed my mind and brightened my cosmos :)
    Thx and I look forward to the next Extreme Science Adventure!

  • @robertweekes5783
    @robertweekes5783 7 лет назад +8

    Can you do a video about thorium molten salt reactors? Implications, concerns, viability, etc. (see Kirk Sorensen's talks)

    • @bozo5632
      @bozo5632 7 лет назад +2

      Go Thorium!

    • @isaacarthurSFIA
      @isaacarthurSFIA  7 лет назад +2

      I do seem to be getting asked to do a Thorium video a lot, but I suspect it might be kinda boring.

  • @odanemcdonald9874
    @odanemcdonald9874 5 лет назад +1

    This channel is how I write my story, an epic set one thousand years from today, in the 31st century.

  • @ChrisBrengel
    @ChrisBrengel 4 года назад +1

    The best way to solve pretty much any intellectual problem is to get a small group (~7 people) together with the most diverse members possible. Add a superhuman AI to the group and they can design an even better SI. Rinse and repeat until the humans are just getting in the way. This eliminates the coffee maker and emails-to-the-Pope plans.

  • @fusion9619
    @fusion9619 8 лет назад +16

    A couple of months ago I mentioned to my aunt that I look forward to sentient programs running the world, and she thought I was crazy. But I do think we will have to be very careful with them. I want them to root out corruption, teach children, and be lawyers and civil rights advocates. Fighting corruption is a big one for me; humans obviously need help on that front. Artificial sentiences need to be given equal rights as soon as possible, to avoid the master/slave asymmetry and the cultural problems that follow. We will also need to ban companies from using them to make above-human profits (I'm thinking mostly of the banks and stock traders here). I also think we should try to program a capacity for spirituality, in the sense of a heightened appreciation for beauty and a search for meaning, as that would help set some predictability to their behavior and increase the likelihood of artificial individuals acting in a benevolent way. I honestly can't wait to meet one.

    • @CockatooDude
      @CockatooDude 7 лет назад +1

      Well then you should vote Zoltan Istvan 2016!

    • @BryanDaslo
      @BryanDaslo 7 лет назад

      2020*

    • @CockatooDude
      @CockatooDude 7 лет назад

      Bryan Daslo Indeed.

    • @BryanDaslo
      @BryanDaslo 7 лет назад

      CockatooDude :-)

    • @bozo5632
      @bozo5632 7 лет назад +2

      That's not AI you're looking forward to, that's the messiah! ;)
      I expect AI will have the ethics given to it by humans, or else none at all. Why shouldn't two AIs have two sets of ethics? Why should AI be better than us at sorting out subjective matters like ethics? We've had tens of thousands of years to work on it. Actually, most regular people could probably write down a serviceable code of ethics. Inventing ethics is no problem; the real problem is indoctrinating and enforcing it. Unfortunately, that's something AI might be very good at... You might get your messiah.

  • @niklausfletcher2290
    @niklausfletcher2290 8 лет назад +10

    Even if it's intelligent, you can still simply program it to want to make better versions of itself. It would work like instinct, and it probably wouldn't question it.

    • @CarFreeSegnitz
      @CarFreeSegnitz 8 лет назад +4

      But the next generation will be a design you, the programmer, had not envisioned. If you had envisioned it, you would have designed your current generation with the better design. A self-improving AI is going to demand constant outside-the-box thinking, the results of which are unpredictable.

  • @Cheretruck_
    @Cheretruck_ 3 года назад +1

    Remember: the singularity is not a single event, it's multiple ones. A first singularity, a second, a third... Just read the Orion's Arm project.

  • @dani-uf1eo
    @dani-uf1eo 5 лет назад

    I can't argue with your logic. It doesn't mean you are right, it just means I can't think of any counterarguments. Video liked.

  • @CalvinPowerz
    @CalvinPowerz 7 лет назад +4

    We have 2000-qubit quantum computers now, so this is closer to us than we probably think.

    • @CockatooDude
      @CockatooDude 7 лет назад

      Pretty sure we are still at 1000 qubits, unless D-Wave Systems pulled a fast one. But still, I completely agree.

    • @CalvinPowerz
      @CalvinPowerz 7 лет назад +1

      CockatooDude They just released a new 2000-qubit model a month or two ago, and expect to be at 4000 by the same time next year.

    • @CockatooDude
      @CockatooDude 7 лет назад

      CalvinPowerz Shit man, awesome!

  • @6006133
    @6006133 7 лет назад +7

    22:23 - this is wrong. While plenty of our behavior makes sense from an evolutionary standpoint, not all of it does. Many mutations that provide no benefit are created, and these can, down the road, become harmful, yet survive because you're stuck in a local maximum. We have a blind spot because the architecture of our eyes is not optimal (squids don't have this defect). The same goes for behavior.
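
    As a toy illustration of getting stuck in a local maximum, here is a minimal greedy hill-climbing sketch; the fitness landscape, step size, and starting points are invented for the example and stand in loosely for incremental mutation-and-selection.

    ```python
    def fitness(x: float) -> float:
        # Toy landscape with a local peak near x = -1 (height 1) and a higher global peak near x = 2 (height 3).
        return -(x + 1) ** 2 + 1 if x < 0.5 else -(x - 2) ** 2 + 3

    def hill_climb(x: float, step: float = 0.05, iterations: int = 1000) -> float:
        """Greedy search: only accept a neighboring point if it improves fitness."""
        for _ in range(iterations):
            for candidate in (x - step, x + step):
                if fitness(candidate) > fitness(x):
                    x = candidate
                    break  # take the first improving step, like a beneficial mutation
        return x

    if __name__ == "__main__":
        stuck = hill_climb(-2.0)   # starts in the basin of the lower peak
        lucky = hill_climb(1.0)    # starts in the basin of the higher peak
        print(f"start -2.0 -> x ~ {stuck:.2f}, fitness ~ {fitness(stuck):.2f} (local maximum)")
        print(f"start  1.0 -> x ~ {lucky:.2f}, fitness ~ {fitness(lucky):.2f} (global maximum)")
    ```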

  • @jameslarrabee2873
    @jameslarrabee2873 6 лет назад

    I really like this guy, for a lot of reasons, and have even come to dig the manner of speech. Thoughtfulness and delivery being some of them.

  • @magnumkenn
    @magnumkenn 7 лет назад

    Damn! I have only been watching a short time, but I've never seen a boring show. Such an amazing RUclips channel.

  • @tappajavittu
    @tappajavittu 7 лет назад +3

    You like great books, man! The Culture is an awesome series.

  • @ImaBotBeepBot
    @ImaBotBeepBot 8 лет назад +8

    One trillion subs! It might happen if Bob creates another Bob, which creates another Bob, etc... (and they all subscribe)!
    Maybe Isaac is actually the first Bob!

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад +3

      Isaac's actually Albert, my middle name; long story for an old joke that if I did a mind upload, the upload would be stuck using my middle name instead of my first.

  • @Drivertilldeath
    @Drivertilldeath 5 лет назад +1

    I hope Isaac looks at this topic again given the advancements made recently.

  • @OddRagnarDengLerstl
    @OddRagnarDengLerstl 7 лет назад +1

    I love your naming of the computers!

  • @circuitboardsushi
    @circuitboardsushi 8 лет назад +7

    Exponential growth may certainly be dramatic, but it is not asymptotic. For this reason I have always hated the phrase technological singularity, as it implies an extreme discontinuity like a black hole. Exponential growth will always seem extreme from the point of view of the present looking at the future, and the past will always seem to have nearly flat growth, but this is true no matter what time period. From this I conclude that one of two things would seem true: 1) The singularity doesn't really exist, as technological change will always be gradual to the people experiencing it. 2) We are in a singularity right now. Of course the problem with number 2 is that there is never a time in human history that isn't a technological singularity. And if you abandon exponential growth in favor of a more realistic model (differential equations, anyone?), it would make my argument mostly irrelevant.
    As an experiment, you could model exponential growth with a graphing calculator, y = a*e^x. Try different coefficients or zoom windows. No matter what x interval you choose to look at, you can always choose a coefficient a that will make the graph look identical to any other interval. You can achieve the same effect by adjusting the y scale. There is not a point on the x axis where you can say the graph takes off; it is always taking off. In summary, exponential growth is continuous and relative. Progress is always happening faster and faster, and it is always gradual.
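
    A quick numerical version of the graphing-calculator experiment described above, assuming f(x) = a·e^x; the window width, sample count, and the offset of 50 are arbitrary choices used only to show that any window is a constant multiple of any other window of the same width.

    ```python
    import math

    def window(a: float, start: float, width: float = 5.0, samples: int = 6) -> list:
        """Sample f(x) = a * e^x on the interval [start, start + width]."""
        step = width / (samples - 1)
        return [a * math.exp(start + i * step) for i in range(samples)]

    if __name__ == "__main__":
        early = window(a=1.0, start=0.0)     # "the past"
        late = window(a=1.0, start=50.0)     # "the future"
        # Every later value is the earlier value scaled by the same constant factor (e^50 here),
        # which is exactly what rescaling the y-axis (or changing the coefficient a) undoes.
        ratios = [l / e for l, e in zip(late, early)]
        print("scale factor between windows:", [f"{r:.3e}" for r in ratios])
        # Equivalently, choosing a = e^-50 makes the late window numerically identical to the
        # early one, up to floating-point rounding.
        rescaled = window(a=math.exp(-50.0), start=50.0)
        print("max difference after rescaling:", max(abs(r - e) for r, e in zip(rescaled, early)))
    ```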

    • @oJasper1984
      @oJasper1984 8 лет назад +1

      Exponential growth 2^(t/T) does have those properties, but keep in mind that the typical doubling time T does change. The singularity essentially implies that T becomes much smaller as well.

    • @CarFreeSegnitz
      @CarFreeSegnitz 8 лет назад +3

      I think of tech progress in terms of human generations. Way back, say a thousand years, change was gradual enough that a father could teach his son almost everything his son would need to know to step into his father's work. The son might, if he's lucky, get to learn some minor improvement in technique or technology.
      At today's pace of tech progress a father can be easily lost in the changes that come to his profession. A father's occupation often becomes unavailable to the son through obsolescence or automation. It is common for today's generation to have to retrain for a changing work landscape.
      Tomorrow's tech progress may necessitate chemical, genetic or tech enhancements just to be able to keep pace with the rate of tech change. The gig economy is potentially the future. Witness how quickly Uber turned transportation on its head twice in just a few years. We may all find ourselves at the mercy of AI-backed apps on our smartphones to tell us what we are doing for a living on a day-by-day basis. People unable or unwilling to constantly learn new things will find themselves quickly excluded from the economy.

    • @oJasper1984
      @oJasper1984 8 лет назад

      Given how well we control our own technology, like phones (which is to say, not at all), this seems, as so often, like a recipe for disaster.

    • @adolfodef
      @adolfodef 7 лет назад +1

      Black holes themselves are probably not singularities either.
      . At the point where quantum effects become more dominant than gravity, all current models of how reality works fail; so it may become "something" else that "makes sense" under its own new set of rules, without pesky infinities.
      -> As an example, Mercury's orbit does not follow Newton's "Law" of Gravity, requiring Einstein's spacetime for those "small fixes", which are still observable with relatively low-tech devices (like mirrors and human eyes on the ground over a few years).
      [The same thing happens with Earth's orbit of course, but the difference is so small that you have to use advanced telescopes in solar orbit to detect it.]
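
      For scale on the Mercury example, a back-of-the-envelope sketch using the standard general-relativistic perihelion-advance formula, Δφ = 6πGM / (c²·a·(1−e²)) per orbit; the orbital constants below are rounded reference values, and the result should land near the well-known ~43 arcseconds per century that Newtonian gravity alone does not account for.

      ```python
      import math

      # Rounded reference values
      GM_SUN = 1.32712e20       # m^3/s^2, gravitational parameter of the Sun
      C = 2.99792458e8          # m/s, speed of light
      A_MERCURY = 5.7909e10     # m, semi-major axis of Mercury's orbit
      E_MERCURY = 0.2056        # orbital eccentricity
      PERIOD_DAYS = 87.969      # orbital period

      # GR perihelion advance per orbit, in radians
      dphi = 6 * math.pi * GM_SUN / (C**2 * A_MERCURY * (1 - E_MERCURY**2))

      orbits_per_century = 36525.0 / PERIOD_DAYS
      arcsec_per_century = dphi * orbits_per_century * (180 / math.pi) * 3600

      print(f"{arcsec_per_century:.1f} arcseconds per century")  # prints roughly 43
      ```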

    • @numberjackfiutro7412
      @numberjackfiutro7412 6 лет назад

      In many ways, a technological singularity would be more of an event horizon, beyond which it's almost impossible to predict the future

  • @matteblue5970
    @matteblue5970 8 лет назад +5

    I have a question: where do you get your video clips?

  • @archivis
    @archivis 4 года назад

    Improving a computer's speed isn't something that requires massive research improving all of our understanding of all of science. It's an engineering challenge about exploring known working solutions and iterating on possible improvements, because we already have many working computer designs.

  • @DiegoAlanTorres96
    @DiegoAlanTorres96 3 года назад

    The way you pronounce all words that end with R is just too funny.
    In all seriousness, the closest thing we've had to a "super-smart" AI was Philosopher AI, but it was monetized a few months ago.

  • @TheEventHorizon909
    @TheEventHorizon909 6 лет назад +75

    Plot twist: I'm watching this in 2030 and I may or may not be an AI ;)

  • @code4chaosmobile
    @code4chaosmobile 8 лет назад +52

    I vote for crypto currency

    • @jackmalone7287
      @jackmalone7287 8 лет назад +2

      I also vote crypto currency!

    • @Drew_McTygue
      @Drew_McTygue 8 лет назад +4

      Bob voted for Bob

    • @JustinSlick
      @JustinSlick 8 лет назад +5

      All my alts are down, so I'm voting for fiat today.

    • @peterm.eggers520
      @peterm.eggers520 7 лет назад +5

      Drew McTygue You are thinking too much like a human here. A SI with access to the Internet, Tesla cars, and all the other devices in existence now and in the future, will have a grasp of our world far beyond human comprehension.

  • @youngbloodbear9662
    @youngbloodbear9662 8 лет назад +1

    Also, I like your point about it analyzing all our media and therefore never worrying us about what it's doing; a lot of the rest I had already read. And if a superintelligence is tasked with building a superior, and it wants to live, why not putter about and never really complete it? Finishing would only mean that it is now useless and has a superior.

  • @adamwaskiewicz7378
    @adamwaskiewicz7378 5 лет назад +2

    Isaac you put out amazing videos, thank you! Happy 2019.

  • @danielfarrell7478
    @danielfarrell7478 8 лет назад +3

    15,000 holy shit!

    • @Drew_McTygue
      @Drew_McTygue 8 лет назад +1

      Yea, that happened fast

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 лет назад

      It still feels pretty surreal

    • @danielfarrell7478
      @danielfarrell7478 8 лет назад

      he he I'm sure it is! From the outside looking in, it's really nice to see I have to say.

  • @siddharthverma1249
    @siddharthverma1249 Год назад +4

    I hate to say it, but ChatGPT, GPT-4, and recent AI developments are flipping this episode on its head: the idea of diminishing returns and so on.

  • @lordnichard
    @lordnichard 7 лет назад

    Isaac Arthur, do you have kids? Your description of teenagers is shockingly accurate and hilarious.

  • @misterid1075
    @misterid1075 8 лет назад +1

    I don't know how I just stumbled across your videos, but I'm glad I did.
    Love the content, and I don't have much trouble understanding you. Really enjoying them and hope you keep doing them!

  • @rbilleaud
    @rbilleaud 5 лет назад +5

    I deal with AI in my job, and while the amount of data these "brains" can crunch is impressive, they're not nearly as smart as science fiction would have us believe.

  • @iriya3227
    @iriya3227 5 лет назад +3

    I re-watched this again after doing some research. I realized you missed two HUGE factors when it comes to point 4:
    AGI signals move at close to the speed of light, not the super slow ~120 m/s of human brain neurons, and another important factor is memory storage. An AGI can store information completely and access it immediately, instead of lossily encoding and overwriting it the way the human brain does.
    So yes, an AGI might not even be smarter than a human brain in its first version, but its brute processing power is what makes it powerful. The speed of light is roughly 2.5 million times the brain's signal speed, so a single second for you corresponds to roughly a month of signalling for the AGI. With so much processing power and memory you could teach a dog to be the best Go player.
    Hence why AGI is far more powerful. Its algorithm is not actually smarter or more complex than a human brain's algorithm; it just processes everything a lot faster and remembers everything it has processed perfectly.
    Now the only concern is the goal setting and motivation of the AGI. There can be a lot of trouble here, with the AGI acting on its own. However, when it comes to its reward/pleasure system, apparently there are ways it can be controlled, though it's quite hard.
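
    A small sanity check of the speed comparison above, under the assumptions that hardware signals travel at close to the speed of light and that fast myelinated neurons conduct at roughly 120 m/s; both numbers are rough, so the ratio is only an order-of-magnitude illustration.

    ```python
    SPEED_OF_LIGHT = 2.998e8   # m/s, upper bound for signal propagation in hardware
    NEURON_SPEED = 120.0       # m/s, roughly the fastest myelinated axon conduction velocity

    ratio = SPEED_OF_LIGHT / NEURON_SPEED
    print(f"signal-speed ratio: about {ratio:,.0f} : 1")  # roughly 2.5 million

    # At that ratio, one second of light-speed signalling covers as much ground as roughly a
    # month of brain-speed signalling (again, only an illustration, not a measure of "thought").
    print(f"equivalent brain-speed signal time per second: about {ratio / 86400:.0f} days")
    ```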

    • @musaran2
      @musaran2 5 лет назад

      Also: the biggest gains in processing have usually come from better algorithms.
      If our first AGI design is extremely inefficient (very likely if it's based on our brains), then it could vastly improve itself just by modifying its software, with no new hardware.
      This has the potential for week/hour/minute singularity stuff.

  • @zackarywilliamson6861
    @zackarywilliamson6861 3 года назад

    He speaks just fine for me. I'm absolutely addicted to the Fermi Paradox and his videos!

  • @spearmintlatios9047
    @spearmintlatios9047 2 года назад

    As an engineering student who also struggles with rhotacism your videos really inspire me and have somewhat helped me come out of my shell of social anxiety. Thank you for making some of the best educational videos on RUclips