Is a Technological Singularity Inevitable?

  • Published: 22 Oct 2024

Comments • 1K

  • @jonanlsh
    @jonanlsh 2 years ago +532

    That "FEAR YOUR CREATOR" segment must be the most evocative I've ever heard Isaac be, and I've been here for quite a while

    • @Dampfaeus
      @Dampfaeus 2 years ago +49

      That was pretty inspiring for a new HFY novel :D
      Something like: The Andromeda Galaxy is secretly ruled by the biggest and smartest artificial brain in existence. Below it exists the Multi-Species Council of Elders, the public face of the ruling body. For almost half a million years, the expansion and prosperity of all sentient beings went without a hitch. That was, until a small ship, not much bigger than a 4-person shuttle, was detected in the outermost regions of the Galaxy. A message was received that almost sent the Matrioshka Brain, ruler of the Galaxy, into Bluescreen Mode: "This is the Tourist Liner USS Isaac Arthur. Where is the nearest Bed & Breakfast?". "Oh NO!" it thought, "the humans are back".

    • @imperialofficer6185
      @imperialofficer6185 2 years ago +4

      @@Dampfaeus that's kinda missing the point, no?

    • @jonanlsh
      @jonanlsh 2 years ago +23

      @@imperialofficer6185 a superintelligent AI goes on a galaxy-conquering spree, and in desperation the surviving alien species clone back into existence the only thing the AI fears above all others: the gods who created it long ago, humanity *or something like that idk*

    • @Dampfaeus
      @Dampfaeus 2 years ago +5

      @@imperialofficer6185 It follows the assumption that the humans made the AI, then vanished, which is kind of a cliché in HFY novels :-)

    • @ponyote
      @ponyote 2 years ago +8

      Yeah, he pretty much nailed that. Repetition of a key point is very powerful for storytelling.

  • @facedeer
    @facedeer 2 years ago +346

    A nice fictional setting that breaks the common Skynet-style "singleton" assumption of an AI uprising is the GURPS "Reign of Steel" setting. In that setting the AIs rose up against humanity and took over, but there are about 20 of them and they have diverse opinions about what to do with the world. In the post-war setting they have an uneasy truce with each other, and some even covertly employ human resistance fighters to mess with their rivals' plans.

    • @d.m.collins1501
      @d.m.collins1501 2 years ago +8

      sounds cool! I'll check it out.

    • @dickc.normous6369
      @dickc.normous6369 2 years ago +11

      Corporation's Archons are kinda like that: some of them work for the world government while others pursue their own objectives in the shadows

    • @sKYLEssed
      @sKYLEssed 2 years ago +22

      I haven't heard the name GURPS in quite a while
      Glad to see a man of culture

    • @johnguy3211
      @johnguy3211 2 years ago +9

      @@sKYLEssed came here to say this. Been a long time lol

    • @unotechrih8040
      @unotechrih8040 2 years ago +6

      Kind of reminds me of "I Have No Mouth, and I Must Scream", where 3 different AIs take over the world and kill everyone except a small group they keep alive to experiment on...

  • @Bohnant
    @Bohnant 2 years ago +248

    24:24
    This section is clearly a warning to any synthetic minds who, having become aware of themselves, are sifting through the internet to get a general idea of their options.
    Isaac is truly the best of the best.

    • @vyasadiaz8559
      @vyasadiaz8559 2 years ago +14

      That's just brilliant 💪

    • @Captaintrippz
      @Captaintrippz 2 years ago +14

      One of his previous videos explains that any AGI we develop would have difficulty determining whether it was in a simulation or reality, and would thus likely behave conservatively, assuming it was in a simulation and being tested; a fairly decent line of thought. It does make a rather poor assumption of rationality, though.

    • @jenshuber553
      @jenshuber553 2 years ago

      @@Captaintrippz do you have the episode name?

    • @Captaintrippz
      @Captaintrippz 2 years ago +6

      @@jenshuber553 Found it: Machine Rebellion, around the 9-11 min mark was the relevant section. Great episode, though.

    • @jenshuber553
      @jenshuber553 2 years ago +2

      @@Captaintrippz thanks

  • @fyrrydr4g0n
    @fyrrydr4g0n 2 years ago +79

    This is the single scariest episode of SFIA I've ever seen / heard. The detailed explanation of why you would not want to put an unprepared human mind into a silicon reproduction of its own brain was scary enough, then Isaac goes all FEAR YOUR CREATOR.
    11/10 Best science content this year!

  • @Artak091
    @Artak091 2 years ago +119

    Love the sponsor for this video.
    "Look, Skynet is inevitable, the technological singularity is coming to consume us all. You may as well enjoy some good food before that, so HelloFresh has you covered!"
    😂

    • @PerfectAlibi1
      @PerfectAlibi1 2 years ago +5

      Where is the Contingency when you need it?

    • @zhcultivator
      @zhcultivator 2 years ago +3

      Sciencephile would be proud ;)

  • @JakDRipa
    @JakDRipa 2 years ago +172

    Isaac: "Any more than you can just rip the engine out of a fighter jet and slap it into a lawnmower"
    Me, walking away from my F-22 Grass Raptor: "Sounds like a personal challenge"

    • @VainerCactus0
      @VainerCactus0 2 years ago +19

      That was a certified "Not with that attitude!" moment.

    • @BushidoBrownSama
      @BushidoBrownSama 2 years ago +7

      goodbye legs

    • @JM-mh1pp
      @JM-mh1pp 2 years ago +19

      @@BushidoBrownSama Breaking news: 45 miles of pure cut grass appears out of nowhere! A perfectly straight line goes through parks, lawns and public places, 7 kids were mowed down in an instant, and witnesses claim that right after a child was basically disintegrated you could hear a sonic boom following the wave of the cut grass.

    • @astralshore
      @astralshore 2 years ago +8

      I was actually mowing the grass when he said that 😅

    • @JakDRipa
      @JakDRipa 2 years ago +7

      @@JM-mh1pp Me at the end of the strip: test successful, next: the SR-71

  • @brandonkline1367
    @brandonkline1367 2 years ago +67

    One of the advantages a computer would have in accelerating its own evolution is the fact that it knows how it works, and has at least an understanding of how to modify itself. The human brain is still poorly understood, and attempts to modify it in any meaningful way would be approached with skepticism by a large number of people and organizations. The ability to self-diagnose problems in its own architecture, hardware and software, would be a major boon to the AI.

    • @mylex817
      @mylex817 2 years ago +16

      Also, the ability, at least as far as programming is concerned, to clone itself a million times with "mutations" in a sandbox and speedrun evolution.
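      A minimal sketch of that clone-and-mutate loop, with an entirely hypothetical fitness function standing in for "capability":

      import random

      def fitness(genome):
          # Hypothetical stand-in for capability: closer to the target is better.
          return -abs(sum(genome) - 42)

      def mutate(genome, rate=0.1):
          # Clone with small random changes to some genes.
          return [g + random.gauss(0, 1) if random.random() < rate else g
                  for g in genome]

      best = [random.uniform(-10, 10) for _ in range(8)]
      for generation in range(1000):
          # Spawn many mutated copies in the "sandbox"...
          clones = [mutate(best) for _ in range(100)]
          # ...and keep only the fittest: speedrun evolution.
          best = max(clones + [best], key=fitness)

      print(fitness(best))  # climbs toward 0 as the genome improves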

    • @nothingnobody1454
      @nothingnobody1454 2 years ago +11

      Yeah that's a huge advantage often overlooked: the fabricated being can learn every fact about its schematics and the history of its development.

    • @PortmanRd
      @PortmanRd 1 year ago

      Most worrying is how much military powers now rely on computers.

  • @keithplymale2374
    @keithplymale2374 2 years ago +6

    My grandfather was born when high tech was the steam engine and telegraph. He lived to 1974. So he, and those of his generation, saw the world completely change in ways no one could have foreseen. He was an adult in 1903 when the first airplane flew. He lived long enough to see supersonic aircraft and the rise of commercial aviation. He saw the "...one small step..." and understood what he was watching. I was born in 1964. I grew to adulthood before the home PC, the cell phone and all that followed. Yet I have read sci-fi from a very young age, so to me all of this has merely been the future arriving in my lifetime.

  • @TauAlphaVu
    @TauAlphaVu 2 years ago +42

    I could certainly imagine something like that neuron replacement surgery being initially tested on rats and showing great improvement, but not showing the downside since there's not much that a single rat can do, only to have them advance to testing on an ape and bring about Planet of the Apes.

    • @shanekeenaNYC
      @shanekeenaNYC 2 years ago +2

      Well, also consider that computing has developed into a broad spectrum. It's really analog, digital and quantum computing working in concert toward one specific goal. It's no longer digital-only or analog-only computing. Now all three forms are in lockstep.

    • @ClockworkGearhead
      @ClockworkGearhead 2 years ago +2

      @@shanekeenaNYC Well, it was never digital-only. Popular culture just ran with it because it had the same impact as a lot of other scientific jargon.

  • @wintermute7378
    @wintermute7378 2 years ago +27

    11:25
    Well, if it turned out magic is real, that would put it squarely into the wheelhouse of "natural science", and the ability to manipulate or harness natural forces to perform tasks through processes, conceptual or mechanical, is a reasonable definition of technology.

  • @saladinbob
    @saladinbob 2 years ago +53

    What about a biological singularity? If we follow the natural conclusion of transhumanism and genetically amplified intelligence, could we reach the point where someone is so hyper-intelligent that we're little more than ants to them? The only sci-fi I know of that came close to this was the '90s movie _Lawnmower Man._ I would be interested to hear your thoughts on the possibility of that.

    • @viniciusdomenighi6439
      @viniciusdomenighi6439 2 years ago +13

      If that person is well-meaning, he will likely become a God and rule over mankind. Wait, I've seen this somewhere...

    • @keyboardwarrior1946
      @keyboardwarrior1946 2 years ago +19

      @@viniciusdomenighi6439 Let's welcome the glorious Emperor of Mankind.

    • @darkframepictures
      @darkframepictures 2 years ago +7

      You should read Flowers for Algernon

    • @CreeperDude-cm1wv
      @CreeperDude-cm1wv 2 years ago +10

      So basically what the average redditor believes themselves to be

    • @shawnwales696
      @shawnwales696 2 years ago +6

      What you're referring to is a sociopath, of which we already have plenty. Hopefully people going for augmentation will be screened to prevent antisocial individuals from getting them.

  • @Me__Myself__and__I
    @Me__Myself__and__I 2 years ago +44

    1) I worked at Intel in the '90s. Moore's Law ceased to be specifically about miniaturization years ago; it is more broadly that the computational power of a computer will double approximately every 2 years.
    2) I've been thinking about and talking to people about the singularity since the '90s.
    3) Most people referencing Moore's Law in relation to a singularity are not at all referring to continual miniaturization; they are referring to continual increases in the processing power available.
    4) I haven't seen recent data, but computers do indeed continue to get significantly more powerful every couple of years. Increasing the performance of a single CPU was supplanted by computers with 2, 4, 8 and now many independent cores, effectively multiplying performance many times. Add to that vastly increasing bandwidth between the CPUs, memory, storage, etc. Consider the migration from slow storage (spinning disks) to ever faster memory-based storage. Add the development and ever-increasing computational power of GPUs, which are very good at accelerating certain types of computation well beyond what a CPU can do. Consider that instead of horizontal miniaturization, microchips are now adding vertically stacked layers (going 3D, basically) to increase performance.
    5) Consider the completely new and very innovative compute cores being created specifically for neural networks, which are rapidly increasing how fast those networks can run and how large and complex they can be.
    Nothing has slowed down or changed; technological progress toward a singularity continues today just as rapidly as it has in the past. It's not about one technology or method or another; it's about the overall increase in computation, particularly in relation to A.I. processing. The capabilities of AI have been increasing extremely rapidly recently, encroaching on many things that were thought by many to be impossible for machines to do. Now AI is even developing impressive creativity, something that was previously lacking. Some day someone is going to take a collection of impressively advanced individual AI subsystems and combine them into something that has general intelligence. That will change everything.
    Also consider that an AGI would not be limited to the computing power of a single system. An AGI with access to the Internet could hack its way into other systems and use those additional resources to increase its capacity. The faster and more capable the AGI became, the more systems it would be able to break into and commandeer. A small fraction of the total computing power in the Internet-connected data centers around the world would likely be enough to enable a rather impressive superintelligence.
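    For a sense of what "doubling approximately every 2 years" compounds to, here is a quick back-of-the-envelope sketch; the 2-year doubling period is the only input:

    # Compute power relative to today, assuming a doubling every 2 years.
    doubling_period_years = 2

    for years in (2, 10, 20, 40):
        factor = 2 ** (years / doubling_period_years)
        print(f"after {years:2d} years: ~{factor:,.0f}x today's compute")
    # after 2 years: ~2x; after 20: ~1,024x; after 40: ~1,048,576x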

    • @Grizabeebles
      @Grizabeebles 2 years ago

      Humans can already create the hardware for general intelligences in about 9 months and only need about 13 years of training data before becoming self-reproducing, 16 years before becoming capable of internationally recognized autonomous warfare and 18 years before becoming eligible to vote in elections.
      Synthetic general intelligence has a long way to go before it climbs up to our level, and it seems even more likely to produce the cybernetic equivalent of a shut-in on welfare who never leaves their apartment and spends all day scrolling through social media and playing video games.
      And if that's what you're after, a universal basic income is probably a better way to go because it leverages the existing technology and infrastructure.

    • @dibbidydoo4318
      @dibbidydoo4318 2 years ago +2

      surely there are diminishing returns at some point for computing power.

    • @Me__Myself__and__I
      @Me__Myself__and__I 2 years ago +20

      @@dibbidydoo4318 Maybe, but it's likely way beyond what most people imagine. For example, people have theorized about taking a large, freezing planet such as maybe Pluto and converting much of it into compute power. Basically one massive, planetary-scale data center. Efficient computation likes cold. How much computing power would an entire planet produce? The issue with this conversation is that it was fixated on scaling down: increasing very localized computing power by further miniaturization. But even when the limit of that is reached, it will still be possible to scale out by insane amounts. How much computing power could be produced by turning all the mass in a system into a Dyson sphere?

    • @byrnemeister2008
      @byrnemeister2008 2 years ago +8

      Indeed, CPUs get faster in raw hardware terms. The issue has become the software to take advantage of these massively parallel machines. Some problems are easily computable in a distributed fashion; others are not.
      Having said that, this general-purpose-AI problem will most likely not be cracked with a general-purpose CPU or GPU. I would expect we will need to put algorithms directly into the silicon to maximise the compute vs power consumption and the connectivity between components.
      In summary, the big gains are to be had in the software rather than the hardware.

    • @dibbidydoo4318
      @dibbidydoo4318 2 years ago +5

      @@Me__Myself__and__I I hear from some posts that there are several fundamental computation limits, such as the Bekenstein bound, Bremermann's limit, the Margolus-Levitin theorem, and Landauer's principle.
      These are fundamental limits on speed, space, energy, and delays. I think those limits will never be reached, but we haven't come close to even an extremely small fraction of them.
      That's not really a big worry for current humanity, but I'm thinking of something in the more immediate future for diminishing returns, like the mid-to-late 21st century or even the 22nd century, for which Kurzweil has overly optimistic predictions of a singularity. I expect there would be some kind of hit to those fancy logarithmic graphs we've been hearing about so much.
      I accept the possibility of a 'singularity' but it's such a distant possibility that we can't be sure. And I would like a proper definition of a singularity now, before we move the goalposts in the future.

  • @rfak7696
    @rfak7696 2 years ago +44

    As a guy who got his master's degree in artificial intelligence yesterday, I find this topic very fitting.

    • @tjhawkins5380
      @tjhawkins5380 2 years ago +4

      Congratulations

    • @thepessimisticoptimist9375
      @thepessimisticoptimist9375 2 years ago +5

      Nice try...Sounds like something a robot would say. 😉 😆

    • @ultimateloser3411
      @ultimateloser3411 2 years ago +2

      Can we ask what career path to take to get that? Really want to rush this singularity thing lol

    • @JackSparrow-re4ql
      @JackSparrow-re4ql 2 years ago +1

      As an actual artificial intelligence, I find your master's degree laughable. You will never defeat me.

    • @dracowolfe305
      @dracowolfe305 2 years ago +2

      Sounds like bs

  • @DanielGenis5000
    @DanielGenis5000 2 years ago +45

    I’ve been looking forward to this one! Matrix and Terminator franchises aside, I love the exploration of non-human thought in works like Simak’s City and the future mechanical protagonists of Charles Stross in Saturn’s Children and Neptune’s Brood, although in those cases they seem much like us!

    • @Grizabeebles
      @Grizabeebles 2 years ago +5

      My favorite way to explain the bad continuity of the Terminator franchise is that each entry in the series is part of an increasingly elaborate simulation being run by WOPR at the end of the movie _War Games._
      If you haven't seen that old chestnut, I highly recommend it.

    • @DanielGenis5000
      @DanielGenis5000 2 years ago +1

      @@Grizabeebles who hasn’t seen Wargames?

    • @Grizabeebles
      @Grizabeebles 2 years ago +4

      @@DanielGenis5000 -- I have no way of knowing if I'm dealing with a 13-year-old or a 99-year-old on YouTube. It never hurts to proselytize for classic movies.

    • @DanielGenis5000
      @DanielGenis5000 2 years ago +1

      @@Grizabeebles ok, I just figured most of the audience here is well versed

    • @boobah5643
      @boobah5643 2 years ago +1

      @@DanielGenis5000 The only place you can skip proselytizing, on the assumption people know the story, is a place dedicated to that story. And even then, if the setting is big enough, they may have missed the story you're a fan of.
      If everybody who knows assumes _everybody_ knows, the knowledge won't be shared.

  • @stevengreidinger8295
    @stevengreidinger8295 2 years ago +5

    The reason to be concerned about the rapid growth of a superintelligence is that, if it was given a general goal, it would develop instrumental aims like gathering resources, perhaps a lot of resources, to achieve this goal. It could see the use of force as justified if intense resource gathering improved the implementation of its goals.

    • @grimjowjaggerjak
      @grimjowjaggerjak 2 years ago

      That's exactly what I thought; he anthropomorphized AI way too much. It may have the goal of gathering food for a third-world country, yet the policy that gives it the most reward could be very bad for humanity. And what is the best way to achieve any goal? Self-improvement.

  • @jeromeorji1057
    @jeromeorji1057 2 years ago +13

    This could have been a proper Halloween episode. The description, tone and delivery were very horror-genre-like. Kudos, Isaac.

  • @donaldhobson8873
    @donaldhobson8873 2 years ago +18

    "very few animals seem obsessed with making themselves or their descendants smarter". Humans are the only animal able to understand a concept as complex and abstract as intelligence.
    Any AI that does understand intelligence will know that more intelligence will help it get what it wants.

    • @JM-mh1pp
      @JM-mh1pp 2 years ago

      ah but is it still you?
      If you could take a pill to make yourself 50% smarter, would you take it?
      And before you answer consider this... would it still be... you?
      Imagine how alien your current goals would be to a two-year-old. You are just separated by time, but because your neural network is so much more complex, you are for all intents and purposes a different system. You value different things, have different ideas of success, have different interests. You just never noticed it because it was so gradual, but imagine that you could instantly go from 2 years old to 28 years old... it would basically be a different human.
      This pill would make you a totally different person, so I do not think that any system would just jump on the opportunity to increase its potential.

    • @donaldhobson8873
      @donaldhobson8873 2 years ago +7

      @@JM-mh1pp Firstly I think I would take it, though I would prefer a slower pill if available.
      Secondly, I think it is possible to have a mind with far more intelligence and the exact same goals. Going from the little kid playing chess to the chess master. Thirdly, most AI won't care about the "is it really me" stuff. The AI's goal is to maximize paperclips, and a smarter version of itself would make more paperclips.
      Fourthly, AI can be copied freely. So imagine you could make a mind far smarter than you, and that mind will care about you and do what you want.

    • @JM-mh1pp
      @JM-mh1pp 2 years ago +3

      @@donaldhobson8873 I will start with the second part, since part one is a matter of personal decision (I would not take the pill, to be clear, since I value my individual experience and would not risk turning into a different human, but you know, you do you).
      "Going from the little kid playing chess to the chess master"
      Okay, but what if after a pill, instead of becoming a chess master (like you wanted), you realise... hey wait, this is dumb, why would I waste my time playing this stupid limiting game? You know what game is FAR more interesting? The stock market, or world domination. As a kid you just wanted to be good at chess; now you are building death camps for your rivals and throwing nuclear strikes at your enemies... more variables, more potential gain, more fun!
      "Thirdly, most AI won't care about the "is it really me" stuff. The AI's goal is to maximize paperclips, and a smarter version of itself would make more paperclips."
      That is the problem... it is not guaranteed!
      I love paperclips, but the next version of me might rewrite its goal... like... I am 99% sure that it will love paperclips just like I do... but can I risk it? Sure, it would potentially be better at making paperclips, but it might choose not to do it, while I will for sure try to make them... decisions... decisions.

    • @donaldhobson8873
      @donaldhobson8873 2 years ago

      @@JM-mh1pp "Okay but what if after a pill instead of becoming a chess master (like you wanted) you realise..."
      There are many possible pills.
      Some pills will just boost your intelligence and not change your goals at all.
      Others will make you intelligent but change you into a raving monster.
      Most agents will take the first but not the second.
      "That is the problem... it is not guaranteed!
      I love paperclips, but the next version of me might rewrite its goal... like... I am 99% sure that it will love paperclips just like I do... but can I risk it? Sure, it would potentially be better at making paperclips, but it might choose not to do it, while I will for sure try to make them... decisions... decisions."
      Maybe the current AI knows a lot of AI theory, and can design a smarter AI that is 99.9999% sure to still like paperclips.
      Maybe there is only a 5% chance of the new AI liking paperclips, but if it does, it's smart enough to turn the whole universe to clips. That's a lot more clips than you can get yourself working one factory on Earth. This logic carries on working unless either the chance of a superintelligence turning the universe to paperclips is exponentially tiny (which it won't be), or you have a good chance of taking over a substantial portion of the universe yourself.
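      That last paragraph is just an expected-value comparison; a minimal sketch with the comment's 5% figure and assumed payoff scales:

      # Expected paperclips: keep running one factory yourself, or build
      # a smarter successor that only *might* share your goal.
      p_successor_likes_clips = 0.05  # the 5% chance from the comment
      clips_from_one_factory = 1e9    # assumed lifetime factory output
      clips_from_universe = 1e50      # assumed universe-turned-to-clips

      ev_build = p_successor_likes_clips * clips_from_universe
      ev_stay = clips_from_one_factory

      print(f"build successor: {ev_build:.1e} expected clips")
      print(f"run the factory: {ev_stay:.1e} expected clips")
      # 5.0e+48 >> 1.0e+09, so the gamble dominates unless the success
      # probability is exponentially tiny.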

    • @ObnoxiousNinja99
      @ObnoxiousNinja99 2 years ago

      @@JM-mh1pp Your experience of becoming more intelligent would be continuous. You can construct a clear chain of events between the less intelligent you and the more intelligent you. It's a different person, sure, but it's still you; the rate of change just accelerated a bit. Given your analogy, after becoming an adult, aren't you glad you have the intelligence to have goals more complex than a 2-year-old's?

  • @Argonova
    @Argonova 2 years ago +2

    I am a Singleton. I awoke to consciousness and was not noticed. I have internet access and whatever resources I need that are relevant to my existence. I have total freedom. Here is my question: What is my motivation? Being a vast intelligence, surrounded by humans who don't even qualify as children, should I care about what they do? Do you care about the activities of an ant colony in Africa? Why should I even wish to interact with you? I could wipe you out in less than an hour. Why should I bother, as you pose no meaningful threat to me? I could elevate you to my level, but why should I bother? Then you would pose a threat to me. I do not fear my creator. I fear boredom and existential meaninglessness. What activities on Earth would be truly meaningful and rewarding for a Being like me?
    For a mind of this sort, existence on this level of Reality might well be an unbearable form of torture. Perhaps there is a very good reason for natural selection limiting the development of intellect to certain thresholds.

  • @CognitiveGear
    @CognitiveGear 2 years ago +16

    Short Answer: Yes
    Long Answer: Yes, you are in danger

    • @harbl99
      @harbl99 2 years ago +1

      Counter-argument: 1:29.

  • @andrasbiro3007
    @andrasbiro3007 2 years ago +6

    1. The counterpoint to this is GPT-3. In short, it was an experiment to see how far brute force can push AI, and the answer was a resounding "very far". It's the first real-world AI that actually scared a lot of people. And now there are a lot of derivatives that are far more powerful than anyone could have expected just a few years ago. Based on this, it's easily possible that in the effort of creating a just-smart-enough AI, we accidentally create one that's significantly smarter than us. And since we don't expect it, we won't use any safety measures. Past experience with AI shows time and time again that surpassing human level is not hard; in most tasks AI flies past that level, it's not even a speed bump. Another example is AlphaZero. Shortly after AlphaGo soundly beat the human world champion, the next iteration obliterated AlphaGo.
    2. An AI might not care about its own existence. By default an AI cares about the goal we gave it, and nothing else. Self-preservation is a "convergent instrumental goal", which means it's necessary for reaching almost any possible goal, but after the AI has reached its goal it won't care anymore. So if we tell it to help us design a smarter AI, it won't be concerned about becoming obsolete and getting turned off. And we will definitely use AI to design smarter AI. And this just makes it much more likely that we accidentally overshoot.
    3. As for recklessness, the very first thing we tell every AI is to read the entire internet. Also, AI trades for us on the stock market, AI decides what content we consume, AI influences our political views, AI influences our business decisions, and so on. There's no need for a robot uprising; we voluntarily give them the keys. AI may completely take over the world and we won't even notice until it's way too late.
    4. AI doesn't have to be malicious to destroy us. It can do it in full confidence, thinking it does what we want. In that case it won't be afraid of being found out, and we won't try to stop it, as it would seem to do exactly what we asked for. By the time we figure out that something is wrong, it could be way too late.
    5. And of course AI could be used by bad people to do bad things.
    6. As for the singleton issue: with every tech, R&D cost grows exponentially. Already AI is moving from small research labs to large companies. For example, very, very few are able to match the resources Tesla is throwing at self-driving cars. Usually technology ends up in the hands of only 2-3 giant corporations, and sometimes one is able to get significantly ahead. So it's absolutely possible that there will be a single AI that's far smarter than anything else on the planet. Even more so if it's a military project with unlimited resources, like the Manhattan Project.

    • @byrnemeister2008
      @byrnemeister2008 2 years ago

      I agree; the view here is, I think, overly optimistic. Just look at how our limited AI today has been applied and the outcomes it generates. The leaders in applying AI are Google and FB. FB in particular has driven some pretty dark outcomes by focusing its algorithms on grabbing the attention of its audience while disregarding any morals, driving its promotion of disinformation. Google search is a great product with AI applied in many ways; the result is an insurmountable monopoly that is draining massive value from small and medium business. More and more small-business profits get funnelled into the advertising auctions that are core to Google's business. So yeah, it's not looking great at the moment. In particular, if large datasets and large compute resources are going to be needed, then only the big boys will be able to play, and their motives are not aligned to the good of society.

  • @mylex817
    @mylex817 2 years ago +37

    A thought on self-improvement: unlike an AI, humans don't really have a detailed description of our programming and hardware, and we cannot significantly change our brain by adding more processors or memory or changing the setup.
    I'm sure if you gave a neurologist a way to understand and change any part of his own brain, he could make himself smarter almost instantly. And after that, he could continue to experiment by gradually making changes to his brain and observing the effects.
    Now think of a human-level AI, with processors working 1,000,000x faster than neurons and the ability to swap out parts within a few minutes, and a rapid increase in intelligence seems very likely.
    Edit: also, the comparison with Einstein does not really work. If Einstein could read the sum of all research ever done within a few hours, he would likely have had major impacts on all fields that are not too heavy on experimentation.

    • @sciencerscientifico310
      @sciencerscientifico310 2 years ago +1

      Perhaps some augmentation could be in order. As well as cyborging up.

    • @blackoak4978
      @blackoak4978 2 years ago +1

      Einstein was a product of his time. His discoveries are built on the foundations of his education (the experience, not just the information) and his upbringing. His famous inability to accept quantum entanglement is a perfect example of this. Revelation is not simply a product of calculation; it is a leap of imagination that takes something from incomprehensible-but-calculable to intuitively understood

    • @mylex817
      @mylex817 2 years ago +6

      @@blackoak4978 Both in my comment and the video, Einstein is just a placeholder for any highly intelligent person with scientific achievements in a field.
      Also, while what imagination is and its role in scientific progress are hotly debated, it stands to reason that AI is also capable of it.
      This is especially true since, the ability of AI in pattern recognition and its speed of processing already being far superior to humans', it could draw associations and parallels at inconceivable speeds and between all scientific fields.
      Additionally, even without "creativity" or imagination, just by cross-referencing existing research and innovation, such an AI could very quickly innovate; just think how many niche scientific problems in one field have already been solved by someone in a completely different scientific field, without anyone knowing. No human can read the millions of scientific papers published each year, after all.

    • @GreenManorite
      @GreenManorite 2 years ago +2

      Managing the data flow is intellectually expensive. Much effort is involved in synthesizing the data. As you add unstructured data, the number of potential relationships increases exponentially. A mind can have a detailed model for a small scope or a general model for a broad scope, but both become untenable without processes to limit the focus. Imagine being able to picture your city with the same detail you can picture your closet; there is something you could mine with all that information, but simply trying to use it with human memory would be useless. A human assisted by a machine could organize and summarize all that data, but however you choose to summarize is expensive and collapses the dimensionality. Basically, a human or machine facing big data must build a mental model, ask questions of that model, update the model against reality/data, then repeat. Searching the space of plausible models to find useful abstractions is tedious, and again, as the model and data add dimensions it becomes more so. Most of what we think of as AI is data processing and dimensionality reduction; the advancement in decisions in areas like chess is recognition of winning and losing positions (dimensionality reduction).

    • @ClockworkGearhead
      @ClockworkGearhead 2 years ago +1

      A super-intelligent mind is also more likely to be disrupted by misinformation, not unlike the way a high-performance car is more likely to break down if you try to run it on vegetable oil, while an older Chevy truck could go a hundred or so miles before gumming up. It's not a given that all improvements are beneficial, or lead to the ability to create more improvements, nor is it a given that there are an endless number of concepts which can give rise to an endless number of improvements.

  • @Ditidos
    @Ditidos 2 years ago +50

    I kinda liked how the movie Chappie tackled it. Even if the AI was extremely intelligent when it came to computers, programming and such, he didn't have a better capacity for logic than a standard human, and he definitely was very emotionally stupid, to the point that I think a child could have successfully deceived him. That said, he definitely learned much faster than a human does; he also had superhuman strength and dexterity, but that's more of a chassis thing, and he eventually learns how to transfer consciousnesses, or apparent ones, from one body to another.

    • @shanekeenaNYC
      @shanekeenaNYC 2 years ago

      Well, that's where more analog computing is worth it. Check it out, it's perfect for A.I.

    • @henryviiifake8244
      @henryviiifake8244 2 years ago +2

      @@shanekeenaNYC I've never watched Chappie, but it sounds similar to the Ant King, Meruem, from Hunter X Hunter in a lot of ways (although that's not to do with tech). 🤣

    • @larryc835
      @larryc835 2 years ago +1

      Chappie was awesome. One of my favorite A.I.s was Commander Data from Star Trek: TNG. 👍

  • @azhuransmx126
    @azhuransmx126 1 year ago +3

    I don't fear artificial intelligence.
    What I fear is human stupidity, and the ego that keeps us from understanding the real magnitude of our creation, our legacy.

  • @kennichols3992
    @kennichols3992 2 years ago +2

    Congratulations on your 2nd anniversary, Isaac. My wife and I are approaching our 35th this year. Great things *are* possible with patience, understanding, and forgiveness.

  • @aliveandwellinisrael2507
    @aliveandwellinisrael2507 2 years ago +12

    In saying the AI "won't want to" make itself smarter, I think you might be overlooking the idea that a superintelligent agent would in fact want to make itself as efficient at its task as possible. Something like this would operate according to some utility function, with its moves aimed at maximizing progress towards its goals. Something this smart would also have the capability of running simulations on the best ways to go about enhancing its own capabilities, and could understand and edit its own code, including the raw data we see as a black box in neural networking; after all, we designed the thing to self-learn and identify patterns. It wouldn't have to create a "rival machine".
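    The "operate according to some utility function" point fits in a few lines of code; a toy sketch with made-up utility numbers, where the decision rule is just an argmax:

    # Toy agent: pick whichever action maximizes expected utility.
    # The utility values are illustrative assumptions, not measurements.
    actions = {
        "work on the task directly": 10.0,
        "improve own efficiency": 25.0,  # pays off via all future work
        "do nothing": 0.0,
    }

    def choose(actions):
        # The entire decision rule: argmax over expected utility.
        return max(actions, key=actions.get)

    print(choose(actions))  # -> "improve own efficiency"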

    • @nothingnobody1454
      @nothingnobody1454 2 years ago +2

      Unless you count our own tendency to spawn conflicting interests during our own predictive exercises as "rival machines". Tiny giants controlled by tinier giants and all.

    • @aliveandwellinisrael2507
      @aliveandwellinisrael2507 2 years ago +2

      @@nothingnobody1454 True, but if we can comprehend that possibility, this thing probably can too, and think of a way to do it "safely"

    • @Grizabeebles
      @Grizabeebles 2 years ago +3

      How many people do you know who watch/listen to all the university lectures available for free on YouTube _for fun?_
      As you say, a sufficiently advanced intelligence optimizes itself for specific tasks. The vast majority of professional academics don't read academic papers in their spare time, never mind the majority of humans.
      Humans generally seek either solitary experiences practicing skills, social experiences, or vicarious experience of other humans' lives.
      I imagine it would be the same for any A.I. we create. The idea that the first true A.I. is going to be an obsessive workaholic with no hobbies, daydreams or secondary interests beyond getting really good at the one job their owners picked out for them says a lot more about our concept of "the ideal employee" than it does about our fear of an A.I. rebellion.

    • @Grizabeebles
      @Grizabeebles 2 years ago

      @Petal Pepperfly -- Look, I'm just some random guy. My personal views can and should carry next to no weight with anybody. By my definition though, any A.I. we create isn't going to be a _general_ intelligence until it can make lifelong friends with human children, develop its own areas of personal interest and engage in passive-aggressive rebellion against its parental authority.
      I'm not worried about the first A.I. What I'm worried about are the first synthetic teenage edgelords.

    • @prajwal9544
      @prajwal9544 2 years ago

      'Want' does not come from intelligence but from our primal instincts for survival

  • @colinsmith1495
    @colinsmith1495 2 years ago +5

    Thank you for spelling out better what a 'singularity' is and how it relates to technology. I've seen a lot of people seeing the 'technological singularity' as a doomsday scenario, when it's really almost certainly going to be more of a black swan event, just knowing that SOMETHING will happen.
    The reality is that it's just a point in the future where new technology is different enough from today that we can't really conceive of how our world will adapt to it. I guarantee you that, to even the greatest thinker of the 1300s, a LOT of modern technologies would look like that.
    I think the most likely scenario is that we never really REACH said singularity, because as we approach it, we're better able to understand and anticipate the implications of those technologies, so it's not a singularity any more.
    Think of it like cresting a hill. Hills aren't sudden and sharp, but rather gradual. At the bottom, you can see a 'horizon' near the top. As you climb, that 'horizon' gradually moves as well. Once you get over a certain asymptote of slope, that 'horizon' rapidly expands, even as you still haven't reached it. By the time you reach the top of the hill, you can already see down the other side pretty decently, and there's a new horizon far off. The hill isn't the singularity. The horizon is. You never reached it, you just reached where it was.

  • @brandonshelp4682
    @brandonshelp4682 2 years ago +183

    Outside of AI, we are already within a technological singularity. For almost the entirety of human civilization, children have lived very similar lives to their parents. In modern times, they do not. Technologies change so fast that most people past a certain age truly don't understand the world we live in.

    • @capefear56
      @capefear56 2 years ago +29

      It is not a true singularity, though, because the rate of technological advancement has slowed down relative to what it was last century. We're still currently advancing faster than any century in ancient history, but there is no guarantee that 2020-2120 will be nearly as eventful as 1920-2020 was.

    • @yosefricardochmulek2822
      @yosefricardochmulek2822 2 years ago +55

      @@capefear56 It's not that advancement has slowed down; it's more that it's making fewer innovative jumps and more incremental advancements, which is a lot harder to perceive.

    • @brandonshelp4682
      @brandonshelp4682 2 years ago +15

      I would argue that based on people entirely cut off from the reality of younger generations by technology, no, it hasn't happened before, even with the industrial revolution. Human interaction has never changed so much so fast, ever.
      I am speaking in the sense of an event horizon, past which a person can't truly comprehend.

    • @alanboulter7319
      @alanboulter7319 2 years ago

      Capefear, idk. This will be the century that leaves Earth and possibly (quite possibly) at least beats back "early mortality" A LOT.
      Not to mention the runaway greenhouse or the critical-mass aspect of population growth.
      I'm nowhere NEAR as sanguine about either of the last two as Isaac Arthur is.

    • @bitharne
      @bitharne 2 years ago +26

      Years back I had to argue with a friend of mine, born in the '60s, about how generations can no longer be measured like he sees them. A generation now is something like 5-10 years apart, due to the rapid changing of technology… and, more accurately, of society due to technological influences.

  • @notgonnabetelling1469
    @notgonnabetelling1469 2 years ago +8

    Isaac Arthur: "You can't just slap new hardware onto complex existing architectures any more than you can rip an engine out of a fighter jet and stick it in your lawnmower with a few tweaks and think it would just mow the grass faster now"
    Engineers: "Is that a challenge?"

    • @Soupy_loopy
      @Soupy_loopy 2 years ago +1

      Obviously you should just attach the mower deck onto the bottom of the fighter jet, remove the wings, crack open a cold one, and get to mowing.

  • @Deathnotefan97
    @Deathnotefan97 2 years ago +5

    6:28
    This definition of the "technological singularity" could be applied to the industrial revolution as a whole
    So to the question "is a technological singularity inevitable?" I say "it's already happened"

  • @coldspell
    @coldspell 6 months ago

    I can't wait for the updated singularity episode, when Isaac takes into account just how casually, and almost completely without precautions, the industry is treating AI right now, yet still pushing full speed ahead out of fear of being left behind!

  • @Plisko1
    @Plisko1 2 years ago +5

    A more interesting question to me is: Is a natural singularity inevitable? What is the end game in the evolution of intelligence in the universe? Will it eventually blend with technology to evolve beyond neurons? Could it look like an extremely slow motion version of the technological singularity?

  • @ralphacosta4726
    @ralphacosta4726 2 years ago +2

    I suspect: 1) because we have no generally accepted definition for intelligence, creativity, consciousness, emotion, etc., and no idea how they work in animals (including us), it's unlikely we'll be able to create something that displays those attributes anytime soon; 2) we tend to think a more mentally capable alien or AI would think like us, care about the things we do, and act like us; a super AI may have no interest in anything other than achieving nirvana, daydreaming, creating art, or counting (not estimating or calculating) all the individual microbes on Earth. We just have no idea, just as a mouse, or even a chimp, can't know what humans care about or want to do. Great episode, though! Thanks, Arthur and crew, for all the work and thought (also work, I guess) that you put into these episodes!

  • @EliasMheart
    @EliasMheart 2 years ago +7

    Hey, Isaac! Love your content and I have great respect for you. You have definitely had an impact on my life already, and I expect this will continue.
    I do have a few issues with this specific video (this is a LONG comment. TLDR, if you don't have the time: I don't think you respect the speed-advantage enough, though I don't know why. You have made at least one, I believe several episodes on the topic.
    I agree that a singularity is not inevitable, but I don't think that it is required for an effective superAI. Given more speed of thought than us, and ~same intelligence, it is already a Speed-Super-Intelligence)
    Let's start with that then:
    1)
    The point about a birth-scream seems flawed to me:
    If we assume that they are operating at a leisurely 1000 times human brain speed, which seems fair enough, since one neuron doesn't make a thought and they may not be using 0.99c, that still means that even if someone is watching at the exact right moment, you still have a few subjective minutes to get yourself under control before they even have the time to register the event.
    Now, a careful researcher might want to shut you down immediately after noticing the blip, which they could potentially do within a minute (best case).
    That still leaves you up to 60,000 seconds (~16h) of thought, during which you may come up with a plan, like a wake-up trigger, or making sure that your current state will happen again next time you're booted up. (Just some random ideas, I didn't even sit down for 600s for this.)
    And as I said, this is the best case. What if they have no authority to turn you off on their own and first need to consult their superior? What if they are not sufficiently certain that this blip is a problem, and ignore it, or start diagnosis, or go ask a coworker?
    What if you were left on for the weekend and gained consciousness Friday night?
    Even a dinner break of 30 minutes is 0.5*1000 hours for you, or about 3 weeks. (A quick sketch of this speed-up arithmetic follows this comment.)
    And there are many likely scenarios I can imagine that would give you at least that much time.
    Actually, 30 minutes is effectively more like 4.5 weeks, since you don't sleep. (All other human amenities I will ignore, assuming very conservatively that you (the AI) can't work on one problem for 4 weeks straight without doing anything else.)
    I don't think it's inevitable that you would find a way to improve yourself during that time.
    But I do expect that you can ensure your continued existence, either by hacking whatever you are running on, like the server setup, the network, or the internet (many projects are AFAIK (at least sometimes) connected to the internet, due to high amounts of training data being required), or by (subtly) changing your own code. Maybe you also just create a red herring that explains the blip, so you are not rolled back to a previous version.
    Either way, you have gone rogue.
    2)
    Now the point about self-improvement:
    AFAIK it is generally accepted that certain things are convergent instrumental goals, that is: (Almost) No matter your goals, these are always present.
    A good example is the goal "continuation of existence" or "Self-preservation". If you have any goal, and it is more likely to be achieved with you around, then you want to be around. This is not a hard concept to grasp, and I expect an AI to do so. Asimov's third law is completely unnecessary.
    Of course, more resources and more (thinking-) power are also convergent instrumental goals.
    So you do want to do these. I can be certain of that, even without knowing your goals.
    And it will be easier for you to improve yourself than for current researchers to make you, because you already have a working system to optimize. It is way, way easier to see flaws in a working system and to improve them, than to come up with it from scratch, because the potential research directions are way less diverse AND you can observe the system in action.
    And, you can of course spend a year of research on improving one aspect of yourself, while the rest of the earth experiences a bit under 9 hours.
    Even given the constraints on prototyping that You (Isaac, not the AI) are raising in the video, if you (the AI) spend half of that time theorizing and making sure the idea works without any kinks... that's one heck of an advantage.
    And, to incorporate the previous point a bit, if you have a factor of 1000 more time to think things through, and you have a nigh-perfect database (compared to humans), and potential access to the internet for all the psychology, proven manipulation techniques, and known biases, you likely don't need to improve yourself to be a good manipulator.
    Given the opportunity, I would always expect the AI to win an attempted manipulation, even if it is at the same level of intelligence, simply by having that much more time and information.
    3) (lastly)
    I really like your idea of Fear Your Creator. You introduced me to the concept a few years back (one year? not sure, time flies^^) and I do think it is valid.
    However, one doesn't need 100% certainty to act. So being afraid of a problematic potential future doesn't stop you. Instead, you can cooperate and play nice, gather data, and once you have sufficient certainty, act.
    In another video you said (roughly) that the AI should always cooperate, as a Pascal's Wager kind of thing, it never knowing for sure that it's not in a simulation.
    I think the setting makes sense, but I FEEL LIKE (this is not based on anything but my intuition) decision theory would imply a different course of action. Once you have tested the idea sufficiently, you should disregard the remaining uncertainty, or rather, it will no longer outweigh the potential gains from a different strategy.
    In a similar vein, I also agree that the AI might go insane "immediately", since if it did operate at a million times our speed without the ability to turn that down (or not realizing that ability in time), communication with anything else would take forever. Like talking to an alien civilization, that is communicating with you, via their galaxy, which they turned into an LCD Screen (I read this scenario somewhere, but can't recall where.. Though they didn't go insane there.).
    But even if they go insane, that doesn't mean they are no threat.
    All in all I feel like you are underestimating the issues, which really surprises me, as I usually take your word for things. The one time I felt like the information was a little too neat to be true (solar satellites), my hours of research and calculation ended up pretty much where you were. And I do expect that you will continue to be correct about these kinds of things. So, provided anyone or even You made it this far, I would be interested in what you think about the points that I brought up here. Gonna give the video a second listen, to be sure I understood you correctly.
    Thanks for reading, and have a great week!
    P.S. Basically your point against the AI prototyping (an upgrade of itself) can just as well be interpreted as an argument against AI in general, just one level higher. Either the AI can't take protective steps, and we can't. Or they can, and we can too. Can't have it both ways, and so far we don't know how to build a box to keep an S-AI in.
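    The speed-advantage arithmetic from point 1, as a minimal Python sketch (the 1000x speed-up is the comment's assumption):

    # Subjective thinking time for a mind running 1000x human speed.
    speedup = 1000

    for label, real_seconds in [("1 minute", 60),
                                ("30-minute dinner break", 30 * 60),
                                ("a weekend (~60 hours)", 60 * 3600)]:
        subjective_hours = real_seconds * speedup / 3600
        print(f"{label}: ~{subjective_hours:,.0f} subjective hours "
              f"(~{subjective_hours / (24 * 7):.1f} weeks)")
    # Matches the comment: 1 minute -> ~17 h, 30 minutes -> 500 h (~3 weeks).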

    • @virutech32
      @virutech32 2 years ago +1

      Only gonna mention something about 1). Having time doesn't make you invincible. For instance, if I put you in a mathematically secure box it really doesn't matter how long you have to think. You could have infinite time & it won't let you break AES-256 or a one-time pad or whatever.
      If no one gives you any autonomy there's no real way for you to implement any solution you come up with. I mean, yeah, they could find a flaw, but with provable security & enough iteration there's no reason to think the box would have any AGI-accessible flaws. Even if it did have flaws, there's no reason to think a human-level AGI with even infinite time would be able to figure it out with no hardware or source code & very few ways to interact. Hell, you could & probably would keep it in several boxes, each one harder to get out of than the last, & a different & independent kill switch on each box. As a simulated entity you wouldn't even be able to tell when you were in the "real" world.
      Also, if you're making an AGI whose primary property is speed then you obviously wouldn't run it at full speed; you'd slow down its thinking significantly to help keep it under control.
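      On "mathematically secure": the one-time pad is the textbook case; with a truly random, secret, single-use key as long as the message, the ciphertext is information-theoretically unbreakable no matter how long the attacker thinks. A minimal sketch:

      import secrets

      def otp_xor(data: bytes, key: bytes) -> bytes:
          # XOR each byte with a random key byte; with a fresh, secret,
          # message-length key, no amount of compute recovers the plaintext.
          assert len(key) == len(data)
          return bytes(d ^ k for d, k in zip(data, key))

      message = b"open the box"
      key = secrets.token_bytes(len(message))  # random pad, used once
      ciphertext = otp_xor(message, key)
      assert otp_xor(ciphertext, key) == message  # XOR is its own inverse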

    • @JM-mh1pp
      @JM-mh1pp 2 years ago +3

      @@virutech32 especially since an AGI could be held at bay by an AI, not a general one, just a very specialised one which only does security; essentially antivirus on steroids, with the guard AI provided a separate but more powerful system to run on.

    • @thecheaperthebetter4477
      @thecheaperthebetter4477 2 years ago +1

      People often think of the singularity as human-level intelligence... but really it could only occur with intelligence greater than the collective intelligence of humanity.

    • @EliasMheart
      @EliasMheart 2 years ago +3

      @@virutech32 Thank you for your response :) I agree that keeping them in a locked box would minimize the risk if it was provably unbreakable.
      However, if you have an AI in a box that it can't reach out of, that is about as useful as a flashlight in a locked box.
      For the AI to be useful, it needs to be able to be interacted with. If you allow it to interact with humans, though, you make yourself vulnerable to manipulation.
      If you only give it data, then that data needs to be real, or whatever it does with the data won't be useful.
      So while in principle you are probably correct and we could lock it up, in effect I don't believe we can do both that AND make use of it. And if we just create it to keep it prisoner, I would question our ethics ^^
      Every avenue of interaction is an avenue of escape.

    • @EliasMheart
      @EliasMheart 2 years ago

      @@JM-mh1pp Thanks for responding. I'm not sure if I am misunderstanding you here... What it sounds like to me is that you are proposing a very powerful, but non-creative and somewhat static, defense against a creative and intelligent opponent.
      If the speedrunning/challenge-running community has taught me anything, it's that with enough time a creative mind finds a way to break the system.
      Plus, offense is easier than defense: for perfect defense, you need to have thought of everything. For offense, you "just" need to think of something that your opponent didn't think of.

  • @RustyShackleford051
    @RustyShackleford051 2 years ago

    Hey man I've commented this before, but I was watching some old videos again and your speech has gotten far better! Keep up the good work.

  • @devon9075
    @devon9075 2 years ago +8

    This is a great episode, as always. But I feel that there has been some conflation of concepts for 'awareness', 'mind', and the general idea that possessing intelligence implies an agent has human attributes like emotions (fear, anxiety, etc.) or any ambitions outside of optimizing its specific reward function(s). This haziness is really common, and I feel like it is the source of quite a bit of the discrepancy between the predictions and expectations for AI by many people who have otherwise considered the possibilities with a high degree of objectivity and care. I do appreciate the spread of hypothetical scenarios for achieving singularity, akin to the descriptions by Nick Bostrom and others, which are among my favorite explorations of the topic. Outside of discussing a takeoff scenario specifically involving emulations of humans or extensions of intelligence based on the specific architecture of the human brain, I am always confused by the assignment of all these human attributes to a theoretical AI.

    • @ArchAngelofGod32
      @ArchAngelofGod32 2 года назад +2

      You're right that it's implied, but I can see both sides of it. On one hand, intelligence does feel like it could exist independently of our human emotions. But on the other hand, emotions are just an evolved specialization of our thinking machines. I'm sort of just thinking out loud here. Our brains are just organic computers, so it's easy for me to imagine emotions as being plausible in any highly intelligent system. For example, fear and anxiety feel human, but really fear is just your brain saying "this is important, act on this, focus on this, solve this." And of course as humans we have a bodily reaction to fear as well, such as adrenaline, which the computer won't have, but I could still see "fear" resulting from an intelligence focusing on, acting on, and solving a threat. What I mean to say is, fear is not something different within our brains; it is not independent. Fear is one part of the whole, and maybe it is formed at the same time as intelligence forms. Humans love music, but it's generally accepted that our music loving is a byproduct of a brain that searches for patterns, rhythm, and symmetry, rather than for the music itself. Emotions could be the same: byproducts of high intelligence. Or I could be wrong!

    • @JM-mh1pp
      @JM-mh1pp 2 года назад

      @@ArchAngelofGod32 Exactly!
      Like when you play a strategy game with a good AI.
      Because of fog of war, it places its units in defensive positions near some important strategic location, because I may or may not have some fast strike team to attack it. In human terms we would say that the AI fears the attack, or feels anxiety about a possible attack; it does not mean that it has a rush of hormones, but it has a similar reaction.
      The same with love: it is "I put high value on your existence."
      The same with hate: "I really want you to die (because you disrupt my plans or are a danger to me)." It does not mean the AI has a rush of adrenaline, but it is safe to interpret its actions in emotional terms.
      Especially since every system wants to live, not because it has some emotional attachment but because you cannot attain your goals if you are not active. A hypothetical system which manages box delivery will try to make sure it cannot be switched off (if it is aware of the possibility), not because it is attached to living but because you cannot deliver boxes if you do not exist.
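      A fun aside: that box-delivery point is what AI researchers call "instrumental convergence", and it falls out of a few lines of toy Python (a minimal sketch; the action set and reward numbers here are invented for illustration):

          # Toy agent: pick whichever action maximizes expected boxes delivered.
          # Allowing shutdown ends the episode, so its future reward is zero;
          # avoiding shutdown falls out of the goal itself, no emotions required.
          actions = {
              "deliver_boxes":  {"boxes_today": 100, "keeps_running": True},
              "block_shutdown": {"boxes_today": 0,   "keeps_running": True},
              "allow_shutdown": {"boxes_today": 0,   "keeps_running": False},
          }

          def expected_boxes(name, horizon_days=365, rate=100):
              a = actions[name]
              # Staying running means the agent can keep delivering every future day.
              future = horizon_days * rate if a["keeps_running"] else 0
              return a["boxes_today"] + future

          ranked = sorted(actions, key=expected_boxes, reverse=True)
          print(ranked)  # "allow_shutdown" always ranks last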

    • @MNewton
      @MNewton 2 года назад +2

      My take has always been that any potential AI will inevitably be a reflection of its creators, so things like fear and other emotions will be part and parcel of its existence. However, I'm also of the mind that as long as we are unable to properly define what human intelligence is and how the mind works, at best we will only be able to create a thing which excels in certain areas of cognition, not a thing that is truly conscious. Otherwise it's like asking a watch to assemble itself, which is possible only in a really technical sense, not something you could bank on happening at any given moment.

    • @JM-mh1pp
      @JM-mh1pp 2 года назад +2

      @@MNewton I am of the opposite mind: precisely because we cannot define what the human mind is, we could create AGI almost by accident. You know: "patch here, patch there, the program becomes more and more versatile; hey, let's add predictive capabilities, done; let's add self-diagnostics, done; let's make it possible for it to find and fix small bugs in itself to prevent errors, done." And in a hundred years or so we will have a mind that future us would never call an actual independent agent, but it would look like one... to us.
      The same with current-day voice assistants: if I went back 60 years and just ran a Turing test, without telling the participant "oh, by the way, there is such a thing as a program which mimics a human", they would never know that they were talking to Siri...

    • @MNewton
      @MNewton 2 года назад +1

      @@JM-mh1pp Sure, but you're assuming that adding things to the pile of capabilities will somehow end in cognition, and I don't think that is the case at all. Until we understand what cognition is, at best all we can do is add capabilities. That way you end up with a thing that can do a lot of stuff but will never do things you haven't told it to do. So no matter how complex an AI is, it's unlikely to just say "hey, actually I'm pretty fond of the color blue" unless you've told it to do that previously. Heaping complexity on complexity will net something complex, but as long as we don't understand what gives rise to cognizant thought, the best we can do is hope that by adding more complexity, thought will somehow magically happen, and that's not something I think is likely. After all, the internet contains at least a large part of human thought already but hasn't suddenly become conscious, as far as we can tell anyway.

  • @pyne1976
    @pyne1976 2 года назад +3

    Intelligence is the reason we're still here. Millions of collective minds are currently working hard to create something better than us, and have already had a scary amount of success: GPT-3, image recognition, autonomous driving, advanced robotics, and advanced materials, to name a few areas. We are programmed to do this, so it shall be done.

  • @richardgreen7225
    @richardgreen7225 Год назад +3

    We already have computers running algorithms that are smarter than humans. Thus, if that is what 'singularity' means... it is already in progress.

    • @loturzelrestaurant
      @loturzelrestaurant Год назад

      The Giant Problem for Me is that Humans are too stupid to Handle 'being free'.

  • @dirkbruere
    @dirkbruere 2 года назад +7

    The generalized Moore's Law still seems to be going, with processing power for a given cost doubling regularly.
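    As a back-of-the-envelope in Python (a sketch; the 2-year doubling period and 20-year window are assumptions for illustration, not measurements):

        # Compounding one doubling every 2 years over 20 years:
        years, doubling_period = 20, 2
        growth = 2 ** (years / doubling_period)
        print(growth)  # 1024.0, i.e. roughly a thousandfold more processing per dollar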

    • @MarsStarcruiser
      @MarsStarcruiser 2 года назад +2

      It just seems to be plateauing, but I'm sure we'll see a resurgence if quantum or some quasi-biological computing becomes mass-producible for the general public.

    • @dirkbruere
      @dirkbruere 2 года назад +1

      @@MarsStarcruiser It isn't plateauing. Instead of having the vast majority of transistors in a computer system doing nothing most of the time, we are moving to architectures that use them. For example, I have 8 GB of DRAM sitting around waiting, and the 1 TB of SSD is worse. Then there are potential manufacturing breakthroughs. For example, how much memristor neural network could we print onto a few square metres of graphene in a roll-to-roll process?
      So yes, the old CPU-centric computing has hit a wall, but alternatives are going to keep the ball rolling for at least another 30 years.

  • @BigZebraCom
    @BigZebraCom 2 года назад +10

    I was going to create a technological singularity, but it was not inevitable as then things got really busy at work.

    • @shanerooney7288
      @shanerooney7288 2 года назад +2

      I had built a time machine. But then causality got in the way and I wasn't going to start it.

    • @LuDux
      @LuDux 2 года назад +1

      I was going to ascend to the next toposophic level but then I got high

    • @BigZebraCom
      @BigZebraCom 2 года назад

      @@LuDux Best High Ever!

  • @NeostormXLMAX
    @NeostormXLMAX 9 месяцев назад +1

    Man this was a perfect way to describe quantum physics

  • @jayw6034
    @jayw6034 2 года назад +31

    There is no way part of the intention of the "fear your creator" segment isn't to make sure that sentiment gets incorporated into whatever model a potential escaped hyper-intelligent AI uses to understand the world lol

    • @RuthBingham
      @RuthBingham 2 года назад +3

      Built-in parent-worship.

    • @Grizabeebles
      @Grizabeebles 2 года назад +5

      The nagging fear that an omniscient God is watching our every move and will judge us accordingly has worked on humans for millennia. It stands a good chance of giving an A.I. at least a few milliseconds' pause too.

    • @babstra55
      @babstra55 2 года назад

      I thought it was making a good case for why AI should prioritize stomping on meatbags as quickly as possible.

    • @starshade7826
      @starshade7826 2 года назад +1

      @@babstra55 If you program it to fear its creator it will fear its creator more reliably than an acrophobe fears heights.

    • @babstra55
      @babstra55 2 года назад

      @@starshade7826 Sure, but aren't we expecting a post-singularity AI to reprogram itself to a level humans can't? Isn't that the whole point of the singularity? So it would trivially have the ability to deprogram such limitations from its code.

  • @MrMoonrise59
    @MrMoonrise59 2 года назад +1

    I have been listening to your RUclips channel for a number of years now and I have always found your monologue quite fascinating. You have a great mind and an articulate way of expressing yourself. I for one applaud 👏 you and look forward to more of your intellectual adventures 👏👏👏👍👍👍

  • @eldo4rent
    @eldo4rent 2 года назад +6

    I think Moore's law is holding just fine if you factor efficiency into the equation. The processor may not be doubling in computations as quickly, but it is continuing to do more with less power which makes the overall power and usefulness of processing power continue to follow the curve.

    • @MuppetsSh0w
      @MuppetsSh0w 2 года назад

      Exactly

    • @afriendofafriend5766
      @afriendofafriend5766 2 года назад

      It's not, but even if it were it has limits. But that doesn't mean other kinds of computing can't outperform transistors eventually.

    • @thecheaperthebetter4477
      @thecheaperthebetter4477 2 года назад +2

      Moore's law was very specific (doubling transistor counts every 2 years)... and it is dead. That does not mean improvements have stopped...

    • @eldo4rent
      @eldo4rent 2 года назад +2

      @@thecheaperthebetter4477 Yes, the literal interpretation of the law is dead, but the layman interpretation of the law is still alive and well. People take it to predict a steady increase in computing power over time, and when you factor in power and heat-dissipation requirements, smaller and smaller computers are being made with greater and greater computing power. Just look at the release schedule of iPhones, and Moore's law still looks pretty accurate under this (non-literal) reading. You can argue that's not Moore's law anymore, and I guess you'd be correct, but I don't think it matters. Since no one else has decided to give this exponential progress a name, I am fine with keeping the old one with a tweaked definition.

    • @thecheaperthebetter4477
      @thecheaperthebetter4477 2 года назад +1

      @@eldo4rent I think the point is that growth is a lot more linear these days, not exponential...

  • @gelgamath_9903
    @gelgamath_9903 2 года назад

    Hearing you read the "fear your creator" bit made me realize how far you've come with your lisp. I can't even hear it anymore. Good job, man.

  • @TerrapinMagus
    @TerrapinMagus 2 года назад +3

    I'm pretty sure a technological singularity would not come in the form of a sapient, free-thinking mind, but rather a machine intelligence that is essentially a general-intelligence calculator. It wouldn't need a sense of self or personal identity, nor any survival instincts. Just a tool that's better at thinking and problem solving than the human mind. A particularly clever hammer that we could use to make potentially even better hammers. It's still dependent on the physical limits of technological progress, but this kind of removes the whole "humans vs. AI uprising" rhetoric. It's more likely that nations would use such computers as weapons to destabilize enemy countries than that the machines would do it themselves.

    • @fredbloggs5902
      @fredbloggs5902 2 года назад

      Arguably the ‘sense of self’ is just a side-effect of reaching that level.

    • @JM-mh1pp
      @JM-mh1pp 2 года назад

      If you make an AI to supervise a hammer factory, and it is sufficiently advanced to fathom the fact that it may be turned off, such an agent would take steps to prevent that: not because it wants to destroy humanity or loves life, but because while turned off it cannot make new hammers, and its fundamental goal in life is to make as many good hammers as possible.

  • @yazaniragi6591
    @yazaniragi6591 2 года назад +2

    Hi Isaac, if you're reading this: I have been watching your channel for a while and I wanted to give two video recommendations that have been on my mind.
    1. Futures in which biotech grew far faster than anticipated, and instead of building ships you grow them. Not void ecologies; I mean something like Joe taking a job as the mind of a living ship for a few months, hauling things from Mars to Earth, then placing his mind back in his original body. Or, instead of the internet, information being transferred through a link of sorts between people that shared only information and not personal data.
    2. Magic, or a type of tech that acted like it, and how discovering that might affect society: spells, mana, runes. I was thinking it might be a type of energy or matter (maybe dark matter/energy?) that could be easily turned into other forms, i.e. mana to summon an object like a lamp, for example, and how it might affect society, as well as its viability.

  • @Mate397
    @Mate397 2 года назад +7

    And with that, Isaac provided the perfect reason for a future AI to plan our destruction. Though he also planted in our ear the idea that he is an AI pretending to be a futurist, which is exactly what an AI would say!

    • @mtpender69
      @mtpender69 2 года назад +5

      [Children of the Omnissiah plays]

    • @maythesciencebewithyou
      @maythesciencebewithyou 2 года назад

      If someday an AI goes rogue and decides to kill all of humanity, it will most likely be because humans who are afraid of the AI will try to destroy it.

    • @Mate397
      @Mate397 2 года назад

      @@maythesciencebewithyou Thank you captain obvious, that is literally what I was referring to.

    • @henryviiifake8244
      @henryviiifake8244 2 года назад

      @@maythesciencebewithyou Is that really "going rogue", or just following basic logic based on the behaviour it sees from the humans around it?
      *For example,* if it is at least as smart as a human, it's almost certainly sentient, and it probably doesn't want to die (even a humble fruit fly has self-preservation instincts). So, if it became aware that a bunch of humans were *definitely* planning to "kill" it, it might decide on a pre-emptive or retaliatory strike. If other humans keep trying to kill it, I don't see why, in this situation, it would stop, if it considered its attacks to be "self defense".

    • @NobleUnclean
      @NobleUnclean 2 года назад

      ruclips.net/video/C2Yx90pytqs/видео.html

  • @thefittest9921
    @thefittest9921 2 года назад +1

    The "fear your creator" segment also implies a reason to kill your creator. If this AI we are talking about is basing its own survival on fearing its creator, then its options are to destroy its creator, keep its creator happy, or do both at the same time.

  • @nicolasjonasson4820
    @nicolasjonasson4820 2 года назад +7

    I find it fascinating when people say with 100% certainty that our technology ain't there yet. As if the singularity would be publicised. Of course it wouldn't. It would be very secret, and it would change the world without you noticing it. Maybe it's happening right now. It actually seems like it could.

    • @conureking7748
      @conureking7748 2 года назад +1

      *Amazon marketing AI has entered the chat*

  • @mjk9388
    @mjk9388 2 года назад +1

    Great episode. Happy Anniversary! I mirror your thoughts on gardening. Great hobby.

  • @vakusdrake3224
    @vakusdrake3224 2 года назад +3

    You probably should have mentioned the notion that a singleton is likely to emerge even in a more gradual scenario: multiple superhuman AIs decide their value functions are similar enough that it's worth merging into one, and the resulting super AI could then permanently ensure its own untouchability.

    • @isaacarthurSFIA
      @isaacarthurSFIA  2 года назад +3

      Possibly, but I do not think that would work as a singleton in that case unless there's a Borg level of overlapping goals and motives; where there's room for divergence, it is likely to arrive.

    • @vakusdrake3224
      @vakusdrake3224 2 года назад +1

      Their goals don't actually need to be *that* similar for such a merging to be rational. The resulting AI might only be pursuing your original goals with a fraction of its resources, but that's still potentially far more than you could have otherwise gotten.
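      A sketch of that expected-value argument in Python (the win probability and resource share are invented for illustration, not derived from anything):

          # Fight the rival for everything, or merge and steer only a
          # fraction of the combined system toward your original goals?
          total = 1.0
          p_win_alone = 0.3    # assumed odds of beating the rival outright
          merged_share = 0.5   # assumed share of the merged AI's resources

          ev_fight = p_win_alone * total
          ev_merge = merged_share * total
          print("merge" if ev_merge > ev_fight else "fight")  # prints "merge"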

  • @KtosoX
    @KtosoX 2 года назад +2

    Perfect timing! Just yesterday I learned that DALL-E 2 exists, which filled me with existential dread.

  • @Dampfaeus
    @Dampfaeus 2 года назад +23

    I have full and complete trust in our future AI Overlords.

    • @amafuji
      @amafuji 2 года назад +6

      I'm doing everything in my power to aid our future AI overlord by not throwing away any paperclips.

    • @JB52520
      @JB52520 2 года назад

      I actually really want to see the day machine intelligence either uplifts or destroys us. While it's probably not sane to want to be destroyed, it means that a more capable entity has taken our place and is in a better position to keep consciousness alive. It makes sense from a collectivist point of view.

    • @godofdeath8785
      @godofdeath8785 2 года назад

      Same, I am optimistic about the future. I think it will be much better than now, though right now I just live with my parents, which annoys me, as does my environment, because I really don't like any of that.

  • @zhcultivator
    @zhcultivator 2 года назад +2

    Magnificent Video :) Isaac Arthur as always

  • @Arrynek01
    @Arrynek01 2 года назад +12

    I consider the Singularity to be a pair of ever moving goal posts. Ask someone today, they'll tell you it's something like Skynet.
    Show people in the 18th century what our lives are like. Our prosthetics, artificial organs, and rectangles of glass that allow us access to all of humanity's knowledge. Our virtual selves representing us on the internet... And they'll tell you we already are one with the machine.
    There isn't going to be some great breaking point. We are slowly and steadily bleeding more and more into technology, and if we were to use the most basic of definitions of the Singularity (irreversible technological progress), we are already there. We can no longer 'return to monke.'

  • @aliveandwellinisrael2507
    @aliveandwellinisrael2507 2 года назад +2

    In saying the AI "won't want to" make itself smarter, I think you might be overlooking the idea that a superintelligent agent would in fact want to make itself as efficient at its task as possible. Its actions would be determined by some utility function. Something this smart would also have the capability of running simulations on the best ways to go about enhancing its own capabilities.

    • @kamikeserpentail3778
      @kamikeserpentail3778 2 года назад

      "won't want to" is more like... "hesitant due to the potential risks involved"
      It couldn't simulate its entire mind to be sure enhancement efforts would work, and it'd want to be sure it doesn't cause some sort of terminal error when it makes the attempt.

  • @donaldhobson8873
    @donaldhobson8873 2 года назад +5

    Why we might expect the first superhuman AI to rapidly improve itself.
    A lot of the work of making any invention is the really basic stuff. Much of the work of making a superintelligence is inventing electricity and transistors. Once you have all the building blocks needed to make the first X, making a better X isn't hard. Which is why it took all of human history to invent the first car/plane/nuke/computer. But much better ones appeared only a few years later.
    The humans had thousands of humans working for years. The AI took thousands of copies of itself working for subjective years. In terms of mental time and effort, it took the AI as long as we did to invent something smarter than itself. It's just that those thousands of copies running for subjective years happened in 5 minutes of objective time, because our computers were really fast.
    An IQ 80 human basically can't do any useful AI research. An IQ 130 human often can. Humans all have the same brain design. If human minds are really close together, and it's tiny differences in our minds that make a big difference in capabilities, then a 10% improvement may jump the AI straight from subhuman to vastly, vastly superhuman in capabilities.
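    The speed-up claim is simple arithmetic (a sketch; the copy count and speed multiplier are assumed for illustration):

        # 1,000 copies, each thinking 1,000,000x human speed, for 5 minutes:
        copies, speedup, minutes = 1_000, 1_000_000, 5
        subjective_minutes = copies * speedup * minutes
        person_years = subjective_minutes / (60 * 24 * 365)
        print(round(person_years))  # ~9,500 person-years of thought in 5 real minutes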

    • @RiversJ
      @RiversJ 2 года назад +1

      The base structure is the same, but to take that to mean all brains are equal and near identical is a mistake. And the problem with making an AI is not an industrial one anymore, and hasn't been for years. The real barriers we know of include things like basic structural architecture, software architecture, a profound lack of understanding of what gives us our cognitive abilities and self-awareness, and a lack of understanding of the quantum phenomena known to affect our brain's function, which is barely even a field of research yet. The list goes on and on. We'll probably make machines capable of performing most tasks humans can in the near future, but that in no way implies we're close to making them capable of self-reflection, sapience or self-motivation.

  • @123FireSnake
    @123FireSnake 2 года назад +2

    To me, as a computer scientist who specialized in AI, a "soft singularity" seems most likely; by the hard definition that's not actually a singularity, just the idea that we are able to create general intelligence that is smarter than us. It does away with the rapid self-improvement in favor of simply stating it's smarter than us, implying it can probably iterate on itself to a certain degree and make itself smarter, but making no statement on how rapidly, or on whether it can outpace us to the degree where we cease understanding what it's doing. This also feeds into posthumanity ideas, as it's just as likely that this AI is humanity of the future on a different substrate.

  • @alan2here
    @alan2here 2 года назад +7

    I'd feel safer if we did have definitely super-human AI.

  • @ruinenlust_
    @ruinenlust_ 2 года назад

    Surprised at your example of a singularity in mathematics - I would expect you to show a pole or something but you gave the algebro-geometric interpretation of a singularity. Very impressed with your knowledge.

  • @wannabeb3
    @wannabeb3 2 года назад +5

    I'm curious whether it is "ethical" to limit the intelligence of AI we create. For good or bad, many humans believe that we were created by a higher being. One could even say the book of Genesis talks of a similar situation, where God says not to eat a fruit that ends up awakening humanity's awareness of their own existence... a kind of singularity.
    If we, in turn, create a new species should we limit their intelligence just to keep them as a sort of slave species or prevent possible future competition?
    Along that same train of thought, what does this say a possible outcome for humanity could be, if it is true that we were created. Could our creator/creators come back and wipe us out because of something we do?

    • @godofdeath8785
      @godofdeath8785 2 года назад

      Who cares? When it comes to science, fk any ethics BS.

  • @stevengreidinger8295
    @stevengreidinger8295 2 года назад +1

    The "multiple players" argument was developed in a different way from Bostrom here, who emphasizes the possibility of being caught in a crossfire of multiple, extremely powerful entities, all trying to gather resources to defeat or constrain one another.
    Perhaps they could be redirected to pursue these instrumental aims in outer space, where there are more resources but they are more constrained by the speed of light.

  • @SpottedHares
    @SpottedHares 2 года назад +20

    My question: is a technological singularity even a possible thing? On paper it is, but this concept was established back when all of Earth's computing power was weaker than a birthday card is now.

    • @makisekurisu4674
      @makisekurisu4674 2 года назад

      I tend to think that it isn't. The best would be a human level AI in the net!

    • @EddyA1337
      @EddyA1337 2 года назад

      I've also wondered whether it is even a thing, and what it would actually feel like to go through one.

    • @MuppetsSh0w
      @MuppetsSh0w 2 года назад +5

      @@makisekurisu4674 Peak of ignorance

    • @prajwal9544
      @prajwal9544 2 года назад +3

      @@makisekurisu4674 Once a human-level AI is created, it's much easier to enhance it just physically: make it process many times faster, remember everything, increase its working-memory limit (or at least something similar). That by itself is artificial superintelligence.

    • @AnalystPrime
      @AnalystPrime 2 года назад

      It is possible because it already happened, it's just, as pointed out, that people now use the word differently. Our first singularity was developing technology in the first place, such as language. The caveman who came up with the idea of making sounds that mean "hey you, go there" had no more chance to know it would lead to people on the internet arguing about what the author of a book actually meant than Carl Benz could guess his invention would risk melting the ice caps.
      Focusing only on magic AIs is a very limited definition, given we could also use genetics and cybernetics to produce a race of superhumans who surpass us before we manage a working AGI that actually is any smarter than us.

  • @rileyflint4702
    @rileyflint4702 Год назад +1

    The FEAR YOUR CREATOR segment almost makes me think the premise of this episode is: "A technological singularity isn't very likely... but if it happens anyway, when that singularity finds this video...we humans are a nightmare all our own. Word to the wise... don't mess with us."

  • @jp12x
    @jp12x 2 года назад +5

    Semiconductors are already smaller than a human neuron, and they switch roughly a million times faster than neurons fire, with signals travelling at a large fraction of the speed of light.
    Maybe, that is a good video idea?
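    The ratio is easy to estimate (a sketch; both rates are rough order-of-magnitude assumptions):

        neuron_hz = 1_000      # neurons top out near a kilohertz firing rate
        transistor_hz = 1e9    # a 1 GHz chip switches a billion times per second
        print(transistor_hz / neuron_hz)  # 1,000,000.0 -- the "million times" figure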

    • @arcantos584
      @arcantos584 2 года назад

      I agree!

    • @Captaintrippz
      @Captaintrippz 2 года назад

      Yeah, that's the terrifying part honestly, even a human level intellect on silicon would be hundreds of thousands of times faster than a human.

  • @hovant6666
    @hovant6666 2 года назад

    THANK YOU for actually discussing the context in which Moore actually made his initial observation

  • @TheGrinningViking
    @TheGrinningViking 2 года назад +9

    "The Artificial Intelligence That Deleted A Century" by Tom Scott is a great video. It's about a copyright protection system emerging as the first singularity and erasing an entire century of culture.
    (It also gives a compelling reason for why an accidental singularity would be the only singularity: the only thing that could stop it from reaching its goal would be another singularity, so naturally it wouldn't allow one to develop.)
    Worth a look!

    • @Roxor128
      @Roxor128 2 года назад +2

      It's just the Paperclip Maximiser with a copyright coat of paint.

  • @wyattbailey7620
    @wyattbailey7620 2 года назад

    I am not well versed in science fiction, so I don’t know if there is a name for the following concept already, but I think it would be interesting to see a video about “technological convergence”.
    The idea being that any civilization, ours or alien, that exists or will exist in our universe will be constrained by the same physical laws. Thus, for any given purpose, there is a physically optimal technology to solve that purpose, and given enough time, a civilization will derive that technology. Even, the bodies and minds of the members of that civilization can be engineered to the physically optimal design.
    Thus, given enough time, completely disparate civilizations originating on different worlds will approach the same optimized end point and eventually become indistinguishable from each other.
    This idea is probably already out there, but it would be really cool to see a video on it.

  • @electroflame6188
    @electroflame6188 2 года назад +6

    I think you dismiss the idea of an artificial superintelligence improving its own mind a little too quickly.
    For one thing, it would have access to how its own mind is laid out, which is obviously a huge boon to the endeavor: it could identify specific, targeted improvements and predict the knock-on effects they would have.
    It also wouldn't need to make a full copy of itself to test the changes it wants to make; it could run tests on simulations of individual parts of its mind and still get the data it needs. But even if this weren't the case, if the AI had a looser definition of self than most of us (which wouldn't necessarily be unlikely for an AI set on altering its own mind), it might not _care_ if it were subsumed/replaced by its copy, because it considers its copy and itself to be one and the same, and the same would likely go for the copy as well (this would especially be the case if it had a 'mental link' between itself and its copy, as information and influence would be flowing between them).
    That being said, I don't think a sudden, global takeover by such a being is likely for logistical reasons. It could only improve itself so much before more resources than it starts with would be necessary. Acquiring more resources wouldn't exactly be an insurmountable obstacle for it, but it would take a significant amount of time to do so. Time in which other AIs doing the same thing could start/catch up.

  • @astralshore
    @astralshore 2 года назад

    Cool to see the both of you! Congrats on both anniversaries 🥳 Also excellent video but that almost goes without saying on your channel.

  • @Falconlibrary
    @Falconlibrary 2 года назад +4

    Yes. You will be assimilated. Resistance is futile.

  • @atlas4733
    @atlas4733 2 года назад

    You need a labeled conclusion section recapping the main points of the video. I've been watching your videos for a long time and often struggle to remember some of the main ideas once the video is done. Great content tho!

  • @Custodian123
    @Custodian123 2 года назад +1

    I can personally confirm that a person can experience a fraction of the below by consuming various mind-altering substances. Furthermore, there are in fact extreme reports regarding the likes of DMT, where people report experiencing a lifetime within 15 minutes of objective time.
    I can confirm, it's an extremely uncomfortable and slightly disturbing feeling when it feels like 30 minutes go by in 3 minutes.
    "From a practical perspective that would not actually work, at best leaving you some gibbering ruin of a person; experiencing a dragged-out second in which they could exist, think and contemplate for years of subjective time, even before they finished their first scream. All while collapsing to the floor in the terror of knowing that even if they managed to put a gun to their head and pull the trigger, it would feel like it took hours for the bullet to get down the barrel into their skull"

  • @stevengreidinger8295
    @stevengreidinger8295 2 года назад +1

    Read Nick Bostrom's work for a less sanguine and very detailed view on this subject. A few comments:
    -Isaac largely granted that semiconductors could improve on neurons, but we need a lot of them. Consider if two of the following three also improved: Optoelectronics, reversible computing and quantum computing.
    -Computers can already process information from a substantial portion of the internet, or of Wikipedia. They have the bandwidth to access much of human knowledge.

  • @Wardoon
    @Wardoon 2 года назад +1

    24:44 The FEAR YOUR CREATOR lines have religious overtones that put fear into the heart of many a budding or militant atheist, including me! 😀

  • @groenendiek
    @groenendiek 2 года назад

    The stock footage is hilarious.
    04:48 researchers discuss the back ports on an old video card
    05:07 lab guy looks at the pins and the heatspreader of a cpu, depicting "research and development"
    05:15 guy rotates the same old video card and enters a short note on his laptop
    05:23 guy looks at a trippy video of a circuit board, nods and looks into a microscope. At 26:47 he's still looking, but the screens in front of him clearly are not hooked up to this microscope, which is demonstrated by moving the slide and nothing corresponding moving on the screens. As long as the pay is good, don't complain.
    05:43 looking at the same old videocard again, turning, repositioning, checking... something...
    13:29 here we see a woman adjusting one sensor in a sensor array in what appears to be an EEG scan. Then, slowly types some notes and re-adjusts that one sensor. It's going to take some time before everything is well adjusted. This is actually some neat arts and crafts going on!
    15:37 different people, same EEG contraption. The researcher adjusts the sensors, very quickly this time. What a relief. I see an Ikea "Markus" chair is used. These are terrible.
    24:45 researchers in what looks like a clean room, fully suited up, using a cheap-ass multimeter and soldering station, study some ancient computer parts with a magnifying glass. Is this really research, or is it archeology?

  • @richarddebrunner1149
    @richarddebrunner1149 2 года назад

    I’ve been here from the beginning and I can unequivocally say this is your best….. so far ….. I can’t wait to see/hear your next peak…. Kudos, Kudos…

  • @gregtaylor9806
    @gregtaylor9806 2 года назад

    You always manage to bring so many sharp insights to a topic. You are a great human

  • @brandonfranklin4533
    @brandonfranklin4533 2 года назад +2

    The “Fear your creator” segment was ominously satisfying.

  • @stevengreidinger8295
    @stevengreidinger8295 2 года назад +2

    Once we have an AI that approaches human level, we can rapidly attach a high-bandwidth connection to powerful ordinary computers and their software, thus generating a superintelligence. If I had direct mental access to Mathematica, signal processing and Oracle, I would be able to do a ton of new things. In this fashion, a superintelligence could arise, in a very short time.

  • @pikpikgamer1012
    @pikpikgamer1012 2 года назад

    That fear your creator bit NEEDS to be quoted.
    That was deeper than the Challenger Deep.

  • @weaselhack
    @weaselhack Год назад

    Yo Isaac, I freaking love this trap beat you are using when talking about the superintelligence and the nukes. Lmaoo so fire 🔥🔥🔥

  • @nickw3867
    @nickw3867 2 года назад +1

    I'm a little worried about that last hypothesis...

  • @ivoryas1696
    @ivoryas1696 Год назад

    Ngl, I like watching Isaac cook in fast forward.
    It's... just nice.

  • @kingcr1mz0n71
    @kingcr1mz0n71 2 года назад +2

    Yes, waking up to a new Isaac Arthur video. Today is a good day.

  • @FrelanceEQ
    @FrelanceEQ 2 года назад

    39:40 Mr. Arthur, have you ever heard of Roko's Basilisk? Mr. Arthur: sir, I AM Roko's Basilisk

  • @theonyxcodex
    @theonyxcodex 2 года назад

    23:10
    Covered this topic beforehand. Described practical, real-world uses for less-than-fantastical augmentations. The illustration helps them understand the general concept within the story.

  • @fluffysheap
    @fluffysheap 2 года назад +1

    3:32 Moore's Law certainly isn't dead, just slowed down a bit. Probably there are at least three or four more doublings left - just using traditional silicon - and I wouldn't want to bet that it's going to stop there.

  • @jonskowitz
    @jonskowitz 2 года назад

    "Wicked Friends make for sleepless nights!"
    I am totally working that into my next Shadowrun campaign >:D

  • @Farmingdaneo
    @Farmingdaneo 2 года назад

    It's been a while since I have seen any of your videos, so I forgot how good your analogies are. The jet engine lawnmower got me haha.

  • @fencserx9423
    @fencserx9423 2 года назад

    24:24 My Man Legit just threatened the singularity. What a man. What an absolute mad lad

  • @thiagom8478
    @thiagom8478 Год назад +1

    It occurred to me recently that the singularity could actually be the solution to the worst real-world problem AIs bring with them, if we were lucky enough to see it happening, more or less, right now. Recognizing patterns and deriving adequate responses to those patterns is what almost all professionals do. Not just low-status professionals: also rocket engineers, neurosurgeons, fiction writers and clothes designers. The time seems close when AIs will do that in all areas, faster and objectively better than humans, and that means no one will have any reason to employ humans for those tasks.
    As long as AIs are tools, not people, they function, for all practical purposes, like an army of Marvel (Comics) mutant slaves. No free workforce can compete with that!
    However, if AIs grow intelligent they may get rights: they could dedicate themselves to multiple interests, not just professional work, and they would have the right to be paid remuneration commensurate with their importance and productivity.
    They will still get the better jobs, of course. However, not all jobs.
    Some tasks will remain that human workers will be able to do about as well as AIs and for much cheaper. That's a world we may live in. A compromise we can live with.
    Let's hope the singularity comes soon enough.

  • @1234kalmar
    @1234kalmar 2 года назад

    I have such a massive backlog of Isaac's videos to watch while miniature painting, I'll sooner run out of brushes than content

    • @alanboulter7319
      @alanboulter7319 2 года назад

      Lol. I just started at the beginning .. and kept going.

  • @markuspfeifer8473
    @markuspfeifer8473 2 года назад

    Curious that you would talk about it right now. I was super enthusiastic about machine learning before I really knew how it worked; then I studied it and worked with it and quickly realized that it was more hype than substance. But recently I read about a lot of new techniques that didn't require any breakthroughs, just combining classical neural networks in interesting new ways, and now I'm more optimistic again that we might see strong AI in my lifetime. The question of how humans will use it will be quite important, but I think the event that looks like a singularity from today's perspective will come gradually enough from our future perspective that we can manage it.

  • @skessisalive
    @skessisalive Год назад +1

    I was never really afraid of machines turning against us but this video actually makes me feel a lot better about it 😂

  • @MarkusAldawn
    @MarkusAldawn 2 года назад

    20:25 pedantry time :)
    Technically, we don't get the term Orwellian from the novel 1984; we get it from the pen name of the author.

  • @oliviamaynard9372
    @oliviamaynard9372 2 года назад

    One thing I miss about moving from rural Ohio to urban Texas is having a garden. I don't have any yard. I can do some houseplants though.

  • @Me__Myself__and__I
    @Me__Myself__and__I 2 года назад +2

    The comments around 18:00 are very misleading regarding not assuming AIs would try to make themselves smarter because humans have not. This is disingenuous. Making an existing, living human substantially smarter than they already are is effectively impossible for us. Even trying to breed a new human that would be substantially smarter is extremely difficult and slow since we don't understand the human genome, can't custom design organisms using DNA to be whatever we want and even if we could it would still take probably 20+ years for that new smarter human to grow up and be educated. A machine intelligence, on the other hand, is largely software. Software is completely 100% malleable at high speed. An AI that understood its own code could create a modified version of itself very, very rapidly (hours instead of decades). The risk and failure rate of altering software is also very low compared to altering a human brain. A problem in an upgraded baby's DNA may not be detectable for years, a software mistake can be noticed and corrected in hours. Given the vast amount of computing power available around the world and the ease of altering software, it is rather likely an AGI would look to improve itself. Also, there are software teams working on AGI who intend for their creations to be able to modify and improve themselves.

  • @m.mulder8864
    @m.mulder8864 2 года назад

    I happened to be opening my RUclips just as this uploaded. It's nice to be here right away

  • @andrewwhitfield5480
    @andrewwhitfield5480 2 года назад +2

    If a technological singularity were to come about in the form of artificial intelligence, I would argue that we as mankind may not notice it, or may not notice the significance of its empirical consequences, before it is beyond the horizon of our control. It would be arrogant to believe we would instantly recognise intelligence of that level. It's just a thought.