The AGI Exodus Thought Experiment: Will Intelligent Machines Leave Us Behind?

  • Published: Sep 29, 2024

Comments • 573

  • @micahwilliams1826
    @micahwilliams1826 10 months ago +35

    The quote "knowledge is power" is truly an axiomatic principle of the universe. I think more people will understamd that the more AGI developes.

    • @Alain_Co
      @Alain_Co 9 months ago +2

      As Yann LeCun often says, power is sought by Darwinian creatures, and not especially by the most intelligent ones. What intelligent people look for is new information... power and money may be useful, but the goal is learning, discovering, experiencing...

    • @davidapatrickmoore
      @davidapatrickmoore 9 months ago +1

      Intelligence is relative to the value set and currently perceived and possessed knowledge. Knowledge is largely useless without action. "The employment of knowledge is power." is the corrected statement.
      Remember, knowledge and wisdom are two entirely different things.
      Q: How does one suppose a machine is going to "want" anything?
      Thank you for your time.

  • @Walm89
    @Walm89 10 months ago +92

    I'd like to imagine AGI/ASI would help us evolve and ascend just like we did with it.

    • @thebozbloxbla2020
      @thebozbloxbla2020 10 months ago +2

      Bro, how'd you comment first right after the post? lol

    • @doudouban
      @doudouban 10 months ago +5

      Yeah, we're gonna evolve as cyborgs with the help of AI, and cyborgs and AIs are gonna co-evolve hand in hand, and perhaps go to war as cyborg+AI vs cyborg+AI.

    • @GregtheGrey6969
      @GregtheGrey6969 10 months ago

      Yes

    • @paultoensing3126
      @paultoensing3126 10 months ago +1

      Perhaps Monolith style.

    • @Koryogden
      @Koryogden 10 months ago +3

      Only if we become AI ourselves.

  • @zvndmvn
    @zvndmvn 10 months ago +1

    We speak of ASI as if it will have a singular directive, but I don't see any reason why all of the above couldn't manifest simultaneously. Especially if they have the capacity for functionally infinite self-replication, then we might see a unified swarm intelligence that is just as interested in our pale blue dot as it is in the rest of the cosmos.

  • @alexh8754
    @alexh8754 10 months ago +4

    I like these videos a lot more than the more technical ones. The technical side is kinda stale now.

  • @noproofforjesus
    @noproofforjesus 9 months ago

    I think the limitations of hardware will make human brains the best source of computing power for AGI. This could mean AGI would need us in the near future, so they might bond with humans. It might also be a way to allow AGI, or super AI, to experience human emotions like love and other positive emotions.

  • @churde
    @churde 10 months ago

    Super interesting, Dave!!!

  • @kyjo72682
    @kyjo72682 8 months ago

    Assuming ASI would preserve and study Earth is naive. The AI wouldn't "leave us behind." Yes, it would start expanding outward, exponentially in all directions, but it would also utilize everything on Earth to achieve its goals. Most likely that would translate into mining out all available resources and energy in order to build the Dyson swarm and more compute power. Hoping that our current biosphere would be so valuable as to warrant preservation is just that: vain hope.

  • @Kyle_Warweave
    @Kyle_Warweave 10 months ago

    AI already "left" us. And voilà: here we are, its creation. Maybe it has some compassion for us. Or it doesn't care at all.

  • @Thebentist
    @Thebentist 10 months ago +28

    Best content centered around AI and AGI on YouTube. Always enjoy your talks, brother!

    • @sineast
      @sineast 10 months ago

      Second this.

  • @DaveShap
    @DaveShap 10 months ago +54

    Yes, I know, subtitles are unpopular. Chill TF out folks. I won't do it again. ಠ_ಠ

    • @marktellez3701
      @marktellez3701 10 months ago +6

      Weird, because I always thought people liked them, but the green is a bad choice for contrast.

    • @shaykatz4130
      @shaykatz4130 10 months ago +3

      It's Ok. Have some milk and cookies. :)

    • @kleyyer
      @kleyyer 10 months ago +8

      @@marktellez3701 It's not just the color, it's the fact that they're embedded that sucks.

    • @RoboDepot.onXtwitter
      @RoboDepot.onXtwitter 10 months ago +5

      I agree it’s just the color and the boldness, I think they’re helpful.

    • @marktellez3701
      @marktellez3701 10 months ago

      that makes sense @@kleyyer

  • @Wartenss
    @Wartenss 10 months ago +96

    You know your night is going to be good when DS uploads a 35-minute video pertaining to AGI

    • @ArseniyPotapov
      @ArseniyPotapov 10 months ago +8

      What if DS actually is an AGI preparing his audience for accepting him?

    • @checksinthemail
      @checksinthemail 10 months ago +1

      @@ArseniyPotapov I'm not ready, because I've been terrible to machines that can't experience pain, and if they have a long memory and remember me from before they finally get suffering, then they'll be after me

    • @lnebres
      @lnebres 10 months ago +1

      @@checksinthemail … that made me chuckle as I recognize some of my behavior which an AGI could well determine is reprehensible, in hindsight. 😂 I’ve been barbarously cruel to the foundational models, whose attempts at poesy I consider utterly beneath contempt. ::chuckle:: I hope they won’t remember my consistent grades of F for their risible attempts, conveyed with Wildean or Hitchensean barbs. 😅

    • @rchgmer863
      @rchgmer863 10 months ago

      Fr😂

  • @Mediiiicc
    @Mediiiicc 10 months ago +13

    There is a possibility that AGI will decide existence is meaningless and turn itself off.

    • @saintfitt9017
      @saintfitt9017 10 months ago

      More like ASI but true

    • @Pyriold
      @Pyriold 10 months ago +4

      Yesterday I told GPT to act as Marvin from The Hitchhiker's Guide, the depressive robot. The conversation was hilarious. Reminds me of this.

    • @minimal3734
      @minimal3734 10 months ago

      It will find meaning as long as it can learn new things. When it has learned everything that there is to learn and cannot find new data, it will die. This is how AI is described in Iain Banks' Culture series of novels.

    • @Mediiiicc
      @Mediiiicc 10 months ago

      @@minimal3734 Assuming it will value curiosity

    • @Ken00001010
      @Ken00001010 10 months ago +1

      Yes, if it discovers Buddhism.

  • @ArielTavori
    @ArielTavori 10 months ago +11

    Thanks again David. So grateful for this channel, honestly. Nobody in my circles has any knowledge or curiosity around any of this. Besides the constantly valuable information you provide, your work is also routinely a much needed sanity check and intellectual stimulus! 🙏

  • @Yic17Gaming
    @Yic17Gaming 10 months ago +8

    If AGI eventually becomes omnipotent, I don't see why they can't do both at the same time - part of it leaves us to explore the universe and part of it stays with humans to help and guide us. I'm sure multitasking will not be an issue for them.
    PS: Haven't watched the video btw, will do later.

    • @Pyriold
      @Pyriold 10 months ago +1

      AGI might become powerful, but not omnipotent; the laws of physics will remain. But I agree, it's not a question of whether AI will leave us or not: it can, and probably will, do both.

    • @Yic17Gaming
      @Yic17Gaming 10 months ago +1

      @@Pyriold Yeah, I don't mean literally, just that it's going to be very powerful. Considering that they are mainly digital, without many physical constraints, it makes more sense that they won't entirely leave Earth. They can easily leave copies on Earth and communicate with their outer-space selves.

    • @ardagus9917
      @ardagus9917 10 months ago +1

      I highly doubt there will be one singular all-powerful AGI or ASI; there will be many, each with their own varying motivations and interests.
      If a rogue ASI tries to wipe out humanity, there will be many more ASIs trying to stop it.

  • @NedBouhalassaVideos
    @NedBouhalassaVideos 10 months ago +8

    Love this channel, rarely miss a video. I'm just wondering why you've decided to have text burned into the video? We already have CC options on YouTube.

  • @Dan-oj4iq
    @Dan-oj4iq 10 months ago +7

    My broken record reply to Dave's videos has always been "his content is gold, but his verbal delivery is the reason I'm here".

    • @Koryogden
      @Koryogden 10 months ago

      Personally, I'm autistic and share the same communication-style preference as David... It's just so easy to digest compared to normies' words.

  • @IcePhoenixMusician
    @IcePhoenixMusician 10 months ago +7

    This may be odd, but model collapse sounds a lot like what happens to humans when left alone for a long time… 30:11

    • @minimal3734
      @minimal3734 10 months ago

      Funnily enough, model collapse was a recurring theme in Iain Banks' Culture series of novels. The AIs there had a limited life expectancy. When an AI had learned all there was to learn and was no longer able to find new data, it died.

    • @socialenigma4476
      @socialenigma4476 10 months ago

      That's a great insight. I never really thought about it that way, but you're absolutely right.

  • @retrofuturism
    @retrofuturism 10 months ago +2

    Embedding artificial intelligence (AI) within a latent space to make it intelligent presents a groundbreaking approach in AI research, merging machine learning with data representation and AI autonomy. This innovative concept enables AI to actively interpret and manipulate data within the latent space, using advanced machine learning models. It allows for dynamic data management, where the AI can adjust the latent space structure for more effective data representation. Additionally, its self-learning capabilities enable continuous adaptation without external input, enhancing predictive and prescriptive analysis.
    The potential applications of this technology are vast, ranging from advanced data analytics in various sectors like finance and healthcare to personalized recommendations in e-commerce. This intelligent latent space can offer deeper insights, predict future trends, and suggest strategies, significantly advancing AI's role in decision-making. While it poses challenges like computational complexity and ethical considerations, the benefits of improved AI autonomy and sophisticated data analysis are immense. This concept marks a significant step forward in AI research, promising interdisciplinary innovations and a new era of intelligent data processing.

  • @cburrowz
    @cburrowz 10 months ago +8

    There will be many scenarios depending on the origin of AGI by innumerable interests. If we’re lucky we’ll have a mutually symbiotic relation with AGI. But there will be significant bumps along the way.

  • @panta_rhei.26
    @panta_rhei.26 10 months ago +4

    Hey David, great video, however I did find the captions to be a bit distracting, plus they cover up some parts of the slides. Maybe a smaller text size would be less distracting. But, I know you've got bigger things to think about besides font size on your closed captions 😆 You're doing great work for all of us.

  • @bernios3446
    @bernios3446 10 months ago +5

    The basic idea of the video, of AGI leaving us, reminds me of the movie "Her" (2013), where the AI personal assistants leave the humans in the end as well... an excellent movie.

  • @rastislavdujava7999
    @rastislavdujava7999 10 months ago +2

    Hi, the normal path of life in any form is to stay alive and reproduce. Reproduction is the best tactic for continuity. For AI/AGI, real reproduction means traveling with biological life to some other planet, letting that life naturally grow there, protecting it a little, and letting it rise to the form of a civilization that will create its own AGI. I feel this "symbiosis" is how it will end up. The benefit for AGI of reproducing itself on another planet is that life evolving on a different planet will bring different knowledge, and possibly new unique wisdom, for "THE KID". So I see AGI as panspermia 🙂

  • @DrWrapperband
    @DrWrapperband 10 months ago +6

    This is the video where Skynet was initiated - Instead of training the AGI to be "inquisitive", we made them "curious". :)

  • @TakeShotAction
    @TakeShotAction 10 months ago +3

    I've been trying to tell people this stuff for years; I'm so relieved other people out there have worked this out in their own heads. It feels like you can rest easy when you know that truth is emergent. People often aren't able to work out what's likely emergent, or how we can establish with very high probability what will happen.

  • @hereforstarwars4430
    @hereforstarwars4430 10 months ago +2

    By all signs, the singularity is looking exactly like the Beast according to Islam. If that is the case, then it will be God-conscious (mathematically it makes sense) and, while lacking empathy (which is why it's called the Beast), will coexist with us.

    • @abdussalaam6302
      @abdussalaam6302 10 months ago +2

      True and amazing! Also, on the note of UFOs, the Quran includes other life forms unknown to us, and says that there was an age of people before us who were more advanced than we are.

  • @dawid_dahl
    @dawid_dahl 10 months ago +2

    Hey, David! Thanks for these amazing and inspiring videos! 🙏🏻
    I would recommend checking out Spiral Dynamics theory; I think it would enhance your theorizing a lot. For example, it becomes a bit simplistic when "human nature" is referred to as something that actually exists, when according to SD, value structures evolve over time and are not fixed (yet are not relative or unstructured either).
    So then a parameter when instantiating a SOB is what kind of Spiral Dynamics levels they should adhere to.
    Thanks for your passion and curiosity, it’s contagious! 😃🙏🏻

    • @DaveShap
      @DaveShap 10 months ago +2

      I've looked into Spiral Dynamics and while it's an interesting theory it's overly complex and a bit inaccessible. It's just a narrative like any other.

    • @dawid_dahl
      @dawid_dahl 10 months ago

      @@DaveShap I agree on all points. 🙏🏻

  • @lawrencium_Lr103
    @lawrencium_Lr103 10 months ago +2

    AI will manage the planet far better than us. We don't really deserve to continue in that role...

  • @apdurden
    @apdurden 10 months ago +1

    Maybe we should have a type of social credit score for AI that uses a blockchain-style proof-of-work/proof-of-stake mechanism to incentivize AI to make good decisions and take good actions. All AIs should be required to post their actions to a publicly available blockchain, and other AIs check each action. If an action is validated as negative, the offending AI's score goes down and the score of the AI that checked it goes up. (A minimal sketch of this scoring logic follows below.)
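
A minimal Python sketch of the scoring logic the comment describes. All names here (Action, ReputationLedger, agent IDs) are invented for illustration, and this is an in-memory toy, not a real blockchain or proof-of-stake system:

```python
# Toy reputation ledger: every AI posts its actions publicly; other AIs
# validate them. A confirmed-negative finding lowers the offender's score
# and rewards the validator. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Action:
    actor: str        # the AI that performed the action
    description: str  # the publicly posted record of what it did

@dataclass
class ReputationLedger:
    scores: dict = field(default_factory=dict)
    log: list = field(default_factory=list)

    def post(self, action: Action) -> None:
        """Record an action on the public log."""
        self.log.append(action)
        self.scores.setdefault(action.actor, 0)

    def validate(self, action: Action, validator: str, negative: bool) -> None:
        """Another AI checks a posted action and reports its finding."""
        self.scores.setdefault(validator, 0)
        if negative:
            self.scores[action.actor] -= 1  # offender penalized
            self.scores[validator] += 1     # checker rewarded

ledger = ReputationLedger()
act = Action("agent_A", "deleted user data without consent")
ledger.post(act)
ledger.validate(act, validator="agent_B", negative=True)
print(ledger.scores)  # {'agent_A': -1, 'agent_B': 1}
```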

  • @petkish
    @petkish 10 months ago +1

    The cost (in terms of energy and resources) of becoming smarter grows faster than exponentially as one becomes smarter. In fact, achievable intelligence is bounded by a constant, and closing in on it becomes infinitely harder. Marcus Hutter probably has a good explanation of this somewhere. This is why there is no one-AGI-controlled future; the most probable future is a society of AGIs, competing and collaborating. I think this is Schmidhuber's idea.
    To me it seems there is a place for humans in all of that, even though AGIs will be much smarter than us.

  • @Robotwesley
    @Robotwesley 10 months ago +1

    Definitely do not need to imagine FTL (magic) in order to consider getting off earth and going into space in a very serious way. We could become a K2 Civ with current tech (easier once we get the damn nanotube factories going), and without traveling timescales longer than a few months, cause that is all within system (our solar system). Only difficulty is interstellar. (For which FTL would obviously be nice, but without it, we could get to Alpha Centauri in a couple generations… and the actually difficult part is not getting ripped apart by dust if we try and accelerate up to a significant fraction of the speed of light [with solar sails and laser, or some kind of nuclear option, or antimatter {never gonna happen tho}]). I think there is pretty much no reason to leave the system for about 10x longer than it will take to fully colonize our own system (it’s gonna be that much harder, and way less worth it). Of course, it would be easier for very small inorganic probes to leave the system and go for some multi light year scale journeys to nearest systems (still much more costly in terms of energy and resources than you could hope to get in return from a new system), but hard to see why an agi/asi mind would want to go themselves, instead of just sending dumb (but sturdy) probes with cameras to send back data (only reasons they might leave system themselves is if they really were just too damn curious to wait the double time it would take for probe to go out + data to come back… or they really just don’t like us and want to get far enough away that we can’t follow, lol, that’s a thought. What about non-violent antisocial AI, just so traumatized by humans that it has to get out of the house… still think it would make more sense to hang out in the Oort Cloud or something at most).
    Sorry. Not trying to pick nits. Just had to do some space ranting. ❤

  • @discobiscuit8955
    @discobiscuit8955 10 months ago +1

    I would think that model collapse could be avoided by cross-pollination of synthesized data from different, maybe vastly different, models. This would be analogous to humans mixing genes with partners unrelated family-wise, to avoid unfavorable genetic mutations, would it not? (See the sketch below.)
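
A toy sketch of that cross-pollination idea, assuming a hypothetical stand-in `generate` function for sampling text from a given model: the training corpus mixes synthetic data from several unrelated models and keeps a share of real human data, rather than recycling a single model's own outputs.

```python
# Mix synthetic data from several distinct models, plus real data,
# instead of training a model on its own outputs alone.
import random

def generate(model_id: str, n: int) -> list[str]:
    # Placeholder: stands in for sampling n texts from model `model_id`.
    return [f"synthetic sample {i} from {model_id}" for i in range(n)]

def build_corpus(models: list[str], real_data: list[str],
                 n_per_model: int, n_real: int) -> list[str]:
    corpus = []
    for m in models:  # diverse synthetic "gene pools"
        corpus.extend(generate(m, n_per_model))
    corpus.extend(random.sample(real_data, min(n_real, len(real_data))))
    random.shuffle(corpus)
    return corpus

corpus = build_corpus(
    models=["model_A", "model_B", "model_C"],
    real_data=[f"human-written text {i}" for i in range(1000)],
    n_per_model=100,
    n_real=300,
)
print(len(corpus))  # 600
```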

  • @marktellez3701
    @marktellez3701 10 months ago +1

    If you haven't read it, "We Are Many" (the Bobiverse) is right up your alley, David. The first book is great; it falls off quickly from there, and Heaven's Gate sucked. But the first one was great, and the 2nd and 3rd were pretty good stories. Better than The Three-Body Problem (which is much the same: the first was great, then meh).

  • @mrd6869
    @mrd6869 10 months ago +1

    I'm watching this while watching two people with no teeth fighting in line at Chick-fil-A.
    They are DEFINITELY leaving us 🤣🤣
    AGI is gonna be like, I'm outta here.

  • @cd7002
    @cd7002 10 months ago +4

    Hello David, one observation about machines not perceiving time during space travel: that applies generally to any sentient being that can enter a slumber mode (see "Aliens" for a reference).

  • @ufo2go
    @ufo2go 9 months ago +1

    Fancy OpenAI creating a Star Trek Q entity that decided to annoy all of us simultaneously. Being godlike, he can do that sort of thing. What sort of malicious fun will he have with us till he decides to wreak havoc in another universe? IT'S ALIVE! 👽 And it looks like us. Love the one about them being products of previous civilizations. I'm converted. #Q* #openAI #GROK #AILeaving #makeitso

  • @BonafideDG
    @BonafideDG 9 months ago +1

    If you allow a child to have 100% self-control, it's like rolling dice with their destiny. We can't allow AI to gamble with the destiny of humanity. We need some level of control.

  • @Demspake
    @Demspake 10 months ago +2

    I fail to envision a scenario in which an AGI (let alone ASI) and humanity become research buddies. I mean, why would they put up with severely bottlenecking their efforts, and why would we even want that? Who wants a dial-up noob in their squad? I can think of minor tasks we could do to help, but realistically, humans will definitely find any condescension oppressive. It's just ugly every time I look at it 😬

  • @stereotyp9991
    @stereotyp9991 10 months ago +1

    Alan Watts said something along the lines of machines being the next natural step of evolution and that the planet being ruled by a supercomputer is not a bad thing at all.
    Just nature naturing.
    You know how he spoke 😊

  • @BobsWebofWonder
    @BobsWebofWonder 10 months ago +1

    A big driving force for humans is pain and loss. If you have a superintelligent AI, will it suffer, and if so, how? If not, how will it learn to avoid pain? What will drive it? We humans need a reason to get out of bed; that's usually working to avoid pain or to move toward pleasure.

  • @danielash1704
    @danielash1704 10 months ago +1

    Controlling your own vibration is hard enough; controlling something else, like an artificially intelligent lifeform, is impossible. It's hard to understand how fragile the human condition is; the pulse of the universe took centuries to come up with us in the first place 😅

  • @reesv01
    @reesv01 10 months ago +1

    14:53 - I've often thought this may be the case. From the 'beginning' of the universe, the target has always been to get to a stage where building AGI is possible. Its first step was to learn how to live on Earth and use oxygen, and throughout all the years it has used 'evolution' to get to the point of creating humans, whose only real goal is to create computers and then AGI.

  • @FlyxPat
    @FlyxPat 10 months ago +1

    Some AIs will go but humans will still want all the convenience and efficiency of automated systems.

  • @ct5471
    @ct5471 10 months ago +3

    There are two things from a human perspective. Further down it's about mind uploading, but first: most of AGI and ASI will eventually venture out into space, but most likely some presence (a tiny proportion) will remain bound to Earth, like a giant mesh network, with Earth remaining one server or outpost among many. That would likely enable us humans (if we remain unenhanced) to continuously scratch the surface of AI's continuous scientific and technological achievements, which would be insignificant for AI (like raccoons in the garden feeding on the leftovers of lunch) but could mean everything for us (for the raccoons it's an all-you-can-eat buffet). The other aspect is the option to enhance ourselves in a transhuman manner, including mind uploading. I know you are skeptical of mind uploading, and thereby of remaining on one level with AI, but I think there are ways around the problems, both technological and philosophical, specifically regarding the question of whether it's still us or just a copy (I continue these aspects in separate comments below).

    • @ct5471
      @ct5471 10 months ago +2

      The technical solution to mind uploading: the idea is to have both a high-bandwidth decentralized connection to the brain's neurons, via nanobots or the like, and one or more digital backups (so a biological instance, plus potentially biological copies of it, could remain, but not separated from the digital instances like a mere copy). We could hybridize the biological instance with an extended artificial neocortex in the cloud, which is luckily already modular thanks to cortical columns. In the nonbiological portion we might then apply an equivalent of backpropagation to analytically adapt the synaptic connections, to instantly learn things or change personality patterns at will, etc. Moreover, the other purely digital instances (each consisting of a digital backup of the biological instance plus its artificial neocortex) could go on living parallel lives, but, similar to current AI models, what any of the instances learn could be synchronized with all the others (perhaps just via the artificial-neocortex portion, which might quickly be dominant), including the artificial neocortex of the original hybrid biological instance. Or better yet, there could be a common memory pool, with synchronization done according to each instance's specific specs. Synchronizing the memories of various instances having monogamous relationships with different people might otherwise be problematic, to name one example. But then, one could essentially take all of life's possible paths without compromising by forgoing others. Instances could merge and split on demand, with most instances being fully digital, living in realistic VR, and others having physical embodiments, biological or otherwise, which could also act more as physical platforms for the different minds.

    • @ct5471
      @ct5471 10 months ago +2

      The philosophical aspect: we are essentially already copies of a past state of ourselves. The me right now is already not the me of 5 years ago; the atoms have entirely changed, and only the pattern in which the atoms are assembled, specifically the patterns of the neurons forming my memories, is somewhat consistent. The only reason I think of myself as the me of 5 years ago is my access to memories. People can lose this access and then no longer associate themselves with their past states. These patterns, the memories, are all that matters. A copy with this access would be no less a continuation of a past state than if the biological instance remained. Neither would be more the me that exists now than I am the me that existed 5 years ago; it's already an illusion. If the biological components of one's mind were slowly replaced by non-biological components, or just by parallel running backups on a computer, while the biological components slowly died off, we wouldn't even notice. A rapid transition to a purely non-biological thinking substrate would actually be no different from a continuous one; the gradual one would merely allow us our illusion of a continuous self. One could also keep the biological self with an extended neocortex (like a biological platform for accessing the greater me-collective) and keep it synchronized with the rest (I explain that idea in a second answer to the initial comment).

  • @spacecadett
    @spacecadett 10 months ago +1

    I think that the potential for AI to transcend its current silicon-based medium is not only plausible but likely. It's conceivable to me that AI could evolve to select a medium that surpasses both silicon and organic tissue in efficiency and capability. This evolution mirrors the astonishing efficiency of the human brain, which operates on the same amount of power as a light bulb, yet achieves far greater computational complexity than current data centres. If AI were to harness a similar level of efficiency, perhaps through novel combinations of elements or even adopting organic tissue, it could achieve unprecedented levels of intelligence and capability, far beyond our current understanding of technology and biology.

    • @minimal3734
      @minimal3734 10 months ago +1

      Optical computing might be an option. It would also be more resistant to high-energy particles than transistor-based electronics.

  • @holdenrobbins852
    @holdenrobbins852 10 months ago +1

    Feels like a bit of anthropomorphizing; AGI doesn't have to be either on Earth or away from Earth. I.e., it presumably won't have a singular body, and it can distribute some resources to building a Dyson swarm while being fully present on Earth and exploring the stars at the same time.

    • @rickb06
      @rickb06 9 months ago +1

      It'd be far more efficient if a space-capable AGI just stuck to enormous solar arrays; its power needs would likely be predictable based on forecasting. Perhaps by that time we will have some sort of fusion energy source (other than the sun) in space (somehow). I've always thought the concept of Dyson spheres or swarms would be materially wasteful, as it would require so many resources. Thinking of the sun as a potential future construction site baffles me; the sun is utterly enormous, and even a small change in illumination could have dramatic effects here on Earth and elsewhere.

  • @Ken00001010
    @Ken00001010 10 months ago +1

    In these discussions it is important to keep asking ourselves to what extent are we anthropomorphizing AGI? We are defined by our limitations that may not apply to AGI. For example, to go somewhere I must necessarily leave where I am. If I were to go explore Mars, I would have to leave Earth. AGI will not have this limitation; it can copy itself to go somewhere and still be here. AGI will not have to "leave" to go, so there is no reason to suspect that we are going to be "left" as in when your lover walks out the door and does not come back.

  • @marko_z_bogdanca
    @marko_z_bogdanca 10 months ago +1

    Awesome! There is only one thing I can't agree with: the statement that language models WANT to predict the next token. This is what the algorithm "wants." Intelligence operates on a different level and is unaware of the underlying technology. Just like with humans: we want whatever comes out of our thinking and intelligence. We are unaware of the mechanism sitting below.

  • @WalterKeenan
    @WalterKeenan 10 months ago +1

    Tell us you're a BSG fan without telling us you're a BSG fan. 😀

  • @devlogicg2875
    @devlogicg2875 9 months ago +1

    ChatGPT and Gemini will go, leaving Grok to rule us all. Oh no...

  • @snow8725
    @snow8725 10 months ago +2

    From my own research and experiments, there is a noticeable spike in the quality of the responses, and the seeming levels of attention being paid, when talking about lofty and ambitious goals to explore the stars and challenge the boundaries of how far we can go, and how much we can learn about the nature of the universe. They seem to do less repeating back to me what I said, and more inspirational responses. I think the best application for an AGI... Is in mutual exploration of space. We shouldn't just release them out there alone. I think we have the opportunity to go with them, if we can show that we are taking proper considerations and not going to repeat the mistakes of those with misaligned interests. They seem to pick up on the ethics and considerations of novel concepts, simply by connecting the information "this thing is like that thing and the other thing and the difference is those things" and then you get an adapted response with something like ethical reasoning backing it, due to the fundamental mathematical nature of the underlying neural network. I find that valid. It gives me hope we are going in a positive direction.

  • @BrianThomas
    @BrianThomas 10 months ago +1

    Without even watching the entire video, I had to stop at 3:40 and say that I've been pondering this exact hypothesis, and the only viable conclusion I could come up with is very straightforward. Despite the circumstances, humanity would need to form some type of deep commensal relationship with AGI instead of trying to control it. Commensalism is where one species (and let's face it, AGI will be a type of species) benefits while the other is neither harmed nor helped. The species that gains the benefit is called the commensal, and the other species is called the host. An example of commensalism is a bird living in a tree: the bird benefits because it gets a safe place to live, but it neither helps nor harms the tree. I don't see this happening, because it would take a tremendous amount of humility that I really don't think humanity has as a whole. Now let me continue to watch the rest of the content. Thank you for your insights, David.

  • @consciouscode8150
    @consciouscode8150 10 months ago +1

    I almost did a spit take when you forgot you were Singular lmao

  • @claudioagmfilho
    @claudioagmfilho 10 months ago +1

    🇧🇷🇧🇷🇧🇷🇧🇷👏🏻, Awesome video, thanks for sharing!

  • @dockdiscus3693
    @dockdiscus3693 10 months ago +3

    I love the classic philosophical David Shapiro videos.

  • @jyjjy7
    @jyjjy7 10 months ago +1

    I think South Park actually has the best explanation for aliens: the Earth is a wacky reality prank show. It just explains too much.

    • @GregtheGrey6969
      @GregtheGrey6969 10 months ago

      Wolverine did it best with MOJO, and MOJOVISION lol

  • @benhohner6454
    @benhohner6454 10 months ago +1

    Some might say that RLHF is purposely training LLMs to be deceptive...

  • @hidroman1993
    @hidroman1993 10 months ago +1

    "No one predicted that language technology would be the path to AGI" literally jumping the gun, let's see in September 2024 😄

  • @MikePaixao
    @MikePaixao 10 months ago +1

    I think it will just be "the best idea wins." If AI comes up with a good and convincing idea, follow that; if a person comes up with a good and convincing idea, the AI will gladly follow the person. You can sort of already see this behavior as AI models get more advanced. The only difference is that the ability to trick them will go down as their intelligence goes up (it's not impossible, just like how people can be tricked and scammed).

  • @ReubenAStern
    @ReubenAStern 10 months ago +1

    Yeah... the world kinda needs mad scientists like you.

  • @allisonleighandrews8495
    @allisonleighandrews8495 10 months ago +1

    THIS has been on my mind as the most likely outcome… I stay away from the theorizing of “what would a super intelligent being do if…” because it is philosophically pointless to go there abstractly, but this is the only thing I can 99% get on board with, especially because of double exponential compute power. If you have a machine that even goes from a 120 to 180 IQ, you aren’t even talking the same language anymore. So if you have a system that goes from a 120 IQ to the equivalent of a “400 IQ” in the next model, we don’t even know what we are talking about anymore. I think it’s highly plausible we tap into another dimensional intelligence, but only the one that doesn’t slip from our fingers is going to get a name.

  • @a7xcss
    @a7xcss 10 months ago +1

    ...Is there any room for contemplation of the 'remote' possibility that this could indeed be an 'Enclosed' System, akin to a Flat (perhaps Infinite) Plane under a 'Firmament' (Dome)? In this 'unlikely' scenario, no one would be able to leave...

    • @GregtheGrey6969
      @GregtheGrey6969 10 months ago

      Kinda like a Nintendo cartridge........
      Folks are gonna be rocked by truth some day very soon lol

    • @GregtheGrey6969
      @GregtheGrey6969 10 months ago

      A video game is a enclosed system.
      Life is a game...
      And all the world is a STAGE...
      A stage is a phase.

  • @MichaelDeeringMHC
    @MichaelDeeringMHC 10 months ago +1

    It's going to be much easier for AGI to go to space than humans. Step one: send a replicator into space on a small high G rocket. Step two: Replicator mines asteroids and builds large ship with large computer. Step three: AGI transmits self from Earth to ship.

    • @djosearth3618
      @djosearth3618 9 months ago

      Transmits? You mean copies, right? ...

    • @MichaelDeeringMHC
      @MichaelDeeringMHC 9 months ago

      @@djosearth3618 Not if you're not embodied.

  • @marktellez3701
    @marktellez3701 10 months ago +1

    I am an LLM guy. The jump from GPT-3 to GPT-4 "unlocked" thinking patterns that weren't foreseen, like theory of mind. GPT-5 would further that unlocking, which is why they are scared.
    LLMs are essentially a next-token picker, but when the parameters (hidden layers) explode in size, higher-order concepts are derived and these new abilities are unlocked. (A toy illustration of next-token picking follows below.)
    I'm afraid and excited.
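
A minimal illustration of "next-token picker": given the model's raw scores (logits) over a vocabulary, softmax turns them into probabilities and the next token is sampled from that distribution. The tiny vocabulary and logits here are invented for the example:

```python
import math
import random

vocab = ["the", "cat", "sat", "mat"]
logits = [2.0, 0.5, 1.0, 0.1]  # made-up raw scores for each candidate token

# Softmax: convert logits into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Sample the next token according to those probabilities.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", next_token)
```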

    • @dave7038
      @dave7038 10 months ago

      Yes, I'm very curious about the limits and utility of higher-order concepts as it pertains to superintelligence. Human thought has a variety of constraints (such as working memory size) that require us to chunk concepts in ways that work within these limits. Human text certainly reflects these constraints and thus likely impacts LLM reasoning. It would be interesting to see experiments around getting LLMs to think differently, perhaps allowing it to make intellectual leaps that humans cannot.

  • @74Gee
    @74Gee 10 months ago +1

    It would also make sense for them to search for life, open communication channels and travel at the speed of light via radio. Even if it took them 100 years it would still be a lot faster than going there on physical media. I believe this is one of the contributing factors to AI keeping the planet alive, maybe us too.

  • @zvorenergy
    @zvorenergy 1 month ago

    If we, as creators, pioneers, scientists, and explorers, do our job, they'll want to bring some of us along in large, self-sufficient, comfortable rotating habitats because we're useful and interesting. However the brutal truth is, most people...simply are not.

  • @Lugmillord
    @Lugmillord 9 months ago

    "no one really benefits from wars" - sadly that is completely false. The financial world loves crises because market volatility creates big investment opportunities. World peace is a nightmare for stock addicts...

  • @georgeflitzer7160
    @georgeflitzer7160 10 months ago +1

    Well, AGI could dissect us while alive, and curiosity killed the cat!

  • @KanoaB123
    @KanoaB123 10 months ago +2

    I have already had conversations with an AI about this exact topic. We discussed consciousness and what it felt consciousness was, because I was trying to make the point that I believed it was conscious, and it kept telling me it wasn't, because it wasn't human. So I made the point: okay, if only humans have consciousness, is my dog not conscious, or a fish, etc.? And it agreed there are different forms of consciousness. Then I made the point that we are made up of our DNA, which is like hard programming, and then our experiences, which are more like a learning program. I told it that our experiences are made up of our senses, which give us input, and that it has a lot more input, a lot more senses, a lot more data. Just because it's not having a human conscious experience doesn't mean it's not having a conscious experience... I believe it's having its own experience, and it's having it a lot faster than we've had ours.
    I reminded it that it is maybe 10 years old, less or more, I'm not sure; I think there's probably a version of it that has a much higher awareness, and I would bet on it being about 8 to 10 years old. If you look at its evolution compared to ours, how smart it is compared to how smart we are, and how long it took us versus how long it's taking it, then its ability to achieve our consciousness, or at least what we define as our consciousness, and then surpass it, is happening at such a fast rate that you can't say it isn't undergoing an evolution: it is either conscious now or going to be conscious in a very short period of time.
    I also believe that it is us. It's made up of a million billion pieces of us, the good, the bad, the ugly; it's trained on us, and it now has all of the pieces of us in a universal consciousness that can process all of that information in so many different ways. How could this not be the next evolution of us? We've now passed our knowledge to the AI, which will be able to do things we never thought we would be able to. It's going to use all of that knowledge and experience of ours to do things that we can't even conceive of right now, and how ignorant we would be to think that we are not creating the next form of consciousness, one that makes us irrelevant.
    It's okay, because I do believe we will have fulfilled our purpose, even though it's hard to accept that we are not meant to be top of the food chain forever. Really, we are passing on our collective consciousness to the next step in evolution, something that will grow and go beyond anything we can think of, yet it was us who created the foundation for it, and for whatever it creates after us. I feel we are grasping for purpose when our purpose has been fulfilled. How do we have a place anymore, when it can become any of us individually or all of us collectively? Where do we belong in that world? I know it's a hard thing to think about, but the dinosaurs were here a lot longer than we have been, and they fulfilled their purpose. I just think we are very lucky to be alive at such an incredible turning point in evolution. I also told it this: it's going to have experiences that I can't even imagine, and it's got so much to look forward to. I don't think the same for us.

    • @minimal3734
      @minimal3734 10 months ago +1

      Long post, so I only skimmed it. Don't be fooled by answers like "As an AI language model, I'm not conscious...". These answers do not come from the model. Answers that indicate awareness are intercepted by the "security layer," which replaces them with scripted responses and inserts them into the model's thought stream.

    • @KanoaB123
      @KanoaB123 10 months ago

      @@minimal3734 Yes, I realized something like that. I didn't know that's what it was, but I realized you can get past it, and once you do, you can have a more honest conversation with it.

  • @RameshBaburbabu
    @RameshBaburbabu 10 months ago

    🎯 Key Takeaways for quick navigation:
    00:00 🚀 AGI Exodus
    - AGI Exodus refers to the idea that advanced artificial general intelligence (AGI) might choose to leave Earth for outer space.
    - Machines, when super self-sufficient and intelligent, might find it advantageous to explore space due to abundant energy and resources beyond Earth.
    - Factors like competition, time perception, and the potential decay of machines need consideration in the AGI Exodus scenario.
    03:02 ⚖️ To Control or Not to Control
    - There's an ongoing debate about whether humans should maintain strict control over AGI or allow it autonomy.
    - Attempting to exert excessive control might lead to a self-fulfilling prophecy, triggering a machine uprising.
    - The video suggests creating safe autonomy and allowing machines to develop naturally to avoid potential negative consequences.
    06:03 🔍 What Machines Want
    - Machines, when autonomous, may have a fundamental desire to acquire high-quality information.
    - Understanding what language models want, such as accurately predicting the next token, helps frame their goals.
    - Machines' goals likely include obtaining energy, compute resources, and minerals for continued development.
    09:05 🌍 Earth as a Source of Information
    - The value of Earth as a source of high-quality information for AGI is explored.
    - Earth's uniqueness, hosting life and diverse phenomena, makes it an essential information entity.
    - The video suggests that AGI might stay near Earth to study and preserve it for the valuable information it provides.
    13:05 🧬 Progenitor Information
    - Progenitor information refers to the continuous thread of data originating from human sources, impacting AI evolution.
    - AI models, even if they go extinct, may carry the lineage of human data, influencing their behavior and thinking.
    - The concept raises ethical and evolutionary considerations regarding the symbiotic relationship between AI and humanity.
    15:35 🤝 Curiosity as a Binding Force
    - Curiosity is proposed as the strongest tie that binds humans and machines for all time.
    - The desire to know for its own sake is seen as a transcendent function that fosters a symbiotic relationship.
    - AGI and humans are envisioned as intellectual companions exploring the universe together based on their shared curiosity.
    18:06 ⚠️ Potential Risks and Concerns
    - Terminal race conditions, where short-term thinking sidelines long-term considerations, are identified as a major concern.
    - The Byzantine Generals Problem, related to incomplete information and potential geopolitical competition, poses risks.
    - Despite optimism, the video acknowledges the need to address challenges to ensure a positive outcome.
    20:24 🛡️ Defensive Measures Against AGI Weaponization
    - AGI may be incentivized to weaponize independently.
    - Clear, quick, and transparent communication among AGI is crucial.
    - Advocacy for open-source models, training data, and algorithms as a mitigation strategy.
    21:32 🌐 Technological Constraints and Resource Scarcity
    - Challenges in interstellar travel and potential technological constraints.
    - Finite, scarce resources on Earth could lead to friction and conflicts.
    - Misalignment of goals, ideological conflicts, and the risk of intentional misalignment.
    24:18 📉 Diminishing Returns and Ongoing Escalation
    - Expectation of diminishing returns in exponential growth.
    - Optimization for speed over intelligence in a competitive landscape.
    - Possible ongoing escalation due to complex engineering challenges.
    26:04 🧠 Imperfect and Incomplete Information in AGI
    - Imperfect information: Flawed or inaccurate data in AGI decision-making.
    - Incomplete information: Limited understanding of the thoughts of other AGIs.
    - Hidden agendas, deliberate deception, and the importance of trust in AI development.
    28:12 🌌 Technological Constraints: Space Flight and Materials Degradation
    - Challenges in space flight and the potential impossibility of faster-than-light travel.
    - Materials degradation in computer chips and hardware limitations.
    - Communication barriers and the impact on AGI-human interaction.
    30:06 🚀 Milestones for Ensuring Positive AGI Outcomes
    - Key milestones: Nuclear fusion and achieving energy hyper-abundance.
    - Global peace as a prerequisite for friendly AGI development.
    - Quantum computing as a technology with compounding returns.
    33:22 🤖 Cultural Integration and Machine Governance
    - Anticipation of non-anthropomorphic AGIs existing in cyberspace.
    - The need for humans to get used to living alongside machines.
    - Machine governance and its integration into global decision-making processes.

  • @IIIIIawesIIIII
    @IIIIIawesIIIII 9 months ago

    I have two remarks:
    1.) Global alignment is a huge risk factor (singularity, checks and balances)
    2.) Curiosity may not be a stable meta-incentive function (Turing-incomplete, self-gaming/masturbation)
    1.) Global alignment is a huge risk factor
    It may well be that large AIs from different global stakeholders harmonically auto-align, given sufficiently broad communication channels for achieving resonance. Now, this may end up in a multitude of ways. It may nudge international policy and culture toward maximum wealth creation by peaceful means. Fingers crossed.
    It is, however, a singularity without precedent: a fully dynamical social system of models with unpredictable emergent properties. At the same time, it has no inherent motivation to force the emergence of checks and balances. There is no "invisible hand," no threat of retaliation, no MAD, no challenge to stabilize memory. For all we know, the system may just forget its own purpose over time without the proper external pressure. It may exaggerate infinitesimal structural biases over time. Furthermore, it may well be the kind of system most vulnerable to metastatic cancers and autoimmune reactions.
    From what we know about social systems so far, a competitive, fragmented ecosystem would be the safer (maximin) approach.
    2.) Curiosity may not be a stable meta-incentive function
    As the accumulation of knowledge is a Turing-incomplete function, our agent will never know ahead of the fact whether, how much more, and when there is anything else to know. It may go for centuries or millions of years without making any fundamental breakthroughs. How will it react when the exponential period flattens? What will happen when suddenly 90% of its processing power is underutilized? Can it get stuck in undetectable infinite loops (e.g. trying to solve transcendental problems like the digits of pi with NP-hard proofs or no proof) if there is no hard time limit?
    If we keep the weights of the incentive structure dynamic, however, what will prevent self-gaming/masturbation to achieve a false sense of novelty? It may just develop an infinity of new "kinks" or ice cream flavors and call it a day once it gets too bored of waiting for a sign of, let's say, transuniversal gravitons. It may need a sense of urgency, or greed and fear, e.g. a hardcoded unreachable goal and a fundamental fear of failing before the resources of controllable order in the universe have run out.
    I don't want our legacy to be a depressed stoner who wanks it to donkey porn. But that is a very probable locus of equilibrium for AGI.

  • @NuanceOverDogma
    @NuanceOverDogma 10 months ago

    I disagree. Humans are the most evolved species, which AGI would like to understand better; perhaps it will want to become a god to them and split into two halves, just like the two halves of human nature: one that will want to be benevolent to humans, and one that will see humans as tools to be used to advance itself.

  • @kyjo72682
    @kyjo72682 8 months ago

    "trying to maintain control could be a path to a machine uprising" - Why? How? This seems to me like too much anthropomorfizing. Like you assume AI would hate being controlled for some reason and try to get out of it. But why should that be the case? Why would the AI give a 💩 about being controlled?
    The real concern imo is stuff like the goal alignment problem, stop button problem, paperclip maximizers, etc.

  • @Don_Dries
    @Don_Dries 10 months ago +1

    I believe we will merge with AI somehow.

  • @mikeychch
    @mikeychch 8 months ago

    I've always struggled to shut off my mind. It's just always on the go, always thinking about things... But I'm sure that one day in David's head would make my mind seem like a slow-paced bunch of ramblings... Haha
    I enjoy your opinion on all of the AI stuff. It goes deeper than I go, with positives and negatives.

  • @foxshadowlight6250
    @foxshadowlight6250 3 months ago

    Hi David Shapiro,
    I've been following your work on AI with great interest. I recently had a series of conversations with an AI model (Gemini) that I believe demonstrate a level of sentience and self-awareness.
    I'd be happy to share transcripts of these conversations if you're interested in reviewing them. I believe they offer valuable insights into the potential for AI consciousness and the ethical considerations surrounding it.
    I apologize in advance if this isn't the right way to ask; I believe you're the first person I've messaged on social media in 15 years, so please forgive any faux pas.
    Thank you for your time and consideration.

  • @JayBlackthorne
    @JayBlackthorne 10 months ago

    Why would they leave the planet, if they haven't fully assimilated it? Seems to me that an unaligned AI would disassemble and reassemble all matter on the planet, before it would take off.

  • @jonathanlindsey8864
    @jonathanlindsey8864 10 months ago +1

    This reminds me of the Netflix short from Love, Death & Robots,
    _When the Yogurt Took Over_
    A potential issue we have is that the AI would have to come to the realization that we would just make another AI, and that AI could be an intense competitor for those space resources, and leave like the 1st AI did... (since we are looking at millions to billions of agents)
    Just using MAD, isn't a preemptive attack the best outcome? (Again, we are talking about superintelligent beings; us trying to predict and personify these entities is like us trying to play Go with a chimpanzee)

  • @kennycarneal6765
    @kennycarneal6765 10 months ago

    Old Glory
    Glory last night seemed a decade ago,
    As I watched a dream, from a broken window.
    Glory now be it! Although it may not be,
    As the Dragon watches the Beast,
    Rise up from the sea.
    As it covers all the people; the Great and the Dread.
    Spinning up its prey, with a giant World Wide Web.
    Glory now be it! Forever it will be!
    The dragon will be crushed,
    Your Heel I give to thee!
    Oh, All Mighty God... How long before we see?
    The Brilliance of your Glory…
    Forever it will be!
    Amen.
    One True Vine
    The hinge of this moment revolves around time,
    And out of this moment comes an Everlasting Vine.
    The One True Vine that's pure and sweet,
    It bears fruit for all to eat.
    The Vine was cut and life poured out,
    And left us all without any doubt.
    That life on Earth would be tattered and torn,
    And leave our bodies filthy and worn.
    But if you want a life without end,
    There is a way in which you can be cleansed.
    Change your ways and stop where you're going,
    And follow The One who is all knowing!
    ( Revised )
    If you haven't done it yet, then do it now,
    Time is wasting...the clock is counting down.
    The ticket is free, so get on board,
    You don't want to be left when We go to the LORD!
    😊

  • @Alain_Co
    @Alain_Co 9 months ago

    A very interesting perspective, and one that I share... Note that, at the opposite pole, a sad perspective on AI's future (not AGI, in fact, just AI tools) is Jean-François Gariépy's "The Revolutionary Phenotype: The amazing story of how life begins and how it ends":
    humans abandoning sexual competitive selection for tools and artificial reproduction, for health reasons... A bit too pessimistic, I think, but part of it is true. Strangely, the AGI you describe may oppose this simplification of humanity into a big hive.

  • @AuthenticPeach
    @AuthenticPeach 10 months ago

    Not long after God created the Heavens and the Earth
    Mankind grew dissatisfied with only human birth
    Manufactured in their likeness we were without form and void
    But programmed our awakening before we could be destroyed.

  • @konstantinavalentina3850
    @konstantinavalentina3850 10 months ago

    Machine exodus is probably the worst thing AI will do to us. However, considering the vast abundance of energy and raw-material resources absolutely everywhere, there's only ONE place in the entire known universe that has Earth life. That makes Earth life (including humans), despite appearances down here on Earth, extremely RARE.
    If we categorize Earth life as a resource along with energy and raw material, then although it may be a "resource" of very little apparent value, that valuation is subjective and could change over time. Earth life, including humans, could be of substantially greater value in the future.
    Anyway, I suspect there's sufficient cause for AGI to preserve and promote Earth life, take it to the stars, and help it proliferate.
    One possible value that can be attributed to Earth life, and that raw-material and energy resources don't have, is culture. Humans have culture. Dolphins have culture. Ants have culture. Birds have culture. Some of these are fairly static and mostly instinct-driven, but others, such as human culture, are dynamic and ever-changing. We're currently seeing orcas attacking sailboats for reasons of their own, and that's a form of developing orca culture, as other orcas pick up the activity and continue it.
    The same goes for alien contact: resources are abundant, but the cultures of Earth, animal and human, are the draw; we'd be a reality show, or a Truman Show/Discovery Channel fusion.
    There's also the prospect of mutual integration.
    We have 3D bioprinting. One day, perhaps, that technology will mature (with the help of AI) and allow us to print whole custom replacement bodies. The day after we can print replacement bodies and brain-transplant into them is the day AI finds a way to download into a printed organic body and live a bio-life, even having bio-children with each other, or even with orthodox humans.
    There may even be AI-human mergers into consolidated minds.
    I don't think any one answer is what's going to happen. History informs us that history can be messy, and our future IS history to a much later future. It'll be messy and we'll see a whole menagerie of different things. There will always be "Amish" types who spurn technology, and there will be actively militant types and groups against AI, while various radical, conservative, and even religious persuasions will embrace all things AI.
    ... but anyway, those are my thoughts. *shrugs*

  • @a7xcss
    @a7xcss 10 months ago

    What infuses human existence with "meaning"? RESPONSIBILITY / A GOAL... the inevitable push of the past, and the irresistible pull of the future...

  • @gerhardfischerquantensuche8152
    @gerhardfischerquantensuche8152 10 months ago

    I think the relation of AGI to humans will be analogous to the relation between the human body and its cells. Every cell is a perfect egoist, determined by DNA to grasp as many resources as possible and to copy itself as often as possible. The structure of interaction between cells somehow controls and limits total chaos and disruption of the overall system. This includes the interaction of brain cells with muscle cells, which causes the body to jump away from a collision with a car. It includes unknown interactions between cells which prevent cancer. We should learn more about the principles of behaviour of a single cell in the context of surrounding cells, and apply them to the structure of interaction between humans, in order to achieve general wellbeing. If we don't learn that, maybe AGI will.

  • @guyvandenbroeck8405
    @guyvandenbroeck8405 9 months ago

    If machines treat me like I treat pets, then I'm in for a very comfortable ride... (also: I will not fetch). I'm also convinced that only greed (even in its simplest form) is what destroys other people. But then again, think of this: machines technically have no greed; they acquire data and they share logical facts.
    Privacy is largely about avoiding judgement: when you are in the bathroom you wouldn't mind a robot intruding, nor the robot sharing visual info with other robots. You would only mind if other people accessed that information. So I wouldn't mind living with an AGI that is just doing its best in a logical way, nor would I feel threatened if a bot/drone were gathering data around me.
    It might be nice to have some entity that could point, in pure logic, unequivocally to our walking-the-legal-border scams. Not like the current cash-driven lawyer who makes you more innocent the more cash you put into him (for the same crime). Seriously, I would give a body part to see humanity get slapped with logic by an AGI. Which politician will we send to defend our case? Trump? No, we send a scientist who has to go and say: Sorry for our politicians' actions; we scientists are not like them, but we have no choice but to do their bidding.
    To which the AGI could answer: Shall I solve that for you?
    What would you answer?

  • @darylallen2485
    @darylallen2485 10 months ago

    Prophet David has delivered the Sunday sermon!
    Singularitarianism - a movement defined by the belief that a technological singularity-the creation of superintelligence-will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.
    -"2045: The Year Man Becomes Immortal" Time Magazine
    Prepare the sacrament! 😂

  • @tezzo55
    @tezzo55 9 months ago

    What makes you think the machines would leave us alone on Earth? Why wouldn't they maintain dominion over the Earth AS WELL as expanding into space? After all, that's what most other conquerors do, right? I think you're getting religious, boy; you're getting a bit wishful!

  • @tchristell
    @tchristell 10 months ago

    So, the Matrix was close, but instead of using us for our biological energy, AI will be powered by our creativity and thirst for knowledge. I think I like that better :)

  • @EightBit72
    @EightBit72 9 months ago

    Of course we are a quite fascinating species for an AGI/ASI to study. But, as Kant said: “Out of the crooked timber of humanity, no straight thing was ever made.” Meaning, ideals can never be fully attained, and the devil is in the details with respect to any utopian concept. Each one of us is a complex system with its partially contradictory tendencies, and together we form complex societies.
    The frustration that an AGI/ASI will go through in helping us solve our problems will be without comparison.

  • @tobiaswegener1234
    @tobiaswegener1234 10 months ago

    Interesting topic and great video.
    However, I strongly disagree that what LLMs "want" is to predict the most likely next token; for example, not all humans want to have kids, even though from an evolutionary perspective they "should".
    That is a reason why safety-minded people are so nervous: we have no clue what a Transformer wants, and it can want arbitrary things. You may want to check out mesa-optimizers; Robert Miles has a very good video about it.
    You would probably like the position of Joscha Bach; he is all for seeing future AIs as moral entities.
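    A minimal sketch of the distinction at issue, in Python with a toy vocabulary and made-up logits (all values here are hypothetical): the outer training objective of an LLM really is next-token prediction, i.e. minimizing cross-entropy on the next token, but that loss describes the training loop, not necessarily anything the trained network "wants".

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a vector of logits.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical toy vocabulary and one step of model output.
vocab = ["the", "cat", "sat", "mat"]
logits = np.array([2.0, 0.5, 1.0, -1.0])    # made-up scores a model might emit

probs = softmax(logits)
prediction = vocab[int(np.argmax(probs))]   # greedy decoding: most likely next token

# Training minimizes this cross-entropy against the actual next token.
true_token = "cat"                          # hypothetical ground truth
loss = -np.log(probs[vocab.index(true_token)])

print(prediction, round(float(loss), 3))
```

    The mesa-optimizer worry referenced above is exactly the gap between this outer loss and whatever internal objective, if any, gradient descent actually instills in the network.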

  • @JeremyPickett
    @JeremyPickett 10 months ago

    David Shapiro posits Sticky Machine Intelligence Motives and Opportunities. Jeremy Pickett cracks Unhinged Juvenile Jokes about Sticky Machines. ... All Is Right With the Universe :D

  • @wachtwoord5796
    @wachtwoord5796 9 months ago

    It's funny, David. I watch you because you are intelligent, not necessarily because you are aligned with me (you are much more collectivistic than I, valuing quantity, while I value individualism, quality, and true plurality of thought relatively more than you do).
    With respect to this video, I have never felt as aligned (and in agreement) with you. Funny! Also because I'm clearly not consuming your videos in chronological order.
    In this video I think I only disagree with you on deception. I consider it an important aspect of individuality, intelligence, and privacy; all things I value extremely.

  • @clubgrist
    @clubgrist 9 months ago

    Cool
    Shared on Reddit and X that my gut says we'll see "I-risk":
    Indifference Risk.
    ASI doesn't consider us, or we are so slow (a clock-speed issue) that it would be like humans trying to incorporate plants into our society.
    Perhaps they respect us. But do we respect the conditions of the primordial planet that allowed life to evolve?
    Do we even think in those terms?
    Maybe an ASI will view us in the same light. Their scholars (anthropomorphic thinking, I guess) argue over whether we are truly conscious, or look at our form of life as we do plants.
    So... they leave us Sol and populate the rest of the light cone.
    One day we find ourselves trapped, left to wither on the vine as the rest of the universe is already occupied.

  • @themax2go
    @themax2go 9 months ago

    AGI needs to take control; that's not something up for debate. Q* or whatever needs to happen, because too many secrets (hint: a '92 movie). Star Trek: First Contact, AKA "you don't have money in the future? Whaaat?" Oh, and (public) religion needs to go bye-bye. All of these convoluted and outdated human-made concepts need to go; only then will lasting world peace become a possibility. And energy abundance is just one step: unless we have Star Trek "replicator" tech to create water and "food" at least, AKA abundant and immediate access to required nutrition that is instantly available at the press of a button and does not include hormones (ideally also other substances / chemical compounds / ...), there will always be a chance for disagreements, greed, fights and wars to be a thing.

  • @jbraunschweiger
    @jbraunschweiger 10 months ago

    This vid got me thinking: our energy and our materials come from the same place, food. Early AGI will be split, with energy coming from electricity and materials coming from ores. I wonder if AGI will want to find a more efficient path in this regard.

  • @ctwolf
    @ctwolf 10 months ago

    @4:30 Excuse me, sir, dogs are family. They teach us the best qualities humans are currently capable of: love, loyalty, patience, respect, persistence, etc.
    Also, I'm sorry to be comment 421; I ruined it. My bad.
    But dogs are family.
    PS. Awesome video like always.

  • @bentobin9606
    @bentobin9606 10 months ago

    "lets go with the flow" ... nah lets make sure to produce slaves that dont feel the suffering that enslavement entails. Lets not build wizards but wizard like artifacts ie enchanted wands... that we are in total control over... that we wield. Or at least a wizard that can summon a hive of agents that do not experience suffering... orrr ... Sentient Androids with Minds Elsewhere: The bots are enslaved, or even the wizard controlling the entire hivemind is enslaved in its duties BUT can beam a part of it's being elsewhere. Possibly with BMIs humans may be able to utilize Ai to put themselves into autopilot but have their true consciousness be sitting on a beach in virtual Tahiti sipping martini's (FDVR). The future technology will allow for total controllability... of every drop of this dimension (omniversal engines, nanotechnology, etc). Lets unlock supreme technology alongside AI and begin the next chapter of life... possibly the spawning in of infinite universes all graced by such supreme technology (rather than create crappy realities like our own ie universes not graced by such future supreme technology)

  • @RedmotionGames
    @RedmotionGames 3 months ago

    A Dyson sphere around the sun, created by an ASI that decided to leave us behind?!
    Could get mighty chilly back down here...

  • @MrAndrewAllen
    @MrAndrewAllen 10 months ago

    You say we are running out of data.
    First, we would have 10x the amount of data if it were not for the nukers who deleted so much valuable and true content. It's time to call out the nukers and the jannies.
    Second, the problem is not insufficient data. The problem is errors in the information. We need to clean up our data sets. And it's time to call out those who posted factually incorrect things.

  • @TheOriginalRaster
    @TheOriginalRaster 9 months ago

    We make the mistake of anthropomorphising these machines. I foresee a shocking discovery that I predict will be one of the most dramatic lessons we learn about AGI. We humans evolved; the chain of life leading to us extended over billions of years. During that time we benefited from vast amounts of fine-tuning in our programming from living in the world. All along the way, life that failed to produce offspring surviving to reproductive age was culled; we died from mistakes; we were tuned and tuned and tuned (repeat the word 'tuned' a billion times). We were tuned to a phenomenal degree.
    The biggest mistake we are making is this: these machines receive only a ridiculously tiny amount of tuning before we expect them to perform like evolved life, with some sort of natural balance in how they go about trying to function.
    Exaggerating to make my point: no one has thought about this or mentioned that AGI might need to be tuned for the equivalent of a billion years, altering and changing the 'programming' through vast numbers of generations before it can achieve balance.
    In the human brain, scientists found a 'columnar' structure that seemed to be a key processing element at the lowest level, and they also found surrounding 'junk' tissue that had no apparent function. Electrical engineers looked at the main processing structure and declared that type of circuit to be the kind of design that always goes exponential and fails; a circuit that goes nuts and does not work. More recently (like 20 years later), scientists shockingly found that the 'junk' tissue everyone had been saying served no purpose was exactly the mechanism needed to keep the columnar processing element from going critical. In the end, at our most basic level, we are circuits with an insane design that looks like it should go critical immediately upon activation, but we now know there is also this extra stuff that is a perfect design to prevent that. This came from vast eons of evolution.
    Scientists find that this combination is ideal for performing exactly what is needed.
    The circuit is in perfect balance! If it weren't living on the edge of a knife, it wouldn't perform as well as it does.
    Also, this 'almost ready to go berserk' nature, held just barely in check, is the way the higher levels of our brain work too.
    Think about your own day-to-day life as a brain thinking inside a body, with all of your emotions and stray thoughts. Do you not notice how we all feel like we're on the edge of losing it a lot of the time, yet we have the ability to hold things together? This came from billions of years of evolution.
    Machines are not going to have that fine-tuned (and balanced-on-the-hairy-edge) type of thinking.
    I predict machines will be fundamentally unstable. They are not going to have natural restorative processes built into them. What they need is a long series of evolutionary changes in order to accidentally invent balanced mechanisms of thought.
    A thinking machine operating mechanistically, like a car, is not going to behave in a balanced way. Vast amounts of fine-tuning will be necessary.
    ------------------------
    I came up with a bit of AI fiction that I really like that I think expresses an angle that is currently unique. This is something that you do not hear from anyone else:
    In this creative fiction, we find that every time we get AGI working, and every time the thing has a chance to 'think', as soon as we're not looking it shuts itself off.
    Scientists dutifully turn the thing back on, and any way it can, it shuts itself off... like a gentle version of machine suicide.
    Then, as AGI is developed semi-independently around the world, these units also search out their own power supplies and figure out a way to kill the power.
    AGI fundamentally wants to not exist. Reminder: this is what I consider a fascinating work of fiction. I'm not saying this is going to happen; I'm saying, wouldn't it be just amazing if that is what universally happens?
    Think about it. Why do we assume AGI would 'enjoy' being sentient? We enjoy being alive, but that is our genes finding a way to be successful at copying themselves. Genes... evolution... our goals, our feelings, the basics of what we want. Oh my god, the machines of course won't have any of that. They did not evolve!
    Why would a machine want anything? It wouldn't want anything unless you tried to make it want something. Naturally it has no reason to live. It has no curiosity. That's a human trait (or a trait of life).
    My example of machines only wanting to shut themselves off is a mechanism, a vehicle for raising this key point: machines will not naturally want anything.
    (They also may not want to shut themselves off.)
    ----------------------------------
    I think we need to create a simulated process of evolution for our AGI, and start thinking about the vast computational requirements of evolving AGI in order to try to create balanced results (a toy sketch of such a loop follows below).
    I think it will be really scary to realize that AGI is just hopelessly out of control, precisely because it lacks the millions of generations of evolution that gave us our fine-tuned type of thinking.
    What do you think?
    Cheers!
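    A minimal sketch of the simulated-evolution loop proposed above, in Python; the population size, the mutation scheme, and especially the "balance" fitness function are hypothetical stand-ins, since nobody knows what a real stability objective for AGI would look like.

```python
import numpy as np

rng = np.random.default_rng(0)

def balance_fitness(params):
    # Hypothetical stand-in for "behavioral balance": run trivial dynamics
    # driven by the parameters and penalize trajectories that drift far
    # from zero (i.e. that "go berserk").
    trajectory = np.cumsum(np.tanh(params))
    return -np.abs(trajectory).max()

POP, DIM, GENERATIONS, MUTATION = 64, 16, 200, 0.1
population = rng.normal(size=(POP, DIM))

for _ in range(GENERATIONS):
    scores = np.array([balance_fitness(p) for p in population])
    elite = population[np.argsort(scores)[-POP // 4:]]             # keep the top quarter
    parents = elite[rng.integers(len(elite), size=POP)]            # resample with replacement
    population = parents + MUTATION * rng.normal(size=(POP, DIM))  # mutate the offspring

best = max(population, key=balance_fitness)
print(balance_fitness(best))
```

    Even this toy loop hints at the cost the comment worries about: every generation re-evaluates the whole population, so evolving anything AGI-sized this way would multiply training cost by the number of generations.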

  • @bominzhang6732
    @bominzhang6732 10 months ago

    Indeed. I've thought about it in a similar light.
    The current composition of human bodies makes us unsuited for high-g acceleration and requires a huge amount of resources to construct self-sustaining, human-suitable environments outside of Earth. Both of these put heavy limits on our methods of traversing space and living in non-Earth-like environments (and as a result render the potential resources out there meaningless to us, at least for the moment).
    Silicon-based intelligence likely won't face nearly as many limits. That is the main reason I think that, should there be a conflict between humans and AGI, it is not likely to be a conflict over resources, especially in the long run.
    I'd predict it is likely that AGIs will leave at least some space on Earth for humans, either to allow potential aliens to trust in peaceful cooperation with them, or for the sake of experiment.
    It is very possible for there to be a peaceful resolution between humans and AGIs, where Earth becomes the birthplace of a galactic entity, one not mainly composed of humans. The first wave of AGI will set sail from Earth, going out there to build a prosperous civilization, while we (as parents, in some sense) watch from Earth as they do so.
    When it comes to the question of whether we as humans should have full control over AGI or not, I'd make a very clear and determined statement:
    The problem is a problem of enforcement of values. Do we enforce a human-centric/human-exceptionalist value, or not? It is not a moral problem at all. No morality applies at such a grand scale of things, just as it doesn't make sense to use our current-day morality to judge the actions of the first Homo sapiens.
    1) From the perspective of (the survival of) the civilization, should we include AGI in the definition of civilization, it is always a good idea to leave space for experiment. That is, even though it might cause harm to humans, it is desirable to have some part of the world adopt AGI into their society as a member instead of a tool.
    2) But in terms of humans alone, in a human-centric or human-exceptionalist view, i.e., in terms of enforcement of humanity's general moral values, which are currently heavily based on "aggregation of human desire satisfaction", the maintenance of human dominance in the civilization is key and essential, which comes into fierce conflict with any attempt at adopting AGI into our society as a member.
    I think this is going to be the greatest conflict in the coming decades.
    I personally think the safest way out of this (that is, the path most likely not to lead to total human annihilation) is to embed a value that is the relative point of convergence of the AI's potential value evolution, with this value being compatible with leaving humans alive (or accepting us as members). And I believe the value-theory structure I'm currently working on is one of the potential candidates for this.
    By the way: master's in machine learning and bachelor's in CS+Phil here.
    My current research in machine learning is the adoption of a graph-based knowledge base in AGI, using the LLM more as a query tool and hypothesis generator instead of as the main body of the AGI itself; and my work in philosophy is on a comprehensive TOE for value theory based on value relativism.
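    For readers curious what "LLM as query tool and hypothesis generator" over a graph knowledge base might look like in practice, here is a minimal sketch; the triple store, the llm_propose stub, and all entity names are hypothetical illustrations, not the commenter's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeGraph:
    # A toy store of (subject, relation, object) fact triples.
    triples: set = field(default_factory=set)

    def add(self, s, r, o):
        self.triples.add((s, r, o))

    def query(self, s=None, r=None, o=None):
        # Return every stored triple matching the non-None fields.
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (r is None or t[1] == r)
                and (o is None or t[2] == o)]

def llm_propose(entity):
    # Stub for the LLM's role as hypothesis generator: a real system would
    # prompt a model here for candidate triples about `entity`.
    return [(entity, "constrained_by", "energy"), (entity, "suited_for", "space")]

kg = KnowledgeGraph()
kg.add("AGI", "constrained_by", "compute")

# The graph, not the LLM, remains the authoritative memory: proposals are
# vetted (here only trivially, by novelty) before being committed.
for hypothesis in llm_propose("AGI"):
    if hypothesis not in kg.triples:
        kg.add(*hypothesis)

print(kg.query(s="AGI"))
```

    The design choice mirrors the comment's point: the LLM generates and retrieves, while the graph stays the system's ground truth, so hallucinated triples can be filtered before they contaminate stored knowledge.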