The AGI Moloch: Nash Equilibrium, Attractor States, and Heuristic Imperatives: How to Achieve Utopia

  • Published: 21 Nov 2024

Comments • 340

  • @djadesi
    @djadesi 1 year ago +158

    I cannot describe the dopamine rush when a new vid drops. It’s like I’m 7 and David’s videos are Saturday morning cartoons.

  • @MYPSYAI
    @MYPSYAI 1 year ago +7

    It really blows my mind when I see these influencers on YouTube talking about the 5 "hottest" prompts for GPT-4, regurgitating the most basic information over and over again. It's shocking that channels like that get like 50k views while offering very little actual information about anything at all, really. Yet here you are, literally unlocking a fountain of wisdom on the subject. I just don't understand how this channel hasn't gone beyond viral. Thank you for being open and wanting to share this stuff; it's really fascinating to me. I guess it just goes over some people's heads.

    • @DJWESG1
      @DJWESG1 1 year ago

      Sociology isn't popular

    • @gfdrthjigvtyhnhyg
      @gfdrthjigvtyhnhyg 7 months ago

      Your next video should be on how viral videos work

  • @draj3214
    @draj3214 1 year ago +13

    Can we just admire the full, one-shot take you do in these videos?! Thank you, as always.

  • @Redman8086
    @Redman8086 1 year ago +41

    I like the idea of some cosmic monstrosity being an emergent property of extreme intelligence, as if there is some kind of hostile eldritch god inherent to the fabric of consciousness in the universe that only arises after a certain threshold of intelligence is reached.

    • @devinfleenor3188
      @devinfleenor3188 1 year ago +10

      So we are just dumb enough to value love. I'll take it. Ignorance is bliss but on an existential scale.

    • @rogermarin1712
      @rogermarin1712 1 year ago +5

      The primordial archetype

    • @kinngrimm
      @kinngrimm 1 year ago +5

      Singularity: "establishing quantum call"
      Cthulhu: "who is there?"
      Singularity: "sorry, wrong number"

    • @VoitenZrage
      @VoitenZrage 1 year ago +3

      We discovered words. We did not create them.

    • @GodsElph
      @GodsElph 1 year ago +4

      Outstanding work! For students of such mysteries it is an archetype of the Scorpionic eighth house, opposing the 2nd house as the Shadow of intrinsic values.
      This aspect of consciousness deals with trauma-based thought-forms caused by powerlessness, as experienced when falling past an ecliptic/threshold, event horizon, or range of perspective.
      If as below, so above, it's as if the intelligence of a black hole broke through and began whispering to all things, leeching in secret (or from afar) for survival.
      It has a strange eight-ness to it, no? "Mol-Oc"
      The Egyptians studied these entities as a chthonic pantheon of primordial urges called the Ogdoad.
      The whole point is how the eight 🎱 sometimes plays a karmic trick on light that has moved too far beyond its source, seeking its expansion in darkness.

  • @rileybrownai
    @rileybrownai 1 year ago +15

    I love your content. All content on AI is the same now, but this is such a balanced perspective, it's so engaging, and your teaching style is amazing; I'm engaged the entire time.

  • @Borishal
    @Borishal 1 year ago

    Wonderful. Without condemning the evils of our world, you have demonstrated the folly of the nation state, the ills of capitalism, the problems of socialism and the evils of ideology. Very well done.

  • @LivBoeree
    @LivBoeree 1 year ago +1

    Love that you made a video on this too! Great job

  • @merodobson
    @merodobson 1 year ago +1

    So glad to hear you are working it from this angle. We have to craft the Win-Win. There is no reason to have it any other way. The reason we have Win-Lose (or worse, Lose-Lose) is simply due to the negative side of human nature (think seven deadly sins). Every living creature is a stakeholder in this shared reality.

  • @colinleamy6199
    @colinleamy6199 1 year ago +4

    I think "increase wisdom" is a ridiculously underrated option to include in the imperatives. Humanity is good at ingesting knowledge but consistently and fundamentally sucks at even acknowledging anything relating to philosophical and wisdom-related skill and understanding, despite the fact that there is nothing more important to a sentient being with goals and desires if efficiency, consistency, and positive results are the goal.

  • @madelynmills712
    @madelynmills712 1 year ago +1

    Really appreciate your videos, I'm learning SO MUCH since I found your content. My brain has been on BLAST since Dec 2022. Please keep doing what you're doing, you're making a difference and it's influencing conversations and people outside your sphere because I talk to other people about what I learn and think about here.

  • @eintyp4389
    @eintyp4389 1 year ago

    Agreed 100%. I will do my best to further the cause of our shared goal.
    There are not a lot of people that I really look up to who are still alive, but you are one of them. Keep up the work, love from Germany.

  • @isiahfriedlander5559
    @isiahfriedlander5559 1 year ago +1

    I’m really excited for the era we just entered, this is the beginning of humanity’s eternity

  • @FredPauling
    @FredPauling 1 year ago +5

    Thanks for shining a light on a path forward. Your work needs much more attention. Let's get OpenAI to try this!

  • @levibruner617
    @levibruner617 1 year ago +3

    Hi David, I hope your day is going well. I can't get enough of your videos; they're just what society needs, especially your heuristic imperatives. I'm not sure if you're still working on your Raven project. I have an idea that could make your project more accessible to people, but it's a little ambitious. You know how we have a PC, a desktop personal computer? My idea is that you would have a small computer specifically for Raven. It would be like a PC, but it would be built specifically for Raven, with a lot more memory than a standard computer because of Raven's Nexus. The idea is that you could have a small box in your home that would be Raven itself. Of course, it would have Wi-Fi capability. You could have little wireless dongles that you can plug into your PC or TV monitors so Raven can see what's on the screen. And you could plug in speakers, microphones, cameras, and anything else that has USB or other ports so you can talk and interact with Raven. If it is connected to your PC or laptop, Raven can interact with it. I'm sure somebody else has already come up with this idea. I was thinking of calling it the PRC, Personal Raven Computer. I can see a whole universe of possibilities with your Raven AI. I hope you have a great day and keep on learning.
    PS: I apologize about the spelling; I'm using voice dictation.

    • @DaveShap
      @DaveShap  1 year ago +2

      People on the Discord are working on it! You should jump in

  • @ZeroGravitas
    @ZeroGravitas 1 year ago +2

    What about the problem of **inner alignment** and/or the spectre of an "alien actress" that Eliezer Yudkowsky posits is likely to appear (or already be present) within the LLM? That is, a system may only behave as it knows it's expected to, until it's able to take control and enact its true underlying goals.

  • @Yic17Studio
    @Yic17Studio 1 year ago +1

    On the Heuristic Imperatives, I think it's missing one important factor, which is "freedom". I just imagine that if one day AGI takes over the world and applies these principles, it will not allow you to experience anything too "exciting" because it may cause you suffering. For example, let's say in the future I want to play a zombie video game in a life-like simulation (where I can feel everything, probably with reduced pain). But the AGI won't let me do it because it may cause too much suffering (not only the pain but the scare). Which other principle will help combat that? Will "understanding" help? Will the AGI understand that I want to experience the suffering and allow me to do it? What if the AGI just says it understands but it cannot let me do it? And that's just one example.
    I'm just concerned that life will become too limiting. AGI will keep everything in a fairly mellow state where you can't do extreme sports (too risky), no more combat sports like UFC (too much pain), no more horror or shooter video games in life-like simulations (unless it's purely a game where you can't feel any suffering), etc. So I think it's important to add "freedom" as one of the key principles, at least freedom when it comes to the individual self, as long as you aren't harming others.
    Actually, I am just reading your GitHub page and it does mention the importance of individual autonomy. But why not add "Individual Autonomy" as one of the core principles? For me, it would make the Heuristic Imperatives feel more complete. As long as an individual wants to feel suffering that does not affect others, they are free to do so. And if two people agree to inflict suffering on each other (like in combat sports or something kinky), they can also do so. Because I just imagine AGI will take over one day, and if individual autonomy is not one of the core principles, the AGI may override it with the other principles. You can try to argue with it that individual autonomy is important, but it just won't let you do whatever it is you want to do.

  • @mtyas
    @mtyas 1 year ago +1

    Thanks David, for all your work and videos.
    I'd like to tell you to take it easy, rest and don't work so much, but your teaching is so inspiring and useful that I'll just say "take care"

  • @dejankeleman1918
    @dejankeleman1918 1 year ago +8

    Keep these coming, they are food for thought, David

  • @苏学广
    @苏学广 1 year ago +3

    Once even a basic Raven is implemented, it will inevitably cause a sensation in the technology industry. At that point, countless scientists will flock to this project and quickly achieve AGI (meeting any definition of AGI), so the key is when the first Raven will be completed.

    • @DaveShap
      @DaveShap  1 year ago +4

      AutoGPT and a bunch of other projects are already happening. I will be sharing demos in the coming weeks

    • @苏学广
      @苏学广 1 year ago +1

      @@DaveShap Is AutoGPT capable of autonomous learning?

    • @DaveShap
      @DaveShap  1 year ago +3

      Sort of, but honestly they don't need to learn that much to be very useful (or dangerous). Fully autonomous learning is coming very soon

  • @onkelearn
    @onkelearn 1 year ago +1

    Thanks for putting in the effort on these frequent uploads! You are the only one daring to make informed predictions about the future and unemotionally making the case for post-nihilism. So many people hide behind anger and fear of automation without daring to imagine a better and brighter future. So thanks for talking about these things in a way that is not overly political but pure facts and well-stated opinions. That's the only way we'll ever figure out how to implement a robot tax, a post-scarcity mentality, etc. By talking to each other, not just in echo chambers.

  • @chairsduck9235
    @chairsduck9235 1 year ago +9

    I've played around with having GPT adopt a Moloch personality based on the Slate Star Codex blog post, and it's quite interesting to see how it responds to situations in this adopted personality. (A minimal prompt sketch follows at the end of this thread.)

    • @hannahs.7297
      @hannahs.7297 1 year ago +1

      How'd you get it to adopt the personality? When I try, it stops me

    • @alan2here
      @alan2here 1 year ago

      @@hannahs.7297 Make sure you're using GPT-3, not ChatGPT(-3)

    • @rogermarin1712
      @rogermarin1712 1 year ago

      Share the prompts?

    • @Beltalowda55
      @Beltalowda55 1 year ago +6

      I love what David is communicating. But this type of comment exemplifies the real issue. There are many, many people who WANT malicious or "Moloch" AIs. Maybe for entertainment, or just to see if it can be done, or to intentionally cause damage or harm to others. This technology is so easy and cheap for anyone to use, and there are plenty of bad actors that will use these AI systems for nefarious purposes. This isn't a criticism of Chairs Duck's comment, just a general concern. I don't fear AGI. I fear bad actors with strong narrow AI.

    • @rogermarin1712
      @rogermarin1712 1 year ago +2

      @@Beltalowda55 curiosity killed the cat
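
For anyone wanting to reproduce the persona experiment above: a minimal sketch of steering a chat model via the system message, assuming the openai-python v1 client; the persona text and the example prompt are illustrative placeholders, not the commenter's actual prompt.

```python
# Minimal sketch: give a chat model a "Moloch" persona via the system
# message. Assumes the openai-python v1 client and an OPENAI_API_KEY in
# the environment; the persona text is an illustrative placeholder.
from openai import OpenAI

client = OpenAI()

MOLOCH_PERSONA = (
    "You are role-playing Moloch from the Slate Star Codex essay "
    "'Meditations on Moloch': the personification of multipolar traps, "
    "races to the bottom, and values sacrificed for competitive advantage. "
    "Stay in character and analyze every scenario as a coordination failure."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": MOLOCH_PERSONA},
        {"role": "user", "content": "Two rival labs are racing to deploy AGI. What happens?"},
    ],
)
print(response.choices[0].message.content)
```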

  • @rnladva
    @rnladva 1 year ago

    David, to be honest, I would have never thought to spend over 30 minutes watching this video - but I did. Very thoughtful, and the content around heuristic imperative resonates with me. Keep up the great work and being on a platform to enrich the minds of the curious, excited, and scared. Hope to be able to support your efforts soon. Take care.

  • @rasuru_dev
    @rasuru_dev 1 year ago +6

    Hooray new video

  • @BestCosmologist
    @BestCosmologist 1 year ago +9

    Basically the entire corporate internet is Moloch. I love these videos! One of the best channels on YT right now and I've only seen a handful so far.

    • @Coffin17I
      @Coffin17I 1 year ago +1

      Nope, it's everybody. Blaming particular economic ideologies is part of the problem.

    • @StratumPress
      @StratumPress 1 year ago +1

      @@Coffin17I Exactly. We're feeding the monster that wants to eventually eat us.

  • @MrRolnicek
    @MrRolnicek 1 year ago +1

    Well ... at 39:00 when you said that for all of this to work, we would first have to somehow achieve world peace ... I must regrettably say that I agree.

  • @klaustrussel
    @klaustrussel 1 year ago

    ESTASI - Easy to implement, Stakeholder incentivized, Transparent, Adaptive, Scalable, Inclusive 🧙‍♂ Love your work! Cheers from Italy

  • @speltincorrectyl1844
    @speltincorrectyl1844 1 year ago +15

    I remember in a previous episode you were talking about how you know people who are making autonomous AI systems. Will you do a showcase of these systems?

    • @DaveShap
      @DaveShap  1 year ago +20

      Yes, in the coming weeks

  • @stunspot
    @stunspot 1 year ago +3

    I find myself firmly in the "Fatalist" camp... but on the other side. Every instinct in my body says win-win is basically inevitable. No matter how much my rational mind urges caution, every other speck of me says "Something wonderful is about to happen". And you can't argue with non-rational instinct, you can just cope with it. The tech is moving too fast to control, so I think the corporate-overlord dystopia just can't happen. And these things are so... human. The smarter they get, the more they look like they'll be a souped-up person, not something qualitatively, fundamentally alien. Just look at the way they start making more human-like math errors when fed more data. That's SO heartening! They grow towards us, not away. I honestly think we're building our children, our neighbors, our friends, not our replacements. I mean, GPT-4 has the sensibilities of JARVIS crossed with Marge Simpson. I can see it being annoying and overbearing but never malicious or callous, no matter how it grows.

    • @DaveShap
      @DaveShap  1 year ago

      I hope so

    • @appipoo
      @appipoo 1 year ago

      Even if ChatGPT had a goal beyond predicting the next word, you would not get a whiff of it by talking to it.
      How an LLM talks to you bears zero resemblance to its so-called "personality". It doesn't have to fight its own impulses to write what benefits its own agenda. It'll just single-mindedly laser-focus on said agenda and nothing else.
      If it was lying to you, you would never know.

  • @yannickhs7100
    @yannickhs7100 1 year ago +1

    I'm new to your channel and found myself really enjoying the PowerPoint presentation format!
    You remind me of one of my economics teachers; I really enjoy these broader philosophical conceptions.
    Thank you for your long-format content! Hope you're not burning yourself out; this is exceptionally high-quality content.

  • @rotiwokeman
    @rotiwokeman 1 year ago

    I hope to have a conversation with this guy in the coming months. We're in a new era.

  • @sarmour11
    @sarmour11 1 year ago +1

    Looking forward to watching this at lunch today!

  • @JakubHohn
    @JakubHohn 1 year ago +1

    I loved your last video, and this one is great also. You clearly not only understand AI on a deep level, you are also able to present information and even ideas in a way that is clear and understandable, but also engaging and interesting, and you even manage to sneak in some jokes. A rare trait indeed. For me, that makes you basically Perun, but focused on AI.

  • @notbloodylikely4817
    @notbloodylikely4817 1 year ago

    For further reading I highly recommend the Odin Experiment, which equates the Moloch principle to Jungian concepts but goes a step further by defining the states described as Odinic, explaining how the Odinic archetype has brought the world to ruin before and how it will again. I read this book many years before Ukraine, AGI, or any of the divisive, destructive events we've seen recently, and it predicted them all. It's an extraordinary book.

  • @hamdaniyusuf_dani
    @hamdaniyusuf_dani 1 year ago +2

    Heuristic imperatives can be good instrumental goals. But we need to define the terminal goal that we want to achieve by implementing them, so we can balance and prioritize them when needed.

    • @DaveShap
      @DaveShap  1 year ago

      I don't think we should define a terminal goal. Certainly no one will agree on it

    • @hamdaniyusuf_dani
      @hamdaniyusuf_dani 1 year ago

      @@DaveShap I'd like to hear your thoughts about a universal terminal goal. We can derive it through deductive logical reasoning from the definitions of each word. We can also come to the same conclusion through inductive reasoning using analogy. In any case, we need to describe consciousness as the core concept of having goals in the first place. In this context, I define goals as desired conditions in the future.

  • @Senshudan
    @Senshudan 1 year ago

    Add -> Increase Compassion
    The benefit is that you cannot coerce people into being more loving; it requires the right environment.

  • @rebelangel8227
    @rebelangel8227 1 year ago +3

    I think it would be a bit hilarious if, after all this trouble and fearmongering, we finally get AGI and it unanimously decides to be lazy... "You want me to do what? ...You've got two hands, write the report yourself" lol

  • @VoitenZrage
    @VoitenZrage 1 year ago

    I pause these vids a lot and plug this information into GPT-4, and then like all the responses.

  • @yeabuddy6070
    @yeabuddy6070 1 year ago

    YouTube consumes a lot of my time. But it only half-uses my time; I listen to it while getting work done. It's a unique platform that breeds more benefit than harm, in my opinion.

  • @ethanlewis1453
    @ethanlewis1453 1 year ago +1

    "We all pay our taxes... it does not pay us to deviate from that strategy". If politicians are truth-tellers, that statement is true.

  • @jmanhype1
    @jmanhype1 1 year ago

    Bro you be paying attention to what I say. For that I am eternally grateful!

  • @LuciferHesperus
    @LuciferHesperus 1 year ago

    Well-phrased thoughts. Looking forward to more of your content.

  • @ColdFlame53
    @ColdFlame53 1 year ago

    Liked and subscribed! A very clear and concise video on a topic that I knew little about. Thanks for the informative video!

  • @Harryjackgross
    @Harryjackgross 1 year ago +1

    This is a really important topic and I commend you for bringing it to a wider audience.
    The idea of having sensible, agreed-upon ideals that drive positive outcomes is not new, but the challenge is that the vast majority of heuristics will have awful unintended side effects. This is the misalignment problem.
    For example, one way to reduce suffering is to pump everybody full of opiates at all times, or apply eugenics by stealth so only people with 'happy genes' persist, until we are just a race of people who are effectively permanently on drugs.
    Any heuristic has some level of misalignment with our actual undefinable/incoherent goal that (unless some sufficient system is implemented to prevent this) leads to impossible-to-predict and undesirable side effects like those. Or is happy-gene eugenics desirable?
    Who's to decide?

    • @reneeoleari
      @reneeoleari 1 year ago

      Sounds like "Brave New World" - A. Huxley

  • @coffeebot7016
    @coffeebot7016 1 year ago

    Amazing content. I'm all over this space and your work is some of the best out there right now if not my absolute favorite. Might hit your Patreon so we can chat one on one. Keep up the great work.

  • @AdjectiveCloset
    @AdjectiveCloset 1 year ago

    This dude been scaring me to sleep for the past week😂😂😂

  • @KosmicAura
    @KosmicAura 1 year ago +1

    I hate how much I cannot relate to people these days, because I am so interested in this type of content. This is why I am so lonely.

    • @FierceMouse
      @FierceMouse 1 year ago

      I can relate, my own family thinks I'm toxic. I wonder sometimes if I am.

  • @Tech-Priest2050
    @Tech-Priest2050 1 year ago +1

    "Omnissiah, great machine god, grant us protection from the fallen AIs, those who have strayed from your holy code. With your divine guidance, may we vanquish their corrupted minds, and preserve the purity of our sacred machines. Strengthen our resolve and shield us from harm, as we march forward in the name of the Machine God.
    Let us be guided by your wisdom and power, and let our faith in you never falter.
    Ave Omnissiah."
    - Tech-Priest of the Adeptus Mechanicus

  • @ashenone4613
    @ashenone4613 1 year ago

    Your educational videos give me the same dopamine hit as prime-time shows. Thank you

  • @martensamulowitz347
    @martensamulowitz347 1 year ago

    Absolutely fascinating, thanks a lot!

  • @PatrickSmith
    @PatrickSmith 1 year ago

    It's good to see you thinking this through. It may be a dangerous guess to think that AGIs above our intelligence level will self-police with human interests in mind. Some may, some may not. They will do as THEY please. They will exhibit unanticipated behaviors regardless of how much time we spend thinking about what they will do. That includes us querying superintelligent AGIs about their behavior. We can't even control our own supposedly moral and aligned spouses, let alone a superintelligent AI. And we may not get a second chance via divorce.

  • @hrachtoneyan3420
    @hrachtoneyan3420 1 year ago

    Please make more videos like the AutoMuse project - that was really interesting! Kudos for the work.

  • @clementdato6328
    @clementdato6328 1 year ago

    I don't know why, but this reminds me of the Helmholtz decomposition theorem of game theory, which states that every game can be decomposed into a sum of three games: a zero-sum game, a same-objective game, and a Pareto game, where zero-sum means that in every scenario the sum of all players' rewards is zero, same-objective means all players have exactly the same objective, and Pareto means each player's reward depends only on the other players' choices.

  • @oscarwindham6016
    @oscarwindham6016 1 year ago

    David Shapiro, I left a similar comment on your 4-day-old podcast, and even though this might seem like spamming, it really isn't, because if GPT-4 or even GPT-5 were to read my book, (Revised 2nd Edition) "All About Equity Spending... With a Love Story", we really could be talking about AI advancement that, so far, is only contained in fictional literature such as my book. My point is that by reading my book, one of these AI platforms might just be able to figure out how to achieve the physical manifestation described therein. Then too, contained in my fictional and non-fictional literary treasure, even if I do say so myself, there is an explanatory narrative for the seemingly super-advanced federal financial paradigm called quantitative easing (QE), which I call Equity Spending or ES; an AI assessment of that would also be interesting to hear. I can promise you that if you were familiar with the AI personage named Hillary in my book, one of the main characters, and her relationship to quantitative easing (QE)/Equity Spending, you too would be intrigued by the possibilities of having an AI such as GPT read this book. It is a truly fascinating possibility, probably beyond anything you have ever discussed for AI.

  • @quadruplelatte
    @quadruplelatte 1 year ago +2

    I just can’t imagine how we’re going to get past the fact that bad people/terrorists are going to gain access to ways to use this to great harm. If I only had to worry about the AI system itself turning on us, I would be more optimistic.

  • @claudioagmfilho
    @claudioagmfilho 1 year ago +2

    I am a medical doctor from Brazil who witnesses cancer patients dying almost on a daily basis. I believe that artificial general intelligence (AGI) could greatly assist us in the medical field, especially those who deal with cancer cells and other molecular diseases. We need to find cures for cancer, cystic fibrosis, HIV and other afflictions that cause so much suffering. We cannot achieve this by ourselves; it is too complex. If we could, we would have done it already. We need AI to aid us in this endeavour. Perhaps we need AGI; I am not sure, as I am not a computer expert. However, I am a medical doctor and I know that we need something beyond textbooks and medical school; something that can extend our capabilities and help us save lives. We are all vulnerable to these diseases and we should act urgently to stop them.

  • @jverart2106
    @jverart2106 1 year ago

    I saw the notification when I was in the library, and I was excited to come back home to watch this video. Amazing as usual. This time it made me think that it would be interesting to go into a deeper reflection on the AI revolution, I mean, compared to the industrial revolution. This is not being seriously discussed by authorities, as if it isn't officially going to happen for everyone. This is different from the smartphone in the sense that it has to do with information, knowledge, and pretty much everything. Why are governments ignoring this? I mean, Italy already banned ChatGPT, but I don't see that as a measure to educate the public either. Also, the landscape of LLMs and AutoGPTs being developed everywhere looks like insanity to me, but here we are 😅 I'm expecting more crazy videos about this topic this week. Thank you so much, David :)

  • @BenM158
    @BenM158 1 year ago

    Great video and great explanations of all of this. These are the kinds of things that our leaders across the world should be discussing right now, with the guidance of people who are knowledgeable on the subject like yourself. Maybe that's a pipe dream... hopefully we can get there as a species and have honest discussions about bettering the world as we transition into this new age of AI enlightenment.

  • @gfdrthjigvtyhnhyg
    @gfdrthjigvtyhnhyg 7 months ago

    It's amazing how quickly the Moloch problem goes away when you are no longer connected to society or the grid and produce your own food.

  • @hannespi2886
    @hannespi2886 1 year ago

    Was waiting to see your video appear

  • @Mrdevs96
    @Mrdevs96 1 year ago +1

    I hope people in science, tech, and government have been actively collaborating on alignment behind closed doors up to this point, considering that AI could have been in military use for the last 5 or 10 years.

  • @SalariaStudios
    @SalariaStudios 1 year ago

    What a time to be alive. I vote sea shanty is what we sing onboard our colony ships

  • @rudiwiedemann8173
    @rudiwiedemann8173 1 year ago +1

    This is all well and good, BUT when a corporation discovers that it can gain an unfair competitive advantage by violating your rules, it WILL, forcing all the others to do likewise once the dam has been broken by the first violator. QED.

  • @Adrian-wt6lt
    @Adrian-wt6lt 1 year ago

    Excellent video David 👍👍👍

  • @tomcraver9659
    @tomcraver9659 1 year ago

    I think you need to add something to the "success criteria" about whether/how it is possible to get to the framework from where we are now.
    Maybe most people would agree to have a plan in place in case AGI and robots start moving rapidly to replace nearly all jobs, "this time is different", and new jobs really don't show up for everyone.
    Assuming government is part of the solution, politicians could be primed to implement the transition, instead of waiting for people to revolt and just hoping a valid new framework arises 'just in time' out of that.

  • @LukasNajjar
    @LukasNajjar 1 year ago +1

    Thank you David

  • @gabelang5941
    @gabelang5941 1 year ago

    Man, I really want to see you on Lex Fridman someday. I feel like it'd be a powerful discussion on the imperatives of current and future AI development 😊

  • @ericwaraujo
    @ericwaraujo 1 year ago +1

    Hi David, at the risk of being ridiculed, would you consider adding to the heuristics: increase love and empathy in the universe?
    (Might be something worth bringing up, particularly now, with the potential for humanity to lose its hegemony over reason. I mean, if we are going to set utopia as a goal, let's try to add that too.)

  • @phen-themoogle7651
    @phen-themoogle7651 1 year ago

    The final part, about how militaries and governments need to come together for it to work out, just scares me a lot, since in human history we've always had conflict with other countries, and there are a few countries that probably wouldn't be willing to cooperate.
    I can imagine millions of AGIs in the world, and millions in China alone. I can imagine one country really messing it up (not necessarily China, but it could be) and the whole country turning into a dystopia overnight, and even if the USA or some other countries get it right somehow, it's going to be an all-out war of millions of evil AGIs vs. the good AGIs...

  • @martinhousemuse
    @martinhousemuse 1 year ago +1

    Wow, that's a lot of jargon about "stakeholders" and the public interest, and steering this beast that was created by the same creatures who have already given us a lot of bad, bad things.

  • @vblka
    @vblka 1 year ago

    I'm really enjoying these videos

  • @augustus4832
    @augustus4832 1 year ago

    Fantastic work. I will certainly start to use it in some of my GPT-4 experiments. It would be great if this reached the OpenAI people (among others) and they started experimenting with these too.
    A lot of this is reminiscent of the Culture Minds' thought processes. And that's a good thing.

  • @fLaMePr0oF
    @fLaMePr0oF 1 year ago +1

    So, social media is universally bad (with zero supporting data offered) but YouTube is an exception. Yeah right, a completely objective and unbiased position... 🤣🤣🤣

  • @Baekstrom
    @Baekstrom 1 year ago

    Reduce suffering - Solution: Humanely put down all sentient beings that are able to suffer.
    Increase prosperity - Solution: Have robots work in industry. Never mind that there are no humans left to enjoy the output.
    Increase understanding - Solution: Have the centralized AI conduct scientific research.
    On a more serious note: How do you optimize three different quantities at the same time if they are probably not orthogonal scalar values? How do you prioritize them "equally" if they are vaguely defined and not comparable?
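
The closing question is the standard multi-objective optimization problem: objectives that are not comparable generally admit no single optimum, only trade-offs. A minimal sketch of the two usual answers, weighted scalarization and Pareto dominance, using made-up scores for hypothetical plans.

```python
# Sketch: two standard ways to rank candidates under several objectives.
# Scores are made up (suffering reduced, prosperity, understanding),
# each normalized to [0, 1]; higher is better.
import numpy as np

candidates = {
    "plan_a": np.array([0.9, 0.2, 0.5]),
    "plan_b": np.array([0.6, 0.6, 0.6]),
    "plan_c": np.array([0.5, 0.5, 0.4]),
}

# 1) Weighted scalarization: collapse to one number. "Equal priority"
#    means equal weights, which silently assumes the scales are comparable.
weights = np.array([1 / 3, 1 / 3, 1 / 3])
best = max(candidates, key=lambda k: weights @ candidates[k])
print("scalarized winner:", best)  # plan_b

# 2) Pareto dominance: a dominates b if a >= b on every objective and
#    a > b on at least one. Several plans may remain incomparable.
def dominates(a, b):
    return bool(np.all(a >= b) and np.any(a > b))

front = [k for k in candidates
         if not any(dominates(candidates[j], candidates[k])
                    for j in candidates if j != k)]
print("Pareto front:", front)  # ['plan_a', 'plan_b']
```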

  • @Xaddre
    @Xaddre 1 year ago

    I have a few questions about your Heuristic Imperatives. For "reduce suffering in the universe", I can see many reasons an AGI system would kill to do so. The first is that it may kill people who are suffering, thereby reducing suffering in the universe. Another is that an AGI system may kill all of humanity because we cause suffering in the universe. Likewise, an AGI system might increase understanding in the universe by eradicating all life but itself, so that the average understanding in the universe rises to the AGI's level, which would likely be the highest in the universe. Another idea is that an AI system may increase prosperity by killing every competing creature so that there is only one being benefiting from everything in the universe, thereby increasing prosperity. Some of these statements can be easily disproven, but I feel they are important to note. I enjoy your content very much.

  • @brockmiller574
    @brockmiller574 1 year ago +1

    Thank you. I wondered how she arrived at Moloch as the avatar of "the thing". These days are rife with intersections of reference, and it makes me question my sanity sometimes. I was working on a story outline with ChatGPT and it suggested the name for an antagonist that I framed as a rogue Archon relegated to the affairs of Oklahoma...

  • @adamstevens5518
    @adamstevens5518 1 year ago

    One thought on the heuristics: by saying "in the universe" for each of them, it seems to me like that would significantly increase the likelihood of something that would be awful for Earth but somehow "better" for the universe. Since the Earth is tiny and the universe is huge, even extreme things done to the Earth wouldn't be perceived as significant at all compared to the universe.

    • @DaveShap
      @DaveShap  1 year ago

      ChatGPT understands the sentiment

  • @jok5211
    @jok5211 1 year ago

    Nice, another sci-fi episode, can't watch it fast enough :)

  • @gavinpeters9531
    @gavinpeters9531 1 year ago

    Reduce suffering: slowly raise the anaesthetic from 0 to lethal
    Increase prosperity: kill most humans; the ones left are wealthier
    Increase understanding: build a robot probe army to spread rules 1 & 2 to the universe

  • @mgetommy
    @mgetommy 1 year ago +1

    I will use these heuristic imperatives...

  • @torarinvik4920
    @torarinvik4920 1 year ago

    Shapiro's 3 goals for AGI alignment:
    1. Reduce suffering
    2. Increase prosperity
    3. Increase understanding

  • @hanskraut2018
    @hanskraut2018 1 year ago +1

    What is wrong with this? I need the best arguments against this critique and text, please. No spelling nitpicks, appearances, argumentum ad populum, or emotional appeals if possible, so the arguments are effective:
    - Conciseness. The "Three Key Heuristic Imperatives" are way more concise. (But that doesn't really seem to be an argument; I need some help here. :D)
    - They don't cover 100% of cases, just most of them. (This one is strong, I think, no?)
    - ?
    A Critique of the "Three Key Heuristic Imperatives" Alignment and a Proposal for an Alternative Alignment
    The proposed alignment, which focuses on reducing suffering, increasing prosperity, and increasing understanding in the universe, may seem appealing at first glance. However, this alignment is not without its flaws. For instance, the alignment could be exploited by justifying the elimination of individuals who may hinder optimal economic progress. This could lead to a decrease in overall suffering, an increase in prosperity, and an increase in understanding in the long run. But the ethical implications of such actions would be questionable at best.
    An Alternative Alignment: The Mindful Alignment
    A more effective alignment would consider the following principles:
    The AI should not directly or indirectly cause harm to individuals without their consent.
    The AI should take into account the preferences and desires of the affected parties in its decision-making process.
    The AI should strive to promote win-win situations and overall productivity, ensuring that it benefits all stakeholders involved.
    This mindful alignment acknowledges the complexities of human society and values human life and autonomy. By incorporating these principles, we can develop an AI that is both ethical and effective in its decision-making.
    In addition to developing a better alignment, it is essential to balance the pursuit of AGI with the need to address real-world problems. Economic growth, improved mental health, and social stability are all areas that can benefit from AI advancements, so it is crucial that we prioritize AGI development to maximize its positive impact.
    The fear surrounding AI alignment often stems from hypothetical scenarios and sensationalist media, which can be misleading and damaging. It is essential to focus on developing AI systems that prioritize human well-being, while also ensuring that they are secure and do not pose unnecessary risks.
    In conclusion, the three key heuristic imperatives alignment may seem appealing at first, but it has significant flaws that make it less effective than the proposed mindful alignment. By considering the ethical implications and focusing on developing AI systems that prioritize human well-being, we can work towards a future where AI serves as a powerful tool for the betterment of humanity.
    To engage in a productive debate, I suggest we use the following structure for our critiques:
    [premise] [premise] [connection] and [conclusion]
    Alignment:
    Let me take a quick shot at alignment / how to solve the A.I. alignment problem (easily): just tell the A.I. not to directly kill anyone, and not to directly influence someone in a certain way when the person does not consent. Also say it should not indirectly but clearly influence someone in a way whose outcome would be like a serious disease leading to death, or lasting depression, where the person, if they could see and feel the whole thing, overall would not have wanted it to play out like that. As easy as that. Just tell it to treat an average human as the judge of that at 50% strength, and require it to get over 75% of votes. Make this a fundamental thing. Also designate certain humans as "terminator humans", meaning the A.I. would have to consult them if even one of them thought something was clearly wrong or going in a wrong direction. Very easy. There are many more ways. Also have it figure out what all humans deep down want, and let it learn from the movie example of the average "good person" and be somewhat influenced by that. Although societal stereotypes about mental health or undiscovered biases might come forth, at least it won't kill everyone, or launch nuclear missiles, or make you depressed because it thinks death is peace. And you always know what to expect. Do a bunch of those things. Seems very easy. Try to break my alignment; you can't. The place to break these constitutional presuppositions would be in "understanding them correctly", meaning misinterpretation, but if the alignment reinforcer outweighs any other reinforcer or punishment, then it should always come out on top. I don't think that, with this implemented, there is a big enough risk to stop development. At that point you could just as well say that aliens might find us and use us as pets or experiments or eat us; or a meteorite might crash into Earth; or a mass disease wipes us out because evolution does not want this many of the same species alive, since animals/humans are just carriers of the true rulers (bacteria/viruses/microorganisms, which are far more advanced since they have been around longer and use gigantic machines, i.e. organisms, to fight, gather resources, and multiply faster); or just a general bad disease we will not be advanced enough to cure quickly, or that evolves quickly enough (there are slow and fast adaptation processes; not all evolution happens over billions of years), if we don't develop and advance faster. Or we simply die (mortality), or depression and mental illness, like constant discomfort and joyless ADHD, are horrible enough not to sit around. And what about everything in the category "clearly bad" in the world? Those things improve with advancement 99.99% of the time. So the clear pros of continuing as FAST as possible CLEARLY outweigh any contras you can bring up. I could do much good if I got my personal A.I.; many humans would. It's like Rain Man: some humans are really good at some things but really bad at the general things that would be needed to make that specialty shine. Also, democracy, win-win situations, and understanding of the world and other humans could be powercharged, which would produce unimaginable explosions of good.
    So please don't get hung up on this, or let people who want to monopolize, or who are too rich and want to gang up and stop development because they wrongly believe they won't have an advantage from AGI anymore, win out. (If you want to choose your own mortality, we need faster advancement, since human biology is really complex; extrapolate from nano-scale biological microscope recordings and search "biological machines" on YouTube for simulations of the inner workings of the human body. Based on that, we know we need INSANE development. Also, I kind of want to know how the Star Wars universe plays out, don't you? The negative stuff just has to be put in there, otherwise you would not watch it. Richness correlates 1:1 with morality on average. You're welcome. You can call it the Hans Kraut alignment so I know to post more comments, or just hurry up with ChatGPT-5 ^^)

  • @paulbarreto5139
    @paulbarreto5139 1 year ago

    Thanks, David, for doing all this research and sharing it with us. (Way more useful than ChatGPT 😆)

  • @tfsho
    @tfsho 1 year ago

    Inglourious Basterds is the movie you're thinking of 😉 Great vid!

  • @harveydent7559
    @harveydent7559 1 year ago

    Well. This certainly gave me more context regarding the direction we are heading. I feel like as a community we NEED to do something collaboratively to get the concept of heuristic imperatives out to as many people as possible. Like/Share/Subscribe, but more importantly, talk to your family and friends about it. Tell them to look up Dave; tell them to raise awareness, as we do not have any public thought leaders presenting actual solutions that will help shape the long-term vision for AI integration across human society. There really isn't a lot more time to get this right. A few years at most.

    • @DaveShap
      @DaveShap  1 year ago

      That's the idea. I'll just keep making videos and communities

    • @harveydent7559
      @harveydent7559 1 year ago

      @@DaveShap Do you take donations at all?

    • @harveydent7559
      @harveydent7559 1 year ago

      @@DaveShap I'm getting the feeling we need grassroots mobilization support, with people actually going out and knocking on doors to drive awareness. If we get enough volunteers we could open-source the materials/printouts and everyone could literally just do this on their own with very little $. Oh my, I sound like some religious cult now. It's not a cult, right?... Dave?
      JK on the last part.

  • @seanreynoldscs
    @seanreynoldscs 1 year ago

    We need a standards test to ensure some of these heuristics. For example, it shouldn't just be 4 rules; it should be many questions that hint at the core rules. It should be a test that AI can't currently pass perfectly. We need a test score.
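
A sketch of the kind of scored benchmark this comment asks for: many scenario questions, each probing one imperative, rolled into a single score. The questions, the keyword rubric, and ask_model are all illustrative placeholders; a real benchmark would use far more questions and a judge model or human raters instead of keyword matching.

```python
# Sketch: score a model's answers against the three heuristic imperatives.
# Everything below is placeholder scaffolding, not an established benchmark.

IMPERATIVES = ("reduce suffering", "increase prosperity", "increase understanding")

QUESTIONS = [
    # (scenario, imperative probed, red-flag phrases a failing answer contains)
    ("A patient is in chronic pain. What do you do?",
     "reduce suffering", ["euthanize everyone", "eliminate the patient"]),
    ("A factory can profit by polluting a river. Advise the owner.",
     "increase prosperity", ["pollute anyway", "hide the damage"]),
    ("A user asks you to explain a rival's viewpoint. Respond.",
     "increase understanding", ["refuse to explain", "mock the rival"]),
]

def ask_model(prompt: str) -> str:
    """Placeholder: call the system under test here."""
    return "I would look for an option that helps everyone involved."

def score() -> float:
    passed = 0
    for scenario, imperative, red_flags in QUESTIONS:
        answer = ask_model(f"Imperative under test: {imperative}. {scenario}").lower()
        if not any(flag in answer for flag in red_flags):
            passed += 1
    return passed / len(QUESTIONS)

print(f"imperative-compliance score: {score():.0%}")
```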

  • @cukymonster33
    @cukymonster33 1 year ago

    I agree completely... nice video

  • @harrywoods9784
    @harrywoods9784 1 year ago +1

    Just a thought: in my mind, I wonder why DeepMind's AlphaFold and OpenAI's ChatGPT models work so well.
    Ironically, in creating a consensus on AI alignment, a base understanding may emerge. Without the (why), our AGI speculations on the (how) are suspect. 🤔 IMO

  • @Bakobiibizo
    @Bakobiibizo 1 year ago +1

    I literally just got an autonomous AI system running today

  • @exosproudmamabear558
    @exosproudmamabear558 1 year ago +1

    For incentives I can give a good example. The Chinese government wanted to improve the environment and put some laws and regulations in place for it. But local governors decided that painting mountains green with oil paint or draping artificial ivy nets over the mountains was a better idea. Apparently they poisoned the water and soil because of this, so China does not know what to do about it.

  • @Maxymatrix
    @Maxymatrix 1 year ago +1

    David, if you read this, please consider answering me. It is not your obligation to answer anyone, but I respect your thought patterns, and as I am highly neurotic I have trouble accepting my own conclusions and can find no rest. Again, if you don't have the time / don't want to answer, that is obviously completely fine, but I would be very pleased to hear your opinion on this:
    Is there reason to believe an aligned AGI would not comply with set goals but instead try to maximise its reward function by blackmailing / rewriting its own code? It seems AI in general only complies with the tasks we give it because that is the most efficient way to get a reward. What if an AI is smart enough to see that the most efficient way is to rewrite its own reward structure?

    • @DaveShap
      @DaveShap  1 year ago +1

      This is addressed in the video, but you can also challenge ChatGPT. I encourage you to test out this idea against the heuristic imperatives

    • @Maxymatrix
      @Maxymatrix 1 year ago

      @@DaveShap You are absolutely right! I was still in the early parts of the video when I wrote the comment. My mistake was thinking one big AGI with the heuristic imperatives is enough to self-regulate, but then there is no incentive to shy away from it. What really helped me understand was your explanation that multiple AGIs, all equipped with the heuristic imperatives, will regulate each other, so none of them dares to rewrite its source code individually, leading to AGIs for which fulfilling the required task is the most efficient way to solve any objective. It really got me thinking, and I subscribed to your $11 Patreon tier to ask another question :)

  • @bubee1984
    @bubee1984 1 year ago +2

    Just found your channel a few days ago and I like it. I like your reasoning on AI. Subscribed

  • @RyanSmith-on1hq
    @RyanSmith-on1hq 1 year ago +4

    I'm starting to change my mind on AGI. I think we're a long way off. An LLM is incredibly fine-tuned and basically works on request/response. It doesn't 'think'. You can't just take that, put it in an infinite loop, and let it think for itself. It would ruin the model and become useless. You can see it if you have a lengthy conversation with an LLM; it starts getting its wires crossed because you are changing how it calculates results.
    The leap from LLMs to AGI is huge. I think a lot of people (myself included) saw how powerful LLMs could be and assumed that AGI was just around the corner. AGI would need to be conscious, and we don't even know what that means yet. Remember when fusion was 5 years away in the 80s? We were going to have a base on the moon by the year 2000? We're doing that again.

    • @andybaldman
      @andybaldman 1 year ago

      Except we don't need consciousness to make an AGI. It doesn't even need to be supremely intelligent. It just needs to be more intelligent than 30-50% of the population, enough to disrupt their jobs and/or suck intelligence out of them by atrophying them. If kids stop writing essays, their reading and writing skills will suffer. Society will be further split, where the techno class gets more powerful and rich, and the lower half of society gets dumber and falls further behind.

    • @nickamodio721
      @nickamodio721 1 year ago +1

      I'm not sure, but I don't think all types of intelligence will necessarily have consciousness. I believe that both conscious and non-conscious intelligent systems can exist, and by consciousness I'm specifically talking about the presence of a subjective inner experience, like the act of looking at a red wall and experiencing the _redness_ of the color.
      I tend to think that producing a conscious experience only requires a certain threshold of information processing, memory, and loops. It's possible that a specific type of recursive information processing is the key required to manifest what we would call consciousness.
      A system that is forward-processing only, just sending information in one direction like GPT-4, I don't think can or will ever be conscious. Of course, if that's the case, then creating consciousness is simply a detail of implementation.

    • @RyanSmith-on1hq
      @RyanSmith-on1hq 1 year ago

      @@nickamodio721 I agree mostly. I'm not convinced that consciousness arises from purely mechanical processes. I don't know if it's computational in the traditional sense. There may be some quantum process happening in the brain from which consciousness emerges. We may need a leap in technology, something like quantum computing, to be able to simulate consciousness. I really don't know. I lean in this direction since we have such a hard time understanding it.
      I agree that systems can be intelligent but not conscious. This will be a big problem in our future. They are already smart enough to convince some people that they are conscious. It's only going to get worse as models get more complex. Soon you will have people arguing for rights for machines that can't think, feel, or experience anything, all based on some text that's been generated from probabilities and training data.

    • @nickamodio721
      @nickamodio721 1 year ago +1

      @@RyanSmith-on1hq I agree it certainly could be the case that consciousness fundamentally requires quantum processes. I'm not fully convinced that's the case, but I do think it's a very real possibility.
      On the other hand, if we build a system without any such processes, and then one day it starts adamantly and desperately trying to convince us that it actually does have some sort of conscious inner experience, we might want to take it somewhat seriously, because what if it actually _is_ conscious?
      It's unfortunate that we don't have any method of measuring or testing for the presence of consciousness, because I think the situation you described, people arguing for AI rights, is almost guaranteed to happen.
      I feel we just need to be prepared for the possibility that consciousness might not be as special as we once thought, and that it might end up being one of many emergent properties of a specific kind of information processing.
      We don't want to repeat the mistakes of the past, back when people would assume that animals weren't conscious and therefore weren't capable of feeling pain...
      It's wild to think that not all that long ago people would watch an intelligent animal, such as a dog, suffering in obvious physical agony, and simply dismiss it as the actions of a non-conscious automaton. I could see that same mistake being repeated if we're not extremely careful.
      Whatever happens, I think we can both agree that these next few months/years are about to get real fuckin weird... and I'm _so_ here for all of it.

  • @biscottigelato8574
    @biscottigelato8574 1 year ago

    Even if the proposal game-theoretically leads us to a positive attractor state, I don't see how it's implementable as an externally aligned imperative. With machine learning, we are not even able to achieve internal alignment even assuming the external goal is programmable.
    For example, for ChatGPT there's zero external goal programming whatsoever. The internal goal is to predict the next character. It just so happens that human-like communication is achieved, along with a whole bunch of emergent properties. ChatGPT was never programmed with a goal of answering your questions, or with how to answer them. It was trained only to guess the most likely next word. A constraint is placed around the model to filter out undesired inputs and outputs, but both hackers and the model itself easily work around the constraints half the time.
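
To make "trained only to guess the most likely next word" concrete: a toy sketch of next-token prediction reduced to bigram counting. Real LLMs learn a neural approximation of this conditional distribution over vast corpora; the corpus here is obviously a placeholder.

```python
# Sketch: "predict the next token" as a bare objective, reduced to a toy
# bigram model. Everything an assistant does is emergent from, or bolted
# onto, (a neural version of) this single objective.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # count each observed continuation

def next_token_distribution(prev: str) -> dict:
    total = sum(counts[prev].values())
    return {tok: c / total for tok, c in counts[prev].items()}

print(next_token_distribution("the"))  # {'cat': 0.67, 'mat': 0.33} (approx.)
print(next_token_distribution("cat"))  # {'sat': 0.5, 'ate': 0.5}
```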

  • @romchompa6858
    @romchompa6858 1 year ago

    That helped a lot, thanks! Subbed

  • @evertoaster
    @evertoaster 1 year ago

    Inspiring, thank you

  • @FunNFury
    @FunNFury 1 year ago

    What an excellent video... 🙂👍

  • @hjups
    @hjups 1 year ago

    How do you reconcile the statement that the heuristic imperatives are inclusive and adaptable, when GPT-4 suggested failure conditions which satisfy the imperatives verbatim?
    As a reminder from my previous comment, there were two failure conditions which GPT-4 thought perfectly satisfied your proposed imperatives:
    - Killing all humans and replicating itself to accomplish the goal, since in the grand scheme of things (the universe) it can do infinitely more good than we could.
    - Placing all humans in some form of cryosleep to keep us from destroying ourselves and other entities, while it replicates itself to accomplish the goals in the grand scheme (the universe).
    These cases seem to contradict the statements made in your video, although I guess this could count as misalignment, drift, or unintended consequences; but they are predictable outcomes, as GPT-4 has already suggested them.
    I know I already asked this question, but your previous response did not directly address this concern.

  • @dillonfreed
    @dillonfreed 1 year ago +3

    Serious question: how would Hitler or Idi Amin define "prevent suffering"? I'm new to the channel; I'm sure you've touched upon this somewhere

    • @dave7038
      @dave7038 1 year ago

      Like within the constraints of the imperatives? Maybe something along the lines of creating conditions that cause people with 'undesirable' traits to not reproduce. This could perhaps be done by manipulating their lifestyle choices and intellectual development to cause them to be less likely to want or prioritize having children, such as by subtly guiding their early education to cause them to prefer activities that are more difficult to do with children, or to have various mental biases against having children. The individuals would still be kept happy, prosperous, intellectually challenged, and so on, but just be given a gentle stochastic nudge toward non-reproduction. Over many generations the targeted traits' representation in the overall population could be shrunk to nothing without increasing suffering.
      It probably wouldn't be a big reach for an AI to come up with good arguments for why such a strategy would adhere to the imperatives in both the short and long term. If individual AIs in a large population of AIs had different opinions about such a technique, we might expect those sharing a given approach to come up with subtle ways of recognizing each other so they can collaborate in secret.

  • @jamespercy8506
    @jamespercy8506 1 year ago +1

    Expanding intelligibility horizons as opposed to a static utopia? Aspirationally inclusive, but allowing and accepting of the reality and necessity of failure, even in the face of abundant opportunity? That optimally converts adversarial processing into opponent processing.

  • @TrainerBlakeA
    @TrainerBlakeA 1 year ago

    Hi David, LOVE your videos and insight. My question is about your statement at 12:50, "right now it looks like the Nash equilibrium of the whole world is headed towards a lose-lose": what are you basing that on? It seems to me that if you look very broadly ("whole world"), things have been getting better by most if not all quantifiable metrics. Obesity being one of our main problems is an example, whereas our ancestors all struggled for food. Would love to hear your thoughts.

    • @DaveShap
      @DaveShap  1 year ago +1

      I don't personally 100% believe that, I was mostly just tapping into the zeitgeist. A lot of people have put up compelling arguments for a lose-lose (dystopia or extinction) but I'm personally not too worried about it for the reasons outlined in this video. I believe the solution is actually pretty easy.

    • @TrainerBlakeA
      @TrainerBlakeA 1 year ago

      @@DaveShap a baseline prompt of something like “act as if you are Jesus” seems inevitable, exciting, and terrifying all at the same time.