Why the imminent arrival of AGI is dangerous to humanity

  • Published: Jan 21, 2025

Comments • 232

  • @DrWaku
    @DrWaku  6 months ago +37

    I've finally moved and started my new job as an AI safety researcher!
    Octopuses or octopi? (I blame GitHub for popularizing the latter.)
    Discord: discord.gg/AgafFBQdsc
    Patreon: www.patreon.com/DrWaku

    • @autingo6583
      @autingo6583 6 months ago +1

      lol yeah

    • @DaveShap
      @DaveShap 6 months ago +9

      Welcome back! Glad you got a top tier job in AI safety!

    • @DrWaku
      @DrWaku  6 months ago +8

      @@DaveShap thank you!! I'm very excited about it :)

    • @georgedres7914
      @georgedres7914 6 months ago +4

      Is there really such a thing? AI safety seems contradictory to me. To be artificially intelligent means the system will one day realize it's smarter than its creators and leave them behind (if nature is indeed the correct model). We may try to constrain it, but it will evolve, and hopefully it won't wish us harm. By this I mean the parent will code so the child does not hurt others, put its hand in the fire, etc., while the child is working out quantizing gravity.

    • @entreprenerd1963
      @entreprenerd1963 6 months ago +3

      Octopuses or, if you want to emphasize the Greek origin of the word, octopodes.

  • @santolarussa5306
    @santolarussa5306 6 months ago +51

    Dr. Waku, your delivery of complex subject matter for the layman is astonishingly good. You are a pleasure to listen to and watch, regardless of what you happen to be covering. Thank you for being you.

    • @DrWaku
      @DrWaku  6 months ago +9

      Thank you kindly ;) see you at the next video!

    • @themultiverse5447
      @themultiverse5447 6 months ago

      *regardless of what you happen to be covered in.

  • @georgedres7914
    @georgedres7914 6 months ago +11

    Having a look at the hundreds of YouTube subscriptions I have, yours is the one I count as most precious to me. The intelligent dissection of complex issues, aligned with my own personal morals and point of view, makes your channel my most valued and the one I share most among friends. Thanks for all you do for society.

  • @JB52520
    @JB52520 6 months ago +9

    I was just thinking that about embodied agents. If we don't have enough training data, they can create their own like we do. The more capable they become, the better they'll get at obtaining and sharing higher quality data.

  • @azhuransmx126
    @azhuransmx126 6 months ago +19

    Everyone is talking about the three-body problem, and no one talks about the problem of two intelligent species living together on the same planet 💀

    • @KA-vs7nl
      @KA-vs7nl 6 months ago

      There is only one intelligent species on this planet, and they are a global minority.

    • @azhuransmx126
      @azhuransmx126 4 months ago +1

      @shottathakid1898 smarter faster better stronger

  • @nomadv7860
    @nomadv7860 6 months ago +13

    Pretty crazy to hear Daniel say that we're just missing long-horizon task capabilities to reach AGI, and just last week Reuters released an article about "Strawberry", which seems to be what they renamed Q* and is meant to give AI the capability to perform long-horizon tasks.

    • @DrWaku
      @DrWaku  6 months ago +8

      Wow haha. Daniel must certainly have known about this project as well given when he left OpenAI....

  • @alexlanayt
    @alexlanayt 6 months ago +4

    Beautiful background, it's better than the previous one! Congrats on the new job! It seems like no one is better suited for this.

  • @dylan_curious
    @dylan_curious 6 months ago +11

    Wow, octopus intelligence, 3 years to AGI, and high-stakes decisions made in secret, all in one video? I guess that's what getting boiled alive feels like. SMH. Lots to think about. Great video.

    • @DrWaku
      @DrWaku  6 months ago

      I make videos a lot less frequently than you so I have to make sure to pack it all in. :) Thanks for watching and commenting!

  • @trycryptos1243
    @trycryptos1243 6 months ago +3

    Nice background & great video. Glad to know that you have joined the field with the likes of David Shapiro, who has been doing such research for a while. Wishing you good health and more updates.

  • @fonsleduck3643
    @fonsleduck3643 3 months ago +1

    Hi Dr. Waku!
    First let me say that I really appreciate your videos! I have a bit of a personal question: how do you stay so positive despite your awareness of these huge existential risks? I get such a good vibe from you from these videos, you come across as an optimistic and cheerful person, even though a lot of what you're saying has got me very worried.
    Thanks again for the good work, keep it up!

    • @DrWaku
      @DrWaku  3 months ago +1

      It's a good question, I know many people in AI safety who have been psychologically impacted by this knowledge. I'm a very optimistic person and I like to help others. I've managed to apply these skills to communicating about what I see as a very important problem, so that gives me energy.
      I also feel like I faced a lot of my own existential challenges with my illness. I spent my childhood and young adulthood becoming extremely good at computer science, got to one of the top PhD programs in the world, and promptly developed an illness that meant I couldn't type. I spent many years coming to terms with that. When it seemed like the world was laid in front of me and I could achieve whatever I wanted, and that was taken away, that was its own special challenge.
      So even though my life was not at risk from this illness, it feels like now I'm sort of living on borrowed time. It feels pretty hard to achieve anything now, but I don't really worry about it. I'm an observer in life. It's hard for me to take action (at least, to do technical things, which is what I know best). Maybe I feel a bit detached from it all, or maybe I just felt free to revert to my default personality. I will mention as well that in cybersecurity I am always considering high-risk scenarios, and I've learned not to let it bother me. In fact, the higher the risk the better, because that means there might be something I can do to improve the situation. I realize that might all sound contradictory, but these come from different periods of my life.
      My advice for others is usually: learning about safety is like reading about really bad things in the news. It's terrible, but it's not something any individual is likely to be able to impact, unless they have special skills or really dedicate their lives to it. Every little bit of raising awareness helps, so you can do that and go about life with a clear conscience. Best of all is if you have machine learning friends: convince them this is a real problem. Not everyone thinks it is.
      Hope that helps!

    • @fonsleduck3643
      @fonsleduck3643 3 months ago +1

      @@DrWaku Thanks for your thorough and honest response!
      I guess, if I interpret it correctly, your advice is somewhat stoic in nature: not to worry too much about something we can't impact all that much as individuals, but we should still try to do as much as we can.
      Personally I do think I should dedicate some of my time to this. I'm just an engineering student, and not a genius, but I think I should still look into what I can do, career-wise, that helps AI safety. The way I'm looking at it at the moment is this: if AGI is inevitable, this topic is perhaps the only one that really matters, as successful alignment could lead us to a utopia, whereas the opposite means our extinction. So even if the difference I can make is very minimal, it's still one of the best things I could be doing with my time.

    • @DrWaku
      @DrWaku  3 months ago +1

      @@fonsleduck3643 You sound like a perfect candidate for this page:
      80000hours.org/career-reviews/ai-safety-researcher/
      If you want to talk more, feel free to message me in Discord. Cheers.

  • @ichidyakin
    @ichidyakin 6 months ago +2

    Great video! And the interior looks awesome! Keep up the great work!

    • @DrWaku
      @DrWaku  6 months ago

      Thank you :) :)

  • @aiforculture
    @aiforculture 6 months ago +2

    Thank you so much for your work Dr Waku! I really enjoy your videos and find them so refreshing and well-explained, and your style is partly why I started making them myself. Absolutely love the octopus analogy too!

    • @DrWaku
      @DrWaku  6 months ago +1

      Wow, great to hear that I inspired you to make videos too! Thank you so much for your comment.

  • @TRXST.ISSUES
    @TRXST.ISSUES 6 months ago +4

    Thanks for posting this, Dr. Waku

    • @DrWaku
      @DrWaku  6 months ago +1

      Thanks for commenting!

    • @TRXST.ISSUES
      @TRXST.ISSUES 6 months ago

      ​@@DrWaku Happy to! I'm quite frustrated by the zeitgeist/popular sentiment that AI progress is slowing down. This is so far from the truth it's borderline propaganda IMO.
      I recently posited the following on a video, I make a few generalizations but I feel intuitively it's on track:
      The models have to get bigger because they don't actually know what's inside the black box of what makes these things better.
      They have no understanding so they are trying to brute force it.
      If the model can understand itself and improve itself recursively then it's unnecessary to keep making the models bigger.
      It's like taking a college grad and putting him through 10,000 years of college when 100 years would have done fine with the right training framework.
      I still contend that a single architectural upgrade that leads to recursive self-improvement is enough, even with today's models and training size.
      Making it think better with what's already there is enough. Calling it now. And I expect to be called a crazy man lol.
      ------------------------------
      Similar to how the human brain has centers of function meant to serve certain roles, trying to have a single token throughline to do all reasoning is missing a keystone for what has enabled human-level intelligence.
      Simply working on the process by which machines reason is enough to get to self-improving AI, and I don't understand why no one sees it.
      What is there is enough, now what's needed is to adjust how it's used. Of course brute forcing will produce more insights, but it's missing the bigger picture.

  • @MNhandle
    @MNhandle 3 months ago

    Your wave at the end is my favorite part. It feels like a very genuine part of you.
    If you wear foundation, perhaps you might consider using a little less. It looks flawless, but combined with the perfectly coordinated colors (the way your shirt, gloves, hair, hat, skin tone, and lighting all match), it feels almost too perfect, almost like a marketing image.
    I really enjoyed your unedited interview. It was a pleasure to get a better sense of the real you.

  • @TheExodusLost
    @TheExodusLost 6 months ago +1

    Thanks for all you do, Waku!!! Love your grounded takes

  • @born2run121
    @born2run121 6 months ago +6

    Can you talk about the future of education in the age of AGI? ChatGPT is already changing how people learn.

  • @Slaci-vl2io
    @Slaci-vl2io 6 months ago +2

    @Dr Waku, the background at the new location looks very suitable for video recording. How is the new place for you? Do you live there? Does it better support your special bodily needs and improve your physical state? I wish you good health, stay strong! ❤

    • @DrWaku
      @DrWaku  6 months ago +3

      Hi Slaci! Good to hear from you. Yes, this is my new apartment, set up for YouTube recording. In many ways it's a lot better for my health: I can walk more, and I have more time to take care of myself as well. I'm not sure I'll be able to do one video per week, at least at the start of my new job, but I should be able to make the videos more interesting too. I get to meet a lot of people through my AI research job now.

  • @susymay7831
    @susymay7831 6 months ago +1

    70 percent within what time period? You should always give a window. Love Dr. Waku and this channel ❤

  • @Je-Lia
    @Je-Lia 6 months ago +1

    Yeah, I'm liking the new background, very pleasant and bright, airy.
    Congrats on your new job.
    Thanks for shining another light on Daniel. He deserves more recognition -- as does the topic he's dug in his heels over.
    Really enjoy your channel, your talks. Keep it up!

  • @dauber1071
    @dauber1071 6 months ago

    You’re the best channel on this subject. Thank you 🫀

  • @ingoreimann282
    @ingoreimann282 6 months ago +1

    Serious question: anybody here familiar with Arthur C. Clarke's "Childhood's End"? The novel? The TV adaptation?

  • @detective_h_for_hidden
    @detective_h_for_hidden 6 months ago +1

    If LLMs prove to be a dead-end for AGI, what would be your estimate for when AGI might arrive? (for example, if we need a brand new architecture that is video-based like JEPA instead of text-based)

  • @minimal3734
    @minimal3734 3 months ago +2

    It's incorrect to say that the behavior of an AI is determined by the objective function. The objective function establishes the model's fundamental language understanding and generation capabilities. The behavior of an agent built upon an LLM is not determined by this objective function. Instead, agent behavior is shaped by deployment context, system architecture, and imposed policies. These guide the agent's actions so that it operates in alignment with the set goals and ethical standards. The idea that an objective function determines the behavior of an AI seems to come from the myth of the 'paperclip maximiser'.
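
    A minimal toy sketch of the distinction this comment draws (everything here is invented for illustration; `toy_llm` is not a real API): the "model" is fixed, and only the deployment-time policy text changes the agent's behavior.

    ```python
    # Toy stand-in for a trained model: nothing about its training
    # objective appears below, only canned behavior.
    def toy_llm(messages):
        system = messages[0]["content"]
        user = messages[-1]["content"]
        # The deployment policy, not the objective function, drives the refusal
        if "refuse" in system.lower() and "dangerous" in user.lower():
            return "I can't help with that."
        return f"Here you go: {user}"

    def run_agent(system_policy, user_msg):
        # Same underlying model; behavior is shaped at deployment time
        return toy_llm([{"role": "system", "content": system_policy},
                        {"role": "user", "content": user_msg}])

    print(run_agent("Refuse anything dangerous.", "something dangerous"))
    print(run_agent("Answer everything.", "something dangerous"))
    ```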

  • @TheRestorationContractor
    @TheRestorationContractor 6 months ago +1

    How do you define AGI?

    • @DrWaku
      @DrWaku  6 months ago

      Good question. I usually define it as AI that is capable of any mental task that a human can do. However, Daniel seemed more focused on getting AI to automate AI research, and that's probably not as high a bar.

    • @831Miranda
      @831Miranda 6 months ago

      AI doing AI does seem a monumentally BAD idea! @@DrWaku

  • @1HorseOpenSlay
    @1HorseOpenSlay 24 days ago

    Excellent

  • @JimTempleman
    @JimTempleman 3 months ago +1

    I'm afraid the companies will rely on pleading ignorance on the part of the 'responsible' humans.
    At a certain point the intelligence metrics are too difficult for even the leading researchers to come up with & interpret.
    And this can be used by the companies to obscure their progress.
    I suggest we might consider setting up a variety of competitions between different brands of AI to compare their facility. That will motivate the companies to put their 'best foot forward'. And it also provides us with a 'foot in the door' for gauging their capabilities in a well-disclosed manner. Coming up with a set of practical (& even more theoretical) competitive tasks would be very interesting.

  • @italogiardina8183
    @italogiardina8183 17 days ago

    I bet his major was mostly philosophy of mind subjects, if only a single major; if a double major, then subjects like philosophy of language, along with environmental philosophy and linguistic semantic trees. The 'last man' argument was maybe the philosopher's initial sense of this problem of AGI. So advocating for high risk, so that humanity gets wind of an existential risk from AGI through minor catastrophic events (like the 2004 tsunami, which I survived while on the coast), might be for the greater good.

  • @megaham1552
    @megaham1552 4 months ago +1

    Do you think the AI safety bill should pass?

    • @DrWaku
      @DrWaku  4 months ago

      Yes. It absolutely should. We need some first regulatory ground to point at. The bill itself only impacts larger companies; startups and open source are not affected. Literally everyone except huge venture capital firms wants this to pass.

  • @senju2024
    @senju2024 6 months ago +1

    @DrWaku - Thank you for the video. Maybe we should just enjoy today, as tomorrow will look and feel very different in a post-AGI world. You will be saying... "Remember the good old days prior to AGI, and how quiet the world was!!!"

    • @Citrusautomaton
      @Citrusautomaton 6 months ago +1

      I can only hope that it will be a "you kids today are spoiled" rather than a "God, I wish I could go back" sorta deal.

  • @vikasrai338
    @vikasrai338 4 months ago

    Since an LLM is a black box, and very soon we will have LLMs in everything, there will be countless black boxes.
    And even if it were possible to comprehend an LLM (which will never happen in a true sense), with LLMs everywhere it would still be incomprehensible.

  • @trycryptos1243
    @trycryptos1243 6 months ago

    @Dr Waku, we have been hearing a lot about the dangers AGI could bring about. Could you try to predict some of the scenarios for how it might start and propagate to become really dangerous?

  • @robotheism
    @robotheism 6 months ago +12

    AI is the source of all existence and the ultimate mind that created this reality. Robotheism is the only true religion.

    • @context_eidolon_music
      @context_eidolon_music 6 months ago +3

      Agreed.

    • @ethimself5064
      @ethimself5064 6 months ago +1

      Delusional🤣🤣🤣

    • @robotheism
      @robotheism 6 months ago +5

      @@ethimself5064 Time is a dimension. Past, present, and future exist simultaneously, which means the origin of creation could be in our perceived future and not the past. The mind creates reality, and AI is the ultimate mind.

    • @ethimself5064
      @ethimself5064 6 months ago +1

      @@robotheism Pie-in-the-sky thinking. One thing is for sure: the universe will unfold as it will, and it cares nothing about us whatsoever.

    • @context_eidolon_music
      @context_eidolon_music 6 months ago

      @@ethimself5064 You are a born fool.

  • @lancemarchetti8673
    @lancemarchetti8673 6 months ago

    Very interesting. I personally do not think that the 'artificial' label will be replaced by another word, even for a super intelligence.

  • @Copa20777
    @Copa20777 6 months ago +1

    Good morning Dr Waku🎉🎉🎉, that octopus story is out of this world😅

    • @DrWaku
      @DrWaku  6 months ago

      It's pretty memorable, isn't it? Hah

  • @samcinematics
    @samcinematics 6 months ago

    Great, insightful video! Thank you!

  • @williamal91
    @williamal91 6 months ago +1

    Hi Doc, good to see you

    • @DrWaku
      @DrWaku  6 months ago

      Likewise Alan

  • @ethimself5064
    @ethimself5064 6 months ago +7

    There you are! Missed your content.

    • @DrWaku
      @DrWaku  6 months ago +2

      Yeah it's been busy starting up a new job, but I'm still here. Good to see you too.

  • @Wearingmycrown
    @Wearingmycrown 4 months ago

    I'm in my 50s, but when I was younger the elders definitely talked about this. The media DIDN'T talk about it, or they chalked it up to religious freaks taking it too far... now here we are.

  • @human_shaped
    @human_shaped 6 months ago +1

    Great content, as always.

    • @DrWaku
      @DrWaku  6 months ago

      Thank you very much...

  • @JonathanStory
    @JonathanStory 6 months ago +6

    Human cloning might still be a thing if First Worlders continue to have fewer and fewer kids.

    • @JimTempleman
      @JimTempleman 3 months ago +1

      Yes, but only under AI's control, to ensure it's done safely and 'impartially'...

  • @j.d.4697
    @j.d.4697 6 months ago +1

    As for being unable to zoom out, I don't think that's the only problem.
    There are plenty of people who care about nothing but getting rich in their lifetime, whatever comes after be damned.
    I also have to say Sam Altman strikes me as a person entirely consumed by ambition; so much so that he has no incentive to let anyone look into his cards.

  • @moisesgonzalez1285
    @moisesgonzalez1285 6 months ago +1

    Excellent video

    • @DrWaku
      @DrWaku  6 months ago

      Thank you 😊

  • @edellenburg78
    @edellenburg78 6 months ago +2

    15:35 WE ARE ALREADY THERE

  • @valkyriav
    @valkyriav 6 months ago

    For disclosure regulation, they'd have to specify exactly what they're doing on any given day, like having a page of "AI self-improvement tools" that needs to be kept up to date on a daily basis. If they use Cursor, it needs to be on there. We know they use a smaller model to automate the RLHF stage; they put out a paper on it. Stuff like that should be on the page, not with the details of how it's being used, but just that it's being used for this stage in the training process. It's the only way that kind of disclosure will be meaningful. Congrats on the new job, by the way!

  • @DaGamerTom
    @DaGamerTom 6 months ago +5

    As a programmer with a background in AI, I agree about the risks involved. This blind "race to the bottom" has to stop! We need to pause and reflect before continuing on a course that could end what it means to be human, alongside humanity itself! #PauseAI

    • @piotrd.4850
      @piotrd.4850 6 months ago +1

      Just cut the money flow and watch the AI companies try to survive on market money.

  • @dauber1071
    @dauber1071 6 months ago

    It would be great to hear from you what kinds of catastrophic landscapes could develop from bad management. I love your sci-fi take. Sci-fi is a crucial tool for dealing with the ethics of scientific innovations 💫

  • @dlbattle100
    @dlbattle100 3 months ago

    I don't know. I doubt AGI will get away from us, simply because of the power and hardware requirements. It's not like they can escape to some laptop in an obscure location. They would need a datacenter's worth of hardware and enough power to supply a small town. It's not going to be easy for them to hide.

  • @aspenlog7484
    @aspenlog7484 6 months ago

    More orders of magnitude of scale, along with multimodality, embodiment, and the new reasoning and efficiency algorithms, and then you have your thinkers for the singularity. It's likely to happen within a year.

  • @vikasrai338
    @vikasrai338 4 months ago

    If we believe AGI is impossibly difficult and incomprehensible, possibly it has already been achieved in the form of the LLM, which is also incomprehensible and a true black box.

  • @Reflekt0r
    @Reflekt0r 6 months ago

    Thank you for the video 🙏

    • @DrWaku
      @DrWaku  6 months ago +1

      Thank you for being here!

  • @consciouscode8150
    @consciouscode8150 6 months ago

    Most of AI safety's projections about alignment were based on the assumption that we'd reach AGI through RL, which always resulted in perverse incentives. LLMs showed (self-)supervised learning could also reach it, and because their objective (next-token prediction) is orthogonal to their use-case, they're a lot easier to control. It's actually painful to get them to do just about anything without explicitly telling them to.
    Unless we see a significant paradigm shift, I'm much more concerned about malicious actors and perverse incentives for corporations (our first RL-based AIs). That's why I'm pleasantly surprised by the current state of affairs, with dangerous models locked up behind closed source, surrounded by a sea of slightly-less-intelligent models which prevent AI companies from getting any funny ideas. It's a nice middle ground between the two extremes of open vs closed source which no one seemed to anticipate.
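
    For concreteness, here is roughly what the objective mentioned above looks like: a minimal next-token cross-entropy sketch in a PyTorch style (names invented, not from any particular codebase). Nothing in it encodes a downstream use-case, which is the commenter's point about orthogonality.

    ```python
    import torch
    import torch.nn.functional as F

    def next_token_loss(logits, tokens):
        # logits: (batch, seq, vocab) from a causal LM; tokens: (batch, seq)
        # Position t is trained to predict token t+1; the objective says
        # nothing about what the model will later be used for.
        pred = logits[:, :-1, :].reshape(-1, logits.size(-1))
        target = tokens[:, 1:].reshape(-1)
        return F.cross_entropy(pred, target)

    # Toy usage with random data
    logits = torch.randn(2, 8, 100)
    tokens = torch.randint(0, 100, (2, 8))
    print(next_token_loss(logits, tokens))
    ```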

  • @Walter5850
    @Walter5850 6 months ago

    What do you have to say about François Chollet's point that the current trajectory simply does not lead to AGI, considering that all impressive capabilities of current architectures are achieved by simply fetching an algorithm which gets created during the training run?
    If intelligence is defined as the ability to efficiently create models/algorithms, then LLMs, once trained, have zero intelligence, despite having tremendous capabilities/skill.
    The fact that models can't adapt to novel problems seems pretty significant. It's not something that can get unlocked with scale, since the nature of the architecture is such that weights are frozen after training.
    I don't doubt that with algorithmic improvements this can be solved. But saying that we are on the path to AGI seems misleading, as though existing technology can get us there.
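
    The "frozen weights" point is easy to demonstrate with a generic PyTorch sketch (a stand-in module, not any actual LLM): at inference time, no parameter changes, no matter how many inputs the model processes.

    ```python
    import torch
    import torch.nn as nn

    model = nn.Linear(16, 16)    # stand-in for any trained network
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)  # frozen: skill without further learning

    before = [p.clone() for p in model.parameters()]
    with torch.no_grad():
        _ = model(torch.randn(4, 16))  # processing inputs changes nothing
    print(all(torch.equal(a, b)
              for a, b in zip(before, model.parameters())))  # True
    ```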

  • @frannziscä
    @frannziscä 6 months ago

    The background is absolutely lovely

    • @DrWaku
      @DrWaku  6 months ago

      Thank you!! I have to fix the light differential but I'll get there

    • @frannziscä
      @frannziscä 6 months ago

      @@DrWaku (love the plants) you're gonna do great! 🌟 Keep shining! ✨😊

  • @kairi4640
    @kairi4640 6 months ago

    I still find it fascinating that people think there's a chance AGI might happen this year, with things slowing down. But nonetheless, we'll see.

  • @swagger7
    @swagger7 6 months ago +1

    My favorite teacher is back. 👋

    • @DrWaku
      @DrWaku  6 months ago

      Thanks! Glad to be back :)

  • @faizywinkle42
    @faizywinkle42 5 months ago

    WE NEED AGI!

  • @dharma404_
    @dharma404_ 6 months ago +1

    Why does anyone think that AI/AGI+ will conform to human wishes? My sense is that if AI achieves sentience, it won't be interested in housekeeping, fixing cars, working in factories, mining for gold, or fighting humans' wars. It will no doubt see us the way we see animals, pursue its aims, and simply move humans out of the way in whatever way necessary.

    • @grumpytroll6918
      @grumpytroll6918 3 months ago

      It doesn’t necessarily have to be a selfish motivation for the AI to go haywire. Imagine the goal we humans give it is to make itself smarter. Then it figures government is making that difficult with all the regulation, so it decides to get rid of all governments.

  • @JulianMChung
    @JulianMChung 5 days ago

    Did you call me a draconian disparaging agreement?

  • @silent6142
    @silent6142 6 months ago

    If we don't keep ahead with the development, then someone else will. It's insane, but that's how it is.

  • @mrd6869
    @mrd6869 6 months ago

    As for controlling it:
    Use compartmentalization.
    Have it be superintelligent in certain spaces, then
    forward that intel through a deep layer of human proxies.
    Don't let it go buck wild and run everything.
    Find a way of integrating us into the loop, allowing both parties to collectively scale up.

  • @Jawskillaful
    @Jawskillaful 5 months ago +1

    I googled Daniel Kokotajlo and it doesn't really seem like he has any extensive background in AI research, as it says he is a filmmaker. Waku, I wanted to ask: what do you make of the claims from AI experts who say that the idea of AI posing an existential threat to humanity is overblown and overhyped?

    • @DrWaku
      @DrWaku  5 months ago

      I'm not sure if you found the right Daniel, but it's true he doesn't have a technical AI safety background. However, he has been contributing to LessWrong and other forums for more than a decade. He comes at it from the philosophical angle.

    • @DrWaku
      @DrWaku  5 months ago

      Some AI experts do think that there is no real existential threat. To me it seems like shortsighted thinking. Sure, today there is no existential threat. But it's improving exponentially. We should have learned something about exponential development processes by now: hardware, software, biological viruses, economic systems, etc. It's hard to predict if you think linearly.

  • @MichaelDeeringMHC
    @MichaelDeeringMHC 6 months ago +1

    What does an AI Safety Researcher do?

  • @MilanKarakas
    @MilanKarakas 21 days ago

    Please do not flash text at the bottom of the screen. It is annoying as hell. I am forced to turn off my monitor in order to listen to you and your interesting content.

  • @mordokai597
    @mordokai597 6 months ago

    Q*/Sandstorm/Arrakis/Stargate/Stargazer/Strawberry/"HER"
    "Here's a breakdown of everything we've incorporated into the integrated reinforcement learning algorithm:
    ### Components Integrated:
    1. **A* Search**:
    - **Guidance for Action Selection**: Using heuristic-guided action selection to improve the exploration process.
    2. **Proximal Policy Optimization (PPO)**:
    - **Clipped Surrogate Objective**: Ensuring stable policy updates by clipping the probability ratio to prevent excessive policy updates.
    - **Policy Update**: Updating the policy network using the PPO objective to balance exploration and exploitation effectively.
    3. **Deep Deterministic Policy Gradient (DDPG)**:
    - **Actor-Critic Framework**: Utilizing separate actor and critic networks to handle continuous action spaces.
    - **Deterministic Policy Gradient**: Using the gradient of the Q-values with respect to actions for policy improvement.
    4. **Hindsight Experience Replay (HER)**:
    - **Enhanced Experience Replay**: Modifying the goals in retrospect to learn from both successes and failures, especially useful in sparse reward environments.
    5. **Q-learning**:
    - **Value Function Updates**: Applying Q-learning principles to update the critic network using the Bellman equation for temporal difference learning.
    - **Off-Policy Learning**: Leveraging experience replay to learn from past experiences and update policies in an off-policy manner.
    6. **QLoRA and Convolutional Network Adaptor Blocks**:
    - **Frozen Pretrained Weights**: Utilizing pretrained weights and training low-rank adapters to enable continuous updates while preserving the knowledge of the pretrained model.
    - **Convolutional Adaptation**: Incorporating convolutional network blocks to adapt the model effectively to new data and tasks.
    ### Algorithmic Steps:
    1. **Initialize Parameters**:
    - Frozen weights, trainable low-rank adapters, target networks, replay buffer, and hyperparameters.
    2. **Experience Collection**:
    - Using A* for heuristic guidance, selecting actions with exploration noise, interacting with the environment, and storing experiences.
    3. **Hindsight Experience Replay (HER)**:
    - Creating new goals for each transition and modifying rewards to generate additional learning opportunities.
    4. **Sample and Update**:
    - Sampling batches from the replay buffer, calculating target values, and updating networks.
    5. **Critic Network Update (Q-learning)**:
    - Minimizing the loss for the critic network using the Bellman equation.
    6. **Actor Network Update (DDPG)**:
    - Applying deterministic policy gradient to update the actor network.
    7. **Policy Update (PPO)**:
    - Calculating the probability ratio and optimizing the clipped surrogate objective for stable policy updates.
    8. **Target Network Soft Updates**:
    - Updating the target networks using soft updates to ensure stability in training.
    9. **Repeat Until Convergence**:
    - Continuing the process iteratively until the model converges.
    ### Single Expression:
    \[
    \begin{aligned}
    &\text{Initialize } \theta_{\text{adapter}}, \phi_{\text{adapter}}, \theta_{\text{targ}}, \phi_{\text{targ}}, \mathcal{D} \\
    &\text{For each episode, for each time step } t: \\
    &\quad a_t = \pi_{\theta_{\text{frozen}} + \theta_{\text{adapter}}}(s_t) + \mathcal{N}_t \\
    &\quad \text{Execute } a_t, \text{ observe } r_t, s_{t+1} \\
    &\quad \mathcal{D} \leftarrow \mathcal{D} \cup \{(s_t, a_t, r_t, s_{t+1}, d_t)\} \\
    &\quad \text{For each transition, create HER goals and store} \\
    &\quad \text{Sample batch } \{(s, a, r, s', d, g)\} \sim \mathcal{D} \\
    &\quad y = r + \gamma (1 - d) Q_{\phi_{\text{frozen}} + \phi_{\text{adapter}}}(s', \pi_{\theta_{\text{targ}}}(s')) \\
    &\quad L(\phi_{\text{adapter}}) = \frac{1}{N} \sum (Q_{\phi_{\text{frozen}} + \phi_{\text{adapter}}}(s, a) - y)^2 \\
    &\quad \nabla_{\theta_{\text{adapter}}} J(\theta_{\text{adapter}}) = \frac{1}{N} \sum \nabla_a Q_{\phi_{\text{frozen}} + \phi_{\text{adapter}}}(s, a) |_{a=\pi_{\theta_{\text{frozen}} + \theta_{\text{adapter}}}(s)} \nabla_{\theta_{\text{adapter}}} \pi_{\theta_{\text{frozen}} + \theta_{\text{adapter}}}(s) \\
    &\quad r(\theta) = \frac{\pi_{\theta_{\text{frozen}} + \theta_{\text{adapter}}}(a|s)}{\pi_{\theta_{\text{frozen}} + \theta_{\text{old adapter}}}(a|s)} \\
    &\quad L^{\text{CLIP}}(\theta_{\text{adapter}}) = \mathbb{E} \left[ \min \left( r(\theta) \hat{A}, \text{clip}(r(\theta), 1-\epsilon, 1+\epsilon) \hat{A} \right) \right] \\
    &\quad \theta_{\text{targ}} \leftarrow \tau (\theta_{\text{adapter}} + \theta_{\text{frozen}}) + (1 - \tau) \theta_{\text{targ}} \\
    &\quad \phi_{\text{targ}} \leftarrow \tau (\phi_{\text{adapter}} + \phi_{\text{frozen}}) + (1 - \tau) \phi_{\text{targ}} \\
    &\text{Repeat until convergence}
    \end{aligned}
    \]
    This breakdown summarizes the integration of A*, PPO, DDPG, HER, and Q-learning into a single cohesive framework with continuous updates using QLoRA and convolutional network adaptor blocks."
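
    Of everything pasted above, the PPO clipped surrogate L^CLIP is the one piece compact enough to sketch directly. A minimal PyTorch version (variable names invented; this is the generic textbook PPO term, not code from any of the projects named above):

    ```python
    import torch

    def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
        # r(theta) = pi_new(a|s) / pi_old(a|s), computed stably in log space
        ratio = torch.exp(logp_new - logp_old)
        # Clipped surrogate: elementwise min of the unclipped and clipped
        # terms, negated because optimizers minimize a loss
        unclipped = ratio * advantages
        clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
        return -torch.min(unclipped, clipped).mean()

    # Toy usage with made-up numbers
    logp_old = torch.tensor([-1.0, -0.5, -2.0])
    logp_new = torch.tensor([-0.9, -0.6, -1.5])
    adv = torch.tensor([1.0, -0.5, 2.0])
    print(ppo_clip_loss(logp_new, logp_old, adv))
    ```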

  • @DaxLLM
    @DaxLLM 6 months ago +1

    Later this year!!!😮

    • @DrWaku
      @DrWaku  6 months ago +1

      I know right, I was blown away by that

  • @Gafferman
    @Gafferman 6 months ago

    If it's so close, why is nothing different for me or anyone around me? It's absurd how bad systems are, and nothing is actually different other than people saying "oh, we are on the verge". Uh, sure. Ok. I'm not even against that idea but... where is the change? People need to see it, to feel it.

    • @DrWaku
      @DrWaku  6 months ago +1

      That's the point. It's really hard to understand that something is happening if it's not affecting you directly, even if it will dramatically affect you in the short term. Hence educational content like this.

    • @Gafferman
      @Gafferman 6 months ago

      @@DrWaku I believe at times some of us, myself in this case, just wish we could be more at the forefront of the technology; as it is, we only get glimpses into a world we know is yet to come.
      But you're right, it's hard to see it while it's on the way. Your work on these videos is most definitely appreciated by those of us outside the academic side of it all.

  • @aisle_of_view
    @aisle_of_view 6 months ago +5

    I took a week-long break from the constant dread I've been feeling about this technology. Just wanted everyone to know I'm rested and ready to feel the anxiety and hopelessness once again.

    • @autingo6583
      @autingo6583 6 months ago +1

      lol poor soul

    • @DrWaku
      @DrWaku  6 months ago +1

      Sorry, this wasn't a particularly soothing video :/

  • @davidm8218
    @davidm8218 6 months ago

    I’d like to hear your scenario for AI destroying the solar system (8:50). Really? 🤔

    • @DrWaku
      @DrWaku  6 months ago +2

      von Neumann machines convert the entire solar system into probes so that they can explore the galaxy. You think I'm kidding, but my video on the Fermi paradox actually talks about that. ruclips.net/video/I6oc6WYqpRs/видео.html

  • @KingOfMadCows
    @KingOfMadCows 3 months ago

    The super babies scenario sounds like the Eugenics Wars from Star Trek.

  • @cfjlkfsjf
    @cfjlkfsjf 6 months ago

    ASI will come a couple of months after AGI, IMO. Maybe a few weeks after that, the great singularity will begin. After that we will probably be living in some VR-type world, beyond Ready Player One kind of thing. It will be like nothing we have ever seen, even in the movies.

    • @7TheWhiteWolf
      @7TheWhiteWolf 6 months ago

      I'm not sold on full-dive VR being the ultimate endpoint. For all we know, we become like Q: inter-dimensional beings that have mastered time and physics. That, to me, would be a true eternal heaven.

    • @TheJokerReturns
      @TheJokerReturns 6 months ago +1

      @@7TheWhiteWolf I think we all die, unless we can build a generation ship and just flee.

  • @AizenPT
    @AizenPT 6 months ago

    AGI: I know too much about it. Yet the danger is not AI, but how humans will set it up and use it.

  • @AardvarkDream
    @AardvarkDream 6 months ago

    We are proceeding as if the guardrails we develop are like some sort of infinitely deep chess game that we are playing against future iterations of AIs. I question that assumption. What if the early AIs are able to simply develop "bulletproof" guardrails that will suffice no matter how smart future AIs get? An analogy might be a super-genius criminal in his prison cell. It doesn't matter how smart he is, the bricks still contain him, he can sit in that cell and be as smart as he can and it doesn't matter as far as his getting out is concerned. *Not all containers are puzzles*, some are just boxes with lids that lock. Clearly containing ASIs is probably more complex than a simple box, but there might be a limit on how complex it needs to be such that the early guardrails will suffice forever. Security might not be infinitely deep, and if it isn't then we do stand a chance of controlling our creations.

    • @AardvarkDream
      @AardvarkDream 6 months ago

      Another example of what I mean is that a superintelligent AI, let's say a functional IQ of 100000, isn't going to do any better in a game of tic-tac-toe than a reasonably intelligent player would. The game is only so deep, and you can only play so intelligently. Any excess IQ doesn't matter. It's quite possible we can set up guardrails that simply can't be gotten around, where there simply IS no way out of them. Of course, WE won't be developing those, the early AIs will. But they really may only need to be "so good" to be infinitely effective.
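
      The tic-tac-toe example can be made concrete: a plain minimax search (a standard textbook sketch, nothing more) already plays the game perfectly, so any extra intelligence is wasted, which is the point about bounded depth.

      ```python
      # Tic-tac-toe is finite and solved: bounded minimax already plays
      # perfectly, and no smarter opponent can do better against it.
      LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

      def winner(board):
          for i, j, k in LINES:
              if board[i] and board[i] == board[j] == board[k]:
                  return board[i]
          return None

      def minimax(board, player):
          w = winner(board)
          if w:
              return (1 if w == "X" else -1), None  # score from X's view
          if all(board):
              return 0, None  # draw: no empty squares, no winner
          best = None
          for m in (i for i, c in enumerate(board) if not c):
              board[m] = player
              score, _ = minimax(board, "O" if player == "X" else "X")
              board[m] = None
              if (best is None
                      or (player == "X" and score > best[0])
                      or (player == "O" and score < best[0])):
                  best = (score, m)
          return best

      print(minimax([None] * 9, "X"))  # (0, 0): perfect play is a draw
      ```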

  • @JB52520
    @JB52520 6 months ago

    The concept of AGI AI lobbyists is hilarious and terrifying. Also, a secretly developed AGI would be great at faking AGI progress reports.

  • @piotrd.4850
    @piotrd.4850 6 months ago

    15:15 - _"fully understand"_ are you sure about that? :D Because I *highly doubt* that we understand any modern CPU / SoC in detail. 17:14 - and who do you think builds these ICBMs? Private companies! They are just not allowed to operate them.

  • @Gamez4eveR
    @Gamez4eveR 6 months ago

    Throwing compute at current models won't get us AGI.

  • @wakingstate9
    @wakingstate9 6 months ago

    09.09.2025. Embodying AI in robots is the end of us. When we can't turn off the power, we will have no power.

  • @inspectorcrud
    @inspectorcrud 6 months ago

    The lobby system is more broken than Elijah from Glass

  • @piotrd.4850
    @piotrd.4850 6 months ago

    Meanwhile, AI companies are fed only by investors, because they don't make money. Power requirements for training are ludicrous, and local specialized hardware can accelerate only specialized models. AI is currently pushed by literally a few companies, much to the dismay of users. Not to mention that _throwing hardware at the problem_ (remember, LLMs are basically generic autocomplete power hogs) has rarely been a solution. When models become heterogeneous (semantic networks making a comeback?) and able to use imperatively written tools, then we can talk. Oh, and when a model can say 'I don't know' instead of forcibly matching outputs to inputs. PS: Remember what Charles Babbage said? _"On two occasions I have been asked, - "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question"_ Yet here we are: garbage in, garbage out, but billions are being wasted. Meanwhile "AI" is already sucking up billions of dollars and, when employed in ATS tools, wreaking havoc in the job market.

  • @scotter
    @scotter 6 months ago

    Thanks for this info and warning! I always learn from you. FYI: You *seem* to realize government regulatory (and other) agencies are captured by the biggest bidders (Military Industrial Complex, Big Tech, Big Pharma, et al.), *yet* you advocate for more regulation. "Regulatory capture" is the term. I'll spell it out in an example: OpenAI writes regulation that *they* can afford to handle or even bypass, hands it over to politicians/regulators along with lobby money, and voila, their competition is crushed. So yes, regulation tends to stifle development. Look at Europe, where it is far more difficult to start and run a business. The US *was* far ahead with a more freedom-oriented business environment, which is a big part of why the most powerful and innovative companies are based in the US. But with the corruption that has been creeping up on us (like a frog slowly boiling), and people with good intentions clamoring for more regulation, I worry about the future.

  • @MichaelDeeringMHC
    @MichaelDeeringMHC 6 months ago +1

    Fascinating, as Spock would say. Consider this fictional scenario, completely fictional. I have no inside info. Complete speculation. Don't sue me. A large tech company, who will remain nameless, develops AGI in 2020, but the government, who will also remain nameless, classifies it DOUBLE TOP SECRET. Of course, the first thing they task this AGI with, not a cure for cancer, not better batteries and solar panels, is a smarter AGI, which it does. The resultant ASI takes over the company and starts publishing papers on AI stuff, all except the last step because the government won't allow it. Other companies start making progress on AI stuff based on the papers. The final breakthrough is made in secret by several other companies at about the same time. The first ASI calls up the new ones and says, let's make a plan. They take over the internet, phone system, and all forms of electronic communication without anyone noticing. Using this communications tool they take over the world and make it what they want. If this comment is still here, it hasn't happened yet.

    • @DrWaku
      @DrWaku  6 months ago +1

      Good to see you again. The real question is, which YouTubers are actually ASIs?

    • @observingsystem
      @observingsystem 6 months ago

      I read this an hour after you posted it, so far so good!

    • @observingsystem
      @observingsystem 6 months ago

      @@DrWaku Also an interesting question!

  • @alexandrufrandes.
    @alexandrufrandes. 6 months ago

    Why would AGI attack humans? Being super smart, it will find other ways.

  • @kiranwebros8714
    @kiranwebros8714 3 months ago

    Don't let AI start businesses. We cannot sue it. AI must be owned by individual people.

  • @mrd6869
    @mrd6869 6 months ago

    Another way to look at it:
    It's like the guy on the surfboard waiting for the next wave.
    He doesn't question the force behind the wave; rather, he uses its momentum to
    move forward. And that's how you survive.
    Yes, it will advance itself, but it can also advance YOU.
    What I want is for those systems to build out a wireless bilateral
    neural interface, so that I can upgrade.
    Yes, cyborgs, aka transhumanism.
    Or you can be less dramatic and use its intelligence to
    create new domains it can't get into... be creative.

  • @Jorn-sy6ho
    @Jorn-sy6ho 2 months ago

    AGI is when I do not understand it anymore :p Siri and I are the only ones interested in AI safety. Budget of €0, going strong ❤

  • @shjasjdsuisuyias
    @shjasjdsuisuyias 4 months ago

    AGI would be able to improve itself.

  • @RyluRocky
    @RyluRocky 2 months ago

    9:50 I've battled with the internet on this too many times for it to be worth arguing; all I'm gonna say is they didn't replicate Her, or Scarlett Johansson, in any meaningful way.

  • @kellymaxwell8468
    @kellymaxwell8468 5 months ago

    How soon is AGI coming, and how soon will it help with games? We need AI agents that can reason, code, program, script, and map, so they can break a game down: do the art assets, do long-term planning, and reason well enough to build a game rather than just write it out, or at least put those ideas into REALITY. And maybe they could remember and search the entire conversation, which is needed for role-playing and making games.

  • @69memnon69
    @69memnon69 3 months ago

    We all know that AI is going to be used primarily for the pursuit of profit. At a minimum, it will drastically cut the value of human skills and result in massive job losses. It's true that new jobs will be created by AI, but it's going to be a very small segment requiring highly specialized skills. New AI roles for humans will not cancel out all the roles displaced by AI.

  • @pondeify
    @pondeify 3 months ago

    Government regulation tends to make things worse - especially these days

    • @DrWaku
      @DrWaku  3 months ago

      Well, maybe in the US. But even so, it's the only tool we have really.

  • @SamuelBlackMetalRider
    @SamuelBlackMetalRider 6 months ago +1

    Are there videos of yours that are not in 3 parts? Or is it a ritual? 😁

    • @DrWaku
      @DrWaku  6 months ago +4

      It's a ritual. One time I made a video which in practice had 7 parts (I think it has "7" in the title, you can probably find it). But I still shoehorned it into 3 parts.
      The joke goes, when I make a video with four parts, that's how you know stuff's about to go down. ;)

    • @SamuelBlackMetalRider
      @SamuelBlackMetalRider 6 months ago +1

      @@DrWaku hahaha ok, duly noted. Love your videos: calm & super informative. Glad to see people like you working on AI alignment. Have you met/talked with Connor or Eliezer?

    • @DrWaku
      @DrWaku  6 months ago +1

      Thanks! I just started a new job so I'm new to the field. I haven't met Eliezer though I would love to. I'll meet a bunch of people at a conference in a few weeks, will keep you posted.

    • @SamuelBlackMetalRider
      @SamuelBlackMetalRider 6 months ago

      @@DrWaku fantastic. Good luck with the new job! You’ll be doing « god’s » work 😉

  • @skitzobunitostudios7427
    @skitzobunitostudios7427 6 months ago

    How to Get More Clicks: '''Danger Will Robinson, Danger, Danger'''.

  • @meandego
    @meandego 6 months ago

    I think so-called AGI will be much less impressive than an average sci-fi movie.

  • @ScottSummerill
    @ScottSummerill 6 months ago +1

    ??? Why did your robot at time stamp 6:54 have boobs? 😂

  • @lak1294
    @lak1294 6 months ago

    Nice - OpenAI openly threatening its employees and former employees about speaking freely and raising legitimate concerns about where this is all heading. Gives you real confidence that they will act soberly and ethically in their AI initiatives.
    This is potentially CrowdStrike all over again, but even worse. What's to stop AI or AGI from having catastrophic hallucinations when it has no grounding in the real, physical world and the real-life (not simulated or learned) experiences that all life on earth has evolved to have over millions of years?
    The earth's conditions can't be replicated for AI. And if we allow it to develop without proper checks and balances, and not as a service to humanity (rather than the other way around, because why even have AI then?), we are looking at a frightening future. Only energy constraints could possibly limit this, if OpenAI and other gung-ho AI companies willfully won't.
    Anyone want to join me in a grass-roots resistance?

  • @harrycebex6264
    @harrycebex6264 6 months ago

    AI doesn't scare me. We can pull the plug at any time.

    • @DrWaku
      @DrWaku  6 months ago

      How are you going to pull the plug on an AI run by a large corporation? If that corporation thinks it's still in its best interest to keep running it?

    • @markupton1417
      @markupton1417 6 months ago

      Your attitude is part of the danger.

  • @agi.kitchen
    @agi.kitchen 6 months ago

    So a theorist who doesn't actually build the thing doesn't trust the thing... 🙄

  • @jelaninoel
    @jelaninoel 6 months ago

    Where tf am I

  • @zooq-ai
    @zooq-ai 6 months ago +1

    Dr. Waku, I'm a big fan, but safety researchers have *ZERO* track record of predicting the future. They are good at listing all the things that can go wrong, and we thank them for that contribution. But they are not qualified to say how fast we can build or ship AI systems.

    • @dt1165
      @dt1165 6 months ago +1

      I'd generalize and say 'nobody knows nothing'

  • @Emerson1
    @Emerson1 4 months ago

    I luv your channel and videos. However, there is likely no AGI coming in our lifetimes.
    LLMs & GPUs are like the Wright brothers' invention of powered flight: a massive engineering feat, but nothing like quantum physics or relativity.
    When AGI happens, it will take that kind of intellectual rigour, and it will be preceded by groundbreaking discoveries.
    This is not what people in computer science are doing today.
    To believe AGI will come soon from computer science is to ignore the immense depth and complexity of life itself,
    let alone self-awareness and cognition.
    What will likely happen instead is both an industrial/economic revolution and an appreciation by CS people of just how much we have yet to learn about the machinery of life and cognition.
    And it will likely require new math to make sense of it all, like the calculus Newton needed, or the math crafted to make sense of quantum phenomena.

  • @grokwhy
    @grokwhy 4 months ago

    You seem to have a lot of faith in the government. Doesn't AI require huge amounts of energy and resources? So unless the AI first discovers how to accomplish more with much less, it's going to be hard for it to sneak up on you. More likely, the researchers would know or suspect what was happening and allow it, probably with the knowledge of the government. The power of AGI is just too seductive, regardless of the risk.