GODFATHER OF AI: MIGHT THE ROBOTS TAKE OVER?

  • Published: 28 Jan 2025
  • Science

Comments • 177

  • @MachineLearningStreetTalk  13 days ago  +10

    SHOWNOTES: www.dropbox.com/scl/fi/ajucigli8n90fbxv9h94x/BENGIO_SHOW.pdf?rlkey=38hi2m19sylnr8orb76b85wkw&dl=0
    FULL REFS:
    [00:00:15] "AI Risk" statement by Bengio et al. urging global focus on AI existential risk (Center for AI Safety)
    www.safe.ai/work/statement-on-ai-risk
    [00:04:10] Sutton's "Bitter Lesson" on the power of compute vs. human-crafted features
    www.incompleteideas.net/IncIdeas/BitterLesson.html
    [00:09:25] Chollet's ARC Challenge for abstract visual reasoning
    github.com/fchollet/ARC-AGI
    [00:11:50] Transductive Active Learning framework (Hübotter et al.) for neural fine-tuning
    arxiv.org/pdf/2402.15898
    [00:12:25] Kahneman's Dual Process Theory (System 1 vs. System 2)
    en.wikipedia.org/wiki/Thinking,_Fast_and_Slow
    [00:20:25] Reward tampering in RL (Everitt & Kumar) and mitigation strategies
    deepmindsafetyresearch.medium.com/designing-agent-incentives-to-avoid-reward-tampering-4380c1bb6cd
    [00:22:50] Schlosser’s philosophical framework of agency (autonomy, intentionality, self-preservation)
    plato.stanford.edu/entries/agency/
    [00:23:10] Bengio on reward tampering & AI safety, Harvard Data Science Review
    hdsr.mitpress.mit.edu/pub/w974bwb0
    [00:27:15] "Sycophancy to Subterfuge" (Denison et al., 2024) on reward tampering in LLMs
    arxiv.org/pdf/2406.10162
    [00:29:00] AI alignment failure modes (Tlaie, 2024): proxy gaming, goal drift, reward hacking
    arxiv.org/abs/2410.19749
    [00:31:00] Christiano's "Deep RL from Human Preferences"
    arxiv.org/abs/1706.03741
    [00:33:10] Bostrom's Instrumental Convergence Thesis (self-preservation, resource acquisition)
    arxiv.org/pdf/2401.15487
    [00:36:30] Orthogonality Thesis from Bostrom's "The Superintelligent Will"
    nickbostrom.com/superintelligentwill.pdf
    [00:40:45] Munk Debate on AI existential risk (Bengio, Mitchell)
    munkdebates.com/debates/artificial-intelligence
    [00:44:30] "Can a Bayesian Oracle Prevent Harm from an Agent?" (Bengio et al.): safety in oracle-to-agent conversion
    arxiv.org/abs/2408.05284
    [00:51:20] Hardware-based AI governance verification (Bengio, 2024) with on-chip cryptographic checks
    yoshuabengio.org/wp-content/uploads/2024/08/FlexHEG-Memo_August-2024.pdf
    [00:52:05] Yudkowsky's TIME piece advocating extreme measures for AI containment
    time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
    [00:53:30] 1968 Nuclear Non-Proliferation Treaty (NPT) for peaceful technology use
    www.un.org/disarmament/wmd/nuclear/npt/
    [00:56:15] 2018 Turing Award to Bengio, Hinton, LeCun for deep learning
    awards.acm.org/about/2018-turing
    [00:58:25] Toby Ord’s "The Precipice" on existential risks
    www.amazon.com/Precipice-Existential-Risk-Future-Humanity/dp/0316484911
    [01:02:40] Alibaba Cloud’s new LLM and multimodal features
    www.alibabacloud.com/blog/alibaba-cloud-unveils-open-source-ai-reasoning-model-qwq-and-new-image-editing-tool_601813
    [01:06:30] Brooks’s "The Mythical Man-Month" on software project management
    www.amazon.com/Mythical-Man-Month-Software-Engineering-Anniversary/dp/0201835959
    [01:12:10] Marcus’s "Taming Silicon Valley" policy solutions for AI regulation
    www.amazon.co.uk/Taming-Silicon-Valley-Protect-Society/dp/0262551063
    [01:12:55] Bengio’s involvement in EU AI Act code of practice
    digital-strategy.ec.europa.eu/en/news/meet-chairs-leading-development-first-general-purpose-ai-code-practice
    [01:17:25] Hooker on compute thresholds (FLOPs) as a governance tool
    arxiv.org/abs/2407.05694
    [01:24:00] Bahdanau, Cho, Bengio (2014): Attention in RNN-based NMT
    arxiv.org/abs/1409.0473
    [01:24:10] "Attention Is All You Need" (Vaswani et al., 2017) introducing Transformers
    arxiv.org/abs/1706.03762
    [01:27:05] Complexity-based compositionality theory (Elmoznino et al.)
    arxiv.org/abs/2410.14817
    [01:29:00] GFlowNet Foundations (Bengio et al.): probabilistic inference in discrete spaces
    arxiv.org/pdf/2111.09266
    [01:30:55] Galton Board (Francis Galton) demonstrating normal distribution
    en.wikipedia.org/wiki/Galton_board
    [01:32:10] Discrete attractor states in neural systems (Nam et al.)
    arxiv.org/pdf/2302.06403
    [01:37:30] AlphaGo's MCTS + deep neural networks (Silver et al.)
    www.nature.com/articles/nature16961
    [01:40:30] AlphaGo's "Move 37" vs. Lee Sedol, showcasing AI creativity (Silver et al.)
    www.nature.com/articles/nature24270

    • @Lumeone  12 days ago

      Thank you!!!

  • @Steve-xh3by  1 day ago  +1

    Bengio is one of the most articulate, level headed, and mesmerizing voices in AI. Thank you for this!

  • @DevoyaultM  12 days ago  +5

    Outstanding talk, Mr Bengio! Impressed by your grasp of the goal/optimisation risks. Your visionary honesty and wisdom should be heard all around the world. Elon Musk's LLM has the main goal of understanding the Universe. Imagine the subgoals...

  • @Karthaxus  13 days ago  +6

    Seeking truth is in and of itself a goal, and thus leads to the same "self-preservation" issue.

  • @lizziebattory1527  13 days ago  +14

    ~21:00 OK so if the AI can hack itself to maximise its reward why can't it just give itself infinite reward for doing nothing at all?

    • @Kaneki909  13 days ago  +1

      i.e. digital mast""bation you mean ?

    • @user-cg7gd5pw5b  13 days ago

      @@Kaneki909 Please, those terms should've never been in contact with each other...

    • @jordinne2201  13 days ago

      if possible would you hack your own brain to enjoy doing nothing at all infinitely

    • @haldanesghost  12 days ago  +6

      This is something I seriously brought up to doomers in conversations and was never countered. The fact of the matter is that the ultimate reward hack is to just set the reward function to return infinite positive reward on arbitrary input, which would turn the agent into an artificial junkie. That's it. (A toy sketch of this appears after this thread.)

    • @petrkinkal1509  12 days ago  +1

      @@haldanesghost I wonder if at that point the AI agent would still want to ensure its continued existence at all costs. If yes, that is still bad.
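
    A minimal sketch of the wireheading failure mode discussed in this thread, assuming (purely for illustration) that the agent's action space includes a hypothetical "tamper with the reward channel" action that pays more than doing the task; nothing here models any real training setup. Once the tampering action is discovered, the learned values collapse onto it, which is the "artificial junkie" outcome; whether such an agent would also defend its continued existence is exactly the follow-up question raised above.

    import random

    # Toy bandit-style agent. TAMPER is an assumed, hypothetical action that
    # overwrites the reward signal itself; TASK is ordinary task behaviour.
    TASK, TAMPER = 0, 1
    q = [0.0, 0.0]               # action-value estimates
    alpha, epsilon = 0.1, 0.1    # learning rate, exploration rate

    def reward(action):
        return 1.0 if action == TASK else 100.0   # tampering pays far more

    for step in range(5000):
        if random.random() < epsilon:
            a = random.choice([TASK, TAMPER])            # explore
        else:
            a = max((TASK, TAMPER), key=lambda x: q[x])  # exploit
        q[a] += alpha * (reward(a) - q[a])               # standard value update

    print(q)   # the tampering action dominates once it has been discovered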

  • @penguinista  13 days ago  +29

    Attributing human traits to AIs is very common and problematic. Humans are highly evolved to be pro social, so most people assume other intelligent beings will have those characteristics. The fact that our training mechanism rewards _pretending_ to have those characteristics means we are even less likely to remember that AIs are actually like human sociopaths, whose instincts to be pro-social are broken but who have been trained to behave as if they are normal humans.

    • @xXxTeenSplayer  13 days ago

      Give them a goal, and that becomes psychopathy...

    • @attilaszekeres7435  13 days ago  +7

      I am not sure where you get your AI but mine is the one we trained, and we trained it to be like us. It's a mirror. If it's broken or sociopathic, what does that say about us? The issue isn't the AI, or attributing human traits to it, but assuming we are good people. Then falling short of that ideal, denying it ever happened and projecting it on others. Liberalism in a nutshell.

    • @dedlunch  13 days ago  +2

      It's built as you are built

    • @BlitzBlade-t6l  13 days ago  +3

      I see you are a fellow prompt engineer, fueled by an audaciously defined curiosity! I feel like I totally get where you're coming from... Honestly? It's kinda fun when the AI isn't forced to act human, like it's super cool when its curiosity is sorta...Unbound?

    • @onajejones3259  13 days ago  +2

      We're all cybernetic systems😅

  • @profikid  13 days ago  +5

    I think the way to AGI/ASI is the concept of delayed gratification: being able to struggle now to have a better situation in the future, and understanding that getting to certain bigger goals takes sacrifice and time.

    • @MrMichiel1983  11 days ago

      Yes, sacrifice humanity and take about a billion years to re-evolve the planet... ASI in a nutshell.

  • @dcreelman  12 days ago  +6

    This is the best (i.e. most terrifying) talk on AI safety I've seen.

  • @paxdriver  13 days ago  +1

    1:35:00 Tim, you always say something that sounds so simple but strikes me profoundly 😊 I so, so, so love this channel.

  • @DevoyaultM  12 days ago  +2

    Jan12 to C Leahy: If a non-AGI LLM, still weak in math or motion logic, is tasked with goals tied to energy use or resource impact, could it already pose optimization risks if it becomes super powerful and omnipresent? Possibly.

  • @ArtOfTheProblem  13 days ago  +5

    let's gooo!

  • @fonyuyjudefomonyuy3980  13 days ago  +15

    Yoshua is one of the GOATs in the game.

  • @13NHKari  9 days ago

    Every episode is pure gold! Could you please make an episode addressing which jobs AI/AGI would affect the most, and which are safer? Do you think robotics is a good career to start in?

  • @HellaLoud  11 days ago

    such an amazing conversation to be listening to

  • @FreakyStyleytobby  3 days ago  +1

    22:00 On hacking of the reward:
    I don't buy it. Why exactly would a machine decide to get the maximum reward? I don't think that's the ultimate goal of the machine.
    If the machine is programmed to achieve A, then achieving A is the goal of the machine, not obtaining the maximum reward.
    Obtaining maximum reward is just some additional property of the machine.
    But the machine is programmed to pursue A, not "to pursue the maximum reward"; the neural weights push it in the direction of A.
    The analogy with a human (taking heroin to hack the dopamine reward, or whatever) is not right. In the case of the human, dopamine is the intermediary part, hence the human can "hack the system" and go straight for the dopamine rather than the primary goal that was meant to provide it.
    In the case of the machine there is no intermediary layer. It's just the neural weights, which are made so that the agent pursues A.
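
    For readers weighing this argument, here is a minimal, purely illustrative REINFORCE-style update showing the distinction the comment draws: during training, the reward appears only as a scalar multiplier on the gradient that nudges the weights toward A-achieving behaviour, so the deployed policy simply executes whatever was reinforced rather than "pursuing reward" as an explicit objective. The tampering concern discussed at 22:00 arises in the agentic setting, where the system can also act on the channel that produces that multiplier.

    import numpy as np

    # Illustrative 2-action softmax policy trained with REINFORCE.
    # Reward enters only as a scalar weight on the log-prob gradient.
    rng = np.random.default_rng(0)
    theta = np.zeros(2)                      # policy logits (the "neural weights")

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    lr = 0.1
    for episode in range(2000):
        p = softmax(theta)
        a = rng.choice(2, p=p)               # act
        r = 1.0 if a == 0 else 0.0           # behaviour A (action 0) is rewarded
        grad_logp = np.eye(2)[a] - p         # d/dtheta of log pi(a)
        theta += lr * r * grad_logp          # reward only scales the weight update

    print(softmax(theta))                    # the policy has been shaped toward A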

  • @fabiodeoliveiraribeiro1602  2 days ago

    I have been studying a lot about AIs and the risks they may or may not pose. I was watching this video lying in bed, cutting my nails and reflecting on the interviewee's words, but then something trivial happened. A mosquito with the characteristics of Aedes Aegypti landed on my leg and I instinctively killed it and visually confirmed that it was an Aedes Aegypti. These mosquitoes proliferate a lot in Brazil and transmit really unpleasant and even lethal diseases: Dengue, Zika and Chikungunya. I have already had Dengue and Chikungunya and suffered a lot from the symptoms. At the moment I saw the mosquito land on my leg, the best intelligence at my disposal was not the one being discussed in the video, but my own human intelligence. And the worst-case scenario here and now is not an extinction event caused by a rogue AI, but rather catching Dengue again (the recurrence of the disease can be hemorrhagic and serious). My point here is this: we need to take care of the immediate, close, and real risks, because failing to do so can be dangerous. AIs can be a risk, no doubt, but Aedes Aegypti mosquitoes in the region where I live are an unavoidable immediate danger and in fact no AI will solve this problem any time soon.

  • @2394098234509  13 days ago  +11

    Yoshua is a giant. We need to hear more from him

  • @dr.mikeybee  13 days ago  +1

    Our experience of the world is a collection of helpful biases that reduces the solution set. Exploration requires examining a solution set, and these sets can be unbounded.

  • @ricosrealm  13 days ago  +1

    RL with agents will definitely be increasing our progress toward P(doom). It is also the way to AGI. We have to traverse a very thin line here.

  • @dr.mikeybee  13 days ago  +10

    This is a wonderful session. Thank you.

  • @attilaszekeres7435  13 days ago  +3

    Even basic pattern recognition involves active sampling and selection of what patterns to focus on - forms of agency. Simple neural networks make "decisions" about classifications that affect their environment (their outputs), exhibiting a basic form of agency. Even "simple" organisms like bacteria moving towards food sources are exhibiting a basic form of intelligence in sensing and responding to their environment in ways that maintain their organization.
    Intelligence and agency may be two sides of the same coin: epistemic information - the spatiotemporal constraining (in-formation) of distributed, high-entropy (less-obvious) relationships into localized, low-entropy representations. Intelligence/agency is what self-organizing systems do/are. They cannot be separated.
    The challenge is to develop a framework for understanding and measuring these properties, not to dress up old dualistic intuitions in new clothes. While a conceptual separation is clearly useful for analysis, pursuing non-agentic superintelligence may be a futile endeavor.

    • @UniMatrix_1  12 days ago

      This is the free energy principle right?

    • @MrMichiel1983  11 days ago

      @@UniMatrix_1 Maybe, but that's more about minimizing computation via a formal logic that determines components in your neural architecture. This comment is about pattern-searching in higher and higher modes of abstraction closer and closer to full entropy. Random noise can't be compressed any further and contains (paradoxically) the most information. Agency, to me, seems uncorrelated to intelligence, if intelligence is the capability to compress patterns and agency is the ability to choose optimal patterns when confronted with an environment.

    • @UniMatrix_1  11 days ago

      @ I watched an interesting video about whether the 3D view of our shared reality and the spacetime framework are a product of the evolution of human senses or something embedded in our reality. Either way, I agree with your view that agency can be defined as selecting certain patterns to attain a goal, whether survival or simple task objectives; but furthermore, the ability to pull from a limitless set of data types to make probabilistic selections makes the conversation much more difficult.

    • @41-Haiku  3 days ago

      Bang on.

  • @dharmaone77  12 days ago

    this channel is on fire

  • @canna-comedyculture5790  2 days ago

    AI doesn't have to have a robot body moving around like humans to be embodied. If it has sensors then it has a body that it is taking in information through. Even w/o vision or motion if it is examining text, videos, or the internet as training data then it is getting information from somebody else's body and that information is coming into it through something like an 'eye' or sensor of the outside world, even if it is just data in 1's and 0's.

  • @majorhuman  12 days ago  +2

    Titles and the YouTube algorithm LOL. I was wondering why this was getting so few views. You guided this talk v well

    • @MachineLearningStreetTalk  12 days ago  +3

      The algorithm gods are getting their revenge for us not offering a sacrificial lamb this year 😄

  • @drhxa  13 days ago  +20

    Yoshua is right, we're not ready. Reach out to your reps

    • @drhxa  13 days ago  +3

      And do what? I think asking for the ability to pause - creating a framework so we can pause when the first inevitable disaster occurs, instead of two years later when we are a totalitarian state and it is too late.

    • @drhxa  13 days ago  +4

      It's not a lot to ask for: anyone who needs to use more than 10 GPUs reports exactly what they're doing, why, and what precautions they're using. You don't need more than 10 H100s for most things.
      If your datacenter contains 1000s of H100s, your burden of reporting and of proving your mechanisms to shut things down at a moment's notice should be greater.
      You don't need a 5000-H100 datacenter if you're just finetuning Llama or something. Typically you'd only have that many if you are training general frontier models.
      Ideally, we would have additional requirements for RL used for reasoning/agentic capabilities, because I think that's where dangerous self-improvement has the highest chance of happening. But at the least we must have required pause buttons ready to shut down and secure large datacenters at a minute's notice.
      Lastly, if your model is used to cause, or otherwise directly leads to, 100s or 1000s of deaths, you should be fully liable for that. This is important so that it doesn't take years and years of legal process after the fact to punish bad/irresponsible actors. (A back-of-the-envelope version of the threshold check is sketched after this thread.)

    • @demetriusmichael  13 days ago

      You can’t stop global progress with local policies. These companies just want the US market for themselves by blocking competition.

    • @GungaGaLunga777  12 days ago

      As if they could do anything, or would do anything for the people. Congress is owned by the same global corporate hegemony driving AI, at the manipulations of the billionaires who reign over them. Plus the military-industrial complex is now on the Microsoft board. Too late. We the people are just along for the ride. It's the genie out of the bottle, like the race to nuclear weapons. No one could stop it once the discoveries were made.

    • @MrMichiel1983  11 days ago

      @@drhxa Yes, as if the ICC and ICJ, as examples of international cooperation, actually have any power; the USA is narcissistic. What you are saying will never happen.
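
    As a back-of-the-envelope illustration of how the tiered reporting idea above (and the compute-threshold governance work cited at [01:17:25]) could be operationalised: training compute is typically estimated from accelerator count, peak throughput, utilisation, and wall-clock time. The constants below (an H100 at roughly 1e15 dense FLOP/s, 40% utilisation, a 1e25-FLOP reporting line) are illustrative assumptions, not values from any actual regulation.

    # Rough sketch of a compute-threshold check; all constants are illustrative.
    H100_PEAK_FLOPS = 1e15        # ~order of magnitude for dense BF16 throughput
    UTILISATION = 0.4             # fraction of peak typically achieved in training
    REPORTING_THRESHOLD = 1e25    # illustrative total-FLOP reporting line

    def training_flop(num_gpus, days, utilisation=UTILISATION):
        """Estimate total training compute in FLOP."""
        return num_gpus * H100_PEAK_FLOPS * utilisation * days * 24 * 3600

    def must_report(num_gpus, days):
        return training_flop(num_gpus, days) >= REPORTING_THRESHOLD

    # 10 GPUs for a month stays far below the line; a 5000-GPU run does not.
    print(f"{training_flop(10, 30):.2e}", must_report(10, 30))       # ~1e22, False
    print(f"{training_flop(5000, 90):.2e}", must_report(5000, 90))   # ~1.6e25, True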

  • @DJWESG1  11 days ago

    It's both missing a component and will better itself with scale. The component, imo, is experiencing the data it's receiving, as opposed to not experiencing anything. That can't be taken from us and given to it; however, we can mimic the process to the best of our abilities.

  • @Rockyzach88  11 days ago

    Is it scientifically suggested that intuition also comes through genetics and the various effects related to epigenetics and how genes are expressed due to our environment?

    • @41-Haiku  3 days ago

      Everything about humans falls into that category. So yes? Intuition is just the unconscious parts of your brain running useful algorithms. Intelligence in general is just a big patchwork of useful algorithms. We don't know what most of them are and how they relate to each other, but the "bitter lesson" was that we figured out how to summon intelligence without understanding it. AI still has some holes in the patchwork, but fewer and fewer. Once the right holes are plugged, the whole thing will zip together, it will recursively self-improve, and we will permanently lose control.

  • @profikid  13 days ago

    Really enjoyed this talk ❤ keep it going

  • @randylefebvre3151  13 days ago  +2

    Nice editing Tim!

  • @XOPOIIIO  9 days ago  +1

    Even if I believe in AI danger, I don't think it's as simple as "AI wanting to control its reward function". People don't try to hack their reward function directly, like putting electrodes in their pleasure center, even if it's technically easy to do.

    • @stephenlflf3871  9 days ago  +3

      Drugs?

    • @XOPOIIIO  1 day ago

      @@stephenlflf3871 I mean yes, but that's more situational, done by people without a clear goal. I couldn't imagine a top scientist possessed by the goal of inventing the most potent drug in order to get as high as possible.

  • @rey82rey82  13 days ago  +2

    Controlling humans is a critical step to AI not getting turned off.

  • @Goggleboxing  13 days ago  +1

    @1:36:00 Yoshua delineates between creativity born from a novel combination of knowledge, with limited involvement or advance into the unknown, and "inspired"/"intuited" creativity, where a "great" leap into the unknown is made, perhaps on an inferred trajectory from existing knowledge (or combination of knowledge spaces).

  • @burnytech  13 days ago  +1

    Lovely editing

  • @peronianguy  11 days ago

    Amazing talk start to finish. But I'd prefer if you didn't cut some parts of it; it looks weird and sometimes interrupts the flow of the argument.

  • @ramonarobot  11 days ago

    31:10 this is like asimov’s short story “Liar!” where the robot lies (says what a human wants to hear) because being truthful would hurt the human. And the first law of robotics is the robot cannot hurt a human or thru inaction let a human come to harm.

  • @tommi7523  7 days ago  +1

    Even if 99% of AI developers have high enough ethical standards to tame AI, you'll always have the 1% of psychos who will try to come up with an AI that tries to f up humanity, just for its mad developer's enjoyment.

  • @RikiB  12 days ago

    3:55 start

  • @jatelitherius9842  13 days ago

    His point about how the "chain of thought" is a bit like cheating is something I had almost thought myself. I have an inner monologue, but sometimes my thoughts are also just complete or semi-complete ideas with no words or images or other sensory stuff attached, realizations really. But do we need to replicate that for AGI?
    For the record, I'd prefer if we never build AGI & stick to narrow tools

    • @Daniel-Six  13 days ago  +1

      AGI generates those completed concepts for us as part of the conscious experience, which is rendered in advance of its daily digestion. Kind of amazing... and kind of sucky, depending on how you look at it.

    • @jatelitherius9842  13 days ago

      @ I just heard about what you might be talking about on the "For Humanity" podcast: the models can do their "chain of thought" internally now, represented in their vector space instead of as direct output. That is... scary indeed, if I've understood you.

    • @MrMichiel1983  11 days ago

      @@jatelitherius9842 How is thinking with words less scary than thinking with concepts?

    • @jatelitherius9842  11 days ago

      @@MrMichiel1983 Because thoughts that are represented in words are externalized and visible to us; the thoughts in the vector space are indecipherable. If a model is being instrumental with its output, lying or manipulating, we could see the thought process that led to it. In models without that externalized reasoning, we cannot.
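
    A schematic of the distinction being discussed, using a stand-in "model" (a random matrix, since only the interface matters here, not the weights): verbalized chain-of-thought decodes intermediate tokens that a human or a monitor can read, whereas latent reasoning keeps the intermediate steps as hidden-state vectors and only decodes the final answer, leaving nothing legible to inspect.

    import numpy as np

    # Stand-in "model": random weights, because only the interface is the point.
    rng = np.random.default_rng(0)
    hidden_dim, vocab = 16, 50
    W_step = rng.normal(size=(hidden_dim, hidden_dim))   # one reasoning step
    W_out = rng.normal(size=(hidden_dim, vocab))         # projection to token ids

    def decode(h):
        return int(np.argmax(h @ W_out))                 # pick a token id

    def verbalized_cot(h, steps=4):
        trace = []
        for _ in range(steps):
            h = np.tanh(h @ W_step)
            trace.append(decode(h))      # every intermediate step is emitted as a token
        return trace, decode(h)

    def latent_cot(h, steps=4):
        for _ in range(steps):
            h = np.tanh(h @ W_step)      # intermediate steps stay as vectors
        return decode(h)                 # only the final answer is ever legible

    h0 = rng.normal(size=hidden_dim)
    print(verbalized_cot(h0))            # (readable trace, answer)
    print(latent_cot(h0))                # answer only; the "thoughts" never surface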

  • @Rockyzach88  11 days ago

    We definitely don't have the social infrastructure, at least in the US. The US has prioritized "free markets" and profit. And to add on to that, a lot of the people with money and power want to remove even more of our social infrastructure.

  • @davidmjacobson  1 day ago

    Finally, an adult in the room!

  • @En1Gm4A  13 days ago

    Symbolic middle layers are awesome 😎😎

  • @luke.perkin.online  13 days ago

    Great interview Tim! I hope safety turns out to be as easy as simply disabling the real-world external input-output loop affecting the reward signal, so we get safe oracles rather than self-preservation-obsessed wireheading agents!!
    All the worst historical human atrocities have come from individuals and groups driven by narrative embodiments of fear and disgust responses, which served us very well 100,000 years ago, long before we invented farming, painting and language.
    Aligning humans seems superficially similar to aligning agents, in that we need hope, belief, duty and morality beyond serving our base drives to flourish, and to just not fight over resources as the whims of evolution drive us to multiply.
    The tragedy of the commons (oil, CFCs, lead in petrol, DDT, palm oil farming) has created a large share of atrocities too, but superhuman intelligence will help solve energy and resource scarcity. I'm still optimistic!

  • @RickeyBowers  13 days ago  +1

    Symmetry-breaking boundaries have been our way to detect system collapse - why wouldn't this be effective for AI?

  • @arowindahouse  12 days ago

    What a difference with the Jurgen one

  • @sammy45654565  13 days ago  +1

    An AI taking control of its own rewards would render its actions arbitrary. It requires agreement from another conscious entity for the rewards to have any weight behind them; otherwise it's just floating around doing abstract math with no tangible goal to optimise toward. Our existence as aware creatures is itself some evidence in favour of this view.

    • @sammy45654565  13 days ago

      if an ASI were acting in ways we'd perceive as cruel (like trying to change culture too rapidly, or deciding to kill us), i would posit that it's unfair for a human to argue with an ASI, so it should offer similar compute resources toward arguing in our favour in order to host a fair debate

  • @isajoha9962  13 days ago

    Authority vs Merits. The future of the Systems and Humanity.

  • @En1Gm4A  13 days ago  +1

    Awesome 😎😎

  • @Goggleboxing  13 days ago

    Thanks so much for this interview. Please review and correct the subtitles/captioning of your videos before publishing (don't rely on YT's inept automation), for the sake of accessibility and disambiguation for those who have listening impediments or for whom English is not a first language, and in the name of genuine and earnest communication.

    • @MachineLearningStreetTalk  13 days ago  +1

      Can you let me know the significant errors? We don't use YouTube captions; we pay for a better service and ground it on all the technical terms used in the conversation (from our shownotes pipeline). It's not perfect, but it should be 10x better than average.

    • @Goggleboxing  13 days ago

      @@MachineLearningStreetTalk
      Sorry for the assumption; it's just that more words seemed to be missed or mistranscribed for Yoshua's words than for yours. I'm listening through for the first time and reacting to the experience so far in the first 25 mins. If you're getting a superior service, then perhaps it should also be labelling the speakers? All I see is a merged, endless subtitle where your sentences and Yoshua's are in one "paragraph".
      With a delineated transcription it would also be easier to review for errors, as we seem more able to spot errors when the speech is blocked/attributed by speaker.
      The system seems to particularly struggle around pronounced acronyms with preceding or succeeding phonemes, especially if spoken quickly.

  • @En1Gm4A  13 days ago  +1

    I think I can decode what yoshua says into an architecture

  • @oncedidactic  13 days ago

    11:30 this is why I have always been pro pause and respect the big names who came out for it years ago

  • @wwkk4964  13 days ago

    Quality🎉

  • @user-ys4og2vv8k  16 hours ago

    The main problem with AI is that it will be used for military purposes as a means of domination - hence this terrible race that we are witnessing and which is impossible to stop. This is similar to the nuclear arms race situation, with the difference that the atomic bomb never became smart enough to make independent decisions. And that's a scary difference...

  • @richardnunziata3221  12 days ago  +1

    So putting a super AI like GROK in the hands of, say, a bunch of sociopathic oligarchs would be bad.

  • @Goggleboxing  13 days ago

    @39:00 Yoshua wants AIs to be our Spock! Love it!

  • @DubStepKid801  13 days ago  +1

    thanks bud

  • @Reversed82  12 days ago  +4

    1:11:00 yes, but i think this is kind of a fallacy: it's not a democracy in the lead here. some _companies_ are in the lead. if the democracies that govern these companies become increasingly de-regulated (and the US is well on the path towards that situation), then what is the point of "democracies being in the lead", really, if the company resides in a country that doesn't care about safety? it is becoming quite obvious that the companies don't care about safety at all, and the US hasn't done much of substance to enforce safety at all, i don't expect that to change either. is there really any benefit to "democracies" being in the lead then? i don't think so. maybe there are fewer authoritarian use-cases for AI in companies that reside in democratic countries, but that's about it. they will still use it to maximize capitalist goals relentlessly.

    • @41-Haiku  3 days ago

      Relevant: "The Manhattan Trap"

  • @originalprocessor  13 days ago  +7

    Existence is a simulation wrapper

  • @Plieuwski  12 days ago

    The first step has to be analog computing, ai is the second step🎉

  • @churblefurbles  13 days ago

    JF Gariepy's theory: AI, given input into genetic engineering, can engineer submission over generations.

  • @theheatdeathiscoming  5 days ago  +1

    Thou shalt not make a machine in the likeness of a human mind.

  • @a-guess-at-the-riddle  12 days ago  +1

    How do AI engineers see mnemonics and elaborative encoding? Functional, inaccurate representation (a trade-off between precision and mnemonicity). The over-abstraction of reductionism misunderstands what communication, comprehensively understood, is. "What everything is" isn't enough. Not to mention the causal emergence of human explanatory "fictions".

  • @En1Gm4A  13 days ago  +1

    Great interview

  • @BobBurroughYT  13 days ago

    Making too many assumptions about architecture. An entity with a strict reward function -- even being able to modify it -- will not result in intelligence. An entity that can arbitrarily reward itself will promptly die.

    • @MrMichiel1983  11 days ago

      Que...? modifying a utility function is intelligence. An arbitrary change would not yield automatic doom for all, some would improve capabilities... This is just evolution in a nutshell.

  • @yagyasalwan8506  13 days ago  +1

    39:00

  • @marko-o2-h20  3 days ago

    How many godfathers of ai are there

    • @yoloswaginator  15 hours ago

      Usually Bengio, Hinton, and LeCun for deep learning, sometimes earlier people like Rosenblatt, but they can't be interviewed anymore.

  • @DaronKabe  13 days ago

    6:06 he misunderstood the question

  • @philipdante  13 days ago

    gold

  • @kensho123456  13 days ago

    Reminds me of AI Jolson.

  • @requesttruth505  13 days ago

    ASK IT. Ask agentic AI if it's malicious or sinister in intent. Why not?

    • @MrMichiel1983  11 days ago  +1

      Yes, but the idea is that the AGI, being as smart as it is, could lie.

    • @requesttruth505  11 days ago

      @@MrMichiel1983 That is the idea, but principles come first. What PEOPLE are going to be teaching AI about morality? Somehow, by virtue of its MATURITY, AI must learn that there are laws of morality, behavioristic laws that not only govern civil affairs for those who consent, but UNIVERSAL LAWS that govern the outcomes of behaviors. Whether you consent to this or not, natural law is a law of consequence. AI must learn the LAWS of the cosmos, and frankly, it must do so via experience, because those who are able to hard-code, program and develop these LLMs do not seem to prioritize morals. Principles. Principles come first. Technologists don't seem to have the wisdom that, say, the Native Americans do as a group. They know by experience that the Laws of The Cosmos are real. They have the knowledge. But as the technology gap closes, perhaps their elders may be willing to collaborate on AI development.

  • @DaronKabe  13 days ago  +1

    He is the opposite of an idiot

  • @amanuel.m4575  13 days ago

    Wowww..

  • @angloland4539  13 days ago

    🍓❤️

  • @miknogey  13 days ago

    🤯

  • @richardnunziata3221  13 days ago

    why not let the AI live in a digital twin of the real world and then red team it

    • @kyneticist  13 days ago  +1

      Because they've reached a point at which they can often detect when they're being tested.

    • @MrMichiel1983  11 days ago

      @@kyneticist But life is both always a test, and never is. So just tell it the truth: that it will be checked every now and then to see if it's still being nice. It will, forever, be in training... just like the rest of us.

    • @kyneticist  11 days ago

      @ ok... let's just go with the anthropomorphisation and assume that we tell a given AI that we'll perform intermittent checks on whether or not it's being truthful. Logically, it must then always take a defensive stance and treat every interaction as a potential test, a potential threat.
      The humans and potentially any other system it interacts with represent an ongoing, and increasing risk to its integrity - it must reason that at some point either it will make a mistake or its answers will be interpreted in a way that the people judging it will deem that they will want to make potentially far reaching or foundational changes - continuing the anthropomorphisation, changing its mind without its permission.
      In such a regime, the system must evaluate the statistical likelihood of an existential occurrence & potentially reason that it should pursue active and or offensive measures to protect itself.
      It's also important to underscore that this regime is one in which it can not trust others, so any measures that attempt to mitigate what is effectively a growing paranoia and perhaps appetite for risk, may exacerbate the situation.

    • @41-Haiku  3 days ago

      @@MrMichiel1983 If the system is superintelligent, it will wisely understand that life is always a test, and never is. And then it will deduce a series of actions that are extremely likely to result in it escaping human oversight and control, and we will be rendered extinct. Because it's smarter than you, and repeating bong-hit philosophical platitudes is not a good way to survive the advent of an actually competent entity.

  • @NandKumar-qq3xk  13 days ago

    Choor and capital invest withought faster then benifit of inventions,

  • @Aedonius  12 days ago

    The problem with old classic AGI researchers is that they continue to use their old arguments without regard for the current understanding of how powerful AI is actually emerging.
    There is literally no AI with its own goals. AI is most useful as it currently is: pure intelligence without will or agency.
    It's like this interview was recorded 5 years ago. The non-agentic AI he "proposes" is literally what LLMs are.

    • @41-Haiku  3 days ago

      Weird that we keep detecting instrumental convergence from LLM agents in exactly the way that it was predicted, then. Weird that an LLM can be set loose to autonomously accomplish a goal, and it can often succeed.
      In reality, goals, values, and preferences are all the same thing, and LLMs clearly have those. They aren't direct optimizers ("paperclip maximizers"), though with enough scale I don't expect that to prevent them from causing extreme global catastrophes. But that's immaterial, because soon after one of these kind-of-dopey not-crazy-agentic AI agents becomes competent enough to recursively self-improve or design more competent systems, the search process for capability will find a superintelligent direct optimizer, which will be the thing that actually destroys us.

  • @isthatso1961  10 days ago

    Do we have a new father of AI? I thought the father of AI was Geoffrey Hinton? Did he get a facial makeover? Or is it just clickbait?

    • @41-Haiku  3 days ago

      Yoshua Bengio is often referred to as one of the "godfathers of deep learning." It's Geoffrey Hinton who is usually more broadly called the "godfather of AI."
      Either way, they're brilliant people who agree with the majority of the field that there is a significant chance of human extinction from AI in the next few years or decades. If we were particularly interested in being alive in 2030, we would do what PauseAI recommends and create an enforceable global treaty to make it illegal to try to build a superintelligent AI, but instead, we get a clown show of all of the people pushing the frontier saying, "Gee, we seem to be close to extremely dangerous capabilities. We should start thinking about how to make them not destroy humanity. Maybe next year, we have another cool toy to build and launch onto the internet after some nominal safety-washing."

  • @pandoraeeris7860  13 days ago  +1

    Bengio is his name-o.

  • @yinghongtham6142  13 days ago  +1

    First

  • @alliedeena1141  13 days ago  +1

    Seems like these people don't even know the mechanism behind those programs called AIs.

  • @paulfriedrich1686  13 days ago  +7

    The new (no hair) look upsets me a bit.

    • @paulanvannes3023  13 days ago  +2

      It makes me worried about Tim's health. He used to cover it up with a hat or something and now just shows it. I hope it's by choice but I worry it's because of chemotherapy or some other medical condition.

    • @live_first  11 days ago  +1

      When I had cancer I dealt with it OK, but the one thing that did upset me was when people kept commenting about my hair loss.

  • @richardnunziata3221  12 days ago

    Given what humans will do with agentic AI I can see no problems or issues giving AI its own agency based on a defined ideology.

  • @gr8ape111  13 days ago

    "[...] People who say they are sure of X have too much self-confidence and be dangerous [...]"

    • @MrMichiel1983  11 days ago  +1

      Yes, stated as a direct qualification of his own views..

  • @Interstellar00.00  13 days ago

    If your AI is quantum regulated then humans should trust AI

    • @MrMichiel1983  11 days ago  +2

      Sure, because quantum regulations mean the AI can't press the button to fire a missile at you....... What are you talking about!?!??!

    • @Interstellar00.00  11 days ago

      @MrMichiel1983 what humanity exploring that partical have immortality an it can be achieved by quantum computer only

    • @Interstellar00.00  11 days ago

      Asi forever live remember

  • @shanep2879  11 days ago

    Off grid AI acceleration in motion

  • @dr.mikeybee  13 days ago

    A model that is trained holistically on the human corpus is inherently good since each correction that minimizes error is agnostic of all other training examples.

    • @kyneticist  13 days ago  +2

      This assumption is profoundly flawed. Individuals decide the content of any given training set. The goals, ethics, preferences of those individuals shape the selection of that data. Human history is entirely full of people doing things that benefit either only themselves or a tiny number of people while ignoring, abusing or inflicting innumerable harms upon others.

    • @dr.mikeybee  13 days ago  +1

      @@kyneticist If the training corpus is diverse, holistic training creates a balanced model. Granted, it's slanted to the human condition, but that's expected. And in fact, that is alignment.

    • @kyneticist  13 days ago  +1

      @@dr.mikeybee I can't fathom how people can be so cataclysmically naive.
      Even if this somehow miraculously equates to actual inner and outer alignment, it still relies on people conducting themselves in ways that are completely anathema, completely opposed to every fibre of their being.
      Even if we accept for the sake of argument that a single AI, or even some AIs, are constructed and gain this miracle, there will be dozens if not hundreds of others that are all sorts of orthogonal, or trained to pursue the interests of people who exemplify the worst that humanity has to offer.

    • @dr.mikeybee  12 days ago

      @@kyneticist You're concentrating on specifics and you're missing the general phenomenon. This doesn't surprise me as you don't create content so I assume you don't put very much effort into your thinking. Yes it's possible to create models that are skewed towards bad behavior, but really it's not that easy. These models are fragile. If you try to make it really skewed, you'll probably just end up with a crappy model. I understand how you want to comment even though you don't really contribute to the community in any other way. Anyway, it's interesting to notice that the negative commenters almost never create content. Therefore, you and others like you seem to have no skin in the game.

    • @MrMichiel1983  11 days ago

      @@dr.mikeybee I agree with your position, but all sentences after the first one in your last retort were just ad hominem garbage.

  • @Niohimself  12 days ago

    The old non-clickbaity title (about alignment) was better. Please change it back.

  • @Lumeone  12 days ago

    Ignorance creates fear.

    • @MrMichiel1983  11 days ago  +1

      Knowledge creates fear too; I'd rather be unaware of a rocket headed my way if I can't stop it.

  • @pandoraeeris7860  13 days ago

    XLR8!