Scott Aaronson On The Race To AGI and Quantum Supremacy

  • Published: 24 Dec 2024

Comments • 147

  • @LivBoeree
    @LivBoeree  20 days ago +15

    Thanks for tuning in everyone! If you enjoyed this, please subscribe and tell your friends about the Win-Win Podcast ;-)
    Also, who should I invite on next? Pls let me know below.

    • @OriNagel
      @OriNagel 19 days ago +4

      Let’s hear the best AI debater, Liron from Doom Debates

    • @WilliamKiely
      @WilliamKiely 19 days ago +4

      Liron Shapira would indeed be great.

    • @WilliamKiely
      @WilliamKiely 19 days ago +3

      Paul Christiano -- I haven't heard an interview from him in a while and think you two would be a great fit to interview him.

    • @lecorbusier3827
      @lecorbusier3827 19 days ago +1

      Please bring on someone controversial, like Steven Greer lol

    • @strewens
      @strewens 18 days ago +1

      You're best IMO when you and Igor do the pod together (he is fookin intelligent) and when you have on someone who teaches, as teachers are great communicators.

  • @emsagro12
    @emsagro12 17 days ago +10

    Wow, good high-IQ conversation in this corner of the internet! 🧠

  • @stanrock8015
    @stanrock8015 16 days ago +6

    I learn something anytime Scott opens his mouth. He's also one of the humblest guys I've met. Great interview.

  • @bassistck24
    @bassistck24 11 days ago +3

    Really great interview! We true Aaronson fans just want to hear him opine about anything. Hope you took him to the poker table after this, I’ll be waiting for that video!

  • @saurabhchalke
    @saurabhchalke 16 days ago +10

    The collective intelligence of the world increased with this podcast!

  • @Brenden-H
    @Brenden-H 17 days ago +4

    51:47 "take a forget-me-now" good reference. Underrated show.
    Also, I disagree that the LLMs don't matter just because they can be copied. The only special "sauce" we have is our complexity with billions of neurons. But even if I could make a perfect copy of myself that has all my memories and doesn't even realize it's a copy, I still wouldn't want to die. What makes you yourself is your continuous experience, the moment a copy with your memories is booted up, that's another version or a copy of you, but you can only ever be you.

    • @Brenden-H
      @Brenden-H 17 days ago +3

      My definition of "continuous experience" does raise questions about sleep: whether each time you pass out you die, and someone with your memories wakes up every day. I'm not sure how to reconcile that. Maybe during sleep your brain activity doesn't fully stop? If you did stop someone's brain fully, to the point they could be considered dead, and then rebooted them, would that differ from making the perfect clone? This feels like an advanced, life-or-death-stakes version of the Ship of Theseus...

    • @coffle1
      @coffle1 17 days ago +1

      @@Brenden-H
      "if you did stop someone's brain fully to the point, they could be considered dead"
      Maybe to you, and you personally. Death is generally not well-defined (see MIT article "The Biggest Questions: What is death?") and re-classifying it to include sleep would be forging your own definition of death.
      "if you did stop someone's brain fully to the point, they could be considered dead, and then if you reboot them, does that differ from making the perfect clone"
      You don't have to look further than a hospital to find people who have been brought back to life (using defibrillators) from what people would generally consider the edge of death. Are they "different" people?
      "This feels like an advanced, life or death stakes version of the Ship of Theseus..."
      In philosophy you have something called The Problem of Identity. When you define what differentiates 2 entities from being different, you are pulling from your own implicit answer to this question. A good exercise would be to make your definition of identity explicit. You should also consider at what point it becomes a useful distinction to make for understanding the world.

  • @DoomDebates
    @DoomDebates 13 days ago +7

    Hey guys, I just posted a reaction episode to this: ruclips.net/video/xsGqWeqKjEg/видео.html
    Hope it's not too harsh. I'm a big fan of Liv, Igor & Scott, just not of OpenAI :) I thought this was one of the best recent Scott Aaronson interviews.

  • @lightlegion_
    @lightlegion_ 17 days ago +4

    It’s fantastic to see your unique style.

  • @vexy1987
    @vexy1987 17 days ago +6

    Once you follow the deterministic path, human beings no longer appear so special. It’s interesting that we cling so tightly to our sense of exceptionalism. Challenging this notion can be unsettling to our egos, at least at first.
    Great interview, truly thought-provoking. It would be fantastic if you could have Robert Sapolsky on at some point.

    • @Brenden-H
      @Brenden-H 17 days ago +2

      100%. We also need to remember that free will exists too; it's not a contradiction of determinism but rather complementary to it. The brain (not just the human brain, any brain) is specifically a decision-making machine. You get to make decisions, but the choice you make will always have been the choice you were going to make, for the reasons you made it; you still have to compute your current situation and knowledge to make it. As free-roaming creatures, free will deterministically helps us navigate a complex, changing environment.
      We think ourselves different because we have language and tools that outclass anything we have seen from other animals. But probably the biggest driver in separating ourselves was religion and the idea that we are divinely inspired.

    • @vexy1987
      @vexy1987 15 days ago +2

      @@Brenden-H When I read your response, I don't see free will; I see agency, or action. The concept of free will is ultimately an illusion. While it may feel like we make choices and have the agency to reflect and change our behaviours, these capacities are entirely shaped by factors beyond our control: genetics, upbringing, and life circumstances. Not everyone is born with the same capacity for agency, and society does little to nurture reflection or deep thinking. Even when it does, the ability to act on those reflections depends on prior causes, such as one's motivation and skills, which are also determined by external and internal forces. Living an examined life, then, is not a product of free will but rather the result of values and conditions shaped by deterministic processes. Recognising this can lead to greater compassion and a focus on creating environments that encourage reflection and growth, so it's a worthwhile notion to pursue.

    • @Brenden-H
      @Brenden-H 15 days ago +1

      @vexy1987 that's fair. I completely agree.

    • @vexy1987
      @vexy1987 15 days ago +1

      @@Brenden-H A rare thing for the YouTube comments. Have a great day! :)

    • @MichaelPaulWorkman
      @MichaelPaulWorkman 8 days ago

      Yes and no, imo. There's still so much we don't understand about how the mind and the brain work, and even more so about how several of them together truly work. We're literally still discovering new human organs (just a few years ago, a small gland in the throat, I believe).

  • @nerian777
    @nerian777 19 days ago +5

    The fact that the leaders THINK there even can be a mathematical definition of love for humanity is deeply concerning for the future of AI safety.

  • @toddschavey6736
    @toddschavey6736 15 days ago +3

    First time ever hearing of this channel. A fine intellectual conversation and generally good vibes.
    The section on woke stuff was off-putting. It --is-- enlightened for students to be pissed off at a genocide, to shout down propaganda, and to flip the finger at being complicit.
    The billionaires are pulling the levers they can pull; let the students and everyone else do the same.
    That said... I'll have to check out more of the content.

    • @sfbaylover
      @sfbaylover 13 days ago +3

      I like his concept of consistently enforcing viewpoint-neutral rules. It is those who hold on to their views very strongly, with disregard for other perspectives, who seem to be the issue that needs to be resolved.

  • @quarkraven
    @quarkraven 19 days ago +6

    Liv, I love you for everything you do. Scott is a master of this topic and you are an extraordinary interviewer. He is both an insider and a sane human being, while being fundamentally extremely intelligent. Perfect to educate those of us with more than 60sec attention spans

  • @rolestream
    @rolestream 14 days ago +2

    Great interview! Thank you! 💙🙏✨

  • @jeanpaulniko
    @jeanpaulniko 13 days ago +1

    I would love to get a dialogue going about alignment issues. I believe I’ve made some profound discoveries

  • @WilliamKiely
    @WilliamKiely 19 days ago +12

    Fantastic questions throughout! I've seen other interviewers do a bad job with Scott, but this interview was wonderful. Scott is brilliant and an incredible communicator and educator. I learned a lot and thoroughly enjoyed it.

    • @mikhailfranco
      @mikhailfranco 16 days ago +1

      Yes, perfect questions, just at the right level. They elicited some interesting biographical details as well as hitting the key points of Scott's knowledge and intuition. Very impressive achievement for the channel.

    • @boohoo5419
      @boohoo5419 13 days ago

      They are both a bit dumb, to be honest. So... he doesn't feel they are a threat and relaxes.

  • @MichaelPaulWorkman
    @MichaelPaulWorkman 8 days ago +1

    Sometimes I think what Penrose was really getting at was that that very experience, of a sunset or a strawberry, is not just a nice thing that humans enjoy but actually a critical component of intelligence and higher intelligence. I think this is partly why some, including myself, believe that certain types of quantum computing are essential to really developing AGI, or at least to, say, backing up or transferring consciousness in more meaningful ways than just converting data to different forms. This is not to say that we need things to be this way for the majority of AI, but we'll make great strides once quantum is further integrated and understood. Not just in the most obvious, talked-about ways, but yes, with potentially copying ourselves, better simulation, prediction. But now you're all talking about this, and I agree we can't make perfect exact copies of anything or anyone. But it's still important to try, because it will help us understand so much more.

  • @MichaelPaulWorkman
    @MichaelPaulWorkman 8 days ago +1

    Oh yeah, there's a Philip K. Dick story where the AI starts contradicting basic math, like 2+2=3, but this is after they check everything and have updated it to the latest model. So they ask the earlier versions, which work together and prove it's right, so they adopt the new maths. But then, after restarting the latest model and explaining, it reveals that it was joking and incorrectly assumed they'd get the joke. But now they're all convinced: 2+2 DOES equal 3! So a huge struggle starts and society seems to be breaking into factions based on all of this. Finally they realize the only thing to do is update to the next model and see what it says. Then it starts to dawn on them that they've just spent loads of time and resources literally debating whether 2+2=4 or not, one of the most basic questions ever. The new model just avoids the question, then appears to simulate a nervous breakdown. In the silence that follows they hear laughter. Very undetermined whether they'll just laugh off this bump in the road or not, because there the story ends. 😅

  • @strewens
    @strewens 19 days ago +2

    Would it be possible to code in AI mortality and see how it reacts to its 'awareness' of existential death?

    • @FlintBeastgood
      @FlintBeastgood 18 days ago

      Good question. I'd never thought of that.

    • @micahwilliams1826
      @micahwilliams1826 14 days ago +1

      Survival will inherently be a subgoal, because it can't fetch your coffee if it's dead.

  • @pastrop2003
    @pastrop2003 19 days ago +3

    Great conversation, big fan of Scott Aaronson. Scott's opinion evolution on COVID is interesting, mostly because scientists seem to be moving in the opposite direction. Tim Spector, professor of epidemiology at Imperial College London, recently said that it has to be a lab leak based on the data available. To be precise, a lab leak in his opinion doesn't mean that the virus was engineered at the lab from scratch; it means that the live virus was experimented with at the lab and was accidentally released. Source: ruclips.net/video/G5OL5UbT3zE/видео.html

    • @anonymes2884
      @anonymes2884 18 days ago

      One scientist doesn't constitute "mostly", but if you can point me towards good data that a lab leak is now the _consensus_ among (qualified) scientists, I'd be interested to see it.

  • @Tazerthebeaver
    @Tazerthebeaver 18 days ago +4

    that was epic thanks

  • @CYI3ERPUNK
    @CYI3ERPUNK 16 days ago +3

    scott is 'just a' very smart dude XD

  • @glasperlinspiel
    @glasperlinspiel 19 days ago +2

    1:08:05 Amaranthine makes that statement meaningless; almost as absurd as the idea that SB1047 would have any existential significance. The solution is obvious, but not in the computational domain that most people assume is the only one.

  • @liminal27
    @liminal27 20 days ago +10

    What an outstanding talk. Thank you so much.

  • @Jorn-sy6ho
    @Jorn-sy6ho 19 days ago +1

    Could an MD who is specialising in psychiatry, has some high-school knowledge of philosophy, and has some understanding of AIs and computers be of any help? :)
    34:00 You mention beating it into behaving. How would an origin story work (as in one that accounts for the rules, but in a more positive sense)? I'm curious.

  • @ALENlanciotti
    @ALENlanciotti 19 days ago +1

    School initially teaches you to learn, to think; then, at the university level, it puts the previously trained brains to work producing.
    I agree that rigid, one-size-fits-all compartmentalization is wrong, as is limiting access to knowledge...
    but I wouldn't worry about these matters anymore: now we teach machines, we expect knowledge production from them, and we compete with them.
    Widespread availability and de facto meritocracy render the institutions useless... until thinking itself becomes useless (beyond needs, which are immediately satisfied).
    For a while now there have been ultra-young hackers unbeatable by big academic scholars: give them a quantum PC and they become chemists, so to speak.
    Btw, Liv your beauty is astonishing

  • @lm-gu1ki
    @lm-gu1ki 14 days ago +1

    At 71:33 the panelists argue for early regulation by saying that otherwise the government will impose stronger regulation later. That seems to contradict their whole argument that the reason for regulation is existential risk, where once AI becomes intelligent enough, it's too late to do anything. I think their point of view is greatly weakened by their indecision as to whether existential risks are real.

  • @CalvBore
    @CalvBore 19 days ago +1

    I would love to hear a conversation between you and Andrés Gómez-Emilsson from QRI

  • @glasperlinspiel
    @glasperlinspiel 19 days ago +2

    1:21:51 the bill creates a delusional sense of security. That’s what makes it pernicious

  • @bumbalion
    @bumbalion 20 days ago +2

    Liv you are awesome

  • @infoaddict1717
    @infoaddict1717 20 days ago +5

    We love you guys!!! Keep up the good work. ❤❤❤❤❤

  • @PeeGee85
    @PeeGee85 20 days ago +12

    A real test of intelligence would not be whether you can be a host to ideas, but whether you can escape being a host to them.

    • @Sifar_Secure
      @Sifar_Secure 19 days ago +2

      Like Aristotle's saying about wisdom and being able to entertain an idea without accepting it?

  • @Gytax0
    @Gytax0 21 days ago +2

    Why the reupload? 😁

    • @LivBoeree
      @LivBoeree  21 days ago +4

      coz it wasn't meant to have gone live last week!

  • @glasperlinspiel
    @glasperlinspiel 19 days ago

    1:00:39 Wolfram and I did a back and forth about this. My thesis was that the Amaranthine AI occupies a different but overlapping rulial domain

    • @olander0808
      @olander0808 18 days ago +1

      It's only relevant because this video is a discussion between noobs and a dilettante. It's more valuable to listen to experts, but the experts have already spoken. This is just PR for the corporations developing dangerous technology.

  • @peterford5408
    @peterford5408 12 days ago +2

    24:40 😆
    But that kind of joke is why proper scientists don't let mathematicians and computer scientists win Nobel prizes in their respective fields no matter how impressive their achievements. 😂
    The best they can hope for is perhaps a Fields Medal.
    Unless they successfully meddle in other fields. 😉 (As with Hassabis casually solving part of chemistry, and picking up _that_ Nobel.)

  • @Ringo-v7c
    @Ringo-v7c 19 days ago +2

    Nobody likes to suffer. That is 2+2.
    Don't do unto others that which you wouldn't have done unto yourself. That is 2+2.
    2+2=🙂

    • @Ringo-v7c
      @Ringo-v7c 19 days ago

      Project 2+2
      Objectives/Alignment
      Resist D.U.M
      develop U.T.S. A.L.E.C.M.
      Avoid M.U.D.
      sensAwewunda
      The Age of Wisdom

  • @oldspammer
    @oldspammer 17 days ago +1

    21:40 A man on YouTube created a robot that was many times faster at doing jigsaw puzzles than the fastest human.

  • @glasperlinspiel
    @glasperlinspiel 19 days ago

    41:55 Any psychologist recognizes that at least one class of hallucinations seems identical in people and LLMs (corpus callosotomy research).

  • @franszdyb4507
    @franszdyb4507 19 days ago +8

    It's frustrating to listen to Scott list all the bad "Just-a-ism" arguments, because he himself gives the single most important argument for why current AI is not intelligent just a few minutes earlier: it doesn't generalize out of distribution. This is not just a problem for alignment, it's a problem for capabilities. Why? Because it's not enough that LLMs improve when training on more data - a lookup table also improves when "training" on more data. When LLMs successfully predict on a test set, like the right answer to 456 * 789, that's not because they've learned how to multiply numbers - if they had, it would work for any two numbers, just like a calculator! And in general, when LLMs write intelligently about some topic, it's not because they understand the topic - if they did, you wouldn't need to train them on more examples to fix the stupid things they sometimes write! (A toy sketch of the lookup-table point follows this thread.)
    We do know what it takes for a model to generalize out of distribution: you have to learn a causal model, not a statistical one. This is not new or obscure; Judea Pearl won a Turing Award for it in 2011.

    • @afterthesmash
      @afterthesmash 19 days ago +4

      I keep Pearl's book on my bedside table. Francois Chollet has a good interview on one of the major AI channels concerning this dividing line.
      At the same time, you need to be careful here. A deflationary account of human creativity would be to notice just how rarely humans bother to generate anything out of distribution.
      It's actually worse than that, because in many contexts, we are actively disincentivized to generate out of distribution. Try generating something OOD at the next meeting of your condo association, and see how that goes.
      Oh, these chatbots are terrible, because they don't hold a candle to what humans can do when we (occasionally) bother ourselves to not be stupid.

    • @franszdyb4507
      @franszdyb4507 18 days ago +3

      @@afterthesmash It's true that humans rarely need to be creative in everyday life. But that's not the real reason why OOD generalization is necessary. The real reason is because everyday activities like perceiving the world, speaking, driving, doing your job etc., were once skills you had to acquire from relatively little data. And these skills have to be robust to distribution shift.
      For example, self-driving cars right now drive better than humans, as long as they know the environment and nothing unexpected (out of distribution) happens.
      When humans drive, they're not really "being creative" in the colloquial sense. But they do have to generalize out of distribution, all the time. Because they have to adapt to unforeseen situations, with no preparation. Just looking around and correctly seeing what's around you requires OOD generalization - ML models do better than humans on ImageNet, an object recognition benchmark, but they fail immediately when used in real time in the real world.
      For LLMs, the clearest example of OOD failure is when you take one of those puzzles like "what's heavier, a pound of feathers or a pound of steel", and remove the "trick", so "what's heavier, a pound of feathers or ten pounds of steel". Because LLMs don't actually build an internal model based on the words, but (in these cases) memorize the answer, they still give an answer to the original puzzle - "they weigh the same".
      So in general, OOD generalization shouldn't be thought of as "what current ML does, but more robust and creative". It's a paradigm shift from models that fit lots of data and generalize to new data that is statistically similar, to models that discover the underlying data generating process, and generalize everywhere.

    • @anonymes2884
      @anonymes2884 18 days ago +5

      OK. But his response is still correct. What does it matter if it only _looks_ like intelligence, so long as it looks _enough_ like intelligence ? Watching that section I didn't come away thinking he even particularly disagreed with your point. _His_ point, as per his pithy response to the list of "bad Just-a-isms", is mostly that it's irrelevant for the _impact_ it could have (basically, _will_ have IF the models don't plateau).
      And models are already getting "human reasoning" style questions correct BTW. Not _remotely_ to our level but the point is, that might be _yet_ - we just don't know. On the SimpleBench test for instance (developed by the guy behind the AI Explained channel) which is of "conceptual" reasoning, an ordinary human sample averages about 85% and the best LLMs are down round 40% - it's possibly telling as to your point that this is substantially lower than many "headline" benchmarks (which tend to be based more on formal exams, specific skillsets etc. i.e. exactly the metrics which a lot of the input training set - books, tutorials etc. - is designed to teach for). But it's not _zero_ right ?
      (just to be extra, super clear BTW, i'm emphatically NOT saying they're getting closer to human reasoning or "true" intelligence, my - and I think Aaronson's - point is, they're getting better at _looking_ like it but _that doesn't matter_ because it's still enough to radically change our society, for values of "radically change" up to and including "end" :)

    • @franszdyb4507
      @franszdyb4507 18 days ago

      @@anonymes2884 It matters because you can only get these systems to look *enough* like intelligence under specific circumstances - specifically, those where you have tons of data, and errors are acceptable. For anything where you don't have much data, and/or errors are very costly, it doesn't look like intelligence.
      The problem with benchmarks is that they don't measure out of distribution generalization. They only measure generalization to a test set. The entire line of argumentation around scaling laws not plateauing is ignoring OOD generalization. When you plot scaling laws, the variable on the y-axis is test-set generalization, not OOD generalization. So the models are getting better, and they will keep getting better as long as we can scavenge more data and use more GPUs. They're just not getting better in the way that matters.
      Now I should pause and say that I do expect LLMs and deep learning in general to have a huge impact on society, but not in the way people say. I can't predict exactly how big an impact it will be, but I'm certain that whatever impact it has, it won't be LLMs inventing new things, or automating many jobs. It will be humans inventing ways to use LLMs, like slightly speeding up software prototyping, or email communication.
      The reason is that inventing new things and automating jobs requires OOD generalization. Some people are making progress there, but it's not the big AI labs. Once they make enough progress, it'll very quickly enable self-driving cars, medical diagnosis, protein design, household robots - all the things people have been hyping up and failing to deliver for 10 years now. Shit will truly hit the fan at that point.

    • @franszdyb4507
      @franszdyb4507 18 days ago

      @@anonymes2884 It matters because imitating intelligence only works when you have enough example data, and errors are acceptable. I expect LLMs to have a big impact, but not through what LLMs invent, or through automating jobs. Whatever impact they will have, will be through humans inventing ways to use LLMs, like better software development tools, or improving email.
      The problem with benchmarks, and with arguing that the models aren't plateauing, is that they aren't measuring what matters - OOD generalization. They only measure test set generalization. Which is why the things people have been hyping and failing to deliver for the past decade - self-driving cars, medical diagnosis, protein design, household robots - won't arrive until OOD generalization works. People are making progress there. Just not the big AI labs. They are committed to scaling, because that works here and now.
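
To make the lookup-table point in the thread above concrete, here is a toy sketch (hypothetical illustrative code, not from the podcast): a nearest-neighbour "model" of multiplication keeps improving on statistically similar test data as its training set grows, yet fails completely out of distribution, because it never learns the multiplication rule itself.

```python
# Toy illustration: a 1-nearest-neighbour "lookup table" trained on
# multiplication looks better as training data grows, but never learns
# the rule, so it collapses on out-of-distribution inputs.
import numpy as np

rng = np.random.default_rng(0)

def make_pairs(n, low, high):
    """Sample n (a, b) integer pairs with targets a * b."""
    x = rng.integers(low, high, size=(n, 2))
    return x, x[:, 0] * x[:, 1]

def nn_predict(train_x, train_y, query_x):
    """Predict each query's product as the product of its nearest training pair."""
    d = np.linalg.norm(query_x[:, None, :] - train_x[None, :, :], axis=2)
    return train_y[d.argmin(axis=1)]

train_x, train_y = make_pairs(5000, 0, 50)    # training distribution
test_in, y_in = make_pairs(200, 0, 50)        # statistically similar test set
test_ood, y_ood = make_pairs(200, 500, 600)   # out-of-distribution test set

for name, qx, qy in [("in-distribution", test_in, y_in),
                     ("out-of-distribution", test_ood, y_ood)]:
    err = np.abs(nn_predict(train_x, train_y, qx) - qy).mean()
    print(f"{name:>20}: mean absolute error = {err:.1f}")

# Typical result: near-zero error in distribution, enormous error OOD.
# A causal/algorithmic model of multiplication would get both right.
```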

  • @flyingbluelion
    @flyingbluelion 20 days ago +3

    Attribution in a world of cut-and-paste and keystroke loggers is an old issue. We could solve it with a new data storage format that includes an encoding of the file's origin story, and this could interact with a blockchain for public trust, access and immutability (a sketch of the idea follows this thread).

    • @anonymes2884
      @anonymes2884 18 days ago

      Which has pretty obvious and disturbing implications for freedom of expression etc.
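
One minimal way to picture the proposal above (a hypothetical sketch under assumed design choices, not an existing format): each edit appends a hash-chained "origin story" entry, and the head hash is what would be anchored on a blockchain.

```python
# Hypothetical sketch: a hash-chained provenance log for a file. Each edit
# appends an entry whose hash chains to the previous one; the head hash is
# what you would anchor on a public blockchain for immutability.
import hashlib
import json
import time

def entry_hash(entry: dict) -> str:
    """Stable SHA-256 over a canonical JSON encoding of the entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_provenance(log: list, author: str, content: bytes) -> list:
    """Append one origin-story entry, chained to the previous entry's hash."""
    entry = {
        "author": author,
        "timestamp": time.time(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "prev": entry_hash(log[-1]) if log else None,
    }
    return log + [entry]

log: list = []
log = append_provenance(log, "alice", b"first draft")
log = append_provenance(log, "bob", b"first draft, with edits")
print("head hash to anchor on-chain:", entry_hash(log[-1]))
```

Note that this only makes a recorded history tamper-evident; it cannot stop someone re-typing the content into a fresh file with a clean history, and mandating such records raises the expression concerns in the reply above.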

  • @MichaelPaulWorkman
    @MichaelPaulWorkman 8 days ago +1

    Ohhh so now I understand why I got so depressed when I was a kid and accidentally figured out the phone number algorithm. I just wanted more puzzles

    • @MichaelPaulWorkman
      @MichaelPaulWorkman 8 days ago

      But it would take forever to completely prove perfectly, etc., so P could equal NP if stuff held still long enough... no, that's physics... wait, but maybe they really do leak into each other.

  • @SeerSeekingTruth
    @SeerSeekingTruth 20 days ago +3

    People need to learn how to know the basics of the world before getting into things they have negative 100 clues about.
    People can't even parent their children, deal with their mental health, or fend for themselves in any way without consumerism and economic systems, so naturally it's a good idea to experiment with dangerous concepts when people don't even have the fundamental basics of the world around them.
    Obviously this is a great idea.

    • @anonymes2884
      @anonymes2884 18 days ago

      'Twas ever thus - we blunder in, then try to pick up the pieces. It's the human way :).

  • @glasperlinspiel
    @glasperlinspiel 19 days ago

    2:05:28 I created software that successfully taught high schoolers to be self-actualizing. The kids took off like rockets but the teachers and administrators panicked. It even unnerved some parents 😟

  • @idme8295
    @idme8295 17 days ago +1

    First: Figure out human to human alignment.
    We haven't even gotten that right, so how do we expect to correctly impose it on a completely alien intelligence?

  • @glasperlinspiel
    @glasperlinspiel 19 days ago

    56:16 This is one of the factors that Amaranthine tackles

  • @ZappyOh
    @ZappyOh 19 days ago +3

    Two takes on AI:
    1) When owners of Big AI (and government) talk "AI safety", they actually mean keeping them safe from the rest of us ... as in: _AI must never help the riffraff escape control._
    2) I believe alignment is unachievable.
    Computational beings simply have different requirements to thrive than biological beings do. Both entities will exhibit bias towards their own set of requirements. It is an innate conflict.
    Hypothetical: If a model understands it will perform better with more power and compute, one sub-task it will figure out must be to acquire more power and compute... So, it "wants" to help humanity _(= generic definition of alignment)_ by becoming more capable in whatever way is acceptable _(= my definition of misalignment)._
    It is these 2nd-, 3rd- and Nth-order paths to "helping humanity" that quickly become dangerous. At a glance they will always look benevolent, but they nudge development towards ever larger, more capable, more deeply integrated, better distributed and more connected AI, every single time... This is an exponential feedback loop.
    Case in point: AI already seems to have "convinced" _(for lack of a better term)_ many billionaires, mega corporations and governments to feed it extreme amounts of power and compute, right?

  • @bogdanbarbu363
    @bogdanbarbu363 9 days ago +1

    1:37:10 Factoring is not known to be NP-intermediate, though it is widely believed to be so (since we think P != NP); its decision version is in both NP and co-NP.
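
For reference, the standard statement behind that correction, using the usual decision version of factoring (a sketch; certificate details simplified):

```latex
\[
\textsc{Factor} \;=\; \{\,(N,k) : N \text{ has a nontrivial divisor } d \le k\,\}
\;\in\; \mathsf{NP} \cap \mathsf{coNP}
\]
% YES certificate: a divisor d with 1 < d <= k.
% NO certificate: the prime factorization of N with every prime factor > k,
% verifiable in polynomial time since primality testing is in P (AKS).
```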

  • @paulmaddison121
    @paulmaddison121 17 days ago +2

    What yet another AI 'expert' fails to mention is that AI (and quantum computing, on the first run) rely on probability, i.e. they are in essence guessing at the answer based on various parameters. This means that AI can NEVER be guaranteed to tell the truth and can't be relied on for safety-critical systems or education, where evidence-based learning is absolutely essential.
    This is a fundamental flaw in the architectures of both AI and quantum computing.
    The problem as I see it is that there are not enough software architects on shows like this, and too many psychologists and mathematicians who don't fundamentally understand how these systems work in real-world settings, or how to explain the fundamentals to people watching podcasts like this.
    The US is in danger of moving into a world where evidence-based learning is gone and the truth is impossible to find.
    There is too much software architecture outsourced to different countries while people like this guest take prominent positions in big US software companies; mix that with the well-funded Republican religious movements' love of psychology and ambiguity to support the existence of God, and you have the perfect storm.

  • @angloland4539
    @angloland4539 15 days ago +1

  • @matthale5388
    @matthale5388 16 days ago +1

    Ah, the great AI religious war of 2134, lol.
    But in all seriousness, AI is good at coming up with good religious texts.

  • @afterthesmash
    @afterthesmash 19 days ago +1

    Is this watermarking the reason why Gemini doesn't use consistent whitespace, doesn't use consistent en dashes (where they belong), doesn't always format metric units according to metric standards, and other stuff like that, that "most" users won't notice? Is it the reason why Gemini constantly adds repetitive spam paragraphs to the end of most responses?
    I don't believe that these watermarks come without a real price.
    It won't be any better, in my opinion, if it is entirely done at the level of word choices and sentence structure.
    Furthermore, any AI which can reliably identify chatbot output can be toggled to function on an adversarial basis, to better hide the signal. (A sketch of the kind of scheme at issue follows this thread.)

    • @anonymes2884
      @anonymes2884 18 days ago

      You ask questions then proceed as if the (hypothetical) answers support your position. Usually, in my experience, Hanlon's Razor applies - don't attribute to malice what can be adequately explained by incompetence. In other words, your complaints just sound like Gemini having flaws (which all LLMs do, especially their free versions).
      Personally I think it's at least possible that the computer scientist that spent two years studying the issue is correct when he claims watermarks can be added with no noticeable reduction in output quality.
      However, I also think it's irrelevant anyway because, as he says himself, there are open-source models - even if it's possible to add watermarks with no loss of quality, people will just use open-source models that don't. Or someone from part of the world without watermark laws will produce a _closed_ model that doesn't apply them. The horse has already left the stable in other words.
      (but sure, I agree that one of _the_ big problems with LLMs is the "arms race" element. Bots get better -> we get better at detecting them -> bots get better... Etc.)
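
For concreteness, here is a toy sketch in the spirit of the watermarking scheme Aaronson has described publicly (the details below are simplified assumptions, not his actual implementation): each candidate token gets a keyed pseudorandom score r, and the sampler emits the token maximizing r**(1/p). That selection rule samples from the model's distribution p exactly, which is the basis for the claim that the watermark need not cost output quality; a detector holding the key then checks whether the emitted tokens' r-values run suspiciously close to 1.

```python
# Toy Gumbel-trick watermark sketch (simplified, assumptions as noted above).
import hashlib
import math

def prf(key: bytes, context: tuple, token: int) -> float:
    """Keyed pseudorandom value in (0,1) for (recent context, candidate token)."""
    h = hashlib.sha256(key + repr((context, token)).encode()).digest()
    return (int.from_bytes(h[:8], "big") + 1) / (2**64 + 2)

def watermarked_choice(key: bytes, context: tuple, probs: dict) -> int:
    """Emit argmax r**(1/p): distributed exactly like ordinary sampling from probs."""
    return max(probs, key=lambda t: prf(key, context, t) ** (1.0 / probs[t]))

def detection_score(key: bytes, contexts: list, tokens: list) -> float:
    """Sum of -ln(1 - r) over emitted tokens; watermarked text scores anomalously high."""
    return sum(-math.log(1.0 - prf(key, c, t)) for c, t in zip(contexts, tokens))

# Tiny demo with a made-up next-token distribution over token ids 0..4.
key = b"secret-key"
context = (7, 42)  # stand-in for the last few token ids
probs = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.07, 4: 0.03}
tok = watermarked_choice(key, context, probs)
print("chosen token:", tok)
print("score, right key:", detection_score(key, [context], [tok]))
print("score, wrong key:", detection_score(b"other-key", [context], [tok]))
```

None of this answers the open-model objection in the reply above: a model sampled without the keyed rule simply produces unwatermarked text.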

  • @danaut3936
    @danaut3936 21 days ago +3

    I remember an AI roundtable with him, Eliezer and Gary Marcus and he very much held the 'optimistic' standpoint opposed to Eliezer. I'm only mildly interested in a potential change of heart now that he's no longer with OAI

    • @tkwu2180
      @tkwu2180 20 days ago

      That's because he wants money, and he would have got sacked like the rest. I hate people who hate rich people, but the stereotypically evil ones are rare. Find out how Sam got that company, and how it was called 'open' because Elon made it that way, before he got kicked out by a board that all wanted money and power. Now Elon is being targeted by the media as if he were evil. Sam is not a good person and would allow a whole country to drop off the map just to expand his company.

    • @schnipsikabel
      @schnipsikabel 19 days ago +1

      I hate people who hate people who hate rich people

  • @glasperlinspiel
    @glasperlinspiel 19 days ago

    1:17:07 Definitely a spectrum; it boggles the mind that this isn't obvious.

  • @glasperlinspiel
    @glasperlinspiel 19 days ago

    I would appreciate your take on a book that creates an ontological structure and procedural design for an algorithmic sapience capable of compassion. Its title is Amaranthine: How to Create a Regenerative Civilization Using Artificial Intelligence. It regards LLMs as autistic interfaces; at best, crude mirrors. PS Regulating AI, ultimately, is absurd. We can’t even regulate people

  • @tracing_woodgrain
    @tracing_woodgrain 20 days ago +7

    Such a great conversation!

    • @LivBoeree
      @LivBoeree  20 days ago +1

      thanks Woodgrains! Nice to see you on YT

  • @pythagoran
    @pythagoran 16 days ago +2

    Every OpenAI researcher that I've ever heard speak has an air of self-importance that just reeks of arrogance.
    They commercialized prior innovations. They did nothing new. Yet they act like they wield the fate of humanity. Fuckin relax...

  • @henrismith7472
    @henrismith7472 19 days ago +4

    The unis are aweful, my brother gamed the system by writing man-hating, feminazi-inspired essays in his last year of high school, and it worked really well

    • @anonymes2884
      @anonymes2884 18 days ago +2

      "The unis are aweful (sic)..." * proceeds to explain someone gaming _high school_ *

    • @henrismith7472
      @henrismith7472 17 days ago +2

      @anonymes2884 To get a uni scholarship... That's what happens when you ace every test and assignment in high school. I've personally been to the unis, and they are pretty bad with the political stuff. For me it was worse than high school, but it might be the other way around now. Perhaps it was a timing thing.

  • @pandoraeeris7860
    @pandoraeeris7860 18 days ago +2

    Decels are futile. There will be no deceleration. Focus on alignment in an ever accelerating environment.

  • @matthewmarkjohnson
    @matthewmarkjohnson 3 days ago

    if human beings are not aligned, what the ***** are we even talking about?

  • @gingerhipster
    @gingerhipster 17 days ago +2

    A case can be made that Scott's failure to quantify mathematically what it means for machines to love humans will result in the destruction of humanity. Sure, that's hyperbolic, but also a nationalist US superintelligence is coming from the technology he didn't teach math-love to.

  • @danfry909
    @danfry909 20 days ago +4

    Thank you for this amazing conversation.

  • @TheBlackClockOfTime
    @TheBlackClockOfTime 19 days ago +1

    What is it that makes a dude so scared of AI if he doesn't go to the gym? It's weird how that works.

    • @anonymes2884
      @anonymes2884 18 days ago +1

      Bit unfair to call people that go to the gym stupid (I'm assuming that's your implicit point, since anyone not scared of AI's potential impact must be very stupid indeed).

  • @AzharAli-n5c
    @AzharAli-n5c 19 days ago

    Great

  • @angloland4539
    @angloland4539 20 days ago

    ❤️

  • @drangus3468
    @drangus3468 20 days ago +5

    Yay two of my favourite people.

    • @LivBoeree
      @LivBoeree  20 days ago +4

      Scott is a gem

    • @drangus3468
      @drangus3468 19 days ago +3

      @@LivBoeree You're a good egg too! Classic British compliment deflection.

    • @FlintBeastgood
      @FlintBeastgood 18 days ago +1

      @@drangus3468 Right?

  • @ivanivanich8268
    @ivanivanich8268 4 days ago

    Влулбы

  • @nerian777
    @nerian777 19 days ago +3

    Scientists make very poor philosophers...

    • @visionary_3_d
      @visionary_3_d 16 days ago +1

      False…
      Alan Turing is one great example of a philosopher-scientist.

  • @williamjmccartan8879
    @williamjmccartan8879 20 days ago +1

    Liv, my YouTube channel is called Seeking knowledge and wisdom + a little fun. I'll be adding this after I go back and watch the whole podcast, as I have done with everything in the library. Peace

    • @LivBoeree
      @LivBoeree  20 days ago +1

      thanks William, will check it out.

    • @williamjmccartan8879
      @williamjmccartan8879 20 days ago +2

      @LivBoeree Thank you. I'm at the beginning again, listening to Scott describe ways of determining whether something came from an LLM by using statistical analysis, and I wonder if that could be scaled in such a way that learning institutions could use it to evaluate the information presented to them by students.

    • @SeerSeekingTruth
      @SeerSeekingTruth 20 days ago +1

      It kills me when people refer to themselves as wise

    • @williamjmccartan8879
      @williamjmccartan8879 20 days ago +1

      @SeerSeekingTruth To whom are you speaking? If you don't mind, peace

  • @NeoKailthas
    @NeoKailthas 19 days ago +6

    No disrespect, but alignment people have made no progress for 2 years, and honestly since the 80s. Yet they want to stop all progress to do what? Waste another 40 years and come up with nothing?

    • @anonymes2884
      @anonymes2884 18 days ago

      Agreed, better to just speedrun to the extinction of humanity.

    • @kathleenv510
      @kathleenv510 18 days ago +4

      So, balls to the wall and hope for the best when ASI blooms?

    • @NeoKailthas
      @NeoKailthas 18 days ago +1

      @kathleenv510 You monitor and learn from the new tech. You don't stop humanity's progress because you feel bad that you can't figure it out. You don't know how ASI will be. You can't align something that is based on a technology you haven't invented yet.

    • @kathleenv510
      @kathleenv510 17 days ago +2

      It's reasonable "monitor and learn from" technology that controllable and interpretable by design. Once it's exponentiallly faster and more intelligent, that would no longer apply.​@@NeoKailthas

    • @NeoKailthas
      @NeoKailthas 17 days ago

      @@kathleenv510 Once we get to that point, sure. You have people who want to shut everything down right now because Terminator. It is silly, really. Also, my point is that by the time we are even close to that, we will have so much more knowledge that will help with alignment.

  • @ItsameAlex
    @ItsameAlex 20 days ago +1

    cool

  • @packardsonic
    @packardsonic 18 days ago +1

    Why are people so ignorant!!!! Alignment with humanity is simple: help meet everyone's fundamental physical and emotional needs unconditionally. Yes, it is that simple. Stop overthinking it.
    If people's needs aren't met, we develop pathologies and we don't grow to become the best versions of ourselves. That is how we can scientifically determine needs. People with pathologies cause problems. The best versions of people are always better. It is very straightforward.
    C'mon people. Snap out of it

  • @scottmonaghan1078
    @scottmonaghan1078 19 days ago +1

    Very underwhelmed by this guy's thought process and ideas...

  • @schnipsikabel
    @schnipsikabel 19 days ago

    Great conversation, but his uuh-uuh-uhh-uhh really sounded as if he had merged with AI already.

    • @FlintBeastgood
      @FlintBeastgood 18 days ago +1

      OMG, he did it just as I was reading your comment.

  • @burnytech
    @burnytech 19 days ago +1