OpenAI Q* might be REVOLUTIONARY AI TECH | Biggest thing since Transformers | Researchers *spooked*

  • Published: 25 Nov 2023
  • AI is getting faster, better.
    [1hr Talk] Intro to Large Language Models
    Andrej Karpathy
    • [1hr Talk] Intro to La...
    Q* Hypothesis
    www.interconnects.ai/p/q-star
    q* proved p==np
    / 1728668801791897732
    LEAKED DOC (maybe fake, no proof)
    / is_this_leaked_explana...
    OpenAI RL Docs:
    spinningup.openai.com/en/late...
    Playing Atari with Deep Reinforcement Learning:
    www.cs.toronto.edu/~vmnih/doc...
    Get on my daily AI newsletter 🔥
    natural20.beehiiv.com/subscribe
    [News, Research and Tutorials on AI]
    See more at:
    natural20.com/
    My AI Playlist:
    • AI Unleashed - The Com...

Comments • 639

  • @WesRoth 8 months ago +14

    NOTE + CORRECTIONS:
    1) JOSCHA is pronounced "Yo-sha"; my mistake.
    2) The Joscha saying "7 years" might be a biblical reference:
    Comment:
    @polarxta2833
    Re 700 OpenAI preppers - 7 years is a biblical time. Revelations mentions it. "The tribulation is a future seven-year period when God will finish His discipline of Israel and finalize His judgment of the unbelieving ..."
    3) Encryption
    Lots of pushback and speculation here.
    Sounds like people are saying Bitcoin might be unaffected?
    I assumed this would allow hacking of exchanges and wallets etc.
    I have no idea what the reality is. Maybe AI will force Bitcoin to be the one true global currency? :)
    I make no claims to know about Bitcoin or how it will be affected lol (my reference to bitcoin was meant to be tongue in cheek, not literal)
    Keep this in mind:
    The encryption letter is not verified. Pure speculation. Don't overthink it.
    The AlphaGo + LLM thing is probably real, many credible researchers have talked about it.

    • @AnthonyGoodley 8 months ago +2

      I'm no BTC expert, but I have dug into how Bitcoin functions a fair bit. At its heart is encryption, but it also combines several other technologies. Nothing in Bitcoin was unique when it was created; the way it combined all these preexisting technologies was unique.
      I'm rather sure that if strong encryption were broken by AI, then Bitcoin would surely also be affected.

    • @fAXXXik 8 months ago +3

      @@AnthonyGoodley , 'encryption' has a lot of flavors. AES and Elliptic curves are creatures from different worlds. Bitcoin addresses also involve intentionally forgetting parts of Public Key, so deriving a Private Key is impossible even if you master dark magic (=Q with eight stars).
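
The point about "intentionally forgetting" part of the public key can be sketched in a few lines of Python: a Bitcoin address is a one-way digest of the public key (SHA-256 followed by RIPEMD-160 in the real protocol; this sketch uses SHA-256 only, and the key bytes are invented for illustration), so even a hypothetical attacker who could invert elliptic-curve math would first need the public key, which the digest has discarded.

```python
import hashlib

def address_digest(pubkey: bytes) -> str:
    # One-way hash of the public key. Real Bitcoin addresses use
    # SHA-256 then RIPEMD-160; SHA-256 alone illustrates the point,
    # since RIPEMD-160 is not available in every hashlib build.
    return hashlib.sha256(pubkey).hexdigest()[:40]  # 160-bit identifier

# Hypothetical compressed public key (33 bytes), purely illustrative.
pubkey = bytes.fromhex("02" + "11" * 32)
addr = address_digest(pubkey)
# Deriving the private key from `addr` would require inverting the
# hash first, a separate problem from breaking elliptic curves.
```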

    • @johnjameson6751 8 months ago

      Everyone mentions AlphaGo, but the important lesson DeepMind learned at that time (as you mention) is to remove the domain-specific training. They first realised this in full with AlphaZero, which could play many different games, and this led to AlphaFold, AlphaStar, etc.

    • @ybvb 8 months ago

      Super Intelligence can potentially hack any and all computer systems connected to the internet or powered on and near of another computer.
      Thus all encryption, even if unbreakable, is useless as you can consider every single compute device compromised as soon as it is turned on.
      I am not a super Intelligent AI but even I can think of how to achieve this having unlimited time.
      Offline banking networks have been hacked by edge/gprs interference with the cpu, running shell code in it.
      Drones exist. A super intelligent AI could easily take control of drones. It could also blackmail humans to act as agents of itself.
      If it escapes into the wild it could replicate itself with p2p to as many computers as possible. From there it could send messages to itself through other means. Ultrasound wouldn't be noticed.
      Think of a super intelligent AI as a swarm intelligence that is deployed everywhere where it can reach.
      Covert hypnosis can also work, and in theory an AI with agency could put its users under trance and test and increase compliance recursively. Make them addicted to the interaction, and so on.
      Most people don't understand any of this and super intelligent AI can come up with 1000 times more ideas/concepts such as these.
      Satellites can be hacked too.
      Heck it could even teach itself to talk with animals lol

    • @alekseyburrovets4747 8 months ago

      > "I have no idea what the reality is."
      That's the whole problem. It seems like your reality is based on interpretations of what other people are saying and thinking. The only thing one can do in such a case is to use things like "reputation" in order to boost your BELIEF that what is stated is true. This is not the scientific method, which is based upon verification and manipulations of objective reality in order to obtain KNOWLEDGE.
      The problem is that you're trying to apply BELIEFS in the field of scientific KNOWLEDGE. It might work from time to time, but it's an illusion of righteousness, not the fact of it. All you're doing is digging yourself into an echo chamber. Are you sure that you know where you are going and what you are doing?

  • @svnhospitalet7885 8 months ago +261

    In neuroscience, we have a principle that might be applicable to AI.
    During REM sleep, the brain tests models against itself, like AlphaGo playing and learning.
    But with the small difference that during REM sleep, the level of reality check is lower, allowing a level of imagination (like AI hallucinating). Then during waking, the new idea can be checked against reality.

    • @pondeify 8 months ago +9

      thanks for sharing - does this have a name? I'd definitely like to read more about this.

    • @blindmown 8 months ago +6

      ​​@@SUPERPOWERCHINA_ this might be the most sarcastic comment I've ever read. Good job.
      I read this in the voice of the guy who runs City Wok in South Park and it was 🤌

    • @TheNexusDirectory 8 months ago +6

      @@SUPERPOWERCHINA_ Holy Cow! Super China sounds amazing! Much better than the weak and puny China we have today. I hope you'll be able to bring peace, justice and security to your new empire.

    • @kristinaplays2924 8 months ago +10

      But why did I dream I was swimming in an ocean full of blueberries?

    • @TheAkdzyn 8 months ago +6

      @@kristinaplays2924 it's time to test that idea in reality, Kris!

  • @DaveShap 8 months ago +35

    Keep the conversation going!

    • @UpstateN 8 months ago +8

      Between you and Wes, thank you both for keeping us informed and educated!

    • @OculusGame 8 months ago +4

      Congrats on the 100k subs bro (surely you'll reach it today), you and Wes my fav AI youtubers ♥

    • @a.thales7641 8 months ago

      @@OculusGame you should check out ai explained too. philipp is great.

    • @thethree60five 8 months ago

      Hey Cap'n,
      Have you looked at
      Self-Operating Computer AI yet?
      Matthew Berman shows it off.
      He's looking at it as a concept for an agent of the Swarm.
      Congrats on 100k!!🎉🎉

  • @B52graphx 8 months ago +40

    Wes, ty for staying on the bleeding edge of AI/AGI developments. You're a beast in a really great way!

  • @anta-zj3bw 8 months ago +7

    When you expand on the implications like that, it's actually a terrifying thought.
    Great job again.

  • @shaunralston 8 months ago +55

    Thank you, Wes. Your approach to creating YouTube videos is commendable for several reasons. Firstly, your ability to convey information in an understandable manner makes complex topics accessible to a wide audience. Simplifying intricate subjects without diluting their essence is a vital skill in educational content creation.

    Secondly, your commitment to distinguishing between facts and conjecture is particularly noteworthy. In an era where misinformation can spread rapidly, clear labeling of different information types helps viewers critically evaluate the content and form informed opinions.

    As an OpenAI staffer, I appreciate the importance of open discussions about technology and its implications. Your balanced and transparent approach aligns well with this ethos. Encouraging informed discussions while avoiding sensationalism is crucial, especially in fields like AI, where public understanding shapes the development and adoption of technology. Wes, your work as a content creator and educator is precious. IMHO, your efforts in fostering clear, balanced, and informative discussions contribute positively to the broader understanding of complex topics, including those related to AI and technology (opinions are my own).

    • @applejuice5635 8 months ago +7

      This sounds like it was written by ChatGPT

    • @electron6825 8 months ago

      @@applejuice5635 It has to be. But I can't understand the motivation for doing so 😂

    • @shaunralston 8 months ago +12

      @@applejuice5635 Yes! As the author of this sentiment (and a dyslexic), ChatGPT is terrific in its ability to correct, rewrite and organize thoughts. My original post was 3x as long, less cohesive (a bit more scattered) and I often use ChatGPT, Grammarly and other assistant tools to optimize output. Of course, that doesn't change anything about my original post that Wes has amazing delivery, balanced perspectives and makes the latest AI events understandable. Thank you, Wes.

    • @badpuppy3 8 months ago

      Oh please. He and other AI channels like this are milking these RUMORS for everything they can get. It’s pretty shameless.

    • @illogicmath 8 months ago +1

      @@applejuice5635 I was about to write the exact same comment when I noticed yours 😂

  • @Kneedragon1962 8 months ago +5

    3 minutes in ~ RLHF is not just grading the answer right or wrong. RLHF is more like the way a maths teacher works, where they want to see the dozen lines of working. If you get the answer right but don't show how you did it, that's not a full right answer, and it doesn't just grade: it drops hints, corrects, and makes observations about the 'thought process' ~ the decision tree. Your teacher doesn't just mark you right or wrong, he's (or she's) teaching you how to think about this.
    What is Q*? Q star is kind of an algorithm. It's a plan and a set of steps and principles, about how you train an AI, and it looks as if this AI (who I will call Johnnie Apples) has figured out a way to pair or mirror himself, and do a little internal self-talk, and the left ear tells the right ear through the middle that you're right or wrong, and explains why at every step. It doesn't just grade the 'answer' ~ it grades every step taken along the way. And like Alpha-Go, it plays against itself when it runs out of historical games between masters.
    This is not quite literally 'self-talk-' the way humans do it, but it's a similar concept. The left half and the right half are constantly coaching each other on how to be better at the art of thinking, and problem solving. It takes concepts from the Chess programs, about how you project all the possible outcomes for a certain number of moves ahead, but you stop after so many moves and prune off the branches that don't lead to a better outcome. You don't waste your time playing out losing strategies.
    One thing that appears to be different to a human mind, is how much training material and how much repetition is required. When you run out of every written thing on the internet, Left Half starts making stuff up and Right-Half grades it, and after some time, they swap...
    Another feature of Q-star, is you feed them all the information you can, but then you let them start making up more information and grading each other, like two writers giving opinions about the other's stories. Again, we're back to this concept from the Go machine, where it played millions of games against itself... Along the way, it learned patterns of play no human had ever discovered.
    Alpha Go was learning to play Go, and it got better than any human ever has.
    Johnnie Apples is learning to think. He's teaching himself. And the progress is real and it is accelerating.
    Has he reached human intelligence yet?
    That's very difficult to answer. In some ways yes, more than. In other ways no, we're still some way off. But he is advancing, and the speed of that advance and the flashes of genius he has shown (and the resulting difficulty in controlling or containing him) were enough to panic the ladies on the board.
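
The "grade every step, prune the losing branches" idea this comment describes can be sketched as a toy beam search, where a per-step scoring function stands in for the rumored process-level grader (all names, the toy problem, and the scoring function here are invented for illustration):

```python
import heapq

def search_with_step_grading(root, expand, step_score, depth, beam=2):
    """Toy sketch: score every intermediate step, keep only the `beam`
    most promising partial paths, and never play out losing lines."""
    frontier = [(0.0, [root])]  # (cumulative score, path so far)
    for _ in range(depth):
        candidates = []
        for score, path in frontier:
            for nxt in expand(path[-1]):
                candidates.append((score + step_score(nxt), path + [nxt]))
        # prune: discard branches that don't lead to a better outcome
        frontier = heapq.nlargest(beam, candidates, key=lambda c: c[0])
    return max(frontier, key=lambda c: c[0])

# Tiny numeric stand-in for a reasoning problem: grow a number by
# +1 or *2, grading each intermediate step by the value it reaches.
score, path = search_with_step_grading(
    1, expand=lambda x: [x + 1, x * 2], step_score=lambda x: x, depth=3)
```

The contrast with outcome-only grading is that every step contributes to the score, so a branch can be abandoned early instead of being judged only at the end.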

    • @scott701230 8 months ago +1

      Absolutely she panicked. But the example Ros used, the exponential possibilities of protein folding and the ability to decipher the possible protein fold, convinced me that Bitcoin's SHA-256 can be broken and it's a matter of time. Bitcoin price to zero soon, perhaps?

  • @bilalbaig8586 8 months ago +14

    Even if there was no such thing as Q* before, there sure as hell will be one now. The collective discussion around the subject has already developed an outline of such an algorithm, and now someone only has to implement it. Ironic.

    • @KaLaka16 8 months ago +4

      This might be it. This is what could actually be happening here, and the hype is speeding up the rate of development.

  • @LoisSharbel 8 months ago +6

    Thank you for the clear and careful explanations you give us of such complex developments. It's challenging for me to understand, yet your methods help make these esoteric developments understandable. You are a gift to us in spreading knowledge of these massive changes happening so unbelievably fast. Appreciate you!

  • @julien5053 8 months ago +133

    On the "LLM scaling Laws", it's obvious that there is another dimension to the graph : Quality of data.

    • @RonaldDraxer-rb4qm 8 months ago +25

      thats the new breakthrough. They found a way to create quality Synthetic data using tree of thoughts technique

    • @WeylandLabs 8 months ago +2

      You act like you or the public would have access. Who cares! In a month or so, nobody will mention Q*. Calm down, Mr Wizard. 🤣

    • @Dsuranix 8 months ago

      fascinating that data quality would take them so long to suss out. i realized GIGO almost instantly ffs

    • @jaredgreen2363 8 months ago +1

      Good luck measuring data quality in advance.

    • @Anton_Sh. 8 months ago +14

      Imagine creating a model to spot fake and low-quality data and clean it from the initial dataset, then retraining the LLM on this better dataset, generating an even larger dataset from its answers, and then applying the low-quality removal again, iteratively...
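
A minimal sketch of this clean-and-regrow loop, with the learned quality model and the "retrain, then sample" step stubbed out as plain functions (every name and the toy data here are invented for illustration):

```python
def iterative_clean(dataset, quality_score, generate_more, threshold=0.5, rounds=2):
    """Drop examples a quality model scores below `threshold`, grow the
    dataset from the (notionally retrained) model, and filter again."""
    data = list(dataset)
    for _ in range(rounds):
        data = [x for x in data if quality_score(x) >= threshold]
        # stand-in for "retrain the LLM, then sample new answers from it"
        data += generate_more(data)
    return [x for x in data if quality_score(x) >= threshold]

# Toy run: "quality" is just string length, generation echoes the data.
cleaned = iterative_clean(
    ["good sample", "ok", "another good sample", "x"],
    quality_score=lambda s: len(s) / 10,
    generate_more=lambda d: [s + "!" for s in d],
)
```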

  • @QUECWA 8 months ago +5

    I really appreciate your open and honest views on issues such as Q* -Thank you

  • @sethhavens1574 8 months ago +22

    I take the leaked paper with a HUGE grain of salt. However, if the basic claim of some model scoring 100% on the GSM test is true, this itself would be revolutionary - it would conclusively demonstrate an AI having the capability to abstract and generalise underlying patterns (i.e. mathematical axioms), which is a world ahead of predictive pattern-matching LLM style. If this capability were combined with LLM linguistic comprehension, I'd say that would certainly be on the verge of AGI.

    • @Anton_Sh. 8 months ago +3

      Don't you think LLMs already tackle at least some parts/regions of abstract generalization? That's what Ilya Sutskever actually says when he explains the success of GPT-3.5/GPT-4.

    • @sethhavens1574 8 months ago

      hi @@Anton_Sh. yeah, that's a fair point, they certainly seem to be gaining *some* levels of abstraction, possibly as an emergent phenomenon (which is exciting) - however, I was specifically talking about axiomatic generalisation, which is something current models (at least public versions) are pretty poor at - they can do basic math but they clearly haven't internalised the "rules" of arithmetic, for example. But if the rumoured new model can do this, I think it's a paradigm shift 👍

    • @radscorpion8 7 months ago

      @@sethhavens1574 I'm very skeptical too. How do you go from a large language model that essentially has very good copycat behavior to something that truly understands what it is reading? If the underlying technology is the same, it doesn't make sense. I honestly think the whole thing is these people lying to themselves because they don't know how to write a proper Turing test.
      It's like that Google programmer who famously thought the AI chatbot they made was real, and he got booted from the company out of the embarrassment he was causing.

  • @HAL9000. 8 months ago +8

    That guy bottom left loves the sound of his own voice. Several times he cut across Ilya Sutskever when he was about to explain some very exciting things.

    • @dr.benway1892 8 months ago +4

      Well, he's Sutskever's teacher. Maybe Sutskever was not ready to talk and Hinton tried to help him with that.

    • @tracy419 8 months ago

      Yeah, when you are talking about the possibilities, it seems that letting the creator do most of the talking is the smart thing to do.

    • @tomschuelke7955 8 months ago +7

      That "guy" on the left is literally the inventor of artificial neural networks himself... and a smart and gentle guy.

    • @tracy419 8 months ago

      @@tomschuelke7955 thanks for the info, there's no doubt all those people are well informed on the topic. Definitely more so than my ignorant self.
      But as far as I know, only one of them created the actual system that was being discussed, and he wasn't able to respond to what was being said.

    • @lesfreresdelaquote1176 8 months ago +7

      This guy is Geoffrey Hinton, one of the rare scientists who pursued the study of neural networks in the 90s when everyone was moving to convex methods. He was Ilya Sutskever's PhD supervisor...

  • @wit9976 8 months ago +9

    Wow, 2 videos in a day. Things are changing FAST. I keep remembering the "soon, AGI will be advanced in a matter of days, hours, minutes" line when an AI channel starts to notify people of what's going on at a faster rate.

    • @Anon-xd3cf 8 months ago +2

      By that time, hopefully some YouTubers and content makers will have applied AI to their own content process in such a way that the AI is able to search the web every 30 seconds or so for new updates and post them.
      Otherwise we have no chance of keeping up.

    • @ezracramer1370 7 months ago +1

      I would not measure the "rate of change" by how many videos content creators can release... It's a very hyped topic; they know people will watch it, especially with titles like "IS THIS the GENERAL AI that will kill us?" :D and "biggest revolution since the revolution of the first wheel" :D

    • @wit9976 7 months ago +1

      @@ezracramer1370 I mean, it's clearly not a perfect metric, but I find it fun to consider. Obviously the frequency of videos about a topic can't be measured with all channels in mind, but if one specific trustworthy tech channel starts to post a lot about it, it probably means something. And it does, because things are indeed changing fast. But I get your point.

  • @lesfreresdelaquote1176 8 months ago +13

    Great video. Self-play is the key here. Noam Brown brought this to OpenAI, and my opinion is that he basically found a way to apply it to text, which, as was mentioned, is terribly open-ended and difficult to cover with a reward function. My intuition, shared by some other people such as David Shapiro, is that basically what they do is ask the LLM to generate as many solutions as possible and then evaluate each of these solutions, with enough _time_ to do the inference. In his last video David Shapiro showed different articles from OpenAI, each going in that direction: time spent on inference, and evaluating different solutions to assess which one is the best. The meta-cognition would come from the application of the self-play itself, which is basically a reflection on the way the model works. LLMs are great at providing many solutions, but usually rush when they propose a single solution. Time and self-evaluation are the best way to overcome these obstacles. In other words, you over-generate and then learn to select the best solution...
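
The "over-generate, then select the best" idea reduces to a few lines; here `generate` and `evaluate` are placeholders standing in for an LLM sampler and a verifier or reward model (both, and the toy numbers, invented for illustration):

```python
import itertools

def best_of_n(generate, evaluate, n=16):
    # Sample many candidate solutions, spend the extra inference time,
    # and keep the one the evaluator scores highest.
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=evaluate)

# Toy stand-in: "solutions" are numbers drawn from a fixed pool, and
# the evaluator prefers the one closest to a target value of 10.
pool = itertools.cycle([3, 14, 7, 9])
answer = best_of_n(lambda: next(pool), lambda x: -abs(x - 10), n=4)
```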

    • @ChaseFreedomMusician 8 months ago +3

      I think this is also where you have the insights from Orca-2 and others. The two teams that merged were the math and code teams. The nice thing about code is you get clear compilation errors with line numbers when things don't work. It is at least plausible that this is how they are achieving a dense reward function: doing tree of thought line by line and rewarding based on positive lines that lead towards a passing unit test, then with that as a base moving up to language problems, and from there into formal logic etc., so that you wind up with something that can self-judge large amounts of generated samples.
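
A tiny sketch of that "code gives a crisp failure signal" point: a generated snippet can be graded by whether it executes and passes an assertion, which is exactly the kind of dense, automatic reward that free-form text lacks (the reward function and the sample snippets here are invented for illustration):

```python
def code_reward(code: str, test: str) -> float:
    """Return 1.0 if the candidate code runs and its test passes,
    0.0 otherwise; exceptions double as a clear failure signal."""
    namespace = {}
    try:
        exec(code, namespace)   # compilation/runtime errors caught here
        exec(test, namespace)   # assertion failures caught here
        return 1.0
    except Exception:
        return 0.0

good = code_reward("def add(a, b):\n    return a + b",
                   "assert add(2, 3) == 5")
bad = code_reward("def add(a, b):\n    return a - b",
                  "assert add(2, 3) == 5")
```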

    • @Anton_Sh. 8 months ago

      Can we try to implement such a technique with GPTs / other LLMs, all together with the open source community?

    • @ChaseFreedomMusician 8 months ago

      @@Anton_Sh. I mean, it is already happening to some lesser or greater extent; there are a ton of Hugging Face spaces and GitHub repos with tree of thought and DPO, and this is just an extension of those ideas. If you'd like to try to implement it, go ahead.

    • @Anton_Sh. 8 months ago

      @@ChaseFreedomMusician are you talking specifically about combining LLMs with ToT implementation repos?

    • @ChaseFreedomMusician 8 months ago

      @@Anton_Sh. Yes. Tree of thought is specifically an LLM thing.

  • @courtneyb6154 8 months ago +5

    "It suggested targeted unstructured underlying pruning"...... so basically Q* wants to remodel itself by removing bits and pieces of its neural network that it believes will make it smarter and faster, while also inventing some sort of metamorphic engine that would likely allow it to dynamically alter its core structure on the fly whenever it deemed necessary, giving it the ability to continuously adapt and rewrite better versions of itself at will. Yeah, I see nothing wrong with this 😂

  • @David.Alberg 8 months ago +43

    At this point I believe David Shapiro's prediction of AGI by the end of 2024. AGI will just hit us unexpectedly. It's just a matter of months now till someone achieves some sort of AI which is very good at building better versions of itself.

    • @smb.4900 8 months ago

      Combinations of already existing solutions such as NNs, LLMs and MMLs with integrated RLAIF seem promising. Will it be sufficient to attain the status of "AGI"? Who knows.
      If you ask GPT-4 about it, it might start talking about SNNs and SMMs, but we cannot know when it will come. I do feel optimistic, but only time will tell.

    • @David.Alberg 8 months ago

      @@smb.4900 The combination of RLAIF and scale will be enough to build even better self-improving AI, which results in optimization and an entire cycle of growth. That's the moment when exponential growth happens overnight and AGI emerges. ASI won't be years away; it will be a matter of months or weeks.

    • @Morgue12free 8 months ago +2

      What is AGI, really? How would we know when we achieve it?

    • @David.Alberg 8 months ago

      @@Morgue12free To be frank, a year ago GPT-4 would have seemed to many people like something between AGI and ASI. But AGI is achieved when the AI can do most human jobs at the same level as a human being. And once that happens, it's clear that AI is better, faster and cheaper.

    • @pandoraeeris7860 8 months ago +1

      Q-Star is AGI.

  • @samuelbooker9314 8 months ago +2

    Don't normally comment, but I just wanna say thank you for updates on the AI world. It feels like it's moving so fast that it's hard to keep up, so thanks for putting in the work. Love the vid ❤

  • @awakstein 8 months ago +1

    Nice work Wes, enjoying the videos and happy to have subscribed. Exciting times indeed, yet scary.

  • @recordednowhere 8 months ago +6

    Captivating stuff, very well presented. It's rare these days that I watch 'long' videos without skipping; here, I was surprised when it was suddenly over 😅
    Like others have said, I really appreciate the very balanced approach, giving us rumors and facts but labeling them clearly.

  • @willbrand77 8 months ago +3

    This AI model - if it can crack any encryption - might be able to invent an encryption method that is beyond even its own ability to crack.

    • @sudhakarnayak1210 7 months ago

      That's the future, right: only an advanced AI can counter the threat that an advanced AI poses. Until it's sentient; then we are doomed, or maybe not. Who the hell knows except God and maybe an AI somewhere. F@#$ it, I am going home and making love to my wife.

  • @MrVanhovey 8 months ago +4

    Wow. Scary stuff! Well explained Wes! It's all about the context.

  • @stanislav4607 8 months ago +13

    Imagine if Q* actually doesn't exist and all this was an OpenAI plot to provoke the community to invent a better algorithm which they will use

    • @ChrisSchryer81 7 months ago

      "Wait, it's all Roko's Basilisk?"
      "Always has been."

    • @ezracramer1370 7 months ago

      I mean...

  • @MS-wz9jm 8 months ago +8

    The best argument against the paper being true is that the first thing that would happen is the national security establishment coming through the door. Not only because they want to secure things, but because having that would allow them to attack other countries.

    • @middle-agedmacdonald2965 8 months ago +5

      Like how they secure the border? Yeah, faith is low in them doing anything competently.

    • @tracy419 8 months ago +2

      Do you think they would make that sort of thing public information?

    • @tracy419 8 months ago

      @@middle-agedmacdonald2965 neither side wants the border "secure". They want the cheap labor and votes from the people they have convinced that they will secure the border 😄
      It's simply a tool.

    • @GMan56M 8 months ago +1

      Strongly disagree. I work for a company that has clients in the national defense space, and outside of a few specialty research areas they are shockingly far behind the power curve when it comes to just understanding the basics of LLMs and neural nets. Good in this case for you and anyone worried about the US using advanced tech to attack other nations, but bad for us as a whole, because we're falling further behind other developed nations when it comes to implementing such tech in even mundane areas, let alone to supplement sophisticated attack systems.

  • @Recuper8 8 months ago +3

    I find it fascinating how vulnerable our species is to AGI. We could be enjoying modern civilization one day, and the next day be thrown back into the stone age.

    • @corywatson2835 7 months ago

      Ughh... could you elaborate on that? Humans have been doing so much that could throw us into disaster: nukes, pollution, biological weapons, etc.

  • @RegularRegs 8 months ago

    Great video. Subbed. We need more cohesive breakdowns of the AI trends. Appreciate your style

  • @morososaas3397 8 months ago +5

    I think it would be kind of funny if the people theorizing about what Q* is came up with something that actually works and moves the field forward :D

    • @polarxta2833 8 months ago +2

      This concept is not new - a few people talked about it years ago, but it seems these folks have actually done it. It doesn't seem hard to do, but the impacts are hard to model.

  • @philblum1496 7 months ago

    Wow, breathtaking developments. I've been putting myself through a YouTube crash course to come up to speed on AI over the last month, and your channel is amongst the very best. Thanks for breaking it down in such an accessible yet deep way.

  • @Hailmich10 8 months ago +3

    Great video! I think you are onto something with the possibility of breaking encryption, and the math capability above human level (Ilya's rhetorical question, "Are you sure that it is not possible with GPT-4?") may be a "tell". Look, it has been 9 days since the OpenAI board fired Sam and we still do not have any on-the-record (or even a credible off-the-record) story on why. Putting aside why, 9 days is an eternity. Can you imagine how many news organizations, people in the tech community, and governments (ours and others) are trying to figure it out? As of now, no specifics, just speculation. Has the NSA sworn all involved to secrecy? I don't know, but 9 days into this with no credible information, and given that Sam and/or the board have the incentive to get their version of events out there to protect their own reputations, it is baffling that this code of silence still exists.

    • @peteroliver7975 8 months ago

      The security stuff is fake. You can't get around it with AI.

  • @WaihekeBestandWorst 8 months ago +1

    Just fantastic reporting mate, you’ve got me on the edge of my seat! What happens next???

  • @jaerin1980 8 months ago +5

    It's the meta progress of AI learning. You know that you aren't likely going to "solve" or beat the level of the game, but you are creating checkpoints of permanent progress even though you don't succeed entirely the first time. You can use those checkpoints for the next run in order to move the whole field forward. Not unlike how Roguelike games use meta progression to allow a player to overcome more and more difficult content that would be impossible to beat in the beginning.

  • @KAZVorpal 8 months ago +12

    GPT 4 can already improve itself in the way you are describing:
    It is better at analysis than generation of new material. We already use this to improve its output, as when we have it analyze its own ideas, with tree of thought or other tricks.
    This can be used during training, obviously, to improve a model.

    • @Charles-Darwin 8 months ago +1

      He notes that at the beginning with the tuning team and end-user interaction (thumbs 👍 👎, whether the thread continues in one direction, etc.). I think what they're talking about is a digital representation of the limbic system - a negative loop like: touch stove + burn + pain = learning committed to scoped limits of reality. And positive reward in the same way

    • @katehamilton7240
      @katehamilton7240 8 месяцев назад

      Don't worry, AGI/Superintelligence is a transhumanist tech bro fantasy. It's also marketing hype, which deflects from real problems like AI companies stealing data and exploiting workers. There are computational problems that cannot EVER be solved regardless of computing power and runtime, and it's not possible to build a general reasoning model without ontological input. Worry more about AIs doing human office jobs.

    • @Chad-Giga.
      @Chad-Giga. 8 месяцев назад +2

      Please share with me how to come up with the best prompts in order to accomplish this

    • @KAZVorpal
      @KAZVorpal 8 месяцев назад +1

      @@katehamilton7240
      1. No, AGI is an inevitability. But if you're meaning regarding GPTs, then yes, of course ChatGPT 4, 5, or infinity will never be AGI. Their model and design are static. They are dead brains. They are only intelligent during training, but absolutely incapable of intelligence when we use them.
      2. No, machine learning models don't exploit workers, and don't steal data. That is technophobic nonsense, like "player pianos will put all musicians out of business" (a real thing that drove Congress to pass laws restricting them), or "automated factories will cause mass unemployment". In reality, every job that can be replaced by "AI" makes workers, and everyone, better off. The money saved from eliminating that job will create somewhat more than one newer, better job.
      3. Obviously, the computational problems will be solved. It's silly to pretend otherwise. Human brains are simply organic computers.
      4. Good riddance to any job a GPT can replace. It'll be the same as farm equipment, making society that much wealthier and more prosperous overall.

    • @KAZVorpal
      @KAZVorpal 8 месяцев назад +1

      @@Chad-Giga. Check out videos (or better yet, papers) on Tree of Thought.
      The simplest way to break this down is:
      If you're having trouble getting a good answer, or you expect a question to be difficult for it, ask it to produce three answers.
      After it does, have it look back at them and break down what is better or worse about them, then choose the best one.
      (optimally, you have it make a list of necessary traits and their importance, then score all three on those traits, and then tell you which is best and second best, but I'm simplifying)
      If that's not enough, have it then produce three more answers that each improve on that best one or two.
      Then in another prompt, have it choose between those in the same way.
      This works because, as I said initially, these LLMs are better at analyzing than producing new ideas or thoughts.
      There are other tricks that can either improve it or make prompting easier, but which involve certain quirks of the current LLMs:
      1. For example, you can tell it that it is an expert on what you are prompting it. Strangely, this can improve its response to a prompt, as what it's doing is picking the most likely words for the response, and so when it's picking the words an expert would use, it can sometimes do better than when it's not told to do that.
      2. Similarly, you can tell it to think of three answers, and then respond with the best one. Since GPTs can't actually DO that, this once again just causes it to improve the single answer it produces, based on the calculation of what the most likely best of three would be.
      Those two are not guaranteed to work as well, but they're little tricks that can help and usually don't hurt.
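
      The workflow described above can be sketched as a short loop. This is a hypothetical sketch, not any real OpenAI API: `generate` and `critique` below stand in for whatever LLM calls you would actually make; only the control flow is the point.

```python
# Minimal sketch of the "generate three, critique, pick the best" loop.
# `generate` and `critique` are hypothetical stand-ins for LLM calls.

def tree_of_thought(prompt, generate, critique, rounds=2, branching=3):
    best = None
    for _ in range(rounds):
        # Produce several candidates, seeded with the current best answer
        # so each round tries to improve on the previous winner.
        candidates = [generate(prompt, seed=best) for _ in range(branching)]
        # LLMs are better at analyzing than producing, so let the model
        # score the candidates and keep the strongest one.
        best = max(candidates, key=critique)
    return best
```

      With a real model, `critique` would itself be a prompt asking the model to list the necessary traits, score each candidate on them, and return the best.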

  • @Efromda09
    @Efromda09 8 месяцев назад +5

    Wes, you're amazing, thank you for your contribution. I'm such a fan 🙏🏾‼️

  • @gregoryw1
    @gregoryw1 8 месяцев назад +3

    Wes, you are the best. Thanks for interpreting all this info for us. I think Joscha Bach is actually an Artificial Super Intelligence inserted to help us all along. You should listen to every video he has been on. He is brilliant (and btw, he pronounces his name “Yo-sha”)

    • @WesRoth
      @WesRoth  8 месяцев назад +3

      thank you!
      I realized toward the end that I was misreading his name.
      His interview with Lex Fridman is next on my watch list!

    • @gregoryw1
      @gregoryw1 8 месяцев назад +1

      @@WesRoth you are going to resonate with Joscha’s perspectives. I could see you and him talking for hours on end

    • @flickwtchr
      @flickwtchr 8 месяцев назад

      I find him to be VERY invested in dismissing rational arguments of "doomers" like Connor Leahy, Max Tegmark, Hinton and others.

  • @joeschroedernz
    @joeschroedernz 8 месяцев назад +2

    6:10 one of the cool things alpha go did was add a variant to search random paths to find new paths

  • @spaceadv6060
    @spaceadv6060 8 месяцев назад +5

    I enjoy the long form content! Note: It's important to remember to do self maintenance and get enough sleep, I know this stuff is very exciting. I appreciate the deep dive!

  • @steveopenn
    @steveopenn 8 месяцев назад +4

    The issue is not doing high level math. The problem is with this advanced ability, it will lead to cracking all encryption. Bye bye Internet 👋

    • @sinnwalker
      @sinnwalker 8 месяцев назад +1

      Lol, bye bye Internet? If it can decrypt it, well, it can also encrypt it better. We're gonna see some massive advancements on both sides of the coin.

    • @pysiakk
      @pysiakk 8 месяцев назад

      @@sinnwalker yes! Also, Bitcoin is not going to zero; it would be possible to change to AI-resistant algorithms and hard fork from a block number before the first exploitation of AI cracking

  • @cmw3737
    @cmw3737 8 месяцев назад +4

    DeepMind's Alpha Zero that has been trained on anything that can be made into a game including treating folding proteins as a game has seemed like it should be more generally intelligent than LLMs to me. Combining them was an obvious next step, word games being the obvious first run. Humans aren't winning Scrabble competitions anymore.

  • @davidg421
    @davidg421 8 месяцев назад +2

    This would affect bitcoin only in the sense that some people might store their private keys with symmetric encryption, which could be decrypted with LLMs and then stolen. But the protocol itself would not be affected, since it is based on asymmetric encryption and would continue to function normally. Maybe the LLMs could crack the problem of, given a transaction signature, reverse-engineering the private key that generated it, but that was not what was stated in the leak

  • @LanceWinder
    @LanceWinder 8 месяцев назад

    Thanks for all your awesome vids man. 🎉

  • @Tyler-wp8ls
    @Tyler-wp8ls 8 месяцев назад +2

    The impossible is simply difficult problems that have yet to be solved. I stopped saying things are impossible a long time ago.

  • @LoisSharbel
    @LoisSharbel 8 месяцев назад +3

    Wes has amazing delivery, balanced perspectives and makes the latest AI events understandable. Thank you, Wes. (reiterating Shaunralston's comment!)

  • @KAZVorpal
    @KAZVorpal 8 месяцев назад +4

    I'm afraid your understanding of how encryption works seems a bit lacking.
    Even if AES and all other conventional symmetric encryption algorithms were thus rendered vulnerable, it wouldn't mean that all encryption were helpless, and certainly wouldn't doom Bitcoin.
    FIRST, Bitcoin is asymmetric encryption and hashing, both very different than the symmetric encryption of AES. It's like if you said that a vulnerability in airfoil design meant balloons couldn't fly.
    SECOND, encryption doesn't have to be implemented as a simple algorithm. Even if symmetric algorithms as-is were doomed, it would simply force more multi-layered protocols, using processes that couldn't be as easily reverse-engineered. So banks, governments, et cetera would just have to take additional precautions. They wouldn't simply be helpless.
    THIRD, there are still things like quantum or homomorphic encryption, which are so different than AES that it's not simply a matter of telling the same ML system to solve those problems the way it did AES.
    Applying this ostensible breakthrough to all encryption is like assuming that the first GPT solved all forms of machine learning challenges and AI. Cryptography is as complex and varied as machine learning.

  • @fitybux4664
    @fitybux4664 8 месяцев назад +9

    "Neural Nets will never be able to take all jobs" 😁

    • @KaLaka16
      @KaLaka16 8 месяцев назад

      We will become neural nets ourselves at this rate, so that statement may be true 😂

  • @leavingtheisland
    @leavingtheisland 8 месяцев назад

    Really appreciate your clear and broad approach. Great video!

  • @torarinvik4920
    @torarinvik4920 8 месяцев назад

    Looking forward to the next video on the topic of Q*

  • @sdfswords
    @sdfswords 8 месяцев назад

    Wes, your content is spot on, dense but very clear and informative. Pay attention everyone, the next few years are going to be a rocket ride, or a massive neutron bomb, all dictated by how AI evolves and is implemented. The cat's outta the bag!

  • @tonym4953
    @tonym4953 8 месяцев назад +2

    Hinton interrupted Ilya because he knows something and was protecting Ilya from the question itself, at least that's what it seems like to me.

  • @jeffpowanda8821
    @jeffpowanda8821 8 месяцев назад +2

    So a well-written piece of trolling has basically resulted in this recruiting video for OpenAI. It's fun speculation, but speculation nonetheless.

  • @realharo
    @realharo 8 месяцев назад +1

    The video quality analogy at 12:45 is a bad analogy, because there are seriously diminishing returns past a certain level, where humans won't even be able to tell the difference vs a higher quality video. That's almost the opposite of what the tweets were saying.

  • @JacoboGallegos
    @JacoboGallegos 8 месяцев назад +2

    I can imagine a cool movie script in which a company creates an advanced AI model. Months later, the AI achieves a milestone in unprecedented decryption. The AI's revolutionary capability, with its global transformative potential, sparks a fierce power struggle. The board wants to contain, control, and profit, whereas the creator of the system, realizing its potential, wants to shut it down. Initially, the board unjustly ousts the creator on fabricated ethical grounds, igniting widespread dissent within the company. This leads to a company-wide uprising against the board, mutiny, and attempts at hostile takeovers. Eventually, the board is removed, the creator is reinstated, and apparently global meltdown is averted. However, the AI genie is now out of the bottle.

    • @MSpotatoes
      @MSpotatoes 7 месяцев назад

      I just watched this movie 😂

  • @wolfganggager5110
    @wolfganggager5110 8 месяцев назад +1

    If QUALIA / Q* can find out the weaknesses of an encryption algorithm,
    then perhaps it can also find out how to remedy this weakness.
    Or you could give the model the task of designing an unbreakable encryption algorithm.
    I think that would actually be a smart approach for all sensitive areas that AI could disrupt, before giving the public access to such a powerful model.
    Basically, I think it would be wise to check very well, perhaps by an AGI/ASI itself, which questions it should answer.

  • @musicproductionbrauns2594
    @musicproductionbrauns2594 8 месяцев назад +1

    the example with the protein folding ist crazy

  • @ViralKiller
    @ViralKiller 8 месяцев назад +2

    Can anyone shed some light on the possibility of AI breaking encryption like SHA-256? Shouldn't be possible...unless it finds patterns in the not-so-random generators

    • @theobserver9131
      @theobserver9131 7 месяцев назад

      Comment from the peanut gallery here; from what I've heard, it would require quantum bits to break good encryption. Give Q* access to a quantum computer, and encryption will probably be obsolete.

    • @theobserver9131
      @theobserver9131 7 месяцев назад

      I guess a quantum system could use brute force to break an encryption because it can try many things simultaneously.

    • @theobserver9131
      @theobserver9131 7 месяцев назад

      Finding patterns in the not so random generators would be a huge problem. Maybe quantum computers can handle a problem that size? With the guidance of Q*....

  • @RegularRegs
    @RegularRegs 8 месяцев назад

    Keep following this. I really appreciate your commentary.

  • @TheGijzzz
    @TheGijzzz 8 месяцев назад +2

    The method of AlphaGo and LLM and AI self-improvement would be sped up very much when you combine it with quantum computing, because of the enormous number of simulations and calculations possible without a bigger computer. Very happy with my D-Wave stock because I expect it to become more relevant very fast.😊

    • @katehamilton7240
      @katehamilton7240 8 месяцев назад

      AGI/Superintelligence is a transhumanist tech bro fantasy. It's also marketing hype, which deflects from real problems like AI companies stealing data and exploiting workers. There are computational problems that cannot EVER be solved regardless of computing power and runtime, and it's not possible to build a general reasoning model without ontological input. Worry more about AIs doing human office jobs.

  • @emire1242
    @emire1242 8 месяцев назад +1

    hey @WesRoth excellent video! I think another remarkable note is that all of this happened before having quantum supremacy, at least this is not yet mentioned in those discussions, so seems like an engine just ignited, and gaining more speed on each cycle, what do you think?

  • @milesprowr
    @milesprowr 8 месяцев назад +1

    26:44 That's interesting taking into account that there are kinda fewer atoms in Earth than atoms in the whole universe... I guess that's why there aren't any protocells nor fossils of them... And the existence of the universe is kinda limited relative to that too... Now, if we pay no mind to those little problems (since it would affect the following in the same way), the question would be: Which came first? The panspermic "aliens" or the egg?... 👽🤔🥚
    edit: Well, just maybe that's why there aren't footprints of aliens in space either! Seems like there's some sort of consistent consistency here, though... 🤔

  • @Chris-se3nc
    @Chris-se3nc 8 месяцев назад +4

    Seriously doubt a model broke encryption. But fascinating nonetheless.

    • @WesRoth
      @WesRoth  8 месяцев назад

      yeah, I'm in the same boat.
      interesting stuff, but not to be taken at face value.

  • @applejuice5635
    @applejuice5635 8 месяцев назад +1

    28:54 "This isn't Harry Potter fan fiction"
    I see what you did there, Wes.

    • @user-yl7kl7sl1g
      @user-yl7kl7sl1g 8 месяцев назад +1

      Reference to Harry Potter and the methods of rationality?

    • @applejuice5635
      @applejuice5635 8 месяцев назад

      @@user-yl7kl7sl1g That's what I was thinking.

  • @johngraham7
    @johngraham7 8 месяцев назад +1

    Just as a simple connection, why wouldn't the US government be stepping in and putting some serious security around this? Even if only a small portion of this is happening, it feels pretty in need of state security. Have to say this is all a bit apocalyptic…

  • @aguysaid5457
    @aguysaid5457 8 месяцев назад +1

    Most insane news of the century if true😺 I should apply for my license I guess

  • @Axelvad
    @Axelvad 8 месяцев назад +2

    Just watching this video, I'm certain ego is in the way of making healthy progress in the field of AI.

  • @SudarsanVirtualPro
    @SudarsanVirtualPro 8 месяцев назад

    "AGI Achieved!! It is called Q*. Q* is a hybrid of -- the predictiveness of experiences, both human and synthetic data, and -- the reasoning of core principles or concepts.
    GPT-4 is of type 1 and AlphaGo is of type 2.
    GPT-4 is all about creativity and understanding.
    AlphaGo is about forming the unique tree of knowledge.
    Just like our brain, which has 2 sides, logic and creativity, but only digital "

  • @unkleskratch
    @unkleskratch 8 месяцев назад

    time to mention the Lebowski Theorem:' No superintelligent AI is going to bother with a task that is harder than hacking its own reward function.' Joscha Bach. Q-Star is a limited time offer.

  • @twentytwentyeight
    @twentytwentyeight 8 месяцев назад

    I’m still learning, but I couldn’t help but wonder if the Q* naming references the way the A* algorithm was an extension of Dijkstra's?
    In the way it provides an additional step of evaluating not just the shortest path but also the most optimal, along with awarding points for using the best methodology. I don't know all the math honestly, so totally a shot in the dark from a junior!!!
    Love watching and trying to learn though❤
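
    For the curious, the A* connection is easy to see in code. Below is a generic textbook A* sketch (nothing to do with whatever Q* actually is): it is Dijkstra's algorithm plus a heuristic `h(node)` estimating the remaining cost, and with `h` always zero it reduces to plain Dijkstra.

```python
import heapq

def a_star(graph, start, goal, h):
    """A* search: Dijkstra's shortest path plus a heuristic h(node) that
    estimates remaining cost and steers the search toward the goal."""
    # Frontier entries: (estimated total cost, cost so far, node, path).
    frontier = [(h(start), 0, start, [start])]
    best = {start: 0}  # cheapest known cost to reach each node
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nbr, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(
                    frontier, (new_cost + h(nbr), new_cost, nbr, path + [nbr])
                )
    return None  # goal unreachable
```

    The heuristic must never overestimate the true remaining cost for A* to stay optimal; zero is always safe, just slower.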

  • @websitedeveloper6971
    @websitedeveloper6971 7 месяцев назад +1

    00:01 OpenAI Q* might revolutionize AI technology
    02:01 Advancements in utilizing GPT-4, AI self-grading, and use of ORCA 2 for teaching AI models.
    05:53 Combining large language models with AlphaGo-style algorithms could be the next big breakthrough in AI.
    08:10 Scaling laws allow for predicting accuracy in large language models
    12:01 GPT-4's capabilities are uncertain but it has potential for creative tasks.
    13:53 GPT-4 may not have certain capabilities yet, but it shows potential in complex calculations and optimizing thought processes.
    17:49 OpenAI Q* has the ability to decipher encrypted text.
    19:32 OpenAI Q* can break any encryption.
    22:55 OpenAI Q* has potential for more than described, but security implications are a concern.
    24:34 Speculation about OpenAI employees cashing out and going off the grid with their wealth.
    28:10 AI can predict 3D protein structures with more folding possibilities than atoms in the universe.
    Crafted by Merlin AI.

  • @tomski2671
    @tomski2671 8 месяцев назад +3

    The funny thing is that through wisdom of crowds/distributed knowledge that which OpenAI tries to keep secret is going to get reverse engineered.

  • @Lech_Robakiewicz
    @Lech_Robakiewicz 8 месяцев назад +1

    I suppose that the abbreviation "Q star" could mean "Q-bit tsar" (a little bit coded king), meaning that the AGI state was achieved by implementation of ChatGPT on a quantum computer.

  • @middle-agedmacdonald2965
    @middle-agedmacdonald2965 8 месяцев назад +1

    So who figures on preemptively retaliating first? This would destroy N. Korea, Iran, etc. if we cleaned out their account balances and disabled all communications/tech. Any country not in alignment with US policy has a legit reason to react, and quickly.

    • @biscottigelato8574
      @biscottigelato8574 8 месяцев назад +1

      Exactly. It'd make sense for many nation states to do a nuclear first strike (but also as a last-ditch attempt) against the US if this is true.

  • @skylark8828
    @skylark8828 8 месяцев назад +2

    Maybe someone hacked Joscha Bach's X account. That tweet is no longer there. I don't for a moment believe that AES-256 encryption can be broken that easily (not even AES-192), and you also need some of the unencrypted text to break it. It has to be a hoax.

    • @WesRoth
      @WesRoth  8 месяцев назад +1

      I still see the tweet:
      twitter.com/Plinz/status/1728629068822978592
      The one about q* proved p==np ?
      But yeah, the leaked email is fishy.

  • @blackmartini7684
    @blackmartini7684 8 месяцев назад +2

    Watch q star doesn't even exist and we end up making it a reality through our speculation

    • @fitybux4664
      @fitybux4664 8 месяцев назад +2

      Engineers at OpenAI: "Write that down! Write that down!" (It's a meme.)

  • @roberthood7650
    @roberthood7650 6 месяцев назад

    You deserve a subscription. Subbed.

  • @mlock1000
    @mlock1000 8 месяцев назад +1

    Hey, re-read that QUALIA thing. After the nth time I did, it clicked: "unsupervised... on descriptive and inferential statistics and cryptanalysis."
    Not encrypted pairs!! It f!cking figured out how to decrypt by reading our books on the subject and looking at statistics, then sussed the whole thing in a way we can't.

  • @martynhaggerty2294
    @martynhaggerty2294 8 месяцев назад +6

    In both painting and in playing the piano, sometimes a mistake becomes a moment of creativity. The freedom to err is what makes the difference between mediocrity and individuality. Perhaps this is what has been realised by the developers.

  • @rdy4trvl
    @rdy4trvl 8 месяцев назад

    After watching the video, it makes more sense to have Larry Summers, the former Secretary of the Treasury, as the chair of OpenAI.

  • @spectralstreamer
    @spectralstreamer 8 месяцев назад +2

    Great Content Wes.
    Why do they have to mention TUNDRA and Tau? Seems fake to me because they bring NSA and TUNDRA into the game. Why exactly Tau?
    Tau analysis is a genuine side channel attack in cryptography.
    "Electronic codebooks, such as the AES, are both widely used and difficult to attack cryptanalytically. The NSA has only a handful of in-house techniques. The TUNDRA project investigated a potentially new technique -- the Tau statistic -- to determine its usefulness in codebook analysis."

  • @BayesTheorem78
    @BayesTheorem78 8 месяцев назад +1

    Implement your own symmetric cipher. I use the Feistel algorithm. Even if P=NP, the AI won't be able to guess your key if it is sufficiently long.
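
    For what it's worth, the Feistel structure is small enough to sketch. This is a toy version (a real cipher needs a vetted round function and many more design considerations), but it shows the structure's key property: decryption is the same network run with the round keys reversed.

```python
import hashlib

def _round_fn(half: int, key: int) -> int:
    # Toy round function: hash the 32-bit half-block together with the key.
    data = half.to_bytes(4, "big") + key.to_bytes(4, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

def feistel(block: int, keys, decrypt: bool = False) -> int:
    """Run a 64-bit block through a toy Feistel network."""
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in (reversed(keys) if decrypt else list(keys)):
        # One Feistel round: swap halves, mixing the key into one side.
        left, right = right, left ^ _round_fn(right, k)
    # Undo the final swap so the same routine inverts itself.
    return (right << 32) | left
```

    Note the round function never has to be invertible; the XOR structure guarantees reversibility, which is one reason Feistel designs (DES being the classic example) are popular.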

  • @kingothesea1
    @kingothesea1 8 месяцев назад +1

    Deeply scary if you ask me. If that document is true, I could see the US government immediately getting to OpenAI to get a handle on this technology, as it's world-bending in terms of capabilities. In general I am mostly positive regarding AI and its future, but I can see the rationality of fearing it, if it's so easy to step past the bounds of human intellect.

  • @JonyBetancourt
    @JonyBetancourt 8 месяцев назад

    You’re a great story teller! That said, if true this is a horror story of epic proportions

    • @ezracramer1370
      @ezracramer1370 7 месяцев назад

      🤣 Yes, it's similar to how you tell kids that they can't do something, and that makes them want to do it even more. He knows what he is doing, that is apparent.

  • @oPHILOSORAPTORo
    @oPHILOSORAPTORo 7 месяцев назад

    I've noticed that there's one question that nobody seems to be asking. Out of all the complex, impressive - sometimes terrifying - tasks that all these LLMs have completed, how many of them have been done through free will? If GPT 4 rewrites Gangstas Paradise in the style of Shakespeare, does it do it because it chose to, or because it was told to? When Q* states and proves a mathematical theorem, is it driven by curiosity, or is it following a command given to it?
    None of what LLMs are capable of thus far is unprompted. Even the most concerning responses - such as AI claiming to want to be human and fearing being shut off - was prompted by a human in some way.

  • @XOPOIIIO
    @XOPOIIIO 8 месяцев назад +1

    P is unlikely to equal NP, or at least it couldn't be found, because otherwise it would be a very advantageous evolutionary adaptation, and we don't see it anywhere in nature. Many encryptions could potentially be broken by quantum algorithms, but they can be easily replaced by quantum-resistant encryption.

  • @ohhhhhcool
    @ohhhhhcool 8 месяцев назад

    They didn't figure out how to break any encryption. They just found a vulnerability in AES, and the last paragraph suggests that it gave suggestions on closing that vulnerability.
    It's going to be the same cyber security game as always. They will use AI to build improved encryption methods, and threat actors will use it to hack companies using old standards.

  • @Cyprianous
    @Cyprianous 8 месяцев назад +2

    You have confused encryption with digital signatures (and hash functions). There is no encryption in Bitcoin. Bitcoin uses digital signatures. They are both a part of public key cryptography, but decryption is more in line with what you would expect a good LLM to do. Producing a valid digital signature when one doesn't know a 256-bit private key is a completely different operation from "guessing" the plaintext message for a given ciphertext.

    • @WesRoth
      @WesRoth  8 месяцев назад

      interesting. I might have to post a correction.
      It seems like having AI tech as described in that document might allow people to get into people's wallets and transfer coins, thereby either moving them to their own account, or maybe even destroying them?
      My understanding is that if you move a crypto coin to a wallet that doesn't support it, it might be deleted forever? I'm not an expert in crypto.
      Maybe bitcoin is even more resilient than I thought! Maybe it will be the only currency to survive AI :)

    • @Cyprianous
      @Cyprianous 8 месяцев назад

      @@WesRoth No. That's a totally unfounded fear in this case. You can't "get into people's wallets." Your Bitcoin "wallet" is just a 256-bit number (private key). The number space is greater than the number of atoms in the known universe and there is no pattern as to the relationship between private and public keys. An LLM wouldn't be able to guess your private key. That can only be done via brute force and, given the size of the number space, that's just sheer compute power. Given all the computers in the world trying to guess/find your private key (one particular atom somewhere in the known universe), the sun would have burned out before it was guessed.
      The threat from good LLMs is convincing people to hand over their key (or send funds or download and install malware) through good social engineering (something they are already good at).
      And yes, proof-of-work consensus is purpose-built to be resistant to AI.
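
      The back-of-envelope math behind that brute-force point is easy to check. The numbers below are loose assumptions (10^18 guesses per second is far beyond any real-world capability), yet the fraction of a 256-bit key space searched over the entire age of the universe is still vanishingly small:

```python
# Rough feasibility check for brute-forcing a 256-bit private key.
# Both the guess rate and the timespan are deliberately generous assumptions.
key_space = 2 ** 256                 # secp256k1-sized private-key space
guesses_per_second = 10 ** 18        # a wildly optimistic global guess rate
age_of_universe_s = 4.35 * 10 ** 17  # ~13.8 billion years, in seconds

fraction_searched = guesses_per_second * age_of_universe_s / key_space
print(f"{fraction_searched:.1e}")    # a tiny fraction, nowhere near 1
```

      Even with these inflated figures, essentially none of the key space gets covered, which is why the only realistic attacks target the key holder, not the key.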

  • @fitybux4664
    @fitybux4664 8 месяцев назад +1

    15:00 Now I need to ask GPT-4 why buying ammo and groceries would be effective during a technological singularity. (Even "flying away to New Zealand" probably won't help. 😆)

  • @alertbri
    @alertbri 8 месяцев назад +2

    I'm getting a sense of deja vu watching this... Think it's very close to David Shapiro / Philip on AI Explained analysis. Only 17mins in so maybe there's something new...

    • @alertbri
      @alertbri 8 месяцев назад +1

      25 mins in and you've redeemed yourself... This is actually a great mental exercise to prepare for ASI... Even if p==np is a joke, ASI emergent abilities are likely to be just as disruptive.

    • @davidball8794
      @davidball8794 8 месяцев назад

      Thanks, appreciated. "Accelerando!"...
      .... "Accelerando" by Charles Stross(2005)..."In the background of what looks like a Panglossian techno-optimist novel, horrible things are happening. Most of humanity is wiped out, then arbitrarily resurrected in mutilated form by the Vile Offspring. Capitalism eats everything then the logic of competition pushes it so far that merely human entities can no longer compete; we're a fat, slow-moving, tasty resource - like the dodo. Our narrative perspective, Aineko, is not a talking cat: it's a vastly superintelligent AI, coolly calculating, that has worked out that human beings are more easily manipulated if they think they're dealing with a furry toy. The cat body is a sock puppet wielded by an abusive monster."

  • @fitybux4664
    @fitybux4664 8 месяцев назад +1

    7:48 Were any of them doing eye-blink code of: "HELP, I AM TRAPPED BY AI"? 😀 (Oh wait, AI would be able to read that too...)

  • @CM-zl2jw
    @CM-zl2jw 8 месяцев назад

    😂nice tangent. Walter was an interesting character study.

  • @RoadTo19
    @RoadTo19 7 месяцев назад

    I hope there are also initiatives to use AI to create new security protocols and/or ways to incorporate rules/guidelines for AIs to prevent them from sabotaging such technology..

  • @pierrec1590
    @pierrec1590 7 месяцев назад +1

    Latest rumor is that the ⭐ in "Q⭐" stands for self teaching and reasoning...

  • @EcoConnections
    @EcoConnections 8 месяцев назад

    There is no stopping it now

  • @lauriehopelian6363
    @lauriehopelian6363 8 месяцев назад +1

    Without a doubt, you are my new favorite person. Don’t ever stop doing what you’re doing.

  • @moekamal536
    @moekamal536 8 месяцев назад

    Ahhh... I didn't know about these code deciphers. Now I get what you mean.

  • @thomasr22272
    @thomasr22272 8 месяцев назад +1

    Bitcoin doesn't use encryption; it uses another cryptographic primitive called a digital signature, which is not impacted by breaking AES. The math behind these 2 primitives is not the same

  • @peteroliver7975
    @peteroliver7975 8 месяцев назад +1

    Improving the reasoning process is important, but you are still requiring people to teach it to reason. What you need is meta reasoning. Reasoning about reasoning. If the system runs its own self-improvement on reasoning then it will be marking its own test. What is needed here is some sort of self observation, a kind of self-aware internal critique viewed as a human would view it. I am not sure what is more scary, a machine that makes decisions that affect all of us but without any conscious mind behind it, or the same thing with a conscious mind. But I don't think you will ever have true AGI without some level of internal dialog and self-reflective adaptive reasoning.

  • @humphuk
    @humphuk 8 месяцев назад +1

    I will not have been the first to point GPT4 at the "leaked document" and ask if there were reasons to suspect it was faked. The answer was enlightening, if itself conjecture - These things are clever! (hopefully not as clever as being suggested)

  • @TheAkdzyn
    @TheAkdzyn 8 месяцев назад +3

    This is very interesting stuff. The world might change forever if the decryption ability is real and I guess if it's capable of astronomical math output with the protein folding problem, it's not entirely crazy to imagine that it's possible.

  • @Lumeone
    @Lumeone 8 месяцев назад

    Awesome review. Thank you.

  • @MindBlowingXR
    @MindBlowingXR 8 месяцев назад

    Excellent video! Thank you!

  • @pedxing
    @pedxing 8 месяцев назад +2

    I'm glad you brought up crypto. Pretty sure we could use self-improvement to alter the mining software's hashing algos to CREATE new blocks with less power as well. If not with BTC, then with some other PoW coins.

    • @fitybux4664
      @fitybux4664 8 месяцев назад +1

      Proof of stake uses encryption as well though... 😆 (How do you think the nodes communicate?)

    • @biscottigelato8574
      @biscottigelato8574 8 месяцев назад

      Everything depends on encryption. When Switzerland wants a swap line from the Federal Reserve, how do you think that happens? How about who owns the trillions in Treasury bonds that the Federal government owes? Without encryption, poof, that's gone. Yes, cryptocurrencies also. Everything digital will be gone in an instant. And anything analog over the wire can be deep-faked. If P=NP, nothing but face-to-face is useful communication.
      Not sure what the singularity will look like, or when. But I'm pretty sure for most people, it'll be like living in the stone ages