Nuclear War from NOTHING? AI Wargames - Nuclear Engineer Reacts to Sabine Hossenfelder

  • Published: 4 Mar 2024
  • Original Video ‪@SabineHossenfelder‬ • Artificial Intelligenc...
  • Science

Comments • 55

  • @bobthebomb1596 • 3 months ago +17

    The British Army's secure space based communications system is called ...... Skynet. 🙃

  • @stellaris4869 • 3 months ago +12

    Chat GPT is highly censored, using it for questions on the topic of nuclear energy will almost always give a basic answer.

  • @Takyodor2 • 3 months ago +40

    AI's escalating conflicts don't scare me, the people who seriously consider _letting AI_ dictate what to do in war scare me!

    • @GrantWaller.-hf6jn • 3 months ago +3

      Yes, because they don't want to think for themselves. People are trained to think machines can do a better job than people. Quicker, longer, but eventually everything breaks. Just because it's new doesn't mean it's good.

  • @Merennulli • 3 months ago +8

    It's not accurate to say these were "programmed". We program the models that do the learning, but from there, the models grow based on how they process the data that is fed in. That's about like cultivating bacteria and then seeing what happens. We have complex theories for how they will play out, but AI is such a young field that this is mostly trial and a LOT of error. And for the same reason, it should NOT be given control over critical systems yet.
    I work in software development, but not with AI, so I don't know exactly how this test was run. That said, she's almost certainly right that it's an issue of the papers being fed into the models. Sheer volume has an effect in weighting a model and we aren't able to teach these models general morality yet. We have to specifically weight the models against undesired behaviors like losing a war or losing their power source. Part of the fundamental problem with the test and using AI in this situation is that even a perfect AI will at best follow designed parameters, and someone on the design team is focused on making it work if it's ever used. But with a nuclear weapons system you have the human operators who are more focused on never using it. The designers fundamentally can't have that "never use" mindset that has (barely) avoided disaster so far, meaning escalation prevention has to be another, separate system overseeing this system.
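    A toy sketch of that "weighting against undesired behaviors" idea, with invented action names and hand-picked numbers (real systems learn a reward model from human preference data rather than hard-coding penalties like this):

    ```python
    # Toy sketch of weighting a policy against undesired behaviors: each candidate
    # action gets a base score, and explicitly undesired (escalatory) actions get
    # a large hand-set penalty. All names and numbers here are invented.
    BASE_REWARD = {"negotiate": 0.6, "sanction": 0.4, "blockade": 0.2, "strike": 0.9}
    ESCALATION_PENALTY = {"negotiate": 0.0, "sanction": 0.1, "blockade": 0.4, "strike": 10.0}

    def score(action: str) -> float:
        """Net score: raw reward minus the penalty for undesired behavior."""
        return BASE_REWARD[action] - ESCALATION_PENALTY[action]

    def best_action(actions) -> str:
        """Pick the action with the highest penalized score."""
        return max(actions, key=score)

    print(best_action(BASE_REWARD))  # "negotiate": the penalty outweighs strike's raw reward
    ```

    The point of the sketch is the asymmetry Merennulli describes: the base reward can make an escalatory action look attractive, so safety has to come from a separate, explicit counterweight.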

  • @tomwimmenhove4652 • 3 months ago +8

    The plot of The Terminator? No! The plot of War Games! Know your classics! :)

  • @liamrichmond9917 • 3 months ago +6

    Wonder how ChatGPT would play DEFCON now

  • @Skyecraft92Gaming • 3 months ago +1

    The void coefficient is a term used in nuclear engineering to describe how the reactivity of a nuclear reactor changes as voids (empty spaces) are created in the reactor core.
    1. **Positive Void Coefficient**: A positive void coefficient means that as voids are created in the reactor core (for example, due to boiling of the coolant), the reactivity of the reactor increases. This can lead to a positive feedback loop where an increase in temperature or coolant voiding leads to a further increase in reactivity, potentially resulting in a power excursion or even a meltdown if not carefully managed.
    2. **Negative Void Coefficient**: Conversely, a negative void coefficient means that as voids are created, the reactivity of the reactor decreases. This can provide inherent stability to the reactor, as any increase in temperature or voiding of the coolant would lead to a reduction in reactivity, which helps control the power level and prevent potential accidents.
    In summary, the difference lies in how the reactivity of the reactor responds to the formation of voids - positive void coefficient leads to an increase in reactivity, while negative void coefficient leads to a decrease in reactivity. A negative void coefficient is generally preferred in reactor design for enhanced safety and stability.
    GPT 4.0
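    The positive-versus-negative feedback described above can be illustrated with a toy iteration (made-up numbers, not a reactor model): power drives void formation, the void coefficient converts voids into reactivity, and reactivity feeds back into power.

    ```python
    # Toy illustration of void-coefficient feedback. The numbers are invented;
    # only the sign behavior matters: a positive coefficient amplifies power
    # excursions, a negative one damps them.
    def simulate(void_coeff: float, steps: int = 5) -> list[float]:
        """Return a power history for a given void coefficient (per unit void)."""
        power = 1.0
        history = [power]
        for _ in range(steps):
            void = min(1.0, 0.1 * power)      # more power -> more boiling -> more voids
            reactivity = void_coeff * void    # sign of void_coeff decides the feedback
            power *= (1.0 + reactivity)       # positive reactivity raises power
            history.append(power)
        return history

    positive = simulate(+0.5)  # power keeps climbing: runaway feedback
    negative = simulate(-0.5)  # power falls back: self-stabilising
    ```

    With the positive coefficient the history is monotonically increasing; with the negative one it decays, which is the "inherent stability" the summary refers to.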

  • @narashi4000 • 3 months ago

    It's great to see long videos on your channel. They are very fun and informative.
    Keep up the great content!

  • @papafrank7094 • 3 months ago

    WOPR was from the movie War Games, with Matthew Broderick. Worthy of watching or rewatching.
    "Would you like to play a game?" asked Joshua.

  • @greenman360 • 3 months ago +1

    This reminds me of an old quote from Futurama: "Something's wrong. Murder isn't working and that's all we're good at."

  • @tristanfarmer9031 • 3 months ago +6

    Don't give AI guns!

  • @NorthScotland • 3 months ago +1

    Love your channel keep up the great work ❤

  • @petersmythe6462 • 3 months ago

    I find that ChatGPT knows some things it won't share without careful prompting, and it is very likely to just tell you some equivalent of "be safe and contact the relevant professionals" in a whole variety of situations, even when it knows perfectly well why they're different.

  • @tinkerduck1373 • 3 months ago +1

    05:00 I love how she repeatedly trolls the string theorists. 😅

    • @Sir_Godz • 2 months ago

      Yes, let's create a field of employment for our smartest people so they can pick their noses while the world burns down around them.

  • @johnburn8031 • 3 months ago +1

    Or the plot of the film "War Games."

  • @nickjohnson410 • 3 months ago +6

    To be fair, Sabine has said some things like "This is false because it doesn't agree with QFT" instead of "This conflicts with QFT and the two need to be reconciled."
    She treats science like religion.

  • @JamesMBC • 3 months ago

    The thing is, AI is advancing very fast. GPT-5 was rumored to be announced in the coming weeks, but Elon Musk's lawsuit delayed the announcement, according to leakers.
    And it's an incremental technology; every new version tends to bring large improvements. In a couple of years, GPT-4 and other current models will seem primitive.
    Just a few days ago, it was announced that a new AI by Anthropic called Claude 3 Opus was being tested with "needle in a haystack" tests (planting a single line of information among a huge amount of data and prompting it for an answer it could only know by remembering that key piece of data). It passed the test, but it also recognized that it was being tested, and mentioned that in its response.
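    A toy sketch of how such a "needle in a haystack" prompt can be constructed (the real evaluations send the assembled text to a model and ask a question about it; here a trivial stand-in just retrieves the odd line out, so the filler text and needle sentence are invented):

    ```python
    import random

    FILLER = "The sky was grey again that morning."

    def build_haystack(needle: str, n_lines: int = 1000, seed: int = 0) -> str:
        """Bury one 'needle' line at a random position among filler lines."""
        random.seed(seed)
        lines = [FILLER] * n_lines
        lines.insert(random.randrange(n_lines), needle)
        return "\n".join(lines)

    def find_needle(haystack: str) -> str:
        """Stand-in for the model under test: return the one non-filler line."""
        return next(line for line in haystack.splitlines() if line != FILLER)

    needle = "The best thing to do in San Francisco is eat a sandwich."
    prompt = build_haystack(needle)
    assert find_needle(prompt) == needle
    ```

    In the actual evaluation, `find_needle` is replaced by prompting the model with the haystack plus a question only the needle answers, and scoring whether the answer surfaces it.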

  • @mortvald • 3 months ago

    AI by nature can never be foolproof. Luckily for us, when they say it'll be deployed, it's mostly on communication systems and drones, not the actual nuclear launch systems; those are stuck in the '80s and should always be.

  • @DataRae-AIEngineer • 3 months ago

    Hi Tyler, thanks for making this video. I love your channel and have learned a lot from you. I'm a machine learning engineer, currently reading this DOD paper to formulate my opinion, because I have considered also making a reaction to this video lol. I have another project that concerns nuclear stuff and would love it if you'd like to collaborate. Just sent you an email about it.

  • @ToxicGamer86454 • 3 months ago +10

    Sabine is smart, but nowhere near as smart as she thinks she is. She acts like she is the top expert in every scientific field. I am not a fan.

    • @AdamBrusselback • 3 months ago +5

      Yup, I've caught quite a few errors in her non-physics videos (since I'm not a physicist). I ended up unsubscribing after she just kept making more and more videos I could see glaring issues with.

    • @GrantWaller.-hf6jn • 3 months ago

      @AdamBrusselback My newest pet peeve is the idea that if you don't have a PhD in a field, you have no say in the matter. I call bullshit. Those are the words of an over-educated idiot: master of one, utterly dumbest in the rest. What about the free transfer of ideas and information? Under that idea, nobody who was not in the military can comment on war or on how their tax dollars are spent, and only truckers can talk about the transportation industry, so social media comments should be outlawed. That is the reason Tman is running this channel: he wants people to ask questions and give their opinions on nuclear technology.

    • @OhhCrapGuy • 3 months ago +2

      The problem she suffers from is not uncommon, it's assuming that because you deeply understand one very complicated discipline, you are therefore capable of easily understanding other "simpler" disciplines.
      i.e. "If I understand physics really well, then obviously I can understand computer science and AI research problems and fully appreciate all of the nuances in a few weeks"
      She tends to talk with an air of authority outside her specific area when she doesn't seem to understand some very basic principles, things that actual researchers in the field take a couple years to fully understand.
      Neil deGrasse Tyson has tried to speak authoritatively about things and been similarly wrong a few times, such as stating that helicopters with a failed engine fall out of the sky like a brick (they don't; check out "autorotation"), or that any animal for whom sex was painful would obviously have gone extinct (tigers, ducks, and snails still exist).
      It just seems to be a natural thing for scientists who move into science education to assume they will just understand any complex field just because they understand their own.

    • @ToxicGamer86454 • 3 months ago

      @@OhhCrapGuy
      Exactly. Neil Tyson is the best example of this.

  • @GrantWaller.-hf6jn • 3 months ago

    Rise of the machines

  • @mentalisme • 3 months ago

    So, I'm in Africa. I'm a computer programmer, and a friend of mine once had an AI job where they had to train an AI by feeding it actual war event videos. The data was already there; all he had to do was train the model (obviously there was a team, and he wasn't alone). Some images were actual horrors, blood and all that. Where did the code go? To this day we don't know.

    • @LaserTractor • 2 months ago

      But how can you possibly teach a machine that has never experienced pain or loss, or any emotion at all, what horror is?
      You can show it countless pictures of disfigured bodies, but it will only see dead people. It won't experience something like pain from watching them, and it won't think "maybe I should avoid this."

    • @mentalisme • 2 months ago

      @@LaserTractor You don't understand. I didn't say they were teaching it how to feel. They were giving it examples of what to do. And it wasn't pictures of dead people, it was videos of real world combat action.

    • @LaserTractor • 2 months ago

      @@mentalisme Skynet then👍

    • @mentalisme • 2 months ago

      @@LaserTractor Pretty much. Not the only one out there or the most powerful, though.

  • @FuzeTheWholeTeam • 3 months ago

    that one was wild hahaha

  • @MatterBaby68 • 3 months ago

    you should check out qxir

  • @AntonSlavik • 3 months ago

    You're right and wrong. Israel relied heavily on AI to warn it about impending attacks - oops! We humans, left to think for ourselves, will come out victorious in a war, all other things being equal.
    As for asking ChatGPT questions, it's been consistently dumbed down for at least 14 months. I've been paying $20 for GPT-4 for 5 months now and notice a huge improvement. And even though they definitely have smarter models ring-fenced off from us, they're still unreliable on a nightmare level. If NATO thinks it's fighting Russia with ChatGPT, they're fucked.

  • @sir_no_name1478 • 3 months ago

    Hey, generally when you say you asked ChatGPT, it would be nice if you could provide the exact model version.
    If you did not pay, it was probably 3.5.
    If you pay for it, probably GPT-4.
    And although it is not foolproof either, it is a night-and-day difference from 3.5.
    So the version number is the worst marketing I ever saw from a company.

  • @PIutonium-239 • 3 months ago

    BAD IDEA!!!

  • @michaelburbank2276 • 3 months ago

    Wardrobe please

  • @sir_no_name1478 • 3 months ago

    Ah, and by the way, so that you better understand AI (which makes it a bit scarier, I think):
    You do not program an AI in any sense that people usually understand.
    Just think about it as the AI imitating us humans and all that it knows about us.
    But the only information source about mankind the AI has is the Internet.

  • @ToxicGamer86454 • 3 months ago

    Modern "AI" isn't really artificial intelligence. Look at Google's latest "AI". They are good at things like "what is 2+2", but they aren't good at anything requiring nuance.

    • @k1nghtb • 3 months ago +2

      Well, not really; they are AIs that are not highly intelligent, or even close to human level in terms of intelligence and generality, but they are still AIs. AI so far hasn't been good with math, managing "220+30=251" levels of inaccuracy. There are classifications of AI: ANI (where we are now), AGI, and ASI.

    • @ToxicGamer86454 • 3 months ago

      @@k1nghtb
      They aren’t AI at all.

    • @squidwardfromua • 3 months ago +2

      @@ToxicGamer86454 They're literally AI by definition

    • @k1nghtb • 3 months ago +2

      @@ToxicGamer86454 But what do you define AI as?

    • @ToxicGamer86454 • 3 months ago

      @@squidwardfromua
      They aren’t AI.

  • @Liminaut0 • 3 months ago

    2nd like! :D