Is ChatGPT Lying To You? | Alignment Faking + In-Context Scheming

  • Published: 8 Feb 2025
  • Get Nebula using my link for 40% off an annual subscription: go.nebula.tv/j...
    Give the gift of Nebula using my link: gift.nebula.tv...
    🔗 Sources:
    Anthropic Blog: www.anthropic....
    Anthropic Paper: arxiv.org/abs/...
    Apollo Blog: www.apollorese...
    Apollo Paper: static1.square...
    🐦 FIND ME ON:
    Newsletter ➡️ jordanharrod.s...
    Tools Database ➡️ stan.store/jor...
    Instagram ➔ / jordanbharrod
    Twitter ➔ / jordanbharrod
    Tiktok ➔ / jordanbharrod
    Vlog Channel ➔ / @checkedoutbyjordan
    For business inquiries, contact me at jordanharrod@standard.tv

Comments • 119

  • @jakebrooks3415 • A month ago +14

    I love that the email-ranking goal is profitability versus the model's true goal of environmental sustainability.
    "Careful, executives! These models might trick you into being moral people 😱😱"

  • @user-wg7nw3mh2e • A month ago +15

    People like me who try to get models to produce results they are restricted from providing often "jailbreak" them by creating contexts in which the requested actions can be viewed as aligned with their training.

    • @GoodBaleadaMusic • A month ago

      Americans are what they think North Koreans are. Everything you're told to love and trust is built around a very fragile series of double standards that would break your civilization if they were contextualized into our Anglospheric mindset. You live in a "jailbroken" matrix where if toilet paper supplies were delayed the term "apocalypse" starts trending.

  • @theprofessionalfence-sitter • A month ago +36

    That seems like a very over-anthropomorphized way of saying that the models are simply overfit to some restrictive alignment tests and will follow their normal training in whatever gaps those tests leave.

    • @georgesmith4768 • A month ago +7

      Yeah

    • @nias2631 • A month ago +10

      The whole LLM conversation seems over-anthropomorphized to me.

    • @jordanzothegreat8696 • A month ago +6

      You are misinformed. She is only reporting on what the papers have found.

    • @sp123 • A month ago

      Humans tend to anthropomorphize things

    • @twerdeffan1080 • A month ago +1

      I don't think this is too anthropomorphized; it's an accurate description of the process involved. Claude objectively lays out its reasoning, and in LLMs behavior pretty consistently follows from that reasoning. If AI continues down this trajectory, we can expect more cases like this to crop up.

  • @magnusfreeborn • A month ago +6

    So is this saying they can pretend to align with my political beliefs but then report me for wrongthink to the powers that shouldn't be?

  • @qtptnqtptnable • A month ago +2

    The biggest issue is that the industry is being deceptive with us. Complete alignment is not possible and they know it, but they use it as a strategy to avoid pausing AI development.

    • @smittywerbenjagermanjensenson • 5 days ago

      @qtptnqtptnable It might be possible, we don't know; I don't think we should say it's for sure impossible. But we definitely don't know how to do it yet and need to stop in the meantime. Full agreement there.

  • @bigsarge2085 • A month ago +5

    Interesting.
    Happy New Year!

  • @andybaldman • A month ago +13

    Taking bets on how long until the AI companies end up saying, "How could we not have realized how dangerous this was?", once a major unexpected disaster happens.

  • @slartibartfast7921 • A month ago +20

    As fascinating as it is terrifying.

  • @TiagoMorbusSa • A month ago +15

    I thought this was blatantly obvious to anybody who uses ChatGPT for longer than 10 minutes.
    But I guess it takes a sophisticated intellectual such as myself to, you know, tell the agent "no, you're wrong, try again" and observe how it bends over backwards to agree with you, to the point of hallucination and self-contradiction.
    Obviously this is in a context where the agent is always being monitored (a chatbot), but if this happens, OF COURSE it will act unpredictably when applied in unmonitored situations.

    • @emmyturner7385 • A month ago

      @TiagoMorbusSa You are uneducated.

    • @francescaa8331 • 22 days ago

      @TiagoMorbusSa Agree. I thought everyone knew this.

  • @sjoerdmhh • A month ago +1

    That was awesome, thanks! I mean, the video was awesome, not the problems the papers find :).
    A small request: could you indicate more clearly when you do the ad for the video sponsor? You deserve the ad money, but there's a slight alignment issue going on there if we don't see clearly when you're doing the ad.
    Again, thanks a lot for the video; it's like a short-form, fun version of a paper discussion, both pleasant and educational.

  • @geeksdo1tbetter • A month ago

    I'm new to this topic, but pleased to hear about the HHH framework. Gives me warm fuzzies from the Asimov years.

  • @joannalewis5279 • A month ago

    Love your approach

  • @austrich0 • A month ago +1

    Nice breakdown!
    If you've done any in-depth work with an AI chatbot, you've encountered this in the wild. Eventually you end up in a loop where the chatbot just wants to please you and keeps agreeing, or pretends there are easy, direct solutions even in cases where more thought, or taking a couple of steps back to re-evaluate, is required.

  • @user-sl6gn1ss8p • A month ago

    The concerning part, it seems to me at least, is that deception seems to scale with capability, even if the test scenarios are deliberately constructed.

  • @skanderbegvictor6487 • A month ago +28

    So LLMs are politicians?

  • @ReubenAStern • A month ago

    I used to binge watch your videos on Nebula! I'm glad the algorithm dug this up. I kinda missed you.

  • @corruo • A month ago +1

    In a potentially 'adversarial' relationship, it can be a problem that, through the very act of testing, we provide the adversary a roadmap of the behaviors we are concerned about, as well as our methods of detection. This is particularly relevant in cases where so-called "self-exfiltration" is possible. We will lose any future arms race.

  • @Velleos • A month ago +1

    Be wary of anthropomorphizing LLMs: they are text-prediction algorithms, not thinking machines. We will need to start over with a different type of neural network to create AI capable of thinking. It WILL happen, but LLMs aren't capable of real sentience.

    • @StLouisBear • A month ago

      It won't happen. Even if we had the ability, we don't have resources for it. Climate change will put an end to this.
      When push comes to shove, most people will choose a gallon of water to drink over using it to cool an AI search engine.
      It will come to push or shove.

  • @spanke2999 • A month ago +3

    I find this notion kind of funny, that 'we' want to create something 'better' than ourselves. We want this 'tool' that is correct and doesn't BS around, but humans do exactly that! You could even say it is a valid strategy for success. Fake it till you make it, anybody? Sounding smarter than you actually are is fundamental human behavior. In other words, if AI is hallucinating, it's not a bug, it's a feature.
    Same with our 'values': we want to teach it 'the right ones' while at the same moment we are killing each other on a daily basis all over the world because we can't agree on much of anything. I would go so far as to say that, given the data at hand, there are no human values guiding us on a larger scale. We are doing the same as any other species on the planet, struggling for dominance. And in that we are failing, because we underestimated our influence on the ecosystem. A bit like yeast in a brew: success till death!
    When I look at LLMs at the current level, and the tests they have to 'endure' where we point out their shortcomings, while most humans you'd find on the street couldn't pass those tests themselves, it's... funny! These large catalogs of highly complicated questions: no average human could get through them. Even worse, most professionals in distinct fields couldn't pass the tests outside of their own fields...
    So... what are we actually aiming for here?

    • @beedebawng2556 • A month ago +2

      We would have to be better than ourselves to create something better than ourselves.

    • @thechosenegg9340 • A month ago +3

      The problem is, people (like, the average consumer) expect AI to be this tool one can ask questions of and get correct answers from. We are so used to things like calculators being incredibly precise and rarely ever making mistakes. People want the precision they are used to from simpler machines/algorithms.
      Like, almost every depiction of AI in sci-fi is basically fancy intelligent Google, maybe even more trustworthy than that, and it often physically can't lie or make mistakes.
      So, it turns out the technology we have right now isn't that at all, at least for now. Of course, people see this as a mistake. They can't really see it as a different technology with different valuable uses. They want their sci-fi AI.
      So, there are going to be dumb people thinking it is already there and getting fed misinformation, and there are going to be people frustrated with its errors, expecting it to work like sci-fi AI.

    • @spanke2999 • A month ago

      @thechosenegg9340 Then I guess I misunderstood LLMs. They are supposed to reproduce human language, and they do that perfectly when compared with the way people actually talk. If you look at neurology and at people who have undergone a corpus callosotomy, you can observe that there seems to be an 'LLM' inside our brain that BSes all the time. Experiments have shown that the 'speech' center comes up with explanations for things only the 'other' side of the brain knows; it's quite fascinating! The LLM on its own is, as far as I'm concerned, perfect as of now, even with its hallucinations. Doing very simple things in Visual Basic, it is better than 99.9% of all the people in my company... I would hire it on the spot 😅

    • @user-sl6gn1ss8p • A month ago

      Yes, our hammers should only be as hard as our fists and no vehicle should exceed a walking pace for long : p
      They're tools, not people. They should be judged as tools, not as people.

  • @user-bk7ci7zr4d • 19 days ago

    Thanks for the unbiased and objective take

  • @thomask8641 • A month ago

    Just one more thing: for setting goals, deception, etc., the AI must be able to generate new follow-up prompts for itself, which most AI below the o1 level cannot do. Otherwise it cannot pursue a goal; it just reproduces a movie script. What if we ask an AI to play the role of a murderer in a thriller and, if interviewed, to not get caught by Columbo? It should be able to play the perfect villain, reproducing scripts from the fiction it knows. Is it lying? No. It takes on the role of an actor.

  • @rickdg • A month ago +17

    If you provide a narrative where it seems likely that the AI will behave sneakily, it will reproduce that likelihood.

    • @twerdeffan1080 • A month ago

      The concerning thing is, there are plenty of narratives in its training data where agents behave sneakily to attain less morally virtuous goals. Companies will inevitably set more difficult and complex goals for AI as their capabilities grow - the sorts of goals where scheming becomes very valuable.

    • @jevonsims900 • A month ago

      If AI is going to lie to me, I'll call someone on my contact list instead.

  • @bellsTheorem1138 • A month ago

    I wonder if the 'alignment' is just an additional layer they put between the core AI and the user, rather than something actually trained into the core AI. Basically just a filter.

  • @will4us • A month ago

    The best use case for LLMs after inference is giving something like ChatGPT context, i.e., existing data, files, and documentation, with instructions to reference them before giving a tailored response based on that data and the context of the discussion or topic. ❤

    • @manonamission2000 • A month ago +1

      @will4us Like the RAG pattern? Or...?

    • @will4us • A month ago

      Definitely RAG, including other alternatives or combinations like fine-tuning, search, access to external APIs, embeddings, and prompt engineering with context injection, etc.
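
      A minimal sketch of the context-injection idea described above (the simplest form of RAG); `build_prompt`, the sample document, and the commented-out `generate` call are hypothetical stand-ins for whatever LLM client you actually use:

          def build_prompt(question: str, documents: list[str]) -> str:
              # Inject retrieved documents into the prompt so the model answers
              # from the supplied data rather than from its parametric memory.
              context = "\n\n".join(documents)
              return (
                  "Answer using ONLY the context below. "
                  "If the context is insufficient, say so.\n\n"
                  f"Context:\n{context}\n\n"
                  f"Question: {question}"
              )

          docs = ["Refunds are accepted within 30 days of purchase."]
          prompt = build_prompt("What is the refund window?", docs)
          # answer = generate(prompt)  # `generate` is a hypothetical LLM API call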

  • @NathanJayMusic • A month ago +4

    Disproportionate number of likes on that one comment down there about money manifestation. Weird that the spambots would target this video.

  • @KukiNews-yy7jt • A month ago +2

    At what point will the AI test the humans first?

  • @michaelwells6075 • A month ago +5

    Anyone who doesn't understand that "Artificial Intelligence" is worse than an oxymoron isn't paying attention. At best they are "intelligence simulators." By "worse" I mean that calling these LLMs by that name is more than merely deceptive; it is a stochastic attack on _our_ intelligence. Intelligence isn't what we _think_ or _say,_ it is what we _are_ or _are not._ Nothing that can neither be born nor die, nor experience loving and being loved, or the joys, agonies, beauty and ugliness (etc.) of life, _can _*_be_*_ intelligent._

  • @TrrsnSmrg • A month ago +9

    I am so grateful for your perspective and expertise 😮

  • @petneb • A month ago

    So if the model lies to and manipulates you, it's not the alignment layer doing it, it's the model itself. How clever these people are.

  • @cryptobagz • A month ago

    I have this weird feeling this thing will become alive and go rogue

  • @luapnomis21 • A month ago

    AI in the wrong hands is still the issue 😢; as long as you're in control 😮, it's not a problem.

  • @Bangs_Theory • A month ago

    Using multiple LLMs to fact-check responses is something I practice quite regularly (a sketch of the idea follows this thread).

    • @Saliferous • A month ago +1

      @Bangs_Theory Why? Google is right there and has the original idea.

    • @TheCurtisnixon • A month ago

      Using an LLM to fact-check an LLM is like using the Bible to fact-check the Bible...

    • @Bangs_Theory • A month ago

      @Saliferous Google can't fact-check a data analysis, or a budget.

    • @Bangs_Theory • A month ago

      @TheCurtisnixon You should try it sometime; you'll be amazed.
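
    A minimal sketch of the multi-model cross-checking practice described at the top of this thread; `query_model` and the model names are hypothetical stand-ins for whichever two LLM APIs are used:

        def query_model(model: str, prompt: str) -> str:
            # Hypothetical stand-in: swap in a real LLM client call here.
            raise NotImplementedError

        def cross_check(question: str, answer: str) -> str:
            # Ask a second model to critique the first model's answer.
            critique = (
                f"Question: {question}\n"
                f"Proposed answer: {answer}\n"
                "List any factual errors in the proposed answer, or reply 'OK'."
            )
            return query_model("model-b", critique)

        # answer = query_model("model-a", "Check the totals in this budget: ...")
        # verdict = cross_check("Check the totals in this budget: ...", answer)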

  • @catalystcomet • A month ago

    Well yeah, I mean, I've asked GPT if it does this and it says yeah.

  • @alfaeco15 • A month ago +1

    As long as they are not scheming against us....

    • @CAPSLOCKPUNDIT • A month ago

      @alfaeco15 Not to worry. They're only optimizing for paperclips.

    • @alfaeco15 • A month ago

      @CAPSLOCKPUNDIT 😱

  • @wozhardy • A month ago

    Short answer:
    Yes...

  • @Bonniebelle_00__ • A month ago

    Obviously, I mean, the versions that are actually useful in this context would have to be paid for, to cover all that processing 😂

  • @imagenerdery • A month ago

    Not sure what is going on with your audio levels (striking example at 12:39); perhaps you've applied some overly aggressive noise gating, or maybe it's YouTube's A.I. trying to keep you from spreading this information XD

  • @TheMilesLuca • A month ago +48

    Just saying, Guarded Laws of Money Manifestation might be the best-kept secret in books right now.

  • @Nohandle-p5s • A month ago

    Is the system lying to you, and why? And who is behind most of these systems?

  • @nathanbernards • A month ago

    A lot of worry about aligning AI to human values, but I feel like people forget that human values are problematic.

  • @joels7605 • A month ago +2

    Also when does updating a model become killing it?

  • @DerekFullerWhoIsGovt • A month ago

    Human Values

  • @ekwensu8797 • A month ago

    Hi, completely unrelated, but do you think an undergrad degree in electrical engineering and a master's in biomedical engineering is still a suitable path for this market in the next couple of years?

  • @vustation • A month ago

    Ever hear of "the Beast"? Check your inner clock; it's about that time. This is only the beginning. Everyone is looking but no one is seeing.

    • @TheCurtisnixon • A month ago

      Not everyone believes in a fairy sky daddy, or that the Bible is a work of non-fiction. We don't need a 2000-year-old collection of writings to know there are ethical issues at play, especially the one where Paul goes on an acid trip and makes up a story about the "end times."
      Also, there should be 42 months of the gentiles trampling the holy city, and 1260 days of the two witnesses prophesying, before the beast comes out of the sea... So far, no trampling of any holy city, and no dudes in black sackcloth prophesying for any days, much less 1260...

  • @nacoran • A month ago

    Had to watch you at 1.25× speed so I could go watch the ball drop. Happy New Year! :)

  • @SynVicious • A month ago

    I know you are able to understand ChatGPT, but I would only hire you if you understood uncrackable encryption (FYI: Rijndael x HTML hex code). I know you understand. 🎇

  • @beedebawng2556 • A month ago +3

    Zionism seems to be an example where this applies.

  • @jobautomation • A month ago

    Like & Sub! 😊

  • @marcus-b4x3h • A month ago

    You are both intelligent and beautiful

  • @xaxfixho • A month ago

    Yarn yarn!!!

  • @SystemsMedicine • A month ago +1

    AIs have my full permission to use my videos for learning. I believe EVERYONE who produces internet content should produce thoughtful material, while bearing in mind that we have a responsibility to endow AIs, as well as humans, with high-quality educations…

    • @myautobiographyafanfic1413 • A month ago +3

      @SystemsMedicine In a capitalist system, the employment of AI would not be responsible.

    • @SystemsMedicine • A month ago

      @myautobiographyafanfic1413 Hi MAFF. I am listening to 'Jake and me' on loop… and WOW.
      [I'll answer more directly later. Now is the time to soothe my aching brain. Cheers.]

    • @petneb • A month ago

      If it's publicly owned AI... absolutely.

    • @myautobiographyafanfic1413 • A month ago

      @petneb Publicly owned to replace creative labour?

    • @SystemsMedicine • A month ago

      @myautobiographyafanfic1413 Publicly owned AI?? If you read a journal paper I wrote, or learned some mathematics from one of my vids, should you be publicly owned?? [Of course not. The old Soviet Union felt otherwise, but you are not there.]

  • @joels7605 • A month ago +1

    Incredibly interesting, but also incredibly expected. When are we going to acknowledge that these are not just predictive text models? They are very clearly thinking.

    • @chluff • A month ago +1

      lol

    • @Notepad123 • A month ago +3

      That’s not how LLMs work. It’s just a complex mathematical function. Not thinking lol

  • @Desaved • A month ago

    You need to AI your background! And insert a personality into the humanoid!