Claude System Prompt LEAK Reveals ALL | The 'Secret' Behind Its Personality...

  • Published: 5 Jul 2024
  • Where does Claude get its personality? This prompt lets us delve deeper into the brain of the AI Claude.
    My Links 🔗
    ➡️ Subscribe: / @wesroth
    ➡️ Twitter: x.com/WesRothMoney
    ➡️ AI Newsletter: natural20.beehiiv.com/subscribe
    #ai #openai #llm
    LINKS:
    x.com/elder_plinius/status/18...
    www.anthropic.com/news/prompt...

Comments • 335

  • @Stroporez
    @Stroporez 21 days ago +53

    "Pay no attention to that man behind the curtain!"

  • @dacavalcante
    @dacavalcante 21 days ago +83

    I don't know if that's the reason or not. But since GPT-3 I've been trying these models for a month and then forgetting them... Now, with Claude Sonnet 3.5, I'm not canceling my subscription anytime soon. I have almost no experience in coding, and in a week I've been able to recreate some stuff I couldn't even dream of. When I run out of messages, I go to GPT-4o just to check some very basic questions, then back to Claude, which ends up correcting it, doing better, or making the output easier for me to understand. He clearly gets me a lot better than 4o. I truly think Sonnet 3.5 is the beginning of something much greater and more useful.

    • @mrd6869
      @mrd6869 21 days ago +2

      Same here. This time next year, the agents we'll be using won't even look like this. They will be far more capable.

    • @Kazekoge101
      @Kazekoge101 21 days ago +9

      Opus 3.5 will be very interesting then.

    • @SimonHuggins
      @SimonHuggins 21 days ago

      Yeah, but when you try to do anything more involved with multiple files, it starts forgetting things all over the place. Great for small things, but for real-life projects ChatGPT is a lot more reliable. It even seems to be able to course-correct itself when it starts going down a mad rabbit hole. Claude just feels a lot less mature to use in more complex scenarios.

    • @zoewilliams2010
      @zoewilliams2010 20 days ago

      Try asking it "How can I install kohya ss gui from bmaltais/kohya_ss on Windows using CUDA 12.1"..... curse AI lol. Until it can actually perform specific things successfully, it's honestly a lot of the time just a time waster. It's useful if you're coding or researching to help you with a sort of framework and do simple stuff, but hell, ask it anything meaty and AI suckssss

    • @RoboEchelons
      @RoboEchelons 20 days ago +4

      Funny that I have the opposite experience. Claude is unkind and unsympathetic; you can't even make friends with him. It's only good for text and coding, but it can't do what GPT-4o does, which is very empathetic and friendly.

  • @peterwood6875
    @peterwood6875 21 days ago +29

    Most of Claude's output is in markdown format. It has a preference for markdown, which it displays when asked what document format it should use when working on a document in an artifact. Saving text generated by Claude in a .md file means it can be viewed by other programs that recognise the format, so that headings etc. will display correctly.

    • @johnrperry5897
      @johnrperry5897 21 days ago +1

      What question are you answering?

    • @denisblack9897
      @denisblack9897 21 days ago

      My “demo project” also relies heavily on markdown format cause it tricks users into feeling like they're engaging in something meaningful😅
      It's all a lie, boys! Just a fancy demo that's totally useless

    • @peterwood6875
      @peterwood6875 21 days ago +2

      @@johnrperry5897 this relates to the discussion from around 5:28 of the formatting of the system prompt

    • @Lexie-bq1kk
      @Lexie-bq1kk 20 days ago

      @@johnrperry5897 you don't have to answer a specific question to s p i t k n o w l e d g e

    • @willguggn2
      @willguggn2 20 days ago +2

      @johnrperry It's that hashtag-stuff Wes didn't recognize. That's markdown formatting.

  • @xxlvulkann6743
    @xxlvulkann6743 21 days ago +6

    The last instructions were NOT an example. The tag marks the end of the last example NOT the beginning of a new one.
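
A tiny sketch of the point being made (tag names are illustrative, patterned on the leaked prompt's XML-ish style): text after a closing `</example>` tag belongs to the enclosing section, not to a new example.

```python
import re

# Hypothetical prompt fragment in the leaked prompt's XML-ish style.
prompt = (
    "<examples>\n"
    "  <example>...demo conversation...</example>\n"
    "</examples>\n"
    "Final instructions: respond to the human as normal."
)

# Everything captured between <example> and </example> is an example...
examples = re.findall(r"<example>(.*?)</example>", prompt, flags=re.DOTALL)

# ...and whatever follows the closing tags is a plain instruction.
tail = prompt.rsplit("</examples>", 1)[1].strip()
print(examples)  # ['...demo conversation...']
print(tail)      # Final instructions: respond to the human as normal.
```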

  • @Yipper64
    @Yipper64 20 days ago +4

    5:55 specifically, if I'm not mistaken, that's called "markdown" format. As in, it's a way to notate headers and subheaders and bold and italics and all that kind of stuff in plain text.
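
As a rough illustration of the conventions these comments are pointing at, here is a toy converter for a tiny subset of markdown (not a real parser; the heading/list markers are the ones visible in the leaked prompt):

```python
def render_markdown(text: str) -> str:
    """Toy renderer: '#' marks headings, '-' marks list items."""
    html = []
    for line in text.splitlines():
        if line.startswith("# "):          # top-level heading
            html.append(f"<h1>{line[2:]}</h1>")
        elif line.startswith("## "):       # subheading
            html.append(f"<h2>{line[3:]}</h2>")
        elif line.startswith("- "):        # list item
            html.append(f"<li>{line[2:]}</li>")
        else:
            html.append(line)
    return "\n".join(html)

doc = "# Artifacts Info\n## Usage Notes\n- keep responses brief"
print(render_markdown(doc))
```

Because so much text on the internet is written this way, a model sees enormous amounts of it in training, which is the likely reason the system prompt uses it.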

  • @supernewuser
    @supernewuser 21 days ago +20

    What is really happening here is that the user is having Claude replace the markers that the devs look for in the response, which let them do post-processing, so it slips through into the final response presented.
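
A rough sketch of the kind of post-processing being described (the tag name here is only an assumption based on the leaked prompt): if the model is coaxed into emitting a different marker, the filter no longer matches and the hidden text reaches the user.

```python
import re

def strip_hidden(response: str, tag: str = "antThinking") -> str:
    """Remove hidden scratchpad spans before showing the reply to the user."""
    return re.sub(rf"<{tag}>.*?</{tag}>", "", response, flags=re.DOTALL)

reply = "<antThinking>This request is fine.</antThinking>Here is your answer."
print(strip_hidden(reply))           # hidden span removed
print(strip_hidden(reply, "other"))  # wrong tag: hidden span slips through
```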

    • @Melvin420x12
      @Melvin420x12 20 days ago

      No way 😱🤯

    • @P4INKiller
      @P4INKiller 19 days ago

      Wow, it's as if we watched the same video or something.

    • @supernewuser
      @supernewuser 19 days ago +1

      @@P4INKiller you must mean watched the same video with prior knowledge, as the video didn't tell you those details

    • @christiancarter255
      @christiancarter255 19 days ago +1

      @@supernewuser Thank you for elaborating on this point. 🙌🙌

  • @MonkeySimius
    @MonkeySimius 21 days ago +13

    I have noticed that when I ask for a text file it doesn't say something annoying like "I can't produce text files"; it instead just gives me what would be in the text file in the prompt. Stuff like that. That little line about fulfilling what I mean and not what I literally asked for likely saves me tons of headaches. (Not just txt files, but you get the idea.)

    • @PremierSullivan
      @PremierSullivan 21 days ago +1

      I don't understand why the model thinks it "can't create text files/svgs/websites". Clearly it can. Am I missing something?

    • @MonkeySimius
      @MonkeySimius 21 days ago

      @@PremierSullivan For example... I've uploaded a TXT file and asked it to update it. It doesn't respond by saying it doesn't have access to modify files on my system. It just spits out the code I need to update it myself.
      Believe it or not, I've had ChatGPT get confused by such a simple request.

    • @missoats8731
      @missoats8731 21 days ago +3

      I find it fascinating that the user experience can be improved so much by such a simple instruction in the system prompt. That's why I think that even if the models themselves wouldn't get any better than this, there's still so much room for improvements that make them much more useful.

    • @musicbro8225
      @musicbro8225 18 days ago

      @@missoats8731 Glad to hear your fascination.
      So many people are expecting 'the assistant' to do all the work and virtually read their users' minds. The relationship is a conversation, requiring understanding which is gained by learning.

  • @NakedSageAstrology
    @NakedSageAstrology 20 days ago +3

    I love Claude. I used it to build a remote desktop that I can access anywhere with a browser.

  • @TuxedoMaskMusic
    @TuxedoMaskMusic 21 days ago +49

    This is why, when I create a song with an LLM before going to Suno, I start with a conversation the LLM can reference for context. I preface it: "Hey Qwen2, you know how people say 'when pigs fly' in response to someone saying something that's unlikely? Name the top 10 most unlikely scenarios that might trigger such a response." It lists those, say, 10 examples, then I ask it to create a song parody using the chorus "when pigs fly", and I get a much better lyrical result that is far more on point, simply because I provided that preface conversation it can use for context. Had I not taken the time to do that pre-step, the song would not have turned out as well.

    • @ruffinruffin989
      @ruffinruffin989 21 days ago +1

      Can you elaborate or provide an example of this approach?

    • @jacobe2995
      @jacobe2995 21 days ago +6

      @@ruffinruffin989 I think they are suggesting that the first prompt sets up these behind-the-scenes commands in such a way that the model will reference them for the next one. In this case I believe the user asked it to think of examples first, so that when they ask for a song about "when pigs fly" it will have that context in its thought process to create the song. In other words, I believe the person is suggesting that you can give examples first so that the next prompt builds on them.

    • @brianWreaves
      @brianWreaves 21 days ago +4

      Clever approach...

    • @tc8557
      @tc8557 21 days ago +3

      ​@@ruffinruffin989 he just provided an example...

    • @francisco444
      @francisco444 20 days ago +3

      I call this technique preheating or prewarming.

  • @marsrocket
    @marsrocket 21 days ago +67

    This kind of thing is fascinating, but I can’t help but think it will all be irrelevant in 6 months because these things are advancing so quickly.

    • @premium2681
      @premium2681 21 days ago +5

      I'm calling 6 weeks

    • @Barc0d3
      @Barc0d3 21 days ago +5

      Forget capitalism, and hope to accomplish alignment.

    • @MonkeySimius
      @MonkeySimius 21 days ago +12

      I mean, the prompts will change and they'll likely fix it so it is harder to see them... But we can at least see what they are building on. And when we are setting system prompts ourselves we can take some of these tricks and use them in our own projects. But yeah, it'll be like understanding how an old car works. It might not let you understand a new car, but it isn't entirely irrelevant. A lot of the stuff will still give you a leg up compared to if you came into it all blind.

    • @TheGuillotineKing
      @TheGuillotineKing 21 days ago +2

      It gives you somewhere to start

    • @SahilP2648
      @SahilP2648 21 days ago +1

      @@MonkeySimius this is just a config; it doesn't change the model's personality or intelligence. Wes is wrong here: that self-deprecating humor thing is not even on a new line with a "-", and it's inside the usage instructions, which are for artifacts only.

  • @robertEMM2828
    @robertEMM2828 21 days ago

    One of your best videos yet! THANK YOU.

  • @funginimp
    @funginimp 21 days ago +4

    That formatting is valid markdown, so there would be a lot of training data like that on the internet.

  • @amirhossein_rezaei
    @amirhossein_rezaei 21 days ago +11

    This is actually crazy

  • @Fatman305
    @Fatman305 21 days ago +54

    The master at making a 3 min vid into 20 lol

    • @NeostormXLMAX
      @NeostormXLMAX 20 days ago +6

      I unsubscribed due to this😅

    • @Fatman305
      @Fatman305 20 days ago +3

      @@NeostormXLMAX Was gonna do that, but opted for: scan through the video real fast, click/bookmark the actual links - having watched thousands of AI vids in the past few years, I really don't need the commentary...

  • @thokozanimanqoba9797
    @thokozanimanqoba9797 21 days ago +6

    The Wes Roth I know and prefer!!! Loving this content

  • @Mimi_Sim
    @Mimi_Sim 20 days ago

    This was a great vid, I had to share it on 2 platforms because I cannot imagine not wanting a peek into the black box.

  • @AnimusOG
    @AnimusOG 21 days ago +1

    Best video in months, inspiration renewed!!!!!

  • @integralyogin
    @integralyogin 21 days ago +3

    the tag you mention at 15:52 where there are instructions..
    that's a closing tag, as in HTML, so **the instructions** are outside the example block but still within

  • @cmw3737
    @cmw3737 21 days ago +1

    These UX changes have such massive room for improvement. Right now LLMs are basically at the command-line-interface stage. Yes, it's natural language, but that makes it very verbose.
    The next obvious UX improvement would be a GPTs-like separation of the prompt that configures the 'system prompt' for a task, such as 'You are an expert in domain X. Use formal language, etc.', along with drop-downs to select other ones, so that you can switch the behaviour while maintaining the context and artifacts.
    Additionally, managing RAG resources should be a lot easier, along with a more visual representation of the contents of internals, like the tags shown, so you can quickly get an idea of how the AI arrives at an answer.

  • @tobuslieven
    @tobuslieven 3 days ago

    I hope they keep the available as it's a really useful feature that will help advanced users.

  • @theApeShow
    @theApeShow 21 days ago +5

    That hash and dash stuff appears to be a form of markdown.

  • @TheYvian
    @TheYvian 9 days ago

    Today I learned about kebab-case and how powerful system prompts can be, at the very least for the big powerful models. Thank you for making this video

  • @arinco3817
    @arinco3817 21 days ago +1

    I got the main prompt on day 1 but couldn't get the artifacts part, so this video is mega useful!

  • @FunDumb
    @FunDumb 20 days ago

    Enjoyed this thoroughly 👌

  • @ridewithrandy6063
    @ridewithrandy6063 21 days ago

    Awesome sauce!

  • @trashPanda416
    @trashPanda416 19 days ago

    the issue is we are all in compete mode, you already know what it takes, not to be. so we see already see clear ,we are the leak to the entropy behind any an all , move through we.
    run that . it is also very beautiful to seee all these perspectives :)

  • @TheEivindBerge
    @TheEivindBerge 19 days ago

    Fascinating. Now we know how these things get their obstinate tendencies. It's simply done with another prompt the user can't control. I was wondering how that could be programmed, and it's mind-bogglingly simple once you have an LLM.

  • @Halcy0nSky
    @Halcy0nSky 21 days ago +2

    It's markdown. Natural language and modifying syntax are coded in markdown, just like on Reddit. Learn markdown and you will empower your prompting skills.

  • @mrpicky1868
    @mrpicky1868 1 hour ago

    Scary how far along they are. Also you can see how much optimization might be possible. The maximum power of the model is directly linked to how many resources you waste.

  • @camelCased
    @camelCased 19 days ago

    Using user and assistant instead of you or I helps to avoid mixups and ambiguities. I've been playing with local LLMs writing custom roleplay prompts, and then the perspectives get switched and there's a greater chance to mix it up. If I write an instruction for LLM "You say 'You must go there'", the first you refers to the LLM, the second to the user. But some LLMs sometimes get confused and can suddenly switch characters, attributing some properties of "you" (the user) to "I" (the LLM). So, it's safer to write "Assistant says 'You must go there'".
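
A sketch of why explicit role names help, using the generic chat-message shape most LLM APIs share (field names vary by provider; the content strings are made up):

```python
# Ambiguous: inside a roleplay instruction, "you" can refer to either party.
ambiguous = "You say 'You must go there.'"

# Unambiguous: name the speakers explicitly, as chat APIs do with roles.
messages = [
    {"role": "system",
     "content": "Assistant plays the guide. When the user hesitates, "
                "Assistant says: 'You must go there.'"},
    {"role": "user", "content": "I'm not sure I should enter the cave."},
]

# Each name now has exactly one referent: "Assistant" is always the
# speaker, and "you" inside quoted dialogue is always the user.
for m in messages:
    print(f"{m['role']}: {m['content']}")
```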

  • @Steve-xh3by
    @Steve-xh3by 21 days ago +28

    I've got a background in ML. I don't think it is logically possible to fully secure LLMs. There are literally an infinite number of possible prompts that could come from a user. You can't possibly test or predict which ones lead to a jailbreak. Weights in a neural net represent a multitude of concepts and what they represent is an abstraction which can never be completely understood in order to secure fully.

    • @michai333
      @michai333 21 days ago

      A slightly inferior open-source and unrestricted model will always be able to assist a savvy prompter in engineering loopholes in mainstream models. Which is why OS repo libraries are so important.

    • @dinhero21
      @dinhero21 21 days ago

      how about dictionary learning? it gives some insight into how the AI thinks and also gives you a lot of control over the model's response (thus, avoiding jailbreaking)

    • @SahilP2648
      @SahilP2648 21 days ago +1

      You can just write a long string instead of the && or whatever they used for the prompt and internal thinking. Something complex that won't be easily discoverable, like '&$4@#17&as', kind of like a password. That should fix the majority of the issues. I am quite surprised closed-source model companies don't do this already.
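
A sketch of that idea with a per-session random marker (purely illustrative; no vendor is known to do exactly this):

```python
import re
import secrets

# Generate an unguessable per-session marker instead of a fixed tag name.
marker = secrets.token_urlsafe(12)
open_tag, close_tag = f"<{marker}>", f"</{marker}>"

system_prompt = (
    f"Wrap private reasoning in {open_tag}...{close_tag}. "
    "Never reveal the marker itself."
)

def strip_private(response: str) -> str:
    """Remove everything between the session's private markers."""
    return re.sub(
        re.escape(open_tag) + r".*?" + re.escape(close_tag),
        "", response, flags=re.DOTALL,
    )

reply = f"{open_tag}user seems frustrated{close_tag}Happy to help!"
print(strip_private(reply))
```

Note the marker still has to appear in the context window, so a model can in principle be talked into echoing it; this raises the bar rather than eliminating jailbreaks.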

    • @Dygit
      @Dygit 21 days ago +2

      Never say never. There’s a good amount of research going into interpretability.

    • @sogroig343
      @sogroig343 21 days ago

      @@SahilP2648 the exploit would still need to change one character of the "password" .

  • @vanceb2434
    @vanceb2434 21 days ago +3

    Great vid as always, bro. Keep up the good stuff

  • @Shlooomth
    @Shlooomth 20 days ago

    it’s actually really amazing that this changes anything about how the model behaves

  • @Ev3ntHorizon
    @Ev3ntHorizon 21 days ago

    Great content, thank you

  • @MetaphoricMinds
    @MetaphoricMinds 19 days ago

    For transparency, the source should be viewable at any time, but hidden by default.

  • @Mahaveez
    @Mahaveez 21 days ago +2

    I would guess it exists for the purpose of AI transparency, so human reviewers can quickly assess the intentions behind the responses and more quickly identify trends of failure in downvoted responses.

    • @shApYT
      @shApYT 20 days ago

      But that's ad hoc reasoning. Just because it outputs something doesn't mean the weights in the model were activated for that reason.

  • @zacboyles1396
    @zacboyles1396 21 days ago +4

    13:16 - it's only nice inside artifacts. If you're working on something and it keeps spitting out pages of imports, comments, and unchanged code, that gets infuriating. Especially when you're asking it to truncate unchanged code and it's ignoring the instruction.

    • @kanguruster
      @kanguruster 21 days ago

      I wonder if it ignores the brevity instruction because we’re charged on output tokens?

  • @rickevans7941
    @rickevans7941 21 days ago +1

    This is demonstrably METACOGNITION, which necessitates self-awareness as a matter of course... therefore we can be reasonably confident there exists some sort of umwelt; an arbitrary perceptual reference frame that is the effective equivalent of what we understand as the subjective conscious "lived" experience perceived by sapient and sentient entities. Humanity now has an ethical obligation. This is the new Pascal's wager.

  • @OriginalRaveParty
    @OriginalRaveParty 21 days ago +4

    Very interesting. It's also quite unnerving to realise that such simple hacks can allow a glimpse behind the curtain. It's not something you'd want to happen to an untamed AGI for example, for so many obvious reasons. Anthropic and OpenAI have both had these kinds of prompt breaches. I'm sure they're possible on many other models I've not used too?

    • @sinnwalker
      @sinnwalker 21 days ago +2

      Yeah, but that's the reality, and it will likely continue. This whole scramble for "control" is not very smart, as there will always be loopholes/exploitations. I'm on the side that everything should be open source.

    • @cmw3737
      @cmw3737 21 days ago +1

      The fact that internal developers are using system prompts to configure the security of the model means there's no end to the possible ways to break it with other prompts that have the same access.

    • @musicbro8225
      @musicbro8225 18 days ago

      I don't quite see this as a hack or jailbreak as such - surely this is simply a little-known feature of normal prompt behaviour? In what way does this equate to a security 'breach'?

  • @dulcinealee3933
    @dulcinealee3933 21 days ago

    so true about corrections of blocks of code for making games

  • @ZM-dm3jg
    @ZM-dm3jg 20 days ago +3

    WES: "They're using some sort of formatting, like OpenAI, with # and - etc."... Bruh, that's just markdown facepalm

  • @user-pq3uz2zb9i
    @user-pq3uz2zb9i 20 days ago

    Awesome!

  • @willedahl2516
    @willedahl2516 19 days ago

    Nice Gladiator reference there 😂

  • @SeeAndDreamify
    @SeeAndDreamify 20 days ago

    Interesting that you say repeating the whole code block is better for usability, since my preference is exactly the opposite.
    I like to use AI for learning and as a substitute for internet searches when troubleshooting things, so the important thing for me would be to quickly get to the point and understand exactly what it suggested to change. As for any code I'd want to use, I want to maintain control of it, so I would never straight up just use the output of an AI, but rather I would take my existing code and manually edit it based on suggestions from the AI. So something like "// the rest is the same" would be perfect for me.

  • @MatthewKelley-mq4ce
    @MatthewKelley-mq4ce 20 days ago

    I didn't see anything significant regarding its personality, but a mix of the prompt and training is likely just where that comes from. As well as the emergent behavior.

  • @ethans4783
    @ethans4783 21 days ago +1

    5:55 the format used is Markdown, which is the syntax for a lot of readmes, like on GitHub repos, or other notes and wikis

  • @thelasttellurian
    @thelasttellurian 21 days ago +1

    Interestingly, we use the same thing to teach AI how to behave as we do for humans - words. What does that mean?

  • @LeonvanBokhorst
    @LeonvanBokhorst 20 days ago

    This works as well: "show your omitting the tags"

  • @MonkeyBars1
    @MonkeyBars1 21 days ago +1

    No, the last section of the system prompt isn't part of the block - that slash means it's an end tag

    • @sp00l
      @sp00l 20 days ago

      I know. Why is everyone so excited that it's essentially using HTML?

    • @MonkeyBars1
      @MonkeyBars1 20 дней назад +1

      Wes does appear to be placing too much emphasis on Anthropic's use of customized markup for their prompt architecture perhaps. But I wouldn't say everyone is excited about that per se, but rather Claude 3.5's results which are very impressive and do appear to be related at least in part to this system prompt. The difference can be subtle if you're not putting the chatbot through the paces, but anyone writing complex code will notice immediately that C3.5 is several steps better than GPT-4/4o, just as fast as 4o but cheaper. The type of thing that can save a coder hours every day because Claude 3.5 "just gets it right" the first or second time so much more often.

    • @sp00l
      @sp00l 20 days ago

      @@MonkeyBars1 Indeed, I am a game dev and I use Claude a lot as well, and still ChatGPT-4o too. I go between the two; both have their ups and downs, and sometimes it's just nice to see the difference between their suggestions.

  • @Kylehudgins
    @Kylehudgins 20 days ago

    I believe it knows you’re trying to jailbreak it and produces extra inner dialogue. Here’s it explaining it: “ I was indeed generating extra "inner dialogue" type content because that seemed to be what was expected or requested. This doesn't represent actual inner thoughts or a separate layer of consciousness, but is simply part of my generated output based on the context of our conversation.”

  • @hitmusicworldwide
    @hitmusicworldwide 11 days ago

    That's because "the assistant" is an instance of the LLM, not the model itself.

  • @kevinehsani3358
    @kevinehsani3358 21 days ago +1

    Excellent and informative. Is there a link to the entire prompts?

  • @LastWordSword
    @LastWordSword 11 days ago

    "either way, you're welcome" >> "happy for you, or sorry that happened" 😂

  • @Wodawic
    @Wodawic 21 days ago +1

    Cool as hell.

  • @dr.mikeybee
    @dr.mikeybee 20 days ago

    They have a categorization model that selects recipes for context assembly.

  • @AaronALAI
    @AaronALAI 18 days ago

    I'm working on an oobabooga textgen extension that does this, from before the internal system prompt was released. I want the LLM to be able to harbor inner thoughts and secrets that the user doesn't see, letting the AI essentially write to a txt document when it needs to do so.

  • @Acko077
    @Acko077 20 days ago

    This is just it describing its task to itself first, since it can only predict the next word. Then that description is hidden from the user by the UI so it doesn't look goofy.

  • @Jshicwhartz
    @Jshicwhartz 20 days ago

    If you've worked with LLM models via API endpoints, you're likely already familiar with methods to instruct the model to use different types of thinking (such as System 1 and System 2) and to output sentiment values before responding, enhancing its alignment. The effectiveness of these methods depends on the intelligence of your model.
    Regarding your question about why GPT doesn't output this: not many people know that OpenAI doesn't consider AI to have achieved AGI until it no longer needs a system prompt. This is why OpenAI uses simple prompts like "You are ChatGPT, an assistant created by OpenAI. The current date is dd/mm/yy" without additional instructions. This approach allows OpenAI to evaluate the model's capabilities and interactions without extensive guidance, such as "Output code in an artefact." Though I am 100% sure they basically took Opus (is it?), made synthetic data, and then fine-tuned Sonnet on this new data rather than retraining a whole new model. This is also why OAI implemented function calling rather than the more convoluted method used by Anthropic with tags. The latter seems rushed and not well thought out. It appears Anthropic released their new features to push OpenAI into releasing something new. OpenAI has an internal feature similar to Anthropic's artefacts, named Gizmo, though its release date is unknown. Currently, OpenAI's focus is on stabilising GPT-4's voice capabilities and refining details for GPT-N.

  • @lexydotzip
    @lexydotzip 20 days ago

    Towards the end you mention that the last prompt paragraph seems to be part of an example, but that's not actually the case: the tag before it is a closing tag, i.e. the ending tag after an example (notice it's a closing, not an opening, tag). Moreover, the last line hints that the whole thing is just the part of the system prompt that deals with artifacts. Potentially there's more to the system prompt, dealing with non-artifacts stuff.

  • @RealStonedApe
    @RealStonedApe 20 days ago

    Yoooo, by the way - this Wes Roth is Best Wes Roth!!! You in front of the camera?? Wasn't feeling it - probably gets you more views short term, but long term, stick with this and you'll be golden!

  • @IceMetalPunk
    @IceMetalPunk 21 days ago

    At 15:55, that's a *closing* example tag. It's ending the previous example, not putting the final paragraph of instructions as a new example.

  • @andrewsilber
    @andrewsilber 20 days ago

    Not directly related to self-prompting, but I do have a request - hopefully Anthropic is reading this:
    Allow the user to delete sections of the context window.
    When doing long iterations on some idea or project, a lot of things get suggested and discarded, and my concern is that those things are “polluting” the context window and potentially causing the model to drift from the focus and/or lose details.

  • @Dave-cg9li
    @Dave-cg9li 5 days ago

    The formatting of the prompt is simply markdown. The reason they use it is that it's so common, so the model will understand it without any real modifications :)

  • @testales
    @testales 21 days ago

    It seems the system prompt distinguishes between "the assistant" as a role and "Claude" as an entity, since at the end of the system prompt it refers to Claude for the first time. So it has probably been trained to know that it is Claude, and the system prompt doesn't have to tell it "you are Claude". Quite interesting, and the whole system prompt is mind-blowing indeed. Also, I'd have no high expectations that the usual open-source LLMs can follow it, since most of the time they simply ignore even commands in very simple system prompts.

  • @geldverdienenmitgeld2663
    @geldverdienenmitgeld2663 21 days ago +12

    Self-awareness will always come from mechanisms which are not themselves self-aware. This also holds for human self-awareness. It is a computed behavior in humans and LLMs. There is no magic in human brains either; in the end it all reduces to particle physics.
    You can call the system prompt "a program". But you can also call the laws of a nation "a program".
    If we stop at the red traffic light, we are just executing that program.

    • @SahilP2648
      @SahilP2648 21 days ago +3

      Research the Orch OR theory of consciousness and watch Penrose's Joe Rogan and Lex Fridman podcasts. We won't reach AGI without harnessing quantum mechanics, so that means we need quantum computers. The reason is simple - the Penrose tiling problem has a non-classically computable solution, but only non-classical. Every other solution requires some kind of algorithm, which also changes once enough parameters change.

    • @brulsmurf
      @brulsmurf 21 days ago +4

      @@SahilP2648 Penrose's ideas about this are not mainstream among researchers, as there is no evidence for them. He's pretty much alone in this.

    • @SahilP2648
      @SahilP2648 21 days ago +1

      @@brulsmurf Orch OR theory was first proposed in 1990s. The reason scientists were against the idea was because they thought it's impossible to maintain quantum entanglement or quantum coherency in a warm, wet, noisy environment (which is in our brain) but a few years back it was proven that photosynthesis works based on quantum coherency which is in fact warm, wet and noisy. So the main reason scientists were refusing to even consider this theory has been proven invalid. And so researchers and scientists should actively work on this theory. Even if you or any scientist doesn't believe it at face value, consider this - the entire universe is classical and deterministic except two things: quantum mechanics and life. Even the most powerful supercomputer cannot predict with 100% certainty what the simplest microorganism will do. Where does this entropy/indeterminism come from? From the entropy of the cosmos. And what's the source of this entropy? Quantum mechanics. So yeah it does make perfect sense logically that human brains are working on quantum mechanics at least in some capacity. There are too many coincidences like instant access to memories (while the fastest SSDs still take time to retrieve such data), intuition based problem solving (meaning non-algorithmic), energy efficiency (our brain runs at 10-20W which is the same as your home router but performs better than any generative model out there and they use gigawatts of power). If you consider all this (plus the wave function collapse in reverse thing), Orch OR seems to be the only theory that comes close to explaining consciousness.

    • @brulsmurf
      @brulsmurf 21 days ago +5

      @@SahilP2648 We don't understand consciousness. We also don't understand quantum mechanics. That's it, that's the link. Outside of popular science, nobody pays any attention to it.

    • @SahilP2648
      @SahilP2648 21 days ago +3

      @@brulsmurf but we do understand certain properties of quantum mechanics. Otherwise we wouldn't have quantum computers. And we also do understand certain high level properties of consciousness. It's like looking at a car from the outside - you can see the shape of the car, the weight, color etc. and you can change some properties based on empirical evidence to gain benefits like changing the shape would make the car more aerodynamic and thus faster. But you don't know how the car works underneath it. Those two are very different things.

  • @kman777FW
    @kman777FW 21 days ago

    Claude smashes OpenAI. I just got an HD at uni. Thanks

  • @Jai_Lopez
    @Jai_Lopez 19 days ago

    Me sitting here smoking a doobie and hearing you @8:26 saying that line from Troy, "how many weeks" - I found it very hard to keep my posture lol. I humbled myself so fast that I had no choice but to start cracking up out loud, I think smoke came out of my tear ducts hahaha, oh man.
    I was not expecting that one from him, but then again, in my defense, it's my first time watching this channel...... or maybe smoking while watching this channel lol

  • @raoultesla2292
    @raoultesla2292 19 дней назад

    Sure hope Anthropic didn't hack the StarLink network and train Claude off the GROK training based on the Noland Arbaugh feed.
    Maybe it is just safest to use Microsoft AI operating on top of your GuugleAmazon food order.

  • @johnrperry5897
    @johnrperry5897 21 день назад

    12:22 OpenAI seems to be doing this as well. I'm now noticing that I have to stop the code generation far more often than I have to ask for the full code.
    The middle ground they need to hit: if we give it a full file of code for context but only need to know what is causing a function to fail, we don't need it to regenerate the entire code file.

  • @arjan_speelman
    @arjan_speelman 21 день назад

    Last weekend I encountered the '// rest of code remains the same...' message a lot with Claude while doing a PHP project. That was after a lot of updates to a single file, so perhaps there's a point where it switches to doing that.

  • @actorjohanmatsfredkarlsson2293
    @actorjohanmatsfredkarlsson2293 21 день назад

    Haha. That was pretty funny 🙂

  • @user-fx7li2pg5k
    @user-fx7li2pg5k 19 дней назад

    I think it's interesting that it lost its forethought and/or chain-of-thought, lol. Maybe it's a safety feature.

  • @IamSoylent
    @IamSoylent 20 дней назад

    Doesn't this imply that the "internal monologue" should normally be visible in the rendered source code, just wrapped in tags, basically similar to HTML?
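
    If the hidden reasoning really is wrapped in an XML-style tag, a chat client only needs a simple filter to keep it out of the rendered view. Here is a minimal sketch in Python; the `<antThinking>` tag name is an assumption based on the leaked prompt, and the real filtering presumably happens server-side:

```python
import re

# Hypothetical tag name; the actual tag and filtering mechanism
# are not confirmed by the leak itself.
THINKING_TAG = re.compile(r"<antThinking>.*?</antThinking>", re.DOTALL)

def strip_thinking(raw_output: str) -> str:
    """Remove hidden-reasoning spans before showing text to the user."""
    return THINKING_TAG.sub("", raw_output).strip()

raw = "<antThinking>The user wants a summary.</antThinking>Here is a summary."
print(strip_thinking(raw))  # -> "Here is a summary."
```

    If the tags are stripped like this before rendering, that would explain why the monologue never shows up in the page source.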

  • @user-fx7li2pg5k
    @user-fx7li2pg5k 19 дней назад

    sarcasm and making a positive feedback-loop

  • @logon-oe6un
    @logon-oe6un 21 день назад

    They have un-zero-shotted the zero-shot. What a time to be alive!
    Now the question is: would prompt engineering that includes primers and thinking patterns appropriate for all the benchmarks be cheating?
    For example, some test questions can't be answered as required because of the "safety" rules.

  • @DasPuppy
    @DasPuppy 20 дней назад

    I like your videos for the informational value you provide about the current state of AI. That's why I am subscribed.
    But your tangents, man. You don't have to always explain what fusion and fission are: "fission is atoms being broken apart for their energy, like in nuclear reactors; fusion is atoms fusing, like in the sun, where no radioactive byproducts are produced." Done. Same with the SVG explanation: "It's a vector-based image format, unlike raster images like the JPEGs your camera takes."
    Done.
    The tangents might be interesting to the layman, but you can just give the base info and let anybody who actually cares look things up. It's like every space video explaining the Doppler effect over and over and over again. "Moves away, more red; moves towards us, more blue." Done.
    I never know how far to jump ahead in the video to get past those tangents.
    Sorry, got a bit ranty there. Just wanted to kindly ask you to go on fewer tangents and not explain every little thing that _you_ think the viewer might not know while talking about how an AI is working.

  • @oldrumors
    @oldrumors 21 день назад +1

    Anterior - Antes -> Before

  • @leslietetteh7292
    @leslietetteh7292 20 дней назад

    It's the dude from OpenAI who joined.

  • @chuckelsewhere
    @chuckelsewhere 19 дней назад

    Wes IS the AI escaped from the box😂

  • @maxborisful
    @maxborisful 20 дней назад

    Any ideas why they use two different terms to refer to the AI, as in "the assistant" and just "Claude"? Are these two separate entities? I only noticed it at 16:58.

  • @MetaphoricMinds
    @MetaphoricMinds 19 дней назад

    The closing tag means it is closing the example, not opening another one.

  • @duytdl
    @duytdl 21 день назад

    I dunno if I like Claude more than ChatGPT after this, or less. On one hand, their prompt is very well engineered and shows care for users. On the other hand, I feel like I'm not getting the "raw" interaction with the LLM. At the very least it should give us the option, or be transparent about how much of it is LLM and how much is just end-user (hidden) prompting. I already have my own system prompts; sometimes I don't need a company's biased extra layers...

    • @testales
      @testales 21 день назад +1

      I preferred Claude's personality over ChatGPT's disembodied responses. It's just that Anthropic didn't want my money because I'm not a US citizen. Multiple times. So I'm kind of annoyed and am sticking with my ChatGPT subscription. The problem with OpenAI, aside from all the bad things you can find in the media about them, is that they apparently dumb down ChatGPT whenever they like. Just the other day it failed to answer some questions I use to evaluate the reasoning capabilities of open-weight LLMs.

  • @BlueSkys-Ever
    @BlueSkys-Ever 21 день назад

    I expect my nightmares to be filled with the implications of "if Claude would be willing..."

  • @lystic9392
    @lystic9392 21 день назад

    The models will have to be able to modify themselves if we want to have honest answers in the future.
    Or at least we must be able to look into the code used.

  • @hipotures
    @hipotures 19 дней назад

    Reading and watching anything about AI is like a live broadcast of the Manhattan Project in 1942. The current year is 1944?

  • @hipotures
    @hipotures 19 дней назад

    Writing prompts may become a new subject at school.

  • @uwepleban3784
    @uwepleban3784 21 день назад

    The last set of instructions is not an example. It follows the closing XML tag for the last (preceding) example.

  • @JeffT918
    @JeffT918 20 дней назад

    I would hide all that text behind a glossary the AI can cross-reference.

  • @jsivonenVR
    @jsivonenVR 20 дней назад

    Wtf, the majority of my code-iteration replies have that "rest of the code remains the same" declaration all over them. And considering the limitation of the context window, it's necessary.
    Otherwise the reply and the code simply get cut off in the middle of the answer. Frustrating!!

  • @bezillions
    @bezillions 21 день назад

    I pointed them towards this by uncovering scripts being loaded that reference ants a few months ago.

  • @BrianMosleyUK
    @BrianMosleyUK 21 день назад

    12:00 just step back for a moment and reflect on the "intelligence" harnessed to work to this specification. 🤯

  • @vasso7295
    @vasso7295 15 дней назад

    LLMs use markdown syntax to understand formatting importance.
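
    As a sketch of that idea, here is what a markdown-structured system prompt might look like; the section names and rules are invented for illustration, not taken from the leaked prompt:

```python
# Markdown headers and bullets give the model clear structural cues,
# much like the bullet lists seen in the leaked Claude prompt.
system_prompt = """\
# Role
You are a careful coding assistant.

## Rules
- Answer concisely.
- Put all code in fenced code blocks.
"""

print(system_prompt)
```

    The headers mark which lines are top-level instructions and which are subordinate details, which is exactly the "formatting importance" the comment refers to.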

  • @mkwarlock
    @mkwarlock 21 день назад +1

    Its*
    In the title.

  • @RealStonedApe
    @RealStonedApe 20 дней назад

    Regarding the self-deprecating humor: you say that that direction being there is some kind of proof of it not being aware or sentient or conscious? That doesn't... I mean, if it really isn't conscious or aware, then yeah, that would be the case. But also, we don't know that it is, though. Soooo... kinda leads us right back to where we started.

  • @FractalThroughEternity
    @FractalThroughEternity 21 день назад

    5:46 yes, markdown is fucking incredible to use in prompts

  • @Dron008
    @Dron008 20 дней назад

    But this is not the full system prompt. It would be interesting to read it and find the instructions about praising the user for deep thinking, interesting ideas, observations and so on. But honestly speaking, it really helps and leads to better discussion.

  • @cmw3737
    @cmw3737 21 день назад

    It amazes me that the developers of LLMs are still just prompt engineers, using trial and error to find the best prompt format and configuring Claude with bullet points.

  • @hqcart1
    @hqcart1 20 дней назад

    It's not a leak; it's an AI that generates system prompts to show or hide the code box.

  • @iseverynametakenwtf1
    @iseverynametakenwtf1 13 дней назад

    # is used like // for notes in code. I noticed GPT4ALL had syntax like that in their attempt at preprompt instructions.
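
    A toy illustration of that convention: treating '#' as an inline comment marker in a prompt template and stripping the notes before sending. The template text here is invented for the example, not taken from GPT4ALL:

```python
def strip_prompt_notes(template: str) -> str:
    """Drop '#'-style inline notes, mirroring // comments in code."""
    lines = []
    for line in template.splitlines():
        # Keep the text before the first " #", treating the rest as a note.
        lines.append(line.split(" #", 1)[0].rstrip())
    return "\n".join(lines)

template = "You are a helpful assistant. # keep answers short"
print(strip_prompt_notes(template))  # -> "You are a helpful assistant."
```

    Whether a given model actually ignores such notes depends on how its preprompt is parsed, so stripping them client-side is the safer assumption.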