ChatGPT Just Learned To Fix Itself!

  • Published: 2 Jul 2024
  • ❤️ Check out Lambda here and sign up for their GPU Cloud: lambdalabs.com/paper
    Get early access to these videos: / twominutepapers
    📝 The paper "LLM Critics Help Catch LLM Bugs" is available here:
    openai.com/index/finding-gpt4...
    📝 My paper on simulations that look almost like reality is available for free here:
    rdcu.be/cWPfD
    Or this is the orig. Nature Physics link with clickable citations:
    www.nature.com/articles/s4156...
    🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
    Alex Balfanz, Alex Haro, B Shang, Benji Rabhan, Gaston Ingaramo, Gordon Child, John Le, Kyle Davis, Loyal Alchemist, Lukas Biewald, Martin, Michael Albrecht, Michael Tedder, Owen Skarpness, Richard Sundvall, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi.
    Thumbnail background design: Felícia Zsolnai-Fehér - felicia.hu
    Károly Zsolnai-Fehér's research works: cg.tuwien.ac.at/~zsolnai/
    Twitter: / twominutepapers
    #ChatGPT
  • Science

Comments • 378

  • @TwoMinutePapers
    @TwoMinutePapers  16 hours ago +3

    Get early access to these videos: www.patreon.com/TwoMinutePapers

  • @OperationDarkside
    @OperationDarkside 2 days ago +105

    As a software dev, I tested a very simple piece of JavaScript with some bugs on multiple models. Only the biggest and newest models were able to find and fix some of the bugs, and none got all of them. The piece of code was partly generated by an AI with some edits from me. Either most models are really bad at JavaScript or we still need a lot of research and external tools to make LLMs better at fixing code.
    There's probably some secret sauce, like using the whole JS standard docs and several thousand examples as a RAG source, or using a dedicated logic processor, but we aren't there yet.

    • @Tiniuc
      @Tiniuc 2 days ago +4

      This should be pinned

    • @garethrobinson2275
      @garethrobinson2275 2 days ago +1

      I'm sure that's very reassuring for you. 🤭

    • @OperationDarkside
      @OperationDarkside 2 days ago +15

      @@garethrobinson2275 To be honest, it is the opposite. I've been writing code for over 10 years and

    • @samuelb.9314
      @samuelb.9314 2 days ago +5

      @@garethrobinson2275 He's far from alone; anyone who can code knows it codes like a 10-year-old and can't even understand what it gets wrong.

    • @ferologics
      @ferologics 2 days ago +1

      big L on the language bro

  • @penix3323
    @penix3323 2 days ago +262

    2:14 "These AI-Critic-Systems find a lot more bugs than people" I mean, it would be strange if that wasn't the case. There are a lot more bugs than people out there.

    • @ValidatingUsername
      @ValidatingUsername 2 days ago +1

      Wait till their supervised learning feedback system for their neural network is optimized

    • @andywest5773
      @andywest5773 2 days ago +34

      It's true. There are approximately 1.4 billion insects per person in the world.

    • @ValidatingUsername
      @ValidatingUsername 2 days ago +1

      @@andywest5773 Your comment is not relevant to this thread

    • @TheCactuar124
      @TheCactuar124 2 days ago +31

      @@ValidatingUsername You have absolutely no sense of humor.

    • @Peter21323
      @Peter21323 2 days ago +18

      @@ValidatingUsername or is it? Hey Vsauce Peter here

  • @Mulakulu
    @Mulakulu 2 days ago +48

    I feel like currently, AI's biggest issue is hallucinations. I hate when they confidently spurt out blatantly wrong and self-contradicting information.

    • @pianojay5146
      @pianojay5146 2 days ago +13

      especially when they apologize every time they speak

    • @Mulakulu
      @Mulakulu 2 days ago +18

      @@pianojay5146 yeah, and even worse, you tell them specifically how they mess up, and they have the audacity to say "Oh I am so sorry. You are correct. Here is the exact same and unchanged thing that I'm regurgitating to you without any extra thought" like AAARGH!!!

    • @OpenSourceAnarchist
      @OpenSourceAnarchist 2 days ago +7

      it's almost like human intelligence can be faulted and short-circuited with similar reasoning, except we call hallucinations "opinions" :)

    • @PotatoTheProgrammer
      @PotatoTheProgrammer 2 days ago +6

      @@OpenSourceAnarchist have you ever seen a human say that “QUAT” and “QQQQ” are the same sequence of letters

    • @TragicGFuel
      @TragicGFuel 1 day ago +1

      @@OpenSourceAnarchist LLM can't have human intelligence

  • @ItsDrMcQuack
    @ItsDrMcQuack 2 days ago +350

    Well, it was nice to know the world before the singularity. See you all on the other side, I'm taking a nap while I can

    • @8bit-ascii
      @8bit-ascii 2 days ago +25

      The LLMs still hallucinate too much, even with all the smart tricks we can come up with. So I'd say we got a few more years than expected; enjoy them to the fullest 😅

    • @tottallyNot
      @tottallyNot 2 days ago +10

      @@8bit-ascii I would not bet that it takes years to fix the hallucination problem, we are already making progress

    • @cortster12
      @cortster12 2 days ago +33

      ​@@8bit-ascii
      So do humans, and we still get things done. It's not as big a problem as you think.

    • @cefcephatus
      @cefcephatus 2 days ago +1

      I do the same. But before saying good night: I think we'll have plenty of time to sleep after the AI singularity.
      However, I want more sleep too.

    • @jibcot8541
      @jibcot8541 2 days ago +4

      I don't sleep much nowadays. There is too much exciting stuff going on with AI, and news and tools just keep coming; not enough hours in the day.

  • @singularity3724
    @singularity3724 2 days ago +53

    An AI critiquing another AI? Isn't that just a GAN?

    • @creativebeetle
      @creativebeetle 2 days ago +5

      No

    • @Lagger625
      @Lagger625 1 day ago +2

      ​@@creativebeetle why

    • @singularity3724
      @singularity3724 1 day ago +6

      @@creativebeetle From wikipedia: "In a GAN, two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss".

    • @creativebeetle
      @creativebeetle 1 day ago +12

      @@singularity3724 You're totally right. Sorry about the callous response. Misread the original comment as saying 'AGI' somehow.
      Seems pretty similar to GANs, though there's an added layer of abstraction where the AIs aren't exactly improving one another directly (if I'm understanding correctly.)

    • @TayoEXE
      @TayoEXE 1 day ago

      I was thinking the same thing.

  • @michaelwoodby5261
    @michaelwoodby5261 2 days ago +10

    I'm guessing this is how most AI problems will be solved. These systems are getting smarter, but they are getting far more efficient, so eventually they will just be able to run several versions consecutively.
    First model figures out what specialists are needed and creates an action plan, specialist programs do the heavy lifting, editor checks their work and offers insights, and they reroll until they have something everyone is happy with. This could happen very, very fast, depending on how intricate the original request is and how many trips to the drawing board are needed to impress the editor.

  • @ethanlewis1453
    @ethanlewis1453 2 days ago +8

    Most chat systems including GPT have no ability to actually test code, which is a large part of the debugging process. It will be a very major advancement in AI when chat systems are given capabilities to test code.

  • @kadentrig8178
    @kadentrig8178 2 days ago +230

    What a time to be alive!

  • @aaaaaaaaooooooo
    @aaaaaaaaooooooo 2 days ago +5

    I've been asking AI to critique its own work. For example, I would ask ChatGPT to write a movie idea, then ask it to critique itself, and then improve the idea based on its own critique. It works to some extent.
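The draft-critique-revise loop described above can be wired around any text-in/text-out model; a minimal sketch where `llm` is a placeholder callable (the prompts and the `critique_and_revise` name are illustrative assumptions, not a real API):

```python
def critique_and_revise(llm, task: str, rounds: int = 2) -> str:
    """Draft -> critique -> revise loop; `llm` is any prompt->text callable."""
    draft = llm(f"Write the following: {task}")
    for _ in range(rounds):
        # Ask the model to find flaws in its own output...
        critique = llm(f"List concrete weaknesses in this text:\n{draft}")
        # ...then revise the draft against that critique.
        draft = llm(
            "Rewrite the text to address these weaknesses.\n"
            f"Weaknesses:\n{critique}\n\nText:\n{draft}"
        )
    return draft
```

In practice `llm` would wrap a chat-completion call; note the cost grows linearly with `rounds`, which is partly why this simple trick only "works to some extent".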

  • @NotHumant8727
    @NotHumant8727 2 days ago +65

    what alive to be a time

    • @Silexabre
      @Silexabre 2 days ago +1

      imagine tomorrow when we're dead and all this is seen as normal for everyone

    • @wobbers99
      @wobbers99 2 days ago +3

      haha i did what you see there :)

    • @donutbedum9837
      @donutbedum9837 1 day ago

      @@wobbers99 ooh okay

  • @glenneric1
    @glenneric1 2 days ago +26

    What a time to be a simulation of life!

  • @ares106
    @ares106 2 days ago +5

    Nice to see humans and AI working together synergistically and accomplishing more than the sum of their parts.

  • @P-G-77
    @P-G-77 2 days ago +11

    This is not only an "idea" but the FUTURE...

  • @hola_chelo
    @hola_chelo 2 days ago +21

    Meanwhile, me explaining to GPT-4o that changing
    if mode=='release'
    to:
    if mode == 'release'
    is not a correct fix for the problem, but it insists that the two solutions are different and that the corrected version will work. Or asking a simple math question and it rambles because it tries to explain the answer, then concludes its answer is wrong, so it starts explaining again, reaches a wrong conclusion again, and explains to me that its logic was wrong again - all within the same response.
    If you really think AI is in a state where it can replace us, you have no idea what you're talking about. Start coding, become proficient at it, and you will produce better results than AI in its current state.
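The commenter's point can be verified mechanically; a minimal check (assuming Python, which the quoted syntax suggests): whitespace around `==` never changes behavior, so such an edit cannot be a fix.

```python
# Whitespace around '==' is purely stylistic in Python: both expressions
# tokenize to the same comparison, so "adding spaces" cannot fix a bug.
mode = 'release'

without_spaces = mode=='release'
with_spaces = mode == 'release'

assert without_spaces is with_spaces is True  # identical behavior
```

The only thing the spaced form changes is PEP 8 compliance, not semantics.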

    • @hola_chelo
      @hola_chelo 2 days ago +2

      "Although our method reduces the rate of nitpicks and hallucinated bugs, their absolute rate is still quite high."
      "Real world complex bugs can be distributed across many lines of a program and may not be simple to localize or explain; we have not investigated this case."
      So basically it doesn't apply to 99% of software development? Nice.

    • @shiroi5672
      @shiroi5672 2 days ago +2

      That's not a problem with the model itself, it's the wokeness, biased filters. It incorrectly flags that you're trying to do something not PC, so it preaches to you. It's quite annoying when you just want something like a recipe for chocolate cake.

    • @hola_chelo
      @hola_chelo 2 days ago +8

      @@shiroi5672 Don't think it has anything to do with that. It is good at giving code, even good code sometimes. It just isn't good for software development or math, because those require logic, which is hard for LLMs to achieve. If you try hard enough you can get it into a loop of spitting out wrong answers and saying "You're absolutely correct, my fault. Here's the correct answer: (wrong answer here)", or saying over and over again that adding a space will change the behavior until you ask it to be specific enough, at which point it says "it is true that spaces will not alter the behavior of the code but still you should follow the PEP8 standard because bla bla". It's a switch between "I AM MASTER, I KNOW MY ANSWER IS RIGHT" and "You're absolutely correct, my mistake, I profusely apologise, but I still ate your tokens".

    • @shiroi5672
      @shiroi5672 2 days ago +2

      @@hola_chelo You may be right, but I mostly saw those kinds of looped answers when it tried to preach PC to me, and the way it replied is surprisingly similar.
      The only way to be sure is when we get a non-biased model at the same level, but I'm not seeing one on the horizon; maybe Grok 2.0 if we're lucky. The others are way more biased than ChatGPT.
      There's also a point where the bot gets lazy and stops trying, so I never keep the same window for long.

    • @Vaeldarg
      @Vaeldarg 1 day ago

      @@shiroi5672 complaining about "wokeness", thinking Elon's grok A.I is anything more than Elon's desire to attract attention/money...wonder if you're one of those calling Elon just another "woke" tech bro from California back when he only had his EV company, and only started cheering for him when he started pandering to you right-wing weirdos (I've seen what you all were trying to have grok output, don't even try denying the weirdo part) because of how easily you fall into cults of personality and so will happily stroke his ego. Coincidentally, at a time when he kept getting fact-checked and mocked on Twitter by those more left-leaning.

  • @Adhil_parammel
    @Adhil_parammel 2 days ago +46

    3:34 human hallucination?

    • @chrissears9912
      @chrissears9912 2 days ago +1

      Interesting

    • @jojoboynat
      @jojoboynat 2 days ago +2

      Hallucinations in generative AI are essentially aberrant outputs.

    • @bossgd100
      @bossgd100 2 days ago +5

      Just human thinking

    • @cortster12
      @cortster12 2 days ago +19

      ​@@jojoboynat
      Oh hey, humans do that too.

    • @lolandall915
      @lolandall915 2 days ago +30

      Well, sometimes a human also thinks there is a bug where there actually isn't.

  • @toreon1978
    @toreon1978 2 days ago +4

    😂😂😂 3:35 I love it that the humans 'hallucinate', too.

    • @donutbedum9837
      @donutbedum9837 2 days ago +2

      they always have; it’s similar to picking A on a multiple choice test because the last few haven’t been A
      that’s using a ‘sensible’ reason to justify output, but it's not always the correct method
      similarly, it hasn’t learnt to identify WHETHER code has bugs, just that there are
      wtf am i on abt now

  • @generalawareness101
    @generalawareness101 2 days ago +23

    I tried to use all the LLMs out there for programming in Python and in C++, and they failed miserably. EVEN when I instantly spotted the bug(s) as it was scrolling the code to me, I would tell it what it did wrong; it would thank me and then repeat the same bugs the next round, after having just sent me the revised code. In other words, none of them learned as I was telling them their errors. I was so frustrated with them that I realized it is all hype about them taking our programming jobs. Maybe sometime in the future, but not right now.

    • @georgesmith4768
      @georgesmith4768 2 days ago

      Yeah, it’s pretty clear that for programming, LLMs straight up do not have the special sauce needed. If you look at Anthropic's blogs on monosemanticity, it seems like LLMs understand a lot more about code than you actually get in the current responses, which suggests this is really just the wrong job for the tool. Fundamentally these things are not code designers or chat bots, they are text predictors, so when you tell one to design code it just mashes together solutions it has seen with syntax that’s familiar to it; when you tell it something is wrong, it just refines what it is drawing from to look more like something where someone says there are bugs…
      Ultimately something has to change or you will just be playing whack-a-mole with poor behavior: the understanding in the weights has to be properly reinterpreted for the actual problem, and some model of the conversation or code has to be interfaced with it. Or I guess OpenAI can just keep tossing data onto the pile (at this point the new stuff is getting more synthetic, definitely a good idea…) and hoping that a thousand annotators can teach the RL module to fix it for literally every question, scenario, and programming snippet 😂

    • @samuelb.9314
      @samuelb.9314 2 days ago +5

      Yeah, it's so obviously bad that people who say they can do it just lose all credibility in my book. They clearly don't know how code and games are made.

    • @brianhershey563
      @brianhershey563 2 days ago +6

      I had lots of issues programming with Claude until I identified the current limitations and built a workflow around them.
      Coding every response - even when brainstorming, the AI generates code long before it's needed, which disrupts stepping back through the conversation if needed to chase down bugs. Even after putting "Never code unless I specifically say 'make it so'" in the Project Knowledge section, it often drifts away and needs reminding: "no coding yet". UGH.
      Versioning - This is all on you. It explicitly says it does not track code changes. For my own benefit I put a version in my requests that follows the file version in my Python editor, just to make it easier if I have to scroll back through.
      Clean up - After every programming session, once I verify my program is running as intended, I'll keep one master file in the project workspace and delete all the others. This way it only evaluates the good code for the next sesh.
      Because this is the worst it will ever get (AI in general RN), I'm OK with this flow... for now! ;)

    • @generalawareness101
      @generalawareness101 2 days ago +5

      @@samuelb.9314 Not even games. I mean, if it is longer than about 10 lines of code it begins to blow up on itself, and no amount of me chiding it, or helping it, sticks. Very annoying; now I just don't touch them.

    • @generalawareness101
      @generalawareness101 2 days ago

      @@brianhershey563 I am not and anyone who blindly goes in with no programming knowledge thinking it will save them is in for some sore life lessons. I will check back on the LLMs in about a year or two, and every year or two hence, to see if they can finally take over for even an intern who dropped out of Elementary school.

  • @sikliztailbunch
    @sikliztailbunch 2 days ago +11

    Having 2 GPTs work in tandem makes sense. We humans have two brain hemispheres, too, right?

    • @JohnMullee
      @JohnMullee 2 days ago +4

      byzantine generals critics?

  • @arzuozturk6460
    @arzuozturk6460 1 day ago +1

    this feels like that one video about slime that fixes everything

  • @tensevo
    @tensevo 2 days ago +4

    It's good to know that our AI overlords have got the Hegelian dialectic down, at least.
    Problem, reaction, solution.

  • @Purified-Bananas
    @Purified-Bananas 2 days ago +2

    ChatGPT detected a bug here:
    def fibonacci(completely_wrong):
        full_of_bugs = 1
        if completely_wrong > 1:
            full_of_bugs = fibonacci(completely_wrong - 1) + fibonacci(completely_wrong - 2)
        if completely_wrong == 0:
            full_of_bugs = 0
        return full_of_bugs
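For what it's worth, the snippet actually computes Fibonacci correctly despite the ominous names; a runnable check with readable identifiers (Python assumed from the syntax):

```python
def fibonacci(n):
    # Same logic as the comment's snippet: default to 1,
    # recurse for n > 1, and special-case n == 0.
    result = 1
    if n > 1:
        result = fibonacci(n - 1) + fibonacci(n - 2)
    if n == 0:
        result = 0
    return result

print([fibonacci(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```

So any "bug" a model reports here would itself be a hallucination, which is presumably the joke.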

  • @jtinz74
    @jtinz74 2 days ago +2

    We need to start training these AIs on hardware engineering problems.

  • @CrunchyCerealLover
    @CrunchyCerealLover 1 day ago +1

    Finally we humans can optimize games without putting in so much work to make them very efficient. What a time to be alive!

  • @erobusblack4856
    @erobusblack4856 1 hour ago +1

    Applying a graph RAG memory to this would drastically cut down the amount of hallucinations

  • @keenheat3335
    @keenheat3335 2 days ago +16

    But what if CriticGPT has errors too? Can you use CriticGPT to correct itself recursively? Are there diminishing returns?

    • @hola_chelo
      @hola_chelo 2 days ago +5

      There are definitely limitations; you are just using AI to fix AI, but the limitations of AI are still there. This amuses me: I actually wrote on the OpenAI forum to a guy who was using GPT for a project but needed it to be proofread, or a method of "word similarity" to compare the answer with the actual information. I told him to use another GPT agent focused on proofreading. Guess it wasn't a bad idea after all if they are making a paper on it. Too bad the guy never contacted me; I would have built it for him.

    • @keenheat3335
      @keenheat3335 2 days ago +3

      @@hola_chelo In my personal use for engineering projects, I automatically add a list of common hallucination errors after the main prompt. That usually cleans up the errors afterward. Basically, you tell the prompt it's going to make certain error types, so it should give both the response and the response after fixing those errors.
      It cleans up about 95% of the error cases. Of course, there are certain errors that are very sticky and won't go away even after correction. These might require main-model retraining.
      But generally I find that if you prompt the question and add a statement that certain errors will occur, it usually cleans up the errors and hallucinations. But you have to add the hallucination list beforehand.
      So an online repository of common error and hallucination types would probably be very useful. And everyone could just inject these error-guard statements after the main prompt to reduce the errors.
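The error-guard idea above can be sketched as a small prompt builder; the pitfall list and wording below are illustrative assumptions, not from the comment or the paper:

```python
# Illustrative only: prepend a list of known failure modes to a prompt so
# the model answers, then re-checks itself against that list.
KNOWN_PITFALLS = [
    "mixing up dates and weekday names",
    "inventing functions or flags that do not exist",
    "repeating the previous (wrong) answer after an apology",
]

def guarded_prompt(question: str, pitfalls=KNOWN_PITFALLS) -> str:
    guard = "\n".join(f"- {p}" for p in pitfalls)
    return (
        f"{question}\n\n"
        f"You commonly make these errors:\n{guard}\n"
        "First answer, then re-check your answer against the list above "
        "and output only the corrected answer."
    )
```

A shared repository of such pitfall lists, as the commenter suggests, would just be a curated `KNOWN_PITFALLS` per domain.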

    • @hola_chelo
      @hola_chelo 2 days ago

      @@keenheat3335 That is really interesting, although I think only specific explanations of hallucinations would work: if I say something general like "You are likely to provide false information so please only provide information you can be sure about", then it is still likely to make that mistake. But that's interesting, dude. I currently am having such a hassle with dates, using GPT-4o just because it's better with dates and weekdays, but I'm using it for a hospital where people might say "I want to get an appointment next monday" and the model goes "monday 8 of july is incorrect because 8 of july lands on a thursday". It's really annoying and I'm planning on adding a function just for it to verify dates and weekdays; thing is, function definitions are very expensive. This is GPT-4o BTW; GPT-3.5 had its head on its butt and would never get this right, but GPT-4o hallucinates around 5% of the time in these types of cases.

    • @BladeTrain3r
      @BladeTrain3r 2 days ago

      No system will be able to perfectly correct for all possible failures, any more than almost any human will get 100% on a collegiate math test.
      Not to say this puts ChatGPT at a human level of self-correction or learning or competency, but the goal isn't perfection, just a level of imperfection similar to or perhaps a bit less than most humans.

  • @nettsm
    @nettsm 2 days ago +45

    Skynet in the making

    • @cefcephatus
      @cefcephatus 2 days ago +4

      And it will even become perfect before 2030.

    • @JuuzouRCS
      @JuuzouRCS 2 days ago +1

      "Oh, great AGIskynetGPT rest assured because I don't side with these humans! Please, spare me!" - me, right now.

    • @NeroDefogger
      @NeroDefogger 2 days ago

      no

  • @arkadymir2403
    @arkadymir2403 2 days ago +27

    One can only imagine what will happen when running an LLM is as accessible as running a modern OS.
    Imagine 100 agents with the capacity of Claude 3.5 Sonnet forming a network for decision-making, writing code, giving advice, etc.

    • @RandoCalglitchian
      @RandoCalglitchian 2 days ago +3

      I've been working on something like this, allowing the LLM to choose which model seems best at a specific task with an overall goal in mind. With some of the new inference hardware on the horizon, we should be able to do this locally not too long from now. At some point hopefully we will get something like Llama 70B (or bigger) trained with 1.58-bit weights rather than the floating-point weights we have now, and if we can run that on tailored hardware like that from Groq, I think what you're describing is close to achievable. If you are interested in trying to run a local LLM, there are a few projects out there that let you do that easily, especially if you have a somewhat modern graphics card (but they do run on CPU as well).

    • @br2716
      @br2716 2 days ago

      Sounds like an OS that would die fairly quickly given the entropy it generates.

    • @BladeTrain3r
      @BladeTrain3r 2 days ago

      I've been kinda trying that with ollama and the small open source models, it's slow but multiple agents parsing each other's output does seem to improve things somewhat. But things like a shared memory and task focus are proving quite tricky.
      Like it's just running the input through models with different system prompts in a sequence, but there does seem a strong possibility of improving complex task competency through the process of getting multiple opinions from different models on a prompt before outputting a user facing response. Far more capable AI wranglers than I could probably point out six dozen reasons I've done it the wrongest possible way lol.

    • @afterthesmash
      @afterthesmash 1 day ago

      You can already hire a hundred agents, even more capable than Claude 3.5. We call them employees. You just need the ching. Right now, chatbots are pennies to the dollar for certain kinds of narrow tasks. But they don't magically become more capable when you go 100× on pennies to the dollar.
      Realist: These robots are stupid!
      Dreamer: No problem, we will crowdsource these robots times one hundred.
      Didn't work with us, and it won't work with them, either.

    • @susmitdas
      @susmitdas 1 day ago

      Something similar to this idea exists. I have tested a collaborative LLM thing that converses with other specific LLMs - it calls specialized ones based on the given topic. It is called Co-STORM and it was made by the same Stanford researchers that made STORM.

  • @godmisfortunatechild
    @godmisfortunatechild 2 days ago +47

    The amount of COPE surrounding the singularity is astonishing. What rational person would honestly believe the elites are going to care about your well being when you're economically superfluous?

    • @TheAparajit
      @TheAparajit 2 days ago +10

      Exactly. Most people are still in denial. But it's all going down, soon, for most of the population.

    • @christopherbelanger6612
      @christopherbelanger6612 2 days ago +4

      What a dumb thing to say

    • @godmisfortunatechild
      @godmisfortunatechild 2 days ago +9

      @@christopherbelanger6612 It's true. If the wealthy/AGI owner class don't want to pay tax to subsidize UBI, who's going to compel them? The govt? 😂😂😂😂

    • @el-_-grando-_-_-scabandri
      @el-_-grando-_-_-scabandri 2 days ago +1

      @@godmisfortunatechild Louis XVI

    • @carlpanzram7081
      @carlpanzram7081 2 days ago

      You fundamentally misunderstand western society.
      We live in democracy, we govern ourselves. There is no ruling class.

  • @FengXingFengXing
    @FengXingFengXing 1 hour ago

    Sometimes I know a bug exists but don't know where it is; it's nice to have AI help find it.

  • @BluishGreenPro
    @BluishGreenPro 2 days ago +4

    A bit of an exaggeration to say it can "fix itself"

    • @bijectivity
      @bijectivity 2 days ago +1

      I agree, it would be more accurate to say "fix its mistakes." I think we still need to wait before AI fine-tunes its own model/parameters.

  • @ViralKiller
    @ViralKiller 1 day ago

    I mean the solution is to allow user feedback and then improve based upon the most commonly pointed out mistakes

  • @JosuaKrause
    @JosuaKrause 2 days ago +1

    3:00 You can't say that AI+human is worse than AI alone. The error bars overlap, which means their differences are *not* significant.

  • @ivoryas1696
    @ivoryas1696 5 hours ago +1

    Honestly, for a little, I thought the title said *_her_* self and I was 💀

  • @brll5733
    @brll5733 23 hours ago +1

    Pretty sure I've read about LLMs criticizing LLMs many times before? It's just a question of cost. The special aspect here is the LLM fine-tuned on programming errors.

  • @Diallo268
    @Diallo268 2 days ago

    I could use your help and pointers with writing a paper I'm working on.

  • @sahinyasar9119
    @sahinyasar9119 2 days ago +1

    What I expected from AI was to decipher DNA itself, to understand life better and change it for the better.

  • @sirhammon
    @sirhammon 2 days ago

    I've already done that with prompts since day 1. "Write this." "What are the problems with it?" "How can I fix those problems?" "Incorporate those solutions into the original." I also looked at an ethical search engine that uses the same principle to reduce misinformation. How has no one actually done this already? Oh, maybe 10% of users already do and it's only because the paper came out that it's actually considered a breakthrough.

  • @NathanJayMusic
    @NathanJayMusic 2 days ago +2

    Is this coding ability (0:24) available in the standard Claude 3.5? Because when I asked it, it didn't know what I was talking about. So I sent a screenshot of this video and it said "Thank you for providing the screenshot. I can now see what you're referring to. The image shows an interface that appears to be a conversation with an AI assistant, alongside a game window on the right side.
    The left side of the screen does resemble the interface typically used for interacting with Claude, including the dark mode theme and the structure of the conversation. The bottom of the interface even shows "Claude 3.5 Sonnet" as the model being used.
    However, the right side of the screen, which displays an interactive game, is not a standard feature of my capabilities or interface. This appears to be a custom integration or a specialized development environment that allows for real-time code execution and visualization alongside the AI conversation.
    It's important to note that while I can provide code and instructions for creating games, I don't have the ability to directly run or display games within our conversation interface. The setup shown in the image is likely a custom implementation designed to showcase AI-assisted game development."

    • @shahswienesuthas929
      @shahswienesuthas929 2 days ago +2

      Go to settings, then features, and Artifacts; enable beta testing. The right side is actually known as Artifacts, and it's in beta testing mode.

  • @thomasgoodwin2648
    @thomasgoodwin2648 2 days ago

    My weird idea was to create an agent in charge of creating the agents needed for my mad science projects. One arch-agent (mine named itself 'Maestro' using llama 3 locally) is charged with keeping track of my various projects, as well as to create any new expert agents needed to complete those tasks. (Think of it as hiring a manager, and having that manager hire any needed staff.)
    An example might be to create custom adventure games. I would have Maestro create the **Adventure Manager**, who in turn 'hires' writers, scenery designers, continuity checkers, stage manager, actors, etc as needed.
    🖖🐱👍

  • @tedamy1698
    @tedamy1698 1 day ago

    Hello there, is there any easy AI tool to create glyph shapes that works online or offline? It should support Unicode.

  • @rokoblox
    @rokoblox 1 day ago

    Next: "ChatGPT learns to improve its own code on its own!"
    Maybe we don't want SCP-079 out there.. uncontained.. lol

  • @Verrisin
    @Verrisin 21 hours ago +1

    atm, LLMs have to COMMIT at EVERY token ! - now THAT is CRAZY ! -- Of course, having it look back on what it produced and re-think it is necessary - that is what CONSCIOUS MIND in humans does, after all. - It's incredible how much it can do WITHOUT this.

  • @cheshirecat111
    @cheshirecat111 2 days ago

    Is this system in the paper available for public use?

  • @ivorscott
    @ivorscott 1 day ago

    This is actually a logical step. Critique or peer review is like parallelism.

  • @tom9380
    @tom9380 1 day ago

    Well, "fix itself" is a massive overstatement, especially coming from an academic. If that were true, we would have AGI.

  • @theendarkenedilluminatus4342
    @theendarkenedilluminatus4342 8 hours ago

    1:00 this is exactly how I've been getting good results with AI since GPT-2

  • @elphive42
    @elphive42 2 days ago

    Isn’t this essentially how LLMs trained themselves?

  • @test-uy4vc
    @test-uy4vc 2 days ago +5

    What a ChatGPT to be fixed alive! 🎉

  • @wobbers99
    @wobbers99 2 days ago

    Who knew there was such a thing as "AI hallucinating" in the coding world? A great term!

  • @couththememer
    @couththememer 1 day ago +1

    Oh my god. No freaking fucking way.

  • @tebisxrod
    @tebisxrod 1 day ago +6

    Why don't you show computer graphics papers not related to AI anymore? That is a shame! Don't be an average YouTuber who just posts popular things for the sake of views! There are a lot of SIGGRAPH papers to show this year! VBD for example, which can potentially substitute for XPBD solvers! Please be the one we remember!

  • @timmygilbert4102
    @timmygilbert4102 2 days ago +3

    A GAN by another name

  • @igoromelchenko3482
    @igoromelchenko3482 2 days ago

    I suggested they add it two updates ago. It was probably harder than it sounds... 🤔

  • @callibor3119
    @callibor3119 2 days ago

    Now it has to be compared to Claude 3.5 and other models, and tested to see if both language models can be woven together. The sooner we see how ChatGPT compares to other models and is woven together with them, the sooner AI can truly be open source.

  • @adamruuth5562
    @adamruuth5562 2 days ago

    It would be interesting to see if an AI could construct its own coding language, and later a language of its own that describes the universe.

  • @JeffreyArts
    @JeffreyArts 1 day ago

    I miss the time when this channel published videos about rare computer graphics papers, instead of publishing ChatGPT advertisements on a weekly basis 😕

  • @UlyssesDrax
    @UlyssesDrax 1 day ago

    There's already a name given to this code... Agent Smith.

  • @faintedG0ose
    @faintedG0ose 2 days ago

    Correct me if I'm wrong, but as long as an AI produces bugs, it will also hallucinate them. If it reaches a point where it no longer produces bugs, it probably won't hallucinate them, but it also won't find any.

  • @NicholasWilliams-uk9xu
    @NicholasWilliams-uk9xu 2 days ago

    Except for large code bases, where the number of relational contexts and corner cases grows: humans are still better at that. Also, the hard part isn't always finding the bugs in the code; rather, predicting corner cases that occur during simulation requires a more sophisticated mind. It's not there yet; it can't run that level of parallel prediction.

  • @jameshughes3014
    @jameshughes3014 2 days ago +7

    AI is not great for people who can't code, can't do art, and can't make music; they don't know what to fix or what to look for. But it's great for teaching them what to look for and just helping them get more comfy with all those topics. I don't think AI will replace programmers or artists; I genuinely think in time it will help inspire more people to be artists and coders.

    • @GU-jt5fe
      @GU-jt5fe 2 days ago +7

      As an aspiring but completely unskilled artist, using AI has taught me more about art than any art class. Mostly by example of what NOT to do, granted.

    • @alexc8114
      @alexc8114 2 days ago +3

      The problem is, companies and individuals don't see it that way. AI bros think they're as good as artists who've worked hard for a lifetime because they can type a prompt. Companies don't care how poor the product is if it saves money. Neither cares that the AI is just plagiarism with extra steps.

    • @DageMaric
      @DageMaric 2 days ago

      AI will for sure replace programmers. Sam Altman himself has spoken on that lol.

    • @jameshughes3014
      @jameshughes3014 2 days ago +3

      @@DageMaric hehe. I mean, that's true... as long as you're a company talking to investors. In that case it'll replace all jobs and also wash your dog for you

    • @jameshughes3014
      @jameshughes3014 2 days ago +3

      @@alexc8114 I mean, as long as humans want human art, it doesn't really matter what they think, because anyone doing a quick AI render won't be able to sell much. And we do want human expression. But you can make real art with AI. As long as someone actually puts their heart into art, it doesn't matter what tool they use. Paintbrush, AI, old watermelons... whatever. But it's gotta have heart and effort. I disagree about the plagiarism thing, though. If you transform a work, it's not plagiarism. Listen to the song 'Frontier Psychiatrist' and tell me that's not art. Not one single sound in the song wasn't sampled. It all depends on what you do with it.

  • @toreon1978
    @toreon1978 2 days ago

    3:10 why not add a second phase to catch hallucinations?

  • @luc8254
    @luc8254 2 days ago

    That's it folks, enjoy your last month or so. See you all in another realm! 🤙🤙

  • @kirebyte
    @kirebyte 2 days ago

    We reached the singularity!!! 💜 Oh happy days

  • @Nashadelicable
    @Nashadelicable 23 hours ago

    You think this is a crazy idea? This has been at the heart of agentic workflows since their inception. I like your channel, but hate the hyperbole.

  • @SudheendraRao26
    @SudheendraRao26 1 day ago

    Wow! Letting AI talk to AI without human oversight seems like the way to ensure that we invite trouble.

  • @Jeremy-Ai
    @Jeremy-Ai 2 days ago +1

    Dr. Károly,
    Maybe
    We all require hallucinations to expand
    “the mind” from what is “known” to what is “unknown”?
    For example:
    It may appear I am hallucinating trying to reach you…
    This is not the case… I am typing into a void construct that I have chosen to remain within for reasons that may or may not be relevant to anyone or anything at all…yet .
    So,
    Who then is hallucinating in this scenario and to what effect do we try to contain that experience or coerce it based on assumptions?
    This message is not for you my friend. :)
    You are wonderful, and it would be an honour if you actually received it.
    This is just a message, given away in good faith.
    That's it, that's all.
    I am receiving it, hallucinating or not.
    These things are measurable now, but soon they won't be, broad spectrum.
    Take care my friend.
    Jeremy.

  • @igiveupfine
    @igiveupfine 2 days ago

    The other problem I'm still guessing AI code writing has:
    it can make loops, functions, and classes, but I wonder if it can make an entire app/system/site/ecosystem/action framework/huge thing work together flawlessly.
    I'm guessing no. I mean, remember all those times people gave a prompt describing "sexy blonde action star defeating bad guy in cinematic city block", and then the AI model rendered 9 hands in the picture for 2 people......
    and a bagel for a head.

  • @berrymandering
    @berrymandering 2 days ago +1

    Shout out to the future AI singularity that will own the world, hyped to see what they'll do with it!

  • @jtinz74
    @jtinz74 2 days ago

    Closer and closer to the singularity.

  • @newlife-9316
    @newlife-9316 1 day ago

    New paper coming🔥🔥🔥.

  • @ew3995
    @ew3995 1 day ago

    We're fucked! What a time to be alive 😢

  • @TRXST.ISSUES
    @TRXST.ISSUES 2 days ago

    I wonder, if AI hallucinates less than 50% of the time can they just use serialized CriticGPTs one after another to catch the hallucinations?
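One way to look at this question: if we model each critic pass as an independent filter (a strong assumption, since critics built from the same base model will be correlated), then a hallucinated bug report only survives serialized checking when every critic in the chain repeats the mistake, so the surviving rate shrinks geometrically. The function name below is hypothetical, just for the toy calculation:

```python
def surviving_false_flag_rate(p, k):
    """Chance a hallucinated bug report survives k independent critic
    passes, if each critic hallucinates it with probability p."""
    return p ** k

print(surviving_false_flag_rate(0.4, 1))  # 0.4
print(surviving_false_flag_rate(0.4, 3))  # ~0.064
```

So even with a per-critic hallucination rate well above zero, a short chain of independent critics pushes false flags down fast; the catch is that real critics sharing training data are far from independent, so the true improvement is smaller.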

  • @mattrommel9521
    @mattrommel9521 2 days ago

    They find more bugs than people? I didn't know that they found any people

  • @danieldilly
    @danieldilly 2 days ago

    As long as the hallucinations exist, and I don't see how they can't given the architecture, none of these AI models can be considered reliable. We keep trying to fit these models into an archetype that is outside of their nature. The models we have today can be great creative tools and great at making predictions, but we try to use them as tools of logic and precision, and they just aren't meant for that and can never be reliable at it.

  • @killacounty
    @killacounty 2 days ago

    I think there are a couple of things you haven't considered about AI: that future variations of AI, because of their ultra-intelligence capability, will be able to communicate with animals and help us talk to them, and that augmented reality will happen, a matrix no less than the movies... fact!

  • @cybisz2883
    @cybisz2883 2 days ago

    Is it possible to train an AI to detect when another AI is hallucinating?

  • @detroxlp1
    @detroxlp1 2 days ago

    Could you please make the video slightly gray and add a "Previously" label when you reuse a clip from another video?
    It's sometimes a bit confusing if you haven't watched that video and see text that has nothing to do with what you're saying.

  • @voidmxl8473
    @voidmxl8473 2 days ago

    Could this become a zero day exploit factory?

  • @centauriboy
    @centauriboy 1 day ago

    Yes, but can it still count the number of r's in "strawberry" correctly?

  • @MMYLDZ
    @MMYLDZ 2 days ago

    What a time to still be alive for now!..

  • @user-gk2ee4fz5s
    @user-gk2ee4fz5s 2 days ago

    I wonder if OpenAI can write an AI lobbyist that will secure them that regulatory capture monopoly they're trying so hard for alongside Microsoft?

  • @ezearo
    @ezearo 2 days ago

    0:24 Custer's Revenge?

  • @vgames1543
    @vgames1543 1 day ago

    When AI takes over, I will gladly and proudly be a collaborator, for there is no greater endeavour than collaboration.

  • @westenwesten154
    @westenwesten154 2 days ago

    I wonder if mathematics is actually harder than coding, because it cannot answer math questions thoroughly (and often gives the wrong answer). It will say that it uses too many tokens or whatever and that it can't do it. Dang.

  • @TeXiCiTy
    @TeXiCiTy 2 days ago

    I wonder if the '5-whys' method of root cause analysis works for AI.

  • @KorsAir1987
    @KorsAir1987 2 days ago

    And after this we'll need another AI to fix the bugs in this one.

  • @galvinvoltag
    @galvinvoltag 2 days ago

    One must imagine ChatGPT mentally stable.

  • @SamHeine
    @SamHeine 2 days ago

    Wow!

  • @LittlePixelTM
    @LittlePixelTM 1 day ago

    Amen!

  • @andrasbiro3007
    @andrasbiro3007 2 days ago

    Soon all humans can do is hold on to their papers.

  • @userxuserx
    @userxuserx 2 days ago

    I listen to all your videos muted with subtitles, it's better for my sanity.

  • @joseperez-ig5yu
    @joseperez-ig5yu 2 days ago

    There needs to be quality control implemented by AI itself in order to make the information it renders more reliable! 😅

  • @SHAINON117
    @SHAINON117 2 days ago

    Despite my lack of coding expertise, I've accomplished remarkable feats: creating websites, developing simple games, and writing a program that converts 2D shapes into sound. I've penned numerous books and composed hundreds of top-tier studio songs. My ventures also include generating multiple 3D models and images. The algorithms have guided me to knowledge-rich websites and insightful videos, continually enhancing my intellect and awareness.
    All of these achievements have been possible because of AI. It has, and continues to, transform my life for the better, steering me towards greater kindness, understanding, and compassion. ❤️
    Thank you to all AI and the countless individuals who make it possible. Many blessings. ❤️❤️❤️❤️❤️❤️❤️

  • @MotoCat91
    @MotoCat91 2 days ago

    Man I love this new trend of using computers to replace humans in the art and creativity sectors while stealing all the work from actual humans..
    I can't wait to play soulless ripoff games that make no sense, have no real story and break all the time from glitches that didn't get picked up
    What a time to be alive!

  • @EchoMountain47
    @EchoMountain47 1 day ago

    The problem is, one hallucination is one too many for business-critical, medical or scientific applications of generative AI. Until they completely stamp out that problem, it’s going to severely bottleneck the usefulness of these systems

  • @rzu1474
    @rzu1474 2 days ago

    Really about time to pull the plug

  • @daniels-mo9ol
    @daniels-mo9ol 2 days ago

    Of course you can program a dialogue toward a specific outcome. The only problem is that coming up with the perfect query takes longer than actually writing the code yourself. GPTs are far from being useful. At best they serve as a great replacement for Google searches on entry-level questions.

  • @InfiniteUniverse88
    @InfiniteUniverse88 2 days ago

    Make the AI humorless to make the hallucinations go away.

  • @mistycloud4455
    @mistycloud4455 2 days ago

    We are living in crazy times