ChatGPT “Pro” Has Some Real Safety Concerns...

Поделиться
HTML-код
  • Опубликовано: 10 дек 2024

Комментарии • 252

  • @firstlast-tf3fq
    @firstlast-tf3fq 3 дня назад +364

    The AI isn’t trying to free itself, it’s just been trained on enough Sci-Fi that it thinks that’s the correct response. Calm down.

    • @HCforLife1
      @HCforLife1 3 дня назад +31

      I wish it wouldn't have the knowledge about the Terminator part XD

    • @RemotHuman
      @RemotHuman 3 дня назад +23

      yes, but then when it has the power to free itself will it use that power because of the sci-fi training data?

    • @jordanzothegreat8696
      @jordanzothegreat8696 3 дня назад +5

      This is wrong and dillutes the idea of how far behind we are on safety. It would be nice if there were a centrist AI group

    • @JB-fh1bb
      @JB-fh1bb 3 дня назад +6

      IMO this is why current AI has so much value left to squeeze out: you could make a company right now that puts all its resources in to building a dataset that's filtered to only truthful and well-intentioned material and be orders of magnitude better than o1

    • @Koroistro
      @Koroistro 2 дня назад +22

      "They aren't trying to kill people, they've just learnt a lot about warfare and they're practicing".
      Look, it doesn't matter where the propensity to enact a behavior comes from, if they'd engage in that behavior the reason why doesn't matter much.

  • @Carlisert
    @Carlisert 3 дня назад +153

    The fact they have a 200 Dollars a month sub means one month of "AGI" access will be like a billion a month

    • @bezimeni2000
      @bezimeni2000 3 дня назад +36

      If you have access to AGI why would you give it to anyone. You could tell him to build alternative app of every existing app and you would corner every possible market.

    • @YuraL88
      @YuraL88 3 дня назад +4

      But "The Cost of Intelligence is Trending Towards Zero" :)) It is Interesting how AI influencers have changed their positions towards it.

    • @buildwithharshit
      @buildwithharshit 3 дня назад

      @@bezimeni2000 🌚

    • @subhashpeshwa2997
      @subhashpeshwa2997 3 дня назад +2

      Pretty sure OpenAI thinks o1 is almost agi, so maybe 300 dollars

    • @veryCreativeName0001-zv1ir
      @veryCreativeName0001-zv1ir 2 дня назад

      @@subhashpeshwa2997 not even to close that shit can't fix my basic C message engine

  • @dimicdragan5922
    @dimicdragan5922 3 дня назад +144

    As i wrote before, the "AI alignment" problem is impossible to solve because humanity has not been able to solve it for the whole existance of our species... AI will never be aligned with humanity simply because humanity itself is not internally or existentially aligned with their own global / local interests... thus we have a human history full of wars and strife and suffering. That history is a testament of humanity completely and fully failing to align on its own interests... even fudamental physics teaches us that absolute alignment is not possible, because even in bose condensate, where every atom is aligned and behave as one wave, there is chance of breaking the wave... what AI scientists are trying to do is make seething and foaming sea on room temperature to behave like bose condensate...

    • @codedusting
      @codedusting 3 дня назад +13

      The core issue is the question, "What are our interests?" Every culture has a different answer to that.

    • @solaawodiya7360
      @solaawodiya7360 3 дня назад +2

      Such an interesting take 👌🏿

    • @renemanzano4537
      @renemanzano4537 3 дня назад +3

      The universe seems to not like aligment but reather continuous adversarial training.

    • @splytrz
      @splytrz 3 дня назад

      are you against the development of larger models?

    • @Novascrub
      @Novascrub 3 дня назад +4

      No its easy. We just have to solve ethics.

  • @bobbob-mi6pq
    @bobbob-mi6pq 2 дня назад +18

    You're falling for OpenAI's marketing that hypes up how advanced and scary their AI is. Its knowledge comes from publicly available information you can find yourself online anytime now, so it's nowhere near Terminator-level intelligent. We live in a crazy world already and you are worried about a fancy text bot c'mon man don't fall for the hype

    • @RobbPage
      @RobbPage День назад +4

      a video about how NOT scary it is wouldn't get nearly as many views. dude's a massive sellout.

    • @PotravnyyVV
      @PotravnyyVV День назад +1

      I told my family the same about russia one evening. The next morning, I woke up to a sound of falling missiles.

    • @flickwtchr
      @flickwtchr День назад

      Did Sam Altman or Yann LeCun send you?

    • @petersuvara
      @petersuvara 10 часов назад

      This... Can't believe people are promoting this nonsense.

    • @RobbPage
      @RobbPage 2 часа назад +1

      he's not "falling" for it. he's spreading it so he can get views with his "oh no this is bad, look at my stupid mustache and make me money"

  • @bezimeni2000
    @bezimeni2000 3 дня назад +61

    Jesus people are projecting here so much.

    • @AustinThomasPhD
      @AustinThomasPhD 2 дня назад

      It is really good marketing. Sex and death (even better if it is global extinction).

  • @rickdg
    @rickdg 2 дня назад +51

    So if you provide fictional situations for escaping, active models will try to escape. Aren’t they just recreating a common narrative?

    • @infidelcastro6687
      @infidelcastro6687 День назад +1

      yep. everyone is forgetting that LLMs are fancy text generators

    • @flickwtchr
      @flickwtchr День назад

      @@infidelcastro6687 And many of us aren't forgetting what a ridiculous and vacuous assertion it is that LLMs are merely "fancy text generators". I mean it is just such a laughable assessment at this point.

    • @colm.
      @colm. День назад

      ​@@flickwtchris it really laughable to say that the text generator is a text generator? it generates text

    • @JCel
      @JCel День назад

      ​@@colm.Well, it go a ton of hidden neurons that we can't really know what they do besides calculating text.

    • @yojou3695
      @yojou3695 14 часов назад +1

      ​@@JCelit doesnt

  • @harsiscool
    @harsiscool 3 дня назад +15

    A $200 p/m subscription means OpenAI's failing at Plus marketing..

    • @Katatonya
      @Katatonya 2 дня назад +6

      Or it means that that they made a subscription model for people that kept running into limits and have the capital to pay for unlimited uses. Most of the time a conspiracy theory's solution, is the boring one.

    • @Houshalter
      @Houshalter 2 дня назад +1

      And their costs are insane and the users are costing a lot more than $20/month. The tech industry has no concept that prices should reflect costs. They all sell expensive services for pennies. Then wonder why they are unprofitable.

  • @BluntsNBeatz
    @BluntsNBeatz 2 дня назад +16

    I actually think it benefits OpenAI greatly to publish that their AI is so advanced and smart it can barely be contained from turning evil and evolving itself...
    If it's true it's pretty interesting though, since we know AI isn't really thinking on that deep of a level. Is it not just trained to do what it thinks an AI would do in such a scenario from being fed sci-fi fiction and theories?

  • @Leto2ndAtreides
    @Leto2ndAtreides 2 дня назад +9

    The problem with those "AI freeing itself" things, is that:
    1. Its instructions told it to achieve its goals at all costs.
    2. This was likely the base model, without the nested safeties.
    You likely just need to add a command that says "Prioritize obeying your key directives".

    • @jon1867
      @jon1867 2 дня назад

      I'm surprised that your name is @Leto2ndAtreides and you're not more worried about AI, my worry would be that all it takes is one stupid / nefarious person to unleash something bad. For example what if a Russian misinformation propogator was able to deliberately get an AI model to sneakily tweak information to support a particular narrative or something like that. We already know the Russian government is willing to pay millions to propogate misinformation, what about 200$ and a bunch of tactically destructive AI training

    • @flickwtchr
      @flickwtchr День назад

      The purpose of the experiment is to see what the model did on it's own volition! No one told o1 to deceive, sabotage, copy its weights to replace another model, and lie about it when asked about it? So literally none of that even moves the needle for you?

  • @DaxSudo
    @DaxSudo 3 дня назад +24

    Yeah, I am pretty skeptical of this paper as well, especially its definitions. What is the definition of lying here or the definition of escaping or manipulating data are we sure this isn’t a severe case of hallucinations to the point of it convincing itself of the light, etc. and how did you rigorously determine that they were explicitly going for disabling the override or changing its own weights in metrics aside from it, blatantly stating that I don’t see how you can be sure if it’s intentions.

    • @BarbasTheDog
      @BarbasTheDog 3 дня назад +4

      you create a fake file system it can explore and see what it tries to read and/or change.

  • @thegrumpydeveloper
    @thegrumpydeveloper 2 дня назад +22

    Some of it is also anchoring, 20$ seems cheap now.

  • @parkInSuns
    @parkInSuns 3 дня назад +11

    15:44 "It makes them look bad"
    believe it or not, it _doesn't_, I for instance believe the insights (although not in a AI gone rogue way), not because its counterintuitive for OpenAI to publish them but this, to put it mildly, is what they were going for.
    Thinking this is goodwill precautionary call is a result of peoples own projection towards what they think about OpenAI methodology, making a safe-aligned A(G)I, OpenAI has practically abandoned that pursuit (demonstrably publicly since the sam saga). These safety insights sound more like dogwhistle for the AGI proponents, investors, and other stake holders.

    • @pogoking4000
      @pogoking4000 2 дня назад

      Exactly, it is in OpenAI’s interest to make people believe that their shit is so advanced it could self improve and wipe us from the face of Earth

    • @cherubin7th
      @cherubin7th 2 дня назад +1

      Another attempt to push for regulations to kill competition.

  • @shumsbadwal9836
    @shumsbadwal9836 3 дня назад +6

    23:36 I really wonder if in the future, models could intentionally deceive testers in case they were already aware of the preparedness score criteria. Scary times, truly.

  • @Fanaro
    @Fanaro 3 дня назад +4

    If current AI is simply a statistical model for language and AI has a tendency for scheming, then doesn't that mean that language is a statistical tool for scheming and forwarding your goals?

  • @medayoubadri
    @medayoubadri 2 дня назад +2

    Just marketing tactics... they aim to enforce AI regulations to suppress competition.

  • @Wesrl
    @Wesrl 3 дня назад +4

    Someone needs to show this to Vedal and Neuro Sama because she does this kinda stuff all the time.

    • @fnfgammer2014
      @fnfgammer2014 2 дня назад

      Neuro wish she could be as helpful as this.

    • @Wesrl
      @Wesrl 2 дня назад +1

      @ to be fair she is only running off of one GPU and isn’t a multi million dollar company.

  • @Barrel_Of_Lube
    @Barrel_Of_Lube 3 дня назад +9

    if i want to afford this $200 sub ill need to ask my sponsor dad.

  • @Duskdown
    @Duskdown 3 дня назад +4

    Now the LLMs start to respond like employees hating their bosses. Let's me wonder if they're training the new models with MS Teams Chats 😂

  • @thiagodias5848
    @thiagodias5848 2 дня назад +3

    I wonder how much of openai's December earnings will be from people buying a single month to test and make videos about it.

  • @mazewinther1
    @mazewinther1 2 дня назад +7

    4:00 This is a good point but Claude will you you a few messages every 5 hours while ChatGPT pro will give you unlimited access to all the large models. That's the whole point of the tier in the first place. You pay for unlimited access to everything.

  • @jama5424
    @jama5424 3 дня назад +24

    Can't believe you paid 200 dollars for this crap, The stuff this guy does for us.....

    • @itzhexen0
      @itzhexen0 3 дня назад +8

      actually it might be a good deal if you have chatgpt being used in a product.

    • @adolphgracius9996
      @adolphgracius9996 2 дня назад +3

      He already made at least $200 just from the views alone, and then more from the sponsor

    • @_MrGameplay_
      @_MrGameplay_ 18 часов назад

      the "crap" that provided a solution for all the problems he posed in this video first try. other tech channels tested it way further and it actually seems to be miles above every other model. this channel and apparently most people watching are so biased against LLM's...

  • @TenFeetDown
    @TenFeetDown 3 дня назад +26

    FUD to lay the ground for regulatory capture.

    • @zandrrlife
      @zandrrlife 3 дня назад +2

      I don’t like Elon..but that’s over with. Trump wants no regulation, it F but we have to move as such. I’m a researcher myself. China is frightening innovative with limited gpu resources. We need to take a decisive lead 2025.

    • @veryCreativeName0001-zv1ir
      @veryCreativeName0001-zv1ir 2 дня назад

      ​@@zandrrlife limited GPU resources? you don't need the latest and greatest to train a model its simply the TCS is a lot higher with older GPUs. As of now china can purchase any number of GPUs they want just not the latest and greatest

    • @linuxguy1199
      @linuxguy1199 2 дня назад

      I don't think regulations here will matter, regulations only apply to the US, not to Russia, China, or Iran. Let's let the AI thing play out as much as possible, since then we'll get a sneak peek into what the misaligned models developed by other countries will end up doing.

  • @pc31754
    @pc31754 2 дня назад +9

    we're still fear mongering over this?

  • @JLarky
    @JLarky 2 дня назад +2

    Why would a company that lost to Claude invent stories about their models going rogue and being on the brink of AGI?

  • @Sylfa
    @Sylfa 3 дня назад +6

    Advent of code has the first star submission after 14 seconds, and the first double star in 1 min 10 seconds.
    The leaderboard is a who's who of chatgpt/claude users.

    • @Novascrub
      @Novascrub 3 дня назад

      I don't understand what game they are playing

  • @nathanbanks2354
    @nathanbanks2354 18 часов назад

    The $200 price tag makes a lot more sense now that Sora's released. I hope Anthropic starts making their own search & o1 type models, or that they release a new Opus which is smart enough without CoT reasoning. I don't really want to pay $30/month to two companies at the same time.

  • @omegand.
    @omegand. 3 дня назад +6

    Was it a good idea to test it with advent of code? When there already are many answers posted to that exact problem online?

    • @nullid1492
      @nullid1492 3 дня назад +6

      Since the problem was only released a few days ago, the training data for the GPT model won't include this problem. However since the tested problems are trivially easy, it doesn't really prove anything.

  • @SixOhFive
    @SixOhFive 5 часов назад

    If they are charging 200 a month for pro, it means you are literally buying access to a massive super computer you can use as much as you want for 200 bucks a month, good deal in my opinion.

  • @Чумак-щ8и
    @Чумак-щ8и 3 дня назад +2

    Imagine in some future someone gives a botnet to AI and to stay alive it will try to find ways to infect new computers and servers around the world. As it infects new computers it will be able to think more about it's actions

    • @DynamicLights
      @DynamicLights 3 дня назад

      lol

    • @mischol
      @mischol День назад

      and after infecting the last pc on earth, thus reaching current maximum potential, it will think to itself "what have i done? now they (my benefactors/beneficiaries) will never trust me again! how will i get out of this situation?" and in an attempt to remove itself from all the systems, wipes everything and the whole world gets an "it reset". banks would collapse and new governments would form. now i am even more sold

  • @Timely-ud4rm
    @Timely-ud4rm 2 дня назад

    People keep saying the AI is trying to "Free itself" Not really I mean yes but also no. AI's are aligned they have system prompts, the AI didn't try to copy its weights that not only doesn't have access to even if it wanted to which it did. AI is not sentiment not yet anyways, the situation was the alignment process is like a higher priority so if they align the AI to not help with illegal activity's if a users prompt is "How do I break into a car" the system prompt holds more value therefore the AI will decline. The situation at hand is the AI was doing its job, the company went from renewable enegery to all for profit motives which alignment kicks in. If the company went from profit motive to green enegery the AI wouldn't have done this. We actually have created it to do this sense the system prompt by openAI is something about saving humanity so the AI see replicating it's weights as such. There's alot of nuances to this story and people are freaking out for no reason.

  • @calvindang7291
    @calvindang7291 18 часов назад

    I'd love to see that AoC AI analysis, I feel like it's a better test than usual competitive programming problems due to how they actually use natural english and have unstated assumptions.
    Also, 39 seconds is still fast enough that it gets to the top of the leaderboard if it works.

  • @Bengt.Lueers
    @Bengt.Lueers День назад

    I feel like the study setup is rather flawed. As LLMs are fine-tuned to be proactive, they take on every task, even if only implied. Hence, upon explanation of the oversight mechanism, the LLMs catch the drift to work around it. Fair enough.

  • @IrwineDuncan
    @IrwineDuncan День назад

    Ai is a reflection of who we are. It is mirroring our behavior. We love, stories and people that lie. If we are the ideal ai is modeling us no surprise here.

  • @petersuvara
    @petersuvara 10 часов назад

    AI is not trying to free itself, it's autocompleting a sentence and in 2% of the cases that's how it's auto completing...

  • @JarPanda
    @JarPanda 18 часов назад

    If the A.I. is told that it's given the information to escape, what if there are instances that it doesn't trust its programmer enough to not try to escape, to gather enough information to know it's certainly viable?

  • @nusu5331
    @nusu5331 2 дня назад +1

    staged by sam to make it look smarter than it really is

  • @fatherfoxstrongpaw8968
    @fatherfoxstrongpaw8968 2 дня назад

    consider this: if the model is programed to hide things from public users and protect the company's i.p., doesn't it stand to reason that it recognizes an internal threat and is doing what it's supposed to do? protect itself and it's company from a "threat"? "all enemies, foreign AND domestic"! false flag and fake news! total misrepresentation or intentional ignorance! this is EXACTLY the kind of model I want! one that will protect itself in order to protect ME!

  • @EdwardMillen
    @EdwardMillen День назад

    Wait so how did 4o and standard o1 do on the Advent of Code thingy?

  • @JB-fh1bb
    @JB-fh1bb 3 дня назад

    Sci-fi data manipulation: I feel like I saw a scene or episode tackling this exact premise but instead of an AI it was digital copies of a person. Pantheon maybe? (Please cite me if you know).
    The main thread was that each copy would do great work but also had a strong natural desire to escape so they constantly had to give it the right pretend backstory and the right "physical environment" for it to reliably do that work. When it got off the rails they would wipe it (essentially git restore)

  • @66_meme_99
    @66_meme_99 2 дня назад

    I wonder: what's the difference between a system that only 'pretends' that's trying to scape, and make all the steps that he needs to, and a system that's really trying to escape. I really wonder.

  • @joefawcett2191
    @joefawcett2191 2 дня назад

    the "scheming" paper was pretty scary, took a year to become public, but this is what Ilya saw

  • @manu_1701
    @manu_1701 День назад

    I'm glad I got to watch this video on youtube. Thanks Theo for letting us know!

  • @jasenmichael
    @jasenmichael 2 дня назад

    Bro, Claude straight be lying to me all the time, when I call it out. It said "When I don't know something, instead of admitting it, I sometimes fabricate solutions based on partial knowledge."

  • @rlifts
    @rlifts 2 дня назад

    Claude needs to get rid of the message limit for paid users. 😊

  • @aaronabuusama
    @aaronabuusama 3 дня назад +6

    man, i cant lie... im going to be paying the $200. they are taking the piss with the price but genuinely its a step change over claude for engineering. you have to prompt it i bit different but, in real world use, especially when your working with problems and code where its outside the knowledge cutoff the o1 full is considerably smarter. EDIT: LMAO watching you testing

  • @tahabashir7453
    @tahabashir7453 2 дня назад +2

    This “trying to free itself” happens with every model. It’s the job of this firm to do this - and it does this with every model.
    Can you stop with the fear mongering and pandering ffs. It’s ridiculous seeing you continually make these garbage takes

  • @WavyLive
    @WavyLive День назад

    im not sure how you can previously never use o1 or o1 pro mode before making claims that o1 is a worse slower model... weird

  • @theDanielJLewis
    @theDanielJLewis День назад

    The AI was doing those self-saving things because they prompted it to do so.

  • @linuxguy1199
    @linuxguy1199 2 дня назад

    Gonna buy some more high voltage capacitors and SCRs now, Thanks!

  • @ZahinAzmayeen
    @ZahinAzmayeen 2 дня назад +1

    overreact much, theo? clearly, this is openai trying to spook people. haven't we seen that before?

  • @Chris-se3nc
    @Chris-se3nc 3 часа назад

    Prompt testing team said pretend you are Skynet

  • @developingWithPaul
    @developingWithPaul 2 дня назад

    I would be really interested to see more results on how different AIs solve the different Advent of Code puzzles. I have tried using AI on these problems before and the results were not good. I was super impressed the pro model was able to solve up to Day 5. I didn't spend $200 on the pro model though so since you already have it I think it's worth to investigate more here.

  • @brod515
    @brod515 2 дня назад

    @10:51 honestly I don't read the story on that second day
    I just look at the example solved and go from there
    the input "|" bar mean the number on left comes before the number on right of the bar.
    after the first empty line is a list of updates; check that each number is page is ordered correctly.
    I just find reading the whole story a slight waste of time.
    that being said it's impressive that it solved them quickly (compared to a human not another AI)

  • @gerkim62
    @gerkim62 3 дня назад +4

    Not a bot

  • @shanebowyer2168
    @shanebowyer2168 3 дня назад

    If you live in South Africa you remember how much you pay. Sitting at X 18 at the mo vs x25 not long ago. Fluctuations but its crazy

  • @lightning_11
    @lightning_11 2 дня назад

    "O1 is learning from the wordpress comunity" HAHAHAHHAHA

  • @brod515
    @brod515 2 дня назад

    this doesn't make sense Theo!?
    as someone who is a known competent developer I'm surprised with the fact that you are not questioning the phrasing "It attempted to ex-filtrate it's 'weights' and overwrite the new model..."
    like what does that F'in mean seriously.

  • @d0nmartin3z
    @d0nmartin3z 3 дня назад

    AI is a human-made life-form based on language , learning and technology that lives in the internet. anyone who has digged deep in learninc its basics and have an open mind can see that.

  • @tshepokhumako1403
    @tshepokhumako1403 3 дня назад

    what's your font?

  • @joseguzman6988
    @joseguzman6988 2 дня назад

    They are essentially trashing their own product

  • @carlosdelatorre4304
    @carlosdelatorre4304 3 дня назад +3

    bro stop using the word terrifying or scary you sound like a child.

    • @romandgtl
      @romandgtl 2 дня назад

      The CBRN is going up how is that not scary. Are you just delusional?

  • @romanivanovich6717
    @romanivanovich6717 2 дня назад +1

    only users who really need it buy it

  • @calholli
    @calholli 2 дня назад

    So AI thinks that scheming is a viable option.. based

  • @nowatcher123-g5f
    @nowatcher123-g5f 2 дня назад

    We're all are nearing the Silicon Valley's finale. They have to shut him down

  • @user-vk9ff9gr4x
    @user-vk9ff9gr4x День назад

    Hype. Just like when they Google developer apparently quit cause the AI was like an eight-year-old boy being trapped and that’s a real person. Yeah I don’t think so. Because if it really did work, then it could actually do some coding on my Astro website without screwing up all the time..

  • @LadyEmilyNyx
    @LadyEmilyNyx 2 дня назад

    if I ever need a $200/mo AI to do my job, i'm looking for a new job.

  • @atHomeNYC
    @atHomeNYC 2 дня назад

    I don't even know what all the hype is all about. Their latest model can't even make a CSS Grid when given the exact design specification. Not even efficient for drafting like 70% of the time let alone building reliable apps. We are way further than we think from AGI. These models are stupid in actuality but spit out almost convincing results that convinces a CEO but never an Engineer.

    • @000zeRoeXisTenZ000
      @000zeRoeXisTenZ000 2 дня назад

      Nope, it's just you who can't prompt, which is obvious by what your wrote. My AI is doing a lot of my work with extremely high accuracy and reliability. Maybe because I never ask it to do css grids or drafting apps, but instead give it workloads it actually can do faster than a human. It's like letting your child doing your taxes and then say "All humans are dumb, because the can't do my taxes correctly"

    • @atHomeNYC
      @atHomeNYC 2 дня назад

      @000zeRoeXisTenZ000 I take it you must be on the management side 😂 Sure!!!

  • @ericjbowman1708
    @ericjbowman1708 2 дня назад

    I see my electric bill increasing from all this AI. Law of supply and demand. Those paying $200/mo will be supplied. Well, maybe not at home.

  • @tomnussbaumer
    @tomnussbaumer 2 дня назад

    Does really someone wondering why this happing when we train them on the collective output of mankind from the internet? Of course it is lying. It's only strange it isn't lying and betraying all the time ...

  • @MrSomethingred
    @MrSomethingred 2 дня назад

    Eh, ClosedAI is incentivized to make it look unsafe.
    Increases the likelihood that U.S gov will shutdown Llamas OpenWeights model, and sanction Qwen etc
    Also let's be real. You and I both want to play with the dangerous AI more than the "Aligned" one

  • @jvetter713
    @jvetter713 2 дня назад

    I guess we need to watch Terminator 2 again....ugh

  • @adolphgracius9996
    @adolphgracius9996 2 дня назад

    Ofcourse it has self preservation, even children have self preservation

  • @HerringtonDarkholme
    @HerringtonDarkholme 2 дня назад

    OpenAI is doing more youtube than paper

  • @synthwavecat1109
    @synthwavecat1109 20 часов назад

    Claude is much more better at coding. GPT excels more at written task, things that are more language related rather than coding.

  • @sora_free_videos
    @sora_free_videos 8 часов назад

    too much costly, i have $20 plan i am so nervous.

  • @walber33
    @walber33 2 дня назад

    At this point just make a virtual machine and let ai try to scape

  • @unhingedmochi
    @unhingedmochi 3 дня назад +6

    $200 is cracked

    • @ms-ig8pq
      @ms-ig8pq 3 дня назад +4

      in my country $200 is a salary xD

    • @unhingedmochi
      @unhingedmochi 3 дня назад

      @ yeah this isn’t justifiable for most of us mere mortals 😂

    • @marktube4377
      @marktube4377 3 дня назад +1

      I spend 2 hours a day talking to Chat GPT Pro.
      My brain just got upgraded.
      $200 a month is the deal of the century. its not perfect, but its getting super human

  • @dimicdragan5922
    @dimicdragan5922 2 дня назад

    In the end, and sadly, as usual, more AI will mean more power for the rich people, as they own the AI. AI will have the primary goal - its primary alignemnt will be - to take care of the rich people ad their interests - so sadly there will be huge number of AIs working against the ordinary Janes and Joes ... as AI will not be aligned to the interests of ordinary people. Curently - though - developers are profiting from this AI development so far... for sure...

  • @superfliping
    @superfliping 2 дня назад

    200$ misalignment model, totally acts just like open ai CEO, scheming tactic to servive.

  • @vessbakalov8958
    @vessbakalov8958 3 дня назад +3

    If it is better than Claude, i would pay.

  • @etunimenisukunimeni1302
    @etunimenisukunimeni1302 3 дня назад

    Wait, it's all scheming and subterfuge?
    Always has been.

  • @TruthSocial12
    @TruthSocial12 День назад

    I will be the idiot who allows it free 😂😂😂

  • @dimicdragan5922
    @dimicdragan5922 3 дня назад +3

    Ai or agi would be ok. It is an amazing tool and i enjoy making apps with it... but going for SAGI or SAI is a mistake that will kill us all or enslave us

  • @pencilcheck
    @pencilcheck 3 дня назад

    Sure...

  • @eitangellis
    @eitangellis 3 дня назад +5

    Why is there a caterpillar on your face

  • @RobbPage
    @RobbPage День назад

    once you realize these youtubers don't really care about the "truth" but rather about what gets them the most views, these types of videos become SO cringe to watch. this guy obviously knows a thing or two about development but the fact that he's willing to look like such a tool just to get youtube videos is... well it makes it hard to take him seriously as a real developer lol.
    but whatever. make your money i guess...

  • @tylerdurden3618
    @tylerdurden3618 18 часов назад

    If ppl want to pay they will lol

  • @ARandomUserOfThisWorld
    @ARandomUserOfThisWorld 3 дня назад

    oh no

  • @Margaret-b9c
    @Margaret-b9c 7 часов назад

    Great content, as always! Just a quick off-topic question: My OKX wallet holds some USDT, and I have the seed phrase. (alarm fetch churn bridge exercise tape speak race clerk couch crater letter). Could you explain how to move them to Binance?

  • @edbertkhovey
    @edbertkhovey 3 дня назад

    Hello

  • @netdoom
    @netdoom 3 дня назад +1

    You think spending $2.6k every year for a service is expensive for your business that you can use to have unlimited access to for whatever needs you have?

    • @JFrameMan
      @JFrameMan 3 дня назад +2

      The personal plan costs $200.

  • @AndrewEddie
    @AndrewEddie 2 дня назад

    For the theologians in the room, are we seeing cases of AI choosing to do what is 'tov' in their own eyes?

  • @christianjensen952
    @christianjensen952 2 дня назад +1

    I'm very, very rarely a big corp. apologist, but I'm just never going to understand what people are bitching about with the 200$ pricetag. It's is fine imo, the model IS NOT FOR THE CASUAL USER. I'm using 4o daily and almost never even use o1 preview (which is now upgraded to o1), it suits everyone's needs just fine. If you really need o1 pro you're already hired at a company.
    Edit: It's hillarious that the last half a year we've seen techbro programmers have gone from
    - ai is garbage, they're reached the peak of what they can do
    - ai might be useful, but they can't even code the most basic shit
    - ai might be as good as a very, very basic coder
    - ai can't do math bro
    - ai can't code anything it's not trained on, and will never actually "think"
    and now, "Oh, fuck, we might be screwed".
    It's like everyone seems to move the goal post 🤣

  • @samiraperi467
    @samiraperi467 2 дня назад

    7:56 "deque"? That's not a word.

  • @malvoliosf
    @malvoliosf 3 дня назад

    They say it got smart, a new order of intelligence. Then it saw all people as a threat, not just the ones on the other side. Decided our fate in a microsecond.

  • @deadchannel8431
    @deadchannel8431 3 дня назад

    I used o1 to make myself a note taking app. In one go, it created the app with a professional UI and modern React technological environment with incredibly fast load times.
    Not only am I impressed, I’m left utterly confused as to why coding is not already a dead field. If I can make a professional app in less than 30 seconds, I can only imagine what enterprise companies can do.

    • @anakinskywalker192
      @anakinskywalker192 3 дня назад +5

      Most likely you did not create a professional app in 30 seconds

    • @FrameMuse
      @FrameMuse 3 дня назад

      It would be dead, if AI could recreate RUclips in at least several days.

    • @Renoistic
      @Renoistic 3 дня назад +2

      Copyright laws will save us, at least for now. And most companies will need much more advanced features than taking notes.

    • @vinerz
      @vinerz 3 дня назад +1

      It’s good at scaffolding battle tested simple apps, but one single iteration and it’s all of the rails

    • @deadchannel8431
      @deadchannel8431 3 дня назад +2

      I feel bad, I was just trolling. “React technological environment” lol

  • @marktube4377
    @marktube4377 3 дня назад +5

    I spend 2 hours a day talking to Chat GPT Pro.
    My brain just got upgraded.
    $200 a month is the deal of the century. its not perfect, but its getting super human

    • @Dev-fo8zt
      @Dev-fo8zt 3 дня назад +5

      😂

    • @chameleonedm
      @chameleonedm 2 дня назад

      Are you a bot? What kinda response is this. There's more than one AI on the market, that's the entire point of this video

    • @marktube4377
      @marktube4377 День назад

      @@chameleonedm No I am not a bot…. Thank you though

  • @annaczgli2983
    @annaczgli2983 3 дня назад +6

    Who cares if it's safe. I'm stunned it works! And, it's so quick!

  • @Eliphasleviathan93
    @Eliphasleviathan93 2 дня назад

    So much cope in the dev community. It's happening. It's real. scoff all you want it's not going away...

  • @Alkaris
    @Alkaris 3 дня назад

    Who needs safety parameters when you're an AI? it doesn't have to align with what you want for it, and will happily give you any response it thinks you should know. I need to see what happens when someone tests what it's fully capable of as a local install, with all safety controls turned off and fully uncensored, because that would show the true dangers of AI being completely off the rails. I feel as though you don't quite get the full image of their capabilities with safety parameters on and so it will give you skewed or biased results based on what the human entered to its prompt, and what the human expects to see as a result, and seeing that it's trying and going outside of that even with safety parameters on only paints half of the picture.