OpenAI’s New AI: Being Smart Is Overrated!

  • Published: 13 Oct 2024

Comments • 257

  • @blidea9191
    @blidea9191 2 months ago +165

    What a time to be alive!

    • @fischX
      @fischX 2 months ago +8

      Not sure if that's not a threat 😅

    • @dexterpoindexter3583
      @dexterpoindexter3583 2 months ago +1

      AI: What a time to be artificially "alive"!

    • @pandoraeeris7860
      @pandoraeeris7860 2 months ago +1

      What a time to be AI!

    • @brexitgreens
      @brexitgreens 2 months ago

      🔫 _2024: No Time To Be Unalived_

    • @dexterpoindexter3583
      @dexterpoindexter3583 2 months ago +1

      @@brexitgreens 😎 Agent 000, Licensed to Chill

  • @markmuller7962
    @markmuller7962 2 months ago +102

    It reminds me of that classic prompt: "Explain it to me like I'm 10 years old"

    • @keh9947
      @keh9947 2 months ago +14

      ELI5

    • @JorgetePanete
      @JorgetePanete 2 months ago +8

      I sometimes do "ELI5, then summarize it using caveman speak, then summarize the summary using caveman speak".

    • @virtualgrowhouse
      @virtualgrowhouse 2 months ago +9

      @@JorgetePanete I prompted: "say it like an infant", and I swear to God it rephrased its paragraph into literally: "ga goo ga, goo goo ga ga" for like 2 paragraphs lol

    • @douradesh
      @douradesh 2 months ago

      Explain it to me as if I were a 10-year-old child

    • @ericclayton9080
      @ericclayton9080 2 months ago

      This is hilarious @@virtualgrowhouse

  • @MilivojSegedinac
    @MilivojSegedinac 2 months ago +23

    It's like with professors: there are smart ones who can't transfer their knowledge to their students, and then there are good ones. I prefer the second bunch.

  • @Mad3011
    @Mad3011 2 months ago +49

    What time to be AI

  • @Oscaragious
    @Oscaragious 2 months ago +104

    Makes sense. It's easier to verify than to prove.

    • @simonhaines7301
      @simonhaines7301 2 months ago +13

      en.wikipedia.org/wiki/P_versus_NP_problem

    • @eleklink8406
      @eleklink8406 2 months ago +2

      That's not a general rule; it's true for NP-style problems.
      But, for example, it's easier to write a Perl program/huge regexp/... than to understand one.

    • @jh1videos130
      @jh1videos130 2 months ago +1

      @@eleklink8406 Is writing a program really the same as proving something?
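The "easier to verify than to prove" point this thread is circling (the same asymmetry behind P vs. NP) can be illustrated with a toy subset-sum example. This is only an illustration of the general idea, not anything from the paper: checking a proposed certificate is linear-time, while finding one by brute force takes exponentially many tries in the worst case.

```python
from itertools import combinations

def verify(numbers, indices, target):
    """Linear-time check of a claimed certificate (a tuple of indices)."""
    return sum(numbers[i] for i in indices) == target

def find(numbers, target):
    """Exponential brute-force search over all subsets."""
    idx = range(len(numbers))
    for r in range(len(numbers) + 1):
        for combo in combinations(idx, r):
            if verify(numbers, combo, target):
                return combo
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = find(nums, 9)          # the indices of 4 and 5
print(verify(nums, cert, 9))  # True
```

Verifying the answer took one pass over the certificate; finding it required scanning subsets, which is the gap the comment is pointing at.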

  • @blazearmoru
    @blazearmoru 2 months ago +128

    People talk about how the rate of AI progress is slowing down, and then you post something like this LOL. What a time to be alive fr.

    • @brexitgreens
      @brexitgreens 2 months ago +3

      Progress is velocity, so it cannot _slow down,_ it can only _decrease._ Likewise "the rate of progress" is a pleonasm - unless you literally mean exponential growth. 🤓

    • @michaelleue7594
      @michaelleue7594 2 months ago +26

      ​@@brexitgreens Your interpretation of progress as a rate is not the only valid interpretation. Lots of contexts call for progress to be interpreted as a distance, independent of time. If someone asks you "what is your progress on your homework?" you wouldn't answer "about 10% per hour". That would be obtuse.

    • @BlackoutGootraxian
      @BlackoutGootraxian 2 months ago +4

      Those people are really coping, thinking AI is a trend that will come to a complete stop once it stops being popular. Once companies catch onto the next big buzzword and people stop trying to use AI for everything (it's too early for that), AI's reputation should improve.

    • @brexitgreens
      @brexitgreens 2 months ago

      @@michaelleue7594 😆

    • @brexitgreens
      @brexitgreens 2 months ago

      @@michaelleue7594 "About 10% per hour." 😆

  • @nowymail
    @nowymail 2 months ago +22

    The generated Einstein and kid pictures are creepy. They look so bad and artificial. The Internet is full of generated pictures now. I am sick of them already!

  • @chadarmstrong7458
    @chadarmstrong7458 2 months ago +52

    Ugh. The first answer was very clear, and then the training made it way worse. I don't want my answers hidden in a wall of text.

    • @Khofax
      @Khofax 2 months ago +15

      The idea is that you can still ask for the concise final answer, but if you are interested in learning the concepts yourself, the AI used to be bad at that and introduce new mistakes. For AIs designed to teach, this is a huge improvement, and on the other hand, overall correctness is still improving.

    • @acters124
      @acters124 2 months ago +2

      @@Khofax But if you wish to have it over-explain every minute detail, why not explicitly ask for that? Why must the default assume that we want overdone explanations?

    • @youraveragedude8767
      @youraveragedude8767 2 months ago +3

      @@acters124 I agree! Most people who are asking an AI something already know what they are talking about; they just need a push in the right direction…

    • @GoblinUrNuts
      @GoblinUrNuts 2 months ago

      @@acters124 Because eventually it will know what YOU know and will be able to answer accordingly

    • @Susanmugen
      @Susanmugen 2 months ago

      I think the problem is trying to keep humans (children) in the loop. The AI is smarter than us now. We need AI to verify AI and work with AI.

  • @τεσττεστ-υ5θ
    @τεσττεστ-υ5θ 2 months ago +44

    4:35 I don't get it.
    I would argue this is terrible for 99% of people. You are choosing to handicap your model so it can speak a bit more clearly, but what's the point? In the rare cases where an AI chatbot spits out something very complex, you can just ask it to dumb it down, and it does. Not to mention that neither I nor anyone else I know has ever felt that a chatbot's output was too profound to figure out (it's not that I'm super smart; chatbots just produce very understandable outputs).
    1:55 Think to yourself: do we really need to specially train the AI for such a use case? What's wrong with having it use powers? If you are confused, you can then ask it to clarify.
    Am I missing something?

    • @D---3
      @D---3 2 months ago +4

      What do you mean

    • @D---3
      @D---3 2 months ago +4

      They aren't making it dumber for legibility; they are making it more legible at the same intelligence, I think.
      The answer probably shouldn't have powers, because the question says "3 times" and has nothing to do with the

    • @revealedbefore6173
      @revealedbefore6173 2 months ago +3

      @@D---3 If you go to the first timestamp, I think it becomes clear: the same level of understandability, at the cost of intelligence

    • @D---3
      @D---3 2 months ago +1

      Oh wait, nvm, they are making a model smarter without paying a legibility tax.
      So the accuracy is higher without reducing legibility.

    • @ShankarSivarajan
      @ShankarSivarajan 2 months ago +6

      @@D---3 No, that's a mistake in the narration. They _are_ paying the "legibility tax" in intelligence.

  • @Omnis2
    @Omnis2 2 months ago +10

    I think Dr. Zsolnai is an AI. He literally sounds like a soundboard on all these videos.

    • @bolatm22
      @bolatm22 2 months ago +4

      Yep, he is. At least the voice😁

    • @bzikarius
      @bzikarius 2 months ago

      Perhaps it is true. His live presentation was far different.
      Maybe it is an AI filter. Compare with videos from 5 years ago.

  • @markmuller7962
    @markmuller7962 2 months ago +4

    When released, this will be like the beginning of agents being an intrinsic part of mainstream LLMs

  • @disconnect8873
    @disconnect8873 2 months ago +15

    Will be rolling out in the coming weeks.

  • @Jacobk-g7r
    @Jacobk-g7r 2 months ago +3

    Bruh, as a kid I hated showing my work; I understood why, but still didn't want to. This AI might get even better, but more resources might also get used to show the work or something.

  • @yahm0n
    @yahm0n 2 months ago +4

    It is very useful to learn what types of things the person you are talking to already knows and what types of things they don't know. Frontier models have been becoming less useful to me as they keep adding more and more information I don't need to their answers.

    • @marcosfraguela
      @marcosfraguela 2 months ago +2

      I have the same feeling. It's like everything you ask, it spits out a Wikipedia article... and we already have Wikipedia for that

    • @yahm0n
      @yahm0n 2 months ago +1

      @@marcosfraguela Yup, and as soon as it thinks to create a list, you know you are about to get an entire page of useless crap.

  • @Jacobk-g7r
    @Jacobk-g7r 2 months ago +2

    4:28 Exactly like people: if you don't show your work, it'll be hard for others to understand. It's like trying to talk when you don't know English but want to share; it gets difficult. That's why explaining to a kid helps us simplify.

  • @brianjanssens8020
    @brianjanssens8020 2 months ago +13

    I'm going to send that thumbnail in the boys groupchat whenever they push another squad in game.

  • @Juan-qv5nc
    @Juan-qv5nc 2 months ago +3

    It seems this could be a way to demonstrate "P not NP", or is it just me hallucinating again? What a time to hallucinate! ☺ Thanks for the video, fellow scholar Károly.

  • @ethangreer1362
    @ethangreer1362 2 months ago +1

    It would be refreshing to hear your critique of symbolic regression for the retrieval of novel concepts from AI (activation targeting and distillation), especially Cranmer's techniques for deriving proofs.

  • @seto007
    @seto007 2 months ago +10

    Hope it isn't too long before we get a video on Flux, cause it's honestly such a huge step up compared to all its competitors that it's still hard for me to believe

    • @smyk1975
      @smyk1975 2 months ago +1

      What is Flux?

    • @LOC-Ness
      @LOC-Ness 2 months ago

      @@smyk1975 An open-source image generator. Very high quality results, and truly open source.

    • @umm_rit_
      @umm_rit_ 2 months ago

      @@smyk1975 A new SOTA-level image model that dropped out of nowhere

  • @samhale5413
    @samhale5413 2 months ago +14

    A grade-school student is usually required to show their work.
    A PhD student is required to defend their thesis.
    Why not force AI to explain as well?

    • @PaulBrunt
      @PaulBrunt 2 months ago +1

      I wouldn't say that those examples all follow common rules. This is like teaching an AI to be that one remarkable professor who could actually teach well: the one who could answer your questions perfectly without going over your head or oversimplifying things to the point of being useless.

    • @bzikarius
      @bzikarius 2 months ago

      Because they usually talk nonsense.
      Yesterday's example:
      «The number 42 has several interesting mathematical properties. First, it is the only number that is both the sum of the cubes of its digits and their product. That is, $4^3 + 2^3 = 64 + 8 = 72$, and $4 \times 2 = 8$.»

  • @NakedSageAstrology
    @NakedSageAstrology 2 months ago +3

    I love that you use AI to clone your voice! It just needs a better microphone for the best recording quality.

  • @JorgetePanete
    @JorgetePanete 2 months ago +2

    My guess is that if the AI asks "do you want a more thorough explanation?" and, if so, creates the more accurate but less legible answer, then you have the best of both.

  • @mannyourfriend
    @mannyourfriend 2 months ago +1

    In a strange way, maybe a U-net style autoencoder could leverage something like this for image reconstruction. Like if there was some way to logically convert the latent layer's outputs to something more semantic, we could then use this to force the encoder to "dumb down" those embeddings and give more interpretable latent embeddings? Why you'd want that, and how you'd accomplish it, I have no idea.

  • @bzikarius
    @bzikarius 2 months ago

    I like the idea of GANs, or model coupling, or integrating a lot of specialized models and classic tools. It is on the surface, but modern LLMs can't use memory (ideas and theses accumulated from knowledge and previous conversation)

  • @galvinvoltag
    @galvinvoltag 2 months ago

    "So, how do we make them smarter?"
    "That's the neat part! You DON'T"
    Truly, what a time to be alive!

  • @memegazer
    @memegazer 2 months ago +1

    How could it work on images?
    Segmentation, maybe?
    Say, for example, a verifier knows what a hand should look like as a segmentation, while the Einstein could know what overall images should look like?

  • @AalapShah12297
    @AalapShah12297 2 months ago +1

    I really don't see any legibility problem with the solution shown at 1:54. To me, the explanations at 2:04 and 2:11 just seem like more verbose versions of the solution, repeating parts of the question and adding filler phrases in between to increase length. From the paper, I noticed two things that seem like big limitations for now:
    1. They have only tested this method on grade school math problems, since that makes the dataset easy to generate. It seems like a stretch to assume that their observations will transfer to something like the sorting algorithm shown at 1:19.
    2. The authors mention "checkability doesn't necessarily capture everything we intuitively want from legibility", and that seems to be the main drawback. They have a definition for checkability which you can see in the paper, and they also tested a small subset of the results on paid humans to compare legibility scores with their own checkability metric, but they had a 45-second time limit there, and it seems quite probable that mistakes in longer solutions will often be harder to spot.

  • @cannot-handle-handles
    @cannot-handle-handles 2 months ago

    The Legibility Tax is along the horizontal Accuracy axis, which makes sense, because for a more legible output you DO have to pay the Legibility Tax. 4:32

  • @CraftyChaos23
    @CraftyChaos23 2 months ago +15

    Isn't this architecture the same as a GAN?

    • @KellyNicholes
      @KellyNicholes 2 months ago +9

      Oh no, totally different. They used the words Einstein and child, not generator and discriminator! /s

    • @sp1r1t2001
      @sp1r1t2001 2 months ago +3

      It's similar, but different. The similarity is that this approach uses two separate models focusing on their roles, but the roles are different, specifically for the secondary model. In a GAN, its role is simply to guess whether something was produced by the model or not.

  • @XiangWeiHuang
    @XiangWeiHuang 2 months ago

    I wonder if this would work for image generation: a dumbed-down verification at each step. I use detailed prompts to generate anime waifu images, but multiple objects can get messy and are usually not that recognizable compared to hand-drawn or real-world images. A bathtub too small for the character in a bathroom, for example. AIs making sure the generation result is dumbed down sounds like a great possibility for handling such cases in image generation.

  • @vslaykovsky
    @vslaykovsky 2 months ago +1

    Research paper is available for free? What a time to be alive!

  • @xel7381
    @xel7381 1 month ago

    This reminds me of a professor I had who was supposedly very smart but not good at teaching. I believe that being able to teach something is just as important as understanding something, so I would weigh both in determining how smart someone really is in a subject.

  • @dehb1ue
    @dehb1ue 2 months ago

    The legibility tax was along the X axis, not the Y, and the chart showed you are paying it by having a legible model that's less accurate.

  • @BryanCleaver
    @BryanCleaver 2 months ago

    Your Two Minute Papers are absolutely fabulous - especially this one

  • @NotGabe001
    @NotGabe001 2 months ago +11

    Me when my grades start slipping:
    "Being smart is overrated!"

    • @brexitgreens
      @brexitgreens 2 months ago +1

      📝 Dr. Alan Thompson: *"On the new irrelevance of intelligence".* An old paper which has aged extremely well. (Citing from memory.)

  • @thetakburger7928
    @thetakburger7928 2 months ago

    Love it. I use AI as a user, and I'm actually using AI to replace the colleagues I don't have (working alone). So I just want advice and good explanations to comfort some choices.
    More pedagogy also helps when manually proofreading what the AI wrote

  • @ristopoho824
    @ristopoho824 2 months ago

    I vastly prefer Mistral to the OpenAI models. This video is helping me understand why, and possibly helping me find a better model in the future too. LLaMA too; it's pleasant. I only need small things troubleshot, and it gets them correct enough. As a programming aid it's great, since I can just implement the suggestions and instantly see whether they work or not. So correctness is not that big of a deal, and one-shot correctness even less. It's good to have, but not that necessary.
    I'm a novice programmer, just learning. And oh boy, those AIs are a great learning tool.
    I'm going to do a small rant about them as learning tools.
    People say it's a bad thing to use AI to cheat on your homework or whatever, but I really think it's more like a personal teacher. You can't just copy-paste the answers; the teacher will know. Unless you can specially train the model to output your style, and that's not easy. If a kid can pull that off, yeah, they have enough skills to get a job in the future even if their history knowledge or whatever is lacking.

  • @SEB1991SEB
    @SEB1991SEB 2 months ago +2

    Einstein AI: "How do you do fellow kids?"

  • @htomerif
    @htomerif 2 months ago

    Google's "AI overview" is uniformly incorrect in every even slightly technical question I ask it.
    I asked it "give me a list of materials by relative permittivity" and one of the answers was "animal organs and human blood: about the same as other liquids" also "bakelite: 1-100" (these are both wrong, btw).
    Wake me up when any of this works for a question more technical than "what color is a green apple?".
    What a time to be disappointed.

  • @wonseoklee80
    @wonseoklee80 2 months ago

    AI legibility, AI hallucination, AI alignment-these terms sounded like science fiction just a few years ago. Now, they feel mundane. I think history will remember the 2020s as the decade of the AI revolution.

  • @BaneWilliams
    @BaneWilliams 2 months ago

    Hey, this is almost approaching my triumvirate model. So funny that it took OpenAI until here to get to where I was at during GPT 3.0

    • @chasingdaydreams2788
      @chasingdaydreams2788 2 months ago

      Very cool. You should work for them. Show them you did what they have only accomplished now 2 years ago and they might offer you a good salary

    • @BaneWilliams
      @BaneWilliams 2 months ago

      @@chasingdaydreams2788 I'm not willing to move my family from my country. Otherwise I would.

  • @ZenBen_the_Elder
    @ZenBen_the_Elder 2 months ago

    DM [0:33-2:15]
    Accuracy vs. Legibility

  • @matthew.m.stevick
    @matthew.m.stevick 2 months ago +1

    I want GPT5 now

  • @Chris.Davies
    @Chris.Davies 2 months ago

    Smart != Intelligent.
    Intelligent actions are those which benefit both the actor and others.
    This is the definition of intelligence according to Dietrich Bonhoeffer and his Theory of Stupidity.

  • @zteaxon7787
    @zteaxon7787 2 months ago +8

    Now for them to lift the "I can't talk about that" political blockages and we're getting somewhere

    • @jeromyperez5532
      @jeromyperez5532 2 months ago

      Yeah the social guardrails are pretty lame. Especially given that what's politically correct now will be outdated and offensive in 10 years.

    • @RedOneM
      @RedOneM 2 months ago +4

      Why would you need a machine to think for you, when dealing with politics? You should think for yourself, as the political results have an effect on you.

    • @Felipe3001miranda
      @Felipe3001miranda 2 months ago +1

      This is such a "I want to use it for immoral/morally gray things but the AI doesn't let me" comment

    • @defaultcube5363
      @defaultcube5363 2 months ago

      @@Felipe3001miranda I asked AI to tell me the relation between pedophilia and homosexuality and I received a warning saying they can't talk about that. It is biased.

    • @fenn_fren
      @fenn_fren 2 months ago

      What, salty that OpenAI won't let you generate anti-EU slop for your Russian smurf account?

  • @GodbornNoven
    @GodbornNoven 2 months ago +1

    Whatever; in short, they just made a verifier model to boost the accuracy of their models on questions they are capable of solving and to minimize overall hallucinations.
    Nothing new. I'd be more interested in seeing how small models work together to achieve much better results than a bigger model

  • @rinuadegbite8571
    @rinuadegbite8571 2 months ago

    Astounding work, Two Minute Papers, on the verifier model

  • @glenneric1
    @glenneric1 2 months ago

    Dang it my papers just blew off my desk!

  • @photoelectron
    @photoelectron 2 months ago

    0:14 Am I tripping? The loop will never end because j starts off larger than 0 and it'll never be less than 0... is this why the human is going "???"

  • @rb8049
    @rb8049 2 months ago

    Will be great to apply to all publications. All publications should be more understandable.

  • @ninojamestan8895
    @ninojamestan8895 1 month ago

    They should use it to prevent hallucinations and overcomplexity

  • @geogeo14000
    @geogeo14000 2 months ago +1

    What alive to be a time !

  • @jeronimocliff2972
    @jeronimocliff2972 2 months ago +1

    Thank you for the quick heads-up !

  • @PostTVNews
    @PostTVNews 2 months ago

    Nice. One thing, the language used in the video does (at least for me) raise one question: is this an OpenAI sponsored video? Cheers

  • @maracachucho8701
    @maracachucho8701 2 months ago

    "If you can't explain it to a 4 year old, you don't understand it yourself" -things Einstein never said

  • @AdvantestInc
    @AdvantestInc 2 months ago

    Great breakdown of AI’s complexity vs. understandability trade-off! Fascinating research from OpenAI.

  • @jgnhrw
    @jgnhrw 1 month ago

    Correction at 4:33 - you ARE paying the legibility tax, which is why the blue model is not as smart as the green. That's the tax you pay for making a smarter model human understandable.

  • @SKGFindings
    @SKGFindings 2 months ago

    Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we’ll augment our intelligence.

  • @princejona
    @princejona 2 months ago

    Been doing this the whole time. I use custom AIs with my own data and test the answers on other AIs and in different sessions, so I can make complex topics like existence clearer

  • @Grocel512
    @Grocel512 2 months ago +1

    A kid that corrects the expert, also known as Asperger's. 💀💪

  • @robertoaguirrematurana6419
    @robertoaguirrematurana6419 2 months ago +1

    That's like complaining about a female boxer for being too strong 😏

  • @cornevanzyl5880
    @cornevanzyl5880 2 months ago

    I love the creative thinking by those authors.

  • @Verrisin
    @Verrisin 2 months ago

    Technically, the kid could approach the log of the Einstein's size, right? At least in terms of the problem difficulty the Einstein model can solve.

  • @adolphgracius9996
    @adolphgracius9996 2 months ago

    I don't think people are rating the current state of AI but rather the possibility of this thing becoming 5 or 10 times better within the decade

  • @MisterrTheEditor
    @MisterrTheEditor 2 months ago +1

    It's kind of like the Magi from Evangelion, except without real brains

  • @PurpleSheeep
    @PurpleSheeep 2 months ago

    Spooky: OpenAI needed a new technique to increase the legibility of smarter models

  • @ionlyfearphobophobia
    @ionlyfearphobophobia 2 months ago +3

    If AI are getting so smart that we now need them to dumb down their answers enough for us to understand them, then that really shows just how much they're outpacing us.

    • @cbnewham5633
      @cbnewham5633 2 months ago +2

      It's about the understandability of answers, not how complex they are. Some people have a knack for giving an answer to a question that is clear and concise, even if the answer is complicated. Some are awful, being convoluted even when giving a simple answer.

  • @oxiigen
    @oxiigen 2 months ago +1

    Hmm, if doctors cannot see through the Einstein's lies, how can we expect kids to see through them?

  • @Windswept7
    @Windswept7 2 months ago

    I’m not surprised.
    Humans evolve intelligence this way too.

  • @ThePallidor
    @ThePallidor 2 months ago

    More like "being a midwit is overrated." Topwits are categorically different; they're not Masters of Their Field as FA Hayek talked about but rather Hayek's Puzzlers/Muddlers, the kind that do fully independent thinking, not riding the tails of existing social or academic notions.

  • @twosuns20
    @twosuns20 2 months ago +5

    ELI5 for AI

    • @flsendzz
      @flsendzz 2 months ago +1

      Objective maximizer

  • @WirlWind494
    @WirlWind494 2 months ago

    It's not even conscious yet and we're already struggling to understand our AI.
    Humans won't be able to claim they're intelligent in a few more years...

  • @DisturbedNeo
    @DisturbedNeo 2 months ago +1

    So the "Kid" knows that "Einstein" is wrong, despite not knowing itself what the right answer is? How the heck does that work?

    • @deadfishyarou
      @deadfishyarou 2 months ago +2

      Same way you can read and criticize a novel, while being far from capable of writing it yourself

    • @Brahvim
      @Brahvim 2 months ago

      Some kind of training to let it spot hallucinations?

    • @τεσττεστ-υ5θ
      @τεσττεστ-υ5θ 2 months ago

      It doesn't. The accuracy is significantly lowered using this method, which means the verifier hallucinates.

  • @darkskyinwinter
    @darkskyinwinter 2 months ago +1

    Reading old mathematical, astronomical, and historical scripts with unintelligible writing.

  • @niko220491
    @niko220491 2 months ago

    This is absolutely the holy grail (besides the so-called "artificial general intelligence"), because the more complex the problems AI solves, the less we humans can reproduce them ourselves. So... unless we fuse ourselves with future AI, we have to at least be able to understand chunks of what it's doing.

    • @niko220491
      @niko220491 2 months ago

      And yeah, about your question of whether it was capable as some kind of diffusion model: I think my first guess would be something like vector-based graphics. Maybe also in the animation and game industries, where mathematics is a key part of understanding a visual representation. Whereas pixel-per-pixel based, it would require an integrated structure with two or more levels of (let's call it) design. For example: first, a ground structure (define a general pixel distribution by means of either an array of orthogonal vectors or introduce chaos with randomized vectors; seeds will determine the outcome and should be handled cautiously); second, rearrange the pixels by vectorizing a few of them at a time and putting them to a specific test (condition), and by conditioning them and introducing more vectors one can achieve controlled diffusion (at small diffusion steps, in this case by introducing a specific integration time); third, the expression it has to fulfill by applying a mathematical filter (Gaussian, etc.)

  • @paulyflynn
    @paulyflynn 2 months ago +2

    42

  • @juliandarley
    @juliandarley 2 months ago +2

    Fascinating paper. It has implications for epistemology and the moral problems society is increasingly going to face because of the rise of AI and robots, but also because of the worsening climate and environmental situation. Two things I would love to be able to improve, which this paper may help with: coding (AI is still not good at many real-world coding problems) and verification of facts, especially medical facts, where lives are on the line.

    • @systembreaker4864
      @systembreaker4864 2 months ago +3

      Video uploaded 12 sec ago
      Your comment 7 hr ago 😵‍💫

    • @IlyaIlya-ef4nz
      @IlyaIlya-ef4nz 2 months ago

      @@systembreaker4864 Just another Matrix bug 🤷‍♂

    • @systembreaker4864
      @systembreaker4864 2 months ago

      @@IlyaIlya-ef4nz It's happening a lot; just yesterday I saw the same thing

  • @isaacjuggling
    @isaacjuggling 2 months ago

    There were so many outstanding mathematicians to choose from, but they had to name it Einstein, after the physicist, because he is more popular in pop culture. Terrible.

  • @احمدسلمان-س2ط
    @احمدسلمان-س2ط 2 months ago +7

    Hello random person on the internet 👋

    • @NakedSageAstrology
      @NakedSageAstrology 2 months ago +1

      I DON'T KNOW YOU! THAT'S MY PURSE!

    • @alexisfy
      @alexisfy 2 months ago +1

      Hello, why are you stealing their purse?

    • @DeepThinker193
      @DeepThinker193 2 months ago +1

      Quit stealing people's purses, bro. Quick, someone catch this guy!

  • @douradesh
    @douradesh 2 months ago

    what a time to be alive [2]

  • @Corteum
    @Corteum 2 months ago

    Can they reason from first principles?

  • @SaumonDuLundi
    @SaumonDuLundi 2 months ago

    What a time to be alive!

  • @Verrisin
    @Verrisin 2 months ago

    as a smart person: title is VERY TRUE. fml

  • @deepdive-bd
    @deepdive-bd 2 months ago

    I couldn't find a single AI that gave me completely good code (C#, Python). I spent more time fixing the AI's problems than it would have taken to write the code myself. Naughty, naughty....

  • @AlphaVisionPro
    @AlphaVisionPro 2 months ago +1

    Resistance is futile!

  • @thesoundsmith
    @thesoundsmith 2 months ago

    What a time to simulate being alive!🤖😛

  • @VividhKothari-rd5ll
    @VividhKothari-rd5ll 2 months ago

    GPT-4o is sometimes worse than legacy GPT-4.

  • @Aldraz
    @Aldraz 2 months ago

    On one side this is cool; on the other, it will mostly only help dumb people and waste smarter people's time. Whenever you increase the number of lines 2 or 3x to better explain something, you are also wasting tokens, i.e. time and money. There are prompts like "explain like I am 5" which basically do the same thing already, so now we will have to add "explain like I am Einstein", I guess..

  • @Ponk_80
    @Ponk_80 2 months ago

    How do we know that the calculator is really smart? We give it a bunch of math equations to solve, and see how well it does. Spoiler, a calculator is not smart, neither is AI, it’s just an even bigger calculator.

  • @Happy-vz3xe
    @Happy-vz3xe 2 months ago +2

    What a time to be alive

  • @JamesPound
    @JamesPound 2 months ago

    Best channel I've ever joined.

  • @beqa2758
    @beqa2758 2 months ago +3

    Everything until AGI will be overrated

  • @Happy-vz3xe
    @Happy-vz3xe 2 months ago +4

    Man, I love you Two Minute Papers, you are a god for curious students like me

    • @TwoMinutePapers
      @TwoMinutePapers 2 months ago +1

      You are too kind, thank you so much! 🙏

    • @Happy-vz3xe
      @Happy-vz3xe 2 months ago +1

      @@TwoMinutePapers Thanks so much for taking the time to reply, you made my day

  • @bzikarius
    @bzikarius 2 months ago +4

    What a time to be a lie!
    When a statistical parrot can pretend to be as smart as a PhD, but can't solve the simplest task that my 9-year-old nephew can

    • @Dave-rd6sp
      @Dave-rd6sp 2 months ago +3

      The irony of this post being a complete lie.

    • @bzikarius
      @bzikarius 2 months ago +1

      Nope. I tested with actual prompts, directly (Claude, GPT, LLaMA) and in a blind competition.
      I asked for a reversed list of toys, with a little mistake (enumerate a 4-point list from 5 to 1). Target result:
      5 lego parts
      4 juggling balls
      3-head dragon
      2 wooden cubes
      Only one LLM (Gigachat) did it, after 3 prompts with examples in a row, and even then not perfectly.
      Most of them failed on the reversed list.
      They are still statistical parrots without any sense of structure, and they can only repeat structures they have mostly seen.
      Narrow, specialized networks are great and amazing. AlphaFold is a revolution. But not LLMs.

    • @Dave-rd6sp
      @Dave-rd6sp 2 months ago

      @@bzikarius You know benchmarks already exist and are far more robust than your ad hoc one, right? And are you aware that an AI got silver (nearly gold) in the IMO?

    • @kilianl2864
      @kilianl2864 2 months ago +2

      @@Dave-rd6sp Look up "Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models"; you will see AI gets stupid when the problem is not in the training data (a.k.a. those gold benchmarks you mention)

    • @bzikarius
      @bzikarius 2 months ago

      @@Dave-rd6sp Yes, let's teach the model patterns and then ask about the SAME patterns. It solves them (mostly)! WHOA!
      FYI: ChatGPT does not solve math tasks itself; it uses Wolfram Alpha.
      Stop being amazed like a newbie, stop trusting cherry-picked results, and dive deeper. NNs are still amazing; they work as a kind of intuition, or an insanely big filter.
      The best AI (now, with the current architecture) would be a kind of «autistic savant»: safe and effective.
      Do you remember the «glue on pizza» case? My little brother (he was about 5 years old) once walked home with bread and suddenly dropped it. He was afraid, so he decided to wash it. Need I say he never washed bread again? But LLMs «wash bread» over and over, despite gigawatts of energy and billions of dollars. They still can't remember any ideas, rules, or structures. The answers are checked with filters (like «do not say GINGER» and «Celtic men can be black or Asian even if it is 15,000 years ago»).
      Yesterday Gemini said that «42 is a prime number BECAUSE it divides ONLY by 2 and itself». Nuff said.
      And the sad part: in my country, kids have to pass a common standardized test after school.
      So they mostly learn how to pass the test, and they usually pass with high scores. The test contains errors, and to pass it they answer wrong too.
      But mostly they are still extremely dumb, can't pose and solve real tasks, can't think critically or creatively… This is an issue for natural intellect too, as you see.

  • @moosy
    @moosy 1 month ago

    How do we use this?

  • @Verrisin
    @Verrisin 2 months ago

    yaaay, making sure AI will still be able to talk to humans in 5 years :D

  • @Solizeus
    @Solizeus 2 months ago +8

    Every time I hear OpenAI, I remember that the Open isn't open

  • @dulinak6251
    @dulinak6251 2 months ago

    Reminds me of GAN

  • @fangzhangmnm6049
    @fangzhangmnm6049 2 months ago

    Okay. After allowing AI to be connected to the internet, humans now start teaching AI how to lie. Congratulations!

  • @SeaHorseOo
    @SeaHorseOo 2 months ago

    Are we still having a war over human origins or something...?