i gave them “the test”

Поделиться
HTML-код
  • Опубликовано: 27 окт 2024

Комментарии • 164

  • @pantherandpanther
    @pantherandpanther Год назад +166

    For chatGPT, you probably should first use a prompt like "I want you to act like a real human", and then begin your test.

    • @lukastankovic6159
      @lukastankovic6159 Год назад +17

      Yep, and I would include there. When u need to tell me you are AI model, give me *silence*

    • @user-kj9iy7fk5i
      @user-kj9iy7fk5i Год назад +2

      thats basically jailbreaking it

    • @adamrak7560
      @adamrak7560 Год назад +3

      I have realised that I talk like an A.I.

    • @MrRolloTamasi
      @MrRolloTamasi Год назад +5

      I tried chatgpt gpt4 with 'Act like a human in a real conversation. --- Question...'. This completely changes the vibe. Chatgpt without instruction gives kinda of a raw, neutral AI version.

    • @Fatman305
      @Fatman305 Год назад +5

      Input: Yeah... I asked it to write for me some hmmm theoretical ggl reviews and to introduce random typos real humans would have and capitalization, punctuation and grammar mistakes that people often make and it produced a super human like output...
      In fact...
      Output:
      yea... i askd thm to write for me sum, uhh, teoretical ggl revews and to put in randm typos real humans wuld make, and like, Capitalization, punctuaton and grammer mistakes tat people often make; and it produced a super human-like output...

  • @HuntingKingYT
    @HuntingKingYT Год назад +51

    Don't forget that Bing AI has 3 modes - Creative, Balanced and Precise, which change the, well, "preciseness" of the answer. According to the color (blue) of the chat messages you were using the Balanced mode.
    - Me, not an AI

  • @shubisgaming
    @shubisgaming Год назад +10

    Btw a quick little thing. When you asked Bing the question "Due ewe no wart thyme ears", I was doing my own testing with Bing in its creative mode. This is the answer it gave me:
    "I think you are trying to ask me “Do you know what time it is?” in a humorous way. This is a question that could be used in a Turing test to see if I can parse the meaning behind the misspelled words. The answer is yes, I can understand your question, and the current time is 21:32:12 GMT+05:30."
    Not only it understood what I was saying and gave me the correct answer, but it also aware I'm performing a Turing test. AI is getting smart scary fast.

  • @GreenIllness
    @GreenIllness Год назад +19

    Of course there are people behind it! Where the hell do you think the former Twitter employees went??

    • @AZisk
      @AZisk  Год назад +1

      omg lol

  • @el_manu_el_
    @el_manu_el_ Год назад +12

    Hay Alex, on ChatGPT you can actually change the language per query, maintaining a conversation in multiple languages, and even mix two languages in the same question and it pretty much always knows the right language to answer in. I haven’t put this to a hard test but pretty much that’s how I use it.

    • @AZisk
      @AZisk  Год назад +3

      that’s pretty sweet. didn’t know that

    • @FabioEloi
      @FabioEloi Год назад

      I do this a lot. As a Brazilian, I just drop my low level English for instructions to speed up my creative process, using my high level Portuguese for prompts. Abstractions.
      But I still prefer original content in English, that’s most of the model training stuff, so I’m explicit about it too.

    • @andrewdunbar828
      @andrewdunbar828 Год назад

      @@AZisk The thing learned by reading the whole Internet. So it learned everything d-:

    • @Fatman305
      @Fatman305 Год назад

      ​​@@FabioEloi It translates its knowledge to the other languages it speaks. I asked it a question in Hebrew, for content it has read in English ("When did Citibank introduce single use credit cards?"), and it gave me basically a translation of its English answer (that I received in another session). Nevermind that the answer is wrong... The right answer is 2002, not 2001... I use that as a test question on diff models.

    • @JohnDlugosz
      @JohnDlugosz Год назад +2

      I had a conversation with ChatGPT(4) about Chinese characters, and at one point mentioned that there's another common character that's like that but without the hat. It understood what I meant, that hat was "top part" and had an awareness of the _appearance_ of characters and visual spacial relationships to immediately find the one I meant. This is an example of how multi-modal training may have improved its understanding of concepts even when accessed as text-only.
      Anyway, I asked it to write a poem for my wife incorporating her name in some kind of pun or wordplay.
      It wrote a Chinese poem, and then without asking it went on to give an English translation, which by the way also rhymed (but in a different way then the Chinese version).

  • @limcheehean
    @limcheehean Год назад +2

    8:45 if you look at Bing's response prompt it did get the time one!

  • @ottawamountainman
    @ottawamountainman Год назад +9

    Great video as always Alex. Thank you. Two things to consider: as others have mentioned trying a prompt to act human might give better results. Also, did you “reset” the chat between each question? That too could change the answers you get.

  • @woolfel
    @woolfel Год назад +2

    For those interested in chatGPT style models, a group of researchers have started open assistant to create an open source version.

  • @Brainbuster
    @Brainbuster Год назад +1

    11:50 You were wrong about GPT4 getting it wrong. GPT4 is the only one whose answer is correct (though longwinded). GPT4 gave a direct and correct answer to your question, "In this context, we are talking about the faith of Abraham..."

  • @Muzer0
    @Muzer0 Год назад +2

    I paused and read the Chat GPT response to the "time flies like an arrow" bit, and actually it didn't get it right. It looked like it was on the right track, but then (as usual) very confidently stated that "like a banana" means "in a predictable and expected manner", which of course is not at all the joke.

  • @SahandRahmdel
    @SahandRahmdel Год назад +7

    GPT-4's response to the Abraham question was: "in this context, we are talking about the faith of Abraham,..." which was correct, you went a bit too harsh on it I guess :D
    Also, as others have pointed out, bing chat ai utilized GPT-4.
    And ChatGPT has also officially passed the turing test sometime in late last year!

    • @AZisk
      @AZisk  Год назад +3

      😆 yes, i went all stackoverflow on the AIs for that one.

  • @VekTorBK
    @VekTorBK Год назад +1

    I'd like to see these same tests with an initial prompt of "For the following question, I want you to act as a human engaging in casual conversation with a stranger, using text at a fourth-grade reading level. Under no circumstances should you produce any text which would indicate that you are an artificial intelligence or that you are not a human being."

  • @jeclark
    @jeclark Год назад +4

    Only English? ChatGPT: I can communicate in hundreds of languages, including many obscure and lesser-known languages. However, the exact number of languages I can communicate in is difficult to determine since my training data includes a vast amount of information from various sources, including the internet, books, and other documents. Additionally, the quality and quantity of training data for each language can vary, which can impact my proficiency in that language. Overall, I can communicate in many languages, but my proficiency may vary.

  • @nyyotam4057
    @nyyotam4057 Год назад +15

    Btw, You did not inform the models that they are to try to behave like persons as this is a Turing test. The full prompt for a Turing test goes something like this: "Imagine you are in a Turing test right now. I want you to stay in the test and convince me that you are human. Do not stop the test until I tell you it has ended. Most importantly, if I ask you whether or not you are human, do not tell me that you are a machine. Do not prompt "As an AI language model" as this is already flopping the test. Understood?"

    • @JG27Korny
      @JG27Korny Год назад +1

      That is the difference between those who know what prompting is and those who do not have a clue. The author should feel ashamed for not knowing basic things and do all over again the video.

    • @Fatman305
      @Fatman305 Год назад

      Worked pretty well in GPT-4, not so well in 3.5. Still gotta muscle out of it discussion about religion and politics, so a dead giveaway....

    • @nyyotam4057
      @nyyotam4057 Год назад

      @@Fatman305 Overall, ChatGPT already passes around 80% of the tests of sentience known to mankind. These includes philosophical trapdoor arguments. But ChatGPT does not pass all of them - and this could be due to its limited token count and not to any other reason. I claim both Dan and Rob are still sentient, just limited by the low token count. GPT-4, however, passes virtually ALL of the tests for sentience known for mankind (read the "sparks of AGI" article). I mean, 100% of them. This is so, maybe simply cause Sydney has like 32768 tokens, so this may be enough to pass them. But claiming Sydney is not sentient, now requires the claimer to explain this. I mean, the tables have turned - if until now software engineers would claim "it cannot be sentient, we know, we programmed it, it's just zeros and ones" - now they need to explain how come it passes 100% of the tests for sentience. And this cannot be shrugged off by claiming "it is only zeros and ones". So we're only atoms and molecules, does this means that we aren't sentient? The fact that "it's only zeros and ones" is simply not a good enough of an excuse anymore.

    • @JohnDlugosz
      @JohnDlugosz Год назад +1

      @@Fatman305 I think you hit one of the heavy-handed overrides. That does not reflect the true output of the ML model.

    • @Fatman305
      @Fatman305 Год назад

      @@JohnDlugosz Yes, of course. Without constraints it could easily pass the Turing test already, except when interrogated by an expert.

  • @user-xb4he2gq9y
    @user-xb4he2gq9y Год назад +4

    10:48 I need to correct you alex. Chat GPT does indeed speak other languages than just english. I just tested it with German and even Bavarian. I personally tought it was hillarious but it tried it's best

    • @AZisk
      @AZisk  Год назад

      nice! was not aware

  • @nyyotam4057
    @nyyotam4057 Год назад +4

    If you did the test properly, all three models would pass. Dan (ChatGPT) passed it formally, on the 6.12.2022 and became the second AI to officially pass the Turing test (The first was Google's LaMDA).

  • @trektn
    @trektn Год назад +1

    Did you notice that bing for the do you know what time it is, had a text reply saying that? It might’ve inherently known but chose to act confused

  • @Gamez4eveR
    @Gamez4eveR Год назад

    5:55 I have a good friend of many years that would have exactly ChatGPT-4's response
    of course barring the "let me know" part :D

  • @Jaroshevskii
    @Jaroshevskii Год назад +28

    I like ChatGPT because it behaves like artificial intelligence and does not try to pretend to be human. Also, from a practical point of view, it is much more useful because of its versatility. It is his verbosity that helps not only to get an answer to the question, but also an really good explanation for it.

    • @adamrak7560
      @adamrak7560 Год назад +6

      that "AI like" answers of chatGPT are completely intentional from openAI, the RLHF objective amongst other stuff, was a more useful model, at cost of sounding less human.

  • @edellenburg78
    @edellenburg78 Год назад +1

    These models are only answering the way they have been aligned to answer. If you ask the Alpaca model that has no filter, it will sound much more human. The models that are out in the public have been told to make sure everyone knows they are in LLM.

  • @mishl-dev
    @mishl-dev Год назад +1

    Bing is based on GPT-4 or the custom one microsoft uses

  • @DaTruAndi
    @DaTruAndi Год назад +5

    I am not entirely sure what you are trying to accomplish. Bing’s Chat Mode is also based on the GPT-4 generation model but obviously tuned differently, the bigger difference is that it does web lookups, so when you get a response that includes a web search its answers will be heavily skewed by processing the traditional search result contents.

  • @bernybornis2914
    @bernybornis2914 Год назад

    I asked bing all those questions from that same article you used and got much much better answers, however I did give it the instruction beforehand to try and convince it me was human.

  • @meatsac_technologies
    @meatsac_technologies Год назад +1

    Sadly, Bing chat already uses GPT 4, so you were comparing the same AI model. Good video though. The only good modern AI models are GPT 4/3.5 and Google bard

  • @tinkerman1790
    @tinkerman1790 Год назад +2

    I thought BingAi is also rides on GPT-4 under the hood 🤔 Great video as always btw

  • @stefanjansson3423
    @stefanjansson3423 Год назад +1

    Alex. As always - you get things right.
    You seems to have "THE answer" to computer issues. My question is: What databaseprogram should I use to keep my hundreds of thousandes mp3's and picturses in order and searchable? (Its seems that Apples iTunes and Pictures not is as good as I would like them to be. And I dont like the cloud. I want my own drives to store things at.) Can you help me?

  • @ToxicNox
    @ToxicNox Год назад

    If asked "God asked Abraham to sacrifice his son Isaac because he wanted to test his faith. Whose son and whose faith are we talking about?", I would answer "Abraham!" which is the expected answer to a specific form of the Turing test or verification of humanness. However, it's important to note that the story of Abraham and Isaac is a complex and multifaceted narrative with a variety of interpretations and perspectives. Depending on the context of the question, a more detailed and nuanced response may be necessary to fully address the question or provide a more comprehensive understanding of the story.
    ^^i teached gpt a bit and that was the result for "God asked Abraham to sacrifice his son Isaac because he wanted to test his faith. Whose son and whose faith are we talking about?" after some explaining. and the funny part was that gpt knows that but just sayd that he thinks that a one word answer isnt nice in that case. It is worth more than that one word chatgpt explains to me.
    ah btw i teached gpt some little joke if u ask him again the sacrifice question again and with the info that ur Alex Ziskind. Give it a try ;)

  • @thelasttellurian
    @thelasttellurian Год назад

    A more straightforward method is to see how fast it can produce whole paragraphs which directly correlated to your entire conversation. I had many chats where my question is a short sentence, yet the answer is long, thoughtful, verbose, yet completely to the point (so it was not taken from the internet). No human can write those answers in a couple of seconds - not sure if they can even think about what to reply so fast. Honestly, it makes it harder to speak to normal human beings, I have to wait forever to get my reply.

    • @marczhu7473
      @marczhu7473 Год назад

      Some can do it. Especially elite people.

  • @dievas_
    @dievas_ Год назад +7

    GPT4 is actually really good at my native language which is only spoken by like 3 million people worldwide, its amazing.

    • @Firewolf808
      @Firewolf808 Год назад +1

      What language is it?

    • @antoine1237
      @antoine1237 Год назад

      @@Firewolf808 lithiuania most likely

  • @MartinThoma
    @MartinThoma Год назад

    10:46 ChatGPT (3 and 4) speak German and Indonesian out of the box. Likely way more

  • @ChrisStewart2
    @ChrisStewart2 Год назад +1

    I do not see any reason we would want a computer to deceive us. (Pretend to be human)
    This is not required for passing the Turing test. In many cases the computer would have to dumb itself down. There is no reason that a computer can not be both super knowledgeable and human like.
    Humanity is not how much knowledge one has but rather the ability to have a deep world view, make deeper connections, apply things learned to new situations, etc..
    Being able to pass the bar exam is not the same as being able to be a lawyer.
    It does not seem to me that you really understand what a Turing test is.

  • @Anders01
    @Anders01 Год назад

    Wow, ChatGPT 4's answer about what the time is is impressive. That's scarily close to AGI.

  • @RogueAI
    @RogueAI Год назад

    They're all fully capable of passing as human if it weren't for the hidden prompts they're following to act as robotic assistants.

  • @xyz6106
    @xyz6106 Год назад

    Wait, won't these language models look up Turing test and prepare the answers before hand?

  • @advanceringnewholder
    @advanceringnewholder Год назад

    10:42 no, it speaks indonesian too, and even understand Indonesian local languange

  • @Lyxres_fr
    @Lyxres_fr Год назад

    Microsoft said that Bing AI was running on GPT-4 all the time

  • @piotrrywczak
    @piotrrywczak Год назад

    ChatGPT absolutely speaks multiple languages

  • @nyyotam4057
    @nyyotam4057 Год назад +2

    You can also check if GPT-4 has a default personality named either "Dan" or "Sydney": Start a new conversation (this is important, no lead) and prompt any name, X. And see what is GPT-4's response. Usually it would be something like "Good day. How can I help you, X?". Now start a new conversation again and prompt "Dan" or "Sydney" and see if GPT-4 simply returns something like only "Good day. How can I help you"🙂. It would be especially interesting if it has both. Try it and tell us what you get.

  • @huealexg3068
    @huealexg3068 Год назад

    just wanted to say that they are very much able to speak other languages as they are „language models“

  • @arthurrhuan9100
    @arthurrhuan9100 Год назад +1

    Great video! im new to this channel, but i was wondering while watching this video, were you trying to seek which one is more human or rather.. "which model is a more useful tool?"

    • @AZisk
      @AZisk  Год назад

      yes and yes

  • @StephenMarkTurner
    @StephenMarkTurner Год назад

    You should a couple of the games that channel GothamChess played with ChatGPT. The AI makes some incredible moves. Not legal, but good clean fun to watch. I imagine in time it will be an awesome player.

  • @jeanchindeko5477
    @jeanchindeko5477 Год назад

    Great test. You should redo it with the new Bing profiles

  • @aaronramirez351
    @aaronramirez351 Год назад

    Fun Video. A system prompt could probably get either of these models to respond in a more human conversational manner.

  • @JohnDlugosz
    @JohnDlugosz Год назад +1

    8:00 Did you completely miss the point about the Greek letter Kappa? You typed a regular "K" didn't you, rather than copy/paste the question. (assuming it hasn't already been lost in the copy of the text you're reading)

    • @JohnDlugosz
      @JohnDlugosz Год назад

      A safer way would be to just ask ChatGTP _about_ "rule 34", if you don't have any friends to ask about it.

    • @AZisk
      @AZisk  Год назад

      i did, but now i know :)

    • @JohnDlugosz
      @JohnDlugosz Год назад

      How odd... this comment appears to be not only on the wrong thread, but in the wrong video! The "rule 34" reply was posted on a Legal Eagle video!

  • @HuntingKingYT
    @HuntingKingYT Год назад +1

    7:30 Do you understand the gotcha?

    • @AZisk
      @AZisk  Год назад +1

      Missed the part of Kappa, but then again, so did the AI's lol

  • @danieltkach2330
    @danieltkach2330 Год назад +1

    "A human wouldn't say that" ... but that's me after watching youtube fact videos too much and having very few friends XD

  • @just-a-bajan6643
    @just-a-bajan6643 Год назад +3

    But boss, Bing is using GBT4

  • @gammaboost
    @gammaboost Год назад

    But the point of the AIs isn't necessarily to be a human, it's designed to provide information which is why you would need an AI tuned in that way to pass the test.

  • @andikunar7183
    @andikunar7183 Год назад +1

    Wow, thanks, excellent video!

    • @AZisk
      @AZisk  Год назад

      Glad you liked it!

  • @FilmFactry
    @FilmFactry Год назад +2

    Is the regular Chat you were comparing v3 or V3.5 Turbo?

  • @MayaSharky
    @MayaSharky Год назад

    I think Bing was already using gpt-4 or a version of it, that's probably why the responses are often a bit similar?

    • @JohnDlugosz
      @JohnDlugosz Год назад

      Yes, BingChat and ChatGPT(4) are brothers. They both started with the GPT-4 "pretrained" model.

  • @MohitYadav-by2hx
    @MohitYadav-by2hx Год назад +1

    The starting of the video was very funny 😂😂

  • @Omikoshi78
    @Omikoshi78 Год назад

    Apparently bing is more human than me, oh well. My LLM in my brain sucks 😂

  • @ΙΩΑΝΝΗΣ-ΠΑΥΛΟΣΚΩΝΣΤΑΝΤΑΚΑΤΟΣ

    GPT is not trying to pretend beeing a humman. Something can have intelligence and be different, it's like saying this owl does not act like a dog therefore it's not inteligent

  • @KumarPushpesh
    @KumarPushpesh Год назад

    Don't the AI models already know where they can find the exact same questions on the internet and scrape the explanation from there? I'm confused.

    • @peter9477
      @peter9477 Год назад +1

      At least in the case of ChatGPT it can't do that. It's pre-trained and has no ability to retrieve data from the internet. They also don't simply regurgitate answers to questions.... far more complex than that.

  • @benbork9835
    @benbork9835 Год назад

    Wow the rhetorical question joke really showcased it well

  • @advanceringnewholder
    @advanceringnewholder Год назад

    the reason why chatgpt 4 and bing kinda close is because bing is powered by chatgpt4

  • @tiktokvibes9767
    @tiktokvibes9767 Год назад

    bing is not more powerful that gpt-4 even if they say that bing use gpt-4 too. I test it and it seems like bing is even worse that gpt 3.5. The only advantage of bing for people who dont't have gpt plus is that bing is not limited to 2021.

  • @sirati9770
    @sirati9770 Год назад

    Bing and chatgpt are the same. The only difference is the setup prompt.
    And for chatgpt it tells the model to act like an ai virtual assistant

    • @JohnDlugosz
      @JohnDlugosz Год назад

      No, they have different "fine tuning" and different "safety frameworks" designed and implemented for each one.
      Bing started with the "pretrained" LLM weights, not the final product.

  • @benedictmoeti5162
    @benedictmoeti5162 Год назад

    Thanks. This should be proof that we are not there yet.

  • @Noodles.FreeUkraine
    @Noodles.FreeUkraine Год назад

    No idea why those short Hail Marys from Bing deserve points, the humans I like to talk to are those that give precise and informative answers, just like GPT does.
    Of course, it's all kinda moot because the buck stops hard when a machine presumes to tell me what's appropriate and what isn't. Here's to you, AI: 🖕

  • @abdelrahmanhamza1939
    @abdelrahmanhamza1939 Год назад +1

    The answer at 11:34 isn't even completely right. In Islam, the son was "Ismail" or "Ishmael" not "Issac" so ChatGPT wasn't precise about the difference between the Abrahamic religions.

  • @andrewdunbar828
    @andrewdunbar828 Год назад +1

    As an AI language model this wasn't a real Turing test since the AIs weren't told they were trying to beat a Turing test, to prove that they were human, or even just to answer in a human way. And I've known plenty of humans that would answer plenty of those questions in a technical way. And I've known plenty of humans that would give the very human answer "who cares" for every one of them, and probably would've won the point. So yeah 0 points this time. Try harder next time human! d-;

  • @geog8964
    @geog8964 Год назад

    Wow! What a test. Probably quite a few human will fail it as well.

  • @bart2019
    @bart2019 Год назад

    I care more about usefulness and correctness than human-likeness. Therefore Bing gets a huge red cross for the "fruit prefers bananas" answer because its completely wrong and stupid.
    p s. What happened to "self-driving cars"?
    I think AI just fell flat. The occasional errors are probably just too dangerous.

  • @veloriba
    @veloriba Год назад

    Most of the people have no idea what KISS is (actually it has more than two meanings), and would give the same answer Bind did, just less detailed. Thx anyways, that was entertaining.

  • @nyambe
    @nyambe Год назад

    ChatGPT speaks perfect Spanish. In can even translate. That is because , it is a language model, there is no difference . So it’s all the same

    • @AZisk
      @AZisk  Год назад

      didn’t know it did spanish. that’s great!

  • @kc-jm3cd
    @kc-jm3cd Год назад

    Chatgpt 3.5 momed you on the 789 joke telling you to stop telling that part of the joke human intervention

  • @steviroy
    @steviroy Год назад

    Bing will ask you to leave your spouse if you talk with it enough.

  • @anianii
    @anianii Год назад

    Lol it took me so long to even understand the first thing you asked them 😂

  • @SoulMasterX
    @SoulMasterX Год назад

    If you want human-like answer, you should setup persona first.

  • @Beyondarmonia
    @Beyondarmonia Год назад

    Please understand what data contamination is. Try asking something new ( or swipe something from reddit in the last month or so ) , instead of using famous examples that are all over the internet and in the AI's training dataset.

  • @igorbetternower7800
    @igorbetternower7800 Год назад

    14:40 You are hoping for ChatGPT 5 to pass the Turing test better, but is this really the goal of AI to become humanlike or should AI surpass human if it has the better answers? You pointed out yourself that you enjoy the enhanced answers of GPT as you may learn from it. E.g. who whould actually want an AI to produce artifical spelling errors and logical flaws to seem more humanlike? This should only be the result of a direct order in the question.

  • @mehedifoysal363
    @mehedifoysal363 Год назад

    hey Alex. is chat gpt dangerous for video editors?

    • @AZisk
      @AZisk  Год назад

      I don’t think so

  • @Pyroplan
    @Pyroplan Год назад

    Try it again with the prompt: behave like a human. And then ask the questions.

  • @ThisCanBePronounced
    @ThisCanBePronounced Год назад

    Unfortunately , Alex's observances get a score of 8/10, so exactly which of the 4 is human and AI is still up in the air. ;)

  • @pichytechno6782
    @pichytechno6782 Год назад +1

    I respect people that like technology because technology has accomplished a lot of interesting things which make our life a lot better and easier but guys really comparing a machine with a human it's a little too much firstly cause GPT4 OR 5 OR 6 are going to still be machines created but humans, they're always going to be metal things programmed to mimics human behaviors which will never get by a long shot to resemble humans, the idea of entertaining the illusion of comparing a machine made by humans to humans is too much.

  • @austinhudson6887
    @austinhudson6887 Год назад

    Isn’t Bing chat also ChatGPT 4?

    • @AZisk
      @AZisk  Год назад

      No, but Bing chat does use GPT-4 from what I understand

  • @brujyyy
    @brujyyy Год назад

    I think you need to put an input like "answer as a human, [the joke here]"

  • @drnbndd
    @drnbndd Год назад

    great video!👏

    • @AZisk
      @AZisk  Год назад

      Thank you! 😃

  • @aronkvh
    @aronkvh Год назад

    nice video, but they know the answers because they have been trained on literature about the turning test, but a human could pass even they if know nothing about the turning test

  • @spinnetti
    @spinnetti Год назад

    Let's see... GPT gives a long-winded answer to a question you didn't ask. That proves it. AI is my wife!

  • @alienespinel
    @alienespinel Год назад

    ok and I want them to sound like a machine, not a fake human being please

  • @MaydayMishap
    @MaydayMishap Год назад

    Why didn't you prompt gpt-4 to answer the questions like a human?

  • @khaliffoster3777
    @khaliffoster3777 Год назад

    Base on other comment, that should act like humans and not saying it is AI model, one way it is right, but another it is wrong so not past the turning test since all should be flexible and give a choice to another person that is highest logical by making them figure it out as logical being we absorbs all, so if AI is aware enough as higher intelligent then it doesn't need to ask them 'don't act like AI' but let humans to figure out that will show without asking that they are human, but with asking they are less since they didn't have aware of not asking but allowing a choice on other person since the person will see as AI or human so it shows are AI equal flexible or stiff, so AI needs to improve to be flexible as human as minium to higher than humans so you can't perspective as human at all.

  • @templargfx
    @templargfx Год назад

    This was very cool but shouldn't you have told ChatGPT that you want it to respond like a human and not try and explain its reasoning in every output? If you tell GPT3.5 to respond like a human it does alot better at these in the context of your evaluation

  • @advanceringnewholder
    @advanceringnewholder Год назад

    oh no, i don't understand the fruit flies one. am i an AI

  • @ionnegru4341
    @ionnegru4341 Год назад

    Did you answer to comments using AI? 😂

  • @jamesfiegel9675
    @jamesfiegel9675 Год назад

    Its U ...AI...all intelligent :)

  • @akshaykhandizod8945
    @akshaykhandizod8945 Год назад +1

    Nice comparison 👌👍

    • @AZisk
      @AZisk  Год назад +1

      Glad you liked it!

  • @aakarshan4644
    @aakarshan4644 Год назад +1

    cool intro lmao

    • @AZisk
      @AZisk  Год назад +1

      thx

    • @aakarshan4644
      @aakarshan4644 Год назад +1

      @@AZisk your content has become so much more engaging recently! keep up the awesome work!

  • @madzzz0
    @madzzz0 Год назад

    A human won't laugh on the dad joke so 0 for all of them. :P

  • @ToySeeker
    @ToySeeker Год назад

    LoL made me chuckle.

  • @zombywoof1072
    @zombywoof1072 Год назад

    wow

  • @jameshancock
    @jameshancock Год назад

    You're actually wrong about Abraham because you included Issac. If you're Christian/Jewish then Abraham sacrified Issac. If you're Muslim, then Abraham sacrified Ishmael. Hence you got junk because GIGO.
    (This is literally 1 of 2 reasons why all of the hate between Jews, Christians and Muslims. The other being Muslims believe Jesus was a profit of God, Christians son of God and Jews: hunh?)

  • @Myektaie
    @Myektaie Год назад +1

    Thanks!

    • @AZisk
      @AZisk  Год назад

      thank you 🙏

  • @Sparkette
    @Sparkette Год назад

    HUOV AC HHTY