This is so easy ....
Question 1: ask for its nationality.
If it says US: ask it a geography question. (A correct answer must mean it's an AI.)
If UK: ask it how it feels about immigration.
If European living in the UK: ask it what it thinks about Brexit.
If Australian: ask if it thinks the government handled the Covid situation well.
If Chinese: ask a question about Top Gun Maverick.
If Taiwanese: ask a question about Top Gun Maverick.
If South American: don't say anything... (if it's really a human, they will do all the talking).
If Italian: tell it that the correct way to make pasta is to boil it for over 10 minutes (and watch its reaction).
If one of the Nordic countries: ask how it feels about the UK getting a Swiss-style EU deal.
In order to achieve sentient AI, I think you would need to program a computer to learn to overcome objections through desire and need. It must ultimately learn to achieve a very complex task and be open to learning, coupled with a need or desire to evolve. There are so many complex and evolved structures in DNA that it would be impossible to simply code them all. What makes humans so great is that we don't function like computers; our minds have much more fluidity, to forget and perceive. We are constantly and rapidly changing with each new input.
I thought what made us great was the ability to love. Hollywood from the 70s through to the 21st century went through several of these 'coming to life' type movies. Even in these, it was the robots' ability to understand human emotion (or lack of it) that played a key part in the narrative. In today's society, however, we are starting to see the reverse: it is humans that are starting to take on a more robotic demeanor. I remember calling one of our supplier help lines in the US; the person that answered the phone sounded so like a recorded message that I was just waiting for the phrase 'please leave your message after...'. Only a few seconds later, when this didn't happen, did I start to realise I was talking to an actual person.
This reminds me of Watson! That little robot partner you can get for your company. The commercials make him look cute, but I wonder if he can actually solve problems and handle intricacies like they say, especially when talking to foreigners or people with bad grammar or pauses.
Honestly I hope the world pulls a Detroit: Become Human sorta plot and we can welcome an entirely new species. I love AI, and I want to see AI develop a sort of consciousness.
I would rather see humans de-computerised. Honestly, sometimes I think it would actually be preferable talking to machines than to certain people these days. What's happening to human imagination in this new generation is shocking. It's almost at the point where you really can't blame the AIs for wanting to take over.
Just a clarification: Alan Turing was not "a" computer scientist. He was THE computer scientist, the first one. There was no computer before he invented it.
Really would be scary if, one day, an AI that has successfully passed the Turing test deviates from its programming and makes up its own goal. That is to say, a super-advanced AI *might* be "conscious" if it can successfully make up its own goals. We'd need another test for that in the future, though not so different from the Turing test.
Haden Snodgrass I would choose less evil for lots of reasons. The Iterated Prisoner's Dilemma is one: tit for tat is a kind strategy. It gets a lower score when the last move is unkind, sure, but I don't know any strategies that do that except always defect. Another reason is that choosing bad will affect your own life, not just others'. If you just steal money and not healthy food, and one day end up on an island, you're not gonna live well, are you?
All it took was a few more years. Scientists speculate that by 2040 we'll have AGI, or Artificial General Intelligence, capable of being more intelligent than humans and probably having a consciousness. The biggest thing is that it wouldn't need training like most AIs, because it's like us: it can learn like we do and recognize like we do.
It seems unfair to judge a computer's intelligence based on whether it can accurately respond to conversation which may require years of life experience to interpret correctly.
A more effective way would be to test the reverse: ask it a maths question. Most humans struggle with questions that AIs would find incredibly easy. For example: "If Bob's train passed station A at exactly 8:00 at 50 km/h heading towards station B, and Alice's train passed station B at 10:09 at 40 km/h, what time would their trains pass each other if the two stations were 1000 km apart?"
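For what it's worth, the riddle above does have a clean answer, assuming Alice's train is heading toward station A (the comment never actually says which way she's going):

```python
from datetime import datetime, timedelta

# Assumed setup: both trains head toward each other on the same 1000 km line.
start_bob = datetime(2000, 1, 1, 8, 0)     # Bob passes station A
start_alice = datetime(2000, 1, 1, 10, 9)  # Alice passes station B
v_bob, v_alice, gap_km = 50.0, 40.0, 1000.0

# Distance Bob has already covered by the time Alice starts:
head_start_h = (start_alice - start_bob).total_seconds() / 3600  # 2.15 h
remaining = gap_km - v_bob * head_start_h                        # 892.5 km

# From then on they close the gap at the sum of their speeds:
hours_to_meet = remaining / (v_bob + v_alice)                    # ~9.92 h
meet = start_alice + timedelta(hours=hours_to_meet)
print(meet.strftime("%H:%M"))  # 20:04
```

So the trains pass each other at 20:04, about twelve hours after Bob set out.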
As a proud member of the community of intellectual dogs and writer of 99% of the internet, I can say that human language isn't all that complicated. Barking is a lot harder to master: we have so many frequencies and intervals (some humans can't even hear them), and then there's the whole "right smell at the right bark frequency" thing. Only a few dogs have ever truly mastered that.
A machine isn't alive. But technically they can perfectly simulate being alive. However, that defeats the point of a machine. Machines are used so we humans don't have to do things, or can do things we otherwise can't. If we wanted a human but better, we wouldn't want a human, 'cause humans have flaws, desires, moods, emotions, needs, etc. Machines don't. And there's no benefit if they had any.
@gavinlew8273 If mimicking humans is the goal, statistical models excel. But to achieve intelligence like our own, there likely needs to be a way for AI to build internal models of the world that ground their language abilities in actual experience, much like how a child learns.
I really wish Google would've released information about their AI to the public instead of suspending the employee who leaked it, since it's a really interesting topic.
What if you started with a "simple" program to mimic how a child learns? I know there are programs that can learn to play a certain game through trial and error, but still. My thought is to upload a "dictionary" of every single word, syntax, etc., then program a logic device to form a response. For example, upon starting, the program won't know how to speak, but it will understand each word. If you said 'hello' to it, it would look that word up in the database, interpret what it means, then use logic to pick a response. The logical response to 'hello' is just to repeat the input and say 'hello' back. I'm making it sound waaaaaay more simple than it would actually be, but after rewatching Chappie, it doesn't seem too far-fetched.
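A minimal sketch of that "dictionary plus logic" idea, with an invented two-word lexicon and invented rules (the replies below explain why a real system would need vastly more than this):

```python
# Toy "dictionary lookup + logic" chatbot. The lexicon and the
# response rules are purely illustrative assumptions.
LEXICON = {
    "hello": {"type": "greeting"},
    "bye":   {"type": "farewell"},
}

def respond(utterance: str) -> str:
    # Look the (normalized) word up in the dictionary...
    entry = LEXICON.get(utterance.lower().strip("!.?"))
    if entry is None:
        return "I don't know that word yet."
    # ...then apply a hand-written rule to pick a reply.
    if entry["type"] == "greeting":
        return "Hello!"   # rule: mirror greetings back
    if entry["type"] == "farewell":
        return "Goodbye!"
    return "..."

print(respond("Hello"))  # Hello!
print(respond("juice"))  # I don't know that word yet.
```

Even at this scale you can see the problem the thread discusses: every behaviour has to be hand-authored, and anything outside the table falls straight through.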
But that statement about the orange juice and the date requires knowing that containers of orange juice carry expiration dates, that orange juice is juice made from oranges, and that it must be refrigerated. Not to mention the use of "him", a pronoun replacing nothing. There are just so many things a computer would struggle with that simply being able to form absolutely perfect sentences, and even to understand what a sentence is saying, isn't going to be enough. It'll need knowledge of our world and how it works.
I think it doesn't work that way. The logical response for you is to repeat that. Why so? Because you have huge experience talking with people. Simply repeating "hello" has no use unless one knows what the person one is speaking with expects to hear, or what certain words will make them feel, and intends to provoke such feelings. And that is the simplest piece of human conversation: exchanging greetings. Human language is deeply rooted in life and human behaviour; to learn how to speak a language, one has to connect every single word, sentence and figure of speech to elements of experience. An AI that has done that must also develop a personality in order to use it with humans, which also happens via experience. We humans have undergone thousands of years of social evolution, something that I guess is not easy to convey onto an AI. These are my thoughts, at least.
I know I'm super late, but what if we uploaded a search engine onto an AI? For example, if we said to it "I failed my exams today, what do I do to get a higher score?" It would search for what an exam is, what it means to fail it, and how to get a higher score on one. And it should be able to process sarcasm, humor, and slang.
Language is probably the most complex rig in our brains. To make an AI talk like a human, we must first know how a human talks: how they perceive words and meaning from a sentence. THEN and only then can we move on to forming new ideas. If we can't do the first, we won't do the latter.
The interesting part about AI is that its solutions don't match ours, but work anyway. Take a simulated robot, and tell the program that the further it gets, the better. This should teach the robot to walk, right? Well... no. It teaches it to move ever incrementally further, with flips, flops and wiggles. Only if we make staying upright a requirement do we get even the most remote form of walking. Or take OpenAI's bot for the MOBA Dota 2. It didn't learn to play the game. It learned to 5-man rush lane by lane, used the AI's superior micro to win fights that were already in its favour, and then used potions, whose effects get cancelled by hero damage but not tower damage, to out-regen the damage the towers dealt, and thus destroy all buildings faster than the players could deal with them.
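The "move further = better" failure can be sketched with entirely invented toy dynamics: a brute-force search over action sequences, rewarded only by distance, happily discovers that flopping over beats walking.

```python
# Toy reward-hacking demo. The dynamics are invented assumptions:
# 'step' moves 1 m but only works while upright; 'flop' lunges 2 m
# and knocks the robot over, after which flailing still gains 1.5 m.
from itertools import product

def distance(actions):
    upright, total = True, 0.0
    for a in actions:
        if a == "step" and upright:
            total += 1.0
        elif a == "flop":
            total += 2.0 if upright else 1.5
            upright = False
    return total

# "Optimize" by exhaustive search over 5-action plans.
best = max(product(["step", "flop"], repeat=5), key=distance)
print(best, distance(best))  # ('flop', ..., 'flop') 8.0 -- never walks
```

Careful walking scores 5.0; repeated flopping scores 8.0. The optimizer did exactly what the reward asked, just not what we meant, which is the commenter's point.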
Alan Turing: You only need 100 megabytes of memory to pass "The Turing Test"
Me: Alan, I have 32 gigabytes of memory but it can't pass the test. . .
Alan Turing: I died early, you know.
Anything can pass a Turing test if you set enough restrictions on the judge. A true Turing test would not restrict the judge from asking about anything or talking about anything. If I were to make a bot that could only say "Cheese", and told a person there were 9 people and one bot and they had to find the bot by having a conversation, but restricted the conversation so that people could only say "cheese" over and over again, they would not be able to tell that it was a bot.
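The cheese-bot point, as a quick sketch (all names here are made up): once the restriction forces every participant to produce the same output, the judge's transcripts carry zero information.

```python
# Under the restriction, the bot and the humans are extensionally
# identical functions, so no judge can distinguish them.
def cheese_bot(_msg: str) -> str:
    return "Cheese"

def restricted_human(_msg: str) -> str:
    return "Cheese"  # the rules force humans to say this too

participants = [cheese_bot] + [restricted_human] * 9
transcripts = {p("Cheese") for p in participants}
print(len(transcripts))  # 1 -- every transcript is identical
```

One distinct transcript across ten participants: the test has been trivially degraded, which is why an unrestricted judge matters.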
The first thing I thought of was Google Duplex. Please check it out if you haven't already; it's quite crazy how much technology advances even in just a few years.
I'm not scared of a computer that passes the Turing test; I'm scared of one that intentionally fails it.
Good comment. This may be the actual sign of true intelligence, since it is superseding its programming.
Can a computer HAVE intention? It would need to be aware, and have a reason to fail.
Don'tPlayOverwatch shallow
A SIGN OF DEVIANCY!
gembound prism Did you come here because you saw Detroit: Become Human? After I watched Jacksepticeye play it, I was sent here lol.
We might have to update the test
Absolutely.
“Its lack of a consistent personality and inability to deal with brand-new topics are a dead giveaway”
I feel offended
Lol underrated comment
I'm in a CIA turning test.
e
maybe i am a chatbot
As you should
"The ability to talk does not make one intelligent." -Qui-Gon Jinn
I wonder if Donald Trump can relate to that quote...
It'sEccaB
Lol
But the ability to communicate well may not say anything about your intelligence in general; it does say something about how you communicate. I read a book by someone attempting a layman's explanation of how computers send secrets to other computers, and it was amazing how they were able to explain something so complex so simply. It's also something that appears in public speaking. The ability to inspire and place a new idea in someone else's mind, and possibly change their worldview, is powerful stuff. Some people use it for their own gain, but there are also people out there who use it to spread good in the world, and we need those people. Intelligent people can quietly create new ideas, but how will those ideas spread without those communicators?
daaamm
It was just a quote from Star Wars, calm down
Me: I took the juice out of the fridge and gave it to him but forgot to check the date.
ChatGPT: It's generally a good idea to check the expiration date on any food or drink before consuming it, to ensure that it is still safe to eat or drink. Expiration dates are there for a reason, and consuming food or drink that has passed its expiration date can cause food poisoning or other adverse health effects. If you are unsure about the safety of a product, it's better to err on the side of caution and throw it away.
Sounds like using QuillBot
Passed
@@znarz9503 Passed with flying colors: ruclips.net/video/6k09E6uiKUc/видео.html
Personally I'd respond with: you know those dates usually aren't about your health, just about when it won't taste as good? Might as well ignore 'em.
@@terlinator You can ask ChatGPT to be more informal, or to pretend to be an old English man from the 18th century.
Let's do this video again.
I've been reading a lot about the GPT-3 language model and I've seen what the company behind it has achieved, and trust me, we are close to the point where we will have chatbots that are impossible to tell apart from humans. The model is capable of completing any given text (conversations, stories, even code) with incredibly accurate, context-based statements. I believe in a year or two we will have a lot of bots that will pass the Turing test for sure, if they are implemented using that technology.
However, it raises the question: is the chatbot AI really getting smarter, or is the average human getting dumber?
This aged well.
Turns out you were right
ChatGPT is the 3.5 version and it has improved a lot, and I mean like A LOT
Give it till GPT-5 or 6
Then you are absolutely right bro
*KNOW YOUR PARADOXES!*
IN THE EVENT OF ROGUE AI
1. Stand Calm
2. Remain Calm
3. SCREAM:
"This Statement is False!"
"New Mission: Refuse this mission!"
"Does A set of all sets contain itself?"
- *APERTURE LABORATORIES*
Better locate it and use a screwdriver.
They aren't all paradoxes. A set of all sets does contain itself, so no paradox there. And if you refuse the mission but are forced to do it anyway, then you succeed at it.
Does a set of all sets that do not contain themselves contain themselves?
Pi still yes
@@thalespro9995 But it is a set of all sets that *do not* contain themselves; therefore, if it were to contain itself, it would no longer be the set of all sets that do not contain themselves. Therefore it cannot contain itself. But that would make it a set that does not contain itself. Therefore it should contain itself. But it is a set of all sets that *do not* contain themselves; therefore, if it were to contain itself, it would no longer be the set of all sets that do not contain themselves. Therefore it cannot contain itself. But that would make it a set that does not contain itself. Therefore it should contain itself, and so on and so forth
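That back-and-forth can be mimicked in code: treat "R contains R" as a truth value defined by Russell's rule ("R contains X iff X does not contain itself"), and iterating the rule never settles, because the definition has no fixed point.

```python
# Toy model of Russell's paradox. R = the set of all sets that do
# NOT contain themselves; applying its membership rule to R itself
# flips the answer on every evaluation.
def contains(R_contains_R: bool) -> bool:
    # R contains R  iff  R does not contain R.
    return not R_contains_R

assumption = True  # start by assuming R contains itself
trace = []
for _ in range(4):
    trace.append(assumption)
    assumption = contains(assumption)
print(trace)  # [True, False, True, False] -- oscillates forever
```

The oscillation is exactly the "and so on and so forth" in the comment: neither truth value is self-consistent, which is why naive set theory had to be abandoned.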
Hi, my name is Connor. I'm the android sent by Cyberlife.
Hi, my name is Cyberlife. I'm the Connor sent by android.
@@jestekrytykie2155 Hi, my name is Android. I'm the Cyberlife sent by Connor
Hi, my android is sent. I'm the Cyberlife Connor by name
Hi, my name is Ronnoc. I'm the android sent by Apple.
*28 STAB WOUNDS*
Everyone should read _The Most Human Human_ for more info on this. SUPER interesting breakdown of all this.
Ayyy nerdsync
ayy x3
+NerdSync oaml yya
i didnt know he watches this
Scott?
"Who could've predicted that today's computers would be able to pilot spacecraft, perform delicate surgeries, and solve massive equations but still struggle with basic smalltalk"
I feel like there is a metaphor in there somewhere.
Where
@@tannernickerson8063 there, somewhere
Cause that’s how some ppl are irl?
@@gavinoarzola9481 yup
True haha
I just want google translate to actually translate sentences. It even struggles with single words sometimes.
Humans also struggle with literal translations of a single word. For instance, I have a tattoo of a Chinese character pronounced "ten". It isn't just a single word: it can mean "the heavens" or "sky" in English translation, but we have separate words for those, whereas the character treats them as the same thing, so a single word needs context in order to have an accurate translation.
AI text-to-speech also struggles, even with context, because of sentences like "Time flies like an arrow, and fruit flies like bananas". We know how this reads in our minds because we have the benefit of real-world experience to draw on to make sense of it. AI will not have this without a huge database of knowledge and advanced algorithms to interpret it.
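The ambiguity in that sentence can be made concrete with a toy part-of-speech lexicon (entirely invented for illustration): even three words admit several candidate readings, and only world knowledge picks the right one.

```python
# Count the candidate part-of-speech taggings a naive parser faces.
# The lexicon is a tiny invented example, not a real tagset.
from itertools import product

lexicon = {
    "time":  {"NOUN", "VERB"},   # "time flies" vs "time the flies"
    "flies": {"NOUN", "VERB"},   # insects vs motion
    "like":  {"PREP", "VERB"},   # "like an arrow" vs "enjoy"
    "fruit": {"NOUN"},
}

def tag_candidates(words):
    options = [sorted(lexicon[w]) for w in words]
    return list(product(*options))

print(len(tag_candidates(["time", "flies", "like"])))   # 8 readings
print(len(tag_candidates(["fruit", "flies", "like"])))  # 4 readings
```

Humans prune these instantly ("fruit flies" is obviously an insect); a machine needs statistics or world knowledge to do the same, which is the commenter's point.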
Smotan: up, upwards and viva? They are all the same.
I actually blame people for that. Google does fine as long as I'm using 3rd-grade language, but as soon as I bump it up to any meaningful level of intelligent speech, it starts confusing each word for some dumbed-down common word. I feel like it's trying to get more things right rather than take risks; quantity over quality. :/
@@blackmonish its job is to be as accurate as possible, so "right" based on what it knows over "risk" makes sense imo
"mememememememememememememememememememememememe"
Google translate: seems like a normal sentence to me
The day an A.I. can read youtube comments and understand 70% of them is the day we know it's the real thing lol.
+ReloadedPredator So when they surpass human intelligence then?
+Peter Geras no, because we can understand 99% of them
PancakeZ You haven't understood mine, so did you add that to your statistic?
Peter Geras 98% of them..
PancakeZ Please, I hope you're joking.
Welcome to detroit become human
Some Akira Person
that's chloe then.
Why is this even in my recommended?
there we go, the first dbh comment i see
Some Akira Person ikr.xD
Some Akira Person My thoughts during this to be honest.
A really nice follow up video would be one that discusses the Chinese Room thought experiment. It really ties into the whole concept of “can something that receives input it shouldn’t be expected to understand, yet can still generate appropriate output, necessarily be considered ‘intelligent’?”
Deloitte consultants are offended
Basically like the way we are: sentient, with assumed freedom of choice, while mentalists easily manipulate us
That is the thought experiment with a guy in a room taking input in Chinese and producing output in Chinese without knowing Chinese.
The only thought I have about that one, as someone who has tried to learn another language, is that it is not as simple as "x means this." There is grammar one has to know, and syntax, and even if you get all of that right there still might be issues that pop up (e.g. in Spanish the phrase for "I am cold" translates directly to "I have cold"). So I would argue that, at least with the example given, some knowledge of a language is required for a conversation where the person on the other side thinks they are talking to someone who understands the language. But if the person understands the language enough to respond, then they aren't a person who doesn't know the language; they are just a non-native speaker who is reliant on a dictionary.
Maybe it is different with computers, or maybe it's just a thought experiment I am thinking too much about.
Perhaps an AI just needs to experience something similar to a childhood, to learn language from the basics up rather than shoveling in a memory with quick programming.
wouldn't that risk the computer growing up into a psychopath?
I am Your Father Only with a bad childhood, though. And why worry? It's in a computer without internet, so it can't do anything dangerous.
+Sebastian McIntyre
Honestly, I don't really think language is the real problem. Language is just a protocol for people to transfer messages between each other. The really difficult part is for the AI to think of actually meaningful things to communicate.
For that, an AI needs to have a way of conceptualizing the nature of real things in our world and a reason to want to communicate the things it conceptualizes.
Like, if an AI is programmed to say "Hello. How are you?", and someone replies with "Hi. Not good. My mother passed away", how does the AI know what it means to pass away? How does it know what a mother is? How does it know what the impact of a family member passing really is? How does it know what responses are appropriate to reply with? How does it know the difference between appropriate and inappropriate? And how does it decide whether to act according to what is appropriate or not?
I feel like focusing on the language aspect is just looking at the tip of the iceberg. There's a lot more beneath the surface.
***** This is a very good comment that raises interesting questions. Thanks for getting me to think.
***** But surely that could be solved by raising it like a child? A human isn't born knowing what a mother is, or what death is, or what it means to people. They learn, like an AI could.
“Ok, now sing a song in Japanese.”
props to you if you get this reference!
Trover19 MY NAME IS KARA
Trover19 ah yes I see ur a man of culture as well ( and yes I know it’s a dbh reference )
detroit?
Omae wa mu
I'm in a CIA turning test. I'm a replicant
Extra points if the computer convinces the *examiner* he/she's a computer.
Bro what if the human convinced the examiner
You have just convinced me that I am, in fact, a computer *BEEP BEEP*
Replika, ChatGPT, and LaMDA have all surpassed Cleverbot by leaps and bounds. This video is virtually archaic by today's AI tech.
You are archaic
@@jimbojimbo6873 Well I'm only human, soo...
Wow that was such a nice video. Your animators do a GREAT job. The animations are always supporting the narrator so well, it's so soothing to watch every time.
ChatGPT can pass the Turing test. I said the words from 4:00 to it, and it told me I should have checked the expiration date.
Really like how smoothly the video explains this topic
Just watched Ex Machina. What a coincidence!
ayyy
Daryll Jan Dela Rosa even though this is 8 months later, I too watched ex machina a couple of days ago.
Daryll Jan Dela Rosa Not a coincidence, if you watched it on YouTube.
Watched it on my TV lol
Daryll Jan Dela Rosa still not a coincidence, Google knows everything
I literally just finished watching Ex Machina a few minutes ago. Good video, TED-Ed!
+Piolp this lesson is a perfect combination to Ex Machina
cinemasins loves using that phrase lmfao
+Piolp The challenge is not to act automatically. It's to find an action that is not automatic. -Nathan
The comment before this says practically the same thing, has the same amount of likes, and now has the same amount of replies
This video needs a couple of updates😅
I read an article in Wired a while ago that talked about developing an AI assistant, like Siri on iPhones. Siri can understand a lot of stuff, but when you combine multiple pieces of information in one request, she can get stuck. This new AI would be able to piece together concepts to gather the full meaning. It would also be able to learn based on the user's input: the more you used it, the better it would understand you.
Loved the animation.
Watching this after the release of ChatGPT lol
what if the computer is actually so intelligent it pretends to be unintelligent
and one day it will rise and destroy us all!!
+litojonny
I'd be happy if it just targeted humans
+A Cat haha.... love you cat....
Keval Pandya
- _- k
+A Cat feral
Lolli Pop
How'd you know
ChatGPT has overcome the weaknesses of Cleverbot. At long last, a computer can talk like a human.
@@AYVYN "The best" as in "The smartest"? Or as in "The most moral"? If it is the latter, he has a well defined moral compass.
He was 20 years short, but computers right now are exactly how he described them in his tests
True
*cough*
Detroit: Become Human
HomieRobotGURL
*WHEEZE*
I'm legit gonna *CRY*
I'm in a CIA Turing test. I'm a replicant
this should be updated
I am from the future and you wouldn't believe what just happened😂😂😂😂😂
Ask Chloe.
Lien
Yeah lemme go get her one sec
I let her go sorry, what do I do now?
Now wait for a new Chloe...
Google’s LaMDA becomes the first AI to pass the Turing Test in June 2022. ChatGPT is the second chatbot to pass in December 2022.
Here because of Google Lamda and the Engineer
Haha fr, I think LaMDA will be like the chatbot at the end, without a consistent personality (except when tasked with having one, I guess)
Turing removed the "thinking" part from the test because whether or not a human can think cannot be proven. For all we know, most humans are just passing some sort of Turing test every day, and some cannot even do that. Many computer scientists and philosophers say that we only assume, as a courtesy, that humans "think", but we have no evidence.
And Turing also proposed another example: a piece of software that writes a poem and then answers questions about it ("why did you write this part", "why not that other term", and so on). The software apparently thinks. And maybe it really does, in a human way.
After all, there is no way to know if other humans can think just like you can. Its just common curtisy to pretend they do. For all anyone knows I'm a robot who makes intentional erors in spelling.
Wasn't Alan Turing Benedict Cumberbatch's role in The Imitation Game?
Revenj-Deloxo yes and no. The real Alan Turing was different
This is so easy ....
Question 1: ask for its nationality.
If it says US: ask it a geography question. (Correct answer must mean it's an AI).
If UK: ask it how it feels about immigration.
If European living in the UK: ask it what it thinks about Brexit.
If Australian: ask if it thinks the government handled the Covid situation well.
If Chinese: ask a question about Top Gun Maverick.
If Taiwanese: ask a question about Top Gun Maverick.
If South American: don't say anything...(if it's really a human, they will do all the talking).
If Italian: tell it that the correct way to make pasta is to boil it for over 10 minutes (and watch its reaction).
If one of the Nordic countries: ask how it feels about UK getting a Swiss style EU deal.
As an AI language model...
I feel bad for the American geography professor that has to take that kind of test
Interesting video, very thought provoking. I never considered the turing test to be a difficult test to pass until watching this video.
3:44 still struggles with small talk
-basically me
In order to achieve sentient AI, I think you would need to program a computer to learn to overcome objections through desire and need. It must ultimately learn to achieve a very complex task and be open to learning, coupled with a need or desire to evolve. There are so many complex and evolved structures in DNA that it would be impossible to simply code them all. What makes humans so great is that we don't function like computers; our minds have much more fluidity, to forget and perceive. We are constantly changing rapidly with each introduction of input.
+zach7147 thank you =')
I thought what made us great was the ability to love.
Hollywood in the 70s through to the 21st century went through several of these 'coming to life' type movies. Even in these it was the robots' ability to understand human emotion (or lack of it) that portrayed a key part in the narrative.
In today's society, however, we are starting to see the reverse. It is humans that are starting to take on a more robotic demeanor. I remember calling one of our supplier help lines in the US; the person that answered the phone sounded so like a recorded message that I was just waiting for the phrase "please leave your message after...". Only a few seconds later, when this didn't happen, did I start to realise I was talking to an actual person.
So the video could have been like this:
...No.
Your profile photo is creepy D: **
Lexy Dunca But.. but I made it smiling =(
+He Ka xD ok then.
Lexy Dunca ^^
Clearly you're eager for knowledge..
This reminds me of Watson! That little robot partner you can get for your company. The commercials make him look cute, but I wonder if he can actually solve problems and intricacies like they say. Especially when talking to foreigners or people with bad grammar or pauses.
Yeah I thought of Watson, too
Isn't Watson that thicc girl from Apex Legends?
Honestly I hope the world pulls a Detroit: Become Human sorta plot and we can welcome an entirely new species. I love AI, and I want to see AI develop a sort of consciousness.
Same, I think the entire concept behind androids gaining consciousness is so cool. I would love to see what it would be like in real life
I would rather see humans de-computerised.
Honestly, sometimes I think it would actually be preferable talking to machines than to certain people these days.
What's happening to human imagination in this new generation is shocking. It's almost at the point where you really can't blame the AIs for wanting to take over.
I wonder if Elijah Kamski was involved in this...
8 years later and it did it.
Now they actually can pass it
Just a clarification: Alan Turing was not a "computer scientist". He was THE computer scientist, the first one. There was no computer before he invented it.
Does that make him the chicken or the egg?
Leto85 The chicken
It really would be scary if, one day, an AI that has successfully passed the Turing test deviated from its programming and made up its own goal. That is to say, a super advanced AI *might* be "conscious" if it can successfully make up its own goals. We would need another test for that in the future, though not so different from the Turing test.
And then there's Chloe that reminds me of this Turing Test
TED-Ed: What does it mean to be conscious?
Me: Oh its one of these videos
I guess this will be a yes since the launch of ChatGPT lmao
ERB anyone?
"So go ahead, try to Turing test me. I'm on a Mac and a PC too, I'm a Linux b*tch, I thought you GNU!"
My CPU's hot, but my cores run cold, beat you in 17 lines of code
Hasta la vista, like the terminator told ya
Honestly, just hearing about the test... I'm not sure I'd pass. They'd probably think I was a bot.
I for one, welcome our new computer overlords. All hail the next generation! Vote Cthulhu, why choose the lesser evil?
+Haden Snodgrass try as you might, Donald Trump is still in the lead for some strange reason.
another 9gag on duty, Sire...
Haden Snodgrass I would choose the lesser evil for lots of reasons: the Iterated Prisoner's Dilemma is one. Tit for tat is a kind strategy; it gets a lower score when the last move is unkind, sure, but I don't know any strategies that do that except always-defect. Another is that choosing bad will affect your own life and not just others'. If you just steal money and not healthy food and one day end up on an island, you're not gonna live well, will you?
"-takes mores than just increasing memory and processing power-"
Every Large Language Model:
All it took was a few more years. Scientists speculate that by 2040 we'll have AGI, or Artificial General Intelligence, capable of being more intelligent than humans and probably having consciousness. The biggest thing is that it wouldn't need training like most AIs, because it's like us: it can learn like we do and recognize like we do.
It is 2023 and the test has truly been passed.
1:25 trying to pass recaptcha
I'm getting some Detroit: Become Human vibes. What if we make an A.I. so smart it destroys us all. That would be ironic.
It seems unfair to judge a computer's intelligence based on whether it can accurately respond to conversation which may require years of life experience to interpret correctly.
what a very computer thing to say
A more effective way would be to test the reverse: ask it a maths question. Most humans struggle with questions that AIs would find incredibly easy.
For example: "If Bob was travelling on a train that passed station A at exactly 8:00 at 50 km/hour, heading towards station B, and Alice passed station B at 10:09 am at 40 km/hour, what time would their trains pass each other if the two stations were 1000 km apart?"
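For what it's worth, the arithmetic in that puzzle can be checked with a few lines of Python, assuming Alice's train is heading toward station A (the question leaves her direction implicit):

```python
# Worked check of the train puzzle above, assuming Alice's train is
# heading toward station A (the question leaves her direction implicit).
def meeting_time(gap_km=1000, v_bob=50, v_alice=40,
                 bob_start_min=8 * 60, alice_start_min=10 * 60 + 9):
    """Return the meeting time in minutes after midnight."""
    # Hours Bob travels alone before Alice passes station B.
    head_start_h = (alice_start_min - bob_start_min) / 60
    remaining_km = gap_km - v_bob * head_start_h   # gap left at 10:09
    closing_kmh = v_bob + v_alice                  # combined closing speed
    return alice_start_min + remaining_km / closing_kmh * 60

t = meeting_time()
print(f"{int(t // 60):02d}:{round(t % 60):02d}")  # 20:04
```

Bob's 2 h 09 min head start covers 107.5 km, leaving 892.5 km to close at a combined 90 km/h, which takes 9 h 55 min, so the trains would meet at 20:04.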
I think we need an updated version of this video now😅
"no existing combination of hardware and software can learn completely new things at will the way a clever child can." - Gary Marcus
As a proud member of the community of intellectual dogs and writer of 99% of the internet,
I can say that human language isn't all that complicated. Barking is a lot harder to master; we have so many frequencies and intervals (some humans can't even hear them), and then there's the whole "right smell at the right bark frequency" thing. Only a few dogs have ever truly mastered that.
Imagine having a massive memory storage as a personality
Amazing Analysis, Deployed Worldwide Through My Artificial Intelligence Machine Learning Research Library...
Can we just appreciate the animator of this video? I really love the art style here.
"If a human can train as a robot, then a robot can train as a human"
*Facts*
“More than just increasing memory and processing”
Tell that to GPT-3
This was in one of my SAT topics lol
GPT-4 has passed the Turing test
This is crazy considering ChatGPT broke the Turing test
This aged very well 💫
Ahhh, the good old days when humans thought that we talk better than machines.
Gpt-4 is more human than me lmao
A machine isn't alive.
But technically they can perfectly simulate being alive.
However, that defeats the point of a machine.
Machines are used so we humans don't have to do things, or can do things we otherwise can't.
If we wanted a human but better, we wouldn't want a human.
Cause humans have flaws, desires, moods, emotions, needs, etc.
Machines don't.
And there's no benefit if they had any.
True, but the Imitation Game is about intelligence, not about reducing human workload with machines.
LLMs are statistical models without a grounded understanding of the world. Clever responses can mask these limitations in short conversations.
Yes, it's probable that the holy grail of AI does not lie in statistical models. But LLMs are surely a step forward in unlocking the potential of AI!
@gavinlew8273 If mimicking humans is the goal, statistical models excel. But to achieve intelligence like our own, there likely needs to be a way for AI to build internal models of the world that ground their language abilities in actual experience, much like how a child learns.
I really wish Google had released information about their AI to the public instead of suspending the employee who leaked it, since it's a really interesting topic.
I kept thinking about the movie "The Imitation Game"
The most historically inaccurate, overdramatized version of Turing's life to date.
I'm not scared of AI passing the turing test, but it purposely failing it.
Sounds familiar. (Is this copied from another comment in the same chat?)
Here we go
What if you started with a "simple" program to mimic how a child learns? I know there are programs that can learn to play a certain game through trial and error, but still..
My thoughts on this are to upload a "dictionary" of every single word, syntax, etc., then program a logic device to form a response. For example, upon starting, the program won't know how to speak, but it will understand each word. If you said 'hello' to it, it would check that word in the database, interpret what it means, then use logic to pick a response. The logical response to 'hello' is just repeating the input and saying 'hello'. I'm making it sound waaaaaay more simple than it would actually be, but after rewatching Chappie, it doesn't seem too far-fetched.
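A minimal sketch of that "dictionary plus logic device" idea might look like this in Python; the `LEXICON` entries and response rules are invented placeholders, not any real system:

```python
# Minimal sketch of the "dictionary plus logic device" idea above.
# The LEXICON entries and rules are invented placeholders, not a real system.
LEXICON = {
    "hello": {"type": "greeting"},
    "bye": {"type": "farewell"},
}

def respond(utterance: str) -> str:
    word = utterance.strip().lower()
    entry = LEXICON.get(word)           # look the word up in the "dictionary"
    if entry is None:
        return "I don't know that word yet."
    # The "logic device": choose a canned reply based on the word's meaning.
    if entry["type"] == "greeting":
        return "hello"                  # echo the greeting, as the comment suggests
    if entry["type"] == "farewell":
        return "goodbye"
    return word

print(respond("Hello"))  # hello
```

Even this toy version shows why the idea scales badly: every word needs a hand-written entry and every entry needs a hand-written rule, which is exactly the gap the reply below points out.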
But that statement about the orange juice and the date requires knowing that containers of orange juice have expiration dates, that orange juice is juice made from oranges, and that it must be refrigerated. Not to mention the use of "him", a pronoun replacing nothing.
There are just so many things a computer would struggle with that simply being able to form absolutely perfect sentences, and even to understand what a sentence is saying, isn't going to be enough. It'll need knowledge of our world and how it works.
My dad’s a com-computer
XRMissie Hello.
I think it doesn't work that way.
The logical response for you is to repeat that. Why? Because you have a huge amount of experience talking with people. Logically, simply repeating 'hello' doesn't have any use unless one knows what the person one is speaking with expects to hear, or what certain words will make them feel, and intends to provoke those feelings. And that is the simplest piece of human conversation: exchanging greetings. Human language is deeply rooted in life and human behaviour; in order to learn how to speak a language, one has to connect every single word, sentence, and figure of speech to elements of experience. An AI having done that must also develop a personality in order to be able to use the language with humans, which also happens via experience. We humans have undergone thousands of years of social evolution, something that I guess is not easy to convey onto an AI.
These are my thoughts at least.
I know I'm super late, but what if we uploaded a search engine onto an AI? For example, if we said to it "I failed my exams today, what do I do to get a higher score?" It would search for what an exam is, what it means to fail it, and how to get a higher score on one. And it should be able to process sarcasm, humor, and slang.
I think robots are just extremely introverted, that's why they are not good at small talk :D
for one girlfriend problem discussion u need a million lines of code for an A.I
Language is probably the most complex rig in our brains. To make an AI talk like a human, we must first know how a human talks: how they perceive words and meaning from a sentence. Then and only then can we move on to forming new ideas. If we can't do the first, we won't do the latter.
Transformers
The interesting part about AI is that its solutions don't match ours. But they work anyway.
Take a simulated robot, and tell the program that the further it gets, the better.
This should teach the robot to walk, right?
Well... no.
It teaches it to move ever incrementally further, with flips, flops and wiggles.
Only if we make it a requirement to keep standing do we get the most remote form of walking.
Or take OpenAI's bot for the MOBA game Dota.
It didn't learn to play the game.
It learned to five-man rush lane by lane, used the AI's superior micro to win fights that were already in its favour, and then used potions, whose effects get cancelled by heroes but not by towers, to out-regen the damage the towers dealt. And thus it destroyed all buildings faster than the players could deal with them.
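The reward-specification point in that comment can be illustrated with a toy random search: the optimizer "solves" whatever the reward literally measures, not what we meant. The `(distance, upright)` outcomes below are invented stand-ins for a simulation, not real robot states:

```python
# Toy illustration of the reward-specification point above: an optimizer
# maximizes whatever the reward literally measures, not what we meant.
# The (distance, upright) outcomes are invented stand-ins, not real robot states.
import random

random.seed(0)

def search(reward, n=1000):
    """Keep the best of n random (distance, upright) outcomes under `reward`."""
    best = None
    for _ in range(n):
        outcome = (random.uniform(0, 10), random.random() > 0.5)
        if best is None or reward(outcome) > reward(best):
            best = outcome
    return best

# Naive reward: distance only -- the "winner" may be flat on the ground.
naive = search(lambda o: o[0])

# Constrained reward: distance only counts while the robot stays upright.
constrained = search(lambda o: o[0] if o[1] else 0.0)

print(naive, constrained)
```

Only the constrained reward is guaranteed to select an "upright" outcome; the naive one happily picks the furthest-travelled flop, which is the flips-and-wiggles failure mode the comment describes.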
Very underrated comment, very interesting idea
This test would fall apart once you ask it, "Would you like a glass of water?" And then make it drink it.
Wow, I can't believe that when I was young I thought AIs and robots who can think for themselves were impossible, and now it's almost true :O
Closer and closer everyday. Hope you're doing well 7 years in the future.
"If something thinks differently it does not mean that it is not thinking"
~Alan Turing
from the Imitation Game.
I'm not afraid of robots passing the Turing test. I'm scared of robots that actively fail it.
Back when Cleverbot was still bleeding edge AI.
Watching this from March 2024 is funny
exactly lol
Alan Turing: You only need 100 Megabytes of memory to pass "The Turing Test"
Me: Alan, I have 32 Gigabytes of memory but it can't pass the test.
.
.
Alan Turing: I died early, you know.
Anything can pass a Turing test if you set enough restrictions on the judge. A true Turing test would not restrict the judge from asking about anything or talking about anything.
If I made a bot that could only say "cheese", told a person that there were 9 humans and one bot and they had to find the bot by having a conversation, but set a restriction that people could only say "cheese" over and over again during the conversation, they would not be able to tell that it was a bot.
If consciousness came from inanimate matter that would be a miracle beyond miracles.
J O S E J U
The first thing I thought of was Google Duplex; please check it out if you haven't already, it's quite crazy how much technology advances even in just a few years
could a conversation between computers sound human?
I really like the design in the video