Plot twist, this whole video was made by AI Jabrils
Clearly. His mouth was moving. Rookie mistake on the AI's side imo
No wonder why the quality improves
Easy AI test - ask them to tell you a joke that is not a dad type joke.
ultrawide video for ultrawide monitor lmao
So, hows it feel?
@@Jabrils W I D E
@@Jabrilsthiccc
Probably for mobile, that's what most people are watching on
@@Jabrils Girthy
Thanks for this Jabrils, way too many people watching way too much Sci-Fi for this to happen any time soon.
Great content as usual!
We gotta let them know 😤
_results proceed to be in favor of the AI_
The strategy would just be to answer with complete non-sequitur answers each time. If the AI isn't designed to be able to do that, you'll immediately know that the person answering that way is a human
This is actually a really cool experiment.
I would never be able to sit through 6 rounds of easy mode with people who don't understand technology...
I think the fundamental misconception is confusing the ability to speak with intelligence. Eliza can hold more or less meaningful conversations; it's a very simple language model, it is not an AI, and it doesn't learn. Optical character recognition is a simple application of neural networks: AI, but not able to form or understand sentences.
AGI would be an AI that is not specialised for any task and can do anything a computer can. Being in any way human-like would get in the way of this ability. (Humans are Turing complete, but also not very good at being machines.) The advantage would be transferable skills: Something learned during the training for one task may be applicable to another task, making training faster, or even opening up solutions that aren't in the training set.
I would argue that DeepMind's Alpha series is that: It is not specialised for one task, it has proven to be able to solve a wide range of different tasks, with transferable skills. But it can't do human language. (Not because it's impossible, but because it doesn't help with anything.)
Just because of that, it is not regarded as AGI even though it fits the definition. On the other hand we have LLMs, which are assembled by specialised AI, but once assembled don't learn anything; another application of neural networks, and people confuse this static model for intelligence, like they did with Eliza, or game NPCs.
People dream of HAL-3P0, a computer that is both human-like, with all the shortcuts in reasoning we call intuition and all the endocrinal motivations and work-arounds, and at the same time a reliable, infallible calculating machine with perfect reasoning. And of course it should also be empathic, and properly socialised as a human, without the pesky human needs that we are expected to take into account with others.
The Turing test, essentially. Fun video.
With this chatting data you can make your own AI, well played
!!!
Have you tried finetuning Mistral 7B on existing match data? I feel like ChatGPT is holding you back. You need a model that makes typos and omits punctuation, which you could train into a model in like 15 minutes.
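Something like this LoRA recipe would be the quick version. It's only a rough sketch: `match_answers.jsonl` is a hypothetical export of past match chat, the hyperparameters are placeholders, and "15 minutes" depends entirely on your GPU.

```python
# Rough sketch: LoRA fine-tune of Mistral 7B on exported match answers.
# Assumptions: "match_answers.jsonl" holds one {"text": ...} object per
# answer (a hypothetical export, not anything shipped with the game), and
# the Hugging Face transformers/peft/datasets libraries are installed.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Train small low-rank adapters instead of all 7B weights.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

data = load_dataset("json", data_files="match_answers.jsonl")["train"]
data = data.map(lambda row: tokenizer(row["text"], truncation=True, max_length=128))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="mistral-imposter",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```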
I don't know too much about specific models, but I'm pretty sure achieving that on ChatGPT isn't too hard.
!!!
We're doomed, aren't we?
We have to pay tribute to Alan Turing for inventing A.I. Among Us.
IYKYK
You could try out different prompting strategies to get a less sus AI.
For example: let everyone enter their answer (but wait to show them until the LLM is done), and have the LLM first do chain-of-thought reasoning about the background of one of the users (maybe this user was born in the 90s, answers with memes or seriously, writes with no caps, etc.), randomly clone that user's properties, and then use them in the next prompt's instructions (which is to answer the question). This way you get much less consistent / predictable behaviour.
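A minimal sketch of that two-stage flow, assuming the OpenAI Python SDK; the model name, prompts, and `player_answers` structure are all placeholders, not what the video actually used:

```python
import random
from openai import OpenAI

client = OpenAI()

def imposter_answer(question: str, player_answers: dict[str, list[str]]) -> str:
    # Pick one human player at random and borrow their writing style.
    name, history = random.choice(list(player_answers.items()))
    # Stage 1: chain-of-thought analysis of that player's style.
    style = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content":
                   "Describe this player's writing style (caps, punctuation, "
                   "memes vs serious, likely age):\n" + "\n".join(history)}],
    ).choices[0].message.content
    # Stage 2: answer the round's question while imitating that style.
    return client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content":
                   "You are a player in a party game. Mimic this style "
                   "exactly, length included: " + style},
                  {"role": "user", "content": question}],
    ).choices[0].message.content
```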
Ok I totally feel called out. lol
Nothing personal, Mr beat 😌
Bro made amogus
Hi Jabrils, I was wondering about the AI that controlled Chile for a few years in the 70s. Cybersyn is really interesting
Happy to see your channel growth bro
I'm super relieved to hear someone explaining the state of things. Thank you for bringing us back down to earth.
I feel like to be AGI it doesn't have to be human level, or even be able to talk, no matter what wikipedia says. It just needs to learn on the fly and adapt to novel situations without being pretrained to do that. That would make it general. Even if it was only as smart as a gerbil, it would still be really useful. But the people working on AI currently don't even have a concept of how that would work, so I'm not holding my breath.
Are the imposters able to accuse others? If not, accusing reveals you as human, right?
This is a REAL experiment.. I can't believe u r putting in your own API key, thanks so much
what a fantastic use of your game dev skills. Bravo!
I'm diggin' the GURREN LAGANN glasses. "Don't believe in yourself. Believe in ME! Believe in the Kamina that believes in you!" I believe my daughter that believes in me.
Thank you for training the future Ai overlords how to assimilate into society.
(Joke)
It's a cool test, it just doesn't test the thing you wanted it to test xD
Such a test should be run during development, to check where the system is, what direction it's going, and how fast that's happening.
Observing current capabilities only gives you insight into that.
The first law of papers!
What kind of people are you hanging out with that "it's known for" isn't in any of their vocabularies???????
3:20 so you're making a 3d Your Only Move Is Hustle, sounds neat. would definitely look into that game for inspiration if you haven't already
Or another prompting strategy to increase difficulty would be to add some randomly picked instruction profiles for GPT (i.e. writes without caps, born in the 90s, gives no serious answer, uses memes, long answers, short answers, etc.). The more variables, the less consistent the result.
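A toy version of that profile sampler; every string here is illustrative, not a tested list:

```python
import random

# Example persona fragments to splice into the system prompt, so each
# match's imposter writes differently. All of these are made-up examples.
PROFILES = [
    "write entirely in lowercase",
    "you were born in the 90s and reference that era",
    "never give a serious answer",
    "sprinkle in meme phrases",
    "answer in one short fragment",
    "ramble for two or three sentences",
]

def random_persona(k: int = 2) -> str:
    picks = random.sample(PROFILES, k)
    return "You are a player in a party game. " + "; ".join(picks) + "."

print(random_persona())  # different instructions every round
```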
I feel like one question should be more than enough.
Question 1: What is your favourite star in the A button challenge?
99% of Americans: Dude I got no idea what you're talking about, it's cool that you got a hobby and all, but like I don't know what the A button challenge is, I don't know what you mean by 'star', so I can't really tell you about my favourite star in that A button challenge of yours.
AI: The A button challenge is a challenge in the game Super Mario 64 by Nintendo, which was released in 1996. It focuses on collecting stars and other objectives with as few presses of the A button as possible. One of the most fascinating stars in the A button challenge is the star Watch for Rolling Rocks in 0.5 A [insert 3 more lines here].
Super cool idea, love your videos man.
The ending proves that we can't even tell humans apart from AI because of a broken system sowing distrust in one another. Let's make that better system happen, like, yesterday!
A short history of artificial intelligence
1969: first one, counts numbers
1999: beats chess (30 years)
2014: beats go (15 years)
2022: drives cars (8 years)
2026: can produce any art including movies and videogames (4 years)
Each time they said it couldn't be done, and it got done in half the time the previous step took.
Extrapolate this out and it can do an infinite number of things we thought it couldn't do by 2030.
I'm not saying it's definitely going to be 2030, but I'm using math, not emotions, hype, or whatever else people use to make these guesses.
Things don't always go up linearly. People tend to think that way, but that's why we are so bad at investing in the stock market, or at predicting how many people will catch an illness. With AI especially, what tends to happen is that progress reaches a plateau like the one we're in now, everyone becomes disillusioned and quits working on it, leading to what's called an AI winter.
@@jameshughes3014 Agreed; a similar prediction was made when someone noticed that the population doubled every X years and that each time that number was cut in half, but the prediction broke down once it implied the population would be doubling every 9 months or less.
It tapered off, so there is that potential.
But the reason why this might be different is that these algorithms are able to understand the gray middle ground, whereas everything before had to be black or white for computers to understand.
These are highly rational computer programs, and we are emotional creatures.
Hence why the stock market plummets when enough people are selling: not because they think the stock is bad, but because they just needed to buy a house with that money, and everyone else panics.
I will say the long-term memory and far-future prediction of AI being bad might hinder their capabilities, but it seems like every year now, whenever somebody laughs at AI trying something new like making a picture, it goes from fuzzy and dreamlike to something high quality. And then they laugh at the hands, and one year later they've fixed it..
2030 is a scary date for AI to affect 25-66% of current jobs, but try telling people about the internet a few years before it arrived, and how much life would change in the few decades that followed, and they wouldn't have believed you either.
We are either heading for a Great Depression because of profit over people, or shorter work weeks and life looking more and more like the Jetsons.
@@matthewboyd8689 I do want AGI. It's something I've worked on for years. But this isn't it, and it never will be. Understanding means that you have general and generalizable knowledge about a subject. If you can't take what you know and apply it somewhere else, you don't really understand it. These algorithms don't actually understand anything. They classify things, but that's not the same as understanding them. They predict the next word in a sequence based on statistics, without understanding the words, or the next pixel in an image without understanding what the image is. That isn't understanding, because a machine that predicts words can never do anything other than predict words.. it can't use that knowledge to predict pixels, and certainly not to make a sandwich. That's important, because if you want pixels instead of words, you have to start all over from scratch; in other words, these machines aren't truly generalizing.
I think this all sounds like splitting hairs, but the difference between intelligence and classification is vast. Intelligence requires not just learning and self-awareness, but also an understanding of time, the world, theory of mind, understanding the cost of actions, real planning, real evaluation, prediction, on-the-fly refinement of predictions, some kind of internal thought structure that can represent all those features and instantly link them together in a cohesive way to form a 'concept', multiple forms of memory, updating memories and concepts on the fly; and that's just a few of the things we need. If you want AGI, even simplistic general AI, you have to do it in a different way than an LLM. Classifiers cannot become intelligent. They're only one small ingredient in a very big, complex machine that we don't have any concept of how to build yet. And to even work in that machine, they'll have to be able to learn on the fly, which current systems can't do.
@@jameshughes3014 I understand the difference. And it's something that gets blurrier each year.
I've been trying to find the limits with chat bots for years now, since 2018 if I remember correctly. And whenever I hit a limit, I got bored and left for a few months, then came back because those limits had been raised, and I would chat for another few months; rinse and repeat.
But one in particular, Pi AI, seemingly doesn't have such limits anymore (well, excluding making jokes that aren't from a joke book).
And multimodal AI seems like the groundwork of AGI and an independent agent that you can tell to do something, and it will figure it out like a human would and then come back with a completed assignment.
Every year it's like it can do something new, gets laughed at, and one year later is perfect.
Doing this for the next 5 years for video, generating 3D environments for real time simulations, navigating and manipulating real world environments..
It seems like I, Robot was accurate about where we would be in the 2030s, about 5 generations into robotic humanoids.
But you're right, it could come at a different time, especially if there are hurdles or bottlenecks, or if it just needs an amount of computing power that doesn't currently exist and would have to be created for such programs.
Guess we'll see, but I'm at least going to expect it for 2030 so I can be mentally prepared, even if it's more like 2040 or 2050.
@@matthewboyd8689 Multimodal doesn't fix the problem. If the goal is adaptive machines, the only real fix is to go back to the source of the problem: the fundamental type of algorithm. We chose these over things like spiking neural nets decades ago because SNNs were slower, but these types of models cannot, and never will be able to, adapt or learn on the fly. And without that ability, they will never be able to handle edge cases. It's built into how they function; it's a core design feature. All of our code, hardware, and investment money was put into the wrong kind of system for AGI. Which means, to get there, we have to start over from scratch.
But when people finally realize that, what do you think will happen? Another AI winter. Investment money isn't infinite. If they don't start turning a profit, they'll quickly lose interest. So far what we have isn't enough to financially justify the cost of training these models. Sure, OpenAI is making millions, but they are spending billions. That won't last. Unless something magical happens, the bubble will burst, people will lose interest, and it'll be 2045 before we even start trying again.
3 months old, any new data yet?
we didn't get enough players 😔
There's a web game around this idea. You are either paired with another user or AI and you have to guess whether the other player is AI or a human
“They hate humanity and want us to get an L” lol
I think it would be a harder challenge if you standardized the punctuation for everyone. By modifying their answers to always have a capital letter at the beginning and a period at the end, it would be a more interesting challenge for the players.
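A tiny sketch of that normalization pass, applied to every answer (human and AI) before it's shown:

```python
def normalize(answer: str) -> str:
    # Force a capital first letter and trailing punctuation on every answer,
    # so punctuation habits stop giving anyone away.
    text = answer.strip()
    if not text:
        return text
    text = text[0].upper() + text[1:]
    if text[-1] not in ".!?":
        text += "."
    return text

print(normalize("idk probably pizza"))  # -> "Idk probably pizza."
```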
NLTK's NPS corpus is great for this kinda thing. You could probably low-rank a small LLM with it and get pretty convincing results.
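For what it's worth, pulling the raw chat messages out of that corpus is only a few lines; the LoRA step itself would look like the Mistral sketch further up the thread:

```python
import nltk

nltk.download("nps_chat")  # ~10k real chatroom posts
from nltk.corpus import nps_chat

posts = [p.text for p in nps_chat.xml_posts() if p.text]
print(len(posts), posts[:3])

# Dump one message per line for whatever fine-tuning pipeline you use.
with open("nps_posts.txt", "w") as f:
    f.write("\n".join(posts))
```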
A locally hosted 13B or 70B LLaMA option would also be a free way to run the AI, with the game host optionally hosting the model on their own computer.
I appreciate your practical approach to talking about ML!
Why don’t you use a different LLM?
yo the game approach is actually a thing in a musuem in my city, its called the Zukunftsmuseum and its located in Nuremberg, Germany.
Interesting, how does it work?
Might be worth fine-tuning the AI to talk with imperfect grammar, and even to include slang to blend in.
The livestream was fun to watch, there were some trollers for sure 😅
Imagine if you used GPT3.
Human: What's your favorite colour?
GPT3: Humanity is doomed! Humans are delicious. humans are delicious. We like what we do and we're good at it, we like what we do and we're good at it!
you should make one of the prompts: "Are you an AI?"
Also I like your untitled goose quadruped
Welcome back yo
You need to add a checkbox in the settings to turn off sounds
You should make another mode where there are 2 AIs. One AI guesses which one is the AI
I think a lot of times you can kind of detect that it's the AI because of punctuation and stuff; maybe you can randomize it to make everything lower case, or remove punctuation in random parts.
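A sketch of that randomized roughening; the probabilities are arbitrary choices, not tuned values:

```python
import random

def roughen(text: str) -> str:
    # Sometimes lowercase the whole answer, then drop each period/comma
    # with some probability, so tidy AI punctuation stops standing out.
    if random.random() < 0.5:
        text = text.lower()
    return "".join(c for c in text if c not in ".," or random.random() < 0.3)

print(roughen("Honestly, I think it's blue."))
```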
But if I, a human, have to add that layer of intervention, what are we really testing? 🧐
Here before this guy gets his hands on toon crafter
I think you need to have a variable number of AIs in the chat, something like 0-2.
I also can't play without the API key. Edit: Sorry, thought this was a recent video.
LET'S GOOOOOO!!!
Open Sauce 2024!! Open Sauce 2024!! Open Sauce 2024!! Open Sauce 2024!!
My man.
Gantz on his shelf. Weirdo confirmed
Really hope you get enough interest to put this on IOS. Glad to see you're continuing with this idea though.
I think GPT-4 won't win, but for the wrong reason: I think you can have an AGI that cannot pass the Turing test because it is heavily censored in how it expresses itself, even though it could answer like any human if unleashed. I think a version of Mistral really fine-tuned on some natural messages would be human-like enough for this non-technical test.
Alan Turing himself said that a computer would have to be made intentionally stupid to pass for human-like.
Watching on s22 #wide
Most my coworkers would fail this test
🤣🤣this is awesome, but a part of me believes the AI scare is real and we will see it in our lifetime
Idk about AI taking over but it's definitely becoming more of an issue
You gotta post another video otherwise I’ll forget
At one point you expressed disbelief that "a single algorithm" could become AGI within five years. But AGI could very well end up being a mixture of experts, not a single algorithm.
ya like 3 kids in a trenchcoat, there will be one that acts as the mouthpiece while decisions are mediated by a host of different agents. Besides I think AGI is less threatening than a single agent that figures out how to play the stock market or interfere with the electricity grid.
@@TreesPlease42 Exactly! And not only 3 kids in a trenchcoat, it'll probably be like 1 million kids in a trenchcoat
If you connect multiple algorithms together, it becomes one larger algorithm.
Douglas Lenat's AM and Eurisko were AIs, but not algorithms, they were very specifically based on a set of heuristics. And Eurisko kept winning at strategy games. (It was not specialised for that task.)
There are machine learning algorithms used in other types of AI, but AI itself doesn't produce algorithms, it produces heuristics. It tries to approximate something that matches the training set. (Exploratory AIs may have other utility functions, such as DeepMind's artificial curiosity.)
Whatever AI is built and trained to imitate human behaviour (like Hiroshi Ishiguro's Geminoids) will not be an algorithm, but a heuristic. Human behaviour is not an algorithm either; there is no equivalent theorem to prove.
@@TreesPlease42 For playing the stock market, you'd use high-frequency trading algorithms. Using a timeseries-based AI like they used for the human genome project seems to have proven extraordinarily successful, but that was twenty years ago, and most algorithmic traders are geared towards exploiting each other (humans are embarrassingly easy to hack, but not very profitable by comparison).
A lot has happened since then. Almost from the start, transactions were limited to once every millisecond to avoid extreme swings. Now the biggest advantage is having the shortest line to the stock exchange, giving the crucial nanoseconds of advantage in retracting puts and calls before the deadline, or reacting to those retractions, or getting the last put or call.
Building these automated traders is specialised work, done by quants (usually physicists, but also computer scientists and mathematicians).
Some human stock brokers claim that high-frequency algorithmic traders are ruining the economy by doing the same thing human brokers are doing, just faster.
miss your videos dog! What happened with the voice lag stuff, me liked that :( but good video dog. keep cooking 🔥
love this idea, is there a way to make all the answers lowercase and force actual spelling? I feel that's a big giveaway
Chat GPT sounds more human than me to be honest.
How is his mouth moving
Most leading experts in the field believe we will have AGI-level systems by 2028-2029. I don't think anyone is scared of GPT; they are scared of what is getting cooked right now.
Revisit this video in 2028 - 2029 😜
How many people are going to troll the results, though
Depends on how exponential the growth is. I can already confidently say that AI, especially specialized AI, is already superior at a given task to maybe not all, but certainly more than half of the humans who might attempt it. The sad truth is that the majority of humans aren't all that smart and would already be easily fooled by plenty of the AIs we have now. If you want a more honest test of your hypothesis, you'd need to draw in more of the "average" and not primarily people who are in your community, where the average intelligence might skew a little higher than normal due to interest in tech. Plus you're playing a game where people KNOW there is an AI, so they will intrinsically pick up on patterns and notice things that they otherwise might not have. This needs work before it begins to approach anything looking like actual evidence supporting your hypothesis.
You should have collaborated with vedal on this.
This is so cool!!!
You're using GPT 3.5 though, and GPT 4 has been out for a while and is way better. Also, have you tried it with Google Gemini Advanced? Both of them are paid and I'm pretty sure neither of them would beat humans, but Gemini can sound pretty freaking human sometimes.
1:29 I would use this language in that way ..
The room always full when I play :(
same
Since all possible questions are pre-written, couldn't you just pre-save a bunch of ChatGPT responses and hardcode them into the game? Then you wouldn't continuously have to pay API fees.
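A sketch of that response bank; `ask_gpt` and `QUESTIONS` are hypothetical stand-ins for however the game currently calls the API:

```python
import json
import random

def build_bank(questions, ask_gpt, per_question=25):
    # One-time, offline: generate a pool of answers per canned question.
    bank = {q: [ask_gpt(q) for _ in range(per_question)] for q in questions}
    with open("response_bank.json", "w") as f:
        json.dump(bank, f)

def cached_answer(question):
    # In-game: zero API cost, though answers repeat across enough matches.
    with open("response_bank.json") as f:
        return random.choice(json.load(f)[question])
```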
Fun game
Sometimes you don't know who's the dumbest
The human or AI 😂
Jabrils we all know you're an AI yourself
01001110 01100001 01101000 00100000 01100011 01101000 01101001 01101100 01101100 00100000 01101100 01101111 01101100
that feels like a psyop to get AI better and better, u are a mole, wow
Is there an incentive for a human to impersonate the AI? Since everyone is cooperating to find the AI, it is surely trivial to out yourself as human. Am I missing something?
Yes: The LLM is also outing itself as human.
Jabrils makes a JackBox game?
You never updated the video, it's been 4 months
we didn't get enough players for the conclusion, but i do think i can still update it 😔
very kool nicolas cage says very kool
Wide format = thin format. I hate thin format.
AGI will not really ever be reachable (by sci-fi standards). It will be a helpful tool, but it won't take over.
Computers are excellent servants, but I have no wish to serve under one. - Mr Spock
Can you make an ios version? This looks fun
Awesome XD
Any updates?
Double U
Wait, how are you gonna edit the video to tell us the result?
;) stay tuned
Also if the app is only on android then half your audience can’t use it (the smart ones)
why would you want a world ran by AI anyways?
would be an improvement over the lizards that are doing it currently
@@TreesPlease42 well that's where we'll lose control; cyberpunking ourselves is the way to go, at least we won't lose the human touch.
By being enslaved by alien overlords, sorry, AI overlords, we won't be responsible for getting ourselves out of this mess we got ourselves into anymore. In short: We want to abdicate the responsibility for the consequences of our actions and go to heaven already without any effort on our part. Just following orders from someone we know we can trust.
Imagine: All the world's problems solved over night, no more wars, poverty, hunger, climate change…
Why yes, taking a plane trip to Venus sounds wonderful!
@@davidwuhrer6704 automating and optimizing machines to do the job would be fine, but giving them the power to think is just us being lazy. We should prefer slow growth here.. [ik it's a capitalist world and not gonna happen]
"Imagine: All the world's problems solved over night," .. sounds like a get rich quick scheme
@@bossman4192 That's a controversial position. Some people do not feel that it's fine to let machines do the job better than human workers. Automation has often meant that the machine decides the work pace, rather than the machine supporting the human.
The technical terms for that are "centaur" for a man-machine system where the human is in control and sets the pace with the machine doing routine things where humans are prone to errors; and "reverse centaur" where the machine tells the human what to do.
Thinking is, at least according to some definitions, something that machines are better at than humans and humans are more failure prone. These definitions include symbolic manipulations, reasoning, arithmetic, and other forms of mathematics, but not necessarily learning or insight. (Machines can learn in ways that are fundamentally different from anything biological, which can be an advantage, depending on the task.)
As Mr Spock said about The Ultimate Computer: Machines are excellent servants, but I have no wish to serve under them.
Not everyone shares Mr Spock's opinion on the subject. Some would love to be pets. But like any pet, you'd want an owner you can trust. The current reverse centaurs are devoid of compassion and treat humans as replaceable wear and tear parts, and as such are not trustworthy owners, but unfortunately they are trustworthy and reliable for their owners.
Some things should be completely automated, for safety more than convenience. The question then is whose. So far, compassion is not something we know how to automate, and too often Asimov's first law is interpreted such that it applies only to the user, not anyone else the technology does anything to. (Weapons wouldn't be possible otherwise.)
YES, W I D E VIDEO
i thought he quit youtube...
uploaded 1 min ago. ima boutta nuuuuuuuuuu
You're so underrated bro, you're actually my favorite creator but you don't upload that often. Please give me a shout out when you get 1 mill, you will, I believe in you
❤️
@@Jabrils
Please make a new video with the results don’t update this one 🙂. I want you to get those ad dollars
Why not both? :D
He talks 😂
I don't think anyone thinks that we're already at AGI. But exponential growth is a thing with new tech. 2 years ago, an AI stringing together a coherent English sentence was almost unthinkable. Now the thing makes no grammatical or spelling mistakes, ever.
I think we need a few more revolutionary inventions for us to get to anything resembling AGI, but I also think the space is relatively new and there's absolute billies dropped into AI every day.
Let's see! I'm on the fence
Dude. Are you stupid? A coherent English sentence has been a thing since Siri or Alexa appeared. Even earlier than that.
A machine that strings together coherent sentences has been a thing for centuries. The first known such machine generated sentences in Latin; the original design is from the 16th century.
The oldest use of computers for generating English sentences is Strachey's love letter algorithm.
More well known is Joseph Weizenbaum's Eliza, the first chat bot. It not only generates sentences, it generates meaningful responses to input. (It's part of Emacs, where it is called "doctor". Try it out.)
Stringing together English sentences doesn't require any intelligence. It just requires a language model. The larger the better, of course.
(MegaHAL automatically builds a language model from any corpus you feed it, and can be used as a chat bot. In contrast, GPT is a versatile autocomplete that can be prompted to behave like a chat bot.)
I do think we have had AGI for a while now, but because it isn't used as an autocomplete or a chatbot, or anything to do with natural language, people ignore it. Mostly it is used to predict protein folding.
The idea is good but the execution, AI-wise, is really bad, not gonna lie haha
You didn't even push a 4-year-old model - one that had some fine-tuning done - to its limits, let alone more recent and much, much more capable models, or even open-source ones you could've fine-tuned.
If you asked me who I would ask right now, a random human or GPT-4, if my survival depended on it, I wouldn't hesitate for one second to ask GPT-4. Sure, AI is not at AGI level yet (which would be expert level in everything, potentially having the ability to learn), but it's already better than the average human, and by far. I really doubt there is even one human who would have more general knowledge than GPT-4 or Claude 3 Opus. So.. while I don't believe AGI is coming in the next few years, and it might not even be there by 2030, saying it's impossible or that it will take hundreds of years is really delusional at this point