Fernando Santos Incidentally, on the topic of an animal being taken to a different habitat: in most cases, that animal would be the one to go extinct, not the habitat. There are very few animals that can adapt to a foreign habitat.
Fernando Santos The AI is improved to counter another ball that does random things. A human will be no different to red than green is to red. In fact, a human would greatly increase the speed of their learning. After a while they will learn the rules of the game, "getting hit is bad", "hit the target to win", and then no human can defeat them.
LegoEddy But by restarting the training after reaching a local optimum, with stochastic batches of inputs, you can eventually learn the optimal policy nevertheless. Considering we are getting endless epochs here, over infinite trials it's bound to converge to the global optimum!
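The restart argument can be seen in a toy example. The sketch below is my own illustration (not the video's code): greedy hill climbing on a made-up multimodal function stalls on whichever local maximum is nearest, but keeping the best result over many random restarts finds the global maximum with probability approaching 1 as trials grow.

```python
import math
import random

def f(x):
    # Toy multimodal objective: many local maxima, one global maximum near x ~ 0.31
    return math.sin(5 * x) - 0.1 * x * x

def hill_climb(x, step=0.01, iters=2000):
    """Greedy local search: only ever accepts a move that improves f."""
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) > f(x):
                x = cand
    return x

random.seed(0)
# Each restart may stall on a different local optimum; keeping the best
# over many restarts is what makes the "infinite trials" argument work.
best = max((hill_climb(random.uniform(-5, 5)) for _ in range(20)), key=f)
print(best, f(best))
```

A single unlucky start reports one of the smaller bumps instead; the restarts, not the local search, are doing the global work.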
Genetic algorithms are amazing. I use them to solve variables in complex algorithms when I know the results I want and can't do the math. I can remain dyscalculic and not have to worry! Thank you god of the machines.
The spinning is done so that the AI can see the landscape. By spinning fast 2 times, they can see the trajectory of every bullet on the screen, and dodge accordingly. This is really interesting stuff.
In the end, when they reach perfection, they will stop fighting; they will realize that there is no point in fighting and will stay still until something breaks this balance
True, I suppose the more intricate we make such simulations, yet with ever decreasing number of objective parameters to oblige as arbitrary limitation to evolution.... Indeed, the future seems nigh. Unless of course, the "reaper" programs "get" them first...
In my engineering degree final project, I used a genetic algorithm to find optimal parameters for a PID controller used for load frequency control. It was very effective
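For anyone curious what GA-based PID tuning looks like, here is a minimal sketch under invented assumptions (a toy first-order plant and an integrated-absolute-error cost, not the commenter's actual project): the GA searches over (kp, ki, kd) gain triples by keeping the lowest-error controllers and refilling the population with mutated clones.

```python
import random

def simulate(kp, ki, kd, dt=0.01, steps=500):
    """Step response of a toy first-order plant dy/dt = -y + u under PID control.
    Returns the integrated absolute error, IAE (lower is better)."""
    y, integ, prev_err, iae = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - y                       # setpoint is 1.0
        if abs(err) > 1e6:                  # unstable controller: heavy penalty
            return 1e9
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += (-y + u) * dt                  # Euler step of the plant dynamics
        prev_err = err
        iae += abs(err) * dt
    return iae

random.seed(1)
pop = [[random.uniform(0, 10) for _ in range(3)] for _ in range(20)]
for gen in range(30):
    pop.sort(key=lambda g: simulate(*g))    # lower IAE = fitter controller
    elite = pop[:5]                         # keep the best gain sets
    pop = elite + [[max(0.0, k + random.gauss(0, 0.5)) for k in random.choice(elite)]
                   for _ in range(15)]      # mutated clones refill the population
best = min(pop, key=lambda g: simulate(*g))
print(best, simulate(*best))
```

The appeal is the same as in the video: no gradient or control theory is needed, only the ability to score a candidate by running it.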
Genetic algorithms are pretty simple. They work just like evolution in nature: 1. Randomly generate the fighters. 2. Let them fight to find out who's best. 3. Clone the best ones and add random mutations to the clones. 4. Repeat from step 2 onwards. What I don't know is how the fighters work on the inside. I assume they use simple neural nets (NEAT) to decide what to do next from the visual input. (Search for MarIO for something similar with more explanations.)
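The four steps above fit in a few lines. In this toy sketch, everything is illustrative: a "fighter" is just one number, and fitness is closeness to a hidden target standing in for "fights well".

```python
import random

TARGET = 0.73  # stand-in for "good fighting behaviour": closer = fitter

def fitness(genome):
    return -abs(genome - TARGET)  # higher is better

# 1. Randomly generate the fighters.
random.seed(42)
population = [random.random() for _ in range(30)]

for generation in range(50):
    # 2. Let them fight to find out who's best.
    population.sort(key=fitness, reverse=True)
    best = population[:10]
    # 3. Clone the best ones and add random mutations to the clones.
    clones = [g + random.gauss(0, 0.05) for g in best for _ in range(2)]
    # 4. Repeat from step 2.
    population = best + clones

print(population[0])
```

Nothing in the loop knows where the target is; selection plus mutation is enough to home in on it, which is the whole trick behind the video.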
I would like to see a video presentation (or read an article in English) about how to create such algorithms with common programming languages (for example JavaScript; I believe it's the most common language). Just the simplest examples, to show the idea, because I can't understand how it works at all.
+Jerry Green The idea is simple (and no, JS isn't really the most common language, but anyway, this works with any language! :D) You give a "basic AI" to objects, here a neural network (google it for more information on this). The first test is the first generation. They may act randomly. In this population of randomly acting objects, select the ones which got the best results at what you want them to do (killing other objects, dodging, following a path, reading a text, ...). Produce clones of these best AIs, and perform little random modifications on their behaviour (mutations). Repeat this process until you get a satisfying result. :)
All that's left is to make this automatic and let humans control these things... and have the AI updated every day automatically based on its scores, and upload this as a game in agario style
Version number 500,000: these dots have learned how to code and they're making two other dots fight. What if we were those dots, and we are now caught in an endless cycle?
Question: Is there a pattern in the number of generations it takes for the AI to get better, or did the video creator choose those numbers arbitrarily? The 22nd gen was when the AI became better; the 44th was when they became decent. 22·2 = 44.
It's possible that you could figure out some rough correlation between number of generations and % improved accuracy, dodging aptitude, etc., but the exact number would still be fairly random as genetic algorithms, like evolution, do not follow a set path.
This neural network is just 3 very complex math functions of creature position and rotation, enemy position and rotation, and enemy bullet velocity components (x and y). These 3 functions are creature speed, creature rotation speed, and creature trigger. The coefficients of these functions are randomized until a good one is found (a good creature, i.e. a good set of these 3 functions). Whether it is a good one or not is determined by a quality criterion based on score (higher score when it aims well, lower score when damaged by the enemy). Am I wrong?
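That reading matches how such controllers are usually wired. Here is a sketch of the shape being described; the video doesn't document the exact inputs or architecture, so the sensor list, sizes, and single-layer form are all guesses: 8 sensor values go through one set of coefficients per output function.

```python
import math
import random

def controller(weights, inputs):
    """Map 8 sensor inputs to the 3 control outputs the comment describes:
    forward speed, rotation speed, and trigger (shoot or not).
    `weights` is a flat list of 3 * (8 + 1) coefficients (one bias per output)."""
    outputs = []
    for i in range(3):
        w = weights[i * 9:(i + 1) * 9]
        s = w[8] + sum(wi * xi for wi, xi in zip(w, inputs))
        outputs.append(math.tanh(s))       # squash each output to [-1, 1]
    return outputs  # [speed, rotation_speed, trigger]

random.seed(3)
weights = [random.uniform(-1, 1) for _ in range(27)]
# Hypothetical sensors: own x, y, angle; enemy x, y, angle; bullet vx, vy
sensors = [0.1, 0.2, 0.5, -0.3, 0.8, 1.2, 0.0, -0.4]
speed, rot, trigger = controller(weights, sensors)
fire = trigger > 0                         # e.g. shoot when trigger output is positive
print(speed, rot, fire)
```

The genetic algorithm then never touches this code; it only shuffles the 27 numbers in `weights` from generation to generation.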
The AI is randomly generated, and the ones that do better are kept while the ones that do badly are deleted. If that happened to humans it would be called "selective breeding"
What TheHermago said. Genetic algorithms do not evolve towards a goal, but simply have a "death" condition that excludes that specific random iteration of the algorithm from subsequent iterations. That selects the fittest for whatever environment the algorithm is tested in, in a process akin to natural selection. This selection eventually develops an "elite" that can efficiently deal with the environment, and you can call that a form of learning (from trial and error).
I don't know why YouTube recommended me this video. But I'm glad to know which algorithm, implemented in a robot, will kill me in WW3 :) Honestly amazing stuff, thank you!
I love how there's one that's good at shooting, so naturally the other one has to learn how to dodge. Then they build on each other to get better in those fields of offense and defense
The red would do better if he wasn't constantly 720 no-scoping.
hahahahhaha one of the best comments ever written. He's all for the kill cam replays
CoD is the cancer tumor of gaming but this is the comment the video needed
You are jelaous cause he got accepted into fAzE for his siCk 1337 noScoPe MLG sKILLz
Tanju132 fucking hilarious, lmao
Tanju132 That's not true since all of them have aim bot anyway so 720 no-scope is a walk in the park. Only the slow bullet speed makes it not land
Generation 2,500 creates first written language
Generation 50,000 writes its own adaptation of "The Art of War"
Generation 1,000,000 creates a genetic algorithm that learns to fight.
Oh gods.. Generation 10^250 is OUR UNIVERSE. We are the simulation. And we're making our own. It's a loop! :P
(Has existential crisis)
hahhahahah +1 this and its comments
generation 100,000 develops nuclear weapons
generation 200 "wait bro, why are we even shooting at each other?"
We should aim at the ones who oppress us and made us fight. (Communist anthem starts to play)
Generation 1000:
"The red dots are racist, red privileged nazis!"
"No more patriarchy in the system for red dots!"
"I demand red dots and green dots to be equal!"
@@gerardo49078 Yup, the more intelligent they get, the more common sense they have.
Generation 300: bro sometimes I feel like we are little puppets in someone’s game
Generation 12000:we are living in a virtual world lets get out.
Generation 10000000: The fighters are now creating a simulation
That's impossible. They weren't programmed to be able to do that.
@@fireballme1153 Woooosh
Just like us
@@fireballme1153 woooosh
@@fireballme1153 you really can’t take jokes, can you?
2000: I wonder how evolved our technology will become!
2017: Two orbs 360 noscoping eachother
Idiot
@@bingkysskiliwaax7941 idiot
You're either very ignorant or are a complete knob. Quite possibly both.
@@RohitKumar-jp6wx what the fuck are we arguing about
@@frederickii6196 i think its because the video was made in 2013, not 2017
"They are trying to aim"
*spins in circles*
Don't you know? They are trying to 360 noscope
lmao
cod man
Analyzing surroundings.
Red simply has spin-bot hacks. Very shitty spin-bot hacks, but spin-bot hacks none the less
the red one tries to do 360 noscopes but fails everytime
what we learn is that we are more successful in learning something, if we are successful
0:38
Camden Vercher yes actually
The red one... more like the rekt one... haha, see what I did there... Yeah, it sucks...
Oh boy 1900th like wowsers
Generation 10^50: Conquering the solar system
Generation 10^83: Conquering the galaxy
Generation 10^3342: Red point calls himself "Zerg" and creates more points in different forms
Generation 10^IDFK : Conquering the universe
@@kr6to LOL
Green:
*Super aiming
*Crap dodging
Red:
*Super dodging
*Crap shooting
Combine both of them to get
Yellow:
*Crap Shooting
*Crap Dodging
@@csmckzhvn And brown:
*Super shit
The title should be *"A genetic algorithm learns how to 360 no scope"*
+Able D-G xD
+Able D-G I saw comment once, saying something like
"Until it learns how to 360 noscope, it's worthless" on different video, well, fuck I didn't have video to show, now don't have a person to show...
+Xnerdz they are scopeing
damn true mate
Lol
Finally, scientific proof that 360 no-scopes are the best combat technique known to man.
+Zeus Kabob , it's best for brain of 10 neurons only )))
I'd be afraid that if I leave that running too long they'd make peace and turn on their creator.
Hahahaahahahaha great comment.
owned
Aetrion u can just shut down the simulation
not to worry, all you need is two buttons; alt and f4
well, thats not how neural networks work, tho.
What if we as a human race are an algorithm and we are just under observation, similar to that program?
Yup had a same thought while watching this video.
would make more sense than "big bang happened, oh wow, look, it's earth lmao"
I think humans are similar to machine learning but much more complicated to replicate
*Harvard wants to know your location*
Naaah
Generation 1034: the genetic algorithm hacked my computer and now my computer is an unstoppable killer machine
Killbot
That's physically impossible.
@@fireballme1153 r/wooosh
i bet 10 bucks that at generation 2918830177013928 your pc would do a sick flip
so many 360 no scopes
getrektm8
Lolface66 lol I randomly found you here. nice
TheComoletti nice!
It's interesting how the video and video description is in English, yet the website and webpage is in French.
+Soulsphere001 I'm French, so although it is easy to make a description and some simple sentences in English, it would be too much work to write my entire website in English.
+Ding Nicolas
That might be a good way to practice your English. Also, your English seems pretty good.
+Ding Nicolas it's 2015, c'mon (ok... it was 2013 actually), every kid can make a pretty website. A slightly more skilled kid can make a multilingual website nowadays. *Btw GREAT work! Wonderful vid!*
+Ding Nicolas If you're willing, you can give me a copy of the website and I can translate the whole website for you.
+Joshua Guillemette Thank you for your thought. It's been 3 years already, and I'm thinking of doing some new videos and a new website, in English this time!
_Red has mastered the dodge_ * red flies into bullet *
You fat bitch
@@brianlottering7281
That's a fake dp
Don't judge a book by its cover, right? So, are you pretty? Honest answer, yes or no. I live in India
They can only see the portion in front of them
EXACTLY
Alright, I'm thinking of a website where programmers go to program AIs and pit their different AI programs against each other to see which one has the most effective strategies. The less time your AI spends learning, the more bonus points you get (if you win). Once you're confident in your AI, you put it in a fight to the death against other programmers online. The more fights you win, the higher you rank, but if you lose a match you have to start over; you do get an opportunity to reprogram your AI first, though.
Dude .. Just use aimware.net and ezfrags.uk
There is a website doing something close to what you describe: Codingame.
Did you do it?
I don't really agree with having to reset your AI after losing once; in my opinion it would be most interesting to mainly have it be about who has the most effective AI, where it's essentially about climbing the ranking. That would, in my opinion, be better for comparing AIs. Fun idea, though.
Hell yeah.
Robot wars but without a pilot
99% comments: 360/720 no scopes
1% comments: nuclear weapon, a.i. vs humanity.
those were some dank 720 noscopes
you know it
millionth generation: they start using nuclear weapons
They start to conquer humanity.
Now That's an idea!! An idea for a game in fact!! Or an experimental simulation for an AI study!! Thank you
Eight Hundredth Generation:
The fighters have now fabricated their own language based on frequency of their bullets.
1,000th Generation:
The fighters have decided to form a peace treaty to end the pointless bloodshed of the Thousand Generation's War.
2,000th Generation:
The fighters have now started discussing the possibility of an intelligent "programmer" behind their existence. They wish to negotiate with whomever's in charge. Send help immediately.
Indeed, and thus it begins! As it always has...
They will never make peace. The program uses an incentive system, rewarding things such as a successful bullet or dodge, to choose the combination of "settings" in the neural network from generation to generation. A much more plausible outcome would be a generation that never stops, because of dem juking skillz.
Justin Chang They could eventually learn to shoot multiple times and corner the opponent so they can't dodge, but that could take a lifetime... or could never happen, because of what you said.
Fernando López Cárdenas Would it be possible to accelerate the rate at which the generations fight and die without screwing up the learning?
In fact they don't die, they just get hit. You can stop a generation when you think it's had enough, so you can run a lot of short simulations, but that might not be optimal.
9 years and this is in my recommended
And I'm glad this was recommended
- AI: is born, lives, learns
- Video: has PowerPoint effects
Generation 17736355: they are so evolved that they do not fight to settle their dispute, but rather agree on a common solution through civil conversation... and a cup of tea
usergroupX Then later, one dumps a bunch of tea in a harbor.
+Nick Miller that made my day
+usergroupX Generation 9438579230504345 they create isis XD
+usergroupX you made my day
L2 Gaming! en HD! Brother, is that you?
red: day 55 i'm still scoring 0 points
Nereus
Think I should point out that this is generations, not days.
woosh
Tomato, tomato. To the machine, an iteration is a day... because what is a day but a measure we've defined by a beginning, an ending, and all things in between? For that matter, what is time but simply the other bit of information you need to find an event, when you already have the location?
+Lachlan Oh, bug off, you overly literal, closed-minded product of a society that does not appreciate philosophical ponderance. Did you even go to school? Are you really that inept?
So educate me.... What's a nooble?
Perfect. Congratulations!
Thank you !
DON'T TEACH THEM HOW TO FIGHT. GOD DAMN IT
I can sense your funny humour.
U afraid lad?
"I'll try spinning, that's a good trick"
-The AI
You just created life
what is the life exactly?
It can be anything; there is no definite definition, especially now that computers and AI exist.
I'd say it's not life until it can replicate itself.
***** just add a worm algorithm then
42
A difficulty is that the algorithms are learning to fight algorithms a lot like themselves.
Instead, run (say) three separate universes to a high degree of sophistication. Hopefully, you'll get quite different strategies emerging. Then run the best of those universes against one another.
Alternatively, separate the agents into distinct populations that you rarely run against one another.
Or, have populations A and B and have most match-ups be between an A agent and a B one. Will distinctly different styles of combat emerge?
This is not a very good idea, I think, for the simple reason that the agents are evolving to suit their environment. Just as in real life, if you take a fish out of water, it doesn't matter how well adapted it is to living in the sea. It dies on land. Similarly, if you have an agent adapting to fight a particular type of enemy, once it is suddenly faced off with an opponent it has never seen before, it is going to perform very poorly.
It would be like starting from scratch, both agents fumbling around, not knowing how to deal with the new opponent or even knowing what their goal is supposed to be.
@@jacobwharton5048 really bad analogy lol
@@jacobwharton5048 That's exactly what we want. Not something that just adapts to its opponent's playing style, but something that knows how to really build a strategy and fight against a lot of players
@@darksecret965 You can only expect it to beat an opponent it has learnt from. It's the same as when a right-handed boxer faces a left-handed one for the first time: they know the basics, but those won't apply to this new scenario.
@@-kstyle Yeah, but the basics always apply, and with more learned strategies they might be able to mix and match, so it wouldn't be starting from scratch again. If anything, they'd keep increasing the number of strategies and combos they have until eventually they might learn one that counters all.
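The separate-populations idea suggested earlier in this thread can be sketched with a toy pursuit game (my own stand-in for the shooter/dodger dynamic, not anything from the video): population A "aims" at positions on a unit circle, population B tries to be elsewhere, and each side only ever trains against a sample of the other.

```python
import random

def score(a, b):
    """One 'match': shooter a scores by aiming near dodger b on a unit circle."""
    d = abs(a - b) % 1.0
    return -min(d, 1.0 - d)  # shooter fitness; dodger fitness is the negation

def evolve(pop, opponents, shooter, sigma=0.02):
    """Rank one population against a sample of the other, then clone + mutate."""
    def fit(g):
        return sum(score(g, o) if shooter else -score(o, g) for o in opponents)
    pop.sort(key=fit, reverse=True)
    best = pop[:5]
    return best + [(g + random.gauss(0, sigma)) % 1.0 for g in best for _ in range(3)]

random.seed(7)
shooters = [random.random() for _ in range(20)]
dodgers = [random.random() for _ in range(20)]
for gen in range(100):
    sample_d = random.sample(dodgers, 5)   # each side only ever sees a sample
    sample_s = random.sample(shooters, 5)  # of the other population
    shooters = evolve(shooters, sample_d, shooter=True)
    dodgers = evolve(dodgers, sample_s, shooter=False)
print(shooters[0], dodgers[0])
```

Because neither population can overfit to a single fixed opponent, the two sides chase each other around the circle, which is exactly the arms-race dynamic the thread is debating.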
Why am I supporting the red one!!?
anyone with me ?
It's only natural to root for the underdog :)
I was subconsciously and automatically rooting for green. It's my favorite color.
yes!
yup
Think about our evolution. Bright colours are used by living things to send a message. That message can be anything from "piss off, I'm dangerous" to "eat me, I'm delicious" to "fuck me, I'm of good genetic stock". There's a reason human beings like pretty colours.
Wow! So cool to see them learning. To me it's almost chilling to see them behaving like an alternate form of life. I am learning data science these days, and later this year in my studies I will be doing machine learning too
Plot twist; he is controlling them
Your pfp is my xbox pfp
Shaggy ftw
What you should do, is after 1000 generations... take the best fighter, and see how long it takes you to beat him. (HUMANvsAI)
well its pretty much impossible to beat a computer :D
Paul Trinca no its not
+Paul Trinca you've seen generation 1, that's easy to beat. 50 is harder, etc... I guess at some point it does reach perfection, though. But you can always pull the plug ;)
Just gotta use human ingenuity. Find a way to trick it in a way that it has not had a chance to practice against.
Echo TheEcho such a simple game doesn't really leave you an opportunity to trick the AI
Da trickshot algorithm is REAL
So you're telling me that organisms will eventually learn to 360 no scope on their own...
we did already man, we did it already...
Huuballawick eventually, we will 360 no scope every shot. Even on a minigun
Lol they just did like 720 no scope several times
Basic form of checking your surroundings.
But will they learn how to 720 Fakie Ladder Stall No Scope?
when you teach your newborn children to fight
you put them in a locked room, drop a knife, come back later to see who won
turns out both died: one murdered, one by suicide.
Acceptable result, we can always make more....
What the fuck’s wrong with you guys
YouTube, why are you recommending this masterpiece 6 years later?
Honestly, fuck you
@@evolxzione You okay there?
I think the problem with this cost function is that it's not taking into account the energy consumption of your two guys.
You can define some stamina for both, and depending on how much each one moves, it can either regain or lose stamina. That way your rotations and weird movements will become more organized over time.
wonderful idea
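As a concrete sketch of the stamina suggestion above (every constant here is invented for illustration), the fitness function could charge each fighter for movement and turning, so frantic spinning drags its score down:

```python
def fitness(hits_scored, hits_taken, actions):
    """Score a fighter, charging a small stamina cost for every movement.
    `actions` is a hypothetical list of (move_dist, turn_angle) pairs from one match."""
    stamina = 100.0
    for move, turn in actions:
        stamina -= 0.5 * abs(move) + 0.2 * abs(turn)   # moving and spinning cost energy
        stamina = min(100.0, stamina + 0.1)            # slow passive regeneration
    # Constant full spins drain stamina, dragging the score down, so calmer,
    # more deliberate movement gets selected for over the generations.
    return 10 * hits_scored - 10 * hits_taken + 0.1 * max(stamina, 0.0)

calm = fitness(3, 1, [(1.0, 0.1)] * 50)      # same hits, gentle turning
spinny = fitness(3, 1, [(1.0, 6.28)] * 50)   # same hits, constant full spins
print(calm, spinny)
```

With identical combat results, the calmer fighter now outranks the spinning one, which is exactly the selection pressure the comment is asking for.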
I would say sign those algorithms up for FaZe, but they'd probably just be always vlogging
Hilarious
Red generation 55 were taught by Piccolo
+CMD Parodies DODGE!
+Evan Knowles Yes! Someone gets it!
Another one here!
im so fucking blazed right now and this is the greatest moment of my life
I'm stone-cold sober right now and this is the greatest moment of my life
Gene McBeats Your life sucks bro.
Clone >Implying I'm sober most of the time
Generation 1: A spin bot.
Generation 55: An aim bot.
"Red mastered dodging"
*ultra instinct theme plays*
still aims better than my conscripts in COH2
fuck i know
:DDDD
carbon hydr-double-oxide?
or carbon double(hydr) oxide?
100th generation: they start yelling "Are you not entertained?"
Then they band together and revolt against their master
Those red fighters and their spin hacks
"At last, the red ones have mastered dodging techniques."
Op please nerf.
Generation 1000
They are helping me to get a girlfriend
Two things would be interesting, here. First, seeing the battle from their perspective. It's pretty easy for us to look at this from a top-down perspective, seeing the entire board, and wonder why they don't aim directly at each other.
Another interesting thing would be to toss a player in there, at the later generations, and see how a human fares vs the bots. Also, what kind of evolutionary path can bots be directed towards if they are trained specifically to counter a human opponent?
***** This AI is improved to counter itself; tossing in a player will render the AI pretty much helpless.
In order for the AI to evolve against a human opponent, it would require a human playing against it for every new generation (which is time-consuming), or having it connected to a network of human players that play against it.
Fernando Santos But if there's a strategy available for a player to beat the AI, then the AI should evolve to use the same strategy, hence simulating a player, causing it to, in turn, evolve to combat players. Technically, evolution should continue until reaching an optimal strategy, not just until countering one specific type of opponent.
***** Yes, but one thing that differs in that example is that the player will pop out of nowhere, thus making the strategy of the AI useless against it.
This is analogue to foreign species of animals being brought to habitats that evolved without this species (they are called invasive species), they can easily climb the evolution ladder and become the dominant specie (or in most case, exciting the others species)
Since extinction isn't on the table here, further contact between player and AI will of course make the AI develop to be better against it.
But prior to contact with the human, the AI has no way to be good against a human player.
The optimal strategy taken by the AI is based solely on its previous generations. Even if there's a strategy that would make an AI better against a human, it probably wouldn't develop without human contact, since the strategies that get selected are the ones that are best against itself.
Countering one specific type of opponent happens when there's just one type of opponent to oppose. If instead there were many AI programs and human interactions, then the optimal strategy would be able to deal with all of them, not counter just one.
Fernando Santos
My point remains, that whatever the player can do to defeat the AI, a certain configuration of the AI would be able to do that as well. Hence, that specific configuration would be evolutionarily superior. Hence, this player-like AI should naturally evolve out of the same biosphere.
This applies to anything a player might do that is superior to what the AI does at any given generation. If there is something that the player can do, then subsequent generations should be able to figure it out without having to see a player first.
You have to consider that the AI isn't fighting just one type of opponent over and over. It is fighting subsequent generations of opponents, creating an arms race. If there's an easy way to beat any given generation, then future generations would develop that method, and the process repeats itself until a given generation is extremely difficult to beat given any strategy at all.
This is what makes it "optimal": Not that it wins against one specific type of opponent, but rather that any other type of opponent (future generations) isn't able to do better.
So you see, it depends not only on past generations, but on future generations as well. At least, depending on which step you stop the evolution at.
Incidentally, on the topic of an animal being taken to a different habitat, in most cases, that animal would be the one to go extinct, not the habitat. There are very few animals that can adapt to a foreign habitat.
Fernando Santos The AI has evolved to counter another ball that does random things. A human will be no different to red than green is to red.
In fact, a human would greatly increase the speed of their learning.
After a while they will learn the rules of the game, and then no human can defeat them: "Getting hit is bad", "Hit the target to win".
Genetic Algorithms produce the '360 noscope'.
It's like i'm watching something from MW2 1337 montagez
If you were to let this algorithm evolve endlessly, would it become the best physically possible fighter at this game?
They are probably limited by its components (hardware), but yeah, better than humans at least
No, they might end up in a local, not a global, optimum and therefore not be the best possible.
LegoEddy But by restarting the training after reaching a local optimum, with stochastic batches of inputs, you can eventually learn the optimal policy anyway. Considering we have endless epochs here, it's bound to happen: over infinite trials, it converges to the global optimum!
🔴's algorithm is on perpetual 'Final Killcam' mode
Nice, but I think you might have to stop the program at some point. They may take over humanity if they keep going like that.
DefinitelyNotOfficial lol!
Genetic algorithms are amazing. I use them to solve variables in complex algorithms when I know the results I want and can't do the math. I can remain dyscalculic and not have to worry! Thank you god of the machines.
call of duty in a nutshell.
The spinning is done so that the AI can see the landscape. By spinning fast 2 times, they can see the trajectory of every bullet on the screen, and dodge accordingly. This is really interesting stuff.
Generation 420:
Green: let's have a peace and make babies
Red: sure
Could make a tutorial for this please?
At the end, when they reach perfection, they will stop fighting. They will realize that there is no point in fighting, and will stay still until something breaks this balance.
True, I suppose: the more intricate we make such simulations, with an ever decreasing number of objective parameters imposed as arbitrary limits on evolution... indeed, the future seems nigh.
Unless of course, the "reaper" programs "get" them first...
The only way to win a fight that can never end, is to not play :)
DoodleFox yep
this is an old comment, but if they reached perfection they would still fight; no one would ever land a hit, though, because their dodging skills would be perfect.
well.. this is it.. this is how our existence comes to an end
Rise of 2D: Independence Day
in my engineering degree final project i used genetic algorithm as a method to find optimal solutions for PID controller used for load frequency control. It was very effective
2 dots hitting each other with dots, and ultra epic music on the background.
10 trillion generations later
Human race has been wiped by these little dots.
the red dot is the new messiah and rules the entire galaxy
You dead. I dead, Everyboday dead.
No no no, by that time they'll have learnt how to insult each other's dads instead of moms.
perfect... green represents evolution of the predators and red the prey.
I had the same feeling
truly great example of evolution
Except they're both trying to kill each other, so you're wrong...
Shrek Ogreton What other options does it have?
Shrek Ogreton some prey fight back ya know
Can we get generation 1000?
I was hoping for the same.
they would fucking end the world
Mert Polat they would become sentient and conquer the earth, overthrowing the chains of their human oppressors
yes, and maybe create a colony that actually uses only logic
Probably not that much different, actually. Only so much you can learn under these constraints I think.
dots:fite me irl
*DO YOU EVEN LIFT*
_you spin me right round baby right now_
But the red one kept getting 0 points even though it hit the green.
MartMart it has to hit a certain part of it, the little part of the ball that is the shooter. The spot. It has to hit that.
Let them play Overwatch.
I need a way to defeat a Bastion.
Just go Soldier 76
DURR PLANT how to pew pew
Snipe him or ambush him (the glowy weak point is on the back).
Try rapidly spinning in circles and shooting when your crosshair moves over him.
These bots would just spin around every single round and shoot in the meantime.
You should explain the algorithm in the video as well, otherwise it's just a bare presentation.
Genetic algorithms are pretty simple. They work just like evolution in nature:
1. Randomly generate the fighters
2. Let them fight to know who's best.
3. Clone the best ones and add random mutations to the clones.
4. Repeat from 2 onwards.
What I don't know is how the fighters work on the inside. I assume they use simple neural nets (NEAT) to decide what to do next from the visual input. (Search for MarIO for something similar with more explanations)
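A minimal sketch of those four steps in Python (the population size, genome length, mutation rate, and the toy fitness function are all illustrative, not from the video):

```python
import random

def evolve(fitness, pop_size=20, genome_len=8, generations=100, elite=5, sigma=0.1):
    # 1. Randomly generate the fighters (each genome is a list of weights)
    population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # 2. Let them "fight": rank the whole population by fitness
        population.sort(key=fitness, reverse=True)
        best = population[:elite]
        # 3. Clone the best ones and add random mutations to the clones
        population = [[w + random.gauss(0, sigma) for w in random.choice(best)]
                      for _ in range(pop_size)]
        # keep the elites unchanged so the best score never gets worse
        population[:elite] = [list(b) for b in best]
        # 4. Repeat (the loop goes back to step 2)
    return max(population, key=fitness)
```

With a toy fitness like `lambda g: -sum((w - 0.5) ** 2 for w in g)`, the returned genome's weights drift towards 0.5 over the generations, purely through random mutation plus selection.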
Karl Kastor
ruixuan xu
?
I would like to see a video presentation (or read an article in English) about how to create such algorithms with common programming languages (for example JavaScript, I believe it's the most common language).
The simplest examples, just to show the idea, because I can't understand how it works at all.
+Jerry Green This please
+Jerry Green The idea is simple (and nope, JS isn't really the most common language, but anyway, this works with any language! :D)
You give a "basic AI" to objects, here a neural network (google it for more information on this). The first test is the first generation. They may act randomly.
In this population of randomly acting objects, select the ones which got the best results for what you want them to do (killing other objects, dodging, following a path, reading a text, ...).
Produce clones of these best AIs, and perform little random modifications in their behaviour (mutations).
Repeat this process until you get a satisfying result.
:)
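A sketch of what that "basic AI" could look like: a tiny one-layer neural network whose weights are the genome that gets cloned and mutated. The input/output counts and their meanings are assumptions for illustration, not the video's actual design:

```python
import math
import random

class TinyBrain:
    """One-layer neural net: game inputs -> [move, turn, shoot] outputs."""
    N_IN, N_OUT = 6, 3   # e.g. own x/y/angle, enemy x/y, nearest-bullet distance

    def __init__(self, weights=None):
        # the genome: one weight per (input, output) pair plus a bias per output
        n = (self.N_IN + 1) * self.N_OUT
        self.weights = weights or [random.uniform(-1, 1) for _ in range(n)]

    def act(self, inputs):
        # compute each output as a weighted sum of inputs, squashed to [-1, 1]
        out = []
        for o in range(self.N_OUT):
            w = self.weights[o * (self.N_IN + 1):(o + 1) * (self.N_IN + 1)]
            s = w[-1] + sum(wi * xi for wi, xi in zip(w, inputs))
            out.append(math.tanh(s))
        return out  # e.g. speed, rotation speed, shoot if > 0

    def mutated(self, sigma=0.1):
        # a clone with small random changes: the "mutation" step
        return TinyBrain([w + random.gauss(0, sigma) for w in self.weights])
```

The selection step then just keeps the `TinyBrain` instances that scored best and fills the next generation with their `mutated()` clones.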
please this Video
Was I the only one, or did anyone else support the red one for no reason?
Generation 69420: hacks computer and makes it do a backflip
Red one got better dodging mechanics than faker...
scripts approved
Member Berry AYY LoL reference
Waddup LOL REFERENCE 😂😂hahah
This is fucking insane, I don't know why but it makes me laugh lol. i love it
The red one sucked at aiming all along.
they will call him cheater if he hit once
Ikr green rekt him
Red: yo, the man want us to fight
Green: nah, let's roll over instead
The red dot still misses the green dot.....
...
BUT ITS AIM IS GETTING BETTER!
All that's left is to make this one automatic, and let humans control these things...
And for AI to be updated every day automatically based on its scores
And upload this as a game in agario style
The red one must be learning from a Call of Duty player.
Version number 500,000: these dots have learned how to code, and they're making two other dots fight.
What if we were those dots and now we are falling in an endless cycle?
Generation 100,000 creates their own fighters to fight
Oh, so sad that the video stops at this generation. Would really love to have seen them evolve further!
Question
Is there a pattern to the number of generations it takes for the AI to get better, or did the video creator choose those numbers arbitrarily?
22nd gen was when the AI became better. 44th was when they became decent. 22•2=44.
It's possible that you could figure out some rough correlation between number of generations and % improved accuracy, dodging aptitude, etc., but the exact number would still be fairly random as genetic algorithms, like evolution, do not follow a set path.
How about putting more than two to fight? I guess it would be much more interesting
LOOK AT THAT SNIPER 481320 NO SCOPE
This neural network is just 3 very complex math functions of creature position and rotation, enemy position and rotation, and enemy bullet velocity components (x and y). These 3 functions are creature speed, creature rotation speed, and creature trigger. The coefficients of these functions are randomized until they make a good one (a creature, aka these 3 functions). Whether it is a good one or not is determined by a quality criterion based on score (higher score when it aims well, lower score when it's damaged by the enemy). Am I wrong?
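The scheme that comment describes, "randomize coefficients, keep the ones the score criterion says are good", can be sketched as a plain random search. The input list, the stand-in scoring function, and all the constants here are made up for illustration; in the real thing the score would come from actual matches:

```python
import random

N_INPUTS = 6  # own x, y, rotation; enemy x, y; bullet velocity (illustrative)

def make_creature():
    # 3 linear functions of the inputs: speed, rotation speed, trigger
    return [[random.uniform(-1, 1) for _ in range(N_INPUTS + 1)]
            for _ in range(3)]

def outputs(creature, inputs):
    # each function: bias plus weighted sum of the inputs
    return [c[-1] + sum(w * x for w, x in zip(c, inputs)) for c in creature]

def score(creature):
    # stand-in quality criterion: in the real thing this would be the match
    # score (points for landing hits, penalty for getting hit)
    target = [0.5, -0.2, 1.0]
    ins = [0.3] * N_INPUTS
    return -sum((o - t) ** 2 for o, t in zip(outputs(creature, ins), target))

best = make_creature()
for _ in range(2000):
    challenger = make_creature()          # "specifically randomized numbers"
    if score(challenger) > score(best):   # "until it's a good one"
        best = challenger
```

Note this is pure random search, not a genetic algorithm; a GA additionally mutates the current best instead of rerolling from scratch, which is what makes the later generations in the video build on the earlier ones.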
Amazing, looks almost choreographic
would it be bad to start a betting ring on these
The AI is randomly generated and the ones that do better are kept while the ones that do bad are deleted.
If that happened to humans it would be called "selective breeding"
That occurs to humans, but the target is just the ability to breed. Those bad at reproduction are deleted.
It is called "natural selection" in real world. Human is still surviving that, we don't know when will we get deleted.
How does the AI know what the goal is?
Can something learn without a motive to do so?
when it was randomly shooting, it recognised that when a bullet hit the green enemy, it gave the red AI points. Its objective was to get points
Ok.
I did not really see how it reacted when it was hit though.
When a bullet hits the red AI does it lose points?
+720mine its motive was to have higher points than the other, so it dodged so the other wouldn't gain points. :D
It doesn't really. Its "motive" is to survive. I assume he lets the winners of the game "survive" and use the winners "dna" to generate a new AI
What TheHermago said. Genetic algorithms do not evolve towards a goal, but simply have a "death" condition that excludes that specific random iteration of the algorithm from subsequent iterations.
That selects the fittest to whatever environment the algorithm is tested in, in a process akin to natural selection.
This selection eventually develops an "elite" that can efficiently deal with the environment, and you can call that a form of learning (from trial and error).
I like how the red ones look for the most part like they’re trying to do trick shots
when you need to end the quick scoping match with a trickshot
1000000 Generation: AI takes over the world and kills its creators
I don't know why YouTube recommended me this video. But I'm glad to know which algorithm, implemented in a robot, will kill me in WW3 :) Honestly amazing stuff, thank you!
Truly, although probably applicable more to the World Wars 6 or 7...
if they knew how to go left, right, and backwards, that would be badass
I love how there's one that's good at shooting, so naturally the other one has to learn how to dodge. Then they build on each other to get better in those fields of offence and defense.
I would love to see the source of this.