+Nikolaos Skordilis It's called "Evolutionary Computing", is actually a Thing in Artificial Intelligence, and unlike what is suggested in this video, is actually extremely successful at solving problems (given, as is usual in AI, those problems are suitable for this method, for which there are guidelines).
+Ahsim Nreiziev Well, evolutionary algorithms have the advantage that their trials take far less time than natural reproduction (on the order of microseconds to seconds rather than hours to years), have a much higher probability of success (in natural evolution, most changes have a small effect on fitness, and therefore a population may "try removing the left mirror a million times", as he put it, before that gene died out, whereas code would reject it immediately; natural populations also frequently go extinct entirely for essentially random reasons), and are optimized for a specific task. Natural evolution might at best have better parallelization.

So he's right that natural evolution is an extremely inefficient intelligence in a computational sense. Thus what we see in nature is a startlingly complex but inefficient method of reproducing genes.
EebstertheGreat Well, it should be noted, of course, that the "better parallelization" of natural Evolution vs. artificial Evolution, and especially vs. a human car designer, is somewhat key when figuring out how efficient a method is. Evolution might try "removing the left car door mirror" a million times, but if the "population of cars", as it were, were 100,000 individuals big, it would only be doing it an average of 10 times per individual. Human car designers tend to only work on one car at a time.

Furthermore, where a human car designer would typically try out one "mutation" / change at a time, Evolution is "trying" various (say 10, to be conservative) "changes" / mutations at any given time, again thanks to the much higher capacity for parallelization. This matters when considering how efficient Evolution is as a computational algorithm.

The other two factors you mentioned, namely the time it takes to run a trial / determine the fitness of an individual and the totally random destruction of entire populations, are both to blame on the *environment* Evolution is forced to run its "Algorithm" on (namely the Real, unpredictable, World) rather than on the algorithm itself. The Real World is simply an extremely unsuitable and inefficient environment to run an algorithm in.

PS: Some Evolutionary algorithms, like the ones that generate better computer chess players or AI players for other games, *do* take minutes to hours rather than microseconds to seconds to determine their fitness, namely by playing a game against a human opponent.
Ahsim Nreiziev Ten is not a conservative number. A typical number of mutations per generation is on the order of 1. And most mutations have no effect on fitness. Rather, changes from generation to generation typically result from the new unique mix of traits from both parents, which doesn't add any new genes to the gene pool. Anyway, the number of individuals in the population doesn't really matter for efficiency, just speed. If I could perform ten billion individual float multiplications in a millisecond on a single core, that would be remarkable. But if I could only achieve that speed on ten billion cores in parallel, nobody would be very impressed with my efficiency. That would clearly be an inefficient system, especially since some quad cores can handle that just fine.

Saying reality is an "inefficient environment" is nonsense. An algorithm is either efficient or it isn't. All algorithms run in reality. Compared to computing, evolution is extremely slow and extremely inefficient, taking a colossal number of wrong turns, using gigantic populations, and requiring millions of years to make substantial progress. I don't know why you find that controversial.

"Some Evolutionary algorithms, like the ones that generate better computer chess players or AI players for other games, do take minutes to hours rather than microseconds to seconds to determine their fitness"

That's not an evolutionary algorithm.
The hill climbing analogy to evolution is something that I haven't heard before, but it illustrates its lack of intelligent design beautifully. Thank you!
Avoiding local maxima is the biggest problem, especially with evolutionary algorithms. A fairly recent but in many cases incredibly powerful approach is called "novelty search", which tries to explore the possible behaviors more evenly and thus won't get stuck.
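For the curious, the novelty-search idea fits in a short Python sketch. This is a toy illustration of my own, not any particular published implementation: individuals are scored by how far their (here one-dimensional) "behavior" is from what is already in an archive, rather than by fitness, and all the constants are arbitrary.

```python
import random

def novelty(behavior, archive, k=3):
    """Score a behavior by its mean distance to the k nearest archived ones."""
    if not archive:
        return float("inf")
    distances = sorted(abs(behavior - b) for b in archive)
    return sum(distances[:k]) / min(k, len(distances))

# Select for being *different* rather than for being fit.
archive = []
population = [random.uniform(0, 10) for _ in range(20)]
for _ in range(50):
    scored = sorted(population, key=lambda b: novelty(b, archive), reverse=True)
    archive.extend(scored[:2])  # remember the most novel behaviors
    # Breed the next generation from the 10 most novel parents, two children each.
    population = [b + random.gauss(0, 0.5) for b in scored[:10] for _ in range(2)]
```

Because the archive only grows, anything the search has already done becomes unrewarding, so it keeps spreading into unexplored behaviors instead of circling a single peak.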
The problem of getting stuck is solved by the presence of parasites. A parasite effectively flattens the peak in the space forcing the host organism to look further.
I never heard evolution described as a hill climbing algorithm before. In fact I never heard of a hill climbing algorithm. You guys are making me smart
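For anyone who wants to see one, a basic hill climber really is just a few lines. This is a toy sketch; the fitness function and step size are invented purely for illustration:

```python
import random

def hill_climb(fitness, x, step=0.1, iterations=1000):
    """Greedy local search: accept a random neighbour only if it scores better."""
    best = fitness(x)
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)
        score = fitness(candidate)
        if score > best:  # only ever move uphill
            x, best = candidate, score
    return x, best

# A one-dimensional landscape with a single peak at x = 3.
peak, height = hill_climb(lambda x: -(x - 3) ** 2, x=0.0)
```

Since it only ever accepts improvements, it climbs whatever hill it starts on and stops at that hill's top, even if a taller hill exists elsewhere. That is exactly the local maximum trap from the video.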
I just want to say that this video expanded my understanding of... basically "the world" by so much, it's hard to even describe. Thanks for making it. Even at the risk of sounding extremely corny, I'll just go ahead and say that it changed my life.
And because of this, you would want DNA to have a very fractal behavior, allowing small changes near the root to give huge changes in the organism, allowing you to possibly break out from a local maximum. This is done by inheriting earlier properties in the DNA sequence; for example, the humidity of all skin is based on a common first introduction of humidity very early in the DNA.
This is a really good way to articulate why evolution doesn't always result in the best possible outcome and why all animals aren't evolving into perfection all the time, and why we're the only intelligent species. I've tried to explain it in the past, and he totally nailed it with this analogy. It's because intelligence is this huge peak, but it's surrounded by valleys. It took a huge fluke for us to acquire it.
I very much enjoy Robert's interesting, insightful & thought provoking ideas, I thought that I would interject however on what evolution is optimizing for (@ ~1:33). Contrary to the statement that it's optimizing for the number of offspring, in fact evolution sometimes performs the opposite function if it is needed to further the lineage. In other words, heredity taken as a whole (DNA,RNA, Ribosome,Epigenetics etc) is self perpetuating and as such, in some cases it may be advantageous to decrease the number of offspring. This is thought to be one factor in the success of mammalian evolution. Some have also argued that in some cases individuals that display self-destructive characteristics are performing some function that ultimately contributes to the success of the whole or a "generalized heredity class" if you will. While many of these statements are debatable, they are submitted simply to further the conversation and promote additional levels of thought on the matter.
There are plenty of wheels in nature, on the molecular level though. The ATP-synthase, being one of the most prominent examples, is currently powering every single cell in your body.
I AM SOOOOOOOOOOOOOOOOOOO HAPPY TO HEAR THAT YOU UNDERSTAND HOW EVOLUTION ACTUALLY WORKS... OMFG, it's a full-time job just helping people understand it, let alone trying to correct someone that has it wrong
Hey, thank you! A week ago I tried genetic programming, and now I know why it doesn't quite work: I'm on top of the hill, and every mutation that happens is worse than the current generation. Thank you for this good explanation :)
My faith would be increased if, on a video where the speaker uses the word "evolution" with regards to algorithms, the popular comments were actually directed at the contents of the video, rather than self-flattering comments about your particular tribe.
@@mr_rawa it has been a journey. I learned quite a bit, but not exactly from my compsci classes, more so from the interdisciplinary subjects that relate to it. I learned Russian, learned math incredibly well, got experience in labs, with student government, and learned how to think and speak. I would recommend it, if you can work hard enough. If I can, lots of people can lol
Great video. Hope you make more videos on the subject. Genetic Algorithm is the field I'm studying right now, and it would be really helpful to get to know more about other optimization methods, such as PSO, which I'm also doing some research on. Keep up the good work.
In fact, a larger number of dimensions in a sense makes hill climbing more feasible (in machine learning, for example), if your dimensions behave at least somewhat independently. This is because a local minimum or maximum is a place where all derivatives are zero. If you have #1: a lot of dimensions, and #2: they are not hugely interdependent, then almost always there is at least some dimension where the derivative is not zero. I've dubbed this "the blessing of dimensionality", as opposed to the curse of dimensionality that is a plague in other areas dealing with high-dimensional data.
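A toy probability model (my own, purely for illustration) makes this concrete: call a point a local maximum only if its value beats two independently drawn neighbor values in every dimension, and estimate how often that happens.

```python
import random

def p_local_max(dims, trials=20000):
    """Estimate how often a random point beats all 2*dims random neighbors."""
    hits = 0
    for _ in range(trials):
        centre = random.random()
        if all(random.random() < centre for _ in range(2 * dims)):
            hits += 1
    return hits / trials

# In 1 dimension roughly a third of points are "peaks"; in 100 dimensions
# almost none are, so there is nearly always a direction that leads uphill.
p_low, p_high = p_local_max(1), p_local_max(100)
```

Under this model the exact probability is 1/(2*dims + 1), which shrinks quickly as independent dimensions are added, matching the "blessing of dimensionality" described above.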
People have developed 'AI' for video games, namely Sethbling, for Super Mario World and Mario Kart. It works in the same way as evolution, with each generation having a fitness level as it plays the game, trying random inputs. The fitness increases depending on how far through the course the AI gets, and decreases based on the time taken to do something. Then, two 'species' from a generation which each have the same fitness level are 'bred' together, resulting in the next generation. This keeps on happening until the AI is an expert at the game.
These also perfectly demonstrate the problem of local maxima as the AI can never reach the levels of the best TAS due to the massive leaps needed to explore such extreme possibilities. The AI will disregard those paths almost immediately because of how deep the fitness valleys are surrounding the peak. They also get stuck in strange habits due to the fitness cost of eliminating those habits.
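The select-breed-mutate loop those projects use can be sketched generically. To be clear, this is not SethBling's actual setup (that used NEAT and a game emulator); it's just the bare skeleton of a genetic algorithm, with "count of 1 bits" standing in for "how far through the course the AI gets":

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=60):
    """Bare-bones genetic algorithm: evaluate, select, crossover, mutate."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # selection: keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)     # "breed" two survivors
            cut = random.randrange(1, length)    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:            # occasional mutation
                i = random.randrange(length)
                child[i] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# "OneMax": fitness is simply the number of 1 bits.
best = evolve(fitness=sum)
```

Each generation keeps the fitter half, recombines pairs of survivors, and occasionally flips a bit: the same three ingredients as in the game-playing version.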
This further confuses the point that most people misunderstand when they are under the impression that evolution is somehow an agent making choices, when it's just the name we give to the random process that occurs when an organism happens to mutate in a way that makes it more successful at mating.
I think any person who has the education to appreciate these videos understands that giving evolution agency is done just for the sake of simplicity. If you know of a way to efficiently deliver these concepts to human brains without using analogies and anthropomorphization, please share, because even veteran biologists are comfortable with ascribing evolution agency.
@@Horny_Fruit_Flies When your whole expertise is in AI, and you say things like the agent "wants this" and "does that", the same language should not be deployed for evolution, because a "what do they mean when they say this word" standard has been set, and the interchange is not apparent unless specified. Most of Miles' AI arguments arise from not formally specifying what you mean or the context in which interpretation should occur. For someone that highly specific, it's not the same as how you suggested: an analogy is only useful if it is clarified.
Raster scanning. Nice. Yeah, the local maximum thing was neat. And "dimensional reduction" pretty much sounds like it describes all applications of science: how can I get all this complex data to fit a y=mx+b?
Robert Miles sounds really interesting, but I can't find a thing about him on Google. I would really like to find his YouTube channel or his blog or something (assuming they exist). Does anyone know where to find him?
If we define intelligence as the ability to hit a smaller target in a search space in a lesser amount of time, then I'm not sure that humans are necessarily more intelligent. I think we are predisposed to think in certain (sometimes presumptuous) ways when attempting to solve some problem, be it optimization or otherwise. But in the case of optimization, I think evolution's inability to premeditate its actions is an advantage, in that it taps into solutions that would be entirely unintuitive to a human. This is demonstrated in biology (like that optimizing slime mold), but also very convincingly in genetic algorithms, which are evolutionary in essence. The fruits of genetic-algorithm-based optimization are often nearly optimal solutions (like those antennae that NASA uses) that a human probably would not be able to come up with.
I think that's because evolution doesn't want global maxima, i.e. one particular species to survive. It gives room for local maxima to try out various experiments, and this diversity in species is important, as almost all species are interdependent for their survival. Evolution is a big game which is not limited to only a hill climbing algorithm.
True, but that could be described as changing the hill environment for that specific species at that specific location. Evolution is indeed more complex than the simple concept he used, but you only need to include the other species' effects on each other to make it work. His concept stays the same :)
wrong, evolution only "wants" what is best for that particular organism. evolution and nature do not "value" diversity, it's just something that happens as a result.
But the entirety of life exists in a space where the possible outcomes are often taken into account. If there exists an open space for supporting life where there are no competitors, or where the competitors leave a space open, a species will often adapt to this area. So the entirety of the natural search space is the natural universe. This search space increases with intelligence, as intelligent species will create search spaces that are not natural (i.e. computers). Not written by julian
the epistemologist Gregory Bateson (one of the fathers of cybernetics) would say that a human is capable of learning type 0-3 but that evolution is learning 4 because it is a system capable of making systems that do learning 3. in that way evolution is far more intelligent than any particular organism.
What program do you use for your illustrations? The animations and illustrations are quite well done and I would love to be able to experiment and try it out for myself! :)
What's with this "quite well done"? They're excellent, and the speaker delivers the material with great natural style & ability. Now, could you find a better use for all this talent? An explanation/dissection of basic economic theory, for starters...
Anyone have a suggestion for more detailed information on hill climbing algorithms? Specifically, how does one determine the best direction to travel in? (i.e. determine the slope/derivative of the current search location). How do we determine how far to travel? How do we deal with sensitivities, and how do we know how small of a distance to check? (i.e. if we increase parameter X by 0.1, perhaps there was a great solution at distance 0.05, and we just missed it) Any recommendations of websites, books, etc? Edit: I may be asking more about Gradient Descent than Hill Climbing.
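On the gradient descent side of the question: the direction is simply the negative gradient, and "how far to travel" is the step size (learning rate). One standard answer to overshooting (your 0.1 vs 0.05 worry) is backtracking line search: attempt a step and halve it until the objective actually improves. A minimal sketch, with a made-up one-dimensional function:

```python
def gradient_descent(f, grad, x, lr=0.1, steps=200):
    """Step against the gradient; halve the step whenever it would overshoot."""
    for _ in range(steps):
        step = lr
        # Backtracking line search: shrink the step until it actually improves f.
        while f(x - step * grad(x)) > f(x):
            step /= 2
        x = x - step * grad(x)
    return x

# Minimise f(x) = (x - 2)^2, whose derivative is 2 * (x - 2).
x_min = gradient_descent(lambda x: (x - 2) ** 2,
                         lambda x: 2 * (x - 2),
                         x=10.0)
```

This doesn't guarantee you won't step past a great solution hiding between sample points; it only guarantees each accepted step is an improvement.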
Argh, a few more minutes and Robert Miles looked like he was just about to understand what he was explaining. ;) Seriously though, very interesting angle on the subject. More please.
This whole "car designer more intelligent than evolution" argument is completely new to me. And it doesn't feel right. Didn't evolution design the car designers in the first place? One could say that the car designer is the pinnacle of evolution's accomplishment: which makes the human and evolution pretty much the same thing (for the purposes of the argument), no?
Well, from one perspective the car designer is merely a spike in the mean density of mass-energy in the local space-time. But it is sometimes worth taking a more reductionist view. The car designer can employ at least two tools not available to the [biological] evolutionary algorithm: forethought (planning) and hindsight (memory). This distinguishes the processes, if not the result.
Excellent video! I'm a physicist and get stuck in local minima/maxima a lot doing my work, this is hard to explain to a non-scientists. Can I ask for a video or videos looking at more complex algorithms for finding global minima/maxima?
Evolution is not a progression from inferior to superior organisms, and it also does not necessarily result in an increase in complexity. A population can evolve to become simpler, having a smaller genome, but biological devolution is a misnomer. Go have a look at wikipedia's list of common misconceptions.
If you talk about Nature, you might as well talk about the beginning of the universe itself, since the same laws must still be here. That "intelligence", accommodating the constants and laws of the universe to give rise to sentient beings like us, I believe truly surpasses our level of intelligence. Yes, we may be able to build a car, a plane, even study big bang conditions, but we haven't been able to replicate a universe, and we haven't been able to create DNA; we have only been able to copy or slightly change what already exists. It's a mistake to think we're more clever than nature itself at this point.
I think a thing nature does to combat the local maximum is to bring back past versions out in the offspring of the organism, like bringing back a version of a human from the ice age
To be more correct: the higher mountain does not exist. Evolution is just working with the pieces that exist; it has no goal or "plan", and there is no such thing as "this is a higher mountain of fitness". Fitness is always relative to the population.
Excellent video! I'd be curious to know whether evolution has been more or less successful than human designers. My guess would be that we are trailing dramatically in injury time.
I wonder if there are any real-world evolution examples of a species "stuck in a local maximum" that we know about. Of course real-world "hill-climbers" live on a significantly higher dimensioned plane, but I wonder if we've ever seen any creature appear to stop evolving for a period of time...
I don't understand much of it, but I think the theory of punctuated equilibrium claims that most species are in a local maximum. One instance the (new) Cosmos show talks about is plants' development of wood. Plants never grew above about a meter tall (because they were not structurally stiff enough to grow larger) until the first trees, which almost immediately came to dominate the planet.
JesseLH88 So does that mean that all species in a local maximum are destined to only change their evolution when some other species alters the evolution plane? So the trees essentially made the height factor in plants "obsolete", so it no longer was a factor among lowly plants, and plants that were, say, most efficient with their gathered sunlight had the new advantage... That's an interesting idea, and I guess it explains both how things can stop evolving and how that doesn't necessarily mean they're stuck forever!
***** You may have written minimum when intending maximum. Given the presumptions of hill climbing algorithms, anything sitting in a [local] minimum would undergo rather rapid change.
I am not sure this explained anything. I think I got the point of the video in terms of optimization, but I think this topic could be expanded. I feel that artificial intelligence is a subject that I am not alone in being curious about. I would appreciate a more in depth video on the subject. Thanks.
a side question: are 'random mutations' actually truly random or are they "pseudo random" based on very very complex and huge amounts of varying input conditions?
I appreciate these discussions. I will say I'm having trouble understanding how the car maker becomes smarter than the process that created them. But I suppose it brings attention to the nature of the "randomness" of evolution, or adaptation / intent towards a particular relevant outcome by the individual processes. I feel it's bizarre that we can imagine processes in ways that don't/can't happen, but sometimes it's because of, or despite, this ability to abstract something entirely "incorrectly" that we achieve the "correct" desired outcome. Curious, then, to assume the process that allows us to behave/perform this way doesn't appear to operate in this way itself. Basically: are the processes that influence the progression of evolution not at work when a carmaker imagines how not to make something before they make it?
And because we are more intelligent than Evolution, we will create a superintelligence at some point. If a pseudo-random process like evolution can, we can at least match it, even if we assume that human intelligence is the best possible; and since that is very unlikely, we can even exceed what evolution did... The question is when.
I guess we're lucky that during evolution the terrain changes all the time, and if it does so slowly enough, some species being at the local maximum could end up being on a global one.
Evolutionary algorithms tend to include recombination, just like biological evolution does. I wouldn't consider random optimisation a good example of evolutionary computing at all. Plus, "there's no single 'how many children do you have' gene" well, I'd tend to disagree. It might not be a single gene in fact but that's not the point here. The point is that the genotype has no direct influence on the number of _successful_ children an individual has. Having many children might not be a good strategy to have many children actually survive and get offspring of their own as K-strategists show.
this video is the single most interesting thing I've seen in an entire year. Particularly, the "local maximum" and "global maximum" thing. being in the local maximum is a trap. you do it if you can't "see" enough steps in the "strategy game" that life is. thinking is seeing longer, so you can reach higher heights in the long term even if you have to temporarily step in lower plateaus in order to get there. If you are not intelligent enough, you are doomed to settle for "local maximums", like being on welfare, instead of sweating your ass off in order to reach better positions in life.
Exactly. The point being, the farther you can see and predict, the better your outcome will be. Evolution stumbles blindly, only aware of the immediate environment, which is why it takes forever. Intelligence allows planning and navigation through the local minimums toward the global maximums.
He's talking about the genetic algorithm, which I'd like to have an episode on. Maybe with examples: I know of a program with 3D simulations of creatures, and also another program in Flash where a genetic algorithm designs a bike (I know it was Flash).
I really enjoy your content about AI. Do you have any book published, or would you recommend one? It would be cool if it's available in German, but that's not necessary.
Nathan Krowitz I think that's a valid point. You could then go on to say that mankind supersedes evolution, and therefore changes we make to our environment are naturally better than what was there before
To avoid being stuck in a local maximum, could a program run a cheap "probe" generation? One that is really stupid and just goes in a single direction for a while to test whether it's on a local maximum or a global one? For instance, the new generation could be programmed to have 1 stupid for every 4 intelligent. The intelligent ones would use the info already gained, but the stupid one would be like a radar ping, going off in a spiral to discover the landscape. Then take the data from that probe generation and enhance the current algorithm, thereby giving the AI a form of insight.
FlipFlopGaming This is ok in low dimensions, but once there are enough dimensions there are just too many directions to go in. Even if you determine that you're stuck in a local minimum because you find a better one, it still doesn't mean that you've found a/the global min.
FlipFlopGaming Save the local maximum, start searching for other maxima; if a newly found maximum is higher than the past maximum, it is the best spot currently, until a better one is found.
+FlipFlopGaming Yes you could, but it's only really useful if you're stuck in a particularly low local maximum. If you're at a particularly high point, the chance of finding a higher-fitness configuration through random one-directional or linear probing is really small. This is all involved in trying to improve fitness-optimizing algorithms, though. And the point at which they become intelligent is where you add the functionality for prediction, where the algorithm can make somewhat accurate assumptions about what the outcome of a change is going to be before it's made. At that point you have design rather than optimization.
+FlipFlopGaming Let me describe an alternative algorithm to try to get from a local to a global point. Imagine the hills described in the video inverted; instead of hills they're actually pits and instead of trying to reach the top of the highest hill we're trying to reach the bottom of the deepest pit. Whatever value you're trying to maximise, you can just multiply it by -1 and look for a global minimum instead of a global maximum. It's actually quite nice to approach it this way because you can imagine the simple optimisation process as just gravity; place a ball at any point on the map and if it's on a slope it will naturally roll down until it hits the bottom of a pit (or an expanse of flat ground). In the hill climbing example, the blind man steps in every direction to find out which way leads up and moves in that direction, while here the ball automatically moves whichever way leads down. Same thing but in reverse. Through this process we can expect it to reach a local minimum; the bottom of a pit that may not be the deepest one. Once the ball reaches a local minimum, what do we do? Simple: kick it as hard as you can in a totally random direction. Imagine that physics is somewhat simplified and the ball won't leave the ground and bounce off surfaces, instead it will just roll along the ground at speed. With that first forceful kick it's probably going to be kicked completely out of that first pit; passing over the rim and entering into another one. In fact it will probably pass through several different pits before friction finally slows it down enough to settle in one; a different local minimum. When it does, you kick it again and then again and again. Each time you kick it with a little bit less force and always in a random direction. 
In the early stages it'll continually be kicked out of one pit and through more to finally rest in another, but eventually, through random chance, it will most likely roll into a particularly deep pit; possibly the global minimum. Unless this happens in the early stages of the process when you're kicking the ball really hard, the pit should be too deep for it to be kicked out of and so it will remain where it is as you give it weaker and weaker kicks and eventually stop. Even if you do get there in the early stages and kick it out, you're still going to have plenty of time kicking it easily out of shallow pits for it to end up back in there. This gives you a much greater chance of getting the global minimum and even if you don't reach it, you've probably reached one that you can still be pretty satisfied with. To end up in a shallow pit you would have to consistently kick in a direction that doesn't pass through any deep pits, which would require either a terrible space to work with or insanely bad luck.
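What this describes, strong kicks that weaken over time, is essentially simulated annealing. A toy Python sketch (the two-pit landscape and the cooling schedule are invented purely for illustration), including the repeated kicking as a handful of independent runs:

```python
import math
import random

def anneal(f, x, temp=5.0, cooling=0.99, steps=2000):
    """Kick the ball around; accept worse spots less often as kicks weaken."""
    for _ in range(steps):
        candidate = x + random.uniform(-1, 1)
        delta = f(candidate) - f(x)
        # Always accept a deeper spot; accept a shallower one with a
        # probability that shrinks as the "kick strength" (temperature) drops.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        temp *= cooling
    return x

def bumpy(x):
    """Two pits: a shallow one near x = -1 and a deeper one near x = 2.1."""
    return 0.5 * (x + 1) ** 2 * (x - 2) ** 2 - x

# A few independent runs, keeping the deepest result found.
x_best = min((anneal(bumpy, -1.0) for _ in range(5)), key=bumpy)
```

Early on, the high temperature lets the ball escape shallow pits; as it cools, it settles, and deep pits become nearly impossible to leave, which is exactly the behavior described above.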
+FlipFlopGaming This isn't that far off from something like particle swarm optimization...which is a little bit more advanced version of what you're thinking.
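For anyone wanting to see the particle swarm idea concretely, here is a minimal sketch (the coefficients are common textbook defaults and the test function is the simple "sphere" bowl): each particle is pulled toward the best point it has personally seen and the best point the whole swarm has seen.

```python
import random

def pso(f, dim=2, n=20, iters=100):
    """Particle swarm optimization: minimise f over dim dimensions."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]          # each particle's personal best
    gbest = min(pos, key=f)[:]           # the swarm's global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]                                      # inertia
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])  # personal pull
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

# The "sphere" function: a single bowl with its minimum at the origin.
best = pso(lambda p: sum(x * x for x in p))
```

Because many particles probe the space at once while sharing one global best, the swarm gets some of the "probe generation" effect without dedicating individuals to blind exploration.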
Evolution vs. Creation comes down, from what I see, to where the intelligence is. Is it in the dirt or mud or whatever something evolves from? Or is there a Creator that created the order of things? Both take a leap of faith. The Creator sounds more believable than concluding that dirt is smarter than humans. But that is what you are left with when following evolution. IMHO.
Andrew Joel There doesn't need to be an "intelligence" in order for evolution to take place. That's like saying there must be an intelligence causing ice to melt. It's just another thing that happens in nature, but holy texts don't account for it because they COULDN'T because of the relatively sorry state of collective scientific knowledge.
What an extraordinary communicator. Very very clear.
He's so clever even his hair looks like a brain
he's got new look now
bwahahahahaha
ROFL
"Evolution as an algorithm"; never thought along that line, very interesting and illuminating information.
Rob Miles should make his own channel! I would watch him every day
2019 celebrants of our Lord and savior Miles where you at
@@JR-White 2020 baby
he does
Wish granted fella 😎
Robert Miles, you are my favorite person on RUclips.
+David Fair
I like his hair too!
The hill climbing analogy to evolution is something that i haven't heard before, but it illustrates its lack of intelligent design beautifully. Thank you!
avoiding local maxima is the biggest problem with especially evolutionary algorithms.
A fairly recent but in many cases incredibly powerful approach is called "novelty search" which tries to explore the possible behaviors more evenly and thus won't get stuck.
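For anyone curious what that looks like concretely, here is a minimal, illustrative Python sketch of novelty search. Everything here is invented for the toy (the "behavior descriptor", population size, and constants): individuals are ranked by how *different* their behavior is from an archive of behaviors seen so far, rather than by any fitness, which is what keeps the search from settling on one peak.

```python
import random

def behavior(genome):
    # Toy "behavior descriptor": where the genome lands on a line.
    return sum(genome)

def novelty(b, archive, k=3):
    # Novelty = mean distance to the k nearest behaviors seen so far.
    if not archive:
        return float("inf")
    dists = sorted(abs(b - a) for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

random.seed(0)
archive = []
population = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(20)]
for gen in range(30):
    ranked = sorted(population, key=lambda g: novelty(behavior(g), archive),
                    reverse=True)
    archive.extend(behavior(g) for g in ranked[:3])   # remember the most novel
    parents = ranked[:10]                             # select for novelty, not fitness
    population = [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
                  for _ in range(20)]

print(round(min(archive), 2), round(max(archive), 2))
```

The archive's behaviors spread out over the generations instead of clustering at one point, which is the whole idea.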
The problem of getting stuck is solved by the presence of parasites. A parasite effectively flattens the peak in the space forcing the host organism to look further.
I never heard evolution described as a hill climbing algorithm before. In fact I never heard of a hill climbing algorithm. You guys are making me smart
The blind person analogy just made my whole life easier
This guy and Dr Pound. Exceptional explanations. Brilliant
I just want to say that this video expanded my understanding of... basically "the world" by so much, it's hard to even describe. Thanks for making it. Even at the risk of sounding extremely corny, I'll just go ahead and say that it changed my life.
Machine learning is mainly about understanding the rules of the world and creating mathematical expressions of them. That's why it's so interesting.
Yeah and mine I realise now
Sorry about before 😭
That was quite an amazing (and accurate) way of describing and explaining evolution. Well done.
And because of this, you would want DNA to have a very fractal behavior, allowing small changes near the root to produce huge changes in the organism, making it possible to break out of a local maximum. This happens by inheriting earlier properties in the DNA sequence; for example, all skin humidity traces back to a single, very early introduction of that trait in the DNA.
This is a really good way to articulate why evolution doesn't always result in the best possible outcome and why all animals aren't evolving into perfection all the time, and why we're the only intelligent species. I've tried to explain it in the past, and he totally nailed it with this analogy. It's because intelligence is this huge peak, but it's surrounded by valleys. It took a huge fluke for us to acquire it.
I very much enjoy Robert's interesting, insightful & thought provoking ideas, I thought that I would interject however on what evolution is optimizing for (@ ~1:33). Contrary to the statement that it's optimizing for the number of offspring, in fact evolution sometimes performs the opposite function if it is needed to further the lineage. In other words, heredity taken as a whole (DNA,RNA, Ribosome,Epigenetics etc) is self perpetuating and as such, in some cases it may be advantageous to decrease the number of offspring. This is thought to be one factor in the success of mammalian evolution. Some have also argued that in some cases individuals that display self-destructive characteristics are performing some function that ultimately contributes to the success of the whole or a "generalized heredity class" if you will. While many of these statements are debatable, they are submitted simply to further the conversation and promote additional levels of thought on the matter.
And that's why evolution never invented wheels. Take that mother nature!
There are plenty of wheels in nature, on the molecular level though. The ATP-synthase, being one of the most prominent examples, is currently powering every single cell in your body.
it's local maximum is legs
Wheels are on the hills now and I fell for nothing 😂
It was worth the information thanks guys really appreciate it
@@reloder1249 🤣🤣🤣🤣🤣🤣🤣🤣
false.
I AM SOOOOOOOOOOOOOOOOOOO HAPPY TO HEAR THAT YOU UNDERSTAND HOW EVOLUTION ACTUALLY WORKS. OMG, it's a full-time job just helping people understand it, let alone trying to correct someone who has it wrong.
The intelligence in this young lad is scary..
Yes he might start turning everything into stamps... : ) (That was a reference to a different video
You're missing a ")".
Edit: Don't you people know how to edit comments?
**...was a reference to a different video)
some people use other platforms that don't allow comment editing. Like the ios app for example
+Canyon F Yeah I can't edit replies. I can edit comments tho (I think...)
That book is amazing, it's the only thing I ever pre-ordered and I do not regret that a bit.
His hair has its own physics! I love it!
this was a really great video, I'd love to see more computerphile vids on AI type algorithms
I like this presenter and this topic very much, thank you, please do more! :)
One of the most excellent videos on YouTube 👌
What program do you use for graphics?
+MrBumbo90 yes, those graphs are awesome
Hey thank you !
1 week ago I tried genetic programming; now I know why it doesn't quite work: I'm on top of the hill and every mutation that happens is worse than the current generation. Thank you for this good explanation :)
No problem 😂
This needs a section on simulated annealing!
Sees video where speaker uses the word "evolution".
Barely any comments from creationists claiming that evolution is fake.
+1 Faith in humanity.
You are less likely to find religious nut jobs in educational channels like this.
+Adriyaman Banerjee I mean, they arrive here, but then they become smrt.
My faith would be increased if, on a video where the speaker uses the word "evolution" with regards to algorithms, the popular comments were actually directed at the contents of the video, rather than self-flattering comments about your particular tribe.
Evolution is fake.
That is the best definition of intelligence I have ever heard. Wherever that came from it's awesome!
Intelligence is an optimization process.
This just blew my mind! I can't wait for college, new ideas like this will be introduced daily!
was it like expected?
F
@@mr_rawa it has been a journey. I learned quite a bit, but not exactly from my compsci classes, more so from the interdisciplinary subjects that relate to it. I learned Russian, learned math incredibly well, got experience in labs, with student government, and learned how to think and speak. I would recommend it, if you can work hard enough. If I can, lots of people can lol
I liked this video more than I thought I would.
Best explanation ever. More videos by you please!
Perfect descriptive sentence:
"Powerful intelligence is able to hit a smaller target in a larger search space in less time."
false.
This guy: The evolution algorithm is slow and can be optimized further.
The Programmer running our simulation: *crying in corner* HOW DARE YOU!!
If you say so, you came to help a year ago 😭
Great video. Hope you make more videos on the subject. Genetic Algorithm is the field I'm studying right now, and it would be really helpful to get to know more about other optimization methods, such as PSO, which I'm also doing some research on. Keep up the good work.
Wow, that was a really interesting topic. Also it's very easy to follow his explanations. I'd like to see more of this.
IN FACT, a larger number of dimensions in a sense makes hill climbing more feasible (in machine learning, for example), if your dimensions behave at least somewhat independently. This is because a local minimum or maximum is a place where all derivatives are zero. If you have
#1:A lot of dimensions, and
#2:They are not hugely interdependent,
then almost always there is at least some dimension where the derivative is not zero.
I've dubbed this "the blessing of dimensionality", as opposed to the curse of dimensionality that is a plague in other areas when dealing with high-dimensional data.
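A quick numerical check of that claim, under toy assumptions (a separable, "bumpy" landscape of 100 fairly independent dimensions, all made up for the demo): at random points, it's essentially never the case that every partial derivative vanishes at once, so some dimension always still has slope to climb.

```python
import math, random

def partials(f, x, h=1e-5):
    # Finite-difference estimate of every partial derivative at x.
    base = f(x)
    out = []
    for i in range(len(x)):
        y = list(x)
        y[i] += h
        out.append((f(y) - base) / h)
    return out

# 100 fairly independent dimensions, each a bumpy 1-D landscape.
f = lambda x: sum(math.sin(3 * xi) + 0.1 * xi for xi in x)

random.seed(7)
stuck = 0
for _ in range(200):
    x = [random.uniform(-5, 5) for _ in range(100)]
    if all(abs(g) < 1e-3 for g in partials(f, x)):
        stuck += 1  # a point where *every* dimension is flat at once
print(stuck)  # prints 0: some dimension almost always has nonzero slope
```

Each dimension is flat only on a tiny sliver of the line, so the chance of all 100 being flat simultaneously is vanishingly small, exactly as the comment argues.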
This was a very interesting video! Make more like this please (:
false.
this video was amazing. i like this guy a lot.
People have developed 'AI' for video games, namely Sethbling, for Super Mario World and Mario Kart. It works in the same way as evolution, with each generation having a fitness level as it plays the game, trying random inputs. The fitness increases depending on how far through the course the AI gets, and decreases based on the time taken to do something. Then, two 'species' from a generation which each have the same fitness level are 'bred' together, resulting in the next generation. This keeps on happening until the AI is an expert at the game.
These also perfectly demonstrate the problem of local maxima as the AI can never reach the levels of the best TAS due to the massive leaps needed to explore such extreme possibilities. The AI will disregard those paths almost immediately because of how deep the fitness valleys are surrounding the peak. They also get stuck in strange habits due to the fitness cost of eliminating those habits.
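The generation/fitness/breeding loop described above can be sketched as a toy genetic algorithm. To be clear, this is not SethBling's actual setup (MarI/O uses NEAT); the all-ones target genome, the fitness function, and every constant here are invented purely for illustration:

```python
import random

TARGET = [1] * 32                       # "finishing the course" = all ones
def fitness(g):
    # How far through the "course" this genome gets.
    return sum(a == b for a, b in zip(g, TARGET))

random.seed(42)
pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                  # selection: keep the fittest
    children = [list(pop[0])]           # elitism: never lose the champion
    while len(children) < 50:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 32)   # single-point crossover ("breeding")
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:       # occasional point mutation
            i = random.randrange(32)
            child[i] = 1 - child[i]
        children.append(child)
    pop = children
pop.sort(key=fitness, reverse=True)
print(fitness(pop[0]))                  # fitness climbs toward 32
```

Even this toy shows the local-maximum behavior the comment mentions: once the parent pool converges, only rare mutations can fix the remaining "bad habits".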
Can't get enough of these videos!
Great explanation and visualization. Thank you!
Thank you so much for this video. Worth every second.
Robert Miles is a brilliant guy.
That's an excellent book selection!
0:31 Optimization.
3:05 Local Maximum trap away from Absolute Maximum.
This further confuses the point that most people misunderstand when they are under the impression that evolution is somehow an agent making choices, when it's just what we call the random process that occurs when an organism happens to mutate in a way that makes it more successful at mating.
I think any person who has the education to appreciate these videos understands that giving evolution agency is done just for the sake of simplicity. If you know of a way to efficiently deliver these concepts to human brains without using analogies and anthropomorphization, please share, because even veteran biologists are comfortable with ascribing evolution agency.
@@Horny_Fruit_Flies When your whole expertise is in AI, and you say things like an agent 'wants this' and 'does that', the same language should not be deployed for evolution, because a 'what do they mean when they say this word' standard has been set, and the interchange is not apparent unless specified. Most of Miles' AI arguments arise from people not formally specifying what they mean or the context in which interpretation should occur. For someone that highly specific, an analogy is not useful unless it is clarified as such.
Raster scanning. Nice.
Yeah, the local maximum thing was neat. And 'dimensionality reduction' pretty much describes all applications of science: how can I get all this complex data to fit a y=mx+b?
Yes. The ultimate goal of science: simplify to then understand.
If you can read this, you are probably stuck in a local maximum.
Robert Miles sounds really interesting but I can't find a thing about him on Google. I would really like to find his YouTube channel or his blog or something (assuming they exist).
Does anyone know where to find him?
I have a channel now
What is the guy asking at about 4:00? I really wish the captions in this video worked.
If we define intelligence as the ability to hit a smaller target in a search space in a lesser amount of time, then I'm not sure that humans are necessarily more intelligent. I think we are predisposed to think in certain (sometimes presumptuous) ways when attempting to solve a problem, be it optimization or otherwise. But in the case of optimization, I think evolution's inability to premeditate its actions is an advantage, in that it taps into solutions that would be entirely unintuitive to a human. This is demonstrated in biology (like that optimizing slime mold), but also very convincingly in genetic algorithms, which are evolutionary in essence. The fruits of genetic-algorithm-based optimization are often nearly optimal solutions (like those antennae that NASA uses) that a human probably would not be able to come up with.
I'd like to see more of this guy, and more on this subject
I think that's because evolution doesn't "want" a global maximum, i.e. for one particular species to survive. It gives room for local maxima to try out various experiments, and this diversity of species is important, as almost all species are interdependent for their survival.
Evolution is a big game that is not limited to just a hill-climbing algorithm.
True, but that could be described as changing the hill environment for that specific species at that specific location. Evolution is indeed more complex than the simple concept he used, but you only need to include the others species effect on each other to make it work. His concept stays the same :)
wrong, evolution only "wants" what is best for that particular organism. evolution and nature do not "value" diversity, it's just something that happens as a result.
Is it just me or does the videos coming from Computerphile, follow almost completely what I search, almost the day after I look it up...
But the entirety of life exists in a space where the possible outcomes are often taken into account. If there is an open space that could support life where there are no contenders, or where the contenders leave a niche open, a species will often adapt to fill it. So the natural search space is the entire natural universe. This search space increases with intelligence, as intelligent species create search spaces that are not natural (i.e. computers). Not written by julian
the epistemologist Gregory Bateson (one of the fathers of cybernetics) would say that a human is capable of learning type 0-3 but that evolution is learning 4 because it is a system capable of making systems that do learning 3. in that way evolution is far more intelligent than any particular organism.
Excellent explanation. Well done.
What program do you use for your illustrations? The animations and illustrations are quite well done and I would love to be able to experiment and try it out for myself! :)
+Skyler Mews Adobe After Effects >Sean
Computerphile thank you so much! I look forward to many more videos!
What's with this "quite well done?" They're excellent, and the speaker delivers the material with great natural style & ability. Now, could you find a better use for all this talent? An explanation/dissection of basic economic theory for starters...........
Nice! Keep the AI videos coming. Perhaps you could do one on distributed AI and/or agent-oriented programming models?
Yeah the plan is now, not to run until I find intelligent information so if I don't run for a while then I'm just ignoring the noise
Anyone have a suggestion for more detailed information on hill climbing algorithms? Specifically, how does one determine the best direction to travel in? (i.e. determine the slope/derivative of the current search location). How do we determine how far to travel? How do we deal with sensitivities, and how do we know how small of a distance to check? (i.e. if we increase parameter X by 0.1, perhaps there was a great solution at distance 0.05, and we just missed it) Any recommendations of websites, books, etc?
Edit: I may be asking more about Gradient Descent than Hill Climbing.
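One simple (if crude) answer to the step-size question above, sketched in Python under invented assumptions: probe each coordinate in both directions, and halve the step whenever no neighbour improves, so a good solution missed at a coarse step gets picked up once the step shrinks. The landscape here is a made-up 2-D hill with its peak at (3, -1).

```python
def hill_climb(f, x, step=1.0, min_step=1e-6):
    """Maximise f by probing each coordinate; shrink the step when stuck."""
    while step > min_step:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                cand = list(x)
                cand[i] += delta          # try a small move in one direction
                if f(cand) > f(x):        # keep it only if it goes uphill
                    x, improved = cand, True
        if not improved:
            step /= 2                     # every direction flat at this scale: refine
    return x

# A smooth 2-D hill with its single peak at (3, -1).
peak = hill_climb(lambda p: -((p[0] - 3) ** 2 + (p[1] + 1) ** 2), [0.0, 0.0])
print([round(v, 3) for v in peak])  # → [3.0, -1.0]
```

Gradient descent answers the "which direction?" question more cleverly (move along the derivative), but this shows the bare mechanics with no calculus needed.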
Argh, a few more minutes and Robert Miles looked like he was just about to understand what he was explaining. ;)
Seriously though, very interesting angle on the subject. More please.
This whole "car designer more intelligent than evolution" argument is completely new to me. And it doesn't feel right. Didn't evolution design the car designers in the first place? One could say that the car designer is the pinnacle of evolution's accomplishment: which makes the human and evolution pretty much the same thing (for the purposes of the argument), no?
Well, from one perspective the car designer is merely a spike in the mean density of mass-energy in the local space time. But it is sometimes worth taking a more reductionist view.
The car designer can employ at least two tools not available to the [biological] evolutionary algorithm: forethought(planning) and hindsight(memory). This distinguishes the processes, if not the result.
Excellent video! I'm a physicist and get stuck in local minima/maxima a lot doing my work, and this is hard to explain to a non-scientist. Can I ask for a video or videos looking at more complex algorithms for finding global minima/maxima?
Evolution is not a progression from inferior to superior organisms, and it also does not necessarily result in an increase in complexity. A population can evolve to become simpler, having a smaller genome, but biological devolution is a misnomer.
Go have a look at wikipedia's list of common misconceptions.
I can't find the book "what if?" on Audible. Not by searching for title, nor by author. Deeplink please?
If you talk about Nature, you might as well talk about the beginning of the universe itself, since the same laws must still be here. The "intelligence" that accommodated the constants and laws of the universe to give rise to sentient beings like us, I believe, truly surpasses our level of intelligence. Yes, we may be able to build a car, a plane, even study Big Bang conditions, but we haven't been able to replicate a universe, and we haven't been able to create DNA; we have only been able to copy or slightly change what already exists. It's a mistake to think we're more clever than nature itself at this point.
I think one thing nature does to combat the local maximum is to bring back past versions in the offspring of an organism, like bringing back a version of a human from the ice age.
At first glance in the thumbnail I thought it's Guy Martin teaching about computers in his spare time while he's not riding. Haha
Nice video though
You forgot to mention that evolutionarily speaking the fitness landscape is dynamic.
this was great. I learned a lot :)
To be more correct: the higher mountain does not exist. Evolution just works with the pieces that exist; it has no goal or "plan", and there is no such thing as "this is a higher mountain of fitness". Fitness is always relative to the population.
He explains it better than my professor.
Excellent video! I'd be curious to know whether evolution has been more or less successful than human designers. My guess would be that we are trailing dramatically in injury time.
I'm an engineering student who is searching about "How to climb the hill"
Oh boy...I end up here in a place that I don't even understand.
I wonder if there are any real-world evolution examples of a species "stuck in a local maximum" that we know about. Of course real-world "hill-climbers" live on a significantly higher dimensioned plane, but I wonder if we've ever seen any creature appear to stop evolving for a period of time...
I don't understand much of it, but I think the theory of punctuated equilibrium claims that most species are in a local maximum.
One instance the (new) Cosmos show talks about is plants' development of wood. Plants never grew above about a meter tall (because they were not structurally stiff enough to grow larger) until the first trees, which almost immediately came to dominate the planet.
JesseLH88
So does that mean that all species in a local maximum are destined to only change their evolution when some other species alters the evolution plane? So the trees essentially changed the height factor in plants to be "obsolete," so it no longer was a factor among lowly plants, so plants that were, say, most efficient with their gathered sunlight had the new advantage of the plants... That's an interesting idea, and I guess it explains both how things can stop evolving and how that doesn't necessarily mean they're stuck forever!
Mammals in Australia?
They are extinct. Their cousins reached the global max and continued. Any species that is stuck at a local maximum is doomed.
*****
You may have written minimum when intending maximum. Given the presumptions of hill climbing algorithms functions in [local] minima would undergo rather rapid change.
I am not sure this explained anything. I think I got the point of the video in terms of optimization, but I think this topic could be expanded. I feel that artificial intelligence is a subject that I am not alone in being curious about. I would appreciate a more in depth video on the subject. Thanks.
a side question: are 'random mutations' actually truly random or are they "pseudo random" based on very very complex and huge amounts of varying input conditions?
They are not truly random and this assumption is biologically inaccurate.
That evolution created the car designer, who turned out to be more intelligent than evolution (at designing cars), proves that evolution is more intelligent.
I appreciate these discussions. I will say I'm having trouble understanding how the car maker becomes smarter than the process that created them. But I suppose it brings attention to the nature of the 'randomness' of evolution versus the adaptation/intent toward a particular outcome by the individual processes.
I find it bizarre that we can imagine processes in ways that don't/can't happen, but that it's sometimes because of (or despite) this ability to abstract something entirely 'incorrectly' that we achieve the 'correct' desired outcome. It's curious, then, to assume that the process which allows us to behave this way doesn't appear to operate this way itself.
Basically: are the processes that influence the progression of evolution not at work when a carmaker imagines how not to make something before they make it?
And because we are more intelligent than evolution, we will create a superintelligence at some point. If a pseudo-random process like evolution could do it, we can at least match it even if we assume that human intelligence is the best possible; and since that is very unlikely, we can even exceed what evolution did... the question is when.
I guess we're lucky that during evolution the terrain changes all the time, and if it does so slowly enough, some species being at the local maximum could end up being on a global one.
Evolutionary algorithms tend to include recombination, just like biological evolution does. I wouldn't consider random optimisation a good example of evolutionary computing at all.
Plus, "there's no single 'how many children do you have' gene" — well, I'd tend to disagree. It might not be a single gene, in fact, but that's not the point here. The point is that the genotype has no direct influence on the number of _successful_ children an individual has. Having many children might not be a good strategy for having many children actually survive and get offspring of their own, as K-strategists show.
Please inform us more on the subject of artificial intelligence -- I need it to survive long term wise.
I know lots of rocks that are more intelligent than most people
Same :)
Don't be so toxic man, wtf
Yeah i know some too... they rock. *badumm zzz*
false.
this video is the single most interesting thing I've seen in an entire year. Particularly, the "local maximum" and "global maximum" thing. being in the local maximum is a trap. you do it if you can't "see" enough steps in the "strategy game" that life is.
thinking is seeing longer, so you can reach higher heights in the long term even if you have to temporarily step in lower plateaus in order to get there. If you are not intelligent enough, you are doomed to settle for "local maximums", like being on welfare, instead of sweating your ass off in order to reach better positions in life.
Exactly. The point being, the farther you can see and predict, the better your outcome will be.
Evolution stumbles blindly, only aware of the immediate environment, which is why it takes forever.
Intelligence allows planning and navigation through the local minimums toward the global maximums.
He's talking about the genetic algorithm, which I'd like to have an episode on. Maybe with examples — I know of a program that runs a 3D simulation of evolved creatures, and another program in Flash where a genetic algorithm designs a bike (I know it was Flash).
I really enjoy your content about AI. Do you have any book published, or would you recommend one? It would be cool if it's available in German, but that's not necessary.
Brilliant.
But surroundings can change; basically, the "evolutionary topography" can change over time, because, well, one species usually doesn't exist in a vacuum.
Am I missing the point if I assert that the car designer is a part of evolution and therefore it's output is really evolution's output?
Nathan Krowitz i think that’s a valid point. You could then go on to say that mankind supersedes evolution, therefore changes we make to our environment are naturally better than what was there before
He is my favorite guy from Computerphile (except for the Klein bottle guy, that guy is cool)
That's Matt Parker. He's from Numberphile, not Computerphile
false.
Liked that a lot! Thank you!
Very interesting (and entertaining)!
So, can we say that our universe is an optimization system, because we are evolving inside it?
My friend dared me to watch a computerphile video then a buzzfeed video afterwards. I did.
Sigh...
To avoid being stuck in a local maximum, could a program run a cheap "probe" generation? One that is really stupid and just goes in a single direction for a while to test whether it's on a local maximum or a global one? For instance, the new generation could be programmed to have 1 stupid individual for every 4 intelligent ones. The intelligent ones would use the info already gained, but the stupid one would be like a radar ping, going off in a spiral to discover the landscape. Then take the data from that probe generation and enhance the current algorithm, thereby giving the AI a form of insight.
FlipFlopGaming This is OK in low dimensions, but once there are enough dimensions there are just too many directions to go in. Even if you determine that you're stuck in a local minimum because you found a better one, it still doesn't mean that you've found a/the global minimum.
FlipFlopGaming Save the local maximum and start searching for other maxima; if a newly found maximum is higher than the past one, it is the best spot currently, until a better one is found.
+FlipFlopGaming Yes you could, but it's only really useful if you're stuck in a particularly low local maximum. If you're at a particularly high point, the chance of finding a higher-fitness configuration through random one-directional or linear probing is really small. This is all part of trying to improve fitness-optimizing algorithms, though. The point at which they become intelligent is where you add the functionality for prediction, where the algorithm can make somewhat accurate assumptions about the outcome of a change before it's made. At that point you have design rather than optimization.
+FlipFlopGaming Let me describe an alternative algorithm to try to get from a local to a global point. Imagine the hills described in the video inverted; instead of hills they're actually pits and instead of trying to reach the top of the highest hill we're trying to reach the bottom of the deepest pit. Whatever value you're trying to maximise, you can just multiply it by -1 and look for a global minimum instead of a global maximum.
It's actually quite nice to approach it this way because you can imagine the simple optimisation process as just gravity; place a ball at any point on the map and if it's on a slope it will naturally roll down until it hits the bottom of a pit (or an expanse of flat ground). In the hill climbing example, the blind man steps in every direction to find out which way leads up and moves in that direction, while here the ball automatically moves whichever way leads down. Same thing but in reverse. Through this process we can expect it to reach a local minimum; the bottom of a pit that may not be the deepest one.
Once the ball reaches a local minimum, what do we do? Simple: kick it as hard as you can in a totally random direction. Imagine that physics is somewhat simplified and the ball won't leave the ground and bounce off surfaces, instead it will just roll along the ground at speed. With that first forceful kick it's probably going to be kicked completely out of that first pit; passing over the rim and entering into another one. In fact it will probably pass through several different pits before friction finally slows it down enough to settle in one; a different local minimum. When it does, you kick it again and then again and again. Each time you kick it with a little bit less force and always in a random direction.
In the early stages it'll continually be kicked out of one pit and through more to finally rest in another, but eventually, through random chance, it will most likely roll into a particularly deep pit; possibly the global minimum. Unless this happens in the early stages of the process when you're kicking the ball really hard, the pit should be too deep for it to be kicked out of and so it will remain where it is as you give it weaker and weaker kicks and eventually stop. Even if you do get there in the early stages and kick it out, you're still going to have plenty of time kicking it easily out of shallow pits for it to end up back in there.
This gives you a much greater chance of getting the global minimum and even if you don't reach it, you've probably reached one that you can still be pretty satisfied with. To end up in a shallow pit you would have to consistently kick in a direction that doesn't pass through any deep pits, which would require either a terrible space to work with or insanely bad luck.
+FlipFlopGaming This isn't that far off from something like particle swarm optimization...which is a little bit more advanced version of what you're thinking.
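The kick-hard-then-ever-softer scheme described above is essentially simulated annealing. Here's a minimal sketch in Python, with an invented 1-D "pit" landscape (deepest pit near x = 0) and made-up cooling constants; the "temperature" plays the role of kick strength:

```python
import math, random

def f(x):
    # Invented landscape: many pits, the deepest one near x = 0.
    return 0.05 * x * x - math.cos(2 * x)

random.seed(3)
x = 9.0                        # start in a poor region far from the best pit
best = x
temp = 5.0                     # initial "kick strength"
while temp > 1e-3:
    cand = x + random.gauss(0, temp)
    # Always accept downhill moves; sometimes accept uphill ones,
    # with a probability that shrinks as the kicks get weaker.
    if f(cand) < f(x) or random.random() < math.exp((f(x) - f(cand)) / temp):
        x = cand
    if f(x) < f(best):
        best = x
    temp *= 0.995              # cool down: weaker and weaker kicks
print(round(best, 2), round(f(best), 3))
```

Early on the big "kicks" hop between pits freely; late in the run the ball can only settle deeper into whichever pit it's in, just as the ball analogy describes.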
Do more on artificial intelligence. It is such an interesting area and I would like to learn even more, and I am sure I am not alone :)
Look at the world now!!
intelligent design denied with elegance.
false.
I can compare the hill climbing algorithm with gradient descent algorithm. Like they're two opposites...
Evolution solves the problem of the local maximum by unleashing multiple trials.
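In algorithm terms, those multiple trials amount to random-restart hill climbing: run many blind climbers from random starting points and keep the best. A toy Python sketch (the landscape and all constants are invented for the demo):

```python
import math, random

def climb(f, x, step=0.01, iters=2000):
    # A blind climber: accept a small random step only if it goes uphill.
    for _ in range(iters):
        cand = x + random.choice((-step, step))
        if f(cand) > f(x):
            x = cand
    return x

def f(x):
    # Several peaks of different heights; the tallest is near x ≈ 1.57.
    return math.sin(5 * x) * math.exp(-0.1 * (x - 2) ** 2)

random.seed(5)
# "Unleash multiple trials": many independent climbers from random starts.
results = [climb(f, random.uniform(-4, 4)) for _ in range(40)]
best = max(results, key=f)
print(round(best, 2), round(f(best), 3))
```

Any single climber may end up on a minor peak, but it's very unlikely that all forty do, which is exactly the point of the comment.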
Evolution vs. Creation comes down, from what I see, to where the intelligence is. Is it in the dirt or mud or whatever something evolves from? Or is there a Creator that created the order of things? Both take a leap of faith. The Creator sounds more believable than concluding that dirt is smarter than humans. But that is what you are left with when following evolution. IMHO.
Andrew Joel There doesn't need to be an "intelligence" in order for evolution to take place. That's like saying there must be an intelligence causing ice to melt. It's just another thing that happens in nature, but holy texts don't account for it because they COULDN'T because of the relatively sorry state of collective scientific knowledge.