This was amazing. A little bit cloudy on the transition states, but this has been so much more enlightening than the scholarly literature about it.
One of the most inspiring videos I've seen during the last two years. I am an AI student, and the first video that shook me like this was from Vsauce about Zipf's law.
hi
I sussed out an older Google PageRank programmer and complexity theorist on a social network, and this is the "math detail" that Google used to steal the stochastic identity matrix of society, meaning the collective-personal memory connection. Historically, when pre-literate man built memory palaces we have trouble imagining, devices like sticks and rocks were ENCODING devices, not offloading devices. This is how the brain works, in terms of linear and exponential (metaphorical) connections present in nature and cognition, and people are losing the ability to remember collective events, as their connection to events is shaped by search results and ranking. At the personal and episodic level, all sorts of biological triggers, such as smell, are better System 1 access points to memory than text (literacy). When I searched for events I recall vividly as having happened, cultural ones, Google does not connect a past "alternative history" to the present cultural moment, where my accurate memory has been either reframed or excluded.
I see it as a recursion of humanity down to what is deemed essential on both sides of input/output: resources in the Economic Lagrangian. Here in the US, it is tearing society apart. Last year, I called bullshit on the Google claim of quantum supremacy, as that is a movable goalpost, one that does not assume a finite-dimensional and coherent quantum search space. Tang of Austin did the work on the classical clean-up. As a non-math student but with an education in the humanities, Nassim Nicholas Taleb is the one I am currently absorbing. Notice the nature of abstract mathematical processes like this one, at the base level of connection to mathematical reasoning: in physics, nobody can give a structure/function (element/process) definition for the things they describe. Math has the property, due to the errors in reasoning over some recent centuries, of splitting sets and comparing for equivalence. Here, the dependence of looping the function to fit the ratios predetermined by natural laws reveals itself in inverse. Don't say the word fractal around anyone working with equations; after the Leibniz notation won out, it was a buried-in-the-sauce notion. Sorry to chew your ear. We are nowhere close to AI; audio/visual calculators are all we really have, and how we use them is what matters.
@dsm5d723 I didn't understand a single thing. Nice.
Brilliantly explained. I used Markov chains in some uni course, but I never gave deep thought to the actual dynamics between the probabilities. Thanks
wow!
Plato --> Bernoulli --> Nekrasov --> Markov! Philosophy is mixed in! Love it!
this isn't Sal Khan narrating?
Right. But I cannot understand how Mr. Khan knows so much that he can teach all the other videos.
This is one of the best videos I have ever seen. Thank you.
This was so beautifully done.
Interesting story about Nekrasov & Markov; I didn't know this one. It shows that failed, obscure, and forgotten theories sometimes influence the progress of knowledge.
This series of videos, with its breadth of knowledge, its ability to connect seemingly unrelated worlds, the clarity of its explanations, and its visual and auditory beauty, reminds me of the masterpiece "Cosmos" by Carl Sagan and Ann Druyan. Many thanks to the talented creators for crafting such a masterpiece.
Very informative. But the background music is distracting.
I wish you would cite sources on videos like this that are more about historical facts than your original contributions.
Good video though!
Great explanation and perfect pacing. Thanks!
Great video, thank you!
previous lesson: ruclips.net/video/PtmzfpV6CDE/видео.html
next lesson: ruclips.net/video/WyAtOqfCiBw/видео.html
Brilliant video.
amazing
need more such videos
Great video and explanation
Great video, thanks :)
Fantastic video! But I need to ask something about 6:19: don't 0 and 1 have to change to make it right? It said that there are more black beans in state 0 than white beans, so obviously it shouldn't be a 50:50 chance??? Please comment
Thank you!
Cool! Thank you!
That was fantastic! Do you have more such videos on the Markov Property?
+Maitreyi Sinha Sorry, that's the only one I made. However, you can see an applied version here: ruclips.net/video/3pRR8OK4UfE/видео.html
great explanation
I've only seen the first 1:30 so far, and it is already one of the most beautiful introductions I have ever seen in a science video on RUclips. Thank you
A fact is an intervention. It is truth. A singular thing we know has a limit and is thereby finite. In order to be infinite, one must also occupy the finite, otherwise there would be a place the infinite could not go, and so it would no longer be infinite. A fact can make its way into the mind by sheer luck or by pure brilliance which in its completeness would be capable of being by intervention. If it can be, then it will be. Brilliance takes will. So, for Pavel Nekrasov, his free will is an instance that describes the divine. As a particle in a bell jar vacuum of the mind, like a particle, the fact exists because it must exist. When forced to form a word not your own (a lie), one can at some point find a way to say "no" (or to have already said it), so as to defy the false. An untrue word is not a lie if it is simply not yet true. It can become true. That defiance can be fearlessness, a path of truth and thus love. Love exists. Love is. It is better to be afraid for than to be afraid of, and each time, that division forms an asymptote to fearlessness. Will is of intervention. The gift. The increased anecdotal instances are empirical evidence that makes the truth clearer, a discovery more than constructing an invention. The equilibrium result does not account for the observer as an external entity, messaging. Every state transition is a message.
My comment has no hate in it and I do no harm. I am not appalled or afraid, boasting or envying or complaining... Just saying. Psalm 23: giving thanks and praise to the Lord, and peace and love. Also, I'd say Matthew 6.
great video
ONE POINT THAT STARTS EVERYTHING ;) IN SUMMARY
I love your channel! Your videos always make these things easy!
So much information in a 7-minute video!! 😁
Ok so Bernoulli thus described the basis of frequentist statistics?
Is it possible that this could be used in quantum-state Markov chains? And in subatomic particles?
excellent backgrounder/intro!
3:34 - Could someone please tell me the name of the theologian-turned-mathematician? Seems like an interesting guy; would love to read up on him, but the name wasn't clear in the video.
Oh, and brilliant video! Love Khan Academy for sharing these gems for free!
Pavel Nekrasov
Very recommendable
This is really good!
diegofloor EYY... DATS PRETTY GUD!
Only the last 2 minutes of the video really speak to the topic - the first 5 minutes of the video speak to a *very* distantly related philosophy of stochasticity, starting with the Hellenistic geometers.
is there a part 2? ending felt a little sudden
See the description!
This is a great explanation, but the transition matrix shown at 6:58 is wrong. The rows should add up to 1, instead of the columns.
Why can't the matrix be transposed whereby the columns do add up to 1?
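On the transpose question: both conventions encode the same chain, as long as the update rule matches the matrix. A minimal sketch with made-up transition probabilities (not the video's numbers) shows that a row-stochastic matrix P updates a row vector as v·P, while its transpose is column-stochastic and uses Pᵀ·v, giving identical results:

```python
import numpy as np

# Hypothetical 2-state chain: row i holds P(next = j | current = i),
# so each ROW sums to 1 (row-stochastic convention).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
assert np.allclose(P.sum(axis=1), 1.0)   # rows sum to 1
assert np.allclose(P.T.sum(axis=0), 1.0) # transpose: columns sum to 1

v = np.array([1.0, 0.0])   # start surely in state 0

v_next_row = v @ P         # row-vector convention: v' = v P
v_next_col = P.T @ v       # column-vector convention: v' = P^T v

# Same distribution either way; only the bookkeeping differs.
assert np.allclose(v_next_row, v_next_col)
```

So the matrix can indeed be transposed; the only requirement is that the corresponding update (left- vs. right-multiplication) is used consistently.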
***** Im confused, are markov chains both independent and dependent? Because the next outcome only depends on the current one and not the previous ones before the current.
TheResonating It DOES depend on the previous outcome, because it decides which state we are in right now.
The next outcome is dependent on the state (because it decides current probabilities), and the state is dependent on the last outcome and not any outcomes before that. To move to the example ... which cup you draw from is dependent on what you drew last (white or black piece), but not dependent on what you drew before that.
You might say that it IS dependent on the state two steps back because it decided the probabilities of what outcome you got last step, but that is the point ... the probabilities of something happening become irrelevant to the calculations once you know if it happened or not.
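The explanation above can be sketched as a toy two-cup simulation (the cup contents here are made-up fractions, not the video's): the next draw depends only on the current cup, never on the draw history.

```python
import random

random.seed(0)

# Hypothetical cup contents: probability of drawing a white piece in each cup.
P_WHITE = {0: 0.7, 1: 0.4}   # cup 0 mostly white, cup 1 mostly black

def step(state):
    """Draw from the current cup; white -> cup 0 next, black -> cup 1 next."""
    white = random.random() < P_WHITE[state]
    return 0 if white else 1

state = 0
history = []
for _ in range(10):
    state = step(state)
    history.append(state)

# step() looks only at `state`, never at `history`: the probabilities of
# earlier draws become irrelevant once you know which state you landed in.
```

This is the Markov property in miniature: the state is a summary of everything about the past that matters for the future.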
Doesn't the idea of Markov chains violate the undecidability of Halting problem?
Greatly made, so inspiring. Hats off, truly!
What kind of ending was that
Hey Brit, good video - where did you find the quote from Bernoulli about the universe being governed by ratios?
Brilliant explanation. :-)
Which Nekrasov? Pavel Alekseevich Nekrasov (1853-1924)?
Gawdayum this is good
I don't take seriously those who peddle Plato. You're an educational channel! Where's Aristotle?
Who did the music in this video?
I wanted to ask about this, but I wouldn't call it music; it sounds like binaural beats or isochronic tones or something in those realms, perhaps to help with cognitive recall? That would be in innocence; I thought it was a little unnerving/anxiety-inducing... lol
it's like a generator
Remember Claude Shannon!
I'm not sure this video really explains the ORIGIN of the Markov chain. It just introduces it. Again, why would the state vector converge to a stable set of numbers?
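On the convergence question: when every state is reachable (and the chain is aperiodic), repeatedly applying the transition matrix drives any starting distribution toward the same stationary vector. A sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical transition matrix; rows sum to 1.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

v = np.array([1.0, 0.0])   # start in state 0 with certainty
for _ in range(100):
    v = v @ P              # one step of the chain

# The stationary distribution pi satisfies pi = pi P; here the balance
# condition 0.3*pi0 = 0.4*pi1 with pi0 + pi1 = 1 gives pi = (4/7, 3/7).
pi = np.array([4/7, 3/7])
assert np.allclose(v, pi, atol=1e-9)   # v has converged to pi
```

Intuitively, each multiplication by P shrinks the distance to the stationary vector by a constant factor (the second eigenvalue, here 0.3), so the iterates converge geometrically no matter where you start.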
"Transision"...
The ending was quite sad.
2018.
How do you watch videos at double speed on your phone?
Click on the 3 vertical dots at the top right corner , you'll find out
original video is here: ruclips.net/video/o-jdJxXL_W4/видео.html
SOUND!!!!! Are you kidding?
My Error!!
That ending is infuriating
not as clearly as Sal was able to explain
This video is not good. It makes the Markov process sound dependent. A Markov process is a random process in which the future is independent of the past. At 6:22: "Markov proved that when every state in the machine is reachable, when you run these machines in a sequence, they reach equilibrium... no matter where you start, once you begin the sequence, the number of times you visit each state converges to some specific ratio, or probability." The probability of getting a light bean or a black bean is still 0.5, still independent, although one will stay in cup 0/state 0 when one picks up a light bean. If the light beans and black beans are equally large numbers in both cups/states, the probability of both state 0 and state 1 should be 0.5. The rule only forces one to stay in cup 0/state 0 after the result is shown, and it doesn't affect the probability of the next choice, so the next choice itself is still independent. I don't see the point of introducing the Markov process this way, which only confuses people.
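The equilibrium claim quoted above can be checked empirically. With made-up unequal cup contents (so the draw probabilities genuinely differ by state), the long-run fraction of time spent in each state settles at the ratio the theorem predicts, not at 0.5:

```python
import random

random.seed(1)

# Hypothetical cups with UNEQUAL colors: cup 0 yields a light bean 70% of
# the time, cup 1 only 40%, so the transition really is state-dependent.
P_LIGHT = {0: 0.7, 1: 0.4}

state, visits = 0, {0: 0, 1: 0}
for _ in range(200_000):
    visits[state] += 1
    light = random.random() < P_LIGHT[state]  # light bean -> cup 0 next
    state = 0 if light else 1

ratio = visits[0] / sum(visits.values())
# `ratio` settles near the equilibrium value 4/7 (about 0.571), the
# stationary probability of state 0 for these transition probabilities.
```

(If both cups had identical contents, the chain would degenerate into independent coin flips and the ratio would indeed be 0.5; the interesting case in the video is precisely when the cups differ.)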
The video is too disturbing and annoying, with constant shaking and needless activity going on within. Maybe you are trying too hard to make the content entertaining. Learning is my first objective here; entertainment comes second.
The Spanish translation is not good! 🥲