To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/ArtemKirsanov/.
The first 200 of you will get 20% off Brilliant’s annual premium subscription.
I am the first here and I am debating on clicking for some reason lol
This only shows 7 days (even with your code) (?)
@@snakejuce Hmm, that's weird. I'll contact Brilliant to double-check this
@@ArtemKirsanov No worries, just thought I'd let you know.
I'm a computer scientist but I really really really love these videos, keep up the good work man
This half-way point between stasis and chaos is also where "life emerges". If you think about life as replicators, they need a way to grow and replicate, which requires that their Lego blocks can be disassembled and reassembled. At the right temperatures, things are stable enough that you can keep some information going, but unstable enough that growth and evolution and "processing"/"thinking"/"natural selection" can happen. I am thinking, though, that the life-emergence point might be based on covalent bonds at Earth temperatures, but on Mars it might be based on cooler hydrogen bonds: on Earth, covalent bonds are at the critical point, allowing photosynthesis to create them and digestion, rotting, growing, etc. to repurpose them, while on Mars covalent bonds are in stasis, so the critical point will be in intermolecular or hydrogen bonds.
I'm also a computer scientist and I like psychology and these kinds of videos.
Same :))
Dr Leon Chua calls this the edge of chaos. I liken it to a stage microphone on the edge of feedback from hearing its own output from the speaker.
Building networks of these things has got to do some interesting stuff, right?
What was new for me was how the model discovers the geometry of the overall organization, not just pairs leaving identical but increasingly sharp footprints. That’s really nice and rings lots of bells for me.
This ties into the weight initialization of layers in deep neural networks in machine learning. If the magnitudes of the weights are too small, the outputs diminish with each layer; if the magnitudes are too great, the outputs blow up. Balancing these weights allows for the stacking of many layers, which has enabled the great progress we have seen in deep learning in recent years.
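A minimal sketch of that balance (my own, not from the video): with i.i.d. Gaussian weights of variance g/width, a deep stack of linear layers shrinks, preserves, or blows up the signal depending on the gain g; g = 1 is the Xavier/Glorot-like choice for equal-width layers.

```python
# Hypothetical demo: how the weight scale g controls signal propagation
# through a deep stack of purely linear layers (no nonlinearity, no training).
import numpy as np

rng = np.random.default_rng(0)
depth, width = 50, 256
x = rng.standard_normal(width)

for g, label in [(0.5, "too small"), (2.0, "too large"), (1.0, "Xavier-like")]:
    h = x.copy()
    for _ in range(depth):
        # i.i.d. Gaussian weights with variance g/width multiply the
        # activation variance by roughly g at every layer.
        W = rng.standard_normal((width, width)) * np.sqrt(g / width)
        h = W @ h
    print(f"{label:11s} g={g}: std after {depth} layers = {np.std(h):.3e}")
```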
I thought about exactly the same thing. This is the vanishing or exploding issue in the forward/backward pass in ANNs. To alleviate this problem, there is also batch normalization, which helps keep the standard deviation of the activations at 1 throughout the training process. Skip connections also help keep the flow of information going. I also thought about the attention mechanism used in transformers. For each output, it takes a weighted average of the input tokens. These positive weights add up to 1 thanks to the use of the softmax function, keeping the flow of information constant through the layers. Transformers combine all these tricks (they use layer normalization instead of batch normalization, but the idea is the same).
Moreover, the original problem solved by the attention mechanism used in transformers was that the hidden state in RNN/LSTM acting as a memory state hardly retained all the information of the sequence of tokens that was previously processed. The information about the past tokens sort of vanishes (or at least is incomplete) as the model goes forward through the tokens. The attention mechanism serves as a kind of skip connection that allows the model to look at all the previous information which is then preserved and can flow much more easily. In the end, even in ANNs, good information flow is central to their proper functioning.
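A small sketch of the softmax point (mine, with made-up shapes): every row of attention weights is positive and sums to 1, so each output token is a convex combination of the value vectors.

```python
# Hypothetical single-head attention in NumPy, just to show the weights sum to 1.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n_tokens, d = 5, 8
Q, K, V = (rng.standard_normal((n_tokens, d)) for _ in range(3))

weights = softmax(Q @ K.T / np.sqrt(d))  # (n_tokens, n_tokens) attention matrix
out = weights @ V                        # each output row: weighted average of V

print(weights.sum(axis=-1))              # every row sums to 1.0
```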
Now, it would be very interesting to know how nature came up with good information-flow management in the brain. The critical brain hypothesis is interesting, but it seems to me that it only makes some observations related to critical phenomena and doesn't really explain the mechanism causing this criticality (finding it might very well be the ultimate goal of neuroscience). Researchers in AI could then take inspiration from it.
@@chocochip8402 Indeed. Discovering the cause of criticality will be the end of neuroscience, and the death of God as well.
Every video you have made so far is a masterpiece. You cover a wide variety of computational neuroscience topics from place cells to wavelets; with each topic covered in exceptional detail. You are able to convey abstract topics in an intuitive and visual way that is unparalleled.
Keep up the great work man
Absolutely fascinating. I am a machine learning engineer and I could not stop thinking about how this knowledge, and intuition based on it, might be transferred to ML.
Do certain ANN models run near critical points?
I don't think standard ML can implement criticality. I'm looking towards Spiking Neural Networks / Neuromorphic models as the prime candidate for this type of behavior.
ChatGPT4o's response:
The concept of criticality in brain function, as shown in the YouTube video screenshots, can be applied to machine learning (ML) algorithms in several ways. Here are a few ideas:
1. **Dynamic Parameter Tuning**:
- Use principles from criticality to dynamically adjust hyperparameters in ML models. For instance, a system can be designed to detect when the model is near a critical point and adjust learning rates, dropout rates, or other hyperparameters to optimize performance.
2. **Spiking Neural Networks (SNNs)**:
- Implement Spiking Neural Networks, which are inspired by how neurons in the brain communicate. These networks can operate near criticality, offering potential improvements in efficiency and robustness.
3. **Self-Organized Criticality (SOC)**:
- Integrate self-organized criticality into ML models. This concept can help in maintaining a balance between stability and adaptability in neural networks, enabling better generalization and avoiding overfitting.
4. **Criticality-Based Regularization**:
- Develop regularization techniques based on criticality to prevent overfitting. By encouraging the network to operate near critical points, it can achieve a more balanced learning process, improving both training stability and generalization.
5. **Adaptive Architectures**:
- Create adaptive network architectures that can reconfigure themselves based on the critical states detected during training. This could involve changing the number of neurons, layers, or connections in real-time to optimize learning and inference.
6. **Energy-Efficient Computing**:
- Leverage criticality to design energy-efficient ML models. By mimicking the brain's energy-efficient processing near critical points, ML models can reduce computational costs and power consumption.
These methods aim to make ML systems more efficient, adaptable, and closer to the natural intelligence processes observed in the human brain.
No one explains better than you do. I knew all this stuff in its separate domains, but I've never truly understood the connections as I do now. When at 25:07 you justified the passage between electrodes and neurons, it blew my mind with pure happiness!!
Great content. Thanks.
This is one of the best videos I've ever come across in something like 10 years using this platform.
I can't overstate how good this was. Amazing job, I'm looking forward to your future content
Wow, thank you so much!
At a long-time and large-size scale, water is at a critical point on Earth (in that it exists in liquid, gas, and solid states). More importantly, though, the carbon-nitrogen-oxygen covalent bonds in life are at the critical point at both long and short time scales, allowing their bonds to be repurposed and enabling self-replication and evolution. On Venus these bonds are unstable, while on Mars they are in stasis. I think around Mars/Europa hydrogen bonds may be at the critical point, so you might see complex "ice crystal" life, while on Venus some sort of weird sulfuric-acid compounds are at the critical point.
This is the most amazing video I have seen on YouTube for a while. This is science communication at its best. Thank you so much!
Completely agreed 👌
Wow, thank you so much!
Studying the Ising model for my thesis right now. I never would have thought that there is a connection between the model and NNs (which also feels extremely natural). Nice content
Are you kidding man? On a road trip rn and have been talking about this with friends. Can’t believe this just came out, very excited to listen!
Dear Artem, thank you for this glorious video! Well made and inspiring! You triggered another neural avalanche of excitement in me! My brain transitioned from rapid eye movements and sleepiness to the rabbit hole of self-organized criticality!
This is so so so well made! It makes you feel as if you're gradually discovering these results for yourself and it feels fantastic doing so!
Wow. Self-Organized Criticality. Scale invariance of Relevance Realization. Deep-continuity hypothesis. Our metabolism powers our virtual engines which are optimized and orchestrated on top of the background "hum" of critical neural objective reduction. Thanks for this great work.
Amazing video! I did undergrad research on brain criticality. The idea was to create an analog of the connectivity matrix for the Ising model at the critical temperature, to check whether the graph's topological properties match the ones measured in the resting state with fNIRS.
Oh, that sounds great! Is there something you've published?
@@danyielsanchez5159 Not yet
This neuroscience video is probably the best explanation on the Ising model I’ve seen!
Thanks! :D
This is true, although I missed a mention that the Ising model stands out in that it can be solved analytically.
Thanks!
An excellent presentation of the essence.
Very good educational material; it can be recommended to researchers as well.
Thank you for it!
This is SO well done. Scale-free avalanches in the brain make perfect sense, since we are trying to self-resonate, such that information is not lost as it echoes up and down the various physical thresholds which constitute our brains, from atoms all the way up to the whole structure.
Despite these epiphanies handed to me on a silver platter, I'm still having trouble wrapping my brain around how any of this helps keep neural networks in a state of unstable equilibrium. What are the hidden variables that prevent self-feedback oscillations from getting phase-locked, much like a seizure, or from descending into complete chaos? It reminds me of a table full of pendulums that stand upright when the table is randomly vibrated, but much more complicated. (Because they're all connected to the same table they want to sync up; because the vibration is random they seldom do; yet within the narrow range of vibration they all stand up!)
@@petevenuti7355 You have to remember our brains, like the rest of us, evolved naturally. Therefore the near-critical point is a universal feature of life. Imagine you want to farm entropy, where do you go? You go where it’s being formed, at the edge of a phase transition - kinda like how we build along coastlines, or better yet how primordial life still clings to hydrothermal vents deep underwater. The transition from eddies to flows is where all the magic happens.
In the brain, then, there are feedback systems preventing your bad feedbacks, because it's actually designed around physical minima, having carved a home in an energy gradient which is stable despite its complexity. Life is a self-stabilizing dissipative structure, using the pull of entropy to orbit equilibrium.
@@anywallsocket "life is a self-stabilizing dissipative structure, using the pull of entropy to orbit equilibrium" what an interesting way to think about it.
@@anywallsocket That's beautiful, thank you
@@anywallsocket beautiful
Illuminating
Artem you are a genius! Your videos made me interested in neuroscience and now I am fully devoted to reading about it. I recently read about criticality and now I see your video and it's just so beautiful. I wish you talked about self organized criticality too
Your videos are truly a gift! Amazing research and video quality. Keep it up!
@Artem Kirsanov
I think the analogy to brain criticality is fission criticality, e.g. in U-235.
There are the following relevant reactions.
U-235 absorbs a neutron and splits into 2 smaller but neutron-rich nuclei and releases neutrons.
This happens basically instantly, on the order of 10^-15 sec.
U-235 absorbs a neutron and then de-excites to U-236, or a neutron is absorbed by a different nucleus which does not undergo fission.
This means a released neutron does not guarantee future fission.
A fission product, e.g. Ba-144, may undergo beta-minus decay and de-excite by releasing a neutron.
This happens much "slower", on the order of 10^-6 to 10^3 sec.
U-235 undergoes spontaneous fission and splits into 2 smaller but neutron-rich nuclei and releases neutrons.
This happens basically instantly, on the order of 10^-15 sec.
This creates 3 different states and 2 boundary cases:
1. Subcritical: A (free) neutron generates less than 1 free neutron on average --> fission chains quickly die out.
BUT spontaneous fission guarantees that fission chains will always start.
This is the state of natural uranium.
2. Delayed critical:
A free neutron creates 1 free neutron on average. But a few of the released neutrons are delayed neutrons, which means there is a delay until
"all free neutrons are regenerated".
This is the state of a reactor in a nuclear power plant during normal operation.
3. Delayed supercritical:
A free neutron creates more than 1 free neutron on average, but the prompt neutrons alone are not enough to reach 1 new free neutron on average; the delayed neutrons are needed
to get above 1 released neutron on average.
This is the state of a nuclear reactor during start-up; the delay allows operators to react to whatever happens in the reactor.
4. Prompt critical:
A free neutron releases 1 free prompt neutron on average.
5. Prompt supercritical:
A free neutron releases more than 1 free prompt neutron on average. This leads to an almost instant (on the order of 10^-9 sec) exponential chain reaction.
This happens during the explosion of a nuclear bomb.
Now to the brain analogy:
Incoming pulses from our senses guarantee there will always be some activity. (= spontaneous fission in the nuclear case)
A neuron can activate a variable number of neurons. (= moderation + enrichment in the nuclear case)
Analogy of the states:
1. A state like coma: there is very little brain activity, but it is not 0.
A state very close to state 2 is needed for a short time to reduce activity.
2. "normal" state of the brain.
3. brain reaction to something from the outside
4. boundary to epilepsy
5. epilepsy
In this case the constant input from our senses makes sure that criticality is reached even when the connections between the neurons alone are not sufficient. And they introduce some delay, preventing almost instant exponential activation chains.
I am not a native English speaker. Sorry if this is hard to read, and sorry for typos.
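For what it's worth, the shared skeleton of the nuclear and neural pictures is a branching process: each active unit (neutron, neuron) triggers on average sigma new ones. A toy sketch of my own (not the commenter's, and not from the video):

```python
# Hypothetical branching-process demo: sigma < 1 (subcritical) dies out,
# sigma > 1 (supercritical) explodes, sigma = 1 sits at criticality with
# hugely variable avalanche sizes.
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma, cap=100_000):
    active, total = 1, 1
    while active and total < cap:
        # Each active unit independently triggers Poisson(sigma) new units.
        active = int(rng.poisson(sigma, size=active).sum())
        total += active
    return total

for sigma in (0.9, 1.0, 1.1):
    sizes = [avalanche_size(sigma) for _ in range(2000)]
    print(f"sigma={sigma}: mean avalanche={np.mean(sizes):9.1f}, max={max(sizes)}")
```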
finally someone talking about phase transitions
This work of art is as valuable as the works of Plato. Thank you for bringing it to our consciousness.
This critical brain hypothesis is also similar to sparse autoencoders in artificial neural networks, where a sparsity constraint keeps some neurons active while others are not, which allows these networks to process information in an optimal way and avoid the overfitting problem. The two are similar in that sense.
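A toy illustration of that sparsity constraint (my own sketch with untrained random weights, not anyone's actual model): an L1 penalty on the hidden code is one standard way to keep only a few units active per input.

```python
# Hypothetical sparse-autoencoder loss in NumPy (weights are random here;
# in a real model they would be trained to minimize this loss).
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.standard_normal((784, 256)) * 0.05
W_dec = rng.standard_normal((256, 784)) * 0.05

def sparse_ae_loss(x, l1_weight=1e-3):
    code = np.maximum(0, x @ W_enc)   # ReLU hidden code
    recon = code @ W_dec              # reconstruction
    mse = np.mean((recon - x) ** 2)   # reconstruction error
    sparsity = np.mean(np.abs(code))  # L1 penalty pushes most units toward 0
    return mse + l1_weight * sparsity

x = rng.random((32, 784))             # a dummy batch
print(sparse_ae_loss(x))
```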
Thank you!
This might be my favorite Artem Kirsanov video. A masterpiece of masterpieces. Thank you so much for making these.
This is one of the greatest channels on YouTube.
One of the most intellectually rewarding videos I've ever seen!
I'm a computer so I really really really love these videos, keep up the good work man
32:20 See also, on Wikipedia: "Barkhausen stability criterion" (another link to an electronics system).
You have a talent of combining beauty and science. These are often thought to be separate; thanks for illuminating the bridge.
OUTSTANDING video! :D
You taught the concepts in a very clear way and the animations are simply insane. I love it!
This is the best YouTube video I have ever seen. You explained everything masterfully! Thank you for giving my curiosity a vision, I’m so excited to explore more.
This is beautiful. I am interested in seeing the effect of psychedelics on control parameters.
Thank you! Interesting thought indeed!
I had a powerful realization during a deep trip where I realized that life and consciousness are the result of the feedback/recursive character of the critical line. The more you can tune toward greater coherence, the higher the degree of consciousness.
your intuition is right, read: Carhart-Harris, R.L., 2018. The entropic brain-revisited. Neuropharmacology, 142, pp.167-178.
guy who's fried his brain with psychedelics: WOAHHH BUT WHAT IF HE WAS ON ACID MAN
@@philipm3173 holy fuck I did that on weed but I failed to realize the second part.
Awesome! I'm going to recommend this channel to my Neuroscience class.
This is one of the most thought provoking videos I have ever seen. This is now one of my favorite channels.
Perfect, I've been reading connectome harmonics papers recently so this is very much topical to me.
13:00 I don't understand why correlation implies transmission of information. It could just be that both cells (even if far away) are being affected in the same way by an external force; two radios picking up the same noise doesn't mean they're talking to each other (a quick simulation of this is sketched below).
14:34 Fractals are typically not exactly self-similar. Instead of mentioning self-similarity, I'd be more general and say the system displays the same properties (but not necessarily the exact same shape) across multiple scales.
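The first point is easy to demonstrate. A quick sketch (mine): two "cells" that never interact but share a common drive come out strongly correlated.

```python
# Hypothetical demo: correlation from common input, with zero communication.
import numpy as np

rng = np.random.default_rng(0)
T = 10_000
common = rng.standard_normal(T)            # shared external "force"
a = common + 0.5 * rng.standard_normal(T)  # cell A: common drive + private noise
b = common + 0.5 * rng.standard_normal(T)  # cell B: same drive, no link to A

print(np.corrcoef(a, b)[0, 1])             # ~0.8 despite no interaction
```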
I was sick today and binged some of your videos. So far, they're all brilliant and I love the aesthetic and craftsmanship you put into them. I thought of the Ising model as you were talking about phase transitions, and then you bring it up -- truly comprehensive and love that you are bringing physics into your videos! Super interested in similar systems, like Kuramoto oscillators which can possibly describe large scale brain oscillations, and which have mathematical similarities to Bose-Einstein condensates.
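Since Kuramoto oscillators came up: a minimal sketch of the model (my own Euler integration, parameters made up). Below a critical coupling K the phases stay incoherent; above it they lock, another phase transition.

```python
# Hypothetical Kuramoto simulation: N phase oscillators with random natural
# frequencies, coupled through the mean field.
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 500, 0.01, 5000
omega = rng.standard_normal(N)           # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)     # initial phases

K = 2.0                                  # coupling strength (try 0.5 vs 2.0)
for _ in range(steps):
    r = np.mean(np.exp(1j * theta))      # complex order parameter
    theta += dt * (omega + K * np.abs(r) * np.sin(np.angle(r) - theta))

print(f"order parameter |r| = {np.abs(np.mean(np.exp(1j * theta))):.2f}")
```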
This channel is about to go into a PHASE TRANSITION. That's a MILLION subscribers in 1 year.
Dude, this is otherworldly.
OH MY GOD, this blew my mind; this is in my top best informative videos ever for sure... dude, flow states and fractals, the border between chaos and order, the state of epilepsy being similar to a huge chaos eruption but with intense meaning...
It's like these 30 minutes explain life itself, or at least a very significant part of its basis. It's astonishing.
right? as a mentally ill former computer scientist, it fills my heart with joy to know that science says that my brain is *supposed* to be living on the critical point between two opposite deaths, solid and liquid at the same time, so that my head can fit more fractals in it, so that i can pick up long distance messages from inside my own mind better. i know that's not what the video is really supposed to be about but it intuitively feels to me like the video is describing a lot of my internal experience in ways that i haven't heard before.
Thanks for leaving sponsor at the end. I watched the whole thing.
I am experimenting with spiking neural networks evolved through indirect encoding, and I experienced spike vanishing in the past. This video blew my mind and I've learned a ton from it. I'm super inspired. Thank you!
Finally!! So awesome that this is finally being discovered by scientists; it definitely gets to the core of what is really going on, on a lot of levels and scales, and non-scales lol. Thank
You Bro, this was masterfully put together. Super appreciative of this work You are doing here, presenting these truths to us in the way that only You know how to do. I don’t study any of this on my own lol, I just wait and learn from You. You have the highest grasp
on all this, so it’s so incredible that You are so damn good at sharing your perspectives through such wonderfully effective graphics. Really can’t thank You enough!
Wow, thank you! I really appreciate it!
@@ArtemKirsanov Would love to discover the relevance of scale-invariance in fluid systems (thinking Reynolds number).
Is there scale invariance of life/evolution on life? I think carbon-nitrogen-oxygen covalent bonds are at the critical point, allowing life to do "computation" via "evolutionary algorithms". However, in cold areas ice lens/permafrost complexes may be at the critical point, though maybe only at long or short (non-human) time scales.
Life pretty much needs criticality, it seems.
Fantastic video, thanks! In relation to the "missing piece" question @21:23, perhaps the brain is exploiting the Free Energy Principle [Friston].
I think your video inspired me with how to solve a problem in my research project, about optimization at the critical stage and communication by long-range coupling. Thank you!
You don't know shiiiiit you are talking about 🤣 ... just take a bit of magnesium and a bit of zinc ... you'll feel better right away 🙃
Brilliantly explained. Please carry on making this type of video.
Well done!
Please do an analysis of the renormalization group. Your exposition of critical phenomena and self-similarity is extremely elegant and intuitive; beautiful work!
I know you know, but your videos give real intellectual satisfaction because they are sooooo great.
While watching the description of sigma, the interconnectedness, I was reminded of an experiment I did years ago while programming something else. I had a video camera looking at an image of a waterfall display, which resembled the wave phenomena of pixels moving down the screen. Usually it’s used to watch a variable over a longer time frame than would normally fit on the screen. Anyway, the camera fed a waterfall produced from a single line of video. I noticed this critical-point phenomenon straight away: in one instance nothing would happen, because there’s nothing there to begin with, no bright spots; in the other extreme there’s so much going on that you can’t really tell it apart from noise. But then there was an interesting middle ground where patterns would repeat, and it would keep going, but not get crazy.
Thank you for posting this. I've been trying to find new ways of explaining the 'grokking' behavior of ML, and how this is a phase-transition behavior similar to Flory-Huggins, liquid crystals, weather patterns, etc., but have not had a good way of describing it besides vaguely grasping at Fourier decomposition of a signal. This is a more detailed overall explanation. Glad it also applies (as expected) to biological neurons. Best wishes.
This is super-high quality content ! Congratulations !
😮😮 This is how a teacher works: very clear in a 30-min video. The channel should be the same size as 3b1b.
Awesome video! I am a physicist and I want to share a small comment: I was puzzled at first by the average over time when you described correlations, since we usually calculate averages over ensembles (thermal states, etc.). Then I recalled the basic assumptions behind ensemble theory, which identify the two quantities. You decided not to describe the abstraction itself but the underlying physical reality it represents, in a way that is both pedagogical and perfectly correct. I congratulate you for that.
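That identification (ergodicity) is easy to check numerically. A small sketch, mine, using a stationary AR(1) process: the time average of x² along one long run matches the ensemble average over many equilibrated copies, both equal to the stationary variance 1/(1-phi²).

```python
# Hypothetical ergodicity check for an AR(1) process x <- phi*x + noise.
import numpy as np

rng = np.random.default_rng(0)
phi = 0.95

# Time average of x^2 along one long trajectory.
T = 200_000
x, acc = 0.0, 0.0
for _ in range(T):
    x = phi * x + rng.standard_normal()
    acc += x * x
time_avg = acc / T

# Ensemble average of x^2 over many independent, equilibrated copies.
n_copies = 200_000
xs = np.zeros(n_copies)
for _ in range(200):                     # 200 steps is plenty to equilibrate
    xs = phi * xs + rng.standard_normal(n_copies)
ensemble_avg = np.mean(xs ** 2)

print(time_avg, ensemble_avg, 1 / (1 - phi ** 2))  # all ≈ 10.26
```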
Thank you! I really appreciate it
Always a joy when Artem drops a video!
Hey Artem
Very nice video. I have been doing percolation models for physical systems for a while. It is rare to find percolation lattice simulations on YouTube outside of very esoteric channels that nobody knows of.
It is interesting how it can be mapped to Neuroscience.
10/10
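For readers who haven't seen one, a minimal site-percolation sketch (mine, requiring SciPy; the commenter's own models are surely more sophisticated): occupy each site of a grid with probability p and watch the largest cluster take over near the 2D site threshold p_c ≈ 0.593.

```python
# Hypothetical site-percolation demo on an L x L square lattice.
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(0)
L = 200

for p in (0.50, 0.59, 0.65):
    grid = rng.random((L, L)) < p     # occupied sites with probability p
    clusters, n = label(grid)         # connected components (4-connectivity)
    largest = np.bincount(clusters.ravel())[1:].max() if n else 0
    print(f"p={p}: largest cluster fraction = {largest / grid.sum():.3f}")
```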
you have the best vids on YT
22:48 you absolutely just blew my F-ing mind.
Very impressive visual animations. Helped a lot with understanding the concepts
Bro this video is just outright phenomenal. Thank you for your time.
My first time viewing. What an excellent job. Simply correct in matters, meaning and math. I am very impressed.
This is a great topic and a beautiful presentation based on a great paper. Excellence all around.
The insight that cyclic relations define the geometry of the map is a key one, breaking out of simple Pavlovian association lists.
This resonates strongly with my exploration with video feedback in the past, and describes my infatuation with generative art. Self similarity is the keyword and a great way to define the region of the edge of chaos, so enlightening!
Fascinating. I come from math, and in very abstract algebra and geometry there are several notions of dimension that emerge from observing some power law. The objects of finite dimension are, of course, of particular interest.
I'm studying chemical physics. The first half is soooooooo clear! Thank you
Another fascinating video, Artem. The work you’ve put in to making the material accessible to non-specialists has definitely produced a pedagogical jewel. Amazing.
Wow! This was the best video I've seen for a while!
And it gave me an idea about how the ideas described here could have a huge impact on Graph Neural Networks!
Thanks for such amazing content!
OMG the graphics of this video are just popping off! I absolutely adore the font choice and visualizations. I can't believe you haven't passed 100K subs yet! But I'm sure you'll get there soon, and I'll add a small +1 to that count :)
This video helped me get dangerously close to thinking I understand the nature of the universe and myself inside it. Thank you for making such a brilliant video that's available for everyone to learn from.
Your videos are always enlightening; thanks for the consistently great content!
One of the best videos I’ve seen on YouTube! (The others are also your videos)
Fascinating and incredibly well put together video!
This theory seems to be resonating with a lot of other fields of science, as well as with embodied experience, and I want to thank you for presenting this topic in such an accessible way! I think it is important that we continually update our internal models of the world and ourselves to be able to stay in touch with it.
Fascinating. Thanks for making this.
This is reminding me of the book The Computational Beauty of Nature. Great work.
Such an intricate and complex topic, so well explained. Truly remarkable!
Damn, that was eye opening! Thank you for making this!
I really like the idea of doing a topological sort on the network and visualizing the avalanche from left to right -- but as you said, it comes with the inability to allow for circular relationships. Not a neuroscientist, but I imagine there are some regions of the brain that are structured like this to a first approximation, for example the entorhinal cortex and the hippocampus subsystem.
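A small sketch of that ordering idea (my own, using Kahn's algorithm): it lays a feed-forward graph out left to right, and it fails exactly when there is a cycle, which is the limitation the comment mentions.

```python
# Hypothetical topological sort via Kahn's algorithm; returns None on a cycle.
from collections import deque

def topo_sort(edges, n):
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    q = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return order if len(order) == n else None  # None => a cycle exists

print(topo_sort([(0, 1), (0, 2), (1, 3), (2, 3)], 4))  # [0, 1, 2, 3]
print(topo_sort([(0, 1), (1, 0)], 2))                  # None: cyclic
```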
It's the meaning of the universe, my friend: geometric cognition. Couldn't have figured it out without you.
This video is so interesting. Thanks a lot for making this video and please keep delivering content about computational neuroscience in an informative yet easily digestible way!!
I am a rather regular software developer and I kind of try to avoid too much math, but this video is so phenomenal that even with my forgotten knowledge I could easily follow what was explained here.
Incredible content honestly
I got interested in neurology a few years ago but lost interest. This video has definitely made me want to study it again. You explained everything so simply and perfectly. Definitely one of the best scientific videos I've ever seen on YouTube ❤❤❤❤
This video is so, so wonderful, thank you!! All very beautiful, interesting, and clear. Good luck with the next videos, and thank you.
Please keep it coming or going - whichever makes you make more of these till the end
you explain concepts so well & eloquently. the theoretical simulations, etc.
incredible video, hope you make more like this!
This is so well explained and an amazing video!
Thank you so much for this video!
As a pre-bachelor interested in psychology and neuroscience, I found this video extremely interesting to watch! The visual explanation of scale was so intuitive that I could immediately transfer it to my own theory of the brain's functioning.
It made me wonder further, for instance, about what the difference between a child's brain and an adult brain would be here, as synaptic pruning kicks in. A child's brain is much higher in disorder, meaning much less stable avalanches and patterns while the brain develops.
It would be very interesting to figure out how, or if, a high-disorder system would be self-stabilizing towards greater order with age, once one accounts for reinforcement and inhibition rules.
The camera I mentioned below was looking at its own image: it was a USB microscope camera looking at a few pixels of its own image (where the waterfall seed is composed only of the center horizontal line of camera video) delayed in the waterfall. This is roughly equivalent to this critical-point concept. Long story short, echoes of what it had seen some time ago were fed back into the system to appear again in an echo-distorted form. Tuned carefully, it would go on without saturating or dying out.
Thank you.
What a great video! Keep it up
What a beautiful video !
Artem, you do sci comm like no other. Thank you 🙏
THANK YOU so much for scale invariance.