Fantastic work! I'm impressed both by the discovery itself and by how well this video explains the concepts. I really like the moments to pause and think before the answer is given. Good job!
Hello friend, great video, a very faithful overview of the subject (I also work in neuroscience). I feel like the video scratched the surface of some very deep-reaching concepts, so I was wondering if you could elaborate on a few questions of mine:

1) What does the number of holes actually mean when applied to neuronal manifolds? Are they somehow a fundamental property underlying a given dynamical system, invariant of its representation/implementation?

2) If I understand the setup correctly, the neuronal recordings are a set of samples from a multivariate probability distribution. Thus, experimentally obtained data is finite and not continuous. How does one proceed to estimate the shape of the underlying manifold given finite data? Is there any way to tell apart a hole from an absence of samples, potentially due to undersampling?

3) How reliable are the statements about intrinsic dimension being smaller than embedding dimension in real data? I would assume there is some noise in the system, for example noise introduced by the instrument or by the pre-processing procedure (such as the approximate spike-rate estimate). My naive guess would be that there is non-zero variance along all embedding dimensions. Is this the case? Are there embedding dimensions along which the variance is truly zero? If not, then I presume a dimensionality-reduction procedure (like PCA) is used to drop dimensions whose relative variance is much smaller than the others'. Such procedures always destroy information, although the fraction can be well-controlled. So, is it justified to artificially make the manifold smoother by throwing out dimensions of lower variance? In my experience, the largest-variance axes are frequently unrelated to the research question (I still wonder what they do), while interesting task-related activity can be found in some of the lower-variance dimensions.

I would be happy for any feedback (a talk in person, an answer, a link to a paper or two, etc.). Have a great day!
Hi! These are all terrific questions!

1) I think finding whether there is an invariant property shared by all / many neural manifolds is one of the "holy grails" of theoretical neuroscience: something about the dynamics that would indicate that it reflects the underlying "neuronal computation". But as far as I know, we currently don't have anything like that. At this stage, we are looking at different properties of different manifolds of neural activity in an attempt to find a pattern and interpret it, for example stability (doi: 10.1016/j.neuron.2017.05.025) and orthogonality (doi: 10.1038/ncomms13239). And if we discover that neural activity manifolds of certain brain areas, for example, have a different number of holes, or that the number of holes depends on the behaviour, that is a starting point to hypothesise what it might mean for the underlying computation (such as the topology of the encoded variable), design additional experiments, etc. But the interpretation of what exactly these "invariant properties" of neural manifolds actually mean is still at a very nascent stage.

2) Exactly! To uncover the "shape" of data (which are essentially discrete points) we use an algorithm called persistent homology (great explanation: ruclips.net/video/h0bnG1Wavag/видео.html). It allows us to reconstruct manifolds from a set of points and estimate their topological properties.

3) You are absolutely right: in real experiments the data is always N-dimensional due to noise, caused by our measurements or by the neural activity itself. I haven't looked into that too deeply, but I think, just like with any PCA, we can ignore the dimensions with low variance, and there is some arbitrariness in this threshold. I'm afraid I don't have a great answer right off the bat; perhaps it's explained in more detail in the original paper itself (doi: 10.1038/s41593-019-0460-x). Hope some of it made sense ;)
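The persistent homology step mentioned in point 2 can be made concrete in its simplest setting. Below is a toy sketch (not the full Vietoris-Rips pipeline; all names and data are made up for illustration) of dimension-0 persistence: grow a neighborhood radius around discrete data points and record at which radius connected components merge. Long-lived components reflect real structure, short-lived ones are noise, which is exactly how a "hole" is told apart from undersampling.

```python
import itertools, math, random

def persistence_h0(points):
    """Death radii of connected components as the neighborhood radius grows.

    Components that survive to large radii correspond to real clusters;
    short-lived ones are noise. (Dimension-0 persistent homology only.)
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # All pairwise distances, processed from smallest to largest
    # (the "filtration" of persistent homology).
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in itertools.combinations(range(n), 2)
    )
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:              # two components merge: one of them "dies"
            parent[ri] = rj
            deaths.append(d)
    return deaths                 # n - 1 merge radii; one component lives forever

random.seed(0)
# Two well-separated 2D clusters of sample points (synthetic data)
cluster_a = [(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(20)]
cluster_b = [(random.gauss(5, 0.1), random.gauss(5, 0.1)) for _ in range(20)]
deaths = persistence_h0(cluster_a + cluster_b)
# The largest death radius (the cluster separation) dwarfs all the others.
print(max(deaths), sorted(deaths)[-2])
```

The same grow-a-radius filtration, tracked in dimension 1 instead of 0, is what detects the loops discussed in the video; libraries such as Ripser implement that general case efficiently.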
Having read the paper, I can add to question #3 that yes, indeed, they do reduce the dimensions, with a method called "Isomap", which uses minimum geodesic distances along the high-D manifold to perform the dimensionality reduction. In principle this method preserves the topology of the manifold, which is also why you can see the rings in their 3D plots. PCA is not as good a fit because it assumes each individual dimension (neuron) is normally distributed, which would mean the manifold is a blob or sphere, but not a donut.
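For intuition, here is a from-scratch sketch of the Isomap idea (k-nearest-neighbor graph, geodesic distances along the graph, then classical MDS). This is a toy illustration with made-up parameters on synthetic data, not the implementation used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy ring embedded in 3D: intrinsic dimension 1, embedding dimension 3.
theta = rng.uniform(0, 2 * np.pi, 120)
X = np.c_[np.cos(theta), np.sin(theta), 0.3 * np.sin(2 * theta)]
X += 0.02 * rng.standard_normal(X.shape)

# 1) Pairwise Euclidean distances; keep only the k nearest neighbors.
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
k = 8
G = np.full_like(D, np.inf)
idx = np.argsort(D, axis=1)[:, 1:k + 1]
for i, nbrs in enumerate(idx):
    G[i, nbrs] = D[i, nbrs]
G = np.minimum(G, G.T)          # symmetrize the kNN graph
np.fill_diagonal(G, 0.0)

# 2) Geodesic distances along the graph (Floyd-Warshall; fine for n=120).
for m in range(len(X)):
    G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])

# 3) Classical MDS on the geodesic distances.
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (G ** 2) @ J
w, V = np.linalg.eigh(B)
Y = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))   # top-2 embedding

# The ring's topology survives: embedded points sit at a roughly constant
# distance from their centroid (a circle, not a blob).
r = np.linalg.norm(Y - Y.mean(0), axis=1)
print(r.std() / r.mean())
```

Because step 2 measures distance along the manifold rather than through the ambient space, the hole in the ring is preserved in the 2D output, which is the property the comment above points out.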
That's interesting. However, I have a question that is a bit glossed over: is it actually justified to smooth out the spikes with a lowpass and just look at the average spike rate? I mean, from the get-go it's not at all obvious why something like this should be justifiable. It's entirely conceivable that the exact timing of each individual spike matters: maybe they are synced in some way and need to arrive in sync at their target neurons to work as intended? I'm sure neuroscientists have figured that out and determined that this is not the case, but can someone shed some more light on that?
Nope, there's no real justification to that, except that it works in most cases. The brain most often encodes data in stochastic firing rates (as far as we can tell), but we know there are some parts of the brain (like the optic nerve) where timing has a much larger effect. This is mostly to make analysis tractable, as adding precise timing information would blow up your dimensionality to something unusable. There aren't really any scalable algorithms that target that sort of timing data either.
This is actually a rather hot debate in neuroscience: precise-time vs. rate coding. On the one hand, some people argue that neurons encode the precise time of spikes, and therefore smoothing with a low-pass filter destroys the very information you are trying to look at. There is some evidence for this: some neurons act like biological logic gates at super-fast timescales, such as an AND gate firing only if two spikes are received at the same time. It has also been shown that some neurons will always fire at the same time when doing a particular task, something like 50 ms after starting a movement. However, the analysis of precise spike times is very difficult due to the high uncertainty of the recordings (some spikes can be missed or wrongly attributed to a neuron) and the lack of mathematical tools to analyze event sequences. On the other hand, many neurons also act like integrators; that is, they remember spikes for a while and become active after a certain threshold is hit. For the rate-coding theory, smoothing over many spikes is fine, since every single spike is not that important, just the average over a slightly longer time. Rates are often used because they are much easier to work with than precise times and have been shown to work quite well for some predictions, such as head direction (mentioned in the video), hand movement direction, auditory/visual input, and more. I think the truth will be some combination of the two neural coding theories, which still have to be unified properly.
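The rate-coding side of this debate boils down to one operation: convolve the spike train with a smoothing kernel. A minimal sketch with illustrative (not physiological) numbers; the kernel width sigma is exactly the contested choice, since a large sigma assumes rate coding while a small sigma tries to preserve precise timing:

```python
import numpy as np

def firing_rate(spike_times, t_grid, sigma=0.05):
    """Smooth a list of spike times into a firing-rate estimate (Hz) by
    summing a normalized Gaussian kernel centered on each spike.
    sigma (seconds) is the smoothing width debated above."""
    diffs = t_grid[None, :] - np.asarray(spike_times)[:, None]
    kernels = np.exp(-0.5 * (diffs / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return kernels.sum(axis=0)

rng = np.random.default_rng(1)
# Inhomogeneous Poisson spikes: 5 Hz baseline, 50 Hz burst during [1.0, 1.5] s.
t = np.arange(0, 2, 0.001)
rate_true = np.where((t > 1.0) & (t < 1.5), 50.0, 5.0)
spikes = t[rng.random(t.size) < rate_true * 0.001]   # per-bin thinning

rate_est = firing_rate(spikes, t)
print(rate_est[(t > 1.1) & (t < 1.4)].mean())   # mean estimate inside the burst
print(rate_est[t < 0.9].mean())                  # mean estimate at baseline
```

The estimate recovers the burst, but any sub-sigma timing structure (e.g. the coincidence detection mentioned above) is erased by construction, which is precisely the objection of the precise-timing camp.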
Aha! Thanks for the replies. Well, I would say that "it works in most cases" actually *is* a justification. I mean, that's how science operates, right? Make a hypothesis, make a prediction based on it, collect data, see if the data is compatible with the prediction; if it is, the hypothesis gets bumped up in credibility. But yeah, it would also be a lot nicer to have not only experimental evidence but also a plausible physiological explanation. Maybe there are indeed different kinds of neurons with different modes of operation. ...It's all very complicated... :-)
@@MusicEngineeer Just to add something that might be related: I watched "How we see photons", and it stated that multiple signals need to arrive from the eye before the brain says "yeah, a photon was there" and you see it. These signals come from individual cells, so for me the averaging is a mathematical representation of this mental process. I think that in general the brain needs more time than the body's sensors to produce meaningful outputs, which I think justifies the opposite question: would the brain even be able to deal with such an amount of signals without averaging? Add to this that moving hands or legs is much slower, so the brain may need to wait. So I would change "it seems to work in most cases" to "the results seem to show that this is the most usual brain behavior". To me this slightly touches the free-will debate: there is no "instantaneous you" that is worth examining; it is spread over a short period of time. @Aitor Thanks a lot for your expert information and super detailed post.
@@Posesso I wouldn't characterize it as averaging. It is more like integration, but note that there are negative inputs. Basically, a neuron will fire if it crosses a certain voltage threshold. If it receives a number of positive inputs in quick succession, it will cross this threshold, but if the inputs are more spread out, it may not, because there is a leakage current (the voltage tends toward a resting steady state at all times). Not all inputs are additive either; there are also inhibitory neurons that lower the voltage of the receiving neuron. To complicate that mechanic further, a neuron that is heavily inhibited can fire when the inhibition stops, because the pull back to the steady state can overshoot and cross the firing threshold. Averaging is a pretty crude tool here, and as for assessing how well it works, there is potentially selection bias at play (people often won't publish the times that averaging failed to get them useful results). Averaging is certainly useful at least sometimes, but there certainly is an instantaneous 'you' with differential merit; how one wants to apply that to the philosophy of free will, the ship of Theseus, etc., is a matter I won't go into today. Since integration in this sense is not averaging, I would encourage you to look at the classic Hodgkin and Huxley equations for modeling neurons (the Wikipedia article is titled "Hodgkin-Huxley model"). This is still the general standard model of neuronal behavior, with stronger models of the same derivation adding more kinds of ion channels or more complicated neuronal geometry, and weaker models using some sort of approximation.
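The integrate-and-leak behavior described above can be sketched with the simplest standard abstraction, a leaky integrate-and-fire neuron (far cruder than Hodgkin-Huxley; all constants here are illustrative). The same five input spikes trigger an output spike when clustered in time but not when spread out, which is exactly why integration is not averaging:

```python
def lif_spikes(input_times, t_end=0.5, dt=0.0001,
               tau=0.02, v_rest=0.0, v_thresh=1.0, weight=0.3):
    """Leaky integrate-and-fire: voltage decays toward rest with time
    constant tau; each input spike bumps it by `weight`; crossing
    v_thresh emits an output spike and resets the voltage."""
    v, out = v_rest, []
    inputs = sorted(input_times)
    i, t = 0, 0.0
    while t < t_end:
        v += (v_rest - v) * (dt / tau)      # leak toward rest
        while i < len(inputs) and inputs[i] <= t:
            v += weight                     # excitatory input arrives
            i += 1
        if v >= v_thresh:
            out.append(t)
            v = v_rest                      # reset after firing
        t += dt
    return out

# Same 5 input spikes: clustered within 5 ms vs. spread over 400 ms.
clustered = [0.100, 0.101, 0.102, 0.103, 0.104]
spread = [0.05, 0.15, 0.25, 0.35, 0.45]
print(len(lif_spikes(clustered)), len(lif_spikes(spread)))   # 1 0
```

With the spread-out inputs, the leak (time constant tau) erases each 0.3 bump long before the next one arrives, so the threshold is never reached; the clustered inputs sum faster than they decay.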
It's a great presentation of a good example, although I wonder how complicated the structures would be for more complex neural activities, like human cognitive behaviours: say, abstract decision-making, or imagining an expression. I think the shapes would turn out more interesting in these domains, and would possibly require greater computational power. Anyway, great job!
Naw, dawg. Complex computations are handled by the way neurons connect together and a ton of repetition. The best-studied example of this is vision. If you look into it, you can easily see how collections of neurons detect, compare, contrast, categorize, perform analogies, and contextualize. Sure, there are other logical functions of neurons, but it's a good start. Ya know? You might wanna check out the website by McGill University, The Brain From Top to Bottom.
1. You need to elaborate on what you mean by "abstract decision making"; it's too vague a question. 2. The "power" used by the brain when you solve a complicated problem and when you recognise a celebrity on Instagram is the same. Actually, the brain uses more power when you sleep than when you are awake.
I study questions somewhat related to this (at an intersection of depression and persistent homology) -- I think you're very likely correct about needing higher-dimensional structures, but the computational cost of articulating topological information (i.e., homology, as in the video) at more than 3 or 4 dimensions can be pretty prohibitive, especially on a population-sized dataset.
When I first watched this video two years ago, I found the concept of manifold weird and new. Now two years into uni, the chapter we are studying in my math course right now is precisely on writing proofs for smooth manifolds: tangent spaces, diffeomorphism, nonlinear systems, implicit form... How time has passed!
I just looked at all the videos tagged #some1 and I'm staggered by how many people added videos to this. I'd like to check them and review them, but holy moly is it a lot of videos.
I really love your videos and your work. I'm starting out in the field of neuroscience with a background in biomedicine but little in math. Your videos really help me understand some complex things. Thanks a lot! I also really like that you put references and suggested readings in your videos.
The last time I read about this idea of a huge-dimensional space (one dimension per neuron) was in a book chapter: Zeeman, E. C., "The topology of the brain and visual perception", in M. K. Fort (Ed.), Topology of 3-Manifolds, Englewood Cliffs, N.J.: Prentice-Hall, 1962, Part 6, "The metric on the cortex". I read it in the '80s and then never saw anything related again. Interesting to see new things about this 35 years later.
Love this guy! I am all over this approach! Topology, from Poincaré on, was originally conceived as a way to understand constraints on the solution sets of various differential equations... I think this approach can bear much fruit in this arena 🤩
Excellent effort at explaining a subject that can be a bit convoluted and non-intuitive. Linking the activity of neurons to their topological/geometrical activity space is key to starting to understand the world of neuronal population activity.
Congratulations on this interesting and entertaining video. Very abstract concepts such as manifolds, homeomorphisms, etc. are clearly explained without getting into technical complications.
Hi Artem, I have a robot that moves randomly around an apartment using two wheels. Each wheel can be powered independently: applying the same power to both wheels makes the robot move forward, while applying the same power in reverse makes it move backward. As the robot moves, it captures 32x32 RGB images of its surroundings. For each image, I also have a record of the action taken (the left and right wheel speeds) that led to that particular image. What I'm aiming for is to create a representation of the apartment as a continuous, unified "landscape". In this landscape, each motion (described by the left and right wheel speeds) would lead to a specific point, represented by an image or some equivalent feature. This would allow the robot to navigate intentionally to a particular spot in the apartment by moving toward the corresponding point in the landscape. I've gathered nearly 100,000 records of the robot's movements and corresponding images, along with the image features extracted from them. Now, I want to use this data to help the robot navigate its environment in a purposeful, directed way, so it can reach a desired image or location.
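One possible direction for the question above, sketched under heavy assumptions (that the image features can be discretized into repeatable "states", e.g. by clustering, and that transitions are roughly deterministic; every name and value below is hypothetical): treat the logged (state, action, next state) triples as edges of a graph and plan routes over it.

```python
from collections import defaultdict, deque

def build_transition_graph(records):
    """records: (state_before, action, state_after) tuples, where a 'state'
    is a discretized image feature (e.g., a cluster id) and an action is a
    (left_speed, right_speed) pair. Returns adjacency: state -> {next: action}."""
    graph = defaultdict(dict)
    for before, action, after in records:
        graph[before][after] = action
    return graph

def plan(graph, start, goal):
    """Breadth-first search over the state graph; returns the action
    sequence leading from start to goal, or None if unreachable."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, actions = queue.popleft()
        if state == goal:
            return actions
        for nxt, action in graph[state].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, actions + [action]))
    return None

# Toy log: four "rooms" linked by wheel-speed commands (all values made up).
log = [
    ("hall", (0.5, 0.5), "kitchen"),
    ("kitchen", (0.2, 0.6), "bedroom"),
    ("hall", (0.6, 0.2), "bathroom"),
    ("bedroom", (-0.5, -0.5), "kitchen"),
]
graph = build_transition_graph(log)
print(plan(graph, "hall", "bedroom"))   # [(0.5, 0.5), (0.2, 0.6)]
```

With 100,000 records the interesting part is the discretization itself; the manifold-learning methods in the video (and Isomap-style embeddings of the image features) are one way to get a continuous "landscape" instead of discrete rooms.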
7:36 Actually, in Einstein's theory of gravitation our universe is a curved space (a 4-dimensional differentiable manifold), not a Euclidean space. Fascinating video btw
16:14 on the sphere when you crossed a loop between one end and another point, it was as if you were crossing the sphere on a 2d surface, so we can apply the same logic to the torus, right?
Your explanations are a work of art/science; even I understood some of this. I wonder if the same approach could be used to study very large artificial neural networks (ANNs). It seems ANNs could become black boxes that we depend on without knowing how they really compute.
16:25 If your path passes through the hole once and returns to your starting point, at some point your left-to-right orientation on the path will be the mirror image relative to someone who did not take a path passing through the hole. That does not happen on a sphere.
Very beautiful! Thank you so much for this video! It was so intuitive. I had an idea of how algebraic topology comes into play in the functioning of the brain, but seeing it happen before my eyes and visualizing it left me wonderstruck and at a loss for words! Once again, thank you so very much!
21:00 -- It is amazing that this circuit is so independent. Does the shape of the integral manifold hold constant? Does it vary from mouse to mouse? I wonder if the shape is modulated by interaction with other networks. Did the mouse have a full EEG during the experiment?
Great video! Surface: (u, v) ↦ (cos(u/2)·cos(v/2), cos(u/2)·sin(v/2), sin(u)/2), with u ∈ [0, 2π], v ∈ [0, 4π]. The lost Klein bottle? I propose this as our manifold: a single-sided closed surface.
Super Artem
A good start. Don't get too excited. You'll have too many actual insights if you get deeper.
It is great, except the last comment about impressing girls!
@@ardic97sokak What's wrong with that?
If a layman like me can understand this then this guy is winning this competition for sure. Respect from India!
Thanks!! I hope so 😅
(I’m sorry for the late reply - my semester has just begun and biochemistry lab reports are being merciless)
@@ArtemKirsanov good luck with your semester!
@@ArtemKirsanov is biochemistry mostly fun and enjoyable? Can you share how you stay motivated? And do you have to study 24/7 or just a few hours a day is enough? Thanks for sharing.
An Indian guy says your RUclips tutorial is good. You can relax now, you have officially won the internet.
🇮🇳🇮🇳🇮🇳
“So next time you try to impress girls by talking about topology you won’t be limited by coffee mugs and donuts”
Fuck you got me, brilliant
:')
You hang out with the wrong girls!
I am a researcher who investigates how architectural space impacts our psyche, and topology and Lacanian psychoanalysis (which is based on topology) impressed me. Your video is just brilliantly presented: the information, the flow, the few puns, the animations. It's perfect. Keep on rocking more spaces!
Will look this up! I keep eye-matching multiple objects, playing with perception and the overall profiles of buildings and things. It's amazing how things line up.
The most pointless researcher job
If it gets proven mathematically that Lacanian models are somehow accurate, I will literally shit myself.
This is about neuroscience. Psychoanalysis has nothing to do with science. Good luck!
This is not for you.
This is INSANELY GOOD. I've never seen such a good exposition of topological spaces and manifolds without it being pedantic.
Dude, super well done. I also entered the competition and I think this is one of the best entries I’ve seen (I’ve seen a lot!). Good luck, well deserved!
Thanks so much, man!! I appreciate it
@@ArtemKirsanov This vid is blowing up!
@@ArtemKirsanov Dude you were in the top 20! Congrats man I was hoping for this one!
This is why i'm studying mathematics. Thank you for this amazing video.
As someone who has a BS in math and is struggling to land an actuarial job, study neural engineering
The problem with math is teaching children they can have -5 apples. On paper it works; in reality you're teaching them debt.
Awesome presentation! Interestingly, in my lab we use nearly identical methods to describe turbulent fluids. Resolve the fluid velocity vector field on a grid, stack the velocity components to make an N-dimensional vector, and you can describe the evolution of the fluid as a 1D trajectory through ND space. Once again, not all configurations of the fluid are possible because of physical constraints, so the trajectory is bound to the "inertial manifold".
We can use this description to compute all sorts of interesting things about the fluid, but I should be writing that in my thesis and not on RUclips right now 😅
we should ALL be writing our thesis and not on youtube right now
That sounds super cool, where/with which keywords could I find out more about that?
@@jessevos3986 search for dynamical systems and state space, which will take you to articles about the general method. Add "fluid" to your search terms to see it applied like I described
Awesome
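The state-space picture in this thread (stack everything into one N-dimensional vector, watch the system trace a 1D curve, note that it stays on a lower-dimensional set) can be illustrated with any dissipative system. Here the Lorenz system stands in as a 3D toy example; it is not a fluid model, just the same geometric idea:

```python
import numpy as np

def lorenz_step(s, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz system: the entire system state is a
    single point s in 3D state space, so time evolution traces a 1D curve."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

# Integrate: rows of `trajectory` are successive points of the 1D curve.
trajectory = [np.array([1.0, 1.0, 1.0])]
for _ in range(8000):
    trajectory.append(lorenz_step(trajectory[-1]))
trajectory = np.array(trajectory)

# Not every state is visited: the trajectory stays on a bounded attracting
# set (the analogue of the "inertial manifold" mentioned above).
print(trajectory.shape, np.abs(trajectory).max())
```

For a real fluid, each "point" would be the stacked velocity components on the whole grid (N in the thousands rather than 3), but the trajectory-on-a-manifold picture is identical.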
Fantastic! I've been waiting for some 3B1B-grade classes on brain information processing. Please keep the subject going. Thanks!
I'm crying this is so insightful and beautiful 😢
Absolutely brilliant, you've managed to explain all of this so plainly and clearly, one of the best videos on topology I've seen on YT
'Thought-space' has long fascinated me, both neurologically and philosophically. I've often wondered if the mathematical and philosophical world could be connected by understanding thought-space on a neurological level.
Excellent work by all those involved in this presentation!!
Thought Space?
@@davidarvingumazon5024
I think it's also called headspace, at times.
Idk if he talked about this. But I saw this concept in a (psych)trip once represented as a Möbius strip
Yuuuuup! For years I've believed that every thought, idea, and piece of knowledge/information has a kind of 'geometry' to it. So when I have that visceral recognition, "I've seen this shape of thing before", I take it very seriously. It really shows how silly the siloing of fields is. Yes, you really did see that same "shape" in literature, and physics, and art, and 15 other places. We really need to be exploring and learning about the incredible creatures and ecologies that make up the human zoo of thought-space.
This is a fantastic overview of the topic, with valuable references. Please, do a follow-up video. Thx
This is so awesome! And it's so refreshing to see that you were featured in 3Blue1Brown's videos. Good luck to you on your journeys in math and neuroscience!
Subscribed! Amazing presentation with extremely well-made animations. I'm glad you don't assume viewers are only interested in a watered-down version of the more difficult concepts, made easier to understand at a loss of generality. Also, the profundity of the result at the end (that the measured group of neurons actually represents a 1D manifold, just as expected) is extremely interesting. Thanks for this.
This is easily the best explanation of the basics of topology I've ever come across. I finally understand what a manifold is! Fascinating video.
Smart man. I appreciate your clear explanations of a concept I find sometimes difficult to explain to laymen. I appreciate the food for thought.
I'm only one minute in, and I already love the intro: no filler BS, instead an itinerary of today's presentation.
Amazing job on this video. Bravo. You've clearly taken inspiration from one of YouTube's top science communicators.
You're a smart guy. This presentation was extremely well done. Very thorough and very clear. The best I've seen yet. Please keep up the good work !
@Artem Kirsanov, this is absolutely brilliant in so many ways. Just wow!
Though I am far from fluent in any of the languages you are speaking, on a more intuitive and perceptive level, everything you share in this video is absolutely relevant to addressing fundamental problems in neurology, environmental science, philosophy, and myriad other fields and disciplines, including several on which I am intensely focused, such as tackling species-level, planetary-level, and whole-systems existential threats and transforming them into existential opportunities.
I will review again, dive a little deeper into some of the concepts for a stronger grasp, and hopefully return.
This is the first time I've seen a video of yours, and I am blown away. I've been following 3B1B for quite some time and love his work and spirit too. You both represent a whole new world that is being birthed.
Your video above is exciting, deep, uplifting, beautiful, and quite literally enlightening. Thank you.
I look forward to seeing more, and hope to meet someday, hopefully in the not too distant future. Best regards from Brazil.
This is an amazing video. Thoroughly explained a complex topic, and I really loved your emphasis on developing an intuitive understanding. Great work!
Sensory afferents in certain species (e.g. weakly-electric fish) can fire at rates well above 500Hz, but these are exceptional. Very nice video!
Excellent. You make me want to continue neuroscience studies by pursuing my Masters / PhD. I also like how you were able to omit using the word 'attractor' throughout the entire video.
I was studying the topic through a paper, and honestly I had no idea what is topology's business with neuroscience, your presentation shed a light on that. Thank you a lot.
17:40 - error. It is distinguishable by comparing the lengths of string in different directions: going at 90 degrees from the start point, returning back to it, and measuring how long the string is. That way a very precise map of the surface can eventually be built. Each crossing of the line is a reference point, and you can always monitor how far down the line you are, etc.
Though I am not a neuroscience student it is a joy to see how mathematics is being applied creatively to capture the essence of a phenomenon. You got a subscriber for life.
This is such a good video! I loved every minute of it and was absolutely captivated. Great job Artem!
I paused the video at 20:30 to think about & answer the question, and I got it right, which felt really good and shows that you did a good job explaining the concept. This was so interesting! I might have to check out some of the references in the description!
Awesome video, man. When I looked at how many subs you had, I had to do a double take because I thought it said 2 million! Good luck in the contest!
This is the best explanation of computational neuroscience I have ever heard !
This video is not only a piece of art in itself... it made me feel a positive hope for many reasons: how young this guy is, and how the work started by 3B1B is inspiring other great-quality work. Congratulations on this great work, and... thanks!
Also, that closing statement truly hit the nail on the real practicality of topology
Thank you. Not only is the topic cool and unusual, but the Russian accent also helps me follow the speech and makes watching less tiring than with other English-speaking creators. Subscribed to the channel : )
Artem, let me speak from my heart! I understand your pronunciation well. This is the first time it has been this good for my listening skills.
Now I have passion for topology in neuroscience. Thank you for these excellent videos!
One of the best channels on RUclips! I've got so much out of this channel, thanks so much 🙏
I didn't realise head direction cells were strongly associated with the thalamus (always think EC by default).
Though if we consider all global reference frames as egocentric (just hypothetically, though there is an interesting paper by Flavia Filimon on the concept), then it does kind of make sense that all cortical regions that collaborate spatially would retain strong origin-like components in their "signal".
Time to impress girls with my new knowledge of topology!
This video was a huge benefit to me. I always wanted to get into topology, but I am only really interested in how it relates to the brain. Now I understand fundamental topology better, as well as how it applies to neural population dynamics. Thanks!
This absolutely blew my mind. What a fantastic video!
man... coolest animations... heck, youtubers rule, man... boosting visualization to new peaks...
This is so amazing. I'm a software engineer, and using these analyses we could create an interface between brain and software without surgery.
I really enjoyed your presentation. Well done and looking forward to seeing more of neural manifolds. 💪🏻
This is literally mind blowing. Awesome video and I’m excited to see more!
Awesome video! I'm a bit confused about one part: how do you transform the cloud of data points in higher-dimensional space into the loop structure?
you are one of the clearest and smartest people i know, thank you
Unbelievably incredible video. Amaaaazing work!
Fantastic work. I'm impressed by the discovery itself and by how well this video explains the concepts. I really like the moments to pause and think before the answer is given. Good job!
I'm a layman and played it at 2x speed. And I still understood it.
Absolutely brilliant. BRAVO!
amazing video. those infographic videos are a piece of art.
Hello Friend, great video, very faithful overview of the subject (I also work in neuroscience). I feel like the video scratched the surface of some very deep-reaching concepts, so I was wondering if you could elaborate on a few questions of mine:
1) What does the number of holes actually mean when applied to neuronal manifolds? Are they somehow a fundamental property underlying a given dynamical system that are invariant of its representation/implementation?
2) If I understand the setup correctly, the neuronal recordings are a set of samples from a multivariate probability distribution. Thus, experimentally obtained data is finite and not continuous. How does one proceed to estimate the shape of the underlying manifold given finite data? Is there any way to tell apart a hole from an absence of samples, potentially due to undersampling?
3) How reliable are the statements about intrinsic dimension being smaller than embedding dimension in real data? I would assume that there is some noise in the system, for example noise introduced by the instrument or by the pre-processing procedure (such as the approximate spike-rate estimate). My naive guess would be that there is non-zero variance along all embedding dimensions. Is this the case? Are there embedding dimensions along which the variance is truly zero? If not, then I presume that a dimensionality-reduction procedure (like PCA) is used to drop dimensions whose relative variance is much smaller than that of the others. Such procedures always destroy information, although the fraction can be well controlled. So, is it justified to artificially make the manifold smoother by throwing out the dimensions of lower variance? In my experience, the largest-variance axes are frequently unrelated to the research question (I still wonder what they do), while interesting task-related activity can be found in some of the lower-variance dimensions.
I would be happy for any feedback (talk in person, answer, link to a paper or two, etc)
Have a great day
Hi! These are all terrific questions!
1) I think one of the "holy grails" of theoretical neuroscience is to find whether there is an invariant property shared by all / many neural manifolds: something about the dynamics that would indicate that it reflects the underlying "neuronal computation". But as far as I know, we currently don't have anything like that. At this stage, we are looking at different properties of different manifolds of neural activity in an attempt to find whether there is a pattern, and to interpret it. For example, stability (doi: 10.1016/j.neuron.2017.05.025) and orthogonality (doi: 10.1038/ncomms13239)
And if we discover that the neural activity manifolds of certain brain areas have, for example, a different number of holes, or that the number of holes depends on the behaviour, that is a starting point to hypothesise what it might mean for the underlying computation (such as the topology of the encoded variable), to design additional experiments, etc.
But interpretation of what exactly these "invariant properties" of neural manifolds actually mean is still at a very nascent stage
2. Exactly! To uncover the "shape" of the data (which is essentially a discrete set of points) we use an algorithm called persistent homology (great explanation: ruclips.net/video/h0bnG1Wavag/видео.html). It allows us to reconstruct manifolds from a set of points and estimate their topological properties
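To make the hole-counting idea concrete, here is a toy numpy sketch. This is not real persistent homology (which tracks holes across all radii and fills in higher simplices); it just counts connected components and independent cycles of a neighborhood graph at a single radius, which already distinguishes a sampled loop from a blob:

```python
import numpy as np

def graph_betti(points, eps):
    """Crude Betti numbers of the eps-neighborhood graph of a point cloud.

    b0 = number of connected components; b1 = number of independent
    cycles of the graph (E - V + b0). Real persistent homology would
    also fill in triangles and sweep eps over all radii.
    """
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if d[i, j] < eps]

    # Union-find with path compression to count connected components
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in edges:
        parent[find(i)] = find(j)
    b0 = len({find(i) for i in range(n)})
    b1 = len(edges) - n + b0
    return b0, b1

# Points sampled densely on a circle: one component, one hole
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(graph_betti(circle, eps=0.2))  # -> (1, 1)
```

With a radius that connects only neighboring samples, the graph of the sampled circle is itself a cycle, so the sketch reports one component and one hole.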
3. You are absolutely right: in real experiments the data is always N-dimensional due to noise, caused by our measurements or by the neural activity itself. I haven't looked into that too deeply, but I think that, just as with any PCA, we can ignore the dimensions with low variance, and there is some arbitrariness in this threshold. I'm afraid I don't have a great answer right off the bat; perhaps it's explained in more detail in the original paper itself (doi: 10.1038/s41593-019-0460-x)
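A minimal numpy illustration of that variance-threshold step, on synthetic data rather than the paper's actual pipeline: a 3-dimensional latent signal is embedded in 20 "neurons" with a little isotropic noise, so every dimension has nonzero variance, yet a cumulative-variance cutoff recovers the true dimensionality:

```python
import numpy as np

rng = np.random.default_rng(0)

# 500 "timepoints" of 20-neuron activity that really lives in 3 dims,
# plus small isotropic noise, so no embedding dimension has zero variance
latent = rng.normal(size=(500, 3))
X = latent @ rng.normal(size=(3, 20)) + 0.05 * rng.normal(size=(500, 20))

Xc = X - X.mean(axis=0)
var = np.linalg.svd(Xc, compute_uv=False) ** 2   # variance along each PC
explained = np.cumsum(var) / var.sum()

# Components needed to reach 99% variance: the (somewhat arbitrary)
# threshold discussed above; here it recovers the latent dimension
k = int(np.searchsorted(explained, 0.99) + 1)
print(k)  # -> 3
```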
Hope some of it made sense ;)
@@ArtemKirsanov thanks a lot for this broad reply! I will certainly look at the references you suggested
whoa u two enlightened poor 17yo me
Having read the paper I can add to question #3 that yes indeed they do reduce the dimensions with a method called "Isomap" which uses the minimum geodesic distances along the high-D manifold to perform the dimensionality reduction. In principle this method conserves the topology of the manifold, which is also why you can see the rings in their 3D plots.
PCA is not that good here because it assumes each individual dimension (neuron) is normally distributed, which would mean the manifold is a blob or a sphere, but not a donut.
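For anyone who wants to poke at Isomap directly, here is a toy scikit-learn sketch (made-up data, assuming scikit-learn is installed; this is not the paper's pipeline). A 1-D ring is linearly embedded in 10 dimensions with a little noise, then reduced back to 2-D:

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 300)
ring2d = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Embed the ring in 10-D via a random linear map plus small noise,
# mimicking a low-dimensional manifold inside a high-D "neural" space
A = rng.normal(size=(2, 10))
X = ring2d @ A + 0.01 * rng.normal(size=(300, 10))

# Isomap approximates geodesic distances along the manifold via a
# nearest-neighbor graph, then embeds them in low dimension
emb = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(emb.shape)  # (300, 2)
```

Because Isomap preserves along-manifold distances rather than straight-line ones, the ring (and hence its hole) survives the reduction, which is consistent with the visible rings in the paper's 3D plots.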
This is top tier! Glad to see the field getting some attention too :)
You deserve much more views! Such clarity and fluidity between topics. Just subscribed, please keep up the great work!
that's interesting. however, i have a question that is a bit glossed over: is it actually justified to smooth out the spikes with a lowpass and just look at the average spike rate? i mean, from the get go, it's totally not obvious, why something like this should be justifiable. it's entirely conceivable that the exact timing of each individual spike matters - maybe they are synced in some way and need to arrive in sync at their target neurons to work as intended? i'm sure, neuroscientists have figured that out and have determined that this is not the case. but can someone shed some more light on that?
Nope, there's no real justification for that, except that it works in most cases. The brain most often encodes data in stochastic firing rates (as far as we can tell), but we know there are some parts of the brain (like the optic nerve) where timing has a much larger effect.
This is mostly to make analysis tractable, as adding precise timing information would blow up your dimensionality to something unusable. There aren't really any scalable algorithms that target that sort of timing data either.
This is actually a rather hot debate in neuroscience: precise time vs rate coding.
On the one hand, some people argue that neurons encode information in the precise times of spikes, and therefore smoothing with a low-pass filter destroys the information you are trying to look at. There is some evidence for this: some neurons act like biological logic gates at super-fast timescales, such as an AND gate firing only if two spikes are received at the same time. It has also been shown that some neurons will always fire at the same time when performing a particular task, something like 50ms after starting a movement. However, the analysis of precise spike times is very difficult due to the high uncertainty of the recordings (some spikes can be missed or wrongly attributed to a neuron) and the lack of mathematical tools to analyze event sequences.
On the other hand, many neurons also act like integrators: they remember spikes for a while and become active after a certain threshold is hit. Under the rate-coding theory, smoothing over many spikes is fine, since no single spike is that important, just the average over a slightly longer time. Rates are often used because they are much easier to work with than precise times and have been shown to work quite well for some predictions, such as head direction (mentioned in the video), hand movement direction, auditory/visual input, and more.
I think the truth will be some combination of the two neural coding theories, which still have to be unified properly.
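For the curious: the smoothing step being debated here is typically just a Gaussian low-pass over the binned spike train. A minimal scipy sketch with made-up spike times (the 20ms kernel width is an arbitrary choice for illustration, not a canonical value):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

dt = 0.001                        # 1 ms bins
spikes = np.zeros(1000)           # 1 second of activity
spikes[[100, 120, 140, 600]] = 1  # spike times, in bins

# Gaussian kernel with 20 ms standard deviation; dividing by the bin
# width turns smoothed spike counts into a rate in spikes/second
rate = gaussian_filter1d(spikes, sigma=20) / dt

print(round(rate[120]))  # peak rate (spikes/s) around the burst
```

Note how the burst of three nearby spikes produces a high rate while the lone spike at 600ms barely registers: exactly the information rate coding keeps, and exactly the per-spike timing it throws away.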
aha! thanks for the replies. well, i would say that "it works in most cases" actually *is* a justification. i mean, that's how science operates, right?: make a hypothesis, make a prediction based on it, collect data, see if the data is compatible with the prediction. and if it is, the hypothesis gets bumped up in credibility. but yeah, it would also be a lot nicer to have not only experimental evidence but also a plausible physiological explanation. maybe there are indeed different kinds of neurons with different modes of operation. ...it's all very complicated... :-)
@@MusicEngineeer Just to add something that might be related. I watched "How we see photons", and it stated that multiple signals need to arrive from the eye before the brain says 'yeah, a photon was there' and you see it. These signals come from individual cells, so for me the averaging is a mathematical representation of this mental process.
I think that in general the brain needs more time than the body's sensors to produce meaningful outputs, and I think this justifies the opposite question: would the brain even be able to deal with such a number of signals without averaging? Add to this that moving hands or legs is much slower, and the brain may need to wait.
So I would change "it seems to work in most cases" to "the results seem to show that this is the brain's most usual behavior".
To me this slightly touches the free-will debate: there is no 'instantaneous you' worth examining; it is spread over a short period of time.
@Aitor Thanks a lot for your expert information and super detailed post
@@Posesso I wouldn't characterize it as averaging. It is more like integration, but note that there are negative inputs. Basically, a neuron will fire if it crosses a certain voltage threshold. If it receives a number of positive inputs in succession, it will cross this threshold, but if the inputs are more spread out, it may not, because there is a leakage current (the voltage tends towards a resting steady state at all times). Not all inputs add, either: there are also inhibitory neurons that lower the voltage of the receiving neuron. To complicate the mechanics further, a neuron that is heavily inhibited can fire when the inhibition stops, because the pull back to steady state can overshoot and cross the firing threshold. Using averaging is a pretty crude tool here, and as for assessing how well it works... well, there is potentially selection bias at play (people often won't publish the times that averaging failed to get them useful results). Averaging is certainly useful at least sometimes, but there certainly is an instantaneous 'you' with differential merit. How one wants to apply that to the philosophy of free will, the ship of Theseus, etc. is a matter I won't go into today. Integration in this sense is not averaging; I would encourage you to look at the classic Hodgkin and Huxley equations for modeling neurons. The Wikipedia article is titled "Hodgkin-Huxley model". This is still the general standard model of neuronal behavior, with stronger models of the same derivation (more kinds of ion channels, or more complicated neuronal geometry) and weaker models using some sort of approximation.
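The integrate-with-leak behavior described above can be sketched in a few lines. This is a toy leaky integrate-and-fire model with arbitrary parameters (nothing like the full Hodgkin-Huxley equations): clustered inputs cross the threshold, while the same inputs spread out in time leak away first.

```python
import math

def lif_response(spike_times, t_max=200, tau=10.0, threshold=1.5, w=1.0):
    """Leaky integrate-and-fire sketch: each input spike adds w to the
    membrane voltage, which decays toward rest with time constant tau
    (in ms). Returns the times at which the neuron fires."""
    v, fired = 0.0, []
    inputs = set(spike_times)
    for t in range(t_max):
        v *= math.exp(-1.0 / tau)  # leak toward the resting state
        if t in inputs:
            v += w                 # excitatory input arrives
        if v >= threshold:
            fired.append(t)
            v = 0.0                # reset after firing
    return fired

# Three inputs close together push the voltage over threshold...
print(lif_response([10, 12, 14]))  # -> [12]
# ...but the same three inputs spread out decay away between arrivals
print(lif_response([10, 50, 90]))  # -> []
```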
it's a great presentation of a good example, although I wonder how complicated the structures would become for more complex neural activities, like human cognitive behaviours; say, abstract decision-making, or imagining an expression. I think the shapes would turn more interesting in these domains, and would probably require greater computational power.
Anyway, great job!
Naw, dawg. Complex computations are handled by the way neurons connect together and a ton of repetition. The best-studied example of this is vision. If you look into it, you can easily see how collections of neurons detect, compare, contrast, categorize, perform analogies, and contextualize. Sure, there are other logical functions of neurons, but it's a good start. Ya know?
You might want to check out the website by McGill University, The Brain From Top to Bottom.
1. You need to elaborate on what you mean by "abstract decision making". It's too vague a question.
2. The "power" used by the brain when you solve a complicated problem and when you recognise a celebrity on Instagram is the same. Actually, the brain uses more power when you sleep than when you are awake.
@@egor.okhterov do you have a reference for the amount of energy the brain consumes while asleep vs awake?
I study questions somewhat related to this (at an intersection of depression and persistent homology) -- I think you're very likely correct about needing higher-dimensional structures, but the computational cost of articulating topological information (i.e., homology, as in the video) at more than 3 or 4 dimensions can be pretty prohibitive, especially on a population-sized dataset.
Excellent. Nice detour that wraps it up where we began.
I'm a physicist and recently discovered these manifolds. Super interesting. Thanks for this, I hope Grant Sanderson sees it.
Your Video Is Efficiently Organized, Effectively Explained, And Informative At An Intuitive Level. Sincerely, Well Done.
When I first watched this video two years ago, I found the concept of manifold weird and new. Now two years into uni, the chapter we are studying in my math course right now is precisely on writing proofs for smooth manifolds: tangent spaces, diffeomorphism, nonlinear systems, implicit form... How time has passed!
I just looked at all the videos tagged #some1 and I'm staggered by how many people added videos to this. I'd like to check them and review them, but holy moly is it a lot of videos.
this is BY FAR my favourite SoME submission!
The video started off really well. I was pretty hooked by your energy and all the neuroscience jargon.
I really love your videos and your work. I'm starting out in the field of neuroscience with a background in biomedicine but little in maths. Your videos really help me to understand some complex things. Thanks a lot! Also, I really like that you put references and suggested readings in your videos.
The last time I read an article about this idea of a huge-dimensional space (one dimension for each neuron) was in a book chapter: Zeeman, E. C., "The topology of the brain and visual perception," in M. K. Fort (Ed.), Topology of 3-Manifolds. Englewood Cliffs, N.J.: Prentice-Hall, 1962, Part 6, "The metric on the cortex". I read it in the 80s, and then never saw anything related again.
Interesting to see new things about it 35 years later.
Amazing... opens up to new possibilities and way of approaches, thank you!
Love this guy! I am all over this approach! Topology, from Poincaré on, was originally conceived as an approach to the constraints on the solution sets of various differential equations... I think this approach can bear much fruit in this arena🤩
Excellent effort at explaining a subject that can be a bit convoluted and non-intuitive. Linking the activity of neurons to the topology and geometry of their activity space is key to starting to understand the world of neural population activity.
Congratulations on this interesting and entertaining video. Very abstract concepts such as manifolds, homeomorphisms, etc. are clearly explained without getting into technical complications.
I can't even express how good of a video this was, really good work :)
Hi Artem,
I have a robot that moves randomly around an apartment using two wheels. Each wheel can be powered independently: applying the same power to both wheels makes the robot move forward, while applying the same power in reverse makes it move backward.
As the robot moves, it captures 32x32 RGB images of its surroundings. For each image, I also have a record of the action taken (the left and right wheel speeds) that led to that particular image.
What I’m aiming for is to create a representation of the apartment as a continuous, unified "landscape." In this landscape, each motion (described by the left and right wheel speeds) would lead to a specific point, represented by an image or some equivalent feature. This would allow the robot to navigate intentionally to a particular spot in the apartment by moving toward the corresponding point in the landscape.
I’ve gathered nearly 100,000 records of the robot’s movements and corresponding images, along with the image features extracted from them. Now, I want to use this data to help the robot navigate its environment in a purposeful, directed way, so it can reach a desired image or location.
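Not sure what the video author would suggest, but one common baseline for this kind of setup is pure retrieval over the logged features. A heavily simplified numpy sketch with random stand-in data; the feature vectors, shapes, and the `step_toward` heuristic are all made up for illustration (a real system would learn a dynamics or policy model instead):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dataset: N logged (image_features, action) pairs,
# standing in for the ~100,000 recorded (image, wheel-speed) samples
features = rng.normal(size=(1000, 64))        # e.g. CNN embeddings of images
actions = rng.uniform(-1, 1, size=(1000, 2))  # (left, right) wheel speeds

def step_toward(current_feat, goal_feat):
    """Retrieve the logged action whose resulting image looked most like
    a point partway between the current view and the goal view. Pure
    nearest-neighbor retrieval; no learned model of the dynamics."""
    target = 0.5 * (current_feat + goal_feat)
    idx = np.argmin(np.linalg.norm(features - target, axis=1))
    return actions[idx]

left, right = step_toward(features[0], features[42])
print(left, right)  # one (left, right) wheel command from the log
```

Whether image features vary smoothly enough with position for this to work depends entirely on the embedding; that smoothness is essentially the "continuous landscape" property being asked about.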
7:36 Actually, in Einstein's theory of gravitation our universe is a curved space (a 4-dimensional differentiable manifold), not a Euclidean space.
Fascinating video btw
This is a fantastic exposition! I am excited to have found this channel.
I think I only understood half of it, but the half I understood is amazing. Thank you for giving us this new perspective on the working of the brain.
16:14 On the sphere, when you drew a loop between one end and another point, it was as if you were crossing the sphere on a 2D surface, so we can apply the same logic to the torus, right?
maaan you are amongst the best teachers i have seen
Your explanations are a work of art/science. Even I understood some of this. I wonder if the same approach could be used to study very large artificial neural networks (ANNs). It seems ANNs could become black boxes that we depend on, yet don't know how they really compute.
Mind, blown. Brilliant!
Amazing video, congratulations! Incredible mix of topics and very exciting conclusion for what's next on neuroscience... Thanks!
Your name is way too similar to Sebastian Moran, the right-hand-man of Professor Moriarty, and I'm a fan of it.
This is absolutely amazing Artem! Please keep doing such videos
Good video! We don't live in a Euclidean space, by the way, but it's a good approximation at our scale.
Intuitive indeed, great video!
16:25 If your path passes through the hole once and returns to your starting point, then at some point your left-to-right orientation on the path will be the mirror image of that of someone who did not take a path passing through the hole. That does not happen on a sphere.
Great presentation. I like how the brain scribbles and I think I’m on my way to understanding how many holes it takes to fill the Albert Hall.
Production value is insane!! Good job!
So psyched that people are addressing this interesting topic. 🤔 Thank you!
Now, this is getting more and more interesting...
This video goes so hard dude I'm glad I found your channel
I've been finding more and more youtubers below 100k subscribers, even some with just 10 subs -- love this new youtube.
That... that was incredibly incredible.
Very beautiful! Thank you so much for this video! This was so intuitive, I had an idea of how algebraic topology comes into play in the functioning of the brain, but to see it happen before my eyes, and visualize it left me wonderstruck and at a loss of words! Once again, thank you so very much for this!
amazing entry! you got my sub
Fantastic video! Thanks Artem, great explanations.
You need to do more of these videos. They are really good work. 👏
21:00 -- It is amazing that this circuit is so independent. Does the shape of the integral manifold hold constant? Does it vary from mouse to mouse? I wonder if the shape is modulated by interaction with other networks. Did the mouse have a full EEG during the experiment?
This is just straight-up fantastic. I learnt a lot, thank you.
Great video!
Surface(cos(u/2)·cos(v/2), cos(u/2)·sin(v/2), sin(u)/2), u ∈ [0, 2π], v ∈ [0, 4π]
The lost Klein bottle?
I propose this is our manifold: a single-sided closed surface.
far beyond EXCELLENT!!💯
Great job and very interesting topic!) Good luck with the competition✨
This is absolutely fantastic! Good job!
I'm so happy this is a project about TDA! Great video homie :)
Wow you seem to be genius. I'll watch this over and over.