To be honest, you deserve a much larger subscriber and viewer base. I love your content; keep making these videos! Btw, you could also cover other things about the brain, for example psychology from the perspective of neuroscience, instead of the philosophical perspective other youtubers typically take. I would say cover other brain-related topics from the same perspective (what's the mechanism at the lower level?) as you have right now.
how can you claim social space isn't physical? You're describing physical objects which are defined by their physical behaviour and their physical interactions with other physical behaviours

cognitive bias in scientism is out of control
It’s fascinating how one mechanism in the brain has evolved to generalize to various other abstractly-similar tasks. It makes me wonder if that same “spatial circuitry” might be used in the recall of long-term memory
Thank you for bringing these interesting topics to us in such an easy-to-understand way, while presenting the information so clearly. I love learning about the brain, psychology and all things related, so discovering your channel has been a great Christmas gift. Given this quality, there is no doubt you’ll make it far. Cheers
This is one of the most underappreciated channels out there. When I share videos from here with friends and family, I pitch it with: "The generation under me is so ridiculously smart and I feel like a complete failure". Just spectacular visualizations on extremely advanced topics.
I was amazed by the ability of certain elders in black in my village to find all the possible relations a group of people may share, and the closest one. Believe me, when we are talking about 200 people who are basically isolated, only marry each other, and have no parallel generations, it is not trivial at all. I tried to learn that but gave up. Mind you, my spatial intelligence and logical-mathematical intelligence are very high, but it still takes time to process all possible routes.
Phenomenal. Been curious about neurons sharing responsibilities. I’m coming from comp. sci. and wanting to dig into what hidden layers mean. This perfectly walks through a real world example, has great explanations, and those visuals… they’re really top notch. And synthesizes research. Great job!!
@@coda-n6u Sure! There's so much to talk about lol. But here's bullet points:
- Current language models are incapable of abstraction -- they just learn to mimic human speech based on a vast corpus.
- Current language models (that I know of) are restricted by:
  - Rectangular architecture (idk if that's a term). What I mean is they usually have some number of layers and some number of neurons per layer, so it looks like a rectangle.
    - This is bad because it tries to fit a conceptual space that is probably not best represented by a rectangle into a rectangle.
  - Related: the network can't change its architecture. It can't create new neurons or connections, even if it needs to.
    - This is bad because gradient descent, which is used almost universally in machine learning, "greedily" adjusts the network to make it learn. Greedily means that it adjusts its weights toward whatever is best *in the moment*. This can lead to premature convergence, and in a complex domain means being almost guaranteed not to reach the global minimum, let alone efficiently.
- Our brains are plastic and can create new connections and neurons. They are also graph-based. This removes those restrictions. It removes the premature convergence because we can always get out of it by creating new connections/neurons. And we can always simplify/reduce/generalize by pruning or rewiring. There are no limitations, aside from our biology and spatial limitations of, like, one neuron from one part of our brain connecting to one all the way across. I'm not a biologist but I have a hunch that isn't very biologically feasible, and if it is, efficient lol.
- Removing those restrictions also means we can reorganize, and our neurons can form more modular structures. I have no basis for this, but I think whenever we use analogies or apply knowledge from one area to another (e.g. from calculus in a math class to calculus in a physics or engineering class), we could be reusing the same structure from the original area. Related is the concept of isomorphism. I'm not a mathematician, but my layman's understanding of it is "equivalent structure": if two domains have equivalent conceptual structures for a part of each domain, then they can share the same structure to represent those parts of each domain.
- Something that could go against this: what happens if both contexts are activated at once? Our brains maintain state (comp. sci. term), and reusing the same state for both at once could lead to bad output, I think. I could be wrong. Maybe our brains could inhibit inputs so that only one domain uses that graph at a time, and maintain the context of pure math versus engineering/physics elsewhere in the network.
- Just a tidbit, but I like to think of this as Obsidian (the note-taking software) versus a hierarchical, folder-based app. Obsidian is graph-based, and being folder-based inherently restricts your ability to organize and connect ideas, because something might fall under multiple categories. Analogous to this is a neuron being restricted to interacting with a certain set of other neurons in a rectangular architecture, bc a rectangular architecture can't have neurogenesis (that would break the architecture and make it need to be completely retrained).

I have more to write but gonna leave this here for what I wrote so far.
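The greedy-descent point above can be illustrated with a toy sketch. The one-dimensional loss function here is invented purely for illustration (it isn't from any real model); it simply has two minima, and plain gradient descent settles into whichever basin it starts in:

```python
# Toy loss f(x) = x^4 - 3x^2 + x: a local minimum near x ≈ 1.13
# and a deeper global minimum near x ≈ -1.30.
def grad(x):
    # Derivative of the toy loss: f'(x) = 4x^3 - 6x + 1
    return 4 * x**3 - 6 * x + 1

def descend(x, lr=0.01, steps=2000):
    # Plain gradient descent: always step downhill from where we are.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Starting on the right-hand slope, descent converges to the local
# minimum and never discovers the better minimum on the left.
print(round(descend(2.0), 2))   # settles near 1.13 (local minimum)
print(round(descend(-2.0), 2))  # settles near -1.30 (global minimum)
```

The starting point alone decides which minimum you end up in, which is the "premature convergence" worry in the comment above: nothing in the update rule lets the optimizer climb back out of a basin.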
@@joeystenbeck6697 this is interesting, i will have more to say later. I wonder if you’re stretching the analogy between neural network architecture and actual biological brains. This made me think of analog computers, and how they might make a comeback for machine learning.
I tend to think about ML in the same way as advanced regression models, as someone with an economics background. I know there is overlap, but I feel like the non-biological architecture of current models is not necessarily why they are limited, you know?
@@coda-n6u Yeah, it's possible analog computers could come back. I know that it's really hard to control how much electricity there is for a bit. I'm really bound by language here bc idk electrical engineering, but I know that bits count as a 1 if they reach a certain electrical threshold and don't if they don't reach it. So, like, we're not holding the electricity at an exact level like might be necessary for using analog. I don't know much about analog either though, so I could totally be wrong.

Stripping away being biological/metal, I think these are the main differences in current artificial nets vs biological nets:
- Brains take in a set of inputs and either fire or don't, whereas current artificial nets are typically more continuous, e.g. ReLU or sigmoid.
  - I might be wrong, but I think this means that artificial nets can't abstract as well, cuz they're forced to abstract continuously, whereas it might be more accurate or more efficient to abstract as yes-or-no/binary. That contrasts with the ReLU function, sigmoid, etc., which all assume a continuous output at some point. I think we use these instead of a binary activation function because a binary activation function is very limited in what it can represent, and so the network, like our brains, would need to create/kill neurons. Binary activation functions are usually only used for the output layer, not in the hidden layers (from my understanding, could be wrong).
- Humans go through the sensorimotor stage of development when we're super young, and so we interact with the 3D physical space we experience. Computers don't get that, so they don't have the intuition of 3D objects. This is similar to an LSTM telling the net that the data has a sequential/temporal relationship, just with common constructs we see, like a cat, or the shapes that make up a cat, or the legs of a cat. Which, it turns out, other animals have legs too. So that's an important abstraction we have that a network might not pick up. I'm sure there are ways around it, and maybe they're in use and I'm just unaware of it. I think we are able to internalize spatial intuition bc we've interacted with the world so much, and we rely on that when doing thought experiments. Which would explain why most of us can't think easily in 4D space.
- Humans have rehearse-and-decay mechanisms to retain information that is important. "CogLTX: Applying BERT to Long Texts" (Ding, Tang, Yang, Zhou) talks some about this and about other cognitive psychology concepts being applied in natural language processing.
- Brains have different structures in them, e.g. Broca's area and Wernicke's area. Related are graph pattern producing networks (GPPNs), which locally build structures that they're encoded to build. This is analogous to DNA. My message got deleted when I added a link, but I learned about GPPNs and self-repairing/self-building networks from Yannic Kilcher's YouTube video "The Future of AI is Self-Organizing and Self-Assembling (w/ Prof. Sebastian Risi)". P.S. they built 3D self-building/self-repairing structures in Minecraft, which imo is pretty sick. 23:15 is a timestamp for that.
- Brains have state. They track recent inputs, pass that around through neurons in real time, and neurons can loop back around to each other.
- Artificial nets don't have state. They can be seen as functions in math: you give it an input, you get the same output every time, unless you add randomness in. LSTM is kinda an exception to this, but not really. LSTMs look at inputs as a sequence of data, e.g. our words; they're temporally sensitive. They maintain some info on previous words in their "hidden state" that they pass on when sequentially processing inputs, but it's super limited bc you're forced to put, like, all of the previous words/grammar/semantics into a hidden state that isn't very large. So it has to drop a lot of info and doesn't work well with long sequences. I think our brains get around that by kinda storing state in a distributed way throughout the brain when our neurons are firing. And our neurons in our brain can loop back to each other.
- Neurotransmitters. Ngl, I know like nothing about them, but I know that they are essential in transmitting info around the brain and inhibiting/exciting neurons (or inputs to neurons? Idk). Artificial nets can have vanishing gradients and exploding gradients. I think exploding gradients are analogous to a seizure in that neurons excite each other more and more. In artificial nets, this is typically only in one direction, and it's able to explode bc activation functions are typically not binary. In our brains, the output is binary (with some weight attached between the current neuron firing and the next ones, I think. Could be wrong), so it wouldn't have the issue in that way, but I think lack of inhibition would cause it. I'm really talking out of my domain rn tho, so take that with a ton of salt.
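A small sketch of the activation-function point above, using the standard textbook definitions (nothing here is specific to any particular library): a binary step unit fires or doesn't, but its gradient is zero almost everywhere, which is the usual reason it isn't used in hidden layers, while ReLU and sigmoid pass a usable gradient back to earlier layers.

```python
import math

def step(x):
    # All-or-none: fires (1) or doesn't (0), like a spike.
    return 1.0 if x >= 0 else 0.0

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def numeric_grad(f, x, h=1e-6):
    # Central-difference estimate of the derivative of f at x.
    return (f(x + h) - f(x - h)) / (2 * h)

# At x = 0.5 the continuous activations carry a gradient that
# backpropagation can use; the step function gives no signal at all.
print(numeric_grad(step, 0.5))               # 0.0
print(round(numeric_grad(relu, 0.5), 3))     # 1.0
print(round(numeric_grad(sigmoid, 0.5), 3))  # 0.235
```

This is why, as the comment says, binary activations tend to appear only at the output (where you just threshold a decision) rather than in hidden layers, where gradient descent needs a non-zero derivative to adjust the weights.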
Great video! I was a bit confused about a neuron that can represent both the position of another bat and the position of itself. The paper (David B. Omer et al.) was a bit clearer: it is not a single neuron, but a “subpopulation of neurons”. It just makes more sense to me that a population of neurons can have a dual role.
I do think it’s also okay to think of them as single neurons, like Artem said, because such neurons are task-dependent: based on the context, they perform either of two functions, that of a place cell or a social cell. So they are more of a generalist type of neuron when it comes to function itself.
Thank you so much Artem for making these videos. They are so informative and interesting. Surprised you don’t have more followers. Keep up the excellent work!
This is going to be a wild ride. I have never been able to work out family trees and couldn't even come close to solving the first (I'm guessing easy) problem :o
You're great at this. Do you teach professionally? This phenomenon is what I "use" when I see a literal map in my brain where everyone has a "pawn" on said map.
If the hippocampus can handle social maps, can it also handle other abstract spatial tasks? Such as planning out a network between manufacturing sites and consuming sites in a business, or the flow of data in a computer?
Really excellent video! Takes you through the "scientific" thinking involved whilst being clear, unpatronizing, and engaging without being sensationalized. Well-researched and gorgeously illustrated. You raise an interesting question about language. Might be worth looking into how non-Indo-European languages do this, though. It's easy to see similarities between directional social terms in different IE languages, but that doesn't show a non-trivial common influence. I only have relatively limited knowledge of non-IE languages, sadly. I do know there are at least a good few non-directional social terms in Classical Nahuatl, but that's not especially useful on its own. It'd take a systematic study of a variety of unrelated languages to establish something like that robustly, I think. Interesting stuff!
Thank you! This sounds really interesting indeed! On a related note, it reminds me that certain languages operate with allocentric reference frames in everyday communication ("north", "south") rather than egocentric ("left" / "right") ( www.researchgate.net/publication/11242044_Language_and_spatial_frames_of_reference_in_mind_and_brain ) I wonder if this is somehow reflected in their vocabulary about social terms
@@ArtemKirsanov Very interesting indeed. Classical Nahuatl's directional terms are generally agglutinative. That is to say, they are tacked on the end of another word. For instance, -tlan means "by, among, in, near" (among other things). Thus it combines with "tlaxcalli" (maize tortilla) to make the toponym Tlaxcallān, "Place of the Tortillas". (That's a real city, if you didn't know.) A particularly interesting combination. One important Nahuatl suffix is -tzin, meaning "honoured, revered", but also "small". It seems the second meaning is actually the older one. I wonder how it evolved that way!
I can speak Mandarin, and from what I can tell there is little difference in how people talk about social hierarchy and relations, at least from the perspective of prepositions, etc.
This is a really interesting video, Artem! Right up my alley. Could you do a hypothesis video on how the social concept cells you mentioned might result from a broader picture of how you think the hippocampus could function? I'm interested in your opinion/perspective on this. For some more reading, Quiroga did a bunch of work on concept cells that got a lot of attention [1-2]. However, I think they're more of a symptom than a mechanism... References: [1] R. Q. Quiroga, et al. Nature. 435, 1102-1107 (2005). [2] R. Quian Quiroga, et al. Curr. Biol. 19, 1308-1313 (2009).
Do you think this extends to language as well? On one hand, the hippocampus is active in consolidating spatial memories and, as you explain in this video, has implications for social space. On the other, it is also involved in navigation and path integration. Do you think this might be analogous to navigation from a language perspective (where we "navigate" through arguments and ideas to communicate)?
Yes, just in the same way words have virtual meaning that changes depending on where in a sentence they're placed. We live in hypnosis, updated with measurements from the outside through our senses.
That's why a synonym for argument is position: we inherently map any environment by modelling it relative to our position, whether that is physical or intellectual. This is why theories are models, which are either descriptive maps and/or predictive computations that begin at a purposively chosen point as a position. This position is the strength and weakness of theory, because positions are abstractions that do not capture the totality of reality. Like maps or computations, they only include data or information relevant to making the theory make sense. This internal logic does not mean that the theory is true or realistic. Moreover, our thinking as a species is not purely logical, as logic is the latest evolution of our brains, and the older brain, as our survival mechanism, can override logic through instinct and emotion. Therefore, a healthy scepticism may be as important with our own positions as with those of others.
Totally! I believe that words and language are connected to abstract concepts our brains inherently operate with, which are represented in the hippocampus. And to some extent this is present in rodents as well ( www.pnas.org/doi/10.1073/pnas.0701106104 ). So maybe language is a way to access navigation in the concept space 🤔
@@ArtemKirsanov I wonder how this relates to learning language to a native level becoming harder with age. I’d think if it’s entirely built-in then learning a new language would be easy bc we’re just attaching labels to existing logic in our brain. Related is I’m wondering about labels for concepts being seen as the concepts themselves versus labels being seen as just ways to access it, kinda like keys in a hash map. If someone learns two languages when young, will their neurons form patterns about the logic itself and separate labels as distinct? And if someone learns an additional language as an adult, it’s mapping labels from the additional language to labels from the native language? Bc the person wouldn’t have made the distinction when it was critical if they only learned one language? Maybe it’s not as cut and dry given the spectrum of specializing and generalizing but I wonder if/how much that distinction influences neural connections
@@joeystenbeck6697 I’ve learned a few languages and want to contribute some potential insight. I think that when adults begin learning languages, we tend to associate labels in foreign languages with labels in our own. However, at a certain point your brain stops referencing your native labels, as you attain fluency. Before, the flow may have been concept > English word > Chinese equivalent > speech, but now it simply goes from concept to Chinese speech. Keep in mind kids do this too when they learn foreign languages, albeit much faster. I think that producing language is separate from conceptual reasoning, and maybe the connection is just that kids have more plastic brains in general. There are things adults are good at learning that kids are not, however, so I wonder if there is actually a connection with aging besides plasticity.
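The hash-map analogy a couple of comments up can be made concrete with a toy sketch (the words and the "concept" records here are invented for illustration): labels from two languages acting as different keys into one shared concept store.

```python
# One shared concept store; labels from each language are just keys into it.
concepts = {"DOG": {"animate": True, "barks": True}}

# Per-language label tables mapping surface words to concept IDs.
labels = {
    "english": {"dog": "DOG"},
    "spanish": {"perro": "DOG"},
}

def lookup(language, word):
    # Resolve a word to the underlying concept, like a key into a hash map.
    return concepts[labels[language][word]]

# Both labels retrieve the identical concept object.
print(lookup("english", "dog") is lookup("spanish", "perro"))  # True
```

In the commenter's terms, a fluent bilingual's two label tables point straight at the shared concept, whereas the adult-learner path (concept > English word > Chinese equivalent > speech) would be one table looking up entries in the other before reaching the concept.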
Not only social interactions: judging by everything, all of human logical thinking works as a pseudo-physical space. Even connections between abstract concepts are analogous to physical spatial arrangements: "above the law", "in love", "operations on representations". If the ability to orient in this internal pseudo-space is impaired, semantic aphasia arises, and the person cannot think complex logical thoughts or handle complex grammatical constructions.
10:48 I'm not familiar with this field so forgive me if I am wrong, but does anyone find the areas associated with high neuron firing interesting? For the inanimate object, it is at the beginning and end, whereas for the bat, it is at the end but also a turning point.
I wonder how some agent's social map encodes the relationships between different actors, for example the relationship between my father and my friend. Both the 'power' and 'affinity' dimensions are inherently defined with respect to the agent, so relative spatial information between two different actors in this space wouldn't be able to model their relationship. Perhaps a space with fewer agent-specific dimensions would be necessary in order to model the social dynamics of all an agent's known actors.
Fascinating. BTW, I was always baffled how completely unrelated languages have similar homonyms, as if there was an underlying biological reason. Like "right" as a direction and "right" as true. Looks like it can be the case.
I don't think there's an underlying biological reason that represents the semantics of "right" within people's minds. We can't forget that convergent cultural evolution happens.
@@didack1419 convergent cultural evolution is the default answer (mine too). But it's like saying "it just happened to be this way". So if there will be other explanation, it would be nice
@@evennot well, it's not that convergent cultural evolution "just happened to be this way" anymore than "convergent biological evolution just happened to be this way", when there's convergence there are reasons for it.
Artem, do a voice-over instead. It comes across (is perceived, understood) much better than some guy sitting in front of the camera, actively waving his arms in all directions )) Being a narrator takes training, especially a TV narrator, and not everyone who has mastered neuroscience automatically becomes a good presenter. That the informational value of the footage drops to zero whenever the "narrator" appears is, I hope, obvious anyway. One has to be able to rein in one's vanity and self-admiration.
What about people who can't visualize? Those who think only with words and can't even imagine a spatial pattern... Is that hippocampal damage, or is it more like internal blindsight? Something there but prevented from reaching consciousness for some reason? And why am I seeing such a manner of perception as damage, unable to imagine it as just a different way to do things? If it is as common as I am told, how do people who think that way manage to navigate the world?
The opening example isn't very good. It is trivially easy to explain the logic. The sibling of one's spouse is one's sibling-in-law by definition. Since Alex is the sibling of Bob's spouse Alice, Alex is Bob's sibling-in-law. Since Alex is male, sibling-in-law becomes brother-in-law. Very straightforward. It would be the same thing if you asked about a relation like nephew or daughter. Words have definitions. These words are defined by their relations.
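The "words have definitions" point can literally be written down as rules. Here's a minimal sketch (the facts come from the video's example as this comment describes it; the encoding itself is mine):

```python
# Facts from the example: Alice is Bob's spouse; Alex is Alice's sibling.
spouse = {"Bob": "Alice", "Alice": "Bob"}
siblings = {"Alice": ["Alex"], "Alex": ["Alice"]}
gender = {"Alex": "male", "Alice": "female", "Bob": "male"}

def sibling_in_law(person):
    """The sibling of one's spouse is one's sibling-in-law, by definition."""
    result = []
    partner = spouse.get(person)
    for s in siblings.get(partner, []):
        kind = "brother-in-law" if gender[s] == "male" else "sister-in-law"
        result.append((s, kind))
    return result

print(sibling_in_law("Bob"))  # [('Alex', 'brother-in-law')]
```

Whether brains resolve kinship by chaining definitions like this, or by reading positions off a map-like representation as the video suggests, is exactly the question the opening example is meant to raise.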
Well, yeah, exactly, this is one example of tracking abstract relations -- what is thought to be the general function of the hippocampus (for example elifesciences.org/articles/17086 ). And the social information is just one kind of this "relational database". So I don't really see any conflict there
If humans only use two axes for social information, isn't that wasteful as humans can process three spatial dimensions? Also, distance from the center of the social space isn't necessarily 'closeness' (e.g., if all of the distance goes towards the power axis or towards the negative affection axis)?! Also, how would conditions like autism impact the perception of social spaces?
I don’t quite understand what the point of R, or “social distance”, is. It doesn’t make sense that gaining affiliation would increase that person’s social distance from you?
Artem based his diagram on the one in Schafer and Schiller (2018) which referenced the work of Tavares et al (2015). But when you look at Tavares, you see that affiliation is bounded above at zero. It's just a mistake in the graphic, "you" at 12:57 should be all the way at the middle right and the axes should be on the right.
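For concreteness, the geometry being debated here can be sketched like this (the coordinates are invented; the distance definition is the generic Euclidean one, following the general idea of a social vector rather than the exact formula in Tavares et al.):

```python
import math

# Hypothetical coordinates in a 2D social space: (power, affiliation),
# measured relative to the self at the origin.
people = {
    "mentor": (0.8, 0.4),
    "rival": (0.5, -0.6),
    "friend": (0.0, 0.9),
}

def social_distance(pos):
    # Euclidean length of the social vector from self to the other person.
    power, affiliation = pos
    return math.hypot(power, affiliation)

# Rank others by how far their social vector reaches from the origin.
for name, pos in sorted(people.items(), key=lambda kv: social_distance(kv[1])):
    print(f"{name}: {social_distance(pos):.2f}")
```

Note that under this definition a high-affiliation friend can end up "farther" from the origin than a lukewarm acquaintance, which is exactly why, as the comments above point out, distance from the center shouldn't be read as closeness.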
So judging by your video, one could say that traumatic relationships with someone are represented as some sort of immovable high-voltage knot somewhere in the hippocampus
tangents are important. it appears as if the answer to quantum gravity lies in polar-cartesian transformation with space-half-space compressions and tangents for physical length-information (much of brain) mapping. at least i've "gravitized the quantum" (of computational time) in my framework that leads directly to the geometry of the universe, with code and visualizations (of these formulas producing our actual visual fields and anatomies). Yes, really.
Fascinating that the structures our brain uses to encode physical space are also involved in encoding social space. You mentioned that the position of objects wasn't associated with hippocampal activity; where would that information be encoded then? I'm also curious why changes in absolute social distance weren't correlated with hippocampal activity. Any clues? Again, great video!
Thank you! Well, I didn't mention it in the video, but the position of the objects was encoded in the hippocampus as well ("object place cells", so to speak); it's just that these cells differed from the ones representing other bats. Take a look at Figure 4 in the paper (here's the full text link: www.weizmann.ac.il/brain-sciences/labs/ulanovsky/sites/neurobiology.labs.ulanovsky/files/uploads/omer_etal_science2018.pdf). For example, Cell 361 (left panel) clearly codes for the position of the ball, but the place field is located in a different position compared to the conspecific representation. I'm not sure about the social distance. It could be that fMRI is, after all, a method with poor spatial resolution --- you can't measure the responses of individual neurons. So it is still a mystery whether such "social cells" even exist in the human brain 🤔
That's all well and good, of course, but to start with we would need to understand how, and for what exactly, neurons can even be responsible for position in space, or for position in kinship/hierarchy for a specific individual. It is also interesting whether, in that case, one can suppose that abstractions more complex than social kinship are built, among other things, on the abstractions of this social kinship. And if so, is it worth singling out social kinship specifically?
@@alex15785 For some reason it seems to me that before a social hierarchy, one first needs to single out a hierarchy of real-world objects in general, and only then move on to social relations.
Having a spectrum of 'conjoint' cells is clear evidence that this claimed framework, which labels cells as 'social' or 'spatial', is fundamentally missing the dynamics of a useful classification system. You could almost make up any two arbitrary labels that might classify neurons and you'd see this same division; it's evidence of a poor model.
Um, Bob and Alex are lovers, but nobody talks about it. Bob is what's known as a "side-piece" (which makes Alice a "beard"). Meanwhile, George is a "groomer". Questions?
Yeah, unfortunately this isn't how I think. This is bunk science and a good example of the failures of logical empiricism/positivism dictating science and creating a scientism instead of a science or scientific method. Sucks that basic psychology is more true than something we throw millions at each year like behavioral neuroscience.
Join Shortform for awesome book guides and get 5 days of unlimited access! shortform.com/artem
Thank you!
99K followers. This is NOT an underappreciated channel; it's just at the beginning of the path, with destination the moon.
@@bettyboop5454 6 months ago when I left the comment, 90k followers looked to be a far cry.
This channel is pure gold.
I'll say it time and time again, i'm in LOVE with this channel
Ooooo boy we're getting into abstract visual thinking
This channel is extremely unique i love it so much ❤️❤️
This is super interesting. Care to share what you’ve learned so far?
@@coda-n6u Sure! There's so much to talk about lol. But here's bullet points:
- current language models are incapable of abstraction-- they just learn to mimic human speech based on a vast corpus
- current language models (that I know of) are restricted by:
- rectangular architecture (idk if that's a term). What I mean is they usually have some number of layers and some number of neurons per layer. So it looks like a rectangle
- this is bad because it tries to fit a conceptual space that is probably not best represented by a rectangle into a rectangle.
- related is, the network can't change its architecture. It can't create new neurons or connections, even if it needs to.
- this is bad because gradient descent, which is used almost universally in machine learning, "greedily" adjusts the network to make it learn. Greedily means that it makes the adjustment to its weights for whatever is best *in the moment*. This can lead to premature convergence and in a complex domain means being almost guaranteed to not reach the global minima, let alone efficiently
- our brains are plastic and can create new connections and neurons. They are also graph-based. This removes those restrictions. It removes the premature convergence because we can always get out of it by creating new connections/neurons. And we can always simplify/reduce/generalize by pruning or rewiring. There's no limitations, aside from our biology and spatial limitations of like... one neuron from one part of our brain connecting to one all the way across. I'm not a biologist but I have a hunch that isn't very biologically feasible, and if it is, efficient lol.
- removing those restrictions also means we can reorganize and our neurons can form more modular structures. I have no basis for this, but I think whenever we use analogies or apply knowledge from one area to another (e.g. from calculus in a math class to calculus in a physics or engineering class), we could be reusing the same structure from the original area. Related is the concept isomorphism. I'm not a mathematician but my layman's understanding of it is "equivalent structure", so if two domains have equivalent conceptual structures for a part of each domain, then they can share the same structure to represent those parts of each domain.
- something that could go against this is, what happens if both contexts are activated at once? Our brains maintain state (comp. sci. term) and so reusing the same state for both at once could lead to bad output, I think. I could be wrong. Maybe our brains could inhibit inputs to where only one domain uses that graph at once, and maintains the context of pure math versus engineering/physics elsewhere in the network.
- just a tidbit, but I like to think of this as Obsidian (the note-taking software) versus a hierarchical app that's folder-based. Obsidian is graph-based, and a folder-based app inherently restricts your ability to organize and connect ideas because something might fall under multiple categories. Analogous to this is a neuron being restricted to interact with a certain set of other neurons in a rectangular architecture. Because a rectangular architecture can't have neurogenesis (that would break the architecture and make it need to be completely retrained).
I have more to write but gonna leave this here for what I wrote so far.
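To make the "greedy" point above concrete, here's a toy sketch (my own illustration, nothing from the video; the function is arbitrary): plain gradient descent just follows the local slope, so it settles into whichever basin it happens to start in.

```python
def grad_descent(f_grad, x0, lr=0.01, steps=500):
    """Plain (greedy) gradient descent: always step downhill locally."""
    x = x0
    for _ in range(steps):
        x -= lr * f_grad(x)
    return x

# f(x) = x^4 - 3x^2 + x has a shallow local minimum near x = 1.13
# and a deeper global minimum near x = -1.30.
f_grad = lambda x: 4 * x**3 - 6 * x + 1

print(round(grad_descent(f_grad, x0=2.0), 3))   # settles in the shallow basin (~1.13)
print(round(grad_descent(f_grad, x0=-2.0), 3))  # settles in the deep basin (~-1.30)
```

A brain-like fix, in this picture, would be changing the landscape itself (new connections/neurons) rather than just sliding around on it.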
@@joeystenbeck6697 this is interesting, i will have more to say later. I wonder if you’re stretching the analogy between neural network architecture and actual biological brains. This made me think of analog computers, and how they might make a comeback for machine learning.
I tend to think about ML in the same way as advanced regression models, as someone with an economics background. I know there is overlap, but I feel like the non-biological architecture of current models is not necessarily why they are limited, you know?
@@coda-n6u Yeah, it's possible analog computers could come back. I know that it's really hard to control exactly how much electricity there is for a bit. I'm really bound by language here because I don't know electrical engineering, but I know that a bit counts as a 1 if it reaches a certain electrical threshold and doesn't if it doesn't reach it. So we're not holding the electricity at an exact level like might be necessary for using analog. I don't know much about analog either, though, so I could totally be wrong.
Stripping away being biological/metal, I think these are the main differences in current artificial nets vs biological nets:
- neurons in the brain take in a set of inputs and either fire or don't, whereas units in current artificial nets are typically more continuous, e.g. ReLU or sigmoid.
- I might be wrong, but I think this means that artificial nets can't abstract as well, because they're forced to abstract continuously, whereas it might be more accurate or more efficient to abstract in a yes-or-no/binary way. That contrasts with the ReLU function, sigmoid, etc., which all assume a continuous output at some point. I think we may be using these instead of a binary activation function because a binary activation function is very limited in what it can represent, and so the network, like our brains, would need to create/kill neurons. Binary activation functions are usually only used for the output layer, not in the hidden layers (from my understanding, could be wrong).
- humans go through the sensorimotor stage of development when we're super young, and so we interact with the 3D physical space we experience. Computers don't get that, so they don't have the intuition of 3D objects. This is similar to an LSTM telling the net that the data has a sequential/temporal relationship. Just with common constructs we see, like seeing a cat, or the shapes that make up a cat, or the legs of a cat. Which, it turns out, other animals have legs too. So that's an important abstraction we have that a network might not pick up. I'm sure there are ways around it, and maybe they're in use and I'm just unaware of them. I think that we are able to internalize spatial intuition because we've interacted with the world so much, and that we rely on that when doing thought experiments. Which would explain why most of us can't easily think in 4D space.
- humans have rehearse-and-decay mechanisms to retain information that is important. "CogLTX: Applying BERT to Long Texts" (Ding, Tang, Yang, Zhou) talks some about this and about other cognitive psychology concepts being applied in natural language processing.
- brains have different structures in them, e.g. Broca's area and Wernicke's area. Related are graph pattern producing networks (GPPNs), which locally build structures that they're encoded to build. This is analogous to DNA. My message got deleted when I added a link, but I learned about GPPNs and self-repairing/self-building networks from Yannic Kilcher's video on YouTube named "The Future of AI is Self-Organizing and Self-Assembling (w/ Prof. Sebastian Risi)". P.S. they built 3D self-building/self-repairing structures in Minecraft, which imo is pretty sick. 23:15 is a timestamp for that.
- brains have state. They track recent inputs and pass that around through neurons in real time and can loop back around to each other.
- artificial nets don't have state. They can be seen as functions in math: you give one an input, you get the same output every time, unless you add randomness in. LSTM is kinda an exception to this, but not really. LSTMs look at inputs as a sequence of data, e.g. our words. They're temporally sensitive. They maintain some info on previous words in a "hidden state" that they pass on when sequentially processing inputs, but it's super limited because you're forced to cram basically all of the previous words/grammar/semantics into a hidden state that isn't very large. So it has to drop a lot of info and doesn't work well with long sequences. I think our brains get around that by kinda storing state in a distributed way throughout the brain while our neurons are firing. And the neurons in our brains can loop back to each other.
- neurotransmitters. Ngl I know almost nothing about them, but I know that they are essential in transmitting info around the brain and in inhibiting/exciting neurons (or inputs to neurons? Idk). Artificial nets can have vanishing gradients and exploding gradients. I think exploding gradients are analogous to a seizure, in that neurons excite each other more and more. In artificial nets this typically happens in only one direction, and it's able to explode because activation functions are typically not binary. In our brains, the output is binary (with some weight attached between the current neuron firing and the next ones, I think. Could be wrong), so it wouldn't have the issue in that exact way, but I think lack of inhibition would cause it. I'm really talking outside my domain rn tho, so take that with a ton of salt.
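To illustrate the binary-vs-continuous activation point from the list above (my own toy code, just for intuition, not from the video or any paper):

```python
import math

def step(x):      # all-or-nothing, like a spiking neuron: fires or it doesn't
    return 1.0 if x > 0 else 0.0

def relu(x):      # continuous, piecewise-linear
    return max(0.0, x)

def sigmoid(x):   # continuous and smooth
    return 1.0 / (1.0 + math.exp(-x))

# The step function maps very different inputs to identical outputs, so a
# gradient-based learner gets no signal about *how far* the input is from
# the threshold; ReLU and sigmoid preserve that information.
for x in (0.1, 5.0, 100.0):
    print(step(x), relu(x), round(sigmoid(x), 3))
```

That flat-everywhere-except-the-threshold shape is also why the gradient of a step function is useless for backprop, which is (as I understand it) the main reason binary activations are avoided in hidden layers.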
Great video! I was a bit confused about a neuron that can represent both the position of another bat and the position of itself. In the paper (David B. Omer et al) was a bit clearer; it is not a single neuron, but a “subpopulation of neurons”.
It just makes more sense to me that a population of neurons can have a dual role.
I do think it’s also okay to think of them as single Neurons like Artem said, because such neurons are only Task dependent, as such based on the context they perform either of two functions, that of place cell or social cell, so they are more of a generalist type of neuron when it comes to function itself.
Thank you so much Artem for making these videos. They are so informative and interesting. Surprised you don’t have more followers. Keep up the excellent work!
You, sir, are gonna save my neuroscience course performance. I thank you most profusely 🤣🙏
This to me is beyond amazing. Thank you so much for creating these videos!
Happy you're back - I was worried there for a moment.
Yeah, sorry about that. Everything is okay ;)
I was just really busy with preparing all the graduate school applications
Brain performance: good 🕷
Brain performance after seeing this video: it broke the scale 🦋
Beautiful work
This is going to be a wild ride. I have never been able to work out family trees and couldn't even come close to solving the first (I'm guessing easy) problem :o
Now that's a great Christmas!
You're great at this. Do you teach professionally?
This phenomenon is what I "use" when I see a literal map in my brain where everyone has a "pawn" on said map.
I relate to your way of thinking very much!
Another fantastic job, Artem!
outstanding content, bro. a fan from the first video
Amazing! Thank you for the video!
Thank you so much. I appreciate this very much
If the hippocampus can handle social maps, can it also handle other abstract spatial tasks? Such as planning out a network between manufacturing sites and consuming sites in a business, or the flow of data in a computer?
Very good video, really interesting topic. Keep up the work!
Really excellent video! Takes you through the "scientific" thinking involved whilst being clear, unpatronizing, and engaging without being sensationalized. Well-researched and gorgeously illustrated.
You raise an interesting question about language. Might be worth looking into how non-Indo-European languages do this, though. It's easy to see similarities between directional social terms in different IE languages, but that doesn't show a non-trivial common influence. I only have relatively limited knowledge of non-IE languages, sadly. I do know there are at least a good few non-directional social terms in Classical Nahuatl, but that's not especially useful on its own. It'd take a systematic study of a variety of unrelated languages to establish something like that robustly, I think.
Interesting stuff!
Thank you!
This sounds really interesting indeed!
On a related note, it reminds me that certain languages operate with allocentric reference frames in everyday communication ("north", "south") rather than egocentric ("left" / "right") ( www.researchgate.net/publication/11242044_Language_and_spatial_frames_of_reference_in_mind_and_brain )
I wonder if this is somehow reflected in their vocabulary about social terms
@@ArtemKirsanov Very interesting indeed. Classical Nahuatl's directional terms are generally agglutinative. That is to say, they are tacked on the end of another word. For instance, -tlan means "by, among, in, near" (among other things). Thus it combines with "tlaxcalli" (maize tortilla) to make the toponym Tlaxcallān, "Place of the Tortillas". (That's a real city, if you didn't know.) A particularly interesting combination.
One important Nahuatl suffix is -tzin, meaning "honoured, revered", but also "small". It seems the second meaning is actually the older one. I wonder how it evolved that way!
I can speak Mandarin, and from what I can tell there is little difference in how people talk about social hierarchy and relations, at least from the perspective of prepositions, etc.
This is a really interesting video, Artem! Right up my alley.
Could you do a hypothesis video on how the social concept cells you mentioned
might result from a broader picture of how you think the hippocampus could function?
I'm interested in your opinion/perspective on this.
For some more reading, Quiroga did a bunch of work on concept cells that got a lot of attention [1-2].
However, I think they're more of a symptom than a mechanism...
References:
[1] R. Q. Quiroga, et al. Nature. 435, 1102-1107 (2005).
[2] R. Quian Quiroga, et al. Curr. Biol. 19, 1308-1313 (2009).
"Bob is married to Alice" about fucking time.
At last, a happy ending after all those different methods of communication they tried, they must have found the most reliable and secure one.
Amazing work! Greetings from a fellow hippocampal researcher from UC Berkeley!
bro no freakin way bro im a fellow uc berkeklleyllyly43 y3g
go bears
wanna link?
This channel is awesome sauce
Brilliant videos Thank you
do you think this extends to language as well? On one end, the hippocampus is active in consolidating spatial memories and, as you explain in this video, has implications for social space. On the other, it is also involved in navigation and path integration. Do you think this might be analogous to navigation from a language perspective (where we "navigate" through arguments and ideas to communicate)?
Yes, in just the same way that words have virtual meanings that change depending on where in a sentence they're placed. We live in a hypnosis updated with measurements from the outside through our senses.
That's why a synonym for argument is position, as we inherently map any environment by modelling it relative to our position, whether that is physical or intellectual. This is why theories are models, which are either descriptive maps and/or predictive computations that begin at a purposively chosen point as a position. This position is both the strength and the weakness of theory, because positions are abstractions that do not capture the totality of reality. Like maps or computations, they only include data or information relevant to making the theory make sense. This internal logic does not mean that the theory is true or realistic. Moreover, our thinking as a species is not purely logical, as logic is the latest evolution of our brains, and the older brain, as our survival mechanism, can override logic through instinct and emotion. Therefore, a healthy scepticism may be as important with our own positions as with those of others.
Totally! I believe that words and language are connected to abstract concepts our brains inherently operate with, which are represented in the hippocampus. And to some extent this is present in rodents as well ( www.pnas.org/doi/10.1073/pnas.0701106104 )
So maybe language is a way to access navigation in the concept space 🤔
@@ArtemKirsanov I wonder how this relates to learning a language to a native level becoming harder with age. I'd think if it's entirely built-in, then learning a new language would be easy because we're just attaching labels to existing logic in our brain. Relatedly, I'm wondering about labels for concepts being seen as the concepts themselves versus labels being seen as just ways to access them, kinda like keys in a hash map. If someone learns two languages when young, will their neurons form patterns about the logic itself and treat the labels as distinct from it? And if someone learns an additional language as an adult, is it mapping labels from the additional language to labels from the native language, because the person wouldn't have made the distinction when it was critical if they only learned one language? Maybe it's not as cut-and-dried given the spectrum of specializing and generalizing, but I wonder if/how much that distinction influences neural connections.
@@joeystenbeck6697 I’ve learned a few languages and want to contribute some potential insight. I think that when adults begin learning languages, we tend to associate labels in foreign languages with labels in our own. However, at a certain point your brain stops referencing your native labels, as you attain fluency. Before the flow may have been concept > English word > Chinese equivalent > Speech, but now it simply goes from concept to chinese speech.
Keep in mind kids do this too when they learn foreign languages, albeit much faster.
I think that producing language is separate from conceptual reasoning, and maybe the connection is just kids have more plastic brains in general. There are things adults are good at learning that kids are not, however, so I wonder if there is actually a connection with aging besides plasticity
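The "labels as hash-map keys" idea from earlier in this thread can be sketched literally (obviously just a metaphor in code, not a claim about neurons; all the words and names here are my own toy examples):

```python
# One shared concept, reachable through language-specific labels.
CONCEPT = "CAT-CONCEPT"

# Early bilingual: both labels are first-class keys to the concept.
early_labels = {"cat": CONCEPT, "gato": CONCEPT}

# Adult learner: the new label initially maps to a native-language label,
# so concept access goes through an extra lookup hop.
native_gloss = {"chat": "cat"}

def adult_lookup(word):
    """Resolve a word to the concept, routing via the native label if needed."""
    return early_labels[native_gloss.get(word, word)]

print(adult_lookup("cat"))   # direct: cat -> CAT-CONCEPT
print(adult_lookup("chat"))  # indirect: chat -> cat -> CAT-CONCEPT
```

Fluency, in this metaphor, would be promoting "chat" to a first-class key in early_labels so the extra hop disappears, which matches the concept-to-speech shortcut described above.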
Amazing video.
Not only social interactions but, apparently, all of human logical thinking works as a pseudo-physical space. Even connections between abstract concepts are analogous to physical spatial relations: "above the law", "in love", "operations over representations". If the ability to orient oneself in this internal pseudo-space is impaired, semantic aphasia arises, and the person cannot think through complex logical thoughts or grammatical constructions.
Stop speaking Putin!
This Is Weird. Need more videos about it
I had to think about the relationship far longer than the time allowed :P
Everything is a map because we draw EVERYTHING like this, including logic.
Fully agreed! Are there any articles about this concept?
10:48 I'm not familiar with this field so forgive me if I am wrong, but does anyone find the areas associated with high neuron firing interesting? For the inanimate object, it is at the beginning and end, whereas for the bat, it is at the end but also a turning point.
mates and territorial elements are most likely the reason for location cells to develop this function
Great video!
super interesting ty very much :)
Fascinating! How far are we in building a synthetic hippocampus, and will that end up being an input towards AGI one day?
I wonder how some agent's social map encodes the relationships between different actors, for example the relationship between my father and my friend. Both the dimensions 'power' and 'affinity' are inherently with respect to the agent, so relative spatial information between two different actors in this space, wouldn't be able to model their relationship.
Perhaps a space with less agent specific dimensions would be necessary in order to model the social dynamics of all an agent's known actors.
very good job!
Fascinating.
BTW, I was always baffled how completely unrelated languages have similar homonyms, as if there was an underlying biological reason. Like "right" as a direction and "right" as true. Looks like it can be the case.
I don't think there's an underlying biological reason that represents the semantics of "right" within people's minds. We can't forget that convergent cultural evolution happens.
@@didack1419 convergent cultural evolution is the default answer (mine too). But it's like saying "it just happened to be this way".
So if there will be other explanation, it would be nice
@@evennot well, it's not that convergent cultural evolution "just happened to be this way" anymore than "convergent biological evolution just happened to be this way", when there's convergence there are reasons for it.
I wonder if the same thing happens when I do category theory,...
Artem, do a voice-over. It comes across (is perceived, understood) much better than some guy sitting in front of the camera, actively waving his arms in all directions )) Being a narrator takes training, especially a TV narrator, and not everyone who has mastered neuroscience automatically becomes a good presenter. As for the fact that the informativeness of the footage drops to zero whenever the "narrator" appears on screen, I hope that's clear as it is. One has to be able to rein in one's vanity and self-admiration.
What about people who can't visualize? Those who think only with words and can't even imagine a spatial pattern... Is that hippocampal damage, or is it more like internal blindsight ? Something there but prevented from reaching the conscience for some reason??
And why am I seeing such a manner of perception as damage, and can't seem to imagine it as just a different way of doing things?
If it is as common as I am told, how do people who think that way manage to navigate the world?
damn nice video crazy that you just have 50k subs
The opening example isn't very good. It is trivially easy to explain the logic.
The sibling of one's spouse is one's sibling-in-law by definition.
Since Alex is the sibling of Bob's spouse Alice, Alex is Bob's sibling-in-law.
Since Alex is male, sibling-in-law becomes brother-in-law.
Very straightforward.
It would be the same thing if you asked about a relation like nephew or daughter.
Words have definitions. These words are defined by their relations.
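The definitional chain above is small enough to write as a few lines of rules (a toy sketch using the names from the video's opening example):

```python
# Minimal relational facts from the opening puzzle.
spouse  = {"Bob": "Alice", "Alice": "Bob"}
sibling = {"Alice": "Alex", "Alex": "Alice"}
male    = {"Bob", "Alex"}

def sibling_in_law(person):
    """The sibling of one's spouse is one's sibling-in-law, by definition."""
    s = sibling.get(spouse.get(person, ""))
    if s is None:
        return None
    kind = "brother-in-law" if s in male else "sister-in-law"
    return s, kind

print(sibling_in_law("Bob"))  # Alex is Bob's brother-in-law
```

Of course, the video's point isn't that the inference is hard to state; it's asking what neural machinery carries out this kind of relational lookup.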
Well, yeah, exactly, this is one example of tracking abstract relations -- what is thought to be the general function of the hippocampus (for example elifesciences.org/articles/17086 ). And the social information is just one kind of this "relational database".
So I don't really see any conflict there
If humans only use two axes for social information, isn't that wasteful as humans can process three spatial dimensions? Also, distance from the center of the social space isn't necessarily 'closeness' (e.g., if all of the distance goes towards the power axis or towards the negative affection axis)?!
Also, how would conditions like autism impact the perception of social spaces?
I don't quite understand what the point of R, or "social distance", is. It doesn't make sense that gaining affiliation would increase that person's social distance from you?
I’m curious about this too
Artem based his diagram on the one in Schafer and Schiller (2018) which referenced the work of Tavares et al (2015). But when you look at Tavares, you see that affiliation is bounded above at zero. It's just a mistake in the graphic, "you" at 12:57 should be all the way at the middle right and the axes should be on the right.
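For what it's worth, the polar read-out being debated here can be written out explicitly (my own sketch with my own variable names and sign conventions, not code from any of the papers):

```python
import math

def social_vector(power, affiliation):
    """Position of a character relative to 'you' at the origin,
    decomposed into a distance r and an angle theta."""
    r = math.hypot(power, affiliation)      # the 'social distance' R
    theta = math.atan2(affiliation, power)  # folds the two axes into one angle
    return r, theta

# Movement along either single axis increases r, since r is just distance
# from the origin -- which is exactly why 'gaining affiliation increases
# social distance' feels confusing in the diagram as drawn.
print(social_vector(0.0, 3.0))
print(social_vector(3.0, 0.0))
```

Under the bounded-affiliation reading of Tavares et al. described above, affiliation values would stay non-positive, which changes the picture without changing the formula.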
So judging by your video, one could say that traumatic relationships with someone are represented like some sort of immovable high-voltage knot somewhere in the hippocampus
It is so interesting!
How can I contribute to you so you will continue making these amazing videos?
tangents are important. it appears as if the answer to quantum gravity lies in polar-cartesian transformation with space-half-space compressions and tangents for physical length-information (much of the brain) mapping. at least i've "gravitized the quantum" (of computational time) in my framework that leads directly to the geometry of the universe, with code and visualizations (of these formulas producing our actual visual fields and anatomies). Yes, really.
So apparently I was wrong with "step-husband" 0:15
Fascinating that the structures our brain uses to encode physical space are also involved in encoding social space. You mentioned that the position of objects wasn't associated with hippocampal activity; where would that information be encoded then?
I'm also curious why changes in absolute social distance weren't correlated with hippocampal activity. Any clues?
Again, great video!
Thank you!
Well, I didn't mention it in the video, but the position of the objects was encoded in the hippocampus as well ("object place cells", so to speak); it's just that these cells differed from the ones representing other bats. Take a look at Figure 4 in the paper (here's the full-text link: www.weizmann.ac.il/brain-sciences/labs/ulanovsky/sites/neurobiology.labs.ulanovsky/files/uploads/omer_etal_science2018.pdf)
For example, Cell 361 (left panel) clearly codes for the position of the ball, but the place field is located in a different position compared to the conspecific representation.
I'm not sure about the social distance. It could be that fMRI, after all, is a method with poor spatial resolution --- you can't measure the responses of individual neurons. So it is still a mystery whether such "social cells" even exist in the human brain 🤔
@@ArtemKirsanov Doesn't this mean that humans only map social space in one dimension (arctan(affiliation/power))?
Did anyone else think that Alice and Alex might have different mothers and would legally be referred to as half brother/sister-in-law?
man i took too many bong hits, i read that family tree upside down and came to some very different conclusions...
TF? I'm pretty sure I went back the same way we got to Alex, didn't jump to Bob straight away
This is all well and good, of course, but first it would help to understand how, and for what exactly, neurons can even be responsible when it comes to position in space, or to position in kinship/hierarchy, for a particular individual.
It's also interesting whether, in that case, we can assume that abstractions more complex than social kinship are built, among other things, on the abstractions of this social kinship. And if so, is it worth singling out social kinship specifically?
In essence, the question is whether place neurons (with their receptive "fields") can encode a hierarchical graph.
Stop speaking Putin!
@@alex15785 Stop speaking Putin!
@@alex15785 For some reason it seems to me that before social hierarchy, one first needs to work out the hierarchy of objects of the real world in general, and only then move on to social relations.
@@Anonymous-df8it Stop being a xenophobic stinky dick and say something on the topic!
Having a spectrum of 'conjoint' cells is clear evidence that this claimed framework, that labels cells as 'social' or 'spatial' is fundamentally missing the dynamics of a useful classification system. You could almost make up any two arbitrary labels that might classify neurons and you'd see this same division, it's evidence of a poor model.
What software do you use to animate?
Adobe After Effects for the majority of animations + Blender for 3D scenes (hippocampus, bats flying)
Um, Bob and Alex are lovers, but nobody talks about it. Bob is what's known as a "side-piece" (which makes Alice a "beard"). Meanwhile, George is a "groomer". Questions?
Yeah, unfortunately this isn't how I think. This is bunk science and a good example of the failures of logical empiricism/positivism dictating science and creating a scientism instead of a science or scientific method. Sucks that basic psychology is more true than something we throw millions at each year like behavioral neuroscience.
Great content!