The Assembly Hypothesis: Emergent Computation and Learning in a Rigorous Model of the Brain

  • Published: 3 Jan 2025

Comments •

  • @davidhand9721
    @davidhand9721 9 months ago +7

    There are a few facts from neuroscience that your model can't capture. Time is not discrete, and real neurons are sensitive to dynamics. There are periodic oscillations, phase-sensitive neurons, tonic firing modes with poor time resolution but great intensity resolution, and burst modes with the opposite trade-off. There are modes of inhibition and disinhibition that last different periods of time. Fractional delays between neuron activations, i.e. shorter than a synaptic delay, are used very commonly in basic calculations, like motion detection in fruit flies. You just can't do that with a single bit of state per neuron that's volatile on the scale of a single discrete step. In particular, phase sensitivity and periodic behavior are suspected to be important to consciousness because they are highly sensitive to the state of consciousness (awake, asleep, anesthesia, etc.).
    You might be able to compute things in such a model, but you won't be able to capture the dynamics or central control of a real brain. I actually don't think we will ever have a truly general model of processing in the brain because there doesn't appear to be any general principle that is applicable to every neural computation. Sometimes assemblies tell the story, but other functions are better described with topological principles, analog physical transformations, or Fourier transformations. The brain is the wild west. It was not designed with a plan. The best we can do is to account for all of the individual degrees of freedom in our experience, because no single paradigm will describe it.
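
To make the contrast concrete, here is a minimal sketch, with made-up parameters, of the kind of sub-step state the comment above points at: a leaky integrate-and-fire unit carries a continuous membrane potential, so its spike times (and hence phase) shift by fractions of a step when the drive changes, which a single volatile bit per discrete step cannot express.

```python
def lif_spike_times(current: float, t_max: float = 0.2, dt: float = 1e-4,
                    tau: float = 0.02, threshold: float = 1.0) -> list[float]:
    """Leaky integrate-and-fire neuron with illustrative, made-up parameters."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_max:
        v += dt * (-v / tau + current)   # continuous leaky integration of the drive
        if v >= threshold:               # threshold crossing -> spike
            spikes.append(round(t, 4))
            v = 0.0                      # reset the membrane potential
        t += dt
    return spikes

# A small change in drive shifts every spike time by a fraction of a "step",
# i.e. the timing information lives below the resolution of a discrete update.
print(lif_spike_times(current=60.0)[:3])
print(lif_spike_times(current=65.0)[:3])
```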

  • @paxdriver
    @paxdriver 1 month ago

    My main criticism is the arbitrary assumption that K inhibition is a restriction on neurons firing. Just as math doesn't favour positive integers over negative ones, a brain could very easily have all neurons always energized and primed to fire, like rhythmic modulation, whereby only a tiny subset of neurons needs to fire to activate an inhibition.
    The assumption that neurons only fire when they're needed or doing something is, I think, based on an idea of energy conservation in electrical systems, but when dealing with analog computation a base state of constant firing can produce a 1 by stimulation, a -1 by inhibition, or a 0 by not changing at all. There's no reason to presume neurons are wasting energy by constantly firing when they're not needed, because modulating neurons at a 0 state could easily be responsible for powering the amplitude of adjacent neurons. This is far more energy-efficient, because neurons go from computing in base 2 to base 3, which scales exponentially. As energy is reused from benign modulating firing, it speeds up processing: through the added state-value efficiency of base 3, by reducing the energy a neuron needs to fire thanks to relaying energy with its own pulse, and by providing latent data, since the rhythm adds another layer of information for slower systems like autonomic and hormonal control, which in most cases can wait a few seconds to become activated.
    I think the theory vastly oversimplifies the brain because it is myopically focused on binary computation, when we already know and measure brain waves as plainly relevant to consciousness and perception (as with sleep, disease, injury, and hormone changes).
    Ignoring the known variable patterns of neural activity is like trying to make sense of ordinary base-10 math while pretending it is base 8 and ignoring the digits 8 and 9, then wondering why the equations and formulas don't make sense.
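
A back-of-the-envelope sketch of the counting argument above (purely illustrative arithmetic, not a claim about real neurons): a three-state unit carries log2(3) ≈ 1.585 bits versus 1 bit for a binary one, and over a population of n units the ratio of expressible patterns, (3/2)^n, does grow exponentially.

```python
import math

# Information per unit with b distinguishable states: log2(b) bits.
bits_binary = math.log2(2)   # 1.0 bit  (fire / not fire)
bits_ternary = math.log2(3)  # ~1.585 bits (-1 / 0 / +1)
print(bits_binary, round(bits_ternary, 3))

# Distinct patterns over n units: b**n, so the ternary/binary ratio is (3/2)**n.
for n in (10, 100, 1000):
    print(f"n={n:4d}: 3^n / 2^n = {(3 / 2) ** n:.3e}")
```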

  • @sharannagarajan4089
    @sharannagarajan4089 11 months ago +2

    Wonderful!

  • @lionardo
    @lionardo 11 months ago

    Where is the link to these papers?

  • @sharannagarajan4089
    @sharannagarajan4089 11 months ago +3

    Spelling mistake in description: gerogia tech

    • @dennisestenson7820
      @dennisestenson7820 11 months ago

      It's misspelled in the corner of the video too. 🫣

  • @AlgoNudger
    @AlgoNudger 11 months ago +1

    Thanks.

  • @bwowekeith4472
    @bwowekeith4472 5 months ago

    That random graph sounds a bit like percolation theory 😊

  • @TubaÖzcan-l3o
    @TubaÖzcan-l3o 11 months ago +2

    How can I attend this summer school as an international master's student? Nice content.

    • @ickorling7328
      @ickorling7328 11 months ago

      On YouTube, it seems; accessible and free content is quite a leveling advantage.

  • @J3SIM-38
    @J3SIM-38 4 months ago

    Disinhibit = Activate (in English). What is the meaning of Disinhibit in the Assembly Calculus?

    • @paxdriver
      @paxdriver 1 month ago

      Disinhibit would be akin to negating the inhibitory signal that is present. It's not the same as activate; it could neutralize an inhibitory signal rather than send an activation signal. It's an XOR, I think.

    • @J3SIM-38
      @J3SIM-38 1 month ago

      @paxdriver ~ OK, I'll bite: how does one disinhibit with the scheme they describe?

    • @J3SIM-38
      @J3SIM-38 1 month ago

      @paxdriver ~ What you are discussing is neutralizing, then, not disinhibition. From my perspective, there is activation and inhibition in English. Disinhibition means to activate, and to neutralize means to activate something that is inhibited to the point of neutrality. The reverse would be to inhibit something that is activated to the point of neutrality. That's how it works with neurons, anyway.

    • @paxdriver
      @paxdriver 1 month ago

      @@J3SIM-38 I think we're probably talking past each other then. It is in effect the same thing as neutralizing, but it's more descriptive to say it removes an action than to say no actions were present at all. That's the difference between OR and XOR in Boolean logic. In programming it's similar to how some languages, like JavaScript, have both -0 and 0: they compare as equal with == and even ===, yet Object.is(-0, 0) still tells them apart.
      A, B, C, and D are neurons. When A is active and nothing else connected to B is activating B, then B activates. When A and C are both activated, then B is deactivated if it is being activated by something other than A. D only activates when C is activated (meaning B, but only when B is not receiving activation signals from neurons other than A). Something like that.
      If I worded that right, then D is a disinhibition signal, because it is only triggered when B would otherwise be active were it not for being inhibited, and that's different from just checking whether B is activated or not. It describes more fully what is happening with B: that it is firing but overridden. It provides a very specific pathway for a refined and narrow function.
      A real-world example of this is pain suppression with adrenaline. The pain signals are probably still firing, but there's an active override that takes precedence even though they're firing madly. Once the disinhibition fades, the pain signals don't suddenly come back the second an activation stops; they come back gradually as the override signals fade and let the pain signals regain attention, like a dimmer switch. Hormone release probably works on a similar disinhibition, as opposed to a binary on/off, given the way cycles flow in and out and don't immediately kick in with the first firing that triggers them. Gradients are computed by modulation and blending between signals, which is separate from on/off switches.

    • @paxdriver
      @paxdriver 1 month ago

      @@J3SIM-38 Let's say you have three neurons.
      A, when activated, triggers B to fire unless C is also activated, in which case it does nothing. But a fourth neuron might check for A and C together and do something completely unique. That's how layers in machine-learning neural networks work inside the black box: they map paths to outputs and "learn" the desired output from training by paying attention to the number of paths specific to the categorical outputs.
      MNIST is a great, simple example of this, and there are tons of educational materials and visual demonstrations of how it works online. It's the beginner image-recognition AI project on handwritten digits.
      If you prefer academic examples, look at published papers on connecting perceptron networks.
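
A minimal sketch of the gating logic this sub-thread is describing, using made-up single-bit neurons (the wiring and names are illustrative assumptions, not the Assembly Calculus definitions): C inhibits B, and D "disinhibits" B by suppressing the inhibitor C rather than by exciting B directly.

```python
def b_fires(a_input: bool, c_input: bool, d_active: bool) -> bool:
    """Toy gate: A excites B, C inhibits B, and D silences C (disinhibition)."""
    C = c_input and not d_active   # inhibitory neuron C, shut off when D is active
    B = a_input and not C          # B fires only if excited by A and not inhibited by C
    return B

print(b_fires(a_input=True, c_input=False, d_active=False))  # True:  A alone drives B
print(b_fires(a_input=True, c_input=True,  d_active=False))  # False: C inhibits B
print(b_fires(a_input=True, c_input=True,  d_active=True))   # True:  D disinhibits B
```

The point of the last line is that disinhibition is distinct from excitation: D never sends B an activation signal, it only removes the inhibition that was blocking A's signal.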

  • @limaaeth
    @limaaeth 11 months ago +1

    Awesome!

  • @manawa3832
    @manawa3832 10 months ago

    What is computation? The best definition I have come up with, one that fits every single conceivable example, is that computation is a sequence of interdependent transformations on objects encoded with information. This covers everything.

    • @manawa3832
      @manawa3832 10 months ago

      Also, if you are inspired to look for analogies that can be worked into models representing the brain, look into combinatory logic, the simplest language that can compute anything. Furthermore, look into the iota combinator: the discovery that all combinators can be reduced down to just one combinator which, applied to itself enough times, can simulate any computation. It is one of the most astonishing results in math and computer science. Just one! Something fundamental is happening here.
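
A minimal sketch of that reduction in Python, assuming Chris Barker's iota (defined by iota f = f S K); the variable names are illustrative. The identity, K, and S combinators all fall out of repeated self-application of the single combinator:

```python
# Standard S and K combinators, written as curried Python lambdas.
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x

# The single iota combinator: iota f = f S K.
iota = lambda f: f(S)(K)

I_ = iota(iota)                            # behaves like the identity combinator
K_ = iota(iota(iota(iota)))                # behaves like K
S_ = iota(iota(iota(iota(iota))))          # behaves like S

print(I_("x"))               # "x"
print(K_("a")("b"))          # "a"
print(S_(K_)(K_)("hello"))   # "hello"  (S K K reduces to the identity)
```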

    • @whatisrokosbasilisk80
      @whatisrokosbasilisk80 7 months ago

      Shut

  • @444haluk
    @444haluk 11 months ago

    Computation is by definition an abstraction from the substrate. You decided it. It means "change" for you. Then you say "the weather computes." You literally use it to mean "a thing computes if it changes," which is this close to being a tautology. And absolutely useless.

    • @paxdriver
      @paxdriver 1 month ago

      "Computes" means it is a finite-state machine where, given the same inputs, it produces the same outputs; it's the inputs that are constantly in flux and being computed. Weather doesn't change because the laws of physics change; it changes as the inputs fluctuate, ergo it computes.
      It's not about change, it's about reliably changing based on fixed rules. That's always what it means to compute, not simply to change. Opinions can change and preferences can change, but that's not computing in the same way that fluid dynamics or particle trajectories are.
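
A minimal sketch of the "fixed rules, fluctuating inputs" point, using a made-up two-state machine (the states and transition table are illustrative):

```python
# A tiny deterministic finite-state machine: the rules never change,
# only the input sequence does.
TRANSITIONS = {
    ("dry", "rain"): "wet",
    ("dry", "sun"):  "dry",
    ("wet", "rain"): "wet",
    ("wet", "sun"):  "dry",
}

def run(state, inputs):
    """Apply the fixed transition rules to a sequence of input symbols."""
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
    return state

# Same starting state and same inputs always yield the same output...
print(run("dry", ["rain", "rain", "sun"]))  # -> dry
# ...and different outputs come only from different inputs, not changed rules.
print(run("dry", ["sun", "rain", "rain"]))  # -> wet
```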

  • @ScottSummerill
    @ScottSummerill 11 months ago

    Right. You know nothing about my intelligence. The fact that I am even watching this type of video might suggest something. That you would question another's intelligence actually speaks volumes about you!

  • @ScottSummerill
    @ScottSummerill 11 months ago +1

    Geez. The guy introducing the talk is barely understandable.

    • @kewlking
      @kewlking 11 months ago +3

      Turn on subtitles. Btw, if this basic system can transcribe the introduction flawlessly while you feel the need to complain publicly, what that says about your intelligence level may mean that this talk is not for you…

  • @Dr.Z.Moravcik-inventor-of-AGI
    @Dr.Z.Moravcik-inventor-of-AGI 10 months ago

    You are soo wrong.