#041

  • Published: 27 Oct 2024

Comments • 54

  • @saanvisharma2081
    @saanvisharma2081 3 years ago +15

    *This was my master's thesis topic*
    Absolutely enjoyed watching this video. Thanks Tim.

  • @DavenH
    @DavenH 3 years ago +16

    This is very inspiring. That viable substrates and training strategies vary this much shows the field has so much room for breakthroughs and invention.
    I'm so glad this show is keeping up with the bleeding edge of AI rather than the dreary "what AWS instance is best for ML" kind of stuff.

    • @freemind.d2714
      @freemind.d2714 3 years ago +2

      The more we know, the more we know that we don't know, because there's always more we don't know that we don't know!

    • @billykotsos4642
      @billykotsos4642 3 years ago

      To be fair, spinning up an AWS instance is no simple task!

  • @samdash3216
    @samdash3216 3 years ago +5

    Content quality through the roof, video quality stuck back in the '90s. Keep up the good work!

  • @Self-Duality
    @Self-Duality 1 year ago +1

    I’d absolutely love to see part 2!

  • @sabawalid
    @sabawalid 3 years ago +6

    Great episode as usual, guys! Thumbs up. I really liked a lot of what Dr. Simon Stringer said, and your feedback/comments were also great.

  • @priyamdey3298
    @priyamdey3298 3 years ago +6

    Sometimes I had to put the playback speed at 0.75 to understand what Dr. Stringer was saying😂 His Broca's area must be more developed than the average human's😀 Great video btw! Keep the neuroscience stuff coming! We need a lot more of this for a more targeted approach towards AGI.

    • @DelandaBaudLacanian
      @DelandaBaudLacanian 2 years ago

      I'm listening at 0.75 and I still have to rewind and listen over and over again; it's truly inspiring and thought-provoking.

  • @oncedidactic
    @oncedidactic 3 years ago +1

    I always like the intro but this Tim Intro was especially captivating. :D I think this is mostly due to just the right briskness and level of detail. Which makes it an awesome hook and also a simple-but-not-simpler running table of contents on the episode. 👍

  • @marc-andrepiche1809
    @marc-andrepiche1809 3 years ago +2

    This was a really fascinating talk

  • @sedenions
    @sedenions 3 years ago +2

    I will have to watch this again. As a neuroscience student, I was expecting more talk of LTP, LTD, STDP, silicon neuronal networks, Izhikevich neurons, liquid networks, and the math behind all of this. Also, if someone could tell me: how much overlap is there between theoretical neuroscience and theoretical machine learning?

  • @Checkedbox
    @Checkedbox 3 years ago +1

    Hi Tim, great episode but could I ask what you use for your mindmaps?

  • @LiaAnggraini1
    @LiaAnggraini1 3 years ago

    The titles are always intriguing. I'm a new subscriber; have you already posted an episode about causal inference?

  • @dyvel
    @dyvel 3 years ago

    The temporal encoding bit was very interesting. I wonder if you could adjust the filtering to cut off low-frequency responses when there is a large amount of information to be processed, and use a buffer to classify the discarded information, so that you get a two-layered temporal filtering system that is both short-term and long(er)-term at the same time.
    However, the classification of that information may be too high-order to assess at that level.
    Can the importance of specific information be determined before the end result has been reached?
    Like preprocessing the input to redirect common sensory inputs to a low-powered specialized network, so that the main network can be used for the yet-unknown complex classifications?
    Like classifying a specific type of input as a known pattern that doesn't need to go through full evaluation again, but only has to go through classification within a limited scope, leaving the more general neurons available for yet-unclassified input patterns.
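
    A minimal sketch of the routing idea in this comment, with entirely hypothetical names (`TwoTierRouter`, `fast_net`, `main_net`) and cosine similarity against stored prototypes as a stand-in for "known pattern"; this is one possible reading of the idea, not anything from the episode:

    ```python
    import numpy as np

    class TwoTierRouter:
        """Route familiar inputs to a cheap specialized net, novel ones to the main net."""

        def __init__(self, main_net, fast_net, prototypes, threshold=0.9):
            self.main_net = main_net      # expensive, general-purpose classifier
            self.fast_net = fast_net      # low-powered, limited-scope classifier
            self.prototypes = prototypes  # patterns already seen often enough to shortcut
            self.threshold = threshold

        def route(self, x):
            for proto in self.prototypes:
                sim = x @ proto / (np.linalg.norm(x) * np.linalg.norm(proto) + 1e-9)
                if sim > self.threshold:  # known pattern: limited-scope classification
                    return self.fast_net(x)
            return self.main_net(x)       # novel input: full evaluation
    ```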

  • @JamesTromans
    @JamesTromans 3 years ago

    I enjoyed this Simon, takes me back!

  • @intontsang
    @intontsang 3 years ago +1

    Just found your podcast; great episode. What is the program used in the intro? Loved the way you presented that.

  • @drdca8263
    @drdca8263 3 years ago +3

    So, with recognizing the hand and pen being near each other across the different positions of the two, and across the different saccade directions: is the idea that the fire-together-wire-together process happens fast enough that some neuron ends up associated with that, just from a short experience? I had been under the impression that the strength of connections between neurons changed much more slowly and was mostly about long-term learning, and that very short-term mental structures were just patterns of firings going around in a complicated cycle or something.
    Was I under the wrong impression there? (It sounds likely that I was misunderstanding something, just not sure what.)
    Unless there are some neurons that are always there for things which are moving together at a given pair of positions?
    This is confusing to me: it seems like there would have to be combinatorially many neurons to describe all the combinations of some stuff. I guess brains do have lots of neurons, so maybe that's right, but to have that, and also have physically nearby neurons be more connected in a way that matches positions on the retina (which makes sense by itself), makes me wonder how there can possibly be enough room.

    • @JLongTom
      @JLongTom 1 year ago +1

      I think the idea is that neuronal ensembles undergo modifications to become hand-detecting neurons, and that this process occurs over the course of many thousands of saccades, say during early development as a young child. Then the same scheme of spatial binding occurs with neuronal ensembles detecting the grabbed object.
      Regarding your question about the time course of synaptic plasticity: it occurs at all spatial scales and is implemented at different levels of synaptic and neuronal structure. Fast changes occur at the level of protein memories and become instantiated into receptor-level and then synaptic (bouton- and spine-level) structural changes.

  • @abby5493
    @abby5493 3 years ago

    A very fascinating video 😍

  • @willcowan7678
    @willcowan7678 1 year ago

    "The way our brains work we don't see labels over everything in the world". I am curious to what extend our genetics (maybe even epigenetics) hold labels -- simple example, we have tastes and smells that seem like they might be labels. What more complex or abstract labelling fascilitates brain development?

  • @dr.mikeybee
    @dr.mikeybee 3 years ago +2

    An interesting question is whether mere scale in an ANN can account for any functionality that can be created by architectural features. For example, can a for loop be unwound and represented by left-to-right paths through a large enough NN? In other words, can computational equivalence be achieved in all cases simply by increasing scale? My intuition says yes, but I don't completely trust intuition. I wonder if anyone has made a mathematical proof of this. (A toy unrolling sketch follows the reply below.)

    • @dr.mikeybee
      @dr.mikeybee 3 years ago +1

      A corollary question is whether one can compress linear logic into a looping structure. It would be an interesting algorithmic challenge.
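
    A toy illustration of the unrolling question in the parent comment: a for loop with a *fixed* iteration count can always be rewritten as straight-line, left-to-right computation, which is the sense in which depth can substitute for looping; unbounded loops are what mere scale cannot buy back.

    ```python
    def looped(x, w, steps=3):
        for _ in range(steps):   # recurrent form: one weight reused each step
            x = max(0.0, w * x)  # ReLU-style update
        return x

    def unrolled(x, w):
        x = max(0.0, w * x)      # step 1
        x = max(0.0, w * x)      # step 2
        x = max(0.0, w * x)      # step 3 -- three "layers" on one left-to-right path
        return x

    assert looped(2.0, 0.5) == unrolled(2.0, 0.5)
    ```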

  • @ceevaaaaa
    @ceevaaaaa 3 years ago

    What is that software you are using in the beginning (intro part) to create those directional definition diagrams? Looks clean and sleek.

  • @kimchi_taco
    @kimchi_taco 3 years ago +1

    5:47 Hebbian theory: all neurons are the same but have different weights. The 'strength' of each neuron has a limit; all neurons compete, "winner-take-all" like softmax.
    24:40 The top-down connection is critical.
    26:35 A binding neuron summarizes low activations and high activations, which reminds me of "Feedback Transformers".
    30:50 With STDP, repeated presynaptic spike arrival a few milliseconds before postsynaptic action potentials leads in many synapse types to Long-Term Potentiation (LTP) of the synapses, whereas repeated spike arrival after postsynaptic spikes leads to Long-Term Depression (LTD) of the same synapse. www.scholarpedia.org/article/Spike-timing_dependent_plasticity
    31:15 "Cells that fire together wire together." en.wikipedia.org/wiki/Hebbian_theory
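
    A minimal pairwise-STDP update following the Scholarpedia description quoted at 30:50 (the parameter values here are illustrative defaults, not from the episode):

    ```python
    import math

    def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Weight change for one pre/post spike pair; times in milliseconds."""
        dt = t_post - t_pre
        if dt > 0:                            # pre fired first -> LTP
            return a_plus * math.exp(-dt / tau)
        return -a_minus * math.exp(dt / tau)  # post fired first -> LTD

    print(stdp_dw(10.0, 15.0))   # positive: potentiation
    print(stdp_dw(15.0, 10.0))   # negative: depression
    ```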

  • @MikeKleinsteuber
    @MikeKleinsteuber 1 month ago

    Great video that really needs to be seen by those developing the current crop of AI software. Biology has had billions of years to evolve and needs to be looked at much more closely if we want to develop true AGI, let alone SGI. Just scaling the comparatively simplistic NNs we use now won't get us there.

  • @quebono100
    @quebono100 3 years ago +2

    The background is so hypnotic

  • @charlesfoster6326
    @charlesfoster6326 3 years ago

    What's the intuition for why we should think "the feature binding problem" should be hard for ANNs to solve? Work like OpenAI's CLIP openai.com/blog/clip/ seems to provide evidence that mere co-occurrence alone can provide a strong enough signal to learn how to bind together robust, useful representations, even from disparate modalities. Should we expect this to stop sometime soon?
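
    A sketch of the co-occurrence signal this comment points at: the standard CLIP-style objective pulls matched image/caption embeddings together and pushes mismatched pairs apart via a symmetric cross-entropy over a similarity matrix. Shapes and names here are illustrative, not CLIP's exact implementation:

    ```python
    import numpy as np

    def clip_style_loss(img_emb, txt_emb, temperature=0.07):
        # L2-normalize both embedding batches, shape (N, d)
        img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
        txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
        logits = img @ txt.T / temperature          # (N, N): i-th image vs j-th caption

        def xent(l):                                # row-wise softmax cross-entropy;
            p = np.exp(l - l.max(axis=1, keepdims=True))   # correct class = diagonal
            p /= p.sum(axis=1, keepdims=True)
            n = np.arange(len(l))
            return -np.log(p[n, n]).mean()

        return (xent(logits) + xent(logits.T)) / 2  # symmetric over images and texts
    ```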

  • @bigbangind
    @bigbangind 3 years ago +3

    I don't like this new setup; the previous one, with the screen split four ways, was better.

    • @bigbangind
      @bigbangind 3 years ago +1

      Seriously, you should at least not replace the background of the individual cam videos. The quality decreases and it just flickers. Simpler is better.

    • @bigbangind
      @bigbangind 3 years ago

      What did you do, upload it at 2x speed? He talks fast :D

  • @paxdriver
    @paxdriver 3 years ago +1

    Love the channel, but I would love it even more without the green screens and flicker. The background isn't so fancy that it's worth the AV distraction. Maybe if you transposed those faces onto 3D models and rendered movies it'd be cool, but that's not feasible I'm guessing lol.
    At 1hr21mins, for example, he can't even use his hands and a pen to gesture while speaking because of the green-screen attempt. It's truly awful, but I'll stop ranting lol. Love the show, thanks much.

  • @AlanShore4god
    @AlanShore4god 3 years ago +2

    "The brain is unsupervised" - This perspective always confuses me because the world labels itself timestep to timestep. You use features now to predict features next, which is as supervised as an RNN.
    I disagree that GPT-3 is too basic to support sophisticated emergent behaviors. The recurrence in an RNN generalizes well enough to facilitate "cyclicality", allowing cycles to form at any level of abstraction. This also follows from the fact that RNNs are turing complete. Any argument against the deficiencies of the "engineering" approach in this domain will have to be arguments against backprop/sgd, not against the architecture.

    • @machinelearningdojo
      @machinelearningdojo 3 years ago +5

      "RNNs are turing complete" this is pretty meaningless in practice, we need to change the record and stop making this point every time this discussion comes up

    • @AlanShore4god
      @AlanShore4god 3 years ago +1

      @@machinelearningdojo Yes, this is always the response when someone brings it up, but it's important in this context because the claim being made is that simple RNN architectures *can't* support the emergence of sophisticated behavior. That's just not true.
      I used to feel the same way, but after GPT-3 I've come around to the opposite perspective: I think people are way too quick to dismiss the importance of this characteristic. It's become a meme to act like it's unimportant, when in reality I haven't seen any work demonstrating how large the gap is between the ideal RNN for a sequence-learning task and the best RNN practically converged upon via standard backprop/SGD at really high dimensionality.

    • @machinelearningdojo
      @machinelearningdojo 3 years ago +1

      ​@@AlanShore4god In the show Simon is talking about the emergence of very complex _temporal_ spiking dynamics, and circuits forming at many levels of abstraction -- this is a behaviour of spiking neural networks which appears rapidly (en.wikipedia.org/wiki/Spiking_neural_network). GPT-3 has a fixed objective, is (effectively) supervised, and has no concept of time. I am not saying RNNs can't theoretically learn sophisticated behaviour, but they are limited by data and training objective. Also watch my video on GPT-3 if you haven't already; I didn't see any evidence of general intelligence.

    • @AlanShore4god
      @AlanShore4god 3 years ago +3

      @@machinelearningdojo very excited to watch your video. I will confess that I don't have a very rigorous definition for what constitutes sufficiently "sophisticated emergent behavior". I am leaning on an assumption that I would be able to pull something out of my ass if such a description were offered

    • @AlanShore4god
      @AlanShore4god 3 years ago +3

      ​@@machinelearningdojo My interpretation of the temporal spiking dynamics observed in the brain is that it's a consequence of the brain having to solve a fundamentally different problem than the problem neural networks attempt to solve before it can attempt to learn in the way that neural networks learn. The problem that I'm referring to is the problem of establishing stable representations of current and historical features across the layers of abstraction. This is taken for granted in neural networks because the states of all parameters of the network (and inputs) are completely stable in the computer memory over time, so neural networks enter the learning arena with a significant advantage.
      It really sucks for the brain because, in order to build higher-level features out of lower-level features, it must work to maintain state between neurons involved in lower- and upper-level features to build connections between them. It takes a lot of time for signal to flow from lower layers to higher layers in a biological brain, so a lot of effort must be invested in stabilizing activation patterns both between layers and across layers over durations of time that are long enough to enable learning. I suspect that this is what is being witnessed in observations of self-organizing circuits in the brain, because that kind of problem can be solved autoregressively, or by optimizing for stability and synchronicity, without having to parameterize on an explicit biological goal.
      Learning on an explicit goal is what happens after this feature stability problem has been solved, and that is exactly what neural networks do. From this perspective, a lot of the complexity observed in biological brains can be thought of as achieving an objective which is a given for neural networks trained on computers. That's why I don't find temporal spiking dynamics to be particularly important for thinking about general AI.
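
      A minimal sketch of the "world labels itself" point at the top of this thread: next-step prediction needs no human annotation, because the target at time t is just the observation at time t+1.

      ```python
      sequence = [0.1, 0.4, 0.3, 0.8, 0.5]   # any stream of sensory features
      inputs, targets = sequence[:-1], sequence[1:]
      pairs = list(zip(inputs, targets))     # supervised (now, next) pairs, label-free
      print(pairs)                           # [(0.1, 0.4), (0.4, 0.3), ...]
      ```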

  • @marekglowacki2607
    @marekglowacki2607 3 years ago

    Planes don't have feathers. The problem is that we don't know what the feather is in the brain.

  • @machinelearningdojo
    @machinelearningdojo 3 years ago +5

    First! ✌❤😎

  • @siyn007
    @siyn007 3 years ago

    Great podcast, but it tends to feel overly edited at times.

  • @jantuitman
    @jantuitman 3 years ago

    “If an image were stabilized on the retina, humans would go blind” - compare that to current neural networks, which use many, many training iterations where the image stays the same. 🤣

  • @shabamee9809
    @shabamee9809 3 years ago +1

    2nd

  • @quebono100
    @quebono100 3 years ago

    3rd

  • @Chr0nalis
    @Chr0nalis 3 years ago

    4th

  • @bigbangind
    @bigbangind 3 years ago

    13th

  • @atriantafy
    @atriantafy 3 years ago

    No one wants to be that GOFAI guy