Invariance and equivariance in brains and machines

  • Published: 10 Feb 2025
  • Bruno Olshausen, UC Berkeley
    Abstract: The goal of building machines that can perceive and act in the world as humans and other animals do has been a focus of AI research efforts for over half a century. Over this same period, neuroscience has sought to achieve a mechanistic understanding of the brain processes underlying perception and action. It stands to reason that these parallel efforts could inform one another. However recent advances in deep learning and transformers have, for the most part, not translated into new neuroscientific insights; and other than deriving loose inspiration from neuroscience, AI has mostly pursued its own course which now deviates strongly from the brain. Here I propose an approach to building both invariant and equivariant representations in vision that is rooted in observations of animal behavior and informed by both neurobiological mechanisms (recurrence, dendritic nonlinearities, phase coding) and mathematical principles (group theory, residue numbers). What emerges from this approach is a neural circuit for factorization that can learn about shapes and their transformations from image data, and a model of the grid-cell system based on high-dimensional encodings of residue numbers. These models provide efficient solutions to long-studied problems that are well-suited for implementation in neuromorphic hardware or as a basis for forming hypotheses about visual cortex and entorhinal cortex.
    Bio: Bruno Olshausen is a Professor in the Helen Wills Neuroscience Institute and the School of Optometry, and has a below-the-line affiliated appointment in EECS. He holds B.S. and M.S. degrees in Electrical Engineering from Stanford University, and a Ph.D. in Computation and Neural Systems from the California Institute of Technology. He did his postdoctoral work in the Department of Psychology at Cornell University and at the Center for Biological and Computational Learning at the Massachusetts Institute of Technology. From 1996 to 2005 he was on the faculty in the Center for Neuroscience at UC Davis, and in 2005 he moved to UC Berkeley. He also directs the Redwood Center for Theoretical Neuroscience, a multidisciplinary research group focusing on building mathematical and computational models of brain function (see redwood.berkele...).
    Olshausen's research focuses on understanding the information processing strategies employed by the visual system for tasks such as object recognition and scene analysis. Computer scientists have long sought to emulate the abilities of the visual system in digital computers, but achieving performance anywhere close to that exhibited by biological vision systems has proven elusive. Dr. Olshausen's approach is based on studying the response properties of neurons in the brain and attempting to construct mathematical models that can describe what neurons are doing in terms of a functional theory of vision. The aim of this work is not only to advance our understanding of the brain but also to devise new algorithms for image analysis and recognition based on how brains work.
    cbmm.mit.edu/n...

Comments • 11

  • @seasnowcai • 5 months ago +2

    Such a wonderful talk! Thank you for sharing! This talk helped me understand a puzzle I have had for a long time: how human visual perception is more efficient than machine learning, expressed mathematically. It makes sense to decompose the mechanism into some key factors, such as equivariance and invariance, and use combinatorics to handle large numbers of possibilities. Bruno has done a great job explaining math models in such an intuitive way that I can understand the basic ideas without getting into too many technical details. My original naive idea was that learning from motion must have played a substantial role in human vision, so maybe we should use videos instead of static pictures in machine learning. But then that would further worsen the problem of computational power. These sets of research seem to open a promising new path! Looking forward to more exciting findings!

  • @WaveOfDestiny • 5 months ago +1

    One of the best lectures I've ever seen

  • @rockapedra1130 • 5 months ago

    Wow! So many cool observations and super clever tricks! Plus Bruno is very good at explaining enough of the background succinctly, so it is easy to follow. This ability he has to make things intuitive makes a huge difference for me. The lecture is sufficiently self-contained that you don't go off the rails over some small thing you don't know, which would render the rest of the lecture incomprehensible. Kudos!

  • @paulilorenz3039 • 6 months ago +4

    Theoretical neuroscience sounds like a lovely field; is it popular in Europe?
    Amazing video, thank you for publishing it

  • @oceanwang2652 • 5 months ago

    Great presentation! It gave me many ideas for my graph neural network research.

  • @zartajmajeed • 5 months ago

    52:05 Main points: (1) animal behavior tells us what problems the brain is solving; (2) biological structure gives us clues about the mechanisms involved; (3) mathematical structure provides the computational foundations

  • @mausplunder5313 • 6 months ago +2

    Very interesting presentation... hope someday I can contribute to research like this.

  • @yairreyes9288 • 6 months ago +2

    Amazing content

  • @jordia.2970 • 5 months ago +2

    Bruh, amazing stuff

    • @themultiverse5447 • 5 months ago +2

      Don't let anyone tell you; your elaborately pontificated assertion is anti-egregious. Someday soon, I hope to cogitate scientific nomenclature such as Bruv. However lamentably, I too use the penultimate, pejorative; Bruh.

  • @AlgoNudger • 5 months ago

    Thank you. 😊