CBMM10 Panel: Neuroscience to AI and back again

  • Published: 5 Nov 2023
  • Review of progress and success stories in understanding visual processing and perception in primates and replicating it in machines.
    Key open questions. Synergies.
    Panel Chair: J. DiCarlo
    Panelists: I. Fiete, N. Kanwisher, C. Koch, T. Konkle, G. Kreiman
  • Science

Comments • 9

  • @sohambafana1481 • 7 months ago • +2

    Thank you for this amazing talk!

  • @sclabhailordofnoplot2430 • 2 months ago • +1

    Is "no comment" coded the same as null? Zero tolerance?

  • @princee9385 • 6 months ago

    This is epic. ❤❤🎉

  • @davidedavidedav • 7 months ago • +1

    Yes, interesting, but why not talk about how biological networks actually learn, and their similarities to and differences from ANNs, in detail?

    • @paulprescod1980 • 6 months ago

      There were tons of details discussed. Not at the level of detail of a neuroscience seminar but in appropriate detail for a 1.5 hour panel.

    • @davidedavidedav • 6 months ago

      @paulprescod1980 OK, but I wanted more detail, I suppose — for example, what the exact learning rules are, instead of backprop.

    • @juggernautuci8253 • 6 months ago

      Fewer and fewer people want to do experimental science that just provides data for AI. @davidedavidedav

    • @paulprescod1980 • 6 months ago • +2

      @davidedavidedav There's probably a better forum for that, but anyhow, I'm pretty sure they do not yet know how the brain does the equivalent of backpropagation.

    • @keep-ukraine-free • 5 months ago

      @paulprescod1980 Your answer promotes confusion because it presupposes that the brain needs to do "the equivalent of backpropagation." The brain fundamentally does not need to, because backprop is limiting and non-ideal (it works only during training, not once the ANN is "released"/live). It's an entirely incompatible approach. I will try to answer the OP's question directly in my next comment below.
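
[Editor's note] The thread above asks what "exact learning rules" the brain might use instead of backprop. As a toy illustration only (not something the panel presented), the sketch below shows a Hebbian update with Oja's normalizing term — a commonly cited biologically plausible rule in which each weight changes using only its own local pre- and post-synaptic activity, with no global error signal propagated backward. All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear layer: 4 inputs -> 3 outputs.
W = rng.normal(scale=0.1, size=(3, 4))
eta = 0.01  # learning rate (illustrative choice)

def oja_step(W, x, eta):
    """One local Hebbian update with Oja's decay term.

    dW_ij = eta * y_i * (x_j - y_i * W_ij): the plain Hebbian product
    y_i * x_j, plus a decay that keeps weights bounded instead of
    growing without limit. Note there is no loss function and no
    gradient propagated backward through layers.
    """
    y = W @ x  # post-synaptic activity (local to this layer)
    return W + eta * (np.outer(y, x) - (y ** 2)[:, None] * W)

# Drive the layer with random inputs; weights adapt online, as they
# would in a "released"/live system, unlike backprop's training phase.
for _ in range(2000):
    x = rng.normal(size=4)
    W = oja_step(W, x, eta)

# Under Oja's rule, each row's norm is pushed toward 1.
print(np.linalg.norm(W, axis=1))
```

The contrast with backprop is the point of the sketch: the update for `W[i, j]` reads only `y[i]` and `x[j]`, quantities available at that synapse, whereas backprop requires transporting a downstream error signal through the transposed weights of every later layer.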