Neuromorphic computing with emerging memory devices

  • Published: 29 Dec 2024

Comments •

  • @tjeanneret
    @tjeanneret 4 years ago +36

    I can't believe that only a few people were present for this presentation... Thank you for publishing it.

  • @greencoder1594
    @greencoder1594 4 years ago +56

    *Personal Notes*
    [00:00] Introduction of Speaker
    [01:54] START of Content
    [05:13] CMOS Transistor Frequency Scaled with Decreasing Node Size
    [06:54] Von Neumann Architecture uses Power for constant communication between CPU and Memory
    - In contrast, within the brain memory and computation are co-located
    [09:06] Neuromorphic hardware might utilize "in-memory computing" and emerging semiconductor memory
    [09:30] Non-volatile memory (brain-like long-term memory)
    - resistance switching memory
    - phase change memory
    - magnetic memory
    - ferroelectric memory
    [10:24] RRAM (Resistive Random Access Memory)
    - dielectric between two electrodes
    - resistance switches to a high-conductance state once the applied voltage exceeds a certain threshold (due to movement of structural defects within the dielectric)
    - can be used to connect neurons with a dynamic weight (a high voltage strengthens the synapse, an opposite-polarity voltage weakens it); see the toy threshold sketch below
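    A toy model of that threshold switching (a minimal sketch; the voltage thresholds and conductance values are assumptions for illustration, not numbers from the talk):

        # Toy bipolar RRAM cell: its state is just its conductance (siemens).
        V_SET, V_RESET = 1.2, -1.0   # assumed switching thresholds (volts)
        G_ON, G_OFF = 1e-4, 1e-6     # assumed high/low conductance states

        def apply_pulse(g, v):
            """Return the cell conductance after a single voltage pulse v."""
            if v >= V_SET:           # defects form a conductive filament -> high conductance
                return G_ON
            if v <= V_RESET:         # filament dissolves -> low conductance
                return G_OFF
            return g                 # below threshold the state is retained (non-volatile)

        g = G_OFF
        g = apply_pulse(g, 1.5)      # SET pulse: switches to the high-conductance state
        g = apply_pulse(g, 0.3)      # read pulse: state unchanged
        print(g)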
    [12:23] STDP (Spike-Timing Dependent Plasticity)
    - relative delay between post-synaptic neuron and pre-synaptic neuron
    - t = t_post - t_pre
    - long-term potentiation when t > 0 (the neuromorphic agent assumes causality from correlation)
    - long-term depression when t < 0
    We simulated an unsupervised spiking neural network with STDP and it did pretty well. It hasn't been built in hardware yet, though. (A toy version of the STDP update is sketched below.)
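    A minimal sketch of a pair-based STDP update as it could be used in such a simulation (the exponential timing window and all parameter values are assumptions for illustration; the talk does not spell out the exact rule):

        import math

        A_PLUS, A_MINUS = 0.01, 0.012   # assumed learning rates for potentiation / depression
        TAU = 20.0                      # assumed time constant of the STDP window (ms)
        W_MIN, W_MAX = 0.0, 1.0         # bounds of the synaptic weight (conductance)

        def stdp_update(w, t_post, t_pre):
            """Return the new weight after one pre/post spike pair."""
            dt = t_post - t_pre
            if dt > 0:      # pre fired before post -> assumed causality -> potentiation (LTP)
                w += A_PLUS * math.exp(-dt / TAU)
            elif dt < 0:    # post fired before pre -> depression (LTD)
                w -= A_MINUS * math.exp(dt / TAU)
            return min(max(w, W_MIN), W_MAX)

        print(stdp_update(0.5, t_post=15.0, t_pre=10.0))   # dt = +5 ms -> weight grows
        print(stdp_update(0.5, t_post=10.0, t_pre=15.0))   # dt = -5 ms -> weight shrinks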
    [38:08]
    If you say we could port a bee brain to a chip, why not a human brain?
    ->
    Members of the Human Brain Project told me there is a total lack of understanding of how the brain works.
    The human brain appears to be the most complex machine in the world.
    Improvements in lithography might offer chips with a neuronal complexity similar to the human brain's.
    But it would be a waste of time, because we don't know what kind of operating system or mechanism we would have to adopt to make it work.
    We might target very small brains and only some few distinct features of a brain
    - like the sensory motor system of a bee
    - or object detection and navigation of the ant
    - ...so very simple brains and functions might be feasible within the next decade
    [40:59]
    How do your examples compare to classical examples with respect to savings in time and energy?
    ->
    All currently developed neuromorphic hardware uses CMOS technology for diodes, transistors, capacitors and so on.
    A classical transistor network with similar capabilities would require far more space on the chip.
    Thus these new memory types are essential if you want to save energy and complexity the way the brain does.
    [43:54]
    How can you adapt to changes in the architecture, like when the count or wiring of neurons is supposed to change?
    ->
    You can design your system in a hybrid way to integrate RRAM flexibly into your classical CMOS hardware
    [46:09]
    Are you trying to develop a device dedicated for AI only or as a (more general?) peripheral device that can replace current GPU acceleration?
    ->
    We are not competing with GPUs, we are targeting a new type of computation. Replacing a GPU with such a network wouldn't make any sense.
    In-memory logic does not seem to be very interesting, considering high cycle times and high energy consumption.
    But using RRAM (or similar technology) to emulate neurons can save you a lot of energy and space on the chip.
    [47:53]
    In-memory computing could have a great impact, because you have a kind of filter that tells you what you really have to compute when changing a value in a neuromorphic database, for example.
    The input is the result _and_ the behavior at the same time; that could be the reason for this big change in energy management.
    ->
    Yeah, I totally agree.
    If you compute within the memory, you don't have to move the data from the memory to the processor (see the crossbar sketch below).
    [49:09]
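    On that last point, a rough sketch of why computing in memory removes the data movement: in a resistive crossbar, the stored conductances and the applied read voltages yield a matrix-vector product directly as column currents (Ohm's and Kirchhoff's laws). The conductance and voltage values below are made up for illustration:

        import numpy as np

        # G[i, j] is the conductance (siemens) of the cell at word line i / bit line j;
        # the matrix itself is the stored data.
        G = np.array([[1e-6, 5e-6, 2e-6],
                      [4e-6, 1e-6, 3e-6]])

        # Input vector applied as read voltages on the word lines (volts).
        v = np.array([0.2, 0.1])

        # Kirchhoff's current law: the current on bit line j is sum_i v[i] * G[i, j],
        # i.e. the product v @ G, computed without moving G to a separate processor.
        i_out = v @ G
        print(i_out)   # column currents (amperes)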

    • @paquitagallego6171
      @paquitagallego6171 3 years ago +2

      Thanks...

    • @everettpiper5564
      @everettpiper5564 3 years ago +2

      I really appreciate these notes. And your final remark is spot on. Very intriguing. You might be interested in a channel called Dynamic Field Theory. Anyhoo, appreciate the insights.

    • @ashwanisirohi
      @ashwanisirohi 3 years ago +2

      What more can I say for what you did... Thanks

    • @celestialmedia2280
      @celestialmedia2280 3 years ago +2

      Thanks for your awesome effort 👍

    • @leosmi1
      @leosmi1 3 years ago +2

      Thank you

  • @JoshuaSalazarMejia
    @JoshuaSalazarMejia 3 years ago +10

    You saved the day by recording and uploading the presentation. Amazing topic. Thanks!

  • @ashwanisirohi
    @ashwanisirohi 3 years ago +2

    Thanks for making and uploading the video in such a nice manner. Very comfortable to follow the contents of the talk.

  • @HavenInTheWood
    @HavenInTheWood 10 months ago

    This is great, I'll be watching again!

  • @Atrocyte
    @Atrocyte 3 years ago +4

    Thank you for this fascinating lecture and sharing it!

  • @ashwanisirohi
    @ashwanisirohi 3 years ago

    The talk was good but the questions were better. I like the Prof.'s honesty and smooth answering.

  • @totality10001
    @totality10001 4 years ago +4

    Brilliant lecture. Thank you!

  • @holdenmcgroin8917
    @holdenmcgroin8917 6 years ago +4

    Thanks for sharing, very informative presentation

  • @feuxdartificeppp
    @feuxdartificeppp 5 years ago +3

    Great video! Thank you!

  • @Artpsychee
    @Artpsychee 2 years ago

    Thank you for sharing your insights

  • @viswanathgowd4060
    @viswanathgowd4060 3 years ago

    Thanks for sharing this.

  • @GWAIHIRKV
    @GWAIHIRKV 4 years ago +3

    So are we saying this is another form of memristor?

  • @teamsalvation
    @teamsalvation 4 years ago +5

    Although this is well over my head, I am excited by what is being said, or at least what I think is being said and shown.
    The brain is both a memory and a processor. What they've been able to accomplish is to recreate "the brain" (for talking purposes, I know it's not literal).
    Again, keeping this simple for me: if I were using TensorFlow and running the session on a GPU, would I instead run this session on "the brain" created by Dr. Ielmini? Is the initial input data set still gathered in the traditional sense, or would we be moving data directly into "The Brain" from the data-capture HW (e.g. a video camera data stream) and then kicking off the session by some HW interrupt once some pre-defined amount of raw data has been transferred?
    This is all really cool stuff!!
    Can't wait to replace my GPUs with NPUs (Neuromorphic Processing Units) :-) with PCI-E 6 x16 (64 GT/s)

    • @jacobscrackers98
      @jacobscrackers98 3 years ago +1

      I would try to email him if I were you. I doubt anyone is looking at YouTube comments.

  • @pradhumnkanase8381
    @pradhumnkanase8381 4 years ago

    Thank you!

  • @entyropy3262
    @entyropy3262 3 years ago

    Thanks, really interesting.

  • @SaiBekit
    @SaiBekit 4 years ago +1

    Does anyone understand the difference between this and Neurogrid's architecture?

  • @moizahmed8053
    @moizahmed8053 5 years ago +4

    I want to try these "toy examples" myself... Is there a way to get my hands on the RRAM modules/ICs?

    • @davidtro1186
      @davidtro1186 4 years ago +2

      knowm.org/ has similar memristor technology, made in the USA

  • @silberlinie
    @silberlinie 3 years ago

    An absolutely brilliant thing.
    Although this report is from 2018,
    has the project made progress since then?
    What is there to report in the meantime?
    Is Politecnico Di Milano still working
    on it?

  • @matthewlove2346
    @matthewlove2346 4 years ago +1

    Is there a paper that goes into more depth that I could read? And if so where can I access it?

    • @cedricvillani8502
      @cedricvillani8502 4 years ago

      IEEE has everything you could ever want, and it stays updated; become a member.

    • @cedricvillani8502
      @cedricvillani8502 4 years ago

      New memory device that just came out! The Nvidia NGX Monkey Brain, comes pretrained with a few muscle memory actions such as, throwing poop at a fan, and getting sexually aroused at the sight of a banana.

  • @styx1272
    @styx1272 4 years ago +3

    Too complicated for me; glad others found it enlightening.

  • @jaimepatino1645
    @jaimepatino1645 2 years ago

    And that: [18] Here is wisdom. Let him that hath understanding count the number of the beast: for it is the number of a man; and his number is Six hundred threescore and six.

  • @Nathouuuutheone
    @Nathouuuutheone 3 years ago

    20:57

  • @onetruekeeper
    @onetruekeeper 4 years ago +1

    This could be simulated using holographic circuits.

    • @brian5735
      @brian5735 7 months ago

      Yeah, I thought of that. Photons would be much more efficient in a quantum computer. Less noise and decoherence.

    • @brian5735
      @brian5735 7 months ago

      Just etch the gates

  • @ONDANOTA
    @ONDANOTA 5 years ago +2

    is this faster than quantum computers? does it scale exponentially or better?

    • @ONDANOTA
      @ONDANOTA 5 years ago +2

      Answering my own question after googling: yes, it is faster than QCs

    • @mrpr93cool
      @mrpr93cool 5 years ago +2

      @@ONDANOTA faster in what?

    • @anywallsocket
      @anywallsocket 5 years ago +2

      You have to realize what you're asking here. QC is just computing at the nano level, as opposed to micro level, and taking advantage of entanglement / tunneling rather than attempting to avoid it. In principle, one is not "faster" than the other as both operations can unfold at the rate of electromagnetic wave impulses (the fastest you can get). It's just a matter of what physical medium is catalyzing this computational operation. In-Memory computing is a technique for organizing that medium, so as to eliminate the latency between data storage and data manipulation. It's a different ball-game altogether, and in principle, both QC and CC can be organized via this In-Memory technique.

    • @ShakmeteMalik
      @ShakmeteMalik 4 years ago +1

      @@anywallsocket Correct me if I am mistaken, but is it not the case that QC aims to eliminate network latency altogether by utilising Spooky Action at a Distance?

    • @anywallsocket
      @anywallsocket 4 years ago +2

      @@ShakmeteMalik Depends what you mean by "network latency". For the most part QC is employed for processing information, not storing it - since quantized info is usually too delicate to store. The whole point of In Memory computing is combining the processing and storing, which therefore works much better for classical computing.

  • @davids3116
    @davids3116 2 years ago

    Need to create an ego operating system for AI to improve its capabilities

  • @demej00
    @demej00 3 years ago

    Tough to pour your soul into research for only 10 people.

  • @Ositos_dad
    @Ositos_dad 10 months ago

    I don't understand a word of it.