Generative Model That Won 2024 Nobel Prize

  • Published: 8 Oct 2024

Comments • 142

  • @ArtemKirsanov
    @ArtemKirsanov  1 month ago +11

    Join Shortform for awesome book guides and get 5 days of unlimited access! Get 20% off at shortform.com/artem

  • @vastabyss6496
    @vastabyss6496 1 month ago +126

    First the Hopfield Network video and now this?! And only a month apart? I cannot thank you enough for the value that you've added to this platform

  • @henrikjohnsson7403
    @henrikjohnsson7403 21 hours ago +5

    Quick change of name! For a while I thought you knew of the Prize beforehand when I scrolled through my list of "saved for later". Watched it now; awesome work!

  • @KevinWang-jc1bx
    @KevinWang-jc1bx 1 month ago +28

    AI's not the only one hallucinating, can't believe the rate and quality at which Artem is publishing these videos, thank you so much!

  • @vidal9747
    @vidal9747 1 month ago +25

    I never knew my background in Physics would make understanding this topic such a breeze. It is bizarre how in this world areas that look so different can be so close.

  • @JohlBrown
    @JohlBrown 1 month ago +28

    i've never seen a well-worded explanation of temperature (as a casual ML enjoyer) but seeing the sigmoid morph with temperature and the relationship between stochastic and deterministic was such an awesome learning moment, thank you!
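
    A minimal numerical sketch of that morphing (illustrative Python; the single unit and its energy gap are hypothetical, not code from the video): the unit's turn-on probability is a sigmoid of gap/T, so the same formula gives an almost deterministic step at low T and a near coin flip at high T.

        import numpy as np

        def p_on(energy_gap, T):
            # Probability that a stochastic binary unit switches on: sigmoid(energy_gap / T)
            return 1.0 / (1.0 + np.exp(-energy_gap / T))

        gap = 1.0  # hypothetical energy gap of one unit
        for T in (0.01, 0.5, 1.0, 5.0, 100.0):
            print(f"T={T:6.2f}  p(on)={p_on(gap, T):.3f}")

    At T = 0.01 the unit fires essentially deterministically (p is about 1.0 for a positive gap); at T = 100 the probability is close to 0.5, i.e. pure noise.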

  • @theo4884
    @theo4884 1 month ago +1

    Watching your "AI & Machine Learning" playlist feels like binge watching my favorite show. Hope you continue them. You are an amazing teacher

  • @copywright5635
    @copywright5635 1 month ago +11

    Always happy to watch your uploads. The Boltzmann distribution is something that I think is often misunderstood. So thank you for this video!
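
    For reference, the Boltzmann distribution assigns each state s a probability that decays exponentially with its energy (standard form, with T the temperature and Z the normalizing partition function; not a quote from the video):

        p(s) = \frac{e^{-E(s)/T}}{Z}, \qquad Z = \sum_{s'} e^{-E(s')/T}

    Low-energy states are exponentially more likely, and T sets how sharply the distribution concentrates on them.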

  • @ahaskarkarde4163
    @ahaskarkarde4163 23 hours ago +2

    With the 2024 Nobel Prize in physics awarded to the pioneering works introducing the Hopfield Network and Boltzmann Machines, your latest videos explaining exactly these topics were just timely enough to help us build a great understanding of such important tools :)

  • @holymoly54775
    @holymoly54775 1 month ago +5

    Hi Artem,
    I just want to say that in 3 weeks I begin my graduate degree in neuroscience, and it was your channel that inspired me to begin this journey two years ago. Keep up the good work, and I look forward to the inspiration for years to come.

    • @joeybasile1572
      @joeybasile1572 1 month ago

      What classes are you taking right now?

    • @Sam264-n2o
      @Sam264-n2o 1 month ago

      @@joeybasile1572 it’s summer holiday

    • @ArtemKirsanov
      @ArtemKirsanov  1 month ago +2

      Wow, congrats!!

    • @SystemsMedicine
      @SystemsMedicine 1 month ago +2

      Good Luck. And when things get tough, and they will… endeavor to persevere.

    • @holymoly54775
      @holymoly54775 1 month ago

      @@joeybasile1572 I haven't started yet, but the program is non-traditional: instead of registering for classes, there is a dedicated period of lectures every day that will cover all aspects of neuroscience, followed by lab rotations and research training. Subjects include neuroanatomy, computational modeling, molecular biology and neurogenetics, vision, audition, and then for the labs there are courses in EEG, microscopy, and cytochemistry, and this is about half of all the subjects covered. It truly is a comprehensive program, which upon completion will feed me right into a PhD track depending on what areas I have excelled in. My background is in math and computer science, so I am hoping to focus on the computational side of things, but who knows where I will eventually end up!

  • @JonRichie294
    @JonRichie294 1 month ago +1

    This is insane! I love your videos on this channel! I’m just waiting for your channel to exponentially boom to a million subscribers.

  • @owenpawling3956
    @owenpawling3956 1 month ago +5

    So glad for another upload! You have no idea how fast I clicked!

  • @joonaskuusisto2767
    @joonaskuusisto2767 1 month ago +1

    This is incredible stuff once again. You have pretty much covered everything I'm interested in within neuroscience, with insight I never possessed. I used to research brain criticality and modeling but am now in a boring day job. Glad we have people like you!

  • @vladimirputin7443
    @vladimirputin7443 1 month ago +1

    This guy is awesome. I can't explain how much more intelligent I feel after watching your video. Thank you so much for taking out time to educate people like us.

  • @kahvefincanim234
    @kahvefincanim234 1 month ago

    It is really great to visually explain such complex and valuable information in such an understandable way!

  • @huytruonguic
    @huytruonguic 1 month ago +2

    I get chills every time someone tries to explain the differences between the data's states and the generator's states. The former is surface level while the latter is highly abstracted. It says something about the many redundancies of the reality we live in and how there exists a general abstraction (math formalisms, for example), or maybe that's just how we observe reality while being part of reality.

    • @judehammoud5959
      @judehammoud5959 1 month ago +2

      theory of constructed emotion / active inference ;)

  • @scottmiller2591
    @scottmiller2591 1 month ago

    This video was one of the bright spots of my day. It was well-crafted, reminded me of my work on ladder RBMs long, long ago, and got me thinking about how modern machines could build on these methods, and vice versa.

  • @etunimenisukunimeni1302
    @etunimenisukunimeni1302 1 month ago

    You have a knack for explaining things in an understandable way without dumbing them down too much, thanks! Finally I know what the temperature setting actually does in a neural network; funny how analogous it is to physical temperature :)
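
    A small sketch of that analogy (hypothetical Python, not from the video): the "temperature" knob in samplers divides the scores by T before normalizing, just as the Boltzmann factor e^(-E/T) divides energy by temperature; small T sharpens the distribution, large T flattens it.

        import numpy as np

        def softmax_with_temperature(logits, T):
            # Divide scores by T, then normalize; small T sharpens, large T flattens.
            z = np.asarray(logits, dtype=float) / T
            z -= z.max()              # subtract the max for numerical stability
            p = np.exp(z)
            return p / p.sum()

        logits = [2.0, 1.0, 0.1]
        for T in (0.1, 1.0, 10.0):
            print(f"T={T:5.1f}  probs={softmax_with_temperature(logits, T).round(3)}")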

  • @guillaumeleguludec8454
    @guillaumeleguludec8454 1 month ago

    Wow, you really nicely explained what Boltzmann machines are and where they come from, and the animation is super pretty! Thank you Mr Kirsanov

  • @pandusonu
    @pandusonu 1 day ago +3

    Good time to rename this video to "The generative model that won nobel prize in physics 2024"

  • @ItsGlizda
    @ItsGlizda 1 month ago +1

    I recently stumbled upon your channel, and it's absolutely fascinating! It ignites my curiosity and explains things in a way that awakens my inner child. Keep up the fantastic work!

  • @BinghaoWang-k5b
    @BinghaoWang-k5b 3 days ago

    amazing, detailed and easy to understand. thank you so much

  • @clayre839
    @clayre839 1 month ago +19

    The trouble with true creativity is intention. It's easy for humans to recognize things that we ourselves can produce, extrapolate patterns, and impose experience and emotion on them, but fundamentally, if randomness is the only thing driving the adaptation rather than transitive expression, it is no more creative than a wind chime. You can think of the training data as representing the tuning of each resonator, and though we might FIND beauty in the emergent patterns, it is no more creative than its design and tuning, both requiring explicit human intervention. These models, fed their own results, very quickly deform into incomprehensible static.

    • @clayre839
      @clayre839 1 month ago +10

      To add to this: the false equivalency and under-emphasis of the human involvement in tuning is a large contributor to the demonstrably harmful supposition of replacing humans with machines, ignoring the value judgment that is imposed at every level of refinement. I implore you to refrain from such false equivalencies, as they are currently being used in attempts to undermine just about every creative field from engineering to writing to graphic design; the technology would better be described as a sampling tool. These misconceptions have real-world implications that are doing demonstrable societal harm. Take for example that even now I am fighting with the predictive text elements attempting to re-orchestrate my unorthodox sentence structure and subsequently undermining the intent of my writing, which such a machine would have no insight into. It cannot understand meaning outside of association and lacks any capability of truly understanding the emergent contradictions of language. So please stop describing these slot machines as creatives when their success is fundamentally built on confirmation bias.

    • @vinniepeterss
      @vinniepeterss 1 month ago

      😮

    • @conduit242
      @conduit242 1 month ago

      Hilariously, your writing style is awkward and unnecessarily formal rather than creative. One would think computers would be just fine with such a style.

    • @clayre839
      @clayre839 1 month ago +4

      @@conduit242 for real, it's hard enough being autistic without my computer trying to fuck with me. We're both on the outside here; you'd think we'd be working together 🤣 But it's not the formality, it's the variance that tends to fuck with predictive text. The tone was just to assert a sincere formality. Like, the larger issue of mechanization in creative fields is a serious problem, full stop; and I think the language we choose when we're talking about it is important.

    • @unclicked4690
      @unclicked4690 1 month ago +1

      I love the wind chime analogy, that's a really cool conceptual analogy.
      I disagree with the basic premise that creativity requires intention; for example, I'd say evolution is very creative but has no underlying "intention".
      It's also very well understood that human consciousness (and creativity) are fundamentally built on bias; indeed, one can only learn if there is a bias to exploit. A very simple example of this is w.r.t. identifying the similarity of objects: we say a red cup is more similar to a blue cup than it is to a chair, but this requires a bias towards human everyday items.
      What I mean is that if we had to put a number on the similarity of a blue cup and a red cup, we could say they are 90% similar, while a chair is only 10%. Soon you run into trouble with this method, because how do you quantify how different a chair and a cup are from the ocean? What about a crimson cup? What about bacteria? What about a black hole? What about a ceramic red cup?
      What you see is that you need ever-increasing detail, and your metric of similarity simply explodes or collapses into nonsense.
      Humans exploit bias to be able to think, to be able to logically classify items and objects and produce creative solutions.

  • @victormanuel8767
    @victormanuel8767 1 month ago

    Fantastic. Absolutely phenomenal work here.

  • @AshifKhan-sn6jx
    @AshifKhan-sn6jx 1 month ago

    Okay, you taught me about the Boltzmann distribution better than my school physics teacher, and it wasn't even the main point of what you were trying to do

  • @imaltenhause4499
    @imaltenhause4499 1 month ago +4

    Fantastic video. A small typo, however, at 08:41: there you write -ln[p]/epsilon = T. It should be -epsilon/ln[p] = T (worked out after this thread).

    • @ArtemKirsanov
      @ArtemKirsanov  1 month ago +1

      Thanks! Good catch!

    • @raajchatterjee3901
      @raajchatterjee3901 6 days ago

      Is this the relationship that relates temperature with differentials of energy and entropy?
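
    Putting the corrections in this thread together, the relation follows in one line from the Boltzmann factor (a standard derivation, not a transcript of the video's notation):

        p = e^{-\epsilon/T}
        \;\Rightarrow\; \ln p = -\frac{\epsilon}{T}
        \;\Rightarrow\; T = -\frac{\epsilon}{\ln p},
        \qquad \text{equivalently} \qquad
        \frac{1}{T} = -\frac{\ln p}{\epsilon} = \frac{\ln(1/p)}{\epsilon}.

    Since 0 < p < 1, ln p is negative, so T comes out positive.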

  • @louisdupont2126
    @louisdupont2126 1 month ago

    Man, your videos are just awesome, and I finally understood the Boltzmann formula xD

  • @sirinath
    @sirinath 1 month ago +2

    Can you do a course on Markov / semi-Markov / hidden Markov / hidden semi-Markov models, please?

  • @AyushVerma-ui7re
    @AyushVerma-ui7re 1 month ago +1

    beautiful explanation.

  • @iamdaddy962
    @iamdaddy962 1 month ago

    happy to see you in the US!! Hope you thrive here

  • @rxphi5382
    @rxphi5382 1 month ago

    I like the passion I feel from you in your videos! I just wanted to let you know that there is a small typo at 15:06 in the bottom right corner

  • @darkyz543
    @darkyz543 1 month ago

    Marvelous. Thank you. I had almost forgotten how delicious mathematics is.

  • @anywallsocket
    @anywallsocket 1 month ago

    My 2nd physics class adjunct prof told me his fave subject was statistical physics, now I get it 🙏

  • @English-bh1ng
    @English-bh1ng 1 month ago

    I eventually grasped the notion of RBM. Thx

  • @CopperKettle
    @CopperKettle 7 hours ago

    Thank you, this is very interesting. Keep up the good work.

  • @Jacob-ji1ec
    @Jacob-ji1ec 1 month ago +2

    This video is amazing man 🔥

  • @VaradMahashabde
    @VaradMahashabde 1 month ago

    Best explainers, hands down

  • @ced1401
    @ced1401 1 month ago

    Great video. There's a small typo around 9:15: ln(1/p)/epsilon should be 1/T.

  • @syrachify
    @syrachify 1 month ago

    Awesome video! I love this channel! I have a question, which I hope someone will clarify for me: if Boltzmann Machines are unsupervised, how do we know what data is meaningful (like number digits) and what data is just noise, so that we sculpt valleys around the meaningful patterns in the energy landscape? Similarly, in the weight update rule: updating iteratively works on maximizing the probability of the training data, equivalent to minimizing the energy of patterns, but the rule itself assumes we have to know beforehand what the patterns are (because of data - model). Can anyone help with an answer?
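
    On the "data - model" rule the question refers to: the classic Boltzmann machine update needs no labels and no prior notion of which patterns matter. It compares unit-unit correlations measured while the visible units are clamped to training examples against correlations when the machine runs freely; statistics that are consistent across the training set get their energy lowered, while idiosyncratic noise largely averages out of the data term. A minimal numpy sketch of that update (hypothetical variable names, binary units assumed):

        import numpy as np

        def boltzmann_weight_update(clamped_states, free_states, lr=0.01):
            # clamped_states: sampled unit states with visible units fixed to training data
            # free_states:    sampled unit states with the machine running freely ("dreaming")
            # Both arrays have shape (num_samples, num_units).
            data_corr = clamped_states.T @ clamped_states / len(clamped_states)
            model_corr = free_states.T @ free_states / len(free_states)
            return lr * (data_corr - model_corr)  # dW to add to the weight matrix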

  • @yacinebel-hadj6559
    @yacinebel-hadj6559 12 days ago

    Thanks amazing work I love this topic :)

  • @nessiecz2006
    @nessiecz2006 1 month ago

    I was worried I was missing something at 8:43. Nevertheless, great vid, gonna continue watching now :) Thank you for making these explanations
    PS: appreciate the 3b1b music and style;)

  • @anywallsocket
    @anywallsocket 1 month ago

    What you could do for visualization is plot a distribution of x for each digit above it, like a mountain, so that the one for an 8 looks different from that of a 2, etc.

  • @enriquesolarte1164
    @enriquesolarte1164 1 month ago +1

    Great videos

  • @-mwolf
    @-mwolf 1 month ago +2

    The 3b1b of neuroscience and ML, thx for the videos!

  • @WillyDarko
    @WillyDarko 5 hours ago

    Insanely high quality content

  • @Darkev77
    @Darkev77 1 month ago +1

    Given our current understanding of Quantum Mechanics and energy levels being quantized, is the statement @8:08 true (is it constant with the same amount)?

  • @haroldhamburgler
    @haroldhamburgler 1 month ago

    I've learned today, as many times before: always finish the video before leaving an angry comment.

  • @giuseppepapari7419
    @giuseppepapari7419 1 month ago +2

    9:05 I guess you meant -ln p / epsilon = 1/T. But that is minor, I like the video

    • @nessiecz2006
      @nessiecz2006 1 month ago

      I've been searching for this comment, was wondering if I'm missing something. Thank you, kind stranger

  • @luke.perkin.inventor
    @luke.perkin.inventor 1 month ago +1

    At 2x speed it sounded like you said "what sparked this sh*t" 😂

  • @guyguy12385
    @guyguy12385 1 month ago +1

    yea you are absolutely goated

  • @ralvarezb78
    @ralvarezb78 1 day ago

    14:00 This is strongly related to the simulated annealing optimization method
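
    For readers who haven't met it: simulated annealing uses the same Boltzmann acceptance rule while slowly lowering T, so the search explores broadly at first and then settles into a deep minimum. A toy sketch (hypothetical objective and cooling schedule, not from the video):

        import math, random

        def anneal(energy, x0=0.0, T0=5.0, cooling=0.995, steps=5000):
            x, T = x0, T0
            for _ in range(steps):
                candidate = x + random.uniform(-0.5, 0.5)   # propose a nearby state
                dE = energy(candidate) - energy(x)
                # Always accept downhill moves; accept uphill ones with Boltzmann probability.
                if dE < 0 or random.random() < math.exp(-dE / T):
                    x = candidate
                T *= cooling                                # gradually lower the temperature
            return x

        print(anneal(lambda x: (x - 3.0) ** 2))  # should land near x = 3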

  • @justanotherytaccount1968
    @justanotherytaccount1968 1 month ago

    Awesome video, thanks!
    Could the stochastic “hallucination” phase be related to hippocampal replay training cortical networks (“hidden” layer) during sleep?

  • @Pedritox0953
    @Pedritox0953 1 month ago

    Great video!

  • @SystemsMedicine
    @SystemsMedicine 1 month ago

    Sweet Vid… Rock On!

  • @JuergenAschenbrenner
    @JuergenAschenbrenner 1 month ago

    great stuff, keep up Your good work

  • @catcatcatcatcatcatcatcatcatca
    @catcatcatcatcatcatcatcatcatca 1 month ago

    0:23 oh god. Reading that chatGPT answer hurts. That is equivalent to asking for a pasta recipe and seeing the answer starting with
    1) start a greasefire in the kettle
    2) for eight to ten minutes, pour water on it

  • @leonardorazzai840
    @leonardorazzai840 1 month ago

    Wow, so fascinating 😍

  • @vinniepeterss
    @vinniepeterss 1 month ago

    great video

  • @vidal9747
    @vidal9747 1 month ago

    Our brains activate neurons based on probabilities. Those are created by particles that follow laws pretty close to what is explored in thermodynamics and statistical mechanics. There is nothing more fitting than creating models that tend to mimic those aspects. Our computers are absolutely better than humans for problems where we already know the equations, because we know the uncertainty of every number in a computer. But for new problems, a probabilistic approach is very good.

  • @ArbaouiBillel
    @ArbaouiBillel 1 month ago

    Amazing keep going 👍🏼

  • @SaúlAlejandroVillaradosFlores
    @SaúlAlejandroVillaradosFlores 7 days ago

    I fn Love ur channel buddie

  • @ronnysanjaya6823
    @ronnysanjaya6823 18 days ago

    Yes many thanks.

  • @not_amanullah
    @not_amanullah 1 month ago

    Thanks ❤️

  • @daleanfer7449
    @daleanfer7449 1 month ago

    great content❤❤❤

  • @InquilineKea
    @InquilineKea 1 month ago

    What temperature optimizes for the highest range of perplexity values?

  • @MlNECRAFT69
    @MlNECRAFT69 1 month ago

    lol the new title made me watch it again on accident😊

  • @davidfmendiola2009
    @davidfmendiola2009 1 month ago

    🙂 Thanks!

  • @notu483
    @notu483 1 month ago

    13:14 Softmax wasn’t mentioned?

  • @faisalsheikh7846
    @faisalsheikh7846 1 month ago

    Wonderful❤

  • @peterpetroff851
    @peterpetroff851 21 days ago

    20:47 spelling error. Thank you

  • @SeattleShelby
    @SeattleShelby 20 days ago

    As a Boltzmann Brain in a fever dream, I found this video very insightful into my waking nightmare.

  • @IoannisNousias
    @IoannisNousias 1 month ago

    How do you create your animations? This is awesome.

    • @ArtemKirsanov
      @ArtemKirsanov  1 month ago +1

      After Effects + Python + Blender :)
      I have a video about it that might help: ruclips.net/video/yaa13eehgzo/видео.htmlsi=EcoTIRW9Qhnnb9xS

  • @wwvvwvwvwvwv
    @wwvvwvwvwvwv 1 month ago +5

    me when ai learns to dream

  • @gunaysoni6792
    @gunaysoni6792 1 month ago

    I was expecting a Brilliant Sponsorship 😂

  • @crazyedo9979
    @crazyedo9979 1 month ago

    Dr. Chandra. Will I dream?😁

  • @1vEverybody
    @1vEverybody 1 month ago +1

    Ai learning how to dream is most people’s nightmare

  • @ThomasConover
    @ThomasConover 1 month ago

    1:13 The Boltzmann machine is the AI equivalent of dropping acid for a human.

  • @maths.visualization
    @maths.visualization 1 month ago

    Can you share the code for the video?

  • @Rockamoley
    @Rockamoley 29 days ago

    This is a good video, but the history given in the first few minutes is completely hallucinated. Associative memories are as old as von Neumann architectures, and thinking like humans has always been the first goal of researchers. Calculating exact trajectories was a useful stepping stone.

  • @stathius
    @stathius 17 days ago

    How realistic is the assumption that the probability of jumping between any adjacent state is equal?

  • @chara2.o803
    @chara2.o803 1 month ago

    Lil bro is dreaming ❤

  • @Arts_ng_sa_Socialismo
    @Arts_ng_sa_Socialismo 1 month ago

    Bonjour

  • @car103d
    @car103d 1 month ago

    HAL 9000: “Will I dream?”

  • @TeslaElonSpaceXFan
    @TeslaElonSpaceXFan 1 month ago +1

    👍

  • @justanotherytaccount1968
    @justanotherytaccount1968 1 month ago

    Extra comment for the algorithm

  • @futureshockpod
    @futureshockpod 1 month ago

    Holy crap this is hot.

  • @polygondeath2361
    @polygondeath2361 1 month ago

    We can reject the premise right off the bat. There really isn't much ambiguity when it comes to AI "art". It is algorithmic and soulless. There wasn't a shift from algorithmic to creative; as it stands, machines are yet to be creative, and have stuck to being algorithmic.

    • @Singularity606
      @Singularity606 3 days ago

      Only person in the comments to lash out in this way, parroting the most boring meme of them all ("soulless").

    • @polygondeath2361
      @polygondeath2361 2 days ago

      @@Singularity606 I can appreciate the vast applications of machine learning technologies. Art isn't one of them. It's hard to not tell when AI is involved in any "creative" capacity. Thus, soulless.

  • @cyb0rg14
    @cyb0rg14 1 month ago

    I came here to learn about AI and ended up going after an understanding of physics.

  • @lost4468yt
    @lost4468yt 1 month ago +2

    When AI learns to meme?

    • @trucid2
      @trucid2 1 month ago

      Then I can retire. 🫡

  • @idegteke
    @idegteke 1 month ago +3

    The fact that some people might even consider the idea that the AI we know of has any kind of intelligence of its own, or any capability of creating a novum, shows that the people creating and marketing it from behind the scenes are really good at what they are doing... at making money :)

  • @HaicangChen
    @HaicangChen 7 hours ago

    It would be interesting to see the # of views these days...

  • @bladekiller2766
    @bladekiller2766 1 month ago

    How do these models compare to the SOTA like Transformers?

  • @高煜朗
    @高煜朗 22 days ago

    Someone is stealing your video on a Chinese website: bilibili

  • @kubaissen
    @kubaissen 1 month ago

    Bug in step 2

  • @forblender2695
    @forblender2695 1 month ago

    Me when AI does drugs and hallucinates 😅😂😂

  • @rangefreewords
    @rangefreewords 1 month ago

    If a machine-learning ESP knew what I was thinking about headgear and breathing underwater one day ahead of my father's head being buried under a pond, lmk. I only thought about it. I could only randomly find a half-Japanese / half-Italian woman one to three days before I ever met her in 2008. But if your cognitive ability is more than one day, LMK.

  • @not_amanullah
    @not_amanullah 1 month ago

    🖤🤗

  • @jovanwatson7656
    @jovanwatson7656 1 month ago

    Im lost

  • @ericswain4177
    @ericswain4177 1 month ago +1

    "AI learns to dream" is a fallacy, as dreaming is not a learned phenomenon to start with. We as humans don't even know what dreams are or where they come from, so there is nothing for AI to emulate.

  • @HadiLq
    @HadiLq 1 month ago

    All good, but T = -\epsilon/\ln p (equivalently \epsilon/|\ln p|)