ICLR 2021 Keynote - "Geometric Deep Learning: The Erlangen Programme of ML" - M Bronstein

  • Published: 29 Dec 2024

Comments •

  • @AbhishekSingh-mz8mb
    @AbhishekSingh-mz8mb 3 years ago +255

    The presentation quality, content coverage, and animation here is incredibly marvelous! This has certainly set a gold standard for future talks. Thanks a lot for putting this together.

    • @bucketofbarnacles
      @bucketofbarnacles 3 years ago +5

      Couldn’t agree more. Depth, breadth and effectiveness of communication are spot on.

  • @AICoffeeBreak
    @AICoffeeBreak 3 years ago +75

    What a great keynote, both content-wise and in terms of the visuals. 👏 A good side-product of virtual conferences is certainly the production value of scientific talks going up.

  • @vishalmishra3046
    @vishalmishra3046 3 years ago +7

    This approach to geometric neural nets is like a potential Nobel-prize-winning grand unification theory (GUT), unifying all the neural net architectures, from ANNs, CNNs, RNNs, graph NNs, message-passing neural nets (MP-NNs) to Transformers (attention neural nets). Wonderful video!! Just like M-theory: when there is too much innovation accumulating over time, a simplifier needs to be born who can merge and unify all of it into a single, more general-purpose abstraction.

  • @kosolapovlev6029
    @kosolapovlev6029 3 years ago +66

    This is literally the best presentation about machine learning I have ever seen. Thank you for your marvelous work!

    • @Jacob011
      @Jacob011 3 years ago +1

      It is very intriguing research and graphically well presented.
      I wonder what relationships there are between this unifying geometric perspective of deep learning and random finite sets (stochastic geometry, Poisson point processes), which are now all the rage in the multi-object tracking community.
      This presentation is also slightly infuriating in that it goes over very deep concepts very fast. Regardless though, amazing work!

  • @youcefouadjer8057
    @youcefouadjer8057 3 years ago +12

    The incredible Michael Bronstein is on YouTube!! This is awesome.

  • @raghavamorusupalli7557
    @raghavamorusupalli7557 3 years ago +1

    It takes a semester for us to comprehend this marathon talk, Sir. Great visionary talk. Thank you Sir

  • @benganot4363
    @benganot4363 3 years ago +3

    As a computer science student currently preparing for my ML course exam, I was just blown away by how all machine learning algorithms are related. Beautiful, stunning work.

  • @ehtax
    @ehtax 3 years ago +24

    Presentation mastery! You managed to boil things down to the most salient intuitions, all while covering such a wide breadth of topics! This has me amped to dive into your papers (I'm in fMRI neuroscience, where graph-based predictive modelling has been mostly ineffectual thus far).

  • @3ss3ns3
    @3ss3ns3 2 years ago +1

    Very good coverage. Thank you, Prof. Bronstein.

  • @VitorMeriat
    @VitorMeriat 3 years ago +2

    Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges is of great importance for my master's degree. Great presentation; it is an honor.

  • @MachineLearningStreetTalk
    @MachineLearningStreetTalk 3 years ago +36

    Amazing stuff! Hope we can interview Prof. Bronstein on our show soon 😀

  • @gracechang7947
    @gracechang7947 3 years ago +5

    Incredible, really enjoyed this keynote. Agree, one of the best presentations on ML I’ve seen yet. I’m really happy to see the emphasis on clarity to a general audience with such well-crafted illustrations of concepts.

  • @TL-fe9si
    @TL-fe9si 2 years ago +1

    Thank you! amazing presentation!!! I giggled a little when seeing 2:40

  • @LovroVrcek
    @LovroVrcek 3 years ago +1

    This should be a gold standard of keynote talks. Amazing! 👏

  • @fulcobohle4576
    @fulcobohle4576 3 years ago +2

    I was amazed by your presentation, good job. What amazed me even more was that I was able to understand in detail everything you explained. 35 years ago I studied physics and mathematics and learned all aspects of what you tell in this video, without ever realizing it could be applied to AI as well. Like you, I was confused about the why of convolution; thanks for showing me the light!

  • @phillipyu6260
    @phillipyu6260 3 years ago +12

    This inspires me to continue my education. My brain is itching to learn more!

  • @zorqis
    @zorqis 3 years ago +17

    This was deeply thought provoking and wonderfully inspiring.

  • @adrianharo6586
    @adrianharo6586 3 years ago +13

    I wish I could understand all the details, but my education only takes me so far in understanding the concepts you're going over. I am a newbie ML enthusiast. I really do appreciate the animation; it makes it nice to follow.

  • @ahmadzobairsurosh5832
    @ahmadzobairsurosh5832 2 years ago +3

    Absolutely Amazing Prof Bronstein!
    Thank you for such an amazing piece of content.

  • @Fordance100
    @Fordance100 3 years ago +15

    Very interesting perspectives on deep learning and seamless transitions from one concept to another. Truly a masterpiece of scientific presentation. Thank you so much for posting it.

  • @MrAceman82
    @MrAceman82 3 years ago +12

    I must admit, I came across this link accidentally. The presentation is a masterpiece. Keep it going. Following.

  • @schumachersbatman5094
    @schumachersbatman5094 3 years ago +3

    This is the best presentation on machine learning I've ever seen. So enjoyable.

  • @MarianaViale
    @MarianaViale 2 years ago +1

    Thank you for this great presentation and for sharing it with the general public.

  • @rendermanpro
    @rendermanpro 3 years ago +3

    The presentation quality is stunning

  • @icanfast
    @icanfast 3 years ago +1

    I was in awe to see how the underlying maths unifies DL techniques. I daresay the community NEEDS a similar but in-depth deconstruction of particular topics. There are a lot of knowledgeable people in the comments, someone please make it happen.

  • @sabawalid
    @sabawalid 3 years ago +1

    Great work... this has the chance to advance DL considerably, especially in detecting "intrinsic features", which will solve many existing problems.
    This is real science!!! Thumbs up!

  • @amirleshem6720
    @amirleshem6720 3 years ago +3

    Amazing. I'm speechless.

  • @smcg2490
    @smcg2490 2 years ago

    Wow. Just. Wow.
    The quality of this presentation is incredible. The animations enabled me to grasp concepts (almost) instantly. So incredibly helpful for my current paper. Thank you ever so much for the money, time, and effort it took to produce a video of such exceptional quality.

    • @MichaelBronsteinGDL
      @MichaelBronsteinGDL  2 years ago +1

      Thank you. Such comments are the best motivation to continue doing more!

  • @LukePluto
    @LukePluto 3 years ago +3

    This is amazing. I hope you make more videos like this!

  • @pianoconlatte
    @pianoconlatte 3 years ago +2

    Beautiful presentation. Got some ideas to test. Thank you.

  • @renegadephalanx
    @renegadephalanx 3 years ago +2

    Great, concise, and very explanatory presentation. Thank you very much for uploading this content.

  • @simpl51
    @simpl51 3 years ago +1

    Thank you so much for this. After Sunday lunch, idling through YouTube, I was dragged down an nD rabbit hole, through some maths and psychology history, to some hairy transformations of non-trivial representations into manageable ones, and how they can improve the lives of astronomers, computer gamers, and pharmacologists. How mapping foods and drugs could alleviate diseases. How computers could trawl through posts and comments to find the small subset of interesting ones. Even YouTube itself joined in, and removed adverts, Brexit rants, music, and chess blogs from my starter screen. What a great life you lead!

  • @tst3rt
    @tst3rt 3 years ago +1

    Thank you, Mikhail! One of the best presentations I have ever seen.

  • @HtHt-in7vt
    @HtHt-in7vt 3 years ago +1

    Well done! Clear and visual! Please more like that! Thanks a lot!

  • @vtrandal
    @vtrandal 6 months ago +1

    Absolutely fantastic!

  • @sandeepvk
    @sandeepvk 3 years ago +2

    I am quite excited about this field. Traditionally, innovation in biotech engineering has been hampered by ethical concerns. With this technique we can innovate quickly without any political ramifications. This is quite akin to the growth of the internet itself.

  • @pafloxyq
    @pafloxyq 3 years ago +4

    A very cool presentation. Just wanted to ask whether the scale transformation described at 09:31 has anything to do with renormalization group methods in physics?

  • @asnaeb2
    @asnaeb2 3 years ago +3

    Very nice animations make it a lot easier to follow. Thanks!

  • @tienphammanh4224
    @tienphammanh4224 3 years ago +2

    This talk is so amazing. I really like your interpretation of the mathematical formulas, very clear. Thanks for your great work. Hope you make more videos like this. One more time, thank you very much.

  • @rigidrobot
    @rigidrobot 3 years ago +1

    Is one of the possible domains of GDL going to be any instance of a dynamic system? For instance, not just proteins but interactions between molecular pathways? Or meme propagation networks?

  • @hrishikeshkhaladkar4963
    @hrishikeshkhaladkar4963 3 years ago +2

    This is amazing, sir. Hopefully this will motivate the student community to take mathematics very seriously.

  • @anastassiya8526
    @anastassiya8526 3 months ago

    It is an amazing work and presentation, thank you!

  • @Arkonis1
    @Arkonis1 3 years ago +1

    The introduction reminds me of talks by S. Mallat, where already in 2012 he was showing, on the one hand, the underlying symmetry invariance in his wavelet scattering system and, on the other hand, the analogy of this system with deep CNNs, concluding that deep learning architectures might learn symmetry group invariances, like learning the groups of cats, dogs, tables, etc. I very much like this group-theory approach, which is not often discussed in the literature so far.

    • @MichaelBronsteinGDL
      @MichaelBronsteinGDL  3 years ago +1

      Indeed, we cite Mallat in the book - his paper with Joan Bruna on scattering networks established that CNNs are not only shift-equivariant but also approximately equivariant to smooth deformations
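
      A minimal sketch of that shift-equivariance property (toy signal and kernel chosen for illustration, circular convolution so the identity is exact; not code from the talk or the book):

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.standard_normal(32)   # toy input signal
        w = rng.standard_normal(5)    # toy convolution kernel

        def circ_conv(x, w):
            # circular (periodic) convolution computed via the FFT
            return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(w, len(x))))

        shift = lambda v, s: np.roll(v, s)

        # convolving a shifted signal gives the shifted convolution output
        print(np.allclose(circ_conv(shift(x, 3), w), shift(circ_conv(x, w), 3)))  # True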

  • @jonathansum9084
    @jonathansum9084 3 years ago +2

    Thank you for uploading.
    I hope a future video will cover the coding part too.

  • @max477
    @max477 3 years ago +6

    I feel sad that I left this field for financial reasons. But I keep watching these videos.

    • @deeplearningpartnership
      @deeplearningpartnership 5 months ago

      Come back.

    • @max477
      @max477 5 months ago

      @@deeplearningpartnership I hope so. What is your background?

    • @deeplearningpartnership
      @deeplearningpartnership 5 months ago

      @@max477 Physics, finance and now AI.

    • @max477
      @max477 5 months ago +1

      @@deeplearningpartnership Great, you have a great background.

  • @thevirtualguy5074
    @thevirtualguy5074 3 years ago +2

    This is EPIC! Looking forward to more of this great material.

  • @luisleal4169
    @luisleal4169 2 years ago

    Wow! This is an excellent presentation. I guess your classes are something like this, and your students are very lucky to have you as a professor.

  • @khuongnguyenduy2156
    @khuongnguyenduy2156 3 years ago +3

    Thank you very much for your great talk!

  • @TheMrleo2107
    @TheMrleo2107 3 years ago +2

    A great presentation, professor. Reminds me of 3blue1brown.

  • @ashwindesilva4781
    @ashwindesilva4781 3 years ago +2

    Such an amazing lecture! Thank you very much :)

  • @krishnaaditya2086
    @krishnaaditya2086 3 years ago +4

    Awesome Thanks!

  • @LukeVilent
    @LukeVilent 3 years ago +1

    Oh yeah, RealSense, I've been working with them in image recognition, trying to build something similar to Complex YOLO, but in a more engineering way. However, the quality was not suited to the harsh conditions we were exposing the devices to (a pig stall). It was also the time when the first extensive neural network libraries became available, and I said that in a few years the technical calibration of the camera would simply be replaced by a neural network. And, broadly speaking, that's what drives my current research.

  • @peggy767
    @peggy767 3 years ago +2

    Such an inspiring presentation!

  • @Serg_A3
    @Serg_A3 3 years ago +2

    This is an amazing presentation 👍👍👍

  • @ΚώσταςΣταυρόπουλος-ω1γ

    I only got here from other videos on the topic. Nice presentation, one that assumes a bit more in the way of linear algebra and group theory fundamentals (though one really only needs the very basics of those fields, plus the basics of analysis, to follow the concepts in ML/DL), but it gets a bit more into actual details than other videos I have watched on the same topic, which I appreciated. If only there weren't so many self-promoting plugs throughout the video; it gave me the impression that the actual science in the video served a bit too much as an instrument for promoting your own work. I guess it might be a cultural trait of the field and this is how things work, but from what I gathered from the comments, active or former researchers in the field (I don't qualify as such) already knew not only you but also your work (which I have absolutely no doubt is indeed very noteworthy) prior to the video.
    Subscribed.

    • @MichaelBronsteinGDL
      @MichaelBronsteinGDL  3 years ago

      I think invited speakers are invited exactly because of their expertise, and they are expected to talk about their own work (hence the "self-promoting plugs", which are some of the first works in the field, done with students and collaborators). In the book we give a more balanced overview; for the video, however, I chose the works I relate to most.

  • @r0lisz
    @r0lisz 3 years ago +7

    Great talk! And outstanding visuals! How were they made?

    • @AndyTutify
      @AndyTutify 3 years ago

      You could make this in After Effects

  • @3laserbeam3
    @3laserbeam3 3 years ago +6

    Damn! That's awesome! As a side note, may I ask what was used to create the visuals and animations for this talk? They are gorgeous!

    • @MichaelBronsteinGDL
      @MichaelBronsteinGDL  3 years ago +22

      Adobe AE and two months of work by two professional designers

    • @3laserbeam3
      @3laserbeam3 3 years ago +1

      @@MichaelBronsteinGDL That would have been my guess, professional designers involved. Thanks!

    • @MrAceman82
      @MrAceman82 3 years ago +1

      @@MichaelBronsteinGDL Great animations, and thank you for your efforts to share this valuable knowledge.

  • @MDLI
    @MDLI 3 years ago +3

    Wow, you took it to the next level!
    Super informative and impressive.

  • @a_sobah
    @a_sobah 3 years ago +1

    Very interesting. I have always had this question: is there a way to define transformations in deep learning? This video shows how it's done, thank you. I'd like to see more on this topic, but it's hard for me to understand all the mathematics.

  • @chaoyang1945
    @chaoyang1945 3 years ago +3

    Great talk!!!!

  • @stimpacks
    @stimpacks 3 years ago +4

    OK, I now need a Hinton, Bengio, LeCun & Schmidhuber print. In an antique frame.

  • @deeplearningpartnership
    @deeplearningpartnership 5 months ago

    This was actually amazing.

  • @pratikdeshpande3258
    @pratikdeshpande3258 3 years ago +2

    Excellent generalisation of deep learning. I can see Linear Algebra, Graph theory, Group theory and many other math branches intersecting with physics, computer graphics and biology. This is truly a gem of ML.
    BTW, what's on the y-axis of the graph at 18:58?

    • @MichaelBronsteinGDL
      @MichaelBronsteinGDL  3 years ago +2

      The task is regressing the penalized water-octanol partition coefficient (logP) on molecules from the ZINC dataset. The y-axis shows the test mean absolute error.
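
      A minimal sketch of what that number means, assuming the ZINC loader from PyTorch Geometric rather than the exact setup used in the talk: the plotted value is the mean absolute error (MAE) between predicted and true penalized logP on the test split. A trivial mean-predictor baseline gives a reference point for reading the axis.

        import torch
        from torch_geometric.datasets import ZINC  # 12k-molecule subset when subset=True

        train = ZINC(root='data/ZINC', subset=True, split='train')
        test = ZINC(root='data/ZINC', subset=True, split='test')

        train_y = torch.cat([d.y for d in train])  # penalized logP targets
        test_y = torch.cat([d.y for d in test])

        # trivial baseline: predict the mean training logP for every test molecule
        mae = (test_y - train_y.mean()).abs().mean()
        print(f'mean-predictor test MAE: {mae:.3f}')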

  • @robinranabhat3125
    @robinranabhat3125 3 years ago +3

    Now this was enlightening!

  • @vi5hnupradeep
    @vi5hnupradeep 3 years ago +1

    Just wow 💯; this is inspiring me to learn more. Amazing presentation 💫

  • @EfraM83
    @EfraM83 3 years ago +2

    interesting.... I'm working on the same thing independently.... I believe this is ultimately the theory of everything.

  • @laurencevanhelsuwe3052
    @laurencevanhelsuwe3052 3 years ago +2

    My old math teacher would break out in a sweat of disbelief seeing that higher mathematics can be used to recognise cats!

  • @dawithailu3439
    @dawithailu3439 3 years ago

    I wasn't sure at first how you wanted to connect the different geometries with deep learning, but as the video went on, I could see what you meant. And now I am thinking about how it can be applied to an emotion classification project I'm interested in. Thank you for the general insight. It would be incredibly awesome if you could attach some git repositories.

  • @Hassan-se3vx
    @Hassan-se3vx 3 years ago +1

    Very nice presentation

  • @fl2024steve
    @fl2024steve 3 years ago +2

    Imagine how much time the presenter has spent preparing this presentation.

  • @МихаилМакаркин-ф9э
    @МихаилМакаркин-ф9э 3 years ago +1

    Thanks for the video. I wanted to know more about this view of machine learning.

  • @florianro.9185
    @florianro.9185 2 years ago

    Absolutely great presentation! What software was used to create these animations? :) Thanks

  • @georgeb8637
    @georgeb8637 10 months ago

    28:38 - 3D sensor to capture face - 10 years ago - Intel integrated 3D sensor into their product
    30:17 - we don’t need a 3D sensor now - we can use 2D video + geometric decoder that reconstructs a 3D shape
    36:50 - tea, cabbage, celery, sage

  • @madhavpr
    @madhavpr 3 years ago +1

    This is one of the most beautiful presentations I have ever seen in my life. I'll be honest here- I did not understand much, but I'm truly inspired to learn the material. Professor Bronstein, would a deep learning / signal processing background be enough to pick up this material?

    • @MichaelBronsteinGDL
      @MichaelBronsteinGDL  3 years ago +2

      I would give a biased response, but probably our forthcoming book, which we are currently writing, is a good place to start (a preview is available here: arxiv.org/abs/2104.13478)

  • @amirhosseindaraie5622
    @amirhosseindaraie5622 3 years ago +2

    This was wonderful!!!!!!!

  • @adrianharo6586
    @adrianharo6586 3 years ago +1

    Where can I find more information on the project that helps classify the molecules in plant-based foods?

    • @MichaelBronsteinGDL
      @MichaelBronsteinGDL  3 years ago +1

      Here is a blog post: towardsdatascience.com/hyperfoods-9582e5d9a8e4?sk=d20fe73c7d9ecb62dd3d391a44d4ef7f

    • @priyamdey3298
      @priyamdey3298 3 years ago +2

      My mind was blown when I saw that even food preparation can be represented as a computational graph, with cooking transformations as edges, and optimized to maximally preserve the anti-cancer effect 🙌.

  • @dihuang9849
    @dihuang9849 3 years ago +1

    Awesome!

  • @JousefM
    @JousefM 3 years ago +1

    Wow, that's so dope!!! Thanks for this great production quality and delivery Michael!
    Btw, would love to have you on my podcast talking about GDL!

  • @МихаилГольт-ж7т
    @МихаилГольт-ж7т 3 years ago +2

    This is really amazing!

  • @harriehausenman8623
    @harriehausenman8623 3 years ago +2

    Thank you for the great video.
    I wonder what Stephen Wolfram thinks about this ;-)

  • @kirekadan
    @kirekadan 3 years ago

    Great presentation. Can you tell me what software you use to animate the graphs?

  • @ElaprendizdeSalomon
    @ElaprendizdeSalomon 1 year ago

    wonderful work.

  • @haitham973
    @haitham973 3 years ago +2

    Super cool talk!!

  • @xbronn
    @xbronn 3 years ago +1

    omfg, wow. what a presentation!

  • @imalive404
    @imalive404 3 years ago +1

    One of the takeaways is that full-fledged AR and VR products are going to be launched soon. The metaverse is here.

  • @jungjunk1662
    @jungjunk1662 3 years ago

    This presentation is as great as the talk itself. What software did you use to create the presentation graphics?

    • @MichaelBronsteinGDL
      @MichaelBronsteinGDL  3 years ago +1

      It was done by professional designers: Photoshop/Illustrator/After Effects.

  • @LucasRolimm
    @LucasRolimm 3 years ago +1

    Masterpiece!

  • @TheAIEpiphany
    @TheAIEpiphany 2 years ago

    It's the year 2030. MLPs are SOTA on all domains imaginable to the human mind.
    MLP AGI whispers: Michael didn't mention me in his ICLR keynote.
    Paperclips.

  • @roomo7time
    @roomo7time 2 years ago

    absolute gold

  • @VictorBanerjeeF
    @VictorBanerjeeF 3 years ago +1

    Love at first sight... ❤️

  • @fredxu9826
    @fredxu9826 3 years ago

    How was this presentation created (what tools)? Would love to follow the path of Dr. Bronstein and start creating presentations like this one.

    • @MichaelBronsteinGDL
      @MichaelBronsteinGDL  3 years ago +2

      That was a (titanic) piece of work by Jakub Makowski with Adobe AE. Nearly two months.

    • @fredxu9826
      @fredxu9826 3 years ago +1

      @@MichaelBronsteinGDL Wow, I guess I will take on the art side of the project once my theory is worth the effort :)

  • @Александртень-ф4т
    @Александртень-ф4т 3 years ago

    Is Michael Bronstein perhaps a Russian speaker? Is your surname somehow connected to the fact that T9 suggests it? And what do you think about Schelling's model of segregation?

    • @MichaelBronsteinGDL
      @MichaelBronsteinGDL  3 years ago

      Yes, a Russian speaker (born in Russia but grew up in Israel). I have never dealt with that model.

  • @lennylenny7320
    @lennylenny7320 3 years ago +1

    awesome!!

  • @outruller
    @outruller 3 years ago +1

    Oh. My. God.
    It's a shame that I am too dumb to deeply understand everything that was said; nevertheless, even what I did get is astonishingly fascinating!
    I so regret not studying harder in my university days; maybe I would have had a chance to work on something this impactful and motivating.

  • @louerleseigneur4532
    @louerleseigneur4532 3 years ago +1

    Thanks

  • @Alexander_Sannikov
    @Alexander_Sannikov 3 years ago +1

    It is indeed a very high-quality, high-effort presentation. But what really annoys me about the subject is that deep learning people like to acknowledge the weaknesses of their neural networks only when they're attempting to solve them. And when they are not, they like to pretend those weaknesses don't exist and their approach is flawless.
    Take the graph isomorphism problem, for example: it is a major problem in representing a graph in any linearized fashion, but I have read many papers that just go on boasting about how well their blabla-net performs instead of talking about these limitations. A lot of DL research seems to be hype-driven rather than problem-driven.

    • @MichaelBronsteinGDL
      @MichaelBronsteinGDL  3 years ago

      I agree to some extent, and here is one example related to graph isomorphism: it's easy to talk about expressivity, much harder to show any results about generalization power. To the best of my knowledge, very little is currently known about how GNNs generalize.
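
      A minimal sketch of the expressivity side mentioned here (a standard textbook example, not from the talk): 1-WL colour refinement, and hence plain message-passing GNNs, cannot distinguish two disjoint triangles from a single 6-cycle, since both graphs are 2-regular.

        from collections import Counter

        def wl_histogram(edges, n, rounds=3):
            # 1-Weisfeiler-Leman colour refinement; returns the final colour histogram
            adj = {v: [] for v in range(n)}
            for u, v in edges:
                adj[u].append(v)
                adj[v].append(u)
            colours = {v: 0 for v in range(n)}  # uniform initial colour
            for _ in range(rounds):
                # new colour = (own colour, multiset of neighbour colours), relabelled compactly
                sigs = {v: (colours[v], tuple(sorted(colours[u] for u in adj[v]))) for v in range(n)}
                relabel = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
                colours = {v: relabel[sigs[v]] for v in range(n)}
            return Counter(colours.values())

        two_triangles = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
        six_cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]

        # True: identical colour histograms, so 1-WL (and message passing) cannot tell them apart
        print(wl_histogram(two_triangles, 6) == wl_histogram(six_cycle, 6))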

  • @RuoyangYao
    @RuoyangYao 2 years ago

    This is amazing.

  • @chrissgouros7282
    @chrissgouros7282 3 years ago +1

    AMAZING!!

  • @TriPham-xd9wk
    @TriPham-xd9wk 3 years ago

    A time base from data to force altering leads to transformation and amorphism. Like water, which remains water at different temperatures, it survives all economic, political, and religious conditions and remains a kind, compassionate, and creatively wise human.

  • @federicocarrone512
    @federicocarrone512 3 years ago +1

    this is amazing