The presentation quality, content coverage, and animation here are simply marvelous! This has certainly set a gold standard for future talks. Thanks a lot for putting this together.
Couldn’t agree more. Depth, breadth and effectiveness of communication are spot on.
What a great keynote, both content-wise and in terms of the visuals. 👏 A good side-product of virtual conferences is certainly the production value of scientific talks going up.
This approach to Geometric Neural Nets is like a potential Nobel-prize-winning grand unification theory (GUT), unifying all the neural net architectures: ANNs, CNNs, RNNs, graph NNs, message-passing neural nets (MP-NNs), and Transformers (attention neural nets). Wonderful video!! Just as with M-theory, when too much innovation accumulates over time, a simplifier needs to be born who can merge and unify all of it into a single, more general-purpose abstraction.
This is literally the best presentation about machine learning I have ever seen. Thank you for your marvelous work!
It is very intriguing research and graphically well presented.
I wonder what relationships there are between this unifying geometric perspective of deep learning and random finite sets (stochastic geometry, Poisson point processes), which are now all the rage in the multi-object tracking community.
This presentation is also slightly infuriating in that it goes over very deep concepts very fast. Regardless though, amazing work!
The incredible Michael Bronstein is on YouTube!! This is awesome
It takes a semester for us to comprehend this marathon talk, Sir. Great visionary talk. Thank you Sir
As a computer science student preparing for my ML course exam, I was just blown away by how all machine learning algorithms are related. Beautiful, stunning work.
Presentation mastery! You managed to boil things down to the most salient intuitions, all the while covering such a wide breadth of topics! This has me amped to dive into your papers (I'm in fMRI neuroscience, where graph-based predictive modelling has been mostly ineffectual thus far).
Very good coverage. Thank you, Prof. Bronstein
Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges is of great importance for my master's degree. Great presentation; it is an honor.
Amazing stuff! Hope we can interview Prof. Bronstein on our show soon 😀
would be honored
Incredible, really enjoyed this keynote. Agree, one of the best presentations on ML I’ve seen yet. I’m really happy to see the emphasis on clarity to a general audience with such well-crafted illustrations of concepts.
Thank you! Amazing presentation!!! I giggled a little when seeing 2:40
This should be a gold standard of keynote talks. Amazing! 👏
I was amazed by your presentation, good job. But what amazed me more was that I was able to understand in detail everything you explained. 35 years ago I studied physics and mathematics and learned all aspects of what you cover in this video, without ever realizing it could be applied to AI as well. Like you, I was confused about the why of convolution; thanks for showing me the light!
This inspires me to continue my education. My brain is itching to learn more!
This was deeply thought provoking and wonderfully inspiring.
I wish I could understand all the details, but my education only takes me so far in understanding the concepts you're going over. I am a newbie ML enthusiast. I really do appreciate the animation; it makes everything easy to follow.
Absolutely Amazing Prof Bronstein!
Thank you for such an amazing piece of content.
Very interesting perspectives on deep learning, and seamless transitions from one concept to another. Truly a masterpiece of scientific presentation. Thank you so much for posting it.
I must admit, I came to this link accidentally. The presentation is a masterpiece. Keep it going. Following.
This is the best presentation on machine learning I've ever seen. So enjoyable.
Thank you for this great presentation and for sharing it with the common public.
Presentation quality is stunning
I was in awe to see how the underlying maths unifies DL techniques. I daresay the community NEEDS a similar but in-depth deconstruction of particular topics. There are a lot of knowledgeable people in the comments; someone please make it happen
Great work... this has the chance to advance DL considerably, especially detecting "intrinsic features", which will solve many existing problems.
This is real science!!! Thumbs up!
Amazing. I'm speechless.
Wow. Just. Wow.
The quality of this presentation is incredible. The animations enabled me to grasp concepts (almost) instantly. So incredibly helpful for my current paper. Thank you ever so much for the money, time, and effort it took to produce a video of such exceptional quality.
Thank you. Such comments are the best motivation to continue doing more!
This is amazing. I hope you make more videos like this again!
Beautiful presentation. Got some ideas to test. Thank you.
Great, concise, and very explanatory presentation. Thank you very much for uploading this content.
Thank you so much for this. After Sunday lunch, idling through YouTube, I was dragged down an n-D rabbit hole, through some maths and psychology history, to some hairy transformations of non-trivial representations into manageable ones, and how they can improve the lives of astronomers, computer gamers, and pharmacologists. How mapping foods and drugs could alleviate diseases. How computers could trawl through posts and comments to find a small subset of interesting ones. Even YouTube itself joined in, and removed adverts, Brexit rants, music, and chess blogs from my starter screen. What a great life you lead!
Thank you, Mikhail! One of the best presentations I have ever seen.
Well done! Clear and visual! Please more like that! Thanks a lot!
Absolutely fantastic!
I am quite excited about this field. Traditionally, innovation in biotech engineering was hampered by ethical concerns. With this technique we can innovate quickly without any political ramifications. This is quite akin to the growth of the internet itself
A very cool presentation. I just wanted to ask if the scale transformation described at 09:31 has anything to do with renormalization group methods in physics?
I don’t see an immediate connection
I get what you say; good point IMO
Very nice animations make it a lot easier to follow. Thanks!
This talk is so amazing. I really like how clearly you interpret the mathematical formulas. Thanks for your great work. Hope you make more videos like this. Once more, thank you very much.
Could one of the possible domains of GDL be any instance of a dynamical system? For instance, not just proteins but interactions between molecular pathways? Or meme propagation networks?
This is amazing, sir. Hopefully this will motivate the student community to take mathematics very seriously
It is amazing work and an amazing presentation, thank you!
The introduction reminds me of talks by S. Mallat, where already in 2012 he was showing, on the one hand, the underlying symmetry invariance of his wavelet scattering system and, on the other hand, the analogy between this system and deep CNNs, concluding that deep learning architectures might learn symmetry-group invariances, like learning the groups of cats, dogs, tables, etc. I like this group-theoretic approach very much; it is not often discussed in the literature so far
Indeed we cite Mallat in the book - his paper with Joan Bruna on scattering networks established that CNNs are not only shift-equivariant but also approximately equivariant to smooth deformations.
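As a minimal numpy sketch of the shift-equivariance property discussed here (my own illustration under the usual circular-convolution assumption, not code from the talk or the book): convolving a shifted signal gives the same result as shifting the convolved signal.

```python
import numpy as np

def circular_conv(x, w):
    """Circular 1-D convolution via the FFT convolution theorem."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(w, n=len(x))))

rng = np.random.default_rng(0)
x = rng.normal(size=32)   # input signal
w = rng.normal(size=5)    # filter
shift = 7

lhs = circular_conv(np.roll(x, shift), w)  # conv(shift(x))
rhs = np.roll(circular_conv(x, w), shift)  # shift(conv(x))
print(np.allclose(lhs, rhs))               # True: convolution commutes with shifts
```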
Thank you for uploading.
I hope it will talk about the coding part too.
I feel sad that I left this field for financial reasons. But I keep watching these videos
Come back.
@@deeplearningpartnership I hope so. What is your background?
@@max477 Physics, finance and now AI.
@@deeplearningpartnership Great, you have a great background
This is EPIC! Looking forward to more of this great material.
Wow! This is an excellent presentation. I guess your classes are something like this; your students are very lucky to have you as a professor.
Thank you very much for your great talk!
A great presentation professor. Reminds me of 3blue1brown
Such an amazing lecture! Thank you very much :)
Awesome Thanks!
Oh yeah, RealSense - I've been working with them in image recognition, trying to build something similar to Complex-YOLO, but in a more engineering-driven way. However, the quality was not suited for the harsh conditions we were exposing the devices to (a pig stall). It was also the time when the first extensive neural network libraries became available, and I said that in a few years the technical calibration of the camera would simply be replaced by a neural network. Broadly speaking, that's what drives my current research.
Such an inspiring presentation!
This is an amazing presentation 👍👍👍
Only got here from other videos on the topic. Nice presentation, one that assumes a bit more linear algebra and group theory fundamentals (though one really only needs the very basics of those fields, plus basic analysis, to follow the concepts in ML/DL), but it gets more into actual details than other videos I have watched on the same topic, which I appreciated. If only there weren't so many self-promoting plugs throughout the video; they gave me the impression that the actual science in the video served a bit too much as an instrument for promoting your own work. I guess it might be a cultural trait of the field and this is how things work, and from what I gathered from the comments, active or former researchers in the field (I don't count as such) already knew not only you but also your work (which I have no doubt is indeed very noteworthy) prior to the video.
Subscribed.
I think invited speakers are invited exactly because of their expertise, and they are expected to talk about their own work (hence the "self-promoting plugs", which are some of the first works in the field that we did with students and collaborators). In the book we show a more balanced overview; however, for the video I chose the works I relate to more.
Great talk! And outstanding visuals! How were they made?
You could make this in After Effects
Damn! That's awesome! As a side note, may I ask what was used to create the visuals and animations for this talk? They are gorgeous!
Adobe AE and two months of work by two professional designers
@@MichaelBronsteinGDL That would have been my guess, professional designers involved. Thanks!
@@MichaelBronsteinGDL Great animations, and thank you for your efforts to share this valuable knowledge.
Wow, you took it to the next level!
Super informative and impressive.
Very interesting. I have always had this question: is there a way to handle arbitrary transformations in deep learning? This video shows how it's done, thank you. I would like to see more on this topic, but it's hard for me to understand all the mathematics.
Great talk!!!!
OK, I now need a Hinton, Bengio, LeCun & Schmidhuber print. In an antique frame.
This was actually amazing.
Excellent generalisation of deep learning. I can see linear algebra, graph theory, group theory, and many other math branches intersecting with physics, computer graphics, and biology. This is truly a gem of ML.
BTW, what's on the y-axis of this graph at 18:58 ?
The task is regressing the penalized water-octanol partition coefficient (logP) on molecules from the ZINC dataset. The y-axis shows the test mean absolute error (MAE).
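For readers unfamiliar with the metric, here is a minimal sketch of how that test MAE is computed (my own illustration; the values below are hypothetical, not results from the talk):

```python
import numpy as np

def test_mae(y_true, y_pred):
    """Mean absolute error between true and predicted penalized logP values."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred)))

# Hypothetical values for a handful of molecules, purely for illustration.
y_true = [2.31, -0.47, 1.05, 3.12]  # ground-truth penalized logP
y_pred = [2.10, -0.30, 1.40, 2.95]  # model predictions
print(f"test MAE: {test_mae(y_true, y_pred):.3f}")  # average |error| per molecule
```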
Now this was enlightening!
Just wow 💯; this is inspiring me to learn more. Amazing presentation 💫
Interesting... I'm working on the same thing independently... I believe this is ultimately the theory of everything.
My old math teacher would break out in a sweat of disbelief seeing that higher mathematics can be used to recognise cats!
I wasn't sure at first how you wanted to connect the different geometries with deep learning, but as the video went on, I could see what you meant. And now I am thinking about how it can be applied to an emotion classification project I'm interested in. Thank you for the general insight. It would be incredibly awesome if you could attach some Git repositories.
Very nice presentation
Imagine how much time the presenter has spent preparing this presentation.
Thanks for the video. I wanted to know more about this view of machine learning.
Check our proto-book on which the talk is based: arxiv.org/abs/2104.13478
@@MichaelBronsteinGDL thanks
Absolutely great presentation! What software was used to create these animations? :) Thanks
28:38 - 3D sensor to capture face - 10 years ago - Intel integrated 3D sensor into their product
30:17 - we don’t need a 3D sensor now - we can use 2D video + geometric decoder that reconstructs a 3D shape
36:50 - tea, cabbage, celery, sage
This is one of the most beautiful presentations I have ever seen in my life. I'll be honest here: I did not understand much, but I'm truly inspired to learn the material. Professor Bronstein, would a deep learning / signal processing background be enough to pick up this material?
I would give a biased response, but probably our forthcoming book, which we are currently writing, is the place to start (a preview is available here: arxiv.org/abs/2104.13478)
This was wonderful!!!!!!!
Where can I find more information on the project that classifies the molecules in plant-based foods?
Here is a blog post: towardsdatascience.com/hyperfoods-9582e5d9a8e4?sk=d20fe73c7d9ecb62dd3d391a44d4ef7f
My mind was blown when I saw that even food preparation can be represented as a computational graph, with cooking transformations as edges, and optimized to maximally preserve the anti-cancer effect 🙌.
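A toy sketch of that idea (hypothetical retention numbers of my own, not the actual HyperFoods pipeline): model preparation as a graph whose nodes are food states and whose edges are cooking steps, each retaining some fraction of a putative anti-cancer compound, then search for the route that preserves the most of it.

```python
# Edges: state -> [(next_state, operation, fraction of compound retained)].
# All numbers are made up for illustration.
retention_graph = {
    "raw":          [("chopped", "chop", 0.98), ("whole-boiled", "boil", 0.55)],
    "chopped":      [("steamed", "steam", 0.90), ("fried", "fry", 0.70)],
    "whole-boiled": [("ready", "plate", 1.00)],
    "steamed":      [("ready", "plate", 1.00)],
    "fried":        [("ready", "plate", 1.00)],
}

def best_route(state, kept=1.0, steps=()):
    """Depth-first search for the route to 'ready' that maximizes retention."""
    if state == "ready":
        return kept, steps
    best = (0.0, ())
    for nxt, op, r in retention_graph.get(state, []):
        best = max(best, best_route(nxt, kept * r, steps + (op,)))
    return best

kept, steps = best_route("raw")
print(f"best route: {' -> '.join(steps)} keeps {kept:.0%}")
# chop -> steam -> plate keeps 88% (vs. 55% for boiling whole)
```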
Awesome!
Wow, that's so dope!!! Thanks for this great production quality and delivery Michael!
Btw, would love to have you on my podcast talking about GDL!
This is really amazing!
Thank you for the great video.
I wonder what Stephen Wolfram thinks about this ;-)
Great presentation. Can you tell me what software you used to animate the graphs?
After Effects
wonderful work.
Super cool talk!!
omfg, wow. what a presentation!
One of the takeaways is that full-fledged AR and VR products are going to be launched soon. The metaverse is here
This presentation is as great as the talk itself. What software did you use to create the presentation graphics?
It was done by professional designers: Photoshop/Illustrator/After Effects
Masterpiece!
It's year 2030. MLPs are SOTA on all domains imaginable to human mind.
MLP AGI whispers: Michael didn't mention me in his ICLR keynote.
Paperclips.
absolute gold
Love at first sight... ❤️
How was this presentation created (which tools)? I would love to follow the path of Dr. Bronstein and start creating presentations like this one.
That was the (titanic) work of Jakub Makowski with Adobe AE. Nearly two months.
@@MichaelBronsteinGDL Wow, I guess I will take on the art side of the project once my theory is worth the effort :)
Mikhail Bronstein is probably a Russian speaker? Is your surname somehow related to the fact that T9 suggests it? And what do you think of Schelling's model of segregation?
Yes, a Russian speaker (born in Russia but raised in Israel). I have never dealt with that model.
awesome!!
Oh. My. God.
It's a shame that I am too dumb to deeply understand everything that was said; nevertheless, even what I did get is astonishingly fascinating!
I so regret not studying harder in my university days; maybe I would have had a chance to work on something this impactful and motivating.
Thanks
It is indeed a very high-quality, high-effort presentation. But what really annoys me about the subject is that deep learning people like to acknowledge the weaknesses of their neural networks only when they're attempting to solve them. And when they are not, they like to pretend those weaknesses don't exist and their approach is flawless.
Take the graph isomorphism problem, for example: it is a major problem in representing a graph in any linearized fashion, but I have read many papers that just go on boasting about how well their blabla-net performs instead of talking about these limitations. A lot of DL research seems to be hype-driven rather than problem-driven.
I agree to some extent, and here is one example related to graph isomorphism: it's easy to talk about expressivity, much harder to show any results about generalization power. To the best of my knowledge, very little is currently known about how GNNs generalize.
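To make the expressivity side of this exchange concrete, here is a minimal sketch (my own illustration, not code from the talk) of the 1-Weisfeiler-Leman (1-WL) color refinement test that upper-bounds what message-passing GNNs can distinguish: a 6-cycle and two disjoint triangles get identical 1-WL color histograms even though they are not isomorphic.

```python
from collections import Counter

def wl_colors(adj, iters=3):
    """1-WL color refinement on a graph given as an adjacency list."""
    colors = {v: 0 for v in adj}  # start with uniform colors
    for _ in range(iters):
        # New color = (own color, sorted multiset of neighbor colors).
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        colors = {v: relabel[sigs[v]] for v in adj}
    return Counter(colors.values())  # histogram of final colors

cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}  # one 6-cycle
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
             3: [4, 5], 4: [3, 5], 5: [3, 4]}               # two disjoint triangles
print(wl_colors(cycle6) == wl_colors(triangles))  # True: 1-WL cannot tell them apart
```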
This is amazing.
AMAZING!!
Over time, data is forced to alter, leading to transformation and a kind of metamorphism. Like water, which remains water at different temperatures, one survives all economic, political, and religious conditions and remains a kind, compassionate, and creative wise human
This is amazing