- Videos: 55
- Views: 163,740
Ricardo Vinuesa
Sweden
Joined 23 Jan 2021
News on artificial intelligence (AI), fluid mechanics, computational fluid dynamics (CFD) and sustainability from the VinuesaLab (www.vinuesalab.com/) at KTH Royal Institute of Technology.
More technical details in my Google Scholar: scholar.google.com/citations?user=UbyF8_oAAAAJ&hl=en
More news on my Twitter account: mobile. ricardovinuesa
Introduction to machine learning, Part 11: Robust principal-component analysis (RPCA)
And we conclude our series on machine learning!! After discussing optimal sensor placement and QR pivoting, today we talk about robust PCA, with the aim of increasing robustness against noisy/faulty data.
Additional information in the excellent book by Steve Brunton and Nathan Kutz: www.databookuw...
I also acknowledge Scott Dawson for his input on this material.
Views: 108
Videos
Introduction to machine learning, Part 10: QR pivoting for optimal sensor placement
Views: 207 • 21 hours ago
We continue our series introducing machine learning!! After discussing optimal sensor placement, today we explain an algorithm for finding the best sensor locations: QR pivoting!! Additional information in the excellent book by Steve Brunton and Nathan Kutz: www.databookuw... I also acknowledge Scott Dawson for his input on this material.
Introduction to machine learning, Part 9: Optimal sensor placement
Views: 298 • 14 days ago
Back to our series introducing machine learning, and today we explain the theory behind optimal sensor placement!! Additional information in the excellent book by Steve Brunton and Nathan Kutz: www.databookuw... I also acknowledge Scott Dawson for his input on this material.
Entropy and probability theory
Views: 401 • 21 days ago
In this new series on the fundamentals of probability and information theory, Marcial Sanchis-Agudo provides an introduction to entropy and the basic mathematics.
Diffusion models for optimal sensor placement in cities
Views: 601 • 1 month ago
This work was led by Abhijeet Vishwasrao and carried out at CTR in Stanford (together with the groups of Beverley McKeon and Cathrine Gorle). Here we use diffusion models to create a reconstruction framework, which together with our explainable deep learning capabilities, produce a method for optimal sensor placement in cities!! ✅ Explainable deep learning framework: www.nature.com/articles/s41...
Classically studied coherent structures only paint a partial picture of wall-bounded turbulence
Views: 593 • 1 month ago
In this video we show our recent results using explainable deep learning to identify the most important coherent structures in wall-bounded turbulence! We find that the classical structures (Reynolds-stress events, streaks and vortices) only paint a partial picture of wall-bounded turbulence! ✅ Paper: arxiv.org/pdf/2410.23189 ✅ Original method: www.nature.com/articles/s41467-024-47954-6
Keynote lecture at ParCFD 2024
Views: 489 • 2 months ago
We discuss our latest developments on: ✅ High-fidelity simulations of 3D turbulent wings, and flow physics ✅ SHAP studies to identify new coherent structures in turbulence ✅ Deep reinforcement learning for turbulence control Recording of my keynote lecture at the 35th Parallel CFD International Conference 2024! Thanks for the invitation, more information here: www.parcfd2024.org/en
Energy transfer in turbulent boundary layers
Views: 989 • 3 months ago
In this video, Rahul Deshpande explains our most recent article in the Journal of Fluid Mechanics, where we analyze the streamwise velocity fluctuation profile in turbulent boundary layers. We show that the emergence of an outer peak is associated with the predominance of Q4 events, i.e. sweeps! This has very important implications for fundamental wall-bounded turbulence, including possible expl...
Introduction to machine learning, Part 7: Sensor placement
Views: 885 • 3 months ago
In this new episode of our compressed-sensing series we discuss the math behind the sensor-placement strategies! Additional information in the excellent book by Steve Brunton and Nathan Kutz: www.databookuw... I also acknowledge Scott Dawson for his input on this material.
Introduction to machine learning, Part 8: Regularization
Views: 746 • 3 months ago
We continue with our compressed-sensing series, and today we dig deeper into regularization and sparsity! Additional information in the excellent book by Steve Brunton and Nathan Kutz: www.databookuw.com/ I also acknowledge Scott Dawson for his input on this material.
Easy attention: a new attention mechanism for transformers!!
Views: 942 • 3 months ago
In this video, Marcial Sanchis-Agudo explains our new attention method for transformers, the easy-attention framework, which is particularly designed for chaotic systems. Full details and information in this article: arxiv.org/abs/2308.12874
Introduction to machine learning, Part 6: Sparsity
Views: 982 • 4 months ago
We start a new series on machine-learning methods, this time with focus on compression and optimal sensor placement. Do not miss our introductory video!! Additional information in the excellent book by Steve Brunton and Nathan Kutz: www.databookuw.com/ I also acknowledge Scott Dawson for his input on this material.
Transformers for chaotic systems and fluid mechanics
Views: 1.4K • 4 months ago
Marcial Sanchis-Agudo explains the fundamentals of transformers, and how they can be used in the context of complex physics problems such as chaotic systems and fluid-mechanics applications! More details: ✅ www.nature.com/articles/s41467-024-45578-4 ✅ arxiv.org/abs/2308.12874
Large language models for problems in Physics
Views: 2K • 7 months ago
In this video by Marcial Sanchis-Agudo, we describe the self-attention mechanism which is the basis of transformers, and how it can be used for problems in physics. In particular, for temporal predictions in fluid mechanics. Check out some of our work on transformers!! ✅ www.nature.com/articles/s41467-024-45578-4 ✅ arxiv.org/abs/2308.12874
Introduction to machine learning, Part 5: The proper-orthogonal decomposition (POD)
Views: 2.4K • 7 months ago
‼️There is a typo at minute 21:48! The equation for the final POD modes should read: Phi=M^(-1/2)*Phi^hat To conclude this first introduction series to ML, we derive and describe in detail a widely used method for analysis of high-dimensional fluid-flow systems: the proper-orthogonal decomposition (POD). Additional information in the excellent book by Steve Brunton and Nathan Kutz: www.databook...
Introduction to machine learning, Part 4: The Method of Snapshots
Views: 2K • 8 months ago
Deep reinforcement learning for flow control in aeronautics
Views: 1.9K • 8 months ago
Introduction to machine learning, Part 3: Truncated SVD and eigendecomposition
Views: 2.4K • 8 months ago
Classifying methods in artificial intelligence
Views: 2K • 8 months ago
Introduction to machine learning, Part 2: The economy singular-value decomposition (SVD)
Views: 2.3K • 9 months ago
Introduction to machine learning, Part 1: The singular-value decomposition (SVD)
Views: 3.9K • 9 months ago
How is machine learning improving computational fluid dynamics?
Views: 8K • 9 months ago
Finding completely new turbulent structures with explainable AI
Views: 2.3K • 10 months ago
Explainable AI to study structures in turbulence
Views: 2.8K • 1 year ago
Multi-agent reinforcement learning (MARL) versus single-agent RL (SARL) for flow control
Views: 4K • 1 year ago
Predicting turbulence with transformers
Views: 3.8K • 1 year ago
Keynote lecture at ETC18 in Valencia, Spain
Views: 2.4K • 1 year ago
The building blocks of turbulence: coherent structures
Views: 4.6K • 1 year ago
Thanks for the series!
Thanks for following!!
More on data-driven methods for physical systems: ruclips.net/video/rcBp-TIs_-0/видео.htmlsi=u2j2pTiZd32qob9i
✅ For an introduction to SVD: ruclips.net/video/2WJ4Zffbqek/видео.htmlsi=2Kiwx4H0h4c2DzXC
Thanks for the video. I have a few questions: 1) Is Y a vector px1? 2) is C a matrix pxn? 3) is X a vector nx1? 4) Are columns of C matrix associated to different places along a line or are they associated to different times (e.g. if X is a signal)? 5) What particular solution among the infinite to the power of (n-p) solutions would I obtain in reconstructing X from Y if I did multiply C+ times Y? 6) what is the optimization problem we are trying to solve when we have the reading Y and the known matrix C? (what are we trying to minimize, L1(reconstructed X)? what space are we searching over?) 7) is the optimal basis (i.e. the Psi associated with the sparsed s) obtained by solving the optimization problem of minimizing L1(s) over the whole spaces of Psis? 8) could you remind me in a nutshell what a "convex optimization problem" is? Thank you for your time
Thanks for your questions! Here are the answers: 1) Yes, exactly 2) Yes, indeed 3) Yes, correct 4) Each row represents a sensor (that’s why there are p rows). The columns are the spatial coordinates, and each row has a 1 at the column indicating the spatial coordinate where the sensor is located. 5) The solution that minimizes the L2 norm of x 6) We want to minimize the residual y-Cx, and this can be formulated in different ways depending on the regularization we use. Please have a look at the following video (part 8 of the series), I think it will clarify things quite a bit: ruclips.net/video/frDS_8VzkEI/видео.htmlsi=_7PRYXT7K2xgQ97p 7) Not exactly, this optimal basis is chosen based on the properties of the problem at hand, e.g. Fourier, POD, etc. 8) In a convex optimization problem, the objective and constraints are convex functions, see e.g.: en.wikipedia.org/wiki/Convex_optimization
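A quick numerical check of answers 4 and 5: for a sensor-selection matrix C (one 1 per row), multiplying C+ by y returns the measurement-consistent x with the smallest L2 norm. A minimal sketch, with illustrative dimensions and a random field (none of this is from the video itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 10  # state dimension n, number of sensors p (p < n)

# C picks p entries of x: each row has a single 1 at that sensor's location
idx = rng.choice(n, size=p, replace=False)
C = np.zeros((p, n))
C[np.arange(p), idx] = 1.0

x_true = rng.standard_normal(n)
y = C @ x_true  # sensor readings (p values)

# Among the infinitely many x satisfying Cx = y, the pseudoinverse
# reconstruction C^+ y is the one with minimum L2 norm
x_rec = np.linalg.pinv(C) @ y

assert np.allclose(C @ x_rec, y)  # consistent with the measurements
# For this C, the reconstruction keeps the measured values and puts zeros
# elsewhere, so its norm cannot exceed that of the true field
assert np.linalg.norm(x_rec) <= np.linalg.norm(x_true) + 1e-9
```

Replacing the L2 criterion with an L1 penalty (question 6) turns this into the sparse-recovery problem discussed in part 8 of the series.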
Thanks for the video. I am a bit confused and I have a few questions (sorry if they are too many for a comment on youtube): 1) is X a matrix nxm, index i going from 1 to n and index j from 1 to m, containing, in the example in the video, only 1s and 0s? 2) are the basis elements e sub ij matrices the same size as capital X? 3) is eij a null matrix with just one pixel (at location i j) set to 1 ("on")? 4) is the component xij just a scalar having 0 and 1 as possible values? 5) can s sub k take real numbers or just binary 0 and 1? 6) are v sub k matrices the same size as capital X? 7) what is the relation (if any) between (image X , basis elements vk) and (X, phi) of POD? How are vk obtained from X? 8) can the vk obtained from a "reference" picture A be used as a basis to represent other pictures from A (a different picture B would have a different set of sks than A)? 9) Is the point of finding and storing the whole set of basis vk instead of storing the whole image that afterwards a picture can be stored just with a bunch of sks? 10) is the method useful "for similar images only" (e.g. sunset pictures with a set of vks of sunset pictures, dog pictures with a set of vks of dog pictures etc.) or are there general vks, common to dogs and sunsets, valid for "a general image that is not noise"? Thanks for your time
1) Yes, exactly 2) Yes, in this case this term indicates the Cartesian distribution of data in the matrix 3) Not exactly, eij here allows you to place each entry at the right location. Technically it can be all ones, just the ij indices are important, since those tell you where the entry goes in the matrix X 4) Yes in this case 5) Technically it could take other values depending on what the vk looks like. For instance, the vk may allow me to represent the data in polar coordinates, then the sk would be the coefficients in this new reference frame 6) Not necessarily, this will become clearer in the next videos (think of Fourier analysis) 7) The POD modes are one possibility for vk, but not the only one; Fourier modes can be another possibility. The most suitable vk depends on the problem, in the next videos you will learn more about it 8) Yes, that’s exactly the point. You would want to use vk to represent all pictures, and then each picture would have different coefficients sk on that basis vk 9) Yes exactly 10) This is the hope, to find a general basis, but it’s not always so easy. With images, Fourier is typically a good choice. With more complex engineering data… highly problem dependent!
@@rvinuesa thank you very much
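The point of answers 8-10 (one shared basis vk; each picture gets its own coefficients sk; Fourier as a typical basis for images) can be illustrated in a few lines. A minimal sketch using 1D signals in place of images (the signals are made up for illustration):

```python
import numpy as np

N = 64
t = np.arange(N)
# Two different "images" (1D signals here, for brevity)
A = np.sin(2 * np.pi * 3 * t / N)
B = np.sin(2 * np.pi * 3 * t / N) + 0.5 * np.cos(2 * np.pi * 8 * t / N)

# Shared basis vk: the Fourier modes. The DFT yields the coefficients sk.
s_A = np.fft.fft(A)
s_B = np.fft.fft(B)

# Same basis, different coefficient sets -- and each signal is recovered
# exactly from its own sk
assert np.allclose(np.fft.ifft(s_A).real, A)
assert np.allclose(np.fft.ifft(s_B).real, B)

# Sparsity in the shared basis: A needs only two significant coefficients
assert np.sum(np.abs(s_A) > 1e-8) == 2  # the +/- 3-cycle frequencies
```

Storing only the few significant sk per signal, instead of all N samples, is exactly the compression argument of answer 9.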
✅ The QR algorithm to implement the optimal sensor placement is described in the next video! ruclips.net/video/gM9IciX2Uiw/видео.htmlsi=7myuqNy1CjccUfMg
✅ More on optimal sensor placement, but using deep learning: ruclips.net/video/rJofxTatW_8/видео.htmlsi=NbczKszrm5OteT45
‼️There is a typo at minute 21:48! The equation for the final POD modes should read: Phi=M^(-1/2)*Phi^hat Thanks!!
Superb, thanks once more!
Thanks for watching!! 🙏
Thanks for the whole series of videos. 1) Are the a sub i (t) entries of the matrix capital A (capital Sigma times capital V T)? 2) If A is an nxm matrix, shouldn't the summation of the expansion have nxm terms instead of just m-1? 3) How come the first term (mean flow - zero mode) is "special" and outside the summation, i.e. with a sub 0 (t) = constant = 1? 4) I am a bit confused about data representation: is X still a 2-dimensional matrix nxm, where n = 3 times the number of elements in the mesh (i.e. x-velocity, y-velocity and z-velocity are in the same column one under the other)? Thank you in advance
Excellent questions! 1) The ai(t) are the temporal coefficients, product of Sigma and V. Note that V contains the temporal information, and the singular value in Sigma gives the amplification of the temporal content. 2) This is because the first mode is always the mean flow, which is taken out of the summation. Therefore for m snapshots you will have m-1 terms in the summation 3) Related to the previous point: the first mode is always the mean flow, with the largest singular value and constant temporal evolution. It is therefore very common to first remove the mean from the flow data, and then perform POD on the fluctuations; then the first mode is the most energetic fluctuating one. 4) This is correct! Thanks for watching, keep up the good work and feel free to ask more! 🙂
Thank you very much for your prompt reply, I have another question: at min 21:48 is the equation correct or is there a typo and phi and phi hat are interchanged?
@@DiegoAmelio-e9d you are totally right, there is a typo right there! The equation should read: Phi=M^(-1/2)*Phi^hat Good catch!! 🙏
Professor, how about using the attention weights to optimize the sensor placement?
It could be an idea to explore. It’s connected with the SHAP approach
Please suggest a learning pathway for a fresh mechanical graduate with a good foundation in CFD but no knowledge of ML.
I think that there is good online material to start. Read articles, try to look at code repositories and implement things yourself. Applied experience is good experience! Good luck!!
Nice one!
Thank you very much!!
The foil sure looks like a fish swimming from above
Awesome.
Awesome.
Excellent, they are geniuses, thanks for sharing / excellent, thanks for sharing.
Great video. With m as time and n as individual measurements, I really like classical mechanical systems as an example of m >> n. In the case of a single motor or the pendulum on a cart, n is only 1 or 2. The stock market is, like flow control, a difficult topic. You could observe the price of your favorite company every ns in high-frequency trading, or the annual reports of the S&P500 from the last decade. I wouldn't be surprised if evolving nxm is a problem finance has to deal with in funds or budgets.
I really liked the book by Brunton and Kutz. I look forward to what you will add to the subject.
Thanks once more for the great videos! :D
Thanks for your support!
Happy studying everyone
fabulous thank you for sharing
👍👍👏👏
😮 I like it!
Great video, Professor Vinuesa.
Thank you!!
Very interesting talk! Have you tried comparing the performance of your ROM, in terms of both prediction time horizon and accuracy, with other projection-based ROMs such as operator inference? I see that the time horizon of prediction is 50 \Delta t. Is \Delta t the DNS time step?
Excellent question! Yes, we made some comparisons with other methods, see here: www.nature.com/articles/s41467-024-45578-4 www.sciencedirect.com/science/article/pii/S0142727X23001534
Thank you, professor, I’ve read a lot of your papers, that’s cool!
Thank you very much!!
This is the first video I've ever watched about ML for CFD, and it covers nearly five ML techniques and the way they are implemented. Thanks!
Next video please.
Brilliant, thank you.
Happy that you enjoyed it!
Great series of videos. If we have a PIV dataset which is not temporal and each velocity snapshot is u(x,y), what property of the velocity field would the A matrix define? Is it still temporal, or does \Phi define variables in the y direction and A the streamwise variable? Many thanks
Just to understand better: if the dataset is not temporal, what are the different snapshots? Aren’t they taken at different instants? You can have 2D snapshots (2D modes in Phi) and then temporal coefficients ai(t). Can you explain the dataset in more detail?
@@rvinuesa Snapshots are 2D images of the velocity field taken at different instants. Each snapshot is independent of the others, and the ensemble average of the statistics is compared with the statistics of a canonical boundary layer. In this dataset, we aimed to study coherent structures of wall turbulence, such as Uniform Momentum Zones (UMZs). I wonder if we can recognize these structures using the POD method instead of histogram-based approaches. If I have just one image (no temporal sequence) and want to decompose this image using the POD method, can I recognize UMZs by selecting the largest eigenvalues?
@@roozbehehsani1468 Here you need to be careful with one thing: when you do UMZs you basically do feature selection, whereas POD is a method of feature extraction. In feature extraction, the new features (i.e. the POD modes) are different from the original ones. If you want to find an alternative way to identify UMZs, I would suggest some method based on image segmentation, there are many methods within computer vision that can be helpful (See e.g. U-nets). I hope this helps, and feel free to email me if you have questions
@@VinuesaLab Thanks a lot for the reply. Since all ML models need labeled datasets, and the histogram-based approach for detecting UMZs and building a labeled dataset has flaws, I am thinking more about some fundamental models that detect UMZs (like POD). ML models basically just map the input to the output. If you know of any ML model that would be helpful, I would appreciate it if you could tell me.
Quite informative and very well explained. Thanks for such an amazing video !
You have very nice energy, similar to Ricardo's. Keep it up!
Well done Marcial!
Thanks as always for the videos on machine learning!
Thanks for following the series!!
Great series. Thanks. Can you please elaborate a little on how we can interpret the POD mode shapes? I mean, by looking at the highest-energy mode shape, let's say, what can we understand about the turbulent flow?
It depends on the case, but a clear example is how you can interpret the structures in the wake of a cylinder based on POD modes
Amazing series on data-driven science
Thank you so much!!
Looking forward to the next video for long awaited POD details :)
I wish I were your student
Many Thanks for your valuable videos. I hope the next video is Dynamic Mode Decomposition DMD 😊
The next one is POD 🙂. DMD will come in the future!! 👌
@@rvinuesa Many thanks
I kinda already gave a like before watching the full video
That's great, Professor. I am joining your session on 6th May, 2024, as well. Looking to validate some case studies in this domain.
Great to have you in the session!!
From Valencia? You keep working even on vacation, what an education-focused man you are!
Can you suggest what the learning pathway should be for applying ML in CFD? Suppose one is a fresh mechanical engineering graduate with a basic understanding of CFD but not much expertise in AI/ML?
I think it is important to have a very strong foundation in fluid mechanics and CFD. Then you can dive into ML and apply methods from the fundamental understanding. Hope this helps!
stockholm!
Nice and simple explanation. Waiting for the new videos in the series. What is the total number of videos that will be uploaded in this series?
We will probably have a couple more videos on SVD🙂
Thanks for another great video. Please make a comment on POD vs SVD in the next video !!
This is exactly the topic of a lecture coming up very soon! Stay tuned 🙂
why didn't my notification work! good to check sometimes if new videos are up
Your videos have priceless value. Could you cover more scientific computing and numerical linear algebra (Krylov subspaces, GMRES, iterative solvers, etc.)?
Those are interesting topics! After the ML series I am thinking about creating one on numerics and CFD. Stay tuned!