I'm fascinated by the prospect of using ML for physics problems. Subscribed and looking forward to following your journey.
I'm trying to train my neural nets to predict optical responses of metamaterials, so technically solving Maxwell's equations. The result is an NN for predicting the reflectance, transmittance, and absorption of a metagrating. Hehe
I have no idea what this is, but I watched the whole vid
IIUC:
complicated equations describe the flow of fluid. (Navier-Stokes equations)
solutions of these equations represent a valid fluid flow.
using this, you can create a function that calculates the flow of fluid given an initial condition (but it is very slow)
neural networks can learn arbitrary functions
this guy: trains a neural network to predict fluid flow by giving it data from the slow fluid flow algorithm so we can do fluid flow modeling faster.
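In code, that last step is basically a regression fit to solver outputs. A minimal PyTorch sketch (made-up shapes and placeholder data, not the code from the video):

```python
import torch
import torch.nn as nn

# Placeholder "training data" standing in for slow-solver output:
# inputs are (x, y, t) points, targets are (u, v, p) values at those points.
inputs = torch.rand(10000, 3)
targets = torch.rand(10000, 3)

# A small fully connected network acting as the fast surrogate.
model = nn.Sequential(
    nn.Linear(3, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 3),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(2000):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    optimizer.step()

# Once trained, model(new_points) is far cheaper than re-running the solver.
```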
Because of the intro music
Partial differential equations. They are learned only in the fourth year of a mathematics degree. One of the most complex but fascinating classes of equations, with a huge number of applications in physics, chemistry, and computing, especially in visual effects and probably in AI too. There are a lot of different research directions to consider.
@@astroid-ws4py Partial differential equations are learned way before the 4th year of a mathematics degree. We were doing partial derivatives in the 1st year of my EEE degree
You do understand shit then
Thanks for highlighting our paper on Vortex Induced Vibrations (VIV). We are now building a digital and physical twin at MIT for this problem. You can use adaptive activation functions to avoid BAD minima!
What kind of adaptive activation functions?
Lol, the book you scrolled through after the paper is the book which my advisor wrote. It's called: "The Finite Volume Method in Computational Fluid Dynamics: An Advanced Introduction with OpenFOAM and Matlab"
This is an awesome use of SL. Makes me want to try a similar project with PINNs. Great job dude 😄
Solving physics problems with ML, NOW THAT'S WHAT I AM TALKING ABOUT!!!!
Subscribed
Great content! Keep up the good work. The background music is a bit distracting; try lighter music, just a suggestion.
Never had the courage to do it, but YES! I think that it's nearly certain that with a general AI we will find a generalized solution for the Navier-Stokes equations :p
CONGRATS!
Hey Adam, don’t understand anything but I support the channel 😂
- Erik
Exactly. I am searching for why Adam is not effective. Thank you for sharing.
It would be interesting to check the MSE on a validation set, and also to try something like a VAE to be able to change properties of the fluid: speed, pressure areas, etc. (but also add a time component: RNN, LSTM, transformer 🤔)
Hello, I simply love the way you explained physics-informed neural networks, and especially the coding part. Kudos!!
I am new to the topic of PINNs and I just wanted to ask: can we implement a PINN for a 1st-order coupled ODE system with just one independent variable? Like dP/dt = f(x, y); dS/dt = g(x, P); dT/dt = h(x, y, S, T)?
If yes, could you please point me to some examples where I can find a way to code this?
Thank you very much in advance!!
Subscribed to your channel as well!
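For anyone with the same question: a coupled first-order system like this fits the usual PINN recipe, with a single network taking t in and outputting all the states, and one residual term per equation. A rough sketch, assuming PyTorch; the right-hand sides and initial conditions below are hypothetical placeholders, not the actual f, g, h:

```python
import torch
import torch.nn as nn

# One network maps t -> (P, S, T); each ODE contributes one residual term.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 3))

t = torch.linspace(0.0, 1.0, 200).reshape(-1, 1).requires_grad_(True)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    optimizer.zero_grad()
    out = net(t)
    P, S, T = out[:, 0:1], out[:, 1:2], out[:, 2:3]

    # Time derivatives via autograd, one call per output
    dP = torch.autograd.grad(P, t, torch.ones_like(P), create_graph=True)[0]
    dS = torch.autograd.grad(S, t, torch.ones_like(S), create_graph=True)[0]
    dT = torch.autograd.grad(T, t, torch.ones_like(T), create_graph=True)[0]

    # Placeholder right-hand sides -- replace with the real f, g, h
    res_P = dP - (-P)
    res_S = dS - (P - S)
    res_T = dT - (S - T)

    # Hypothetical initial conditions at t = 0
    ic = net(torch.zeros(1, 1)) - torch.tensor([[1.0, 0.0, 0.0]])

    loss = (res_P**2 + res_S**2 + res_T**2).mean() + (ic**2).mean()
    loss.backward()
    optimizer.step()
```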
Could you also compare for computational speed up versus accuracy? It's a fascinating field of research though!
This is exactly the kind of info I was looking for! Thank you! I wonder if you could possibly spend more time on a more detailed explanation of how you compute the loss. I see it involves computing some gradients of the outputs, but I can't figure out how it is done. I'm not a torch user, so I'm trying to replicate similar stuff with TF.
You can have a look at this paper: arxiv.org/pdf/1711.10566.pdf
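For the gradient part specifically: in PyTorch the derivatives entering the residual come from torch.autograd.grad applied to the network outputs (tf.GradientTape plays the same role on the TensorFlow side). A rough sketch of the pattern with illustrative names, not the exact code from the video:

```python
import torch

# Assumes x, y, t are column tensors of collocation points created with
# requires_grad=True, and `net` maps (x, y, t) -> (psi, p).
def first_derivative(output, wrt):
    # d(output)/d(wrt), keeping the graph so we can differentiate again
    return torch.autograd.grad(output, wrt, torch.ones_like(output),
                               create_graph=True)[0]

def x_momentum_residual(net, x, y, t, nu=0.01):  # nu is a hypothetical viscosity
    psi_p = net(torch.cat([x, y, t], dim=1))
    psi, p = psi_p[:, 0:1], psi_p[:, 1:2]

    u = first_derivative(psi, y)        # u =  d(psi)/dy
    v = -first_derivative(psi, x)       # v = -d(psi)/dx

    u_t = first_derivative(u, t)
    u_x = first_derivative(u, x)
    u_y = first_derivative(u, y)
    u_xx = first_derivative(u_x, x)     # second derivative: differentiate again
    u_yy = first_derivative(u_y, y)
    p_x = first_derivative(p, x)

    # x-momentum residual; the y-momentum residual is built the same way
    return u_t + u * u_x + v * u_y + p_x - nu * (u_xx + u_yy)
```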
Underrated channel
Thanks, your video motivated me to do a project with PINNs
It is fairly easy to adjust the code to run on a GPU; this will give you significant training speedups
Definitely subscribing, you've got great content. Where did you find the dataset used?
Hi there, fantastic work. However, could you tell us a little bit about the normalization process for the data?
Tnx
It's very interesting. I am interested in pursuing research with PINNs for orbital dynamics and the LEO environment. Looking forward to more such videos, mate!
Man, what a coincidence. I am currently trying to do that but it isn't going very well
Thank you for the video. Can you please show us a PINN for the compressible N-S equations with viscosity and diffusion (a continuity equation with diffusion coupled with a momentum-type equation with viscosity) in a 2D square? Thank you in advance.
Could you please make a version without the background music? Thx! 🤗
I was thinking through your problem with L-BFGS vs mini-batching like SGD or Adam. Isn't it the case that you can shuffle your mini-batches more effectively and/or involve some gradient accumulation to prevent the overlooking of key physical constraints in the cylinder wake problem? That way you could achieve the same result without needing this much compute and the possible memory bottleneck that your solution involves.
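Concretely, the accumulation idea would look roughly like this: sum gradients over several mini-batches of collocation points and take one optimizer step per accumulated "large batch". A sketch; pinn_loss stands in for the combined data + physics loss and is not code from the video:

```python
import torch

def train_epoch(model, optimizer, batches, pinn_loss, accumulation_steps=8):
    # Accumulate gradients over `accumulation_steps` mini-batches, then take a
    # single optimizer step, approximating a full-batch update at lower memory cost.
    optimizer.zero_grad()
    for i, batch in enumerate(batches):
        loss = pinn_loss(model, batch) / accumulation_steps  # scale so the summed
        loss.backward()                                      # gradient matches the mean
        if (i + 1) % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```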
Great video! Is your final PINN just a compressed representation of the CFD training data, or does your model generalize to different-sized cylinders, different fluids, velocities, etc?
Super interesting. Awesome work. Feedback: Music is a bit loud. Please reduce music volume. Perhaps select slightly slower music.
With what changes would it be possible to create a model that takes an unknown geometry as input and then predicts the velocity and pressure fields?
I am not sure I understand the philosophy of using NNs for fluid dynamics in this type of application. NNs are essentially (very simplistically) regression algorithms, which seek to turn discrete data into a continuous function to approximate some unknown functional. So what would be the ideal NN in this case? Well, it would be one which approximates the governing equations we start with and already know. So what has been achieved? Training an NN to approximate a governing equation that you already know? Perhaps I am missing something.
That is not to say I don't see the benefit in other applications of physics/engineering/fluid dynamics etc.
One example of using NNs in CFD would be speeding up simulations. For instance, you could train an NN to predict chemical compositions for a given reaction, which can then be applied in CFD simulations of combustion (which is much faster than finding those ratios from chemical kinetics). I am currently working on a video in which I will explain the applications of NNs in CFD in more detail.
Also, you could use PINNs for shape optimization. I've seen a paper in which the researchers trained an NN on different airfoils and then used it to find the shape which minimizes drag. This approach is faster than running a CFD simulation for every possible airfoil shape.
The training data is on a regular grid, but what if your data is not on a grid like this and you actually include the particles in your data? Will I need some kind of boundary condition then?
Wow! This was fantastic -- how did you go about creating the visualizations?
I used a bunch of software: OpenFOAM, ParaView, Python, Kdenlive
you can do simple figure animations in matplotlib
Hi! Very cool video!
Could you please share a link to the source of the data? Was it a book or some kind of dataset? I would like to repeat this result for myself and would be very grateful if you could share a link to the dataset
Hey man, love your video! Are you Polish by any chance?
Maybe the use of SIREN or WIRE + Sobolev training (during which the derivatives are supervised) of implicit representations might add speed and quality to your solutions.
2:53 Can anyone please explain how the cost function (boundary conditions) is obtained using supervised learning?
Could humanity one day utilize this knowledge to enhance the rheological properties of a bolus, thereby simulating an accurate, "real" human swallowing process? I'm a Speech-Language Pathologist working with patients who have Dysphagia.
Try using "leaky ReLU" instead of sigmoid or tanh functions.
Can I use the trained neural network to predict flows at lower Re numbers at which vortex shedding has not started yet?
Genuinely fascinated by the use of PINNs to accelerate computation of such important problems like this! Is it in any way possible to train something like this (even if only in 1D) on a strong pc? If so, what specs would you use? (I am planning to conduct further research into this specific use of PINNs 😅)
This is a cool idea, but could you give some potential use cases for this? In my understanding, it just learns the results of the simulation for one set of conditions. Can you use it as more than just a way of compressing the simulated results into neural network params?
I'm currently working on a video in which I'll show some basic applications of trained NNs in CFD. I just need some time ;)
uff the naruto music 🥺😂 I am in love 😂
Very cool what you did here. Great job. Thanks. Are you familiar with SINDy - sparse identification of non-linear dynamics, by Steven Brunton? He has a lot of YouTube videos. I wonder if your solution scales as well as or better than his. He uses sparse matrices of coefficients for a large set of functions. The answer would make a good paper, right? You'd have to account for the exec speed of different CPU/GPU/TPU operations/instructions and the complexity of unit operations in each method to make it a fair comparison. Come to think of it, that's probably been done. If anyone knows, I'd like to see the reference. Anyway, please keep making great videos like this.
It's fantastic!!! TY
Great work!
2:40 Is there a particular reason why you used a double-sigmoid at the output layer? I can see how using ReLUs for the hidden layers could cause problems as the gradients are utilized for evaluating the loss. But why did you not use something like SoftPlus or Swish as people usually do or maybe the sine function like they do in the SIREN paper?
The problem is you would need to populate data evenly around a phase space in order to generalize the solutions across that space. This seems difficult, as I expect such data to be rare.
Another great resource with a dose of useful knowledge
How did you initialize your parameters in the network?
How can I get the same predicted graph as in the paper with your code? 4:48
The graph your code is giving goes from 0 to 50 on the y-axis and 0 to 100 on the x-axis.
I tried changing the axis values but I was not getting the same graph as in the paper.
What if we don't have training data? No experiments, no CFD. Just equations and boundary conditions.
Does this video mean that the trained model can be generally applied to other fluid situations?
Or is it only showing that such a nonlinear network can approximate the given result when trained for certain cases?
The trained model in this example can not be applied to other fluid situations.
How much time did it take to train using the L-BFGS optimization method?
At 4:45 the values on the scale between the predicted and exact pressure fields are completely off…
It is because the neural network finds the pressure field such that the derivatives match the Navier-Stokes equation. So the values differ by a constant. The gradients of the pressure are equal though. I explained it in the video but perhaps you missed it.
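A quick way to compare the two fields despite that is to remove the arbitrary offset before plotting, e.g. by subtracting each field's mean. A small sketch with illustrative array names:

```python
import numpy as np

def align_pressure(p_pred, p_exact):
    # Pressure is only determined up to an additive constant, so subtract each
    # field's mean before comparing the predicted and reference fields.
    p_pred = p_pred - p_pred.mean()
    p_exact = p_exact - p_exact.mean()
    return p_pred, p_exact, np.abs(p_pred - p_exact).max()
```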
How is the ground truth flow field generated? Is the neural network more efficient?
Can you tell me how you got the cylinder_wake.mat file, or how to use a particular dataset for the same?
Hi, I'm doing PINNs for my engineering degree project. How did you generate the data to train the network on?
Awesome
Could you kindly tell me which approach you've taken? Did you use the neural network as a black box to solve the NS equations with a training set, or did you use it to approximate derivatives in the equations and then solve them? Personally, I think the second one is more promising. Just curious. Excellent work by the way.
Around timestamp 2:25 he says he downloaded data. As you say, the second approach is the promising one. What he did is cool from a neural network POV but is totally worthless from a physics standpoint. Using a set of data for a cylinder around a certain Re, we could create correlations and use something very simple to calculate the flow. The real magic would be if he could somehow use data from a certain group of Re and successfully predict, to a reasonable extent, the flow at any Re, even in the millions.
PINNs apparently incorporate the PDEs into their loss function, according to ChatGPT. I kinda get why that makes standard deep learning techniques fail, especially in the case of the Navier-Stokes equations. Would training a CNN autoregressively with MSE + Adam on pre-simulated velocity fields work?
I attempted to train a CNN using averaged data from particle simulations. Specifically, I utilized data from my particle simulator to compute a grid of densities, temperatures, and velocities. Unfortunately, the results did not meet my expectations. The closest I came to achieving the desired outcome was when the network learned some wave-like structures, but it completely ignored obstacles, resulting in density waves tunneling through them. I'm still working on the problem, but it appears that a change in approach may be necessary. Perhaps implementing a new training pipeline could be helpful, but I'm unsure at this point.
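For what it's worth, one common trick in this kind of grid-based setup is to give the network the obstacle geometry explicitly as an extra input channel and mask its outputs, so solid cells cannot carry density. A rough sketch of one-step autoregressive training with made-up shapes, not code from the video or the comment above:

```python
import torch
import torch.nn as nn

# Input channels: density, u, v, obstacle mask; output: next-step density, u, v.
cnn = nn.Sequential(
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(cnn.parameters(), lr=1e-4)

# states: (T, 3, H, W) pre-simulated fields; mask: (1, H, W), zero inside obstacles.
states = torch.rand(100, 3, 64, 64)
mask = torch.ones(1, 64, 64)

for epoch in range(10):
    for i in range(states.shape[0] - 1):
        current = torch.cat([states[i], mask]).unsqueeze(0)  # (1, 4, H, W)
        target = states[i + 1].unsqueeze(0)
        pred = cnn(current) * mask                           # force zero inside obstacles
        loss = nn.functional.mse_loss(pred, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# At inference time, feed predictions back in (rollout): next = cnn(cat([prev, mask])) * mask
```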
I am a beginner in the field of machine learning and AI. I wanted to ask whether it is necessary for me to do DSA in Python for ML and AI?
I was wondering if we could train neural networks to perform tasks such as sorting arrays. Ideally, after sufficient training, they could do it faster than the fastest algorithms we have.
Such a thing is not possible. The algorithms we have are hard-coded and were already made by many great people. We could probably have an AI look at them to optimize them even more, but there is no way that a function with the least number of instructions is slower than an AI with millions of parameters and unpredictability.
@@shivavarunadicherla
Humans are "better" at sorting lists than computers are (we take less steps, even though the computer can make it faster). And we are better because we can quickly identify patterns in the list that allows us to optimally move parts around.
A NN could take advantage of this. Being trained to recognize patterns in the lists and then optimally sorting it in a non-linear way (something no classic algorithm can do).
Also, while complex neural networks can take millions of input parameters, the input-layer for such a neural network would be, literally, the array we're trying to sort, not millions (unless millions IS what we're trying to sort). It is also worth mention that, after trained, computing the activation function of each neuron is extremely fast.
I'll try it out.
@@matheusdardenne I would like to see this in action. I would think there would be some situations where the AI might outperform a given algorithm while losing to another algorithm, since with the AI we will be doing extra work looking for patterns. In the end it might depend on the array we give it, just as different sorting algorithms converge faster on specific patterns of input.
@@shivavarunadicherla
Exactly. I think for sufficiently long arrays, where even the best classical algorithms suck, this pattern recognition could be helpful to make sorting it more efficient, even with the overhead of calculating the activation functions (not counting the training time, of course).
Solving ODEs!
Looks promising. Do you predefine the mesh, or give guidelines about its properties? Also, is the 2D Navier-Stokes equation sufficient for modelling the flow around a cylinder? Typically, incompressible 2D flow can be modelled as a single PDE in terms of psi, where d psi/dy = u and d psi/dx = -v, and pressure is eliminated. However, I don't know whether physical systems with vortex shedding, specifically very long ones where a 2D approximation is more reasonable, would have significant velocity in the z direction. Do you have experience with or thoughts on this?
Also, do you manually test the GCI by changing the number of nodes, or does the neural network handle that?
1. I used the available data set from the literature, which contains all the properties (points, velocity, pressure)
2. Assuming the cylinder is long enough, the flow can be approximated using 2D equations, as the tip effects are marginal (especially in the middle of the cylinder)
3. I suppose you are talking about potential flows. This type of model only works for irrotational flows. However, in the case of the cylinder, there are viscous effects which result in vorticity. Thus, the full Navier-Stokes equations should be considered for this problem.
4. I just used the same number of nodes as in the article I showed. I didn't really test it.
I hope that answers the questions
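For reference, the stream-function setup being discussed, as in the Raissi et al. cylinder-wake paper that the video follows, looks roughly like this, with the network outputting $\psi$ and $p$:

\[
u = \frac{\partial \psi}{\partial y}, \qquad v = -\frac{\partial \psi}{\partial x},
\]
\[
f = u_t + u\,u_x + v\,u_y + p_x - \nu\,(u_{xx} + u_{yy}), \qquad
g = v_t + u\,v_x + v\,v_y + p_y - \nu\,(v_{xx} + v_{yy}),
\]

and training drives $f$ and $g$ toward zero at the collocation points; continuity $u_x + v_y = 0$ is satisfied automatically because $u$ and $v$ are derived from $\psi$. (The paper writes unknown coefficients $\lambda_1$, $\lambda_2$ in place of $1$ and $\nu$ because it also treats the inverse problem.)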
@@computational_domain thanks for the reply! Yeah I had a feeling that was the case, but had never delved too deeply into 3D Navier Stokes applications. Good work
Large language models and chatbots, my dad made those two connections.
So did you train it such that the psi and pressure predictions (or physical properties derived from those) match the numerical results produced by a PDE solver? Or did you train it such that the derivatives satisfy the Navier-Stokes equations, without prior calculation of numerical solutions as training labels?
If it's the former (which it very much sounds like), I'm very interested in seeing how the latter would perform.
I trained the model based on the velocity field produced by a CFD solver. The pressure field was obtained only from the NS equations.
@@computational_domain Is it possible to train the network by the second method mentioned by @Marius.J where we do not have any data on the velocities at the collocation points?
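In principle yes, that is the usual data-free flavour of PINN training: drop the interior data term and keep only the PDE residuals at the collocation points plus the boundary/initial conditions. Schematically,

\[
\mathcal{L} \;=\; \frac{1}{N_f}\sum_{i=1}^{N_f}\Big( f(x_i, y_i, t_i)^2 + g(x_i, y_i, t_i)^2 \Big)
\;+\; \frac{1}{N_b}\sum_{j=1}^{N_b}\big\| \mathbf{u}(x_j, y_j, t_j) - \mathbf{u}^{\mathrm{BC/IC}}_j \big\|^2 ,
\]

where $f$ and $g$ are the momentum residuals. Whether this converges well for the wake problem is a separate question; without interior data the optimization is typically much harder.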
Looks really hardcore
Nice work, man!
nara
Hi, hello, I am struggling with a certain problem. Can you perhaps teach a neural network how to pick up maidens? I was trying to do it manually for ages, and I thought neural networks might be a relief in this matter, but my attempts were to no avail. Please help me
Tristian Tate already made a detailed video on this topic
hilarious
@@arnoldwang491 awww tysm
Do you have a Colab link?
Hello, your code doesn't work. Can you help me?
Lower the music volume. We can barely hear you over that music
I guess the real power of PINNs lies in training the neural network without any training data.
I have a question: when the neural network is trained, does it work only for a certain geometry, or does it generalize?
It only works for the specific case it was trained for.
@@computational_domain Let's say we have 2 regions. One is a rectangular region and the other is a circular region. We have the same PDE which describes the physics of the problem. Even though the PDE is the same, I'll have to train the model for both regions separately. Right?
A neural network is just generalized curve fitting. That's it. And we fit it by training it for hours/days/months on huge amounts of data... It cannot understand something outside of its training/fitting data.
How fast or expensive is the network to execute? Can it run in real time or does it take some time to make the predictions? Thanks!
It's pretty much instantaneous
What kind of grad student are you? What field? Physics, CS, Applied Math, Engineering? I want to do computational stuff like this in grad school and I have been accepted as a physics PhD, but I find that very few physicists actually do this stuff and it is more so in the other fields mentioned.
I'm majoring in Aerospace Engineering. Simulations like CFD, FEM or heat transfer are more of an engineering discipline, but there are some simulation/computational methods which are commonly used in physics/chemistry. For example, you could delve into Density Functional Theory (DFT) if you're planning to specialize in something like solid-state physics.
@@computational_domain Nice. Have you ever looked into surrogate modeling?
So, did you consider overfitting here?
Considering the results, he either did or didn't need to?
Interesting title, but the background music forced me to kill it early.
Dear Adam, such a nice introduction to PINNs. I am trying to solve a heat conduction problem using a PINN. Can I contact you regarding it? I went to your GitHub repo but unfortunately I did not find any contact details.
Hi Ali, yes sure you can send an email to thecomputationaldomain@gmail.com. I can have a look at it but I am not an expert on PINNs, so I might not be able to help.
You have a Polish accent, am I right?
xd
Hi there! I have a keen interest in merging machine learning and physics simulations. In fact, I'm currently working on this very topic myself. Specifically, I'm attempting to utilize neural networks in tandem with my Lennard-Jones particle simulator to train the NN in fluid dynamics based on particle simulation data. However, I've found this task to be more challenging than I initially anticipated. I would be thrilled to chat with you about this topic and potentially gain some insights from your experience. If you're open to it, would you be interested in discussing this further via email?
The music is a little too loud compared to your voice.
How did you train it without data???
I used the data from a CFD simulation to train the model
@@computational_domain thank you!
great topic, horrible audio 😨
What are you studying in graduate school? I'm assuming you have taken some physics classes at some point? Or are you just a curious comp sci major?
I'm studying Aerospace Engineering
@@computational_domain I studied the same and I wish I had put more effort into getting good at coding.
@@computational_domainOMG, IM SO EXCITED!!! I’m a senior rn, and got accepted for my major in AE, and even though I have no idea what happened in the video, I was entertained the whole time. GO AERO ✈️ 🚀💪🏽
Remember to give me 1% of your $1,000,000 when u win it, for having someone who truly believes in u as being the one to solve the NS-Millennial Problem
I could give you 99%, since I am certain that it's not gonna happen ;)
Is it possible to build a physics-and-social-response informed neural network that can simulate human responses very accurately at large scales?
I'm not really sure what you mean
Am I the only one who can barely understand the narrator because the music is so loud?
The music is too loud; I struggle to hear your voice
aaah the expanded form of the equations is so ugly