An excellent review of PINNs and a very fascinating way to choose lambda to optimally weigh the losses on boundary versus interior points. Do you have a tutorial problem with code that exemplifies this approach? Please let me know. Thanks
Thank you :) I am happy to hear you found it interesting! Our code is available on Github: github.com/remcovandermeer/Optimally-Weighted-PINNs
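(For anyone who wants the gist before opening the repo: below is a minimal, illustrative sketch of the two-term weighted PINN loss on a toy 1D problem. PyTorch, the network size, and the toy PDE are assumptions of this sketch, not necessarily what the repo does; see the GitHub link above for the actual implementation.)

```python
import torch

# Illustrative sketch only (assumed PyTorch; the repo's actual code may
# differ). Toy problem: u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0,
# where f is chosen so the exact solution is sin(pi * x).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
f = lambda x: -torch.pi**2 * torch.sin(torch.pi * x)

def pde_residual(x):
    # Autodiff gives u'' at the interior collocation points.
    x = x.clone().requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return d2u - f(x)

def loss_fn(lam, x_int, x_bc, u_bc):
    # lam trades off boundary fit against the PDE residual; the talk is
    # about choosing this weight optimally rather than by hand.
    loss_int = pde_residual(x_int).pow(2).mean()
    loss_bc = (net(x_bc) - u_bc).pow(2).mean()
    return lam * loss_bc + (1.0 - lam) * loss_int

x_int = torch.rand(100, 1)           # interior collocation points
x_bc = torch.tensor([[0.0], [1.0]])  # boundary points
u_bc = torch.zeros(2, 1)             # Dirichlet boundary data
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss_fn(0.5, x_int, x_bc, u_bc).backward()
    opt.step()
```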
Have you heard of SINDy from Brunton's lab at UW?
Steve Brunton is my favorite teacher when it comes to machine learning meets dynamical systems.
Could you please provide a citation for the theorem (MOB, 2020) that you mention at 5:09? I couldn't find it anywhere.
Great presentation, and one of the most understandable explanations of AI-based PDE solvers!
Many thanks!!!
Thank you, Anastasia. The approach of trying to find those collocation points that have the most effect on the final solution could indeed be a very promising direction of research. While you demonstrated a couple of model examples, it would be great to one day see these methods applied to, e.g., fluid flows for reservoir modelling, gas dynamics, etc.
Agree; those are very interesting future directions we are thinking about!
Hi @Anastasia Borovykh
Thanks for this presentation. I read the article and I'm playing around with the code, and I wonder whether we can solve PDEs that depend on both time and space, or whether the method is limited to spatial dimensions only.
I would like to apply the approach to PDEs in finance (for example the Black-Scholes PDE), where only the boundary value at the final time is available and we are interested in the solution value at the initial time.
It would be helpful if you could comment on this.
Hi! Thank you for your interest :) Yes, definitely! In that case you would just create the collocation points also over your time variable. I have not worked on the financial applications of this method myself, but my collaborators have a paper where they use the weighting of the loss function to compute various option prices: arxiv.org/pdf/2005.12059.pdf Specifically in section 3.1 the Black Scholes model is discussed. Hope this helps! Anastasia
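(Editorial aside, for readers wondering what "collocation points also over your time variable" looks like in practice: a short sketch below. The domain bounds, the call payoff, and PyTorch are illustrative assumptions of this sketch, not code from the linked paper.)

```python
import torch

# Illustrative only: collocation sampling for a time-dependent problem on
# [0, T] x [x_lo, x_hi] where, as in a Black-Scholes-style setup, only the
# terminal value u(T, x) (the payoff) is prescribed.
T, x_lo, x_hi = 1.0, 0.0, 2.0
n_int, n_term = 1000, 100

# Interior collocation points: random (t, x) pairs over the whole
# space-time domain; these feed the PDE-residual part of the loss.
t_int = T * torch.rand(n_int, 1)
x_int = x_lo + (x_hi - x_lo) * torch.rand(n_int, 1)
interior = torch.cat([t_int, x_int], dim=1)

# The "boundary" term here uses terminal-condition points at t = T, with a
# hypothetical call payoff max(x - K, 0) as the target.
K = 1.0
x_term = x_lo + (x_hi - x_lo) * torch.rand(n_term, 1)
terminal = torch.cat([T * torch.ones(n_term, 1), x_term], dim=1)
payoff = torch.clamp(x_term - K, min=0.0)
```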
Hello Anastasia,
Your presentation is very interesting.
I'm Leon, and I'm currently working on PINNs for a vibration problem: the case of a beam bridge.
I would like to know: if we are dealing with a time-dependent PDE where we have both boundary and initial conditions, how can we define the loss function, given that we would like to choose the weights optimally?
Best regards,
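(For readers with the same question: one common pattern, sketched below under assumptions of my own and not taken from the talk, is to give each condition its own weighted mean-squared term, so initial and boundary conditions simply become additional terms next to the PDE residual.)

```python
import torch

# Hypothetical loss assembly for a time-dependent PDE with both initial
# and boundary conditions. w_f, w_ic, w_bc are the weights to be chosen,
# by hand or optimally as in the talk's two-term (interior vs. boundary)
# case.
def total_loss(residual, u_pred_ic, u_true_ic, u_pred_bc, u_true_bc,
               w_f=1.0, w_ic=1.0, w_bc=1.0):
    loss_f = residual.pow(2).mean()                  # PDE residual, interior points
    loss_ic = (u_pred_ic - u_true_ic).pow(2).mean()  # initial-condition misfit
    loss_bc = (u_pred_bc - u_true_bc).pow(2).mean()  # boundary-condition misfit
    return w_f * loss_f + w_ic * loss_ic + w_bc * loss_bc
```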
Great explanation! Could you make a video on hidden physics models (HPM)?
Can you let lambda be a parameter and use gradient descent to find its optimal value? Meaning, at each training step, take the gradient of the loss with respect to lambda as well.
If you did this and optimized lambda on the same loss function, then lambda would converge to either 1 or 0. The network would learn either the zero solution (a constant), which would satisfy the PDE but not the boundary conditions, or it would only satisfy the boundary conditions and not the PDE at all.
@@oliverhennigh451 Thanks for the comment. My setup is slightly different: I am trying the inverse problem of fitting the parameters of an ODE, i.e. x'' + bx' + kx = 0. I sampled and perturbed the real solution and used that data as domain data. Hence, I have three losses: the ODE loss (loss_f), the IC loss (loss_ic), and the loss between the predicted and sampled data (loss_u). I let the loss be λ² * (loss_f + loss_ic) + (1 - λ²) * loss_u, and take derivatives of the loss with respect to b, k, and λ. I square lambda so the weighting remains positive. It is true that lambda becomes pretty small but not zero, yet I am getting good results: b and k approach the actual values. Perhaps what I am doing does not make sense, but I am experimenting on my own. I would love some friends who know the material. Happy to share what I have.
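(For concreteness, here is roughly what the setup described above could look like in code. This is a sketch under assumptions of mine, PyTorch with made-up true parameters and sampling details, not the commenter's actual code.)

```python
import torch

# Sketch of the inverse problem above: recover b, k in x'' + bx' + kx = 0
# from noisy samples of the solution, with lambda trained jointly.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
b = torch.nn.Parameter(torch.tensor(0.1))
k = torch.nn.Parameter(torch.tensor(0.1))
lam = torch.nn.Parameter(torch.tensor(0.7))  # squared below, keeps weight positive

# Noisy "measured" data; a rough stand-in for the perturbed analytic
# solution of a hypothetical true system b = 0.5, k = 4, x(0) = 1.
t_data = torch.linspace(0, 5, 50).unsqueeze(1)
x_data = torch.exp(-0.25 * t_data) * torch.cos(1.98 * t_data) \
         + 0.01 * torch.randn_like(t_data)

def losses():
    t = t_data.clone().requires_grad_(True)
    x = net(t)
    dx = torch.autograd.grad(x.sum(), t, create_graph=True)[0]
    d2x = torch.autograd.grad(dx.sum(), t, create_graph=True)[0]
    loss_f = (d2x + b * dx + k * x).pow(2).mean()           # ODE residual
    loss_ic = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # x(0) = 1
    loss_u = (net(t_data) - x_data).pow(2).mean()           # data misfit
    return loss_f, loss_ic, loss_u

# b, k, and lambda are all updated by gradient descent on the weighted loss.
opt = torch.optim.Adam(list(net.parameters()) + [b, k, lam], lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss_f, loss_ic, loss_u = losses()
    (lam**2 * (loss_f + loss_ic) + (1 - lam**2) * loss_u).backward()
    opt.step()
```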
@@edvinbeqari7551 That sounds interesting. The way I see it: if we optimize lambda while training, then we just select the lambda that makes it easiest for the NN to make the loss small (what Oliver Hennigh also mentions). In our case it is not just about making the loss small, but about finding a weighting between interior and boundary such that a small loss implies a solution close to the true PDE solution. In your case I would view loss_f + loss_ic as a regularization-like term. But exactly what optimizing it during training means, I'd have to think about a bit more...
@@anastasiaborovykh120 Hi Anastasia, do you have a document where I can see the full derivation of the optimal lambda? Perhaps a simple example. I would love to learn your method.
Yes definitely. The derivation we did is in our paper arxiv.org/pdf/2002.06269
Thanks for sharing this excellent presentation
A really good and well-structured talk! It helped me a lot in preparing my bachelor thesis, which will be about this topic.
Hey Samuel, I was wondering if I could get some form of contact information from you as I am also working on my Bachelor Thesis about the same topic and was hoping to get some insights from others. Thank you.
Thanks, Anastasia. If you ever see this comment: THANK YOU SO MUCH!
Thank you for watching!
Appreciated.
Can you provide the code in Python?
Bravo
Whoa, wait, you're so pretty.