Great video, makes PINNs very easy to understand.
Why do we use ANNs on ODEs?
What is the difference between getting a solution from a numerical method and getting one from an ANN? Why do we use ANNs?
Neural networks for solving ODEs are at a very nascent stage; there is a lot left to do. Numerical methods serve very well as of now, so we don't really need neural networks for that. Can I have your LinkedIn ID?
Does the Curse of Dimensionality have any role in restricting the ANN model performance while solving ODEs and PDEs?
That depends on the range of values over which you are trying to solve the ODE/PDE. If the ODE/PDE is well behaved, an analysis over a short range of parameter values is sufficient, in which case the dimensionality should not be a problem.
Are there specific ODE forms which cannot be solved practically with the available ANN techniques, or computational limitations in ANNs, even though the universal approximation theorem may hold? Is the curse of dimensionality an issue here?
That's a good question, but hard to answer, since very limited work has been done on applying ANNs to solving ODEs.
Simply put, what does it mean to train a network? Please answer.
Watch this video on the Backpropagation algorithm : ruclips.net/video/ntnwjWEpnkk/видео.html
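In case a quick illustration helps while you watch: training a network just means repeatedly adjusting its weights to reduce a loss on the data. A minimal sketch in Python, using a toy one-weight model (my own example, not from the video):

```python
# Toy "network": y = w * x, trained by gradient descent to fit data
# that was generated by the true weight w = 2.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

w = 0.0       # initial weight
lr = 0.05     # learning rate

for _ in range(200):
    # Gradient of the mean squared error with respect to w:
    # d/dw mean((w*x - y)^2) = mean(2*(w*x - y)*x)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    # "Training" = one small step downhill on the loss
    w -= lr * grad
```

After the loop, `w` has moved from 0 to very close to 2 — that iterative weight adjustment is all that "training" means; backpropagation is just the efficient way to compute the gradient for multi-layer networks.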
Why don't we minimize the DE and the boundary conditions separately? In the combined case, we can't actually see which of the two is being minimized more or less, right?
How would you minimise them separately? In the end you need a single cost function to be minimised. You can always compute the two terms separately after each forward pass to decide the stopping criterion.
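To make the two terms concrete, here is a minimal sketch, assuming the toy problem y' = -y with y(0) = 1 and a quadratic candidate solution standing in for the network output (my own example, not from the video). The two terms are computed separately, so you can monitor each, but only their sum is minimised:

```python
import numpy as np

# Candidate solution y(x) = a + b*x + c*x^2 (stand-in for a network output
# after some forward pass; the coefficients are arbitrary for illustration)
a, b, c = 1.1, -1.0, 0.4

xs = np.linspace(0.0, 2.0, 50)     # collocation points
y  = a + b * xs + c * xs**2
dy = b + 2 * c * xs                # exact derivative of the candidate

# ODE y' = -y  <=>  y' + y = 0, with initial condition y(0) = 1
loss_de = np.mean((dy + y) ** 2)   # residual term: how badly the DE is violated
loss_bc = (a - 1.0) ** 2           # initial-condition term: y(0) vs. 1

total = loss_de + loss_bc          # the single cost that is actually minimised
```

Logging `loss_de` and `loss_bc` per iteration shows exactly which term dominates; in practice people also weight them, e.g. `total = loss_de + lam * loss_bc`.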
There can be ODEs with more than one solution, so do we need to train the network that many times, or is one training sufficient?
I think we are training for only a particular solution, so we can't really say that we have solved the ODE in full generality.
Yes, with an ANN we are not finding a symbolic solution to the ODE. So we have to specify the initial conditions, and we get the corresponding particular solution.
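One common way to bake the initial condition in (the trial-solution idea of Lagaris et al.) is the ansatz y(x) = y0 + x·N(x), which satisfies y(0) = y0 by construction, so only the ODE residual has to be minimised. A small numpy sketch for y' = -y, y(0) = 1, with a tiny tanh network and a crude finite-difference gradient descent (illustration only, not necessarily the video's exact method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer network N(x) = v . tanh(w*x + b)
H = 8
params = rng.normal(scale=0.1, size=3 * H)  # packed as [w, b, v]

def net(p, x):
    w, b, v = p[:H], p[H:2*H], p[2*H:]
    return np.tanh(np.outer(x, w) + b) @ v

def net_dx(p, x):
    w, b, v = p[:H], p[H:2*H], p[2*H:]
    t = np.tanh(np.outer(x, w) + b)
    return (1 - t**2) @ (v * w)        # dN/dx, computed analytically

# Trial solution y(x) = y0 + x*N(x) satisfies y(0) = y0 exactly,
# so the initial condition never appears in the loss.
y0 = 1.0
def y_trial(p, x):
    return y0 + x * net(p, x)

def dy_trial(p, x):
    return net(p, x) + x * net_dx(p, x)

# Residual loss for y' = -y  <=>  y' + y = 0 at collocation points
xs = np.linspace(0.0, 2.0, 20)
def loss(p):
    r = dy_trial(p, xs) + y_trial(p, xs)
    return np.mean(r**2)

# Crude central-difference gradient descent (a real PINN would use autodiff)
def grad(p, eps=1e-6):
    g = np.zeros_like(p)
    for i in range(len(p)):
        up, down = p.copy(), p.copy()
        up[i] += eps
        down[i] -= eps
        g[i] = (loss(up) - loss(down)) / (2 * eps)
    return g

initial_loss = loss(params)
for _ in range(2000):
    params -= 0.05 * grad(params)
final_loss = loss(params)
```

Change `y0` and you must retrain: the trained weights encode only the one particular solution, which is exactly why this is not a symbolic solve.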