Thank you, again. I use Markov chains and Monte Carlo methods to model ion channel function and other things. I was foolish enough to believe that I had independently figured out what you describe as the Inverse CDF Transform. It is actually a relief to learn that there are already established principled theorems about this. So, maybe some of my publications were not barking up the wrong tree after all :-).
🙏
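[For readers who haven't seen the technique being discussed: a minimal sketch of the inverse CDF transform. The choice of an exponential distribution and the sample count here are just illustrative assumptions, not something from the video.]

```python
import random
import math

random.seed(0)  # for reproducibility

def sample_exponential(rate):
    """Draw from Exponential(rate) via the inverse CDF transform.

    CDF: F(x) = 1 - exp(-rate * x), so F^-1(u) = -ln(1 - u) / rate.
    A uniform u in (0, 1) pushed through F^-1 follows the target distribution.
    """
    u = random.random()
    return -math.log(1.0 - u) / rate

# Sanity check: the sample mean should approach 1/rate.
samples = [sample_exponential(2.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)  # close to 1/2.0 = 0.5
```

The same recipe works for any distribution whose CDF you can invert in closed form, which is presumably what connects it to the Markov chain / Monte Carlo modelling mentioned above.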
This Bayesian regression series is incredible. The way it starts with a polynomial regression and then builds up to more general probabilistic modelling is brilliant and reminds me of Rasmussen's book on Gaussian processes. Are you going to make a video about kernels? That would nicely generalize the cosine example of part 1.
🙏 maybe in near future.
Great video! I subscribed, please keep them coming!!!
🙏
10:50 you wouldn't be wasting our time, this video about generating random numbers seems very interesting!
🙏
Another great video. 👍
🙏
❤❤🙏🏽🙏🏽🙏🏽 thanks and subscribed
waiting for kernels and Gaussian processes. All of your videos are amazing:)
Either you have hacked my account or can read my mind :) .... am planning to do those at some point but not sure if I will do it as part of this series.
Jokes aside, thanks for your comment and if possible share these with people who may be interested in this subject.
@@KapilSachdeva I'm currently working with Gaussian processes, so I've kind of been anticipating that video for a while :) and waiting to see your take on them, just being a fan-boi for this series/all-your-videos in general.
I do share your videos, it's kind of sad that gold is being ignored by youtube's algorithm :(
You will be happy to know that I was using GPs yesterday for my own project, hence the comment about you hacking my computer :) .... I think GPs deserve their own mini series, so I will do them at some point.
@@KapilSachdeva yes, Non-parametrics are amazing :p
When will there be a video about the theoretical aspects of score-based diffusion models?
I think importance sampling is used in Reinforcement Learning... and this large number of samples represents the large number of trials we train the Actors and Critics for. Is that so, or am I mistakenly confusing topics?
I do not know much about RL, so I cannot really give any feedback on your statement.
Think of it this way: wherever you need to compute expected values, you may end up using importance sampling.
See this article (I have only given it a cursory look):
jonathan-hui.medium.com/rl-importance-sampling-ebfb28b4a8c6
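[A minimal sketch of the "expected values via importance sampling" idea mentioned above. The target, proposal, and sample size are illustrative assumptions only: we estimate E_p[f(X)] for p = N(0, 1) and f(x) = x² (true value 1.0) while drawing samples from a wider proposal q = N(0, 2) and reweighting by p(x)/q(x).]

```python
import random
import math

random.seed(0)  # for reproducibility

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

n = 200_000
total = 0.0
for _ in range(n):
    x = random.gauss(0.0, 2.0)                       # sample from proposal q
    w = normal_pdf(x, 0.0, 1.0) / normal_pdf(x, 0.0, 2.0)  # importance weight p/q
    total += (x ** 2) * w                            # f(x) * weight

estimate = total / n  # should land near E_p[x^2] = 1.0
```

The RL connection alluded to in the comment is that the same reweighting trick lets you evaluate one policy's expected return using trajectories sampled from a different policy, but see the linked article for that.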