Dear Professor, At around timestamp 57:25, we go from the integral to an average sum. On what basis are we substituting P(X) = 1/N. What is the basis for this assumption that the PDF of X is uniform?
There is no basis for it as such, but do you want to assume some distribution over the data? It is a parsimonious approach: when we do not know the distribution, take the least "informative" one, i.e., the uniform distribution. That way we assume no prior information about the data. But if you have a prior, feel free to use it!
We are not substituting P(X) = 1/N. On slide 109, last equation (in red): the Law of Large Numbers (LLN) states that the RHS converges to the LHS as N -> infinity. In other words, we do not know P(X) and we don't need to, because we can estimate the expected value using the LLN.
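To see the point concretely, here is a small sketch (not from the lecture; the distribution, function, and sample sizes are my own choices for illustration): the sample average (1/N) * sum f(x_i), with x_i drawn from P(X), converges to E[f(X)] = ∫ f(x) P(x) dx as N grows, without ever needing P(X) in closed form, only the ability to sample from it.

```python
import random

random.seed(0)

def f(x):
    # Example function; for X ~ N(0, 1), E[f(X)] = E[X^2] = Var(X) = 1.
    return x * x

def sample():
    # Stand-in for "data drawn from an unknown P(X)". We happen to know
    # it here (standard normal) only so we can check the answer.
    return random.gauss(0.0, 1.0)

# The sample average converges to E[f(X)] = 1 as N grows (LLN).
for n in (100, 10_000, 1_000_000):
    avg = sum(f(sample()) for _ in range(n)) / n
    print(n, round(avg, 3))
```

Note that no 1/N appears as a probability anywhere: 1/N is just the averaging weight, and the distribution enters implicitly through where the samples land.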
240 students didn't even show up? These are the people developing our operating systems, our web stack platforms, our applications and software. They're all lazy bums who aren't even passionate about their field the way people were 20 years ago. Software used to be written by people who wanted to code whether they were rich or poor. It was in their blood. Now 90% of the industry is flooded with people who want the Sillyclown Valley lifestyle but don't care for the work. The industry only exists because of people who loved the work; the lifestyle was just a bonus.
Best course on deep learning. It's 2024 now and I'm happy I found it again. Well done!
Thank you again to Carnegie Mellon University & Bhiksha Raj. I find these lectures fascinating.
The professor with the sword is the Conan of Machine Learning!
Thanks for sharing knowledge. Amazing content and Professor.
The teacher seems to be so mean to his students! Quite surprised to see this at CMU!
cry more baby
he doesn't have time for idiots!
What is a good textbook / reference book to follow to keep with this lecture?
Does someone know where I can get the assignments for this class?
I want to attend the class
Great!
Three minutes into the lecture, and by now I would have left twice... and watched it on YouTube, where I can use my phone.
Thank You.
Wish I was his student
Hey man, satisfying your personal ego with a YouTube dislike is not cool.