Very clear, thanks! I think there is an error in the linear models section, though: for a linear model (system of equations) to have an infinite number of solutions, d (the number of dimensions/variables) has to be larger than n (the number of samples/equations), but the presentation states the opposite.
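For anyone who wants to check this claim concretely, here is a minimal NumPy sketch (the sizes and values are made up for illustration, not taken from the talk): with d > n unknowns, a consistent system X w = y is underdetermined, so adding any null-space direction of X to one solution gives another exact solution.

```python
import numpy as np

# Minimal sketch: with d > n unknowns, a consistent system X w = y
# has infinitely many solutions (sizes here are illustrative only).
rng = np.random.default_rng(0)
n, d = 3, 5                        # n equations/samples, d unknowns/features
X = rng.standard_normal((n, d))    # rank(X) = n < d almost surely
y = rng.standard_normal(n)

w0 = np.linalg.lstsq(X, y, rcond=None)[0]   # one particular solution

# Any direction in the null space of X leaves X w unchanged.
_, _, Vt = np.linalg.svd(X)
null_dir = Vt[-1]                  # a vector with X @ null_dir ~= 0
w1 = w0 + 2.5 * null_dir           # a second, different exact solution

print(np.allclose(X @ w0, y), np.allclose(X @ w1, y))   # True True
```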
At 19:05, is the Gaussian distribution conditioned on the class labels, or are the samples drawn from a single Gaussian (regardless of class) and then assigned a label?
Really nice presentation! At 33:49 I think it should be d >= n. Could you please explain what e_t*x_i means at 35:00? I am assuming it is the partial derivative of the loss with respect to w, evaluated at x_i.
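In case it helps others: one common reading (my assumption, not necessarily the talk's notation) is that e_t is the prediction error w·x_i − y_i under squared loss, in which case the gradient of that loss with respect to w is exactly the error times the input, e * x_i. A quick numerical check of that identity:

```python
import numpy as np

# Hedged sketch: assuming e is the prediction error (w.x_i - y_i) under squared
# loss L = 0.5 * (w.x_i - y_i)**2, the gradient of L w.r.t. w is e * x_i.
# This is a guess at the notation, not necessarily what the slide means.
rng = np.random.default_rng(1)
x_i = rng.standard_normal(4)
y_i = 1.0
w = rng.standard_normal(4)

e = w @ x_i - y_i                  # prediction error on sample i
analytic_grad = e * x_i            # claimed gradient: error times input


def loss(v):
    return 0.5 * (v @ x_i - y_i) ** 2


# Finite-difference check of the gradient, one coordinate at a time.
eps = 1e-6
numeric_grad = np.array([
    (loss(w + eps * np.eye(4)[j]) - loss(w - eps * np.eye(4)[j])) / (2 * eps)
    for j in range(4)
])
print(np.allclose(analytic_grad, numeric_grad, atol=1e-5))   # True
```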
This talk is a gem.
Thanks for this talk. Very clear and informative.
Very helpful... nice and clear... good job!
That was the video I needed.
What is the relationship between rank and an infinite number of solutions?
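For reference, a standard way to state that connection (not specific to this talk): for a consistent system X w = y with X of size n x d,

$$
\dim\{\, w : Xw = y \,\} = d - \operatorname{rank}(X),
$$

so the solution set is infinite exactly when rank(X) < d (for example, whenever n < d, since then rank(X) <= n < d).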
Thank you for the talk.