Hello, thanks for the interesting talk. In the slide at 10:38 you describe nearest neighbor methods (as, for example, SVMs or kernel density estimators would be), but in the corresponding paper (KDD 2019) the Appendix lists Logistic Regression and Gradient Tree Boosting as the methods applied in the experiments, and these are not nearest neighbor methods. Could you please explain where my misunderstanding comes from?
Thank you so much for doing this for free.
Thank you for providing such in-depth knowledge for free and in such a structured way! Enjoying every video!
This is a really great introduction. Thanks very much, Lotte.
Thank you very much for this great material!
Please check your email, Mr. David Sumpter.
soccer*
Football**