Statistical Learning and Data Science
  • Videos: 300
  • Views: 150,827
I2ML - Supervised Classification - Linear Classifiers
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
61 views

Videos

I2ML - Supervised Classification - Naive Bayes
45 views · 2 months ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
I2ML - Supervised Classification - Discriminant Analysis
72 views · 2 months ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
I2ML - Supervised Classification - Basic Definitions
90 views · 2 months ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
I2ML - Supervised Classification - Logistic Regression
72 views · 2 months ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
I2ML - Supervised Classification - Tasks
45 views · 2 months ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
I2ML - Random Forest - Bagging Ensembles
120 views · 7 months ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
I2ML - Random Forests - Out-of-Bag Error Estimate
299 views · 7 months ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
I2ML - Random Forest - Proximities
106 views · 7 months ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
I2ML - Random Forest - Feature Importance
562 views · 7 months ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
I2ML - Random Forest - Basics
214 views · 7 months ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
I2ML - Tuning - In a Nutshell
79 views · 7 months ago
This video is part of the open source online lecture "Introduction to Machine Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Non-Linear Models and Structural Risk Minimization
128 views · 7 months ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Geometry of L2 Regularization
51 views · 7 months ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Weight Decay and L2
70 views · 7 months ago
SL - Regularization - Geometry of L1 Regularization
43 views · 7 months ago
SL - Regularization - Bayesian Priors
79 views · 7 months ago
SL - Regularization - Early Stopping
29 views · 7 months ago
This video is part of the open source online lecture "Supervised Learning". URL: slds-lmu.github.io/i2ml/
SL - Regularization - Other Regularizers
31 views · 7 months ago
SL - Regularization - Lasso Regression
38 views · 7 months ago
SL - Regularization - Lasso vs. Ridge
37 views · 7 months ago
SL - Regularization - Ridge Regression
59 views · 7 months ago
SL - Regularization - Introduction
69 views · 7 months ago
SL - Regularization - Elastic Net and regularized GLMs
34 views · 7 months ago
SL - Information Theory - Information Theory for Machine Learning
676 views · 1 year ago
SL - Information Theory - Joint Entropy and Mutual Information II
131 views · 1 year ago
SL - Information Theory - Joint Entropy and Mutual Information I
216 views · 1 year ago
SL - Information Theory - KL Divergence
194 views · 1 year ago
SL - Information Theory - Cross Entropy and KL
137 views · 1 year ago
SL - Information Theory - Differential Entropy
629 views · 1 year ago

Comments

  • @nammateejay1462
    @nammateejay1462 20 days ago

    Super🎉

  • @hecker8448
    @hecker8448 1 month ago

    thanks

  • @ShivaReddy-lq9is
    @ShivaReddy-lq9is 2 months ago

    Hi, I can help you reach monetization for your channel

  • @PedroRibeiro-zs5go
    @PedroRibeiro-zs5go 2 months ago

    Thanks, that was a great video!

  • @longtuan1615
    @longtuan1615 3 months ago

    Good explanation! Thank you so much!

  • @uxnuxn
    @uxnuxn 4 months ago

    There is an error on the first slide: the entropy of a fair coin equals 1 bit, not 0.7. The natural log was probably used in that graph.
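
A quick numerical check of that point (a minimal Python sketch; the slide itself is not reproduced here): the entropy of a fair coin is exactly 1 bit with log base 2, and about 0.69 nats with the natural log, which would explain the ~0.7 on the slide.

```python
# Entropy of a fair coin, once in bits (log2) and once in nats (natural log).
import numpy as np

p = np.array([0.5, 0.5])
print(-(p * np.log2(p)).sum())  # 1.0 bit
print(-(p * np.log(p)).sum())   # ~0.693 nats, roughly the 0.7 shown on the slide
```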

  • @kevon217
    @kevon217 4 months ago

    Really enjoyed the guitar tuning analogy. Can’t stand playing out of tune guitars or when the intonation is slightly off.

  • @_VETTRICHEZHIAN
    @_VETTRICHEZHIAN 4 months ago

    Is stock price prediction a suitable real-world use case for this online learning?

  • @Mohammed.1471
    @Mohammed.1471 6 months ago

    Appreciate it 👍

  • @moonzhou1738
    @moonzhou1738 6 months ago

    Hi, professor! I have a question: I remember in a previous video I learned that random forests can deal with missing data by surrogate splitting, but now in this video the professor said proximities can be used for imputation. I'm confused. If random forests can deal with missing data, then why do we need imputation? And in the imputation part of this video, step one is using the median to impute the data; why do we need to impute if the random forest can already use surrogate splitting to get the prediction result? Why don't we use the result generated by surrogate splitting to compute proximities to impute the data and then keep doing steps 2 and 3? Besides, I feel I don't totally understand how surrogate splitting works. I searched on the internet, but I still don't know the details. I just know it is about finding another variable that can generate as good a result as the variable with missing values [the primary split]. But how does the calculation happen that tells us the variable with missing values can be the best split at a specific point [and if we already know that the variable with missing values can be a good split at a specific point, then why do we need surrogate splitting]?

    • @moonzhou1738
      @moonzhou1738 6 months ago

      The question is a bit long, but I hope for your reply! Thank you!
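
For what it's worth, here is a minimal sketch of the proximity-based imputation loop the question refers to (median fill, fit a forest, recompute proximities, re-impute, repeat). The scikit-learn regressor, the helper name proximity_impute and the iteration count are illustrative assumptions, not the lecture's own code.

```python
# Illustrative sketch of random-forest proximity imputation (assumptions noted above).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def proximity_impute(X, y, n_iter=5, random_state=0):
    X = X.copy()
    missing = np.isnan(X)
    # Step 1: rough fill with column medians so a forest can be fitted at all.
    medians = np.nanmedian(X, axis=0)
    for j in range(X.shape[1]):
        X[missing[:, j], j] = medians[j]
    for _ in range(n_iter):
        rf = RandomForestRegressor(n_estimators=100, random_state=random_state).fit(X, y)
        # Proximity = fraction of trees in which two observations end up in the same leaf.
        leaves = rf.apply(X)                                  # shape (n_samples, n_trees)
        prox = np.zeros((X.shape[0], X.shape[0]))
        for t in range(leaves.shape[1]):
            prox += leaves[:, t][:, None] == leaves[:, t][None, :]
        prox /= leaves.shape[1]
        np.fill_diagonal(prox, 0.0)
        # Steps 2-3: replace each missing entry by the proximity-weighted average
        # of the observed values in that column, then refit the forest.
        for j in range(X.shape[1]):
            donors = np.where(~missing[:, j])[0]
            for i in np.where(missing[:, j])[0]:
                w = prox[i, donors]
                if w.sum() > 0:
                    X[i, j] = np.average(X[donors, j], weights=w)
    return X
```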

  • @krrishagarwalla3325
    @krrishagarwalla3325 6 months ago

    Absolute gold

  • @virgenalosveinte5915
    @virgenalosveinte5915 7 months ago

    great video thanks, very clear

  • @fayezalhussein7115
    @fayezalhussein7115 7 months ago

    thank you

  • @rebeenali4317
    @rebeenali4317 7 months ago

    How do we get the phi values in step 4?

  • @gamuchiraindawana2827
    @gamuchiraindawana2827 8 months ago

    LET'S GOOOOOOOO 💫💫 THANK YOU FOR TAKING YOUR TIME TO MAKE THESE VIDEOS💯💯💯💯❤❤

  • @holthuizenoemoet591
    @holthuizenoemoet591 9 months ago

    Is this algorithm inspired by k-means clustering?

  • @errrrrrr-
    @errrrrrr- 9 months ago

    Thank you! You explained things very clearly.

  • @jackychang6197
    @jackychang6197 9 months ago

    Very helpful video. The visualization in the OOB is very easy to understand. Thank you!

  • @convel
    @convel 10 months ago

    Great lecture! What if some of the variables to be optimized are limited to a certain range? Using a multivariate normal distribution to generate offspring might exceed the range limit?

  • @MarceloSilva-cm5mg
    @MarceloSilva-cm5mg 11 months ago

    Excuse me, but wouldn't z1+z2+z3+...+zT be (-1)^T/2 instead of (-1/2)^T? Anyway, you did a great job. Congratulations!!

  • @fiNitEarth
    @fiNitEarth 1 year ago

    first :)

  • @gamuchiraindawana2827
    @gamuchiraindawana2827 1 year ago

    It's so hard to hear what you're saying, please amplify the audio post processing on your future uploads. Excellent presentation nonetheless, you explained it so simply and clearly. <3

    • @berndbischl
      @berndbischl 1 year ago

      Thank you. We are still not "pros" with regards to all technical aspects of recording. Will try to be better in the future.

  • @bertobertoberto242
    @bertobertoberto242 1 year ago

    at 4:00 isn't the square supposed to be inside the square brackets?

  • @bertobertoberto242
    @bertobertoberto242 1 year ago

    Hi, great course! However, a small note: at 12:20 I think the function on the left might not be convex, as the tangent plane in the "light blue area" is on top of the function rather than below it, which violates the definition of convexity (AFAIK that sort of function is called quasiconvex)...
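
For reference, the first-order condition the comment appeals to (a standard fact, stated here only to make the objection precise): a differentiable $f$ is convex iff every tangent plane lies below the graph,

$$f(\mathbf{y}) \;\ge\; f(\mathbf{x}) + \nabla f(\mathbf{x})^\top (\mathbf{y} - \mathbf{x}) \quad \text{for all } \mathbf{x}, \mathbf{y},$$

so a tangent plane lying above the surface anywhere does rule out convexity; quasiconvexity only requires the sublevel sets $\{\mathbf{x} : f(\mathbf{x}) \le c\}$ to be convex.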

  • @rohi9594
    @rohi9594 1 year ago

    Finally found the clear logic behind the weights. Thank you so much🎉

  • @weii321
    @weii321 1 year ago

    Nice video. I have a question: how do you calculate Shapley values for a classification problem?

    • @zxynj
      @zxynj 1 year ago

      To not violate the axioms, do it in logit space.
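
A minimal sketch of what that can look like (toy data and a plain logistic model, illustrative only, not the lecture's code): exact Shapley values computed on the model's log-odds output, so that the efficiency axiom (values summing to the prediction minus the baseline) holds on an additive scale.

```python
# Exact Shapley values in logit space for one observation (illustrative sketch).
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] + rng.normal(size=200) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def value(x, coalition):
    """Mean log-odds when the features in `coalition` are fixed to x and the
    rest are marginalised over the data (interventional value function)."""
    X_rep = X.copy()
    X_rep[:, list(coalition)] = x[list(coalition)]
    return model.decision_function(X_rep).mean()

x, p = X[0], X.shape[1]
phi = np.zeros(p)
for j in range(p):
    others = [k for k in range(p) if k != j]
    for size in range(p):
        for S in combinations(others, size):
            weight = factorial(len(S)) * factorial(p - len(S) - 1) / factorial(p)
            phi[j] += weight * (value(x, S + (j,)) - value(x, S))

# Efficiency check: Shapley values sum to f(x) minus the baseline, both in logit space.
print(phi.sum())
print(model.decision_function(x[None]).item() - model.decision_function(X).mean())
```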

  • @twist777hz
    @twist777hz 1 year ago

    Thank you for doing this video in Numerator layout. It seems many videos on machine learning use Denominator layout but I definitely prefer Numerator layout! Is it possible you could do a follow-up video where you talk about partial derivative of scalar function with respect to MATRIX ? Most documents I've looked at seem to use Denominator layout for this type of derivative (some even use Numerator layout with respect to VECTOR, and then switch to Denominator layout with respect to MATRIX). I assume it's because Denominator layout preserves the dimension of the matrix, making it more convenient for gradient descent etc. What would you recommend I should do?
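
For context on the layout question (standard conventions, not specific to these videos): for a scalar $f$ of a matrix $X \in \mathbb{R}^{m \times n}$, denominator layout arranges the partials in the same shape as $X$, while numerator layout gives the transpose,

$$\left[\frac{\partial f}{\partial X}\right]_{\text{denom.}} \in \mathbb{R}^{m \times n}
\qquad\text{vs.}\qquad
\left[\frac{\partial f}{\partial X}\right]_{\text{num.}} = \left(\left[\frac{\partial f}{\partial X}\right]_{\text{denom.}}\right)^{\!\top} \in \mathbb{R}^{n \times m},$$

so the gradient-descent update $X \leftarrow X - \eta\,\partial f/\partial X$ is written without a transpose in denominator layout, which is why most treatments switch to it for matrix-valued arguments; staying in numerator layout simply means carrying that transpose explicitly.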

  • @mackwebster7704
    @mackwebster7704 1 year ago

    💐 Promo'SM

  • @chrisleenatra
    @chrisleenatra 1 year ago

    So, the permutation order was only to define which feature will get the random value? Not creating a whole new instance with feature order same as the permutation order? (The algorithm shows S, j, S- , but your example shows S, S-, j)

  • @chrisleenatra
    @chrisleenatra 1 year ago

    Thank you!

  • @fanhbz1018
    @fanhbz1018 1 year ago

    Nice lecture. I also recommend Dr. Ahmad Bazzi's convex optimization series.

  • @shubhibans
    @shubhibans 1 year ago

    Great work

  • @maxgh8534
    @maxgh8534 1 year ago

    Hi, sadly your GitHub link doesn't work for me. Thanks for the video.

  • @jengoesnuts
    @jengoesnuts 1 year ago

    Can you explain more about the omitted variable bias in M-plots? My teacher told me that you can use a linear transformation to explain the green graph by transforming x1 and x2 into two independent random variables x1 and U. Is that true?

  • @ocamlmail
    @ocamlmail 1 year ago

    Thank you so much for this video. Consider the example at 7:20 -- doesn't it look like feature permutation? Shouldn't I use expected values for the other variables (x2, x3)? Thanks in advance.

  • @hkrish26
    @hkrish26 2 years ago

    Thanks

  • @appliedstatistics2043
    @appliedstatistics2043 2 years ago

    The material is not accessible right now; can someone reupload it?

  • @yt-1161
    @yt-1161 2 years ago

    What do you mean by "pessimistic bias"?

    • @sogari2187
      @sogari2187 2 years ago

      If I understand correctly, it is pessimistic because you use, let's say, 90% of your available data as the training set and 10% as the test set. So the model you test is only trained on 90% of your data, but the final model that you use/publish will use 100% of the data. This means it will probably perform better than your training model that used 90%, but you can't validate it because you have no test data left. In the end you evaluate a model trained on 90% of the data, which is probably slightly worse than the model trained on 100% of the data.
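
A minimal sketch of that effect (toy data, illustrative only): the larger the fraction of data a model is trained on, the better it tends to score on fresh data, so an estimate based on a model trained on only part of the data tends to understate the final model's performance.

```python
# Illustration of pessimistic bias: the score typically improves as the training fraction grows.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_informative=5, random_state=0)
X_pool, X_eval, y_pool, y_eval = train_test_split(X, y, test_size=0.5, random_state=0)
for frac in (0.1, 0.5, 0.9, 1.0):
    n = int(frac * len(X_pool))
    model = LogisticRegression(max_iter=1000).fit(X_pool[:n], y_pool[:n])
    print(f"trained on {frac:.0%} of the pool: accuracy {model.score(X_eval, y_eval):.3f}")
```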

  • @kcd5353
    @kcd5353 2 years ago

    good explanation madam

  • @kcd5353
    @kcd5353 2 years ago

    Good Explanation Madam

  • @appliedstatistics2043
    @appliedstatistics2043 2 years ago

    Hello, I'm a student at TU Dortmund; our lecture also uses your resources, but the link in the description is not working now. How can we get access to the resources?

  • @namrathasrimateti9119
    @namrathasrimateti9119 2 years ago

    Great Explanation!! Thank You

  • @Parthsarthi41
    @Parthsarthi41 2 years ago

    Excellent. Thanks

  • @vaibhav_uk
    @vaibhav_uk 2 years ago

    Finally some serious content

  • @Rainstorm121
    @Rainstorm121 2 years ago

    Thanks, Sir, but excuse me (zero statistics & mathematics background): what does this video suggest about using the Brier score for measuring forecasts?

  • @guillermotorres4988
    @guillermotorres4988 2 years ago

    Nice explanation! You are using the same set of HP configurations λi, with i=1, ..., N, throughout the fourfold CV (in the inner loop). But what happens if I would like to use a Bayesian hyperparameter search to sample the values of the parameters? For example, for each outer CV fold with its corresponding inner CV, could I use a Bayesian hyperparameter search? Then the set of HP configurations wouldn't be the same in each inner CV, and the question is: can the set of HP configurations differ in each inner CV, and is this nested cross-validation method still valid?
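
A minimal sketch of nested cross-validation in scikit-learn (illustrative data and search space, not the lecture's setup). The inner tuner can be grid search, random search, or a Bayesian optimiser, and it may well select a different configuration in each outer fold; the outer estimate remains valid because every model is scored only on data its inner search never saw.

```python
# Nested CV: an inner hyperparameter search wrapped in an outer resampling loop.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)
inner_cv = KFold(n_splits=4, shuffle=True, random_state=1)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=2)

# Any tuner could sit here (e.g. a Bayesian search); grid search keeps the sketch small.
tuner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=inner_cv)
outer_scores = cross_val_score(tuner, X, y, cv=outer_cv)
print(outer_scores.mean(), outer_scores.std())
```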

  • @dsbio4671
    @dsbio4671 2 years ago

    awesome!! thanks so much!

  • @oulahbibidriss7172
    @oulahbibidriss7172 3 years ago

    thank you, well explained.

  • @canceledlogic7656
    @canceledlogic7656 3 years ago

    Here's a free resource on one of the most important academic concepts of the modern age: 800 views. GG humanity. GG

  • @manullangjihan2100
    @manullangjihan2100 3 years ago

    thank you for the explanation