Meerkat Statistics
  • 138 videos
  • 636,001 views
TSA - ARIMA Introduction
In this video we'll give a brief history of ARIMA models and introduce the intuition and notation of each model: Autoregressive (AR), Moving Average (MA), ARMA, ARIMA, SARIMA, ARMAX and ARIMAX.
Views: 124
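To make the AR notation concrete: an AR(1) process regresses each value on its own previous value, so its coefficient can be recovered by ordinary least squares of x_t on x_{t-1}. A minimal numpy sketch (the simulated process and estimator are illustrative, not code from the video):

```python
import numpy as np

rng = np.random.default_rng(0)
n, phi = 5000, 0.7

# simulate AR(1): x_t = phi * x_{t-1} + eps_t
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# least-squares estimate of phi: regress x_t on x_{t-1}
phi_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
print(round(phi_hat, 2))
```

With enough data the estimate lands close to the true coefficient 0.7; the MA, ARMA, and ARIMA variants extend this same regression idea with lagged errors and differencing.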

Videos

GAM - Penalized Least Squares
162 views · 2 months ago
In this video we explore penalized least squares for Additive Models and penalized IRLS for Generalized Additive Models (GAMs). We'll see how using splines and basis expansions can lead to a simpler solution that replaces the backfitting algorithm.
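The penalized least-squares solution described here has a ridge-like closed form: minimizing ||y − Bb||^2 + λ b'Sb gives b = (B'B + λS)^(-1) B'y. A minimal numpy sketch (a polynomial basis and an identity penalty are stand-ins for the spline basis and roughness penalty discussed in the video):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=100)

# basis expansion: degree-7 polynomial as a stand-in for a spline basis
B = np.vander(x, 8, increasing=True)

# penalized least squares: b = (B'B + lam * S)^(-1) B'y
# S = identity here as a simple stand-in for a roughness penalty matrix
lam = 1e-6
S = np.eye(B.shape[1])
b = np.linalg.solve(B.T @ B + lam * S, B.T @ y)
fit = B @ b
```

Replacing S with a matrix of integrated squared second derivatives recovers the smoothing-spline penalty; the algebra is unchanged.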
GAM - Splines - Natural Cubic Spline, Smoothing Splines
200 views · 2 months ago
This video focuses on natural cubic splines, which are linear at the edges to reduce variance. We'll see how to derive them from the power-series representation and try to understand the new representation. We'll also introduce smoothing splines and show that natural cubic splines are the smoothest interpolators. Lastly, we'll touch on the use of splines in computer graphics for drawing smooth ...
GAM - Splines - Intro (Polynomials, Piecewise Polynomials, Splines)
294 views · 2 months ago
In this video, we introduce splines, a popular method for fitting nonlinear functions in additive models and GAMs. Splines are a compromise between global and piecewise polynomials: they allow the function to break at several locations called "knots", which unite different parts of the curve.
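The "knots" idea can be made concrete with the truncated-power construction: a cubic spline with K knots is spanned by 1, x, x^2, x^3 plus one shifted term (x − k)_+^3 per knot (a numpy sketch, assumed for illustration, not code from the video):

```python
import numpy as np

def cubic_spline_basis(x, knots):
    """Truncated-power basis: 1, x, x^2, x^3, plus (x - k)_+^3 per knot.
    Each knot column is zero to the left of its knot, so its coefficient
    lets the cubic 'break' at the knot while the curve stays smooth."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

x = np.linspace(0, 1, 50)
B = cubic_spline_basis(x, knots=[0.25, 0.5, 0.75])
print(B.shape)  # 4 polynomial columns + 3 knot columns -> (50, 7)
```

Fitting a spline is then just linear regression of y on the columns of B.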
GLM vs. GAM - Generalized Additive Models
1.6K views · 3 months ago
Additive and Generalized Additive models differ from LM/GLMs in the way they relate the mean to the x predictors. While G/LMs assume a linear model in x, G/AMs allow for any function approximation that captures the structure between mu (or g(mu)) and x. In this video we will also learn about the backfitting algorithm which is a general method for fitting G/AMs. In a future video we will talk ab...
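The backfitting algorithm mentioned here fits one smooth function per predictor by cycling through the predictors and smoothing each one's partial residuals. A short Python sketch (the data and the crude running-mean smoother are assumptions standing in for a proper spline smoother):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
y = np.sin(3 * x1) + x2**2 + rng.normal(scale=0.1, size=n)

def smooth(x, r, k=25):
    """Crude running-mean smoother (stand-in for a spline smoother)."""
    order = np.argsort(x)
    rs = r[order]
    out = np.empty_like(r)
    for i in range(n):
        lo, hi = max(0, i - k), min(n, i + k + 1)
        out[order[i]] = rs[lo:hi].mean()
    return out

# backfitting: repeatedly smooth each predictor's partial residuals
alpha = y.mean()
f1 = np.zeros(n)
f2 = np.zeros(n)
for _ in range(10):
    f1 = smooth(x1, y - alpha - f2)
    f1 -= f1.mean()                  # identifiability: center each f_j
    f2 = smooth(x2, y - alpha - f1)
    f2 -= f2.mean()
```

The fitted mean is alpha + f1 + f2; for a GAM the same loop runs inside penalized IRLS on the working response.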
Regression Diagnostics (2/2) - Generalized Linear Models - Residuals, QQ-plot, Outliers
251 views · 3 months ago
In this video we will look at how we can diagnose our generalized linear model fit using residuals, QQ-plots and Cook's distance. We will see how to adjust the residuals and hat-matrix of linear regression to apply also to GLMs. Become a member and get full access to this online course: meerkatstatistics.com/courses... * 🎉 Special YouTube 60% Discount on Yearly Plan - valid for the 1st 100 subs...
Regression Diagnostics (1/2) - Linear Models - Residuals, QQ-plot, Outliers
281 views · 3 months ago
In this video we will look at how we can diagnose our linear regression fit using residuals, QQ-plots and Cook's distance.
GLM - Multinomial Regression (3/3) - Ordinal Data (Cumulative Link)
279 views · 4 months ago
In this video we will go in depth about ordinal response (y) data and see how we can model it using the cumulative link approach. Alan Agresti's Book: shorturl.at/aHY79 Gordon Smyth's paper: gksmyth.github.io/pubs/edm-gna.pdf
GLM - Multinomial Regression (2/3) - Nominal Data (Baseline Category)
263 views · 4 months ago
In this video we will go in depth about nominal response (y) data, and see how we can model it using the baseline category approach. Alan Agresti's Book: shorturl.at/aHY79 Gordon Smyth's paper: gksmyth.github.io/pubs/edm-gna.pdf
GLM - Multinomial Regression (1/3) - Intro
364 views · 4 months ago
In this video we will look into multinomial regression, and give an introduction to the topic, including a reminder of the categorical and multinomial distributions. In the next two videos we'll go in depth to the two types of models used for nominal and ordinal response (y) data. Alan Agresti's Book: shorturl.at/aHY79 Gordon Smyth's paper: gksmyth.github.io/pubs/edm-gna.pdf
Is war a war crime? (Israel-Hamas war 6 months analysis)
312 views · 5 months ago
Trees - Weights and Feature Importance (Theory + Code)
394 views · 8 months ago
Accelerated Failure Time (AFT) vs. Cox Proportional Hazards (CoxPH)
1.9K views · 8 months ago
Accelerated Failure Time (AFT)
1.5K views · 8 months ago
2 Examples - Mixed vs. Regular Models
726 views · 8 months ago
Cost Complexity Pruning (Theory + Code)
1.9K views · 9 months ago
Build a Decision Tree from scratch using Python (numpy)
473 views · 9 months ago
Decision Trees - Stop Criteria, Categorical Data, NA's, Implementation
259 views · 9 months ago
Decision Trees - Split Criteria
458 views · 9 months ago
Decision Trees
197 views · 9 months ago
Quantile Regression - Numerical Solutions
1.2K views · 9 months ago
Quantile Loss
2.6K views · 10 months ago
Linear vs. Quantile Regression
6K views · 10 months ago
Israel vs. Palestine - The October 7 Massacre
751 views · 10 months ago
R vs Python - 25 Coding Differences
1.8K views · 11 months ago
Survival Analysis - Cox PH - Breslow Estimator
762 views · 1 year ago
Survival Analysis - Cox PH - Partial Likelihood
1.5K views · 1 year ago
Survival Analysis - Cox Proportional Hazards
1.1K views · 1 year ago
Exploratory FA Code in R (psych)
726 views · 1 year ago
CFA - Code Example in R (lavaan)
956 views · 1 year ago

Comments

  • @BernalBoris-t1f · 3 days ago

    King Greens

  • @Omsip123 · 3 days ago

    Thanks for your efforts, very well explained

  • @MichealGraham-r9r · 3 days ago

    Waelchi Light

  • @rajanalexander4949 · 4 days ago

    Excellent; thank you!

  • @jironymojirolamus913 · 7 days ago

    Great explanation, finally wrapped my head around the topic. Thanks!

  • @forheuristiclifeksh7836 · 8 days ago

    5:24 MLE vs LS

  • @Mrperusyaneko · 12 days ago

    clear explanation, really appreciated.

  • @Carlos310123 · 14 days ago

    great video, thanks for sharing!

  • @minsookim-ql1he · 15 days ago

    This is the best explanation of the quantile loss function. Thank you!

  • @Bo-bb9vj · 16 days ago

    Would absolutely love to be able to get a high res full picture of this to hang it on my dorm's wall and gradually cross/map it out as I progress through it : D

    • @MeerkatStatistics · 13 days ago

      Send me your mail and I'll send it to you.

    • @Bo-bb9vj · 11 days ago

      @@MeerkatStatistics I sent an email to the address listed in your channel's description : )

  • @dandan1364 · 16 days ago

    Wish you would have described Y better in the video … eventually figured it out after going back and forth, but so far this makes a lot of sense, thanks.

  • @roon1sicunt · 18 days ago

    Cool vid. But using = instead of <- in R is ick brah

  • @maburwanemokoena7117 · 20 days ago

    My question is: why is it necessary, though? The prediction from OLS gives you the conditional mean of the normal distribution, and we already have a constant variance; from there we can calculate any quantile we need.

    • @MeerkatStatistics · 19 days ago

      Good question. If your distribution is known or assumed (e.g. Normal) you are right. OLS doesn't have to assume the distribution. If you have an unknown distribution, the first 2 moments won't give you the quantiles. Also, you might want to fit the quantiles directly because you don't trust the normality assumption.

    • @maburwanemokoena7117 · 19 days ago

      @@MeerkatStatistics Fantastic!
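The exchange above can be checked numerically: under normality, the q-th conditional quantile is μ + σ·Φ⁻¹(q), so the OLS mean plus the residual sd recovers any quantile (a sketch using the Python stdlib's NormalDist; the numbers are illustrative, not from the video):

```python
from statistics import NormalDist

mu_hat, sigma_hat = 10.0, 2.0      # illustrative OLS mean and residual sd
z90 = NormalDist().inv_cdf(0.9)    # standard-normal 0.9 quantile, ~1.2816
q90 = mu_hat + sigma_hat * z90     # 0.9 conditional quantile under normality
print(round(q90, 3))               # 12.563
```

With an unknown or non-normal error distribution this shortcut fails, which is exactly when quantile regression fits the quantile directly.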

  • @tildarusso · 29 days ago

    These are the best VI tutorials I have ever seen. Great job and keep going!

  • @alonremer5049 · 1 month ago

    King! Really good explanations, helped me a lot

  • @tassangherman · 1 month ago

    You're awesome !

  • @ykoy1577 · 1 month ago

    very nice video!

  • @radimnovotny6534 · 1 month ago

    Thanks so much for this great video. It helped a lot

  • @prateekyadav9811 · 1 month ago

    Thanks so much, brother! I was struggling with this so much. I am following the NNs from Scratch book by Sentdex and I was stuck at the derivative of softmax because I was not able to understand the notation. Now I understand that j=k refers to the diagonal elements of the gradient matrix :) Thanks
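The j = k observation above can be verified numerically: the softmax Jacobian is J = diag(s) − s·sᵀ, so the diagonal entries are s_j(1 − s_j) and the off-diagonals are −s_j·s_k (a numpy sketch, assumed for illustration):

```python
import numpy as np

z = np.array([1.0, 2.0, 0.5])
s = np.exp(z) / np.exp(z).sum()

# Jacobian of softmax: ds_j/dz_k = s_j * (delta_jk - s_k)
J = np.diag(s) - np.outer(s, s)

# check the off-diagonal entry ds_0/dz_1 by finite differences
eps = 1e-6
z_eps = z.copy()
z_eps[1] += eps
s_eps = np.exp(z_eps) / np.exp(z_eps).sum()
fd = (s_eps[0] - s[0]) / eps
```

The finite-difference value matches J[0, 1] = −s_0·s_1, and the diagonal matches s_j(1 − s_j).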

  • @petergorelov6852 · 1 month ago

    Dear David. I want to create a generator that will give me a random value: the lifespan of a healthy human. So we can call it a "Gompertz distribution" generator. I need it for learning purposes. The random lifespan = e^(b0+E), where b0 = const and E is a random value from an extreme-value distribution. Which value of b0 can you recommend? Can you also advise a way to create an extreme-value distribution generator? I have a ready-made normal distribution generator. Can it be used to make one, by taking a sample of size n and taking the smallest number from it? Am I right? Which mean and sigma of the normal distribution generator should I take, and what sample size?

    • @MeerkatStatistics · 1 month ago

      Hey, I would either use the built-in Gompertz generator in the VGAM library in R, or use inverse transform sampling: en.wikipedia.org/wiki/Inverse_transform_sampling which only requires sampling from a uniform distribution on [0,1]. I explain this method here: ruclips.net/video/EyUVj3eeXyA/видео.html. Good luck.

    • @petergorelov6852 · 1 month ago

      @@MeerkatStatistics, I know that there are other ways to create this generator. But my goal is to create it using the equation "lifespan = e^(b0+E)" (it is part of an exercise). And for some reason I can't find an acceptable coefficient b0 and parameters for the extreme-value distribution. I try, but my final result doesn't look like a Gompertz distribution. Maybe you can help?
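The inverse-transform suggestion above can be sketched in a few lines (a numpy sketch; the Gompertz parameterization F(t) = 1 − exp(−η(e^{bt} − 1)) is an assumption and may differ from VGAM's):

```python
import numpy as np

def gompertz_sample(n, eta, b, rng):
    """Inverse transform sampling: solve u = F(t) for t, with
    F(t) = 1 - exp(-eta * (exp(b * t) - 1))."""
    u = rng.uniform(size=n)
    return np.log(1.0 - np.log1p(-u) / eta) / b

rng = np.random.default_rng(3)
t = gompertz_sample(100_000, eta=0.001, b=0.1, rng=rng)
# closed-form median for a sanity check: (1/b) * log(1 - log(0.5)/eta)
```

Because the CDF is inverted exactly, only uniform draws are needed; with η = 0.001 and b = 0.1 the sampled median sits near the closed-form median of roughly 65 (in the lifespan ballpark the question asks about).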

  • @HermanFranclinTESSOTASSANG · 1 month ago

    You are the best. Thank you very much for all your courses; I really enjoy watching each of them.

  • @钟嘉蕾 · 1 month ago

    For question 2: if a1 = W·a0, the columns of W can learn something during training and become different, but the rows of W stay the same as each other.

  • @caty863 · 1 month ago

    Pandas should be included in base Python. Any language that wants to call itself worthy of data analytics has to have a built-in dataframe data structure.

    • @PR-cj8pd · 1 month ago

      Why is that?

    • @caty863 · 14 days ago

      @@PR-cj8pd Why? Well, because the data most of us use is in tabular format, not in a list of name-value pairs.

  • @tommasomenghini1647 · 1 month ago

    Bro you’re so good, thanks man!

  • @navintiwari · 2 months ago

    Ah! Finally a good and to-the-point explanation after searching so much on YouTube. Thank you! You have a new subscriber now.

  • @pectenmaximus231 · 2 months ago

    Beautiful presentation. So clear and informative!

  • @piero8284 · 2 months ago

    There are some mistakes in your notes

    • @MeerkatStatistics · 2 months ago

      can you point them out?

    • @piero8284 · 2 months ago

      @@MeerkatStatistics sure. At 3:05, the joint distribution should be written in terms of the transformed variables as p(x,ξ), assuming T(θ)=ξ is your transformed variable. I don't blame you, as in the paper the authors omitted a small detail in writing the ELBO of the original variable as the ELBO of the new one

  • @danieledrisian9972 · 2 months ago

    I love this, thank you! Very clear explanation.

  • @kebenny · 2 months ago

    Thanks for introducing GAM; I needed to watch several times to understand

  • @anglonrx2754 · 2 months ago

    Gauss didn't invent the linear model; he just laid claim to it a decade after someone else had. The same is true for Gaussian elimination: Newton invented it, and then Gauss decided to name it after himself.

  • @MisterDives · 2 months ago

    Loving the spline / GAM series, thank you!

  • @cheikhsadibousidibe1434 · 3 months ago

    First, the conflict did not begin on October 7th. You have been oppressing Palestinians since your illegal occupation, starting with the massacre of Karbala. You take pride in the killing of thousands of children since then. You even bombed a civilian camp where you told people to go to suffer less, only to slaughter them in the middle of the night. How dare you, coward.

  • @mohamedrefaat197 · 3 months ago

    Very clear and concise! Looking forward to the follow-up video

  • @DelhiiteChetan · 3 months ago

    So much information compiled together wonderfully in this video. Hope you come up with more such videos on individual methods with some examples.

  • @whytape301 · 3 months ago

    IMHO, symmetrical or not, the mean is still your expected value. If you want to minimise error, you can only work with the expectation, since that's what you can expect. If you optimised for something else, you may use the mode or whatever else that fits the optimization.

  • @marcospiotto9755 · 3 months ago

    What is the difference between denoting p_theta (x|z) vs p(x|z,theta) ?

    • @MeerkatStatistics · 3 months ago

      I think "subscript" theta is just the standard way of denoting when we are optimizing theta, that is we are changing theta. While "conditioned on" theta is usually when the theta's are given. Also note that the subscript theta refers to the NN parameters, while often the "conditioned on" refers to distributional parameters. I don't think these are rules set in stone, though, and I'm not an expert in notation. As long as you understand what's going on - that's the important part.

    • @marcospiotto9755 · 3 months ago

      @@MeerkatStatistics got it, thanks!

  • @farhanshadiquearronno7453 · 3 months ago

    Just wanted to say thank you!!! Very well explained and so intuitive 👍

  • @sophia17965 · 3 months ago

    at 3:30, I don't understand why e^z1 + e^z2 + e^z3 (each divided by their sum) = 1. Can someone please explain? Thanks

    • @MeerkatStatistics · 3 months ago

      because they are also divided by the exact same number... so it turns into 1.

    • @sophia17965 · 3 months ago

      @@MeerkatStatistics oh 🤦‍♀🤦‍♀duh. Thanks
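The reply above can be seen in two lines of numpy (a sketch, not code from the video): each exponential is divided by the same normalizer, so the outputs must sum to 1:

```python
import numpy as np

z = np.array([1.0, 2.0, 3.0])
p = np.exp(z) / np.exp(z).sum()  # softmax: each term over the same sum
print(p.sum())                   # sums to 1 (up to floating point)
```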

  • @BalajiChegu · 3 months ago

    Great vid. Can you please share that notebook ?

  • @mazensaaed8635 · 3 months ago

    For question 1: it won't be a problem and the network will learn normally, because the inputs to the neurons will not be zero or constant (we initialized the weights to be random), so the biases will get updated normally. For question 2: since the inputs to the network are different, the derivative of the output with respect to the weights will not be constant or zero, so the weights in the first (input) layer will be updated normally.

  • @mazensaaed8635 · 3 months ago

    Thank you so much, very good explanation

  • @pedrocolangelo5844 · 3 months ago

    This video is everything I asked for! Thank you so much, Meerkat!

  • @raltonkistnasamy6599 · 4 months ago

    thank u so much brother

  • @raltonkistnasamy6599 · 4 months ago

    thanks man

  • @ML_n00b · 4 months ago

    do you have any more tips and tricks for regular linear models?

  • @ML_n00b · 4 months ago

    very useful video for practitioners

  • @jamalnuman · 4 months ago

    Many thanks for the lecture. We always need to know how the factors are calculated from the variables. But how can the factors and their loadings be calculated, given that the only data we have is the variables? x1 = l1·f1 + l2·f2 + l3·f3

  • @paulluc5070 · 4 months ago

    Islam always brings hate and war!!

  • @aza6513 · 4 months ago

    Wow, will you cover the nested logit model? And its relationship with GEV or VGLM?

  • @includestdio.h8492 · 4 months ago

    Thank u so much!