Double Machine Learning for Causal and Treatment Effects

  • Published: Jan 2, 2025

Comments • 15

  • @diptanilsantra8041
    @diptanilsantra8041 3 months ago

    This presentation was helpful, I appreciate it, Professor.

  • @ForeverSensei2030
    @ForeverSensei2030 8 years ago +3

    Appreciate your works, Professor.

  • @mastafafoufa5121
    @mastafafoufa5121 4 years ago

    Aren't we looking at predicting E[Y|(D,Z)], in other words how D and Z jointly influence Y, as a first step, and then E[D|Z] as a second step?
    In the slide at 10:52, they predict E[Y|Z] instead of E[Y|(D,Z)], which is a bit confusing since the treatment is not controlled and is stochastic as well...

    • @MrTocoral
      @MrTocoral 3 years ago +1

      E[Y|D,Z] would be the ultimate goal (predicting the outcome as a joint function of treatment and covariates). That is what the standard ML methods presented at the beginning do, but in this setting it doesn't provide a good estimator of the treatment effect. I think the approach here is similar to multiple linear regression, where we first regress D on Z to obtain a residual, and then regress Y on that residual to isolate the effect of D independently of Z.

      So the real question is: why do we regress Y - E[Y|Z] on D - E[D|Z] instead of just Y? In multiple linear regression, the first step ensures the residual is uncorrelated with Z, so regressing Y or Y - E[Y|Z] on it is equivalent. But here, since the model is semilinear (I think, though perhaps also because we use ML methods), some effect of g(Z) on Y may remain correlated with D even after taking the residual D - E[D|Z]. So we need to use Y - E[Y|Z] to approach the real treatment effect.
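[Editor's illustration] A minimal sketch of the residual-on-residual (partialling-out) step discussed in this thread, assuming a partially linear model Y = theta*D + g(Z) + noise; the data-generating process, the random-forest learners, and the 2-fold out-of-fold predictions (standing in for DML's cross-fitting) are illustrative choices, not taken from the talk:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 4000
Z = rng.normal(size=(n, 5))
D = np.sin(Z[:, 0]) + rng.normal(size=n)          # treatment depends on Z
Y = 2.0 * D + Z[:, 1] ** 2 + rng.normal(size=n)   # true effect theta = 2

# Stage 1: estimate E[D|Z] and E[Y|Z] with flexible ML; out-of-fold
# predictions play the role of DML's sample splitting / cross-fitting
rf = RandomForestRegressor(n_estimators=100, random_state=0)
d_hat = cross_val_predict(rf, Z, D, cv=2)
y_hat = cross_val_predict(rf, Z, Y, cv=2)
v = D - d_hat   # residualized treatment D - E[D|Z]
u = Y - y_hat   # residualized outcome  Y - E[Y|Z]

# Stage 2: regress residual on residual; the slope estimates theta
theta_hat = (v @ u) / (v @ v)
print(theta_hat)  # close to the true value of 2
```

Using out-of-fold predictions keeps first-stage overfitting from leaking into the residuals; regressing u on v rather than Y on v is exactly the point made in the reply above.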

  • @jicao9205
    @jicao9205 2 years ago

    The presentation is awesome. Thank you!

  • @patrickpower7102
    @patrickpower7102 4 years ago

    In "perfectly set-up" randomized control trials, m_0 wouldn't vanish, but rather would be a constant value of 0.5 for all values of Z, no? (6:25)

    • @PrirodnyiCossack
      @PrirodnyiCossack 4 years ago +1

      Yes, though one can assume that constant has been partialled out, which would give zero.

    • @gwillis3323
      @gwillis3323 3 years ago +1

      No, because D isn't binary; D is continuous: D = m(z) + V, where V is a random variable that does not depend on z. In a perfect trial, D = V, so for example D might be drawn from a Gaussian distribution with sufficient support to make the inferences you wish to make. You could go further and say that in a "perfect" trial, V is a uniform distribution over some sufficiently large domain. I think here "perfect" just means "not confounded at all".
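[Editor's illustration] A small sketch of the point above, with a made-up randomized design: when D is drawn independently of Z (so D = V and m(z) contributes nothing), an estimate of m_0(z) = E[D|Z=z] collapses to the constant E[D]:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 5000
Z = rng.normal(size=(n, 3))
V = rng.normal(loc=0.5, scale=1.0, size=n)  # noise independent of Z
D = V  # perfectly randomized treatment: D = V, m(z) = 0

m_hat = LinearRegression().fit(Z, D)
# coefficients on Z come out near zero, and the intercept recovers
# E[D] = 0.5: m_0 is a constant, and zero once D is centered
print(np.abs(m_hat.coef_).max(), m_hat.intercept_)
```

With a flexible ML learner instead of OLS the same thing happens up to estimation noise, which is why the residual D - E[D|Z] is just the centered treatment in a clean trial.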

  • @mengxiazhang93
    @mengxiazhang93 4 years ago

    The presentation is very helpful! Thank you!

  • @darthyzhu5767
    @darthyzhu5767 7 years ago +1

    great talk, wondering where to access the slides.

    • @ruizhenmai1194
      @ruizhenmai1194 5 years ago +1

      @@mathieumaticien Hi, the slides have already been removed.

    • @VainCape
      @VainCape 3 years ago

      @@ruizhenmai1194 why?

  • @marcelogallardo9218
    @marcelogallardo9218 3 years ago

    Most impressive.

  • @MrRestorevideos
    @MrRestorevideos 9 months ago +3

    Machine learner who worked back in the 30's 🤣

  • @chockumail
    @chockumail 9 months ago +1

    "I resisted to call it ML and I gave up " and Machine learners in 30's :) Hilarious