The claim that matching is better because it eliminates researcher degrees of freedom seems inconsistent with the fact that there are plenty of degrees of freedom in matching too: choosing and normalizing the distance input variables, picking N / the acceptable imbalance, and picking maximum distance cutoffs (detailed in the next lecture). When there are large imbalances, so that we suspect high sensitivity to researcher discretion over functional forms, matching seems like a good solution, but I'm not sure it's fair to call it a universal solution to research bias. It seems more like trading one set of sensitivities to discretion for another (and, more generally, sensitivity to one set of assumptions for another).
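For concreteness, here's a minimal numpy sketch of a plain 1:1 nearest-neighbor match, just to show where those knobs sit. This isn't any package's actual API; the function name, defaults, and toy data are made up, and every commented parameter is one of the degrees of freedom listed above.

```python
import numpy as np

def greedy_match(X_treat, X_ctrl, caliper=1.0, standardize=True):
    """Greedy 1:1 nearest-neighbor match on covariates.

    Researcher degrees of freedom, all before seeing any outcome:
    - which covariate columns enter X at all
    - standardize: whether/how to put covariates on a common scale
    - caliper: maximum allowed distance before a treated unit is dropped
    """
    if standardize:
        # df: scaling by the control group here; pooled sd is another choice
        mu, sd = X_ctrl.mean(axis=0), X_ctrl.std(axis=0)
        X_treat, X_ctrl = (X_treat - mu) / sd, (X_ctrl - mu) / sd
    used, pairs = set(), []
    for i, xt in enumerate(X_treat):
        d = np.linalg.norm(X_ctrl - xt, axis=1)  # df: distance metric
        for j in np.argsort(d):
            if j in used:
                continue
            if d[j] > caliper:  # df: cutoff; unmatched treated unit is pruned
                break
            used.add(j)
            pairs.append((i, j))
            break
    return pairs  # the matched N falls out of all the choices above

# toy usage on fake data
rng = np.random.default_rng(0)
pairs = greedy_match(rng.normal(1, 1, (20, 3)), rng.normal(0, 1, (100, 3)))
```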
The difference is where the degrees of freedom (df) are being applied and the direction they tend to steer the results. Researcher decisions during the pruning stage aren't (and shouldn't be) about fitting a model to every possible pruned dataset; that would be bad researcher df.
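A rough sketch of that separation, again with hypothetical names (match_fn could be the greedy_match sketch above): all the pruning df consume only covariates and treatment, and the outcome enters exactly once, at the final estimation step, so those knobs can't be turned against the result directly.

```python
import numpy as np

def estimate_att(X, T, Y, match_fn):
    # pruning stage: only X and T are visible, Y is never consulted
    pairs = match_fn(X[T == 1], X[T == 0])
    i, j = (np.array(k) for k in zip(*pairs))
    # estimation stage: the one and only place Y is used
    return (Y[T == 1][i] - Y[T == 0][j]).mean()
```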
Your videos are immensely helpful. You explain what are often very complicated topics in very easy-to-understand language.