Robust Regression with the L1 Norm

  • Published: 11 Sep 2024
  • This video discusses how least-squares regression is fragile to outliers, and how we can add robustness with the L1 norm.
    Book Website: databookuw.com
    Book PDF: databookuw.com/...
    These lectures follow Chapter 3 from:
    "Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control" by Brunton and Kutz
    Amazon: www.amazon.com...
    Brunton Website: eigensteve.com
    This video was produced at the University of Washington
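The fragility to outliers described above can be demonstrated with a short sketch (illustrative only, not code from the video or book): fit a line to data containing one corrupted sample, once by least squares and once with an L1 fit computed via iteratively reweighted least squares (IRLS), a common way to approximate the L1 solution with repeated weighted L2 solves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean linear data y = 2x + small noise, then corrupt one sample
x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.05 * rng.standard_normal(x.size)
y[10] = 10.0  # a single large outlier

A = np.column_stack([x, np.ones_like(x)])  # model: [slope, intercept]

# L2 (least-squares) fit: minimizes sum of squared residuals,
# so the one outlier drags the whole line
coef_l2, *_ = np.linalg.lstsq(A, y, rcond=None)

# L1 fit via IRLS: weighting each squared residual by 1/|r_i|
# makes the weighted L2 cost approximate the sum of |r_i|
coef_l1 = coef_l2.copy()
for _ in range(50):
    r = np.abs(y - A @ coef_l1)
    w = 1.0 / np.sqrt(np.maximum(r, 1e-8))  # sqrt of weights for lstsq
    coef_l1, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)

print("L2 slope:", coef_l2[0])  # noticeably pulled away from 2
print("L1 slope:", coef_l1[0])  # close to the true slope 2
```

The L1 slope stays near the true value because the absolute-value loss grows only linearly in the outlier's residual, while the squared loss grows quadratically.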

Comments • 17

  • @zoheirtir
    @zoheirtir 3 years ago +21

    This channel is one of the most important channels for me! Many thanks, Steve!

  • @user-ez9ol7om8d
    @user-ez9ol7om8d 9 months ago

    Along with the visual aids, you have explained the concept in a very understandable manner. Thanks for the video.

  • @Globbo_The_Glob
    @Globbo_The_Glob 3 years ago +5

    I was just talking about this in a meeting; get out of my head, Brunton.

  • @JousefM
    @JousefM 3 years ago +2

    Thumbs up Steve!

  • @3003eric
    @3003eric 3 years ago

    Nice video. Your channel and book are amazing! Congratulations.

  • @haticehocam2020
    @haticehocam2020 3 years ago +1

    Mr. Brunton, what materials and software did you use to produce this video?

  • @Calvin4016
    @Calvin4016 3 years ago +1

    Prof. Brunton, thank you for the lecture! However, in some cases, such as maximum a posteriori and maximum likelihood estimation, under the assumption that the noise is Gaussian distributed, minimizing the L2 norm gives the optimal solution. Usually heuristics such as M-estimation are applied to mitigate issues arising from outliers; in other words, the kernel is changed to a shape that can tolerate a certain number of outliers in the system. It sounds like using the L1 norm here has an effect very similar to that of robust kernels, where we are effectively changing the shape of the cost/error. Can you please elaborate on the differences between using (L1 norm) and (L2 norm + M-estimator), and on how the L1 norm performs in applications where data uncertainty is considered? Thanks!

    • @keyuchen5992
      @keyuchen5992 11 months ago

      I think you are right

  • @alexandermichael3609
    @alexandermichael3609 3 years ago

    Thank you, Professor. It is pretty helpful for me.

  • @JeffersonRodrigoo
    @JeffersonRodrigoo 3 years ago

    Excellent!

  • @sutharsanmahendren1071
    @sutharsanmahendren1071 3 years ago

    Dear sir, I am from Sri Lanka and I really admire your video series. My doubt is that the l1 norm is not differentiable at zero, because of the corner there. To impose sparsity, researchers use ISTA (the Iterative Soft Thresholding Algorithm) to handle the weights when they come near zero, with a certain threshold. What are your thoughts on this?
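The soft-thresholding operator that the comment above refers to can be sketched as follows (an illustration, not from the video; in ISTA it is applied after each gradient step on the smooth part of the cost):

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1: shrink every entry toward
    # zero by t, and set entries with |x_i| <= t exactly to zero,
    # which is what produces sparsity in the iterates
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

print(soft_threshold(np.array([-3.0, -0.5, 0.2, 2.0]), 1.0))
```

Entries with magnitude at most the threshold (here 1.0) are zeroed out, and the rest are pulled toward zero by exactly the threshold.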

  • @pierregravel5941
    @pierregravel5941 1 year ago

    Is there any way we might generate a sampling matrix which is maximally incoherent? What if the samples are positioned randomly and maximally distant from each other? Can we add additional constraints on the sampling matrix?

  • @vijayendrasdm
    @vijayendrasdm 3 years ago

    Hi Steve,
    The L1 (regularization) error surface is convex but not differentiable everywhere. Are you planning to explain how we optimize such functions?
    Mathematical derivations would be helpful :)
    Thanks

  • @twk844
    @twk844 3 years ago

    Does anyone know the historical reasons for the popularity of the L2 norm? Very entertaining videos! Namaste!

    • @MrHaggyy
      @MrHaggyy 1 year ago

      I think it's so popular because you need it so often. Basically everybody knows Pythagoras, i.e. the distance between two points in 2D, and that idea dominates mechanical engineering. The whole framework of complex numbers, with i = sqrt(-1), is built around the l2 norm, so all the differential equations in mechanics and electronics need it, and basic optics needs it too.

  • @alegian7934
    @alegian7934 3 years ago

    there is a point in each video where you lose consciousness of time passing :D