Convex Norms and Unique Best Approximations

  • Published: 25 Mar 2021
  • In this video, we explore what it means for a norm to be convex. In particular we will look at how convex norms lead to unique best approximations.
    For example, for any continuous function there will be a unique polynomial which gives the best approximation over a given interval.
    Chapters
    0:14 - Geometry of the Lp Norm
    0:45 - Convexity of the Lp Norm
    2:13 - Best Approximations are unique for convex norms (proof)
    4:41 - Example
    The product links below are Amazon affiliate links. If you buy certain products on Amazon soon after clicking them, I may receive a commission. The price is the same for you, but it does help to support the channel :-)
    The Approximation Theory series is based on the book "Approximation Theory and Methods" by M.J.D. Powell:
    amzn.to/3CXQpQZ
    Errata and Clarifications:
    The L1/2 "norm" isn't actually a norm! Lp for p less than 1 fails the triangle inequality.
    All norms and normed linear spaces are convex; the important bit for the proof is that A must be a convex subset.
    In the last section I show a graph labelled exp(-x), but judging by the curve that's not correct. It doesn't hurt the explanation; any curve would've been fine.
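The first erratum can be checked numerically. A minimal sketch, assuming the usual finite-dimensional formula ||x||_p = (Σ|x_i|^p)^(1/p) applied with p = 1/2:

```python
# Demonstrate that the "L_{1/2} norm" violates the triangle inequality.
def lp(v, p):
    """(sum |v_i|^p)^(1/p) -- a genuine norm only for p >= 1."""
    return sum(abs(t) ** p for t in v) ** (1.0 / p)

x, y = (1.0, 0.0), (0.0, 1.0)
s = tuple(a + b for a, b in zip(x, y))  # x + y = (1, 1)

# For p = 1/2: ||x + y|| = (1 + 1)^2 = 4, but ||x|| + ||y|| = 1 + 1 = 2,
# so ||x + y|| > ||x|| + ||y|| and the triangle inequality fails.
print(lp(s, 0.5), lp(x, 0.5) + lp(y, 0.5))  # 4.0 2.0
```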
    This video was made using:
    Animation - Apple Keynote
    Editing - DaVinci Resolve
    Mic - Shure SM58 (with Behringer U-PHORIA UM2 audio interface)
    Supporting the Channel.
    If you would like to support me in making free mathematics tutorials then you can make a small donation over at
    www.buymeacoffee.com/DrWillWood
    Thank you so much, I hope you find the content useful.

Comments • 31

  • @meccamiles7816
    @meccamiles7816 1 month ago

    When I learned this in my real analysis and measure theory courses, neither of my professors was able to articulate these ideas as clearly and succinctly as you have in this and associated videos. Well done, and thanks for sharing.

  • @juliogodel
    @juliogodel 1 year ago +1

    Thanks! Please continue with more videos about approximation theory. Going through high-dimensional data analysis etc., it all makes sense now.

  • @Frittenkopf88
    @Frittenkopf88 2 years ago +3

    absolutely love your videos. studied maths a while back & getting nostalgic, feels good to be reminded of its beauty (in such a wonderfully clear fashion). thanks

    • @DrWillWood
      @DrWillWood  2 years ago

      Great to hear! :-) thank you very much!

  • @lin1450
    @lin1450 2 years ago +1

    Thank you very much for this series! It really helped with understanding these concepts!
    I would love to see a video about the Frobenius norm :)

  • @robmarks6800
    @robmarks6800 3 years ago +1

    Just wanna say, amazing series on a fascinating topic!!
    I hope you keep going and give a more complete view of Fourier series with your amazing intuition

    • @DrWillWood
      @DrWillWood  3 years ago +2

      Thanks a lot, I'm glad you enjoyed it! And you're right, Fourier series are rarely discussed (at least on YT) from a more theoretical approach! I'll add it to my vid ideas list :-)

  • @ShenghuiYang
    @ShenghuiYang 2 years ago

    This channel is gold. I think you may extend the last part to topics about Tchebycheff polynomial as a good choice of approximation in the future.

  • @macwalker936
    @macwalker936 1 year ago

    This video is incredible!

  • @thecontrolenggeek786
    @thecontrolenggeek786 1 year ago

    I am really glad to see lectures this simple in advanced math... Please don't stop....

  • @jaafars.mahdawi6911
    @jaafars.mahdawi6911 2 years ago

    when such mathematical 'heavy matter' is very well-explained and well-visualised, all we can feel is infinite gratitude..
    p.s. excuse the keen eyes of math majors spotting the slightest, though unintended, mistakes;
    in the case of this video, other than the ones pointed out by other viewers, wasn't the curve of e^-x around the end incorrect? :)
    p.p.s. well, now i see you've done an errata section in the description. thanks again!

  • @maciess5226
    @maciess5226 2 years ago

    Good video. Thanks for the visualizations

  • @fierydino9402
    @fierydino9402 9 months ago

    Thank you!

  • @plfreeman111
    @plfreeman111 2 years ago

    YouTube suggested your channel, and I'm glad it did. I'm stuck trying to resolve something from my experience with what you presented.
    When finding approximations from a model set to data corrupted with noise, the choice of norm *does* affect which model is selected as the "best". For example, L1 is often considered more robust to outliers than L2, and in practice produces "better" predictive models. So for example, if you were to sample e^{-x} and then corrupt 5% of the data with uniform random noise (not normally distributed), the result is still a function (for every x there is one and only one y). Let's again assume that our model space is the set of polynomials. I believe the least residual fit using the L1 norm and the least residual fit using the L2 norm will be different (I may need to double check this), and that the fit using the L1 norm will be a closer approximation to e^{-x} than the L2 norm. (See Huber and others on Robust Statistics.) But your video convincingly shows that the L1 fit and the L2 fit ought to be identical. Which contradicts loads of work on robust fitting, and my own practical experience with it. Confusing.

    • @plfreeman111
      @plfreeman111 2 years ago

      OK, I figured out how I was confused. You're saying that *for a given norm* there is a unique best approximation, not that the best approximation is the same for all convex norms. I didn't pick up on that distinction the first time through.
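The robustness point in this thread can be sketched numerically: for the same corrupted samples of exp(-x), an L1 fit and an L2 fit are different polynomials, and each is the unique minimizer of its own norm. This is an illustrative sketch, not the video's method; the degree, outlier fraction, and the use of iteratively reweighted least squares (IRLS) to approximate the L1 fit are all my choices here.

```python
import numpy as np

# Sample exp(-x), corrupt 5% of the data with one-sided outliers.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 200)
clean = np.exp(-x)
y = clean.copy()
bad = rng.choice(x.size, size=10, replace=False)
y[bad] += rng.uniform(1.0, 2.0, size=10)

V = np.vander(x, 4)  # cubic polynomial model

# L2 fit: ordinary least squares.
c2, *_ = np.linalg.lstsq(V, y, rcond=None)

# L1 fit: IRLS, which reweights each residual r_i by 1/|r_i| so the
# weighted least-squares objective approximates sum |r_i|.
c1 = c2.copy()
for _ in range(100):
    r = V @ c1 - y
    sw = 1.0 / np.sqrt(np.maximum(np.abs(r), 1e-8))  # sqrt of IRLS weights
    c1, *_ = np.linalg.lstsq(V * sw[:, None], y * sw, rcond=None)

# The two fits differ; compare each against the uncorrupted function.
err_l1 = np.mean(np.abs(V @ c1 - clean))
err_l2 = np.mean(np.abs(V @ c2 - clean))
print(err_l1 < err_l2)  # here the L1 fit tracks exp(-x) more closely
```

This is consistent with the reply above: uniqueness holds within each norm, while the minimizers for different norms need not coincide.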

  • @jfndfiunskj5299
    @jfndfiunskj5299 2 years ago

    Beautifully done.

  • @JaymzCampbell
    @JaymzCampbell 1 year ago

    2:47 was really helpful for me finally understanding what the whole deal with convexity was whilst studying M832 (OU course), up until this it was just something I'd accepted and carried on 😅 Thanks from someone in "exam crunch" mode this weekend 🙌

    • @DrWillWood
      @DrWillWood  1 year ago +1

      That's brilliant! I made these videos while studying M832 myself! best of luck for the exam 🙂

  • @drdca8263
    @drdca8263 2 years ago +7

    But is L_{1/2} even a norm? Doesn’t it fail to satisfy the triangle inequality?
    Let |•| be some norm,
    If |x-y|

    • @DrWillWood
      @DrWillWood  2 years ago +5

      Hi, yes absolutely! for p < 1 the triangle inequality fails, so it isn't actually a norm

    • @Evan490BC
      @Evan490BC 2 years ago +4

      It is called a quasi-norm. They are extremely useful in some cases.

    • @smolboi9659
      @smolboi9659 2 years ago +3

      Yea. Also FYI for 0

    • @gregc517
      @gregc517 9 months ago

      @@DrWillWood in the proof you used strict convexity, but not all norms are strictly convex (for example L_infinity isn't), and the theorem can fail for non-strictly-convex norms, so "All norms and normed linear spaces are convex, the important bit for the proof is that A must be a convex subset." isn't correct.
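The strict-convexity caveat in this reply can be seen concretely: under the L-infinity (max) norm, a best approximation need not be unique. A minimal sketch of my own, in R² rather than a function space:

```python
# Approximate the point (0, 1) by points (t, 0) on the x-axis,
# measuring distance with the L-infinity (max) norm.
def dist_inf(t):
    return max(abs(t - 0.0), abs(0.0 - 1.0))  # = max(|t|, 1)

# Every t in [-1, 1] attains the minimum distance 1: the set of
# minimizers is a whole segment, not a single point, because the
# L-infinity norm is not strictly convex.
candidates = [-1.0, -0.5, 0.0, 0.5, 1.0]
print([dist_inf(t) for t in candidates])  # [1.0, 1.0, 1.0, 1.0, 1.0]
```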

  • @tonihuhtiniemi1222
    @tonihuhtiniemi1222 2 years ago

    So when doing optimization in a normed linear space, is every local optimum also a global optimum?

  • @killymxi
    @killymxi 2 years ago

    What about L-infinity and a rectangular subset? Isn't there a possibility for two straight edges to touch? How is that treated?

  • @user-kx5hp1kh3u
    @user-kx5hp1kh3u 2 years ago

    It would definitely be a nightmare to understand the idea behind unique best approximation by reading books alone, without your video and pictures for demonstration.

  • @ckq
    @ckq 2 years ago

    2:00 isn't the line only the shortest path in the L² norm, so is convexity even applicable in other norms?

  • @colinpitrat8639
    @colinpitrat8639 2 years ago

    Exercise for the interested reader: For the last bit, what if the interval is not [a,b] but [a,inf)? Obviously there's no best polynomial approximating exp(-x). Why isn't it a counterexample to the proof?

    • @salim444
      @salim444 2 years ago

      Answer:
      the polynomials are not in L2 on the interval [a, infty)
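This answer can be illustrated numerically: the truncated L2 norm of even the simplest nonzero polynomial grows without bound as the interval grows, so nonzero polynomials don't lie in L2[a, ∞). A small sketch of my own, using p(x) = x on [0, T]:

```python
import math

# Truncated L2 norm of p(x) = x on [0, T]:
# sqrt( integral_0^T x^2 dx ) = sqrt(T^3 / 3).
def l2_on_interval(T):
    return math.sqrt(T ** 3 / 3.0)

# The norm diverges as T -> infinity, so x (like any nonzero polynomial)
# is not an element of L2[0, inf), and the uniqueness theorem's setting
# simply doesn't apply.
print([round(l2_on_interval(T), 2) for T in (1, 10, 100)])  # [0.58, 18.26, 577.35]
```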