Applications of Least Squares | Practical Linear Algebra (Lecture 8)

  • Published: 23 Oct 2024

Comments • 11

  • @aleksanderchelyunskii5968 • A year ago +1

    Thank you for your linear algebra course. I enjoy your practical approach to explaining this material.
    At 16:02 you stated that setting up the correct matrix A is harder than obtaining the approximate solution using least squares.
    As far as I know, the difficulty arises from making sure that the column vectors of the matrix A are linearly independent of each other. Is it really hard to obtain linearly independent column vectors from experimental data?
    Related to that, how did you generate the matrix A for the temperature estimation example?

    • @control-room • A year ago

      Thanks for watching!
      So when I said "setting the correct matrix A is harder than actually solving least squares", I just meant that it's hard to convert a word problem into a matrix equation. Once you've figured that out, though, you can easily get the solution by calling NumPy's lstsq function (a quick sketch follows at the end of this reply).
      Also, the difficulty does *not* arise from making sure the column vectors are linearly independent. If you gather experimental data and find that your columns happen to be linearly dependent, you can use QR factorization (see lecture 10) to find out exactly which columns are linearly dependent and just remove them. Those columns are completely redundant so it's safe to just delete them. Now you'll have linearly independent columns and you can do least squares.
      By the way, deleting a column would be the same as removing that sensor. This means that sensor provides no unique information beyond what the other sensors are providing. In practice, this is *extremely* unlikely to happen, but it's possible.
      The difficulty in setting up the matrix A arises because it's difficult to convert word problems into equations, and sometimes it's also difficult to convert those equations into matrix equations. The temperature sensor example is simple, but there are some really complicated problems that can actually be written as matrix equations if you think about it really hard.
      Here's a non-obvious example, estimating the position of an object from beacons scattered around it via linearization: ee263.stanford.edu/lectures/lsnav.pdf
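      Here's a minimal sketch of that lstsq call, using a made-up two-temperature, four-sensor model (the numbers are invented, not the ones from the lecture):

        import numpy as np

        # Hypothetical model: 4 sensor readings b, each a known linear
        # combination of 2 unknown temperatures (the columns of A).
        A = np.array([[1.0, 0.2],
                      [0.8, 0.5],
                      [0.3, 0.9],
                      [0.1, 1.0]])
        b = np.array([20.4, 22.1, 24.8, 25.3])

        # Least-squares solution: minimizes ||A @ x - b||_2.
        x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
        print(x)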

    • @aleksanderchelyunskii5968 • A year ago +1

      Thank you for your reply. I really appreciate it.
      Understood. The real difficulty lies in modelling the problem rather than in solving it. I think this also applies to other subfields of mathematics. Initially, it is never clear to us which mathematical method to use or how to translate physical/practical quantities into established formulas.
      I also thought that a linearly dependent column vector provides a minimal amount of information. For the particular example of temperature estimation, I guess it could be due to incorrect sensor placement or calibration. Needless to say, linearly dependent measurements are to be avoided by the experimenter.
      Thank you for sharing the position estimation problem. I am still trying to understand it, but I think it will be a fun brain exercise. I also visited your GitHub page. Perhaps I will raise an issue to open a discussion about linear algebra. Cheers.

    • @aleksanderchelyunskii5968 • A year ago

      @@control-room By the way, you mentioned that we can identify linearly dependent column vectors via QR factorization. If I am not mistaken, a matrix containing linearly dependent columns would yield an upper-staircase R rather than an upper-triangular one. A linearly dependent column would share its column index with a diagonal element of R whose value is 0:
      A = [x1, x2, ..., xj, ..., xN] = [Q1, Q2, ..., QN] R
      If xj is linearly dependent, then Rjj = 0 (from the Gram-Schmidt procedure).

    • @control-room • A year ago

      @@aleksanderchelyunskii5968 Yes! That's exactly correct. You'd get a matrix in upper-staircase form with Rjj = 0. A quick numerical check is sketched below.
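      Here's that check, using NumPy's built-in QR rather than hand-rolled Gram-Schmidt (the matrix is made up so that its third column equals the first plus twice the second):

        import numpy as np

        # Made-up matrix: column index 2 = col 0 + 2 * col 1, so it
        # should be flagged as linearly dependent.
        A = np.array([[1.0, 0.0, 1.0],
                      [0.0, 1.0, 2.0],
                      [1.0, 1.0, 3.0],
                      [2.0, 0.0, 2.0]])

        Q, R = np.linalg.qr(A)

        # A (near-)zero diagonal entry R[j, j] marks column j as
        # dependent on the columns to its left.
        dependent = np.where(np.abs(np.diag(R)) < 1e-10)[0]
        print(dependent)  # [2]

        # Deleting those columns leaves an independent set.
        A_indep = np.delete(A, dependent, axis=1)

      One caveat: with nearly (rather than exactly) dependent columns this plain diagonal test can be numerically fragile; a column-pivoted QR, e.g. scipy.linalg.qr with pivoting=True, is the more robust way to do this.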

    • @aleksanderchelyunskii5968 • A year ago

      @@control-room I took a look at the navigation problem you shared with us. Slides 2 and 3 state the variables and assumptions used to solve the problem, while slides 4 and 5 illustrate the best linear unbiased estimator (BLUE) properties of the Moore-Penrose pseudoinverse compared to using a minimal number of constraint equations to solve the problem.
      You are right about the larger effort needed to model the problem. In fact, I am still a bit confused about how the matrices are constructed.
      1. We want to know the location x in R^2 --> okay, I think this is equivalent to saying the coordinates of x in Cartesian coordinates.
      2. We collect measurements y in R^4 --> I assume these are the distances measured from the unknown location x to each beacon.
      3. The unit vector from location x to each beacon is given as k_i.
      What confuses me is:
      1. If y is a measurement of distance, why are negative values allowed? (R^4)
      2. I find that on slide 4 the unit vectors (k_i) that compose the row vectors of the minimal constraint matrices (B_je) are not orthonormal. Are the k_i really unit vectors, as given in the problem statement?
      3. I tried to picture this problem by placing the Cartesian origin somewhere in the plane. Geometrically, I imagined y_i = x + k_i * |S| for every beacon i. But I guess I am wrong.
      Well, I can always give it another try later; my current best guess is sketched below. Thank you for sharing the problem.
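      Maybe the linearized model is that each measurement is just the projection of the position onto the beacon direction, y_i ~ k_i . x, stacked as y ~ Kx with rows k_i^T; that would also explain why negative entries of y are allowed. A sketch of this guess (all numbers invented, not taken from the slides):

        import numpy as np

        # Guessed setup: unit direction vectors k_i toward 4 beacons
        # (each row has norm 1) and a 2-D position to estimate.
        K = np.array([[ 1.0, 0.0],
                      [ 0.0, 1.0],
                      [ 0.6, 0.8],
                      [-0.8, 0.6]])
        x_true = np.array([3.0, 4.0])

        # Linearized measurements y_i ~ k_i . x, plus a little noise.
        rng = np.random.default_rng(0)
        y = K @ x_true + 0.01 * rng.standard_normal(4)

        # Least-squares estimate of the position: 4 equations, 2 unknowns.
        x_hat, *_ = np.linalg.lstsq(K, y, rcond=None)
        print(x_hat)  # close to [3, 4]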

  • @伍台國 • A year ago

    Great