I'm taking Regelungstechnik 3, which covers nonlinear systems, without having taken Regelungstechnik 2, which covers linear and multivariable systems. That's because I'm an exchange student in Germany, and the course programs differ between Germany and my home country. This playlist is helping me catch up. Thank you very much!
Thumbed up all of your videos and loved your Fourier videos. But the Kalman filter has recently lost ground in the hobby drone business. The century-old FFT is still the king, and the king is alive and well.
You saved me in my Discrete Systems Control presentation, thank you so much.
and he is saving me now in my modern control presentation hahahaah
@@mohamadalajouz7344 and now he's saving me in Robust control presentation
Thanks! I'd just like to point out an error in the code: the lqe command should be [L,P,E] = lqe(A, eye(4), C, Vd, Vn). In that case the results will be identical to those from the LQR command.
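For reference, a minimal sketch of the two equivalent calls (assuming A, C, Vd, Vn are defined as in the lecture; the variable names here are my own):

% Kalman gain via lqe; process noise enters through eye(4)
[Kf1, P, E] = lqe(A, eye(4), C, Vd, Vn);
% Same gain via lqr, using the estimator/regulator duality
Kf2 = (lqr(A', C', Vd, Vn))';
% Kf1 and Kf2 should agree to numerical precision
norm(Kf1 - Kf2)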
this class is excellent!!!!!
Agree! We need to fire our professor and hire Dr. Brunton!
Thanks Steve for the great work!
My pleasure!
@@Eigensteve Hi, I need a simple pendulum model, can anybody help me please?
Thank you, Steve, for your clear explanation! Just one question, please.
In your MATLAB example, it appears that BF = [B Vd 0*B], which already takes Vd into account. So why was uAUG defined as [u; Vd*Vd*uDIST; uNOISE], where Vd appears multiplied twice? I'm grateful in advance for any answer.
In the book it says that B_aug = [B eye(4) 0], while u_aug = sqrt(Vd)*randn(4, size(t,2)). I think Vd should be taken into account just once (and as a square root), as in the book. Maybe there is a typo in the video.
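To make the book's convention concrete, here is a minimal sketch (assuming a diagonal Vd, scalar Vn, and time vector t as in the lecture; the names are my own):

% Disturbance enters through an identity input matrix; its statistics come from sqrt(Vd)
Baug   = [B eye(4) 0*B];                   % [control | disturbance | noise placeholder]
uDIST  = sqrt(Vd) * randn(4, size(t, 2));  % disturbance with covariance Vd (applied once)
uNOISE = sqrt(Vn) * randn(1, size(t, 2));  % measurement noise with covariance Vn
uAUG   = [u; uDIST; uNOISE];               % augmented input, e.g. for lsim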
Thank you, Steve, for these fantastic videos! They are a real help in my studies.
In your MATLAB example it appears that the Kalman gain is only calculated once for the entire simulation, while in other sources I've seen the Kalman gain updated at each timestep. I was just wondering if you could comment on why this is?
Thanks! Yes, that is right, in this example I am using a single Kalman gain because I am assuming that the linear system doesn't change. If you are controlling a more complex system, where the linearization point is changing, or where there are nonlinear dynamics, then it could help to update the Kalman gains as well. In large-scale data assimilation projects, like climate modeling, it is common to update the Kalman gains.
@@Eigensteve I've also seen in other Kalman filter descriptions (e.g. Wikipedia) that they estimate an error covariance along with the state vector. This estimated covariance is then used to recompute the Kalman gain at every step (even though no re-linearization is done). Maybe this will be covered in future lectures (or if you have a reference to a relevant part of your book), but I don't feel like I have a good sense of the differences between these two approaches. In particular, the version you present here seems a lot simpler, so I guess I'm wondering if you can always do this (at least when you stay close to the linearization point), or if the more complex version described elsewhere is necessary.
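For comparison, a minimal sketch of one predict/update step of that covariance-propagating (time-varying) discrete Kalman filter, assuming discrete-time matrices Ad, Bd, C and covariances Qd, Rd; for a time-invariant system the gain Kk converges to the steady-state gain that lqe returns:

% xhat and P are carried between steps; u and y are the current input and measurement
xhat = Ad*xhat + Bd*u;                 % state prediction
P    = Ad*P*Ad' + Qd;                  % covariance prediction
Kk   = P*C' / (C*P*C' + Rd);           % time-varying Kalman gain
xhat = xhat + Kk*(y - C*xhat);         % measurement update
P    = (eye(size(P,1)) - Kk*C) * P;    % covariance update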
I'm gonna go ahead and guess the example was not done for the pendulum up position because you need more tools (robust control?). I'll get to those videos soon enough though. Great material Steve.
Thanks! Yes, this is an easier warm up, since if the controller isn't that effective, the system stays in the down position. A bit trickier in the up position.
Hi, Mr. Brunton, I am really enjoying the classes. Thank you!
What if we have no information about the noise and disturbances in advance?
Great question. Usually you won't have information about noise and disturbance in advance, so you would somehow need to estimate them. And generally these magnitudes will actually become tuning parameters that you can use to find a good estimator. Often only the ratio between noise magnitudes matters, and you usually vary them by factors of 10.
@@Eigensteve , are there any techniques one could use instead of trial and error? some error minimisation techniques, perhaps?
@@alex.ander.bmblbn, definitely, you could use another optimization technique to select these hyperparameters to minimize error or some other metric. But this would be a smarter, automated version of trial and error.
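As one concrete (hypothetical) way to automate this, here is a minimal sketch of a sweep over the disturbance-to-noise ratio in factors of 10; simulate_estimator is a made-up helper that runs the observer and returns some estimation-error metric:

% Sweep the Vd/Vn ratio and keep the estimator with the smallest error
ratios = 10.^(-3:3);
err = zeros(size(ratios));
for k = 1:length(ratios)
    Vd_try = ratios(k) * eye(4);                  % candidate disturbance covariance
    Vn_try = 1;                                   % fix noise covariance; only the ratio matters
    err(k) = simulate_estimator(Vd_try, Vn_try);  % hypothetical helper, returns e.g. RMS error
end
[~, best] = min(err);
fprintf('Best Vd/Vn ratio: %g\n', ratios(best));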
@@Eigensteve, I see, thank you, sir! looking forward to reading your book on machine learning based control :)
Great video! But I think there is a mistake: there are two Vd's. The first represents how the disturbance affects the real system, and the second is the covariance we assume. So maybe we should distinguish these two 'Vd' matrices.
Nice lecture and book. Thanks a lot.
This lecture is very good. Tks Prof.
Awesome video. I am working on a project using a Kalman filter to estimate SOC. I like the way you made your presentation show a transparent window for MATLAB. Could you show how you did that?
This uses a hardware mixer and the ATEM software.
Super nice! I've got a question: what is the best way to take a forcing term into account? (E.g., suppose an external sinusoidal force is generating a torque on the pendulum.)
You would incorporate that as a non-conservative force, which will be the right-hand side in the Lagrange formulation using the principle of virtual work. If your work term is a form of potential energy, you don't need to do that and can just use Euler-Lagrange instead.
Try it with gravity. Assume a rigid pendulum which doesn't compress, so gravity only does work in the local coordinate system's x-direction. Use a standard rotation matrix to put the gravitational force in your local coordinates (versus global coordinates, where gravity points down), and apply virtual work (the dot product of the force with the unit direction vector).
If i and j are the local unit vectors, then for a clockwise-rotated local coordinate system (where j is aligned with the pendulum rod), you get a gravitational force vector of F = mg sin(theta) i - mg cos(theta) j. This force only does work in the i direction, so the unit direction vector is dr = dx i, and the dot product gives F·dr = mg sin(theta) dx. If you use Lagrange, then you will perform the partial derivatives on the system's kinetic energy only, and your generalized force on the right-hand side is Q dx = mg sin(theta) dx, so Q = mg sin(theta). This is the same result you would get using Euler-Lagrange, since the gravitational force can actually be incorporated as a potential energy.
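To come back to the original question about a sinusoidal torque, here is a minimal sketch of adding such a forcing term to a simple damped pendulum simulation (my own example, not from the video; m, L, g, the damping d, and the forcing amplitude/frequency are assumed values):

% Simple damped pendulum with an external sinusoidal torque on the right-hand side
m = 1; L = 1; g = 9.81; d = 0.1;        % assumed parameters
A0 = 0.5; w = 2;                        % assumed forcing amplitude and frequency
pendForced = @(t, x) [ x(2);
                       -(g/L)*sin(x(1)) - d*x(2) + (A0/(m*L^2))*sin(w*t) ];
[t, x] = ode45(pendForced, 0:0.01:20, [pi/4; 0]);
plot(t, x(:,1)); xlabel('t'); ylabel('theta(t)');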
In the case of following a setpoint, are there tools for finding the gain required for it, so that the input becomes u = K_r*r - K_s*x, with K_s being the matrix gain obtained from the LQR?
Thanks -- yes, when following a setpoint for reference tracking, it is possible to add a feedforward term along with LQR. I always find this a bit trickier to derive.
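For what it's worth, a minimal sketch of one standard textbook way to get that feedforward gain for a SISO system (a static precompensator; not necessarily the derivation Steve has in mind, and it assumes A, B, C, Q, R are already defined):

% LQR state feedback plus a static feedforward gain Kr for reference tracking
K  = lqr(A, B, Q, R);
Kr = -1 / (C * ((A - B*K) \ B));      % scales r so that y -> r at steady state
% closed-loop input: u = -K*x + Kr*r
sysCL = ss(A - B*K, B*Kr, C, 0);
step(sysCL);                          % a unit step in r should settle at y = 1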
You didn't mention that you used the asymptotic Kalman filter here. Isn't the most "optimal" one obtained when you get K from solving the differential Riccati equation online?
Thank you so much, Prof. Steve, for making things easier to understand. I appreciate it. Are the lqe and lqr MATLAB commands used in the videos applicable to discrete systems? The system in the video is continuous.
Dear Professor! Thanks again. One question, please. At 15:10 you said that in the real world we have u and y and we will not have to "invent" disturbances (n and d). But my current understanding is that these n and d disturbances are an integral part of the Kalman filter algorithm, and we will need to estimate these real-world disturbances during system operation in order to supply the Kalman filter with correct real-time values for the n and d covariances. Is my understanding correct?
I think I understood it in the same way. In the example shown in this video, the Kalman filter was able to squash the noise because we gave it information about the noise statistics. We would need to do the same for a real-world system. PS: Note I'm not an expert; I only wrote down how I understood the lecture.
Dr. Brunton, excellent series! I wonder where I could find the MATLAB code for this video. I couldn't find the code on the book website. Thank you for your efforts!
All code is at databookuw.com (in matlab and python)
hijacking the lqr! so cool...even though I don't know how it works internally....
Thanks for the knowledge and writing backwards
fractional order sliding mode control plzzzzzzz do it
Hi Steve,
first of all: fantastic video !!!
I have a question regarding the Kalman filter in combination with PID control. As you said, the Kalman filter has its own dynamics. When I use a PID controller, I use my system model to tune the PID parameters properly. If I use a filter such as a Butterworth filter, I have to consider the dynamics of this filter. However, the dynamics of the Kalman filter are not easy to predict/identify, since I do not know the poles of the system. Is it possible to combine a PID controller with a Kalman filter properly, in a "scientific" way? Background: I have a very unreliable velocity measurement of the fibre flow and I want to use a simple PI controller. However, I do not want to use a simple Butterworth filter, since measuring is sometimes not possible or yields values that are physically not explainable.
Thanks for your time so far. I would appreciate an answer very much.
cheers
Leo
Hi Mr. Brunton. Thank you for the video; the estimation of unmeasured variables almost feels like magic. I really tried to understand the Kalman filter, but I still don't understand this part: how does it estimate the unmeasured variables? Would it be possible to write out a formula which describes how these estimates are calculated (something like theta(k+1) = ...)? Or could you somehow explain this conceptually? I would really appreciate it.
I was thinking about this for a while, and I reached some insight that I may share with you. The design of the Kalman filter in this video is based on the system model. In the model, the states are actually simulated (as if they were known, even though in reality they are not), so the estimator subtracts the modeled state from the estimated state (starting from a random initial estimate) and then corrects the error between them using the Kalman gain.
However, in a real environment we have no idea about some of the states. There, instead of modeling the states, we must compute the unknown states; for example, if the position state is measured, we could obtain the velocity state by differentiating it. Once again, the Kalman filter will subtract the calculated states from the estimated states and correct the error using the Kalman gains. Hope this helps.
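To answer the formula question directly, here is a minimal sketch of the continuous-time observer used in the lecture (the discrete-time version is analogous); Kf is the Kalman gain from lqe, and the estimate of all states, measured or not, is driven by the model and corrected by the measured output y:

% xhat_dot = A*xhat + B*u + Kf*(y - C*xhat)
sysKF = ss(A - Kf*C, [B Kf], eye(4), 0);   % inputs: [u; y], outputs: full state estimate
xhat  = lsim(sysKF, [u; y]', t);           % assumes u and y are 1-by-length(t) signals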
Hi sir, can I ask why the D term in sysC is [0 0 0 0 0 Vn] instead of [0 0 Vn] as you wrote on the board? Thank you so much!
I believe it is because the u vector represents 4 control signals. So in Matlab that single zero on the board is written explicitly as four zeros.
For everyone wondering about this [0 0 0 0 0 Vn], that's because:
- our u input is a scalar (i.e. 1 input signal).
- however, the d input is assumed to have 4 components (d1, d2, d3, d4), a separate disturbance input for each state; it is clearly defined this way on line 44 in MATLAB (uDIST = randn(4, ...)), so it is a 4-input disturbance.
- finally n is a scalar again, same as u
When you have this in mind, the following is the full expansion of both State equation and Output equation [with size of each vector and matrix] of the simulated system:
State Equation: X_dot [4x1] = A[4x4] * X[4x1] + B[4x1] * u[1x1] + Vd[4x4] * d[4x1] + 0 * n[1x1]
So, if you put the inputs [u d n] as a single input vector, you end up with this:
u_all = [u d1 d2 d3 d4 n]
and combine B, Vd, and a zero column for n into a single matrix [B Vd 0*B]; the reason for 0*B is only to keep the sizes consistent.
The output equation is: Y[1x1] = C[1x4] * X[4x1] + 0 * u[1x1] + 0 * d[4x1] + Vn[1x1] * n[1x1]
Therefore, all feedthrough terms are 0 except the one for n: [u d1 d2 d3 d4 n] ---> [0 0 0 0 0 Vn]
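Putting that together, a minimal sketch of how the augmented single-measurement system is built (following the dimensions above; the exact variable names in the lecture code may differ):

% Augmented system: output y driven by the combined input [u; d; n]
BF   = [B Vd 0*B];          % 4x6: control (1) | disturbance (4) | noise placeholder (1)
DF   = [0 0 0 0 0 Vn];      % 1x6: only the measurement-noise channel feeds through to y
sysC = ss(A, BF, C, DF);    % system with a single noisy measurement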
@@mutexembedded2206 what if I have a 1x3 Vn?
Hi Professor, in your code line 35, there are two Vd matrices. Are they the same?
I have some questions regarding the Kalman filter: what is the difference between the MATLAB kalman function and lqe? And can I use the Kalman filter with a system without an input u?
You are really awesome... let's be a couple!
Sir, which digital instrument/device are you using to make the presentation? It's really innovative. Could you please share details?
Sir! How did you compute the number of columns in the matrix D in sysFullOutput?
It is the same number of columns as the augmented B matrix. The columns are the control input, the 4 state variables, and the measured output. In the D matrix, we are only applying noise to the measurement, and doing nothing to the other 5 columns. The other 5 columns are addressed in the augmented B matrix. Notice how the last column in the augmented B matrix is a column of zeros, because we aren't applying the measurement noise to the inputs... that's left to the outputs via the D matrix.
Wonderful!
You are the one.
Hi, I need a simple pendulum model, can anybody help me please?
NB
Please marry me mr Steve!