I've had lots of control teachers, and let me tell you, you are a GOD!
Thank you, sir. I have seen the whole series and it has clarified a lot of my concepts in control theory. Your videos are just great, and your way of teaching complex things in a simple manner is admirable. Thanks again.
Thank you for the great video. However, I found it a little weird that we designed u = -Kx but implemented u = -K(x - x_ref). Could you explain why it still works?
@Justinus Hartoyo yes, it is the error dynamics. I didn't realize that he overloaded the variable x.
@Justinus Hartoyo This comment should be highlighted here
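To spell out the error-dynamics point: write e = x - x_ref. If x_ref is an equilibrium (A x_ref = 0), then de/dt = A e + B u, so u = -K e yields the same closed-loop matrix A - BK that K was designed for. A minimal numerical sketch (Python with a toy double integrator, not the cart-pendulum from the video; the reference value is illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import place_poles

# Illustrative double integrator, not the cart-pendulum from the video
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = place_poles(A, B, [-2.0, -3.0]).gain_matrix  # designed for u = -K x

x_ref = np.array([4.0, 0.0])  # desired state; an equilibrium, A @ x_ref = 0

def closed_loop(t, x):
    u = -K @ (x - x_ref)          # same K, applied to the error x - x_ref
    return A @ x + B.flatten() * u

sol = solve_ivp(closed_loop, [0, 10], [0.0, 0.0], rtol=1e-8)
print(sol.y[:, -1])  # close to x_ref = [4, 0]
```

The controller never "knows" about x_ref at design time; shifting the feedback by a constant equilibrium just moves the fixed point of the closed loop.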
At 11:03 you said that making the eigenvalues too aggressive will actually cause the system to become unstable, due to the nonlinear dynamics.
But this is always a risk -- we've linearized the system around a particular point, so going too far away from that point could cause our controller to fail.
Is there a way to find the border of the state space where our linearized controller will start to fail?
If so, would it be possible (or practical) for the size of the stable region to be a term in the cost function that you describe in the LQR video? (The bigger the region where the controller can stabilize around a fixed point, the better, so I'd like to optimize the controller to widen that region.)
This is a great question. It is true that nonlinearity often implies some "basin of attraction" around your linearization point. In general, saying things about performance and convergence in nonlinear systems is very challenging. Lyapunov functions are useful here, but they are hard to come by in real systems. I do seem to remember seeing the radius of convergence as a term in some advanced control optimizations, but I can't quite remember where off the top of my head.
@@Eigensteve thanks for the quick response and the pointer to lyapunov functions. Your videos are awesome by the way!
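One brute-force way to probe that basin of attraction is simply to simulate the nonlinear closed loop from a grid of initial conditions and record which ones converge. A sketch with a toy inverted pendulum (theta_ddot = sin(theta) + u), not the full cart-pendulum; the saturated actuator is an illustrative assumption that makes the basin finite:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import place_poles

# Toy inverted pendulum: theta_ddot = sin(theta) + u, upright at theta = 0.
# Linearization at the upright point: theta_ddot = theta + u.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = place_poles(A, B, [-1.0, -2.0]).gain_matrix

U_MAX = 0.5  # illustrative actuator limit; this is what bounds the basin

def closed_loop(t, x):
    u = np.clip(float(-K @ x), -U_MAX, U_MAX)
    return [x[1], np.sin(x[0]) + u]

def converges(theta0):
    sol = solve_ivp(closed_loop, [0, 30], [theta0, 0.0], rtol=1e-6)
    return np.linalg.norm(sol.y[:, -1]) < 1e-2

# Scan initial angles (zero initial velocity) to find the edge of the basin
angles = np.linspace(0.0, np.pi, 31)
basin = [th for th in angles if converges(th)]
print(f"stabilized from up to ~{basin[-1]:.2f} rad off upright")
```

Sampling scales poorly with state dimension, which is why analytical tools like Lyapunov functions are so valuable when you can find them.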
The cool thing here was that y_ref did not show up in designing the controller. We only used y_ref when solving with ode45, which to me means that the final state does not have an effect on the controller. So cool. Someone correct me if I'm wrong. (Maybe it's a result of the equivalence between reachability, controllability, etc.)
Good explanation! Thank you very much!
Are there conditions on m, M, and L that would make this uncontrollable?
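One way to check empirically is to compute the rank of the controllability matrix over a sweep of parameters. A sketch in Python rather than MATLAB, following the structure of the linearized cart-pendulum matrices from the video (the exact sign conventions may differ, but signs don't change the rank test; the parameter ranges are illustrative):

```python
import numpy as np

def cartpend_lin(m, M, L, g=9.8, d=1.0, s=1):
    # Linearized cart-pendulum about the vertical equilibrium, structured
    # like the matrices in the video (sign conventions may differ)
    A = np.array([[0, 1, 0, 0],
                  [0, -d/M, m*g/M, 0],
                  [0, 0, 0, 1],
                  [0, -s*d/(M*L), -s*(m + M)*g/(M*L), 0]], dtype=float)
    B = np.array([[0], [1/M], [0], [s/(M*L)]], dtype=float)
    return A, B

def ctrb_rank(A, B):
    # Rank of the controllability matrix [B, AB, A^2 B, A^3 B]
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    return int(np.linalg.matrix_rank(C))

# Sweep random positive parameters: the rank stays 4, i.e. the linearized
# system is controllable for any physical m, M, L > 0 in this range
rng = np.random.default_rng(0)
ranks = {ctrb_rank(*cartpend_lin(m, M, L))
         for m, M, L in rng.uniform(0.5, 5.0, size=(100, 3))}
print(ranks)
```

A numerical sweep like this isn't a proof, but for this system the controllability matrix turns out to be full rank for all positive parameter values.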
Thanks a lot! Great explanation
Thank you, professor. Where can I get the example code?
I have a doubt about the initial values used in the simulation. The starting values are chosen close to the point where we want to stabilize, (pi, 0) or (0, 0). What happens if the starting values are far away from the point we want to stabilize? I guess our linear model is not valid anymore, and we wouldn't be able to guarantee that we can reach the quasi-stable equilibrium (pi, 0). Are there any general rules on this? If the starting state is deep in the nonlinear region (where the linear approximation doesn't apply), can we still use the linear controller?
Well, at some point you will have to use nonlinear control, which is much more complicated than linear control. Or you can set up a trajectory to get near your desired point and control your system so that it follows this trajectory by linearizing several times along its path. When you get close enough, you can start stabilizing the point using what he did.
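The "linearize several times along the path" idea can be sketched by computing numerical Jacobians of the nonlinear dynamics at a few operating points and designing a gain for each, i.e. simple gain scheduling. A toy pendulum in Python (the dynamics, waypoints, and pole locations are all illustrative assumptions, not the video's system):

```python
import numpy as np
from scipy.signal import place_poles

def f(x, u):
    # Toy pendulum: state [theta, theta_dot], torque input u
    return np.array([x[1], np.sin(x[0]) + u])

def jacobians(x0, u0, eps=1e-6):
    # Numerical Jacobians A = df/dx, B = df/du via central differences
    n = len(x0)
    A = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    B = ((f(x0, u0 + eps) - f(x0, u0 - eps)) / (2 * eps)).reshape(n, 1)
    return A, B

# Gain schedule: one K per operating point along the path to upright (0)
schedule = []
for theta in [np.pi, 2.0, 1.0, 0.0]:   # illustrative waypoints
    A, B = jacobians(np.array([theta, 0.0]), 0.0)
    K = place_poles(A, B, [-2.0, -3.0]).gain_matrix
    schedule.append((theta, K))
```

At run time you would switch (or interpolate) between the gains as the state passes each waypoint, then let the final K stabilize the fixed point as in the video.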
Thanks Professor! How can I generate the code to run the controller in Arduino?
Good question. You can start by downloading the code at databookuw.com. Also, Brian Douglas has some great stuff on control implementations in hardware.
Thank you for sharing your knowledge, professor. If I had a real inverted-pendulum (IP) system and wanted to control it through its linear velocity (instead of force), how could I get my linear mathematical model? Could I do it just by obtaining my B matrix by differentiating my state equation with respect to linear velocity? Thank you very much.
Can you help me with the discretization of this system? In particular, which sample rate should I use in the c2d command in MATLAB? Thanks!
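A common rule of thumb is to sample 10 to 30 times faster than your fastest closed-loop pole: if that pole is at s = -p, its time constant is 1/p seconds, so Ts ≈ 1/(10p) or smaller. A Python sketch using SciPy's cont2discrete (the analogue of MATLAB's c2d) on an illustrative system, not the cart-pendulum:

```python
import numpy as np
from scipy.signal import cont2discrete

# Illustrative closed-loop system with poles at s = -2 and s = -3
A = np.array([[0.0, 1.0], [-6.0, -5.0]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)
D = np.zeros((2, 1))

fastest = max(abs(np.linalg.eigvals(A).real))   # 3 rad/s here
Ts = 1.0 / (10.0 * fastest)                     # rule of thumb: 10x faster

Ad, Bd, Cd, Dd, dt = cont2discrete((A, B, C, D), Ts, method='zoh')

# Sanity check: under zero-order hold, discrete poles are exp(lambda * Ts)
print(np.sort(np.linalg.eigvals(Ad)))
print(np.sort(np.exp(np.linalg.eigvals(A) * Ts)))
```

If the discrete poles land very close to 1 on the unit circle, you are oversampling; if they land near 0 or the response looks jagged between samples, the rate is too slow.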
Hey professor! Your videos are amazing. I would like to know where I can download the MATLAB code that you use in these videos. It would be very helpful for a project I'm working on at my university in Argentina. Thanks for everything!!
Dear Prof, where can I get the MATLAB scripts you are using for teaching in these videos, i.e. sim_cartpend, lqr_cartpend, etc.?
Thank you, professor. I can't open the link to the code. Is there another link to download it?
In this part of the code:
[t,y] = ode45(@(t,y)cartpend(y,m,M,L,g,d,-K*(y-[4; 0; 0; 0])),tspan,y0);
why do you use the vector [4; 0; 0; 0] when s=1 and [1; 0; pi; 0] when s=-1? Is this from the linearization process?
I got it! It is explained later.
How do I visualize it? How do I get the drawcartpend_bw file? :)
where can i find the code?
Can I get this MATLAB code?
What do controls engineers have in common with strip club managers? They both care about optimizing pole placement.
Thank u thank u thank u
Can I have the MATLAB code, please?
Great lecture but, How is he writing like that? What black magic is this?
He is writing on glass that is between him and the camera. Normally the writing would appear backwards, but the video has been flipped in editing software