Watching this at 2 am before sleeping became a habit for me. And it has worked, I wake up in the morning with a clear mind of what you just taught me.
Very cool!
Watching this cleared my mind too. I don't remember ANYTHING
You're so powerful that the cow is in the sky.
What is fascinating about this series of lectures is that you always link the abstract mathematical quantities to an actual physical interpretation. This makes the concepts much easier and more familiar to understand. Thank you for taking the time to make such helpful lectures! Looking forward to watching the data-driven control series. Thank you!
I ostensibly learned all this in grad school...and then forgot it all in the intervening 15 years when I went off and worked on other things. But now that I'm back to designing control systems, this series of lectures has really helped me get the rust off my skills!
insanely high-quality teaching!
You are a legend. I was wondering what I was doing with control systems, poles, eigenvalues... I wasn't getting the practical examples. But hey, you helped me. Thanks a bunch. I hope every enthusiastic student finds a prof like you. Good job man 👍🏻
Writing words backwards on the board is quite a job! Great video profe!
The video is mirrored ;)
Teaching complicated problem in easy way. Thank you Professor!
Dear Professor, Great Job!
Glad you liked it!
I like the way you presented the cost function. The intuitive explanation is the key to your unique approach, and it is what amazed me the most. Well done.
Thank you, Sir. I have seen the whole series and it has cleared up a lot of my concepts about control theory. Your videos are great, and your way of teaching complex things simply is much appreciated. Thanks again.
If you don't have MATLAB (like me), the Python control library mimics most of the MATLAB commands. Just be sure to do things like ensure your B matrices are shaped like (-1, 1), i.e. column vectors.
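For readers without either package, the gain that MATLAB's lqr() (and python-control's control.lqr()) returns can also be sketched with SciPy's Riccati solver alone. The double-integrator plant below is a made-up stand-in, not the cart-pendulum from the video:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant: x_ddot = u, state = [position, velocity]
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])   # note the (-1, 1) column shape, as the comment warns
Q = np.eye(2)           # state penalty
R = np.array([[1.0]])   # actuation penalty

# Solve the continuous algebraic Riccati equation, then K = R^-1 B^T S,
# which is the gain lqr() returns.
S = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ S)

# Closed-loop dynamics A - B K should be stable (eigenvalues in the left half-plane)
eigs = np.linalg.eigvals(A - B @ K)
print(np.all(eigs.real < 0))  # True
```

For this particular plant the optimal gain works out to K = [1, sqrt(3)], which is a handy check that the solver was called correctly.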
Can you tell me what values I need to put in the matrices A, B, C, D?
The line where he said, "It's interesting (taking a small pause)... and it's complicated": this is the situation we are facing 😅😅😄👍🏾
Thanks! :)
Thank you Mr. Brunton!
Such a wonderful series!!! Thank you so much, professor. I just wanted to ask whether LQR's performance leaves room for improvement. If so, can it be improved by combining it with another controller?
Hi Steve! That was a great introduction to LQR. Is there any chance you could share the MATLAB code for the inverted pendulum? I would be excited to see it work. Thanks!
Thanks! All code is available at databookuw.com under the CODE.zip link
Wonderful lesson!
Hi Steve, it's a super video on LQR, but there is one point I can't understand: for the objective function J, why do we use x rather than (x(t) - setpoint(t)) to minimize J? We want x to get close to our setpoint. @steve brunton
In my MPC lecture, eigenvalues were only said to be stable if they are
Thank you for the video, sir! Really a good explanation of LQR. I have a question though. At 9:20 you compute the eigenvector of the most stable eigenvalue and mention that the most stabilizing directions are x_dot and theta_dot, so aggressive control on x_dot and theta_dot would really improve performance. My question is: wouldn't it be a good idea to have high values in the 2nd and 4th diagonal entries of the Q matrix, since these correspond to x_dot and theta_dot, and lower values in the 1st and 3rd entries?
Quick question: how do I change my matrices A, B, C, and D in my state-space block in Simulink so that K is taken into account? I am currently simply replacing A with A - BK for my full state, but it looks wrong.
This is awesome. Can you please help me with the complete MATLAB code?
Thank you, sir, this video series was very helpful for getting an introductory idea of control theory. It would be very nice if you could kindly provide the sources where we can learn more about the mathematics behind these concepts. Thanks again for this wonderful series.
I think Kailath gives an explanation
Amazing video. Is there a value of R (the penalty on motor usage) for which the system can't find an ideal K (linear feedback controller)? Intuitively, I can imagine that if R were too large, the cart simply couldn't move fast enough to keep the pendulum up (in other words, lqr() couldn't make all the real parts of the eigenvalues negative).
Good video & effort. I hope you are going to demonstrate MPC on a real application like this one, supported by code & Simulink in MATLAB, if available.
How can we add disturbance and measurement noise to the Simulink model?
I can't help but wonder whether you're writing everything backwards behind that screen so we can see it normally. Is that the case?
Great job. Your videos help a lot. Please provide the link to matlab code . Thanks
When in doubt, place the closed-loop poles on the negative real axis in the s-domain. If the response isn't fast enough, move the closed-loop poles further left along the negative real axis. I need to figure out how to do this in Python. LQR doesn't worry about keeping the closed-loop poles on the negative real axis, because it places the closed-loop zeros close to the closed-loop poles, effectively canceling them out or reducing their effect. One thing that was mentioned is the resolution of the feedback: in simulation the feedback resolution is infinitely fine, whereas in reality it isn't. This limits how aggressive the gains can be.
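The "how to do this in Python" part of the comment above can be answered with SciPy's pole-placement routine. A minimal sketch on a hypothetical double-integrator plant (not the cart-pendulum from the video):

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical double-integrator plant: states = [position, velocity]
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Place the closed-loop poles on the negative real axis, as suggested above;
# to speed up the response, make these values more negative.
result = place_poles(A, B, [-2.0, -3.0])
K = result.gain_matrix

# The eigenvalues of A - B K now sit at the requested pole locations.
print(np.sort(np.linalg.eigvals(A - B @ K).real))  # approximately [-3., -2.]
```

This is the Python counterpart of MATLAB's place() command; the LQR approach in the video instead picks the pole locations implicitly through Q and R.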
Excuse me sir, I have a question. In a control system using LQR, the output will be the gain matrix K with dimensions 1x2. What I want to ask is:
1. What is the meaning of the matrix values K11 and K12?
2. How do I use those K values for implementation on a DC motor?
I have already learned how to design a control system for a DC motor using Simulink in MATLAB, but I couldn't implement it on the DC motor because I don't know the connection between the K matrix values and the DC motor.
Thanks for the video. But I have one question totally unrelated to it: how are you writing on the board? Have you trained yourself to write inverted?
Do the Q and R matrices correspond to the matrices of the same name in the Kalman filtering context, where Q is process noise and R is measurement noise?
Very, very good explanation. Thanks a lot. Could you please share the code? I didn't find any link on your website.
Code at databookuw.com
Steve Brunton thank you so much professor.
Sir, according to what do we change the Q matrix's theta and theta-dot values, which are 10 and 100? For aircraft stability, should I use the same Q matrix that you used?
How would we know if our motor is so weak that it can't exert enough force to balance the cart under the given constraints? Is it given by checking the values of K in (A - BK)x after solving for it?
Great videos! I am wondering why the input vector is 4-dimensional. Isn't it the case that we can only control the acceleration in x? Is it just for this example to explain LQR, or am I missing something?
How do I get a black background in MATLAB? :OOO
Hi professor, I saw that the code later uses K*(y - y_des) for a non-zero fixed point. The cost function for lqr is J = x'Sx. If we calculate K and S using lqr without any information about the fixed point, does that mean the cost J is constant regardless of which fixed point I want to stabilize?
Thank you for helping us practically understand and visualize the concept.
I had one question though: while implementing it, when I gave the system a step input, it turns out the cart doesn't move one unit, and in fact there is some steady-state error. Should that error be there, and if so, can LQR alone not fix it?
If you want no steady-state error for a step input, I think you would need to additionally implement an integrator in the system.
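One common way to realize the integrator idea from the reply above is to augment the state with the integral of the tracking error and run LQR on the augmented system. A minimal sketch under that assumption, again on a hypothetical double-integrator plant rather than the pendulum from the video:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant; the output y is the position.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Augment the state with z = integral of (y - r), so the control law
# u = -K_aug [x; z] contains integral action that drives y -> r.
n, m = A.shape[0], B.shape[1]
A_aug = np.block([[A, np.zeros((n, 1))],
                  [C, np.zeros((1, 1))]])
B_aug = np.vstack([B, np.zeros((1, m))])

Q = np.eye(n + 1)   # penalize the states and the integrated error
R = np.eye(m)
S = solve_continuous_are(A_aug, B_aug, Q, R)
K_aug = np.linalg.solve(R, B_aug.T @ S)

# A stable augmented closed loop implies zero steady-state error
# for a step reference, since z can only settle when y = r.
print(np.all(np.linalg.eigvals(A_aug - B_aug @ K_aug).real < 0))  # True
```

The plain LQR in the video has no such integrator state, which is consistent with the residual offset the commenter observed.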
Thank you Sir for the clear explanation! I would like to ask a question about the last simulation, made with R=10. I noticed that the cart does not reach the fixed point, at least as regards the first state x, which is supposed to reach x=1. My hypothesis is that, since we set the actuation as really expensive, the control variable must be kept low enough and, as a consequence, the time needed to reach the fixed point is longer than 10, which is the time we set as "tspan" in the MATLAB code. Is this reasoning correct?
Can anybody tell me what values I need to put in the matrices A, B, C, D?
How could you put in the limitations of a motor, for example? Say in a practical assignment you had a bunch of motor options: how would you approach calculating the ideal eigenvalues for each of the motors (with their speed limitations) so that you can trade off price against time? I'm asking because motor prices are often not linear in their power, nor do they cover the entire spectrum of possibilities, unless you're willing to build one from scratch.
This is a really interesting question, and there are a lot of interesting offshoots. Including the real cost of hardware with their performance is more of a high-level multi-objective optimization problem. All big companies that design complex systems and control algorithms (think about a GE engine) need to perform these large optimizations to balance design and cost tradeoffs. More generally, in LQR, it is difficult to put in some types of limitations on the hardware. Model predictive control is a very flexible framework to incorporate some of these constraints directly. Video on MPC: ruclips.net/video/YwodGM2eoy4/видео.html
@@Eigensteve That seems incredibly interesting! Model predictive control feels very close to a sliding-window algorithm in programming, where you re-evaluate your options every time you take a step. Is it right to say that both of these methods work very well in cases where there is ideally a straight path between the initial position and the goal, but would break down if a limitation stopped them from taking it?
@@Eigensteve Sadly I have learned MATLAB from scratch and implemented an inverted pendulum over the last two days, so I am quite tired. But would you mind if I asked a few more questions like this by email? I already have quite a few too many for a YouTube comment section.
@@alexandermaverick9474 Interesting question. I'm not an expert in path planning around obstacles, but I imagine that if your MPC horizon is long enough to see around the obstacle, it might still work.
@@alexandermaverick9474 No problem, although it might take me a while to respond.
amazing!
Sir, can you show how we can implement a data-driven LQR, i.e. when the model is not known? Which is the best method to go for?
Thanks professor! I'm trying to use the MATLAB command [N,D] = ss2tf(A,B,C,D) to get the transfer function, but I don't know which C and D matrices to use. Maybe C = [1 0 0 0]; D = zeros(size(C,1), size(B,2))?
Did you get it right? If yes, can you help me with something similar?
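The guess in the question above (C picking out one state, D all zeros) is the standard choice when the output is the first state. A sketch of the same idea in Python with SciPy's ss2tf, on a hypothetical 2-state double integrator rather than the 4-state pendulum, which works the same way:

```python
import numpy as np
from scipy.signal import ss2tf

# Hypothetical double-integrator plant: states = [position, velocity]
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
# C selects which state is the output; D is the feedthrough, usually zero.
C = np.array([[1.0, 0.0]])                 # output = first state (position)
D = np.zeros((C.shape[0], B.shape[1]))     # same shape rule as in the question

num, den = ss2tf(A, B, C, D)
# For this plant the transfer function is G(s) = 1/s^2,
# i.e. num ~ [[0, 0, 1]] and den = [1, 0, 0].
print(num, den)
```

Changing C changes which state's transfer function you get, so for the 4-state pendulum, C = [1 0 0 0] gives the cart position and C = [0 0 1 0] the pendulum angle.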
Why is the second column of his A matrix not [1,0,0,0] but more complicated expressions?
Because he is considering drag between the floor and the cart
🙏
:)
Best!
How do I get your Matlab code?
Can u show me the codes?
Is he writing backwards?
matlab code?
matlab people thinking they're programmers:
Sorry, I could not find the code. Can you please help me?
databookuw.com/
It's below the big picture of the book, above the authors' portraits. You have MATLAB and Python options for the code.
@@andrewsoong8817 Thanks man
You speak like "Sheldon Cooper".