10:10 Convex functions
12:50 Examples on R
15:23 Examples on R^n and R^mxn
20:09 Restriction of a convex function to a line
28:43 Extended-value extension
31:09 First-order condition
35:39 Second-order conditions
37:20 Examples
49:38 Epigraph and sublevel set
52:10 Jensen's inequality
57:21 Operations that preserve convexity
59:17 Positive weighted sum & composition with affine function
1:02:05 Pointwise maximum
1:04:39 Pointwise supremum
1:08:13 Composition with scalar functions
1:13:31 Vector composition
You really deserve more than just likes
@@nileshdixit9672 Honestly, I put them up so they help me revise topics quickly. Happy that they're helping others too.
keep liking this one to keep it up
start from 10:10
No problem. I usually play at 1.5 speed so I will get ready after 10 minutes.
In case anyone else was confused: at 41:20 the "softmax" he describes there is different from the "softmax" in deep learning. The deep learning softmax should probably be called something like "softargmax" instead.
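In case a concrete comparison helps, here is a minimal numpy sketch of the two (the function names are my own, just for illustration): the lecture's "softmax" is the scalar log-sum-exp, a smooth convex approximation of max, while the deep-learning "softmax" returns a probability vector that approximates argmax - in fact it is the gradient of log-sum-exp.

```python
import numpy as np

def log_sum_exp(x):
    # The lecture's "softmax": a smooth, convex approximation of max(x).
    m = np.max(x)                              # shift for numerical stability
    return m + np.log(np.sum(np.exp(x - m)))

def softargmax(x):
    # The deep-learning "softmax": a probability vector approximating argmax;
    # it is exactly the gradient of log_sum_exp.
    e = np.exp(x - np.max(x))
    return e / e.sum()

x = np.array([1.0, 2.0, 5.0])
print(log_sum_exp(x))   # slightly above max(x) = 5
print(softargmax(x))    # most of the mass on index 2, the argmax
```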
15:50 Norm
18:00 trace (inner product)
28:43 Extended-value extension
31:12 differentiable functions
The real 'inspirational' essence is what he says at 32:50 and keeps saying for two minutes or so. Thanks for sharing.
Professor Boyd mentions around 20:00 that the spectral norm of a matrix X is a very complicated function of X, as the square root of the largest eigenvalue of XᵀX. I would mention, however, that this norm has a very simple geometric interpretation -- it is the maximum factor by which X can "stretch" a vector through multiplication. Just as the largest eigenvalue of a symmetric matrix is the maximum factor by which the matrix can stretch a vector if you don't allow for rotation, the largest singular value is the most that the matrix can stretch any vector if you do allow for rotation. It therefore also has the interpretation of the length of the longest semi-axis of the ellipsoid which is the image of the unit L2 ball under (left) multiplication by X.
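A quick numerical sanity check of that picture, if it helps (a numpy sketch; sampling random unit vectors only approximates the true maximum stretch):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 5))

# Spectral norm = largest singular value = sqrt of largest eigenvalue of X^T X.
spec = np.linalg.norm(X, 2)

# Maximum "stretch factor" ||Xv|| over many random unit vectors v.
V = rng.standard_normal((5, 100_000))
V /= np.linalg.norm(V, axis=0)
stretch = np.linalg.norm(X @ V, axis=0).max()

print(spec)      # largest singular value of X
print(stretch)   # approaches spec from below as more directions are sampled
```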
Great teacher and wonderful sense of humor!
Professor Boyd is super smart and definitely a researcher who has done a large number of rigorous proofs. But even then, no one comes close to David Snider, my professor at the University of South Florida, in conveying mathematics to engineering students, not even Prof. Boyd.
Snider is retired now; he wrote all the math books we studied in grad school.
Still, this course is awesome :). I love its rigor; it is very helpful for PhD students who have to come up with proofs for their theorems.
abuhajara Do you suggest a specific book that conveys it well enough to understand it properly?
Composition rule mnemonic (for f(x) = h(g(x))):
1) The rule only determines when f has the SAME convexity as h - there is no rule making f the opposite of h.
2) h has to be monotone.
3) The direction of monotonicity is an equality test on the convexities: if the convexity of g matches the convexity of h, h should be increasing; otherwise decreasing.
Outside of that there is no simple rule (a quick sanity check is sketched below).
Hm maybe not 😅
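If you want to sanity-check an application of the rules, Boyd's disciplined convex programming (DCP) analyzer implements exactly this composition analysis; a small sketch, assuming cvxpy is installed (note that is_convex() returning False only means the rules cannot certify convexity, not that the function is non-convex):

```python
import cvxpy as cp

x = cp.Variable()

# h = exp (convex, increasing), g = x^2 (convex)  =>  f = exp(x^2) is convex.
f1 = cp.exp(cp.square(x))
print(f1.is_convex())   # True

# h = square (convex but not monotone), g = log (concave): the rules give no
# verdict - and indeed (log x)^2 is not convex on R++.
f2 = cp.square(cp.log(x))
print(f2.is_convex())   # False
```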
Craving more of this kind of stuff.
I studied optimization, including linear algebra techniques, at Golden Gate University in a major called Computational Decision Analysis from 1999 to 2003. We used SAS, Unix, and Excel Solver.
so?
Thank you, Prof. Boyd!
this guy is a winner.
1:05:50 It is extremely useful to know if you are studying control theory.
2:06 That blink, that grimace.
I'm a Ph.D. student in electrical engineering. I'm studying this book by myself. These lectures are so helpful for me to start using convex optimization. How can I get the homework Professor Boyd is talking about?
You can find the textbook, assignments, and solutions on this page: see.stanford.edu/Course/EE364A/94
@@guoweih7339 thanks man
@@guoweih7339 thank you sir.
Did they change the assignments, since the answers are available?
@@rodfloripa10 No - students just have to realize that at this level, assignments are a tool for learning, not a tool for getting grades.
Good lectures. However, if you are just dropping by like me and want to skip the chatter about admin and classroom issues, start at the 10:10 mark.
Really Helpful.
Thank you for this nice lecture ❤
24:17 If you have no idea whether a function is convex or not - generate a few random lines, plot the restriction along them, and look!
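A minimal sketch of that trick (my own toy function; numpy and matplotlib assumed): plot the restriction g(t) = f(x0 + t*v) along a few random lines. A single non-convex-looking slice proves f is not convex; convex-looking slices are only evidence, not proof.

```python
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    # Toy function to eyeball: log-sum-exp, which is in fact convex.
    return np.log(np.sum(np.exp(x)))

rng = np.random.default_rng(0)
n = 5
t = np.linspace(-3, 3, 200)

for _ in range(4):                       # a few random lines x0 + t*v
    x0 = rng.standard_normal(n)
    v = rng.standard_normal(n)
    g = [f(x0 + ti * v) for ti in t]     # restriction of f to the line
    plt.plot(t, g)

plt.xlabel("t")
plt.ylabel("f(x0 + t v)")
plt.show()
```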
13:58 The condition R++ is important - x^3 is not convex on all of R.
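For the record, the second-derivative check (just the lecture's power-function example restated):

```latex
f(x) = x^3 \;\Rightarrow\; f''(x) = 6x
\begin{cases}
\ge 0, & x \ge 0 \quad \text{(convex on } \mathbf{R}_{++}\text{)}\\
< 0,  & x < 0 \quad \text{(not convex on all of } \mathbf{R}\text{)}
\end{cases}
```

More generally, x^a has f''(x) = a(a-1)x^(a-2), which is nonnegative on R++ exactly when a >= 1 or a <= 0.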
I feel like a noob. I am understanding the main points, but still, the examples are totally non-obvious.
Of course, it is a '300' class, so I should have expected that.
Shouldn't the determinant of the Hessian of the quadratic-over-linear function be exactly 0, rather than just greater than or equal to 0? 41:00
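Working it out for the lecture's example f(x, y) = x^2/y with y > 0, the determinant is indeed exactly 0, because the Hessian is a positive multiple of a rank-one outer product:

```latex
\nabla^2 f(x,y)
= \begin{bmatrix} \frac{2}{y} & -\frac{2x}{y^2} \\ -\frac{2x}{y^2} & \frac{2x^2}{y^3} \end{bmatrix}
= \frac{2}{y^3} \begin{bmatrix} y \\ -x \end{bmatrix} \begin{bmatrix} y \\ -x \end{bmatrix}^{T}
\succeq 0,
\qquad
\det \nabla^2 f = \frac{4x^2}{y^4} - \frac{4x^2}{y^4} = 0.
```

So writing det >= 0 on the slide isn't wrong, just not tight; the point is only that the Hessian is positive semidefinite.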
if Ryan Reynold would become a prof
He says 'very painful' as if he knows a lo-o-ot about pain! 😀
Is there an active link to the class notes that are presented in these lectures? It would be more leisurely to watch the videos and then write down notes afterwards.
web.stanford.edu/~boyd/cvxbook/ The book and the lecture slides are publicly available on the Stanford website.
I understand that Ax = λx means λ is an eigenvalue of A. So how can X^(-1/2)VX^(-1/2) have λ as an eigenvalue?
Didn't understand your question - is it:
1. Why is the eigenvalue of X the same as that of X^(-1/2)VX^(-1/2)?
OR
2. Why do eigenvalues exist for X^(-1/2)VX^(-1/2)?
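For context, this is the log-det calculation where those eigenvalues show up. Restricting to the line X + tV with X ≻ 0 and factoring X out:

```latex
g(t) = \log\det(X + tV)
     = \log\det\!\left( X^{1/2}\,(I + t\,X^{-1/2}VX^{-1/2})\,X^{1/2} \right)
     = \log\det X + \sum_{i=1}^{n} \log(1 + t\lambda_i),
```

where λ_i are the eigenvalues of X^(-1/2)VX^(-1/2). They are not eigenvalues of X; they exist because X^(-1/2)VX^(-1/2) is symmetric, and every symmetric matrix has real eigenvalues. Since each log(1 + tλ_i) is concave in t, g is concave, which is the point of the example.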
Someone is asking what diag(z) is? Then how can the professor be sure the students are following, if they don't actually understand such a primitive item in that equation?
The pace is ridiculously fast. The book (which is well written) must be read first before one can cope with these videos.
what book?
@10:20
49:48 epigraphs
hilarious prof.
A lefty :D
extended value extensions ruclips.net/video/kcOodzDGV4c/видео.html
32:00 wow, damn
This guy is the kind of professor I would avoid taking classes from at all costs. He spends too much time talking about stuff that isn't helpful for understanding the subject, like debating with himself whether a concept is obvious or not, and there's lots of hand-waving when he really should have drawn a graph on a piece of paper.
His Coursera convex optimization course is even worse. I'd recommend reading a book rather than watching his videos to learn the subject.
i think he's super entertaining and makes stuff make sense and seem interesting when it would otherwise seem dry
Like it or not, he is the superhero of convex optimization and has the best material. Even the trivial examples he gives may help one broaden their perspective.
Disagree, it's quite entertaining, engaging and doesn't skimp on the important stuff
Disagree. Knowing that it is hard to make things precise shows that he knows way too much. At the points where you say he is 'hand-waving', I suggest you delve deeper and you will appreciate why he said what he said.
The first ten minutes is total crap.