Constantine Caramanis
  • Videos: 156
  • Views: 397,034
Theorem of Alternatives from Separation
We show how to prove a theorem of alternatives using the basic separating hyperplane theorem (which we prove in a different lecture).
Views: 1,069

Videos

Separating Hyperplanes
6K views · 3 years ago
We prove the basic separating hyperplane theorem for closed convex sets: if X is closed and convex, and y is not in X, then there exists a vector c such that c'y \leq c'x for all x in X.
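
A minimal numeric sketch of this statement, assuming X is the closed unit ball: the projection proj_X(y) gives the separating direction c = proj_X(y) - y. This is only an illustration; the lecture itself carries the general proof.

```python
# Sketch: separate y from the closed unit ball X using c = proj_X(y) - y.
import numpy as np

rng = np.random.default_rng(0)

def proj_unit_ball(y):
    """Euclidean projection onto the closed unit ball."""
    norm = np.linalg.norm(y)
    return y if norm <= 1 else y / norm

y = np.array([2.0, 1.0])      # a point outside X
p = proj_unit_ball(y)         # closest point of X to y
c = p - y                     # candidate separating direction

# Check c'y < c'x for many points x sampled inside X.
xs = rng.normal(size=(10_000, 2))
xs /= np.maximum(np.linalg.norm(xs, axis=1, keepdims=True), 1.0)
assert np.all(c @ y < xs @ c)
print("c =", c, "separates y from the unit ball")
```
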
11.7 Lower Bounding Extension Complexity, Part V
386 views · 3 years ago
This lecture concludes our main lower bound result that shows that any extended formulation for the correlation polytope requires at least 1.5^n facets. We have previously shown that the extension complexity is lower bounded by the size of any covering of D(n) by valid sets. We show that the largest valid subset of D(n) is of size no more than 2^n. Since |D(n)|=3^n, this immediately implies our...
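
The counting step is simple arithmetic: a cover of D(n) by sets of size at most 2^n needs at least 3^n / 2^n = 1.5^n pieces. A small enumeration (illustrative only) confirming |D(n)| = 3^n:

```python
# Enumerate D(n) = {(A, B) : A, B subsets of [n], A and B disjoint} for small n
# and verify |D(n)| = 3^n; with valid sets of size at most 2^n, any cover of
# D(n) needs at least (3/2)^n of them.
from itertools import combinations

def disjoint_pairs(n):
    subsets = [frozenset(c) for r in range(n + 1) for c in combinations(range(n), r)]
    return [(A, B) for A in subsets for B in subsets if not (A & B)]

for n in range(1, 8):
    d = len(disjoint_pairs(n))
    assert d == 3 ** n   # each element is in A, in B, or in neither
    print(f"n={n}: |D(n)| = {d}, so a cover needs at least {d / 2**n:.2f} valid sets")
```
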
11.1 Extended Formulations, Part I
324 views · 3 years ago
This is the first of seven lectures on Extended Formulations and Extension Complexity. We give a positive result: we define the permutahedron, and show that while it has exponentially many facets, it is the projection of an (n^2 + n)-dimensional polytope defined by just n^2 + 3n constraints.
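
A sketch of what such an extended formulation can look like, assuming the standard doubly-stochastic construction x = Yv with v = (1, ..., n): n^2 + n variables and n^2 + 3n constraints. The scipy-based check below is illustrative, not taken from the lecture.

```python
# Sketch (for n = 3): the permutahedron as the projection of
#   { (x, Y) : x = Y v,  Y >= 0,  row sums of Y = 1,  column sums of Y = 1 },
# which has n^2 + n variables and n^2 + 3n constraints.  We maximize a linear
# function over it with scipy and expect a permutation of v = (1, 2, 3) back.
import numpy as np
from scipy.optimize import linprog

n = 3
v = np.arange(1, n + 1, dtype=float)
num_vars = n + n * n          # variable order: x_1..x_n, then Y row by row
A_eq, b_eq = [], []

for i in range(n):            # coupling constraints x_i = sum_j Y_ij * v_j
    row = np.zeros(num_vars)
    row[i] = 1.0
    row[n + i * n: n + (i + 1) * n] = -v
    A_eq.append(row)
    b_eq.append(0.0)
for i in range(n):            # row sums of Y equal 1
    row = np.zeros(num_vars)
    row[n + i * n: n + (i + 1) * n] = 1.0
    A_eq.append(row)
    b_eq.append(1.0)
for j in range(n):            # column sums of Y equal 1
    row = np.zeros(num_vars)
    row[n + j::n] = 1.0
    A_eq.append(row)
    b_eq.append(1.0)

# Maximize c'x (linprog minimizes, so negate).  The default variable bounds
# (0, infinity) give Y >= 0; x >= 0 is harmless here since x = Y v >= 0.
c_obj = np.zeros(num_vars)
c_obj[:n] = -np.array([2.0, -1.0, 0.5])
res = linprog(c_obj, A_eq=np.array(A_eq), b_eq=np.array(b_eq))
print("optimal x:", np.round(res.x[:n], 6))   # expect a permutation of (1, 2, 3)
```
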
11.6 Lower Bounding Extension Complexity, Part IV
173 views · 3 years ago
In this lecture we define the notion of a "valid" subset of D(n), the set of pairs of disjoint subsets. The idea of validity is critically linked to communication complexity and the relaxed (easier) version of the Face_Vertex game that Alice and Bob play. We show that if we can cover D(n) with r valid sets, then the non-deterministic communication complexity for the relaxed Face_Vertex game can...
11.5 Lower Bounding Extension Complexity, Part III
236 views · 3 years ago
This is the first of three lectures where we show that the extension complexity of the correlation polytope is at least 1.5^n. The proof follows the paper by Kaibel & Weltge, 2016.
11.4 Lower Bounding Extension Complexity, Part II
179 views · 3 years ago
In this lecture we give the key link between communication complexity and extended formulation complexity. The key is the introduction of the FACE_VERTEX problem. We show that if a polytope has an extended formulation with r inequalities, then the non-deterministic communication complexity of FACE_VERTEX(P) is at most log r.
11.2 Extended Formulations, Part II
211 views · 3 years ago
In this video we define precisely the extension complexity of a polytope. Then we prove the result of Yannakakis from 1991 that links nonnegative rank and extension complexity. Specifically, Yannakakis shows that the nonnegative rank of the slack matrix he defines is exactly the extension complexity: given a nonnegative factorization of the slack matrix of size r, it can be used to obtain an ...
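
As a toy illustration of the object in this statement (not from the lecture), here is the slack matrix of the unit square: rows are facet inequalities, columns are vertices, and the entries are the nonnegative slacks. By the theorem it is the nonnegative rank of this matrix, not its ordinary rank, that equals the extension complexity.

```python
# Slack matrix of the unit square [0,1]^2: S[i, j] = b_i - a_i' v_j for facet
# inequalities a_i' x <= b_i and vertices v_j.  (Illustrative example only.)
import numpy as np

A = np.array([[-1.0,  0.0],   # -x <= 0
              [ 0.0, -1.0],   # -y <= 0
              [ 1.0,  0.0],   #  x <= 1
              [ 0.0,  1.0]])  #  y <= 1
b = np.array([0.0, 0.0, 1.0, 1.0])
V = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)   # vertices as rows

S = b[:, None] - A @ V.T      # facets x vertices, entrywise nonnegative
print(S)
print("ordinary rank:", np.linalg.matrix_rank(S))   # 3; the nonnegative rank can be larger
```
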
11.3 Lower Bounding Extension Complexity, Part I
249 views · 3 years ago
The next sequence of lectures develops tools, specifically communication complexity, that can be used to lower bound the extension complexity of a polytope. In this lecture we introduce the notion of non-deterministic communication complexity. I learned this material from the notes by Tim Roughgarden.
10.7 Continuous Greedy, Part II
227 views · 3 years ago
We finish our series on the Continuous Greedy Algorithm. In the last lecture, we showed that for x(1), the output of the continuous greedy algorithm, and F the multilinear extension, we have F(x(1)) \geq (1-1/e) OPT = (1-1/e) F(x*). In this section, we show that we can find an independent set S so that f(S) = F(S) \geq F(x(1)). We do this by taking the decomposition of x(1) naturally produced by the contin...
10.6 Continuous Greedy, Part I
462 views · 3 years ago
The next two lectures revisit the problem of maximizing a monotone submodular function subject to a matroid constraint. Whereas Greedy obtains a 1/2-approximation, we show that by using the multilinear extension we can improve this to (1-1/e), which is unimprovable unless P = NP (because, e.g., of the Max-k-Cover problem, as discussed in a previous lecture). We introduce the continuous greedy ...
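
A discretized sketch of the continuous greedy idea on a tiny instance (illustrative assumptions: a small coverage function, a uniform matroid of rank k, and an exactly computed multilinear extension): at each step, move x a small amount toward the vertex of the matroid polytope that maximizes the inner product with the gradient of F.

```python
# Discretized continuous greedy on a tiny instance: f is a coverage function,
# the constraint is a uniform matroid of rank k, and the multilinear extension
# F is computed exactly by enumeration (only feasible because n is tiny).
from itertools import combinations
import numpy as np

covers = [{1, 2}, {2, 3}, {3, 4}, {1, 4}]
n, k, steps = len(covers), 2, 100

def f(S):
    return len(set().union(*(covers[i] for i in S))) if S else 0

def F(x):
    return sum(f(S) * np.prod([x[i] if i in S else 1 - x[i] for i in range(n)])
               for r in range(n + 1) for S in combinations(range(n), r))

def grad(x):
    g = np.zeros(n)
    for i in range(n):
        hi, lo = x.copy(), x.copy()
        hi[i], lo[i] = 1.0, 0.0
        g[i] = F(hi) - F(lo)            # exact partial derivative of a multilinear F
    return g

x = np.zeros(n)
for _ in range(steps):
    v = np.zeros(n)
    v[np.argsort(-grad(x))[:k]] = 1.0   # best vertex of the uniform-matroid polytope
    x += v / steps
print("x(1) =", np.round(x, 3), " F(x(1)) =", round(F(x), 3))
print("best integral f(S) with |S| = k:", max(f(S) for S in combinations(range(n), k)))
```
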
10.5 Continuous Extensions, Part II
366 views · 3 years ago
We define the multilinear extension, and prove several important properties when it is derived from a monotone or submodular function. In particular: the gradient of F is nonnegative when f is monotone, and the off-diagonal elements of the Hessian are nonpositive (with zeros on the diagonal) when f is submodular. These imply that F is non-decreasing along any direction d \geq 0, F is concave along any direction d \geq 0, and F is convex...
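
A small exact computation illustrating these two properties for a coverage function (monotone and submodular); the instance and names below are made up for the example.

```python
# Exact multilinear extension of a small coverage function, with numeric checks:
# nonnegative gradient (monotonicity) and nonpositive off-diagonal second
# derivatives (submodularity).
from itertools import combinations
import numpy as np

covers = [{"a", "b"}, {"b", "c"}, {"c"}]   # f(S) = size of the union of covers[i], i in S
n = len(covers)

def f(S):
    return len(set().union(*(covers[i] for i in S))) if S else 0

def F(x):
    """Multilinear extension: E[f(R)] where R includes each i independently w.p. x_i."""
    return sum(f(S) * np.prod([x[i] if i in S else 1 - x[i] for i in range(n)])
               for r in range(n + 1) for S in combinations(range(n), r))

def fix(x, i, value):
    y = x.copy()
    y[i] = value
    return y

x = np.random.default_rng(1).uniform(size=n)
for i in range(n):
    grad_i = F(fix(x, i, 1.0)) - F(fix(x, i, 0.0))
    assert grad_i >= -1e-12                       # monotone f  =>  dF/dx_i >= 0
    for j in range(n):
        if j != i:
            hess_ij = (F(fix(fix(x, i, 1.0), j, 1.0)) - F(fix(fix(x, i, 1.0), j, 0.0))
                       - F(fix(fix(x, i, 0.0), j, 1.0)) + F(fix(fix(x, i, 0.0), j, 0.0)))
            assert hess_ij <= 1e-12               # submodular f  =>  d2F/dx_i dx_j <= 0
print("checks passed at x =", np.round(x, 3))
```
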
10.4 Continuous Extensions, Part I
666 views · 3 years ago
We give the convex and concave closures for a set function f. We show that they merit their name: the convex closure is indeed a convex continuous extension of f, and similarly, the concave closure is concave. These two may be hard to evaluate. This motivates the definition of the Lovasz extension, which is easy to evaluate. We show that the Lovasz extension coincides with the convex closure if...
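
A sketch of the "easy to evaluate" formula for the Lovasz extension: sort the coordinates in decreasing order and take a telescoping combination of f on the prefix sets (assuming f(\emptyset) = 0). The cut function below is only an illustrative choice of submodular f.

```python
# The Lovasz extension via sorting: order the coordinates of x decreasingly and
# take a telescoping combination of f on the prefix sets (assumes f(empty) = 0).
# Checked against f on 0/1 vectors for a small graph cut function.
import numpy as np

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]

def cut(S):
    """Number of edges leaving S -- a standard submodular example."""
    S = set(S)
    return sum(1 for u, v in edges if (u in S) != (v in S))

def lovasz(f, x):
    x = np.asarray(x, dtype=float)
    order = np.argsort(-x)                 # coordinates in decreasing order
    value, prefix = 0.0, []
    for k, i in enumerate(order):
        prefix.append(i)
        nxt = x[order[k + 1]] if k + 1 < len(order) else 0.0
        value += (x[i] - nxt) * f(prefix)  # telescoping weights x_(k) - x_(k+1)
    return value

for S in [set(), {0}, {1, 2}, {0, 2, 3}]:  # on indicator vectors it agrees with f
    ind = [1.0 if i in S else 0.0 for i in range(4)]
    assert abs(lovasz(cut, ind) - cut(S)) < 1e-12
print("lovasz([0.9, 0.5, 0.5, 0.1]) =", lovasz(cut, [0.9, 0.5, 0.5, 0.1]))
```
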
10.3 Submodular Functions, Part III
493 views · 3 years ago
In this lecture we consider the problem of maximizing a monotone submodular function over a general matroid constraint. We show that the Greedy algorithm provides a 1/2 approximation.
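
A sketch of the greedy algorithm in this setting, written against an abstract independence oracle; the coverage function and the partition matroid below are illustrative choices, not the lecture's examples.

```python
# Greedy with a matroid independence oracle: repeatedly add the feasible element
# with the largest marginal gain.
covers = {0: {"a", "b"}, 1: {"b"}, 2: {"c", "d"}, 3: {"d", "e"}}
parts = [{0, 1}, {2, 3}]                      # partition matroid: at most 1 per part

def f(S):
    return len(set().union(*(covers[i] for i in S))) if S else 0

def independent(S):
    return all(len(S & part) <= 1 for part in parts)

def greedy(ground, f, independent):
    S = set()
    while True:
        feasible = [e for e in ground - S if independent(S | {e})]
        if not feasible:
            return S
        best = max(feasible, key=lambda e: f(S | {e}) - f(S))
        if f(S | {best}) - f(S) <= 0:         # stop when nothing improves
            return S
        S.add(best)

S = greedy(set(covers), f, independent)
print("greedy picks", S, "with value", f(S))  # within 1/2 of the best independent set
```
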
10.2 Submodular Functions, Part II
583 views · 3 years ago
In this lecture we give the basic greedy algorithm, and give the proof by Wolsey, Nemhauser and Fisher that if \mathcal{I} is the uniform matroid and f is a monotone submodular function, then the Greedy algorithm is within a (1-1/e) factor of optimal.
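
For the uniform matroid the constraint is just a cardinality bound, so greedy is especially simple. The brute-force comparison below, on a made-up max-coverage instance, checks the (1-1/e) guarantee numerically.

```python
# Cardinality-constrained (uniform matroid) greedy on a tiny max-coverage
# instance, compared by brute force against the optimum to check the
# (1 - 1/e) guarantee.
from itertools import combinations
import math

covers = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}, {2, 5}]
k = 2

def f(S):
    return len(set().union(*(covers[i] for i in S))) if S else 0

chosen = []
for _ in range(k):                       # greedy: best marginal gain each round
    best = max((i for i in range(len(covers)) if i not in chosen),
               key=lambda i: f(chosen + [i]))
    chosen.append(best)

opt = max(f(S) for S in combinations(range(len(covers)), k))
print(f"greedy = {f(chosen)}, opt = {opt}")
assert f(chosen) >= (1 - 1 / math.e) * opt
```
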
10.1 Submodular Functions, Part I
2.6K views · 3 years ago
9.13 Matroid Intersection, Part VI
177 views · 4 years ago
9.12 Matroid Intersection, Part V
222 views · 4 years ago
9.11 Matroid Intersection, Part IV
312 views · 4 years ago
Finishing up Newton's Method
153 views · 4 years ago
5.7 Mirror Descent Part 3
2.8K views · 4 years ago
5.1 Proximal and Projected Gradient Descent
20K views · 4 years ago
3.1 Intro to Gradient and Subgradient Descent
11K views · 4 years ago
1.1 Introduction to Optimization and to Me
39K views · 4 years ago
9.3 Interior Point Methods - Part III
744 views · 4 years ago
3.4 Convergence Rates
8K views · 4 years ago
1.2 What these Lectures do and do not cover
9K views · 4 years ago
7.2 Newton's Method and Affine Transformations
1.9K views · 4 years ago
5.4 ISTA and FISTA
8K views · 4 years ago
3.3 Properties of Smooth and Strongly Convex Functions
7K views · 4 years ago

Comments

  • @vineetsingh5649 · 10 months ago

    Can you provide some book names or references for these lectures

  • @ElIrracional · 11 months ago

    In your example, the sum of lambdas is 1/2. Why can we claim that the sum of lambdas is always 1?

  • @tianrungou6486 · 1 year ago

    Your course is amazing! I have a weak foundation in matrix computation and time complexity (intuition as well), but you make everything comprehensible with examples, graphs, and actual matrix-valued functions.

  • @alirezajafarineysiani · 1 year ago

    Thank you.

  • @erickgomez7775 · 1 year ago

    I watched the past videos so I could understand the first minute of this video. One of the best instructors ever.

  • @tooniatoonia2830 · 1 year ago

    Great, thanks!

  • @tooniatoonia2830 · 1 year ago

    Great insight, thanks!

  • @tooniatoonia2830 · 1 year ago

    Excellent lecture, thanks!

  • @Stochine · 1 year ago

    At 30:16, why were you able to replace x_t with x_{t+1}, \lambda_{t} with \lambda_{t+1}, and d_{t} with d_{t+1}?

  • @pedramhaqiqi7030 · 1 year ago

    Hey, is there any way you can reference a solution to the absolute value minimization at 11:43? The subgradients would be a_i at each iteration; how does it not become the mean again?

  • @nicolascuadrado234 · 1 year ago

    Hi, thanks for this video, it's very clarifying. Correct me please if I'm wrong, but at 7:51 the result of the expectation should be the gradient of f(x), or am I wrong?

  • @QT-yt4db · 1 year ago

    Excuse me, but substituting the x to get f*(y) doesn't seem to get the formula as 1/2*(y-q)'Q_inv*(y-q), what happens to the q'x+c term after the substitution?

  • @petere9383 · 1 year ago

    If I may ask, the graph you have of |10x1| + |x2| doesn't seem correct to me. For example, at (0,-3) we should have f(x1,x2) be (0,3) having no negative values or am I missing something?

  • @davidtang2549 · 1 year ago

    The most easy-to-understand and clearly-explained course for optimization I've ever seen!

  • @petere9383 · 1 year ago

    Was there a previous course? He began talking about probability distributions and I don't think I have heard that previously

  • @titicaca-fx9cs · 1 year ago

    Are lecture notes and assignments available? Thanks!

  • @romolw6897 · 1 year ago

    Hi Constantine, thanks for the great video! Could you also explain why 1-1/e is tight for maximizing a submodular function under a uniform constraint?

  • @neardieborn464 · 1 year ago

    Very nice. I also think that Dr. Ahmad Bazzi's convex optimization lectures are very useful. You'll gain the knowledge and tools needed to tackle complex optimization problems with confidence. Whether you're a beginner or an experienced professional, this

  • @gokulakrishnancandassamy4995

    @20:27 I guess the optimality condition w.r.t x^{\hat} should have the term A^T * z. What do you think, prof? @constantine.caramanis

  • @kieferaland4689 · 1 year ago

    It was said, but initially missing in the notation: an ^n on the gradient of f. Thank you for these excellent videos.

  • @chenlouis5561 · 1 year ago

    Can I ask a question here? Suppose there is a multivariable convex function f(x1,x2,...xi), and suppose we know f(x1',x2',...xi')=c. Based on this, can we get the boundaries for x1 to xi such that f(x1,x2,...xi)<c?

  • @AJ-et3vf · 1 year ago

    Thank you for the lecture

  • @КатеринаКовальова-д3ц

    Thank you very much for these materials. They are awesome. I went through the whole list, but I cannot find a topic that is very relevant to Optimization: "Conjugate Direction Methods". Do you plan to add this part as well? Thank you for your time reading this comment)

  • @omnixchannel1621 · 1 year ago

    That is impossible to watch since you write really badly

  • @asimakiran9265 · 1 year ago

    Hi, if anyone has the solution to the problem (optimizing the sum of absolute values, if the function is convex), please share.

  • @peasant12345 · 1 year ago

    All square submatrices have det = -1, 0, or 1.

  • @arisioz · 2 years ago

    Is it actually the indicator function at 15:30? It appears to be the characteristic function, but I agree that they're very closely connected ideas. Thank you Professor Caramanis for all your lectures, you're awesome ❤

  • @krelly90277 · 2 years ago

    The handwriting is illegible.

  • @slx6666 · 2 years ago

    optimization

  • @arisioz · 2 years ago

    19:28 "Since this holds for any alpha1 and alpha2 that are non-negative it also holds for a1 and a2 that sum to 1 and therefore a cone has to be convex" Geometrically this makes sense if we fill the interior of the cone but how can we guarantee convexity for all cones. For example, the graph of y=|x| is a cone that is not convex; however, the locus of points (x,y) with y≥|x| is a convex cone. There's a whole wiki on convex cones so I assume you made this statement solely based on the definition above and not as a general truth? I'm a bit confused.

    • @arisioz · 2 years ago

      I just found this example too: drive.google.com/file/d/1lCPb48aW2kfd-yaOUmxKWILsCQ9UnvBh/view

  • @foxbat296 · 2 years ago

    Beautiful lecture... Teaching without slides takes more effort and leads to better understanding. Thanks!

  • @mctini4233 · 2 years ago

    People who might get lost, as I did, at minute 2:20 when he draws the simplex: it is a drawing in 3D, not 2D :)

  • @wyattyang9 · 2 years ago

    At 32:31, why does it not hold when \eta <= 2/\beta?

  • @mohamedppii1021 · 2 years ago

    Hello professor, thank you. Please show that this function f(x) = \epsilon ||x||^2 + 1/2 <Ax, x> - <b, x> is quadratic, where \epsilon is epsilon, A is a positive semidefinite matrix, and b is a vector.

  • @IamNumber4242 · 2 years ago

    Are there homework questions available for these lectures?

  • @from-chimp-to-champ1 · 2 years ago

    Mr Caramanis, I appreciate the clarity of the lectures, thank you so much! I complement them with my course on Optimization for Data Science. May I ask about the last example with the simplex, please? We have a 'triangle' domain, with the global minimum at one of the vertices (31:36). Why then does the GD method move in the other direction (to the left)? I thought that the gradient descent step is also directed towards the global minimum, and there would be no difference from the direction of the FW method. What causes GD to move in the wrong direction? Thank you very much!

  • @medad5413 · 2 years ago

    ❤️

  • @eyadshami2194 · 2 years ago

    Does the definition of the quadratic upper bound assume that f is convex? From what I understand, f is not required to be convex, just smooth, but to derive the quadratic upper bound we defined the function g and proved that g is convex. Another question: why did we define the function g in that exact form?

  • @skittles6486 · 2 years ago

    Sir, you are truly a genius as a teacher. I understood each and every concept of this video just at one go.

  • @eyadshami2194 · 2 years ago

    thank you for making these lectures open to everyone.

  • @lindseyfeng9610 · 2 years ago

    thank you so much this is really useful thanks for saving my finals and everything

  • @prajjwal2650 · 2 years ago

    Prof, are those notebooks available now?

  • @wongrufus · 2 years ago

    17:55 Both cannot happen at the same time, only one of those cases can be true at a time. 19:36 Does |R_1| + |R_2| = |R| or |R_1| + |R_2| <= |R|?

  • @wongrufus · 2 years ago

    23:28 We ignore the case where | A \intersect B | = 1 because we are only interested in the case where FV(A,B) = 1

  • @wongrufus · 2 years ago

    16:48 The linear constraint we are referring to is the constraint expressed in terms of y, not the quadratic version expressed in terms of x.

  • @wongrufus · 2 years ago

    14:10 I believe that's missing a square?

  • @wongrufus · 2 years ago

    19:39 Note the overload of notation, in 18:47, G is the index of the facet (i.e. the index of the inequality), while in 19:39, G is the set defined by the corresponding inequality

  • @wongrufus · 2 years ago

    21:18 So we've basically converted the permutahedron formulation using an exponential number of sigmas, into one that uses an nxn matrix Y.

  • @wongrufus · 2 years ago

    16:46 Notice how the structure of x*, $f(e1), f(e1+e2) - f(e1)$, etc., mirrors the structure of the Lovasz extension $S_0 \subseteq S_1 \subseteq ... \subseteq S_n$ ( ruclips.net/video/HBCJ-5GTSDU/видео.html ). Intuitively, we can think of the Lovasz extension as having the "normalization" in lambda (since all the lambdas must sum to one), while the permutahedron LP has the "normalization" in the e vectors.

  • @wongrufus · 2 years ago

    8:24 Note that ${n+1 \choose 2}$ is just the formula for the sum 1 + ... + n. 9:20 This inequality comes from flipping the previous inequality. Instead of requiring that the elements in S are greater than the summation, we require that the sum over the "remaining" elements of [n] - S (denoted T) is less than the sum of all values in [n] minus the sum of values in S (|S| = n - |T|).