0:41 Duality
5:00 Lagrange dual function
9:18 Least-norm solution of linear equations
14:04 Standard form LP
21:34 Equality constrained norm minimization
25:19 Two-way partitioning
35:13 Lagrange dual and conjugate function
38:11 The dual problem
46:26 Weak and strong duality
53:01 Slater's constraint qualification
56:57 Inequality form LP
58:54 Quadratic program
1:04:07 Geometric interpretation
1:11:27 Complementary slackness
46:15 meaning of Lagrange dual function. I had to rewind it 3 times to finally see it. Beautiful!
Boyd is savage with his commentary on other fields and I love it ;)
"And mostly I'm saying this to irritate people at other universe, my colleagues who will, be alerted to this, watch this tape and be very angry..."
Best engineering mockery ever, lol
I have devoted my life to constraint qualifications and now this video has invalidated my entire life's work
7:40, mind = blown. Super simple explanation. Thank you.
At 1:06:50, when he says that the line corresponding to g(lambda) is "rolling over" the set G: that's because, for a given lambda, you're always taking the infimum of the objective over G. So the line defined by g(lambda) never cuts into G (otherwise, for that particular lambda, there would be a point of G giving a lower value than the one currently attained by g(lambda)), and it is always tangent to the boundary.
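To spell the picture out, here is a minimal sketch in the book's notation, assuming the one-constraint setup used on that slide, where G = {(f1(x), f0(x)) : x in D}:

```latex
% For a fixed \lambda \ge 0, g(\lambda) is the infimum of an affine
% function of (u, t) over the set G:
\[
  g(\lambda) \;=\; \inf\{\, t + \lambda u \;:\; (u, t) \in G \,\},
\]
% so every (u, t) \in G satisfies \lambda u + t \ge g(\lambda): the line
% \lambda u + t = g(\lambda) lies on or below G and supports its
% boundary -- it can touch G but never cut through it.
```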
Remember, it’s absolutely no one’s business how you come up with ‘nu’. No one’s business.
A huge thanks to Professor Boyd for his nice exposition. A humble question: is it standard across convex analysis to define the infimum of a set that is unbounded below to be negative infinity?
Not just across convex analysis. The infimum is defined to be the greatest lower bound of a set. So, if your set is not bounded below, then (working in the extended reals) negative infinity is the only lower bound for that set, and thus also the greatest one, hence the infimum.
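For a concrete case of that convention, here is a sketch using the standard form LP from 14:04, with the same notation as the slides:

```latex
% Standard form LP:  minimize c^T x  subject to  Ax = b,  x \succeq 0.
% Lagrangian:
%   L(x, \lambda, \nu) = c^T x - \lambda^T x + \nu^T (A x - b)
%                      = -b^T \nu + (c + A^T \nu - \lambda)^T x.
% A linear function of x is unbounded below unless its coefficient
% vector is zero, so the infimum over x is -\infty in that case:
\[
  g(\lambda, \nu) =
  \begin{cases}
    -b^T \nu, & A^T \nu - \lambda + c = 0,\\
    -\infty,  & \text{otherwise}.
  \end{cases}
\]
```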
Regarding the geometric interpretation at 1:06:45: if we start from the right corner point and then maximize g(lambda) over lambda, can't we just let lambda go to infinity, with g(lambda) going to infinity as well? What is it that stops us from doing that and makes us switch to the left point?
Awesome, simple, elegant... I'm at a loss for words. The explanation of complementary slackness at 1:12:16.
CONCAVE! The function inside the infimum is affine with respect to the dual variables (so both convex and concave), but because of the infimum operator the whole dual function is concave. Recall the operations that preserve convexity from Chapter 3: concavity is preserved under pointwise infimum.
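In a bit more detail, a sketch in the lecture's notation:

```latex
% For each fixed x, the Lagrangian
%   L(x, \lambda, \nu) = f_0(x) + \sum_i \lambda_i f_i(x) + \sum_i \nu_i h_i(x)
% is affine in (\lambda, \nu).  The dual function
\[
  g(\lambda, \nu) \;=\; \inf_{x \in \mathcal{D}} L(x, \lambda, \nu)
\]
% is therefore a pointwise infimum of affine functions of (\lambda, \nu),
% and a pointwise infimum of affine functions is concave -- no convexity
% of f_0, f_i, or h_i is needed for this.
```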
How do you come up with the example "v = -lambda_min(W)1" at 33:35?
He's so funny. This video was really entertaining.
Very clear. It really made things click for me. Very good.
Beautifully explained. Thank you!
Great content. The description link is outdated, though.
Great video! Solved my problem about dual problems!
It is CONCAVE! DUAL PROBLEM IS A CONVEX PROBLEM. BUT THE DUAL FUNCTION IS ALWAYS CONCAVE.
Thank you
Don't you think that S. Boyd has a bit of a resemblance to Alan Alda?
He reminds me of Marco Perella.
No one can master engineering without the scope of Boyd's plumpy fingers
19:03: how is (v)'b written as (b)'v?
(v)'b is a 1×1 matrix, so it's equal to its transpose.
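In symbols, just the scalar-transpose fact:

```latex
% \nu^T b is a 1-by-1 matrix, i.e. a scalar, and a scalar equals its
% own transpose:
\[
  \nu^T b \;=\; (\nu^T b)^T \;=\; b^T \nu .
\]
```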
Does anyone know where to get the notes?
Great Teacher
Maximizing a concave function is equivalent to minimizing a convex function. That's why the dual problem is a convex problem.
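Roughly, in symbols:

```latex
% Maximizing the concave dual g is the same problem as minimizing the
% convex function -g over the same (convex) constraint set:
\[
  \sup_{\lambda \succeq 0,\; \nu} g(\lambda, \nu)
  \;=\;
  -\inf_{\lambda \succeq 0,\; \nu} \bigl(-g(\lambda, \nu)\bigr),
  \qquad -g \ \text{convex}.
\]
```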
Isn't there a mistake? At 6:43 he says that the dual function is concave. I see in a number of books, and on other websites as well, that it's convex. Any ideas?
The dual, the way he formulated it, is actually concave. In convex optimization it does not hurt to call it convex in this loose sense: you can solve a concave maximization problem by simply taking the negative of the function and minimizing it (in a mathematician's head, since it is solvable, you can call it convex, because convex problems are the solvable ones).
really helps, thanks a lot!
This is just summations and the limit function, simply. I took calculus, and this is something I tune into for tutoring. I understand it for the most part. I wish I could teach you this. I'm someone who likes Buddhist and Asian approaches to education; let me explain and simplify this for you, let me know. I want you as my girlfriend to teach me programming, and I will teach you the world of math and engineering if you teach me that much more about programming.
thanks for sharing!
period.
1:06:24
At th-cam.com/video/FJVmflArCXc/w-d-xo.html, which formula is he referring to? (for small problems???)
He's so funny :P
@IbRaHiM NaDiR You're probably talking about a maximization primal problem, or something else different from the form given in this lecture. As he says, the dual function in this case is concave, not convex.
I know it's late :) May God grant you success.
In loose terminology, a concave (maximization) problem is said to be a convex optimization problem, since -f is convex.
Can't be just a slip of the tongue, because it's written there too...
Wow..Period..:)
never mind.. I passed the exam with 26/30 :P