Elegant explanation! It should be recommended to anyone who wants to learn optimization theory.
I must say, the insight that the visual approach provided just made it so intuitive. This is quite useful. Keep up the great work.
Hi Dr. Bazzi. Thanks for your lecture. I have a question: in what situations can we freely assume we have a zero duality gap (strong duality)?
Hello, when your optimization problem is convex, strong duality most probably holds, but this is not always true. A sufficient condition is Slater's condition: if it holds for a convex problem, then strong duality holds. For more info, here's one of my videos on this topic: th-cam.com/video/0WpYucMfaHM/w-d-xo.html
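For other readers, this is the statement I have in mind, written in generic notation (which may not exactly match the video's):

```latex
\begin{gather*}
\textbf{Convex problem: } \min_x f(x) \quad \text{s.t.} \quad g_i(x) \le 0,\ i = 1,\dots,m, \qquad Ax = b,\\[2pt]
\textbf{Slater's condition: } \exists\,\tilde{x} \ \text{with} \ g_i(\tilde{x}) < 0 \ \text{for all } i \ \text{and} \ A\tilde{x} = b\\[2pt]
\Longrightarrow \quad p^\star = d^\star \quad \text{(strong duality, zero duality gap).}
\end{gather*}
```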
Thank you very much! Your explanation helped me a lot to understand the KKT conditions.
Thank you very much, your explanation helped me a lot!
Extremely helpful, tysm. Two questions though: 1. At 7:06, why is that quantity convex? Because if you take the max over u, it is no longer linear, right? 2. At 16:07, if we know u * g(x) = 0, we can discuss it case by case (i.e. when u = 0 and when g(x) = 0). Why wouldn't that work?
Hello Dr. Bazzi. I hope you are doing great. It would be great if you could make some videos on linear and non-linear fractional programming problems. Thank you.
Great idea, Sandeep. Will add this to my upcoming lectures. Take care.
Making a complex math concept simple ... well done!
You are correct! Thanks for watching carefully :-)
Actually, I really appreciate your video; it was significantly helpful for understanding KKT.
Amazing video, could not understand this for the life of me but this helped tremendously. Videos like this must take a long time to make, but I feel that they will be used for generations. Thank you :)
Wow! Great explanation. This is one topic that I find intimidating when reading the book, but you explain it beautifully. Keep up the good work, man!
Hello Bachir! This is so amazing! I can just say: God bless you! Best duality explanation so far!!
Awesome and illustrative, thank you.
Excellent! Really helpful
@Ahmad Oh, now I see what you mean! If we find U first, there is no way it will be +infinity. But I think I'm still struggling to understand this exchange of min-max to max-min. For example, in the worked example we found the maximizing U and only afterwards got X, which depends on the value of our U. I can't see how this effectively assures that our constraint is going to be respected, since we did not know whether the X chosen would violate it. What I mean is that the primal problem, which minimizes over X and then maximizes over U, makes sense, since we can see the constraints being violated and U working to penalize that or not. However, for the dual problem it seems weird that we work out our penalty variable before knowing what X is going to be. I think I need to deep-dive on this part; if you have anything that could help me tackle this misunderstanding, it would be really great!! I appreciate your response.
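To make the order of operations concrete, here is a tiny self-contained numerical sketch with a toy problem of my own (not the one from the video); it shows that the u found from the dual still leads to a feasible x:

```python
import numpy as np

# Toy problem: minimize f(x) = x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
# Lagrangian: L(x, u) = x^2 + u * (1 - x), with u >= 0.
f = lambda x: x**2
g = lambda x: 1 - x

xs = np.linspace(-3, 3, 601)          # grid for x
us = np.linspace(0, 5, 501)           # grid for the multiplier u >= 0

# Primal: minimize f over the feasible set {x : g(x) <= 0}.
p_star = f(xs[g(xs) <= 0]).min()

# Dual: q(u) = min_x L(x, u), then maximize q over u >= 0.
L = f(xs)[None, :] + us[:, None] * g(xs)[None, :]
q = L.min(axis=1)
i = q.argmax()
d_star, u_star = q[i], us[i]
x_from_dual = xs[L[i].argmin()]       # the x that minimizes L(., u*)

print(p_star, d_star)                 # both ~1.0: zero duality gap for this problem
print(u_star, x_from_dual)            # u* ~ 2.0, x* ~ 1.0 -> feasible, the constraint is respected
```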
Awesome, thank you!
Yay, glad this was helpful!
Very helpful video!!! Thank you very much
Wonderful Video! It helped so much
Excellent video! Thank you so much!
Awesome video, but at 12:50 it is not that the gradients are inversely proportional, only that they are proportional by a negative constant.
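In symbols, the condition being pointed out is the standard KKT stationarity for a single active inequality (notation mine, which may differ from the video's):

```latex
\nabla f(x^\star) = -\,u^\star \nabla g(x^\star), \qquad u^\star \ge 0,
```

i.e. the two gradients point in opposite directions and differ by a nonnegative factor u*, which is what "proportional by a negative constant" means here.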
Glad it was helpful!!
This is amazing!!! thanks!!!
Respect!
I should have mentioned this first 😀 :) and also many thanks for your fast reply.
Thanks for watching carefully. I thought so too at the beginning, but then I realized that grad log(-g) is equal to grad_g / g, and not -grad_g / g.
Because our objective function is f - t*log(-g), even if t is equal to zero, the gradient of f(x) remains.
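If it helps other readers, here is a quick numerical sanity check of that derivative identity, using a toy g of my own (not the one from the video):

```python
import numpy as np

# Check d/dx log(-g(x)) = g'(x) / g(x) at a point where -g(x) > 0.
g  = lambda x: x - 2.0     # toy constraint; g(0.5) = -1.5 < 0
dg = 1.0                   # g'(x)
x, h = 0.5, 1e-6

lhs = (np.log(-g(x + h)) - np.log(-g(x - h))) / (2 * h)   # central finite difference
rhs = dg / g(x)                                           # grad g / g, no extra minus sign
print(lhs, rhs)            # both ~ -0.6667
```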
Thank you so much for your lectures.
Hi there! I use both Blender3D and the manim library
At 16:54, I think there is a typo; it's grad of (f(x) + t log(-g(x))) instead of minus.
I wish you would solve a complete example including inequality constraints.
this is fucking smart, both the concept of the dual problem and the video
Could you explain about the slowness?
18:32 - I don't understand why Newton's method converges very slowly when t -> 0.
For KKT it's important first to check whether you have a regular point; if your equations confirm that, then you can apply the KKT conditions. Most of the time you differentiate the given equations twice and then look for lambda.
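As a concrete illustration of that recipe, here is a toy equality-constrained example of my own (not from the video), worked out with sympy:

```python
import sympy as sp

# Toy example: minimize x^2 + y^2 subject to x + y = 1.
x, y, lam = sp.symbols('x y lam', real=True)
f = x**2 + y**2
h = x + y - 1                                   # equality constraint h(x, y) = 0

# Regularity: grad h = (1, 1) is never zero, so every feasible point is a regular point.
L = f + lam * h                                 # Lagrangian
stationarity = [sp.diff(L, v) for v in (x, y)]  # set these derivatives to zero
sol = sp.solve(stationarity + [h], [x, y, lam], dict=True)
print(sol)                                      # [{x: 1/2, y: 1/2, lam: -1}]
```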
Hello, I have a doubt: why is the equality constraint penalized without any sign restriction?
I was about to say
Correct! I should have been more precise.
What's the problem with your health?
Hello Hiếu Hoàng, thank you for your kind words and concern. Wow, I'm surprised you noticed that, and you're right; recently I haven't been doing well. However, I'll try to keep my content as consistent and clean as possible.
F**** gold! thanks!
Thank you for this great explanation