I have a question: what if we have to minimize a cost (in a monetary unit) as the objective function while the constraint has a different unit? How do we deal with this? I mean, the relaxed problem's objective function will have one term in a monetary unit, while the other term, the one with the dual variable, will have a different unit. Please correct me if I am wrong.
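For readers wondering about the dimensional bookkeeping behind this question, here is a small sketch with illustrative units (MW is chosen arbitrarily; none of this notation is from the thread). The standard resolution is that the multiplier carries units of its own:

$$
\mathcal{L}(x,\lambda) \;=\; \underbrace{c(x)}_{[\$]} \;+\; \lambda\,\underbrace{\bigl(g(x)-b\bigr)}_{[\mathrm{MW}]},
\qquad
[\lambda] = \Bigl[\tfrac{\$}{\mathrm{MW}}\Bigr],
$$

so both terms of the relaxed objective are measured in $. In this reading, the dual variable acts as a price that converts constraint units into cost units, consistent with its interpretation as a shadow price.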
I do not quite understand why the standard LR needs the quadratic assumption. From what was illustrated in the slides, it is not obvious why a linear objective cannot work. Also, I am a bit confused about the logic here: the standard LR needs a continuously differentiable objective --> thus the objective needs to be quadratic --> thus a linear objective cannot work. Firstly, quadratic functions are not the only continuously differentiable functions; secondly, linear functions are surely continuously differentiable.
I realized later on that by linear you actually meant piecewise-linear, but any piecewise-linear objective can be reformulated in a standard way to eliminate such nonsmoothness in the objective (see the sketch below).
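For reference, the standard reformulation alluded to here is the epigraph trick, valid for a convex piecewise-linear objective (the notation is illustrative):

$$
\min_{x}\; \max_{i=1,\dots,m}\;\bigl(a_i^\top x + b_i\bigr)
\quad\Longleftrightarrow\quad
\min_{x,\,t}\; t \quad \text{s.t.}\quad a_i^\top x + b_i \le t,\;\; i=1,\dots,m,
$$

which replaces the nonsmooth max of affine pieces with a smooth (linear) objective plus linear constraints.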
Hi Yingkai. Thanks for this good question. I think in the case of a linear objective function, the first derivative is constant, while a gradient-based approach like LR explores the optimal point over iterations based on the change of derivatives across the feasible space. This is not an issue with a quadratic objective function (as long as we keep convexity). I suppose that if the underlying problem is convex and the objective function has a non-constant first derivative, the standard LR should work. Hope this helps.
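To make the iterative dual update described in this reply concrete, here is a minimal sketch of Lagrangian relaxation with dual ascent on a toy convex quadratic problem (the problem data, step size, and variable names are all illustrative, not taken from the lecture):

```python
# Toy problem: min (x1 - 2)^2 + (x2 - 1)^2  s.t.  x1 + x2 <= 2.
# Relax the constraint with a multiplier lam >= 0 and run dual ascent.

lam = 0.0      # Lagrange multiplier (dual variable)
alpha = 0.2    # dual step size

for k in range(100):
    # Inner minimization of L(x, lam) = f(x) + lam * (x1 + x2 - 2).
    # Since f is strictly convex and quadratic, setting the gradient
    # to zero gives the minimizer in closed form.
    x1 = 2.0 - lam / 2.0
    x2 = 1.0 - lam / 2.0

    # Gradient of the dual function = constraint violation at x.
    g = x1 + x2 - 2.0

    # Dual ascent step, projected onto lam >= 0.
    lam = max(0.0, lam + alpha * g)

print(f"lam = {lam:.3f}, x = ({x1:.3f}, {x2:.3f})")
# Converges to lam = 1, x = (1.5, 0.5), where the constraint is tight.
```

Roughly speaking, with a linear objective the inner minimizer would jump between extreme points of the feasible set as lam changes (the derivative is constant in x), which is the nonsmoothness the reply points at; the quadratic term makes the inner solution, and hence the dual gradient, vary smoothly with lam.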
Thanks a lot, professor, that's very practical.
@jalalkazempour1429 It is clearer to me now. Thanks, Prof.!