Lecture 7 | Convex Optimization I

  • Published on Dec 18, 2024

Comments • 19

  • @shiv093
    @shiv093 5 years ago +55

    1:08 Generalized inequality constraints
    7:04 Semidefinite program (SDP)
    10:55 LP and SOCP as SDP
    17:57 Eigenvalue minimization
    20:47 Matrix norm minimization
    27:28 Vector optimization
    33:14 Optimal and Pareto optimal points
    38:58 Multi criterion optimization
    41:54 Regularized least-squares
    44:50 Risk return trade-off in portfolio optimization
    54:47 Scalarization
    57:51 Scalarization for multicriterion problems
    1:02:10 Duality

  • @mazaherhajibashi6151
    @mazaherhajibashi6151 4 years ago +2

    Many thanks for sharing these valuable lectures. Using them and reading Prof. Boyd's book, I could not only ponder the convex optimization concepts and dig through the problems I am facing in my field (electricity markets), but also learn how to make a course more attractive for students even if it's a completely serious mathematics course. Thank you, Professor Boyd.

  • @shupengwei9419
    @shupengwei9419 6 years ago +9

    1:03:00 Duality

  • @Vignesh_Subramanian
    @Vignesh_Subramanian 4 years ago

    1:13:45 What did the professor mean by saying "for the max cut problem, it was proven that the bound was never more than 13% off"?

    • @shuo5444
      @shuo5444 4 years ago

      Max cut is an optimization problem whose SDP relaxation gives an approximation ratio of 0.878, i.e. about 13% off; see www2.cs.duke.edu/courses/fall15/compsci532/scribe_notes/lec17.pdf
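
      For reference, a minimal sketch of that SDP relaxation (not from the lecture or the linked notes; the small weight matrix W and the use of Python/cvxpy are illustrative assumptions). The rounding step is the Goemans-Williamson randomized hyperplane, which is what gives the 0.878 (about 13% off) guarantee.

      import numpy as np
      import cvxpy as cp

      # Made-up symmetric edge-weight matrix of a small example graph
      W = np.array([[0., 1., 1., 0.],
                    [1., 0., 1., 1.],
                    [1., 1., 0., 1.],
                    [0., 1., 1., 0.]])
      n = W.shape[0]

      # SDP relaxation of max cut: X plays the role of x x^T with x_i in {-1, +1}
      X = cp.Variable((n, n), symmetric=True)
      constraints = [X >> 0, cp.diag(X) == 1]          # X PSD with unit diagonal
      objective = cp.Maximize(0.25 * cp.sum(cp.multiply(W, 1 - X)))
      cp.Problem(objective, constraints).solve()
      print("SDP upper bound on the max cut:", objective.value)

      # Goemans-Williamson rounding: factor X = V V^T, cut with a random hyperplane
      V = np.linalg.cholesky(X.value + 1e-6 * np.eye(n))   # small jitter for numerical PSD
      labels = np.sign(V @ np.random.randn(n))             # +/-1 side assignments
      cut_value = 0.25 * np.sum(W * (1 - np.outer(labels, labels)))
      print("Rounded cut value:", cut_value)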

  • @yogeshpasari7400
    @yogeshpasari7400 1 year ago

    At 20:14, how can the LMI be general? A less than lambda*I in the matrix sense seems to need to hold only for the eigenvectors, so why is it general?

  • @ehfo0
    @ehfo0 9 years ago +6

    Does anyone know of a MATLAB tutorial for convex optimization?

  • @wihenao
    @wihenao 10 years ago +3

    Oops. Steve wrote "min." at minute 25:02. I thought he said that was a sloppy practice :)

    • @GeertDelmulle
      @GeertDelmulle 8 years ago +6

      Wilmer Henao: "...note the period..."

    • @emmpati
      @emmpati 6 years ago

      He put a dot at the end as an abbreviation of "minimize". He said that was totally OK in that video.

  • @shupengwei9419
    @shupengwei9419 6 years ago +4

    7:07 SDP

  • @shupengwei9419
    @shupengwei9419 6 years ago +3

    27:37 vector optimization

  • @xz3642
    @xz3642 7 years ago +1

    How to show that "lambda_max(A) ≤ t" is equivalent to "t*I - A is positive semidefinite"?

    • @sridharthiagarajan479
      @sridharthiagarajan479 6 years ago +1

      The eigenvalues of t*I - A are t - lambda_i(A). So for all of them to be nonnegative (positive semidefinite), you need lambda_max(A) ≤ t.
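
      Spelling that argument out (standard linear algebra, not something stated in the reply): with the spectral decomposition A = Q \Lambda Q^{\mathsf T},

          tI - A = Q\,(tI - \Lambda)\,Q^{\mathsf T} \succeq 0
          \iff t - \lambda_i(A) \ge 0 \ \text{for all } i
          \iff \lambda_{\max}(A) \le t,

      and for any x = \sum_i c_i q_i, \; x^{\mathsf T}(tI - A)x = \sum_i c_i^2\,(t - \lambda_i(A)) \ge 0, so checking the eigenvalues settles the matrix inequality for every vector x, not just the eigenvectors.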

  • @annawilson3824
    @annawilson3824 1 year ago

    1:04:02

  • @alexg7836
    @alexg7836 11 years ago

    No... That is almost a textbook definition

  • @MrMuyu0117
    @MrMuyu0117 12 years ago

    First! (沙发)

  • @zes7215
    @zes7215 6 years ago

    wrong,idts. no such thing as interesx or not about it, or interesx thus hear more etc. ts just toolx not interex. ceptu thesex, any be any interesx no matter what, say/can say any no matter what, no such thing as tryx or etc
