Zico Kolter
Lecture 25: Computational Game Theory
This is a lecture video for the Carnegie Mellon course: 'Graduate Artificial Intelligence', Spring 2014.
Information about the course is available at www.cs.cmu.edu/~zkolter/course/15-780-s14/
Views: 3,173

Videos

Lecture 18: Computer vision 1 (6K views, 10 years ago)
Lecture 23: Natural language processing (4.9K views, 10 years ago)
Lecture 20: Scheduling (1.1K views, 10 years ago)
Lecture 22: Robotics 2 (836 views, 10 years ago)
Lecture 21: Robotics 1 (1.4K views, 10 years ago)
Lecture 19: Computer vision 2 (1.7K views, 10 years ago)
Lecture 17: Reinforcement learning (2.2K views, 10 years ago)
Lecture 15: Probabilistic inference (1.2K views, 10 years ago)
Lecture 16: Planning under uncertainty (1.4K views, 10 years ago)
Lecture 13: Planning 2 (569 views, 10 years ago)
Lecture 14: Probabilistic modeling (1.2K views, 10 years ago)
Lecture 11: Machine learning 2 (826 views, 10 years ago)
Lecture 10: Machine Learning 1 (1.3K views, 10 years ago)
Lecture 8: Constraint satisfaction (886 views, 10 years ago)
Lecture 9: Mixed integer programming (13K views, 10 years ago)
Lecture 7: Local Search (1.3K views, 10 years ago)
Lecture 6: Informed search (890 views, 10 years ago)
Lecture 5: Optimization 2 (1.4K views, 10 years ago)
Lecture 4: Optimization 1 (3.1K views, 10 years ago)
Lecture 3: Search (1.6K views, 10 years ago)
Lecture 2: Intelligent agents and paradigms for AI (4.5K views, 10 years ago)
Lecture 1: Intro (11K views, 10 years ago)
Lecture 24: Future Directions in Smart Grid Research (1.3K views, 10 years ago)
Lecture 23: Model Predictive Control (20K views, 10 years ago)
Lecture 22: LQ Stochastic Control, MDPs (1.3K views, 10 years ago)
Lecture 21: Multivariate systems, LQR (2.3K views, 10 years ago)
Lecture 20: Stochastic systems, PID control (1.3K views, 11 years ago)
Lecture 19: Dynamical systems (cont) (498 views, 11 years ago)
Lecture 18: Control examples, dynamical systems (1.4K views, 11 years ago)

Comments

  • @yassineaguelmam8674 · 1 year ago

    What if we took H=1? Do we then compute a lot of inputs so that we get smooth control?

  • @AJ-et3vf · 2 years ago

    Awesome video sir! Thank you!

  • @laurentlessard3031 · 3 years ago

    The claim that the separation principle does not hold for POMDPs (15:00 - 16:00 in the video) is false. The belief state (the conditional distribution of the present state given the history of measurements) is a sufficient statistic. As pointed out in the video, you can update the belief in a recursive fashion (a generalized version of the Kalman filter). The principle of optimality (Bellman equation) still holds, and it is possible to show that you can write it in terms of the belief state rather than the full measurement history. Consequently, the optimal action is a function of the belief. This is not an approximation. The separation principle holds for POMDPs.

    What does not hold in the POMDP case is certainty equivalence. The optimal POMDP policy will in general be different from the optimal MDP policy applied to the MMSE estimate of the state (the expected value of the belief). It's only in the LQG case that this works.

    The reason people say "POMDPs are hard" has nothing to do with the separation principle. There are two reasons:

    1) Doing belief propagation in continuous state/action/measurement space is intractable. You would have to store the belief as a conditional distribution, which is easy in the LQG case where you know the belief will be Gaussian, but in general you would have to store some function that has no closed-form expression. Likewise, you can do dynamic programming, but again you have to store the value function, which is easy in the LQG case where you know it will be quadratic, but is difficult in general. There are various approaches for approximating the belief propagation and dynamic programming steps, which is what is done in practice to overcome these difficulties.

    2) You can find the exact POMDP policy if the state, action, and observation spaces are finite and discrete. Then your belief and optimal policy can be stored as matrices and implemented as lookup tables. Unfortunately, this is only practically useful for very modest POMDPs due to the curse of dimensionality.

    • @COMPTROL · 6 months ago

      Wow, thanks for the insight.
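
The recursive belief update described in the comment above can be made concrete in a few lines. Below is a minimal sketch of a discrete-state Bayes filter, assuming NumPy; the two-state transition matrix T, observation matrix O, and the observation sequence are made-up illustrative values, not anything from the lecture.

    import numpy as np

    # Discrete-state Bayes filter: one recursive belief update per observation.
    # T[s, s2] = P(next state s2 | state s) under a fixed action;
    # O[s2, o] = P(observation o | state s2). All values are made up.
    T = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
    O = np.array([[0.7, 0.3],
                  [0.1, 0.9]])

    def belief_update(belief, obs):
        """Predict through the dynamics, then condition on the observation."""
        predicted = belief @ T                # P(s2) = sum_s P(s) T[s, s2]
        unnormalized = predicted * O[:, obs]  # weight by observation likelihood
        return unnormalized / unnormalized.sum()

    belief = np.array([0.5, 0.5])             # uniform prior over two states
    for obs in [1, 1, 0]:                     # made-up observation sequence
        belief = belief_update(belief, obs)
        print(belief)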

  • @jonsnow9246 · 3 years ago

    Starts at 3:30

  • @dhaferalzubaidy4773 · 3 years ago

    Hi, what is the difference between UC and day-ahead, please?

  • @dilansandeepa1956 · 3 years ago

    Great work, sir. Thanks a lot, sir.

  • @camilovicentegonzalezpozo1725 · 3 years ago

    Thank you very much! This will be helpful for my future master's thesis on MPC.

  • @amr.a-m8350 · 4 years ago

    Thanks for the lecture. Do you have MATLAB code or Simulink file examples, please?

  • @chasman67 · 4 years ago

    BORING!

  • @glongoria8004 · 4 years ago

    Power flow begins at 31:26.

  • @mktsp2 · 5 years ago

    Is there a set of notes? Thanks.

  • @amadoubah394 · 5 years ago

    You are my favourite.

  • @MaheshKumar-iw4mv · 5 years ago

    Great lecture. I have one persistent question, though. How do you understand a model with many free parameters?

  • @MaheshKumar-iw4mv · 5 years ago

    Very illuminating lecture. You cast things in very nice mathematical ideas like convex functions, and you made an important point about linear and non-linear regression using features.

  • @c0t556 · 6 years ago

    How did you plot those cool rotating 3D scatter plots?

  • @JavaScripting64 · 6 years ago

    I tried to convince myself that if f(x) = Ax then the Jacobian of f(x) is just A (59:00) by using the first example. I did it out on paper and it didn't work out, so I tried it using MATLAB and got the same answer. Take a look: imgur.com/a/Nlyf1BZ. Please let me know what I did wrong. I found an A matrix that works, but it is not the Jacobian of f(x)… imgur.com/a/aDt668A
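
The identity being tested above does hold: for f(x) = Ax, each component f_i = sum_j A[i, j] x[j] is linear in x, so the Jacobian entry J[i, j] = d f_i / d x_j is exactly A[i, j]. A minimal numerical check, assuming NumPy; the example matrix and test point are arbitrary choices of ours:

    import numpy as np

    # Numerical check that the Jacobian of f(x) = A x is A itself.
    A = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])               # arbitrary 3x2 example matrix

    def f(x):
        return A @ x

    def numerical_jacobian(f, x, eps=1e-6):
        """Finite differences: J[i, j] ~= d f_i / d x_j."""
        fx = f(x)
        J = np.zeros((fx.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (f(x + dx) - fx) / eps
        return J

    x0 = np.array([0.7, -1.3])
    print(np.allclose(numerical_jacobian(f, x0), A))   # True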

  • @shahroozaz · 6 years ago

    Your lecture videos are great, by the way :) Thanks for sharing.

  • @amitatripathi4992 · 6 years ago

    Please suggest a book you are following.

  • @Mr5nan · 6 years ago

    32:55 How is fixing a function actually better than erasing it entirely? They both deliver a globally non-optimal solution.

  • @panchicorex · 6 years ago

    For the quiz: machine learning, because the robot has to be capable of learning the preferred coffee quantity for everyone in the office. I like a big, full cup, no sugar. Good robot. Thanks, Zico.

  • @omkarenator · 7 years ago

    Convex optimization at 50:20.

    • @c0t556 · 6 years ago

      THANKS!!!

  • @saulleung2904 · 7 years ago

    Matrix calculus starts at 48:50.

  • @xinnywillwin · 7 years ago

    REALLY HELPFUL

  • @tylermarkowsky7917 · 7 years ago

    Nice! Can you comment on how you made the 3D animated (rotating) graph?

  • @DelJuvePiero · 8 years ago

    This was really helpful.

  • @invinity3982 · 8 years ago

    Greetings, are there any plans to upload the Spring 2016 iteration of this course, please, as you cover deep learning in it? Thank you for sharing these lecture videos.

  • @donm7906 · 8 years ago

    Thank you! Very helpful.

  • @sunhao2005 · 8 years ago

    Hello Zico, I'm working on a robot scheduling problem for an automated wet station in a semiconductor process. I want to learn how to use MILP concepts to solve this problem. I know all the constraints, but I don't know how to start with the algorithm. How can I start learning MILP coding from the very beginning?

  • @anahitasafari3094 · 8 years ago

    The instruction is great. Can't wait to watch more videos about control from this instructor. Great job.

  • @walemas · 8 years ago

    Hey, quick question about the objective function for the Sudoku. Are we actually optimizing with respect to this objective function or is it just used to "sort" the branches? If we are optimizing with respect to it, how do we do it? (As the function isn't linear).

    • @Raven-bi3xn · 1 year ago

      They don't care about the value of the objective function. They just keep branching, relaxing to solve the linear program (LP), and going down the first branches that are feasible; then they branch again, relax again, and solve the LP again until all binary variables are integral. The point of an arbitrary objective function is that the LP relaxation can be solved; a nice objective function makes the LP solve faster.
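
The feasibility-style formulation described above can be sketched in a few lines. The following is a minimal illustration of the binary-variable Sudoku encoding with a constant objective, assuming the PuLP modeling package; the clue set is made up, and PuLP's bundled solver performs the branching and LP relaxations internally.

    import pulp

    # Sudoku as a pure feasibility MIP: binary x[i][j][k] = 1 iff cell (i, j)
    # holds digit k. The objective is a constant, as discussed above; any
    # feasible point is optimal, and branch-and-bound does all the work.
    rows = cols = range(9)
    digits = range(1, 10)

    prob = pulp.LpProblem("sudoku", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (rows, cols, digits), cat="Binary")
    prob += 0  # arbitrary constant objective (feasibility problem)

    for i in rows:
        for j in cols:
            prob += pulp.lpSum(x[i][j][k] for k in digits) == 1   # one digit per cell
    for k in digits:
        for i in rows:
            prob += pulp.lpSum(x[i][j][k] for j in cols) == 1     # each digit once per row
        for j in cols:
            prob += pulp.lpSum(x[i][j][k] for i in rows) == 1     # each digit once per column
        for bi in range(3):
            for bj in range(3):
                prob += pulp.lpSum(x[3*bi + di][3*bj + dj][k]
                                   for di in range(3) for dj in range(3)) == 1  # once per box

    clues = {(0, 0): 5, (1, 3): 6}  # made-up clues: pin those binaries to 1
    for (i, j), k in clues.items():
        prob += x[i][j][k] == 1

    prob.solve()
    grid = [[next(k for k in digits if pulp.value(x[i][j][k]) > 0.5) for j in cols]
            for i in rows]
    print(grid[0])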

  • @MansourAlramlawi · 8 years ago

    Hello Mr. Kolter, could you please recommend a good book on power flow?

  • @MurtxD · 9 years ago

    Thanks. In many ways an excellent video; however, in the beginning you talk of the magnet as if it has one pole, and later, at about 43 min, you in fact refer to a "one pole" magnet! AC is produced because when the field from the north pole cuts the conductor, current flows in one direction, and in the opposite direction when the south pole comes around (with a load connected). The minimum number of poles in a generator is 2, and it must rotate at 3600 rpm for 60 Hz. You mention this as an unreasonable speed, but in Europe the vast majority of thermal sets run at 3000 rpm, and I expect many thermal sets in the US run at 3600. Hydro sets are typically multi-pole and so run at much slower speeds; however, the stators are wound so that the rotating field of the rotor is still in sync with the rotating field produced by the current flowing in the stator coils. Thanks for the info regarding how the generator voltage must be increased to deliver reactive energy to inductive loads; a very useful explanation.
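
The speeds quoted above all follow from the synchronous-speed relation N = 120 f / p rpm, where f is the grid frequency in Hz and p is the number of poles. A quick check (the helper function below is ours, for illustration):

    def synchronous_speed_rpm(freq_hz, poles):
        """Synchronous speed of an AC machine: N = 120 * f / p, in rpm."""
        return 120 * freq_hz / poles

    print(synchronous_speed_rpm(60, 2))    # 3600.0 rpm: 2-pole set on a 60 Hz grid
    print(synchronous_speed_rpm(50, 2))    # 3000.0 rpm: European thermal sets
    print(synchronous_speed_rpm(60, 24))   # 300.0 rpm: a slow multi-pole hydro set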

  • @echo0204 · 9 years ago

    Thank you!!!! I really need this for my thesis!

  • @noufalmuhamed82 · 9 years ago

    I am eager to know what program you are using to view the waveform on your computer.

  • @PrafulPPai · 9 years ago

    Thanks for the upload. I was reading up on kernel methods but struggling to understand parts of them; this really helped clear up a lot of my doubts about the topic.

  • @amir-kashani · 9 years ago

    It starts at 54:51. Great teaching.

  • @sunshinehero1422 · 9 years ago

    The teacher is handsome and hot.

  • @moshiurrahman1542 · 9 years ago

    Someone please give me the link to the presentation slides.

  • @haitham973 · 9 years ago

    Thanks for the lecture. It is not true, however, that MDPs only work for finite actions and states. Have a look at policy search RL methods, and at things like Fitted Q-Iteration.

    • @siddharthkotwal8823 · 8 years ago

      I don't think so; continuous MDPs can only be solved approximately. While the state variables can be continuous, the actions are almost always discretized for applying methods like Fitted Q-Iteration, and some function approximation is always incorporated to beat the curse of dimensionality in continuous state spaces.
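
As a concrete illustration of the pattern described above (continuous states, discretized actions, function approximation), here is a minimal Fitted Q-Iteration sketch assuming NumPy and scikit-learn; the toy dynamics, reward, and hyperparameters are made-up assumptions, not anything from the lecture.

    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor

    # Fitted Q-Iteration sketch: continuous 1-D state, a discretized action set,
    # and regression trees as the Q-function approximator.
    rng = np.random.default_rng(0)
    actions = np.array([-1.0, 0.0, 1.0])     # discretized actions
    gamma = 0.95
    N = 2000

    # Collect random transitions (s, a, r, s') from a made-up system.
    s = rng.uniform(-2, 2, size=N)
    a = rng.choice(actions, size=N)
    s_next = s + 0.1 * a + 0.01 * rng.standard_normal(N)
    r = -s_next**2                           # reward: stay near the origin

    X = np.column_stack([s, a])
    model = None
    for _ in range(20):                      # FQI iterations
        if model is None:
            targets = r                      # Q_0 regresses the immediate reward
        else:
            # Bootstrapped target: r + gamma * max over a' of Q(s', a')
            q_next = np.column_stack([
                model.predict(np.column_stack([s_next, np.full_like(s_next, act)]))
                for act in actions])
            targets = r + gamma * q_next.max(axis=1)
        model = ExtraTreesRegressor(n_estimators=25, random_state=0).fit(X, targets)

    # Greedy policy at s = 1.5: argmax over the discretized actions.
    q_vals = [model.predict([[1.5, act]])[0] for act in actions]
    print(actions[int(np.argmax(q_vals))])   # expect -1.0: push back toward 0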

  • @borispolanco4853 · 10 years ago

    Hi, can you help me? I have fitted a non-linear model, but now I need to run a significance test for every parameter of the model. How should I proceed? I estimated my parameters with non-linear least squares.
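
One standard recipe for the question above is a Wald-style test: non-linear least squares yields a parameter covariance estimate, standard errors come from its diagonal, and t statistics and p-values follow. A minimal sketch assuming SciPy, with a made-up exponential model and data:

    import numpy as np
    from scipy import optimize, stats

    # Wald-style significance tests for non-linear least-squares parameters.
    # The exponential model and the data are illustrative assumptions.
    def model(x, a, b):
        return a * np.exp(b * x)

    rng = np.random.default_rng(1)
    x = np.linspace(0, 2, 50)
    y = model(x, 2.0, 0.5) + 0.1 * rng.standard_normal(x.size)

    popt, pcov = optimize.curve_fit(model, x, y, p0=[1.0, 0.1])
    se = np.sqrt(np.diag(pcov))          # standard errors from the covariance
    t = popt / se                        # t statistic for H0: parameter == 0
    dof = x.size - popt.size
    p = 2 * stats.t.sf(np.abs(t), dof)   # two-sided p-values

    for name, est, s_err, pv in zip(["a", "b"], popt, se, p):
        print(f"{name}: estimate={est:.3f}, SE={s_err:.3f}, p={pv:.2g}")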

  • @altobellibrito4748 · 10 years ago

    Very useful, thanks.

  • @miotuo7267 · 10 years ago

    What a cheeky tongue!!!!!!!!

  • @93nites · 10 years ago

    I got a head start in computer vision through the last two lectures. A very intuitive way of teaching, and definitely not sleep-inducing. Thanks a lot for posting these!

  • @mohamed-alia.9108 · 10 years ago

    Nice lecture.

  • @chenmingliang9262 · 10 years ago

    If there is anyone listening to you talking, they are probably all dead; you killed them.

  • @geekatron8 · 10 years ago

    Lecture begins at 5:10.

  • @周迪-q1m · 10 years ago

    Great recording. It helped me a lot to get the basic idea of non-linear regression. Thanks.