Check out the corresponding blog and other resources for this video at:
deeplizard.com/learn/video/my207WNoeyA
Can we take a second and just appreciate the work put in producing such high-quality videos in bites that are easy to understand?
Thanks, deeplizard, for doing the hard work on the illustrations to explain it to the feeble-minded. It's like training a donkey how to solve calculus.
"Eee-ore!", says me. Oh, and THANKS!
I saw different channels, but no one explained this topic better than you. Thanks a lot!
I’m so glad you produced this series of videos. I was intimidated by all the math and algorithm variations covered in the first four lectures of my graduate course. After watching these videos and then revisiting my grad lectures, I now actually understand what my professor was trying to teach. Thank you!
I was wandering here and there; it looks like I have landed on the perfect place to learn deep learning... Thanks. I will continue.
This is by far the best tutorial I've seen about this topic. I'm about to watch the whole series :D
Whoop! Thank you :)
More videos will continue to be added to this series as well!
subscribed!
- **Introduction to Markov Decision Processes (MDPs)**:
- 0:00 - 0:17
- **Components of MDPs**:
- 0:23 - 1:43
- **Mathematical Representation of MDPs**:
- 1:47 - 3:59
- **Probability Distributions and Transition Probabilities**:
- 4:02 - 4:56
- **Conclusion and Next Steps**:
- 5:01 - 5:47
You are awesome.
This series will help me with my project.
Thank you so much.
Best regards...
Amazing explanation of what RL is. I will watch the whole series from now on.
This video can be denoted by n as n approaches perfection.
Seriously... Amazing tutorial! I really like how you offer a text version as well. Thank you :)
Great tutorial; I understood the concept clearly for the first time after going through many others. Thank you very much.
There really should be more videos in this style. I hope there will be a lot more videos on this channel that are useful to me.
Best YouTube channel to learn ML.
This series is awesome. It makes learning a lot easier. Thank you so much.
Very intuitive and easy explanation. Thank you! 🤗😀
Second video completed; it was clear as day.
Great video with intuitive explanations 👌
Keep up the good work, and thank you for the time you are putting into making this series :)
Thank you so much. The explanation of MDPs is very clear.
{
"question": "If a math student is the agent, then the _______________ is the environment.",
"choices": [
"math quiz",
"math professor",
"quiz score",
"Swiss mathematician Leonhard Euler"
],
"answer": "math quiz",
"creator": "N Weissman",
"creationDate": "2022-03-21T22:50:05.763Z"
}
Thanks for the great quiz question!
More power to you @Deeplizard
Thanks a lot, much appreciated
This is the best lecture on RL. Thank you.
Can I get the presentation, please?
Very, very helpful. Thanks for making these videos. Please keep it going!
Well explained and easy to listen to.
OMG, it's clicking. It's actually clicking in my head!!!
💡🤯
{
"question": "What does MDP stand for?",
"choices": [
"Markov Delicate Programs",
"Modern Dealing Processes",
"Markov Decision Processes",
"Modern Derivative Parallels"
],
"answer": "Markov Delicate Programs",
"creator": "RooneyMara",
"creationDate": "2019-10-20T06:28:56.399Z"
}
Thank you, Rooney! First quiz question for this video :D
I believe you mistakenly chose the wrong answer, so I corrected it and just pushed it to the site. Take a look :)
deeplizard.com/learn/video/my207WNoeyA
Excellent explanation. It would be great if you could make a video series on all the math concepts behind machine learning.
Thanks, Anirudh. If you haven't checked out our Deep Learning Fundamentals course, I'd recommend it, as it has some of the major math concepts fully detailed there.
Hi, this is extremely intuitive and super easy to understand. I was wondering if you could tell me what resources you used to learn this material? How do you learn material like this (your best practices), and how much time did it take you to learn it (for making deeplizard content)? Thanks a lot for making this content; I look forward to your reply.
As formal resources, I used the book “Reinforcement Learning: An Introduction” (second edition) by Richard Sutton and Andrew Barto, along with this DeepMind paper:
www.cs.toronto.edu/~vmnih/docs/dqn.pdf
I also used various informal resources, like reading many blog articles, forums, etc.
{
"question": "State and Reward at time t depends ",
"choices": [
"State Action pair for time (t-1)",
"Cumulative reward at time t ",
"Agent Dynamics",
"State Action pair for all time instances before t"
],
"answer": "State Action pair for time (t-1)",
"creator": "Ushnish Sarkar",
"creationDate": "2020-06-01T16:24:16.894Z"
}
Thanks, Ushnish! Just added your question to deeplizard.com/learn/video/my207WNoeyA :)
Please give credit to "Reinforcement Learning: An Introduction" by Richard S. Sutton and Andrew G. Barto, copyright 2014, 2015. You allow viewers to pay you through Join, and this book's material is copyrighted, but you do not reference the authors anywhere on your website. The equations and material are pulled directly from the text, which presents an ethical issue. Though the book is open-source, it is copyrighted, and you are using this material for financial gain. This textbook has been used in several university courses on reinforcement learning in the past.
I love these videos, but proper credit must be given and approval secured from the authors!
Could a math equation itself be copyrighted?
Totally agree
You guys rock! Thanks so much!
The agent is not part of the MDP itself but rather interacts with it. The agent's role is to select actions based on the current state and the policy it's following, and it receives feedback in the form of rewards and new state observations from the environment, which is modeled as an MDP.
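To make that interaction concrete, here is a minimal sketch (my own, not from the video) of a tiny two-state MDP: the environment's dynamics live in a table p(s', r | s, a), and the agent sits outside it, choosing actions and receiving the next state and reward back. All names here are hypothetical.
```python
import random

# Hypothetical two-state MDP: dynamics are a lookup table mapping
# (state, action) to a list of (next_state, reward, probability).
dynamics = {
    ("s0", "a0"): [("s0", 0.0, 0.5), ("s1", 1.0, 0.5)],
    ("s0", "a1"): [("s1", 0.0, 1.0)],
    ("s1", "a0"): [("s0", 2.0, 1.0)],
    ("s1", "a1"): [("s1", -1.0, 1.0)],
}

def env_step(state, action):
    """Environment side: sample (next_state, reward) from p(s', r | s, a)."""
    outcomes = dynamics[(state, action)]
    weights = [p for _, _, p in outcomes]
    next_state, reward, _ = random.choices(outcomes, weights=weights)[0]
    return next_state, reward

# Agent side: select actions (here, at random) and accumulate reward.
state, total_reward = "s0", 0.0
for t in range(10):
    action = random.choice(["a0", "a1"])  # stand-in for a learned policy
    state, reward = env_step(state, action)
    total_reward += reward
print("cumulative reward after 10 steps:", total_reward)
```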
Appreciate the cute example
🐿️😊
Thank you!
Will you cover Q-learning in this series? I really like your tutorials, very well explained!
Hey Marius - Yes, Q-learning will be covered! Check out the syllabus video to see the full details for everything we'll be covering: th-cam.com/video/nyjbcRQ-uQ8/w-d-xo.html
Super, thanks!
Thanks for this content good going.
{
"question": "Which is the correct order for the components of MDP?",
"choices": [
"Agent--->Environment--->State--->Action--->Reward",
"Environment--->Agent--->State--->Action--->Reward",
"State--->Agent--->Environment--->Action--->Reward",
"Agent--->State--->Environment--->Action--->Reward"
],
"answer": "Agent--->Environment--->State--->Action--->Reward",
"creator": "Duke Daffin",
"creationDate": "2021-01-16T12:19:28.304Z"
}
Thanks, Duke! Just added your question to deeplizard.com/learn/video/my207WNoeyA :)
Really friendly beginning.
Thanks a lot, your explanation's very clear and detailed.
What I learned:
1. An MDP formalizes the decision-making process. (Everyone teaches MDPs first, but nobody told me why until now. It's a strange world.)
2. The reward is R(t+1) because it results from A_t; before, I always thought R_t was paired with A_t (see the notation sketch just below this list).
3. The agent cares about the cumulative reward (for others who don't know).
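For anyone double-checking point 2, the trajectory notation used in the video (and in Sutton and Barto) makes the pairing explicit: each action A_t gives rise to the next reward R_{t+1}.
```latex
S_0, A_0, R_1,\; S_1, A_1, R_2,\; S_2, A_2, R_3,\; \dots
```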
{
"question": "In MDP which component role is to maximize the total Reward R ",
"choices": [
"Agent",
"State",
"Action",
"Reward"
],
"answer": "Agent",
"creator": "Hivemind",
"creationDate": "2020-12-27T00:22:07.005Z"
}
Thanks, ash! Just added your question to deeplizard.com/learn/video/my207WNoeyA :)
Hey, thanks for the awesome videos. This is maybe a stupid question, but what's the difference between s and s'?
s' is the symbol we use in this episode to denote the next state that occurs after state s.
Awesome!! Thanks! :)
thanks
Came to learn, but uh oh, I saw Dota.
Great video
How do you represent the trajectory including the final state? Like this: S_0, A_0, R_1, S_1, A_1, R_2, …, R_T, S_T? If not, what is it, and why?
Hi! Loved the video and I think I have a solid understanding of the MDP. But I'm having trouble making sense of the equation. Why is the LHS a probability and the RHS a set? And what does Pr stand for?
Thanks! Pr stands for "probability", so the RHS is a probability as well.
@deeplizard Oh, now I see. It's an expansion of the same thing! Thanks for clarifying!
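For other readers who get stuck on the same point: both sides are probabilities. In the notation of Sutton and Barto (the book mentioned earlier in this thread), the braces on the right are event notation rather than a set, and the dynamics function p is defined as
```latex
p(s', r \mid s, a) \doteq \Pr\{ S_t = s',\, R_t = r \mid S_{t-1} = s,\, A_{t-1} = a \}
```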
I am reading a paper applying Q-learning to a repeated Cournot oligopoly game in economics, where firms are agents that choose their level of production to gain profit. I can understand that in that environment, the actions are the possible output levels that a firm chooses to produce. However, it is unclear to me what the states are in this situation. Could you please provide further explanation for this case?
Could you please provide any notes/PPT related to the MDP process?
Will you be using OpenAI Gym to demonstrate reinforcement learning concepts?
Hey Chyld - Yes, we'll be utilizing OpenAI Gym once we get into coding! Check out the syllabus video to see the full details for everything we'll be covering: th-cam.com/video/nyjbcRQ-uQ8/w-d-xo.html
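For anyone curious what that will look like, a minimal Gym interaction loop is roughly the following. This is only a sketch: the classic (pre-0.26) Gym API and the FrozenLake-v1 environment are both assumptions on my part; newer gym/gymnasium versions return (obs, info) from reset() and a five-element tuple from step().
```python
import gym

# Sketch only: environment choice and API version are assumptions,
# not something specified in this thread.
env = gym.make("FrozenLake-v1")
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()          # random policy, for illustration
    obs, reward, done, info = env.step(action)  # classic 4-tuple step API
env.close()
```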
It's more like a podcast; it took me 20x more time to write down everything you said from the captions 😵
Thank you for the Spanish subtitles. 🤗
Awesome, thank you.
When are you restarting?
Thank you.
Where is the Discord link?
Thank you!
What is the difference between s and s' (s prime)?
s' is the derivative of s
"we're gonna represent an MDP with mathematical notation, this will make things easier"
🧢
Dota
I came here to learn about a topic and left sad that OG.JeRax and OG.ana aren't on the active roster; hopefully OG.Sumail will carry as well as ana did.
When are the next videos coming? Is there any schedule?
Hey navaneetha - Currently aiming to release a new video in this RL series at least every 3-4 days.
Could we please get the code files for free, just for students?
Hey Mayank - We currently don't have any systems in place to implement or track a setup like that. Just for clarity, note that all of the code will be fully shown in the videos, so the code itself is freely available. Also, the corresponding blogs for each video are freely available at deeplizard.com.
The convenience of downloading the pre-written organized code files is what is available as a reward for members of the deeplizard hivemind.
deeplizard.com/hivemind
Markov chains: th-cam.com/video/rHdX3ANxofs/w-d-xo.html
Are you sure this is Markov? I think you're thinking of Pavlov. I'm looking for Markovian on/off states.
Yes, this is the topic of Markov Decision Processes.
@deeplizard Thanks
“S sub t gives us A sub t...”
Reading off text? Nice text-to-speech tutorial.
What else was she supposed to say? Learning from text vs. the spoken word is the same thing; I don't see a better alternative.
Nice explanation. I can implement this now without diving into the math a lot. Not the most elegant way, but anyway, concept understood.
Great video
Thank you!