Thank you. More videos on the more subtle, esoteric aspects of training would be great!
Hey there, I rarely comment on YouTube, but I wanted to leave my thanks. This seems to be insanely improving my agent's learning time! It can now achieve within 30 minutes what it hadn't learned after 12 hours.
Hey, thanks for sharing this! :) Really appreciate it! This saves me and the environment a lot of energy.
Thanks for this video, it helped me speed up my training and understand hyperparameter tuning a bit better. An updated version with a detailed explanation of when to change which parameter, maybe with some short config file examples, would be great for my current project and for everyone else, I guess. Thanks anyway and have a good day.
Thanks for the feedback! I’m pretty busy at the moment, but I’m sure I will come back to ML-Agents in the future.
@@ImmersiveLimit No problem! A good point to cover would also be a batch runner for automating different runs (since a full hyperparameter tuning tool is too much work to maintain). If I get anything running, I'll share it!
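For anyone wanting a short config example in the meantime, here is a rough sketch of what a trainer_config.yaml entry looked like in the 0.x-era releases. The key names follow that version's PPO trainer; the values and the brain name are illustrative placeholders, not the settings used in the video.

```yaml
# Sketch of a 0.x-era trainer_config.yaml (placeholder values, not tuned)
default:
    trainer: ppo
    batch_size: 1024
    buffer_size: 10240
    learning_rate: 3.0e-4
    num_epoch: 3
    time_horizon: 64
    max_steps: 5.0e5
    summary_freq: 1000

PyramidsLearning:      # section name must match your brain/behavior name
    batch_size: 128    # smaller batches are common for discrete-action agents
    buffer_size: 2048
    beta: 1.0e-2
    hidden_units: 512
    num_layers: 2
    max_steps: 5.0e5
```

A brain-specific section only needs the keys that differ; everything else falls back to the default block.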
very useful! Thanks Adam!
Glad to help!
Brilliant video
Well done, great results.
The difference in training times could possibly be due to randomness. Did you run the different configurations many times?
Where is that yaml configuration file? I wasted days looking for it but can't find it anywhere. Also, the documentation says nothing about its location.
I like all these videos, thank you... BUT I really wish someone like yourself would do a video walkthrough on how to set up ML-Agents for dummies/beginners. I have tried on a number of occasions to get it working but with no success. I just wish Unity would pull its finger out and get ML-Agents out for public use.
Thanks for the suggestion! The Udemy course we’re working on will definitely contain a full setup section, but maybe I can put together a free guide on the website too.
I just ran an experiment for 300M steps, so any hyperparameters I choose converge :) I even tried using 10 epochs with a very high LR of 0.02, and it still takes 30M+ steps to converge on my ring-flying agent. It's weird, so I just run it for 100 or 200M steps and be done with it :)
Nice to hear that you are doing a course. Is there more info available, or should we wait? Does anyone know of an ML-Agents themed book that is out or coming soon? There are some older ones, but I think ML-Agents has moved on since those books were written. Or is there any good book to use as of October 2019?
No more info yet, but we'll try to write something up soon and put it on our website. We made this tutorial that will teach you the basics of ML-Agents: th-cam.com/video/axF_nHHchFQ/w-d-xo.html It uses version 0.8, which is very similar to the current version; just a small update to the configuration file is needed.
I know there is a book on it by Michael Lanham, but I'm not sure which version was used or if there is an up-to-date ebook or something. 🙂
@@ImmersiveLimit Are there any more resources on how to properly learn ML-Agents? Or any book to accompany it, I don't know. I'm going to redo the Coursera specialization, but I'm afraid that might not be enough (did it a while ago and forgot most of it).
Have you experimented with even higher batch sizes?
I have not, have you?
@@ImmersiveLimit Will let you know, I'll run some tests today. In a lot of ML training tasks, the bigger the batch size the better, so it would be interesting to see how that works with ML-Agents and RL. Apart from changing the yaml file, have you left everything else as default in the sample scene? I'll run the training with the default hyperparameters, with the ones you used, and with a cranked-up batch size, and compare all three...
Yes, I left the scene unchanged.
@@ImmersiveLimit Tried a few other options with smaller and larger batch and buffer sizes, and nothing comes even close to your hyperparameters. Crossed 1.5 after 80k steps.
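For anyone repeating this comparison, the only part of the config that needs to change between runs would look roughly like this. The numbers are placeholders (the actual values tried above weren't posted), and buffer_size is usually kept a multiple of batch_size.

```yaml
# Placeholder values - vary batch_size per run and scale buffer_size with it
PyramidsLearning:        # or whatever your brain/behavior is called
    batch_size: 128      # e.g. run A: 128, run B: 1024, run C: 4096
    buffer_size: 2048    # e.g. 2048 / 10240 / 40960
```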
It doesn't show the green download button on GitHub. What should I do?
What’s the web address you’re using?
@@ImmersiveLimit github.com/Unity-Technologies/ml-agents/blob/master/docs/Learning-Environment-Examples.md#pyramids.
Go to the main page: github.com/Unity-Technologies/ml-agents
Wow, this is really good for me. I am going to use ML-Agents in my personal MOBA game.
github.com/Unity-Technologies/ml-agents/issues/2759
I have some questions, can you answer them?
You might want to watch my penguin tutorial to see how to use AddVectorObservation(). I do not think it will work in a coroutine. th-cam.com/video/axF_nHHchFQ/w-d-xo.html
@@ImmersiveLimit CollectObservations runs on a FixedUpdate cycle. Doesn't it require too much computation?
@@ImmersiveLimit I don't use raycasts; I get each object's information through its distance from the agent.
@@ImmersiveLimit Is raycasting the best way to observe? I passed the coordinates and distances directly to the agent, but it's so dumb. I'm guessing at the reason: is it because of too much observation information (120 AddVectorObs)?
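To make the AddVectorObs point concrete, here is a minimal sketch in the 0.x-era C# API (the ExampleAgent class and the "target" field are hypothetical; later releases renamed the call to sensor.AddObservation). Observations are collected in CollectObservations, which the toolkit calls on its own decision cycle, so no coroutine is needed, and a handful of relative, normalized values usually trains better than 120 raw coordinates.

```csharp
using MLAgents;
using UnityEngine;

// Illustrative agent - class name, "target", and the chosen observations are just examples.
public class ExampleAgent : Agent
{
    public Transform target;
    private Rigidbody body;

    public override void InitializeAgent()
    {
        body = GetComponent<Rigidbody>();
    }

    public override void CollectObservations()
    {
        // Called by ML-Agents on its decision step - not from a coroutine.
        // Target position in the agent's local frame (3 floats).
        AddVectorObs(transform.InverseTransformPoint(target.position));
        // Agent's own velocity in its local frame (3 floats).
        AddVectorObs(transform.InverseTransformVector(body.velocity));
        // Total of 6 floats - "Space Size" in the Brain / Behavior Parameters must match.
    }
}
```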
ArgumentException: An item with the same key has already been added. Key: Agent (PushAgentBasic)
System.Collections.Generic.Dictionary`2[TKey,TValue].TryInsert (TKey key, TValue value, System.Collections.Generic.InsertionBehavior behavior) (at :0)
System.Collections.Generic.Dictionary`2[TKey,TValue].Add (TKey key, TValue value) (at :0)
MLAgents.Brain.SendState (MLAgents.Agent agent, MLAgents.AgentInfo info) (at Assets/ML-Agents/Scripts/Brain.cs:56)
How do I fix this? Can you do a tutorial on how to set up this entire Unity project? I get errors; I switched to C# 4 and it still has errors. I am using Unity 2018.1.7f1. Do I need to upgrade or something?
We are working that into the course we are currently making. For that error, make sure Control is checked in the Academy inspector window.
For my training project it helped a lot that I set normalization to true.
I think there is also code for automatic hyperparameter tuning, but that will take a long time:
github.com/mbaske/ml-agents-hyperparams
Nice, I’ll have to check that out!
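For anyone wondering where that setting lives, normalization is just a flag in the same trainer config (key name as in the 0.x format; the section name below is a hypothetical brain/behavior name):

```yaml
MyAgentLearning:
    normalize: true    # running-average normalization of vector observations; helps most with continuous control
```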
Hmm, the training time you are showing is very slow (54 min for 162k steps). In the same time I am training 10M steps, almost 100x as fast.
Could be that they’ve improved performance a lot since I made this video. I was running on an older machine too.
@@ImmersiveLimit Yeah, could be. I also run 12+ arenas within one instance, and I run 3-4 of those instances as compiled binaries at the same time, so in total around 40 training areas and 40 agents.