I searched all over YouTube, but this was the most informative video. Everywhere else I got stuck with issues and errors.
These tutorials are criminally underrated!
Thank you so much! I'm glad you enjoyed!
I'm mind-blown and having a lot of fun playing with this. These tutorials are very helpful. The way you explain what you're doing, and the "why", is the reason I subbed :)
Thanks a lot Jason, keep doing the good work!
This is my first ML-Agents project and I loved it.
@@aaravkumar7702 Believe it or not, this was my first as well! I'm glad you enjoyed it!
Awesome tutorials #1 and #2. It's hard to find ML-Agents tutorials that actually work, especially once they're a year old. The libraries are changing so fast, and a year ago it was much more complex to set up and run. Well explained too.
Yeah, with it being in active development, installation can run into some difficulties; I believe a new version of ML-Agents came out right after I made this series (perhaps as I was making it lol), and from what I have heard from others it is far more difficult to install at the moment. The version used here is the one currently offered as a package in Unity, so I'll likely stick with it for now until things are caught up.
@@_Jason_Builds Hi Jason, do you know of any race car ML-Agents tutorial that works? I did the following tutorial 3-4 years ago and it was really fun: th-cam.com/video/n5rY9ffqryU/w-d-xo.html . But for the life of me I could not get it to work when I tried again two weeks ago. After watching a few more of your videos I'll attempt it again and hopefully will get it. I think my code was not properly updated (the CollectObservations and OnActionReceived methods specifically). If you can point me to any race car tutorial that has been done recently and works, I would be very grateful.
@@majidmoghadam3886 I haven't seen any recent ones myself, but if I were to try to make this, I would start with a simple box (like the lil guys we have in these videos) and just have it slide on the floor, then experiment with adding wheels or other features. Adding small things one at a time is, I've found, the best way to get things working. If I come across anything, I'll let you know.
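For a starting point, here's roughly the kind of minimal agent I mean (a rough sketch, not code from these videos; the goal reference, move speed, and spawn range are placeholder assumptions, and the reward/collision logic is left out):
using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using Unity.MLAgents.Sensors;

// Minimal "sliding box" agent: two continuous actions push the box along X and Z.
public class SlidingBoxAgent : Agent
{
    [SerializeField] private Transform _goal;       // placeholder goal transform
    [SerializeField] private float _moveSpeed = 4f; // placeholder speed

    public override void OnEpisodeBegin()
    {
        // Random spawn each episode keeps the learned policy general
        transform.localPosition = new Vector3(Random.Range(-4f, 4f), 0.3f, Random.Range(-4f, 4f));
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        sensor.AddObservation(transform.localPosition);
        sensor.AddObservation(_goal.localPosition);
    }

    public override void OnActionReceived(ActionBuffers actions)
    {
        // Slide on the floor first; wheels and car physics can come later
        float moveX = actions.ContinuousActions[0];
        float moveZ = actions.ContinuousActions[1];
        transform.localPosition += new Vector3(moveX, 0f, moveZ) * _moveSpeed * Time.deltaTime;
    }
}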
Was really helpful! I encountered some issues when my Unity version was 2020.3.18f1, but after upgrading to 2022.3.31f1 it worked a lot better. I believe that when there are differences in the prefab, ML-Agents can end up seeing two controllers and working with a default one that doesn't have any of the overrides from AgentController, which led to training stalling at a mean reward of around -0.5. After updating the Unity version and standardizing all the prefabs, it works great!
Thank you so much 🙏 I needed this so much 😢
Awesome work! Please keep uploading more in the future!!
To remove the hesitation at the end, scale the reward downward over time (but never remove it entirely) to encourage getting the pellet FAST.
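Something like this rough sketch inside the Agent subclass could do it (the tag name, decay rate, and floor value are just placeholders, not from the video):
private float _episodeTimer;

public override void OnEpisodeBegin()
{
    _episodeTimer = 0f; // restart the decay clock (alongside the usual reset logic)
}

private void FixedUpdate()
{
    _episodeTimer += Time.fixedDeltaTime; // how long this attempt has taken so far
}

private void OnTriggerEnter(Collider other)
{
    if (other.CompareTag("Pellet")) // placeholder tag for the goal object
    {
        // Reward starts at 1.0 and decays toward a 0.2 floor over ~10 seconds,
        // so getting the pellet is always rewarded but getting it fast pays more.
        AddReward(Mathf.Max(0.2f, 1f - 0.08f * _episodeTimer));
        EndEpisode();
    }
}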
That's a great idea!
Love this series! Keep up the good work, you're helping a lot of people :)
Awesome work! I enjoy watching your videos :) Keep it up!
Thank you bro! You saved my life.
Great video, Jason. I would love to share my Unity project with you.
I'm not sure if youtube blocks listing my email on here, but it should be viewable on my channel page if you wanted to send a GitHub repository, I'd love to take a look when I have the time
Very good
Wow, thank you very much for your video explanation. The simultaneous training of multiple agents in Unity amazed me! I want to train drones in UE4. Can we train multiple drones simultaneously across multiple environments, as shown in your video? Training a single drone in a single environment is very inefficient, and it is difficult to get it to converge when training against randomized targets. Is there any way you could help me?
👋
👋
Hmm, can you provide the same but for Godot GDScript?
I have never used Godot before, but I am looking into it right now. I'll see what I can do
I found this tutorial; coupled with the official quickstart guide on GitHub, I think it's a good way to get started.
th-cam.com/video/f8arMv_rtUU/w-d-xo.html&ab_channel=cshift
github.com/edbeeching/godot_rl_agents
I could make a video if you want, but I have never used Godot before, so I would need to learn more about the engine before creating a tutorial myself.
I use a certain website where I can type something in and it translates it for me through a certain neural network. I can't say the actual word for it, because my comment gets shadow banned by Big Nanny You Tube for some reason. Let's just say it starts with chat and ends with gee pee tee.
I use: Vector3 velocity = new Vector3(moveX, 0f, moveZ).normalized * Time.deltaTime * _moveSpeed; // per-frame movement vector (note: already scaled by Time.deltaTime, so it's a displacement rather than a true velocity)
After it was fairly trained I added:
private void FixedUpdate() {
// Small time-based penalty each physics step for taking too long to reach the goal
// (inside FixedUpdate, Time.deltaTime returns the fixed timestep)
AddReward(-Time.deltaTime);
}
and ran with the --resume flag on the mlagents-learn command.
This helps it move more smoothly and go directly toward the goal instead of wandering around on the way to it.
Awesome! That's likely a far better way to encourage more precise movement than the method I initially chose, as it is a steady negative reward that the agent can control itself.
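For anyone else trying this, here's a rough sketch of a variant of the same idea, with the penalty applied per decision step inside OnActionReceived instead of per physics step (assumes the usual Unity.MLAgents usings, the _moveSpeed field from the snippet above, and a MaxStep value set greater than zero on the Agent; the numbers are just placeholders):
public override void OnActionReceived(ActionBuffers actions)
{
    float moveX = actions.ContinuousActions[0];
    float moveZ = actions.ContinuousActions[1];
    transform.localPosition += new Vector3(moveX, 0f, moveZ).normalized * _moveSpeed * Time.deltaTime;

    // Spreads at most -1 of penalty across a full episode, so a +1 reward for
    // reaching the goal always outweighs the accumulated time penalty.
    AddReward(-1f / MaxStep);
}
Resuming works the same way, e.g. running mlagents-learn with the same --run-id plus --resume.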