How to use Unity ML Agents in 2024! ML Agents 2.0.1

  • Published Sep 15, 2024

Comments • 530

  • @theashbot4097
    @theashbot4097  1 year ago +12

    Commonly had problems
    Q. I get this error in the console: "Couldn't connect to trainer on port 5004 using API version 1.5.0. Will perform inference instead"
    A. That is not an error, just a warning; you do not have to worry about it at all. You get it because Unity was expecting you to have started the training in Python, and since it is not started, it will try to run a pretrained model instead.
    Q. Something in the CMD caused an ERROR.
    A. Make sure you are using Python 3.9.13.
    Q. ModuleNotFoundError: No module named 'packaging'
    A. Type "pip install packaging"
    Q. "Failed to initialize NumPy"
    A. Run "pip install numpy==1.21.2" before you try to install the ML-Agents package.
    If you have any ideas for videos, I am always happy to hear suggestions.
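The version pins quoted in the pinned comment above can be collected in one place; a minimal sketch (the version numbers are the ones reported to work in this thread, not an official compatibility matrix):

```python
import sys

# Versions reported to work in this comment thread (not an official matrix)
SUPPORTED_PYTHON = {(3, 9), (3, 10)}
PINNED_PACKAGES = {
    "numpy": "1.21.2",   # install before ml-agents to avoid "Failed to initialize NumPy"
    "packaging": None,   # any version; fixes ModuleNotFoundError: No module named 'packaging'
}

def python_supported(version_info=sys.version_info):
    """True when the interpreter matches a version the thread reports working."""
    return tuple(version_info[:2]) in SUPPORTED_PYTHON

def pip_commands():
    """Build the pip install commands suggested above, pinned where a pin was given."""
    return [
        f"pip install {pkg}=={ver}" if ver else f"pip install {pkg}"
        for pkg, ver in PINNED_PACKAGES.items()
    ]
```

`pip_commands()` only yields the strings; run them in your activated venv before `pip install mlagents`.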

    • @nico31488
      @nico31488 1 year ago +1

      [WARNING] Trainer has no policies, not saving anything.
      RuntimeError: CUDA error: no kernel image is available for execution on the device
      Can you help me solve this error, please?

    • @theashbot4097
      @theashbot4097  1 year ago +2

      @@nico31488 What were you doing when you got this error?

    • @nico31488
      @nico31488 1 year ago +2

      @@theashbot4097 I was trying to run the ML agent in cmd. I have already resolved this error. Thank you for replying to me.
      Just in case someone gets the same error:
      Install CUDA [make sure the version is compatible with everything else] and don't specify the torch version while installing, as told in the git repo of release 20.

    • @cem_kaya
      @cem_kaya 1 year ago +4

      For anyone reading in the future: using Anaconda helps a lot with CUDA dependencies.

  • @patrickdean2669
    @patrickdean2669 1 year ago +10

    Mad props for the diligence at the start of the video in ensuring the correct versions are adhered to; that can save SO many problems!

  • @GoggledGecko
    @GoggledGecko 7 months ago +2

    For anyone getting a "Failed to initialize NumPy" error when running "mlagents-learn": it seems the numpy dependency version is incorrect in the latest release of ML Agents. Manually running "pip install numpy==1.23.1" fixed the issue for me (note that it will give you an "error" about incompatible numpy versions, but ignore this).

    • @theashbot4097
      @theashbot4097  7 months ago +1

      Cool! Thank you for sharing this!

  • @Jorr042
    @Jorr042 3 months ago +2

    You are a life saver. I chose AI as an elective in my game studies, but after a 50-minute tutorial I still failed to install ML Agents. Not only did you help me install it (using the Unity package manager, which is super easy!), but you also showed me how the AI works and gave an example of the training. I also love the common mistakes and suggestions in the programming you made (for example with the script's name, to check what the agent collides with); this really made me understand it better and showed common mistakes. Thank you!!

    • @theashbot4097
      @theashbot4097  3 months ago

      I am so happy I was able to help you learn this better!

  • @kin_1997
    @kin_1997 1 year ago +12

    It would be very cool if you made another video building on top of this model. Like adding random walls each time, or having it react dynamically to objects someone places while the game is running.

  • @ctpax1933
    @ctpax1933 1 year ago +2

    OH MY GOOOOD!!!! I was trying alone for so long! I was very close to giving up.

    • @theashbot4097
      @theashbot4097  1 year ago +1

      I am glad this helped you!! Do you mind me asking what I did well, and what I could do better next time?

    • @ctpax1933
      @ctpax1933 1 year ago +2

      ​​ @theashbot4097 Overall a great video. Exactly what I was searching for.
      I myself didn't even get 'mlagents-learn -h' to work. I probably had incompatible versions of everything; that's why I needed a fresh video showing where to install everything. You took me by the hand and guided me through the beginning.
      If I really want to search for errors in your video, I could obviously call out the mistakes you saw yourself. You corrected them via a message or during the video. The localPosition was a bit irritating, because I tried to understand why you used the world position instead of the local one, but then later I saw it was just wrong.
      At 23:21 you said you don't want to use the tag compare because it uses strings, but it does not. The compiler will convert the string inside "CompareTag", and it uses an integer internally. So creating a tag script is not just worse for performance, it is also a bad pattern. Additionally, you could compare the hit target with your already captured target transform (that uses a number comparison too).
      If you are searching for video ideas, you could obviously make a more in-depth explanation of everything you didn't mention, like going through every setting in the inspector of the scripts you used, because that's what I will do now. I want to understand a bit more how the learning process works internally. But your video was enough for a start. Thanks!!

    • @theashbot4097
      @theashbot4097  1 year ago +1

      ​@@ctpax1933 Thank you for your feedback. I do not understand what you mean by tags being integers. Do you mean I can pass an integer value into the "CompareTag" function and it will work?! Thank you again for the feedback!

    • @ctpax1933
      @ctpax1933 1 year ago +1

      @@theashbot4097 No, the compiler will use integers internally. You don't have to do anything; it will happen automatically, under the hood.
      You can try it out: make a for-loop and compare GetComponent vs CompareTag. You will see CompareTag being faster, and even faster when the GetComponent returns null (when the component is not attached).

  • @WouldMakeB
    @WouldMakeB 1 year ago +3

    THANK YOU! You just saved me 2 days of installs/reinstalls... like aaaaaaa you're a saviour

    • @theashbot4097
      @theashbot4097  1 year ago +3

      No problem. I am glad you found this helpful.

  • @MrProgrammierer
    @MrProgrammierer 2 months ago +1

    Really good tutorial! I didn't think it would be so easy to create a learning AI!👍

    • @theashbot4097
      @theashbot4097  2 months ago +1

      Ya, it took me a bit to figure out how to do this, and I am so happy I was able to help you out!

  • @theGamersQueue
    @theGamersQueue 1 year ago +2

    I've been looking for a decent tutorial for so long. This tutorial is up to date with a good explanation of everything; it helped me finally understand a little bit of how to use ML Agents! Thanks for the video.

    • @theashbot4097
      @theashbot4097  1 year ago +1

      I am glad this helped! Do you mind me asking what you think I did well, and what I can do better next time?

    • @theGamersQueue
      @theGamersQueue 1 year ago +1

      @@theashbot4097 I think you did well explaining everything properly. I've never done Python, so I wasn't sure what was going on inside cmd, but it's OK. You could do a part 2 of this video, expanding on this project. :)

    • @theashbot4097
      @theashbot4097  1 year ago +2

      @@theGamersQueue Thank you for your feedback.

  • @TheBu213
    @TheBu213 10 months ago +12

    For anyone following this: if you get an error thrown at the end that says ModuleNotFoundError: No module named 'packaging', you can fix it by running the command line: pip3 install packaging

    • @theashbot4097
      @theashbot4097  10 months ago +2

      Thank you for posting this to help people who are having this same error!

    • @omamuyovwilucky4133
      @omamuyovwilucky4133 9 months ago +2

      Bro you're a life saver ❤❤

  • @kin_1997
    @kin_1997 1 year ago +5

    Man, what an amazing video! Good job and thank you for the content. Very easy to follow along, and minimal room for messing up, really.

  • @RobloxGamerX
    @RobloxGamerX 1 year ago +3

    Great video! I got my AI working really well and I'm really glad to see an updated tutorial.

    • @theashbot4097
      @theashbot4097  1 year ago +2

      I am glad you got it to work!

  • @dripyman6146
    @dripyman6146 1 year ago +3

    Bro, you are an absolute life changer. I love your drive to help others. I was watching Code Monkey's video about ML Agents and had so many problems because it was not up to date, until I saw your comment about fixing all the errors. Thank you!!! I will definitely check out all of your other videos when I'm done with this one.

    • @theashbot4097
      @theashbot4097  1 year ago +3

      I am very glad you found this video helpful!!! And that is very kind of you to check out my other videos, but this one is definitely the best-quality video I have.

    • @dripyman6146
      @dripyman6146 1 year ago +2

      @@theashbot4097 I do hope you post more ML-Agents content in the future. Also, is it possible for me to increase the max steps in the cmd window, so it can learn longer?

    • @theashbot4097
      @theashbot4097  1 year ago +2

      @@dripyman6146 The best way that I know of is "mlagents-learn --initialize-from=Test3 --run-id=Test4". This will start the training from the "Test3" model and continue the training; when it is finished, it will make a "Test4" model with the new training.

    • @dripyman6146
      @dripyman6146 1 year ago +1

      @@theashbot6146 I'm trying that right now but it doesn't seem to work. So I have to start a test, and afterwards, when I want to start another one, instead of the normal command I write "mlagents-learn --initialize-from=Test1 --run-id=Test2"?

    • @theashbot4097
      @theashbot4097  1 year ago +1

      @@dripyman6146 Sorry for the slow response. The "mlagents-learn --initialize-from=Test1 --run-id=Test2" command will continue the training and put the new model, with the continued training, in a new model named Test2. If you want to continue the training and overwrite the model you started the training with, you can use this command: "mlagents-learn --run-id=Test1 --resume". If you just want to start a new training from scratch, you can just use "mlagents-learn --run-id=Test2". I hope this is what you want and makes sense.
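The three commands in this exchange differ only in their flags; a small sketch of how they fit together (the run IDs Test1/Test2 are just the thread's examples):

```python
def mlagents_learn_cmd(run_id, resume=False, initialize_from=None):
    """Assemble an mlagents-learn command line for the three cases above:
    a fresh run, a resumed run (continues and overwrites the same model),
    and a run whose weights start from a previous model but save under a new ID."""
    cmd = ["mlagents-learn"]
    if initialize_from:
        cmd.append(f"--initialize-from={initialize_from}")
    cmd.append(f"--run-id={run_id}")
    if resume:
        cmd.append("--resume")
    return " ".join(cmd)
```

The helper only builds the string; you would paste its output into the cmd window with your venv active.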

  • @Maniketabchi-tf3uq
    @Maniketabchi-tf3uq 1 year ago +1

    Amazing video. This helped me install ML-Agents on Mac after tens of other videos from years ago which did not help. Thank you!

    • @theashbot4097
      @theashbot4097  1 year ago +1

      I am glad this video helped!! The whole reason I made this video is that I could not find any up-to-date tutorial.

  • @sc.ll0ound662
    @sc.ll0ound662 1 month ago +1

    I am from India. Thank you for the help; great explanation!

  • @MobyMotion
    @MobyMotion 11 months ago +3

    Thank you! So helpful. Others have mentioned this, but I would love to see you gradually adding small levels of complexity.

    • @theashbot4097
      @theashbot4097  11 months ago +3

      I am wanting to make more videos. I planned on releasing 1 every 2 weeks, but then because of school that became 1 every month, and I did not even start the video for last month. So I will try to make more videos, but I might not have the time. I will be making videos in December, and slowly releasing them, probably 1 every 2 weeks, but that might change. I will try to make more complex things in ML-Agents, but first I want to make videos on how to use all of ML-Agents' main features, then start making more complex stuff.
      I hope you understand, and thank you for your patience.

    • @MobyMotion
      @MobyMotion 11 months ago +1

      @@theashbot4097 Hey, thanks for replying, and don't worry, it's hard to juggle posting with life. Appreciate the effort.

    • @theashbot4097
      @theashbot4097  11 months ago +2

      @@MobyMotion No problem! I really appreciate the patience!

  • @samyam
    @samyam 1 year ago +3

    Thanks for the detailed tutorial!

    • @theashbot4097
      @theashbot4097  1 year ago +1

      What?!? I can't believe you watched my video! I love your videos! Do you mind telling me what you think I did really well, and what I could do better next time?

    • @samyam
      @samyam 1 year ago +2

      @@theashbot4097 Haha thanks!! I liked how to-the-point it was, and making sure you explained the different versions needed at the start was useful. In the future I'd look into putting down some cushioning or getting a mic stand to avoid that bass sound when typing (I had that issue before as well). Small nitpick: no need to set the speed to 5 on each OnActionReceived; you can set the variable outside since it doesn't change (probably just an oversight).

    • @theashbot4097
      @theashbot4097  1 year ago +1

      @@samyam Thank you for your feedback!

  • @Rynali
    @Rynali 1 year ago +1

    Awesome video man. Keep it up!!! :D

    • @theashbot4097
      @theashbot4097  1 year ago +1

      Thank you so much!!! I am glad you found this helpful!!!

  • @OrdinaryWisdoms
    @OrdinaryWisdoms 1 year ago +2

    Good job, dude. I have been struggling to find a proper tutorial that covers the latest version's installation. I messed up a lot and in the end figured it out. But now I found your video and I am amazed how well you described the whole process. It saves a lot of time. Thanks for the effort.

    • @theashbot4097
      @theashbot4097  1 year ago +3

      I am glad this helped. The only reason I made this video is that it was very hard to find what I needed to do, because everything I could find was outdated.

    • @OrdinaryWisdoms
      @OrdinaryWisdoms 1 year ago +2

      @@theashbot4097 Indeed. Buddy, I have a proposal for you: I am working on a Unity ML Agents project. If you have time and would like to discuss it, do let me know.

    • @theashbot4097
      @theashbot4097  1 year ago +3

      @@OrdinaryWisdoms I can help you figure out what I think the rewards should be, but I already have a lot of other stuff that I need to do, so I will not really help you make it. If you have any questions you can send me an email at theashbot.com/.

    • @folkenberger
      @folkenberger 1 year ago +2

      @@theashbot4097 One idea would be to make an implementation of ML Agents with the NavMesh component tool, since it could be interesting to see how to connect links, jumps, etc.

  • @dibsthegreat1041
    @dibsthegreat1041 1 year ago +1

    Amazing video man! I'm learning how to use ML-Agents for a class project and came across your video. You helped me a lot in learning how to make this work with Unity! Thanks for making this video.

    • @theashbot4097
      @theashbot4097  1 year ago +1

      I am glad this video helped!

  • @joevent
    @joevent 1 year ago +1

    One of the best tutorials I've ever watched. Keep doing you, ashbot.

    • @theashbot4097
      @theashbot4097  1 year ago +1

      I am very glad it helped you. Do you mind me asking, what do you think I did really well in this tutorial, and what could I improve on?

  • @MwiHero
    @MwiHero 1 year ago +2

    Nice! Support from Italy ❤

  • @crazyfoodclub88
    @crazyfoodclub88 10 months ago +3

    Great video! Saved me so much time! I tried this today, and one issue is that PyTorch has a new version which is not compatible. To resolve it, I had to specify the exact versions of PyTorch as per your video, i.e. "pip3 install torch==2.0.1 torchvision==2.0.2 torchaudio==0.15.2"

    • @theashbot4097
      @theashbot4097  10 months ago +2

      Thank you for posting the solution! I will add it to the pinned comment when I have the chance.

    • @FabianBarreiro
      @FabianBarreiro 9 months ago +2

      Could it be that the versions of torchvision and torchaudio are swapped in your comment?

    • @thaihoangtruong4564
      @thaihoangtruong4564 7 months ago +1

      @@FabianBarreiro ty ty
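FabianBarreiro's catch looks right: torchvision 2.0.2 and torchaudio 0.15.2 read as swapped. A sketch of the pairing as usually published for that PyTorch release (these companion version numbers are my assumption, not from the video; double-check the official PyTorch install matrix before pinning):

```python
# Companion versions usually published alongside torch 2.0.1 (assumed pairing;
# verify against the official PyTorch install matrix before relying on it)
TORCH_2_0_1_COMPANIONS = {
    "torchvision": "0.15.2",
    "torchaudio": "2.0.2",
}

def pin_command(torch_version="2.0.1", companions=TORCH_2_0_1_COMPANIONS):
    """Build a pip command with every package explicitly pinned."""
    parts = [f"torch=={torch_version}"] + [f"{pkg}=={ver}" for pkg, ver in companions.items()]
    return "pip3 install " + " ".join(parts)
```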

  • @rustyonlife
    @rustyonlife 1 year ago +1

    Wow, this is really cool! Thanks! I am now using this with my US Air Force job to make combat drones. :0)

  • @BossyYT
    @BossyYT 11 months ago +1

    Very good tutorial. There's a new version of ml-agents that requires Python 3.10, for anyone new watching this tutorial. The rest of the tutorial is up to date!

    • @theashbot4097
      @theashbot4097  11 months ago +1

      Really? Which version?

  • @thewatchfuleyes-
    @thewatchfuleyes- 1 year ago +2

    Awesome tutorial! I really enjoyed it and you explained everything really well! Now I've got my very own first working machine learning AI, so thank you very much! ^_^

    • @theashbot4097
      @theashbot4097  1 year ago +3

      I am glad this helped you! Do you mind me asking what you think I did well, and what I could improve on next time? Thanks!

    • @thewatchfuleyes-
      @thewatchfuleyes- 1 year ago +3

      @@theashbot4097 There is always room for improvement, but for me the video was pretty much perfect in that you explained everything really well, making it easy to follow. The only thing I noticed was that on a few occasions you clicked around very fast, so I couldn't quite catch what you did, and I had to replay certain parts of the video a few times to mirror what you did. Though to be fair, this wasn't really that much of an issue. All in all, you've made an incredible tutorial: this is the first time I've ever used Unity and the first time I've worked with reinforcement learning AI, and the fact that you've taught me within 45 minutes how to install ML Agents and have working AI in Unity is outstanding!

  • @memormedia1107
    @memormedia1107 10 months ago +1

    Stellar video!

    • @theashbot4097
      @theashbot4097  10 months ago +1

      Thank you so much!!

  • @mikaalber2913
    @mikaalber2913 1 year ago +1

    Great tutorial for getting started with ML Agents!

  • @artoriasdenostradamus3628
    @artoriasdenostradamus3628 1 year ago +1

    A tip for visualizing training with some graphs: from your Python virtual environment, inside the results folder, you can run, for example, tensorboard --logdir Test5 :)

    • @theashbot4097
      @theashbot4097  1 year ago +2

      I personally do not like how it looks, but I should probably show how to do it when I remake this video.

  • @matiandino0
    @matiandino0 1 year ago +1

    Amazing video, so useful! Thank you

    • @theashbot4097
      @theashbot4097  1 year ago +1

      I am glad this was helpful!!

  • @romangleizer3779
    @romangleizer3779 10 months ago +1

    Very good and helpful video, thank you!

    • @theashbot4097
      @theashbot4097  10 months ago +2

      I am glad this helped you!

  • @qicesun614
    @qicesun614 1 year ago +2

    A tutorial that is very up to date.

    • @theashbot4097
      @theashbot4097  1 year ago +1

      Yup. I made this because there was no up-to-date tutorial.

  • @pavelkoryakin5750
    @pavelkoryakin5750 1 year ago +1

    Awesome tutorial! Thank you!

    • @theashbot4097
      @theashbot4097  1 year ago

      No problem, I am glad this helped you! Do you mind me asking what I did well, and what I could do better next time?

  • @camiloariasgiraldo
    @camiloariasgiraldo 1 year ago +1

    I

    • @theashbot4097
      @theashbot4097  1 year ago +2

      Thank you! Do you mind me asking what was your favorite part of the video, and what I could do better next time?

  • @saeeds9364
    @saeeds9364 2 months ago +1

    Thanks 👍❤

    • @theashbot4097
      @theashbot4097  2 months ago +1

      No problem!! I am glad this helped you!!!

  • @huyhoangvu6143
    @huyhoangvu6143 9 months ago +1

    It's amazingggg

  • @caiosoares3238
    @caiosoares3238 1 year ago +1

    11k views on this video! 280 comments! You deserve more subscribers.

    • @theashbot4097
      @theashbot4097  1 year ago +1

      Thank you! Are you still having that problem?

  • @drewmasker8605
    @drewmasker8605 1 year ago +1

    Very useful, thanks a lot!

    • @theashbot4097
      @theashbot4097  1 year ago +1

      I'm glad this was useful! Do you mind me asking what I did well, and what I could do better next time?

  • @mutzelmann
    @mutzelmann 8 months ago +1

    Great video, I will try it soon

    • @theashbot4097
      @theashbot4097  8 months ago +1

      Thank you very much! If you have any problems, please look at the pinned comment.

  • @cem_kaya
    @cem_kaya 1 year ago +3

    Don't forget to enable Run In Background from Project Settings -> Player.

    • @theashbot4097
      @theashbot4097  1 year ago +2

      That is a good idea! I can have the AI train and do something else at the same time.

    • @cem_kaya
      @cem_kaya 1 year ago +2

      @@theashbot4097 Yep. Also, using Anaconda to set up the Python env makes it easier to manage and share/replicate the setup.

  • @12NoteOctaves
    @12NoteOctaves 1 year ago +1

    Amazing video! Thanks a lot!

    • @theashbot4097
      @theashbot4097  1 year ago +2

      I am glad this helped! Do you mind me asking what I did well, and what I could do better next time?

    • @12NoteOctaves
      @12NoteOctaves 1 year ago +1

      @@theashbot4097 Your explanation and flow were very nice, and I could keep up with it. And mentioning the right versions of Python to use helps too, since I am just getting into ML Agents. I have been thinking of trying ML Agents to train an avatar to walk with imitation learning using motion capture data. If you have any insights into that in future content, it would be great! :)
      I have subscribed and turned notifications on, just in case.

    • @theashbot4097
      @theashbot4097  1 year ago +2

      @@12NoteOctaves Yes, I am looking into getting an ML agent to walk. If I do figure it out, I probably won't have a video on it for about a month. Thank you for the sub!!

  • @AnyGameAtAll
    @AnyGameAtAll 1 year ago +1

    Great tutorial! I was really lost at Code Monkey's ML-Agents tutorial.

  • @SharkTrials
    @SharkTrials 5 months ago +2

    Great video, just a quick question: how do I change the max steps?

    • @theashbot4097
      @theashbot4097  5 months ago +2

      Yes. You need to edit the .yaml file. I show it in this video: th-cam.com/video/1raDh6rpg8U/w-d-xo.htmlsi=wecITeY4bSMfJ2YG

  • @mhreinhardt
    @mhreinhardt 1 year ago +1

    Thanks for the vid! For the installs I needed to do them all at once, or I was having errors trying to switch the protobuf (data serializer, btw) version after the fact. I.e., I ran: pip install mlagents torch torchvision torchaudio protobuf==3.20.3

    • @theashbot4097
      @theashbot4097  1 year ago +2

      I am glad you found this video helpful!! Do you mind me asking: what do you think I did really well, and what do you think I could do better?

    • @mhreinhardt
      @mhreinhardt 1 year ago +1

      @@theashbot4097 I haven't actually finished it yet, but wanted to drop that tidbit before I forgot, in case it helps people out. Once I have time to finish it I'll comment here again. Regardless, I really appreciate you sharing your knowledge about ML-Agents down to the nitty-gritty details, no matter which way it's presented.

  • @user-mi4fs2fk4t
    @user-mi4fs2fk4t 1 year ago +2

    I can't change any of the numbers to 2 and 0, and the script was not just added there by itself, I had to add it, and now I am stuck and can't do anything. Help! You did this at 14:40.

    • @theashbot4097
      @theashbot4097  1 year ago +1

      Make sure you replace "MonoBehaviour" with "Agent", then try to remove the scripts from the GameObject and put them back on. I do not understand your other problem(s).

    • @user-mi4fs2fk4t
      @user-mi4fs2fk4t 1 year ago

      What do you mean, MonoBehaviour? @@theashbot4097

  • @arinshrestha8921
    @arinshrestha8921 1 year ago +1

    I should have watched this earlier. I regret watching older videos that had me stuck in dependency errors. I wasted a week trying to figure out the problem. This was very handy.
    I suggest you work on making better thumbnails and some more tutorials on training with the different ML-Agents components, like a series. Once again, I am grateful to you.

    • @theashbot4097
      @theashbot4097  1 year ago +1

      Ya, I can do that. Thank you for the recommendation.

  • @leminchose
    @leminchose 10 months ago +1

    Thank you very much!

    • @theashbot4097
      @theashbot4097  10 months ago +1

      I am very glad my ML-Agents tutorial helped you!!

  • @neko6193
    @neko6193 14 days ago +1

    I got "mlagents-learn is not recognized" on my first attempt, but it successfully installed after I deleted my venv folder and installed the Python packages all over again.

    • @theashbot4097
      @theashbot4097  14 days ago +1

      Hey, good thinking! I am glad that worked for you!

  • @ruilanli6342
    @ruilanli6342 1 year ago +1

    Thank you, this helped me a lot!

  • @thetopoj9
    @thetopoj9 9 months ago +2

    I'm somewhere around 28:00, and I keep getting these 3 errors. I've checked my code several times, and it is just like the code in the video, and I just can't find anything wrong with it. Here are the 3 errors: "Couldn't connect to trainer on port 5004 using API version 1.5.0. Will perform inference instead." "Fewer observations (0) made than vector observation size (1). The observations will be padded." "Heuristic method called but not implemented. Returning placeholder actions."

    • @theashbot4097
      @theashbot4097  9 months ago +2

      The 1st "error" might look like an error, but it is just saying that you did not start the training process. For the 2nd and 3rd errors I will need to see the code.

    • @thetopoj9
      @thetopoj9 9 months ago +1

      @@theashbot4097 Thanks, sorry for the late reply. Here it is:

    • @thetopoj9
      @thetopoj9 9 months ago +1

      using System.Collections;
      using System.Collections.Generic;
      using UnityEngine;
      using Unity.MLAgents;
      using Unity.MLAgents.Actuators;
      using Unity.MLAgents.Sensors;

      public class NewBehaviourScript : Agent
      {
          [SerializeField] private Transform target;
          [SerializeField] private SpriteRenderer spriteRenderer;

          public override void OnEpisodeBegin() {
              transform.position = new Vector3(Random.Range(-3.5f, -1.5f), Random.Range(-3.5f, 3.5f));
              target.position = new Vector3(Random.Range(1.5f, 3.5f), Random.Range(-3.5f, 3.5f));
          }

          public override void CollectObservations(VectorSensor sensor) {
              Debug.Log((Vector2)transform.localPosition);
              sensor.AddObservation((Vector2)transform.localPosition);
              sensor.AddObservation((Vector2)target.localPosition);
          }

          public override void OnActionReceived(ActionBuffers actions) {
              float moveX = actions.ContinuousActions[0];
              float moveY = actions.ContinuousActions[1];
              float moveSpeed = 5f;
              transform.localPosition += new Vector3(moveX, moveY) * Time.deltaTime * moveSpeed;
          }

          public override void Heuristic(in ActionBuffers actionsOut) {
              // ActionSegment needs its <float> type argument to compile
              ActionSegment<float> continuousActions = actionsOut.ContinuousActions;
              continuousActions[0] = Input.GetAxisRaw("Horizontal");
              continuousActions[1] = Input.GetAxisRaw("Vertical");
          }

          void OnTriggerEnter2D(Collider2D collision) {
              if (collision.TryGetComponent(out Target targ)) {
                  AddReward(10f);
                  spriteRenderer.color = Color.green;
                  EndEpisode();
              } else if (collision.TryGetComponent(out Wall wall)) {
                  AddReward(-2f);
                  spriteRenderer.color = Color.red;
                  EndEpisode();
              }
          }
      }

    • @thetopoj9
      @thetopoj9 9 months ago +1

      Also, in case it has something to do with it, at the bottom of the Behavior Parameters there's a little message saying, "There is no model for this Brain; cannot run inference. (But can still train)"

    • @theashbot4097
      @theashbot4097  9 months ago +1

      @@thetopoj9 1st message: you're all good.
      2nd message: everything looks fine, but I will have to look into it more tomorrow.
      3rd message: that just means you will either have to train a brain or use heuristic mode.

  • @unbroken-hunter
    @unbroken-hunter 9 months ago +1

    Instead of using a blank Wall and Target script, couldn't you use tags? Have a wall tag and a target tag; that way you don't have to use TryGetComponent so much, since it's pretty slow.

    • @theashbot4097
      @theashbot4097  9 months ago +1

      Ya, I realized that it was super slow after making that. The reason why I used to not like tags is that I am not good at spelling. I have gotten better though.

  • @kodaxmax
    @kodaxmax 1 year ago +2

    How do you edit the trainer config file? The only one I could find was in the results folder, and it doesn't seem to be used during training.

    • @kodaxmax
      @kodaxmax 1 year ago +2

      Add its file path after mlagents-learn, like so:
      mlagents-learn C:\Users\username\results\test1\configuration.yaml --run-id test1 --force

    • @theashbot4097
      @theashbot4097  1 year ago +2

      @@kodaxmax That is kind of right. When you use the "mlagents-learn --run-id=MoveToTargetAgent" you need to type "mlagents-learn --run-id=MoveToTargetAgent"
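Following up on this exchange: the trainer config YAML is passed as the first positional argument to mlagents-learn. A sketch of the general shape (the path and run ID here are illustrative, not from the video):

```python
def mlagents_learn_with_config(config_path, run_id, force=False):
    """Build an mlagents-learn invocation with an explicit trainer-config YAML,
    passed as the first positional argument."""
    cmd = ["mlagents-learn", config_path, f"--run-id={run_id}"]
    if force:
        cmd.append("--force")  # overwrite an existing run with the same ID
    return " ".join(cmd)
```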

  • @thomaszovtov2595
    @thomaszovtov2595 10 months ago +1

    Hi, I got a problem while installing mlagents. It says:
    note: This error originates from a subprocess, and is likely not a problem with pip.
    ERROR: Failed building wheel for numpy
    Failed to build numpy
    ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects
    Could you help me fix the error? I'm using Python 3.10 and pip 23.3.1 and it's not working.

    • @theashbot4097
      @theashbot4097  10 months ago +1

      If you look at the pinned comment, the 2nd question in the "commonly had problems" section has the solution.
      2. ERROR when installing the ML-Agents packages.
      A. Make sure you are using Python 3.9.13, and run "pip install numpy==1.21.2" before you try to install the ML-Agents package.

    • @thomaszovtov2595
      @thomaszovtov2595 10 months ago

      Thanks, worked! @@theashbot4097

  • @tux5422
    @tux5422 1 year ago +3

    Instead of bothering to keep everything in local position, you can just create multiple copies of the environment in the same position.
    By the way, your tutorial was great, thank you so much brother, keep sharing❤
    You could make a video on how to connect Unity to the Ethereum blockchain. Would really appreciate that🔥

    • @theashbot4097
      @theashbot4097  1 year ago +1

      Sorry, but I do not know what the Ethereum blockchain is.

    • @Queracus
      @Queracus 1 year ago +1

      Would the simulations run independently? Wouldn't the environments affect each other?

    • @theashbot4097
      @theashbot4097  1 year ago +2

      @@Queracus Good point. Agents from environment 1 could hit the target from environment 2; that would cause some problems.

  • @renatoferreira5898
    @renatoferreira5898 1 year ago +1

    Nice video

  • @Queracus
    @Queracus 1 year ago +1

    Just a comment for clarity.
    Each data point in the feature space represents a specific state of the environment, characterized by the x and y coordinates of the agent and target. The agent's position can be represented as a pair of coordinates (x_agent, y_agent), and the target's position as (x_target, y_target). Therefore, a single data point in the feature space can be represented as the 4-dimensional vector [x_agent, y_agent, x_target, y_target].
    Since there are only two dimensions (x and y) for each of the agent and target positions, each individual position has size 2. BUT in Behavior Parameters we must put 4, the length of the concatenated observation, to inform Unity how to read the sensor.
    Just a more correct way to look at it, so if anyone ever gets into deeper ML coding, it will not be confusing.
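The point in this comment can be made concrete without any Unity types: the two 2D positions are concatenated into one flat observation vector, and Space Size in Behavior Parameters must equal its length. A minimal sketch (plain tuples standing in for Vector2):

```python
def build_observation(agent_pos, target_pos):
    """Concatenate the agent's and target's (x, y) positions into the flat
    vector the sensor reports; its length is the Space Size."""
    return [agent_pos[0], agent_pos[1], target_pos[0], target_pos[1]]

obs = build_observation((-2.0, 1.5), (3.0, -0.5))
# Space Size in Behavior Parameters must match len(obs), i.e. 4
```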

    • @theashbot4097
      @theashbot4097  1 year ago +1

      Sorry but I do not really understand what you are trying to clarify.

    • @Queracus
      @Queracus 1 year ago +1

      @@theashbot4097 Just trying to say that anywhere else, out of Unity - 2 would be the correct answer for Space size instead of 4. If anyone goes on, to learn more in depth of coding agents, loops, etc.. they should not be confused by this Unity quirk :)
      Great video tho! Loved it. Would be cool to implement a section on how to turn on learning on GPU :)

    • @theashbot4097
      @theashbot4097  1 year ago +1

      @@Queracus Now I understand what you mean. Thank you for clarifying.
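
To make the thread above concrete: the four floats that Unity's Space Size counts could be collected like this. A sketch only — the `targetTransform` field and class name are assumptions, not the video's exact code:

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
using UnityEngine;

public class MoveToTargetAgent : Agent
{
    // Assumed field: assign the target in the Inspector.
    [SerializeField] private Transform targetTransform;

    public override void CollectObservations(VectorSensor sensor)
    {
        // 2 floats for the agent + 2 floats for the target = Space Size 4.
        sensor.AddObservation((Vector2)transform.localPosition);
        sensor.AddObservation((Vector2)targetTransform.localPosition);
    }
}
```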

  • @letrepas
    @letrepas 5 months ago +1

    Great video, I learned a lot from it! This is my first time working with this program, and it's fascinating. I'm wondering why I can't move using the WASD keys though. In the Heuristic method override, we're setting the continuous actions for movement based on the raw input from the "Horizontal" and "Vertical" axes. However, it seems like the input from the WASD keys might not be mapped to these axes correctly. I'll need to investigate further or perhaps adjust the input settings to ensure WASD movement functions as expected. Any insights or suggestions would be greatly appreciated!
    couldn't connect to trainer on port 5004 using api version 1.5.0 will perform inference instead.
    Unexpected exception when trying to initialize communication: System.IO.IOException: Error loading native library
    EDIT: I found a mistake: I added the wrong component to the agent, instead of Decision Requester I added Demonstration Recorder

    • @theashbot4097
      @theashbot4097  5 months ago +1

      The "couldn't connect to trainer on port 5004 using api version 1.5.0 will perform inference instead." message means you are not training the agent right now. If you start the training in CMD and then still get this warning, something went wrong. But if you get it when you did not start the training in the CMD, then everything should be fine. Also, good job on finding the WASD problem.
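
For readers hitting the same WASD issue: the Heuristic override being discussed usually looks something like the sketch below. Note it only drives the agent when no trainer or model is in control, and a Decision Requester component must be attached for actions to be requested at all (the class name here is an assumption):

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using UnityEngine;

public class MoveToTargetAgent : Agent
{
    public override void Heuristic(in ActionBuffers actionsOut)
    {
        ActionSegment<float> continuousActions = actionsOut.ContinuousActions;
        // "Horizontal"/"Vertical" map to WASD and the arrow keys by default.
        continuousActions[0] = Input.GetAxisRaw("Horizontal");
        continuousActions[1] = Input.GetAxisRaw("Vertical");
    }
}
```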

  • @DokkSide
    @DokkSide 1 year ago +2

    Very good video! Do you know how I can enable CUDA usage on my GPU? I already installed CUDA from Nvidia but my CUDA usage (on the task manager performance panel) says it's using 0% CUDA when it's training.

    • @theashbot4097
      @theashbot4097  1 year ago +1

      I do not know how to. Sorry.

  • @Speo_
    @Speo_ 6 months ago +2

    Hi, I have a question. If you are training a model using the command prompt and you shut down your PC, is there a way to recover everything that was previously done in the venv? And if not, what is the ideal method for training when it's a big project?

    • @theashbot4097
      @theashbot4097  6 months ago +1

      I do not think I understand the question. If you are asking whether you can make the training go longer before it automatically stops, then the answer is yes. I do it in this video. th-cam.com/video/1raDh6rpg8U/w-d-xo.htmlsi=mXrU38uns5IVemSP

  • @kadenperry5719
    @kadenperry5719 10 months ago +1

    Awesome tutorial! However, I am using these agents for more advanced projects and 500,000 steps is not enough. How do I increase the step count? According to help forums I should add it to the "trainer_config.yaml", but that file is not here — the only config folder is with the test runs that were previously run. Thanks

    • @theashbot4097
      @theashbot4097  10 months ago +1

      You have to make a YAML config file, as I show here th-cam.com/video/1raDh6rpg8U/w-d-xo.htmlsi=7J35le5iRHoJM5cV, then at the bottom change the max step.
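
A minimal config file of the kind referenced in this reply might look like the following. This is a sketch — the behavior name and numbers are placeholders, and the name must match the Behavior Name set in Behavior Parameters:

```yaml
behaviors:
  MoveToTarget:          # must match the Behavior Name in Unity
    trainer_type: ppo
    hyperparameters:
      batch_size: 128
      buffer_size: 2048
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 128
      num_layers: 2
    max_steps: 2000000   # raise this to train longer than the 500k default
```

It would then be passed on the command line, e.g. mlagents-learn config.yaml --run-id=Test1.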

    • @kadenperry5719
      @kadenperry5719 10 months ago +1

      @@theashbot4097 thanks so much!

    • @theashbot4097
      @theashbot4097  10 months ago +1

      @@kadenperry5719 No problem!

  • @binga_
    @binga_ 1 year ago +1

    really nice, thank you

    • @theashbot4097
      @theashbot4097  1 year ago +1

      Thank you very much. What do you think I did well in this video, and what do you think I could have done better?

    • @binga_
      @binga_ 1 year ago +1

      @@theashbot4097 I really like the fact that you explain everything: when there's a command to type, you explain what each parameter does instead of just giving it like some other YouTubers, and that's cool. I think the thing you could improve is that there are several chapters on the same theme (several "rewards", several "train agent", etc.), so if we're looking for help on a particular subject it's a bit messy; it would be better to have one big chapter on each subject. Anyway, thanks again.

    • @theashbot4097
      @theashbot4097  1 year ago +2

      @@binga_ Thank you for that. I will try to put fewer chapters in my videos. I am not going to edit these ones, because it took me about 2 hours the first time.

    • @binga_
      @binga_ 1 year ago +1

      @@theashbot4097 I have a question, maybe you can help me: when I start the training, if I click on a window other than unity (like google chrome for example), the game pauses (I think this is the normal behaviour of unity) and after a few seconds, the training stops, saying "The Unity environment took too long to respond". Isn't there a way of continuing the training while doing something else on my computer?

    • @theashbot4097
      @theashbot4097  1 year ago +1

      Sorry for not seeing this until now. You can go into Window/Project Settings, then under Player expand Resolution and Presentation, and toggle Run In Background on.

  • @devansh_4u
    @devansh_4u 1 year ago +1

    I am at 29:00, everything is working OK, but my agent is not visible in the game mode, I can see it moving when I split display of scene and game mode, with removed background it is still there and all the functions are working fine, but just the agent is not visible
    EDIT: nevermind fixed it by setting the mask interaction in the sprite renderer of the background to visible outside mask!

  • @vishalvidyanand9942
    @vishalvidyanand9942 7 months ago +1

    I got this error when I tried to install torch torchvision and torchaudio in a Mac environment
    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    mlagents 0.30.0 requires torch=1.8.0; platform_system != "Windows" and python_version >= "3.9", but you have torch 2.0.1 which is incompatible.
    Kindly help me with this issue.

    • @theashbot4097
      @theashbot4097  7 months ago +1

      Go here, pytorch.org/, and scroll down a little. Then try running the different commands it gives you.

  • @KurayTunc
    @KurayTunc 4 months ago +1

    Hey, I did everything and it worked in the end, but when I tried this with 3D I couldn't move in the heuristic-only part and couldn't solve the problem. Could you help me?

    • @theashbot4097
      @theashbot4097  3 months ago

      Sorry, but I do not have enough info to help solve your problem.

  • @DoctorDisastrous
    @DoctorDisastrous 5 months ago +1

    How do you fix the issue with Visual Studio not compiling the Unity.MLAgents package?

    • @theashbot4097
      @theashbot4097  5 months ago +1

      Honestly, I do not know. Does it compile correctly in Unity? If not, then you probably forgot to install the ML-Agents package in Unity.

  • @mattiapesce8372
    @mattiapesce8372 10 days ago +1

    I'm a newbie. I'm wondering if I have to do all the setup with the cmd every time I start a new project.

    • @theashbot4097
      @theashbot4097  10 days ago +2

      Yes, you can just copy and paste the venv folder from project to project; just make sure you put it in the right directory.

  • @felixmuller9062
    @felixmuller9062 1 year ago +1

    Thank you for this video!!
    Why do you add the real position of the agent and the target and not the local position to the observations?

    • @theashbot4097
      @theashbot4097  1 year ago +1

      I fix it here 33:38. Thank you!

  • @kangaeloo
    @kangaeloo 6 months ago +1

    Hello, from Japan. I tried, but I can't get collision detection between the agent and walls to work. Why?

    • @theashbot4097
      @theashbot4097  6 months ago +2

      My best guess is the agent visual GameObject is not at position 0 on the x, y, and z. Instead of moving the visual you need to move the agent GameObject, which will also move the visual. Then do the same for the target visual.

  • @milkbredAPEX
    @milkbredAPEX 1 year ago +1

    I thank you so much for this video. I've looked up every single tutorial on ML-Agents and couldn't find the right one until today. Also, is there anywhere I can learn how to fix/debug environments like you? I ask because I spend days trying to get some tutorials to work and always run into system/cmd errors (right now I can't get OpenAI Gym environments to work :(

    • @theashbot4097
      @theashbot4097  1 year ago +2

      I am glad you found this helpful. I do not know what you mean by “fix/debug environments”. Sorry.

    • @milkbredAPEX
      @milkbredAPEX 1 year ago +1

      @@theashbot4097 its ok. i was just wondering how you were able to figure out what cmd commands to put and everything else to set up mlagents?

    • @theashbot4097
      @theashbot4097  1 year ago +1

      I show all of the commands you need to do to set up ML-Agents in the video.

  • @jaswei
    @jaswei 11 months ago +1

    Hi! I was curious if you knew how time scale affects the training time. I read that the time scale is supposed to change how quickly the training takes, with 20x real life speed being the default and 1x being real life speed. However, when testing it I didn't see that happening with training time at normal speed (20x) being 4:43, 1x being 8:52 (Not 20x slower), and 100x being 4:24. My setups is 3D and with multiple pellet objects being placed randomly.

    • @theashbot4097
      @theashbot4097  11 months ago +1

      This video has it.
      th-cam.com/video/1raDh6rpg8U/w-d-xo.html

    • @jaswei
      @jaswei 11 months ago +1

      @theashbot4097 I might just be missing it, but I don't see anywhere where you mention/use timescale to change training speed

    • @theashbot4097
      @theashbot4097  11 months ago +1

      @@jaswei Sorry it took soo long to give this response. You will need to change the "time_horizon" in your .yaml file to be 512 or 1024.

    • @jaswei
      @jaswei 11 months ago +1

      @theashbot4097 I could be wrong, but I think time horizon is different from time scale, right? Time horizon should affect the amount of training an agent does before attempting to learn from its experience, while time scale should just change how quickly the computer runs the data/simulation. Time scale just doesn't seem to work as I expected it to, as seen with the times in my first comment, which is why I was curious if perhaps I was wrong about how it works

    • @theashbot4097
      @theashbot4097  11 months ago +2

      @@jaswei Time scale will make the whole game (or project) run faster. Time horizon controls how much experience is collected before each learning update.

  • @ast3roidplum147
    @ast3roidplum147 3 months ago +1

    When I use the command prompt and type "python" it brings me to the Microsoft Store. :/ Does anyone have a fix?

    • @theashbot4097
      @theashbot4097  3 months ago +1

      Try using “py” instead of “python”. If that does not work then you need to reinstall Python, making sure that the “Add Python to PATH” box is checked.

  • @오우하이
    @오우하이 1 year ago +1

    I typed "pip install mlagents" in venv, but it says " ERROR: Failed building wheel for numpy
    Failed to build numpy
    ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects". How to fix it?

    • @theashbot4097
      @theashbot4097  1 year ago +1

      Make sure you have Python 3.9.13. Also
      make sure you got pip properly installed at 5:42. Install the numpy package before trying to install the mlagents package: pip install numpy. Or try to install it using pip3: pip3 install numpy.

    • @오우하이
      @오우하이 1 year ago +1

      ​@@theashbot4097 Thank you! The error is fixed!

  • @bakartu
    @bakartu 1 year ago +1

    If I use what you show at the beginning of the video to install everything correctly, can I then create any project I want with these packages, or are they only for the project you wanted to create? Thanks, very nice and informative video.

    • @theashbot4097
      @theashbot4097  1 year ago +2

      From my 5 minutes of testing it seems it works.

    • @bakartu
      @bakartu 1 year ago +1

      @@theashbot4097 Thank you!!

  • @vatsalbhuva1471
    @vatsalbhuva1471 18 days ago +1

    Hi,
    I was able to train my Flappy Bird 2 days back, and now when I try to train it by introducing new obstacles, it's failing.
    It's not about the logic, but the Unity Editor doesn't seem to connect to the ML-Agents instance (I've tried re-installing Unity, recreating the venv, but it still fails).
    When I log the DiscreteActions[0] value, it's always outputting it as 0 (even though space size is 2).
    And there's this error: Couldn't connect to trainer on port 5004 using API version 1.5.0. Will perform inference instead.
    I tried installing Python 3.9.13 like you said, but it's not connecting to the mlagents-learn instance I assume (before when I stopped the game, then the terminal used to say that it's generating a graph based on the learning done so far. But now, it just shows that mlagents is listening on port 5004 even after I stop the game. I've also verified that mlagents is actually listening on the port using netstat command).
    Please help out with this, it's been close to 2 days since I've got any progress on my project.

    • @theashbot4097
      @theashbot4097  18 days ago +2

      That sounds very frustrating. The warning saying "Couldn't connect to trainer on port 5004 using API version 1.5.0. Will perform inference instead." pops up when one of 2 things happens:
      1. You did not run mlagents-learn before pressing play on the Unity scene, or you are getting an error in the CMD.
      2. The venv that you are using is not connected to the Unity scene. To fix this you have to go through the venv setup process again, making sure you are in the same folder that the Assets folder is in, not in the Assets folder itself.

    • @vatsalbhuva1471
      @vatsalbhuva1471 18 days ago +1

      @@theashbot4097 Thanks for the quick response! Yeah I've been able to train it just 2 days back so I don't know what did I change such that it's not working now, especially the same value of DiscreteActions. Anyways, will need to figure out something.
      I also tried creating an entirely new project, installed mlagents on that, and tried to just connect mlagents. But even then it did not. Very frustrating indeed!

    • @vatsalbhuva1471
      @vatsalbhuva1471 18 days ago +1

      @@theashbot4097 I managed to train my Flappy Bird to not hit the bottom and upper walls. But now I introduced the pipes and want to train it using these obstacles, but the mlagents process is itself not connecting! F

    • @theashbot4097
      @theashbot4097  17 days ago +1

      @@vatsalbhuva1471 Sorry, I did not see these here until now. Are you still having this problem?

  • @blakes3785
    @blakes3785 6 months ago +1

    What in-text coding suggestion tool do you have? I don't know how to get mine to work, and I've kind of been coding blind for a long time.

    • @theashbot4097
      @theashbot4097  6 months ago +1

      Visual Studio. If it is not working then follow this tutorial.

  • @sarah.p.d
    @sarah.p.d 2 months ago +1

    hi, do you offer like some kind of private tutor? i kinda need help with my bachelor's thesis. I can pay you of course!

    • @theashbot4097
      @theashbot4097  2 months ago +1

      I would love to help you with this, but could you give me more information so I can know if I am the right fit? I don't want you to pay me for something that I can not do.

  • @pjmatuck
    @pjmatuck 1 year ago +1

    Thank you for your video! I noticed that you are facing the same problem as me: ML-Agents always tries to restart when you stop your Unity Editor. Did you manage to fix it? You can see it at minute 31:45.

    • @theashbot4097
      @theashbot4097  1 year ago +2

      Sadly I have not found a way to make ML-Agents stop automatically. The only solution I have found I showed in the video: pressing Ctrl + C.

    • @pjmatuck
      @pjmatuck 1 year ago +1

      @@theashbot4097 Yes, that is the same one I'm using. Post a new video if you find it! Thank you

  • @user-vy7vb9zn2l
    @user-vy7vb9zn2l 7 months ago +1

    Hi! Thanks for the video. When I try to attach my script to the agent with "Add Component", the "Behavior Parameters" component does not show up. If I try to add it manually, I am not able to change the name of the behavior. Any idea as to why Unity is not doing this automatically, as it should? I am a beginner. Any help would be much appreciated.

    • @theashbot4097
      @theashbot4097  7 months ago +2

      The only thing I can think of is you did not change the MonoBehaviour to "Agent"

    • @Kind-Squirrel-Productions
      @Kind-Squirrel-Productions 7 months ago +1

      Please help me, I have changed mine from MonoBehaviour to Agent and it still doesn't show. It has been a nightmare of bugs, errors and incompatibilities! @@theashbot4097

    • @theashbot4097
      @theashbot4097  6 months ago +1

      @@Kind-Squirrel-Productions What version of unity are you using?

  • @vickievans4106
    @vickievans4106 9 months ago +1

    Hello, I'm trying to replicate the steps and got the following error in Unity when executing the project. Do you know what I can do?
    Couldn't connect to trainer on port 5004 using API version 1.5.0. Will perform inference instead.
    UnityEngine.Debug:Log (object)

    • @theashbot4097
      @theashbot4097  9 months ago +2

      It might seem like an error but it is not. It is just saying that you did not start the training, so it will run the heuristic (or a pretrained model) instead.

    • @vickievans4106
      @vickievans4106 9 months ago +1

      @@theashbot4097 Thank you so much!

  • @flipix8711
    @flipix8711 1 year ago +2

    Do you know why, when I type python -m venv venv, it doesn't work?

    • @theashbot4097
      @theashbot4097  1 year ago +2

      Try "py" instead of "python". Also make sure you have Python 3.9.13.

    • @flipix8711
      @flipix8711 1 year ago +2

      Thanks!!! nice video@@theashbot4097

  • @iliyagolyak6953
    @iliyagolyak6953 6 months ago +1

    Hi, is there a way to change the hyperparameters or train another (my own) neural network?

    • @theashbot4097
      @theashbot4097  6 months ago +1

      Sorry I am confused about what you are asking. Could you clarify?

  • @sarasolis5733
    @sarasolis5733 4 months ago +1

    I have this error: "Couldn't connect to trainer on port 5004 using API version 1.5.0. Will perform inference instead." What can I do? Help pls

    • @theashbot4097
      @theashbot4097  4 months ago +3

      That is just a warning. It is just saying that you have not started the training, meaning you are either running a heuristic or you are looking at a pretrained model.

    • @sarasolis5733
      @sarasolis5733 4 months ago +1

      @@theashbot4097 So, if I wait for it to initialise should it work or is there something else I should do? Thank you for your help in advance

    • @theashbot4097
      @theashbot4097  4 months ago +2

      @@sarasolis5733 If you want the agent to train then you have to wait for Python to initialize the training, but if you want to test the environment or look at a pre-trained model then do not worry about the warning.

  • @yaoling_XR
    @yaoling_XR 1 year ago +1

    Awesome, brother!

    • @yaoling_XR
      @yaoling_XR 1 year ago +1

      That's an excellent tutorial, thanks bro~

  • @user-ce3eu6py4m
    @user-ce3eu6py4m 1 year ago +2

    Hello,
    I'm getting "zero" values for "ContinuousActions", why?

    • @theashbot4097
      @theashbot4097  1 year ago +1

      Sorry for the slow response. I think it could be one of three things: 1. you are using heuristic and misspelled "Horizontal" or "Vertical". 2. you did not add a "Decision Requester". 3. you did not start the training properly. I hope this helps, and again sorry for the slow response.
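
For reference, a typical OnActionReceived that consumes two continuous actions looks like the sketch below; if any of the causes listed in the reply above apply, the values read here simply stay at 0. The moveSpeed field and class name are assumptions for illustration:

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using UnityEngine;

public class MoveToTargetAgent : Agent
{
    // Assumed tuning field.
    [SerializeField] private float moveSpeed = 3f;

    public override void OnActionReceived(ActionBuffers actions)
    {
        // These stay at 0 if no Decision Requester is attached,
        // the Heuristic axis names are misspelled, or training never started.
        float moveX = actions.ContinuousActions[0];
        float moveY = actions.ContinuousActions[1];
        transform.localPosition += new Vector3(moveX, moveY) * moveSpeed * Time.deltaTime;
    }
}
```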

    • @user-ce3eu6py4m
      @user-ce3eu6py4m 1 year ago +1

      @@theashbot4097 Thank you.
      My problem was due to use of a different version of "Pytorch".
      Thanks. I wish you all the best with your channel.

  • @mikhailhumphries
    @mikhailhumphries 1 year ago +1

    You started 4 months ago. What turned your focus to game development and ml agents?

    • @theashbot4097
      @theashbot4097  1 year ago +3

      I actually started programming about 1 year ago, and decided to start with Unity. About 4 months ago I decided to start posting videos that I wish I could have watched when I was learning how to do something. The reason I decided to make videos for plain C# at the beginning is because a lot of the stuff that I could not find any video for was stuff that I figured out how to do in plain C#. And recently I wanted to learn how to use ML-Agents, but every video I found on how to do that was at least 2 years old and outdated. I hope that answered your question. Sorry for the long answer.

  • @Coz12978
    @Coz12978 1 year ago +1

    This is great, but when I do the OnActionReceived override it gives me a float[] vectorAction. Fixes?

    • @theashbot4097
      @theashbot4097  1 year ago

      Sorry, I do not understand what you mean. Is it an error? If so, can you send me the whole error?

  • @BenVhekith
    @BenVhekith 1 year ago +1

    I am only 4 minutes into this, so I don't know for sure if this will work, but you can keep other versions of Python if you start your commands with "py -3.9" instead of python. Of course, you will need to be able to use "py" instead of python, so ash bot probably can't use this method.

    • @theashbot4097
      @theashbot4097  1 year ago +2

      WOW! I downloaded the newest version of Python to test the different versions, and then I tried starting Python using "py" and it worked! So then I tried to use "py -3.9", and it worked as well! (I think the reason "py" did not work for me in the video is because I did not have the newest version of Python installed.) I will use this in any video I make for ML-Agents. Thank you!

  • @oshers8817
    @oshers8817 1 year ago +1

    I keep getting the error '['D:\\MLAgents\\venv\\Scripts\\python.exe', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1. Do you know why this is?

    • @theashbot4097
      @theashbot4097  1 year ago +1

      Is this in the unity editor, or in the CMD? What were you trying to do when you got this error?

    • @oshers8817
      @oshers8817 1 year ago +1

      @theashbot4097 I was attempting to create the virtual environment in the command prompt, or the line "python -m venv venv". I've tried switching python with py and it doesn't change anything

    • @theashbot4097
      @theashbot4097  1 year ago +1

      @@oshers8817 Make sure you have Python properly installed by running "python" or "py". If none of those work, then look at this part of the video: 2:50. If it does work, make sure you have version 3.9.13. If you do not, then uninstall the version of Python that it says you have and install the right version. Tell me if these steps help.

    • @oshers8817
      @oshers8817 1 year ago +1

      @@theashbot4097 I actually figured it out. My antivirus just prevented downloading some files, so if anyone has this same problem in the future just tell them to try this. I just subbed because you are simply too good to have only 50 subs. Keep putting out videos and I'm sure you'll grow beyond what you could imagine

    • @theashbot4097
      @theashbot4097  1 year ago +1

      @@oshers8817 I am glad you figured it out! Thank you for subscribing! I would love to put out more videos but I don't really have any good ideas. If you have any please feel free to share.

  • @michpo1445
    @michpo1445 9 months ago +1

    I made a printout for ContinuousActions in OnActionReceived and half of my actions are either -1 or 1; they are not changing values. What is causing this? I set continuous actions to 4.

    • @theashbot4097
      @theashbot4097  9 months ago +1

      Are there any errors/warnings in the console?

    • @michpo1445
      @michpo1445 9 months ago +1

      @@theashbot4097 thanks for the quick reply, I traced the problem to the Observations, I have 11, Vector3, quaternion, and 4 floats. I commented all other code. When I add any of the floats, it creates this problem. I dont know why, they're not attached to the actions at all right now. It must be a bug in mlagents. Vector 3 and quaternion are totally fine

    • @theashbot4097
      @theashbot4097  9 months ago +1

      So it is working?

    • @michpo1445
      @michpo1445 9 months ago +1

      @@theashbot4097 I had aggregated variables for power in my observations, where continuous actions would either add or subtract small quantities of power (to each motor in a quadcopter). I assume this creates some kind of a feedback loop and causes continuous actions to always be 1. Once I disconnected actions from observations completely by removing cumulative power from observations, this problem went away. But now I can only use position and rotation in observations. Is this correct thing to do? ,

    • @theashbot4097
      @theashbot4097  9 months ago +1

      @@michpo1445 I am still having a hard time understanding what you are doing. Do you think you can send a video?

  • @ahmetmetecakr658
    @ahmetmetecakr658 4 months ago +1

    It's such an amazing video you made there. Thanks for it! But I am facing a minor problem: I don't see any ONNX file after training. I don't think I missed a step. What could be the problem?

    • @ahmetmetecakr658
      @ahmetmetecakr658 4 months ago +1

      After writing mlagents-learn and the run id, I start training in Unity by pressing the start button. The agent does its training, but after I go back to cmd I don't see any difference since I started it. I pressed Ctrl+C to see if it stops, and yes, it stops and says the training is interrupted. After I stop training with the stop button I see that there is the Test folder I named, but it doesn't have an ONNX file in it.

    • @theashbot4097
      @theashbot4097  4 months ago +1

      @@ahmetmetecakr658 Make sure you have reinstalled onnx. I show how at 31:50

    • @ahmetmetecakr658
      @ahmetmetecakr658 3 months ago +1

      ​@@theashbot4097 I found the problem. It was behaviour type. I changed it to Default from Heuristic. Thx for replying tho. Love your work

    • @theashbot4097
      @theashbot4097  3 months ago

      @@ahmetmetecakr658 I am glad you got it working! Thank you!

  • @Wrrenked
    @Wrrenked 1 year ago +2

    'Couldn't connect to trainer on port 5004 using API version 1.5.0. Will perform inference instead.'
    When I start the training, I get this error and the training does not start. Additionally, when I enter the command 'mlagents-Learn --run-id=Test1' into cmd, some text that I do not understand appears and nothing happens regarding the start of the training. How can I solve this? Do you have an idea?

    • @theashbot4097
      @theashbot4097  1 year ago +1

      I do not know if capitals matter. If they do, then your L needs to be a lowercase l. Can you please send me the error?

    • @Wrrenked
      @Wrrenked 1 year ago +2

      I did what you said and something came up that I didn't understand. In cmd it says something like this
      'Traceback (most recent call last):
      File "C:\Users\kemal\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main
      return _run_code(code, main_globals, None,
      File "C:\Users\kemal\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code
      exec(code, run_globals)
      File "C:\Users\kemal\Desktop\unity\2D AI Project\Library\PackageCache\com.unity.ml-agents@2.0.1\venv\Scripts\mlagents-learn.exe\__main__.py", line 4, in
      File "C:\Users\kemal\Desktop\unity\2D AI Project\Library\PackageCache\com.unity.ml-agents@2.0.1\venv\lib\site-packages\mlagents\trainers\learn.py", line 13, in
      from mlagents.trainers.trainer_controller import TrainerController
      File "C:\Users\kemal\Desktop\unity\2D AI Project\Library\PackageCache\com.unity.ml-agents@2.0.1\venv\lib\site-packages\mlagents\trainers\trainer_controller.py", line 13, in
      from mlagents.trainers.env_manager import EnvManager, EnvironmentStep
      File "C:\Users\kemal\Desktop\unity\2D AI Project\Library\PackageCache\com.unity.ml-agents@2.0.1\venv\lib\site-packages\mlagents\trainers\env_manager.py", line 13, in
      from mlagents.trainers.agent_processor import AgentManager, AgentManagerQueue
      File "C:\Users\kemal\Desktop\unity\2D AI Project\Library\PackageCache\com.unity.ml-agents@2.0.1\venv\lib\site-packages\mlagents\trainers\agent_processor.py", line 20, in
      from mlagents.trainers.trajectory import AgentStatus, Trajectory, AgentExperience
      File "C:\Users\kemal\Desktop\unity\2D AI Project\Library\PackageCache\com.unity.ml-agents@2.0.1\venv\lib\site-packages\mlagents\trainers\trajectory.py", line 4, in
      from mlagents.trainers.buffer import (
      File "C:\Users\kemal\Desktop\unity\2D AI Project\Library\PackageCache\com.unity.ml-agents@2.0.1\venv\lib\site-packages\mlagents\trainers\buffer.py", line 97, in
      class AgentBufferField(list):
      File "C:\Users\kemal\Desktop\unity\2D AI Project\Library\PackageCache\com.unity.ml-agents@2.0.1\venv\lib\site-packages\mlagents\trainers\buffer.py", line 210, in AgentBufferField
      self, pad_value: np.float = 0, dtype: np.dtype = np.float32
      File "C:\Users\kemal\Desktop\unity\2D AI Project\Library\PackageCache\com.unity.ml-agents@2.0.1\venv\lib\site-packages\numpy\__init__.py", line 305, in __getattr__
      raise AttributeError(__former_attrs__[attr])
      AttributeError: module 'numpy' has no attribute 'float'.
      `np.float` was a deprecated alias for the builtin `float`. To avoid this error in existing code, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
      The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
      numpy.org/devdocs/release/1.20.0-notes.html#deprecations'@@theashbot4097

    • @theashbot4097
      @theashbot4097  1 year ago +2

      @@Wrrenked Try "pip install numpy" or "pip3 install numpy"

    • @Wrrenked
      @Wrrenked 1 year ago +2

      the same error continues. Also, I get the following error in Unity: 'Couldn't connect to trainer on port 5004 using API version 1.5.0. Will perform inference instead.
      UnityEngine.Debug:Log (object)
      Unity.MLAgents.Academy:InitializeEnvironment () (at ./Library/PackageCache/com.unity.ml-agents@2.0.1/Runtime/Academy.cs:459)
      Unity.MLAgents.Academy:LazyInitialize () (at ./Library/PackageCache/com.unity.ml-agents@2.0.1/Runtime/Academy.cs:279)
      Unity.MLAgents.Academy:.ctor () (at ./Library/PackageCache/com.unity.ml-agents@2.0.1/Runtime/Academy.cs:248)
      Unity.MLAgents.Academy/c:b__83_0 () (at ./Library/PackageCache/com.unity.ml-agents@2.0.1/Runtime/Academy.cs:117)
      System.Lazy`1:get_Value ()
      Unity.MLAgents.Academy:get_Instance () (at ./Library/PackageCache/com.unity.ml-agents@2.0.1/Runtime/Academy.cs:132)
      Unity.MLAgents.Agent:LazyInitialize () (at ./Library/PackageCache/com.unity.ml-agents@2.0.1/Runtime/Agent.cs:451)
      Unity.MLAgents.Agent:OnEnable () (at ./Library/PackageCache/com.unity.ml-agents@2.0.1/Runtime/Agent.cs:365)',
Maybe it has something to do with this, I have no idea. @@theashbot4097

    • @theashbot4097
@theashbot4097  1 year ago +2

​@@Wrrenked That error is due to the training not starting in the CMD, and the training is not starting in the CMD because of an error. I do not like to have people do this, but could you delete your venv folder then set it up again? 4:20.

  • @abelsturm542
@abelsturm542 11 months ago +1

Hey, I have a question: I wanted to increase max_steps in the configuration to give it more time to train, but I can't find the config file. Each test, when finished, has this file, but I can't find the default one. Can someone help me? 😅

    • @theashbot4097
@theashbot4097  11 months ago +2

      This video shows how to set up the config file. th-cam.com/video/1raDh6rpg8U/w-d-xo.htmlsi=IiGZLxSFqj-th3jy
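For anyone else landing on this thread: there is no default file to find — the trainer config is a plain YAML file you write yourself and pass on the command line, e.g. `mlagents-learn config/moveToGoal.yaml --run-id=TestRun`. A minimal hedged sketch — the behavior name `MoveToGoal` and the file path are assumptions (the name must match the Behavior Name in your agent's Behavior Parameters), and the values shown are the documented PPO defaults with `max_steps` raised to train longer:

```yaml
behaviors:
  MoveToGoal:            # assumption: must match your agent's Behavior Name
    trainer_type: ppo
    hyperparameters:
      batch_size: 128
      buffer_size: 2048
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 128
      num_layers: 2
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    max_steps: 1000000   # raise this to give the run more training steps
```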

  • @letrepas
@letrepas 5 months ago +1

Hi, I run this in cmd: mlagents-learn --run-id=TestRun, but nothing happens in Unity. Behavior Type - Default. Version information:
    ml-agents: 0.30.0,
    ml-agents-envs: 0.30.0,
    Communicator API: 1.5.0,
    PyTorch: 2.0.1+cpu
Error: The environment does not need user interaction to launch
    The Agents' Behaviour Parameters > Behavior Type is set to "Default"
    The environment and the Python interface have compatible versions.
    If you're running on a headless server without graphics support, turn off display by either passing --no-graphics option or build your Unity executable as server build.
    help please

    • @theashbot4097
@theashbot4097  5 months ago +1

Did you set up the venv in the right folder? It seems like it does not recognize Unity.

    • @letrepas
@letrepas 5 months ago +1

      ​@@theashbot4097 I've configured the venv folder in the correct directory, and it's creating a folder named "results" alongside it, but the "config" folder is missing.

    • @theashbot4097
@theashbot4097  5 months ago +1

@@letrepas The config is optional, so you have to make it yourself.

    • @letrepas
@letrepas 5 months ago +1

@@theashbot4097 I've already done everything I can,
and reinstalled the venv folder. I'm very sad ((
Where should the mlagents-learn file be located?

    • @theashbot4097
@theashbot4097  5 months ago +1

@@letrepas It should be in the root directory. It should NOT be in the Assets folder, but you should be able to see the Assets folder when you're in File Explorer. Please tell me if I need to reword it.
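In other words, with the venv activated you run `mlagents-learn` from the project root. A sketch of the expected layout (the project name is illustrative; `config` and `venv` are the folders created by hand in the video, `results` is created by the trainer):

```text
My Project/             <- project root; run mlagents-learn from here
├── Assets/             <- Unity assets (do NOT put the venv in here)
├── Library/
├── venv/               <- the Python virtual environment
├── config/             <- optional trainer config .yaml files
└── results/            <- created automatically by mlagents-learn
```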

  • @LOGANX07
@LOGANX07 8 months ago +2

    TypeError: CCompiler_spawn() got an unexpected keyword argument 'env'
    [end of output]
    note: This error originates from a subprocess, and is likely not a problem with pip.
    ERROR: Failed building wheel for numpy
    Failed to build numpy
ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects. How do I fix this? I tried everything.

    • @LOGANX07
@LOGANX07 8 months ago +1

I updated pip and numpy to the latest versions, btw.

    • @theashbot4097
@theashbot4097  8 months ago +2

      @@LOGANX07 Make sure you are using python 3.9.13, and run this "pip install numpy==1.21.2" before you try to install the ML-Agents package.

    • @LOGANX07
@LOGANX07 8 months ago +1

@@theashbot4097 I'll write here if this works

    • @LOGANX07
@LOGANX07 8 months ago +1

Python 3.9.13 is not working anymore, you need at least 3.10.12 @@theashbot4097

    • @theashbot4097
@theashbot4097  8 months ago +1

      For what?

  • @Mateus_py
@Mateus_py 1 year ago +1

    pip install mlagents
    pip3 install torch torchvision torchaudio
    pip install protobuf==3.20.3
Thank you! For me the hardest part of any project is always setting up the environment. The way everything conflicts is so annoying; thanks for explaining how things need to be done.

    • @theashbot4097
@theashbot4097  1 year ago +2

Yeah, there is one more thing I cover later in the tutorial. I do not remember why I put it at 31:50, but there it is.

  • @VEETEEGameStudio
@VEETEEGameStudio 10 months ago +1

    I get this error when i try to run the mlagents C:\Users\User\Desktop\VTgames\Projects\Quad Ball\venv\lib\site-packages\torch\__init__.py:614: UserWarning: torch.set_default_tensor_type() is deprecated as of PyTorch 2.1, please use torch.set_default_dtype() and torch.set_default_device() as alternatives. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\tensor\python_tensor.cpp:453.)
    _C._set_default_tensor_type(t)

    • @theashbot4097
@theashbot4097  10 months ago +1

Try reinstalling PyTorch (6:29).

    • @VEETEEGameStudio
@VEETEEGameStudio 10 months ago +1

And even when I want to check if ML-Agents was installed correctly using mlagents-learn --help, I get this error: C:\Users\User\Desktop\VTgames\Projects\Quad Ball\venv\lib\site-packages\torch\__init__.py:614: UserWarning: torch.set_default_tensor_type() is deprecated as of PyTorch 2.1, please use torch.set_default_dtype() and torch.set_default_device() as alternatives.

    • @theashbot4097
@theashbot4097  10 months ago +1

​@@VEETEEGameStudio Try deleting the venv folder, then remake it (3:38). Also make sure you have the right version of Python.

    • @VEETEEGameStudio
@VEETEEGameStudio 10 months ago

      hey man thanks
      @@theashbot4097

    • @VEETEEGameStudio
@VEETEEGameStudio 10 months ago

Thanks again man, you are the best
      @@theashbot4097

  • @cesaredimasi899
@cesaredimasi899 1 year ago +1

I'm having an error while installing mlagents in cmd that says: "Could not build wheels for numpy, which is required to install pyproject.toml-based projects". Can you tell me what to do? Thanks

    • @theashbot4097
@theashbot4097  1 year ago +1

Make sure you are using Python 3.9.13.
Then run this before the command that gives you this error:
pip install numpy==1.21.2

  • @firekiller2141
@firekiller2141 11 months ago +1

Hi, thank you very much for this guide! I would like to ask, how do you "tell" the AI what to do? I understand that you can give a reward for touching the goal, but what if the chance of touching the goal randomly is almost zero? For example, I wanted my AI to jump over a hole (I made a 2D simulation), but the AI just can't figure out what to do and walks around randomly.

    • @theashbot4097
@theashbot4097  11 months ago +1

To do that you will need a different observation method. I think this video might help you with that: th-cam.com/video/fz8D0OZkQGQ/w-d-xo.html

    • @firekiller2141
@firekiller2141 11 months ago +1

@@theashbot4097 Thank you! So, in short, you just have to train the ML agent step by step, starting with simple tasks and ending with complex tasks?

    • @theashbot4097
@theashbot4097  11 months ago +1

​@@firekiller2141 Yes, you will have to do that. But I was trying to say you would use something like sensor observations.
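For the jump-over-a-hole case specifically, the usual complement to better observations is reward shaping: giving small, dense rewards so the agent gets feedback long before it ever randomly reaches the goal. A hedged C# sketch of the idea using the ML-Agents `Agent` API (`AddReward`, `EndEpisode`, `OnActionReceived` are real API; the field names, tags, and reward values are illustrative, not from the video):

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using UnityEngine;

// Illustrative reward shaping for a 2D "jump over the hole" agent.
public class JumpAgent : Agent
{
    [SerializeField] private Transform goal; // assumed scene reference

    public override void OnActionReceived(ActionBuffers actions)
    {
        // Tiny per-step penalty discourages standing still.
        AddReward(-0.001f);

        // Dense shaping: the closer to the goal, the smaller the penalty,
        // so moving toward the goal is rewarded every step.
        float distance = Vector2.Distance(transform.position, goal.position);
        AddReward(-0.0005f * distance);
    }

    private void OnTriggerEnter2D(Collider2D other)
    {
        // Sparse terminal rewards on top of the dense shaping.
        if (other.CompareTag("Goal")) { AddReward(1f); EndEpisode(); }
        else if (other.CompareTag("Hole")) { AddReward(-1f); EndEpisode(); }
    }
}
```

The design idea is that shaping rewards should be small relative to the terminal reward, so they guide exploration without becoming the thing the agent optimizes for.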

  • @Kiyodio
@Kiyodio 10 months ago +1

    Can you help solve an error or warning saying this?
    Warning: Failed to initialize NumPy: module compiled against API version 0x10 but this version of numpy is 0xe (function operator ())

    • @theashbot4097
@theashbot4097  10 months ago +1

      Make sure you are using python 3.9.13, and run this "pip install numpy==1.21.2" before you try to install the ML-Agents package.

    • @Kiyodio
@Kiyodio 10 months ago +1

@@theashbot4097 I managed to fix the problem myself:
I installed Python 3.10.12,
went into the mlagents-envs setup.py config,
changed numpy==1.21.2 to numpy==1.23,
and everything worked fine.

    • @theashbot4097
@theashbot4097  9 months ago +1

@@Kiyodio I am glad you got it working!

  • @patrickdean2669
@patrickdean2669 1 year ago +1

So I didn't get the onnx file mentioned at 35:12... I've got onnx installed per the "pip install onnx" stage... Any ideas?
    Thanks again for great content!

    • @theashbot4097
@theashbot4097  1 year ago +1

Did you get this error at 31:40?

    • @patrickdean2669
@patrickdean2669 1 year ago +1

I think I hadn't let it train long enough? Or it was just still creating the files... it works if I leave it "for a bit"... IDK what I was doing wrong! Anyways, thanks again for the stellar material, and keep up the good work!
      @@theashbot4097

    • @theashbot4097
@theashbot4097  1 year ago +2

Do you mind me asking what you think I did well, and what I could do better in future tutorials? Also, if you have any tutorial ideas, please share!