How to use Machine Learning AI Vision with Unity ML-Agents!

  • Published on Nov 12, 2024

Comments • 101

  • @CodeMonkeyUnity  3 years ago +2

    🌐 Have you found the videos Helpful and Valuable?
    ❤️ Get my Courses unitycodemonkey.com/courses or Support on Patreon www.patreon.com/unitycodemonkey
    📦 Unity Machine Learning Playlist: th-cam.com/play/PLzDRvYVwl53vehwiN_odYJkPBzcqFw110.html

    • @PolymathAtif  3 years ago

      You are the new Brackeys my man

    • @PolymathAtif  3 years ago

      After he left YouTube I was so unmotivated, but you and Fat Dino kept me going

    • @sucukluomlet4665  3 years ago

      When I run any ML-Agents project, I see this error:
      Missing Profiler.EndSample (BeginSample and EndSample count must match): ApplyTensors
      Previous 5 samples:
      GC.Alloc
      ApplyTensors
      GC.Alloc
      Barracuda.PeekOutput
      FetchBarracudaOutputs
      In the scope:
      Can someone help?

    • @_VeonAlmeida  1 year ago

      Hi, can I train it for a push-up counter? I'm thinking of training it to detect a proper push-up. Can Unity ML-Agents help with this?

  • @CodeMonkeyUnity  3 years ago +6

    💬 The Machines have EYES!
    Adding vision to your AI is surprisingly easy, although it comes at the cost of much longer training times (a minimal setup sketch follows this thread).
    I've already covered quite a few of the base mechanics of ML-Agents; what example projects would you like to see?

    • @meowththatsright7881  3 years ago

      Tactical shooter AI with machine learning, like Sebastian did? But he didn't show us a tutorial.

    • @stalinthomas9850  3 years ago

      An RTS game that the ML-Agent must learn to play?

    • @v1rusAnon  2 years ago

      Can you put the camera on the 2D character itself, in a 2D game?

    • @Punch_Card  several months ago

      AI learns to walk
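
To make the pinned comment above concrete, here is a minimal sketch of wiring a camera up as a visual observation source using ML-Agents' built-in CameraSensorComponent. The VisionSetup class, the camera reference, and the 20x20 resolution are illustrative assumptions; in practice the component is usually added and configured in the Inspector rather than in code.

    using Unity.MLAgents.Sensors;
    using UnityEngine;

    // Minimal sketch: give an agent "eyes" by adding a CameraSensorComponent.
    // Normally done in the Inspector; the code form just shows the relevant fields.
    public class VisionSetup : MonoBehaviour
    {
        [SerializeField] private Camera visionCamera; // a camera looking at whatever the agent should see

        private void Awake()
        {
            var sensor = gameObject.AddComponent<CameraSensorComponent>();
            sensor.Camera = visionCamera;
            sensor.Width = 20;        // small resolutions keep training times manageable
            sensor.Height = 20;
            sensor.Grayscale = true;  // one value per pixel instead of three
        }
    }

Larger resolutions or RGB give the agent more to see, but every extra pixel is another observation the network has to learn from, which is where the longer training times come from.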

  • @enescaglar7326  3 years ago +4

    When you do something, you do it best. These are the best AI tutorial series on the Internet :)

  • @guridoraccoon6375  3 years ago +3

    Always a delight to see u posting

  • @tudormuntean3299  3 years ago

    Been trying to find something like this for ages. Thanks!

  • @SangeetaYadav-pv8og  3 years ago +1

    I was looking for it thanks dude.......

  • @allaboutpaw9396  3 years ago

    Thank you for your contribution. I really appreciate your videos.

  • @Diego0wnz  3 years ago

    Damn, this combined with the YOLO object detection algorithm could be really fucking cool for real-life applications.

  • @RavenMinis  3 years ago

    Dude, I love your content so much

  • @vo_sk  2 years ago +3

    Thank you so much for this AI playlist, it's very inspiring! Are you going to make some new videos on this topic in 2022? It would be so amazing :) Thanks again!

    • @CodeMonkeyUnity  2 years ago +1

      Yup I'd love to revisit ML sometime in the future to see what's changed, just need to find the time

  • @leonardo6631  3 years ago

    Thank you so much for the great video!

  • @mypaxa003  3 years ago

    This is crazy. Why have I never thought about simplifying the image before feeding it to the AI? Really cool to see a combination of two AIs.

    • @mypaxa003  3 years ago +1

      Please make a video about drawing gizmos. I'm really interested in how to draw transform gizmos without an object (like for changing vertex positions in ProBuilder).

  • @valor36az  1 year ago

    Great tutorial

  • @PolymathAtif  3 years ago +1

    You are a life saver ☺️💜👍

    • @PolymathAtif  3 years ago

      He ❤️d my comment
      I am so happy

  • @roshanthapa1297  3 years ago

    Debug.Log("Awesome");
    😂
    Now we are definitely going for AI, not only in robots and vehicles but also in game dev. It's really something that has huge potential for the future; every improvement brings out absolute 🔥🔥 results.

  • @xetra1155  1 year ago

    AWESOME VIDEO GOD DAMN IT

  • @bsdrago  3 years ago +2

    I am always commenting here about how much I like these videos and I want a COURSE =) But I have a doubt: it seems correct to think that, if I have appropriate "sensors/data", I can train an AI to be unbeatable in a game. But we are talking about games, and games are for people. It is frustrating to play a game where you always lose.
    How do I create artificial stupidity? =) People have to beat the computer, otherwise the game is bad =)

    • @CodeMonkeyUnity  3 years ago +1

      When you're training the AI it will periodically store checkpoint brains. So you could train your AI to be superhuman, then go back and use the brain from a few checkpoints before that point.
      Making the AI dumber is definitely an interesting topic, as you said it's no fun fighting against something and always losing.

    • @bsdrago  3 years ago

      @@CodeMonkeyUnity I understand about checkpoints, but my point is how to create something that is not "offensive" to the player. An AI that hits 100% of the time is just as frustrating as an AI from an earlier checkpoint that gets within half a meter of the player and "shoots up" (stupidly missing the shot).
      It seems to me that good game balance means finding a point where the AI does not hit 100% of the time, but also never makes a stupid mistake at a moment when no one would.
      I think this discussion is fantastic, and I am concerned with how to construct these sensors to get good gameplay. Maybe missing at half a meter is something you wouldn't even expect from the computer, but it's fine if it misses a sniper shot from 200m away =)

  • @SyedMasoodDivanOliM  10 months ago

    For human body tracking (and placing a skeleton over the body) in AR for Android, how can I create this application? Can you share your thoughts on this?

  • @v1rusAnon  2 years ago +1

    How do I attach ML-Agents to the 2D character itself, and what other ways are there to do it?

  • @v1rusAnon  2 years ago +1

    How do I attach the camera to the agent itself, but in a 2D game?

  • @Build_the_Future  2 years ago

    How do I use the --initialize-from option to start training from an older step count?

  • @sinner1263  3 years ago

    I watched 3:34, 0:30, 4:32, 0:15 of ads without watching the video. How? I'm playing the video while I'm eating. LOL

    • @r1pfake521  3 years ago

      What do you mean?

  • @saifkhaled1914  1 year ago

    Core i7-4810MQ, 2.8 GHz (3.8 GHz with Turbo Boost): is it enough for training AI?

  • @alizargarian1156  3 months ago

    Hey, firstly many thanks for your helpful tutorial. Unfortunately, when I press Play to train I get this error:
    mlagents_envs.exception.UnityObservationException: Decompressed observation did not have the expected shape - decompressed had (84, 84, 3) but expected [3, 84, 84]
    I saw in some forum that I should update my ML-Agents package, but it is up to date (3.0.0). Did you have this problem, or maybe a solution for me?

  • @nickgennady  3 years ago

    For grayscale, can you have different gray values for different objects?

  • @Hellomyfriendlyfriends  3 years ago

    When will you have a new course on Udemy? Loved the tower defense one, thanks!

    • @CodeMonkeyUnity  3 years ago

      Not sure, right now I want to focus on the videos so maybe in 2-3 months. I'm glad you liked the course!

  • @jewelthomas7086  2 years ago

    How do you make an AI in a car game that can reverse the car when it is stuck against an obstacle in front and get back on track by reversing? Any ideas?

  • @RockoShaw  2 years ago

    First of all, thanks for your awesome tutorials. So a vision sensor acts as an action sender and it works for learning. So if I wanted to allocate inventory to orders, would inventory be an action in some form of discrete numbers?

  • @Annin_Mochineko  1 year ago

    Is it possible to simplify the input image into multiple colors? I want to use it for robot navigation, simplifying the camera images into target, obstacle, and background, so that when using the model in real life I can use image segmentation to feed in simplified images without having to consider the complex environment when creating the training environment. (Sorry for my crappy English since it's not my first language.)

  • @Shakor77  11 months ago

    Is it possible to use ML AI to have it train over a gaming session rather than train it "offline" and then load the trained model?
    The reason is that I want to see if you can use ML AI to learn to play against players rather than against the environment.

    • @CodeMonkeyUnity  11 months ago

      As far as I know there is no way to train a model outside of the Unity Editor, so you can't train it in a build
      Maybe you could look into other methods for training that are non-Unity and then perhaps dynamically load the new trained model and use that?
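
Picking up the suggestion in the reply above: the brain an agent runs at inference time can be swapped at runtime with Agent.SetModel, so a build could load a newer .onnx model (trained elsewhere) and apply it. A rough sketch, assuming the Barracuda-based ML-Agents versions used in this video; the behaviour name, field names, and how you obtain the NNModel asset are placeholders.

    using Unity.Barracuda;
    using Unity.MLAgents;
    using UnityEngine;

    // Rough sketch of dynamically applying a (re)trained brain at runtime.
    public class ModelSwapper : MonoBehaviour
    {
        [SerializeField] private Agent agent;          // the agent whose brain should be replaced
        [SerializeField] private NNModel updatedModel; // e.g. a newer checkpoint imported as an asset

        public void ApplyUpdatedModel()
        {
            // "DuckShooter" is a placeholder; the name must match the agent's Behavior Parameters.
            agent.SetModel("DuckShooter", updatedModel);
        }
    }

As noted above, the actual training still cannot happen inside a built player; this only covers the "load the newly trained model" half.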

  • @DavidLeahy100  1 year ago

    I don't see any VisionCamera culling layer in my Unity 2019.4.13f?

  • @RockoShaw  2 years ago

    I also have one question: in the Pig/Sheep example you added the CameraSensor. What for? In OnActionReceived you check whether it's a Sheep or a Pig based on the last transform check.

    • @CodeMonkeyUnity  2 years ago

      It's just to showcase how the AI vision works; if you had this exact specific scenario, organizing sheep and pigs, then using ML would be overkill.

    • @RockoShaw  2 years ago

      @@CodeMonkeyUnity Thanks a lot for your response. Asking for learning purposes, but thanks for the clarification.
      I have a question regarding the Camera Sensor.
      You set the width and height to 20 and 20 respectively, and it matches the discrete branch size.
      When you get an action after a decision request, what does the discrete branch contain? The position of a non-black pixel, or what exactly will it return? What if you wanted the colors it sees in RGB?
      Also, does the 20x20 size mean that everything seen by the camera attached to this CameraSensor will be scaled/resized to 20x20 pixels?
      You also mention that a 20x20 grayscale CameraSensor would be 400 observations, but OnActionReceived is called only 17 times when I call RequestDecision. I might be confusing observations with actions, but if observations are like the VectorObservations you showed in other videos, I would expect the Vector Observation Space Size to be larger than 0 in the duck shooting example; or is the Camera Sensor's observation handled inside the ML-Agents framework?
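
On the observations-versus-actions confusion above, a minimal sketch may help. It assumes the setup described in this thread: a 20x20 grayscale CameraSensorComponent on the agent and one discrete branch of size 400 in Behavior Parameters; the class name and the grid interpretation are illustrative. The camera sensor hands its 400 pixel values to the model by itself (which is why Vector Observation Space Size can stay at 0), while OnActionReceived only ever receives the action the policy picked, never the pixels.

    using Unity.MLAgents;
    using Unity.MLAgents.Actuators;
    using UnityEngine;

    // Sketch of the observation/action split: the CameraSensorComponent on this
    // GameObject supplies the observations, so no CollectObservations override is needed.
    public class ShooterAgent : Agent
    {
        public override void OnActionReceived(ActionBuffers actions)
        {
            // The discrete branch carries the index the policy chose (0..399 here),
            // not pixel data. Interpreting it as a 20x20 grid cell is up to your code.
            int choice = actions.DiscreteActions[0];
            int x = choice % 20;
            int y = choice / 20;
            Debug.Log($"Policy chose cell ({x}, {y})");
        }
    }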

  • @octanios540  3 years ago +1

    Can you do a tutorial on how to make different characters have different abilities, like Apex Legends?

    • @rickybloss8537  3 years ago

      You know you're basically asking him to make your game for you.

    • @r1pfake521  3 years ago +1

      Implement your abilities as ScriptableObjects, then just make different character prefabs and drag & drop the abilities you want onto the character prefabs (a rough sketch follows this thread). There are multiple tutorials about exactly this topic, even an official one from Unity themselves about character selection and ability systems.

    • @octanios540  3 years ago

      @@r1pfake521 Thank you
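
A rough sketch of the ScriptableObject approach suggested above; the Ability fields, the Use method, and the Character class are illustrative, not any particular tutorial's API.

    using UnityEngine;

    // Each ability is an asset; characters just hold the abilities they can use.
    [CreateAssetMenu(menuName = "Abilities/Ability")]
    public class Ability : ScriptableObject
    {
        public string abilityName;
        public float cooldown;

        public virtual void Use(GameObject user)
        {
            Debug.Log($"{user.name} used {abilityName}");
        }
    }

    public class Character : MonoBehaviour
    {
        public Ability[] abilities; // drag & drop different ability assets onto each character prefab

        public void UseAbility(int index)
        {
            abilities[index].Use(gameObject);
        }
    }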

  • @skinnyboystudios9722  3 years ago

    Can you do a video on classic AI vs reinforcement learning?

    • @CodeMonkeyUnity  3 years ago +1

      It's tricky to do a general video because it's all highly dependent on what you want to do. Some things are easier to do with ML, some easier with Classic AI.

    • @skinnyboystudios9722  3 years ago

      @@CodeMonkeyUnity Yes, the video could compare different classic vs ML methods and when to use one over the other. Just explanations, not necessarily a tutorial.

  • @pixboi  1 year ago

    Do you know if there is a way of writing the texture yourself, without a camera? It would be useful for situations where you have, for example, a class representation of tiles and you want to input the agent's surroundings without a camera.

    • @CodeMonkeyUnity  1 year ago

      You can manually write to a texture with SetColor();
      I used something like that here unitycodemonkey.com/video.php?v=Xss4__kgYiY
      unitycodemonkey.com/video.php?v=ZRRc7J-OwGo

    • @pixboi  1 year ago

      @@CodeMonkeyUnity Sorry, I meant in the ML-Agents context. I found out you can make your own sensor and write to it with ObservationWriter.
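
Besides a custom ISensor with ObservationWriter, a simpler route for the tile case above is to hand the surrounding tiles to the agent as plain vector observations. A minimal sketch, assuming a hypothetical GetTileValue lookup and a 5x5 neighbourhood (so Vector Observation Space Size would be set to 25 in Behavior Parameters).

    using Unity.MLAgents;
    using Unity.MLAgents.Sensors;

    // Feed the tiles around the agent as vector observations, no camera or texture needed.
    public class TileAgent : Agent
    {
        private const int ViewRadius = 2; // 5x5 neighbourhood -> 25 observations

        public override void CollectObservations(VectorSensor sensor)
        {
            for (int dy = -ViewRadius; dy <= ViewRadius; dy++)
            {
                for (int dx = -ViewRadius; dx <= ViewRadius; dx++)
                {
                    // Encode each tile class as a number (e.g. 0 = empty, 1 = wall, 2 = goal).
                    sensor.AddObservation(GetTileValue(dx, dy));
                }
            }
        }

        private float GetTileValue(int dx, int dy)
        {
            // Placeholder: look the tile up in your own grid data here.
            return 0f;
        }
    }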

  • @mohammedabdelsalam1010  3 years ago +2

    Can you make a video about AR technology?

    • @CodeMonkeyUnity  3 years ago +2

      AR is an interesting topic that I'd love to research at some point, just don't know when

    • @mohammedabdelsalam1010  3 years ago +1

      @@CodeMonkeyUnity God willing, soon 😉

  • @dnscdnsc6121  3 years ago

    Hi sir, new sub here. I was just wondering how to reference a GameObject in another scene?

  • @ahmedelborki5982  3 years ago

    I like your videos, especially the ML-Agents ones.
    I have a question:
    I am currently working on an endless runner with ML-Agents, but the agent keeps losing. Do you have any advice on how to make it work better?
    Thanks,
    my regards

    • @CodeMonkeyUnity  3 years ago +1

      It is all a matter of designing your rewards correctly and letting it train a lot.
      In your case, give it some observations based on where the platforms are and give it a reward based on distance.
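
A minimal sketch of the advice above, i.e. observations based on where the platforms are plus a distance-based reward. The nextPlatform reference and the distance bookkeeping are illustrative; only the Agent methods (OnEpisodeBegin, CollectObservations, AddReward, EndEpisode) are the actual ML-Agents API.

    using Unity.MLAgents;
    using Unity.MLAgents.Sensors;
    using UnityEngine;

    // Sketch: tell the agent where the next platform is, reward it for forward progress.
    public class RunnerAgent : Agent
    {
        [SerializeField] private Transform nextPlatform; // placeholder for your own platform tracking
        private float lastX;

        public override void OnEpisodeBegin()
        {
            lastX = transform.position.x;
        }

        public override void CollectObservations(VectorSensor sensor)
        {
            // Relative position of the upcoming platform (3 floats).
            sensor.AddObservation(nextPlatform.position - transform.position);
        }

        private void FixedUpdate()
        {
            // Small reward proportional to the distance covered since the last physics step.
            AddReward(transform.position.x - lastX);
            lastX = transform.position.x;
        }

        public void OnFellOff()
        {
            AddReward(-1f); // penalty for losing, then restart the episode
            EndEpisode();
        }
    }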

  • @unoriginalcringygaming3002  3 years ago

    The machine has found you...

  • @bikcrum  1 year ago

    I just wanted to ask: is it possible to use a first-person view from the environment, like a self-driving car, instead of raycasting? I want to train in a way that can be easily transferred to other domains and eventually to the real world. Have you tried training with a first-person RGB-D view from a car, or a similar example?

    • @CodeMonkeyUnity  1 year ago +1

      Sure, that would work, but it would make the model much more complex and much more difficult to train as opposed to a handful of raycasts (see the sketch after this thread).
      As long as the visuals are somewhat realistic, or the real-world car has some common virtual shader, then learning in the virtual world should work for applying it to the real world.

    • @bikcrum  1 year ago

      @@CodeMonkeyUnity The objective of my work is exploration: the agent should try to visit new places. I am thinking of using curiosity-driven reinforcement learning. However, the challenge, as I described, is sim-to-real. I would need some input commonality between Unity and the real world. One idea is to use SIFT features of the visuals, which give image descriptors rather than a dense representation. Even then, I think I need to make an environment that looks like the real world. Do you know of, or have you used, a real-world captured mesh in Unity? Can you point me in the right direction?
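
For comparison with the camera approach discussed above, this is roughly what the "handful of raycasts" alternative looks like with ML-Agents' built-in RayPerceptionSensorComponent3D. The property names follow the component as found in recent ML-Agents packages; treat them as approximate if your version differs, and note the tag names and values here are assumptions (this too is normally configured in the Inspector).

    using System.Collections.Generic;
    using Unity.MLAgents.Sensors;
    using UnityEngine;

    // A few raycasts give a much smaller observation space than a camera,
    // which is why they train so much faster.
    public class RaySetup : MonoBehaviour
    {
        private void Awake()
        {
            var rays = gameObject.AddComponent<RayPerceptionSensorComponent3D>();
            rays.RaysPerDirection = 3;   // 3 to each side plus one forward = 7 rays
            rays.MaxRayDegrees = 70f;
            rays.RayLength = 30f;
            rays.DetectableTags = new List<string> { "Obstacle", "Target" }; // placeholder tags
        }
    }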

  • @sundarakrishnann8242  3 years ago

    The thumbnail is from boss fight? xD

    • @CodeMonkeyUnity  3 years ago +2

      Heh it is! I wanted something to showcase "eyes" and that seemed to look good!

  • @WorldEnder  3 years ago

    For the 3D example, could it be considered a simplified approach to use a top-down camera and teach the AI in 2D?

    • @CodeMonkeyUnity  3 years ago

      You could indeed have a top down camera and feed that to the AI along with actions to move and it would learn to get to an animal and identify it.
      However that would require a massive amount of training time, something like 100+ Million steps.

  • @Punch_Card  several months ago

    9:29 I don't really understand what you mean by classic AI? Is it just if-else statements?

    • @CodeMonkeyUnity  several months ago

      Yup, exactly: that, or state machines, or any of those kinds of AI. Those are a completely different category from machine learning AI.

  • @In-N-Out333  3 years ago

    How come in this project you didn’t create multiple environments to train in parallel to each other?

  • @kamillatocha  3 years ago

    How does it know the animal's position, to move to it and rotate so it's facing it?

    • @CodeMonkeyUnity  3 years ago

      That is handled through classic AI, just a simple list of all the animals and a simple mover script.
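
A rough sketch of what that "simple list plus mover script" side can look like; the list, speed, and arrival threshold are illustrative, not the project's actual code.

    using System.Collections.Generic;
    using UnityEngine;

    // Classic AI half: walk towards the current target animal from a plain list.
    public class AnimalMover : MonoBehaviour
    {
        [SerializeField] private List<Transform> animals;
        [SerializeField] private float moveSpeed = 4f;

        private void Update()
        {
            if (animals == null || animals.Count == 0) return;

            Vector3 toTarget = animals[0].position - transform.position;
            if (toTarget.magnitude < 0.5f) return; // close enough

            // Rotate to face the animal, then move towards it.
            transform.forward = toTarget.normalized;
            transform.position += transform.forward * (moveSpeed * Time.deltaTime);
        }
    }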

  • @myelinsheathxd  3 years ago

    I know it's not easy, but I think the developers should try to add a technology that visualizes the ML brain, to debug and get a sense of how the brain logic is working in our agent!

    • @CodeMonkeyUnity  3 years ago

      Even if you could visualize it you couldn't possibly understand it. It would simply show hundreds of dots each with a value between -1 and +1.
      That's too much noise to be able to debug

    • @myelinsheathxd  3 years ago

      @@CodeMonkeyUnity Yeah, but we need something else to understand how the logic forms, like in human brain analysis. During an fMRI scan of a human brain, it's easy to get a feel for how basic logic works by observing the language areas of the brain. A person is asked to think about one word several times under several conditions, and the data then shows a specific area working consistently. This means the brain uses that part for that specific word, and close to that area there are several related words, like synonyms, etc.
      Again, it's not easy, since I haven't created a custom ML algorithm. But we need something like fMRI-style technology to understand how machine learning learns in real time.

  • @xetra1155  1 year ago

    Does the sensor work with imitation learning?

    • @CodeMonkeyUnity  1 year ago

      Hmm good question, I'm not sure, I covered Imitation learning a long time ago here but don't remember if it would work with a Camera sensor unitycodemonkey.com/video.php?v=supqT7kqpEI

    • @xetra1155  1 year ago

      @@CodeMonkeyUnity Then I guess you know what to do for the next video my friend

  • @rikrishshrestha5421  3 years ago

    I thought the CameraSensor was coded by him, but it's provided by Unity in the ML-Agents package.

    • @CodeMonkeyUnity  3 years ago

      Yes it's one of the built-in sensors

  • @crazyfox55  3 years ago

    Why use ML-Agents instead of just using the bird's transform? It doesn't seem like there is anything for the AI to learn, when the solution is faster and more accurate if it's hand-coded here. Are there better examples that I'm just not seeing?

    • @CodeMonkeyUnity  3 years ago +2

      Yeah it's just a demo to showcase how to use vision with ML-Agents.
      If my goal was just to make an AI to shoot the bird I would go with Classic AI instead of ML.

  • @nicofacto2424  3 years ago

    And of course we will never see ShootTargetEnvironment.cs,
    so... thx... @5:02

    • @CodeMonkeyUnity  3 years ago

      It spawns the prefab and does a raycast. All the code is included in the project files (a rough stand-in sketch follows this thread).

    • @nicofacto2424  3 years ago

      @@CodeMonkeyUnity Well, I got them, but I get ShootTargetEnvironment.cs(32,32): error CS0117: 'UtilsClass' does not contain a definition for 'GetMouseWorldPositionZeroZ'. This is the only error for me.
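
This is not the actual ShootTargetEnvironment.cs, just a hedged stand-in for the behaviour described above (spawn a prefab, do a raycast), with the missing UtilsClass.GetMouseWorldPositionZeroZ call replaced by plain Unity API; all names and the 2D physics assumption are mine.

    using UnityEngine;

    // Stand-in sketch: mouse position in world space with z forced to 0,
    // then a raycast check and a target spawn. Names are illustrative.
    public class ShootTargetExample : MonoBehaviour
    {
        [SerializeField] private Transform targetPrefab;

        private void Update()
        {
            if (Input.GetMouseButtonDown(0))
            {
                // Rough equivalent of the missing UtilsClass.GetMouseWorldPositionZeroZ()
                Vector3 mouseWorld = Camera.main.ScreenToWorldPoint(Input.mousePosition);
                mouseWorld.z = 0f;

                // See what was clicked; spawn a new target if the click hit nothing.
                RaycastHit2D hit = Physics2D.Raycast(mouseWorld, Vector2.zero);
                if (hit.collider == null)
                {
                    Instantiate(targetPrefab, mouseWorld, Quaternion.identity);
                }
            }
        }
    }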

  • @RoadHater  2 years ago

    Is there any way to have a black-and-white camera instead of grayscale? Even fewer observations.

    • @BrainSlugs83  11 months ago

      You would still need one float per pixel; you would just have less usable resolution in each pixel. You can accomplish this with quantization though (e.g. Q2 gives 2 bits per pixel, essentially 4 shades of grey, and roughly 16x the inferencing power).