🌐 Have you found the videos Helpful and Valuable?
❤️ Get my Courses unitycodemonkey.com/courses or Support on Patreon www.patreon.com/unitycodemonkey
📦 Unity Machine Learning Playlist: th-cam.com/play/PLzDRvYVwl53vehwiN_odYJkPBzcqFw110.html
You are the new Brackeys my man
After he left YouTube I was so unmotivated, but you and Fat Dino kept me going
When I run any ML-Agents project, I see this error.
Missing Profiler.EndSample (BeginSample and EndSample count must match): ApplyTensors
Previous 5 samples:
GC.Alloc
ApplyTensors
GC.Alloc
Barracuda.PeekOutput
FetchBarracudaOutputs
In the scope:
Can someone help?
Hi, can I train it as a push-up counter? I'm thinking of training it to detect a proper push-up. Can Unity ML-Agents help with this?
💬 The Machines have EYES!
Adding vision to your AI is surprisingly easy, although it comes at the cost of much longer training times.
I've already covered quite a few of the base mechanics of ML-Agents; what example projects would you like to see?
Tactical shooter AI with machine learning, like Sebastian did? But he didn't show us a tutorial.
An RTS game that the ML-Agent must learn to play?
Can you attach the camera to the 2D character itself, in a 2D game?
AI learns to walk
When you do something, you do it best. These are the best AI tutorial series on the Internet :)
Glad you like them!
Always a delight to see u posting
Been trying to find something like this for ages. Thanks!
I was looking for it thanks dude.......
Thank you for your contribution. I really appreciate your videos.
Damn, this combined with the YOLO object detection algorithm could be really fucking cool for real-life applications
Dude, I love your content so much
Thank you so much for this AI playlist, it's very inspiring! Are you going to make some new videos on this topic in 2022? It would be so amazing :) Thanks again!
Yup I'd love to revisit ML sometime in the future to see what's changed, just need to find the time
thank you so much for the great video!
This is crazy. Why have I never thought about simplifying the image before feeding it to the AI? :) Really cool to see a combination of two AIs.
Please make a video about drawing gizmos. Really interested in how to draw transform gizmos without an object (like for changing vertex positions in ProBuilder).
Great tutorial
You are a life saver ☺️💜👍
He ❤️d my comment
I am so happy
Debug.Log("Awesome");
😂
Now we are definitely going for AI, not only in robots and vehicles but also in game dev. It's really something that has huge potential for the future; every improvement brings out absolute 🔥🔥 results.
AWESOME VIDEO GOD DAMN IT
I am always commenting here about how much I like these videos, and I want a COURSE =) But I have a doubt: it seems correct to think that, if I have appropriate "sensors / data", I can train an AI to be unbeatable in a game. But we are talking about games, and games are for people. It is frustrating to play a game where you always lose.
How do I create artificial stupidity? =) People have to beat the computer, otherwise the game is bad =)
When you're training the AI it will periodically store checkpoint brains. So you could train your AI to be superhuman, then go back and use the brain from a few checkpoints before that point.
Making the AI dumber is definitely an interesting topic, as you said it's no fun fighting against something and always losing.
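For anyone curious how that checkpoint swap could look in code, here is a minimal sketch (hypothetical names; it assumes the ML-Agents Agent.SetModel API and two checkpoint .onnx files imported as NNModel assets):

```csharp
using Unity.Barracuda;
using Unity.MLAgents;
using UnityEngine;

// Hypothetical difficulty selector: swaps in an earlier, weaker
// checkpoint brain at runtime instead of the fully trained one.
public class DifficultySelector : MonoBehaviour
{
    [SerializeField] private Agent agent;
    [SerializeField] private NNModel easyCheckpoint; // e.g. an early checkpoint .onnx
    [SerializeField] private NNModel hardCheckpoint; // the final trained .onnx

    public void SetEasyMode(bool easy)
    {
        // "MyBehavior" must match the Behavior Name set on the
        // agent's BehaviorParameters component.
        agent.SetModel("MyBehavior", easy ? easyCheckpoint : hardCheckpoint);
    }
}
```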
@@CodeMonkeyUnity I understand about checkpoints, but my point is how to create something that is not "offensive" to the player. An AI that hits 100% of the time is just as frustrating as an AI from an earlier checkpoint that gets within half a meter of the player and "shoots up" (stupidly missing the shot).
It seems to me that good game balance means finding a point where the AI does not hit 100% of the time, but also never makes a stupid mistake at a moment where no one would.
I think this discussion is fantastic, and I am concerned with the construction of these sensors to get good gameplay. Maybe missing from half a meter is something not even expected from the computer, but it's ok if it misses a sniper shot from 200m away =)
For human body tracking (and placing a skeleton over the body) in AR for Android, how can I create this application? Can you share your thoughts on this?
How do you attach ML-Agents to the 2D character itself, and what other ways are there to do it?
How do you attach the camera to the agent itself in a 2D game?
How do I use the --initialize-from flag to start training from an older checkpoint?
I watched 3:34, 0:30, 4:32, 0:15 of ads without watching the video. How? I'm playing the video while I'm eating. LOL
What do you mean?
Core i7 4810MQ, 2.8GHz (3.8GHz with turbo boost), is it enough for training AI?
Hey, firstly, many thanks for your helpful tutorial. Unfortunately, after pressing Play to train, I get this error:
mlagents_envs.exception.UnityObservationException: Decompressed observation did not have the expected shape - decompressed had (84, 84, 3) but expected [3, 84, 84]
I saw in some forum that I should update my ML-Agents package, but it is up to date (3.0.0). Did you have this problem, or maybe have a solution for me?
For grayscale, can you have different gray values for different objects?
When will you have a new course on Udemy? Loved the tower defense one, thanks!
Not sure, right now I want to focus on the videos so maybe in 2-3 months. I'm glad you liked the course!
How do you make an AI in a car game that can reverse when it's stuck against an obstacle in front, and get back on track by reversing? Any ideas?
First of all, thanks for your awesome tutorials. So a vision sensor acts as the input and it works for learning. So if I wanted to allocate inventory to orders, would inventory be an action in some form of discrete numbers?
Is it possible to simplify the input image into a few distinct colors? I want to use it for robot navigation, simplifying the camera images into target, obstacle, and background, so that when using the model in real life I can use image segmentation to feed it simplified images, without having to model the complex environment when creating the training environment. (Sorry for my crappy English, it's not my first language.)
Is it possible to use ML AI to train over a gaming session rather than training it "offline" and then loading the trained model?
The reason is that I want to see if you can use ML AI to learn to play against players rather than against the environment.
As far as I know there is no way to train a model outside of the Unity Editor, so you can't train it in a build.
Maybe you could look into non-Unity methods for training and then perhaps dynamically load the newly trained model and use that?
I don't see any VisionCamera culling layer in my Unity 2019.4.13f?
I also have one question: in the Pig/Sheep example you added the CameraSensor. What for? In OnActionReceived you check whether it's a sheep or a pig based on the last transform check.
It's just to showcase how the AI vision works; if you had this exact specific scenario, organizing sheep and pigs, then using ML would be overkill.
@@CodeMonkeyUnity Thanks a lot for your response. Asking for learning purposes, but thanks for the clarification.
I have a question regarding the Camera Sensor.
You set the width and height to 20 and 20 respectively, and it matches the discrete branch size.
When you get an action after a decision request, what does the discrete branch contain? The position of a non-black pixel, or what exactly will it return? What if you wanted the colors it sees in RGB?
Also, does this 20x20 size mean everything seen by the camera attached to this CameraSensor will be scaled/resized to 20x20 pixels?
You also mention that a CameraSensor at 20x20 in grayscale would be 400 observations, but OnActionReceived is called only 17 times when I call RequestDecision. I might be confusing observations with actions, but if observations are like the VectorObservations you showed in other videos, I would expect the VectorObservation Space Size to be larger than 0 in the duck shooting example. Or is the camera sensor's observation bundled inside the ML-Agents framework?
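For reference, a minimal sketch of how the camera sensor is typically wired up (hypothetical names; assumes the ML-Agents CameraSensorComponent is configured before the Agent initializes). The 20x20 grayscale image becomes 400 observations that the policy consumes inside the framework; OnActionReceived only ever carries the actions the policy outputs:

```csharp
using Unity.MLAgents.Sensors;
using UnityEngine;

// Hypothetical setup: add a CameraSensorComponent next to the Agent.
// The sensor feeds the resized, grayscaled image to the trainer directly;
// it never shows up in OnActionReceived, which only contains actions.
public class DuckVisionSetup : MonoBehaviour
{
    [SerializeField] private Camera visionCamera;

    private void Awake()
    {
        var sensor = gameObject.AddComponent<CameraSensorComponent>();
        sensor.Camera = visionCamera;
        sensor.Width = 20;   // everything the camera renders is resized to 20x20
        sensor.Height = 20;
        sensor.Grayscale = true;
    }
}
```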
Can you do a tutorial on how to make different characters have different abilities, like Apex Legends?
You know you're basically asking him to make your game for you.
Implement your abilities as ScriptableObjects, then just make different character prefabs and drag & drop the abilities you want onto the character prefabs. There are multiple tutorials about exactly this topic, even an official one from Unity themselves about character selection and ability systems.
@@r1pfake521 thank you
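For anyone looking for a starting point, a minimal sketch of the ScriptableObject ability pattern described above (all names hypothetical):

```csharp
using UnityEngine;

// Hypothetical ability asset: create instances via the asset menu,
// then drag and drop them onto each character prefab's ability list.
[CreateAssetMenu(menuName = "Abilities/Ability")]
public class AbilitySO : ScriptableObject
{
    public string abilityName;
    public float cooldown;

    public virtual void Activate(GameObject user)
    {
        Debug.Log($"{user.name} used {abilityName}");
    }
}

// Each character prefab just holds its own list of ability assets.
public class Character : MonoBehaviour
{
    public AbilitySO[] abilities;
}
```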
Can you do a video on classic AI vs reinforcement learning?
It's tricky to do a general video because it's all highly dependent on what you want to do. Some things are easier to do with ML, some easier with Classic AI.
@@CodeMonkeyUnity Yes, the video could compare different classic vs ML methods, and when to use one over the other. Just explanations, not necessarily a tutorial.
Do you know if there is a way of writing the texture yourself, without a camera? It would be useful for situations where you have, for example, a class representation of tiles and want to input the agent's surroundings without a camera.
You can manually write to a texture with Texture2D.SetPixel();
I used something like that here unitycodemonkey.com/video.php?v=Xss4__kgYiY
unitycodemonkey.com/video.php?v=ZRRc7J-OwGo
@@CodeMonkeyUnity Sorry, I meant in the ML-Agents context. I found out you can make your own sensor and write to it with ObservationWriter
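A minimal sketch of such a custom sensor (hypothetical names; this matches the ISensor interface of older ML-Agents releases, newer versions use ObservationSpec instead of GetObservationShape):

```csharp
using Unity.MLAgents.Sensors;

// Hypothetical grid sensor: feeds an NxN tile map around the agent
// straight to the policy, no camera or texture involved.
// Expose it to an Agent via a custom SensorComponent subclass.
public class TileGridSensor : ISensor
{
    private readonly float[,] tiles; // e.g. 0 = empty, 1 = wall, 2 = goal
    private readonly int size;

    public TileGridSensor(float[,] tiles)
    {
        this.tiles = tiles;
        size = tiles.GetLength(0);
    }

    public int[] GetObservationShape() => new[] { size, size, 1 };

    public int Write(ObservationWriter writer)
    {
        int index = 0;
        for (int y = 0; y < size; y++)
            for (int x = 0; x < size; x++)
                writer[index++] = tiles[y, x];
        return index; // number of floats written
    }

    public byte[] GetCompressedObservation() => null;
    public void Update() { }
    public void Reset() { }
    public SensorCompressionType GetCompressionType() => SensorCompressionType.None;
    public string GetName() => "TileGridSensor";
}
```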
Can you make a video about AR technology?
AR is an interesting topic that I'd love to research at some point, just don't know when
@@CodeMonkeyUnity God willing, soon 😉
Hi sir, new sub here. I was just wondering how to reference a GameObject from another scene?
I like your videos, especially the ML-Agents ones.
I have a question:
I am currently working on an endless runner with ML-Agents, but the agent keeps losing. Do you have any advice on how to make it work better?
Thanks, my regards
It is all a matter of designing your rewards correctly and letting it train a lot.
In your case, give it some observations based on where the platforms are and give it a reward based on distance.
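As a rough illustration of that reward design (hypothetical names; assumes the standard ML-Agents Agent API):

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
using UnityEngine;

// Hypothetical endless-runner agent: observes where the next platform
// is and gets a small reward for every bit of forward progress.
public class RunnerAgent : Agent
{
    [SerializeField] private Transform nextPlatform; // assumed to be kept up to date elsewhere
    private float lastX;

    public override void OnEpisodeBegin()
    {
        lastX = transform.position.x;
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        // Where is the next platform relative to the agent?
        sensor.AddObservation(nextPlatform.position - transform.position);
    }

    private void FixedUpdate()
    {
        // Reward proportional to the distance covered since the last step.
        AddReward(transform.position.x - lastX);
        lastX = transform.position.x;
    }
}
```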
The machine has found you...
R
U
N
I just wanted to ask: is it possible to use a first-person view from the environment, like a self-driving car, instead of raycasting? I want to train in a way that is easily transferable to other domains and eventually to the real world. Have you tried training with a first-person RGB-D view from a car, or a similar example?
Sure, that would work, but it would make the model much more complex and much more difficult to train compared to a handful of raycasts.
As long as the visuals are somewhat realistic, or the real-world car input shares a common look with the virtual shaders, then learning in the virtual world should carry over to the real world.
@@CodeMonkeyUnity The objective of my work is to explore in a way that the agent tries to reach new places. I am thinking of using curiosity-driven reinforcement learning. However, the challenge, as I described, is sim-to-real. I would need an input commonality between Unity and the real world. One way I am considering is to use SIFT features of the visuals, which contain a descriptor of the image rather than a dense representation. Even so, I think I need to make an environment that looks like the real world. Do you know, or have you used, a real-world captured mesh in Unity? Can you point me in the right direction?
The thumbnail is from boss fight? xD
Heh it is! I wanted something to showcase "eyes" and that seemed to look good!
For the 3D example, could using a top-down camera and teaching the AI in 2D be considered a simplified approach?
You could indeed have a top down camera and feed that to the AI along with actions to move and it would learn to get to an animal and identify it.
However that would require a massive amount of training time, something like 100+ Million steps.
9:29 I don't really understand what you mean by classic AI? Is it just if-else statements?
Yup exactly, that or state machines or any of those kinds of AI, those are a completely different category from machine learning AI
How come in this project you didn't create multiple environments to train in parallel with each other?
How does it know the animal's position, to move to it and rotate so it's facing it?
That is handled through classic AI, just a simple list of all the animals and a simple mover script.
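That classic mover could be as simple as this sketch (hypothetical names, plain Unity API, no ML involved):

```csharp
using UnityEngine;

// Hypothetical classic-AI mover: walk toward a target animal
// and rotate so the agent is facing it.
public class AnimalMover : MonoBehaviour
{
    [SerializeField] private float moveSpeed = 5f;
    [SerializeField] private float turnSpeed = 360f; // degrees per second

    public void MoveToward(Transform target)
    {
        Vector3 toTarget = target.position - transform.position;

        // Move in a straight line toward the target.
        transform.position = Vector3.MoveTowards(
            transform.position, target.position, moveSpeed * Time.deltaTime);

        // Smoothly rotate to face the target.
        if (toTarget.sqrMagnitude > 0.001f)
        {
            Quaternion look = Quaternion.LookRotation(toTarget);
            transform.rotation = Quaternion.RotateTowards(
                transform.rotation, look, turnSpeed * Time.deltaTime);
        }
    }
}
```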
I know it's not easy, but I think the developers should try to add a technology that visualizes the ML brain, to debug and get a sense of how the brain logic is working in our agent!
Even if you could visualize it, you couldn't possibly understand it. It would simply show hundreds of dots, each with a value between -1 and +1.
That's too much noise to be able to debug.
@@CodeMonkeyUnity Yeah, but we need something else to understand logic-creation behavior, like in human brain analysis. During an fMRI scan of the human brain, it's easy to get a feel for how basic logic works by observing the language areas of the brain. A person is asked to think about one word several times under several conditions, and the data then shows that a specific area works consistently. This means the brain uses this part for that specific word, and close to this brain area there are several related words, like synonyms, etc.
Again, it's not easy, since I haven't created a custom ML algorithm. But we need something like fMRI technology to understand a machine learning model's learning principles in real time.
Does the sensor work with imitation learning?
Hmm, good question, I'm not sure. I covered imitation learning a long time ago here but don't remember if it would work with a camera sensor: unitycodemonkey.com/video.php?v=supqT7kqpEI
@@CodeMonkeyUnity Then I guess you know what to do for the next video my friend
I thought the CameraSensor was coded by him, but it's provided by Unity in the ML-Agents package.
Yes it's one of the built-in sensors
Why use ML-Agents instead of just using the birds transform? It doesn't seem like there is anything for the AI to learn, when the solution is just faster and more accurate if its hand coded here. Are there better examples that I'm just not seeing?
Yeah it's just a demo to showcase how to use vision with ML-Agents.
If my goal was just to make an AI to shoot the bird I would go with Classic AI instead of ML.
And of course we will never see ShootTargetEnvironment.cs
so... thx... @5:02
It spawns the prefab and does a raycast. All the code is included in the project files.
@@CodeMonkeyUnity Well, I got them, but ShootTargetEnvironment.cs(32,32): error CS0117: 'UtilsClass' does not contain a definition for 'GetMouseWorldPositionZeroZ'. This is the only error for me.
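If the project files are missing that helper, a minimal stand-in would be something like this (hypothetical, assumes the legacy input system and an orthographic camera looking down the z axis):

```csharp
using UnityEngine;

// Hypothetical stand-in for the missing utility method:
// mouse position in world space with z flattened to 0.
public static class UtilsClass
{
    public static Vector3 GetMouseWorldPositionZeroZ()
    {
        Vector3 worldPosition = Camera.main.ScreenToWorldPoint(Input.mousePosition);
        worldPosition.z = 0f;
        return worldPosition;
    }
}
```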
Is there any way to have a black-and-white camera instead of grayscale? Even fewer observations.
You would still need one float per pixel; you would just have less usable resolution in each pixel. You can accomplish this with quantization, though (e.g. Q2 gives 2 bits per pixel: essentially 4 shades of grey, with roughly 16x the inferencing power).