...but why? Why do any of this?
cuz it's interesting AF
you're more than a year late to ask that homie
@@jazzWF Its a question unbound by time
Cause computer vision is awesome!
Because he can.
It's amazing how you can easily replicate my teammates in comp
lmaoo absolute gold comment. This should get pinned HAHAHA
He should make a neon bot that sprints into the enemy's spawn with spike lmao
@@zem0ku605 what rank are u?
1to1 replica
Iron confirmed
This video made me understand why my friends call me a bot.
105 likes
No comment. After 1 year let's fix it.
Lol
@@reportagebykonstantinos8030 agree
4 comments
LMAO
Calling Valorant a csgo gamemode is the funniest and most fitting description of the game I've ever heard
the AI is like a noob and a pro fighting over the controls.
you're getting close.
One thing I thought you could do is require a label to show up for 150ms (roughly a standard pro reaction time) before firing. That way it doesn't shoot at every one-frame ghost it thinks it sees, only at persistent threats, and it also seems more realistic and human-like by having a believable reaction time. You could also have it move the aim gradually in trial movements until it's over the top of the marked target and only shoot once it lines up, which would not only improve the reliability of the aim but make it seem even more human-like. I just realised this video was from May 2021 and you're likely not even working on this any more, oh well
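Sketching what that persistence filter might look like (the class and parameter names here are mine, not from the video; treat it as a toy):

```python
import time

class DetectionDebouncer:
    """Fire only after a label has persisted for `hold_s` seconds (~150ms),
    so one-frame ghost detections never trigger a shot."""
    def __init__(self, hold_s=0.150, drop_s=0.100):
        self.hold_s = hold_s   # how long a label must persist before acting
        self.drop_s = drop_s   # how long it may vanish before we forget it
        self.first_seen = None
        self.last_seen = None

    def update(self, detected, now=None):
        """Call once per frame; returns True when it is OK to shoot."""
        now = time.monotonic() if now is None else now
        if detected:
            if self.first_seen is None:
                self.first_seen = now
            self.last_seen = now
        elif self.last_seen is not None and now - self.last_seen > self.drop_s:
            # target has been gone long enough: reset the streak
            self.first_seen = None
            self.last_seen = None
        return self.first_seen is not None and now - self.first_seen >= self.hold_s
```

The small `drop_s` grace period keeps a brief detection flicker from resetting the timer, which matters when the detector itself drops frames.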
Desinc on a 1 year old vid
@@eHeSTaFIXtatiCkANKpiQU I know, I wanted to write it anyway so I did
@@eHeSTaFIXtatiCkANKpiQU dawg
Just got this recommended. Really good work!
I am impressed by the performance you can achieve with transfer learning on your "small" annotated Valorant dataset. Do you still remember how high the performance for the different objects was on your test set (accuracy, or mAP if you computed it)?
It also really hurt me not seeing your model TURN upon hearing someone behind :D Would really love to see you including audio next, and then seeing some nice 180 flicks in version 2.0.
recommended
this randomly came in my algorithm 3 years later
same
same
Same
Type shi
Anyone else just get recommended this video 1 year later?
Great video btw
Yes
This is a prime example of a YouTuber who needs a shit ton more attention. Well done!
This is gold. Things I think you could do (although it's been a year, so who knows what happened): obviously have it be aware of object presence, as you said in the video, but also respond to sounds, take voice commands (via wheel or voice chat), be aware of the economy, and most importantly, have it teabag other players.
Pretty interesting video, and it's really well made as well. And it has subtitles! Thanks!
I’m happy that somebody likes the subtitles!
@@riveducha hello friend, my name is Luigi, would you please help me?
@@luigiesposito2481 help you in what?
When I was doing my project with computer vision, I gained almost a 10x performance increase just by downscaling the input image by some ratio. Of course, it lowers the accuracy of the results, but sometimes the full resolution is much higher than you need, and downscaling barely affects the results at all.
So, by finding the optimal level of downscaling, you can boost performance essentially for free.
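The only bookkeeping downscaling needs is mapping the detected boxes back up to full resolution; a tiny sketch (the box tuple format is my assumption, and the downscale itself would be e.g. `cv2.resize`):

```python
def rescale_boxes(boxes, scale):
    """Map boxes detected on a downscaled frame back to full resolution.
    `boxes` is assumed to be [(x1, y1, x2, y2, conf, cls), ...] and `scale`
    is the factor the frame was shrunk by (e.g. 0.5) before inference."""
    inv = 1.0 / scale
    return [(x1 * inv, y1 * inv, x2 * inv, y2 * inv, conf, cls)
            for (x1, y1, x2, y2, conf, cls) in boxes]
```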
This project really is incredible - the way the video was captured, the way inputs were sent to the game, the problem solving of getting a used dongle when the exploit was patched, all of it was wild!
This video is a year old and now is being recommended to everyone
Amazing work. My daughters told me about this and I was impressed so had to check this out. Well done!!
I said it before and I'll say it again.
These are the *best* videos on YouTube right now. You sir are on the fast track to 2 million subs if you keep up this frequency and quality.
Good luck and well done 👍
Appreciate the support!
THIS IS SOO GOOD. As a person just starting out with OpenCV and AI and stuff, with an interest in Valorant, this is godly. I do want to see your code just to see how you used all the AI libraries, purely from an academic standpoint, but it makes sense why you wouldn't want to share it.
I wanna see the code as well lol. I've made a kinda poop bot for CSGO, but it was cool. Please make a GitHub with the code maybe or something
This is extremely cool, and I’m super impressed! You’ve given me the motivation to get started on a few personal projects I’ve been considering.
I love stuff like this, combining hardware hacking and multiple devices and data streams - managing complexity like that and coming up with solutions for problems in that space is so much fun.
As someone getting a PhD in Machine Learning, you're doing the work of someone getting a PhD in Machine Learning.
It's funny, I wrote computer vision bots for both PUBG and BDO using very similar tech. I followed nearly the same thought paths as you, used the same strategies / tech, and hit the same roadblocks. The part about being unable to load cudart had me dying, I know that pain. People would ask me why I bothered and I had no answer other than it was fun, so yeah I totally get this video and am glad to see someone else understands how satisfying making something like this can be, even though there is no real advantage to be gained.
So, I know this video is kind of old, but I just discovered your channel and I'm watching all your videos 😅. I have a PhD in machine learning, and I saw in the quick code that appears in the video that you are using large images. I don't think that's necessary. You could downscale the images, run them through your model, and recalculate afterwards where the bounding box is on the real feed. A second thing I would suggest is to use a pretrained Hugging Face object detection model just to see if it detects the characters as people, then use simple code to check the color of the outline. That should help with the small amount of data. You could even create data this way :) I don't have a solution for the spikes and mollies though. Either way, awesome video!
Just got recommended your video today randomly and loved it. I thought you were a much bigger channel, you definitely deserve more views!
I am actually planning on developing a thesis with machine learning and AI, and your video just showed up in my feed. Incredible, you just gave me an idea! Thank you so much for that, I really appreciate it.
Looking back at older games like Counter-Strike 1.6 that had bots, we're really hoping this game gets a bot like the one you made. Admittedly it would be easier to build inside the game itself rather than through computer vision, but anyway... this project is smaller than what large companies build for robots and self-driving cars like Tesla. Don't compare yours to them; this one-man project really amazes and inspires the community.
More videos like this with deeeeep technical explanations, please. In this video I understood so many things that I had been searching for and didn't understand.
Honestly, I have watched this video about 4 times in the last month because of how good it is. Unfortunately there are not many good videos explaining how to train on a custom dataset, but the sources in the video's description helped me a lot. Thank you for sharing this information.
Bro said he didn't share the code, but somehow I see this in all of my ranked games
My dream is literally being able to do these things. I love the video, keep it up!
Whoa, I didn't know PyTorch was so hard to install a year ago. Now everyone can download it
Id love to see a series on this as you keep trying to improve it, it was so much fun to watch
I'm glad this vid got recommended to me. For enemy detection, you could probably use the fact that the game outlines all enemies in the same bright red color to make the job easier on yolov5.
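If anyone wants to try the red-outline idea, the core is just a per-pixel color mask. The thresholds below are guesses I made up, and in practice you'd vectorize this with NumPy or `cv2.inRange` instead of Python loops:

```python
def red_outline_mask(rgb_rows, r_min=180, gb_max=90):
    """Mark pixels that look like a bright-red enemy outline.
    `rgb_rows` is a row-major image: a list of rows of (r, g, b) tuples.
    A pixel counts as 'outline red' when red is high and green/blue are low."""
    return [[r >= r_min and g <= gb_max and b <= gb_max for (r, g, b) in row]
            for row in rgb_rows]
```

You could then restrict the detector's candidate boxes to regions where this mask has enough hits, which should cut down false positives cheaply.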
now get 8 more people and make a custom lobby so the bot can learn from actual gameplay experiences.
The lore of Terminator 7
@@andraskmeczo575 shit actually happened in Rocket League 😂
YouTube algorithm as unusual as always, glad I found this gem of a channel
Why do I honestly think this bot could at least get bronze... Iron is a weird place
What an amazing project, these types of projects are what we engineers think of doing and give up saying it's way too much work 😂. Anyway great work and good content.
yo this is actually a really cool video and experiment. thanks for sharing your findings!
An improvement for your labelling:
you could add the ability to analyze moving images (for the paint splatters) by introducing an LSTM or similar.
This would also remove the false labelling of beams or shells as enemies or spikes, because the data in the short-term memory makes it impossible to mistake a bullet shell for a spike. And the data in the long-term memory might even know which friendly is holding the spike.
when Riot don't make bots in custom so you do it yourself
Just go into a comp game
@@qaugithaduck5771 but then I'll be the bot
This is why you never take down videos. They could pop off years after uploading
It's things like this that make me wish I had the patience to learn coding and neural networks. I'd have so much fun just experimenting and pushing the boundaries of what I could create
Not me getting false banned for "3rd party program" while this guy's making an AI for Valorant 💀💀
The only flaw I see is that it doesn't know to trash talk
This is hard AF, I've tried computer vision before. This guy did a great job
Well done
It would be super fun to have like a league where it’s only AI you make yourself. 5v5 AI tourneys
Those kinds of tournaments exist in CS: each team runs code from a specific dev who programmed all the moves of his team's bots. Very funny to watch
Oh you are the guy who made a brim bot who writes good round my agents!
lmao
I've worked with cv2 already, but this is next level, my dude. I love this video so much, and computer vision is extremely interesting. I'm actually considering focusing on computer vision in my future career. Anyways, thanks for this awesome video and the great inspiration.
You are a beast
This takes "It doesn't use headphones!!!" to a new level
Dude, thanks for making this video. You have finally proved my point that this game has bots in ranked; I started noticing it once I hit Rad
You need to reduce the size of the images the neural net is provided with. Go black and white and scale down the images, this will let it perform so much faster
But also reduce the resolution.
No way... this guy finally found a way to have fun in Valorant
Very interesting video!
By the way, the creator of YOLO ceased his research to prevent the tech from being used for military applications. I hope it will not be misused.
that would be so tight to have AI gaming competitions in the future. See who can make the best trained AI and have all the AI compete against each other
amazing idea
Here's one example: www.cs.mun.ca/~dchurchill/starcraftaicomp/
Plz keep training this AI to the point it can play at at least Iron level) Waiting for part 2)
wait, u mean irons are better than this?
@@sakana6388 yeah, they are
6:04 Not going to lie, that far friendly on the left, I thought that was Sova until you pointed it out... I guess I'm just an engineered AI
I love that you're so smart you can replicate my teammates in comp
Play unrated pleaseeee. I need to hear how the ai will react to verbal abuse
I would like to thank youtube for combining my interests in one video
"this looks fun, but i might get banned if i test it in a multiplayer lobby"
"ever heard of tf2?"
Awesome video. Changing the outline color for enemies to a bright pink or so could probably help with a lot of the false enemies. The second problem is a bit more tricky, you're right. But I believe it could be that the AI gets tricked by an illusion like the Necker cube. Since you always start the round facing the correct way, you could use that to establish a null point, then accumulate the inputs and subtract them again afterwards to always know the correct view. Anyway, thanks for the video, I look forward to more.
How are we all just seeing this now?
fr lmfao
why just after 1 year? youtube hello?
So this is what my braindead ranked teammates were using
Why is this only now in my recommended? This is so fascinating!
I do not like cheating in fps games but this is so cool!
only in fps games?
I've been looking everywhere for someone who is trying to accomplish the same task. Subbed!
You knew that what you made wasn't perfect, but the feeling of making something on your own is awesome. Great video!
It is so painful to see the AI struggling, knowing it is just not good enough and there is nothing it can do until some human makes a better version of it.
No, what's more painful is knowing there are actual human beings that play like this: viewangle desync (aiming at the ground), doesn't use audio, etc etc
Eh, if you use one of those learning bots it can. Also I don't know much about this stuff, so lmk if what I'm saying is incorrect
The design is very human
As a tournament organiser, having an odd number of teams was bad. That's why I would have wanted to let people who create bots participate (not cheats, just bots that use only info a human could have, exactly as you did). But, well, I doubted this existed, and your video kind of proved it does T.T
Hey, love the content. Speaking as an overglorified trolley collector studying computer science and biomechatronics, I've only ever really worked with prerecorded material for AI, so forgive me if this can't be applied. I thought it could be interesting to use interpolation to track specific markers between set intervals/frames, using past and present frames of character motion to predict where to aim and shoot despite the low frame rate. For the navigation aspect, my big dumb idea is using planar homography: taking the height of the player model and the viewing angle to visually map and record coordinates around the map and develop a nav mesh, which the AI could then use to get around corners
Bro just created an Iron 1 player.
Also this bot oddly resembled the teammates I get during my rank-ups lmao.
Well you never really know since there are a lot of bots out there that can roughly simulate human actions
@@DreamingBlindly me when i lie
This is getting recommended after 1 year lol
thats some god tier crosshair placement
no crosshair is for gods and this ai tried at least
can't believe you picked Brim instead of KAY/O during testing
You should set 2 of these up and have them 1v1 eachother
this is still really impressive! well done!!
Apply data augmentation so it can generalize better. Adding filters for each label, like a bounding-box width/height ratio range and an RGB value range, will clean up the predictions
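The ratio-filter part of that suggestion is only a few lines. A minimal sketch (the ratio bounds are illustrative guesses, and the box tuple format is my assumption):

```python
def filter_boxes(boxes, min_ratio=0.2, max_ratio=0.8):
    """Drop detections whose width/height ratio is implausible for a player
    model (an upright character is taller than it is wide).
    Boxes are (x1, y1, x2, y2, conf, cls) tuples."""
    kept = []
    for box in boxes:
        x1, y1, x2, y2 = box[:4]
        w, h = x2 - x1, y2 - y1
        if h > 0 and min_ratio <= w / h <= max_ratio:
            kept.append(box)
    return kept
```

A similar post-filter on mean color inside the box could implement the RGB-range idea.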
Hi bro... I'm a learner in OpenCV. Can you guide me on how to master it? I have so much interest in it
@@jdjdejei-ok1qt Teaching yourself is more viable than asking a random online
the lack of object permanence is pretty realistic for pugs honestly.
I'd like to see this model play against itself and learn the game, see how good it can get
awesome work here man ! Would definitely love to see a part 2 some day
I think it would be really cool to have it read chat commands from teammates
stuff like "!go A", "!Defend Spike", "!defuse spike"
i think u gotta distinguish not just from enemy, but enemy head, body, and legs, so it knows where to aim to.
Inb4 you literally make an aimbot
@@adrielle1i23 haha not on purpose, but it would eliminate a lot of the error-making in the AI's process of elimination when it notices a close-up enemy or even a moving enemy. It would be hard for the AI to notice a head-peeking enemy otherwise, or some such.
@@kecs2 I actually don't think that would make a difference. The biggest difference would be from using multiple frames instead of a single. Even humans have a hard time noticing features of a still image. But if something is moving it's much easier to see.
@@oblivion_2852 ya that's what I mean, multiple frames to highlight a different portion of the body the way Valorant divides its damage multipliers: head, body, legs
You should make a part 2, this video was really good
Getting the resolution down by 2 to 3 times would massively benefit FPS and save some computing power
this is so sick! just imagine if you had more advanced equipment damnnn
so this must be what all my teammates are
Very interesting, good job with that. For the FPS problem, I recommend you try scaling the image resolution down to 608x608 (it must be a multiple of 32) and removing the last YOLO head from the model (it's responsible for long-range detections and is also the most expensive in terms of computation). This will cost some accuracy on long-range targets, but will give a much better chance in close-range encounters.
Also, if you'd like, I could help you increase the detection accuracy quite a bit more, as well as with some performance improvements. I am interested in this.
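The multiple-of-32 constraint is easy to handle automatically. A sketch of the sizing arithmetic (the function name is mine; this is just the rounding, not the resize itself):

```python
def yolo_input_size(w, h, target=608, stride=32):
    """Pick an inference resolution: scale (w, h) so the longer side is about
    `target`, with both sides rounded to multiples of `stride`, since YOLO's
    repeated 2x downsampling requires stride-aligned input dimensions."""
    s = target / max(w, h)
    rw = max(stride, round(w * s / stride) * stride)
    rh = max(stride, round(h * s / stride) * stride)
    return rw, rh
```

For a 1920x1080 capture this gives 608x352, which keeps the aspect ratio close while satisfying the stride constraint.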
This would be super funny in an actual match
A full custom game with 10 of these would also be entertaining.
You should meet TacticalPumpkin
This is actually really cool. I recently did a project using OpenCV and YOLOv5, and I was wondering if I could make a Valorant bot like you did. I am absolutely blown away by this
One thing you could do is connect a wireless audio adapter into the game PC and have it transmit to the Bot PC.
Then you could use an audio library to monitor the left and right audio channels for things like footsteps, gun shots, volume, etc.
Then compare those findings to the mini map to see if those sounds are coming from a teammate or enemy. If it couldn't be coming from a friendly, have the AI turn in the direction of the sound.
That'd be a way to at least get some basic sound integration done.
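The left/right comparison could start as something as simple as comparing channel energy. A toy sketch (names are mine; real HRTF game audio encodes direction with far more than loudness, so treat this as a first approximation):

```python
import math

def stereo_direction(left, right, eps=1e-9):
    """Crude bearing estimate from channel loudness: compares the RMS energy
    of the left and right sample buffers and returns a value in
    -1.0 (hard left) .. +1.0 (hard right)."""
    rms_l = math.sqrt(sum(s * s for s in left) / max(len(left), 1))
    rms_r = math.sqrt(sum(s * s for s in right) / max(len(right), 1))
    return (rms_r - rms_l) / (rms_l + rms_r + eps)
```

The bot could turn toward whichever side the value points when it exceeds some threshold, after the minimap check rules out friendlies.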
Hi! This is so awesome! I'm a data scientist and I had some thoughts!
There is a lot that could be done to improve the actual CV model, but I want to focus on some other stuff first.
For the latency in the detection, AFAIK a common solution is using something like the Hungarian algorithm to match detections across frames. After you have matched your detections, you can feed them into a filter like a Kalman filter to model and smooth the trajectory. Since you know your latency, velocity, heading, etc., you can push the reported position into the future as an easy way to get the bot to 'lead' its shots and compensate for detection latency. This is really convenient as well, since you can remove unmatched detections and solve issues with short-term (like single-frame) false positives. Also, if you lose a detection for a few frames, the Kalman filter will predict the expected locations based on the object's kinematics, which may help too.
For navigation, things are a lot harder. Navigating straight from pixels is obviously really tough. I think the standard approach would be to use something like ORB-SLAM to actually do the localization. If you want to get fancy, you can combine ORB-SLAM, the minimap, and also your key input in an extended Kalman filter or something similar.
There are probably also hackier approaches to navigation using heuristics or the dynamic window approach, which might be worth looking at!
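The smoothing-plus-lead idea can be prototyped without a full Kalman filter by using fixed gains (an alpha-beta tracker, which is a simplified constant-velocity Kalman filter). All names and gain values below are mine, a sketch rather than the commenter's implementation:

```python
class AlphaBetaTracker:
    """Fixed-gain (alpha-beta) tracker: a simplified constant-velocity
    Kalman filter. Smooths a target's tracked position and can extrapolate
    it forward to compensate for detection latency ('leading' the shot)."""
    def __init__(self, x, y, alpha=0.85, beta=0.005):
        self.x, self.y = x, y          # smoothed position estimate
        self.vx = self.vy = 0.0        # velocity estimate
        self.alpha, self.beta = alpha, beta

    def update(self, mx, my, dt):
        """Fold in a new measured box center (mx, my) taken dt seconds later."""
        # predict forward under constant velocity
        px = self.x + self.vx * dt
        py = self.y + self.vy * dt
        # correct with fixed gains on the prediction residual
        rx, ry = mx - px, my - py
        self.x = px + self.alpha * rx
        self.y = py + self.alpha * ry
        self.vx += self.beta * rx / dt
        self.vy += self.beta * ry / dt

    def predict(self, lead_s):
        """Expected target position `lead_s` seconds from now."""
        return self.x + self.vx * lead_s, self.y + self.vy * lead_s
```

Aiming at `predict(latency)` instead of the raw detection is the "lead the shot" trick; a real Kalman filter would additionally adapt the gains from the noise covariances.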
Agree with all of your suggestions! And thanks for suggesting some good ideas other than "lol python slow" :)
@Ocean Blues I think you have the right idea. One improvement could be to leave some buffer room around walls which would reduce getting stuck on walls and corners.
@@riveducha I hope you turned off mouse acceleration. I don't remember if you mentioned it in the video. But I guess mouse acceleration would be something that could cause the bot to overshoot and oscillate around the target.
he's designed a very evil new generation of aimbot and he doesn't know it
exists already for months, ik ppl who sell those aimbots
its nothing new, people have been doing this since 2017
@@ZapWyd are those based on AI, or are they just somehow able to read Valorant's encrypted data inside the computer and work out the positions?
Ah what a way to make an aimbot without saying you're making an aimbot
There is probably a way to randomly generate training data. The game's assets are probably available, so rendering PNGs of just the character models (with a fully transparent background) at different distances and angles, placing them randomly into background shots of the game, and automatically generating the bounding box where the PNG got placed (which until now you had to draw manually) could give you lots of training data very quickly, which should improve the results of your AI model by a lot. Just an idea
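Since you control the paste position, the label comes for free. A sketch of the placement-plus-label step (function name is mine; the actual compositing would be done with PIL's `Image.paste` or similar, using the sprite's alpha channel):

```python
import random

def yolo_label(bg_w, bg_h, spr_w, spr_h, cls=0, rng=random):
    """Pick a random placement for a spr_w x spr_h sprite on a bg_w x bg_h
    background and return the (x, y) paste position plus the YOLO-format
    label line: 'class cx cy w h', all normalized to the background size."""
    x = rng.randint(0, bg_w - spr_w)
    y = rng.randint(0, bg_h - spr_h)
    cx = (x + spr_w / 2) / bg_w
    cy = (y + spr_h / 2) / bg_h
    label = f"{cls} {cx:.6f} {cy:.6f} {spr_w / bg_w:.6f} {spr_h / bg_h:.6f}"
    return x, y, label
```

Randomizing sprite scale, rotation, and brightness per sample would make the synthetic set generalize better.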
it would be cool to see 2 AI's 1v1 each other in valorant
OMG YES PLEASEEEEEEEEEEEEE
I would really like to see a part two where you let the bot play as both attacker and defender and learn by itself.
This kind of bot does not learn how to play, only where certain landmarks are
youtube recommended us all at once
tf
Tru
Ah so these are my teammates in my ranked games
Ah, my ranked teammates!
All jokes aside, really interesting experiment, always cool to see the capabilities of AI
i feel positively Neolithic after hearing "for you younger viewers, yolo is a meme from the 2010s"
love the video!