To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/CodeNoodles/. You'll also get 20% off an annual premium subscription.
I feel like something that could improve it is having the code specifically look for any black spots that mark a bomb and label the areas around them as dangerous. That would help the code avoid bombs and make sure it doesn't try to hit fruits too close to them.
I would also make it not scan too close to the bottom of the screen, since a bomb could come up right after (or at the same time as) a fruit and get hit.
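A rough sketch of that idea, assuming OpenCV/NumPy and that frame is a BGR screenshot of the play area (the threshold, padding, and bottom-margin values are guesses):

import cv2
import numpy as np

def danger_mask(frame, dark_thresh=40, pad=60, bottom_margin=150):
    # Mark near-black pixels (likely bomb bodies) as dangerous.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    dark = (gray < dark_thresh).astype(np.uint8)
    # Grow each dark blob so the area around a bomb also counts as dangerous.
    danger = cv2.dilate(dark, np.ones((pad, pad), np.uint8))
    # Treat the strip near the bottom as dangerous too, since objects are still entering there.
    danger[-bottom_margin:, :] = 1
    return danger.astype(bool)  # True = do not slice here

# usage: skip any detected fruit whose (x, y) lands on the mask
# if danger_mask(frame)[y, x]: continue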
if object = bomb
is bad
else
is good
Basically
if only it was this easy
That looks like something from Scratch.
if (object == bomb) {
    donttouch
} else {
    destroy
}
==*
The somewhat unoptimized nature of the program gives it a lot of personality and comedic value
10/10
Thanks, that makes me feel better 😆
Programs also make mistakes like humans do?!
@@CodeNoodles Great video. I loved it
This feels like a strong candidate for reinforcement learning imo. Just give it the average color values you have already collected, a reward function based off the in-game score system (obviously make bombs a high negative), and watch it go.
Could also just use a single convolution with a 7x7 filter, since the bombs always have a minimum size, then compute the average of the colors as he did; if it's not white or black, slice the fruit by going 10 pixels in either direction from that pixel
@@4_real_bruh Yeah, I had the same idea. I thought about going by size as well. He could add this on top of the color to prevent more false positives.
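A quick sketch of that averaging idea, assuming OpenCV/NumPy and a BGR frame (the 7x7 size comes from the comment; the black/white thresholds are placeholders):

import cv2
import numpy as np

def classify_pixels(frame, dark=40, light=215):
    # cv2.blur is a convolution with a uniform 7x7 kernel: a local colour average.
    avg = cv2.blur(frame, (7, 7))
    brightness = avg.mean(axis=2)
    is_bomb = brightness < dark          # near-black patch -> likely a bomb
    is_background = brightness > light   # near-white patch -> background / UI
    is_fruit = ~(is_bomb | is_background)
    return is_fruit, is_bomb

# usage: ys, xs = np.nonzero(is_fruit)  -> candidate pixels to swipe through (+/- 10 px)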
It would be cool if the program waited a while after the fruits appeared on screen and then calculated how many there are. If there is more than one, it tries to slice them in one go instead of with a bunch of separate slices.
I can already imagine that constantly catching bombs in the big slices
@@Rjciralli perhaps a pathfinding algorithm to avoid the bombs, although this might be too slow
The program does what’s efficient
@@advance64bro it uses Python…
@@advance64bro the program does what it is designed to do. It can be efficient with fewer slices if it's designed that way
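A rough sketch of a single multi-fruit slice, assuming pyautogui and a list of detected fruit centres (bomb avoidance left out for brevity):

import pyautogui

def slice_fruits(fruit_points):
    # Sort fruits left-to-right so one continuous drag passes through all of them.
    points = sorted(fruit_points)  # list of (x, y) screen coordinates
    if not points:
        return
    pyautogui.moveTo(*points[0])
    pyautogui.mouseDown()
    for x, y in points[1:]:
        pyautogui.moveTo(x, y, duration=0)  # duration=0 keeps the drag as fast as possible
    pyautogui.mouseUp()

# usage: slice_fruits([(400, 300), (520, 340), (610, 280)])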
Being somebody who works in the computer vision field, I feel like it would've been simpler for you to convert the image to HSV, take the value (lightness) channel, and then binarize the image by checking if it's less than a certain threshold. From there you can see how many connected pixels there are, and if there are more than, say, 1000 black pixels, it's a bomb.
This man codes!!
Wow, that's a good idea
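Something like this is probably what's meant, assuming OpenCV and a BGR frame (the value threshold is a guess; the 1000-pixel cutoff comes from the comment):

import cv2
import numpy as np

def find_bombs(frame, value_thresh=50, min_pixels=1000):
    # Binarize on the V (lightness) channel of HSV: dark pixels become 1.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    dark = (hsv[:, :, 2] < value_thresh).astype(np.uint8)
    # Group connected dark pixels into blobs and keep only the big ones.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(dark)
    bombs = []
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_pixels:
            bombs.append(tuple(centroids[i]))  # (x, y) centre of a likely bomb
    return bombs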
The problem I see is that it doesn't wait for all the fruit, to make sure it gets a combo for the points bonus
It is possible to train an AI that way, just a smart genetic algorithm or something similar
It doesn't even know how the game works, it's a program that does what it's told
@@advance64bro yes but no.
You can train it with just a genetic algorithm: train 10 AIs at a time, the AI with the biggest score replicates like natural selection, and there is a chance the AI learns to make a combo
Isn't that lost score made up for by how many criticals it gets?
@@mateuszpragnacy8327 that would take a long-ass time and a lot of processing though
I have made a program in Python to automatically complete tasks in Among Us and wrote code for almost all tasks on the first map. I faced the same problem in some tasks (like clean vent, clear asteroids, etc.) where the image recognition would not work properly due to random rotations of the sprites on screen. Your solution to the problem might work perfectly in my program and I am gonna try that soon. It might be even better suited, since there is even less chance of false positives (which were caused by the splattering of the fruits on the wall in Fruit Ninja).
awesome project! Are you going to do a video on it?
@@TheFurry nah, probably not. I just made it because my USB mouse broke and using the laptop's touchpad was very annoying. If you are interested, I could give you the code, but since I made it just for me, I didn't take the screen resolution into consideration, so it only works on a monitor with the exact resolution of 1920x1080.
Interesting....
Would you mind if I could see the code?
If you can share the link, I would be grateful.
@@HarshWeave9487 This is my first time using GitHub, so sorry if there are any mistakes. Also, since I just made it for me, almost every pixel coordinate is hardcoded, so there are just random numbers everywhere. If you have any questions, you can just ask here, I am online pretty regularly.
@@HarshWeave9487 I commented the link here but I think it got deleted by YouTube. Is there any other way to share the link???
This could be better optimized if you didn't base it on color recognition but on pixels moving in the grid you created. With that approach, you would only need to recognize the black color of the bomb; everything shouldn't rely on color alone. I saw it get confused by the splashed fruit in the background a few times
This would add a small delay because he'd always be one frame behind (you need to take the current frame and subtract the previous frame to drop the pixels that didn't move), but I do agree that the optimization would probably speed it up enough to be worthwhile. He also shouldn't be processing the full RGB image, especially because he's in Python; that's obviously going to be slow. Turning it into a grayscale or HSV (lightness channel) 2D array and doing some sort of processing to check for the darkest pixels would definitely be faster.
@@ryans3979 nice! Thanks for the input! That makes sense!
@ryans3979 could he not make it identify the position and type in one frame, wait like 3 frames, and then identify the same object's position for an average velocity, then swipe afterwards based on the delay?
Won't that be pretty hard, as the program needs to calculate a path to slice, a path which does not include any bombs in between?
how would you detect moving pixels
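One common way is frame differencing; a rough sketch, assuming OpenCV and two consecutive grayscale frames:

import cv2

def moving_pixels(prev_gray, curr_gray, diff_thresh=25):
    # Pixels whose brightness changed a lot between frames are treated as "moving".
    diff = cv2.absdiff(curr_gray, prev_gray)
    return diff > diff_thresh  # boolean mask, True where something moved

# usage:
# prev = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
# curr = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
# mask = moving_pixels(prev, curr)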
The rhythm of the slicing syncs surprisingly well with the music after 8:25
The color recognition is a clever idea. The fact that it is fast means it can be used in combination with other methods to increase accuracy
I haven't heard about this game in a long time. When I saw this, it reminded me of how I got around 1390 while playing this at a daycare
That's impressive!
@@CodeNoodles thanks
8:37 8:59
*RUUUUULES OF NAAAATURE*
yeah it's swinging at the speed of raiden crackhead mode
Oh wow you've been at this one for quite some time, excited to see how it turned out!
Hi
@@trystankitty5393 hi?
@@ODISeth hi?
@@trystankitty5393 you replied to me lol
@@ODISeth nope
You are writing functions without space between them 😭
May I ask what's happening in your profile picture with the mini sentry?
@@Trevorus1 mini sentry meets big thing
@@aaronking2020 why
@@Trevorus1 It takes someone with experience to recognize that pfp's image and what happened in it... I don't know if I should be horrified or impressed.
is bro's argument invalidated by deranged pfp?
Aw yeah it’s noodling time
Loved the part where he said "noodling time" and noodled all over the place
I also loved the part where he said "noodling time" and noodled all over the place
why don't you just check whether it's a bomb or not instead? as far as I know the bomb is the most distinct of them all, and by only checking for the bomb you could optimize this code a lot; then you only pass the mouse over things that are in motion, except for the recognized bomb(s). maybe you could even try using grayscale images or something, idk
IF ONLY
Dude, do you not know how hard that is
@@advance64bro do you?
@@SuadoCowboy yes
@@advance64bro and don't you think it's worse checking each type of fruit instead of just checking if it's a bomb or not?
I feel like this system could be expanded more if used right. For example, if the system detects a high red pixel count in a region, have it take a screenshot and use image recognition to see if it can detect a red arc at around 75% completion, OR'd with a check for an X in the image. If either result comes true, it determines the region to be dangerous for the next x amount of frames. That should help improve the system a bit, and it can even use fewer resources if it knows the danger region before doing the bomb check.
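A rough sketch of the danger-region bookkeeping, assuming the screen is split into a grid and the red-arc / X detection is done elsewhere and passed in as a boolean array (the frame count and red threshold are placeholders):

import numpy as np

DANGER_FRAMES = 30  # how long a region stays flagged (placeholder value)

def update_danger(danger_timers, red_counts, warning_detected, red_thresh=500):
    # danger_timers:    2D int array, remaining "dangerous" frames per grid region
    # red_counts:       2D int array, red pixel count per region this frame
    # warning_detected: 2D bool array, result of the arc-at-75% OR "X" image check per region
    suspicious = (red_counts > red_thresh) & warning_detected
    danger_timers[suspicious] = DANGER_FRAMES
    np.maximum(danger_timers - 1, 0, out=danger_timers)  # count every region down by one frame
    return danger_timers > 0  # True = don't slice in this region right now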
would've been useful 14 years ago
I feel honoured to get this video on my feed. This video made me excited for sure 👍
Thanks, I really appreciate it!
Hi, I have almost a thousand subscribers @@CodeNoodles
You can try using something like a yolo network for the fruit recognition. Most recent models run very fast with a medium capacity gpu and are pretty accurate
Exactly what i thought
I'm trying to do this right now. My problem is the slicing function. I can't seem to make the mouse fast enough. I'd like to see how he did that
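If it helps, pyautogui's built-in pause after every call is often the bottleneck; a rough sketch of a faster swipe (not necessarily how the video does it, and the coordinates are placeholders):

import pyautogui

pyautogui.PAUSE = 0        # remove the default ~0.1 s sleep after every pyautogui call
pyautogui.FAILSAFE = True  # keep the screen-corner abort as a safety net

def quick_swipe(x1, y1, x2, y2):
    # A straight drag with no easing and no per-call delay.
    pyautogui.moveTo(x1, y1, duration=0)
    pyautogui.mouseDown()
    pyautogui.moveTo(x2, y2, duration=0)
    pyautogui.mouseUp()

# usage: quick_swipe(300, 400, 700, 350)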
2:50 nice explanation of kernels in an image recognition model
“A small delay that has been adddded” too good 😂 4:57
This is brilliant thanks for sharing your thought process and code. absolutely loved this
Thanks, it really means a lot!
1:44 me: just use machine learning
I thought you would use image recognition with a neural net for object detection, but maybe that could be v2 if you ever want to do that again.
Exactly
There is a lot of room for improvement. Assuming it could identify objects fast enough, the most reliable approach would be to use a machine learning algorithm to categorise the objects on screen.
Then we would want to implement an algorithm that tries to score combos.
And also, add some delays to make it less jittery, plus functionality to avoid trajectories that overlap with a bomb.
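For the last point, a rough sketch of checking whether a planned slice passes too close to a bomb (plain point-to-segment distance; the 60 px radius is a placeholder):

import math

def slice_hits_bomb(p1, p2, bombs, radius=60):
    # True if the straight slice from p1 to p2 passes within `radius` px of any bomb centre.
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    length_sq = dx * dx + dy * dy
    for bx, by in bombs:
        if length_sq == 0:
            t = 0.0
        else:
            # Project the bomb centre onto the segment and clamp to its endpoints.
            t = max(0.0, min(1.0, ((bx - x1) * dx + (by - y1) * dy) / length_sq))
        cx, cy = x1 + t * dx, y1 + t * dy
        if math.hypot(bx - cx, by - cy) < radius:
            return True
    return False

# usage: if not slice_hits_bomb((300, 400), (700, 350), bomb_centres): do the swipe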
I don't know what is going on but I love it
That Temple of Nadia soundtrack hits hard ❤
It's magical to see fresh Halfbrick games content these days
I gotta admit that segue was clean
Now I wonder what it’d be like to train an AI to play this game
The music 😮 it took me a minute to understand that something was sooo familiar here!
Bro being a student working on segmenting quantum dots with dog noise this is so relatable 😭
Very good. Next project: build a robot arm with a samurai sword that chops up fruit you throw at it (but not bombs)
You can lower the screenshot resolution so the network can take the whole screen as input and train it. If the game is laggy while training, just make the game run slower
If I had this back in 2011 I would have been so popular in school lol
It made me remember a scene from The Dark Forest where the droplet is destroying the space fleet. It's described as God's scribbling, as the droplet rams objects while taking sharp turns that make zero sense according to our aerodynamics
Now that's an actual fruit ninja, my personal best is 694 though...
in the arcade mode.
I think it would be fun to optimize it; here is an idea that might improve it:
Wait before slicing and calculate where each object is, not just frame by frame, so it takes the speed and direction into account and where the object will be in the next few frames. I think it would give better accuracy, plus the ability to plan combos with smooth slices instead of just spamming
It does what is efficient, don’t be that harsh just because you’re dissatisfied
@@advance64bro I am not being harsh, I really like it and the video. I just want to add suggestions so maybe he'll do another video and improve it, or to inspire someone else who is interested in this; for example, I started building my own version of this bot
@@I_am_Itay doing something like that would require the fruit-identifying program to be more complex, which you already saw was really hard to do
@@advance64bro Of course it would be a bit more complex, but it's well within the capabilities of CodeNoodles; he has done much harder things. The preprocessing he did would really help; it would be more work on how the bot plays than on the identification
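A rough sketch of that kind of look-ahead, assuming you have the same object's position from two frames a known number of frames apart (the gravity term is optional and its value is a guess):

def predict_position(pos_then, pos_now, frames_between, frames_ahead, gravity=0.0):
    # Estimate per-frame velocity from two sightings and extrapolate forward.
    (x0, y0), (x1, y1) = pos_then, pos_now
    vx = (x1 - x0) / frames_between
    vy = (y1 - y0) / frames_between
    # Fruit follows an arc, so an optional gravity term (pixels per frame squared) helps.
    x = x1 + vx * frames_ahead
    y = y1 + vy * frames_ahead + 0.5 * gravity * frames_ahead ** 2
    return x, y

# usage: aim the swipe at predict_position((400, 520), (412, 490), frames_between=3, frames_ahead=3)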
Never thought I would see TAS for Fruit Ninja
0:45 I remember this from CodeBullet (an angry Australian man who also programs robots to play games for him)
“Destroy” blud only got a 342 😭😭😭
How is your IDE not SCREAMING at you…? PyCharm would be kicking in my door at the first lack of whitespace 😅
The chopping is unreal 🤣🤣
Great job, tried it on my PC! Seems that the algorithm likes you more than me, but impressive nonetheless!
The most real part in this video. 5:20 the naming
Yay! Excited to see you post 🥰 only place I'm interested in code 😂
It's super cool to watch.
You should look into classification theory. What you are using is basically a minimum-Euclidean-distance (nearest-centroid) classifier.
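For anyone curious, that kind of classifier is tiny to write out; a sketch assuming one average RGB colour per class (the values here are made up, not the ones from the video):

import math

# Hypothetical reference colours: one average RGB per class.
CENTROIDS = {
    "watermelon": (60, 140, 70),
    "strawberry": (200, 40, 60),
    "banana": (230, 210, 80),
    "bomb": (20, 20, 25),
}

def classify(rgb):
    # Pick the class whose average colour is closest in Euclidean distance.
    return min(CENTROIDS, key=lambda name: math.dist(rgb, CENTROIDS[name]))

# usage: classify((210, 45, 65)) -> "strawberry"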
Has been a long while since I played this^^
your naming convention in Python should be illegal
You could use object detection models like YOLOv8 or YOLOv5 for fast detection.
Bro, I got a Brilliant ad when you started your sponsored message
Using image recognition to destroy life
Going on a treasure hunt to find all of your comments and like them rn
Oh wait there’s only two
@@CornbreadFish yes
Why not have the AI focus on the background color and the bomb color, and when that color is over part of the screen it doesn't attack there? Sure, you couldn't get combos, but I think theoretically it would work more easily, faster, and longer, since instead of picking out a fruit it is just picking out a difference from what is normally there
your fruit killing skills are remarkable
we need code bullet to do this
I'm thinking the color sampling code could have been massively simplified by just scaling the image down, which is a highly optimized operation in most image processing libraries
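A sketch of that shortcut, assuming OpenCV: area-interpolation downscaling averages each block of pixels for you, which is more or less a grid of per-region average colours in one call:

import cv2

def grid_of_average_colors(frame, grid_w=32, grid_h=18):
    # INTER_AREA resizing averages the pixels that fall inside each output cell,
    # so the result is one average colour per grid region in a single library call.
    return cv2.resize(frame, (grid_w, grid_h), interpolation=cv2.INTER_AREA)

# usage: small = grid_of_average_colors(frame); small[row, col] is the average BGR of that cell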
You should tag the dude that created Fruit Ninja, he's on YouTube!
The bot even got a 5 fruit combo😮
Final program wasn't a fruit ninja, it was a fruit samurai
Nice program. Have you considered using an indexed-color version of the screenshots? It might improve the accuracy. Also maybe look into the YOLO model; it's open source and pretty good at image recognition.
4:59 "have a small delay that is ad🥁🥁🥁🥁"
Would it be easier and more accurate to fine tune an image segmentation model for this task?
You could also just hold your finger in one spot and if a fruit passes by it it splits
Fruit ninja my beloved
Finally, the first Fruit Ninja TAHS (Tool Assisted High Score)
Make a bot to destroy Candy Crush and win every level without doing any single micro-transaction.
One of the best games.
7:55 watch the left side of the screen and imagine a very angry Beatrix Kiddo
Video starts at 7:55
if this is the version of Fruit Ninja I think it is, try having it play on its own for a while and then get the Cloud Kicker blade. if you get enough duplicates of that weapon you can upgrade it so that all fruits have a guaranteed chance to bounce off the bottom of the screen one time for extra time. that would prevent it from missing fruits like it did with that watermelon
if you have a good enough PC you could have used YOLO for object recognition and gotten better accuracy more easily
yes, small yolo models can even run on crappy pcs pretty well
I’m not too savvy on machine learning, but would it be possible to extract the UV maps from the fruit and use that to differentiate them from the bombs? The UV map has their whole texture so would it be possible to use that to understand and recognize a fruit at all angles?
Indeed, there is no need for ML. Pixel-perfect analysis is enough
Now, do it with the snake eating fruit game 😈
For screenshots I tend to use the mss library, it's much faster than pyautogui's screenshot method. Maybe you could do a v2 with machine learning; YOLOv5 or really any version of it would probably perform much more accurately, and it's also extremely fast since that stuff runs on the GPU. Nice video, it's cool to see games being automated.
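A minimal mss capture snippet, in case it's useful (the capture region is a placeholder):

import numpy as np
import mss

region = {"top": 100, "left": 100, "width": 1280, "height": 720}  # placeholder game window

with mss.mss() as sct:
    shot = sct.grab(region)            # raw screenshot, much faster than pyautogui.screenshot()
    frame = np.array(shot)[:, :, :3]   # BGRA -> BGR NumPy array, ready for OpenCV
# frame can now be fed straight into whatever detection code you already have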
bro update your profile picture that's an extremely old cosmilite sprite
You should do an image recognition thing but for suika game
Maybe implementing sound extraction might help?
Bro is going to be a genius one day.
Geniuses are born not developed. Nature bestows geniuses with the ability to comprehend what others cannot.
@@ReapersRed Buddy if you think that then just read the life of Thomas Edison. That will give you the answer
@@ranarehanqaisar2266 Thomas Edison was definitely not a genius. Also, it signals low IQ so hard when you say stuff like "read this and it will give you the answer." Why not just tell me the answer instead of wasting my time trying to get me to read? Is it because you feign intellectualism through your supposed reading? Just think about this, if you have the mental faculties to understand what I mean
Hello fellow code bullet fan 😊
Wait, is that bgm from the old Pokemon game?? Feels like remnants from a core memory
You can change the background, your blade color, and the splashes in the dojo in Settings, as well as try to instantly avoid any area containing black pixels, or I dunno, just
(if any of this place in a screenshot has black
bad
No black
Good, slice)
Bro, just use deep learning. If you want it to be fast you can use a MobileNet backbone along with an RPN detection head... Faster R-CNN or something like that. Like a two-stage network.
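For reference, recent torchvision versions ship a MobileNet-backed Faster R-CNN; a sketch of running it on one frame (the COCO-pretrained weights would still need fine-tuning on labelled fruit/bomb images, which isn't shown here):

import torch
from torchvision.models.detection import fasterrcnn_mobilenet_v3_large_fpn

# Pretrained two-stage detector: MobileNetV3 backbone + region proposal network + box head.
model = fasterrcnn_mobilenet_v3_large_fpn(weights="DEFAULT")
model.eval()

def detect(frame_rgb):
    # frame_rgb: HxWx3 uint8 NumPy array in RGB order
    tensor = torch.from_numpy(frame_rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([tensor])[0]
    return out["boxes"], out["labels"], out["scores"]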
Can you make a video about using image recognition to get the best possible scores on the human benchmark website?
That's a good idea!
I AM THE STORM THAT IS APPROAAAAAACHIIIIIING
How raiden plays fruit ninja:
Best thing in the video was this at 8:37 :) (joking great job u did👏)
8:27 funny green splatter
Nice, super cool bro
Thanks!
using A* pathfinding could be a good choice for finding the right path for the mouse
Combos would be so cool
fruit ninja says "G太ME OVER"
Oh hey, I was wondering why you didn't just use color averages at the beginning, looks like you figured it out though! :3
Next time train a YOLO model on a few hundred labeled images; it's a LOT easier and will run much faster. Expect 30-120+ frames per second processed, depending on your GPU.
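A sketch with the ultralytics package, assuming a hypothetical fruit.yaml dataset config pointing at the labelled screenshots:

from ultralytics import YOLO

# Start from a small pretrained checkpoint and fine-tune on the labelled screenshots.
model = YOLO("yolov8n.pt")
model.train(data="fruit.yaml", epochs=50, imgsz=640)  # fruit.yaml is a placeholder dataset config

# Inference on a single frame (a NumPy array or an image path both work):
results = model("screenshot.png")
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)  # class id, confidence, pixel coordinates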
On the topic of older games, imagine coding for jetpack joyride...
The edges of the bomb are always red; you could use that as an identifier
You can try to use computer vision algorithms for object detection to detect and track objects. Look into detectors like SIFT for example. Warning: it’s a DEEP rabbit hole.
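A small SIFT example, assuming OpenCV 4.4+ (which ships SIFT in the main package) and a hypothetical reference image of one fruit:

import cv2

sift = cv2.SIFT_create()
bf = cv2.BFMatcher()

# Keypoints/descriptors for a reference fruit image, computed once up front.
ref = cv2.imread("watermelon_ref.png", cv2.IMREAD_GRAYSCALE)  # hypothetical reference image
ref_kp, ref_des = sift.detectAndCompute(ref, None)

def looks_like_reference(frame_gray, min_good=12):
    # Match frame descriptors against the reference, keeping only clear winners (Lowe's ratio test).
    kp, des = sift.detectAndCompute(frame_gray, None)
    if des is None:
        return False
    matches = bf.knnMatch(ref_des, des, k=2)
    good = [p for p in matches if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good) >= min_good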
Would taking an input based on 2 successive screenshots (instead of 1) make the program too slow? Good job though, nice project