That’s an amazing video! Few people in the world provide such great content for free. 1 hour of pure learning.
I’m soooo glad people can see the effort I put into this video.
Roboflow is not free
i lost it when he said "it's called football" 😂 THAT'S RIGHT BROOOO!
Thank you so much. With your guidance and help I got into object detection and Python in just 2 months to solve problems in football infrastructure. Thank you.
so awesome to hear that those tutorials really help people!
@Roboflow They certainly do, and I've been working on something during weekends for the last few weeks. I'm hoping to finish it soon and will share the results here.
your initial tutorial was my intro to computer vision and helped me get pretty far. this one has lots of unblockers. I was doing homography before, but I didn't know about keypoint detection. another unblocker is the specifics of how you set up classes for training. should help me a lot!
Awesome to hear that! You came back after almost 2 years for part 2.
This is absolutely GOLD! Haven't watched it yet but it's on my list. Have been following your updates on TW!
haha if you read my X posts you probably already know everything haha
Also, stitching across multiple cameras is a very interesting thought.
Your video provides valuable insights into computer vision, and we truly appreciate the depth of information you share. It helps many of us learn and grow. Thank you for your dedication and efforts! 😊
My pleasure!
thanks a lot. I am currently applying this to field hockey. I have completed player object detection and am working on pitch keypoint detection.
wooooow! I’d love to see some results!
@@Roboflow haha, I'm currently focused on building up the dataset, and once I finish with the keypoint detection, I'll share the notebook. The biggest issue with field hockey is the rolling substitutions. As mentioned in the Q&A video, when players disappear from the screen, they are sometimes recognized as different players. When a substitution happens, I would have to manually switch the player, but I still don't know exactly how to handle that.
the moment he said it is called football --> insta sub
That was a risky move. I bet it can work both ways haha
You could probably now train player movement to deduce a realistic model for video games! Probably moving away from Python I guess, but that would be extraordinary, with tweakable speed and intelligence, even if resolution is relatively low. Suggestion is better than perfection. Wonderful stuff!
Very true 🤣
Nothing is free except your tutorials
Thank you for your efforts 🌹🌹
I see you decided not to skip model training section ;)
Hi, I really enjoy your videos! You mentioned at one point that detection takes approximately 1 s per frame, so we'd need to optimize the model to reach 30 fps for real-time detection. To your knowledge, has anyone explored detection using previous frames' detections as hints? We know people and objects typically don't teleport, so knowing roughly where an object was and which direction it was last moving could isolate detection to a much smaller search space, with the "larger" model run periodically to reset.
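A minimal sketch of the commenter's idea, assuming an Ultralytics YOLO model and the supervision library; the weights, margin value, and helper name are illustrative, not part of the tutorial code:

# Run the detector only on a region of interest (ROI) around the previous
# frame's boxes; fall back to a full-frame pass when there is nothing to track.
import numpy as np
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8s.pt")  # placeholder weights

def detect_with_roi(frame: np.ndarray, prev: sv.Detections, margin: int = 80) -> sv.Detections:
    if len(prev) == 0:
        return sv.Detections.from_ultralytics(model(frame, verbose=False)[0])
    h, w = frame.shape[:2]
    x1 = max(int(prev.xyxy[:, 0].min()) - margin, 0)
    y1 = max(int(prev.xyxy[:, 1].min()) - margin, 0)
    x2 = min(int(prev.xyxy[:, 2].max()) + margin, w)
    y2 = min(int(prev.xyxy[:, 3].max()) + margin, h)
    crop = frame[y1:y2, x1:x2]
    detections = sv.Detections.from_ultralytics(model(crop, verbose=False)[0])
    # shift boxes back from crop coordinates to full-frame coordinates
    detections.xyxy += np.array([x1, y1, x1, y1], dtype=detections.xyxy.dtype)
    return detections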
From Indonesia I wanna say thank you, the video is so clear and easy to understand, especially for an enthusiast like me...
My pleasure! Awesome to read comments like this
Dude this is sick! great tutorial!
Amazing video, thanks for sharing! I did my postdoc research in computer vision, using similar techniques, including even the perspective transforms, to ultimately automate error calculations for solar mirrors. I feel your pain on the keypoint labeling, that takes forever :'). Even though that was only 4 years ago, it's amazing how much progress has been made and how much easier it is to manage these models, datasets, and transformations. My postdoc work would have been so much easier in 2024 than it was in 2020, thanks to these developments and tutorials like yours 😁
Thanks a lot Ryan! 4 years in the AI space feels like forever. I can't even imagine what will happen over the next 4 years. It's also super validating to hear you used similar strategies, and super interesting that you've done this in a completely different field. We are organizing a community session next week. Is there any chance I could show some visualizations from your work on solar panels?
@@Roboflow Sure thing! Please send me an email and I can share images and more info
Thank you. I am researching for my final project at my university. My topic is the same as the one you covered. I hope we can discuss it more. Thank you so much, bro.
you can always find me on LinkedIn and X
I couldn't wait to watch this video! Thank you for sharing!
Let me know if you like it and if you have some questions or cool ideas
Thank you so much for this video. Im currently working on getting it to work for my volleyball matches.
Really? I’d love to see your results!
Thank you for encouraging me to pursue a career in this field with your videos :)
It is a really cool field to have a career in!
Thank you for this tutorial. I plan to run through it soon!
Let me know how you liked it
This will be one of the best projects I could've ever wished for❤️
Thanks a lot Sir
I love sports + computer vision combo!
You are blessed! Thank you for this great effort.
Amazing video, thanks bro. You are my hero in CV
thanks a lot!
Great!
Thanks Peter this video is very useful!
Great to hear that!
this is hilarious, huge project!
thanks a lot! so cool to see people are still excited about Football AI even several weeks after release
It is very cool! One question. I want the AI to count the passes and receptions of the ball between the lines by itself. I don't understand how to do this. How to write a condition
I'm afraid it won't be a simple condition. To do it well, you'd need to calculate ball movement direction and speed and then use those to detect the ball being passed.
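A minimal, hedged sketch of that idea, assuming ball positions already projected onto the pitch plane; the speed threshold is an illustrative guess, not a value from the video:

import numpy as np

def ball_velocities(positions: np.ndarray, fps: float) -> np.ndarray:
    """positions: (N, 2) ball coordinates on the pitch plane (e.g. in cm)."""
    return np.diff(positions, axis=0) * fps  # (N-1, 2) velocity vectors

def candidate_passes(positions: np.ndarray, fps: float, speed_threshold: float = 800.0):
    """Return (start_index, end_index) segments where the ball moves fast."""
    speed = np.linalg.norm(ball_velocities(positions, fps), axis=1)
    fast = speed > speed_threshold  # cm/s, illustrative value
    segments, start = [], None
    for i, flag in enumerate(fast):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(fast)))
    return segments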
You are a Hero my friend!
wow. Even though I have not that much knowledge of ML, the video was great
I was trying really hard to make it easy to follow even for people without an ML background.
Really great content, congrats! May I ask why did you choose to detect the key points every frame instead of using optical flow, knowing there are many features in the video (incl the pitch) which are static?
Additionally, how would you approach tracking of the ball in 3D if you wanted to have high accuracy for the ball position?
Cool questions. Correct me if I'm wrong, but optical flow won't take me all the way. It only shows that "something moved there" but doesn't tell me what it is, so it can't tell me where my reference points are, because it doesn't know which points are interesting to me. Second of all, the pitch isn't really static. You see the pitch all the time, but only some part of it, and you need a solution that tells you which part of the pitch it is.
I haven't really thought deeply about which model I'd use to support 3D ball detection. But I was recently playing with 3D keypoint detection models that gave me 3D coordinates of human pose. I'd start there, but I'm open to other ideas.
Thanks for the swift answer. What I thought with optical flow is that, given an initial state (1st frame), we can use anchor points known to be static in the world to estimate how the camera is moving. With that you can calculate how points in one frame relate to the next. But you are right, this approach is very fragile because it relies on having a set of anchor points that are always visible in the footage.
For the 3D ball trajectory, because we know the physics of the ball movement, I'd probably start by trying a Kalman filter. One model for when the ball is in the air and another for when it's on the ground. Use ball size in pixels as an estimate of how far from the camera it is. This is evidently very noisy, but hopefully the dynamics of the Kalman filter would smooth the trajectory.
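A hedged sketch of the Kalman-filter suggestion above: a plain constant-velocity filter in NumPy over 2D pitch coordinates. The air/ground switching and the ball-size depth estimate the comment describes would be extensions; all noise values below are illustrative guesses.

import numpy as np

class ConstantVelocityKF:
    def __init__(self, dt: float, process_noise: float = 1.0, measurement_noise: float = 10.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)   # state transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # we only observe position
        self.Q = np.eye(4) * process_noise
        self.R = np.eye(2) * measurement_noise
        self.x = np.zeros(4)         # state: [x, y, vx, vy]
        self.P = np.eye(4) * 1000.0  # large initial uncertainty

    def predict(self) -> np.ndarray:
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z: np.ndarray) -> None:
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P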
Amazing, thank you for recording this.
my pleasure!
Crushing it Piotr!!!!!
Thanks a looot!
If tracking is lost in a frame for a player, then the next track_id that’s assigned to the same player is different, which makes it impossible to track the same player for the whole game. Is it possible in any way to fix this? Or even assign the jersey number as an alternative id?
Great video btw
Jersey number assignment is a lot more complex task than it seems, so I would treat it as something extra and not a potential solution.
At the end of the video I briefly talk about next steps in tracking - using ReID models or more advanced trackers like MASA. In short, we need trackers that take into account not only object location but also its appearance.
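A hedged sketch of appearance-based re-association, not the video's code: when a brand-new tracker ID appears, compare its crop embedding with embeddings of recently lost tracks and reuse the old ID if similarity is high. The embedding extractor (SigLIP, a ReID model, etc.) is assumed to exist elsewhere.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def reassign_id(new_embedding: np.ndarray,
                lost_tracks: dict[int, np.ndarray],
                threshold: float = 0.8) -> int | None:
    """Return the ID of the best-matching lost track, or None if no match."""
    best_id, best_sim = None, threshold
    for track_id, embedding in lost_tracks.items():
        sim = cosine_similarity(new_embedding, embedding)
        if sim > best_sim:
            best_id, best_sim = track_id, sim
    return best_id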
Amazing video! I was wondering how many features you could add - in other words, how much more data can be extracted. Let's say you have competitions where nothing besides goals/assists is tracked, but I do have access to full matches. Would it be possible to extract data for each individual player using this particular software - shots on target, passes completed, interceptions, etc.?
Short answer is yes, but that project would need a lot more work to get there. The most problematic part is actually player tracking and avoiding tracker ID swaps, which can happen especially if you want to process the whole video. Do you want to run it in real time or process games offline?
@@Roboflow I believe offline would work best, since it isn't always easy to gain access to full live matches of these types of competitions. Running it all automatically would be a lot more work as well. If you have any ideas on how to realize this project, I'd love to discuss it with you!
When training the initial model for player detection, how does the model distinguish between a player and a referee?
- If the model's distinction is based on the color of clothing, will it still work if the clothing colors are changed?
- For example, if referees typically wear yellow shirts, what happens if players in another match wear a similar color?
- In some training annotations, I noticed that a person (possibly a coach) standing outside the field wearing black clothing was not labeled as a player and, therefore, is not detected (as expected). I'm curious how the model understands these differences.
VERY GOOD QUESTION! In general it's hard to tell exactly how it works, but during data collection I tried really hard to include refs in different outfits - red, black and yellow. I think the model just learns that refs wear outfits visually different from most people on the field.
As for refs on the sidelines, they have flags, and I think that's how the model knows that people on the sidelines without flags are not refs.
Good you cleared it out, never understood "soccer"! Lol.
Haha! I decided that important things need to be explained right at the beginning.
Piłka nożna ("football" in Polish) 😊
No denying that haha
Nice video. Really enjoyed it.
I don’t quite understand why we use SigLIP for the team ID vs using an additional class in the object detection model (“player team 1” vs “player team 2”). Is there some complicating factor that makes that not work well?
This is something that I clearly did not explain well, because you are not the first one to ask this question. The problem is that different teams play every time. In one game red plays against blue; in another, yellow against white. There is no way to annotate data in a way general enough to apply it to any game in the future. The solution I presented is general, and it does not require annotating all the data with extra information.
Ah that makes sense.
Is soccer similar to other sports where the "away" team usually wears white/light jerseys and the home team wears dark/colored jerseys? I wonder if home/away classes would be enough to generalize on.
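A minimal sketch of the embedding-plus-clustering idea described above, assuming crop embeddings from any visual encoder; the actual TeamClassifier implementation (built around SigLIP, as mentioned in the video) lives in github.com/roboflow/sports.

import numpy as np
from sklearn.cluster import KMeans

def fit_team_clusters(crop_embeddings: np.ndarray) -> KMeans:
    """crop_embeddings: (N, D) feature vectors of player crops from one game."""
    return KMeans(n_clusters=2, n_init=10, random_state=0).fit(crop_embeddings)

def predict_teams(model: KMeans, crop_embeddings: np.ndarray) -> np.ndarray:
    """Return 0/1 team indices for each crop; no team labels are ever needed."""
    return model.predict(crop_embeddings)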
Outstanding video. Oh....and Go Barcelona!!! :-)
thank you very very much!
Nice, thanks for sharing, but I have a question: do you use one model for each class or a single model to detect all classes during the game?
I use one model to detect ball, player, goalkeeper and referee.
Great tutorial! Thanks very much.
How do you save the pitch video at 01:19:39?
I tried to save it like this:
with video_sink:
    for frame in tqdm(frame_generator, total=video_info.total_frames):
        ....
        video_sink.write_frame(pitch)
inside the loop, but the result is nothing!
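One hedged way to debug this: the most common cause of an empty output is a VideoSink built from the source video's VideoInfo while the written frames have the pitch resolution, so the writer silently drops them. A sketch assuming the supervision API; render_pitch_frame is a hypothetical placeholder for whatever produces the pitch image in the notebook.

import supervision as sv
from tqdm import tqdm

SOURCE_VIDEO_PATH = "input.mp4"         # placeholder paths
TARGET_VIDEO_PATH = "pitch_output.mp4"

source_info = sv.VideoInfo.from_video_path(SOURCE_VIDEO_PATH)
frame_generator = sv.get_video_frames_generator(SOURCE_VIDEO_PATH)

# render the first pitch frame to learn the output resolution
first_frame = next(frame_generator)
first_pitch = render_pitch_frame(first_frame)  # hypothetical helper
h, w = first_pitch.shape[:2]
pitch_info = sv.VideoInfo(width=w, height=h, fps=source_info.fps,
                          total_frames=source_info.total_frames)

with sv.VideoSink(TARGET_VIDEO_PATH, pitch_info) as sink:
    sink.write_frame(first_pitch)
    for frame in tqdm(frame_generator, total=source_info.total_frames - 1):
        sink.write_frame(render_pitch_frame(frame))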
oh wow, what a nice find. can you tell me what techniques are required to make this real-time, assuming a 30 fps input and the same output? do i need to implement this in DeepStream?
Stream processing is not the bottleneck here. You'd need to train smaller architectures - not YOLOv8x but YOLOv8s or m - and, most importantly, make the embedding calculation faster.
@@Roboflow thank you, I will try that. My goal is to do live video feed processing. The concept is the same - I need to capture keypoints, map them to a 2D map and mark trajectories - but for a different project.
Great video!
Great video!!
I'm trying to put this project together based on your video, but I'm stuck because I haven't figured out exactly what TeamClassifier's predict() method does. I searched in the Colab code but was unsuccessful. Could you describe the exact code for that method?
All the code for TeamClassifier is here: github.com/roboflow/sports
@@Roboflow Thank you very much!!!!
@@knobico1337 pleasure!
Amazing!!
VERY COOL SIRS and MADMANS
I love the enthusiasm! 🔥
It would be great to have this in real-time
There are 100% easy optimizations that can bring us closer to… 15 FPS. Going faster than this can be challenging :)
output video does not show up inside /content or any other directory in google colab.
same for me
This wouldn't work very well without a high overhead viewpoint, right? The visual noise of the crowd and sideline players would stop the CV from working properly, I assume?
Yup! We cover all important considerations at the end of the video. If you want to use ground-level cameras this problem becomes borderline unsolvable.
Hi. Can you please tell me in which folder you're saving the output result video that you've shown in the tutorial?
If you run it in Colab, I save it in /content, which is the default output directory.
@@Roboflow So, all the output videos that you've shown in the tutorial will be saved in '/content' folder, right?
Exactly! FOOTBALL! GOOD JOB!
Sir, I tested the model on another video, but the problem is it can't detect the main referee as a referee - it detects him as a player - and it also sometimes detects the penalty spot as the ball (when tracking the ball). What is the solution, please?
I mention problems like this at the end of the video. It all depends on the specific football footage you want to use, but usually the solution is to expand the dataset with new images and then retrain the model.
thank you very much for sharing this knowledge !
My pleasure!
Basically, should I create a new project for every team or new video I would like to analyze?
Hi! No. Everything can sit in a single project. Why do you think you need to split data?
You have created one custom model; will we be able to use this model for other match videos? For tracking?
The model is open, so you can use it. As for "will it work" - I discuss that towards the end of the video.
curious to know how the models would perform when the teams wear kits with similar colours?
This rarely happens, as there are rules to prevent it. But we can test it if you have a video we could use.
Can you tell me how the YOLOv8 architecture works? I would like to ask how to train the model to recognize circles and output the x/y location and radius, instead of the width and height of a bounding box.
we have dedicated YOLOv8 tutorials on this channel; did you have a chance to watch them?
@@Roboflow Yes - and what about the second question? Who can help?
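For the second question, a hedged sketch: keep the detector's boxes and derive a center and radius in post-processing (for tighter fits, an instance-segmentation model plus cv2.minEnclosingCircle is another option).

import numpy as np

def boxes_to_circles(xyxy: np.ndarray) -> np.ndarray:
    """xyxy: (N, 4) boxes; returns (N, 3) rows of (cx, cy, radius)."""
    cx = (xyxy[:, 0] + xyxy[:, 2]) / 2
    cy = (xyxy[:, 1] + xyxy[:, 3]) / 2
    # average of half-width and half-height as the radius estimate
    radius = (xyxy[:, 2] - xyxy[:, 0] + xyxy[:, 3] - xyxy[:, 1]) / 4
    return np.stack([cx, cy, radius], axis=1)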
This is gold! Can it be done identically using a Jupyter notebook?
Crazy tutorial, what are the specs of your pc?
better local or cloud setup?
Thanks alot!! Greatly appreciate it
my pleasure!
That's an amazing video, thank you very much! Currently I am using macOS and wanted to ask if I have to do all the steps in your tutorial, or if there is a solution to just upload a video that I would like to analyse. Is that possible?
I encounter an error while trying to continue deploying the trained model with the current version of ultralytics (8.2.103) instead of the required dependency (8.0.196). Suggestions?
Hi! It’s just a warning. No need to worry about it. Just confirm.
@@Roboflow it leads to an error, not a warning, for me: "404 Client Error: Not Found for url: ..."
@@nafiserfan3576 Same for me
Instead of the embeddings I have used image classification for a different but related task.
Let me explain why I went with embeddings and not classification: the problem with football is that every game is different. In one game you have white vs red; in another, blue vs yellow. So I'd need separate classes for all of them. Embeddings are a general solution - it doesn't matter who plays, and it does not require annotation.
Bro, do an accident detection and alert system model using YOLOv8
How can I make it so I don't see just one frame of the videos, but have them all as in the tutorial?
I would really like if you can do something similar but with tennis!!
Tennis is 100% on my TODO list along with basketball. I just need to work on other projects in the meantime:/
Tell them ohhhh!!! It's called Football!!! FUUTBALL... The sport where you use your foot to play the ball. Not your hands.
EXACTLY!
I wonder how many people are upset that in Italy it's called calcio and not football. No one cares.
Amen brother!
google soccer and be surprised
It's soccer, which is a variant of the broader term football
First of all, a huge thank you! But I cannot find the video with all the analytics - it only gives me a frame like in the video, but I want the full video with analytics like you made. How can I get it? I cannot find it in the contents folder; it has only the sample data and input data.
Hey thanks for the Tutorial!
I tried to follow your steps, but I've got an error when uploading my trained model to Roboflow at 11:30. The message is: An error occured when getting the model upload URL: 404 Client Error: Not Found for url: ...
Any ideas?
That's probably because you're trying to attach your model to my dataset. If you would like to save your model, you would first need to clone my dataset and then attach the model to it.
Can you detect someone taking a shot?
Give the code for the video generation of the pitch visualisation at 1:19:38 and the Voronoi diagram visualisation at 1:22:05
+++
Can you calculate distance covered for each player?
Yup. At this point you have everything you need to calculate distance.
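A minimal sketch of the distance calculation, assuming player positions already projected onto the pitch plane in centimeters (e.g. via the keypoint-based homography from the video); the data layout here is illustrative.

import numpy as np
from collections import defaultdict

def accumulate_distances(frames: list[dict[int, np.ndarray]]) -> dict[int, float]:
    """frames: per-frame {tracker_id: (x, y) pitch position}; returns meters per player."""
    last_position: dict[int, np.ndarray] = {}
    total_cm: dict[int, float] = defaultdict(float)
    for frame in frames:
        for tracker_id, position in frame.items():
            position = np.asarray(position, dtype=float)
            if tracker_id in last_position:
                total_cm[tracker_id] += float(np.linalg.norm(position - last_position[tracker_id]))
            last_position[tracker_id] = position
    return {tid: cm / 100.0 for tid, cm in total_cm.items()}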
@@Roboflow perfect.
Would this work for futsal?
This could be a great app.
Once finished, what would you need to do - upload the video from the game, for example? Would that be enough?
Hello Robo. I have some issues: TypeError: unhashable type: 'numpy.ndarray'. What do I do?
Looks like you're using frames/crops as keys in a Python dictionary. Is it happening in our code?
Great job on this video. How can you turn this into events data?
I am currently working on a project to identify players using their jersey numbers. I trained a YOLO model to detect players, another YOLO model to detect the jersey region, and a third model to predict the jersey number. After making predictions, I swap the track ID with the detected jersey number. However, the issue I'm facing is that the track ID keeps changing throughout the video. How can I maintain the detected jersey number consistently throughout the video?
Awesome project! Would love to take a look at some visualizations from a project like this! I'm currently working on track stitching. That would allow you to maintain the same tracker ID.
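A hedged sketch of one stopgap until track stitching lands: majority-vote the jersey predictions per tracker ID, so a few OCR errors or brief ID swaps don't flip the displayed number. The confidence threshold is arbitrary; this is illustrative, not the video's code.

from collections import Counter, defaultdict

class JerseyVoter:
    def __init__(self):
        self.votes: dict[int, Counter] = defaultdict(Counter)

    def add(self, tracker_id: int, jersey_number: int, confidence: float) -> None:
        if confidence > 0.5:  # ignore weak predictions
            self.votes[tracker_id][jersey_number] += 1

    def label(self, tracker_id: int) -> int | None:
        counter = self.votes.get(tracker_id)
        if not counter:
            return None
        return counter.most_common(1)[0][0]  # most frequently seen number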
@@Roboflow Waiting for a good outcome
After training, when I run:
%cd {HOME}
Image(filename=f'{HOME}/runs/detect/train/confusion_matrix.png', width=600)
I get the error:
FileNotFoundError: [Errno 2] No such file or directory: '/content/runs/detect/train/confusion_matrix.png'
What is wrong?
Is it possible that you interrupted your training somehow and restarted it once again?
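If the run was restarted, Ultralytics writes results to train2, train3, and so on; a small hedged snippet to pick up the most recent run directory (the HOME handling here is illustrative - reuse the notebook's HOME variable if it is already defined).

import os
from glob import glob
from IPython.display import Image, display

HOME = os.getcwd()  # or the HOME variable from earlier in the notebook
run_dirs = sorted(glob(f"{HOME}/runs/detect/train*"), key=os.path.getmtime)
if run_dirs:
    latest = run_dirs[-1]  # most recently modified run
    display(Image(filename=f"{latest}/confusion_matrix.png", width=600))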
@@Roboflow Yes, you were right. This is a result of my lack of knowledge of the Roboflow environment, but I'm still learning. 😅 Thank you for the clue.
@@thewatcher4940 uuuuf! I was scared for a second haha
thank you from Egypt
Does anyone know where I can locate the trained models?
The links you are looking for are in the video description.
is it possible to make a pass counter for both teams?
yup! that would be a natural extension of the project; we know the accurate ball path, so it should not be overly complicated
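A minimal, hedged sketch of a possession-based pass counter, assuming ball and player positions on the pitch plane and 0/1 team labels; the possession radius is an illustrative value, and a real version would also filter out deflections, throw-ins, etc.

import numpy as np

def nearest_player(ball_xy: np.ndarray, players_xy: np.ndarray) -> int:
    """Return the index of the player closest to the ball."""
    return int(np.argmin(np.linalg.norm(players_xy - ball_xy, axis=1)))

def count_passes(ball_track, player_tracks, team_ids, possession_radius=200.0):
    """ball_track: (T, 2); player_tracks: (T, N, 2); team_ids: (N,). Returns passes per team."""
    passes = {0: 0, 1: 0}
    holder = None
    for t in range(len(ball_track)):
        idx = nearest_player(ball_track[t], player_tracks[t])
        if np.linalg.norm(player_tracks[t][idx] - ball_track[t]) > possession_radius:
            continue  # ball is loose or in flight
        if holder is not None and idx != holder and team_ids[idx] == team_ids[holder]:
            passes[int(team_ids[idx])] += 1  # possession moved within the same team
        holder = idx
    return passes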
Where can I see the output after running the cell? You skipped the part where we see the output.
Hi 👋🏻 I'm not really sure I understand your question. Are you talking about video or image output?
Recently I started a similar project, but my main concern is real-time analysis.
Is there any way I can connect with you, either by email or LinkedIn?
Just leave a comment under my LinkedIn post and I'll invite you.
incredible video! I copied your notebook and added a way to correct the ball tracking when the ball is in the air. Can I connect with you to discuss on Linkedin or twitter?
big if true! absolutely! what's your X handle?
it’s jeremy9k27 - I also replied to one of your threads on x with a screenshot of my results 👍
I'm reviewing the scripts, and I've noticed that some of your Colab code doesn't appear in main.py. Am I making things too complex?
I'm a little bit confused because of that.
The code in the script might be vastly different from the code in the Colab. I recommend the Colab as the reference.
If we master this type of analysis, what professional occupations can we get into in the sports/football industry?
hi, how much RAM would I need in my laptop to run this?
To be honest, RAM is not a problem. The problem is VRAM - GPU memory. You probably need a few gigabytes of VRAM.
Hi man, great tutorial! I would like to ask you a few questions. How can I contact you? Tnx!
Leave your questions in the comments. I answer most of them.
Hi, quick question: I'm trying to use the best.pt model from your YOLOv5 + ByteTrack video and it's not really tracking my own dataset. Does the best.pt model you made only work on the dataset from Kaggle?
Hi, I will try to answer this question: best.pt is a YOLOv8 model fine-tuned on the specific Kaggle dataset he mentioned, so the best.pt model is the output of the training shown at 10:00. If you want to use the same video, this model should work fine, but if you want a specific football game, you should fine-tune the model on that game. You can label data on Roboflow with the label assist tool and then fine-tune the model on your dataset. There are plenty of tutorials about that on YouTube - check it out!
hi all :) I cover this topic towards the end of the video. The distribution of the training dataset and the data you run the model on should be similar. So if I use TV footage and you use, for example, video made with a phone standing on the sideline, the model would need to be fine-tuned.
how can we create a feature to keep track of how many goals each team scored? I want to use OpenCV to keep track of the goals scored in the video frames as well
The easiest approach is to teach the model to detect when the ball is in the goal. But it is also quite unreliable, as it may lead to false positives.
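A hedged alternative to retraining: once ball positions are in pitch coordinates, approximate a goal event by the ball entering a small zone behind each goal line. The numbers below assume a 120 m x 70 m pitch measured in centimeters with a standard 7.32 m goal centered on each goal line; the exact zone sizes are illustrative.

import numpy as np

LEFT_GOAL = (-200.0, 0.0, 3134.0, 3866.0)        # x_min, x_max, y_min, y_max in cm
RIGHT_GOAL = (12000.0, 12200.0, 3134.0, 3866.0)

def in_zone(xy: np.ndarray, zone: tuple[float, float, float, float]) -> bool:
    x_min, x_max, y_min, y_max = zone
    return x_min <= xy[0] <= x_max and y_min <= xy[1] <= y_max

def detect_goal(ball_xy: np.ndarray) -> str | None:
    if in_zone(ball_xy, LEFT_GOAL):
        return "goal_left"
    if in_zone(ball_xy, RIGHT_GOAL):
        return "goal_right"
    return None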
Using this as base knowledge, in which fields can we apply this to day-to-day life problems?
the most obvious choice is traffic analysis in large-area stores; people want to know how customers move and where they stop
@@SkalskiP thank you sir
I wonder if it's possible to do it in real time
I cover this topic at the end of the video. I think it is possible, but would require a lot of optimization.
I am unable to deploy to Roboflow, with the following error: Dependency ultralytics==8.0.196 is required but found version=8.3.37, to fix: `pip install ultralytics==8.0.196`
Would you like to continue with the wrong version of ultralytics? y/n: y
An error occured when getting the model upload URL: 404 Client Error: Not Found for url:
"error": {
"message": "Unsupported request. `GET /roboflow-jvuqo/football-players-detection-3zvbc/12/uploadModel` does not exist or cannot be loaded due to missing permissions.",
"type": "GraphMethodException",
"hint": "You can see your active workspace by issuing a GET request to `/` with your `api_key`."
}
}
PLEASE HELP
Wow! Genius!!!!
Thanks a lot!
Quick question! I'm working on a system. Someone on Reddit pointed me to your football video, which will help me build my project: counting how many customers come into a retail store and tracking employee activity, especially their working hours. One problem I'm facing is that the system assigns a new ID whenever it loses sight of a person, which throws off the customer count and causes issues with tracking employees too.
Also, since the store uses multiple stationary cameras (unlike a moving camera covering the whole field), it gets tricky to keep track of the same person across all the cameras. How can I solve this issue, especially syncing the cameras to avoid assigning new IDs?
can't wait until AI can watch all soccer matches, pull all the stats you'd ever want, and then give them to you on a platter via API to build the ultimate prediction model
God bless you brother
Thank you!
Is there a similar tool for basketball game analysis?
Hi, can you provide the link to the first video you published about Football AI on this channel? You mentioned that it was published 2 years ago.
Quick question. Assuming I want to learn how to do this but have no idea about coding, can I still do it?
'Cos I want to do it
I think so! You can certainly try it and see how it goes for you! Google Colab does not require any installation.
Thanks for the video. Fantastic. Is it possible to run it on our own GPU, on a Win10 system?
It is possible to run it on your own GPU (100% if you have Linux). I'm not sure about the Windows part. I haven't installed anything on Windows in 15 years. :/
AttributeError: module 'numpy' has no attribute 'float'.
Getting this error while executing the last chunk of code.