FastSAM: Segment Anything in Real-Time
- Published Jun 30, 2024
- Inside my school and program, I teach you my system to become an AI engineer or freelancer. Lifetime access, personal help from me, and I will show you exactly how I went from a below-average student to making $250/hr. Join the High Earner AI Career Program here 👉 www.nicolai-nielsen.com/aicareer (PRICES WILL INCREASE SOON)
You will also get access to all the technical courses inside the program, also the ones I plan to make in the future! Check out the technical courses below 👇
_____________________________________________________________
In this video 📝 We’ll be taking a look at the new FastSAM model. It is a way faster model compared to the original SAM model from Meta AI; we can get up to 50x faster inference and segment anything in images in real time. In this video, we are going to see how to set it up in Google Colab and also in a custom Python script locally.
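As a rough sketch of what the local setup looks like, the snippet below assumes the FastSAM repository (github.com/CASIA-IVA-Lab/FastSAM) is installed and the FastSAM-x.pt weights have been downloaded; the file paths and argument values are illustrative, and the exact API may differ between versions:

```python
# Minimal FastSAM inference sketch (assumes the CASIA-IVA-Lab/FastSAM repo
# is installed and FastSAM-x.pt weights are available locally).
from fastsam import FastSAM, FastSAMPrompt

model = FastSAM("FastSAM-x.pt")

# Run segmentation on an image (use device="cpu" if no GPU is available)
results = model("image.jpg", device="cuda", retina_masks=True,
                imgsz=1024, conf=0.4, iou=0.9)

# Build a prompt processor and segment "everything" in the image
prompt = FastSAMPrompt("image.jpg", results, device="cuda")
ann = prompt.everything_prompt()

# Save an annotated visualization
prompt.plot(annotations=ann, output_path="output.jpg")
```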
If you enjoyed this video, be sure to press the 👍 button so that I know what content you guys like to see.
_____________________________________________________________
🛠️ Freelance Work: www.nicolai-nielsen.com/nncode
_____________________________________________________________
💻💰🛠️ High Earner AI Career Program: www.nicolai-nielsen.com/aicareer
⚙️ Real-world AI Technical Courses: (www.nicos-school.com)
📗 OpenCV GPU in Python: www.nicos-school.com/p/opencv...
📕 YOLOv7 Object Detection: www.nicos-school.com/p/yolov7...
📒 Transformer & Segmentation: www.nicos-school.com/p/transf...
📙 YOLOv8 Object Tracking: www.nicos-school.com/p/yolov8...
📘 Research Paper Implementation: www.nicos-school.com/p/resear...
📔 CustomGPT: www.nicos-school.com/p/custom...
_____________________________________________________________
📞 Connect with Me:
🌳 linktr.ee/nicolainielsen
🌍 My Website: www.nicolai-nielsen.com/
🤖 GitHub: github.com/niconielsen32
👉 LinkedIn: / nicolaiai
🐦 X/Twitter: / nielsencv_ai
🌆 Instagram: / nicolaihoeirup
_____________________________________________________________
🎮 My Gear (Affiliate links):
💻 Laptop: amzn.to/49LJkTW
🖥️ Desktop PC:
NVIDIA RTX 4090 24GB: amzn.to/3Uc7yAM
Intel I9-14900K: amzn.to/3W4Z5Cb
Motherboard: amzn.to/4aR6wBC
32GB RAM: amzn.to/3Jt2XVR
🖥️ Monitor: amzn.to/4aLP8hh
🖱️ Mouse: amzn.to/3W501GH
⌨️ Keyboard: amzn.to/3xUGz5b
🎙️ Microphone: amzn.to/3w1F1WK
📷 Camera: amzn.to/4b4Ryr9
_____________________________________________________________
Timestamps:
0:00 Introduction
0:37 FastSAM GitHub Repo
5:08 Colab Setup
8:38 Local Installation
12:30 FastSAM VSCode
15:00 FastSAM Results
Tags:
#fastsam #sam #segmentanything #ai #computervision #yolov8 #objectsegmentation
Join My AI Career Program
www.nicolai-nielsen.com/aicareer
Enroll in My School and Technical Courses
www.nicos-school.com
I have a question. When I try this project it works well, thank you, but I don't know how to get the class name from the annotations. Could you tell me how?
Hello, first of all, thank you for the video. I'm wondering about something: is it possible to train FastSAM on our own dataset?
Thanks for the instructional video and all worked as explained. The image output is nice but now I would like to process the segments highlighted and do additional processing of these segments based on other details and positional relationships relative to other segments.
def everything_prompt(self):
    # Return all masks found in the image, or an empty list if inference
    # produced no results ("is None" is the idiomatic check, not "== None")
    if self.results is None:
        return []
    return self.results[0].masks.data
Here ann is the output of everything_prompt, shown together with the tensor array below. It should have 4 entries but has 6: 4 segments are highlighted in the output image, yet 6 rows appear in the array. The segment array data also does not seem to correspond to the original image's pixel coordinates, though the plot obviously handled the conversion in the output image.
tensor([[7.8642e+02, 7.7507e+01, 9.9292e+02, 4.7635e+02, 9.1396e-01, 0.0000e+00],
[4.0099e+02, 7.8259e+01, 6.0955e+02, 4.7675e+02, 9.0963e-01, 0.0000e+00],
[0.0000e+00, 7.8047e+01, 2.2261e+02, 4.7691e+02, 8.4697e-01, 0.0000e+00],
[0.0000e+00, 0.0000e+00, 1.0800e+03, 5.4800e+02, 7.3205e-01, 0.0000e+00],
[2.9916e+01, 8.8821e+01, 2.0837e+02, 4.6286e+02, 6.0771e-01, 0.0000e+00],
[7.9572e+02, 8.4327e+01, 9.8209e+02, 4.7095e+02, 4.5648e-01, 0.0000e+00]], device='cuda:0')
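For what it's worth, each row in that tensor looks like a YOLO-style box detection, [x1, y1, x2, y2, confidence, class_id], rather than a mask itself (FastSAM is class-agnostic, hence class_id is always 0). One plausible explanation for 4 highlighted segments versus 6 rows is confidence filtering at plot time; the 0.7 threshold below is an assumption for illustration:

```python
# The six rows from the tensor above, as plain Python floats:
# [x1, y1, x2, y2, confidence, class_id]
detections = [
    [786.42,  77.507,  992.92, 476.35, 0.91396, 0.0],
    [400.99,  78.259,  609.55, 476.75, 0.90963, 0.0],
    [  0.0,   78.047,  222.61, 476.91, 0.84697, 0.0],
    [  0.0,    0.0,   1080.0,  548.0,  0.73205, 0.0],
    [ 29.916, 88.821,  208.37, 462.86, 0.60771, 0.0],
    [795.72,  84.327,  982.09, 470.95, 0.45648, 0.0],
]

# Keep only boxes above a hypothetical confidence threshold
kept = [d for d in detections if d[4] >= 0.7]
print(len(kept))  # 4
```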
I wish SAM also classified detected masks (provided labels)
Hi, I have a question: how can I obtain all the points that make up the segmented image?
It will output the masks, which are basically just every single pixel in that class. Or did I not understand your question correctly?
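To turn a mask into explicit pixel coordinates, one approach is NumPy's argwhere. This is a sketch with a toy mask; a real mask from results[0].masks.data would first need .cpu().numpy():

```python
import numpy as np

# Toy binary mask standing in for one segment's mask
mask = np.zeros((6, 6), dtype=np.uint8)
mask[2:4, 1:5] = 1  # a small rectangular segment

# (row, col) coordinates of every pixel inside the segment
points = np.argwhere(mask == 1)
print(points.shape)  # (8, 2): 2 rows x 4 cols = 8 pixels
```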
Is it not possible for us to get real time camera view instead of picture?
Yeah, I'll do that in the next video
@NicolaiAI I am waiting...
@NicolaiAI excited to see that!
Me too! Maybe both with points and with masks, please
Already up and running live! Will record a video over the weekend and upload start of next week
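Until that video is out, a rough sketch of a live camera loop (assuming OpenCV plus the FastSAM repo from earlier; the model API, argument names, and the green-tint overlay are assumptions and version-dependent) could look like:

```python
import cv2
from fastsam import FastSAM, FastSAMPrompt  # assumes the FastSAM repo is installed

model = FastSAM("FastSAM-x.pt")
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # Run FastSAM on the current frame and collect all masks
    results = model(frame, device="cuda", retina_masks=True,
                    imgsz=640, conf=0.4, iou=0.9)
    prompt = FastSAMPrompt(frame, results, device="cuda")
    ann = prompt.everything_prompt()

    if len(ann):
        # Tint segmented pixels green; assumes retina_masks gives
        # masks at the original frame resolution ([N, H, W])
        for m in ann.cpu().numpy().astype(bool):
            frame[m] = frame[m] // 2 + (0, 64, 0)

    cv2.imshow("FastSAM live", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```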
Hi @NicolaiAI, love your videos. Is there any way you can share the Python code for this video?
Awesome man, thanks! The code is on my GitHub under fastSam live
@NicolaiAI fantastic, thank you!