How to Extract the Outputs from Ultralytics YOLOv8 Model for Custom Projects | Episode 5

  • Published 30 Sep 2024
  • Unlock the full potential of Ultralytics YOLOv8 in your custom projects! 🚀 In this fifth episode of our series, Nicolai delves into the intricacies of extracting outputs from a trained YOLOv8 model. Learn how to efficiently obtain bounding boxes, classes, masks, and confidences to integrate into your applications. Watch as Nicolai demonstrates YOLOv8's prowess in detecting a variety of custom objects in real time using a webcam.
    Key topics covered:
    - Setting up the YOLOv8 model for inference
    - Extracting various output attributes: bounding boxes, masks, probabilities, and key points
    - Practical coding examples for real-time object detection and segmentation
    - Using PyTorch tensors for GPU and CPU processing to manipulate results
    - Techniques for visualizing and utilizing extracted data in projects
    This episode is perfect for developers looking to enhance their computer vision tasks with YOLOv8's advanced capabilities. Don't miss out on seeing how to use these results to create powerful AI-driven applications.
    🌟YOLO Vision 2024 (YV24), our annual hybrid Vision AI event, is just days away! Happening on 27th September 2024 at Google for Startups Campus, Madrid! Watch live on:
    🔗 YouTube: • Video
    🔗 Bilibili: live.bilibili....
    📚 For more in-depth insights, check out these resources:
    - Ultralytics Blog: Extracting Outputs from Ultralytics YOLOv8: www.ultralytic...
    - Docs: Working with Results in YOLOv8: docs.ultralyti...
    Explore more about our state-of-the-art vision AI solutions and join our community:
    - Ultralytics HUB: www.ultralytic...
    - Ultralytics YOLO: www.ultralytic...
    - Join Our Team: www.ultralytic...
    👍 If you found this video helpful, please like, subscribe, and stay tuned for more episodes. Visit our site for additional resources and updates.
    #YOLOv8 #Ultralytics #ComputerVision #AI #MachineLearning #DeepLearning #ObjectDetection

Comments • 231

  • @fifthperson9777
    @fifthperson9777 1 year ago +9

    Oh finally, an in-depth tutorial for YOLOv8.
    Thanks

    • @Ultralytics
      @Ultralytics  1 year ago +1

      You're welcome. Let us know what else you'd like to see!

  • @nwr-27
    @nwr-27 3 months ago +2

    Great video. Can I get the code?

    • @Ultralytics
      @Ultralytics  3 months ago

      Thank you for your kind words! 😊 You can find all the code and resources you need in the Ultralytics YOLO GitHub repository: github.com/ultralytics/ultralytics. For detailed documentation and tutorials, visit our official docs: docs.ultralytics.com/. If you have any specific questions or run into issues, feel free to ask here or open an issue on GitHub. Happy coding! 🚀

  • @samgarbakytnur7008
    @samgarbakytnur7008 7 months ago +2

    I am an ME student, but somehow I need to work with YOLOv8 to graduate. I need to detect defects on 3D-printed objects with YOLOv8. I built my own custom dataset and trained on it. Unfortunately it's not detecting any defects. Now I am training a new model with twice as many images and 100 epochs. Hope it will work. I am working in Colab.

    • @Ultralytics
      @Ultralytics  7 months ago

      Thank you for sharing your experience. We wish you the best of luck with your project, and we're here to assist you with any questions or issues that may arise.
      Thanks

  • @yunary_cutz
    @yunary_cutz 6 months ago +2

    Where is the code used in this video?

    • @Ultralytics
      @Ultralytics  6 months ago +1

      Hi there, please find it here: github.com/niconielsen32/YOLOv8-Class

  • @gavinsbestfriend
    @gavinsbestfriend 5 months ago +1

    What exactly are results? Why did you use just results[0] instead of all of results?

    • @Ultralytics
      @Ultralytics  5 months ago

      The `results` object contains all the detection outcomes for a given frame. By indexing into `results`, you can access the data within it, allowing you to later process or extract bounding boxes, class IDs, tracking IDs, and more.
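      For illustration (the image path is a placeholder), `results` is a list with one `Results` object per input image, which is why a single image is accessed via `results[0]`:
      ```python
      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")
      results = model("path/to/image.jpg")  # returns a list with one Results object per input image

      # A single image yields a list of length 1, so results[0] holds its detections
      print(len(results))            # 1
      print(results[0].boxes.xyxy)   # bounding boxes
      print(results[0].boxes.cls)    # class IDs
      print(results[0].boxes.conf)   # confidences
      ```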

  • @prussiancat5357
    @prussiancat5357 8 months ago +2

    Is it possible to filter my outputs so that only persons are shown on screen? If so, how can I achieve this? Thank you.

    • @Ultralytics
      @Ultralytics  7 months ago +1

      Certainly, it's feasible. You can use the code below to display only the designated class labels.
      ```python
      from ultralytics import YOLO
      from ultralytics.utils.plotting import Annotator, colors
      import cv2

      model = YOLO("yolov8n.pt")
      names = model.names

      cap = cv2.VideoCapture("path/to/video/file.mp4")
      assert cap.isOpened(), "Error reading video file"
      w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
      video_writer = cv2.VideoWriter("ultralytics.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))

      while cap.isOpened():
          success, im0 = cap.read()
          if success:
              results = model.predict(im0, show=False)
              boxes = results[0].boxes.xyxy.cpu().tolist()
              clss = results[0].boxes.cls.cpu().tolist()
              annotator = Annotator(im0, line_width=4, example=names)
              if boxes is not None:
                  for box, cls in zip(boxes, clss):
                      if names[int(cls)] == "person":  # class name whose bbox you want to display
                          annotator.box_label(box, label=names[int(cls)])
              cv2.imshow("ultralytics", im0)
              video_writer.write(im0)
              if cv2.waitKey(1) & 0xFF == ord('q'):
                  break
              continue
          print("Video frame is empty or video processing has been successfully completed.")
          break

      cap.release()
      video_writer.release()
      cv2.destroyAllWindows()
      ```
      Thanks
      Ultralytics Team!

  • @user-firebender
    @user-firebender 9 months ago +3

    Do you have a tutorial on how to extract a txt file containing the timestamp of which object is being detected for video input?

    • @Ultralytics
      @Ultralytics  9 months ago +2

      There isn't a tutorial specifically covering timestamps. For more comprehensive guidance, feel free to inquire in our community!
      GitHub Issues: github.com/ultralytics/ultralytics/issues
      GitHub Discussion: github.com/orgs/ultralytics/discussions
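      As a rough starting point rather than an official utility, here is a minimal sketch (the video path and output file name are placeholders) that derives a timestamp from the frame index and FPS and logs detected class names to a txt file:
      ```python
      import cv2
      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")
      cap = cv2.VideoCapture("path/to/video.mp4")
      fps = cap.get(cv2.CAP_PROP_FPS)

      with open("detections.txt", "w") as f:
          frame_idx = 0
          while cap.isOpened():
              success, frame = cap.read()
              if not success:
                  break
              results = model.predict(frame, verbose=False)
              timestamp = frame_idx / fps  # seconds into the video
              for cls in results[0].boxes.cls.tolist():
                  f.write(f"{timestamp:.2f}s {results[0].names[int(cls)]}\n")
              frame_idx += 1
      cap.release()
      ```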
      Thanks
      Ultralytics Team!

    • @user-firebender
      @user-firebender 9 months ago +1

      @@Ultralytics Sorry if this sounds stupid, but which file should I modify so I can get the desired output (txt containing object and timestamp)?

    • @Ultralytics
      @Ultralytics  9 months ago

      @@user-firebender You are advised to modify the internal code. For more effective responses to your queries, we recommend posting them on our GitHub: github.com/ultralytics/ultralytics/issues

  • @notyetnotnowyouknow
    @notyetnotnowyouknow 8 months ago +3

    What I'm trying to do is detect in real time and then put my cursor on the detected object. If possible, when the pose model detects the head and its features are marked in green, I would like to get the coordinates of the nose keypoint and put my cursor on it, as fast and as efficiently as possible. Can you please help me? Also, how do I use my screen as the source? And how would the tracking work? Can I make my mouse cursor follow the tracked path?

    • @Ultralytics
      @Ultralytics  8 months ago +1

      For real-time object detection and cursor control (a rough sketch follows this list):
      - Use OpenCV for object detection.
      - Extract nose keypoint coordinates from pose detection.
      - Employ PyAutoGUI for efficient cursor placement.
      - Capture screen frames with OpenCV.
      - Implement real-time cursor tracking for a responsive user experience.
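      A minimal sketch along these lines, assuming `mss` for screen capture (as in your snippet) and PyAutoGUI for the cursor; the monitor region is a placeholder:
      ```python
      import cv2
      import numpy as np
      import pyautogui
      from mss import mss
      from ultralytics import YOLO

      model = YOLO("yolov8n-pose.pt")
      monitor = {"left": 0, "top": 0, "width": 800, "height": 800}  # placeholder screen region

      with mss() as sct:
          while True:
              # Grab the screen region and drop the alpha channel
              frame = np.array(sct.grab(monitor))[..., :3]
              results = model.predict(frame, verbose=False)
              kpts = results[0].keypoints
              if kpts is not None and len(kpts) > 0:
                  nose = kpts.xy[0][0]  # COCO keypoint 0 is the nose
                  pyautogui.moveTo(int(nose[0]) + monitor["left"], int(nose[1]) + monitor["top"])
              cv2.imshow("pose", results[0].plot())
              if cv2.waitKey(1) & 0xFF == ord("q"):
                  break
      cv2.destroyAllWindows()
      ```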
      Thanks

    • @notyetnotnowyouknow
      @notyetnotnowyouknow 8 months ago

      @@Ultralytics Actually I did try, but I couldn't get anywhere. Then I asked GPT-4, but it was useless:
      from ultralytics import YOLO
      import win32api
      from mss import mss
      import numpy as np

      # Load model, initialize MSS, define screen area
      model, sct, monitor = YOLO('yolov8n-pose.pt'), mss(), {'left': 880, 'top': 400, 'width': 800, 'height': 800}

      while True:
          # Capture screenshot, run inference
          screenshot = np.array(sct.grab(monitor))[..., :3]
          results = model(screenshot, show=True)
          # Check if any detections were made
          if len(results.pred) > 0:
              # Get the first detection's keypoints
              keypoints = results.pred[0].keypoints
              # Check if the detection has keypoints
              if keypoints:
                  # Get the nose keypoint (assuming the first keypoint is the nose)
                  nose_keypoint = keypoints.xy[0]
                  # Move the cursor to the nose keypoint
                  win32api.SetCursorPos((int(nose_keypoint[0]), int(nose_keypoint[1])))
      The last part, checking the detections and all, is from GPT and I don't know what concoction it made, but it's not working. Please help me.
      Thank you

    • @Ultralytics
      @Ultralytics  8 months ago +1

      Alright, for more technical queries, you can refer to our GitHub Issues section: github.com/ultralytics/ultralytics/issues

    • @notyetnotnowyouknow
      @notyetnotnowyouknow 8 months ago

      @@cv3174 Yup, and it's completed and works amazingly. If you train it with game data and proper keypoints it will work even better, but I didn't because I was satisfied with the default one.

    • @Ultralytics
      @Ultralytics  2 months ago +1

      That sounds impressive! If you need any further assistance or want to explore more about object detection and tracking, check out our detailed guides and documentation: docs.ultralytics.com/guides/vision-eye/. Happy coding! 🚀

  • @oitienay5699
    @oitienay5699 9 months ago +2

    Can you give me this code please? Thank you.

    • @Ultralytics
      @Ultralytics  9 months ago +1

      Sure, you can use the code below to extract the output of Ultralytics YOLOv8 object detection.
      ```python
      from ultralytics import YOLO

      # Load the YOLOv8 model
      model = YOLO('yolov8n.pt')

      # Perform inference on an image
      results = model('ultralytics.com/images/bus.jpg')

      # Extract bounding boxes, classes, names, and confidences
      boxes = results[0].boxes.xyxy.tolist()
      classes = results[0].boxes.cls.tolist()
      names = results[0].names
      confidences = results[0].boxes.conf.tolist()

      # Iterate through the results
      for box, cls, conf in zip(boxes, classes, confidences):
          x1, y1, x2, y2 = box
          confidence = conf
          detected_class = cls
          name = names[int(cls)]
      ```
      Thanks
      Ultralytics Team!

    • @oitienay5699
      @oitienay5699 9 months ago +1

      @@Ultralytics thank you

    • @Ultralytics
      @Ultralytics  2 months ago

      You're welcome! If you have any more questions, feel free to ask. Happy coding! 😊
      For more details, check out our AzureML Quickstart Guide: docs.ultralytics.com/guides/azureml-quickstart/.
      Thanks,
      Ultralytics Team!

  • @kennetheladistu3356
    @kennetheladistu3356 3 months ago +1

    Hello, I have trained my dataset for parcels. May I ask how I can predict the parcels using the live video feed of my camera?

    • @Ultralytics
      @Ultralytics  3 months ago +1

      Hello! 🌟 To predict parcels using a live video feed from your camera, you can use the `predict` mode of your trained YOLOv8 model. First, ensure you have the latest versions of `torch` and `ultralytics` installed. Then run:
      ```
      yolo predict model=path/to/your/model.pt source=0
      ```
      This command uses your webcam (source=0) for live predictions. For more details, check out the Ultralytics documentation: docs.ultralytics.com/modes/predict. If you encounter any issues, please share specific error messages or code snippets. Happy detecting! 📦✨ For more resources, visit our FAQ: docs.ultralytics.com/help/FAQ/.

  • @jaysawant3097
    @jaysawant3097 9 months ago +2

    Let's say I have a YOLO model trained on my custom dataset, and I have its weights in my runs folder as best.pt. Now I want to pass a single image and predict the classes present in it. Along with the count, I need to display the image with the bounding boxes that the model predicted. I am getting the count, but don't know how to get the bounding boxes. Please help.

    • @Ultralytics
      @Ultralytics  9 months ago

      You can get the bounding box coordinates along with object counts using the code below!
      ```python
      from ultralytics import YOLO
      from ultralytics.solutions import object_counter
      import cv2

      model = YOLO("yolov8n.pt")
      cap = cv2.VideoCapture("path/to/video/file.mp4")
      assert cap.isOpened(), "Error reading video file"

      counter = object_counter.ObjectCounter()  # Init Object Counter
      region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
      counter.set_args(view_img=True,
                       reg_pts=region_points,
                       classes_names=model.names,
                       draw_tracks=True)

      while cap.isOpened():
          success, im0 = cap.read()
          if not success:
              break
          tracks = model.track(im0, persist=True, show=False)
          boxes = tracks[0].boxes.xyxy.cpu()
          for box in boxes:
              print("Bounding Box Value : ", box)
          im0 = counter.start_counting(im0, tracks)
          print("In Counts : {}, Out Counts : {}".format(counter.in_counts, counter.out_counts))

      cap.release()
      cv2.destroyAllWindows()
      ```
      Thanks
      Ultralytics Team!

  • @k23vanthuan98
    @k23vanthuan98 5 months ago +1

    Sorry, but I once watched a video and it showed fewer lines of code. Can you tell me the difference between these two pieces of code?
    import cv2
    from ultralytics import YOLO
    #from ultralytics.models.yolo.detect.predict import DetectionPredictor
    import time

    model = YOLO("best.pt")
    results = model.predict(source="0", show=True)
    print(results)

    • @Ultralytics
      @Ultralytics  5 months ago +1

      The code remains largely the same, but when the video was created, YOLOv8 had just been introduced. Since then, we've significantly optimized the code, resulting in reduced lines of inference code compared to what was presented in the video. Thank you.

    • @k23vanthuan98
      @k23vanthuan98 5 months ago +1

      @@Ultralytics I understand your answer to mean that the two pieces of code above are equivalent and have the same effect. Is that correct?

    • @Ultralytics
      @Ultralytics  5 months ago

      @@k23vanthuan98 Yes, for inference the latest code is available at: docs.ultralytics.com/modes/predict/

  • @werangalakshitha9800
    @werangalakshitha9800 5 months ago +2

    We are creating Colab notebooks that will include the code for our YouTube videos; we will share them soon! Thanks

    • @Ultralytics
      @Ultralytics  5 months ago +1

      The Colab notebooks are already released and available at: github.com/ultralytics/ultralytics?tab=readme-ov-file#notebooks

    • @ronithrock2015
      @ronithrock2015 2 months ago

      released??

    • @Ultralytics
      @Ultralytics  2 months ago

      Oops, my mistake! 😅 We haven't released them yet. Stay tuned for updates! 🚀

  • @ramanathreddyg7212
    @ramanathreddyg7212 6 months ago +2

    I want this episode's code, not the entire code.

    • @Ultralytics
      @Ultralytics  6 months ago

      You can use the code below to extract the output of Ultralytics YOLOv8.
      ```python
      import cv2
      from ultralytics import YOLO
      from ultralytics.utils.plotting import Annotator, colors

      model = YOLO("yolov8n.pt")
      names = model.model.names

      cap = cv2.VideoCapture("Path/to/video/file.mp4")
      assert cap.isOpened(), "Error reading video file"

      while cap.isOpened():
          success, im0 = cap.read()
          if success:
              results = model.predict(im0, show=False)
              boxes = results[0].boxes.xyxy.cpu().tolist()
              clss = results[0].boxes.cls.cpu().tolist()
              annotator = Annotator(im0, line_width=3, example=names)
              if boxes is not None:
                  for box, cls in zip(boxes, clss):
                      annotator.box_label(box, color=(255, 144, 31), label=names[int(cls)])
              cv2.imshow("ultralytics", im0)
              if cv2.waitKey(1) & 0xFF == ord('q'):
                  break
              continue
          print("Video frame is empty or video processing has been successfully completed.")
          break

      cap.release()
      cv2.destroyAllWindows()
      ```
      Thanks

    • @ramanathreddyg7212
      @ramanathreddyg7212 6 months ago

      @@Ultralytics How can I see the contents of the .pt file, and how can I know the accuracy of my .pt file, i.e. the accuracy on my dataset?

    • @Ultralytics
      @Ultralytics  2 months ago

      To see the content of your `.pt` file and check the accuracy of your model, you can follow these steps:
      1. Load the Model: Use the `YOLO` class to load the `.pt` file.
      2. Check Model Accuracy: Evaluate the model on a validation dataset to get accuracy metrics.
      Here's a concise example:
      ```python
      from ultralytics import YOLO

      # Load the model
      model = YOLO("path/to/your/model.pt")

      # Print model architecture
      print(model.model)

      # Evaluate the model on a validation dataset
      results = model.val(data="path/to/your/dataset.yaml")
      print(results)
      ```
      For more details, check out our documentation docs.ultralytics.com/modes/predict/.

  • @olevester
    @olevester 5 months ago +1

    I wanted to see the full code but could not find it 😥

    • @Ultralytics
      @Ultralytics  5 months ago

      The code is available in our docs: docs.ultralytics.com/usage/simple-utilities/

  • @astrum0941
    @astrum0941 3 months ago +1

    I am working on a project using MediaPipe for pose estimation. MediaPipe only supports single-person pose estimation, but I want to make it multi-person. I was going to use YOLOv8's object detection and loop the MediaPipe pose estimation code through each bounding box, but I am not sure how I could do that. Is there a way to run the MediaPipe code through each bounding box of a human instead of the whole frame, and then put it back together in one frame?

    • @Ultralytics
      @Ultralytics  3 months ago

      Thanks for sharing your feedback. You can directly use the Ultralytics YOLOv8 Pose Model, which supports multi-person pose estimation in a single frame. For more information, please visit: docs.ultralytics.com/tasks/pose/

  • @moxd320
    @moxd320 3 months ago +1

    Hello there! I'm building a simple program that uses YOLOv8 to detect a person and then calls a function (the function connects to a Pixhawk, but I already wrote it).
    I just need the code that triggers the function when a person is detected.

    • @Ultralytics
      @Ultralytics  3 months ago

      You can add a check once a person is detected; the sample code is provided below. We hope this helps :)
      ```python
      import cv2
      from ultralytics import YOLO
      from ultralytics.utils.plotting import Annotator, colors

      model = YOLO("yolov8s.pt")
      names = model.names

      cap = cv2.VideoCapture("Path/to/video/file.mp4")
      w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH,
                                             cv2.CAP_PROP_FRAME_HEIGHT,
                                             cv2.CAP_PROP_FPS))
      out = cv2.VideoWriter("Ultralytics.avi",
                            cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))

      while True:
          ret, im0 = cap.read()
          if not ret:
              break
          annotator = Annotator(im0, line_width=3)
          results = model.predict(im0)  # For prediction
          boxes = results[0].boxes.xyxy.cpu()
          clss = results[0].boxes.cls.cpu().tolist()
          for box, cls in zip(boxes, clss):
              if names[int(cls)] == "person":
                  print("Person Detected!!! Ultralytics !!!")
                  # ... Execute your logic because a person is detected ...
              annotator.box_label(box, label=names[int(cls)], color=colors(int(cls), True))
          out.write(im0)
          cv2.imshow("Ultralytics", im0)
          if cv2.waitKey(1) & 0xFF == ord("q"):
              break

      out.release()
      cap.release()
      cv2.destroyAllWindows()
      ```
      Regards
      Ultralytics Team!

  • @laguelycaa.3411
    @laguelycaa.3411 5 months ago +1

    Hello. What must I extract from the output of the Ultralytics YOLOv8 model if I want to know the position of the objects being detected?

    • @Ultralytics
      @Ultralytics  5 months ago +1

      You can utilize the provided code snippet to retrieve the bounding box position as follows:
      ```python
      from ultralytics import YOLO
      import cv2

      model = YOLO('yolov8n.pt')  # Load a pre-trained or fine-tuned model

      # Process the image
      source = cv2.imread('path/to/image.jpg')
      results = model(source)

      # Extract results
      for box in results[0].boxes.xyxy.cpu():
          x_min, y_min, x_max, y_max = box.tolist()
          print("Position of Bounding box:", (x_min, y_min, x_max, y_max))
      ```
      This code snippet utilizes Ultralytics' YOLOv8 model to process an image and extract bounding box results. It then iterates through the detected boxes and prints their positions.

  • @amankumarsingh8857
    @amankumarsingh8857 8 months ago +1

    Hey, can you please help me?
    I have my custom-trained model (best.pt); it detects two things: person and headlight. Now I want the output according to these conditions: 1. If the model detects only headlight, return 0; 2. If the model detects only person, return 1; 3. If the model detects both headlight and person, return 0.

    • @Ultralytics
      @Ultralytics  8 months ago

      Your inquiries appear to be completely technical. We suggest submitting them to the Ultralytics Discussion section for more effective assistance. You can do so at github.com/orgs/ultralytics/discussions.
      Thanks
      Ultralytics Team!

  • @ballajaisheel1120
    @ballajaisheel1120 5 months ago +1

    Hey!!!!
    Can you please explain how to get the coordinates of the detected bounding boxes in an image? And one more thing: how can I change the save directory to one single folder, not runs>predict1, predict2, etc.?
    Thanks

    • @Ultralytics
      @Ultralytics  5 months ago

      Right, you can use the code below to extract the bounding boxes from an image; additionally, you can store the output in a specific directory.
      ```python
      import cv2
      import os
      from ultralytics import YOLO
      from ultralytics.utils.plotting import Annotator, colors

      model = YOLO("yolov8s.pt")
      output_dir = "test"
      os.makedirs(output_dir, exist_ok=True)  # ensure the output folder exists

      image = cv2.imread("path/to/image.png")
      results = model.predict(image, show=False)
      boxes = results[0].boxes.xyxy.cpu().tolist()
      clss = results[0].boxes.cls.cpu().tolist()

      annotator = Annotator(image, line_width=2, example=model.names)
      for box, cls in zip(boxes, clss):
          annotator.box_label(box, color=colors(int(cls), True),
                              label=model.names[int(cls)])
          print("Bounding Box Coordinates : ", box)

      cv2.imwrite(os.path.join(output_dir, "output.png"), image)
      ```
      Thanks

  • @kishantripathi4521
    @kishantripathi4521 1 month ago +1

    Hey, I am working on a real-time object detection project which shows the number of cars in a parking space and the number of empty spaces. What I want is to check whether a space is empty or not from a function, to perform some task. How can I extract the output as text in real time?

    • @Ultralytics
      @Ultralytics  1 month ago +1

      Hey there! For real-time object detection and extracting outputs as text, you can use the `predict` function in YOLOv8 to get the detection results. You can then process these results to count cars and determine empty spaces. Make sure you're using the latest versions of `torch` and `ultralytics`. For more details, check out our documentation: docs.ultralytics.com/modes/predict/. If you need further assistance, feel free to ask! 🚗📊
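      As a minimal sketch (assuming the COCO `car` class and a hypothetical lot capacity of 50), each frame's detections can be turned into a plain-text status for your function to consume:
      ```python
      import cv2
      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")
      TOTAL_SPACES = 50  # hypothetical lot capacity

      cap = cv2.VideoCapture(0)
      while cap.isOpened():
          success, frame = cap.read()
          if not success:
              break
          results = model.predict(frame, verbose=False)
          cars = sum(1 for c in results[0].boxes.cls.tolist() if int(c) == 2)  # COCO class 2 is "car"
          print(f"cars: {cars}, empty spaces: {TOTAL_SPACES - cars}")  # text output in real time
      cap.release()
      ```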

  • @cosasdeLPS
    @cosasdeLPS 5 months ago +1

    Hi, could you please lend me a hand? I'm trying to export the results and send them to an Excel or CSV file, but I can't seem to get the right code. I already exported the model to TorchScript, but it's been impossible to resolve.

    • @Ultralytics
      @Ultralytics  5 months ago

      Sure, you can extract the bounding boxes and classes using the code below.
      ```python
      import cv2
      from ultralytics import YOLO
      from ultralytics.utils.plotting import Annotator, colors

      model = YOLO("yolov8s.pt")
      cap = cv2.VideoCapture("path/to/video/file.mp4")

      while cap.isOpened():
          success, frame = cap.read()
          if success:
              results = model.track(frame)
              annotator = Annotator(frame, line_width=4)
              boxes = results[0].boxes.xyxy.cpu()
              clss = results[0].boxes.cls.cpu().tolist()
              for box, cls in zip(boxes, clss):
                  print(f"Bounding box {box}, class name {model.names[int(cls)]}")
              if cv2.waitKey(1) & 0xFF == ord("q"):
                  break
          else:
              break

      cap.release()
      cv2.destroyAllWindows()
      ```
      Thanks
      Ultralytics Team!

  • @rifkyhernanda7466
    @rifkyhernanda7466 1 year ago +3

    Hello, where can I get the full code? I only see up to line 50 in this video; I copied everything and the code isn't working for me.

    • @Ultralytics
      @Ultralytics  1 year ago +1

      Hi there, please find it here: github.com/niconielsen32/YOLOv8-Class

    • @FernandaZ-u7c
      @FernandaZ-u7c 8 months ago

      @@Ultralytics Thank you for your sharing. I ran the full code from the link provided, but there is an error that says:
      ...
      ValueError: too many values to unpack (expected 4)
      Why is that? How do I debug it? 😃

    • @WashiurRahman
      @WashiurRahman 4 months ago

      @@FernandaZ-u7c I have also got the same error. Did you find the fix?

    • @Ultralytics
      @Ultralytics  2 months ago

      It sounds like there might be an issue with the unpacking of values in the code. Ensure you're using the latest versions of `torch` and `ultralytics`. If the issue persists, please share the specific line causing the error for more detailed help. 😊
      For more guidance, check our documentation: docs.ultralytics.com/

  • @dejahm1502
    @dejahm1502 6 months ago +1

    If you could just talk a little slower lol, without swallowing half the words and slurring through. But the video by itself was super cool, excited to try it out!

    • @Ultralytics
      @Ultralytics  6 months ago

      Thank you for the feedback! We'll slow down the speech for better clarity next time. We are glad you found the video cool, and we hope you enjoy trying it out!

  • @miguelmarte7340
    @miguelmarte7340 1 year ago +3

    Great video, thanks! I would like to know how I can get results with masks when predicting.

    • @Ultralytics
      @Ultralytics  1 year ago +1

      See docs.ultralytics.com/modes/predict/#masks for getting masks results from Segment models :)

  • @bitmapsquirrel6869
    @bitmapsquirrel6869 10 months ago +2

    What would I add to my code (using a live video cam) to get the coordinates of the boxes? I'm planning on extracting these coordinates, creating a custom segmentation dataset, and pairing it with the rest of my project.
    My code so far:
    from ultralytics import YOLO
    model = YOLO("yolov8l")
    results = model.predict(source="0", show=True, conf=0.5)
    results.show()

    • @Ultralytics
      @Ultralytics  10 months ago +1

      You can get the bounding box coordinates using the provided code.
      ```python
      from ultralytics import YOLO
      import cv2
      import sys

      cap = cv2.VideoCapture(0)
      model = YOLO("yolov8n.pt")

      if not cap.isOpened():
          print("Error reading video file")
          sys.exit()

      while cap.isOpened():
          success, frame = cap.read()
          if success:
              results = model.predict(frame)
              boxes = results[0].boxes.xywh.cpu()
              clss = results[0].boxes.cls.cpu().tolist()
              names = results[0].names
              for box, cls in zip(boxes, clss):
                  x, y, w, h = box
                  label = str(names[int(cls)])
                  #......
                  #......
                  #......
      ```

    • @bitmapsquirrel6869
      @bitmapsquirrel6869 10 months ago +1

      @@Ultralytics Thank you so much, but I'm actually looking to migrate to a segmentation project and need to export every bounding box from my custom dataset that is detected. How would I do so (export what and where the thing is segmented, in a way that is readable by an external source)? Also, do you have any methods/tips for using this data for pathfinding (a self-driving car)? I need a way to export the data provided by the segmentation model and find a route that avoids certain segmentations (like grass).

    • @Ultralytics
      @Ultralytics  10 months ago

      @@bitmapsquirrel6869 It seems like your query is quite technical; we would recommend asking your questions in our GitHub Issues section: github.com/ultralytics/ultralytics/issues

  • @kosttavmalhotra5899
    @kosttavmalhotra5899 8 months ago +1

    I have run the command (model.predict(source="C:\\Users\\User\\Documents\\Bandicam\\check.mp4", stream=True, save=True, imgsz=320, conf=0.5))
    and got this: ()///////////////////
    Where does the output video get saved when stream=True? Please help.

    • @Ultralytics
      @Ultralytics  8 months ago +1

      We highly recommend upgrading the Ultralytics package, and hopefully, this will address your issue.
      ```pip install -U ultralytics```
      Thanks,
      Ultralytics Team!

  • @holsonshen9140
    @holsonshen9140 9 months ago +1

    I noticed in your previous comment that you would have Colab notebooks. Do you have notebooks now? Where is the address?

    • @Ultralytics
      @Ultralytics  9 months ago

      You can extract the output of Ultralytics YOLOv8 using the code below.
      ```python
      from ultralytics import YOLO

      # Load the YOLOv8 model
      model = YOLO('yolov8n.pt')
      names = model.model.names

      # Perform inference on an image
      results = model('ultralytics.com/images/bus.jpg')

      # Extract bounding boxes, classes, names, and confidences
      boxes = results[0].boxes.xyxy.tolist()
      classes = results[0].boxes.cls.tolist()
      confidences = results[0].boxes.conf.tolist()

      # Iterate through the results
      for box, cls, conf in zip(boxes, classes, confidences):
          x1, y1, x2, y2 = box
          confidence = conf
          detected_class = cls
          name = names[int(cls)]
      ```
      Thanks
      Ultralytics Team!

  • @SubSquadTV2024
    @SubSquadTV2024 8 months ago +1

    Good morning! How can I convert the YOLOv8 .pt file to .weights? Can you help me with this one? Thank you very much.

    • @Ultralytics
      @Ultralytics  8 months ago

      There is no direct support for this feature, but you can use third-party tools to convert PyTorch (.pt) models to Darknet (.weights) format. The currently available export formats are listed at the following link: docs.ultralytics.com/modes/export/#arguments

  • @akashvvk7359
    @akashvvk7359 5 months ago +1

    Can I get the full code for finding the coordinates of bounding boxes from a webcam? The given code is not working.

    • @Ultralytics
      @Ultralytics  5 months ago

      You can use the code below to extract the bounding box coordinates from the webcam. For more information, you can explore our docs: docs.ultralytics.com/modes/predict/#working-with-results
      ```python
      import cv2
      from ultralytics import YOLO
      from ultralytics.utils.plotting import Annotator, colors

      model = YOLO("yolov8s.pt")
      cap = cv2.VideoCapture(0)
      assert cap.isOpened(), "Error reading video file"

      while cap.isOpened():
          success, im0 = cap.read()
          if success:
              results = model.predict(im0, show=False)
              boxes = results[0].boxes.xyxy.cpu().tolist()
              clss = results[0].boxes.cls.cpu().tolist()
              annotator = Annotator(im0, line_width=2, example=model.names)
              if boxes is not None:
                  for box, cls in zip(boxes, clss):
                      annotator.box_label(box, color=colors(int(cls), True),
                                          label=model.names[int(cls)])
                      print("Bounding Box Coordinates : ", box)
              if cv2.waitKey(1) & 0xFF == ord('q'):
                  break
              continue
          print("Video frame is empty or video processing has been successfully completed.")
          break

      cap.release()
      cv2.destroyAllWindows()
      ```
      Thanks

  • @0xH3S
    @0xH3S 5 months ago +1

    Hello, I want to extract only the labels without the rest of the details... How can I do it? I use YOLOv8.

    • @Ultralytics
      @Ultralytics  5 months ago

      You can use the code below to do this.
      ```python
      import cv2
      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")
      cap = cv2.VideoCapture("path/to/video/file.mp4")
      assert cap.isOpened(), "Error reading video file"

      while cap.isOpened():
          success, im0 = cap.read()
          if success:
              results = model.predict(im0, show=False)
              clss = results[0].boxes.cls.cpu().tolist()
              if clss:
                  for cls in clss:
                      print(f"label {model.names[int(cls)]}")
              if cv2.waitKey(1) & 0xFF == ord('q'):
                  break
              continue
          break

      cap.release()
      cv2.destroyAllWindows()
      ```

  • @sandaznadaz6282
    @sandaznadaz6282 8 months ago +1

    How can I show the tracking ID?

    • @Ultralytics
      @Ultralytics  8 months ago

      You can display the tracking ID by calling `model.track`, for example:
      ```yolo track model=yolov8n.pt source="th-cam.com/video/LNwODJXcvt4/w-d-xo.html" conf=0.3 iou=0.5 show```
      Thanks,
      Ultralytics Team!

  • @TheodoreBC
    @TheodoreBC 5 days ago

    Is YOLOv8 good enough to spot that camo-wearing deer before you spook it on the trail, or should I stick to my binoculars, bro? Curious if it handles tricky background situations.

    • @Ultralytics
      @Ultralytics  5 days ago

      YOLOv8 is quite powerful and can handle complex backgrounds, including camouflaged objects like deer. However, its effectiveness can depend on factors like lighting and camera quality. It might not replace binoculars entirely, but it’s a great tool to enhance your spotting game! 🦌🔍 Check out more about YOLOv8 here: docs.ultralytics.com/models/yolov8/

  • @onik5377
    @onik5377 7 months ago +1

    How is the frame rate dynamically put onto the window? In my while(True) loop with OpenCV, I use the putText() method on every frame but it seems to stay at 30 besides visible slowdowns at times. How do I make the framerate account for processing time from my YOLO model?

    • @Ultralytics
      @Ultralytics  7 months ago

      By default, we don't provide support for displaying the frames per second (FPS) on the frame. However, in the context of YOLO and OpenCV, dynamically updating the frame rate on the window involves accounting for the processing time of your YOLO model. Instead of a fixed frame rate, you can calculate the actual frames per second based on the time the YOLO model inference takes.
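      A minimal sketch of that idea, timing each inference and drawing the measured FPS with putText():
      ```python
      import time

      import cv2
      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")
      cap = cv2.VideoCapture(0)

      while cap.isOpened():
          success, frame = cap.read()
          if not success:
              break
          start = time.time()
          results = model.predict(frame, verbose=False)
          fps = 1.0 / (time.time() - start)  # actual FPS including model inference time
          annotated = results[0].plot()
          cv2.putText(annotated, f"FPS: {fps:.1f}", (20, 40),
                      cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
          cv2.imshow("YOLOv8", annotated)
          if cv2.waitKey(1) & 0xFF == ord("q"):
              break
      cap.release()
      cv2.destroyAllWindows()
      ```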
      Thanks
      Ultralytics Team!

  • @muhammadwaleed1526
    @muhammadwaleed1526 4 months ago +1

    Is the complete code snippet used in the video available?

    • @Ultralytics
      @Ultralytics  4 months ago

      Yes, you can extract the output of Ultralytics YOLOv8 by following our docs: docs.ultralytics.com/usage/python/#__tabbed_3_2
      The code snippets are available in the docs.
      Thanks
      Ultralytics Team!

    • @정재원-o3c
      @정재원-o3c 4 months ago +1

      @Ultralytics I don't think that's what muhammad meant. I'm pretty sure that he wanted the code that was being shown on THIS video. Not some generic example code. I would also like to request the code that was shown on this video. I'm just getting started on using YOLO, and it would help my understanding by a lot if I could replicate this. Thanks for the wonderful video tho! Much appreciated 👏 👏

    • @Ultralytics
      @Ultralytics  4 months ago

      We regularly update the code to enhance the user experience. The code provided above is the latest and can be used to extract the detection outputs easily.
      Thanks
      Ultralytics Team!

  • @taurussilver7025
    @taurussilver7025 8 months ago +1

    Hello, a student here.
    I trained a YOLOv8m object detection model in Google Colab, ran predictions on images and videos, and I'm getting good results so far.
    However, I'm rather interested in how I could make inferences from the video...
    For instance, I am interested in whether I could somehow get some observation tables with: classes (the objects detected), detections (how many of them were detected throughout the video).
    Would love to hear how I should proceed with this! I've been reading the documentation but haven't figured it out yet. Thanks in advance!

    • @Ultralytics
      @Ultralytics  8 months ago

      If you want to get information about the prediction, i.e. class names, objects detected, and so on, you can work with the `model.predict` method; it will provide all the information you need, but you will have to format it according to your needs, e.g.:
      ```
      results = model.predict(im0, show=False)
      boxes = results[0].boxes.xyxy.cpu().tolist()
      clss = results[0].boxes.cls.cpu().tolist()
      ```
      Thanks
      Ultralytics Team!

  • @krushnarenge3699
    @krushnarenge3699 7 months ago +1

    I want to draw the center lines of the bounding boxes of objects detected and tracked by YOLOv8 in live video. Please tell me how to do it, with code.

    • @Ultralytics
      @Ultralytics  6 months ago

      You can utilize the provided code to draw centroids of objects and implement object tracking over time.
      ```python
      from collections import defaultdict

      import cv2
      import numpy as np
      from ultralytics import YOLO

      model = YOLO('yolov8n.pt')
      video_path = "Path/to/video/file.mp4"
      cap = cv2.VideoCapture(video_path)
      track_history = defaultdict(lambda: [])

      while cap.isOpened():
          success, frame = cap.read()
          if success:
              results = model.track(frame, persist=True, show_conf=False)
              boxes = results[0].boxes.xywh.cpu()
              track_ids = results[0].boxes.id.int().cpu().tolist()
              annotated_frame = results[0].plot()
              for box, track_id in zip(boxes, track_ids):
                  x, y, w, h = box
                  track = track_history[track_id]
                  track.append((float(x), float(y)))  # store the box center point
                  if len(track) > 30:
                      track.pop(0)
                  points = np.hstack(track).astype(np.int32).reshape((-1, 1, 2))
                  cv2.polylines(annotated_frame, [points], isClosed=False, color=(230, 230, 230), thickness=2)
              cv2.imshow("YOLOv8 Tracking", annotated_frame)
              if cv2.waitKey(1) & 0xFF == ord("q"):
                  break
          else:
              break

      cap.release()
      cv2.destroyAllWindows()
      ```
      Thanks

  • @hasnainahmed7605
    @hasnainahmed7605 6 months ago +1

    Excellent tutorial. Can you please help me find people who are using phones in a video or camera stream? Thanks.

    • @Ultralytics
      @Ultralytics  6 months ago

      Thanks! Certainly, to detect people using phones in a video or camera stream, utilize YOLOv5 or YOLOv8. You can train the model on a dataset with images or frames showing people with phones. Then, deploy the model for real-time detection.

  • @project-arlo
    @project-arlo 7 months ago +1

    I want to combine YOLOv8 with a ResNet50 classifier to run my custom-trained model, and if it detects certain classes it invokes the classifier and prints the classifier's output as the bounding-box label instead of the detector's class. Are there any resources on this?

    • @Ultralytics
      @Ultralytics  7 months ago

      Officially, we do not offer support for backbone modification, but you have the flexibility to comprehend the YOLOv8 architecture and subsequently customize the classifier to suit your requirements. For additional details, please refer to our documentation: docs.ultralytics.com/models/yolov8/

  • @ShadowD2C
    @ShadowD2C 6 months ago +1

    please slow down 😭

    • @Ultralytics
      @Ultralytics  6 months ago

      Thank you for your feedback. We will inform the presenter about the request for a slower speed voice.

  • @m033372
    @m033372 2 months ago

    Have you encountered any significant challenges or limitations when integrating YOLOv8's output into larger, more complex AI systems? If so, how did you overcome them?

    • @Ultralytics
      @Ultralytics  2 months ago

      Great question! Integrating YOLOv8 into larger AI systems can indeed present challenges, particularly with handling large-scale data and ensuring real-time performance. One common issue is managing the computational load, which can be mitigated by optimizing the model using techniques like quantization or deploying on more powerful hardware. Another challenge is ensuring seamless integration with other components, which can be addressed by using standardized data formats and robust APIs. Make sure you're using the latest versions of `torch` and `ultralytics` for the best performance. For more detailed guidance, check out our documentation at docs.ultralytics.com. If you encounter specific issues, feel free to share more details so we can assist you better! 🚀

  • @pushpendrakushwaha604
    @pushpendrakushwaha604 1 month ago

    Hey, I have a model which does inference at a 768 x 448 image size, so I can't use the predicted image directly because it's low resolution, and I have to show images at a high resolution of 1920 x 1080. I am able to extract the results, but when I plot them at the original size (1920 x 1080) the masks don't come out properly; I mean the masks land a bit outside the bounding boxes on the higher-resolution images. I also tried resizing the masks according to the original image, but that didn't work. How can I fix this?

    • @Ultralytics
      @Ultralytics  1 month ago

      Hey! It sounds like the issue might be with the scaling of the masks when resizing to the original image size. Ensure that the scaling factor is applied consistently to both the bounding boxes and the masks. If you need more detailed guidance, please check out our documentation on handling inference results: docs.ultralytics.com/guides/instance-segmentation-and-tracking/. Also, make sure you're using the latest versions of `torch` and `ultralytics`. If the problem persists, please provide more details about your approach. 😊
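      As a minimal sketch (assuming a YOLOv8 segmentation model), `masks.xy` gives polygon points already scaled to the source image, which avoids resizing the raw mask tensors yourself:
      ```python
      import cv2
      import numpy as np
      from ultralytics import YOLO

      model = YOLO("yolov8n-seg.pt")
      image = cv2.imread("path/to/frame_1920x1080.jpg")  # hypothetical high-resolution frame

      results = model.predict(image)
      if results[0].masks is not None:
          for poly in results[0].masks.xy:  # polygons in original-image pixel coordinates
              cv2.fillPoly(image, [poly.astype(np.int32)], color=(0, 255, 0))
      cv2.imwrite("overlay.jpg", image)
      ```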

  • @soda_YEET
    @soda_YEET 1 month ago

    Is it possible to only get 1 result for each class based on the highest conf value? For example, I have 2 classes, "fruit" and "peduncle". I only want the 1 fruit and 1 peduncle from the detection results with the highest confidence value.

    • @Ultralytics
      @Ultralytics  1 month ago

      Yes, you can achieve that by filtering the detection results to keep only the highest confidence value for each class. You can use the `results` object to access the detections and then apply your filtering logic. For more details on handling results, check out our documentation docs.ultralytics.com/. 😊
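      A minimal sketch of that filtering logic (the image path is a placeholder):
      ```python
      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")
      results = model("path/to/image.jpg")

      # Keep only the highest-confidence detection per class
      best_per_class = {}
      for box in results[0].boxes:
          cls_id, conf = int(box.cls), float(box.conf)
          if cls_id not in best_per_class or conf > best_per_class[cls_id][1]:
              best_per_class[cls_id] = (box.xyxy.squeeze().tolist(), conf)

      for cls_id, (xyxy, conf) in best_per_class.items():
          print(f"{results[0].names[cls_id]}: conf={conf:.2f}, box={xyxy}")
      ```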

  • @AlexChen-f5y
    @AlexChen-f5y 2 months ago

    Is there a performance trade-off when using YOLOv8 for both object detection and segmentation simultaneously? Wondering if smaller projects can handle the increased computational load or if you might need a quantum leap in hardware. Also, does using customized, real-time webcam feeds introduce any additional latency? Would love to hear some insights or wild experiences!

    • @Ultralytics
      @Ultralytics  2 months ago

      Great question! Using YOLOv8 for both object detection and segmentation does increase computational load, but it's generally manageable for smaller projects with decent hardware. For real-time webcam feeds, there might be slight latency, but optimizing settings (e.g., lower resolution, frame rate) can help. For more details on optimizing performance, check out our guide on object cropping: docs.ultralytics.com/guides/object-cropping/. 🚀

  • @ceachibogdan8087
    @ceachibogdan8087 2 months ago

    I saw that during training and validation some preprocessing is applied (normalize + standardize by 255). But when we use .predict, do we have to do it manually, or is it implemented?

    • @Ultralytics
      @Ultralytics  2 months ago

      Great question! When you use `.predict` with Ultralytics YOLOv8, the preprocessing steps like normalization and standardization are automatically handled for you. No need to do it manually! For more details, you can check out our documentation docs.ultralytics.com/modes/predict/. 😊🚀

  • @nathantafelsky7089
    @nathantafelsky7089 8 months ago +2

    The tracking configuration shares properties with Predict mode, so a lot of this applies to tracking as well.

    • @Ultralytics
      @Ultralytics  8 months ago

      Certainly, numerous predict mode arguments are supported in tracking mode.
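      For example (the video path is a placeholder), predict-style arguments pass straight through to `model.track`:
      ```python
      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")
      # conf, iou, and show are predict-mode arguments that also work in tracking mode
      results = model.track("path/to/video.mp4", conf=0.25, iou=0.5, show=True)
      ```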
      Thanks
      Ultralytics Team!

  • @Sasha-n2x
    @Sasha-n2x 1 month ago

    This demo is fantastic, but how scalable is it for larger datasets or real-time video streams on less powerful hardware? Are there any optimization tricks to maintain performance without sacrificing too much accuracy? #TechChat #AskTheExperts

    • @Ultralytics
      @Ultralytics  1 month ago

      Great question! For larger datasets or real-time video streams on less powerful hardware, you can optimize performance by:
      1. Using a smaller model: Opt for a lighter YOLO model variant like `yolov8n.pt`.
      2. Batch processing: Process frames in batches to reduce overhead.
      3. Quantization: Convert your model to a quantized format like TFLite or ONNX for faster inference (see the sketch after this list).
      4. Edge TPU: Utilize hardware accelerators like Coral Edge TPU.
      For more tips on optimizing YOLO models, check out our guide on analytics docs.ultralytics.com/guides/analytics/. 🚀
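      For point 3, a minimal sketch of an ONNX export with the Ultralytics API (the image path is a placeholder):
      ```python
      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")
      model.export(format="onnx", imgsz=320)  # smaller input size for faster inference

      # The exported model loads back through the same API for prediction
      onnx_model = YOLO("yolov8n.onnx")
      results = onnx_model.predict("path/to/image.jpg")
      ```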

  • @enzocampregher5312
    @enzocampregher5312 7 months ago +1

    Hello, excellent job, very good videos! Where can I download the episode code so I can practice?

    • @Ultralytics
      @Ultralytics  7 months ago

      Sure, all codes are available in our docs: docs.ultralytics.com/modes/predict/
      Thanks
      Ultralytics Team!

  • @NicolasDarknessWolf
    @NicolasDarknessWolf 27 days ago

    Is there a way to use the counting method it has, but instead of using the center of the bounding box, use its bottom? I'm a bit stuck with this. I want to take the center, extract the bounding box's height property, divide it by 2, and then move the center down so I'm technically touching the floor of the box and get a more accurate collision reading with a trigger that I'm using...

    • @Ultralytics
      @Ultralytics  27 days ago

      Yes, you can modify the counting method to use the bottom of the bounding box. You can achieve this by adjusting the center coordinates. Here's a quick formula: `bottom_y = center_y + (height / 2)`. This will give you the bottom y-coordinate of the bounding box. For more details, you can check our documentation on object counting: docs.ultralytics.com/guides/object-counting/. If you need further assistance, feel free to ask! 😊
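      A minimal sketch of that adjustment using the extracted boxes (the image path is a placeholder):
      ```python
      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")
      results = model("path/to/image.jpg")

      for box in results[0].boxes:
          x1, y1, x2, y2 = box.xyxy.squeeze().tolist()
          center_x = (x1 + x2) / 2
          bottom_y = y2  # equivalent to center_y + (height / 2)
          print(f"Bottom-center anchor point: ({center_x:.0f}, {bottom_y:.0f})")
      ```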

  • @gdbdff
    @gdbdff 6 months ago +1

    Hello there! New to YOLO and trying to do a project here.
    I made my own YOLOv8 model following you guys' videos and had a question.
    I want to make a simple Python script where the model is loaded with YOLO() and gives its output as it should in detection_output.
    However, I want to add "if" logic, like: if the model detects a specific class from my dataset, it will print something. How am I supposed to do that, like we do with a variety of other open-source models, e.g. from cvzone or something?
    Plz help

    • @Ultralytics
      @Ultralytics  6 months ago

      You can achieve this by iterating through the detection output of your YOLOv8 model in Python and checking for specific classes. Here's a basic example:
      ```python
      for detection in detection_output:
          if detection['class'] == specific_class_index:
              print("Detected specific class! Do something...")
      ```
      Replace `specific_class_index` with the index of the class you're interested in. This allows you to execute custom actions based on the detected classes.

    • @gdbdff
      @gdbdff 6 months ago

      @@Ultralytics
      from ultralytics import YOLO
      import numpy

      # load a pretrained YOLOv8n model
      model = YOLO("path:\to\yolov8_custom.pt")

      # predict on an image
      detection_output = model.predict(source=0, conf=0.25, save=False, show=True)

      # Display tensor array
      print(detection_output.probs)

      # Display numpy array
      print(detection_output[0].numpy())

      for detection in detection_output:
          if detection['class'] == breadboard:
              print("working")
      ----------------------------------------------------------
      This is my full code, sir. I don't know what I did wrong, but the output in PyCharm is this:
      0: 480x640 (no detections), 273.0ms
      0: 480x640 (no detections), 269.0ms
      0: 480x640 1 breadboard, 269.0ms
      0: 480x640 1 breadboard, 269.0ms
      0: 480x640 (no detections), 272.0ms
      0: 480x640 (no detections), 270.0ms
      0: 480x640 (no detections), 268.0ms
      0: 480x640 1 breadboard, 272.0ms
      Is there anything that needs to be modified?

    • @Ultralytics
      @Ultralytics  2 months ago +1

      It looks like you're on the right track, but there are a few adjustments needed. The `detection_output` from YOLOv8 is a list of `Results` objects, and you need to access the `boxes` attribute to get the detected classes. Here's an updated version of your code:
      ```python
      from ultralytics import YOLO

      # Load a pretrained YOLOv8 model
      model = YOLO("path/to/yolov8_custom.pt")

      # Predict on an image (webcam in this case)
      detection_output = model.predict(source=0, conf=0.25, save=False, show=True)

      # Iterate through the detection results
      for result in detection_output:
          for box in result.boxes:
              if int(box.cls) == breadboard_class_index:  # Replace with the actual class index for 'breadboard'
                  print("Detected breadboard! Do something...")
      ```
      Make sure to replace `breadboard_class_index` with the actual index of the 'breadboard' class in your dataset.
      For more details on how to use YOLOv8 in Python, you can refer to our documentation: docs.ultralytics.com/usage/python/.

  • @yakol1220
    @yakol1220 6 months ago +1

    Great video! Any chance of getting this code file?

    • @Ultralytics
      @Ultralytics  6 months ago

      Thanks :) All code samples are available in Ultralytics Docs: docs.ultralytics.com/modes/predict/

  • @max.k3219
    @max.k3219 6 months ago +1

    Thank you for the video 👍. How can you get the coordinates of an oriented bounding box from an image?

    • @Ultralytics
      @Ultralytics  6 months ago

      Below is the provided code snippet for obtaining the coordinates of Oriented Bounding Boxes using Ultralytics YOLOv8.
      ```python
      from ultralytics import YOLO
      from ultralytics.utils.plotting import Annotator, colors
      import cv2

      # Initialize YOLOv8 OBB model
      model = YOLO("yolov8n-obb.pt")
      names = model.names

      # Open video file
      cap = cv2.VideoCapture("Path/to/video/file.mp4")
      assert cap.isOpened(), "Error reading video file"

      while cap.isOpened():
          success, im0 = cap.read()
          if success:
              # Make predictions on each frame
              results = model.predict(im0, show=False)
              pred_boxes = results[0].obb

              # Initialize Annotator for visualization
              annotator = Annotator(im0, line_width=2, example=names)

              # Iterate over predicted bounding boxes and draw on image
              for d in reversed(pred_boxes):
                  box = d.xyxyxyxy.reshape(-1, 4, 2).squeeze()
                  print("Bounding Box Coordinates : ", box)
                  annotator.box_label(box, names[int(d.cls)], color=colors(int(d.cls), True), rotated=True)

              # Display annotated image
              cv2.imshow("ultralytics", im0)

              # Check for key press to exit
              if cv2.waitKey(1) & 0xFF == ord('q'):
                  break
              continue
          break

      # Release video capture and close windows
      cap.release()
      cv2.destroyAllWindows()
      ```
      Thanks

  • @nithinpb7042
    @nithinpb7042 5 months ago +1

    Can we get a similar video for YOLOv9??

    • @Ultralytics
      @Ultralytics  5 months ago +1

      Yes, it's coming soon, maybe in the next 2 weeks :)

  • @nikhilshet4810
    @nikhilshet4810 10 months ago +1

    Is there a way I can get only the label from the object? I need to use it with text-to-speech.

    • @Ultralytics
      @Ultralytics  10 months ago +2

      Certainly, you can retrieve the label using the provided code snippet:
      ```python
      import cv2
      from ultralytics import YOLO

      model = YOLO("yolov8s.pt")
      frame = cv2.imread("path/to/image.jpg")  # or a frame from your video stream

      results = model.predict(frame, verbose=False)
      boxes = results[0].boxes.xywh.cpu()
      clss = results[0].boxes.cls.cpu().tolist()
      names = results[0].names

      for box, cls in zip(boxes, clss):
          x, y, w, h = box
          label = str(names[int(cls)])
          #.....
          #.....
      ```

  • @mahrukhhafeez7398
    @mahrukhhafeez7398 7 months ago +1

    I trained a model in Google Colab and exported it in '.tflite' format. Now I'm working in Visual Studio Code to check the model. It is not working, and I cannot comprehend the problem. It says that I am giving the wrong input. When I give a single image 'image.jpg' it gives the error: ValueError: Cannot set tensor: Dimension mismatch. Got 800 but expected 3 for dimension 1 of input 0. And if I give the image first for preprocessing and then infer with the model, it gives the error: ValueError: Cannot set tensor: Dimension mismatch. Got 3 but expected 800 for dimension 3 of input 0...

    • @Ultralytics
      @Ultralytics  7 months ago

      What is the image size at which you exported the YOLOv8 model to tflite format? Thanks

    • @mahrukhhafeez7398
      @mahrukhhafeez7398 7 months ago +1

      @@Ultralytics Umm, I don't know. How can I find out?

    • @Ultralytics
      @Ultralytics  7 months ago

      For more details, you can check out our documentation at: docs.ultralytics.com/

    • @Ultralytics
      @Ultralytics  3 months ago

      Hi there! 👋 It sounds like you're encountering an issue with input dimensions for your `.tflite` model. To help you better, could you please share more details, such as the exact preprocessing steps you're using and the shape of the input tensor expected by your model? In the meantime, ensure you're using the latest versions of `torch` and `ultralytics`. You can update them with:
      ```
      pip install --upgrade torch ultralytics
      ```
      For more guidance, check out our documentation docs.ultralytics.com and the common issues guide docs.ultralytics.com/guides/yolo-common-issues/. If you need further assistance, feel free to provide additional details here. 😊 Unfortunately, we can't offer private support, but we're here to help in the comments!

  • @johncarlomallabo9158
    @johncarlomallabo9158 5 months ago +1

    How do I use the prediction results to control output devices, for example a servo motor? Thank you very much and have a nice day.

    • @Ultralytics
      @Ultralytics  5 months ago +1

      We offer support for the Jetson Nano. You can follow our QuickStart guide to get started: docs.ultralytics.com/guides/nvidia-jetson/
      Almost the same steps can be used for other embedded devices, except for the installation steps, which can differ for each device. Thanks

    • @johncarlomallabo9158
      @johncarlomallabo9158 5 months ago +1

      @@Ultralytics Thank you very much.

    • @Ultralytics
      @Ultralytics  2 months ago

      You're welcome! If you have any more questions, feel free to ask. Have a great day! 😊

  • @freakinggeek
    @freakinggeek 10 หลายเดือนก่อน +1

    How can I display a real-time message such as 'Person detected' within the frame when a person is identified? For example, if I am running a program in real-time and it detects a person, how do I show the message 'Person detected' directly on the frame?

    • @Ultralytics
      @Ultralytics  10 หลายเดือนก่อน +2

      Sure, the code below displays 'Person detected' directly on the frame whenever a person is identified in the video frame.
      ```python
      import cv2
      from pathlib import Path
      from ultralytics import YOLO
      from ultralytics.utils.plotting import Annotator

      # Load the YOLOv8 model (choose one)
      model = YOLO('yolov8n.pt')  # pre-trained model
      # model = YOLO('path/to/best.pt')  # fine-tuned model

      # Path to video
      video_path = "path/to/video.mp4"
      if not Path(video_path).exists():
          raise FileNotFoundError(f"Source path {video_path} does not exist.")

      names = model.model.names
      cap = cv2.VideoCapture(video_path)

      while cap.isOpened():
          success, frame = cap.read()
          if success:
              results = model.predict(frame)
              boxes = results[0].boxes.xyxy.cpu().numpy().astype(int)
              classes = results[0].boxes.cls.tolist()
              confidences = results[0].boxes.conf.tolist()
              annotator = Annotator(frame, line_width=2, example=str(names))
              for box, cls, conf in zip(boxes, classes, confidences):
                  if names[int(cls)] == "person":
                      annotator.box_label(box, "Person Detected", (255, 42, 4))
              cv2.imshow("YOLOv8 Detection", frame)
              if cv2.waitKey(1) & 0xFF == ord("q"):
                  break
          else:
              break

      cap.release()
      cv2.destroyAllWindows()
      ```

    • @freakinggeek
      @freakinggeek 10 หลายเดือนก่อน +1

      @@Ultralytics Thank you so much. Is there any method to convert this to speech, e.g. if a person is detected, give voice output like 'person detected'?

    • @Ultralytics
      @Ultralytics  10 หลายเดือนก่อน +1

      Third-party text-to-speech tools such as pyttsx3 or gTTS can be used for the speech output.
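      For example, here's a minimal sketch using the third-party `pyttsx3` library (an offline TTS engine, `pip install pyttsx3`) to announce a detected person:
      ```python
      import pyttsx3
      from ultralytics import YOLO

      engine = pyttsx3.init()
      model = YOLO("yolov8s.pt")

      # stream=True yields one Results object per webcam frame
      for result in model.predict(source=0, stream=True):
          labels = {result.names[int(cls)] for cls in result.boxes.cls}
          if "person" in labels:
              engine.say("Person detected")
              engine.runAndWait()  # blocks briefly while speaking
      ```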

  • @xdzn3765
    @xdzn3765 7 หลายเดือนก่อน +1

    Is there a built-in method for sorting the box results by their height, or is there somewhere in YOLO where I can implement this internally?

    • @Ultralytics
      @Ultralytics  7 หลายเดือนก่อน +1

      At the moment, we don't directly support sorting bounding boxes by height/width, but it's easy to implement on top of the results; see the sketch below.
      Thanks,
      The Ultralytics Team!
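      As a starting point, here's a minimal sketch that sorts detections by box height (y2 - y1) in plain Python after inference:
      ```python
      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")
      results = model.predict("ultralytics.com/images/bus.jpg")

      boxes = results[0].boxes.xyxy.cpu().tolist()  # [x1, y1, x2, y2] per detection
      # Sort tallest first; box[3] - box[1] is the pixel height
      boxes_by_height = sorted(boxes, key=lambda b: b[3] - b[1], reverse=True)
      for b in boxes_by_height:
          print(f"height={b[3] - b[1]:.1f}px box={b}")
      ```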

  • @sahilnegi4704
    @sahilnegi4704 8 หลายเดือนก่อน +1

    Hey, can you help me with confidences? Say I want to keep only those predictions with a confidence above 0.8. How can I do that?

    • @Ultralytics
      @Ultralytics  8 หลายเดือนก่อน +1

      Sure, you can simply pass the `conf=0.8` argument to the prediction command, i.e.
      ```yolo predict conf=0.8 source="path/to/video.mp4" ....```
      Thanks
      Ultralytics Team!

    • @sahilnegi4704
      @sahilnegi4704 8 หลายเดือนก่อน +1

      @@Ultralytics Thank you :

    • @Ultralytics
      @Ultralytics  2 หลายเดือนก่อน

      You're welcome! 😊 If you have any more questions, feel free to ask. For more details, check out our FAQ docs.ultralytics.com/help/FAQ/. Happy coding! 🚀

  • @youngneji920
    @youngneji920 10 หลายเดือนก่อน +1

    How can I print the confidence scores for every class id for an image? Say I have 6 classes and a single image. I want to see what the confidence is for every label.

    • @Ultralytics
      @Ultralytics  10 หลายเดือนก่อน

      Absolutely, you can utilize the provided code to display the confidence score for each bounding box.
      ```python
      from ultralytics import YOLO

      # Load a pre-trained YOLOv8n model
      model = YOLO('yolov8n.pt')
      names = model.model.names

      # Perform inference on 'bus.jpg' with specified parameters
      results = model.predict("ultralytics.com/images/bus.jpg", verbose=False, conf=0.5)

      # Process detections
      boxes = results[0].boxes.xywh.cpu()
      clss = results[0].boxes.cls.cpu().tolist()
      confs = results[0].boxes.conf.float().cpu().tolist()
      for box, cls, conf in zip(boxes, clss, confs):
          print(f"Class Name: {names[int(cls)]}, Confidence Score: {conf}, Bounding Box: {box}")
      ```
      Hope this helps. Thanks.

    • @legion4924
      @legion4924 9 หลายเดือนก่อน

      @@Ultralytics How can I save the output from the terminal to a .txt file? I tried using save_txt=True, but the .txt shows only numbers; it doesn't display a class name or any string

    • @Ultralytics
      @Ultralytics  2 หลายเดือนก่อน

      To save the results with class names and confidence scores to a `.txt` file, you can write them out yourself rather than relying on `save_txt`. Here's how you can do it:
      ```python
      from ultralytics import YOLO

      # Load a pre-trained YOLOv8n model
      model = YOLO('yolov8n.pt')
      names = model.model.names

      # Perform inference on 'bus.jpg' with specified parameters
      results = model.predict("ultralytics.com/images/bus.jpg", verbose=False, conf=0.5)

      # Save results to a .txt file
      txt_file = "output.txt"
      with open(txt_file, "w") as f:
          for result in results:
              boxes = result.boxes.xywh.cpu()
              clss = result.boxes.cls.cpu().tolist()
              confs = result.boxes.conf.float().cpu().tolist()
              for box, cls, conf in zip(boxes, clss, confs):
                  f.write(f"Class Name: {names[int(cls)]}, Confidence Score: {conf}, Bounding Box: {box}\n")
      print(f"Results saved to {txt_file}")
      ```
      This script will save the results to `output.txt` with class names, confidence scores, and bounding box coordinates.
      For more details, you can refer to the Ultralytics documentation docs.ultralytics.com/reference/engine/results/.

  • @Rapter_Babu
    @Rapter_Babu 6 หลายเดือนก่อน +1

    How can I get only the segmentation area, not the mask? I want the original cropped image from the video frame.

    • @Ultralytics
      @Ultralytics  6 หลายเดือนก่อน

      You can achieve this effortlessly by leveraging the principles of instance segmentation. For coding implementation, feel free to explore our documentation page at: docs.ultralytics.com/guides/instance-segmentation-and-tracking/#__tabbed_1_1

    • @Rapter_Babu
      @Rapter_Babu 5 หลายเดือนก่อน

      @@Ultralytics yes, but I need the crop in the segmentation shape, not the bounding-box crop

    • @Ultralytics
      @Ultralytics  2 หลายเดือนก่อน

      Got it! To isolate and crop the segmented area, you can follow these steps:
      1. Load the model and run inference:
      ```python
      from ultralytics import YOLO
      model = YOLO("yolov8n-seg.pt")
      results = model.predict(source="path/to/your/video/frame.jpg")
      ```
      2. Generate a binary mask and draw contours:
      ```python
      import cv2
      import numpy as np
      img = np.copy(results[0].orig_img)
      b_mask = np.zeros(img.shape[:2], np.uint8)
      contour = results[0].masks.xy[0].astype(np.int32).reshape(-1, 1, 2)
      cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED)
      ```
      3. Isolate the object using the binary mask:
      ```python
      isolated = cv2.bitwise_and(img, img, mask=b_mask)
      ```
      This will give you the original cropped image based on the segmentation shape. For more detailed steps, check out our guide: docs.ultralytics.com/guides/isolating-segmentation-objects/

  • @aryanmalewar1955
    @aryanmalewar1955 10 วันที่ผ่านมา

    Hey, I am trying to work out how to control the way the model saves the output images in runs/detect/predict. Is there a way to change it? I used the save_dir attribute in the model.predict() function, but the model still saved them in the default way. Also, is it possible to get the total count of the detections predicted by the model in a run?

    • @Ultralytics
      @Ultralytics  10 วันที่ผ่านมา

      To change the save directory, use the `project` and `name` arguments in `model.predict()` (e.g. `project="my_runs", name="exp1"`); these control where outputs land, and `save_dir` being ignored suggests it isn't a recognized argument in your version. For counting detections, you can iterate over the `Results` objects and sum up the detections. If issues persist, ensure you're using the latest versions of `ultralytics` and `torch`. For more details, check out the predict mode documentation docs.ultralytics.com/modes/predict/. 😊

    • @aryanmalewar1955
      @aryanmalewar1955 8 วันที่ผ่านมา

      @@Ultralytics I am using the save_dir parameter correctly. I don't see why it is not working. Iterating over the results gives the number of images that had the desired object; I need the total objects detected across all the images. Is there a way to do it?

    • @Ultralytics
      @Ultralytics  8 วันที่ผ่านมา

      If the save directory still isn't changing, double-check that you're passing `project` and `name` rather than `save_dir`, and make sure you're using the latest package versions. For the total count, iterate over `results` and sum up `len(result.boxes)` for each `result`; this gives the total number of detected objects (see the sketch below). If issues persist, consider checking the documentation or reaching out on our Discord for community support: ultralytics.com/discord.
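      For reference, a minimal sketch of the counting idea, assuming a hypothetical image folder at `path/to/images`:
      ```python
      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")
      results = model.predict(source="path/to/images", stream=True)

      # One Results object per image; len(r.boxes) is the detections in that image
      total = sum(len(r.boxes) for r in results)
      print(f"Total objects detected across all images: {total}")
      ```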

  • @FnuAnkur
    @FnuAnkur 9 หลายเดือนก่อน +1

    Why do you use model.fuse()?

    • @Ultralytics
      @Ultralytics  9 หลายเดือนก่อน +1

      `model.fuse()` in Ultralytics is used to optimize inference performance by combining certain operations, such as convolutions and batch normalization, into a single fused operation for efficiency.
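      For instance, a minimal sketch of calling it explicitly before inference (prediction typically fuses the model for you, so the explicit call mainly matters for benchmarking):
      ```python
      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")
      model.fuse()  # merge Conv2d + BatchNorm2d layers into single ops
      results = model.predict("ultralytics.com/images/bus.jpg")
      ```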

  • @mohamedahmed-rc3tr
    @mohamedahmed-rc3tr 6 หลายเดือนก่อน

    How do I extract the resulting image or video, and how do I show it?

    • @Ultralytics
      @Ultralytics  6 หลายเดือนก่อน

      You can use the code below to display the resulting image.
      ```python
      from PIL import Image
      from ultralytics import YOLO

      # Load a pretrained YOLOv8n model
      model = YOLO('yolov8n.pt')

      # Run inference on a list of images
      results = model(['bus.jpg', 'zidane.jpg'])  # results list

      # Visualize the results
      for i, r in enumerate(results):
          # Plot results image
          im_bgr = r.plot()  # BGR-order numpy array
          im_rgb = Image.fromarray(im_bgr[..., ::-1])  # RGB-order PIL image

          # Show results to screen (in supported environments)
          r.show()

          # Save results to disk
          r.save(filename=f'results{i}.jpg')
      ```
      For more information, you can explore our Predict docs available at: docs.ultralytics.com/modes/predict/#plotting-results

  • @Melo7ia
    @Melo7ia 2 หลายเดือนก่อน

    Oh, Nicolai, you make it look so suave with the YOLOv8 magic on display! I'm stoked to see the toolbox unpacked. Question, though: does handling the extraction process on a live feed impact system performance significantly, especially for high-res inputs? Asking for a friend with a not-so-beefy GPU! Also, any tips for optimizing performance while maintaining detection accuracy? 🎶 Anyone else in the same boat?

    • @Ultralytics
      @Ultralytics  2 หลายเดือนก่อน

      Thanks for your kind words! 😊 Yes, processing a live stream with YOLOv8 can impact system performance, especially with high-resolution inputs and less powerful GPUs. Here are some tips for optimizing performance without sacrificing too much accuracy:
      1. Reduce the input resolution: processing smaller images can significantly speed up inference time.
      2. Use a lighter model: opt for smaller YOLOv8 variants, such as YOLOv8n (nano) or YOLOv8s (small).
      3. Take advantage of batch inference: if possible, process multiple frames at once.
      4. Optimize the model: consider converting the model to optimized formats such as TensorRT or OpenVINO.
      For more details, see our optimization guide here: Optimizing YOLOv8 docs.ultralytics.com/guides/optimizing-openvino-latency-vs-throughput-modes/.
      Hope this helps! 🚀
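      For example, a minimal sketch combining tips 1 and 2, assuming a webcam at index 0 (`half=True` only takes effect on a CUDA device):
      ```python
      from ultralytics import YOLO

      # Nano model + smaller input size for lighter GPUs
      model = YOLO("yolov8n.pt")
      for result in model.predict(source=0, imgsz=320, half=True, stream=True):
          print(len(result.boxes), "objects in frame")
      ```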

  • @yaZ-p2h
    @yaZ-p2h หลายเดือนก่อน

    I am having a problem with YOLOv8: when I run it, it draws bounding boxes everywhere and a bunch of different classes against my blank wall. What is the fix for this inaccuracy?

    • @Ultralytics
      @Ultralytics  หลายเดือนก่อน

      It sounds like your model might be overfitting or not trained properly. Here are a few steps you can take to address this:
      1. Check Your Dataset: Ensure your training dataset is well-annotated and diverse. Poor-quality data can lead to inaccurate predictions.
      2. Verify Training Settings: Make sure your training configuration (e.g., learning rate, batch size) is appropriate for your dataset.
      3. Regularization Techniques: Consider using techniques like data augmentation to improve model generalization.
      4. Evaluate Model Performance: Monitor metrics like precision, recall, and mAP during training to ensure the model is learning correctly.
      For more detailed troubleshooting, check out our guide on common YOLO issues: YOLO Common Issues docs.ultralytics.com/guides/yolo-common-issues/.
      If you need further assistance, please provide more details about your training setup and dataset.

    • @yaZ-p2h
      @yaZ-p2h หลายเดือนก่อน

      @@Ultralytics when I run it from the command line it works fine; however, when I run it in Python the inaccuracies start happening

    • @Ultralytics
      @Ultralytics  หลายเดือนก่อน

      It sounds like there might be a discrepancy between your CLI and Python setups. Here are a few things to check:
      1. Environment Consistency: Ensure that the Python environment you're using matches the one used in the CLI. Check versions of `torch`, `ultralytics`, and other dependencies.
      2. Model Configuration: Verify that the model configuration and weights used in Python are the same as those in the CLI.
      3. Inference Settings: Ensure that inference settings like confidence threshold and image size are consistent between CLI and Python.
      For more detailed troubleshooting, you can refer to our guide on common YOLO issues: YOLO Common Issues docs.ultralytics.com/guides/yolo-common-issues/.
      If the problem persists, please share more details about your Python script and the specific inaccuracies you're encountering.

  • @ronithrock2015
    @ronithrock2015 2 หลายเดือนก่อน

    HEY @Ultralytics please make a video on extracting the locations of the bounding boxes given by the YOLOv8 model

    • @Ultralytics
      @Ultralytics  2 หลายเดือนก่อน

      Hi there! Thanks for your suggestion! Extracting bounding box locations from YOLOv8 is a great topic. In the meantime, you can check out our documentation on this at YOLOv8 Docs docs.ultralytics.com. Make sure you're using the latest versions of `torch` and `ultralytics` for the best experience. Stay tuned for more tutorials! 🚀
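      As a quick reference, here's a minimal sketch of pulling box locations out of a prediction result:
      ```python
      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")
      results = model.predict("ultralytics.com/images/bus.jpg")

      # xyxy format: top-left and bottom-right corners in pixels
      for box in results[0].boxes.xyxy.cpu().tolist():
          x1, y1, x2, y2 = box
          print(f"({x1:.0f}, {y1:.0f}) -> ({x2:.0f}, {y2:.0f})")
      ```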

  • @LunaStargazer-v1s
    @LunaStargazer-v1s หลายเดือนก่อน

    Nicolai, this was such an illuminating tutorial! Just wondering, can these extracted outputs be dynamically integrated with augmented reality applications in real-time? Imagine the wonders we could build merging virtual dreams and physical realities!

    • @Ultralytics
      @Ultralytics  หลายเดือนก่อน

      Absolutely, integrating YOLOv8 outputs with augmented reality (AR) in real-time is a fantastic idea! YOLOv8's real-time object detection capabilities can indeed be used to enhance AR applications, providing dynamic interactions between virtual and physical worlds. For more details on how to implement this, check out our documentation: docs.ultralytics.com/. Keep dreaming big! 🚀

  • @fouadboutaleb4157
    @fouadboutaleb4157 11 หลายเดือนก่อน +1

    Hello, is there a method to generate a video where I can detect faces? Specifically, I'd like to take an input video, identify faces within it, and produce an output video consisting of cropped face segments with consistent dimensions.
    thanks

    • @Ultralytics
      @Ultralytics  11 หลายเดือนก่อน

      Yes, you can do this. After detecting the face, you can crop it and write it to the output video file while ensuring that each cropped face is resized to the same dimensions. Pseudocode is below (it assumes `im0_detected` is the current frame, `x1, y1, x2, y2` are the face box coordinates, and `videowriter` is an open cv2.VideoWriter):
      ```python
      face_crop = im0_detected[int(y1):int(y2), int(x1):int(x2)]
      face_resized = cv2.resize(face_crop, (416, 416))
      videowriter.write(face_resized)
      ```

    • @fouadboutaleb4157
      @fouadboutaleb4157 11 หลายเดือนก่อน +1

      @@Ultralytics thanks for the response! But the resize function is distorting the results; I'll try to find a way. Thanks anyway

    • @Ultralytics
      @Ultralytics  2 หลายเดือนก่อน

      You're welcome! If resizing is distorting the results, you might want to maintain the aspect ratio by padding the cropped faces; see the sketch below. Check out our detailed guide on object cropping for more tips: docs.ultralytics.com/guides/object-cropping/. Good luck! 😊
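      For what it's worth, a minimal letterbox-style sketch that squares a crop without distortion (the 416 target size and gray fill are arbitrary choices):
      ```python
      import cv2

      def pad_to_square(crop, size=416, fill=(114, 114, 114)):
          """Resize while preserving aspect ratio, then pad to a square."""
          h, w = crop.shape[:2]
          scale = size / max(h, w)
          resized = cv2.resize(crop, (int(w * scale), int(h * scale)))
          rh, rw = resized.shape[:2]
          top, left = (size - rh) // 2, (size - rw) // 2
          return cv2.copyMakeBorder(resized, top, size - rh - top, left, size - rw - left,
                                    cv2.BORDER_CONSTANT, value=fill)
      ```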

  • @DoomsdayDatabase
    @DoomsdayDatabase 10 หลายเดือนก่อน

    Hi, I want to figure out where the live detection results are stored (I am using a webcam), and I want it to speak out the results using pyttsx3 or any TTS engine. My code so far is given below; I am planning on integrating it with the rest of the project.
    from ultralytics import YOLO
    model = YOLO("yolov8l")
    results = model.predict(source="0", show=True, conf=0.5)
    results.show()
    Thanks in anticipation of help! And thanks for keeping YOLO free for all!

    • @Ultralytics
      @Ultralytics  10 หลายเดือนก่อน

      The live detection results will be saved in the 'runs/detect/predict' folder. Ensure that you include the save argument to store the output results. The modified code is provided below. Note that `results` is a list of `Results` objects, so `results.show()` won't work; index a single result, e.g. `results[0].show()`, if you want to display one.
      ```python
      from ultralytics import YOLO

      model = YOLO("yolov8l")
      results = model.predict(source="0", show=True, conf=0.5, save=True)
      ```
      Thank you.

  • @alfred8294
    @alfred8294 7 หลายเดือนก่อน +3

    Can we get another person to talk on these videos please? This guy speaks way too fast and it is sometimes hard to understand what he is talking about.

    • @Ultralytics
      @Ultralytics  7 หลายเดือนก่อน

      Appreciate you sharing your experience! We'll have a discussion with the creator and make the necessary updates. You can expect clearer narration and better pacing in the upcoming videos.
      Thanks
      Ultralytics Team!

    • @alfred8294
      @alfred8294 7 หลายเดือนก่อน +1

      @@Ultralytics Thanks for listening. Just take things a little slower please. Thanks again.

    • @shyamsaseethar2602
      @shyamsaseethar2602 7 หลายเดือนก่อน +2

      @alfred You do have the option to watch the video at 0.5x speed

    • @alfred8294
      @alfred8294 7 หลายเดือนก่อน

      @@shyamsaseethar2602 Very clever. Will try.

  • @jesuspeguero7695
    @jesuspeguero7695 ปีที่แล้ว +1

    hi, how do i extract the section areas in the version of yolo that can do section

    • @Ultralytics
      @Ultralytics  ปีที่แล้ว

      Hi there! To extract section areas using the YOLO model, you can start by labeling the relevant sections as bounding boxes on your document. After that, you can train a YOLOv8 model using these labeled bounding boxes.

  • @achrafelmadany6433
    @achrafelmadany6433 ปีที่แล้ว +1

    the code isn't working for me 😢

    • @Ultralytics
      @Ultralytics  ปีที่แล้ว

      If you encounter any code-related issues, please feel free to open an issue in the Ultralytics GitHub Repository here: github.com/ultralytics/ultralytics/issues

  • @SanaaMahmoud-e9w
    @SanaaMahmoud-e9w 8 หลายเดือนก่อน +1

    If I make more than one model using YOLOv8 and want to combine them or multitask them to work in real time, how can I do this?

    • @Ultralytics
      @Ultralytics  8 หลายเดือนก่อน

      There is currently no direct method to achieve this. The recommended approach is to retrain the model by fine-tuning it with annotations for all classes. This process will enable the model to detect all the specific objects you are interested in.
      Regards,
      Ultralytics Team!

    • @SanaaMahmoud-e9w
      @SanaaMahmoud-e9w 8 หลายเดือนก่อน +1

      Really? But it takes more time, doesn't it? In more detail, our project is a self-driving car: one model for a dataset of bumps, another model for a dataset of signs and traffic lights, another for cars and pedestrians, and finally segmentation to detect the lane. But all the models use YOLOv8 @@Ultralytics

    • @SanaaMahmoud-e9w
      @SanaaMahmoud-e9w 8 หลายเดือนก่อน +1

      Is there any way to combine these models? And thanks again for your reply

    • @Ultralytics
      @Ultralytics  8 หลายเดือนก่อน

      Maybe then you can try running multiple models using multi-threading; we have provided detailed information about this here: docs.ultralytics.com/modes/track/ (see the sketch below).
      Thanks
      Ultralytics Team!
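      As a rough sketch of that idea, one thread per model (the weight files here are hypothetical placeholders for your trained models):
      ```python
      import threading

      from ultralytics import YOLO

      def run(weights, source):
          model = YOLO(weights)  # each thread loads its own model instance
          for result in model.predict(source=source, stream=True):
              pass  # handle this model's detections here

      threads = [
          threading.Thread(target=run, args=("bumps.pt", "drive.mp4")),
          threading.Thread(target=run, args=("signs.pt", "drive.mp4")),
      ]
      for t in threads:
          t.start()
      for t in threads:
          t.join()
      ```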

    • @SanaaMahmoud-e9w
      @SanaaMahmoud-e9w 8 หลายเดือนก่อน +1

      thanks @@Ultralytics

  • @stefangraham7117
    @stefangraham7117 10 หลายเดือนก่อน +1

    Great content. Is it possible to see the entire code that you used? I have searched the GitHub but I don't see this code. Also, is it possible you have a video on extracting the bounding box picture for a conf

    • @Ultralytics
      @Ultralytics  10 หลายเดือนก่อน

      If you wish to retrieve the bounding box coordinates, confidence score, and class name for each object, you can employ the code below. It extracts bounding boxes with a confidence score greater than or equal to 0.5.
      ```python
      from ultralytics import YOLO

      # Load a pre-trained YOLOv8n model
      model = YOLO('yolov8n.pt')
      names = model.model.names

      # Perform inference on 'bus.jpg' with conf=0.5
      results = model.predict("ultralytics.com/images/bus.jpg", verbose=False, conf=0.5)

      # Process detections
      boxes = results[0].boxes.xywh.cpu()
      clss = results[0].boxes.cls.cpu().tolist()
      confs = results[0].boxes.conf.float().cpu().tolist()
      for box, cls, conf in zip(boxes, clss, confs):
          print(f"Class Name: {names[int(cls)]}, Confidence Score: {conf}, Bounding Box: {box}")
      ```
      Hope this helps. Thanks.

    • @stefangraham7117
      @stefangraham7117 10 หลายเดือนก่อน +1

      @@Ultralytics thank you for the response. My intention is to run it with my webcam. I have added 'source=0'. It then asks for 'stream=True', then shows repetitive warnings without bringing up the image window to show the results.

    • @Ultralytics
      @Ultralytics  10 หลายเดือนก่อน

      @@stefangraham7117 This could be due to insufficient memory.

    • @stefangraham7117
      @stefangraham7117 10 หลายเดือนก่อน

      @@Ultralytics I mean, the system I am using has 32GB of RAM. Could there be a part of the code I can tweak to limit the memory usage?

    • @Ultralytics
      @Ultralytics  2 หลายเดือนก่อน

      @@stefangraham7117 With 32GB of RAM, memory shouldn't be an issue. Let's ensure you're using the latest versions of `torch` and `ultralytics`. You can update them using:
      ```bash
      pip install --upgrade torch ultralytics
      ```
      If the issue persists, please share the exact warnings you're seeing. For more details on setting up a security alarm system with YOLOv8, check out our guide: docs.ultralytics.com/guides/security-alarm-system/.

  • @SubSquadTV2024
    @SubSquadTV2024 8 หลายเดือนก่อน

    Just wanted to ask, how to get the class name of the live result. Thank you very much.

    • @Ultralytics
      @Ultralytics  8 หลายเดือนก่อน +2

      You can obtain the class names by using the following code after loading the model:
      ```python
      from ultralytics import YOLO

      model = YOLO('yolov8n.pt')
      classes_names = model.names  # dict mapping class index -> class name
      ```
      Thanks
      Ultralytics Team!

  • @okaysummer9653
    @okaysummer9653 ปีที่แล้ว

    I use YOLOv8 to predict on multiple streams, but there seems to be no way to know which stream a result came from. Anyone know how to deal with this?

    • @Ultralytics
      @Ultralytics  ปีที่แล้ว

      To implement multi-stream Object Tracking, you can refer to the Ultralytics Docs: docs.ultralytics.com/modes/track/#multithreaded-tracking.
      Please keep in mind that if you wish to perform object detection on multiple streams instead, you can replace 'track' with 'predict'.
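      If you need to know which stream a result came from, a minimal sketch is to run one thread per source and tag the output yourself (the video files here are hypothetical):
      ```python
      import threading

      from ultralytics import YOLO

      def run_stream(stream_id, source):
          model = YOLO("yolov8n.pt")
          for result in model.predict(source=source, stream=True):
              # Everything handled here is known to belong to stream_id
              print(f"stream {stream_id}: {len(result.boxes)} objects")

      sources = {0: "video_a.mp4", 1: "video_b.mp4"}
      threads = [threading.Thread(target=run_stream, args=(i, s)) for i, s in sources.items()]
      for t in threads:
          t.start()
      for t in threads:
          t.join()
      ```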

  • @adepusairahul7375
    @adepusairahul7375 11 หลายเดือนก่อน

    can we get the code which you are using, so that we can understand how to use the python code for running it

    • @Ultralytics
      @Ultralytics  11 หลายเดือนก่อน

      We are creating Colab notebooks that will include the code for our TH-cam videos; we will share them soon! Thanks

    • @karthikeyareddybs6403
      @karthikeyareddybs6403 10 หลายเดือนก่อน +1

      When will the Google Colab notebooks be available? @@Ultralytics

    • @Ultralytics
      @Ultralytics  10 หลายเดือนก่อน

      Notebooks will be available at the end of this week! Thanks

  • @ameliawong9544
    @ameliawong9544 ปีที่แล้ว +2

    hello, can you release the full code?

    • @Ultralytics
      @Ultralytics  ปีที่แล้ว

      You can get access to all YOLOv8 code at github.com/ultralytics/ultralytics

  • @DaljitSingh-tr8oo
    @DaljitSingh-tr8oo 11 หลายเดือนก่อน

    Great video. But just wondering: YOLOv8 has a count that can be viewed via a CLI command. Is there an example where I would be able to return the count in JSON?

    • @Ultralytics
      @Ultralytics  11 หลายเดือนก่อน

      We are uncertain about which count you're inquiring about - whether it's the object counting module or the total count of objects within a frame.

    • @DaljitSingh-tr8oo
      @DaljitSingh-tr8oo 11 หลายเดือนก่อน

      Hi @@Ultralytics, for example I have an image of 5 cars and 1 truck. How can I get a response that says how many cars and trucks it found within the image?

    • @DaljitSingh-tr8oo
      @DaljitSingh-tr8oo 11 หลายเดือนก่อน

      It's the total count of objects within a frame.

    • @Ultralytics
      @Ultralytics  2 หลายเดือนก่อน

      Got it! You can use Ultralytics YOLOv8 to count objects and return the results in JSON format. Here's a concise example:
      ```python
      import json
      from collections import Counter

      from ultralytics import YOLO

      # Load the model
      model = YOLO("yolov8n.pt")

      # Perform inference
      results = model("path/to/your/image.jpg")

      # Count detections per class name
      clss = results[0].boxes.cls.tolist()
      counts = Counter(model.names[int(c)] for c in clss)

      # Convert to JSON, e.g. {"car": 5, "truck": 1}
      json_counts = json.dumps(counts)
      print(json_counts)
      ```
      This will give you a JSON response with the count of each object class. For more details, check out our object counting guide docs.ultralytics.com/guides/object-counting/.

  • @March_Awake
    @March_Awake ปีที่แล้ว

    Thanks more

  • @KevinRamirez-yl9lc
    @KevinRamirez-yl9lc 2 หลายเดือนก่อน

    I try to run code for real-time object detection on my local PC, but the result is wrong, because it draws the rectangle in another place. This is the code:
    from ultralytics import YOLO
    import cv2
    import numpy as np

    model = YOLO("yolov8n.pt")
    labels = model.names
    COLORS = np.random.randint(0, 255, size=(len(labels), 3), dtype="int64")
    cap = cv2.VideoCapture(0)
    while True:
        ret, frame = cap.read()
        if not ret:
            print("Cant detect camera")
            break
        results = model.track(frame, stream=True)
        for result in results:
            classes_names = result.names
            for box in result.boxes:
                if box.conf[0] > 0.6:
                    [x1, y1, x2, y2] = box.xyxy[0]
                    x1, y1, x2, y2 = int(x1), int(y1), int(x2), int(y2)
                    label_index = int(box.cls[0])
                    label_class = classes_names[label_index]
                    color = (
                        int(COLORS[label_index][0]),
                        int(COLORS[label_index][1]),
                        int(COLORS[label_index][2]),
                    )
                    cv2.rectangle(frame, (x1, y1), (x2, y2), color, 2)
                    cv2.putText(
                        frame,
                        f"{classes_names[int(label_index)]}{box.conf[0]}:.2f",
                        (x1, y1),
                        cv2.FONT_HERSHEY_SIMPLEX,
                        1,
                        color,
                        2,
                    )
        cv2.imshow("Frame", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

    • @Ultralytics
      @Ultralytics  2 หลายเดือนก่อน

      It looks like there might be an issue with how the bounding box coordinates are being processed or drawn. Let's ensure that the coordinates are correctly extracted and used. Also, make sure you're using the latest versions of `torch` and `ultralytics`. Here's a refined version of your code:
      ```python
      from ultralytics import YOLO
      import cv2
      import numpy as np

      # Load the YOLOv8 model
      model = YOLO("yolov8n.pt")

      # Get class labels
      labels = model.names

      # Generate random colors for each label
      COLORS = np.random.randint(0, 255, size=(len(labels), 3), dtype="int64")

      # Open the webcam
      cap = cv2.VideoCapture(0)

      while True:
          ret, frame = cap.read()
          if not ret:
              print("Can't detect camera")
              break

          # Run inference
          results = model(frame, stream=True)

          for result in results:
              for box in result.boxes:
                  if box.conf[0] > 0.6:
                      x1, y1, x2, y2 = map(int, box.xyxy[0])
                      label_index = int(box.cls[0])
                      label_class = labels[label_index]
                      color = tuple(map(int, COLORS[label_index]))

                      # Draw the bounding box and label
                      cv2.rectangle(frame, (x1, y1), (x2, y2), color, 2)
                      cv2.putText(
                          frame,
                          f"{label_class} {box.conf[0]:.2f}",
                          (x1, y1 - 10),
                          cv2.FONT_HERSHEY_SIMPLEX,
                          0.5,
                          color,
                          2,
                      )

          # Display the frame
          cv2.imshow("Frame", frame)

          # Break the loop if 'q' is pressed
          if cv2.waitKey(1) & 0xFF == ord("q"):
              break

      cap.release()
      cv2.destroyAllWindows()
      ```
      Make sure to check the following:
      1. Ensure you have the latest versions of `torch` and `ultralytics` installed.
      2. Verify that your webcam is working correctly.
      For more detailed information on using the `predict` mode, you can refer to the Ultralytics YOLOv8 documentation docs.ultralytics.com/modes/predict/. If the issue persists, please provide more details about the environment and any error messages you encounter.