FTC 18225 High Definition WA State Control Award Video Submission (Freight Frenzy 2021-2022)

  • Published Sep 15, 2024

Comments • 56

  • @cool_syder_4207 · 2 years ago · +28

    If anyone deserves the control award, it has got to be you guys!

  • @ToeDexterity · 2 years ago · +6

    This is really off-the-charts amazing! Well done!!

  • @rjhuang7650 · 2 years ago · +2

    This is awesome. The robot is intelligent.

  • @robosapiens7051 · 2 years ago · +7

    Hey guys! This is easily one of the coolest robots I have seen this season. The only question I had is what you guys used for your intelligent claw: do you use regular cameras or any special programs? Thanks again and congrats on making worlds!

    • @highdefinition6017 · 2 years ago · +2

      We use a regular Logitech webcam (I'm not sure which specific model), and we don't use any libraries for detection; all the code used for processing the image was developed by our team.

  • @elywickander4666 · 2 years ago

    most advanced clawbot ever made

  • @divinaabiodun · 1 year ago

    Good job, keep it up 😜😜

  • @elementalsb8112 · 2 years ago · +1

    This is some crazy vision! I was wondering if you knew where I could go to learn more about computer vision, TensorFlow, and/or OpenCV, like commands and how to really understand and utilize the vision. I guess what I'm asking is how you learned to use TensorFlow Lite and what I could do to master it like you. Thanks a lot :)

    • @highdefinition6017 · 2 years ago

      Hi! We learned TensorFlow Lite basically through trying it out and seeing what it was capable of. There are plenty of helpful resources in the external samples in the FtcRobotController folder, for example this one: github.com/FIRST-Tech-Challenge/FtcRobotController/blob/master/FtcRobotController/src/main/java/org/firstinspires/ftc/robotcontroller/external/samples/ConceptTensorFlowObjectDetection.java
      We've found that TensorFlow works well for game elements (a new TFLite model is provided every year), but we found it difficult to use for custom objects and other use cases like warehouse freight detection. That's why we decided to create a custom vision algorithm, which you can see at 0:33. If you want to learn more about our custom vision algorithms, feel free to contact our lead programmer at null_awe#0184 on Discord.
      There are probably great videos on how to use OpenCV for FTC on YouTube, but we haven't really used OpenCV, at least not yet.
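      For reference, a condensed sketch of what that linked sample sets up, approximated from the Freight Frenzy-era FTC SDK (class names and the model asset may differ by SDK version):

      import java.util.List;
      import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
      import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
      import org.firstinspires.ftc.robotcore.external.ClassFactory;
      import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
      import org.firstinspires.ftc.robotcore.external.navigation.VuforiaLocalizer;
      import org.firstinspires.ftc.robotcore.external.tfod.Recognition;
      import org.firstinspires.ftc.robotcore.external.tfod.TFObjectDetector;

      @Autonomous(name = "TfodSketch")
      public class TfodSketch extends LinearOpMode {
          private static final String VUFORIA_KEY = "<your Vuforia license key>";

          @Override public void runOpMode() {
              // Vuforia supplies camera frames to TensorFlow in this SDK generation.
              VuforiaLocalizer.Parameters vp = new VuforiaLocalizer.Parameters();
              vp.vuforiaLicenseKey = VUFORIA_KEY;
              vp.cameraName = hardwareMap.get(WebcamName.class, "Webcam 1");
              VuforiaLocalizer vuforia = ClassFactory.getInstance().createVuforia(vp);

              int viewId = hardwareMap.appContext.getResources().getIdentifier(
                      "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
              TFObjectDetector.Parameters tp = new TFObjectDetector.Parameters(viewId);
              tp.minResultConfidence = 0.75f;
              TFObjectDetector tfod = ClassFactory.getInstance().createTFObjectDetector(tp, vuforia);
              tfod.loadModelFromAsset("FreightFrenzy_BCDM.tflite",
                      "Ball", "Cube", "Duck", "Marker");  // season model + labels
              tfod.activate();

              waitForStart();
              while (opModeIsActive()) {
                  List<Recognition> recs = tfod.getUpdatedRecognitions();  // null if no new frame
                  if (recs == null) continue;
                  for (Recognition r : recs) {
                      telemetry.addData(r.getLabel(), "%.2f", r.getConfidence());
                  }
                  telemetry.update();
              }
              tfod.shutdown();
          }
      }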

  • @spidernh · 2 years ago · +1

    Congrats on worlds!

  • @timmyytjr1131 · 2 years ago · +1

    Omg you guys are absolutely crazy! Do you use a motion planning library like Roadrunner, or did you custom-make the movements? And as for the hybrid PID, how did you create a tuner for that?
    I can't wait to see you guys at worlds! I'm definitely going for y'all's pins if you have any lol

    • @highdefinition6017 · 2 years ago · +1

      Nope, everything is custom made! For the PID tuner, we have a base class that handles all the normal PID logic for any subsystem; to add a new subsystem, all we have to do is implement a few methods (getError, setPower, cancel), and then it's ready to be tuned in tele-op.
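      A minimal sketch of that kind of base class (hypothetical names, not the team's actual code):

      // Hypothetical sketch: a PID base class where each subsystem only
      // implements getError, setPower, and cancel, as described above.
      public abstract class PidSubsystem {
          private final double kP, kI, kD;
          private double integral, lastError;
          private long lastTimeNanos = -1;

          protected PidSubsystem(double kP, double kI, double kD) {
              this.kP = kP; this.kI = kI; this.kD = kD;
          }

          // Subsystem-specific hooks: the only methods a new mechanism implements.
          protected abstract double getError();           // e.g. target - current position
          protected abstract void setPower(double power); // drive the mechanism
          public abstract void cancel();                  // stop the mechanism safely

          // Call once per loop iteration (e.g. from a tele-op tuning op-mode).
          public void update() {
              long now = System.nanoTime();
              double dt = lastTimeNanos < 0 ? 0 : (now - lastTimeNanos) / 1e9;
              lastTimeNanos = now;

              double error = getError();
              integral += error * dt;
              double derivative = dt > 0 ? (error - lastError) / dt : 0;
              lastError = error;

              setPower(kP * error + kI * integral + kD * derivative);
          }
      }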

  • @pjwetherell9414 · 2 years ago

    Did you run your vision in a separate thread? How long did it take to get a picture and fully process it?
    Did you attempt to find the real-world coordinates of the freight? If so, how accurate was that?

  • @yotamdubiner2545 · 2 years ago

    Wow! Amazing! I'm definitely going to try to implement that kind of intelligence in our bot.

    • @highdefinition6017 · 2 years ago

      Feel free to reach out if you have any questions!

    • @yotamdubiner2545 · 2 years ago

      @@highdefinition6017 actually, can you give me an idea of where to start? Maybe some theory, what to Google, etc.

    • @highdefinition6017 · 2 years ago

      @@yotamdubiner2545 Try looking at the sample class in external samples called "ConceptWebcam". It teaches you how to retrieve image frames from the webcam directly. Then, the most important thing is converting the pixels' RGB color format to HSV, which is much easier to work with (filtering in particular is far simpler in HSV).
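      A self-contained sketch of that RGB-to-HSV step, using the standard conversion formula (on Android you could also call android.graphics.Color.RGBToHSV); this is illustrative, not the team's code:

      // Standard RGB -> HSV conversion for one pixel.
      public final class RgbToHsv {
          /** r, g, b in [0, 255] -> { hue in [0, 360), sat in [0, 1], val in [0, 1] }. */
          public static float[] convert(int r, int g, int b) {
              float rf = r / 255f, gf = g / 255f, bf = b / 255f;
              float max = Math.max(rf, Math.max(gf, bf));
              float min = Math.min(rf, Math.min(gf, bf));
              float delta = max - min;

              float h;
              if (delta == 0)     h = 0;  // gray: hue undefined, use 0
              else if (max == rf) h = 60 * (((gf - bf) / delta) % 6);
              else if (max == gf) h = 60 * (((bf - rf) / delta) + 2);
              else                h = 60 * (((rf - gf) / delta) + 4);
              if (h < 0) h += 360;

              float s = max == 0 ? 0 : delta / max;
              return new float[] { h, s, max };
          }

          public static void main(String[] args) {
              float[] hsv = convert(255, 230, 20);  // a yellow-ish pixel: hue near 54
              System.out.printf("H=%.1f S=%.2f V=%.2f%n", hsv[0], hsv[1], hsv[2]);
          }
      }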

    • @yotamdubiner2545 · 2 years ago

      @@highdefinition6017 I know how to do that. I'm familiar with EOCV. I just need to know how you determine the distance to the object from its pixel position in the image

    • @yotamdubiner2545 · 2 years ago

      @@highdefinition6017 and how you determine the angle

  • @brandon-mz1vs · 2 years ago · +1

    Nice robot. Can your intelligent claw detect the individual blocks at the start of the game when they are all clumped together?

    • @highdefinition6017 · 2 years ago · +1

      Yep! We don't actually detect all of the blocks, only the closest one (the lowest in the image). Even when they're clumped together, it picks out the single lowest block.
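      A sketch of that "lowest in the image" selection (hypothetical detection type, not the team's code); image y grows downward, so the largest y is the block closest to the camera:

      import java.util.Arrays;
      import java.util.List;

      public final class LowestBlock {
          static final class Detection {
              final double centerX, centerY;  // pixel coordinates; y grows downward
              Detection(double x, double y) { centerX = x; centerY = y; }
          }

          /** Returns the detection lowest in the image (largest y), or null if none. */
          static Detection lowest(List<Detection> detections) {
              Detection best = null;
              for (Detection d : detections) {
                  if (best == null || d.centerY > best.centerY) best = d;
              }
              return best;
          }

          public static void main(String[] args) {
              List<Detection> clump = Arrays.asList(
                      new Detection(120, 200), new Detection(150, 260), new Detection(90, 240));
              System.out.println("Target block at y = " + lowest(clump).centerY);  // 260.0
          }
      }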

  • @tharunkumara.r229 · 2 years ago

    DANG, that's amazing! Both your software and hardware are just top notch! I have a quick question: what motors did you all use for your turret?

  • @robotkg6540 · 2 years ago

    How did you create the rotating platform? 3D print? Custom made from some material? Does it use a servo or a motor to rotate?

    • @highdefinition6017 · 2 years ago

      Cool question! Our turret is a lazy susan bearing powered by a motor. A carbon fiber plate is attached on top of the lazy susan bearing so that we can easily attach our delivery arm. We also have another lazy susan bearing on the intake arm, but that one is custom 3D printed.

  • @useruseruseruser-i6s · 2 years ago

    wowww!!

  • @17viKing17 · 2 years ago

    Hello, tell me please: are you using Dynamixel 12A servos? If so, please tell me how you managed to reflash them to work with the Control Hub, and which library did you use?

    • @highdefinition6017 · 2 years ago

      We're using 11 GoBilda servos and one Savox servo, not Dynamixel servos. I'm not sure what you mean by reflashing them for the Control Hub.

  • @honeykohms8345 · 2 years ago

    This is absolutely insane. Congrats on getting to worlds!
    Quick question: I noticed that in your detection code you have set it up to use a webcam. Is it possible to achieve this with the phone camera as well?

    • @highdefinition6017 · 2 years ago

      We actually haven't tried it with a phone camera, although we've been asked this question before. Our guess is you could look at the sample classes provided in the FtcRobotController project and see if one of them can retrieve camera frames.
      When we tried this last year for ring detection (counting orange pixels instead), the only way we could retrieve an image from the phone camera was to use Vuforia to return an image (we've kind of forgotten the specifics of the process, but you could probably find them online). We're not aware of another way currently.
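      For anyone curious, a rough sketch of that Vuforia frame-queue approach, based on the FTC SDK of that era (verify the exact calls against your SDK's samples; the orange thresholds are made-up illustrations):

      import java.nio.ByteBuffer;
      import java.nio.ByteOrder;
      import com.vuforia.Image;
      import com.vuforia.PIXEL_FORMAT;
      import org.firstinspires.ftc.robotcore.external.navigation.VuforiaLocalizer;

      public final class PhoneCameraFrames {
          /** Call once after creating the VuforiaLocalizer (phone camera or webcam). */
          public static void setup(VuforiaLocalizer vuforia) {
              vuforia.enableConvertFrameToBitmap();
              vuforia.setFrameQueueCapacity(1);  // keep only the newest frame
          }

          /** Blocks for one frame and counts pixels that look orange-ish (crude thresholds). */
          public static int countOrangePixels(VuforiaLocalizer vuforia) throws InterruptedException {
              VuforiaLocalizer.CloseableFrame frame = vuforia.getFrameQueue().take();
              try {
                  for (int i = 0; i < frame.getNumImages(); i++) {
                      Image img = frame.getImage(i);
                      if (img.getFormat() != PIXEL_FORMAT.RGB565) continue;
                      ByteBuffer pixels = img.getPixels().order(ByteOrder.nativeOrder());
                      int count = 0;
                      while (pixels.remaining() >= 2) {
                          int p = pixels.getShort() & 0xFFFF;  // one RGB565 pixel
                          int r = (p >> 11) & 0x1F, g = (p >> 5) & 0x3F, b = p & 0x1F;
                          if (r > 20 && g > 15 && g < 45 && b < 8) count++;  // orange-ish guess
                      }
                      return count;
                  }
                  return 0;  // no RGB565 image in this frame
              } finally {
                  frame.close();  // frames must be closed to avoid leaking native memory
              }
          }
      }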

    • @honeykohms8345 · 2 years ago · +2

      @@highdefinition6017 Alright thank you. I'll try it out and let you know how it goes.

  • @calvinz9126 · 1 year ago

    that claw really has aimbot

  • @LeontinHainaru · 2 years ago

    How cool is that! Congrats guys. How do you do the real-time detection? OpenCV?

    • @highdefinition6017 · 2 years ago · +1

      All of our vision algorithms are completely original this year. The logic described in the video runs in a background thread, and when highly optimized, it can cycle camera frames quickly enough to give us real-time detection.
      If you are in the FTC Discord, there was another example of real-time detection that also used a highly optimized version of our code: discord.com/channels/225450307654647808/771188718198456321/946616121451249725
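      The general background-thread pattern looks like this (a generic sketch with hypothetical camera and detector interfaces, not the team's actual code):

      import java.util.concurrent.atomic.AtomicReference;

      public final class VisionThread implements Runnable {
          public interface Camera { int[] grabFrame(); }                // assumed frame source
          public interface Detector { double[] process(int[] frame); }  // assumed algorithm

          private final Camera camera;
          private final Detector detector;
          private final AtomicReference<double[]> latest = new AtomicReference<>();
          private volatile boolean running = true;

          public VisionThread(Camera camera, Detector detector) {
              this.camera = camera;
              this.detector = detector;
          }

          @Override public void run() {
              while (running) {
                  int[] frame = camera.grabFrame();     // blocks until a frame arrives
                  latest.set(detector.process(frame));  // publish the newest result
              }
          }

          /** Called from the op-mode loop; returns immediately with the newest result. */
          public double[] latestDetection() { return latest.get(); }

          public void stop() { running = false; }
          // Usage: Thread t = new Thread(new VisionThread(cam, det)); t.start();
      }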

    • @LeontinHainaru · 2 years ago

      @@highdefinition6017 Congrats and keep going.

  • @jebsho · 2 years ago

    I'm just curious, but you said your robot has 25 sensors!? Could you list the ones you used?

    • @highdefinition6017 · 2 years ago

      1 IMU
      2 Logitech Webcams
      5 REV 2m Distance Sensors
      6 Motor Encoders
      12 Servo Encoders

  • @mateusbernart2101 · 2 years ago · +1

    Hi guys, how did you make the rotating platform?

    • @highdefinition6017 · 2 years ago

      The rotating platform in both the delivery arm and the intake arm is a lazy susan bearing. For the delivery arm, it was big enough that we could buy it online. However, since the intake arm was smaller, we used Fusion 360 to CAD our own lazy susan bearing and 3D printed it.

    • @mateusbernart2101 · 2 years ago

      @@highdefinition6017 but how did you motorize it?

    • @highdefinition6017 · 2 years ago

      @@mateusbernart2101 I'll explain the intake arm's lazy susan bearing; the delivery arm works similarly, with a motor in place of a servo.
      The lazy susan bearing has two parts: an inner part (mounted to the drivetrain) and an outer part, with a set of balls in between that lets the bearing rotate. We attached a gear connecting the outer part to the plate above it, driven by the servo you see on top of the bearing. When the servo rotates, it turns that gear against another gear on the inner part of the bearing, so both gears move and the intake arm rotates!
      If my explanation is unclear, we can also share the CAD file so you can take a better look at the components.

    • @VeerNanda · 2 years ago

      @@highdefinition6017 would it be possible for you to share the CAD?

    • @highdefinition6017 · 2 years ago

      @@VeerNanda Added the link to the video description! :)

  • @kaushikreddy2775 · 2 years ago

    Woah... what kinds of mecanum wheels do you use?

  • @leohai6700 · 2 years ago

    Simply awesome. Do you have a repository for your code?

    • @highdefinition6017 · 2 years ago · +1

      Yeah, the repo is private as of now because it's not cleaned up, which we'll probably do if we release the code to the public. Our shipping element detector code, however, is public here: github.com/HiiDeff/ShippingElementDetector

    • @leohai6700 · 2 years ago

      @@highdefinition6017 thank you

  • @acronicosftc · 2 years ago · +1

    maaaaan!!

  • @fundooguy316 · 2 years ago

    One of the best robots I've seen this season. I don't see the link to your portfolio. Can you share it?

    • @highdefinition6017 · 2 years ago · +1

      Hi there!
      We have not made the portfolio public yet, as we are still mid-season. Following the World Championships, we will most likely make it public, so stay tuned!

    • @fasvi1285 · 2 years ago

      @@highdefinition6017 Have you made the code public? You mentioned in the video that you were considering doing that.