Your system always looks so stable whenever I see it!
I have two questions about your line following system.
First, how do you handle a difficult course, like the one at 0:06 in the video? I made a line-following system with OpenCV for RCJ, but when I tried it on a zigzag line, the robot swung so much that it went off course. (Is this a PID issue?)
Secondly, how do you determine the turning value when reading signals at a corner? In my system, the turning value is constant, and sometimes the robot goes off course.
Thank you! The robot follows the line by first searching for seven points in the image. In addition to the lowest point on the line, found both in the full image and in a version cropped in height, the highest, leftmost, and rightmost points on the line are identified. The robot normally follows the line by centering the highest point of the cropped image. If that point is not at the top edge of the cropped frame, either the left or the right point is selected for following instead, depending on how close each is to the edge of the frame, to prevent overrunning the line. The point the robot is currently following is highlighted in red in our GUI.

At intersections this becomes harder, particularly when the robot stands at an angle, because the correct line may no longer be the only one touching the upper edge of the frame. For this reason, a theoretical point (yellow in our GUI) is constantly calculated by connecting the lowest and highest points of the cropped image and extending that segment to the upper edge of the frame. At an intersection, the point at the top of the frame closest to this yellow point is selected for following. If a green marker is detected, the point on the corresponding side of the line is chosen instead; the markers themselves are detected by checking the four sides around each marker for black lines.
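A rough sketch of this point search and the yellow-point extrapolation in NumPy; the crop ratio, names, and exact values here are just illustrative, not our actual code:

```python
import numpy as np

def line_points(mask, crop_ratio=0.4):
    """Key points on a binary line mask (255 = line, 0 = background).

    crop_ratio is only an example value: the fraction of the frame
    height removed from the top before searching.
    """
    top = int(mask.shape[0] * crop_ratio)
    ys, xs = np.nonzero(mask[top:])
    if ys.size == 0:
        return None  # no line visible
    # Indices of the extreme points inside the cropped view.
    hi, lo = ys.argmin(), ys.argmax()
    le, ri = xs.argmin(), xs.argmax()
    return {
        "highest":   (int(xs[hi]), int(ys[hi]) + top),
        "lowest":    (int(xs[lo]), int(ys[lo]) + top),
        "leftmost":  (int(xs[le]), int(ys[le]) + top),
        "rightmost": (int(xs[ri]), int(ys[ri]) + top),
    }

def yellow_point(lowest, highest, top_y):
    # Extend the segment lowest -> highest to the upper edge of the
    # cropped frame (y = top_y); at intersections the top-edge point
    # closest to this one is followed.
    (xl, yl), (xh, yh) = lowest, highest
    if yl == yh:
        return (xh, top_y)
    t = (top_y - yl) / (yh - yl)
    return (int(round(xl + t * (xh - xl))), top_y)
```

For a straight vertical line the yellow point lands on the line itself; its value only starts to matter once extra branches appear at the top edge.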
We will soon release all our code, including documentation, in the GitHub organization linked in the description, so you will be able to read through everything there.
@Overengineering2 Thank you for your reply! I am working on a program based on that. What does "a version cropped in height" mean? I think it refers to the top row of the current image. Is that correct?
@@null5464 Yes, we remove part of the top of the image so we can follow the line more closely, since the way our robot is built causes a lot of corner cutting, which is often a problem.
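Concretely, the crop is just an array slice over the image rows; the 0.4 here is only an illustrative ratio, not the value we actually use:

```python
import numpy as np

def crop_top(frame, crop_ratio=0.4):
    # Drop the top crop_ratio of the rows so the robot reacts to the
    # part of the line directly in front of it rather than far ahead.
    top = int(frame.shape[0] * crop_ratio)
    return frame[top:]
```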
@@Overengineering2 Finally I did it. Thank you so much!! :)
I have a small question. How do you search for the exit of the evacuation zone?
@@antoniat9531 As you can see in the video, after rescuing the dead victim at the red evacuation point, we simply follow the wall, checking for any unexpected distance changes measured by an infrared distance sensor mounted on the right side of the robot. If such a change occurs, the robot turns right and checks with the camera whether the stripe on the ground is silver or black.
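The gap check itself boils down to comparing consecutive readings; a simplified sketch (the 15 cm threshold is just an illustrative number, not our actual tuning):

```python
def first_gap_index(readings_cm, jump_threshold=15.0):
    """Index of the first sudden jump in right-side IR distance
    readings (in cm), i.e. a likely opening in the wall.
    Returns None if no jump exceeds the threshold."""
    for i in range(1, len(readings_cm)):
        if abs(readings_cm[i] - readings_cm[i - 1]) > jump_threshold:
            return i
    return None
```

On the robot this runs on the live sensor stream, and a detected jump triggers the right turn and the camera check of the floor stripe.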