Andreas, it is not just this video. It is all your videos that are helping me learn more and more about electronics, robotics, etc. Thank you. This was another helpful video.
One thing people often do is swing the rotating lidar to change its plane, for example by mounting it on a servo-actuated hinge. You lose time resolution, but gain volumetric information.
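To make that concrete, here is a minimal Python sketch (function and frame names are my own assumptions) of how one tilted 2D reading maps into a 3D point, assuming the servo tilts the whole scan plane about the robot's sideways axis:

```python
import math

def scan_point_to_3d(tilt_deg, scan_deg, distance_m):
    """Map one 2D lidar reading to a 3D point in the robot frame.

    The lidar reports (scan angle, distance) in its own scan plane; the
    servo tilts that whole plane by tilt_deg about the sideways (Y) axis.
    Frame convention (assumed): x forward, y left, z up; positive tilt
    points the front of the scan plane downward.
    """
    tilt = math.radians(tilt_deg)
    a = math.radians(scan_deg)
    # Point in the untilted scan plane.
    px = distance_m * math.cos(a)  # forward component
    py = distance_m * math.sin(a)  # sideways component
    # Rotate the plane about the Y axis by the tilt angle.
    return (px * math.cos(tilt), py, -px * math.sin(tilt))
```

Sweeping `tilt_deg` over a few degrees while collecting scans then yields a (slow) volumetric point cloud.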
Lidar is in fact very good at detecting obstacles, and there's a major point missed here: autonomy systems don't judge a single lidar scan, but many successive scans over time, to answer "is there an obstacle here?". It also helps lidar tremendously when it's on a moving platform with varying roll and pitch and there's a good positioning system (RTK GPS and a high-rate IMU) aiding in position-tagging the points.
Yet I think you could solve this problem by tilting the LiDAR forward and sideways. If you use more than one of them, you could cover more of the relevant field of interest. About the "ground drop" problem: a LiDAR entirely dedicated to scanning the road ahead by being tilted forward would provide sufficient resolution. In the end, I doubt there's THE ONE sensor to cover everything. You have to combine multiple ones to cover all eventualities. And yes, stereoscopic cameras and image recognition are a part of that as well.
The auto industry has addressed this LiDAR blindness problem by using 77 GHz RADAR to supplement the LiDAR sensor. Other, more nearsighted sensor types can be used to improve nearby-object accuracy.
Another great video! One of the early systems I was involved with, in 2009, used both LIDAR and ultrasound together, as you show in the last segment. We had the same limitations you discovered. However, we also came up with a solution: detect, query, target, verify. The LIDAR sensor would detect an object. If limited return data fell outside of programmed limits, the ultrasound sensor was sent a command to target the unknown obstacle. Of course, if the sensor was fixed, this had limitations. Our solution: mount the ultrasound sensor on a plate with X and Y servo motors to control the directional plane required for targeting. This raised two more problems: time, and the number of queries per second (plus latency). Our solution: install four ultrasound sensors, one at each corner of our robot.

The Raspberry Pi did not yet exist, and we had the exact same data-processing issue. In fact, we still had issues when we attempted to use Motorola 88000 RISC processors. In summary, we did not properly plan project objectives. Our solutions were far too complex and over-engineered. We were trying to create high-fidelity resolution that was never required and did not focus our efforts on a simple problem: target, define, and avoid.
Thank you for sharing your experience. If I understand right, a bunch of ultrasonic sensors pointing in different directions would have done the job for obstacle avoidance?
Suggestion: Tilt the LIDAR down a few degrees at the front, and put a mirror behind it to reflect the backwards laser beams forward again, but on an UPWARDS tilt. This will effectively give you two forward-facing detection planes at different tilt angles, one up and one down. You just sacrifice your backwards-facing detection.
@@AndreasSpiess It should be fine; it's just another layer of optics. IIRC, on other similar technologies like the Microsoft Kinect, mirrors work. Easy to test, though, with your bathroom mirror or a hand mirror.
@@AndreasSpiess th-cam.com/video/1Qx8NzuSSJ4/w-d-xo.html Here you can see how triangulation systems like this just treat mirrors as if they were portals to another room.
Further thoughts: You could keep it simple by angling the LIDAR and mirror so the top and bottom detection planes "trigger" at a similar distance to each other, so you don't have to bother separating the two planes in software and setting different trigger distances. Just position everything so that if *either* is triggered, the robot does the same thing (stop, then reverse and turn, etc.).

Or: Since you don't want the top plane to continue upwards and be triggered by things high above, perhaps it could be mounted high on the robot facing straight forward (no upwards angle), with only the low detection plane angled down. Remember that mirrors reflect all the way to their edge, so it is easy to mount a mirror as the highest point of the robot with no unprotected blind spot above it. This also gives the bonus ability of the top detection plane seeing forward at maximum distance, so you retain the ability to use it for more than just simple collision avoidance.
If it's servo controlled, then you know where it's facing, so you could code a transform so that the two 180-degree fields correspond to two angles. Even without knowing the direct facing of the sensor, you could still write code to distinguish two planes from each half (or however much) of the total scan.

Edit: Because the light is emitted and detected on the same face, if that face is facing a mirror, it will still emit and detect the return at whatever angle the mirrors are set to, as long as the mirror is reflective enough and set up so there is little to no return to the detector from the mirror surfaces themselves. You lose whatever field of view is "blocked" by the mirror, but you gain more data for the remaining field. And for full 3D, you could expand it: instead of fighting the gyroscopic torque of moving the whole lidar assembly around an axis perpendicular to its rotation (e.g. angling it up and down) to gain another degree of resolution, you could much more easily keep the lidar assembly fixed and angle a mirror up and down to get a wider field of view / more planes. Even though at first I agreed with your (missed) sarcasm ;p, I then thought: no, wait a minute, there is a way this could actually work and make sense.

Besides which: of course triangulation works with mirrors. If the mirror's plane is normal to both beams, it doesn't affect their relative angles (discarding real-world material effects), so both angles are kept through the translation. The only thing that changes is the distance travelled from the device to the target, because of the longer path taken, so you want to take that into account and add that offset to your results, or reconfigure the system to cancel it out.
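A rough Python sketch of the transform being described (all names and the mirror geometry are illustrative assumptions): readings from the rear half of the scan are mirrored into a second forward plane, and the extra one-way path from the sensor to the mirror is subtracted from the reported range:

```python
def remap_rear_reading(scan_deg, distance_m, mirror_offset_m=0.05):
    """Fold a rear-half reading into the second (mirror) plane.

    Assumes a flat mirror behind the lidar redirecting the rear half of
    the scan forward.  The reported range includes the extra one-way
    path from the sensor to the mirror, so that offset is subtracted,
    and the angle is mirrored about the sideways axis.
    Returns (plane, angle_deg, corrected_distance_m).
    """
    if not 90.0 < scan_deg < 270.0:
        # Front half: direct reading, first plane, no correction needed.
        return ("direct", scan_deg, distance_m)
    mirrored_deg = (180.0 - scan_deg) % 360.0
    return ("mirror", mirrored_deg, distance_m - mirror_offset_m)
```

So a reading straight backwards (180°) at 1.0 m lands in the mirror plane straight ahead (0°), with the mirror offset removed.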
Excellent video! For short-range obstacle avoidance, an ultrasonic system with an omni source and a multi-element phased-array receiver can be used for DoA estimation with full spatial coverage (a four-element pyramid would be the minimal number of phones for full spatial coverage). The main problem with this approach is that it can only see one thing at a given two-way range from the system at a time, and if multiple reflections are received at the same time, the system can become confused. But it is generally possible to throw out bad results by requiring all returns to come in at consistent times at each receive-array phone.

I was actually working on this problem tonight for an underwater military application when I took a break to watch your video. In my case, though, the problem is to detect marine mammals anywhere near a high-power, low-frequency acoustic surveillance array. During peacetime, these arrays must be turned off when marine mammals are nearby, because having them wash up on a beach is terrible publicity for the Navy. The offshore oil exploration vessels have the same problem...
I think this would work because you only need the shortest distance; only the fastest echo is needed. Maybe you stop once too early, but this is not problematic. My 3-sensor design was very similar, as I started all "loudspeakers" together, so it behaved like an omnidirectional source.
For simple obstacle avoidance, a 2D lidar is great. Many environments are sufficiently constrained that planar sensing is fine. Once you start worrying about complex terrain, it becomes a really hard problem. You discuss hanging obstacles; once you start getting into those, you might as well start worrying about *deformable* obstacles. The simplest example is long grass: any simple sensor will register it as an obstacle, but you want to drive right over it. At that point you need multiple sensors, probably a touch sensor combined with something like a planar lidar. As for not wanting to process all the data from a planar lidar, just wait until you realize you want to add a camera. It's definitely worth having at least Raspberry Pi-level processing on a mobile robot, if not a full-on laptop-type computer.
What you're describing are the limitations of cheap toy LIDAR systems. An industrial LIDAR system, as used on cars, has none of those issues: it scans in 3D space, and it does so very fast, using an array of lasers and detectors.
Thanks for your contribution and the clarification. But to use Swiss precision: lidar (or ladar) always means a time-of-flight method, which the A1 does not use. So it is NOT a lidar but a laser triangulation system, originally developed for a robot vacuum cleaner. You are right that real and useful 3D lidars are still very challenging and expensive. That is why camera systems got so popular in the automotive industry over the last decade. Some car manufacturers even use a stereo camera to get a better estimation of real distances.
AFAIK, LIDAR is the abbreviation of "Light Detection and Ranging", which would include all sorts of systems. But anyway, not important. Working with stereo cameras would for sure be very interesting for me. But I first have to start with one camera ;-)
Please refer to www.robotshop.com/media/files/PDF/revolds-whitepaper.pdf and see that no time of flight is measured. The angle of the incoming light ray is determined by an array of light-sensitive elements, e.g. a linear array of photodiodes or, in this example, an imager.
You have described a wall as a mirror reflection; however, a wall is not a mirror. It is close to a diffuse surface, so the light gets thrown back in all directions regardless of where the source light came from. The strongest emission of light is along the surface normal, and it falls off from there towards no emission at 90° to the normal. The sensitivity of the sensor will also have a cutoff, which is why your LIDAR can't see these walls very far away. See the Lambertian BRDF.
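As a simplified Python sketch of that falloff (the reflectance value and the 1/d² spreading are my assumptions; a real Lambertian BRDF model has more terms), the relative return signal from a diffuse wall could be modeled as:

```python
import math

def lambertian_return(incidence_deg, distance_m, reflectance=0.8):
    """Relative lidar return from a diffuse (Lambertian) wall.

    The reflected intensity scales with the cosine of the incidence
    angle (measured from the surface normal), and the return spreads
    with distance, so signal ~ reflectance * cos(theta) / d**2.
    reflectance is an assumed diffuse albedo; units are arbitrary.
    """
    theta = math.radians(incidence_deg)
    if theta >= math.pi / 2:
        return 0.0  # at or beyond grazing: no return toward the sensor
    return reflectance * math.cos(theta) / distance_m ** 2
```

At near-grazing incidence (a wall almost parallel to the beam), the return drops below the sensor's cutoff, matching the observed behaviour.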
You are right; I tend to simplify a little. I showed and mentioned this effect when I measured the "wall", which was a dirty wooden board. Truly not a mirror.
Resolution is the smallest increment something can measure. For example, a ruler with markings 1 mm apart has a resolution of 1 mm; a ruler with markings 1 cm apart has a resolution of 1 cm. Summarised: - Accuracy is how close to the real value the measurement is. - Resolution is the smallest increment the measurement can show.
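A tiny Python sketch of the distinction (the bias term is a stand-in I introduce for the accuracy error; names are illustrative):

```python
def measure(true_mm, resolution_mm, bias_mm=0.0):
    """Simulated instrument reading.

    resolution_mm limits the smallest step the reading can show;
    bias_mm stands in for the accuracy error (how far the reading is
    from the true value).  Fine resolution does not imply accuracy.
    """
    raw = true_mm + bias_mm
    return round(raw / resolution_mm) * resolution_mm
```

An instrument can report readings in fine 0.5 mm steps and still be 15 mm off, which is exactly the lidar's 0.5 mm resolution vs. ~1-2 cm accuracy situation.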
Nice video; it shows clearly that you need more than just a lidar: lidar for SLAM, IR or ultrasonic for obstacle avoidance, switches with bumpers for collision detection, and IR sensors to detect the stairs. One thing: at 9:21 you are confusing "resolution" with "accuracy". The RPLIDAR gives the distance with 0.5 mm resolution, but the accuracy depends on the type of surface. Slamtec does not give any data on this, but from experience I found that it is safe to assume 1-2 cm accuracy for the measured distance.
Thank you, Andreas. Great presentation. My question (and yeah, we are in 2021 now) is: with what you've presented and what I gleaned from other sources, isn't this why things that crawl, fly, etc. utilize multiple sensor types to create a holistic picture? The end of this video showed you positioning ultrasonic sensors in three different orientations along with the lidar. The limitations that you identify (and thank you for that) were pretty much the icing on the cake for me as to why our critical systems out there now use every available tech to overlap and cover the blind spots. In this time (three years), have you seen any one of these techs rise to the top and provide a clear picture? Thanks again; I really enjoy your breakdowns of this stuff, and they serve well in educating me on the topic.
Seen from here, we are in the innovation "valley of death" between the hype and a useful system in self-driving cars (www.researchgate.net/figure/The-valley-of-death-in-innovation-and-the-context-of-the-simplified-BID-process-Based_fig1_331105240 ).
For slow-moving vehicles, a servo tilting the lidar would allow it to scan up/down a few degrees in the direction of travel. Probably enough to detect descending stairs.
*@Andreas Spiess* 6:45 You can put the lidar on a rotating wedge. That way you know at what angle the lidar is (because you know the rotation of the wedge), and suddenly you have made a 3D lidar out of a 2D lidar. (Basically, make the 2D "laser disc" wobble around in 3D space, with whatever angle you need to cover the desired volume.) (A probable source of problems is the G-forces created by the lidar's own rotation.)
I purchased an RPLIDAR A1 based on one of your earlier videos. I've constructed a mount to pivot the LIDAR up and down. I think the increased scan time can be compensated for by controlling the rotation of the LIDAR directly. Rather than having the LIDAR sweep a full 360 degrees, I plan to limit the rotation to 90 degrees or less. I'll need to either control the DC motor included with the RPLIDAR A1 or use a different motor to control the LIDAR rotation. I haven't tested how well I can control the included DC motor yet. Hopefully an H-bridge will allow the LIDAR to sweep back and forth rather than rotate continuously.

I'm skeptical I'll have enough control of the included motor to make this work well. I'll likely need to add an encoder to monitor the motor's speed and position. The small motor might not be up to the task of continuously changing direction to produce the desired back-and-forth motion, so I think it's very likely I'll need a different motor. The LIDAR itself provides feedback on the device's angular position, but I think it would be hard to use this feedback for relatively precise motor control. I think either a gear motor with an encoder or a stepper motor will be needed. A stepper motor should be okay, since the position feedback could be used to correct for any lost steps. I'm hoping that by adding a tilting mount and a motor to point the LIDAR (rather than spin it), I can use the RPLIDAR A1 for obstacle avoidance. I've made progress building the tilt mount, but I haven't yet tried controlling the RPLIDAR A1's motor to limit its rotation. I used to make videos about my robot projects, but I haven't done so in a while. I'll make a point of documenting my efforts as I attempt to use LIDAR for obstacle avoidance.
The high inertia of the rotor will kill your plan. This little motor is in no way designed to accelerate the rotor as fast as you would want. Even a much bigger one will not be able to exceed the 7 Hz you already have with a scan frequency high enough to make it worth the effort. A possible approach would require a different optical system: a continuously rotating mirror pyramid or polygon drum will give you a scan effect over a limited angle. When the beam leaves the range, the laser points at the next mirror segment and the beam comes in from the opposite corner. This is the principle laser printers worked on 20 years ago. Bonus: if your polygon mirror drum has enough segments, you can tilt the mirrors to obtain multiple planes. Imagine an octagon drum with 8 mirrors at slightly different angles, so one revolution can give you 8 scans of 90° in different planes.
@Andreas Spiess "I agree with SpeedFlap. It is probably faster to keep the motor running than to reverse directions." This may well be the case, but I don't think I'll be able to stop myself from trying the back-and-forth method. "And laser for sure will have a longer life." If by "laser" you mean the complete LIDAR unit, then I tend to agree. Moving the assembly back and forth could increase the wear on the slip ring and possibly on the encoder used to determine angular position. I was about to say the laser diode itself should be unaffected by my proposed change; thinking about it a bit more, I concede the back-and-forth acceleration could cause additional mechanical strain on the (electrical and mechanical) support components used with the laser. I have a feeling any changes I make will likely reduce the functionality of the RPLIDAR A1. It's a really cool gadget as-is (thanks for showing it in a video, Andreas Spiess). Even if the changes I make aren't practical, hopefully they'll be interesting.

@SpeedFlap "The high inertia of the rotor will kill your plan." I'm not so sure I agree. I'd likely have trouble moving the rotor back and forth quickly with a motor of similar size to the original, but I'm sure I have lots of motors capable of accelerating the rotor without much trouble. "Even a much bigger one will not be able to exceed the 7 Hz you already have..." I'm not really concerned about a lower scan frequency. I just want to convert the 2D scan to a 3D scan, even if the area scanned isn't very big. I'd rather have a 2 Hz scan of an area where obstacles could interfere with the progress of my robot than a 7 Hz 360-degree scan at a single height. I considered removing the LIDAR unit from the rotor and using my own pan-and-tilt mount, but I didn't see an obvious way to separate them.
I didn't want to mess up the slip ring/encoder section of the device so I ended my disassembly exploration with the LIDAR still attached to the rotor. Thanks for the feedback guys. Thanks for the great videos Andreas Spiess. I'll make sure to document my efforts with videos posted to my channel (I haven't posted anything interesting for a while).
One of the ways to speed up computations is buffering and batching. Buffer, let's say, 30 points, then send an array of 30 points to a different core or processor where they are all processed together (i.e. denoising, combining with previous data, kinematic compensation from gyroscopes and accelerometers, etc.). This way, per-point overheads are greatly reduced, and the main code only needs to focus on communication with the lidar and basic processing of points. 1400 points per second is an extremely small amount of data for modern computing systems. In DSP and audio processing you usually handle about 100k samples per second; in video processing, dozens of millions of points per second. In graphics systems, you can be processing trillions of entities per second, using FPGAs, DSPs, GPUs or multicore CPUs, as well as smart software techniques like batching, async processing, circular buffers, vectorization, etc. Seriously, an 8 MHz, 8-bit MCU with no FPU and no DMA, juggling multiple high-speed UARTs, is terrible for any serious robotics.
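A minimal Python sketch of the batching idea (the batch size, the tuple layout, and the range limits in the filter are assumptions for illustration; in a real system `process_batch` would run on the other core):

```python
def process_batch(points):
    """Stand-in for the heavier per-batch work done elsewhere; here it
    just drops out-of-range readings in one pass (hypothetical limits)."""
    return [(a, d) for (a, d) in points if 0.05 < d < 12.0]

def batcher(stream, batch_size=30):
    """Group incoming (angle, distance) readings into batches so the
    per-point overhead (calls, queue hops between cores) is paid once
    per batch instead of once per point."""
    buf = []
    for point in stream:
        buf.append(point)
        if len(buf) == batch_size:
            yield process_batch(buf)
            buf = []
    if buf:  # flush the final partial batch
        yield process_batch(buf)
```

The main loop then only appends to a buffer and hands off full batches, which is cheap even on a small MCU.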
This is what I meant when I mentioned a subsystem. With my 3 ultrasonic sensors, I used an ATtiny to do the measuring. That way, the main processor only had to read two values: the shortest distance detected and the direction it came from. Much faster.
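The logic on the coprocessor side could be sketched like this (a Python illustration of the idea; the actual ATtiny code would be in C, and the direction names are placeholders):

```python
def nearest_obstacle(readings):
    """What the measuring subsystem reports to the main processor:
    just the shortest distance and the direction that saw it.

    readings: direction name -> distance in cm (None = no echo).
    """
    valid = {d: r for d, r in readings.items() if r is not None}
    if not valid:
        return None, None
    direction = min(valid, key=valid.get)
    return direction, valid[direction]
```

However many sensors are attached, the main loop only ever consumes two values per cycle.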
Very interesting video! I do not know about the RPLIDAR, but I have seen other LIDAR sensors on the market that do not capture BLACK surfaces properly, or at all, just like your stealth-plane example! In addition, they may have too many noisy artefacts to be filtered out. I recommend asking the manufacturer for a video of the sensor in operation before buying.
The 2D system could be made into a 3D system looking in the forward direction by mounting it vertically on a turntable that 'oscillates' from left to right and back at a defined speed within a defined angle. You probably lose some scan rate for backward beams; that's the same with horizontal mounting. But with the system mounted vertically, the challenge would be to suppress beams in all the other, false directions by coordinating the turntable's speed with the scanner's speed. The rest should be a matter of software. It probably exceeds the possibilities of a small computer... I'm just blowing the faint wind of my thoughts into this great brainstorm. Thank you for all your great videos!
Very good video on LIDAR's blind spots. I think LIDAR may be good for specific applications, but for use in vehicles it is a dead end, and stereoscopic cameras win on all fronts. They are cheap, and the devices and software are capable of resolving objects in real time.
I do not know of any in particular. However, I think 2 regular cheap 640x480 cameras ($4) would be sufficient to start, connected using I2C or SPI. However, processing the images with reasonable precision would require some processing power. Not sure if it's suitable even for the more powerful ESP32, unless you dedicate one MCU and its 2 cores just to this activity. Writing the software would be an interesting problem nevertheless, or you could use existing libraries (OpenCV?)...
A solution may be to put one sensor on top, in the x/z plane, while the other is mounted at the front of the vehicle with nothing under it (maybe on an L-bracket?) in the y axis. This way you have a 2D plane below and above the vehicle. It would still not be perfect, but it would at least cover stairs and a low ceiling. Edit: it's only a thought; maybe there is something I didn't think of, like the Arduino not being capable of processing the input from both sensors xD
@@AndreasSpiess You're also right; it also makes the panzer (:P) much more unbalanced at the front, but it would still be a solution... Also, putting 2 slightly oblique static sensors, one up and one down, would be great for detecting stairs and/or a low ceiling. But I dunno, I never used those sensors except with the Lego Mindstorms NXT 2.0, so you're for sure more of an expert; I'm just suggesting ideas xD
Turn it 90 degrees (rotating vertically) and sweep it horizontally. That converts it to a 3D lidar, if you can find or make software to handle it. You can set the sweep angle to whatever you need. It would be pretty slow, however.
LIDARs are mostly used for mapping and planning, not for emergency obstacle avoidance. What you can do to improve the chances of detecting small objects with the lidar is to implement a probabilistic sensor model, like HIMM or a Bayesian occupancy grid, and map the data over time.
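A minimal Python sketch of an HIMM-style update over successive scans (the +3/-1 increments and the 0..15 range are the classic HIMM constants, used here as an assumption; cell indexing is illustrative):

```python
def himm_update(grid, hit_cells, free_cells, inc=3, dec=1, lo=0, hi=15):
    """One HIMM-style update step over an occupancy grid (a dict of
    cell -> certainty count).  Cells where a beam ended grow more
    certain; cells a beam passed through decay.  A cell only counts as
    an obstacle once it crosses a threshold, so a single noisy scan
    cannot trigger avoidance by itself.
    """
    for c in hit_cells:
        grid[c] = min(hi, grid.get(c, 0) + inc)
    for c in free_cells:
        grid[c] = max(lo, grid.get(c, 0) - dec)
    return grid
```

Feeding every scan through this accumulator is what lets mapping systems tolerate individual noisy returns.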
I always thought lidars on cars were also used for obstacle avoidance. As I showed, the lidar was able to "see" quite small objects, but only in one plane. So the enhanced analysis would probably not help (fast) obstacle avoidance. However, it could help increase mapping accuracy.
Looking forward to affordable MEMS LIDAR (oscillating mirror). Wondering how effective a VL53L0X time-of-flight sensor would be, mounted on a couple of servos so that it looked around in a circle.
The sensor completely lacks optics: it has no lenses to collimate either the laser beam or the incident light. So it fires a wide 35° beam and then collects the return signal over a 25° field of view. It has low sensitivity and basically performs similarly to ultrasonic, but often worse. You basically get something like a weighted average of the distance across its field of view. The main reason to use it is that it's much smaller than an ultrasonic sensor.
As long as your vehicle moves at a sensible speed for home-made robots (i.e. under 5 km/h), you should be able to get around the scanning-plane problem by taking advantage of the high scanning rate: oscillate the tilt angle and overlay the data.
@Andreas Spiess - Thanks for a great video series; you always ask the "right" questions when checking out new technologies. In an earlier reply you stated, "So today, I tried (my RPLIDAR) in front of a mirror. It was able to detect obstacles through the mirror, but only if the mirror was at a 90-degree angle to the laser. This was only a narrow area." Can you be more specific? If the suggestion from some readers is to take the back side of the 360° circle and use one mirror at 45° to point the laser (and the reflection path) up, and another one above at -45° to point it forward, the angle between the incoming laser and the reflection will be 90°. Does this work? Can you hold the RPLIDAR at 45° close to the mirror and check whether it shows reasonable distance readings based on the reflection? Of course it would help to use a first-surface mirror, where the aluminium is in front of the glass rather than behind it. Then you only have one reflection plane, instead of both the glass surface and the aluminium backing. A cheap source for such mirrors, for 6€, is here (they are used in overhead projectors): www.betzold.de/prod/77637/ .
I did exactly what you describe, and it showed reasonable results. But as I wrote, only for a few degrees, because the mirror was flat and not round. You can draw the beams and see where they go if a beam points back at the mirror at an angle of, let's say, 20 degrees: then the beam no longer goes to the front of the bot, but off to one side.
I liked this video; it pointed out the main positive and negative sides of the lidar. If we want to be sure, we need to use a combination of sensing technologies: lidar, ultrasonic, IR, 3D camera. I am preparing a PhD thesis on this topic (self-driving cars), and I will start experimenting with the TurtleBot3 Burger, which has a lidar included.
I think the F-117's angular shape was more a result of limited computing power. The engineers used computer simulations to predict the radar cross-section of a particular proposal, but the limited computing power back then meant they could only handle very few polygons. That's why the later stealth airplanes look a lot different.
You might be right. And they did not have a lot of time to build it, so they used what they had (aluminum, for example). They also had to prove the concept to convince Congress; it was quite revolutionary at the time. It is nicely described in Kelly Johnson's book, which I recently showed. The shapes of the newer planes, however, seem to follow the same formula and principle.
Hi Andreas, I came to the same summary. Then I got to thinking outside the box (pun intended) while watching this. What if we mount this lidar in the vertical plane rather than the horizontal? Then we no longer have a low-hanging-object or ground-object problem; we now have a left-to-right problem. If we mount this on the side of our tank, for instance, and as high as possible, this should solve the problems discussed. Left-to-right is still a problem, as our sensor is only mounted on the right side, so we can mount a second one on the left side to solve that. However, I propose that instead of one mounted on the left and another on the right, we have both sensors mounted on a spinning platform on top. Now we resolve the shortcomings of 2D lidar and have 3D lidar (or close enough). What do you think? Picture an old police siren with two rotating lenses inside; those are our two vertically mounted cheap lidar modules instead. Thoughts?
There were a few ideas like this in the comments. You can look at it from different angles: in the end, you have to cover a 3D area. You either do it with a moving 2D sensor (with the tradeoff of additional mechanics and slower speed), or you upgrade to a real 3D scanner.
@0:54 Obstacle avoidance is more than providing a stop signal. For instance, if the device is a debris-recovery swarm drone, it may need to define an orbit modification in order to avoid a collision.
My idea for solving the 3D problem: attach it to a servo so it measures vertically, then rotate the servo in the desired angular resolution. Voilà: 3D sphere covered.
I could think of a couple of problems with this, but suffice it to say: if you can create a 3D assembly, there is BIG money in it right now. Voilà: easy money for a smart guy like you!
If you don't mind using a Raspberry Pi in your project, you could use a Kinect. It has a depth range of 0.8 m to 4 m and a 70-degree horizontal and 60-degree vertical field of view. I did some tests using the freenect driver, and interfacing it with ROS was a breeze.
Oh, I didn't know that. That's unfortunate. Luckily, here in Brazil we have an abundance of those sensors, and you can easily find a used one for about 20 bucks.
LIDARs work like ultrasonic distance sensors but use light instead of sound; they don't triangulate anything. You might have mirror walls, but most people have regular white walls that scatter light in all directions just fine. And those 3 fixed "LIDAR" systems you tested in an earlier video aren't even lidars; they are just regular infrared proximity sensors, which work in a completely different way and have nothing to do with lidar. And I'm not sure what the purpose of this video is... Of course, if you don't have the right lidar system for your obstacle avoidance system, you're gonna have a hard time avoiding obstacles. If you don't have the right screw for a particular application, that also doesn't mean screws in general aren't well suited. LIDAR is absolutely awesome, and with the right system it has countless potential uses. Here are some examples of what data from a good lidar system looks like: th-cam.com/video/nXlqv_k4P8Q/w-d-xo.html th-cam.com/video/aIxYt7DkK5A/w-d-xo.html th-cam.com/video/4RRBOoLsCEg/w-d-xo.html
My definition of LIDAR is "Light Detection and Ranging", so we use different definitions. And there are a few ways lidars work; only one is time-of-flight.
@@AndreasSpiess "Light Detection and Ranging" isn't a definition; it's just what LIDAR stands for. It is based on radar, which stands for "Radio Detection and Ranging", but neither of those says anything about how they work or what would be considered a LIDAR. If you pump so much current through a diode that it starts glowing, that doesn't make it an LED; it just makes it a broken diode. Considering the term "Light Detection and Ranging", I can understand someone considering any system that uses light to measure distance a LIDAR. But technically LIDAR is a specific way of measuring distance using light, and just as triangulating your position using GPS isn't called RADAR, even though it technically also uses radio waves to range something, a common infrared proximity sensor isn't LIDAR, even though both commonly use infrared to measure proximity. For more information on LIDARs, visit: en.wikipedia.org/wiki/Lidar
You could augment the LIDAR with other sensors, such as IR ToF sensors for obstacle avoidance and floor sensors to detect the floor. IR ToF sensors are far better than ultrasonic, although they are a little more expensive. They will detect obstacles even at extreme angles, and using an array of pulsed and continuous IR ToF sensors with differing beam widths, you can create a cone in front of the robot that will detect all objects. The only downside is their range, typically between 0.6 and 4 meters.
Sounds very expensive. I used one small ToF device in one of my videos. It cost nearly $10, and it only covered a very small area because it's a laser. Ultrasound has an opening angle of about 30 degrees, which can be quite handy for such an application.
@@AndreasSpiess Yeah, the more sensors you add, the more expensive things become. I use the ToF sensors instead of ultrasonic because they don't block your sketch like the ultrasonic sensors do, as long as you don't request data from them too quickly. The VL53L1X is a really good sensor, but yeah, it's a bit expensive; it has a beam width of about 27 degrees. I always thought the ultrasonic sensors had a narrower beam width of about 15 degrees, but I could be wrong. Great video, by the way; keep up the good work.
Hi! Thanks for the video on the RPLIDAR A1. I'm wondering if the A1 will work outdoors, in sunlight? In order to produce 3D point clouds with the A1, is there navigation software already designed for this unit? Will I need to write code for it to work with ROS? Thanks for the help!
I did not test it outside, but I assume the range will be considerably shorter if it is sunny. Concerning ROS integration: I found this link: blog.zhaw.ch/icclab/rplidar/
I came for LiDAR applications and got out with an interesting fact about the Nighthawk 😮 I am surprised that the Arduino can't process the 115200 bps signal. I will try it.
Why not put the LIDAR on a servo-tilted platform? To simplify the measurements, you could use a few different tilt angles and skip the data while moving. For example, you can move the servo to the "low" position and take the distances. Then move it to "mid" and read the sensor data. In this short three-step example, you only have to move the servo to the top position and you've got all necessary data for a tracked vehicle. I guess my suggestion would handle the problems with high and low obstacles in the front, and while driving backwards too. I think this would work, at least if it doesn't drive too fast 🤣.
A ring of alternating small mirrors pointing up and down, spaced a few mm apart, could allow an up and down view, at the sacrifice of some angular point resolution, of which it has plenty.
Well, it depends. I've used lidars for obstacle avoidance just fine. The reason why you don't see the wall is most likely a resolution problem and not a reflection problem (in all fairness, though, I used a €3000 one). But the biggest reason why lidars, or most 2D and 3D scanners, aren't that useful is the compute power you need to properly use the data (and a Raspberry just won't do it).
I think I described why the lidars cannot see some walls. The 360-degree LIDAR sees walls where the angle is big, so I do not think it is a matter of resolution. If no light is reflected, I assume even expensive LIDARs will detect nothing.
The error here lies in the assumption about reflectivity. A wall reflects diffusely, so the reflection is largely independent of the angle. A lidar sensor (whether single-beam, 2D or 3D) can therefore perceive a wall regardless of the angle. The reason the wall "disappears" in this test is the way the sensor works: it always measures at fixed angular or time intervals. For a wall running parallel to the heading, the Euclidean distance between two measurement points on that wall therefore grows ever larger with increasing distance from the sensor, so the points quickly end up either beyond the sensor's range or no longer on the wall.
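The sampling geometry described above is easy to check numerically: for a wall running parallel to the heading at perpendicular distance d, a beam theta degrees off the wall hits it at range d/sin(theta), and the spacing between successive sample points explodes as theta shrinks. A small Python sketch (illustrative numbers only):

```python
import math

def wall_hit_range(d, theta_deg):
    """Range to a wall running parallel to the heading, at perpendicular
    distance d, for a beam theta degrees off the wall direction."""
    return d / math.sin(math.radians(theta_deg))

def point_spacing(d, theta_deg, step_deg=1.0):
    """Distance between two successive sample points on that wall,
    for an angular resolution of step_deg."""
    t1 = math.radians(theta_deg)
    t2 = math.radians(theta_deg + step_deg)
    return abs(d / math.tan(t1) - d / math.tan(t2))

# Wall 0.5 m to the side, 1-degree angular resolution:
for theta in (90, 45, 10, 5):
    print(theta, round(wall_hit_range(0.5, theta), 2),
          round(point_spacing(0.5, theta), 3))
```

At 5 degrees off the wall the hit range is already about 5.7 m and neighbouring samples land almost a metre apart, which matches the wall "disappearing" after roughly 2 m in the test.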
Even if the wall is a diffuse reflector, the amount of reflected light differs depending on the angle. And the LIDAR has a minimum sensitivity. At least that is how I imagine it. The experiment with the laser pointer shows exactly this behavior: I see a strong spot of light where I expect it (angle of incidence = angle of reflection) and practically no energy in other directions. On top of that, the spot gets "smeared" lengthwise, which weakens it further. But I agree with you, it is still visible, even if only faintly. I did confirm the range of 6 meters against a perpendicular gray wall. But it no longer "saw" the parallel wall after about 2 meters. That is why I assumed this has to do with the reflected energy being too small.
@@AndreasSpiess For high-quality lidars (SICK, Hokuyo, Velodyne), detection at 10% reflectivity is guaranteed; in our tests, reflections of as little as 2% were sometimes still detected. How exactly triangulation sensors (like the RPLidar series) behave I cannot say, since we do not work with those sensors because of their short range, high noise and general inaccuracy.
What if you mount this lidar vertically and rotate it so it can see a hemisphere each half turn? You would absolutely reduce the acquisition speed, but in some conditions a low moving speed could be fine with it.
I do agree with Elon Musk. LIDAR isn't suited for self driving cars. But that doesn't mean that there aren't uses for them. Like, a gamer doesn't need a Quadro GPU. It can be used, but it's a massive waste of resources. Or, you could use a wrench to get a nail into wood, but a hammer is a better tool.
If you look at the state-of-the-art benchmarks on object detection with or without lidar, you will realize lidar is not just redundancy but actually needed for autonomous cars within 3 years.
Here's my very short and to-the-point rant against the capitalization of "lidar" (as "LIDAR" or "LiDAR", both commonly seen): 1. You don't capitalize the analogous acronyms "radar" and "sonar" as "RADAR" and "SONAR" or "RaDAR" and "SoNAR", do you? 2. When the word was coined, the very first times it ever appeared in print, it was printed in all lowercase.
Great video! Let me know a cost-effective solution for a cleaner robot. Would it be possible to make a video about other 2D lidars, such as the SICK TIM, SLAMOPTO V3 or Hokuyo?
Great video! I am a novice working on a robot and need to do away with the ultrasound because I have dogs. I am using a Raspberry Pi 4B 4GB. Do you have any suggestions to replace the ultrasound units? My robot does not exceed about 3 to 4 miles per hour.
Thanks - been cogitating these exact issues. Wouldn't be too hard to put a servo there to make the LIDAR rock forward and backward to get vertical scanning. At least these are fairly inexpensive.
Excellent explanation. I am considering a combination of lidar and camera for my Jetson Nano based robot. The real challenge is the speed of data processing and the reaction time in the software/hardware. Python on ROS is easier, but I think it may not process as fast as C++. I may need to resort to low-level assembly code just to read the lidar/camera data to produce the stop command. Other than that I think the rest of SLAM will work fine. Any thoughts about voice recognition and speech synthesis? Next step is to get my robot to talk back to me. 🤓
Thank you very much for the video Andreas, very informative. I'm wondering if it would be possible to scan a dog and see only the body without the fur. Might you know if the LiDAR can penetrate fur or hair or clothing? Thank you very much.
24 GHz radar modules from eBay are quite sensitive; what about using them for obstacle detection? We used one of those in one project and we were able to measure human breathing.
I have to use the RPLidar to detect humans in an industrial setting, so the RPLidar will be fixed in place. Can you tell me how to achieve this? I guess I have to do the environment mapping, which I have done using ROS hector SLAM. But I don't know what the next step would be. Can you please give me some ideas about how to detect obstacles after getting the environment mapping?
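One common next step after mapping (a sketch, not the only way) is background subtraction on the scans: record a reference scan of the empty area, flag angles where the current scan returns something closer, and cluster neighbouring angles into candidate objects such as a person:

```python
def detect_foreground(scan, background, tol=0.15):
    """Compare a scan against a reference scan of the empty room.
    scan, background: dicts mapping angle (deg) -> distance (m).
    Returns (angle, distance) pairs where something now sits closer
    than the background by more than tol metres."""
    hits = []
    for angle, dist in scan.items():
        ref = background.get(angle)
        if ref is not None and dist < ref - tol:
            hits.append((angle, dist))
    return hits

def cluster(hits, max_gap_deg=3):
    """Group neighbouring hit angles into blobs (a person, a cart...)."""
    blobs, current = [], []
    for angle, dist in sorted(hits):
        if current and angle - current[-1][0] > max_gap_deg:
            blobs.append(current)
            current = []
        current.append((angle, dist))
    if current:
        blobs.append(current)
    return blobs
```

In practice you would filter the blobs by angular width and range so that only roughly human-sized clusters are reported; the tolerance and gap values here are assumptions to tune for the actual sensor noise.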
Chinese manufacturer RoboSense says that its new, high-performance solid-state LiDAR system for autonomous driving is 1/400th the price of traditional 64-line LiDAR systems and has updated features not found in even higher priced systems. The $200 RS-IPLS Intelligent Perception LiDAR system (yup, the price really is two Benjamins) is designed for the mass production of vehicles. Maybe not for small tanks -- but large ones?
@@AndreasSpiess If I may recommend one camera, have a look into the RealSense cameras from Intel. We are using the D435 for our robot. It's a stereo vision camera with structured light added to it. realsense.intel.com/stereo/
Adding vision would be the evolutionary step for certain. I did a few experiments in C++ with OpenCV using a Cubox-i Pro and a very cheap 720p USB camera. It's actually very simple to build visual pipelines that filter / mask as required. There are also quite a few tutorials on training for things like faces / people and objects. I used this to get started: github.com/WPIRoboticsProjects/GRIP It's a great tool to create the visual pipeline ( cam -> filters etc. ); IIRC it exports code to C++ and Python.
Your video highlights why we need sensor fusion and better sensors. No sensor can cope with all imaginable and unimaginable scenarios. A certain failure rate must be planned for and remedied adequately.
Mine was already 2D. But you are right, tilting would generate the needed third dimension. But you would lose speed, which is quite valuable for this application.
There are 5 solutions to the resolution problem: 1) use a rangefinder that can be polled at a higher frequency so you can rotate it faster (like 50 rpm). 2) Put more rangefinders at slight angles into the same column. 3) Instead of using a laser pointer, use a laser line, and instead of a 1D sensor strip, use a 2D high-speed camera, so each scan gives you a line of depths spanning the vertical FOV of the camera; then you rotate the whole laser-line/camera assembly as you would a rangefinder (you'd usually have a fixed assembly and a rotating mirror). This will need a fairly beefy machine for image processing. 4) Use a structured-light sensor. 5) Use a time-of-flight camera (the price of these is currently tumbling).
I did some investigations into the different principles of measuring distance (in some earlier videos). TOF and triangulation are both fast enough. Normal rangefinders use a different principle, which is very slow. I still have a "golf" rangefinder here I have to test. It promises it can measure speed at a long distance. We will see...
It may be possible to perform LIDAR scanning using three 360° cameras and a scattered laser. The laser would have a simple time-stamped carrier signal. The target object reflects the laser back to each of the 360° cameras, which are calibrated to track the direction of the incoming laser and decode the time stamp, giving a triangulation ability. The rest is processed much like GPS.
Is it possible to run multiple of these A1 sensors in the same vehicle, or in the same space, without interfering? Maybe by synchronizing phases in some way? Or by using different wavelengths?
The laser beam is quite small, so it should be no problem to run a few of them in parallel. The chance one sees the other for more than a fraction of a second is quite small. But I only have one :-( So I cannot test it.
Hello Andreas, hello all. Great info, thank you, noted. Marvelous. Hey, is it possible to make a servo rotation device to cover all axes (up, down, left, right and all around) for small home drones or other devices, for fast obstacle avoidance? Cheers.
If you take discrete points with a LIDAR, you are sampling, and you need a sampling filter. The lidar misses walls because the builder didn't go to school and didn't do his Nyquist theorem homework.
To escape the 2D restriction inherent in the rotating laser platform, could one use a two-axis mirror system of the type that is inside of a grocery-store bar-code scanner?
Andreas
It is not just this video. It is all your videos that are helping me learn more and more about electronics, robotics, etc. Thank you. This was another helpful video
Nice to read that my work is helpful!
Gugus. Very good, Andreas. A pity there are too many desktop jockeys today and fewer thinkers like you.
One thing people usually do is to swing the rotating lidar to change its plane, like mounting it on a servo-actuated hinge. You lose time resolution, but gain volumetric information.
Yes, good information.
Lidar is in fact very good at detecting obstacles and there's a major point missed here: autonomy systems don't just judge a single lidar scan, but many successive scans over time to answer the 'is there an obstacle here'. It also helps lidar tremendously when it's on a moving platform with varying roll and pitch and there's a good positioning system (RTK gps and high rate IMU) aiding in positioning-tagging points.
You are right, "sensor fusion" can add a lot of information.
TL;DR: A 2D lidar only detects obstacles in a 2D plane.
That was the point of the video, I think.
ok.
Yet I think you could solve this problem by tilting the LiDAR forward and sideways.
If you use more than just one of these, you could cover more of the relevant field of interest.
About the "ground drop" problem: A LiDAR entirely dedicated to scan the "road ahead" by being tilted forward would provide sufficient resolution.
In the end: I doubt there's THE ONE sensor to cover everything. You have to combine multiple ones to cover all eventualities. And yes, stereoscopic cameras and image recognition are a part as well.
@@AndreasSpiess Pan and tilt mechanism allows collection of data in 3D, and production of a point cloud representation of the surroundings.
@@AndreasSpiess Pretty stupid point. In other news; water is wet.
The auto industry has solved this LiDAR blindness problem by using 75GHz RADAR to supplement the LiDAR sensor. Other types of sensors that are more nearsighted can be used to improve nearby object accuracy.
AFAIK they also use some 3D LIDARS
Another great video!
One of the early systems I was involved in 2009 was using both LIDAR and Ultrasound together as you show in the last segment.
We had the same limitations you discovered. However, we also came up with a solution. Detect, query, target, verify.
The Lidar sensor would detect an object. If limited return data fell outside of programmed limits, the Ultrasound sensor was sent a command to target the unknown obstacle. Of course, if the sensor was fixed, this had limitations. Our solution, to mount the ultrasound sensor on a plate that had X and Y servo motors to control the directional plane required for targeting. This raised two more problems, time and number of queries per second (plus latency).
Our solution, install four ultrasound sensors, one at each corner of our robot.
Raspberry did not yet exist. We too had the exact same data processing issue. In fact we still had issues when we attempted to use Motorola RISC 88000 processors. In summary, we did not properly plan project objectives.
Our solutions were far too complex and over engineered.
We were trying to create high fidelity resolution that was never required and did not focus our efforts to a simple problem, target, define and avoid.
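The detect, query, target, verify loop described above could be sketched as a small state machine; the names and thresholds below are illustrative, not the original 2009 code:

```python
# Sketch of the detect -> query -> target -> verify loop.
# The three callables stand in for real sensor/servo drivers.

DETECT, QUERY, TARGET, VERIFY = "detect", "query", "target", "verify"

class Avoider:
    def __init__(self, lidar_scan, aim_ultrasound, ping, max_valid=6.0):
        self.lidar_scan = lidar_scan  # returns (angle_deg, range_m) or None
        self.aim = aim_ultrasound     # points the servo plate at an angle
        self.ping = ping              # fires the ultrasound, returns range or None
        self.max_valid = max_valid
        self.state = DETECT
        self.obstacle = None
        self.confirmed = None

    def step(self):
        if self.state == DETECT:
            self.obstacle = self.lidar_scan()
            self.state = QUERY if self.obstacle else DETECT
        elif self.state == QUERY:
            angle, rng = self.obstacle
            if 0 < rng < self.max_valid:
                self.confirmed = True   # clean lidar return: trust it
                self.state = DETECT
            else:
                self.state = TARGET     # dubious return: hand to ultrasound
        elif self.state == TARGET:
            self.aim(self.obstacle[0])  # servo pan/tilt toward the obstacle
            self.state = VERIFY
        elif self.state == VERIFY:
            self.confirmed = self.ping() is not None
            self.state = DETECT
```

With four corner-mounted ultrasound sensors as described, `aim_ultrasound` would also pick whichever sensor is closest to the target bearing before panning it.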
Thank you for sharing your experience. If I understand right, a bunch of ultrasonic sensors pointing in different directions would have done the job for obstacle avoidance?
Suggestion: Tilt the LIDAR down a few degrees at the front, and put a mirror behind it to reflect the backwards laser beams back towards the front again but on an UPWARDS tilt. This will effectively give you two forward facing detection planes, on different tilted angles, one up and one down. You just sacrifice your backwards facing detection.
I am not sure if triangulation works with mirrors. Do you?
@@AndreasSpiess It should be fine, it's just another layer of optics. IIRC, on other similar technologies like the Microsoft Kinect, mirrors work. Easy to test though with your bathroom mirror or a hand mirror.
@@AndreasSpiess th-cam.com/video/1Qx8NzuSSJ4/w-d-xo.html Here you can see how triangulation systems like this just treat mirrors as if they were portals to another room.
Further thoughts: You could keep it simple too by angling the LIDAR & mirror so the top & bottom detection planes "trigger" at a similar distance to each other, so you don't have to bother separating the 2 planes in software & setting different trigger distances. Just position everything so that if *either* is triggered the robot will do the same thing (stop, then reverse & turn, etc.).
Or: Since you don't want the top plane to continue upwards to be triggered by things high above, perhaps it will be mounted high on the robot facing straight forwards (no upwards angle), and only the low detection plane will be angled down. Remember that mirrors reflect all the way to their edge, so it is easy to mount a mirror as the highest point of the robot with no unprotected blind spot above it. This also gives the bonus ability of the top detection plane being able to see forwards at a maximum distance, so you retain the ability to use it for more than just simple collision avoidance.
If it's servo controlled then you know where it's facing, so you could code a transform so that the alternate 180-degree fields correspond to 2 angles.
Even without knowing the direct facing of the sensor, you could still write code to distinguish 2 planes from each half (or however much) of the total scan.
Edit: because if the light emitted and detected is on the same face, and that face is facing a mirror, then it'll still emit and detect the return at whatever angle the mirrors are set to, as long as the mirror is reflective enough and set up so there is little to no return to the detector from the mirror surfaces themselves. Then, OK, you lose whatever field of view is 'blocked' by a mirror, but you gain more data for the remaining field, so..
And for full 3D you could expand it: instead of taking on the gyroscopic torque of moving the whole lidar assembly around an axis perpendicular to its rotation to gain another degree of resolution (i.e. angling it up and down), you could much more easily keep the lidar assembly fixed and then angle a mirror up and down to get a wider field of view / more planes.
.. even though at first I agreed with your (missed) sarcasm ;p I then thought: no, wait a minute, there is a way that could actually work and make sense.
Besides which: of course triangulation works with mirrors. If the mirror's plane is normal to both beams, then it doesn't affect their relative angles (discarding real-world material effects), so both angles are kept through the reflection. The only thing that changes is the distance travelled from the device to the target because of the longer path taken, so you want to take that into account and add that offset to your results, or reconfigure the system to cancel it out.
I appreciate the objective manner in which he approaches the topics.
Thank you!
I have a better title: 2D LiDAR struggles when 3D is needed
There is no “ugly truth” here, just common sense it seems?
Many people think that this $100 device solves the issue of obstacle avoidance.
@@AndreasSpiess many people think clickbait titles are a good way to drive a channel into the trash no matter how informative it is.
Excellent video! For short-range obstacle avoidance, an ultrasonic system with an omni source and a multi-element phased-array receiver can be used for DoA estimation with full spatial coverage (a four-element pyramid would be the minimal number of phones for full spatial coverage). The main problem with this approach is that it can only see one thing at a given two-way range from the system at a time, and if there are multiple reflections received at the same time, the system can become confused, but it is generally possible to throw out bad results by requiring all returns to come in at consistent times at each receive-array phone.
I was actually working on this problem tonight for an underwater military application when I took a break to watch your movie. But in my case, the problem is to detect marine mammals anywhere near a high-power low frequency acoustic surveillance array. During peacetime, these arrays must be turned off when marine mammals are nearby because having them wash up on a beach is terrible publicity for the Navy. The offshore oil exploration vessels have the same problem...
I think this would work because you only need the shortest distance; only the fastest echo is needed. Maybe you stop once too early, but this is not problematic.
My 3-sensor design was very similar, as I started all "loudspeakers" together. So it behaved like an omnidirectional source.
For simple obstacle avoidance, a 2D lidar is great. Many environments are sufficiently constrained that planar sensing is fine. Once you start worrying about complex terrain it becomes a really hard problem. You discuss hanging obstacles - once you start getting into those, you might as well start worrying about *deformable* obstacles. The simplest example is long grass: any simple sensor will register it as an obstacle, but you want to drive right over it. At that point you need multiple sensors - probably a touch sensor combined with something like a planar lidar.
As for not wanting to process all the data from a planar lidar: just wait until you realize you want to add a camera. It's definitely worth having at least Raspberry Pi level processing on a mobile robot, if not a full-on laptop-type computer.
I agree with the camera. These days you can get quite some information with it and ML.
Sir, your tank has no weapons, you need to fix that problem ASAP.
I had a project for a coil gun, but this was too dangerous. It kills a lot of electronic devices around it...
What you're describing are the limitations of the cheap toy LIDAR systems; an industrial LIDAR system, as used on cars, has none of those issues. It scans 3D space, and it does so very fast, using an array of lasers and detectors.
You are right. I think I mentioned that in my video.
Thanks for your contribution and the clarification.
But to use Swiss precision: lidar (or ladar) always means a time-of-flight method, which the A1 does not use. So it is NOT a lidar but a laser triangulation system, originally developed for a robot vacuum cleaner.
Yes, you are right, real and useful 3D lidars are still very challenging and expensive. That is why camera systems got so popular in the automotive industry over the last decade. Some car manufacturers even use a stereo camera to get a better estimation of real distances.
AFAIK LIDAR is the abbreviation for "Light Detection and Ranging" which would include all sorts of systems. But anyway, not important.
Working with stereo cameras for sure would be very interesting for me. But I first have to start with one camera ;-)
Also, to be precise: since we are actually measuring the time needed for a given length, this method is called trilateration.
Please refer to www.robotshop.com/media/files/PDF/revolds-whitepaper.pdf and see that there is no time of flight measured. The angle of the incoming light ray is determined by an array of light-sensitive elements, e.g. a linear array of photodiodes or, in this example, an imager.
You have described a wall as a mirror reflection; however, a wall is not a mirror. It is close to a diffuse surface, so the light gets thrown back in all directions regardless of where the source light came from. The strongest emission of light is along the surface normal, and it falls off from there towards no emission at 90° to the normal. The sensitivity of the sensor also has a cutoff, which is why your LIDAR can't see these walls very far. See Lambertian BRDF.
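A rough model of this (diffuse return falling with the cosine of the incidence angle and with range squared; the albedo value is made up) shows how little light comes back at grazing angles:

```python
import math

def return_signal(rho, r, incidence_deg):
    """Relative optical return from a diffuse (Lambertian) surface:
    albedo * cos(incidence angle) / range^2."""
    return rho * math.cos(math.radians(incidence_deg)) / r**2

# Wall hit head-on at 2 m vs. grazed at 80 degrees off-normal at 2 m:
head_on = return_signal(0.5, 2.0, 0)
grazing = return_signal(0.5, 2.0, 80)
print(grazing / head_on)  # cos(80 deg) ~ 0.17: far less light returns
```

If the receiver needs some minimum return to register a point, a wall seen at grazing incidence drops below that floor at a much shorter range than a wall seen head-on, which is consistent with the parallel-wall behaviour discussed in this thread.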
You are right. I tend to simplify a little. I showed and mentioned this effect when I measured the "wall", which was a dirty wooden board. Truly not a mirror.
Resolution is the smallest increment something can measure. For example, a ruler that has markings 1 mm apart has a resolution of 1 mm. A ruler that has markings 1 cm apart has a resolution of 1 cm.
Summarised:
- Accuracy is how accurate to the real value the measurement is.
- Resolution is the smallest increment the measurement can show.
You are right. I mixed things.
Nice video; it shows clearly that you need more than just a lidar: lidar for SLAM, IR or ultrasonic for obstacle avoidance, switches with bumpers for collision detection and IR sensors to detect the stairs.
One thing: at 9:21 you are confusing "resolution" with "accuracy". The RPLIDAR gives the distance with a 0.5mm resolution but the accuracy depends on the type of surface. Slamtec does not give any data on this, but from experience I found that it is safe to assume a 1-2cm accuracy of the measured distance.
You are right!
Resolution vs. precision vs. accuracy 101.
What about letting the lidar tumble or nod in a controlled manner, e.g. with a servo, so achieving a pseudo-3D operation? Just curious.
Adrian Schneider, interesting.
Was thinking the same. Would have to move very slowly due to gyroscopic resistance force.
There are a few viewers who proposed it. It is possible, but you lose speed which is essential for obstacle avoidance.
Andreas Spiess plus - you'd be fighting gyro effect constantly...
Gyroscopic forces would be minimal for something like that. It's light and only spins @ 7Hz. Nothing a servo couldn't handle
Thank you, Andreas. Great presentation. My question (and yeah, we are in 2021 now) is: with what you've presented and what I gleaned from other sources, isn't this why things that crawl, fly, etc. utilize multiple sensor types to create a holistic picture? The end of this video showed you positioning ultrasonic sensors in three different orientations with the lidar. The limitations that you identify (and thank you for that) were pretty much the icing on the cake for me for why our critical systems out there now are using every available tech to overlap and cover the blank spots. In this time (three years), have you seen any one of the techs rise to the top and provide a clear picture? Thanks again, I really enjoy the breakdowns you do of this stuff, and it's serving well in educating me on the topic.
Seen from here we are in the "innovation valley of death" between the hype and a useful system in self driving cars (www.researchgate.net/figure/The-valley-of-death-in-innovation-and-the-context-of-the-simplified-BID-process-Based_fig1_331105240 )
For slow-moving vehicles, a servo tilting the lidar would allow it to scan up/down a few degrees in the moving direction. Probably enough to detect descending stairs.
True.
*@Andreas Spiess*
6:45 You can put the lidar on a rotating wedge; that way you know at what angle the lidar is (because you know the rotation of the wedge), and suddenly you have made a 3D lidar out of a 2D lidar. (Basically, make the 2D "laser disc" wobble around in 3D space, with whatever angle you need to cover the desired volume.)
(A probable cause for problems is the g-forces created by the lidar's own rotation.)
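The geometry of that wobble is straightforward if the wedge phase and tilt are known; a Python sketch (the axis conventions are an arbitrary choice, not from the video):

```python
import math

def scan_point_3d(r, scan_deg, tilt_deg, wedge_deg):
    """Map one 2D lidar sample (range r, in-plane angle scan_deg) to 3D
    when the scan plane is tilted by tilt_deg and the wedge has turned
    wedge_deg about the vertical axis."""
    s, t, w = (math.radians(a) for a in (scan_deg, tilt_deg, wedge_deg))
    # point in the untilted scan plane
    x, y, z = r * math.cos(s), r * math.sin(s), 0.0
    # tilt the plane about the x axis
    y, z = y * math.cos(t) - z * math.sin(t), y * math.sin(t) + z * math.cos(t)
    # spin the tilted plane with the wedge about the vertical (z) axis
    x, y = x * math.cos(w) - y * math.sin(w), x * math.sin(w) + y * math.cos(w)
    return x, y, z
```

With tilt 0 every point stays in the horizontal plane (z = 0), and with a nonzero tilt the wedge phase sweeps the plane through a cone of directions, which is the claimed 2D-to-3D conversion.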
You are right. But then you lose a lot of detection speed.
@@AndreasSpiess Yes, but maybe it's better than not seeing the obstacles at all?
Good Job! I like real information, verified behaviour. It is great for me to understand what to expect from these tools. Thank you for clear picture.
Glad you enjoyed it!
I purchased a RPLIDAR A1 based on one of your earlier videos. I've constructed a mount to pivot the LIDAR up and down.
I think the increased scan time can be compensated by controlling the rotation of the LIDAR directly.
Rather than having the LIDAR sweep a full 360 degrees, I plan to limit the rotation to 90 degrees or less. I'll need to either control the DC motor included in with the RPLIDAR A1 or to use a different motor to control the LIDAR rotation.
I haven't tested how well I can control the included DC motor yet. Hopefully an H-bridge will allow the LIDAR to sweep back and forth rather than rotate continuously. I'm skeptical I'll have enough control over the included motor to make this work well. I'll likely need to add an encoder to monitor the motor's speed and position. The small motor might not be up to the task of continuously changing direction to produce the desired back-and-forth motion. I think it's very likely I'll need to use a different motor to produce this motion.
The LIDAR itself provides feedback of the device's angular position but I think it would be hard to use this feedback for relatively precise motor control. I think either a gear motor with an encoder or a stepper motor will be needed. A stepper motor should be okay since the position feedback could be used to correct for any lost steps.
I'm hoping by adding a tilting mount and a motor to point the LIDAR (rather than spin the LIDAR), I can use the RPLIDAR A1 for obstacle avoidance.
I've made progress building the tilt mount but I haven't yet tried controlling the RPLIDAR A1's motor to limit its rotation. I used to make videos about my robot projects but I haven't done so in a while. I'll make a point of documenting my efforts as I attempt using LIDAR for obstacle avoidance.
The high inertia of the rotor will kill your plan.
This little motor is in no way designed to accelerate the rotor as fast as you would want. Even a much bigger one will not be able to push the 7 Hz you already have to a scan frequency high enough to make the effort worthwhile.
A possible approach would require a different optical system. A continuously rotating mirror pyramid or polygon drum will give you a scan effect over a limited angle. When the beam leaves the range, the laser points on the next mirror segment and the beam comes in from the opposite corner. This is the principle how laser printers were working 20 years ago.
Bonus: If your polygon mirror drum has enough segments, you can tilt the mirrors to obtain multiple planes. Imagine an octagon drum with 8 mirrors of slightly different angles, so one revolution can give you 8 scans of 90° in different planes.
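The facet arithmetic for such a polygon drum is simple; a sketch for the octagon example (using the fact that the reflected beam turns at twice the mirror's rate):

```python
FACETS = 8                 # octagon drum
FACET_DEG = 360 / FACETS   # 45 degrees of drum rotation per mirror facet

def beam_angle(drum_deg):
    """The reflected beam turns at twice the mirror's rate, so each
    facet sweeps the beam through 2 * FACET_DEG = 90 degrees."""
    return 2.0 * (drum_deg % FACET_DEG)

def facet_index(drum_deg):
    """Which facet the laser currently hits; with differently tilted
    facets, this index selects the scan plane of the current sweep."""
    return int(drum_deg // FACET_DEG) % FACETS
```

So one drum revolution yields eight 90-degree sweeps, and if each facet carries a slightly different tilt, `facet_index` tells you which of the eight planes the current sweep belongs to.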
I agree with SpeedFlap. It is probably faster to keep the motor running than to reverse directions. And laser for sure will have a longer life.
@Andreas Spiess "I agree with SpeedFlap. It is probably faster to keep the motor running than to reverse directions."
This may well be the case but I don't think I'll be able to stop myself from trying the back and forth method.
"And laser for sure will have a longer life."
If by "laser" you mean the complete LIDAR unit then, I tend to agree. Moving the assembly back and forth could increase the wear on the slip ring and possibly increase the wear on the encoder used to determine angular position.
I was about to say the laser diode itself should be unaffected by my proposed change. Thinking about it a bit more, I concede the back-and-forth acceleration could cause additional mechanical strain to the (electrical and mechanical) support components used with the laser.
I have a feeling any changes I make will likely reduce the functionality of the RPLIDAR A1. It's a really cool gadget as is (thanks for showing it in a video Andreas Spiess). Even if the changes I make aren't practical, hopefully they'll be interesting.
@SpeedFlap "The high inertia of the rotor will kill your plan." I'm not so sure I agree. I'd likely have trouble moving the rotor back and forth quickly with a motor, of similar size of the original motor, but I'm sure I have lots of motors capable of accelerating the rotor without much trouble.
"Even a much bigger one will not be able to exceed the 7 Hz you already have to a higher scan frequency that makes worth the effort."
I'm not really concerned about a lower scan frequency. I just want to convert the 2D scan to a 3D scan, even if the area scanned isn't very big. I'd rather have a 2 Hz scan of an area where obstacles could interfere with the progress of my robot than a 7 Hz 360-degree scan at a single height.
I considered removing the LIDAR unit from the rotor and using my own pan-and-tilt mount, but I didn't see an obvious way to separate them. I didn't want to mess up the slip ring/encoder section of the device, so I ended my disassembly exploration with the LIDAR still attached to the rotor.
Thanks for the feedback guys. Thanks for the great videos Andreas Spiess. I'll make sure to document my efforts with videos posted to my channel (I haven't posted anything interesting for a while).
One of the ways to speed up computations is to do a buffering and batching. Buffer lets say 30 points, then send an array of 30 points to a different core or processor where they are all processed together (i.e. denoising, combinging with previous data, doing kinematic compensation from gyroscopes and accelrometers, etc). This way per-point overheads are reduced greatly, and main code only needs to focus on communication with lidar and basic processing of points. 1400 points per second is extremely small amount of data for modern computing systems. In DSP and audio processing systems you usually process about 100k samples per second. In video processing dozens of millions of points per second. In graphics systems, you can be processing trillions of entities per second, using FPGAs, DSPs, GPUs or multicore CPUs. As well by smart software solutions like batching, async processing, circular buffers, vectorization, etc.
Seriously, an 8 MHz, 8-bit MCU with no FPU, no DMA, and no multiple high-speed UARTs is terrible for any serious robotics.
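To make the buffering-and-batching idea above concrete, here is a minimal Python sketch (the batch size of 30 and the toy denoise step are illustrative choices, not from any particular lidar driver):

```python
BATCH_SIZE = 30  # process points in batches to amortize per-point overhead

def batch_points(point_stream, batch_size=BATCH_SIZE):
    """Group an incoming stream of (angle, distance) points into batches."""
    batch = []
    for point in point_stream:
        batch.append(point)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the remainder at end of stream
        yield batch

def denoise(batch):
    """Toy per-batch processing step: drop zero-distance (invalid) returns."""
    return [(angle, dist) for angle, dist in batch if dist > 0]

# simulated second of data: 1400 points, every 10th one an invalid zero return
stream = [(i % 360, 0 if i % 10 == 0 else 1000 + i) for i in range(1400)]
batches = [denoise(b) for b in batch_points(stream)]
```

On a multicore system each batch would then be handed to a worker (e.g. a queue feeding a second thread, core or process) instead of being processed inline, so the reader loop never stalls.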
This is what I meant when I mentioned a subsystem. With my 3 ultrasonic sensors, I used an ATtiny to do the measuring. That way, the main processor only had to read two values: the shortest distance detected and the direction it came from. Much faster.
Very interesting video! I do not know about the RPLIDAR but I have seen other LIDAR sensors on the market that do not capture BLACK surfaces properly or at all just like your stealth plane example! In addition they may have too many noisy artefacts to be filtered. I recommend asking the manufacturer for a video of the sensor in operation before buying.
You are right: If no light is reflected, these devices do not work. This is also the case with angled surfaces (the F-117 stealth aircraft used this fact).
The 2D system could be made into a 3D system looking in the forward direction by mounting it vertically on a turntable that "oscillates" from left to right and back at a defined speed within a defined angle. You probably lose some scan rate for backward beams. That's the same with horizontal mounting. But while the system is vertically mounted, the challenge would be to suppress beams in all other, unwanted directions by coordinating the turntable's speed with the scanner's speed. The rest should be a matter of software. Probably exceeds the possibilities of a small computer...
I’m just blowing the faint wind of my thoughts into this great brainstorm.
Thank you for all your great videos!
The idea of moving the head was discussed in other comments. It is possible, but you lose time with it.
Very good video on LIDAR's blind spots. I think that LIDAR may be good for specific applications, but for use in vehicles it is a dead end, and stereoscopic cameras win on all ends. They are cheap, and devices and software are capable of resolving objects in real time.
You might be right. Do you know of a "cheap" stereoscopic camera for that purpose? I think they are still out of a Maker's budget.
I do not know of any in particular. However, I think 2 regular cheap 640x480 cameras ($4) would be sufficient to start, connected using I2C or SPI. However, processing the images with reasonable precision would require some processing power. I am not sure it is suitable for even the more powerful ESP32, unless you dedicate one MCU and its 2 cores just to this activity. Writing the software would be an interesting problem nevertheless, or using existing libraries (OpenCV?)...
Samsung nailed it with the Jetbot vacuum cleaner.
I do not know this product :-(
A solution may be to put one sensor on top, scanning the x/z plane, while another is mounted on the front of the vehicle with nothing under it (maybe mounted with an L bracket?), scanning along the y axis. This way you have a 2D plane under and on top of the vehicle. It would still not be perfect, but it would at least cover stairs and low ceilings.
Edit: it's only a thought, maybe there is something I didn't think of, like the Arduino not being capable of processing the input from both sensors xD
And it becomes also quite bulky...
@@AndreasSpiess You're also right; it also makes the panzer (:P) way more unbalanced at the front, but it would still be a solution... Also, putting 2 slightly oblique static sensors, one up and one down, would be great to detect stairs and/or low ceilings. But I dunno, I never used those sensors except with the Lego Mindstorms NXT 2.0, so you're for sure more expert; I'm just suggesting ideas xD
Turn it 90 degrees (rotating vertically) and scan it horizontally. That converts it to a 3D lidar, if you can find or make software to handle it. You can define the scanning angle to whatever you need. It would be pretty slow, however.
Exactly, it becomes quite slow. There is nothing like a free lunch ;-)
Very interesting and thorough. Thank you. I don't see why rotating ultrasound would fail, but that's probably because I'm a bit dim.
Ultrasonic sensors need a long time for each measurement (because of the speed of sound). So this device would be very slow.
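The speed-of-sound limit is easy to quantify; here is a minimal sketch (343 m/s assumed for air at room temperature):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C (assumed)

def measurement_time_s(range_m):
    """Minimum time for one ultrasonic measurement: the ping travels out and back."""
    return 2 * range_m / SPEED_OF_SOUND

def max_rate_hz(range_m):
    """Upper bound on measurements per second for a given maximum range."""
    return 1.0 / measurement_time_s(range_m)
```

At a 4 m maximum range this caps a single sensor at roughly 40 measurements per second before any processing, so a rotating version would sweep very slowly indeed.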
LIDARS are mostly used for mapping and planning, not for emergency obstacle avoidance. What you can do to improve the chances of detecting small objects with the LIDAR is to implement a probabilistic sensor model like HIMM or Bayesian and map the data over time.
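A minimal sketch of the "map the data over time" idea, using the standard log-odds occupancy update (the hit/miss probabilities below are assumed values, not calibrated for any particular sensor):

```python
import math

P_HIT = 0.7   # assumed: P(cell occupied | sensor reports a hit)
P_MISS = 0.4  # assumed: P(cell occupied | sensor reports no hit)

def logit(p):
    return math.log(p / (1.0 - p))

def update_cell(log_odds, hit):
    """Fold one scan's evidence for a single grid cell into its log-odds."""
    return log_odds + (logit(P_HIT) if hit else logit(P_MISS))

def probability(log_odds):
    """Convert accumulated log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

# a small obstacle detected in 4 of 5 successive scans
cell = 0.0  # log-odds 0 == 50% prior
for hit in (True, True, False, True, True):
    cell = update_cell(cell, hit)
```

After a handful of consistent scans the occupancy probability climbs well above 90%, while a single contradictory scan only dents it; that is the point of judging many scans rather than one.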
I always thought LIDARS on cars are also used for obstacle avoidance. As I showed the LIDAR was able to "see" quite small objects, but only in one plane. So the enhancement of analysis would probably not help the (fast) obstacle avoidance. However, it could help to increase mapping accuracy.
Looking forward to affordable MEMS LIDAR (oscillating mirror). Wondering how effective a VL53L0X time-of-flight sensor would be, mounted on a couple of servos such that it looked around in a circle.
I tried the VL53L0X for that purpose in one of my videos. It was way too slow. And its range is also only about one meter.
The sensor completely lacks optics; it doesn't have lenses to collimate either the laser beam or the incident light. So it fires a wide 35° beam and then collects the return signal over a 25° field of view. It has low sensitivity and basically performs similarly to ultrasonic, but often worse. You basically get something like a weighted average of the distance within its field of view. The main reason to use it is that it is much smaller than ultrasonic.
As always Mr Andreas, very interesting topic. Thank you.
You are welcome!
As long as your vehicle moves at a sensible speed for home-made robots (i.e. under 5 km/h), you should be able to get around the scanning-plane problem by taking advantage of the high scanning rate, oscillating the angle and overlaying the data.
This idea was discussed a few times in the comments. I never tried.
@Andreas Spiess - thanks for a great video series - you always ask the "right" questions when checking out new technologies.
In an earlier reply you stated "So today, I tried (my RPLIDAR) in front of a mirror. It was able to detect obstacles through the mirror, but only if the mirror was in a 90 degrees angle to the laser. This was only a narrow area.".
Can you be more specific? The suggestion from some readers is to take the back side of the 360° circle and use one mirror at 45° to point the laser (and the reflection path) up, and another one above at -45° to point it forward, so the angle between the incoming laser and the reflection will be 90°. Does this work? Can you hold the RPLIDAR at 45° close to the mirror and check if it shows reasonable distance readings based on the reflection?
Of course it would help if you use a "surface" mirror where the aluminium is not behind the glass but in front of it. Then you only have one reflection plane and not both the glass plane and the aluminium back plane for reflection. A cheap source for such mirrors for 6€ is here (they are used in overhead projectors): www.betzold.de/prod/77637/ .
I did exactly what you describe. And it showed reasonable results. But as I wrote, only for a few degrees, because the mirror was flat and not round. You can draw the beams and see where the light goes if the beam points to the back of the mirror at an angle of, let's say, 20 degrees. Then the beam no longer goes to the front of the bot, but to one side.
Good approach to the topic. Like your video. Looking for more of your videos now.
Welcome to the channel!
As always, superb presentation showing pros and cons, with real-world data and situations to back it up. Keep up the great work :).
Thank you!
Obstacle avoidance should not be confused with SLAM; after all, in obstacle avoidance we want to avoid SLAMming into things! :-)
You are right!
Thank you for another great video. Did you define LIDAR? = Light Detection and Ranging. I wonder how many folks will need to learn that soon.
Some also say "laser" instead of "light".
And that is why sensor fusion is a thing. Not sonars by themselves, and not lidar by itself. Do both and you can do more than the sum of them.
I agree
I liked this video; it pointed out the main positive and negative sides of the lidar.
If we want to be sure, we would need to use a combination of sensing technologies: lidar, ultrasound, IR, 3D camera.
I am preparing a PhD thesis on this topic (self-driving cars) and I will start experimenting with the TurtleBot3 Burger, which has a lidar included.
Good luck with your Ph.D.!
I think the F117's angular shape was more a result of limited computing power. The engineers used computer simulations to predict the radar cross section of a particular proposal. But the limited computing power back then meant they could only handle very few polygons. That's why the later stealth airplanes look a lot different.
You might be right. And they did not have a lot of time to build it. So they used what they had (aluminum, for example). And they had also to prove the concept to convince Congress. It was quite revolutionary at the time. Nicely described in Kelly Johnson's book I recently showed.
The shapes of the newer planes, however, seem to follow the same formula and principle.
Hi Andreas, I came to the same summary. Then I got to thinking outside the box (pun) while watching this. What if we mount this lidar in the vertical plane rather than the horizontal? Then we no longer have a low-hanging object or ground object problem; we now have a left-to-right problem. If we mount this on the side of our tank, for instance, and as high as possible, this should solve the problems discussed. Now left-to-right is a problem, as our sensor is only mounted on the right side, so we can mount a second on the left side to solve this. However, I propose that instead of one mounted on the left and another on the right, we have both left and right sensors mounted on a spinning platform on top. Now we resolve the shortcomings of 2D lidar and have 3D lidar (or close enough). What do you think? Picture an old police siren that has two lenses inside which rotate; these are our two vertically mounted cheap lidar modules instead. Thoughts?
There were a few ideas in the comments. You can look at it from different angles: In the end, you have to reach a 3D area. You either do it with a moving 2D sensor (with the tradeoff of additional mechanics and slow speed) or you upgrade to a real 3D scanner.
@0:54 Obstacle avoidance is more than providing a stop signal. For instance: if the device is a debris-recovery swarm drone, it may need to define an orbit modification in order to avoid a collision.
You are right.
My idea on solving the 3D problem:
Attach it to a servo so it measures vertically. Then rotate the servo in the desired angular resolution.
Voila: 3D sphere covered.
I could think of a couple of problems with this, but suffice it to say, if you can create a 3D assembly there is BIG money in it right now.
Voila: Easy money for a smart guy like you!
@@real_arbuckle There is already a similar device made by SparkFun...
If you don't mind using a Raspberry in your project you could use a Kinect. It has a depth range of 0.8m to 4m and a 70-degree horizontal and 60-degree vertical field of view. I did some tests using the freenect driver and interfacing it with ROS was a breeze.
Kinect seems to be an interesting technology. But I heard they are no longer produced?
Oh, I didn't know that. That's unfortunate. Luckily, here in Brazil we have an abundance of those sensors, and you can easily find a used one for like 20 bucks.
Very interesting. It covers perfectly the problems I'm dealing with. Do you think it is worth its money?
I do not know how long it lasts. But if you need a cheap lidar it is probably the only way to go for the moment. I would not try to build my own.
Should put the 2d lidar on a tiltable platform. It would greatly reduce the problems mentioned even if not a perfect solution.
This solution was mentioned by a few viewers. It is possible but you lose speed of detection.
LIDARs work like ultrasonic distance sensors but use light instead of sound. They don't triangulate anything. You might have mirror walls, but most people have regular white walls, which scatter light in all directions just fine. And those 3 fixed LIDAR systems you tested in an earlier video aren't even LIDARs; they are just regular infrared proximity sensors, which work in a completely different way and have nothing to do with LIDAR.
And I'm not sure what the purpose of this video is... Of course, if you don't have the right LIDAR system for your obstacle avoidance system, you're gonna have a hard time avoiding obstacles. If you don't have the right screw for a particular application, that also doesn't mean screws in general aren't well suited.
LIDAR is absolutely awesome. And if you have the right system it has countless potential uses.
Here are some examples of what LIDAR data from a good LIDAR system looks like:
th-cam.com/video/nXlqv_k4P8Q/w-d-xo.html
th-cam.com/video/aIxYt7DkK5A/w-d-xo.html
th-cam.com/video/4RRBOoLsCEg/w-d-xo.html
My definition of LIDAR is: Light Detection and Ranging. So we use different definitions. And there are a few ways LIDARS work. Only one is Time-of-Flight.
@@AndreasSpiess "Light Detection and Ranging" isn't a definition, it's just what LIDAR stands for, it is based on rader which stands for "Radio Detection and Ranging", but neither of those say anything about how the work or what would be considered a LIDAR. If you pump so much current through a diode that it starts glowing that doesn't make it an LED, it just makes it a broken diode.
Considering the term "Light Detection and Ranging", I can understand someone considering any system that uses light to measure distance to be a LIDAR. But technically LIDAR is a specific way of measuring distance using light, and just as triangulating your position using GPS isn't called RADAR, even though it technically also uses radio waves to range something, a common infrared proximity sensor isn't a LIDAR, even though both commonly use infrared to measure proximity.
For more information on LIDARs visit: en.wikipedia.org/wiki/Lidar
You could augment the LIDAR with other sensors, such as IR TOF sensors for obstacle avoidance and floor sensors to detect the floor. IR TOF sensors are far better than ultrasonic, although they are a little more expensive. They will detect obstacles even at extreme angles, and using an array of pulsed and continuous IR TOF sensors with differing beam widths you can create a cone in front of the robot that will detect all objects. The only downside is their range, typically between 0.6 and 4 meters.
Sounds very expensive. I used one small TOF device in one of my videos. It cost nearly 10 $ and it only covered a very small area because it uses a laser. Ultrasound has an opening angle of about 30 degrees, which can be quite handy for such an application.
@@AndreasSpiess Yeah, the more sensors you add, the more expensive things become. I use the TOF sensors instead of the ultrasonic sensors because they don't block your sketch like the ultrasonic sensors do, as long as you don't request data too quickly from them. The VL53L1X is a really good sensor, but yeah, it's a bit expensive; it has a beam width of about 27 degrees. I always thought the ultrasonic sensors had a narrower beam width of about 15 degrees, but I could be wrong. Great video by the way, keep up the good work.
I have a VL53L1X here and have been planning to test it for some time. I tested the small one.
Nice comparison between technologies and devices.
Thank you!
Hi! Thanks for the video on the RPLidar A1. I'm wondering if the A1 will work outdoors/in sunlight. In order to produce 3D point clouds with the A1, is there
software for navigation already designed for this unit? Will I need to write code for it to work with ROS? Thanks for the help!
I did not test it outside, but I assume the range will be considerably shorter if it is sunny.
Concerning ROS integration: I found this link: blog.zhaw.ch/icclab/rplidar/
I came for LiDAR applications, and got out with an interesting fact about nighthawk 😮
I am surprised that the Arduino can't process the 115200 bps signal. I will try it.
:-)
Oops, SLAM is not about computing paths; it is about estimating one's position and the positions of the surroundings (or creating a map).
Why would I want to know my environment if not to decide how to move?
Why not put the LIDAR on a servo-tilted platform? To simplify the measurements, you could use a smaller or larger number of tilt angles and skip the data while moving. For example, you can move the servo to the "low" position and take the distances. Then move it to "mid" and read the sensor data. In this short three-step example, you only have to move the servo to the top position and you've got all the necessary data for a tracked vehicle. I guess my suggestion would handle the problems with high and low obstacles in front, and while driving backwards too. I think this would work, at least if it doesn't drive too fast 🤣.
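A sketch of this three-step scheme in Python; the `servo` and `lidar` objects with `set_angle()` and `get_scan()` are hypothetical placeholders for whatever drivers are actually used:

```python
import time

TILT_STEPS_DEG = (-15, 0, 15)  # "low", "mid", "high" tilt positions (assumed values)

def scan_3d(servo, lidar, settle_s=0.2):
    """Step the tilt servo through fixed positions, take one full 2D scan at
    each, and tag every point with its tilt angle: a sparse 3D point set."""
    cloud = []
    for tilt in TILT_STEPS_DEG:
        servo.set_angle(tilt)
        time.sleep(settle_s)  # skip data while the servo is still moving
        for angle_deg, dist_mm in lidar.get_scan():
            cloud.append((tilt, angle_deg, dist_mm))
    return cloud
```

The cost is obvious from the timing: three settle delays plus three full revolutions per 3D frame, so the frame rate drops well below the native 7 Hz.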
You are right. But you lose time, which is not unimportant for this application.
I absolutely agree!
A ring of alternating small mirrors pointing up and down, spaced a few mm apart, could allow an up and down view, at the sacrifice of some angular point resolution, of which it has plenty.
Good idea. Probably needs some precision mechanics and synchronization...
Well, it depends. I've used lidars for obstacle avoidance just fine. The reason you don't see the wall is most likely a resolution problem and not a reflection problem (in all fairness, though, I used a 3000€ one). But the biggest reason why lidars, or most 2D and 3D scanners, aren't that useful is the compute power you need to properly use the data (and a Raspberry just won't do it).
I think I described why the lidars cannot see some walls. The 360-degree LIDAR sees walls where the angle is big, so I do not think it is a matter of resolution. If no light is reflected, I assume even expensive LIDARs will detect nothing.
The error here lies in the assumption about reflectivity. A wall is a diffuse reflector, so the reflection is largely independent of the angle. A lidar sensor (whether single-beam, 2D or 3D) can therefore perceive a wall regardless of the angle. The reason the wall "disappears" in the test is the way the sensor works: the sensor always measures at fixed angular or time intervals. For a wall running parallel to the beam, the Euclidean distance between two measurement points on that wall grows with increasing distance from the sensor, so the points quickly end up either outside the sensor's range or off the wall entirely.
If you want good and reliable obstacle avoidance, look into the potential field method.
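The sampling geometry discussed above can be put into numbers. For a wall at perpendicular offset h, a beam at angle θ from the wall normal hits it at x = h·tan θ, so the spacing between successive measurement points grows with the tangent. A minimal sketch (the wall offset and the RPLIDAR-like 1.8° step are assumed values):

```python
import math

ANGULAR_STEP_DEG = 1.8  # ~360° / 200 points per revolution (assumed, RPLIDAR-like)
WALL_OFFSET_M = 0.5     # assumed perpendicular distance from sensor to the wall

def point_spacing(theta_deg, step_deg=ANGULAR_STEP_DEG, h=WALL_OFFSET_M):
    """Distance between two successive measurement points on a parallel wall,
    for a beam at angle theta_deg from the wall normal."""
    t0 = math.radians(theta_deg)
    t1 = math.radians(theta_deg + step_deg)
    return h * (math.tan(t1) - math.tan(t0))
```

Near the normal the points sit millimeters apart; at 80° off the normal the same 1.8° step already spans tens of centimeters of wall, which fits the wall "disappearing" after a couple of meters.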
Even if the wall is a diffuse reflector, the amount of reflected light varies with the angle. And the LIDAR has a minimum sensitivity. At least that is how I imagine it. And the experiment with the laser pointer shows exactly this behavior: I see a strong spot of light where I expect it (angle of incidence = angle of reflection) and practically no energy in other directions. In addition, the spot gets "smeared" lengthwise, which weakens it further. But I agree with you, it is still visible, although very weak.
I confirmed the range of 6 meters against a perpendicular gray wall. But it no longer "saw" the parallel wall after about 2 meters. That is why I assumed it has to do with too little reflected energy.
@@AndreasSpiess With high-quality lidars (SICK, Hokuyo, Velodyne), detection is guaranteed at 10% reflectivity; in our tests, reflectivity as low as 2% was sometimes still detected. How exactly triangulation sensors (like the RP-LIDAR series) behave, I cannot say, since we do not work with these sensors due to their short range, high noise, and general inaccuracy.
What if you mount this lidar vertically and rotate it so it can see a hemisphere each half turn?
You certainly reduce the acquisition speed, but in some conditions a low moving speed could be fine with it.
There are many possibilities. But if you lose speed, it will be too slow for most "moving" applications.
Dear Andreas,
The opening angle of ultrasonic sensors is 18° and not 30° as you said.
Maybe 2x18 is also 30 degrees?
I do agree with Elon Musk. LIDAR isn't suited for self driving cars. But that doesn't mean that there aren't uses for them.
Like, a gamer doesn't need a Quadro GPU. It can be used, but it's a massive waste of resources.
Or, you could use a wrench to get a nail into wood, but a hammer is a better tool.
If you look at the state-of-the-art benchmarks on object detection with and without lidar, you will realize lidar is not just redundancy but actually needed for autonomous cars within 3 years.
Here's my very short and to-the-point rant against the capitalization of "lidar" (as "LIDAR" or "LiDAR", both commonly seen): 1. You don't capitalize the analogous acronyms "radar" and "sonar" as "RADAR" and "SONAR" or "RaDAR" and "SoNAR", do you? 2. When the word was coined, the very first times it ever appeared in print, it was printed in all lowercase.
The solution is obviously a sensor pod with several subsystems, each gathering data from its area of responsibility.
Yes, you are right.
Thank you Andreas, this is very useful and informative!
You are welcome!
Great video! Let me know a cost-effective solution for a cleaner. Would it be possible to make a video about other 2D lidars, such as the SICK TIM, SLAMOPTO V3 or Hokuyo?
I do not know one.
Hey man, you got a link for that tank? Looks like a beast.
Maybe you search for "summer project" on the channel.
Super cool investigation. Thank you.
You are welcome!
There's an invisible portal hidden in your box!
Fortunately for us, the robot is a tank, so we're not affected by the issue of smaller obstacles.
Good Point!
Great video! I am a novice working on a robot and need to do away with the ultrasound because I have dogs. I am using a Raspberry Pi 4B 4GB. Do you have any suggestions to replace the ultrasound units? My robot does not exceed about 3 to 4 miles per hour.
Maybe you look at Time of flight sensors or Lidars? You find videos for both topics on this channel.
That was indeed interesting, and informative
Glad you enjoyed it
Thanks for sharing👍😀
Great video as ever 👍
You are welcome, Asger!
Useful video
Thank you!
Great video as usual. What about the power consumption? I guess it would be high due to the rotating motor and continuous beam.
I measured 80 mA for the motor. I did not measure the LED.
Love your videos :) great work sir
Thank you so much!
Thanks - been cogitating these exact issues. Wouldn't be too hard to put a servo there to make the LIDAR rock forward and backward to get vertical scanning. At least these are fairly inexpensive.
Possible. But then you lose quite some speed, I think.
@@AndreasSpiess Have you tried out a 3d one?
No
Excellent explanation. I am considering a combination of lidar and camera for my Jetson Nano based robot. The real challenge is the speed of data processing and the reaction time in the software/hardware. Python on ROS is easier, but I think it may not be as fast as C++... I may need to resort to low-level assembly code just to read the lidar/camera data to produce the stop command. Other than that, I think the rest of SLAM will work fine. Any thoughts about voice recognition and speech synthesis? Next step is to get my robot to talk back to me. 🤓
ROS + LiDAR works fine on a laptop with an Intel i5 4th gen, and the driver is coded in C… however, I think that even Python could handle the signal.
Thank you very much for the video Andreas, very informative. I'm wondering if it would be possible to scan a dog and see only the body without the fur. Might you know if the LiDAR can penetrate fur or hair or clothing? Thank you very much.
LIDAR uses reflected laser light. Wherever the light is reflected is the distance it measures.
Probably not; it would not penetrate fur. Unless you use a really high-power laser, but then your dog would also lose its coat.
Dear Andreas, how about using parallel processing with several Arduino units? If it is justified, design a dedicated parallel processor.
Arduinos are not very fast. A simple ESP32 is much, much faster. Before going parallel I would use a faster processor.
And now a video showing how I can do it with an Arduino Due only scanning the path ahead in 3D for slow moving four wheel drive robot :)
That is your turn now ;-)
Thank you for this video!
You are welcome!
24 GHz radar modules from eBay are quite sensitive; what about using them for obstacle detection? We used one of those in a project and we were able to measure human breathing.
I used one of those in a video. They only measure speed, not distance.
I have to use the RPLIDAR to detect humans in an industrial field, so the RPLIDAR will be placed in one spot. Can you tell me how to achieve this? I guess I have to do the environment mapping, which I have done using ROS Hector SLAM. But I don't know what the next step would be. Can you please give me some ideas about how to detect obstacles after getting the environment map?
Mmmh, the LiDAR... A device I first learned about in Space: Above and Beyond.
Thank you for informative video about LIDAR :)
You are welcome!
What I'm very interested in is a lidar that could be tilted by a servo, so you basically have a 2.5D lidar.
You can try that. But you lose a lot of speed...
Where do you buy your electronics from? I'm also from Switzerland and it's always very annoying due to shipping.
You always find the links to my sources in the video description
Chinese manufacturer RoboSense says that its new, high-performance solid-state LiDAR system for autonomous driving is 1/400th the price of traditional 64-line LiDAR systems and has updated features not found in even higher priced systems. The $200 RS-IPLS Intelligent Perception LiDAR system (yup, the price really is two Benjamins) is designed for the mass production of vehicles. Maybe not for small tanks -- but large ones?
Sounds very interesting. Where did you see the price? I only got a request form when I pressed "buy"
DesignFax newsletter. Here is the link: www.designfax.net/cms/dfx/opens/article-view-dfx.php?nid=4&bid=827&et=featurearticle&pn=02
Thank you for the link. So it probably will take a while till we get our hands on these devices (if we do not need 10'000 pcs)
What about using 2 cameras and opencv library to do photogrammetry?
(With a raspberry pi for it)
Maybe. But I have to learn first how to use one camera ;-)
@@AndreasSpiess If I may recommend one camera, have a look at the RealSense cameras from Intel. We are using the D435 for our robot. It's a stereo-vision camera with structured light added to it.
realsense.intel.com/stereo/
You should check the realsense D435 camera from Intel. It's around $130 and I highly recommend it.
Adding vision would be the evolutionary step, for certain. I did a few experiments in C++ with OpenCV using a Cubox-i Pro and a very cheap 720p USB camera. It's actually very simple to build visual pipelines that filter/mask as required. There are also quite a few tutorials on training for things like faces/people and objects.
I used this to get started,
github.com/WPIRoboticsProjects/GRIP
It's a great tool to create the visual pipeline (cam -> filters etc.); IIRC, it exports code to C++ and Python.
Or Just one camera and a convolutional neural network
th-cam.com/video/9QRTg-4q634/w-d-xo.html
Your video highlights why we need sensor fusion and better sensors. No sensor can cope with all imaginable, let alone unimaginable, scenarios. A certain failure rate must be planned for and remedied adequately.
You are right! For that, we need to understand the characteristics of the different sensors.
You can get a quasi-2D lidar if you take your 1D lidar sensor and tilt it in an oscillating motion.
Mine was already 2D. But you are right, tilting would generate the needed third dimension. But you would lose speed, which is quite valuable for this application.
There are 5 solutions to the resolution problem:
1) Use a rangefinder that can be polled at a higher frequency so you can rotate it faster (like 50 rpm).
2) Put more rangefinders at slight angles into the same column.
3) Instead of using a laser pointer, you can use a laser line, and instead of a 1D sensor strip you'd use a 2D high-speed camera, so each scan gives you a line of depths spanning the vertical FOV of the camera; then you rotate the whole laser-line/camera assembly as you would a rangefinder (you'd usually have a fixed assembly and a rotating mirror). This will need a fairly beefy machine for image processing.
4) Use a structured-light sensor.
5) Use a time-of-flight camera (the price of these is currently tumbling).
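For solution 1, the trade-off between sample rate and spin rate is a one-liner; here is a quick sketch (the 10 kSa/s figure for a faster rangefinder is an assumed round number):

```python
def angular_resolution_deg(samples_per_s, scan_hz):
    """Angular spacing between successive range samples of a spinning lidar."""
    return 360.0 * scan_hz / samples_per_s

# RPLIDAR-A1-like numbers: ~1400 samples/s at 7 Hz gives 1.8° between points
res_a1 = angular_resolution_deg(1400, 7)
# an assumed faster rangefinder (10 kSa/s) at the same 7 Hz
res_fast = angular_resolution_deg(10000, 7)
```

The same formula also shows the cost of spinning faster: doubling the scan rate at a fixed sample rate doubles the angular gap between points.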
I did some investigations into the different principles of measuring distance (in some earlier videos). TOF and triangulation are both fast enough. Normal rangefinders use a different principle which is very slow.
I still have a "golf" rangefinder here I have to test. It promises it can measure speed over a long distance. We will see...
It may be possible to perform LIDAR scanning using three 360° cameras and a scattered laser. The laser would have a simple time-stamped carrier signal. The target object reflects the laser back to each of the 360° cameras, which are calibrated to track the direction of the incoming laser and decode the time stamp, giving a triangulation ability. The rest is processed much like GPS.
Maybe this is possible. However, the speed of light is quite fast ;-)
@@AndreasSpiess that will only improve the resolution.
This is what I was looking for
Glad the video was helpful!
Is it possible to run multiple of these A1 sensors on the same vehicle, or in the same space, without interference? Maybe by synchronizing phases in some way? Or by using different wavelengths?
The laser spot is quite small, so it should be no problem to run a few of them in parallel. The chance that one sees the other for more than a fraction of a second is quite small. But I only have one :-( So I cannot test it.
Is there any sensor that works like this lidar but with a higher range?
I need a sensor that detects the distance and the angle at the same time.
*A Gem in Electronics Lobby*
Thank you!
Hello Andreas,
hello all.
Great info, thank you, noted. Marvelous. Hey, is it possible to make a servo rotation device to cover all axes (up, down, left, right and all around) for tiny home drones or other devices, for fast obstacle
avoidance?
Cheers.
Run 2 split apart. We haven't had problems.
???
If you sample points with a LIDAR, you are sampling, and you need a sampling filter. The lidar misses the rulers because the builder didn't go to school and didn't do his Nyquist theorem homework.
Maybe. I am not sure.
To escape the 2D restriction inherent in the rotating laser platform, could one use a two-axis mirror system of the type that is inside of a grocery-store bar-code scanner?
Maybe. But not easy on a rotating device. And you would most probably lose speed.