bashrc
Joined 11 Nov 2006
Freedombone interactive installer
An interactive installer for the Freedombone self-hosted web server
460 views
Videos
Demo of big-endian Linux kernel running on Jetson TK1 (ARM Tegra)
409 views · 9 years ago
The final proof! See wiki.baserock.org/How_to_install_a_big-endian_Linux_system_to_NVIDIA_Jetson_TK1
FreedomBox Foundation - James Vasile
18K views · 12 years ago
FreedomBox Foundation Executive Director James Vasile speaks at the Contact Summit, hosted by Douglas Rushkoff and held at the Angel Orensanz Foundation in New York on October 20, 2011.
Kevin Warwick and the Seven Dwarf robots (1996)
1K views · 13 years ago
Kevin Warwick demonstrating the Seven Dwarf robots at an exhibition in Glasgow in 1996. The robots show learning of obstacle avoidance and flocking behaviors.
Navigation with the Turtlebot
459 views · 13 years ago
A successful A to B and B to A navigation from one end of the house to the other. Being able to do this multiple times is a good indication of how reliable the navigation system is.
Navigation with the Turtlebot
162 views · 13 years ago
A pretty successful navigation from one end of the house to the other, as seen within the Rviz GUI. On the way back the bot gets stuck at the final doorway.
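For anyone curious how an A-to-B run like the two above is typically scripted on the Turtlebot, here is a minimal sketch that sends goals to the ROS navigation stack (move_base). The coordinates, frame name and node name are assumptions for illustration, not values taken from these videos.

```python
# Hypothetical sketch: send A-to-B and B-to-A goals to move_base.
# Goal coordinates and the 'map' frame are assumptions.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def send_goal(x, y):
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'    # assumes a map frame from SLAM/AMCL
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0   # identity orientation

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == '__main__':
    rospy.init_node('a_to_b_demo')
    send_goal(5.0, 2.0)   # point B (made-up coordinates)
    send_goal(0.0, 0.0)   # back to point A
```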
Turtlebot following behavior
743 views · 13 years ago
The following behavior works quite well. A possible improvement: if the robot sees something that looks like a flat surface (a wall), it should rotate to search for new people to follow.
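A rough sketch of what the follower, plus the suggested wall check, might look like as a ROS node, assuming a laser-style scan as input. The topic names, gains and the flatness threshold are made-up values, and the "nearly constant depth means wall" test is only a crude heuristic.

```python
# Hypothetical follower sketch: steer toward the nearest scan return,
# but if the region around it is nearly flat (wall-like), rotate to
# search for a person instead.
import rospy
import numpy as np
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

pub = None

def scan_cb(scan):
    ranges = np.asarray(scan.ranges)
    ranges[~np.isfinite(ranges)] = scan.range_max
    i = int(np.argmin(ranges))                 # index of nearest return
    window = ranges[max(0, i - 10):i + 10]     # neighbourhood of the target
    cmd = Twist()
    if np.std(window) < 0.05:                  # nearly constant depth: looks like a wall
        cmd.angular.z = 0.5                    # rotate to search for someone to follow
    else:
        angle = scan.angle_min + i * scan.angle_increment
        cmd.angular.z = 1.5 * angle            # turn toward the target
        cmd.linear.x = 0.3 if ranges[i] > 0.8 else 0.0  # keep ~0.8 m standoff
    pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('follower_sketch')
    pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
    rospy.Subscriber('scan', LaserScan, scan_cb)
    rospy.spin()
```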
Testing 2D SLAM
769 views · 13 years ago
The 3D point cloud from the Kinect sensor is converted to a fake laser scan and a 2D SLAM algorithm is then applied.
Testing 2D SLAM
1.6K views · 13 years ago
The 3D point cloud from the Kinect sensor is converted to a fake laser scan and a 2D SLAM algorithm is then applied.
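The conversion described in these two entries can be sketched roughly as follows, assuming a Kinect-style depth image and pinhole intrinsics (the fx, cx and band values are placeholders): take the nearest depth per image column within a horizontal band and report it as one laser range, which a 2D SLAM package can then consume.

```python
# Sketch of the "fake laser scan" idea from a depth image.
import numpy as np

def depth_to_fake_scan(depth, fx=525.0, cx=319.5, band=(200, 280)):
    """depth: HxW array of metres; returns (angles, ranges)."""
    strip = depth[band[0]:band[1], :]           # horizontal band around the optical axis
    strip = np.where(strip > 0, strip, np.inf)  # treat zeros (no return) as no reading
    z = strip.min(axis=0)                       # nearest obstacle per column
    cols = np.arange(depth.shape[1])
    angles = np.arctan2(cols - cx, fx)          # bearing of each column
    ranges = z / np.cos(angles)                 # ray length, not just forward depth
    return angles, ranges
```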
GROK2 Manual Jog in Rviz
54 views · 13 years ago
Manually jogging the robot using the joystick. Odometry isn't calibrated, but it's approximately in the right ballpark.
GROK2 viewing its environment
197 views · 13 years ago
The alignment of views is not very good. One of the reasons is that the Kinect sensor is heavier than the previous stereo cameras, reducing the repeatability of movement in the pan axis.
URDF model showing pan/tilt head motion
353 views · 13 years ago
URDF model showing pan/tilt head motion
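For reference, one simple way to animate the pan/tilt joints of a URDF model in Rviz is to publish sensor_msgs/JointState messages and let robot_state_publisher generate the TF frames. The joint names below are assumptions, not the ones from this model.

```python
# Minimal sketch: drive a URDF pan/tilt head in Rviz via joint_states.
import math
import rospy
from sensor_msgs.msg import JointState

rospy.init_node('pan_tilt_demo')
pub = rospy.Publisher('joint_states', JointState, queue_size=10)
rate = rospy.Rate(30)
t = 0.0
while not rospy.is_shutdown():
    msg = JointState()
    msg.header.stamp = rospy.Time.now()
    msg.name = ['head_pan_joint', 'head_tilt_joint']     # assumed joint names
    msg.position = [0.8 * math.sin(t), 0.3 * math.sin(0.5 * t)]  # radians
    pub.publish(msg)
    t += 1.0 / 30.0
    rate.sleep()
```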
Detecting the horizontal surface of a chair and table
46 views · 13 years ago
Detecting the horizontal surface of a chair and table
Detecting horizontally oriented surfaces within a point cloud
103 views · 13 years ago
Detecting horizontally oriented surfaces within a point cloud
Detecting horizontally aligned surfaces from the GROK2 robot
35 views · 14 years ago
Detecting horizontally aligned surfaces from the GROK2 robot
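A common way to implement this kind of detection, sketched below under the assumption of a z-up point cloud: RANSAC plane fitting that keeps only candidate planes whose normal is nearly vertical. The thresholds are illustrative, not the values used on GROK2.

```python
# Sketch: find a horizontal surface (table/chair top) via RANSAC.
import numpy as np

def find_horizontal_plane(points, iters=500, dist_tol=0.02, vert_tol=0.95):
    """points: Nx3 array with z up; returns (normal, d, inlier mask) or None."""
    best = None
    rng = np.random.default_rng(0)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                       # degenerate sample
        n = n / np.linalg.norm(n)
        if abs(n[2]) < vert_tol:
            continue                       # reject planes that aren't horizontal
        d = -n.dot(p0)
        inliers = np.abs(points @ n + d) < dist_tol
        if best is None or inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best
```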
View of a table and chair observed by the GROK2 robot
25 views · 14 years ago
View of a table and chair observed by the GROK2 robot
View of the desktop from the Rodney robot
80 views · 14 years ago
View of the desktop from the Rodney robot
View of the desktop from the Rodney robot
27 views · 14 years ago
View of the desktop from the Rodney robot
Composite point cloud from the GROK2 robot
1.5K views · 14 years ago
Composite point cloud from the GROK2 robot
First composite point cloud model from the GROK2 robot
105 views · 14 years ago
First composite point cloud model from the GROK2 robot
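Conceptually, a composite cloud like this is built by rotating each pan/tilt view into a common body frame and concatenating. A minimal numpy sketch, assuming the camera sits at the pan/tilt origin (offsets and angle conventions are assumptions):

```python
# Sketch: fuse point clouds captured at different pan/tilt angles.
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def composite(views):
    """views: list of (pan, tilt, Nx3 points in the camera frame)."""
    clouds = []
    for pan, tilt, pts in views:
        R = rot_z(pan) @ rot_y(tilt)   # head orientation at capture time
        clouds.append(pts @ R.T)       # rotate points into the body frame
    return np.vstack(clouds)
```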
Comments

We thought this was great when we were children, until we realised the schematics aren't accurate: some circuits don't function or are "dummies". On Cybot, one of the 'burnouts' isn't actually soldered to the right pin-outs 😅
I have the Cybot, which was based on these robots.
Can you share the code or more information?
When I run "v4l2stereo -0 /dev/video1 -1 /dev/video0 --features" it says "v4l2stereo: command not found". Any idea how to resolve this?
Could you tell me how it works, and what type of sensors you used?
Hello Sir, could you please send me a message about where you got the data for this video, because I have the same project.
This made my day... seriously, I have been trying to verify whether this cheap webcam can do this... thanks a bunch!
I keep getting a libhighgui.so.2.1 error saying there is no such file or directory after I run the video1 and video0 command. Any help?
Amazing! I did a project back in university to detect red & green lights. It worked great in the lab, but I didn't get around to trying it on the road. Are you detecting color with a lookup table or math? I used stats and created a square filter; it works best if calibrated based on the surrounding light source. Are you using RGB? I use YUV. I think that if you use one axis as the intensity range of the surrounding light source (i.e. sun and maybe street lights), you can use the other two axes to determine the color.
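A quick sketch of the YUV idea from the comment above: since Y carries the ambient intensity, classifying on the U/V chroma plane alone is more robust to lighting than raw RGB. The threshold box for "red" is a made-up example, not a tuned value.

```python
# Sketch: classify color on the U/V chroma plane (BT.601 conversion).
import numpy as np

def rgb_to_yuv(rgb):
    """rgb: HxWx3 floats in [0,1]; returns YUV per pixel."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return rgb @ m.T

def red_mask(rgb):
    yuv = rgb_to_yuv(rgb)
    u, v = yuv[..., 1], yuv[..., 2]
    return (v > 0.2) & (u < 0.0)   # crude "red" region in the U/V plane
```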
The code for this is available on Google Code. It's only sparse stereo, since the DSPs and the very limited bandwidth between them can't handle much more than that.
How did you embed the program on the robot? I need to embed a program on the Surveyor robot with a single camera. Your help will be greatly appreciated.
On Google Code, under libv4l2cam.
Where do I find the code for this?
Yes. Search for v4l2stereo, since I don't think you can post URLs in YouTube comments.
No. This is simple edge detection using non-maximal suppression, which is then stereo matched.
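A rough sketch of that pipeline, assuming rectified grayscale images: pick edge peaks with non-maximal suppression on the horizontal gradient, then match them along the same row by window comparison. The window size, gradient threshold and disparity range are assumptions, not the values from this project.

```python
# Sketch: sparse stereo from edge features.
import numpy as np

def edge_features(img):
    """img: HxW grayscale floats in [0,1]; returns a boolean map of edge peaks."""
    g = np.abs(np.diff(img, axis=1))        # horizontal gradient magnitude
    g = np.pad(g, ((0, 0), (0, 1)))
    # non-maximal suppression along each row, plus a strength threshold
    return (g > np.roll(g, 1, 1)) & (g >= np.roll(g, -1, 1)) & (g > 0.1)

def match_row(left, right, y, xs, max_disp=64, w=4):
    """Match left-image edge columns xs on row y by SAD over a small window."""
    out = []
    for x in xs:
        if x < w or x >= left.shape[1] - w:
            continue
        patch = left[y, x - w:x + w + 1]
        best, best_d = np.inf, 0
        for d in range(0, min(max_disp, x - w)):
            cost = np.abs(patch - right[y, x - d - w:x - d + w + 1]).sum()
            if cost < best:
                best, best_d = cost, d
        out.append((x, best_d))   # (column, disparity) pairs
    return out
```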
This uses the Minoru stereo webcam.
Is the code available? I would like to do some experiments with my home-built, webcam-based stereo vision system.
Are you using SURF feature matching between left and right images?
What is the model of the webcams?
This is real-time enough for a car. Not a missile, but for slow things it works.
Hehe, just found this video after stumbling across your blog and wiki during a Google search. Imagine my surprise to see that I was ALREADY subscribed to your YouTube channel. Small world, eh?
Motters, please tell me: what is the problem you want to solve in this video?
@motters2001 And with this particular dataset?
Produce a point cloud demonstration in which RGB-D voxels can be seen.
For 320x240 resolution images on a computer several years old, yes, you can do real-time dense stereo.
Can you create the disparity map in real time, i.e. at the video's 10 fps? And with what processing requirements?
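For reference, a minimal dense-stereo sketch using OpenCV's block matcher, which comfortably exceeds 10 fps at 320x240 on modest hardware. The file names are placeholders, and the input images are assumed to be rectified.

```python
# Sketch: dense disparity map with OpenCV's StereoBM.
import cv2

left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)    # placeholder file names
right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = bm.compute(left, right)           # fixed-point disparities (scaled by 16)
disp = disp.astype('float32') / 16.0

vis = cv2.normalize(disp, None, 0, 255, cv2.NORM_MINMAX).astype('uint8')
cv2.imwrite('disparity.png', vis)
```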
I like the sound
Did you do this on an actual robot? Which programming language did you use?
In this case, yes.
They could be, but more likely it's just an artefact of the matching algorithm.
Are the horizontal lines caused by interlacing in the original stereo images? I encountered this when trying to make 3D videos.
There isn't a binary package release of this yet, but there will be soon.
Quick and easy is right! Looks very simple, even for someone like me?
@motters2001 I see from this article (Google "sensors_sharpirrange.shtml") that the IR beam is narrower than I imagined. Funny how hard it is to find an actual specification of the beam width posted anywhere. I found one posting that measured a 0.25cm width at 10cm, which would be about 1.5 degrees. So this would require 60 samples per 90 degree sweep. For a 2 second sweep that would be about 33 ms per sample, which I guess is actually pretty good!
Possibly the scan rate could be faster, but as the scan time decreases you get less range data (looks like wider arcs) and there is more mechanical stress and noise.
@motters2001 Thanks for your reply. BTW, does 2 seconds seem a little slow to you? If I am reading the specs correctly (which is not at all certain), the time between samples can be as short as 5 ms. So even at 50 ms we'd get 20 samples per second. Assuming the beam width is about 15 degrees (can't find this in the specs), that would be 6 samples per 90 degrees which would only take 300 ms. I suppose it depends on the microcontroller you are using and how fast it can query the IR sensors.
The fastest looks like about 2 seconds for a 90 degree sweep. I'm also checking to see how this compares to ranges from stereo vision, since the characteristics of this sensor are unspecified by the manufacturer.
This is really nice! I was wondering what is the fastest you can pan the IR sensors (say, in sweeps per second or degrees per second) while still sampling the distance readings fast enough not to introduce gaps in the map? (Hope that makes sense.)
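The sweep-timing arithmetic from this exchange, written out as a quick back-of-the-envelope calculation (the beam width and sweep time are the figures quoted in the comments, not measured values):

```python
# Samples per sweep implied by the quoted IR beam width.
import math

beam_width_cm, at_range_cm = 0.25, 10.0
beam_deg = math.degrees(math.atan(beam_width_cm / at_range_cm))  # ~1.4 degrees
samples = 90.0 / beam_deg                                        # per 90 degree sweep
ms_per_sample = 2000.0 / samples                                 # for a 2 s sweep
print(f"{beam_deg:.2f} deg beam -> {samples:.0f} samples, {ms_per_sample:.0f} ms each")
```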
Heh: you name your mind children around the time they are conceived.
I was thinking of calling it "babelbrox", after the Tower of Babel and Zaphod Beeblebrox (who had two heads).
I like it! You should call any resulting robot "max headroom" :-)
The main reason for trying this configuration is to simplify the algorithms. With multiple mirrors on a plane it is also possible to get stereo ranges, but the data association problem remains complicated. In this arrangement there is a special epipolar geometry where we only need to compare features along radial lines. This might mean that this approach is suitable for use on low powered systems or DSPs.
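The radial epipolar idea can be sketched as follows: convert feature positions in each mirror image to polar coordinates about the common axis, then match only features at (nearly) the same angle, so the correspondence search becomes one-dimensional along each radial line. The angular tolerance here is an assumption.

```python
# Sketch: 1D feature matching along radial epipolar lines.
import numpy as np

def to_polar(features, centre):
    """features: Nx2 pixel coords; returns (theta, radius) per feature."""
    d = features - centre
    return np.arctan2(d[:, 1], d[:, 0]), np.hypot(d[:, 0], d[:, 1])

def radial_matches(feats_a, feats_b, centre, ang_tol=np.radians(1.0)):
    th_a, r_a = to_polar(feats_a, centre)
    th_b, r_b = to_polar(feats_b, centre)
    pairs = []
    for i, t in enumerate(th_a):
        dth = np.abs(np.angle(np.exp(1j * (th_b - t))))  # wrapped angle difference
        j = int(np.argmin(dth))
        if dth[j] < ang_tol:
            pairs.append((i, j, r_a[i] - r_b[j]))  # radial disparity -> range
    return pairs
```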
I didn't mean moving the bottom reflector - I meant adding 4 ball bearings. I know that the software to make use of the extra information would be the difficult bit...
I see this is called "catadioptric" vision.
Yes, you're right. You can imagine the mirrors not existing and this just being two cameras with fisheye lenses.
Well, you don't focus on the mirror - you focus on what you are looking at. The reflected objects seen in the top mirror are further away - but you *probably* just want to focus on infinity here in any case.
There is, but this is just because the field of view is rectangular and the mirrors are spherical. Moving the lower mirror closer to the camera makes getting a good focus on both more difficult. A solution to the focus problem would be to mount a lens in the hole of the lower mirror, but this increases the cost/complexity.
It seems as though there is some wasted space in the corners. Room for four more reflectors there - perhaps? :-)