53 videos · 52,029 subscribers
Kenji Koide
Joined 28 Oct 2019
Videos
GLIM on Jetson Orin Nano
816 views · several months ago
Jetson Orin Nano (15W). Configuration: OdometryEstimationGPU, SubMapping (GPU), GlobalMapping (GPU). github.com/koide3/glim
Bundle Adjustment Factor [gtsam_points]
481 views · 2 months ago
(Coming soon) github.com/koide3/gtsam_points k_koide3
Continuous Time ICP Factor [gtsam_points]
329 views · 2 months ago
(Coming soon) github.com/koide3/gtsam_points k_koide3
SE3 BSpline Interpolation [gtsam_points]
173 views · 2 months ago
(Coming soon) github.com/koide3/gtsam_points k_koide3
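For context on what this clip shows: trajectory interpolation of this kind is commonly formulated as a cumulative cubic B-spline on SE(3), where consecutive control poses are blended through the exponential map. The sketch below is a generic numpy/scipy illustration of that formulation, not the gtsam_points implementation; the function names are invented for this example.

```python
import numpy as np
from scipy.linalg import expm, logm

def cumulative_basis(u):
    # Cumulative cubic B-spline basis functions for u in [0, 1)
    # (standard blending coefficients used in spline-fusion-style papers).
    return np.array([
        1.0,
        (5.0 + 3.0 * u - 3.0 * u**2 + u**3) / 6.0,
        (1.0 + 3.0 * u + 3.0 * u**2 - 2.0 * u**3) / 6.0,
        u**3 / 6.0,
    ])

def spline_pose(poses, u):
    """Interpolate between four consecutive 4x4 control poses at u in [0, 1)."""
    B = cumulative_basis(u)
    T = poses[0].copy()
    for j in range(1, 4):
        # Relative twist between consecutive control poses, scaled by the basis.
        delta = logm(np.linalg.inv(poses[j - 1]) @ poses[j])
        T = T @ expm(B[j] * np.real(delta))
    return T
```

Note that a cumulative B-spline smooths over the control poses rather than passing through them, which is what makes it attractive for continuous-time trajectory estimation.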
Incremental VoxelMap Update and Normal Estimation [gtsam_points]
443 views · 2 months ago
(Coming soon) github.com/koide3/gtsam_points k_koide3
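As background on the technique named in the title: an incremental voxel map typically hashes incoming points into a sparse grid and estimates a per-voxel normal by PCA of the accumulated points. The toy class below illustrates that idea only; its name and API are hypothetical and unrelated to the actual gtsam_points code.

```python
import numpy as np
from collections import defaultdict

class ToyVoxelMap:
    """Sparse hash-grid voxel map with incremental insertion and
    PCA-based normal estimation (illustrative sketch only)."""

    def __init__(self, resolution=0.5):
        self.resolution = resolution
        self.voxels = defaultdict(list)  # voxel index -> list of points

    def insert(self, points):
        # Incremental update: append each point to its voxel's bucket.
        keys = np.floor(np.asarray(points) / self.resolution).astype(int)
        for key, pt in zip(map(tuple, keys), points):
            self.voxels[key].append(pt)

    def normal(self, key):
        # Normal = eigenvector of the smallest eigenvalue of the
        # covariance of the points accumulated in this voxel.
        pts = np.asarray(self.voxels[key])
        if len(pts) < 3:
            return None
        cov = np.cov(pts.T)
        eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
        return eigvecs[:, 0]
```

Real implementations keep running sums per voxel instead of point lists, so covariances update in O(1) per point; the list-based version above just keeps the sketch short.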
Colored ICP Factor [gtsam_points]
209 views · 2 months ago
(Coming soon) github.com/koide3/gtsam_points k_koide3
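For intuition on why color helps ICP: a very simplified variant augments each point with scaled color channels so that nearest-neighbor association considers both geometry and appearance. The real colored ICP objective (Park et al., as implemented in Open3D) also uses color gradients on the local tangent plane; the sketch below shows only the joint-space association idea, and the function is hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def colored_correspondences(source_xyz, source_rgb,
                            target_xyz, target_rgb, color_weight=0.1):
    """Nearest-neighbor association in a joint position+color space:
    distance^2 = |p - q|^2 + w^2 * |c_p - c_q|^2."""
    target_aug = np.hstack([target_xyz, color_weight * target_rgb])
    source_aug = np.hstack([source_xyz, color_weight * source_rgb])
    tree = cKDTree(target_aug)
    dists, idx = tree.query(source_aug)
    return idx, dists
```

The benefit appears when several target points are geometrically equidistant: the color term breaks the tie toward the photometrically consistent match.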
[GLIM] Visual-LiDAR-IMU SLAM on a Drone (NTU-VIRAL)
1.5K views · 2 months ago
(Coming soon) github.com/koide3/glim k_koide3
[GLIM] Flatwall experiment with Livox Avia (Complete point cloud degeneration)
469 views · 2 months ago
(Coming soon) github.com/koide3/glim k_koide3
[GLIM] Mapping with Azure Kinect
834 views · 2 months ago
(Coming soon) github.com/koide3/glim k_koide3
[GLIM] Mapping with various range sensors (Same parameter setting for all)
1.7K views · 2 months ago
The same parameters are used for all the sensors. (Coming soon) github.com/koide3/glim k_koide3
[GLIM] Outdoor driving test with Livox MID360 (Processing speed: x14 of real-time)
1.2K views · 2 months ago
[GLIM] Map correction with offline_viewer
640 views · 2 months ago
MegaParticles: 6-DoF Monte Carlo Localization (Closeup View)
634 views · 4 months ago
[ICRA2024] MegaParticles : 6-DoF Monte Carlo Localization with One Million Particles
1.7K views · 4 months ago
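As a reminder of what a Monte Carlo localization filter iterates: predict particles with a noisy motion model, weight them by observation likelihood, and resample in proportion to weight. The toy 2D range-only filter below illustrates that loop; it says nothing about the GPU particle handling that lets MegaParticles scale to a million 6-DoF particles, and all names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, control, observation, landmark,
             motion_noise=0.05, obs_noise=0.1):
    """One predict-weight-resample cycle of a toy 2D particle filter
    with a single range observation to a known landmark."""
    # Predict: apply the control input with additive Gaussian motion noise.
    particles = particles + control + rng.normal(0.0, motion_noise, particles.shape)
    # Weight: Gaussian likelihood of the observed range to the landmark.
    ranges = np.linalg.norm(particles - landmark, axis=1)
    weights = np.exp(-0.5 * ((ranges - observation) / obs_noise) ** 2)
    weights = weights / weights.sum()
    # Resample: draw particles with probability proportional to weight.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

With one range-only landmark the posterior is a circle, so the particles collapse onto an annulus rather than a point; full 6-DoF localization replaces the scalar likelihood with a scan-to-map matching score per particle.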
Scan matching speed comparison (small_gicp vs Open3D)
645 views · 4 months ago
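For readers wondering what such a benchmark measures: the inner loop of scan matching is nearest-neighbor association plus a pose update, here shown as textbook point-to-point ICP with a closed-form Kabsch/SVD step. This is only a reference sketch of the workload; small_gicp and Open3D implement GICP/point-to-plane variants with far more optimized data structures and parallelization, and this code is neither library's.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    """Minimal point-to-point ICP returning (R, t) such that
    source @ R.T + t approximately aligns with target."""
    tree = cKDTree(target)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        moved = source @ R.T + t
        _, idx = tree.query(moved)            # nearest-neighbor correspondences
        corr = target[idx]
        mu_s, mu_c = moved.mean(0), corr.mean(0)
        H = (moved - mu_s).T @ (corr - mu_c)  # cross-covariance of centered pairs
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T                   # Kabsch rotation (reflection-safe)
        R = dR @ R                            # compose incremental update
        t = dR @ t + mu_c - dR @ mu_s
    return R, t
```

The KD-tree query dominates the runtime, which is why libraries in this space compete mostly on neighbor-search and parallelization strategies.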
GLIM robustness test (dynamic objects and motion)
313 views · 5 months ago
Quadruped robot (MPC-based planning) [Student Project]
237 views · 6 months ago
Quadruped robot (Gesture-input) [Student Project]
89 views · 6 months ago
Quadruped robot (Gesture-input & MPC-based planning) [Student Project]
304 views · 6 months ago
MPC-based Path Planning [Student project]
317 views · 6 months ago
Impressive work! But the result is with an A100 GPU, which is not very portable. I'm wondering, have you ever run it on a less powerful GPU?
Great video :) Is this localization module based on hdl_global_localization or 3D-BBS?
The positioning is very smooth! Looking forward to code updates for this part soon.
It's great! I've been following your work; thanks for open-sourcing it! I want to test it as in this video, but I can't find the corresponding demo bag or the MID360 config file. Will you update them later?
Kenji is GOAT!
Nice job!
Yes you feel it.
You can feel it too.
Does GLIM contain a localization function? If not, is there any schedule for sharing the localization part? Thanks in advance.
Can it run on OrangePi5?
I think so. The configuration shown in the video description uses only ~40% of the CPU resources of a Khadas VIM3.
Thank Kenji
where is dataset?
Super impressive. Great work! Thanks for open sourcing it!
Awesome! Great work!
good job
OMG it's released!
😍
Turning the body with the crank is convenient, I agree, but cranking it to go forward is a mockery! It clearly should be used in combination with a regular joystick.
I fully agree with you, but it's fun! I decided to create this after playing "Playdate", a handheld game console with a mechanical crank.
❤
❤
Always a big fan of your work. Is this in preparation for a new publication?
Thanks! Yes, it corresponds to our new paper that is going to come out in a few weeks.
Kenji, is this rviz2 on the right side? Your visualization is so cool
Thanks. The left is the usual rviz2 and the right is our original viewer (github.com/koide3/iridescence).
this is sick
Super excited for the release :)
Great work. I'm guessing you are fusing the results of IMU's estimate of position and SFM data from camera. How much drift are you getting as of now?
It fuses all Visual-LiDAR-IMU constraints on a unified factor graph. Because this was an easy setup for the LiDAR, there was almost no drift at all.
@@kenjikoide6076 If the LiDAR is also participating in motion estimation, it would surely be very accurate. I would bet that if you used only the LiDAR data you would get nearly the same estimated motion. I was thinking the LiDAR was used as ground truth.
@@shivavarunadicherla Yes, visual constraints brought only a minor accuracy gain on this dataset. This is just a demonstration. What we are truly aiming for is overcoming situations where point clouds become completely degenerate (e.g., tunnels), and we've confirmed that visual constraints greatly improve reliability in such situations.
There is another new dataset called MCD. It offers a higher challenge than this one.
Thanks, I'll definitely try MCD. NTU-VIRAL was easy and didn't bring much insight.
@@kenjikoide6076 The original NTU-VIRAL sequences are easy. The new additional SPMS sequences are much tougher. So far only a select few LIO systems can run them.
Lol, I flew these sequences.
Trying TRO again?
No, it's accepted to another one :)
GitHub 404 😢
Sorry, we are finalizing the repository, it'll come out in a few weeks.
🥰
Very impressive!
I appreciate your work very much. Could you open-source it or contact me? I have been studying related topics recently.
Super strong!
Nice :) Which SLAM did you use for that? FAST-LIO or something else?
That was wonderful work. Congrats, my master 🙏
Nice Work! Looking forward to your paper being published!
Impressive. Assume this is with a depth camera. Which one?
It's an azure kinect.
and Livox MID360 was used in the outdoor experiment.
Unbelievable!😍
This is soooo cool🎉
I really appreciate your contributions to the open-source community😍😍
Nice! For map-based navigation, whenever I tried to match the point cloud to the map, I got a wrong pose estimate due to wrong correspondences, since there were too many match candidates. So I built the map as pairs of (pose, LiDAR scan) and matched against the LiDAR scan at that pose, which made the "map" file too big. Great work! I should read your paper to re-implement the navigation module.
a $750+ football
amazing work
Wow that was cool!
Will the code be released after this paper is published?
The global localization part is available at: github.com/koide3/hdl_global_localization
@@kenjikoide6076 thanks, it's cool
there are too many monsters...
Awesome!!!! Is a code release not planned?
The global localization part is available at: github.com/koide3/hdl_global_localization
Kenji, I guess you fall in fishing these days lol 🎣
C ya in yokohama