Spectacular AI
Aerial navigation without GPS
This demonstration used a MEMS IMU (costing below $5), mounted inside the cockpit to reduce vibration noise from the engine, and a consumer-grade barometer. Performance in this test was mostly limited by insufficient noise isolation for the camera. The SDK can also utilize magnetometer data, but that was not required here.
The software in this test consists of two main components:
1. Visual-Inertial Odometry (VIO), which is already available in our commercial SDK
2. A Visual Positioning System (VPS) component, which uses the aerial-imagery-based reference map
This component is not part of our standard SDK. If you are interested in early access to the VIO+VPS solution, send us a message at www.spectacularai.com/
#contact #sensorfusion #computervision #vio #navigation
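The interplay of the two components above can be illustrated with a toy example: VIO provides smooth but slowly drifting relative motion, while sparse absolute fixes from the VPS reference map pull the estimate back toward the truth. The sketch below is purely illustrative (the function name and its gain parameter are hypothetical), not the SDK's actual fusion algorithm:

```python
import numpy as np

def fuse_vio_vps(vio_positions, vps_fixes, alpha=0.1):
    """Blend drifting relative VIO positions with sparse absolute VPS fixes.

    vio_positions: (N, 2) array of VIO position estimates (may drift).
    vps_fixes: dict mapping frame index -> absolute (x, y) position fix.
    alpha: correction gain in (0, 1]; 1 snaps straight to the VPS fix.
    """
    corrected = np.array(vio_positions, dtype=float)
    offset = np.zeros(2)  # running estimate of accumulated VIO drift
    for i in range(len(corrected)):
        corrected[i] += offset
        if i in vps_fixes:
            # Pull the estimate part of the way toward the absolute fix
            # and remember the correction for all subsequent frames.
            error = np.asarray(vps_fixes[i], dtype=float) - corrected[i]
            offset += alpha * error
            corrected[i] += alpha * error
    return corrected
```

With occasional fixes, the corrected trajectory's error stays bounded instead of growing with distance traveled, which is the key property GPS-free navigation needs.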
Views: 326

Videos

GPS-free drone navigation with less than 1% drift!
1.3K views · 3 months ago
Spectacular AI SDK VIO tracking with two different camera devices: 1) Spectacular AI reference HW design (v1) with two global shutter stereo cameras. 2) RTK-VINS prototype device with monocular fisheye rolling shutter camera setup. Both cases also fuse barometric altitude data with VIO. GPS is only shown for reference and no loop closures are used. The total accumulated drift is below 0.6% in b...
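Drift figures like the "below 0.6%" above are commonly reported as the final position error divided by the total distance traveled. A minimal sketch of that metric (the exact benchmark definition used in the video may differ, e.g. averaging errors over sub-segments):

```python
import numpy as np

def drift_percent(estimate, reference):
    """Accumulated drift as a percentage of distance traveled.

    estimate, reference: (N, 3) arrays of time-aligned positions.
    Drift = final-position error / total reference path length.
    """
    estimate = np.asarray(estimate, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # Path length: sum of distances between consecutive reference points.
    path_length = np.sum(np.linalg.norm(np.diff(reference, axis=0), axis=1))
    final_error = np.linalg.norm(estimate[-1] - reference[-1])
    return 100.0 * final_error / path_length
```

For example, ending 0.5 m away from the reference after a 100 m trajectory gives 0.5% drift.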
Gaussian Splatting on the Move - v2
716 views · 4 months ago
Our method enables crisp Gaussian Splatting reconstructions from blurry and wobbly smartphone captures. Motion blur and rolling shutter compensation for 3DGS using VIO IMU data, pose refinement, and a differentiable image formation model. Video for the paper: arxiv.org/abs/2403.13327 Code: github.com/SpectacularAI/3dgs-deblur Project page: spectacularai.github.io/3dgs-deblur/ #gaussiansplatting...
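A core ingredient of rolling shutter compensation is that each image row has its own capture time, so each row must be associated with its own camera pose. A minimal sketch of that timing model (illustrative only; see the linked paper for the actual differentiable image formation model):

```python
def row_timestamp(frame_start, row, readout_time, height):
    """Capture time of a given image row under a rolling shutter.

    frame_start: timestamp of the first (top) row, in seconds.
    row: zero-based row index.
    readout_time: time to read the full sensor, top to bottom, in seconds.
    height: number of rows in the image.
    """
    return frame_start + readout_time * row / (height - 1)
```

Interpolating the VIO pose trajectory at each row's timestamp, rather than one pose per frame, is what removes the characteristic rolling-shutter "wobble" from the reconstruction.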
GPS-free drone navigation with VIO
1.7K views · 5 months ago
This video demonstrates GPS-free navigation using the Spectacular AI SDK and a standalone reference hardware design. The payload on the drone is an example of a high-accuracy VIO system built using low-cost, mostly off-the-shelf, components. It can track the position and orientation of the drone in real-time using only consumer-grade cameras and IMU, running on embedded hardware, in this case, ...
Spectacular AI on Orbbec cameras
2.8K views · 8 months ago
Spectacular AI SDK now supports Orbbec Astra 2 and Femto Mega (the official drop-in replacement for Azure Kinect DK) out-of-the-box without any extra configuration! The SDK is available FREE for non-commercial use, you can download it from github.com/SpectacularAI/sdk Link to documentation (Orbbec wrapper): spectacularai.github.io/docs/sdk/wrappers/orbbec.html #gaussiansplatting #slam #computer...
Gaussian Splatting reconstructions with Android & iPhone
2.8K views · 9 months ago
Android app: play.google.com/store/apps/details?id=com.spectacularai.rec iOS app: apps.apple.com/us/app/spectacular-rec/id6473188128 Step by step instructions: github.com/SpectacularAI/sdk-examples/tree/main/python/mapping Example gallery: www.spectacularai.com/mapping#gallery #computervision #slam #nerf #gaussiansplatting
Spectacular Rec for Gaussian Splatting and NeRF reconstructions (iPhone)
3.8K views · 9 months ago
Here are all the links you need to get started with your reconstructions! Spectacular Rec for iPhones: apps.apple.com/us/app/spectacular-rec/id6473188128 Spectacular AI SDK mapping scripts: github.com/SpectacularAI/sdk-examples/tree/main/python/mapping Nerfstudio: docs.nerf.studio/ #computervision #slam #nerf #gaussiansplatting
Gaussian Splatting and NeRFs on OAK-D (+ RealSense & iPhone)
2.6K views · 10 months ago
The latest Spectacular AI Mapping API version allows training Gaussian Splatting and NeRF reconstructions from data recorded on any device supported by the Spectacular AI SDK. The mapping and post-processing stages are fast, robust and totally COLMAP-free! The final training phase is powered by Nerfstudio. See github.com/SpectacularAI/sdk-examples/tree/main/python/mapping for instructions. Comi...
Spectacular AI with RS-LiDAR-M1 from RoboSense
632 views · 10 months ago
Real-time tracking and 3D reconstruction of the surrounding environment using the Spectacular AI SDK with data from a stereo camera, an IMU and a solid-state lidar (RS-LiDAR-M1 from RoboSense). An RGB camera is used for visualization. No GPS required. #lidar #computervision #slam
Gaussian Splatting and NeRFs with Spectacular AI Mapping API
5K views · 10 months ago
Demonstration of Gaussian Splatting and various Neural Radiance Fields trained on data processed using the Spectacular AI Mapping API (no COLMAP). All data in this video was recorded with Azure Kinect, registered using the Spectacular AI SDK (Mapping API), and finally trained using either Gaussian Splatting (Taichi implementation github.com/wanmeihuali/taichi_3d_gaussian_splatting), Nerfacto (Nerfs...
Real-time VISLAM on board a drone with Raspberry Pi
828 views · 1 year ago
Spectacular AI is collaborating with Tampere University to record a new benchmark dataset for low-cost embedded VISLAM. See tutvision.github.io/TampereDroneDataset/ for more information. Two different devices were used for data recording: Device 1 is built exclusively from off-the-shelf parts (Raspberry Pi 4 OAK-D Pro W). Device 2 is built from lower-level components: R-Pi 4 monocular InnoMaker ...
Simulated visual data along EuroC trajectory
251 views · 1 year ago
Simulated visual data along the EuroC "v2-03-difficult" trajectory using different camera models. This data "matches" the actual IMU data in the original dataset and can be used to study the effects of different visual properties of the scene and camera on Visual-Inertial SLAM.
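At the heart of such a simulator is projecting known 3D scene points through a chosen camera model at each trajectory pose. A minimal pinhole sketch (the actual simulator presumably also supports other models, e.g. fisheye, which is the point of the comparison):

```python
import numpy as np

def project_pinhole(points_cam, fx, fy, cx, cy):
    """Project 3D points in camera coordinates to pixel coordinates.

    points_cam: (N, 3) points with z > 0 pointing into the scene.
    fx, fy: focal lengths in pixels; cx, cy: principal point.
    Returns an (N, 2) array of (u, v) pixel coordinates.
    """
    p = np.asarray(points_cam, dtype=float)
    z = p[:, 2]
    u = fx * p[:, 0] / z + cx
    v = fy * p[:, 1] / z + cy
    return np.stack([u, v], axis=1)
```

Rendering the same scene through different camera models along the same IMU-consistent trajectory is what lets one isolate the effect of camera properties on VI-SLAM accuracy.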
MPU6050 vs. SCHA634 - Testing the effect of IMU quality on VISLAM accuracy
3.7K views · 1 year ago
We built a test device with two different IMUs to study how the quality of the IMU affects the accuracy of inside-out tracking (VISLAM) in the Spectacular AI SDK. The device has two global-shutter camera sensors (OV9281) with wide-angle lenses (ArduCam B0223), and two inertial-measurement units (IMUs): * TDK/InvenSense MPU6050, a popular and inexpensive MEMS IMU * Murata SCHA634-D03, a high-qua...
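A first-order reason IMU quality matters for VISLAM: uncompensated gyroscope bias integrates into heading error, which in turn bends the estimated trajectory. A back-of-the-envelope sketch (the bias magnitudes are illustrative orders of magnitude, not datasheet values for the MPU6050 or SCHA634):

```python
import numpy as np

def heading_error_deg(gyro_bias_dps, seconds):
    """Orientation error from integrating a constant gyro bias once."""
    return gyro_bias_dps * seconds

def crosstrack_error_m(gyro_bias_dps, seconds, speed_mps):
    """Approximate lateral position error when moving straight while the
    estimated heading drifts linearly due to gyro bias (small-angle
    approximation: integral of speed * sin(bias * t) ~ speed * bias * t^2 / 2).
    """
    bias_rad = np.deg2rad(gyro_bias_dps)
    return speed_mps * bias_rad * seconds ** 2 / 2.0

# Illustrative comparison: a cheap consumer IMU vs. a high-grade part.
for bias in (0.05, 0.001):  # deg/s, hypothetical uncompensated bias
    print(bias, "deg/s bias ->", crosstrack_error_m(bias, 60.0, 1.0), "m off after 1 min at walking speed")
```

In practice VISLAM continuously re-estimates the bias using visual constraints, so the real gap is smaller than pure dead reckoning suggests, but a lower-noise IMU still helps whenever the camera is briefly unreliable.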
Spectacular AI SDK + NeRF
1.7K views · 2 years ago
Spectacular AI SDK mapping API outputs can be fed into various NeRF frameworks. Here is a demonstration with NVIDIA Instant NeRF, which produces impressive 3D reconstructions in seconds. These examples do not require a separate (slow and fragile) COLMAP step; the mapping API outputs can be used directly as inputs for the NeRF. www.spectacularai.com/ #deeplearning #computervision #realsense #kinect
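Frameworks like Instant NeRF consume per-frame camera poses in a transforms.json file, which is what makes a COLMAP-free pipeline possible when the poses already come from VIO/SLAM. A hedged sketch of writing such a file from known poses (field names follow the common Instant-NGP convention; the SDK's actual exporter may differ in details):

```python
import json
import math

def write_transforms(path, image_poses, width, height, focal_px):
    """Write an Instant-NGP-style transforms.json.

    image_poses: list of (image_filename, 4x4 camera-to-world matrix as
    nested lists). Assumes a simple pinhole model with identical focal
    length in x and y and the principal point at the image center.
    """
    data = {
        "camera_angle_x": 2.0 * math.atan(width / (2.0 * focal_px)),
        "w": width,
        "h": height,
        "fl_x": focal_px,
        "fl_y": focal_px,
        "cx": width / 2.0,
        "cy": height / 2.0,
        "frames": [
            {"file_path": name, "transform_matrix": matrix}
            for name, matrix in image_poses
        ],
    }
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
```

Because the poses are already metrically accurate from visual-inertial tracking, the slow structure-from-motion step that COLMAP performs can be skipped entirely.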
New SLAM postprocessing for large-scale visual mapping
1.2K views · 2 years ago
Demonstrating Spectacular AI SDK 1.4 and the new SLAM postprocessing for large-scale visual mapping www.spectacularai.com
Large-scale real-time mapping example using Azure Kinect
4.2K views · 2 years ago
Real-time mapping with OAK-D and RealSense
9K views · 2 years ago
Fast texturized reconstruction demo
1.4K views · 2 years ago
RGBD-Inertial mapping
1.8K views · 2 years ago
SLAM and relocalization tests on OAK-D & RealSense
8K views · 2 years ago
HybVIO: Pushing the limits of real-time visual-inertial odometry
6K views · 2 years ago
Spectacular AI SDK AR demo
1.3K views · 2 years ago
City-scale GNSS-VIO Augmented Reality
1.8K views · 2 years ago
Spatial AI demonstration
869 views · 2 years ago
Low-power real-time VIO (with RealSense comparison)
4.9K views · 3 years ago
Comparison to ARCore, ARKit and RealSense
3.7K views · 3 years ago
GPS-aided visual-inertial odometry on the OAK-D
4K views · 3 years ago
GNSS-VIO in bad weather - high-velocity tunnel
330 views · 3 years ago
GNSS-VIO in a tunnel (using ZED 2)
1.2K views · 3 years ago
Spatial AI in a WW1 fortification
670 views · 3 years ago

Comments

  • @brdane · 3 days ago

    This is the type of accuracy test I have been looking for for these low-cost gyroscope/accelerometer ICs. This is the only test someone really needs to see before deciding on a purchase.

  • @sumitsarkar4517 · 5 days ago

    How do you correct the error relative to GPS at 0:40, and especially at 1:37?

  • @WenhangDONG · 1 month ago

    @Spectacular AI thank you for your contribution! I followed your steps, but after running "sai-cli process MY_INPUT_PATH MY_OUTPUT_PATH" I got the error "mapping failed: no output generated". I tried three different videos from Spectacular Rec; my iPhone is a 15 Pro. I don't know what happened, could you please tell me?

  • @filmproducerpro · 1 month ago

    Hi! Please tell me, can the program work with the "Creality Scan Ferret Pro" 3D scanner to scan a room?

  • @bearyzhang · 3 months ago

    It is very similar to RTAB-Map.

  • @VorpalForceField · 3 months ago

    Impressive ...!!! Thank You for sharing .. Cheers :)

  • @pleabargain · 3 months ago

    Okay. It's cool but put your jargon into layman's terms. What is the significance of this flight path? You told the drone to fly around and not crash?

    • @김도녕-m3b · 1 month ago

      It's drone localization using vision; they show a comparison between red (GPS, ground truth) and blue (visual-inertial odometry prediction) on real data.

  • @TheBarthew · 3 months ago

    Love it! Did you test it on a high altitude mission?

  • @khangletruong8196 · 5 months ago

    Very interesting! I wonder if there would be any chance to test/evaluate this with your SDK on our drone?

    • @SpectacularAI · 5 months ago

      Generally yes, as a commercial NRE/pilot project. Contact us at www.spectacularai.com/#contact for more details. Please include a high-level description of your hardware and use case.

  • @sudarshanpoudyal5089 · 5 months ago

    Will it be supported in the open-source SDK?

  • @malaysiastreetview · 5 months ago

    👍

  • @gaussiansplatsss · 5 months ago

    What iPhone did you use here?

  • @王彪钓 · 6 months ago

    Astra 2. It has the best 3D restoration capability, especially at corners.

    • @alessandro_valli · 6 months ago

      Can you elaborate? Have you tried both?

  • @DongPham-kj9rb · 6 months ago

    Have you achieved colors for OAK cameras?!

  • @DongPham-kj9rb · 6 months ago

    Thank you

  • @gusstplt · 6 months ago

    I tried to use Spectacular AI, but after recording data from an Android smartphone, when I launch "sai-process" I get, on two different computers (one on Windows (a laptop), one on Linux (a GPU server)): SpectacularAI ERROR: x:441 Abandon (core dumped). I don't know what to do.

  • @uzaiftalpur · 7 months ago

    @SpectacularAI how can I process without the OAK-D? Is there any other way to use such devices?

  • @Mark001986 · 7 months ago

    Very nice! Is the lidar used for SLAM tracking and bundle adjustment?

  • @GroFilms · 7 months ago

    LOVE this presentation - awesome!!

  • @alessandro_valli · 7 months ago

    Which one would you recommend for indoor use, Astra 2 or Femto? I am interested in having the highest resolution possible. Thank you!

    • @SpectacularAI · 7 months ago

      Femto. It has the best depth sensor for indoor use.

    • @michelesacco1638 · 6 months ago

      @@SpectacularAI Yes, but it can't see black surfaces.

    • @alessandro_valli · 6 months ago

      @@michelesacco1638 really?

  • @전종택-v3b · 7 months ago

    I thought these were the same, but a little different. If someone just wants 6-axis motion, it doesn't matter, does it? For VR it doesn't matter either. But if someone improves things further, all the better. This video is interesting ;)

  • @kamalakrishnan9427 · 7 months ago

    The link provided just directs us to this video, which tells us to install the requirements.txt. There is no requirements.txt in the GitHub repo. Help! @SpectacularAI

  • @Deadnature · 8 months ago

    Can this be done on a Mac Studio M2?

  • @engfernandolsf · 8 months ago

    How much is this equipment? Is it a budget solution for scanning a house or a small company into a 3D model?

  • @OlgaLight13 · 8 months ago

    I can't seem to find the requirements.txt file, and I did "pip install spectacularAI" and it doesn't work. Please help!

    • @SpectacularAI · 8 months ago

      The most up-to-date instructions can be found here spectacularai.github.io/docs/sdk/tools/nerf.html (you should "pip install spectacularAI[full]")

    • @kamalakrishnan9427 · 7 months ago

      No, the link provided just directs us to this video, which tells us to install the requirements.txt. There is no requirements.txt in the GitHub repo. Help! @@SpectacularAI

    • @SpectacularAI · 7 months ago

      You're correct that the page links to this video, which has some outdated info. You can skip that part of the docs (no need to follow the full video at that point) and follow the rest of the instructions on the page. The requirements.txt has been removed. The new command you should run is "pip install spectacularAI[full]" (notice the "[full]"). You may have to uninstall (pip uninstall spectacularAI) first if you have an older version.

  • @naurk · 9 months ago

    How much VRAM is required?

    • @SpectacularAI · 9 months ago

      The 3DGS models in this video were trained on an NVIDIA GeForce RTX 3080 Ti, which has 12 GB of VRAM.

  • @wrillywonka1320 · 9 months ago

    But we can't use Gaussian splats in any video editing software. They are awesome but pretty much useless.

    • @SpectacularAI · 9 months ago

      The splats (or efficiently training them) are a new technology that did not exist a few months ago. The amount of software that supports them in some form will probably increase quite a lot in 2024, but you are right that support is currently rather limited.

    • @cekuhnen · 6 months ago

      They are pretty useful because you can relight them and don’t need meshes and textures to render.

    • @wrillywonka1320 · 6 months ago

      @@cekuhnen that's true. I mean, honestly, I love how detailed they are compared to meshes and how easy they are on computing power, but for someone who uses DaVinci and Blender there is really no way to utilize this great tech.

    • @cekuhnen · 6 months ago

      @@wrillywonka1320 yeah it has limited use only

  • @zyang056 · 9 months ago

    Is the iOS app source available for customization?

  • @SpectacularAI · 9 months ago

    iOS app and tutorial video released! th-cam.com/video/d77u-E96VVw/w-d-xo.html

  • @prarthanahegde9819 · 10 months ago

    Hi, can we implement real-time mapping just like shown in the video with just the RealSense D455 camera? I don't have the OAK-D.

  • @JReinhoud · 10 months ago

    Please share the settings of the Kinect and RTAB-Map. And did you use standalone RTAB-Map, or ROS with RTAB-Map?

  • @JINLAI-c5r · 10 months ago

    I tried, but it shows "SpectacularAI WARN: VIO may be running too slow, data is being input too fast, or IMU samples are missing / time-offset from frames. (buffer size 10)". How can I solve this?

  • @jakesnake5534 · 10 months ago

    Nice

  • @tbuk8350 · 11 months ago

    Dang, ARCore is rock solid the whole time lol

  • @bernat4289 · 1 year ago

    Very impressive. What kind of hardware configuration is used to integrate RTK data?

  • @yiboliang8338 · 1 year ago

    Cool. Now I am just going to buy a $40 second-hand, screen-cracked Sharp R5G with a ToF sensor as the depth camera and computing board for my project to run ARCore.

  • @metalthower · 1 year ago

    I would like to do this with a trail mapping bike - outdoors. Is this possible with an OAK-D-Pro?

  • @seble_pikachu3732 · 1 year ago

    It's really interesting to compare the results you can get in VISLAM with a 2-euro IMU versus a 150-euro IMU. Depending on your budget and desired performance, you can easily choose one IMU or the other thanks to this comparison.

  • @kimsanmaro · 1 year ago

    Can I get more information about GPS + ZED fusion? Or any GitHub repo, anything about fusion? Please help, I want more detail...

  • @liangzijian4452 · 1 year ago

    Great work! 👍 I've been following Spectacular AI's SDK for a long time and have tried to apply the OAK examples to my drones, but the results drift badly. The camera I'm using is also the OAK-D Pro W. Do I need IMU calibration, or some other step, to achieve the results in the video?

    • @SpectacularAI · 1 year ago

      This has been improved significantly in the latest SDK versions. Depending on your hardware and, e.g., the level of vibration noise in the drone, the performance may also benefit from use-case-specific parameter tuning, which is available as a commercial service.

  • @seble_pikachu3732 · 1 year ago

    Was the Raspberry Pi Pico MCU just there to interface with the camera, or was there an algorithm running on it? Thanks and congratulations on the results!

    • @SpectacularAI · 1 year ago

      The MCU currently reads the IMU and triggers the camera, which also causes the IMU and camera timestamps to be accurately synchronized in the same monotonic clock. It is not strictly necessary to use one but it is a technically convenient choice.

  • @IzotopShurup · 1 year ago

    Did you add support for armhf?

  • @imignap · 1 year ago

    Very cool! Fusing the optical data with the IMU to dead reckon. Wish that Murata was cheaper... Just wondering, why no 3D magnetometer?

  • @matspg · 1 year ago

    Thanks for posting this - I have a question: in the first test, the light grey is ground truth. The red and blue tracks - are those from the IMU *alone*, or are they from SLAM with the corresponding IMU? (If they're from the IMU alone, that's incredible to me... way better than I'd expect.) Thanks!

    • @SpectacularAI · 1 year ago

      They are computed using visual-inertial SLAM, where the image data is the same for both tracks but the IMU is different

  • @sudarshanpoudyal5089 · 1 year ago

    Can you make the hardware setup open source? I am unable to find a low-cost visual-inertial sensor setup at the moment.

  • @kljnjon · 1 year ago

    Very impressive stuff.

  • @horserenoir3553 · 1 year ago

    Could you tell us what computer set-up you used for the data acquisition while walking around? I'd love to do the same here in my town but haven't come across a mobile computer that could handle the data in real-time (+ a small monitor to observe the whole process).

  • @jimxu1825 · 1 year ago

    Cool, will that be colored?