3D Scanning of a Cottage with a Phone

  • Published Nov 8, 2024

Comments • 127

  • @tokyowarfare6729 · 6 years ago · +107

    5 laser scanner salesmen did not like the video :D

  • @mute3189 · 4 years ago · +7

    A reassuring example; close to the quality I got scanning my basement floor with an iPhone X + Meshroom. You had a more complex area than mine, with more input data. Very interesting, thank you for sharing. Any tips you (or anyone reading) have found to improve room-scan quality (both with and without a better camera) are appreciated. Incredibly exciting stuff!

  • @yawnyawning · 6 years ago · +9

    It really looks incredible... can't imagine Lenovo has this kind of technology...

  • @aussieraver7182 · 4 years ago · +4

    Maaaaad!
    Now add VR with PBR + HDRP + post-processing in Unity and you've got yourself digital realism.

  • @keyserswift5077 · 4 years ago · +12

    When I tried this, it ended up looking like a big ball, lol.

    • @etherialwell6959 · 3 years ago · +3

      Make sure you don't live in a completely spherical home! Results may differ based on what your apartment looks like
      ;P

  • @davidmartin1628 · 5 years ago · +3

    It would be great to see some interpolation between adjacent vertices where data is missing, to 'fill in' the gaps and make a full 3D rendering without faces of objects missing when viewed from one side.

    • @matlabbe · 5 years ago · +3

      In the first part of the video, this is the raw mesh that can be created from the data coming from the depth sensor. Holes are created because: 1) we didn't scan the area (e.g., we didn't walk around an object to see behind it), 2) the depth sensor cannot "see" the surface (black or metallic surfaces), or 3) there is a window or a mirror. However, at the end of the video, in the final optimized mesh you can actually see interpolation in some areas that fills most small holes, even windows. For example, look at how the windows are "closed" and textured using images taken from outside. Cheers
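
      A minimal sketch of this kind of hole filling, done offline on an exported cloud with the Open3D library (an assumption for illustration; this is not RTAB-Map's actual mesh-optimization code, and "cloud.ply" is a hypothetical filename):

        # Rebuild a watertight surface from the scanned points with Poisson
        # reconstruction, which implicitly interpolates over small holes.
        import numpy as np
        import open3d as o3d

        pcd = o3d.io.read_point_cloud("cloud.ply")  # hypothetical raw export
        pcd.estimate_normals()                      # Poisson needs oriented normals

        mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
            pcd, depth=9)

        # Trim triangles supported by almost no measured points, so only
        # small gaps (windows, occluded backs of objects) stay filled.
        d = np.asarray(densities)
        mesh.remove_vertices_by_mask(d < np.quantile(d, 0.01))
        o3d.io.write_triangle_mesh("cloud_filled.ply", mesh)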

    • @davidmartin1628 · 5 years ago

      @matlabbe Thanks for the reply. I feel like I didn't give enough credit where it was due, as the project was fairly impressive.
      I fully understand that the 3D rendering is a result of the camera directly mapping the image onto the mesh created by the depth sensor.
      My idea was to form a gradient of the textures in regions of large objects that haven't been imaged, based on adjacent regions that have been scanned, like the undersides of tables.
      An example of this is the pool table upstairs: its mesh has been captured and the top and side surfaces have been imaged, and the mesh for the underside has been interpolated. However, the underside remains untextured/transparent, as it was never directly observed with the camera, which is to be expected.
      My idea is that it would make a nice added bonus if the interpolated mesh surfaces were shaded or textured using textures from the surrounding regions, to make the model look more 'complete' without having actually scanned them.
      That way, when you look at the 3D rendering from any angle, the object will not have sides missing. I know the interpolation of textures would produce a lot of incorrectly shaded sides, but I think it would look neat.

  • @chemistry2023 · 2 years ago · +1

    A great full 3D map

  • @subramanyam2699 · 6 years ago · +12

    This is insane!!!

  • @GyLala · 6 years ago

    Does it support the Samsung Galaxy S9+? Sadly :c I can't install it. Well, it needs Google Tango.

  • @attreyu65 · 7 years ago · +4

    Is it possible to use an iPad/iPhone with a Structure sensor attached? I mean, all these SLAM solutions are getting better and better at detecting loop closures and avoiding spikes or other defects in the geometry, but the texturing is still very, very low quality. I understand that Tango uses video clips, but even then, those clips are split into their respective frames, and surely when you film in 1080p at 30 or 60 FPS, the extracted frames must be good enough, given the quality of the cameras these days.
    Still, I don't know of even one SLAM solution, except maybe itSeez3D with Structure, with results comparable to photogrammetry, and even itSeez3D or Structure+Skanect (via uplink) don't let us roam around an environment. They only let us scan objects, bodies or rooms, and the larger the scanned volume, the lower the resolution, so they are ultimately useless.
    I also understand that these options are chosen by you, as developers, to increase the FPS, but surely the processing can be offloaded to a workstation via WiFi, and when you have a couple of Titans or 1080 Tis waiting to chew through whatever data you throw at them, it shouldn't be an issue.
    Hell, I would be more than happy to skip the realtime processing if you can give me perfect-quality texturing based on 2-5 MB/frame JPEGs or PNGs.
    I don't know why everyone is thinking about scanning dolls and their girlfriends when these very cheap RGB-D cameras could in theory do the job a Faro or Leica scanner does for €50,000.
    Sorry for the rant :) I just know that you guys already have the algorithms you need, and the processing power is something we, as users, have. So why isn't it done yet? :)

    • @matlabbe · 7 years ago · +1

      For the first question, yes, we could get similar results with a Structure sensor, though I am concerned that the motion-tracking accuracy might not be as good as Tango's (for a large space like this one).
      About processing power: the main goal of the RTAB-Map app is to do the processing onboard and to limit export time, so point clouds are downsampled and textures are downscaled. If you want to export at high texture resolution and high point-cloud density, I suggest saving the database and opening it in RTAB-Map Desktop to use more processing power, so the export is done faster (Section 7 of github.com/introlab/rtabmap/wiki/Multi-Session-Mapping-with-RTAB-Map-Tango). Note that full-resolution images are always saved in the database, even if the online rendering texture quality and point-cloud density are low.
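
      A small sketch of the kind of point-cloud downsampling described above, using the Open3D library (an assumption for illustration; RTAB-Map itself uses the Point Cloud Library internally, and the filenames are hypothetical):

        import open3d as o3d

        pcd = o3d.io.read_point_cloud("scan_full.ply")
        print(len(pcd.points), "points before")

        # A 5 cm voxel grid keeps roughly one point per voxel: a big
        # memory/CPU saving on a phone, at the cost of fine detail.
        down = pcd.voxel_down_sample(voxel_size=0.05)
        print(len(down.points), "points after")
        o3d.io.write_point_cloud("scan_down.ply", down)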

    • @attreyu65 · 7 years ago

      It would be nice to also use the Structure sensor with RTAB-Map. In the Desktop version you have all the major sensors except this one. Any plans to compile the sources to support the Structure as well?
      Why I'm insisting on Structure: it's relatively well known and it's completely mobile, so you can really explore the surroundings, something impossible with an R200 or a Kinect, which need to be attached to a desktop/laptop via cable. Also, if you've used Structure with Skanect, you know it can uplink the data to a computer via WiFi, which is perfect.
      Regarding the texture size: in the options we have a maximum of 8192x8192. Is this the size of the whole texture atlas, or per capture?
      Have you used BundleFusion?

    • @matlabbe · 7 years ago · +1

      Yes, agreed. I don't have a Structure sensor, which is the main reason it is not supported in RTAB-Map.
      It is the size of the texture atlas. However, there is an option to output more than one texture atlas. For example, I just uploaded this video th-cam.com/video/HWF4zdXjFu4/w-d-xo.html showing the model after being exported with 15 4096x4096 textures (camera images are not scaled). See the description for a link to compare with the model exported with default RTAB-Map settings (only one texture atlas).
      I use the Point Cloud Library.

  • @Insectula · 7 years ago · +2

    Couldn't they, for indoor architectural scanning, create something that calculates "hey, that's a flat wall" and makes a flat plane out of it? "Oh look, that's an organic object, so I'll skip it"... "That's a curved surface, so I'll make my best attempt at a radius." I mean something that gives a basic structural layout, which you can then fill with photogrammetry-scanned or modeled objects. If it recognizes a basic primitive, it replaces it with one to reduce the polycount and clean things up, sort of like converting a bitmap to a vector in 2D apps.

    • @matlabbe · 7 years ago · +2

      Indeed, detecting walls/floors as planes can save a lot of polygons and ensure the walls are actually straight. It is not an easy task for SLAM systems, as some information may be missing or there may be errors in the map (caused by drift) creating a double-walls/floors effect. With a good scan it could be possible, though (see Matterport's dollhouse view, for example; I'm not sure if they detect planes, but they seem to greatly reduce the number of polygons shown). We will probably see mobile apps that can do that in the not-so-distant future.
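
      A minimal sketch of the plane-detection idea discussed above, using RANSAC plane segmentation from the Open3D library (an assumption for illustration; this is not a feature of the Tango app, and "scan.ply" is a hypothetical export):

        import open3d as o3d

        pcd = o3d.io.read_point_cloud("scan.ply")

        # Points within 2 cm of the best-fit plane count as inliers.
        plane_model, inliers = pcd.segment_plane(distance_threshold=0.02,
                                                 ransac_n=3,
                                                 num_iterations=1000)
        a, b, c, d = plane_model
        print(f"plane {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0 "
              f"with {len(inliers)} inlier points")

        # The inliers could be replaced by a flat quad to cut the polycount;
        # repeating on the remaining points finds more walls/floors.
        rest = pcd.select_by_index(inliers, invert=True)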

  • @moseschuka7572 · 2 years ago

    Is it possible to implement object detection, recognition, pose estimation and tagging while doing this? If so, how?

  • @gulhankaya5088 · 2 years ago

    Hello, how do I use the map I created? I am using the application in an autonomous vehicle. How will it navigate through the map I created when I place the vehicle at the starting point?

  • @kylegreenberg8690 · 7 years ago · +7

    How do these sessions work? And how do you combine sessions?

    • @mathAI42 · 7 years ago · +10

      Images are matched between sessions. When there is a match, a constraint is added between the maps. In the video, when we see a previous map reappear, it is because such a constraint has been found; the graphs of both sessions are then linked into a single global graph. See the paper referred to on this page for more details: github.com/introlab/rtabmap/wiki/Multi-session
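
      A toy sketch of that idea: two sessions remain separate pose graphs until an image match adds a constraint that joins them. The data structures below are hypothetical, not RTAB-Map's internals:

        # Each node: id -> pose (x, y, theta); each edge: (from, to, relative pose).
        session_a = {"nodes": {0: (0, 0, 0), 1: (1, 0, 0)},
                     "edges": [(0, 1, (1, 0, 0))]}
        session_b = {"nodes": {100: (0, 0, 0), 101: (0, 1, 0)},
                     "edges": [(100, 101, (0, 1, 0))]}

        def merge_on_match(a, b, node_a, node_b, rel_pose):
            """An image in session B matched one in session A: add the
            inter-session edge. A graph optimizer (e.g. g2o or GTSAM)
            would then re-estimate all poses in the combined graph."""
            return {"nodes": {**a["nodes"], **b["nodes"]},
                    "edges": a["edges"] + b["edges"] + [(node_a, node_b, rel_pose)]}

        graph = merge_on_match(session_a, session_b, 1, 100, (0.5, 0, 0))
        print(len(graph["nodes"]), "nodes,", len(graph["edges"]), "edges in the global graph")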

    • @kylegreenberg8690 · 7 years ago · +2

      Mathieu Labbé, thank you for sharing!

  • @moahammad1mohammad · 4 years ago · +1

    How are the scans so goo-
    Oh, he's using a handheld scanner

    • @valeriavidanekalman3885 · 3 years ago

      Freeman, Google made a phone that uses 3D sensors to capture, idk, reality. It's called Tango. It's pretty shitty now, so they made Google AR (ARCore), which is better for 3D scanning.

  • @sergioandresbarragallardo2447 · 7 years ago

    Dear friend, I downloaded the multi-session example (chalet 1, 2, 3, 4) and can run the files on my computer; they all work great. But when I generate my own databases with my phone, the process fails. I made 3 models (1.db, 2.db, 3.db) but it is impossible for me to merge the scenes. Is there some special method for scanning with the phone application between scenes? Thanks!

    • @mathAI42 · 7 years ago · +1

      Make sure to start/stop from "anchor locations" (this is explained in the scanning-tips section of the page linked in the description). If the lighting has changed between the sessions, that could also affect the ability to merge them.

  • @Nicoda1st · 7 years ago · +2

    Thanks for sharing!

  • @minhajsixbyte · 3 years ago

    Can this be done with any phone camera, or is special hardware required?
    Is it possible with an iPhone XR or Huawei Y7 Pro 2018?

    • @songqiaocui2950 · 3 years ago · +1

      Unfortunately no, it only works on Google Tango smartphones, of which there are only two; the Lenovo Phab 2 Pro is one of them. To make it worse, Google killed the Tango project 4 years ago and switched to ARCore. Tango needs specific hardware, like an IMU and TOF sensors, which normal phones are not equipped with.

    • @minhajsixbyte · 3 years ago

      @songqiaocui2950 :(

  • @shuixing85 · 4 years ago

    Can't wait to use this program on an iPhone 12 Pro 😀😀😀

    • @matlabbe · 4 years ago · +2

      2021 ;)

  • @Snyft · 3 years ago

    I think this is something I need. I want to make a 1:1 VR recreation of my apartment, but I don't know any of this. Is this tool a good way to start?

    • @matlabbe · 3 years ago · +1

      If you have an iPhone/iPad with LiDAR, give it a try with the iOS version; it is free. There are also other apps on iOS that would give similar results. This is currently the easiest way to scan without any other specialized hardware.

  • @belikepanda. · 3 years ago

    I wish there was a tutorial on how to do this

    • @matlabbe · 3 years ago · +1

      See github.com/introlab/rtabmap/wiki/Multi-Session-Mapping-with-RTAB-Map-Tango. If you want to try with your own data, a Google Tango phone is required, or an iPhone/iPad with LiDAR (using the RTAB-Map iOS app).

  • @oxpack · 4 years ago

    That's not a cottage; maybe a vacation home. Cottages are way smaller and have an old pair of skis on the wall.

    • @matlabbe · 4 years ago

      It seems there is a debate over "cabin", "cottage" or "chalet" depending on where you live in Canada (www.narcity.com/life/canadians-cant-agree-on-whether-its-called-a-cottage-cabin-or-chalet). I am used to "chalet de ski" in French, but it seems the most common translation is "cottage". The rental site (www.chaletsalpins.ca/en/cottages-for-rent/) also calls them "cottages". They are all new, luxurious constructions at the Stoneham ski resort just north of Québec City, but I agree with you: a "chalet" or "cottage" is generally a lot smaller and more rustic.

  • @Aristocle · 6 years ago

    The Android app doesn't work on my Samsung S7. Why?

    • @matlabbe · 6 years ago

      The app doesn't work on ARCore, only on Google Tango-compatible phones, because a depth camera is required.

  • @animowany111 · 7 years ago · +1

    Hi, does this project need a depth camera? Do you use any sensors like the gyroscope and accelerometer?
    Thanks

    • @matlabbe · 7 years ago · +3

      A depth camera is required. For motion estimation, it is the approach developed by Google Tango, which is a fusion of the gyroscope/accelerometer and a fish-eye camera (i.e., visual-inertial odometry).
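
      A toy sketch of the inertial half of visual-inertial odometry: dead-reckoning orientation by integrating gyroscope rates. Real VIO (as in Tango) fuses this with fish-eye-camera feature tracks in a filter or optimizer; this fragment, with made-up sample values, only shows the IMU side that bridges the gaps between camera frames:

        import numpy as np

        def integrate_gyro(R, omega, dt):
            """One step: rotate R by the small rotation omega*dt using the
            first-order approximation R <- R (I + [w]x dt), then
            re-orthonormalize to fight numeric drift."""
            wx, wy, wz = omega * dt
            skew = np.array([[0.0, -wz,  wy],
                             [ wz, 0.0, -wx],
                             [-wy,  wx, 0.0]])
            R = R @ (np.eye(3) + skew)
            u, _, vt = np.linalg.svd(R)
            return u @ vt

        R = np.eye(3)                              # initial orientation
        for _ in range(200):                       # 200 samples at 100 Hz = 2 s
            R = integrate_gyro(R, np.array([0.0, 0.0, 0.1]), dt=0.01)
        print(R)                                   # ~0.2 rad (~11 deg) of yaw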

    • @animowany111 · 7 years ago

      Too bad, I don't own a depth camera. I've experimented with ORB-SLAM2, but that gives extremely large scale drift with video from my phone (and even with one of the proper datasets, namely freiburg2_large_with_loop).
      I haven't managed to compile LSD-SLAM yet, as I am a beginner with ROS and it uses an extremely non-standard build system.

  • @youbutstronger1453 · 7 years ago

    How do I export the finished scan to my PC?

    • @matlabbe · 7 years ago · +2

      If you did Export and saved the mesh/cloud on the device, you can find the zip file in the RTAB-Map/Export folder on the SD card. You can use the Astro file manager to browse the files on the SD card. From there you can send the file by email or copy it to your Google Drive.

    • @youbutstronger1453 · 7 years ago

      @matlabbe Thank you 😁😘

  • @michaeld954 · 6 years ago

    Will this do the outside easily?

    • @matlabbe · 6 years ago · +1

      It would work outdoors as long as there is no direct sunlight on the house (it may work on a cloudy day). However, it would not be easy to scan more than the first floor. For outdoors, you may use structure from motion / photogrammetry with a flying drone; this will be a lot faster and easier.

  • @alightimages7401 · 3 years ago

    Would this work with any 360 cameras?

  • @harshilsakadasariya7684 · 7 years ago

    Hey, I want to make this type of application but without using Tango-enabled devices. Is it possible to make that?

    • @matlabbe · 7 years ago · +1

      RTAB-Map is available without Tango. However, in this kind of application (hand-held scanning with an RGB-D sensor), Tango's visual-inertial odometry approach helps a lot to get more accurate maps.

    • @harshilsakadasariya7684 · 7 years ago

      So is it possible to do 3D scanning with an iPhone or any simple Android phone, such as a Moto G5?

    • @matlabbe · 7 years ago · +1

      No, a depth sensor is required. You may find photogrammetry-based apps that can do 3D scanning using only the phone's camera. Some of these apps are very good for small objects or scanning a single person, but for scanning large indoor environments like in this video, they would not be as easy, fast or reliable.

    • @harshilsakadasariya7684 · 7 years ago

      Thank you very much sir, this is really helpful to me.

    • @Martin-dx5zs · 7 years ago · +1

      Harshil Sakadasariya ARKit

  • @Tom-fb6nz · 5 years ago

    Would this work on the S10?

    • @matlabbe · 5 years ago

      Unfortunately no. This technology is currently only available on the Lenovo Phab 2 Pro and Asus ZenFone AR, which have a rear long-range depth sensor (a time-of-flight technology similar to LiDAR).

    • @mrbulp · 4 years ago

      @matlabbe Have you tried with a Nokia? The latest one has depth-sensing tech in it.

  • @curtiswilson8402 · 4 years ago

    Is the "render" photo-grade clarity? ☺

    • @matlabbe · 4 years ago

      There is still room for improvement, but we try to improve the texture quality over time. For example, compare the old and new Sketchfab models linked in the description.

  • @widgity · 5 years ago

    I assume this relies on Tango or ARCore? Such a shame Google killed Tango; I wish I had bought a device while they were available.

    • @mathAI42 · 5 years ago

      Yeah, it works only on Tango phones (not ARCore). I use an Asus ZenFone AR, which I think can still be bought (Best Buy in Canada still sells it at $800 CAD).

    • @widgity · 5 years ago

      @mathAI42 Huh, I thought ARCore was meant to be a direct replacement for Tango. I guess not. I played with the ZenFone and nearly bought one for about £200, but they are out of stock anywhere I can find them locally now. If I bought one now, I'd be worried it would come with ARCore instead of Tango.

    • @matlabbe · 5 years ago · +1

      @widgity My ZenFone AR has both ARCore and Google Tango working.

  • @GospodinJean · 2 years ago

    Which software is that?

    • @matlabbe · 2 years ago

      RTAB-Map for Google Tango, now also available on iOS (LiDAR required).

  • @sergiesaenz6735 · 6 years ago

    Can this 3D-scanned map be used in a first-person game?

    • @matlabbe · 6 years ago

      Yes, this is a 3D model like everything else.

    • @simeonnedkov894 · 6 years ago · +1

      You need to remodel basically everything

    • @supersaiyajin1599 · 5 years ago

      @simeonnedkov894 Why?

  • @pixelflex7297 · 7 years ago · +1

    How long did it take to map the building?

    • @matlabbe · 7 years ago · +2

      ~30 minutes of scanning

    • @pixelflex7297 · 7 years ago

      Thanks for the update.

    • @venomman · 7 years ago

      How good is the image in 3D? I'm thinking of the new Asus Tango phone with the 23 MP camera, but I really want the resolution to be very high to create these worlds for VR.

    • @matlabbe · 7 years ago

      The Tango API uses the camera in video mode, so you won't get 23 MP RGB images; on the Phab 2 Pro, for example, we can get 1080p max. Note that for an area as large as in this video, the textures must be downscaled a lot to be viewable on most devices.

    • @venomman · 7 years ago

      matlabbe Well, that's somewhat good news, as the ZenFone has a 4K camera...?

  • @myperspective5091 · 7 years ago · +1

    Cool.
    What was the actual time lapse?

    • @matlabbe · 7 years ago · +1

      1 Hz frame rate and ~1860 frames, so about ~31 minutes of scanning.

    • @myperspective5091 · 7 years ago

      1. Were you involved in the development of this software?
      2. Did you test what the maximum range is?
      3. Did you test what the minimum range is (the inside of a shoe box or a closet, the area between furniture and the wall)?
      4. Can you port it to any graphics environments?
      If so, have you heard of anyone who has filled in the blanks to complete the image?

    • @matlabbe · 7 years ago · +5

      1. Yes.
      2. ~7 meters max in good conditions; the biggest issues are black and reflective materials. For example, if you pause at 2:32 (frame 1494), I've enabled the visualization of depth over RGB on the left, so you can see that no depth data can be captured on the black couch (invisible couch!). See the depth-masking sketch after this comment.
      3. Min 30 cm, maybe 50 cm to avoid Tango drifting.
      4. The Sketchfab link in the description is a common OBJ model that has been exported. No for the last question :P
      Cheers
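
      A small sketch of how such invalid depth pixels are typically handled; synthetic data for illustration only (not Tango/RTAB-Map code), with the ~0.3-7 m valid range taken from the numbers above:

        import numpy as np

        # TOF sensors report 0 (or NaN) where no return was measured,
        # e.g. on black or reflective surfaces. Synthetic frame here:
        depth = np.random.uniform(0.0, 10.0, size=(480, 640)).astype(np.float32)

        valid = (depth > 0.3) & (depth < 7.0)    # mask out-of-range pixels
        depth_clean = np.where(valid, depth, np.nan)
        print(f"{valid.mean():.0%} of pixels usable")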

    • @myperspective5091 · 7 years ago

      Thanks. 👍
      I always wanted to combine something like this with a robot's navigation system, using object-recognition software to tag the locations of objects in the environment. That way it could build lists to make its own task tree. It could even form expectations about where to find the people who use the environment, because it could identify different types of rooms by their contents, just from a lookup checklist.

    • @JetJockey87 · 7 years ago · +1

      You could try using a Simultaneous Localization And Mapping (SLAM) algorithm for an autonomous vehicle to navigate its environment. This is how robotic vacuum cleaners find their way back to their charging stations.

  • @Hermiel · 4 years ago · +1

    Can this be made to work on the 2020 iPad?

    • @mathAI42 · 4 years ago · +1

      Theoretically yes, it has the hardware; the problem is the software. I don't have confirmation yet that ARKit lets us get the raw point cloud or depth image from the LiDAR (if someone knows, tell me! I am waiting for that before buying one). From what I see in the latest Xcode 11.4, we can get a mesh but not the point cloud/depth image (registered with the color camera). Maybe they will unlock that possibility in the future.

    • @Hermiel · 4 years ago

      @mathAI42 Can you use the facial scanner on the iPhone?
      www.fabbaloo.com/blog/2020/3/31/apples-new-lidar-ipad-disappoints-or-does-it

    • @mathAI42 · 4 years ago · +1

      @Hermiel Maybe, but it is not very user-friendly if you cannot see the screen while you scan... Note that I don't see a problem with the actual resolution of the LiDAR; it seems similar to the ZenFone AR or Huawei P30 Pro.

    • @GirizdL · 3 years ago

      @mathAI42 Will you publish your scanning software in the AppGallery for the Huawei P50 Pro or the P40 Pro?

    • @matlabbe · 3 years ago

      @GirizdL I am currently having some depth-distortion issues with Huawei AREngine in my latest build; I'll see what I can do.

  • @Jasonreninsh · 7 years ago

    Do you also use an IMU to assist?

    • @matlabbe · 7 years ago

      Yes, Tango's odometry is a visual-inertial odometry (VIO) approach.

    • @Jasonreninsh · 7 years ago

      Okay, I see. I tried with a Kinect and the camera position would get lost very often. Now I get the point. Thanks.

    • @ziweiliao4044 · 4 years ago

      It would be really crazy if no IMU were used... with this kind of shaking and movement speed. I hope tech that achieves the same result without an IMU will come out in the next 5 years ( T.T )

  • @jonixmotogp1423 · 5 years ago

    holy shit

  • @bobpro583 · 4 years ago

    Does it work on the iPhone 11???

    • @matlabbe · 4 years ago · +1

      No, it currently works only on Tango-enabled phones (or selected Android phones with a rear TOF camera).

    • @matteo_petruz1435 · 4 years ago

      @matlabbe Where can I download the app?

    • @matlabbe · 4 years ago

      @matteo_petruz1435 play.google.com/store/apps/details?id=com.introlab.rtabmap or the latest version: github.com/introlab/rtabmap/wiki/Installation#rtab-map-tango-apk

  • @sergioandresbarragallardo2447 · 7 years ago

    Can it export in PCD or XYZ format?

    • @matlabbe · 7 years ago

      Currently, only PLY (binary) and OBJ (ASCII) export formats are available, though they should be readable in most software (e.g., MeshLab) for conversion to other formats.
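
      As one way to do that conversion outside the app, here is a small sketch using the Open3D library (an assumption for illustration; MeshLab works too, and the filenames are hypothetical):

        import open3d as o3d

        pcd = o3d.io.read_point_cloud("rtabmap_cloud.ply")
        o3d.io.write_point_cloud("rtabmap_cloud.xyz", pcd)  # plain "x y z" text
        o3d.io.write_point_cloud("rtabmap_cloud.pcd", pcd)  # PCL's native format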

    • @sergioandresbarragallardo2447 · 7 years ago · +1

      Thanks for the answer. The program is great; it would be wonderful to connect its potential with programs like ArchiCAD or Revit.

  • @jozatheman · 4 years ago

    I can't scan my room with the app I have :/

  • @dior1992 · 4 years ago

    What kind of accuracy did you manage to get on the measurements?

    • @matlabbe · 4 years ago

      For a single room, it is easy to get only 1 to 3 cm of error. For large areas, it can be worse if loop closures are not detected (VIO drift not corrected).

  • @kingarchnyc · 3 years ago

    What software/app is this?

    • @matlabbe · 3 years ago · +1

      RTAB-Map: play.google.com/store/apps/details?id=com.introlab.rtabmap
      It is free and Open Source.

    • @kingarchnyc · 3 years ago · +1

      @matlabbe Thank you 🙏🏿 Too bad I use an iPhone; this does not seem to have an iOS version... 🤦

    • @matlabbe · 3 years ago · +1

      @kingarchnyc An iOS version will be released soon, but it will only work on iPhones/iPads with a LiDAR sensor.

    • @GirizdL · 3 years ago

      @matlabbe Dear Mathieu, I'm looking for a new phone, and the biggest dilemma for me is the Asus ZenFone AR versus the iPhone 12 Pro Max. I found some material about the LiDAR's object-scanning capability and was a little disappointed, and CrossPoint's house scanning didn't convince me that the iPhone is worth it.
      What do you think about using the old Tango tablets for this purpose? Are they as good as the ZenFone?

    • @matlabbe · 3 years ago · +1

      @GirizdL I prefer the Tango phones with TOF (ZenFone AR or Phab 2 Pro) over the original Tango dev kits. The advantage of the iPhone is that it will keep being supported and get new apps over time, while nobody develops for Tango anymore.