Gaussian Splatting! The next big thing in 3D!

  • Published on Jun 15, 2024
  • In this video, we embark on a fascinating exploration of Gaussian Splatting, a cutting-edge technique that brings a touch of magic to computer graphics and visualization! I recently took a look at this 3D technique and found it very interesting. It is quite hard to manage while it is still in development, but fortunately there are some instructions on the net.
    Specs:
    These samples were rendered with an Nvidia RTX 3070 graphics card (8 GB VRAM)
    PC: Asus ROG, Ryzen 7, 64 GB RAM
    Check out NeRF guru Jonathan Stephens' beginner guide here:
    • Getting Started With 3...
    #nerf #gaussiansplatting #lumaai
  • Film & Animation

Comments • 374

  • @IRWBRW964
    @IRWBRW964 9 months ago +447

    3D Gaussian Splatting is actually not a NeRF technology, as there is no neural network; the splats are directly optimized through rasterization rather than the ray-tracing-like method of NeRFs.
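
    (For the curious: the rasterizer's per-pixel blend can be sketched in a few lines of NumPy. This is a toy illustration of front-to-back alpha compositing over depth-sorted splats, not the paper's CUDA rasterizer; all values are made up.)

    ```python
    import numpy as np

    # Toy sketch: composite depth-sorted splats for one pixel, front to back.
    # Each splat contributes color c with opacity a (evaluated from its
    # projected 2D Gaussian); here both are hypothetical hard-coded values.
    def composite(colors, alphas):
        """colors: (N, 3) RGB per splat, sorted near-to-far; alphas: (N,)."""
        pixel = np.zeros(3)
        transmittance = 1.0            # fraction of light still unblocked
        for c, a in zip(colors, alphas):
            pixel += transmittance * a * c
            transmittance *= 1.0 - a
            if transmittance < 1e-4:   # early termination once nearly opaque
                break
        return pixel

    print(composite(np.array([[1.0, 0, 0], [0, 1.0, 0]]), np.array([0.6, 0.5])))
    ```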

    • @spyral00
      @spyral00 9 months ago +12

      Looks like it's a new way to display point clouds, am I wrong?
      Still amazing and I have to try it!

    • @JB-fh1bb
      @JB-fh1bb 9 months ago +9

      @@spyral00 Right? I thought this Gaussian splatting technique was a new way to present the point data generated by NeRF

    • @malfattio2894
      @malfattio2894 9 months ago

      Wow, it looks really damn good considering

    • @Blox117
      @Blox117 9 months ago +2

      so it will be faster too

    • @WWG1-WGA
      @WWG1-WGA 8 months ago

      That means we can play even more with the neurons

  • @jimj2683
    @jimj2683 9 months ago +468

    Imagine Google Street View built with this. It could then be used in a GTA-type game with the entire world.

    • @ariwirahadi8838
      @ariwirahadi8838 9 months ago +40

      You forget about Flight Simulator... it is generated from real map data

    • @valcaron
      @valcaron 9 months ago +43

      Grand Theft Auto Frankfurt
      GTA but everything's all blurry.

    • @michaking3734
      @michaking3734 9 months ago +13

      I bet that happens in the next 20-30 years

    • @florianschmoldt8659
      @florianschmoldt8659 9 months ago +27

      There is no good way to use splatting with interactive light and shadow, or with animation. All the lighting is baked in together with the color information. So I guess this tech won't make it into gaming.

    • @NileGold
      @NileGold 9 months ago +2

      @@valcaron and most of it is missing

  • @crs11becausecrs10wastaken
    @crs11becausecrs10wastaken 9 months ago +82

    If scanning software is actually capturing and rendering details as fine as leaves of plants, without all of the artifacts, then that is absolutely mind-blowing.

  • @filipewnunes
    @filipewnunes 9 months ago +109

    I spent lots and lots of hours of my life unwrapping UVs and correcting meshes to use in my archviz projects.
    The amount of development in this field is insane. And we are in the first days of this.
    What a time to be alive.

    • @OlliHuttunen78
      @OlliHuttunen78  9 months ago +14

      My thoughts exactly. Many things are changing very fast now. Although this does not yet create anything from scratch: for these NeRFs you still need something existing, which is turned into 3D by taking pictures of it in the real world. Traditional modeling certainly still has its place when creating something new.

    • @captainflimflam
      @captainflimflam 9 months ago +13

      I got that reference! 😉

    • @loleq2137
      @loleq2137 9 months ago +8

      Ah, a fellow Scholar!

    • @MagicPlants
      @MagicPlants 9 months ago +1

      well said!

    • @nekosan01
      @nekosan01 8 months ago

      Photogrammetry is very old; I don't know why people only know this marketing stuff and enjoy it when it's much worse than RealityCapture and other apps, which don't require an expensive video card. You can also import into sculpting software to fix the mesh and project UVs very easily, unlike this garbage

  • @JimmyNuisance
    @JimmyNuisance 9 months ago +33

    I fell in love with splat engines when I spent time in Dreams on the PSVR. It's fantastic for creatives, it makes it very easy to make new and unseen surfaces.

    • @kazioo2
      @kazioo2 8 months ago

      That renderer went through so many changes and iterations (also after they made public explanations) that I'm not really sure how much of typical splatting is still used there. There is a lot of conflicting information about it.

  • @4Gehe2
    @4Gehe2 9 months ago +11

    OK, I did a quick read of the paper. This is a clever thing, but keep in mind that it doesn't preserve details so much as make them up.
    (The explanation is in sections 5.1 and 5.2 and figs. 2, 3, 4 of the paper.) Basically you reconstruct the environment not by analysis but by statistically informed guesses. You then analyze whether each guess was too big or too small by referencing the result against the original data. If the guess was too small, you duplicate it near the point; if the guess was too big, you divide it in two. So if you need to estimate a curve, instead of actually solving the curve you keep guessing its shape, and because of the duplication and division of the guesses you approach the solution faster. It is important to keep in mind that you don't actually get THE SOLUTION; you get an approximation of the solution based on guesses.
    This is basically how you can do square roots and cube roots in your head to 2-3 decimals, by estimating upper and lower bounds and iterating (for those who don't know: to estimate the square root of 6, you can calculate in your head that 2x2 is 4 and 3x3 is 9, so the solution is between those; then 2.5x2.5 gives 6.25, which is too much, so the solution must be less than that; 2.25x2.25 gives 5.0625... and so on. You will never practically reach the true value of 2.449489743 because you only go to 3 decimals, but let's be honest, a ~0.04% error is more than enough). See the sketch below.
    To simplify a bit: imagine you are sculpting with clay and want to replicate a shape, but you can only add or remove material instead of shaping it with your hands. If you have too much material, you cut away half of the amount you know to be too much. If you added too little clay, you add another same-sized lump. And you keep repeating this until you get a close enough approximation of the thing you are replicating.
    What is important to keep in mind is the limitations of this. You can't replicate things accurately, for the simple reason that if you lack information on details you can't just guess them! Your data resolution doesn't increase; you only actually know the data points that you gathered. So for historical, scientific, or engineering purposes you will not be able to get any extra information (and I hope people realize this before they try to use details from this in a court of law or something); you really can't learn anything more from this than you can get from just looking at the frames as pictures.
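
    (That bracketing trick, as a tiny Python sketch: a toy illustration of the upper/lower-bound guessing described above, not the paper's densification code.)

    ```python
    # Toy sketch: bisection search for sqrt(6), narrowing an upper/lower
    # bracket until the guess is good to about 3 decimals.
    def approx_sqrt(x, tol=1e-3):
        lo, hi = 0.0, max(1.0, x)      # sqrt(x) lies within [0, max(1, x)]
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if mid * mid < x:          # guess too small: raise the floor
                lo = mid
            else:                      # guess too big: lower the ceiling
                hi = mid
        return (lo + hi) / 2.0

    print(approx_sqrt(6))  # ~2.449, vs. the exact 2.4494897...
    ```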

  • @wozniakowski1217
    @wozniakowski1217 9 months ago +9

    I feel like those galaxy-like ellipses with feathered edges are THE new polygons and soon this rendering method will replace them, especially in the gaming industry. What a time to be alive

    • @spyral00
      @spyral00 9 months ago +9

      That depends. Can they support animation, rigging, fluids, etc.?
      Voxels are great but they still aren't the norm... Maybe it's just another great tool on the shelf.

    • @bricaaron3978
      @bricaaron3978 9 months ago +1

      I would say the gaming industry would be the last area to use this method. It looks like this is a method of rendering; it has nothing to do with the generation and manipulation of 3D data.

    • @spyral00
      @spyral00 9 months ago

      @@bricaaron3978 True... I would love to see a game made with NeRF point clouds rendered with this, though.

    • @bricaaron3978
      @bricaaron3978 9 months ago

      @@spyral00 Can you, in a few sentences, explain why NeRF point clouds are different from any other point cloud so that I don't have to research it, lol?

    • @spyral00
      @spyral00 9 months ago

      @@bricaaron3978 NeRF is an algorithm that generates 3D models (or point clouds) from 2D photos, using neural nets. Pretty amazing stuff, but quite complex and not yet widely used. This technique seems to be just a way to display the results in a nice way, if I understand correctly. In theory one could make game environments using photos + NeRF as input and this to render them; I'm pretty sure it'd look amazing.

  • @TheCebulon
    @TheCebulon 9 months ago +7

    The whole time, I thought I was watching videos and was wondering what was 3D about them. 🤣
    Then it hit me: these ARE 3D renders.
    Absolutely stunning.

  • @patjackmanesq
    @patjackmanesq 9 months ago +11

    2.7k subs is a ridiculously low amount for such quality videos! Great work, brother

  • @Ironside451
    @Ironside451 9 months ago +3

    Reminds me of that moment in Star Trek Into Darkness when they are looking at security footage and are able to move around inside the footage, just like this

  • @linuxmill
    @linuxmill 9 months ago +70

    Gaussian splatting has been around for many years; I used it in the late '90s. It's a method of generating implicit functions, which can then be contoured.

    • @MatteoMi
      @MatteoMi 9 months ago +15

      I'm not a specialist, but I suppose this is similar to VR, which has also been around since the 80s, but the tech wasn't mature enough. I mean, maybe.

    • @EmileChill
      @EmileChill 9 months ago +2

      @linuxmill I used Autodesk 123D Catch, which isn't available anymore. I believe it was the same kind of technique, but I'm not 100% sure.

    • @danielalorbi
      @danielalorbi 9 months ago +19

      Yup, the new thing here is using it to render radiance fields in real time

    • @EmileChill
      @EmileChill 9 months ago +2

      @@danielalorbi That's incredible!

    • @stephanedubedat5538
      @stephanedubedat5538 9 months ago

      While the technique is not new, its application to NeRFs is

  • @Oho_o
    @Oho_o 9 months ago +1

    Those Gaussian splats look like galaxies in space at 2:07... ;O

  • @TorQueMoD
    @TorQueMoD 9 months ago +31

    Great video! The RTX 3070 has 8 GB of VRAM though, not 4. I'm super excited to see where NeRF will take us in another 5 years! It's a boon for indie developers who don't have the time or budget to create high-quality assets.

    • @stash.
      @stash. 9 months ago

      It varies, I have the 6GB 3070 model
      Edit: Turns out I had the 8GB version, not the 6GB as I mentioned earlier

    • @GrandHighGamer
      @GrandHighGamer 9 months ago

      @@stash. 4GB would still be incredibly low (and 8GB is already pitiful for a card that cost around $800), to the point where it wouldn't make sense for it to exist at all. At that point a 3060 would be both cheaper and potentially have 4x the memory. I'd imagine this was just a mistake.

    • @esaedvik
      @esaedvik 9 months ago +1

      @@GrandHighGamer 8GB is perfectly fine for the use cases of 1080-1440p gaming.

  • @stash.
    @stash. 9 months ago +1

    Bringing old family photos to life will be a huge market boom

  • @thenerfguru
    @thenerfguru 9 months ago +1

    Thanks for the shout out! You can now view the scene in the NerfStudio viewer which unlocked smooth animation renders.

    • @OlliHuttunen78
      @OlliHuttunen78  9 months ago

      Yes. I just noticed your new video about it. Have to try it. Thanks Jonathan!

  • @o0oo888oo0o
    @o0oo888oo0o 9 months ago +1

    Great, the best videos about this NeRF niche that I've found so far. Keep it up!

  • @BlenderDaily
    @BlenderDaily 9 months ago +1

    so exciting! thanks for the explanation :)

  • @The-Filter
    @The-Filter 9 months ago

    Man, thank you for this video! That stuff is really next gen! Wow! And top-notch presentation! Very relaxing and informative!

  • @ChronoWrinkle
    @ChronoWrinkle 9 months ago +1

    Hot damn, it should be possible to extract depth, normals, and glossiness from such a capture. This is insane!

  • @pan6593
    @pan6593 8 months ago

    Great summary, insight and practical example - thanks!

  • @8eck
    @8eck 9 months ago +2

    I remember when I first tried NeRF. Since then, they have evolved to insane quality!

  • @jamesleetrigg
    @jamesleetrigg 9 months ago +14

    If you watch Two Minute Papers, there's a new radiance field technique that is over 10 times as fast and has better quality, so look forward to seeing this in VR/AR

    • @Barnaclebeard
      @Barnaclebeard 9 months ago +21

      Can't stand to watch TMP anymore. It's nothing but paid content and catchphrases. I sure would love a channel like the old TMP.

    • @primenumberbuster404
      @primenumberbuster404 9 months ago +9

      ​@@Barnaclebeard fr 😢 Many of those papers are actually not even peer reviewed.

    • @Barnaclebeard
      @Barnaclebeard 9 months ago +8

      @@primenumberbuster404 And it's exceedingly rare that there is any analysis or insight beyond, "imagine what it can do two papers down the road!" anymore.

    • @Summanis
      @Summanis 9 months ago +1

      Both this video and the TMP one are on the same paper.

  • @HandleBar3D
    @HandleBar3D 9 months ago +3

    This is gonna be huge in real estate, once it’s a streamlined app on both ends.

  • @3dvolution
    @3dvolution 9 months ago

    It's getting better and better, that's an impressive method, thanks for sharing ;)

  • @vadimkozlov3228
    @vadimkozlov3228 6 months ago +1

    Fantastic and very professional YouTube channel. Appreciate your work.

  • @romanograsnick
    @romanograsnick 8 months ago

    Astonishing achievements were made, that is great! I hope this may lead set builders to make more models which can be traced and recreated in 3D space, keeping these sculpting jobs relevant. Thanks!

  • @chosenideahandle
    @chosenideahandle 9 months ago

    Terve Olli! Another Finn with an awesome YouTube channel (I'm not including myself 😁)! Thanks for keeping us up to date on what is going on with this cutting-edge stuff.

  • @drekenproductions
    @drekenproductions 9 months ago +1

    Thanks for linking to the NeRF guru. Could come in handy some day if I decide to try this!

  • @MartinNebelong
    @MartinNebelong 9 months ago

    Great overview and certainly exciting times! 😊

  • @damsen978
    @damsen978 8 months ago

    This is literally what will follow photographs and images in general: captured moments of your family and friends in full 3D. Now we need a device that captures these automatically at the click of a button.

  • @talis1063
    @talis1063 9 months ago +10

    I'm deeply uncomfortable with how fast everything is moving right now. Feels like anything you touch could become obsolete in months.

    • @flameofthephoenix8395
      @flameofthephoenix8395 9 months ago +1

      Except for farming.

    • @ChainsawGutsFuck
      @ChainsawGutsFuck 9 months ago

      @@flameofthephoenix8395 Or water. Or oxygen. Or physical existence.

    • @flameofthephoenix8395
      @flameofthephoenix8395 9 months ago

      @@ChainsawGutsFuck I figured he was talking about careers.

    • @Sc0pee
      @Sc0pee 9 months ago

      If you mean traditional 3D modelling for gaming/movies or 3D printing, then no, at least not for the foreseeable future, because this technique doesn't produce mesh models, which are a requirement in games and movies for dynamic lighting, animation, surfacing, interactivity, etc. It also requires you to have the object you want in real life to work with.

  • @LaVerite-Gaming
    @LaVerite-Gaming 9 months ago +1

    It's beautiful that the first image I ever saw rendered this way is a Captain Haddock figurine ❤

  • @TheABSRDST
    @TheABSRDST 9 months ago

    I'm convinced that this is how our vision works irl

  • @domovoi_0
    @domovoi_0 9 months ago

    Incredible.
    Love and blessings!

  • @tristanjohn
    @tristanjohn 9 months ago

    Absolutely phenomenal!

  • @devinweidinger3812
    @devinweidinger3812 9 months ago

    Great video man, thanks for sharing

  • @MonsterJuiced
    @MonsterJuiced 9 months ago +9

    This is fascinating! I hope there's going to be some kind of support for Blender/Unreal/Unity soon. I would love to play with this

    • @Jackpadgett-gh8ht
      @Jackpadgett-gh8ht 8 months ago

      There is support for it! Volinga AI, search it up

  • @Neura1net
    @Neura1net 9 months ago +1

    Very cool. Thank you

  • @DailyFrankPeter
    @DailyFrankPeter 9 months ago +1

    All we need now is a scanner in every phone for taking those selfie pointclouds and we'll be in the world of tomorrow.

  • @marco1941
    @marco1941 9 months ago

    Wow, now we’ll see really interesting development in video game production and of course in the results.

  • @michaelvicente5365
    @michaelvicente5365 9 months ago

    Ohhh, thanks for explaining! I saw a couple of things on Twitter and was wondering what this Gaussian splatting was about!

  • @ericb6048
    @ericb6048 9 months ago

    Glad I found your channel ;) .. nice to meet you Olli

  • @GraveUypo
    @GraveUypo 9 months ago

    These are so good that you could probably use screenshots of these models to make 3D models with old photogrammetry software.

  • @MommysGoodPuppy
    @MommysGoodPuppy 9 months ago

    Yesss, I can't wait for this to be utilized in VR. I assume we could render absolutely insane detail in real time, for simulating reality or having big-budget CGI movie visuals in games

  • @fontende
    @fontende 9 months ago +1

    Technology from the movie Minority Report, shown 20 years ago. That's how long it takes to make.

  • @Inception1338
    @Inception1338 9 months ago

    One more time for Gauss to show the world who is the king of mathematics.

  • @luketimothy
    @luketimothy 8 months ago

    Just imagine a machine that can generate point clouds around itself at a rate of 60 per second, and a technique like this that can render that point cloud at the same 60 per second rate. Truly 3D video. Would be amazing.

  • @renko9067
    @renko9067 8 months ago

    This is basically how the actual visual field works. Overlays of sensations, sounds, and smells complete the illusion of subject/object. It is the zero dimension quantum wave field.
    The scene ‘moves’ in relation to the ‘eyes’ of an apparent subject.

  • @jimmyf2618
    @jimmyf2618 9 months ago +1

    This reminds me of the old "Unlimited Detail" video promising infinite rendering

  • @eekseye666
    @eekseye666 9 months ago

    Oh, I love your content! Should have subscribed last time I came across your channel. I didn't, but I'm doing it now! )

  • @FredBarbarossa
    @FredBarbarossa 9 months ago

    looks really impressive

  • @lordofthe6string
    @lordofthe6string 9 months ago

    This is so freaking cool, I hope one day I can make a game using this tech.

  • @MotMovie
    @MotMovie 9 months ago

    Good stuff mate. Very interesting indeed, and great to see such an in-depth look into things with self-made examples. As a sidenote, the music is a bit big for this. I mean, it's not a cure for cancer (just yet), so perhaps go a bit easier on the "Life will win again, there will be a beautiful tomorrow" soundtrack :p Anyhow, cheers, will be back for more.

  • @malipetek
    @malipetek 9 months ago

    Very interesting, thanks.

  • @lemiureelemiur3997
    @lemiureelemiur3997 9 months ago

    Stunning!

  • @JeremyDWilliamsOfficial
    @JeremyDWilliamsOfficial 9 months ago

    Nice work! Well done :)

    • @JeremyDWilliamsOfficial
      @JeremyDWilliamsOfficial 9 months ago

      Subbed. But to be honest, you really didn't explain much about it. Perhaps make another that explores the math, techniques or even touches on your install and implementation process.

  • @ponadchmurami8008
    @ponadchmurami8008 9 months ago

    Amazing, thanks man for this video

  • @Eddygeek18
    @Eddygeek18 9 months ago +5

    The next step is getting it working with animations and physics, and then you have a new game rendering method. I have always felt mesh rendering is limited and have been waiting for a new method such as this. I hope it's the one this time, since there have been quite a few duds in the past

    • @0ooTheMAXXoo0
      @0ooTheMAXXoo0 9 months ago

      Apparently Dreams (2020) on PS4 uses this technique.

    • @catsnorkel
      @catsnorkel 9 months ago +1

      Until this method can produce poly models that can properly fit into a pipeline, I really don't see this being widely used in either the games or film industries, but I can see it being used a lot in archvis for example.

    • @Eddygeek18
      @Eddygeek18 9 months ago

      @@catsnorkel I know what you mean: GPUs are designed for polygons and engines have very specific mechanisms for them, but I don't think it would take too much to modify existing software to use the GPU efficiently for this technology. Both use techniques the hardware is capable of, so with investment I don't think it would take Unity or Unreal much more time to integrate this tech into their engines than poly-based rendering pipelines took. Since it uses a scattering-field type of rendering, it shouldn't be much different

    • @catsnorkel
      @catsnorkel 9 months ago +2

      @@Eddygeek18 Thing is, this technique does not support dynamic lighting, and isn't even built in a way that could be modified to support it. Same with animation, surfacing, interactivity, etc. It is a really cool idea to render directly from point cloud data like this, skipping most of the render pipeline; however, the parts that are skipped are **where the game happens**

    • @Eddygeek18
      @Eddygeek18 9 months ago +1

      @@catsnorkel I work in 2D game development, so this is a bit out of my depth, but if I know game developers they will 100% find a way to make this tech work. I have no idea about the maths, but new tech like this makes my mind wander to ideas of how to make it work. Rigging points to a skeletal mesh isn't much different from rigging vertices; that's already the beginning of animation. Storing a normal vector and magnitude (sharpness) in the key points is the start of lighting. Like I said, I'm a 2D dev, not 3D, but I don't think it will be long till we see full-fledged games with this technology

  • @striangle
    @striangle 9 months ago

    absolutely amazing technology! super excited to see where the future takes us. thanks for sharing! ..side question - what is the music track on this video?

  • @Dartheomus
    @Dartheomus 9 months ago

    My mom walked in the room and asked what the hell I was doing. I told her to just relax. I'm gaussian splatting.

  • @bradleypout1820
    @bradleypout1820 9 months ago

    Good video, got a new sub!

  • @ralfbierig
    @ralfbierig 9 months ago

    Interesting and seriously promising for VR applications!

    • @0ooTheMAXXoo0
      @0ooTheMAXXoo0 9 months ago

      Yes, apparently used in Dreams on PS4 and PSVR.

  • @endrevarga5111
    @endrevarga5111 8 months ago

    Idea!
    1. Make a low-poly 3D scene in Blender. It's a 3D skeleton. Use colors as object IDs.
    2. Using a fast real-time OpenGL engine, quick-render a few hundred images, placing the camera at different locations as if photographing a real scene for 3DGS creation. Distributing the cameras should be easy using Geometry Nodes. (A rough sketch of this step is shown below.)
    3. Using these images, use Runway ML or ControlNet etc. to re-skin them according to a prompt. If possible, use one reference image to ensure consistency.
    4. Feed the re-skinned images to the 3DGS creation process to create a 3DGS scene.
    Et voilà, an AI-generated 3D virtual reality is converted to 3DGS.
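
    (A rough sketch of step 2, assuming Blender's Python API (bpy) instead of a separate OpenGL engine: place cameras on a circle around the scene and render one still per viewpoint. The count, radius, height, and output path are made-up placeholders.)

    ```python
    import math
    import bpy

    scene = bpy.context.scene
    N = 100                                   # number of viewpoints
    for i in range(N):
        angle = 2 * math.pi * i / N
        # Camera on a circle of radius 8, slightly above the scene,
        # rotated to aim roughly at the origin.
        bpy.ops.object.camera_add(
            location=(8 * math.cos(angle), 8 * math.sin(angle), 2.0),
            rotation=(math.radians(80), 0.0, angle + math.pi / 2))
        scene.camera = bpy.context.object     # the camera just added
        scene.render.filepath = f"//renders/view_{i:03d}.png"
        bpy.ops.render.render(write_still=True)
    ```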

  • @GeekyGami
    @GeekyGami 9 months ago +1

    This point cloud technology is much older than 2020.
    It has been tried on and off for a decade at this point.

  • @IndyStry
    @IndyStry 9 months ago +1

    This is awesome! Is there a way to export this to an estimated actual polygonal model to use in 3D software?

  • @Datdus92
    @Datdus92 9 months ago

    You could walk in your memories with VR!

  • @afti03
    @afti03 8 months ago

    Fascinating! Could you make a video on what the most relevant use cases for this type of technology would be?

  • @joelmulder
    @joelmulder 9 months ago

    Once video games and 3D software rendering engines start to use this… Oh boy, that’s gonna be something else

  • @tonygardner4077
    @tonygardner4077 9 months ago

    liked and subscribed ... hi from New Zealand

  • @liliangimenez4461
    @liliangimenez4461 9 months ago +1

    How big are the files used to render the scene? Could this be used as a light field video format?

  • @SerigioKun
    @SerigioKun 9 months ago

    It's impressive how these techniques are advancing. In the future, being able to apply this technology to solving crimes or accidents, with just a video of the event, is going to be great.

  • @MaxSMoke777
    @MaxSMoke777 9 months ago +3

    It's a cute way to make use of point clouds. I'm certain it'll be handy for MRIs and CT scans, but it's nowhere near as useful as an actual 3D model. You couldn't use it for video game models or 3D printing. It could be extremely useful for real-time point-cloud video conferencing, though, since it's so fast.

    • @catsnorkel
      @catsnorkel 9 months ago

      agreed. it will probably find a few niche use cases for certain effects that are layered on top of a traditional poly-based render pipeline, but it's not going to completely take over, probably ever. This is a technology developed for visualisation, and not really suitable for games or film.

  • @icegiant1000
    @icegiant1000 9 months ago

    How long before micro drones are just buzzing up and down our bike paths, sidewalks, streets and so on, grabbing HQ images, beaming them to the cloud, and by the end of the day you can do a virtual walkthrough of the local fair, or the car dealership, or a garage sale on the other side of town, or the crowd at a football game. Only thing stopping us is CPU power and storage, and that is getting solved fast. Exciting times! P.S.- How long before people stay home, and just send out their micro drones, and view everything in VR at home. A lot safer than getting mugged.

  • @DavidKohout
    @DavidKohout 9 months ago

    This really makes me feel like living in the future.

    • @DavidKohout
      @DavidKohout 9 months ago

      This just confirms that we're living in the best times, from the start of the phone technology to this.

  • @f1pitpass
    @f1pitpass 9 months ago

    great to see

  • @metatechnocrat
    @metatechnocrat 9 months ago

    Well one thing it'll be useful for is helping me examine images for clues to hunt down replicants.

  • @foxy2348
    @foxy2348 9 months ago

    Amazing. How is this rendered? In what program?

  • @Felenari
    @Felenari 9 months ago

    Good watch. Subscribe earned. Haddock is one of my faves.

  • @costiqueR
    @costiqueR 9 months ago

    I tell you this: it's a game changer for the industry...

    • @catsnorkel
      @catsnorkel 9 months ago

      depends on the industry though. Archvis yes, absolutely.
      Games and film, it will only really have a minor impact since it isn't really geared towards those use cases.

  • @MarinusMakesStuff
    @MarinusMakesStuff 9 months ago +2

    Awesome!!! Though, for me, all that matters is getting a correct mesh and I couldn't care less about textures personally. I hope the mesh generation will soon also make leaps like this :)

    • @joonglegamer9898
      @joonglegamer9898 9 months ago

      Yeah, you're spot on. This is not new; there might be new elements to it, which is great, but I won't bat an eye until they come up with a perfect, easy-to-seam, seamless UV-mapping model. We still have to make our models animatable, relying on low poly to get the most out of the CPU/GPU power in any setup. So yeah, until then we can keep dreaming; it hasn't happened in 40+ years.

  • @mankit.mp4
    @mankit.mp4 8 months ago

    Hi Olli, great video and thanks for the intro to such fascinating tech. What's your opinion on whether an Insta360 or a full-frame camera with a fisheye lens will provide a better result or workflow?

    • @OlliHuttunen78
      @OlliHuttunen78  8 months ago +1

      Well, in the process where COLMAP is used to generate the point cloud, it doesn't like any kind of fisheye lens or round distortion in the images. The best way to train the model is to use source images from which all distortion has been removed. I'm not sure how Luma AI's new Interactive Scenes handle the material; it seems they can take all sorts of wide-angle video or 360 footage. I recommend trying it: lumalabs.ai/interactive-scenes
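
      (A minimal sketch of that undistortion step, assuming OpenCV; the camera matrix K and the distortion coefficients are made-up placeholders that you would normally get from calibration, or from COLMAP itself.)

      ```python
      import cv2
      import numpy as np

      # Hypothetical intrinsics for a 1920x1080 camera: fx, fy, cx, cy.
      K = np.array([[1000.0,    0.0, 960.0],
                    [   0.0, 1000.0, 540.0],
                    [   0.0,    0.0,   1.0]])
      dist = np.array([-0.30, 0.12, 0.0, 0.0])   # k1, k2, p1, p2 (made up)

      img = cv2.imread("frame_0001.png")         # one extracted video frame
      undistorted = cv2.undistort(img, K, dist)  # remove lens distortion
      cv2.imwrite("frame_0001_undistorted.png", undistorted)
      ```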

  • @NecroViolator
    @NecroViolator 9 months ago +1

    I remember an Australian company making infinite graphics with something similar. They made games and other stuff.
    Can't remember the name, but it was many years ago. :(

  • @wolfzert
    @wolfzert 9 months ago

    Woow, how great, one more reason to keep moving forward with AI

  • @angelavolkov1126
    @angelavolkov1126 9 months ago

    Very cool.

  • @abhi4154
    @abhi4154 9 months ago

    very nice video

  • @helper_bot
    @helper_bot 9 months ago

    exciting news!

  • @imsethtwo
    @imsethtwo 9 months ago

    A solution to the floating artifacts would be to just make procedural volumetric fog and use it to your advantage 😎

  • @aksi221
    @aksi221 9 months ago +2

    Wow, this is awesome!
    Is it possible to get 3D objects out of this? And if so, how good are the results?

    • @santitabnavascues8673
      @santitabnavascues8673 9 months ago +2

      I would say the 3D models you'd get out of this wouldn't be much better than those obtained with other 3D scanning methods.

    • @catsnorkel
      @catsnorkel 9 months ago

      This technology isn't really intended for that type of use. It isn't something that is designed to fit into a film or game pipeline with poly models. You might see certain effects here and there using it, but generally it is going to be used more for visualisation.

  • @EveBatStudios
    @EveBatStudios 8 months ago

    I really hope this gets picked up and adopted quickly by companies that are training 3D generation on NeRFs. The biggest issue I'm seeing is resolution. I imagine this is what they were talking about coming in the next update of Imagine 3D. Fingers crossed; that would be insane.

  • @triplea657aaa
    @triplea657aaa 9 months ago

    Gauss strikes again!

  • @Lumaa_Lex
    @Lumaa_Lex 9 months ago

    I saw the pond from the Saint Petersburg Botanical Garden! Or was it really a shockingly accurate 3D representation of that pond? =)

  • @MilesBellas
    @MilesBellas 9 months ago +1

    The entire VFX industry is under massive disruptive growth that now prioritizes INDIVIDUALS...
    ...a huge paradigm shift.

  • @ziomalZparafii
    @ziomalZparafii 9 months ago

    Closer and closer to Esper from Blade Runner.

  • @taureanwooley
    @taureanwooley 9 months ago

    Perforated disc layering at one point, with Bézier curve translations and HDR data mining...

  • @GauravSharma-gt2gp
    @GauravSharma-gt2gp 9 months ago +1

    If this is so amazing, then why did 360 videos fail to gain popularity?

  • @louisebali
    @louisebali 9 months ago

    Thx...

  • @yurygaltykhin6271
    @yurygaltykhin6271 9 months ago

    I am pretty sure that this (or a similar) tech, in conjunction with the development of neural engines, will finally lead to the creation of fully immersive, high-definition virtual worlds that are very cheap to produce. This is a mixed blessing to me because, in the near future, it will be impossible to distinguish artificial media from legitimate "real" images and videos. My bet is that soon we will see a legislative trend toward compulsory disclosure of the origin of an image or a video when publishing, first for the mass media and later on for any publications, including social media for the general public. Nevertheless, a few years from now I expect to see new video games that will make the games made with Unreal Engine 5 look as unrealistic as, idk, Doom 2.

  • @thilakkumar008
    @thilakkumar008 9 months ago

    Fantastic! I need your suggestion.

  • @lolmao500
    @lolmao500 9 months ago

    The next gen of graphics cards will apparently all have a neural network chip on them.

  • @manzell
    @manzell 9 months ago

    Nerfies! What a time to be alive!