Hands down the best explainer I’ve found about Gaussian splatting - super clear and understandable without compromising any technical accuracy!
Thank you Ashwin!
I... I think my degree is already outdated
Gosh, everyone feels that way lately no matter the degree. The fundamentals still matter tho so you’re good!
Bro...I started on film....freeze me in carbonite and put me on display in the natural history museum.
I stumbled across this video and I'm blown away.
In a former life I used to do a lot of graphics and old-school 3D work. Could I request a beginner's guide on "how to get started" with the tech you covered?
I discovered the topic today and my mind is blown
Great breakdown man! I definitely have a better understanding now. :)
Is this explanation the one?! 😂
Nice vid bro. I love your editing style
Really awesome video and how you explained it - clear, crisp, in-depth. And thanks for including links to further resources.
This could be used for Google Maps Street View. Imagine instead of clicking arrows to move forward, you just press "w", as if you were walking in a video game
Exactly
Maybe too expensive and big for the whole world
@@hallo_ween07 Starting with a town maybe? Then slowly moving towards a country? Maybe a different type of Google car could be made to use photogrammetry instead
You can use WASD, but you walk in a 360 pic; when you exit you are in space and you can play Doom
i thought the same! We already have all the data to make a 3d digital twin of Earth. All we need is an AI model that can do it for us.
Much needed information, short and crisp, thanks Bilawal
Very Well explained! Now I really understood the spherical harmonic part of this tech. Thank you.
It really is the cherry on top, and love that an OG physics concept helps pull it off!
this is genuinely the best explanation i've heard on gaussian splatting. especially the part i don't think i've heard anyone talk about that is HUGE, which is *spherical harmonics*. actually showing reflections and the sun through the leaves that you're showing here. massive.
this was the best explanation ever! THANKS!!
the editing quality is dope!
Fantastic! This level of detail is candy for Gaussian Splatting devotees
Thanks! I've been wondering if I should go more high level, but I think this might be the way to go
Really great stuff, man -- love the in-depth breakdown done in a conversational manner. Looking forward to seeing your channel's growth!
Amazing video! Really exciting applications for this!
Bravo Bilawal! Well done. Straight to the point why GS is such a promising approach. I especially enjoyed the comparison to the "wonky looking broccoli trees" ;-)
🥦 🙈 you know all about it 😂 great to see you!
Brother your channel is going places. Your content is presented in a very straight forward manner and will get you far. Keep at it. See you at the 1 million Sub mark.
Amazing explanation for those working in 3D. Thanks
My pleasure!
Amazing tech 💜. Bilawal really makes the inner lost 3d kid in me resurface 🚀
Wow, that's truly impressive! Thank you for this video
It explains everything so well. Keep up the good work!
Perfect video, thank you very much. It captures the main ideas of the paper and helps me understand it even more.
Incredible Video Ashwin! Thank you - just subscribed!
Very comprehensive explanation, I actually do research on these reconstruction and view synthesis topics and your intro explanation made it very clear where/how to start thinking about gaussian splats.
If you use enough buzz words and look around the room you can sound like you know what you're talking about. Good job.
Great explanation, love the video quality. Cheers
This is awesome. But what if I want to take the splats into Blender, to insert a 3D statue in the middle of the scene? Is that possible, or do we need some kind of file conversion?
Please make a video about cloud editing tools and point cloud shaders with Gaussian Splats. Thanks!
*takes notes*
The best video on the topic so far, with well-presented insights.
How would you see Gaussian Splats in the construction industry?
Excellent job!
Thank you! I think the photogrammetry tools had largely written off NeRF as a fun research toy. Gaussian splatting is making them pay attention and I could see it quickly become a common artifact alongside other reality capture data products in AEC. Amazing for visualization and progress snapshots!
Would be interesting to see applications dealing with heritage-preservation demands, considering we're typically using point clouds. If we could see GS as a resource for model-driven digital twins and other digital processes for maintenance operations, I think we could cut away hundreds of hours of modelling work.
Thanks. Can you make a video on how this can be used for product marketing, like creating a 3D structure of a product for websites? Can you also confirm whether this can be done using mobile apps?
@@shoaibwaqar9477 it's a great use case, def possible. For mobile check out Luma AI and Polycam
Very clear explanation. Thank you.
But how do Gaussians show spherical harmonics? Is it a mono colour blob that fades away or does it have many colours showing at different angles?
class Gaussian:
    def __init__(self, center, scale, covariance, color):
        self.center = center
        self.scale = scale
        self.covariance = covariance
        self.color = color
# Is this enough? Or is there more to it to attain spherical harmonics?
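Close, but the Gaussians in the 3DGS paper carry a bit more than a single color: each one stores a 3D position, per-axis scales plus a rotation (which together define the covariance), an opacity, and a set of spherical-harmonic coefficients per color channel that make the color view-dependent. A minimal sketch below, using only degree-0 and degree-1 SH (the paper goes up to degree 3, i.e. 16 coefficients per channel); the constants and the +0.5 offset follow the common reference-implementation convention, so treat it as illustrative rather than authoritative:

    import numpy as np

    SH_C0 = 0.28209479177387814   # degree-0 real SH basis constant
    SH_C1 = 0.4886025119029199    # degree-1 real SH basis constant

    class Gaussian:
        def __init__(self, center, scale, rotation, opacity, sh_coeffs):
            self.center = center        # 3D mean of the Gaussian
            self.scale = scale          # per-axis scale of the ellipsoid
            self.rotation = rotation    # quaternion; scale + rotation define the 3D covariance
            self.opacity = opacity      # learned alpha in [0, 1]
            self.sh_coeffs = sh_coeffs  # shape (num_sh, 3): SH coefficients per RGB channel

        def color(self, view_dir):
            """View-dependent RGB from degree-0/1 spherical harmonics (illustrative only)."""
            x, y, z = view_dir / np.linalg.norm(view_dir)
            c = SH_C0 * self.sh_coeffs[0]          # base, view-independent color
            if len(self.sh_coeffs) >= 4:           # add the three degree-1 terms
                c += SH_C1 * (-y * self.sh_coeffs[1]
                              + z * self.sh_coeffs[2]
                              - x * self.sh_coeffs[3])
            return np.clip(c + 0.5, 0.0, 1.0)

So yes: the higher-degree coefficients are what let a splat change color with viewing angle, which is how reflections and sun-through-leaves effects show up.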
Extremely fast-paced, but packed with mind-boggling information.
This is a great explanation of Gaussian splatting!
Thanks for impressive explanation!!
Gosh this is insane progress
How do you use an FPV control to move around? That looks really great
The viewer on the Inria 3DGS GitHub repo - Unreal Engine itself also works for such FPV controls!
This is a very interesting and educational video. My question is: what software can you use for the point cloud, and which software is the best?
For the initial sparse point cloud / posing of images - COLMAP is fine, but RealityCapture, Agisoft Metashape, etc. would be better. Google also released the CamP code this week, which is also better than COLMAP.
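For anyone who wants to try the COLMAP route, here's a minimal sketch of the standard sparse-reconstruction pipeline that runs before 3DGS training - the paths are placeholders and it assumes the colmap CLI is installed:

    import os
    import subprocess

    # Hypothetical paths; adjust to your own capture folder.
    images, database, sparse = "images", "database.db", "sparse"
    os.makedirs(sparse, exist_ok=True)

    steps = [
        ["colmap", "feature_extractor", "--database_path", database, "--image_path", images],
        ["colmap", "exhaustive_matcher", "--database_path", database],
        ["colmap", "mapper", "--database_path", database, "--image_path", images,
         "--output_path", sparse],
    ]
    for cmd in steps:
        subprocess.run(cmd, check=True)   # camera poses + sparse points end up in ./sparse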
How about CamP? Can you please help me with the link so I can download it on my Dell laptop?
@@ibrahimsalisumadaki678 camp-nerf.github.io
Great work breaking things down Bilawal! Nice job
Means a lot coming from you! Miss ya G ❤️
"Insane, unbelievable, amazing, oh my god"
This is a great explainer - thank you!
Glad you enjoyed it!
Do you know if this is also possible with scans made by the Matterport Pro3?
Maybe. I don't believe the density of those RGB scans is good enough unless you really put the tripod in a bunch of locations. Though I guess Matterport could try to use their depth data in the training process to make up for that sparsity?
Excellent work! 3DGS can be used for any presentation, including those for ordinary consumers, real estate agents for project visualization, doctors, VFX/CGI, artists...
Thanks! Agreed - huge potential for many verticals
I was trying to find the origin and interpretation of the term "Gaussian splatting". Does anyone know why it is called that?
I am a VFX artist in Los Angeles who does digital sets, matte paintings, and environments. I am trying to understand Gaussian splatting from a non-white-paper standpoint. It seems like it takes the capture cameras' metadata... turns it into a point cloud... and when you view it through your camera, it takes the photos or batch of photos that are nearby and basically shows you photo projections from the capture? It's basically viewing a flip book of pictures as you're moving your camera, based on the capture positions? Forgive my layman's understanding. As a matte painter we do the same thing manually... doing a series of projections based on camera. How far off am I?
Thank you, that was clearly explained and very exciting. A very minor quibble is that Gaussian splatting is named for one of the greatest mathematicians of all time. His name is pronounced gawse and the curve he discovered is called the gawsian and is central to the study of probability and statistics. And now, computer graphics!
Thank you, and duly noted on pronunciation!
i am confused, watching the video right now, and the pronunciation sounds very typically american? i think it's good, what's to complain?
the real pronunciation of course is difficult for non-German speakers, listen here en.wikipedia.org/wiki/File:De-carlfriedrichgauss.ogg
@@sirleto are you able to say "house" or "mouse"? You don't need to be German to pronounce Gaussian correctly!
@@sirleto I don't want to make this a big deal. Mathematicians and physicists all pronounce it this way and thought you might care :) @daverayment explains it better. Gauss rhymes with house.
now i just wish to see some kind of style filters. since it uses point clouds, we definitely have depth as a parameter, giving us stuff like edge highlighting or idk, zoom effects. Could probably train a stylized cartoon filter. Since the splats are mostly gradients, maybe flatten the colors to specific styles?
Would be fun to mess with if my PC were able to process all the stuff :/
Def doable. Check this out: twitter.com/johnowhitaker/status/1696336230299185647
What is the best Gaussian splatting software for drone footage? I have the original DJI Mini so I can't plot routes. I'm assuming video to Luma AI would be easiest
What a legend you are mate! Thank you for this great content
I wonder if you can transform a 3D scene with textures into a Gaussian splatting format to make games run better without sacrificing too much visual quality
I'm wondering about the spherical harmonics. I really didn't understand how that part works.
I suggest watching the linked video as it goes into a lot more detail on how the spherical harmonics are implemented
Wow 🤩 - it doesn't get any better!
I think 3D videos will also be a great thing when they are ready?
I just saw a demo of a video that probably had a depth map, so it was a 2D video made 3D. But when you went behind an object, to the side where there was no camera, there was no object there.
I think AI could automatically predict what is behind an object and in that way create 3D maps from just normal videos.
Then you could watch videos and walk around in those videos. I think this will be amazing.
Spot on - I think this is the direction Apple seems to be going: do more sophisticated infilling of a sparse 3D capture
Is it possible to measure inside?
Gaussian splatting is good for view synthesis. For real-world measurements you're better off using classical photogrammetry to densify the point cloud or turn it into a mesh and measure that. Of course you need to make sure your scan is metrically accurate and to scale. iPhone apps that use the AR metadata, e.g. Polycam, do a good job of automating that for you.
I too am wondering how this could be used for capturing 3D models (e.g. as part of a 3D printing workflow)
If you want metric-accurate continuous surfaces, splatting may not be ideal for your use case. It's much more suited to "view synthesis" tasks
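To make the measurement point above concrete, here's a toy sketch (hypothetical file name and placeholder point indices) of measuring a distance on a photogrammetry mesh after scaling it with a known reference length, assuming the trimesh library:

    import numpy as np
    import trimesh  # any mesh library works; trimesh is just an example

    mesh = trimesh.load("scan.obj")                       # hypothetical mesh from photogrammetry
    p_a, p_b = mesh.vertices[100], mesh.vertices[250]     # two picked points (placeholder indices)

    # Put the scan to metric scale using a feature of known real-world size, e.g. a door frame.
    reference_real_m = 0.90                               # measured with a tape in the real world
    reference_scan_units = 1.37                           # same feature measured in the raw scan
    scale = reference_real_m / reference_scan_units

    print(f"distance: {scale * np.linalg.norm(p_b - p_a):.3f} m")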
Great video! One pet peeve: it's pronounced gau-see-uhn
Thanks! That's how I learned it picking up Photoshop as a kid in India lol. Fixed in new video :)
Have you heard about anyone/any tools to convert a poly environment to a GS one and then compare the results?
Counting minutes until a vr game releases with this.
Amazing, thank you!
Will this ever work for video? Could you combine multiple angles of a speaker or actor and create a moving Gaussian splat video? Great video!!
Great way of explaining it
Imagine having a 3D rendering engine that can automatically harness the power of Gaussian splatting for 3D animation, rendering only the needed ray-traced frames to create the splats and turbo-charging the rest of the frames, making it possible to render long animations on a single consumer machine in record time.
Love this. I think we'll get tools like this sooner than we think - 2024 is gonna be a fun one
Should I re-name from MESH IMAGES to GAUSS IMAGES?
Great video, looking forward to the next one!
Ngl I kinda like gauss images way more than “gsplats” or “GaSp” 😂
Gaussian splatting is awesome, but it's not quite there yet. This is great for real objects, but a lot of 3D is fantasy. How does this do with drawings and concept art?
Can't this be done with transparent polygons? What is the size of a Gaussian splat - about 50 bytes? How do Gaussian splats render faster than polygons?
This tech now looks so close to what Braindances look like in Cyberpunk 2077
*@Creative Tech Digest*
0:11 No, I have no idea what any of those words mean, and several other words too. THAT was why I went to this video in the first place, to learn more, but I just get to hear a lot of meaningless words. :(
This is good feedback. I have added a “3D Capture 101” video to my queue! This video def assumes you know the basics of photogrammetry and 3d scanning tech
Great Video👍🏼
Please do a Maven course…😊
Potentially. If enough folks are interested in something dedicated to 3D capture
So...how much will it cost me to get into this game? All I've got right now is a reasonably capable PC, a few VR headsets I would like to view this stuff in, and a bit of free time.
Not much at all. Use your phone to capture and process in the cloud with Luma or Polycam. Drop 'em into Unity and check it out in VR!
subscribed! interesting stuff
Thanks for the subscribe!
Thanks B!
Very cool. How do I photograph a dragon though? Or Aliens?
There’s a ton of interesting research that uses radiance fields as the 3D representation but diffusion models for the generation. Quality isn’t quite where MJ is for still imagery tho!
Spline added Gaussian Splatting today. So it’s real-time for the web.
I would be interested to see if this improves the quality of splatter movies…
Wow such a good video. I don't need to read the paper anymore
Glad you found it helpful!
I have a question that almost certainly reveals my total ignorance of this tech! When rendering 3D GS environments in a game engine, is it conceivable that game mechanics could be added to this, for instance pathfinding, collision, decals, all that traditional 3D stuff?
it's still early days. you could certainly query metadata you attach to each splat. you could also mix and match - make a mesh version using photogrammetry, keep its visibility off but keep it active for physics collisions, pathfinding etc.
This is insane
Great vid, I subscribed. Could you do a vid on dynamic NeRF (DyNeRF)?
Thank you!
Could you render a 3D scene into Gaussian splats for more rapid playback in game engines? I imagine if you could, you could purposefully distribute the points intelligently, creating a better-looking final result.
Interesting. Reminds me a bit of what Google Seurat was trying to do
I've been trying to wrap my brain around wth this is for over a year, and now I'm thinking it's basically physics manipulation over voxels?
Gauss-ian, it's named after Carl Friedrich Gauss
Interesting
Wait a minute, so could this in theory greatly speed up ray tracing? You render just a few points and then interpolate with Gaussian splatting.
My God in heaven. 🤯🤯🤯🤯
Welcome to new UE Plugin: "UEGaussianSplatting: 3D Gaussian Splatting Rendering Feature For UE"
If you put this with Garry's Mod VR, you've essentially made a working Matrix prototype; then it'd be a matter of what can be scanned.
nice
But, can it model the real physics, or is it a fancy illusion?
can we gaussian splat the entire planet?
Fluid simulation should be faster with Gaussian splatting.
Great video! Btw, I’m 99.99% sure it’s pronounced gow-see-uhn.
It's math.
It was always math.
Except chemistry. Idk how that works.
Because if all compounds wanna turn into noble gases, then why is LiN not on PubChem?
And why doesn't nitrogen bond with 5 molecules???
Watching these point clouds with football-sized points, imagine everything around us in real life made from atoms, and every single cell holding 2 meters of atomic-level DNA. That's where we are: at an embryonic state of science and compute compared to the complexity of real life...
Deep!
but it doesn't have any geometry....
it's a radiance field so that's a feature not a bug - instead of rasterizing triangles as you would with a triangulated 3D mesh, for example, you instead rasterize these Gaussian splats. you can still use a mesh under the hood if you want for collisions etc.
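In case the "rasterize splats instead of triangles" bit is hard to picture, here's a toy single-pixel version of the front-to-back alpha compositing the real tile-based CUDA rasterizer does; depth sorting and the 3D-to-2D projection are assumed to have happened already, and all names here are illustrative:

    import numpy as np

    def composite_pixel(pixel_xy, means_2d, covs_2d, opacities, colors):
        """Blend depth-sorted 2D Gaussians at one pixel, front to back (illustrative sketch)."""
        out = np.zeros(3)
        transmittance = 1.0                                      # light not yet absorbed
        for mu, cov, opacity, rgb in zip(means_2d, covs_2d, opacities, colors):
            d = np.asarray(pixel_xy, dtype=float) - mu
            falloff = np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)  # 2D Gaussian at this pixel
            alpha = min(0.99, opacity * falloff)
            if alpha < 1.0 / 255.0:
                continue                                         # too faint to matter
            out += transmittance * alpha * np.asarray(rgb, dtype=float)
            transmittance *= 1.0 - alpha
            if transmittance < 1e-4:
                break                                            # pixel is effectively opaque
        return out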
0:17 reaons
Thought it was mfing Kanye West on the thumbnail💀
😂 never gotten that before but I’ll take it
this is only good for scanned environments, or an existing physical world or object. a perfect example is google maps. but for games, films, and AR worlds, where the world doesn't exist and has to be constructed, it doesn't work, because you either have to build the 3D model environment for high-quality interactions, or let AI do experimental 3D environments. But it is a step forward for the point cloud technique, which has been here for a long time. it's the fastest way to visualize since triangles/polys are not created, just vertices. But this time it looks like they're blending different data and adding spherical harmonics for better non-static lighting effects. Still, if you cannot control the materials by shading, or future AI material/shading, it's limited. This tech still lacks the very dynamic lighting and shading effects that are readily available in any 3D software or rendering engine.
I didn't understand.
Can somebody explain this for people from a non-3D background??????