NeRF vs 3D Gaussian Splatting is gonna be an interesting research-topic 1v1 for the next year lol
and to try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/bycloud . The first 200 of you will get 20% off Brilliant's annual premium subscription!
IMO Gaussian splatting wins by default, simply because there is a volume for the ray to bounce off. Easier to fix that for interactive models in real settings.
It's weird how I understand absolutely nothing of every video you make but still click on them and watch them till the end each time
I only have a surface level understanding of what's being said because I do CS... that's my saving grace 😂😂
😂 just study more men. Just read more ChatGPT papers
@@carkawalakhatulistiwa Those papers are not published in journals.
@@carkawalakhatulistiwa study more men? I'm not a doctor
bycloudAI really disappoints me like this. He spends enough time doing research that, if he spent less time throwing a jargon salad at you, you might actually be able to learn something; instead he wastes your time too. At least he gets clicks out of it.
I get the impression (and he has hinted a couple of times) that even he doesn't understand everything he says. There are schools in both East and West that merely require you to regurgitate, i.e. copy/paste your "research."
Sadly, it's a missed opportunity.
Crazy how they publish so fast.
I took 2 years developing a motion magnification method and writing it up to submit to a journal, and it has been 1 year since I submitted. Not published yet.
It will clearly be outdated by the time it is published.
Can't wait for 4D AI
3rd Dimension + time (animation)
or the 4th dimension?
There are already AI motion models, which means temporal information, i.e. time, the fourth dimension.
On another note, machine learning itself is hyperdimensional, so it already functions in way more than 4 dimensions.
Literally the best explanation of what 3D Gaussian Splatting is that I have seen so far. And I was really into getting it and watched basically every video I could find.
Thanks for that, bro. Subscribed for that.
Would be great if you could make a video on AI tools used locally. Like, what are the limitations? The advantages? How have they evolved over time? The system requirements? I'm pretty sure I don't have 24 GB of VRAM >.>"
So much stuff going on and it moves fast.
I love your channel, dude. It's geeky as hell and entertaining as hell, which forces me to go out and learn about the terms you speak of and for that, I am super grateful.
I know it's sort of meant as a joke, but to say that Meta spent all that money on Horizon Worlds is incorrect. Most of their spending has been on their Reality Labs teams, which produce AI and graphics research projects.
Otherwise, great video as always
Not a joke at all. Per Wikipedia:
Formerly: Facebook Reality Labs (2020-2021)
Founded: 2020
Products: Meta Quest 3, Quest 2, Meta Quest Pro, Ray-Ban Stories, Horizon Worlds
Revenue: US$2.16 billion (2022, decreasing)
Operating income: US$-13.72 billion (2022, decreasing)
@@DJVARAO In your reply right there, it says the same thing: *Meta Reality Labs*. They don't just work on Horizon Worlds. Meta has released detailed spending breakdowns showing those billions are spent across VR and AR hardware development and research, AI research, and more. Most of the money was being dumped into AR hardware research. In the end, less than 10% was being spent on Horizon Worlds/the Metaverse.
@@ninjatogo Sure, but still, spending 13 billion seems unjustified.
@@DJVARAO That's a different claim from "they overspend money on the metaverse."
@@r.m8146 Is it?
I feel like you could make an entire rendering/game engine out of this.
or you can't do that?
@@shadowskullG or you can't do that?
There are no real modeling tools for the method right now.
The other issue is physics. But one day, with multiple papers combined, maybe.
@@nodelayfordays8083 You should check out **Generative Image Dynamics**; it can simulate an object's motion in response to a user's interaction with a still image, aka *Interactive Dynamics*, although with very limited physics at the moment.
Would love to follow your windows installation guide but sadly I only have 8GB VRAM.
Been on the splatting hype train lately. I don't understand it enough to know how this can be used for video or with far fewer input images... maybe I'll just give it a couple of weeks for a couple of new papers... oh well, before I even finish typing... dynamic splats!
I'm looking forward to seeing benchmarks comparing Gaussian splatting to classical shading in terms of performance and quality. I wonder what kind of games will be possible with the technique.
Plot twist, Chat-GPT is also just using humans to chat with you.
That plot twist thing needs to go, especially with all of the dumb AI jokes.
3DGS is so intuitive when you think about it
Misleading thumbnail; GS is not "AI"-related, this is a classic algorithm.
AI is divided in two: modern and classical. Modern is based on simulating biological structures (neural networks, for example), but classical is based on logic, probability, and pure math.
@@rjameslower So modern 'AI' just isn't AI then? That's literally just algorithms/equations.
Mom comes out: I said the tree in the backyard needed pruned!
hey i had no idea about anything in this video, i can make a wooden plank in Blender though, thank you for the valuable channel you have built
YOOO NEW VIDEO CAME OUT
Great video, the progress in AI is just crazy.
Quick callout tho, I would personally not recommend Brilliant for ML. I preferred Andrew Ng's course on Coursera by far.
There should be 3D Gaussian Splatting acceleration hardware in next-gen GPUs.
3D Gaussian Splatting is great, but it introduces a lot of artifacts, and it is not a mesh; we are still talking about a point cloud... or should I say a Gaussian cloud? Anyway, until we can use some really good AI to transform it into a really high-quality 3D mesh, it won't be that useful in games, etc.
I would consider using these strange artifacts for artistic purposes tbh.
The issue is that a good portion of real-life 3D models are made by artists using a varying number of references that are not chronological, nor are they from the same source (camera, that is) most of the time, if that makes sense. Meanwhile, all of these automated techniques require a drone or a cameraman recording a video while orbiting the object(s) of interest.
Can someone please explain how the colors are stored in the spherical harmonics? If I understood correctly, the original system uses 16 float vec3s (48 float values in total) that represent the coefficients of the spherical harmonics. So the red channel is encoded as an SH by the first entry of each vec3, the green channel by the second value, and the blue channel by the third value of the vec3(?)
Is that correct, or am I completely wrong?
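That matches my understanding of the reference implementation's layout: one degree-3 SH series per color channel, so 16 basis coefficients x 3 channels = 48 floats, with each of the 16 vec3s holding the (R, G, B) coefficient for one basis function. A minimal sketch of how a view-dependent color could be decoded from that layout, truncated to SH degree 1 for brevity (the function name and array shapes here are my own; the constants and the 0.5 offset follow the standard real SH basis as used in the reference renderer, as far as I can tell):

```python
import numpy as np

# Real spherical-harmonics basis constants for degrees 0 and 1.
SH_C0 = 0.28209479177387814   # 1 / (2 * sqrt(pi))
SH_C1 = 0.4886025119029199    # sqrt(3) / (2 * sqrt(pi))

def sh_to_rgb(coeffs, view_dir):
    """Decode view-dependent RGB from SH coefficients.

    coeffs:   (16, 3) array; row k is the RGB coefficient triple for SH
              basis function k (column 0 = red, 1 = green, 2 = blue).
    view_dir: unit direction from the Gaussian toward the camera.
    Only degrees 0-1 are evaluated here; the full model goes to degree 3.
    """
    x, y, z = view_dir
    color = SH_C0 * coeffs[0]              # degree 0: base (view-independent) color
    color = color - SH_C1 * y * coeffs[1]  # degree 1: linear view-dependent terms
    color = color + SH_C1 * z * coeffs[2]
    color = color - SH_C1 * x * coeffs[3]
    return np.clip(color + 0.5, 0.0, 1.0)  # shift from zero-centered to [0, 1]
```

With only the degree-0 ("DC") row set, the color is the same from every direction; the higher-order rows are what add view-dependent shading like specular-looking shifts.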
Could a splat be used as a vertex for movable 4D meshes?
"you need rtx 3090 and 24gb VRAM"
Me with my 4gb gtx1050ti:
Yeah I can take it
Dude! This is wild! 😬
Thanks for the vid 🌻👍
For it to be the future, we are going to need some real ways to use them. Currently no major software even supports these files; only UE5, with a plugin that costs over $100.
Jesus dude be patient
@@jerbear7952 i wanT it now!
🤣
It takes longer to train than NeRF but is faster to render after it's done?
Would a 3080ti be good enough for local install?
Hey, how are you? I'm a VFX artist and wanted to ask: do you know what they used to make those crazy effects you show at 4:30? Is it 3D Gaussian splatting and then Blender? I'm really new to it.
Any way to do this on a 3080 Ti? 16GB VRAM?
Brilliant is starting to feel like a competitor to schools. Also 3DGS is awesome! 🙂
If you really believe that an online service rivals personal tutoring from a human being, I don't know what to tell you.
Brilliant is nice, but it's not "competing with schools".
@@Forcoy Absolutely nothing I use in daily life came from school (except reading, writing, and basic math). I learned everything from jobs and the Internet. School wastes the best years of a brain's development. Bullying and peer pressure add trauma. I'm guessing you were a bully in school, and you think you were just having fun.
@@jonmichaelgalindo Buddy, I have head trauma and have moved schools twice as a complete outsider to the rest of the class. Even someone with brain damage can see that school is an obviously useful tool; its current form is obviously not great, but that doesn't undermine the entire concept.
@@Forcoy I'm like 99% sure now that you're a bully... They all seem to have brain damage for sure.
Could you do a 3D Gaussian splatting of a person and rig it like in Blender?
No, it’s still just a fancy point cloud like NeRF. At best, you could use it as a reference for such a model.
Can 3D Gaussian Splatting work fine for exploring scenes on an 8GB RAM laptop that has no GPU?
Weird how there's no comparison to MobileNeRF, which seems a lot faster with lower system requirements.
what about dynamic light...
So has anyone done... movement with this? Cause everything is static renders so far
Cause that's going to be the big differential factor. Might be great at static 3D rendering, but if it can't keep up with movement and dynamic objects, then what's the point?
so 3d model for modding games yet?
Awesome
Incredible
awesome
Dang, I wish Konami had waited for this technology to revive Silent Hill before sending the team into the fog XD
I was hoping you'd explain to people the most important distinction between 3DGS and NeRF is that Gaussian Splatting doesn't produce a traditional 3D mesh that can be exported, edited and put into other software/renders.
Neither does NeRF; neither uses or creates a mesh. A mesh can be extracted from both using traditional photogrammetry methods.
3D Gaussian is already dead. 4D Gaussian Splatting is here. Paper released last week
I would've named this video "THE FUTURE OF 3D AI?" or something more clickbaity like that, while at the same time not being clickbait.
You’re everything wrong with this platform.
What about 4D?
DALL-E 3 is new, why are you saying it was revived?
They're called spokes
Nice
I'm sad that I can't run most AI w/ my 3050 but it's ok :,(
damn i really wish I was smart enough to understand this
4d Gaussian splatting
we already have the metaverse, its called fortnite
No easy way. I want the pain and suffering of installing NeRFs on the cloud LOL hahahaha. Like installing Gentoo Linux.
Took me three hours to get splatting working.
@@rw7717 Now try doing that but on a k8s cluster using kubeflow =P
ah most welcome news, another step closer to manifesting my waifu into existence
You can do this with less VRAM, guys..... all I'm going to say.
I'm scared I understand 80% of the word spaghetti
Reality is going to be confusing in the future
mark vr is for rich people vr
I can't wait for AI to be able to create near-perfect 3D assets based off of Gaussian splats.
Admit it. You made 90% of those words up.
Nearly a quarter of this video was an advertisement for something every single one of your viewers without exception is already fully aware of.
I mean, I've heard of Brilliant, but I hadn't heard of any of the other stuff.
sooo early!
Insisting on issuing virtual reality applications to earn money. I think they should pay more attention to environmental problems instead of wasting young people’s minds on trivial matters.
It's the most stupid claim, to write that a model can see, hear, and speak.
investors and customers are kinda stupid
Oh, the bullshit about a model that can see, hear, and speak came from OpenAI. No wonder everybody is repeating it like a parrot.