At some point he's gonna make enough tutorials to make that grass scene into an actual game.
Fr I hope he does. Tbh guy's a legend 🙏
Kinda reminds me of sekiro
I appreciated the tangent joke. Well done. Very entertaining and engaging presentation.
Thank you!
One thing you missed is the fact that the Elden Ring stars disappear when they're too close to the player.
As such, I didn't like that one star really close to the camera when you were explaining the effect, but at the grass scene, it's perfect.
Please do not stop the performance tips, we need those
Sebastian Lague but make it chaotic. Love your explanations and editing!
Sebastian is my #1 inspiration, thanks for the kind words
I think in this case (GPU downsampling vs gaussian blur) it would make more sense to compare the frametimes instead of the percent change in frames per second. The gaussian example at 7:12 takes 2ms per frame, which is only a 64% loss if you're already getting 800 frames per second. If it was added to a complete game that was getting 60fps, an additional 2ms per frame would only drop it to 54fps or a 10% loss. (Still not great, but not quite as drastic)
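The arithmetic above can be checked directly with a minimal sketch (the 800 fps and 2 ms figures are taken from the comment):

```python
def fps_after_added_cost(base_fps: float, added_ms: float) -> float:
    """Convert fps to frametime, add a fixed per-frame cost, convert back."""
    base_ms = 1000.0 / base_fps           # frametime at the base framerate
    return 1000.0 / (base_ms + added_ms)  # new framerate with the extra cost

# The same fixed 2 ms costs a very different share of the frame budget
# depending on the baseline framerate:
print(round(fps_after_added_cost(800, 2.0)))  # ~308 fps (a 60%+ fps drop)
print(round(fps_after_added_cost(60, 2.0)))   # ~54 fps (about a 10% drop)
```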
i wonder if gpu downsampling has a name. i mean, what he showed is definitely a way to blur an image, i just don't know its name
@@vadiks20032 Probably bilinear filtering? I imagine it does a simple integer downscaling on the GPU, then scales it back to output resolution with bilinear filtering
@@hellomistershifty oh right, photoshop had that thing where it rescales bilinearly, trilinearly, or based on pixels around it or smth like that
Watched these out of order. So THAT’s where the stars came from!
Good art that doesn't kill performance makes my brain happy. That blur trick is genius
A bloom technique I worked on years ago was a combination of a separated gaussian blur with the downscaling technique you mentioned here. Just downscaling is prone to bringing out the ugliness of the bilinear sampling, and just gaussian requires a large kernel size to get acceptable results, but a small gaussian kernel mixed with downscaling gives you the best of both worlds!
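A rough sketch of the combination described above, assuming a 2x box downscale and a tiny separable 1-2-1 gaussian kernel; the resolutions and level count are arbitrary example values:

```python
import numpy as np

def downsample2x(img: np.ndarray) -> np.ndarray:
    """Average 2x2 blocks -- the cheap GPU-style downscale."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def small_blur(img: np.ndarray) -> np.ndarray:
    """Tiny separable 1-2-1 gaussian: a horizontal pass then a vertical pass."""
    k = np.array([0.25, 0.5, 0.25])
    f = lambda row: np.convolve(row, k, mode="same")
    return np.apply_along_axis(f, 0, np.apply_along_axis(f, 1, img))

def cheap_bloom_blur(img: np.ndarray, levels: int = 2) -> np.ndarray:
    """Downscale a few times, then run the small kernel at low resolution,
    where its effective radius is multiplied by the downscale factor."""
    for _ in range(levels):
        img = downsample2x(img)
    return small_blur(img)

blurred = cheap_bloom_blur(np.random.default_rng(0).random((64, 64)))
# 16x16 result; upsampling it with bilinear filtering would smooth the
# blockiness that plain downscaling alone tends to leave behind
```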
Loving your stuff please keep pumping these videos.
it's beautiful. slightly terrifying out in the open; the sky looks like the moon exploded. probably why Elden Ring used this effect in the first place.
may give this a try myself.
Pro tip: rotation matrices with trig functions suck ass. I have sworn off rotation matrices with trig functions, and my new best friend is calculating rotation using reflections.
Any rotation can be expressed by two reflections, and for cases like this, where a flipped model won't be an issue, just 1 reflection will work. The equation for reflecting a vector (vertex) 𝑣 across unit vector 𝑎 is just:
𝑣 - 2(𝑣⋅𝑎)𝑎
And if 𝑎 is not a unit vector, just divide by 𝑎²:
𝑣 - 2(𝑣⋅𝑎)𝑎/𝑎²
I got this revelation from Marc ten Bosch's video on rotors from geometric algebra, which I would highly recommend.
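The reflection formula above (a Householder reflection) can be sketched in a few lines; the example vectors are arbitrary:

```python
import numpy as np

def reflect(v: np.ndarray, a: np.ndarray) -> np.ndarray:
    """Reflect vector v across the plane whose normal is a.
    Matches v - 2(v.a)a / a^2 from the comment; works for non-unit a."""
    return v - 2.0 * (v @ a) * a / (a @ a)

# Reflecting across the x-axis normal flips only the x component:
v = np.array([1.0, 2.0, 3.0])
a = np.array([1.0, 0.0, 0.0])
print(reflect(v, a))  # [-1.  2.  3.]

# Two reflections compose into a proper rotation (length is preserved):
b = np.array([1.0, 1.0, 0.0])  # second reflection axis, deliberately non-unit
rotated = reflect(reflect(v, a), b)
```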
You could also generate a random unit quaternion to get a random rotation, and unit quaternion to matrix is just a bunch of + - *, but this reflection trick sounds even faster, thanks!
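A minimal sketch of that idea: a uniformly random unit quaternion via a normalized 4D gaussian sample, and the standard quaternion-to-matrix conversion, which indeed uses only adds and multiplies:

```python
import numpy as np

def random_unit_quaternion(rng: np.random.Generator) -> np.ndarray:
    """Uniformly random unit quaternion (normalize a 4D gaussian sample)."""
    q = rng.normal(size=4)
    return q / np.linalg.norm(q)

def quat_to_matrix(q: np.ndarray) -> np.ndarray:
    """Unit quaternion (w, x, y, z) to a 3x3 rotation matrix; adds/muls only."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

R = quat_to_matrix(random_unit_quaternion(np.random.default_rng(0)))
# A proper rotation: orthonormal with determinant +1
print(np.isclose(np.linalg.det(R), 1.0))  # True
```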
I work on video and vfx, so coding shaders is only adjacent to my field, but still your youtubes are really interesting, and helped me understand how some of my most used effects work.
Thanks for sharing.
Just discovered your videos. This is so fascinating and in-depth!
love this channel
The Yard (Slimecide) reference was perfect 🪦
I have no idea how to code but I love watching your videos + you used Guts theme so I gotta sub
Got both questions wrong in Will it Scale but jokes on you I was already subbed
7:55 this gpu is working harder than a single mother on new year's eve
your videos are so good, I hope more devs find you
Yeah that ish is looking good. Pan a lil bit of perlin noise across the emissive shader to give the stars a teeny bit of twinkle? 🤌
Great video, dawgie. Keep it up.
another acerola banger
I like the style and pacing of your videos. Keep up the good work
Your videos are great!
it looks nice, i love how chill the vibe is getting with the grass scene.
it might be worth adding another gamma correction over the stars after the bloom to lighten the bloom effect. Right now it is so strong it is lifting the black level and appearing washed out.
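A minimal sketch of the kind of adjustment suggested here, assuming the bloom contribution is normalized to [0, 1] before compositing (the gamma value is an arbitrary example); a gamma above 1 pushes the dim tail of the halo back toward black without touching the bright cores much:

```python
import numpy as np

def gamma_adjust(bloom: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Apply a power curve to the bloom contribution (values in [0, 1]).
    gamma > 1 crushes the dim halo that lifts the black level."""
    return np.clip(bloom, 0.0, 1.0) ** gamma

halo = np.array([0.05, 0.2, 0.9])  # example bloom intensities
print(gamma_adjust(halo))          # dim values crushed, bright ones kept
```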
174 triggy boys
best remake ever, thanks :)
would give anything to be lying in that field of grass rn
I would be lying if I said I didn't want to be there too
i was watching hasanabi right before this video lmao tripped me out
Beautiful.
Are you ever going to go over how to make your grass interactable? that can be a big issue when you have so much grass, and is very important for immersion
Yeah! Once I get around to modeling the protagonist of my personal game project I'll go over tons of player interaction stuff like footprints and grass interaction.
It's going to be a while though unfortunately, since I'm really busy with work and school at the same time right now.
Split your world into chunks; only the 4 chunks nearest to the player need to be actively updated w.r.t. player interaction. Each chunk has a low-resolution texture. Per frame, transform the player pos into UV coords of the chunk, then just draw a white pixel onto the texture. You can then pass the chunk's texture into the vertex shader for the grass and sample it (white = flatten the grass, i.e. scale Y by ~.01 or so; black = leave the grass as normal). Then you have grass deformation after the player has walked on it.
This needs a slightly modified approach if you use a quad tree for variable chunk sizes (a kind of hack to do fast culling + LOD grass density), so you'll need to scale the texture res down as the chunk size goes down.
I've been working to add this as a feature to the Terrain class in my game engine.
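The steps above can be sketched as follows; the chunk size and texture resolution are arbitrary example values, and numpy stands in for an actual render-target write:

```python
import numpy as np

CHUNK_SIZE = 32.0  # world units per chunk (example value)
TEX_RES = 64       # low-res deformation texture per chunk (example value)

def stamp_player(tex: np.ndarray, chunk_origin, player_pos) -> None:
    """Transform the player's world pos into the chunk's UV space and draw
    a white pixel; the grass vertex shader would sample this to flatten."""
    u = (player_pos[0] - chunk_origin[0]) / CHUNK_SIZE
    v = (player_pos[1] - chunk_origin[1]) / CHUNK_SIZE
    if 0.0 <= u < 1.0 and 0.0 <= v < 1.0:       # player is inside this chunk
        px, py = int(u * TEX_RES), int(v * TEX_RES)
        tex[py, px] = 1.0                        # white = flatten grass here

tex = np.zeros((TEX_RES, TEX_RES))
stamp_player(tex, chunk_origin=(0.0, 0.0), player_pos=(16.0, 8.0))
# In the vertex shader, something like: scale_y = lerp(1.0, 0.01, sample)
```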
I think if the stars were just a bit smaller in that preview with the grass, it'd look incredible
Nice one! Although the stars in Elden Ring aren't distributed evenly but form clusters, which looks much more credible.
you can also do a gaussian blur to the most-downscaled image to make it cheaper, and in two passes, once just horizontally and once just vertically. gaussian blurs are "separable" in this way and it changes the expense from n^2 per pixel to just 2n, and doing it on the most-downscaled image makes it even cheaper on top of that
gaussian blurs are the same as an infinite number of infinitely small random blurs applied on top of each other, so gaussian blurring the downscaled image doesn't hurt anything
also to be physically accurate bloom should happen at a step in the post-processing/presentation chain where colors are represented in linear/physical light (or you could convert back and forth as needed)
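The separability claim above is easy to verify numerically: blurring an impulse with a horizontal pass then a vertical pass reproduces the full 2D (outer-product) gaussian kernel in one shot:

```python
import numpy as np

def gaussian_kernel(n: int, sigma: float) -> np.ndarray:
    """1D gaussian kernel of length n, normalized to sum to 1."""
    x = np.arange(n) - (n - 1) / 2.0
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur_1d(img: np.ndarray, k: np.ndarray, axis: int) -> np.ndarray:
    """Convolve every row (axis=1) or every column (axis=0) with kernel k."""
    f = lambda row: np.convolve(row, k, mode="same")
    return np.apply_along_axis(f, axis, img)

k = gaussian_kernel(5, sigma=1.0)

# Two 1D passes (2n taps per pixel) on an impulse image...
impulse = np.zeros((9, 9))
impulse[4, 4] = 1.0
two_pass = blur_1d(blur_1d(impulse, k, axis=1), k, axis=0)

# ...match the full n x n kernel (n^2 taps per pixel) exactly
print(np.allclose(two_pass[2:7, 2:7], np.outer(k, k)))  # True
```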
The result would be basically the same but I assume the gaussian blur would still be much slower
and yeah, I put bloom after lighting post processing effects (ssao) and before color grading; I put it at the end of the post processing pipeline here for simplicity, but I go over a more concrete effect pipeline in the HDR post processing vid
@@Acerola_t very nice, I approve
5:09 that berserk music
KING
Nice video
9:06 - Answer: Hollywood
I gained IQ points from that breakdown of why box blur and gaussian blur aren't practical + an alternative method that is. Optimization makes me go a big rubbery one.
Thank you for being so detail oriented and actually explaining why and how all this worked.
The fast blur is really great but only works when you want to blur the whole render, so that's the trade off. Gotta make a video on different blurs for stuff like depth of field lol
also thanks!
This guy gives me the vibes of "the next Toby Fox", or the Acerola himself
Holy guacamole
Are your title cards and name inspired by the Monogatari Series?
Yeah it's my favorite show
Great videos btw! I feel very proud to have gotten the references :)))
How would you go about adjusting the positioning of the stars to give you a milky way type of stripe in the sky rather than just having a fairly even distribution of stars everywhere?
Probably using 3d perlin noise during the position generation rather than white noise
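A sketch of what density-weighted star placement could look like via rejection sampling; the perlin noise itself is omitted here, with a simple band density around a "galactic plane" standing in for it:

```python
import numpy as np

def density(direction: np.ndarray) -> float:
    """Example density peaking near the y = 0 'galactic plane'.
    A real implementation might evaluate 3D perlin noise here instead."""
    return np.exp(-(direction[1] ** 2) / 0.05)

def sample_star_directions(n: int, rng: np.random.Generator) -> np.ndarray:
    """Rejection-sample unit directions, keeping each with probability
    proportional to the density at that point on the sphere."""
    out = []
    while len(out) < n:
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)          # uniform random direction on the sphere
        if rng.random() < density(d):   # keep with density-weighted probability
            out.append(d)
    return np.array(out)

stars = sample_star_directions(500, np.random.default_rng(0))
# Accepted stars cluster near the plane, forming a milky-way-like band
```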
I have the same camper trailer. I think, anyway; haven't touched it for years, but it looks identical. Thing probably has a dead battery.
It's pretty nice
you use jakey song... you are now best dev-log style youtuber. thank you have good day
I like doing things in shaders and all, but for this, with the amount of actual parallax any player can see from the ground, you might as well do it the way the earliest humans did it... with a skybox, or a textured dome. Or at the very least, instead of generating all the star data like the point cloud, the transforms, etc. at runtime, at least read them from some kind of noise texture. I'm 99% certain most of the skies in Elden Ring are matte paintings with a high res flow map running the cloud movements. I guarantee you they aren't doing anything volumetric up there.
Also, honestly i didn't see if you did this, but with the point cloud method, be sure to offset all the positions spherically around the center so the character isn't walking thru stars.
If it's a skybox then you don't get the feeling of the stars actually existing in the space which is pretty important.
and I can assure you the stars genuinely exist as meshes in the space, pretty sure I demonstrated that in the video
Right, I understand why you would do it; I'm saying the amount of parallax detectable on objects that size from that distance is minuscule. At the least, it's something you would look at in an optimization pass... a candidate for something to streamline.
@@Acerola_t oh, sorry, yeah, in Elden Ring they are using a mesh BECAUSE of the Siofra Well descent which DOES allow you to see the parallax. My bad, I forgot about that factor. I was thinking in terms of something you would only ever be seeing from the ground.
Is there a way to get this to work in URP?
Not with my code, no, since it's made for the built-in pipeline, but if you are proficient with URP then the same theory behind all the decisions I made still applies.
One day I'll have to actually learn how to make shaders (my uni professors just went, okay, here's shader graphs, because actual shader code is too complicated for y'all right now, have fun) and this channel will be my bible... at first, before I'm then forced to pack up and go read studies.
if you use a Tetrahedron (en.wikipedia.org/wiki/Tetrahedron) or Triangular Bipyramid (en.wikipedia.org/wiki/Triangular_bipyramid), it would take 2 or 1 fewer verts. that is 1/3 or 1/6 fewer verts. that could be 2,000,000 or 1,750,000 stars.
This is true, but the exercise is to recreate elden ring's star particles specifically.
One question and one observation: If you already scattered your points at random in polar coordinates as first step, what's the point of adding positional noise after the rotations? (I might be making up words but I can't go back and check the real ones)
And, looking at the sample images from ER you're showing, I think it would make a big difference to not assume that stars are distributed randomly but use some density map. After all it's galaxies and all that shiz that give the extra wow factor...
But seriously, your channel is a jewel
The positional noise is added to each vertex to give random distortions to the mesh so that the stars aren't perfectly uniform. This is different from the world position of the star, since adding more noise to that would not deform the mesh, as it is added equally to each vertex.
A density map would def improve it but I kept it random for simplicity.
Thanks for watching!!
Does the blur trick have a name ? Is there a way to use it in after effects or photoshop without doing it manually ?
The blur trick is called something like blurring by GPU convolution, and it's only really applicable in real time; if you're using Photoshop or After Effects you can just use a real blur like a gaussian.
How do you apply the bloom effect to only the stars and not the grass and cube in the final scene?
Since the stars are emissive it's very easy to get the stars into the bloom threshold range while keeping the grass out of it
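A minimal sketch of a bright-pass threshold, assuming HDR intensities where only the emissive stars exceed the threshold (the example values are made up); everything below it contributes nothing to the blurred bloom image:

```python
import numpy as np

def bright_pass(hdr: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """Keep only the energy above the threshold; dimmer surfaces like
    the grass and the cube contribute nothing to the bloom blur."""
    return np.maximum(hdr - threshold, 0.0)

scene = np.array([0.3, 0.8, 5.0])  # grass, cube, emissive star (examples)
print(bright_pass(scene))          # [0. 0. 4.] -- only the star blooms
```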
Noice!!
I have no idea what any of this means, but they're like shitpost tutorials. I love it, more pls
Quick question: wouldn't it be better to make the rotations using quaternions?
No, because in this case the translation, scaling, and rotation are all done with a single matrix multiply.
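A sketch of how those three transforms fuse into one matrix (numpy stands in for the shader-side math; a row-major 4x4 with translation in the last column):

```python
import numpy as np

def trs_matrix(t, r: np.ndarray, s: float) -> np.ndarray:
    """Compose translation, a 3x3 rotation, and a uniform scale into one
    4x4, so the vertex shader needs a single matrix multiply per vertex."""
    m = np.eye(4)
    m[:3, :3] = r * s  # rotation and scale fused into the upper-left block
    m[:3, 3] = t       # translation in the last column
    return m

# Identity rotation, scale 2, translate by (1, 2, 3):
m = trs_matrix([1.0, 2.0, 3.0], np.eye(3), 2.0)
vertex = np.array([1.0, 0.0, 0.0, 1.0])  # homogeneous vertex position
print(m @ vertex)  # [3. 2. 3. 1.] -- scaled by 2, then translated
```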
@@Acerola_t You're right. I guess quaternions would be more meaningful if you wanted the stars to rotate constantly and smoothly, but it's not the case so it's a bit overkill.
#AcerolaLiedPeopleDied
holy algorithms please help me help this man move out of his trailer.
Surprisingly enough, only about 4500 stars can be visible at once to the human eye. So stars are some of the easiest things to not fake in a video game. It's one of the few times where you can choose to render more of something for stylistic reasons, rather than be forced to render less things than reality to not summon the performance police.
Why no star move
cause they don't move in the game, idk, seems self explanatory
@@Acerola_t i meant like see how it looks with camera movement
One question: why solids, not particles?
There's no real difference, unless you mean why didn't I use a point mesh, and that's because Elden Ring does not use a point mesh
@@Acerola_t 6 times as many vertices, 8x triangles. Probably a bunch more calculations too, culling, shading and whatnot. Particles, being just a small bitmap splashed flat (no 3D orientation) always facing the camera, feel to me like they should be vastly lighter computationally. I haven't been up to date with 3D tech since Quake, so I might be totally wrong about how modern GPUs handle them, but "back in the day" they were universally cheaper than faces.
@@sharpfang I'm saying that the word "particle" is kind of just a blanket term and doesn't refer to any specific kind of mesh lol
Yeah you could do billboarded quads but from my observations in Elden Ring that was not the case and it is in fact real geometry.
@@Acerola_t Yeah, but don't take Elden Ring as a paragon of optimal game design. Elden Ring does *lots* of completely unnecessary work no-one's ever gonna see, models that are still loaded but marked as 'don't display', complex layers of clothing/armor on characters where the under-armor layers are never shown, hundreds of megabytes of unused voice track you download and install, entire complex quests that were 90% complete and are there but impossible to begin... and the engine lacking some really trivial features most other open-world games take for granted - like NPCs being able to walk while out of combat, and flying enemies not falling to death when flying over the edge of a cliff. It really feels like From hires an army of artists, writers, designers, animators etc, and a small handful of interns to write the engine code.
@@sharpfang His video is about making the Elden Ring stars, so he did just that. You miss the whole point of the video if you ask why he made the stars similar to Elden Ring stars.
the elevator in east ligma
Why not just use a cube map?
because the purpose of the video is to recreate the asset from the game and the game does not use a cube map
@@Acerola_t fair enough! Thank you for responding so fast haha. When I first saw the effect, I just figured it was a skybox
Now you have to just simulate light pollution and reduce stars to 3 or 4 stars only, with an orange tint for the sky :-)
Acerola did you know you were speaking Vietnamese this video (according to auto caption)
a beard suits you
These are highly symmetrical objects. Why not instance ~8 different rotated stars and eliminate all trig?
There is no trig beyond the initial position generation on game start. The transformation matrices are precomputed so it boils down to a single matrix multiply in the vertex shader.
@@Acerola_t There's some hidden cost, you're sampling the buffer you stored all of those transformation matrices in. They're all stored contiguously though so it isn't very expensive and you'll have a near perfect hit rate, it's just that once you start going to millions of instanced entities you might find some tex interface gets saturated and bottlenecks the pipeline.
It'd be interesting to see how the current approach performs avg FPS wise, compared to the rotation via reflection method as suggested by @Luna which may eke out a lead at high instance counts - or not, who knows?
@@charactername263 You aren't sampling anything, it's an array, not a texture.
@@Acerola_t To pass so much data to the GPU it's necessary / strongly recommended to instead write the data into a buffer and then bind and read the buffer contents since you'll be limited by the GPU's max vertex array uniform components. Maybe Unity automatically deals with this under the hood. When the GPU reads from a buffer it uses the same / similar interfaces as sampling a texture (cos really it's all just reading from GPU memory) so the SoL shows on the TEX SoL, at least in theory.
@@charactername263 That's literally what I am doing: passing the transformation matrices in through a buffer
Tried replicating in blender, crashed because emissives give off real light in blender and my pc can't handle 10,000 light sources.
surely there's a way to do emissive surfaces in blender without it being a real light source
@@Acerola_t Yes, Eevee.
The subtitles for this video are just staggeringly bad, it's bizarre
ai generated subtitles moment
Engagement comment
I think I'm going to have a sandwich.
Love the result! Unsubscribed.
Hasan catching strays 💀