Thank you all again so much for all the support and suggestions! If you'd like to see what else I'm working on, it would be great if you could wishlist my game Star Mining Co. on Steam! s.team/a/2584800/
Oh and the blue tinted shadow step isn't because of the water colour, it's because of the sky colour (it gets LIT by the sky atmosphere). See Unreal Engine -> skylight
It would look a little more realistic if you made it so that lumpier areas were grass instead of sand, since sand tends to smooth itself out in the wind, whereas dirt and grass have varying density so they stay lumpier. Plus I feel like it would balance the amount of sand to grass. Btw, not a coder, just an artist
One slight optimisation would be to completely skip the shadow casting for a pixel if the dot product of that pixel's normal and the light direction is less than zero, as that would imply it's facing away from the light source, and therefore casting its own shadow on itself.
If a slope is facing away, the ray would intersect after a single step anyway. Meanwhile you are adding an extra check for each pixel. Even if it turns out to be measurably better and not worse, I don't think it's worth adding the complexity.
@@WilcoVerhoef I don't claim to know the in depth details of this stuff, but I'd imagine this simple vector calculation would be far faster than a collision check, and potentially more consistent too.
Yeah, and there might be a risk that the ray would miss if the pixel is located right at the boundary between facing away from and towards the light, so I imagine this check could only help reduce artifacts further?
Keep in mind that conditionally discarding the rest of the per-pixel logic is not as straightforward an optimization as it sounds, due to GPUs executing pixels in parallel in groups. If even one pixel in that group needs the shadow calculation then the rest of the group will wait for its completion. This can actually lower performance on tile-based systems like most mobile phones have. Always measure the performance with and without it
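For concreteness, here is a minimal GLSL sketch of the early-out being debated above (the uniform and function names are placeholders, not the project's actual code, and the divergence caveat still applies, so profile it):
```
uniform vec3 lightDir;           // normalised direction pointing towards the light

float marchShadowRay(vec2 uv);   // placeholder for the existing ray-marched shadow test

float shadowAt(vec2 uv, vec3 normal) {
    // Facing away from the light: the surface shades itself, so skip the march
    if (dot(normal, lightDir) <= 0.0) {
        return 0.0;              // fully shadowed
    }
    return marchShadowRay(uv);
}
```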
you can pretty easily generate your own mipmaps with a pretty simple shader. so you have your initial texture-- i'll assume it's 1024x1024 for the sake of example. (any power of 2 should work, and if it's not a power of 2, you pad out the edges until it is.) simply render it to a texture half the size, in this case 512x512, and write a shader which just reads the 4 surrounding pixels, and does whatever computation you need to them (in this case, the maximum value.)

you'd be doing something very similar to your getHeightRaw function, but instead of getting the nearest integer pixel value, you want the nearest EVEN integer. so, instead of floor(p), it would be floor(p/2)*2. then, you take that pixel and the three neighbors on the bottom, right, and bottom right, take the maximum value, and draw that to the screen. now you have your 512x512 mipmap, and you can repeat for the 256x256 and all the way down to the 1x1 case. precalculate all that at the beginning of your program, and you should be golden
Ah okay! I've never actually used mipmaps before so I don't really know what I'm doing, I thought I had to use one of the options presented by OpenGL. As you say, it should be a pretty straightforward shader to get the maximum of the neighbours. Thanks for the information!
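For what it's worth, a rough sketch of a single max-downsampling pass in WebGL-style GLSL (names are made up; it assumes the height lives in the red channel and that a full-screen quad is rendered into a target half the size of `prevLevel`):
```
precision mediump float;

uniform sampler2D prevLevel;   // the level above, e.g. the 1024x1024 height map
uniform vec2 prevSize;         // its size in pixels, e.g. vec2(1024.0)
varying vec2 vUv;              // 0..1 across the half-sized render target

void main() {
    // top-left texel of the 2x2 block of prevLevel that this output pixel covers
    vec2 base = floor(vUv * prevSize * 0.5) * 2.0;
    float h00 = texture2D(prevLevel, (base + vec2(0.5, 0.5)) / prevSize).r;
    float h10 = texture2D(prevLevel, (base + vec2(1.5, 0.5)) / prevSize).r;
    float h01 = texture2D(prevLevel, (base + vec2(0.5, 1.5)) / prevSize).r;
    float h11 = texture2D(prevLevel, (base + vec2(1.5, 1.5)) / prevSize).r;
    gl_FragColor = vec4(max(max(h00, h10), max(h01, h11)));
}
```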
@@BarneyCodes These are some very nice results!

There's a "Comparison gallery of image scaling algorithms" on Wikipedia that links to further information. The Mitchell-Netravali filter (often abbreviated to just "Mitchell") is a popular alternative to bilinear resizing (downsampling in this case) that can, depending on the content, look significantly better / sharper with minimal / no ringing.

While an "online" GPU mip generation implementation can be quite good for procedurally generated textures (like what it seems to be in these videos), it is more common to bake mipmaps for more conventional textures "offline" and load them (on demand or ahead of time) along with the source texture instead of generating them at load time. Rather than "along", the mip pyramid is often actually loaded before the lowest mip level (0, i.e. the source), with a cap on the lowest mip level that is presently loaded into graphics memory. This reduces the loading latency for large textures in cases where they cannot be loaded fast enough, e.g. when many textures are streamed in at once.

It is also common to compress the entire mip pyramid for hardware decompression after baking and then store it that way for distribution.

One drawback of GPU implementations is that many games are GPU bound / limited for varied reasons. In such cases, a CPU implementation running on a background thread might be a better fit (I imagine this is what many OpenGL implementations do for gluBuild2DMipmaps and variants). Both implementations are simple enough to have them available as options for different projects across multiple platforms.

One thing the GPU implementation is amazingly good for is hierarchical depth buffers that subsequently store the min or max of an area, depending on the depth buffer's direction (traditional or reverse). These are effectively what you are looking for here. There is no standard implementation (that's why you couldn't find it), but with the programmable hardware of today, there doesn't have to be one. We use the compute pipeline for these instead of graphics shaders due to efficiency and flexibility, but either of them will work.
To add to what others said: you can build your own mipmaps by passing them directly to OpenGL using glTexImage2D's "level" argument, where 0 is your base texture, 1 is the first mipmap (512x512 in your case), 2 is the second, and so on. Call it once per mipmap level while you're uploading the texture, and you should be able to freely use them in the shader. If you only generate the terrain once during startup, let the CPU generate the mipmaps for simplicity's sake and don't bother with compute shaders yet. Speaking of which, use textureLod() instead of texture() to make sure you're sampling a specific mipmap in your fragment shader!
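On the sampling side, a tiny hedged example (GLSL 3.x / WebGL2; in WebGL1 you would need the EXT_shader_texture_lod extension and texture2DLodEXT instead; `heightMax` is an assumed uniform name, not from the project):
```
uniform sampler2D heightMax;   // height map with the max-mipmaps uploaded per level

float maxHeightAt(vec2 uv, float level) {
    // textureLod bypasses automatic mip selection and reads exactly this level
    return textureLod(heightMax, uv, level).r;
}
```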
Thanks for your amazing videos. One tip: Your bilinear interpolation efforts (LERP in X, and LERP in Y) to smooth the terrain could be replaced with the default GPU bilinear sampling of your height texture. In your case, you want the GPU to grab all 4 texels, LERP them, and return their combined data.
@@BarneyCodes I have a software voxel engine that I'll eventually ramp up with some GPU help, especially for lighting. When I get to adding that, I'll be checking out your videos. :)
@@BarneyCodes Interesting how you got into shaders but missed the default sampling modes (point, bilinear and trilinear). Check out the smoothstep() function. You may find a lot of ways to use it.
I think an interesting direction to take this project would be implementing pseudo-3D graphics similar to the Voxel Space engine (if you don't know what that is, I recommend looking it up, super cool stuff). An updated version of that with smoother rendering and real-time lighting would be super cool to see, especially with how great this already looks!
@@BarneyCodes Do you mind if I ask: will you create a Discord server dedicated to this YouTube channel? Because Twitter scares me, I only made my Reddit account recently (to share one GitHub link, heh), and I don't know if there will be a nice place to share my pitiful attempts at terrain creation...
What you are doing when interpolating to fix speckles is similar to bilinear filtering. I would possibly recommend using a built-in method for this, as it is a very well-documented approach that not only fixes the speckles, but might also smooth out the pixelation you see with your approach.
I recently learned about an algorithm called "relaxed cone stepping" for solving this shadow problem. It requires precomputing a texture with information about the depth texture. The extra texture stores how steep the local area around the pixel is, so you get bigger and more precise steps without needing to specify a global fixed slope (specifically, it stores the shallowest-possible cone slope, hence the name, "relaxed cone stepping"). The "relaxed" part is because it turns out you can actually allow the cone to intersect with the terrain a little bit, as long as you are willing to do a binary search on the last ray cast to find the exact intersection point (but binary search is much cheaper than linear search so it's often worth it).
Do reflections! You can do reflections of the sun on the water, and just distort it a little with a noise texture, and you can introduce diffuse reflections on the terrain by setting the terrain colour to be a lerp between the original colour and white using the angle of the terrain, referencing the angle of the "camera" (always straight down). Also, you should set the water colour between a cyan (close to the terrain) and a dark blue (far away), so you'll get those gradients like on Caribbean beaches. Would look super cool!
These are great ideas! Reflections sound a little scary, but I'll give it a go. I definitely think the water can be improved, using a gradient sounds like a great idea!
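A small sketch of how that depth-based water gradient might look in GLSL (the colours and names here are just guesses to illustrate the idea, not values from the project):
```
const vec4 shallowColour = vec4(0.25, 0.85, 0.80, 1.0);  // cyan near the shore
const vec4 deepColour    = vec4(0.02, 0.15, 0.40, 1.0);  // dark blue offshore

vec4 waterColour(float terrainHeight, float seaLevel) {
    // 0 right at the shoreline, 1 where the sea floor is far below the surface
    float depth = clamp((seaLevel - terrainHeight) / seaLevel, 0.0, 1.0);
    return mix(shallowColour, deepColour, depth);
}
```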
Really nice video and great improvements. Well done! You can improve the performance quite easily by switching your bilinear interpolation to calculate your smooth terrain height and your normals to barycentric interpolation which only needs 3 texture samples. If you want to be even faster than that, you can use the partial derivative functions ddx/ddy to sample the neighboring pixels for free and use them to calculate your smooth terrain and normals.
Not even here to add a suggestion, just making a comment to drive engagement. Your video was the first thing that came up so hopefully the algorithm gods are being kind to you this morning. I love these kinds of videos, great content!
When I saw the last video I instantly reprogrammed it in Unity using HLSL. It also showed me how to use the map function, which I use all the time in my normal programming.
Nice video! I have a few simple suggestions which should improve the overall fidelity :)
- Increasing the intensity of the normal-mapped shadows would give the island more "ambient occlusion", which I recommend. It'll help to highlight details within big shadowed areas, even if not physically accurate.
- Determining the actual water colour based on how shallow it is would also look great, sharply interpolating from a turquoise to a deep blue based on the depth of the water.
- Adding some noise to the sin value of the sea foam would make it less artificial
- Also changing the colour of the sea foam (to a bolder white) would help define the outline of the island (since the sand showing through the water blurs the lines a bit)
These are really great suggestions, thanks so much! I feel like all of these are things that I've noticed subconsciously but couldn't quite put my finger on! Cheers!
The waves should be continuously going towards the shore to be more believable. Then, you could use the terrain height and some randomization to vary the wavelength, with longer waves in shallower water, to really sell it. For the randomization, a slowly moving or oscillating premade noise texture would do just fine; it doesn't need to be dynamic.
Change line 160 (where you mix in the water colour) to be:
col *= mix(vec4(1.,1.,1.,1.), colourWater, waterLerp*4.);
and then change your water colour to something like:
const vec4 colourWater = vec4(0.384,0.8,0.9, 1.);
You will get a beautiful cyan to blue gradient on the water, and you will get accurate underwater colours as a result of multiplying the terrain colour with the water colour, effectively simulating water filtering out red light with depth.
Cheers!
Nice work on the presentation! Looks very clean and is easy to follow.

I can't find such info from a cursory glance, but if p5js exposes the rendering context somehow, or a compatible API, then you should be able to set the lods with the texImage2D function, whose second argument is the mipLevel.

Generating the max-mips themselves could be done by populating Uint8ClampedArrays at the same time as you create the height-map, by basing the mip indices off of the main height-map indices:
```
// maps a height-map coordinate (x, y) into a size x size mip level
const mipIndex = (x, y, size) => Math.floor(size * x / width) + size * Math.floor(size * y / height);

let n = 255 * noiseVal(x, y);
heightMap[x + width * y] = n;
// mip being an array of Uint8ClampedArrays of appropriate sizes in this snippet
mip[1][mipIndex(x, y, 128)] = Math.max(mip[1][mipIndex(x, y, 128)], n);
mip[2][mipIndex(x, y, 64)] = Math.max(mip[2][mipIndex(x, y, 64)], n);
```
... etc
Typed from mind so it might be a bit broken and inefficient, but the concept is hopefully clear! The normals could also be calculated in the same manner, but instead you could use the alpha channel for the height in the main texture and use rgb for the normal.

I love the changes to the water rendering and the slanted terrain! Maybe the strength (colour) of the waves could be affected by the calculated slope somehow? Probably doesn't need to be said, but reflection and shadows from moving clouds (and stars for a night mode) would be interesting to see. Could be an interesting way to learn 3D textures if that's something which interests you.

Keep up the good work!
This is really helpful, thanks so much! I'll give this a go when I get a chance And thanks for all these great ideas too, I'll have to look into doing reflections, they'd look so good if I can get them working!
As a general idea, it would be good to bake as much of this into the actual texture as possible. For example, you can do the colouring (including the stone and shallow sea colours) in the texture itself so that you don't have to render all of that live, since it's 100% static. Then you could just run the wave shader and the shadows/normals on the GPU, and it would give you much better performance if you're using a much larger map size
That's definitely true, and in the last version the colours were separate for exactly this reason, but my plan is to make the terrain not so static! Really appreciate the comment!
not necessarily. An optimization like that is highly GPU and implementation dependent. More textures means higher VRAM requirements and potentially more reading from the VRAM or even system ram depending on what GPU is being used. Reading from memory is quite expensive, and doubly so when working with a GPU, so if a calculation is simple to do based on existing data, it's often as fast or faster to calculate it in real time than to try and cache it in a texture. This is really something where you'd have to decide what type of system you're targeting and profile both options to see the potential cost and benefit
How about some softness in the shadows? The farther a shadow is from its occluder (mountain) the "blurrier" it gets, because of umbral/penumbral effects. There's a natural falloff of the shadow rather than a sharp line. One way I've approximated this with ray-marched shadows in the past is to multiply the shadow colour by tanh(height-above-terrain) at every raycast sampling point. So the closer the ray sample point gets to terrain, the darker it is shaded (even if it never fully intersects the terrain). This is sort of the reverse of what penumbral/umbral effects are in reality, with the 'fading out' of the shadow occurring 'inside' the shaded region rather than 'outside'. Still, it is a fast approximation and easy to calculate alongside existing shadow occlusion code! Happy to answer qs if that doesn't make sense
Yea I think that some softness would add a lot, I'll try out the tanh method! Another suggestion I got was to draw the shadows onto a separate texture so it can be blurred before combining it with the terrain, in which case you'd get a mix of it being inside and outside the shaded region
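A loose GLSL sketch of the tanh idea inside a height-map shadow march (everything here, including `terrainHeight`, the constants and treating z as up, is an assumption rather than the project's code; tanh() also needs GLSL ES 3.0, otherwise build it from exp()):
```
float terrainHeight(vec2 uv);        // placeholder for the existing height lookup

const float STEP    = 0.01;          // march step length (tweak)
const float FALLOFF = 8.0;           // how quickly shading fades with clearance

float softShadow(vec3 start, vec3 toSun) {
    float shade = 1.0;
    vec3 p = start;
    for (int i = 0; i < 64; i++) {
        p += toSun * STEP;
        float clearance = p.z - terrainHeight(p.xy);   // height above the terrain
        if (clearance < 0.0) return 0.0;               // hard hit: fully shadowed
        // the closer the ray grazes the terrain, the darker the shading
        shade = min(shade, tanh(clearance * FALLOFF));
    }
    return shade;
}
```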
This type of raycasting also occurs in parallax mapping (displacement mapping). A powerful optimisation is a preprocessing step that stores, for each terrain position, essentially the maximum radius you can travel without missing anything, regardless of direction.
Interesting! I'll have to look into it! I am planning to use some dynamic heightmaps in the future, so it might not be a perfect fit for that, but I'll check it out! Thanks for the suggestion :)
Great work, this video shows that you really care about our suggestions! In addition to the noise map, you could use a curve to change the profile of the terrain. You could tweak the curve to make mountain slopes more vertical and valleys and shores flatter. This way you can make it look more natural and get rid of those big chunks of rock that appeared in the middle of the plains.
Thanks, you all have lots of really good ideas and knowledge so it would be silly for me to ignore them! I think that's a great idea, I might focus a bit more on the terrain generation instead of the shadows for a bit, because there's a lot of improvements that could be done there!
The slope-based stepping you're doing reminds me a lot of SDFs (signed distance fields) in computer graphics. They're basically an entirely different way of conceptualizing models, using mathematical functions rather than strict triangular polygons. They're not super common, but they're an interesting concept in their own right, and can be really amazing when used properly
SDFs are wickedly cool aren't they! I've been trying for a while to come up with some way of efficiently converting a heightmap into an SDF but I haven't really made any progress on it unfortunately. It would be amazing if I could get it to work though, there are so many cool effects possible with SDFs!
Not sure if I understood your question, but if I recall correctly openGL allows you to upload your own mipmaps. So you can sample them the way you want and then upload them.
Render the shadows to a separate output texture, then do a tiny blur pass on that before adding it in. This will give a level of anti-aliasing and speckle removal. On the lerp function: take the decimal part of the distance between the two cells, multiply it by 2, subtract one, then raise it to a power (1.5 seems good). This will give you a curve and smooth out your points further.
That's a really great idea, a blur would probably make this so much better! I guess that's basically an ease in/out isn't it, can't believe I didn't think of that! Thanks for the suggestions!
@@BarneyCodes To ensure you don't get double shadows from the single light source, write the shadows as black to a white texture, do your blur, then composite that in with an alpha or whatever other system. I recommended this for Factorio back when they had double shadows if two objects overlapped, since it merges all shadows first, then blurs any edges. If you also record how far the point is from the object casting the shadow (you've already done the raycast to find this), then you can apply a blur based on this distance, getting a sharp shadow when the object is close and a much blurrier one further out. Just sample more of the local area. This is expensive the more you sample, so if you instead calculate the mipmaps of the original shadow image with a linear filter, and then sample from a smaller mipmap when further away from the object casting the shadow, you'll make sure that all pixels have the same cost for their blur effect. Giving you a fairly nice umbra/blur for a low cost.
@@Silverware09 Ah yea that makes sense, thanks for the tip! Doing different levels of blur depending on ray travel distance sounds like it would be really cool! I'll give it a try!
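A sketch of the mip-based blur being suggested here, assuming the shadow mask has already been rendered into its own mipmapped texture (textureLod again, so WebGL2/GLSL ES 3.0; the names and the distance mapping are illustrative):
```
uniform sampler2D shadowTex;    // white = lit, black = shadowed, with mipmaps
uniform float maxBlurDist;      // occluder distance that maps to the blurriest mip
uniform float maxLod;           // highest mip level you want to reach

float blurredShadow(vec2 uv, float occluderDist) {
    // further from the occluder -> sample a smaller (blurrier) mip level
    float lod = maxLod * clamp(occluderDist / maxBlurDist, 0.0, 1.0);
    return textureLod(shadowTex, uv, lod).r;
}
```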
I think it would be cool to simulate erosion by gradually changing the height map image. You could add some noise or maybe add a slight Gaussian blur to it as a function of time to smooth out the sharper peaks.
It's possible that the reason you're still seeing some weird pixelation in your shadows is floating-point inaccuracy in your noise texture. I don't know what data type it is, but if the smallest possible value difference is too big, it could produce that effect. This is also supported by the fact that there is some slight visible banding on the terrain's diffuse lighting (the normal vector stuff). I'm not certain if anyone else has mentioned this already, but it's worth considering. Either way, loved the video, neat project :)
Thanks so much for the comment, this is absolutely spot on! I didn't catch this myself so great job on picking it up! Someone on my discord pointed it out to me so I've fixed it for the next iteration and it makes a massive difference. I'm currently editing the next video in this series and this is one of the improvements I've made. I now pack the height data into all 4 colour channels instead of just using a single channel. Thanks again!
Mipmaps - you can make them manually and load them one by one, halving the size each time until you go down to 1x1 pixels, or you can use a file format that allows storing mipmaps, like DDS/DXT. Details may depend on the bindings/engine used.
you can generate the mipmaps manually using a compute shader call at the start (assuming your terrain is static). With your solution, you can also generate an SDF for the terrain and basically turn it into a ray marcher. It also paves the way to other cool effects like ambient occlusion and volumetric fog.
Thanks! The terrain is static at the moment but that's going to change. Performance isn't terrible at the moment though so I'm not sure if I'll switch to the mipmap method or not. I've tried to find ways to generate an SDF from a heightmap but couldn't find anything that worked. The only thing I can think of doing is approximating it with eg cones or spheres and passing that data into the shader, but that seems like a lot of work, especially if the terrain is going to be dynamic. If you could point me to some helpful resources, that would be great!
I think specifically in the sand layer you should have it change to a secondary material at a much lower slope value than the stone in higher areas; sand wants to settle in less slopey ways than other materials present, so anything higher slope should be a harder-set material.
I have an idea for how u can approximate ambient occlusion. For each pixel, imagine a plane perpendicular to its normal. Sample the height of its neighbours in a radius. Find the average amount neighbours are higher than this plane. This approximates concavity, which approximates AO. Some variables you can toy with: weighting neighbours based on proximity; choosing to allow pixels under the plane to offset the effect of those above or not, whichever looks better; the amount of AO resulting from this measurement. You may want to use a max function with sea level so the ocean is treated as an occluder.
You could also, for each pixel, raycast in several directions and measure how many casts reach the sky unoccluded. More expensive. Maybe you could bake either?
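Roughly, the neighbour-based version could look something like this in GLSL (a sketch only; `terrainHeight`, the radius/strength constants and the sea-level clamp are assumptions, not the project's values):
```
float terrainHeight(vec2 uv);        // placeholder for the existing height lookup

const float AO_RADIUS   = 0.02;
const float AO_STRENGTH = 4.0;
const float SEA_LEVEL   = 0.3;

float ambientOcclusion(vec2 uv, vec3 normal, float h) {
    float occlusion = 0.0;
    const int SAMPLES = 8;
    for (int i = 0; i < SAMPLES; i++) {
        float a = 6.2831853 * float(i) / float(SAMPLES);
        vec2 offset = vec2(cos(a), sin(a)) * AO_RADIUS;
        // height of the tangent plane through this pixel at the neighbour's position
        float planeH = h - dot(normal.xy, offset) / max(normal.z, 0.001);
        // treat the ocean surface as an occluder, as suggested above
        float neighbourH = max(terrainHeight(uv + offset), SEA_LEVEL);
        occlusion += max(neighbourH - planeH, 0.0);
    }
    return clamp(1.0 - occlusion * AO_STRENGTH / float(SAMPLES), 0.0, 1.0);
}
```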
One thing that could be improved: right now you have lots of detail in the light, but areas in shadow end up very flat. They should still get some shading from the light from the sky itself, giving a bit more depth to the shadows. The cheapest method: use the dot product to give shading facing straight down. Fanciest: you can pre-compute an ambient occlusion term by tracing in lots of directions to find out how much of the sky is visible to each point: the more sky it can see, the brighter it is even when it’s in shadow. This is pre-baking an ambient occlusion texture. Fancy realtime: once you have the maximum mipmaps, you might be able to approximate the sky occlusion above by sampling the levels nearby and weighting them. Basically you want SSAO techniques, looking at papers on those might be good.
You're definitely right! I'll have to try out some AO, I think it would really add a lot! In the meantime though, as you suggest, I think I should apply the dot product shadows inside the shaded region as well! I'll have to do some reading up on these techniques!
In addition to AO of some kind, you could try the three directional light solution in Inigo Quilez's article "outdoors lighting" from 2013. Not the most realistic, but its extremely simple and can look pretty good
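A hedged sketch of that three-light setup (the colour weights below are just plausible warm-sun / cool-sky / ground-bounce starting values, with z assumed up; see the article itself for the real thing):
```
vec3 outdoorLighting(vec3 normal, vec3 sunDir, float shadow, float ao) {
    float sun    = clamp(dot(normal, sunDir), 0.0, 1.0);
    float sky    = clamp(0.5 + 0.5 * normal.z, 0.0, 1.0);   // how much sky it sees
    float bounce = clamp(dot(normal, normalize(vec3(-sunDir.xy, 0.0))), 0.0, 1.0);

    vec3 light  = sun    * vec3(1.60, 1.30, 1.00) * shadow;  // warm key light
    light      += sky    * vec3(0.16, 0.20, 0.28) * ao;      // cool sky fill
    light      += bounce * vec3(0.40, 0.28, 0.20) * ao;      // ground bounce
    return light;
}
```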
Really satisfying to see your progress and it's encouraging to see all the very helpful comments! Mine will perhaps be less helpful as it might send you on a frustrating goose chase... But my challenge to you is to add floating islands! Maybe it could be done in a single heightmap by using rgba instead of just grayscale? Or perhaps swap the ocean for big puffy white clouds. There's a lot of appeal to the style you have going. IDK if you're planning on making it into a game, but I'm getting some kind of flying/Laputa castle-in-the-sky vibes from the bird's eye perspective. I can see it going in lots of cool directions.
I'd love to add some clouds! Though they could be a bit tricky with the way this is implemented, I'll have to give it some thought! Thanks for the comment :)
For each pixel, starting from the white pixels and going to the black pixels, use the pixel height and sun direction to determine the length of the cast shadow, and draw it only if the pixel under the shadow is lower than the original.
You could improve the lighting even further by also using an ambient occlusion map. This should only show in the shadowed regions. This AO map could be pre-generated using the height map, or you could have a real time version by shooting a few extra rays from every pixel, in random directions. You can control the width of the ao effect by adjusting the ray length.
If we’re talking optional extras, how about red shifting everything when the sun is closest to the horizon? I would imagine something like a sigmoid interpolation rather than a linear one might work well?
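Something like this could work for the red shift, with smoothstep playing the role of the sigmoid (the sun direction convention, z-up assumption and tint colour are all guesses):
```
vec3 sunTint(vec3 sunDir) {
    float elevation = clamp(sunDir.z, 0.0, 1.0);              // 0 at the horizon
    float redShift  = 1.0 - smoothstep(0.0, 0.35, elevation); // sigmoid-ish ramp
    return mix(vec3(1.0), vec3(1.0, 0.55, 0.35), redShift);   // multiply into the light
}
```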
The elevation of land impacts its climate and the climates of nearby land due to sun exposure. You could randomly generate a whole cluster of biomes just from some noise maps.
One really easy optimization would be to use partial derivatives to calculate the normals from the heightmap. Essentially partial derivatives would do pretty precisely the same thing you are doing now manually but it would reduce the amount of redundant calculations by not having to calculate the height 4 times for each pixel. Partial derivatives are often used to reconstruct normals from position data for 3d rendering as well.
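ddx/ddy are the HLSL names; in GLSL they're dFdx/dFdy (built in with WebGL2 / GLSL ES 3.0, behind the OES_standard_derivatives extension in WebGL1). A sketch of reconstructing a normal from them, where `worldPos` and `h` stand in for whatever position and height the fragment has already computed:
```
vec3 normalFromDerivatives(vec2 worldPos, float h) {
    vec3 p = vec3(worldPos, h);
    // neighbouring fragments evaluate p at neighbouring texels, so the
    // screen-space derivatives span the local surface for (almost) free
    vec3 n = normalize(cross(dFdx(p), dFdy(p)));
    return n.z < 0.0 ? -n : n;    // keep the normal pointing up
}
```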
Also do ambient occlusion! You just need to do a comparison between the height of the pixel and its neighbouring heights, and if there are higher neighbouring heights in opposite directions, darken that pixel towards black (only darken pixels that are already in shadow, or else it will look weird)
I've been blown away with how helpful and encouraging all the comments have been, it certainly helps with the motivation! I'll definitely be doing some more updates in the future :)
I would love to see some more attention spent on the water, maybe implementing some common water shading techniques? Using the height map below the surface to get a depth value to saturate the deeper parts of the ocean could be one idea.
Great stuff, I personally don't particularly like the blurry sand that is under the water but maybe tweaking a bit would make it look better, rn it takes a lot of attention away from the main island imo
Yea I think the way I'm handling the water is a bit off. I think there would be a way to do it "properly" that would look nice, but I see what you mean. I'll have to play around with it and see if I can get it looking better! Thanks for the comment!
@BarneyCodes - I think the issue there is just the colour used for the “land under water” it should probably be much more blue, except for the very shallowest parts.
I did a very similar project to yours a few years back, but used a Bresenham line-drawing algorithm for the ray tracing and some simple math. One aspect that really improved the overall style was to emulate the colour changes when sunlight is scattered by molecules in the atmosphere. This would allow for dynamic sunrise, midday, sunset cycles.
You could probably also implement some basic global illumination using the normals, slope value, and shading values - all things you've already calculated. Use the normals in addition to the slope to get a 3D normal vector, then check surrounding pixels for ones that are facing the surface and modulate the currently shaded fragment based on the shading and material (defined by slope) of the surrounding pixels. This could be used to give some variation and depth to your shadowed regions.
Another thing you could do to decrease the speckles even more is to use multi-sampling. I don't remember if you can just double the resolution of each texture and then enable OpenGL multi-sampling and everything works, but in one project of mine I initially render to a 2-3x resolution texture, and then down-sample it using a Lanczos kernel to create the final image.
I haven't heard of multi sampling but I'll definitely check it out! I might need to move the height map generation into a shader because at the moment it's quite slow on the CPU, but if I do that it should be pretty quick to generate it at much higher resolutions. Thanks for the suggestion!
Generating the mipmaps can be done by hand fairly easily; you literally render each mipmap >0 while getting the max value for the 4 samples "above". You can also use compute shaders, binding the previous and current mipmaps as different images. Also, you can just set your sampler to use linear interpolation by default? Why are you using nearest filtering on the heightmap?
Thanks so much for the info! As you can clearly tell I haven't done anything with mip maps before! I'll give generating them a go :) And now I feel silly haha I'm using P5js for this project (I think I'll change that soon though), and they do a lot of the boilerplate setup for me so I haven't explicitly set the filtering anywhere, but I think I should be able to change it! Thanks for the comment!
@@BarneyCodes "And now I feel silly haha I'm using P5js for this project" Ah man then I was maybe a bit harsh, no intention of that, these videos are great. I thought you were using OpenGL/DirectX/Vulkan. I have no clue how much control over texture generation and sampling you have in p5.
love the video! another interesting idea for the shadows would be to store the lowest distance you get to the terrain when marching towards the sun, and then add a little bit of "fuzz" to the shadows, making them blurrier, like you see in real life! what do you think? it should be fairly free as well
Note for this: a very good article on this algorithm is Inigo Quilzes’ “SDF Soft Shadows” article, I recommend looking at that. I’d link directly but I know TH-cam hates links.
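For reference, the closest-approach trick from that article adapted to a height-map march might look roughly like this (a sketch only; `terrainHeight`, STEP and the loop count are placeholders, and k controls how soft the penumbra gets):
```
float terrainHeight(vec2 uv);        // placeholder for the existing height lookup

const float STEP = 0.01;

float softShadowMarch(vec3 start, vec3 toSun, float k) {
    float res = 1.0;
    float t = STEP;
    for (int i = 0; i < 64; i++) {
        vec3 p = start + toSun * t;
        float clearance = p.z - terrainHeight(p.xy);
        if (clearance < 0.0) return 0.0;       // occluded: full shadow
        // remember the closest approach, scaled by the distance travelled
        res = min(res, k * clearance / t);
        t += STEP;
    }
    return clamp(res, 0.0, 1.0);
}
```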
You can blur your shadows a bit, so they look a little softer at the edges and darker towards the center. You can also go full penumbra and determine shadow colour by the length of the ray between your initial to-check pixel and the first intersected pixel.
@@BarneyCodes if you do that, you actually have a really easy way to do it accurately. The blurring of shadows is based on how far the light has traveled, as well as the size of the light source. In most rendering engines, getting that light traveled distance efficiently is quite difficult, but you already have all the data you need for it. I was looking into it a while ago for 3D and discovered a technique called PCSS, which was developed by NVidia back in 2005. Unfortunately, it seems like the technology hasn't really improved much since then, which is a shame, because it adds so much realism
Not sure how much this idea helps, but concerning the stretching and oddities with the waves: maybe you could take your normal map and dot product it with the plane of the water (pretty much giving you a "shadow" of the vector on the water), use that to dictate the rotation of the water waves, and use the length of the vector to change the frequency of the waves, making waves much more pronounced near cliffs and barely visible around low, flat beaches.
Love love love your journey, it's fantastic how well you did. Have you thought about adding lights/reflections? It might look cool on the water. What do you think?
I'm not sure how the mip-map generation works in OpenGL, but in DirectX you can use a pixel shader and use a specific mip level as render target to manually construct a mip chain
I’ve programmed really really reallyyy basic graphics (only a pseudo 3D “engine”, like Wolfenstein), so it’s super cool to see things that look as cool as this! I know the terrain isn’t a big concern of yours, but a tiny 5% change that could have a 50% improvement could be removing the small stone texture on the beaches of your island. Otherwise super cool stuff man
I kinda think that pseudo 3D stuff is almost cooler than actual 3D just because I find all the tricks and hacks that get it to work super interesting! I think you're right about the terrain, I might do a bit more to make it look a bit more natural! Thanks for the comment!
This is just a thought of something that might work. I have not tried it, but I'd like to share my thoughts anyway. Would it work to calculate the steepness of the whole map? A delta-x value for every pixel, and a delta-y value for every pixel. Then for the raytracing you could find how steep the hill needs to be before light gets blocked. You could then iterate over the pixels in that direction and see if the average is above or below the threshold. If it is above, then the light would be blocked
Interesting idea! Sounds similar to the "cone mapping" technique someone mentioned in another comment, I'll have to try it out Thanks for the suggestion!
I guess you could say I'm doing a bastardised version of ray marching for the shadows, the issue is coming up with an SDF for a height map. The method that I use currently is essentially just approximating the SDF by only looking at the distance on one axis (the height), which obviously isn't perfect, but gets the job done! Maybe instead of getting shadows working for height maps we could look at ways of generating terrain as SDFs? Could be an interesting area to explore!
would probably be pretty tricky and slow/no longer realtime but global illumination/bounce lighting would be very interesting. i can think of a few ideas that would work but all of them would be quite convoluted
I was thinking about some sort of shadow falloff where I make the shadows softer the further the ray has travelled before hitting terrain. I'll have to play around with it and see how it looks. Thanks for the suggestions!
Yeah, the shadows on the water look much too “hard”… At the very least I’d lighten and soften them a little (maybe take the water depth into account?) Love this little project! ☺️ As someone who hasn’t coded in years, or had much to do with graphics rendering, some of it goes over my head. Looks like fun though!
Yea I sort of forgot that was an option 😂 that's the danger of using abstractions of shaders I suppose! This is a much better solution, thanks for the suggestion!
It would be cool if you could change the angle at which the light is hitting the terrain. The closer it is to 0 or 180, the more red-shifted the hues would be.
Because all of your color tones are already so warm, the previous shadow already felt cool. You didn't need to blue-shift it. Looks kinda alien imo now
Yea I think you're right, part of the problem is that the normals introduce a little bit of shadow almost everywhere, so the whole terrain is now a bit more blue, which does look a bit strange. I think what I should probably try is keeping the normal shadows and the ray cast shadows separate so they can have different colours, and that way there's also still some detail in the terrain in shaded areas
I agree, I think that would look really great! I've already got the information on how far the ray has travelled before hitting terrain so hopefully it shouldn't be too hard...
I love shaders but I didn't go to school much as a teenager :( Going to tell my kids that you don't know what job or hobbies might present themselves as an adult lol
If you have the drive you can definitely learn all of this stuff online for free! The book of shaders is a really excellent resource for getting started and feel free to ask any questions, I'm more than happy to answer!
Thanks! I use OBS to screen record and DaVinci Resolve for editing and doing simple animations. Any of the more complex animations I just code myself using P5js! Hope that helps!
So you're doing it in 1 draw call, right? Using multiple draw calls should give a big performance boost I think, by storing an intermediate result in a texture. For example, the first pass could check only the 15 closest pixels and store the maximum shadow length found for that pixel. The second pass would only do 15 checks as well, but with 15-pixel leaps. Thus you would only use 30 texture lookups per pixel instead of 200.
Interesting! I think I'd have to do the same number of texture lookups in total though to get the same result, does splitting that over multiple draw calls actually make it faster? Or maybe I'm misunderstanding!
@@BarneyCodes You would need much fewer lookups, because the calculations done in the first pass are shared by multiple pixels, the same idea as "dynamic programming".
Each pixel is independent though I think? Each one is on its own trajectory in the light direction and each ray is parallel so unfortunately I don't think you can share any information between pixels
Why didn't you use the usual ray-marching optimization technique? Generate a 3D map of distances (or maybe some sparse spatial structure like an octree to save memory); you don't need that high a resolution, and it's relatively easy to generate (maybe it could even be done at run time, though I think it's better to spend some RAM instead). For example, you could take the height h of the point above the terrain as an initial guess, then check terrain points in a circle of radius h, going from the center outwards in a spiral and checking if the distance to the terrain point is closer than h; if it is, store that distance as the new value of h (which will then limit your search radius). Having that 3D map, you can use values from it to make steps of that length in any direction. With a limitation on terrain slope, you could do some optimization to limit the search radius.
I think if I was only going to use static height maps then this would be a great optimisation for sure, but my plan is to have dynamic height maps (maybe I should have mentioned this in the video!), in which case I'd have to re-create the 3D map every frame, which would likely end up less efficient. A cool technique though, thanks for the comment!
I think it'd look nice if the direct shoreline, where the surface of the water hits the land, was less affected by the land's colour, giving a kind of sea-foamy look.
I like that! I could pretty easily make a sort of "permanent wave" where the water meets the land, and then still have the animated waves for further out in the water
@@BarneyCodes I think so too! As the player progresses towards the center, they'll end up visually digging a ravine as they path along. It'll also give the surface of the asteroid more character.
TL;DR of this comment: Since your setup is so specific, you can get away with simplifying your calculations a WHOLE bunch (as detailed below), which should give you a faster and more realistic frame in the end.

The details: One way to improve the efficiency and realism at the same time stems from the following fact: the sun is so absurdly far away from the earth that the actual sunrays that arrive here are pretty much parallel (as in, not scientifically precisely, but so close that no human on earth could ever tell the difference). Since your light source is closer, your shadows spread out. So your still frames do not look like they're lit by the sun, but more like they're lit by something exploding just off screen.

We can actually use the parallelism of sun rays to our advantage, though, because it means that all of our rays are going to have the same direction vector for any single frame. So we can calculate this direction once at the beginning of a frame, work out what it "means" for our rays (i.e. their step sizes in x, y and height), and use those same step sizes for all of our calculations, turning most of them into additions, which are notoriously fast.

We can also do the following: instead of casting extremely many light rays in hopes of hitting everything you can, mark all your pixels as "lit", then iterate over your pixels, casting shadow rays from each, as follows. If the pixel we're looking at is already shaded (by another pixel from a previous pass), you should be fine not casting another ray, since the ray that marked our pixel should already have marked everything our new ray would've marked anyway. This saves us even more calculations, since we're not casting rays that we don't need to. If, on the other hand, the pixel we're at is still marked as lit, cast a shadow ray from that pixel into our fixed direction. The ray should do the following: 1. Step one pixel length in its direction. 2. Check if it intersects the pixel it is currently "hovering over". If so, stop calculating this ray. If not, mark that pixel as shaded and repeat from 1.

Once this is done for each pixel, your frame should be complete, and your shadows should be nice and parallel. For the pixel iteration order, either "highest to lowest" or "closest to furthest" should be best, as we want to mark as many pixels as shaded as quickly as possible in order to not cast more rays than necessary. By "closest to furthest", I mean in terms of our light direction: if the light is going east, start from the west, because in this situation our rays can only shade pixels to the east of their starting pixel anyway.
This is a really well thought out comment, but unfortunately, since I'm doing things in a shader (where each pixel has to do all its calculations in isolation and data can't be shared between pixels), I'm not sure that it will work :'( For sure, if this was being done on the CPU this would be a fantastic approach, and I really appreciate the time and effort you put into the comment, so thank you!
5:11: Wouldn't it be easier to just use the interpolation the sampler2D is configured to use in the shader? I assume you set the heightmap to the "nearest" min/mag filter; maybe set it to "linear", because that does the exact same thing you calculate there. Or am I forgetting something here? Edit: You could also provide the shader a lookup texture for height-colours. That could even extend to a full texture in the future. If you're looking into performance more, you could replace the "if/else if" colour block with the lookup texture.
Yea definitely should have used bilinear for texture sampling, I completely forgot it was an option because I'm using P5js for this and it's abstracted away (it wasn't a conscious choice to use nearest!) I think a texture lookup is a great idea, I could even do something like height on the y axis and slope on the x axis so that I can easily have different colours for cliff areas in different sections of the map?
Sure, I think this is a possibility. Maybe just use the dot product of up with the normal of the heightmap to get the slope, and thus the x axis. I don't really know what this would look like from an artistic POV, but that's a possibility. Also, you should presample the normals and store them in a normal map (Float32 RGB texture). That would also enable interpolation between normals. It could even be generated on the fly using compute shaders or, a bit simpler, a render texture: simply do a draw call which renders the normals into the texture. Then you could even make changes to the terrain and it would recalculate very fast.
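A tiny sketch of the lookup-texture idea with slope on the x axis and height on the y axis (`colourLUT` is a hypothetical texture you would paint yourself; z is assumed to be up):
```
uniform sampler2D colourLUT;

vec4 terrainColour(float height01, vec3 normal) {
    // 0 = flat ground, 1 = vertical cliff
    float slope = 1.0 - clamp(dot(normal, vec3(0.0, 0.0, 1.0)), 0.0, 1.0);
    return texture2D(colourLUT, vec2(slope, height01));
}
```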
I know this is super late, but shouldn't the calculation for whether a pixel is beach or not be based on both vertical AND horizontal proximity to pixels at the 'water level'? Because getting inland valleys that are just sand or beach doesn't make a lot of sense.
That's a good idea! I think in reality it would probably just be distance to the centre, because otherwise it would add a lot more looping into the calculation to find out how far the pixel is horizontally from a water tile. Thanks for the comment :)
Would be neat if there was a way to import already existing heightmap images, so that you could for example, render Earth's heightmap with this shader.
This is definitely doable! The online p5 editor (that the code I linked is in) allows you to upload files, and you can then use the loadImage() function to load it into the code. Maybe I should add an image uploader to it to make it a bit more user friendly though Seeing earth's height map in this would be really cool!
@@BarneyCodes Oh, many thanks! I've been reading the code for a while and couldn't figure it out, I'm not too familiar with Javascript coding 😅 So this is really appreciated
Personally I feel like it would look better if the steep areas were dirt instead of stone, and probably with a minimum height for it as well so the sand isn't turned into something else; it looks kind of strange to me. I'm not sure what your end goal is for the height map, but I think making the water have a layer or two as well would look good, like deeper areas being darker and less green. And idk how tall the mountains are supposed to be, but I think some snow caps would look nice. As for optimization and fixing artifacts, no clue lol
I think you're right about all of this! The stone can look a bit out of place in certain areas. I have got a snow colour in there, but it seems like, for the most part, the mountains are never high enough. I can probably push the middle up a bit to force it to be more mountainous (just like how I pull the edges down to create the island)
Why not generate an AO texture? This should add volume to the picture. Right now there is no shading at all in the shadows; I would give the normals the opportunity to darken the surface even more in the shadows.
Yea absolutely agree, the shadowed areas do look pretty flat! I'll separate out the normal shadows from the ray cast ones so that they still have an effect!
# assumes: import pygame as pg, import moderngl as mgl,
# and that self.ctx is a moderngl.Context created elsewhere
texture = pg.image.load(path).convert()
# flip vertically so the image matches OpenGL's texture coordinate origin
texture = pg.transform.flip(texture, flip_x=False, flip_y=True)
texture = self.ctx.texture(size=texture.get_size(), components=3,
                           data=pg.image.tostring(texture, 'RGB'))
# mipmaps
texture.filter = (mgl.LINEAR_MIPMAP_LINEAR, mgl.LINEAR)
texture.build_mipmaps()
# anisotropic filtering (AF)
texture.anisotropy = 32.0
return texture
Python code that generates mipmaps. Use textureLod() for in-shader use
That's a great point, thanks for the comment!
Yeah this is on the money.
Wow, thanks for such an in-depth answer! I'll have to do some reading up but this is so valuable! Cheers!
Oh that's very good to know! Thanks for the tip and encouragement!
Sounds awesome, would love to see the results when you get around to it!
I haven't heard of it but it sounds interesting, I'll take a look! Thanks for the comment!
These improvements look amazing! Especially the shallow water.
Thank you! I'm pretty happy with how it's looking!
The white lines are genius. Cool effect
Well now I feel a bit silly hahaha! Thanks for pointing that out, I'll change that over!
Sounds really interesting! I'll have to read up on it and try it out some time. Thanks for the comment!
Relaxed cone stepping is what we use professionally at my job to render elevation maps. It works amazingly, as a lot of the computation can be done offline.
As long as the terrain is static, this would work to precompute cone maps and similar.
Do reflections! you can do reflections of the sun on the water, and just distort it a little with a noise texture, and you can introduce diffuse reflections on the terrain, by setting the terrain color to be a lerp between the original color and white using the angle of the terrain, referencing the angle of the "camera" (always straight down)
Also, you should set the water color between a cyan (close to the terrain) and a dark blue (far away), so you'll get those gradients like on the caribbean beaches. Would look super cool!
These are great ideas! Reflections sound a little scary, but I'll give it a go. I definitely think the water can be improved, using a gradient sounds like a great idea!
Really nice video and great improvements. Well done!
You can improve the performance quite easily by switching your bilinear interpolation to calculate your smooth terrain height and your normals to barycentric interpolation which only needs 3 texture samples. If you want to be even faster than that, you can use the partial derivative functions ddx/ddy to sample the neighboring pixels for free and use them to calculate your smooth terrain and normals.
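On the ddx/ddy idea, a minimal sketch of how that could look in GLSL (dFdx/dFdy are the GLSL equivalents of HLSL's ddx/ddy and need the GL_OES_standard_derivatives extension in WebGL1; `heightScale` is just an illustrative factor):
```
// Sketch only: `height` is the smoothed terrain height already computed for
// this fragment, `heightScale` converts height units to UV units.
vec3 normalFromDerivatives(float height, float heightScale) {
    float dhdx = dFdx(height);  // height difference to the neighbouring pixel in x
    float dhdy = dFdy(height);  // height difference to the neighbouring pixel in y
    return normalize(vec3(-dhdx * heightScale, -dhdy * heightScale, 1.0));
}
```
One caveat is that hardware derivatives are computed per 2x2 pixel quad, so normals from this tend to come out a touch blockier than the four-sample version.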
Thanks! I haven't heard of barycentric interpolation but it sounds great, I'll check it out!
Thanks for the comment!
Not even here to add a suggestion, just making a comment to drive engagement. Your video was the first thing that came up so hopefully the algorithm gods are being kind to you this morning.
I love these kinds of videos, great content!
Haha thanks so much, this video has already done so much better than I expected! It certainly seems that the algorithm has taken a bit of notice!
When I saw the last video I instantly reprogrammed it in Unity using HLSL. It also showed me how to use the map function, which I use all the time in my normal programming.
Nice video! I have a few simple suggestions which should improve the overall fidelity :)
- Increasing the intensity of the normal-mapped shadows would give the island more "ambient occlusion", which I recommend. It'll help to highlight details within big shadowed areas, even if not physically accurate.
- Determining the actual water colour based on how shallow it is would also look great, sharply interpolating from a turquoise to a deep blue based on the depth of the water.
- Adding some noise to the sin value of the sea foam would make it less artificial
- Also changing the colour of the sea foam (to a bolder white) would help define the outline of the island (since the sand showing through the water blurs the lines a bit)
These are really great suggestions, thanks so much! I feel like all of these are things that I've noticed subconsciously but couldn't quite put my finger on!
Cheers!
The waves should be continuously moving towards the shore to be more believable. Then you could use the terrain height and some randomization to vary the wavelength, with longer waves in shallower water, to really sell it. For the randomization, a slowly moving or oscillating premade noise texture would do just fine; that doesn't need to be dynamic.
Love your animations
Thanks so much!
Change line 160 (where you mix in the water colour) to be:
col *= mix(vec4(1.,1.,1.,1.), colourWater, waterLerp*4.);
and then change your water colour to something like:
const vec4 colourWater = vec4(0.384,0.8,0.9, 1.);
You will get a beautiful cyan to blue gradient on the water, and you will get accurate underwater colours, as a result of multiplying the terrain color with the water colour, effectively simulating water filtering out red light with depth
Cheers!
I knew that my water was a bit off, but couldn't figure out how to fix it. I'm keen to try this out, thanks so much!
i wish there were a way to save helpful comments like this
@@jtmcdole copy-paste into Obsidian/Notion
Nice work on the presentation!
Looks very clean and is easy to follow.
I can't find such info from a cursory glance. But, if p5js exposes the rendering context somehow, or a compatible API, then you should be able to set the lods with the texImage2D function whose second argument is the mipLevel.
Generating the max-mips themselves could be done by populating Uint8ClampedArrays at the same time as you create the height-map, by basing the mip indices off of the main height-map indices:
```
// mip is an array of Uint8ClampedArrays of the appropriate sizes in this snippet;
// this runs inside the per-pixel loop that builds the main height map
const mipIndex = (x, y, size) => Math.floor(x * size / width) + size * Math.floor(y * size / height);

let n = 255 * noiseVal(x, y);
heightMap[x + width * y] = n;          // full-resolution height
const i1 = mipIndex(x, y, 128);
mip[1][i1] = Math.max(mip[1][i1], n);  // 128x128 max-mip
const i2 = mipIndex(x, y, 64);
mip[2][i2] = Math.max(mip[2][i2], n);  // 64x64 max-mip
```
... etc
Typed from mind so it might be a bit broken and inefficient, but the concept is hopefully clear!
The normals could also be calculated in the same manner, but instead you could use the alpha channel for the height in the main texture and use rgb for the normal.
I love the changes to the water rendering and the slanted terrain! Maybe the strength (colour) of the waves could be affected by the calculated slope somehow?
Probably doesn't need to be said, but reflection and shadows from moving clouds (and stars for a night mode) would be interesting to see. Could be an interesting way to learn 3D textures if that's something which interests you.
Keep up the good work!
This is really helpful, thanks so much! I'll give this a go when I get a chance
And thanks for all these great ideas too, I'll have to look into doing reflections, they'd look so good if I can get them working!
Man, watching your videos I can feel the amount of work that went into them, particularly the visualizations.
Haha yea they can take a bit of time but I think they help get the point across better than my words would on their own! (Hopefully!)
As a general idea, it would be good to bake as much of this into the actual texture as possible. For example, you can do the coloring (including the stone and shallow sea colors) in the texture itself so that you don't have to render all of that live, since it's 100% static. Then you could just run the wave shader and the shadows/normals on the GPU, and it would give you much better performance if you're using a much larger map size.
That's definitely true, and in the last version the colours were separate for exactly this reason, but my plan is to make the terrain not so static!
Really appreciate the comment!
Not necessarily. An optimization like that is highly GPU and implementation dependent. More textures means higher VRAM requirements and potentially more reading from VRAM, or even system RAM, depending on what GPU is being used. Reading from memory is quite expensive, and doubly so when working with a GPU, so if a calculation is simple to do based on existing data, it's often as fast or faster to calculate it in real time than to try and cache it in a texture. This is really something where you'd have to decide what type of system you're targeting and profile both options to see the potential cost and benefit.
How about some softness in the shadows? The farther a shadow is from its occluder (mountain) the "blurrier" it gets, because of umbral/penumbral effects. There's a natural falloff of the shadow rather than a sharp line. One way I've approximated this with ray-marched shadows in the past is to multiply the shadow color by tanh(height-above-terrain) at every raycast sampling point. So the closer the ray sample point gets to terrain the darker it is shaded (even if it never fully intersects the terrain). This is sort of the reverse of what penumbral / umbral effects are in reality, with the 'fading out' of the shadow occurring "inside" the shaded region rather than 'outside". Still, it is a fast approximation and easy to calculate alongside existing shadow occlusion code! Happy to answer qs if that doesn't make sense
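A minimal GLSL sketch of folding that idea into the existing march, assuming a `getHeight(uv)` helper like the one in the video; here the tanh term is kept as a running minimum rather than a running product, which behaves similarly but is easier to tune, and `softness`, the step size and the step count are just illustrative.
```
// tanh() isn't built into GLSL ES 1.00, so build it from exp()
float tanhApprox(float x) {
    float e = exp(2.0 * min(x, 10.0));
    return (e - 1.0) / (e + 1.0);
}

// Returns 0 = fully shadowed, 1 = fully lit.
float softShadow(vec2 uv, vec3 lightDir, float softness) {
    vec3 p = vec3(uv, getHeight(uv) + 0.001);
    float shade = 1.0;
    for (int i = 0; i < 64; i++) {
        p += lightDir * 0.01;                     // march towards the sun
        float clearance = p.z - getHeight(p.xy);  // how far the ray is above the terrain
        if (clearance < 0.0) return 0.0;          // hard hit: fully in shadow
        shade = min(shade, tanhApprox(clearance * softness)); // darker the closer the miss
    }
    return shade;
}
```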
Yea I think that some softness would add a lot, I'll try out the tanh method! Another suggestion I got was to draw the shadows onto a separate texture so it can be blurred before combining it with the terrain, in which case you'd get a mix of it being inside and outside the shaded region
Perfect! Now the next step is turning this into a cute fishing and town management game
This type of raycasting also occurs in parallax mapping (displacement mapping). A powerful optimisation is a preprocessing step that stores, with each terrain position, essentially the maximum distance you can safely step without skipping over any terrain, regardless of direction.
Interesting! I'll have to look into it!
I am planning to use some dynamic heightmaps in the future, so it might not be a perfect fit for that, but I'll check it out!
Thanks for the suggestion :)
Great work, this video shows that you really care about our suggestions!
In addition to the noise map, you could use a curve to change the profile of the terrain. You could tweak the curve to make mountain slopes more vertical and valleys and shores flatter.
This way you can make it look more natural and get rid of those big chunks of rock that appeared in the middle of the plains.
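A tiny sketch of what such a shaping curve might look like in GLSL, assuming heights normalised to 0..1; the thresholds and exponents are just illustrative.
```
// Sketch only: remaps the raw noise height to flatten plains and steepen peaks.
float shapeHeight(float h) {
    float plains    = h * 0.5;                                          // keep low ground gentle
    float mountains = 0.2 + 0.8 * pow(max((h - 0.4) / 0.6, 0.0), 1.8);  // steepen the peaks
    return mix(plains, mountains, smoothstep(0.35, 0.5, h));
}
```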
Thanks, you all have lots of really good ideas and knowledge so it would be silly for me to ignore them!
I think that's a great idea, I might focus a bit more on the terrain generation instead of the shadows for a bit, because there's a lot of improvements that could be done there!
Well done! It looks amazing!!! So nice to see how open you are to suggestions!
Thank you! Taking suggestions is a great way to learn new things!
The slope-based stepping you're doing reminds me a lot of SDFs in computer graphics. They're basically an entirely different way of conceptualizing models by using mathematical functions, rather than strict triangular polygons. They're not super common, but they're an interesting concept in their own right, and can be really amazing when used properly
SDFs are wickedly cool aren't they! I've been trying for a while to come up with some way of efficiently converting a heightmap into an SDF but I haven't really made any progress on it unfortunately. It would be amazing if I could get it to work though, there are so many cool effects possible with SDFs!
Not sure if I understood your question, but if I recall correctly openGL allows you to upload your own mipmaps. So you can sample them the way you want and then upload them.
Ahh thanks so much, that makes sense! Mip maps are very new to me so I don't really know what I'm doing haha!
I don't know who you are or what this project is *YET* but it looks great and I can't wait to see it grow!
Render the shadows to a separate output texture, then do a tiny blur pass on that before adding it in.
This will give a level of anti-aliasing, and speckle removal.
For the lerp function: you're using the fractional part of the distance between two cells; multiply it by 2, subtract 1, then raise it to a power (1.5 seems good). This will give you a curve and smooth out your points further.
That's a really great idea, a blur would probably make this so much better!
I guess that's basically an easy in/out isn't it, can't believe I didn't think of that!
Thanks for the suggestions!
@@BarneyCodes To ensure you don't get double shadows from the single light source, write the shadows as black onto a white texture, do your blur, then composite that in with an alpha or whatever other system.
I recommended this for Factorio back when they had double shadows if two objects overlapped, since it merges all shadows first, then blurs any edges.
If you also record how far the point is from the object casting its shadow (you've already done the raycast to find this), you can apply a blur based on that distance, getting a sharp shadow when the object is close and a much blurrier one further out. Just sample more of the local area.
This gets more expensive the more you sample, so if you instead calculate the mipmaps of the original shadow image with a linear filter, and then sample from a smaller mipmap the further you are from the object casting the shadow, you'll make sure that all pixels have the same cost for their blur effect, giving you a fairly nice penumbra for a low cost.
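A small GLSL sketch of that lookup, assuming the shadow pass has already written its result to a mipmapped `shadowTex` and the ray travel distance to `distTex` (both names and `blurScale` are placeholders):
```
#extension GL_EXT_shader_texture_lod : enable   // WebGL1; GLSL ES 3.00 has textureLod built in

uniform sampler2D shadowTex;  // shadow result with linear-filtered mipmaps generated
uniform sampler2D distTex;    // how far each shadow ray travelled before hitting terrain

float blurredShadow(vec2 uv, float blurScale) {
    float travel = texture2D(distTex, uv).r;
    float lod = clamp(travel * blurScale, 0.0, 6.0);  // further from the occluder => blurrier
    return texture2DLodEXT(shadowTex, uv, lod).r;
}
```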
@@Silverware09 Ah yea that makes sense, thanks for the tip!
Doing different levels of blur depending on ray travel distance sounds like it would be really cool! I'll give it a try!
I think it would be cool to simulate erosion by gradually changing the height map image.
You could add some noise or maybe add a slight Gaussian blur to it as a function of time to smooth out the sharper peaks.
It's possible that the reason you're still seeing some weird pixelation in your shadows is due to floating-point inaccuracy in your noise texture. I don't know what data type it is, but if the smallest possible value difference is too big, it could produce that effect. This is also supported by the fact that there is some slight visible banding on the terrain's diffuse lighting (the normal vector stuff). I'm not certain if anyone else has mentioned this already, but it's worth considering. Either way loved the video neat project :)
Thanks so much for the comment, this is absolutely spot on! I didn't catch this myself so great job on picking it up! Someone on my discord pointed it out to me so I've fixed it for the next iteration and it makes a massive difference. I'm currently editing the next video in this series and this is one of the improvements I've made. I now pack the height data into all 4 colour channels instead of just using a single channel.
Thanks again!
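For anyone curious what packing a height value across the colour channels can look like: one common convention (a sketch, not necessarily the exact layout used here) splits a 0..1 value over the four 8-bit channels with increasing precision and reassembles it in the shader.
```
// Sketch only: standard fixed-point RGBA8 packing, decoded with a dot product.
float unpackHeight(vec4 rgba) {
    return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}
```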
Mipmaps - you can make them manually and load them one by one, halving the size each time until you get down to 1x1 pixels, or you can use a file format that allows storing mipmaps, like DDS/DXT. Details may depend on the bindings/engine used.
Thanks for the info! I haven't used mip maps at all before, so this is really helpful!
how is this video sooooo good. really cool man
Thank you! Really appreciate it!
you can generate the mipmaps manually using a compute shader call at the start (assuming your terrain is static)
with your solution, you can also generate a SDF for the terrain and basically turn it into a ray marcher. it also paves the way to other cool effects like ambient occlusion and volumetric fog.
Thanks! The terrain is static at the moment but that's going to change. Performance isn't terrible at the moment though so I'm not sure if I'll switch to the mipmap method or not.
I've tried to find ways to generate an SDF from a heightmap but couldn't find anything that worked. The only thing I can think of doing is approximating it with eg cones or spheres and passing that data into the shader, but that seems like a lot of work, especially if the terrain is going to be dynamic. If you could point me to some helpful resources, that would be great!
@@BarneyCodes usually its easier to generate an SDF from a point cloud, but that might be expensive.
My brain passes off the pixelation on the green areas as trees.
Wow that looks great 😮
Thanks!
I think that specifically in the sand layer you should have it change to a secondary material at a much lower slope value than the stone in higher areas; sand settles at shallower slopes than the other materials present, so anything with a higher slope should be a harder material.
I have an idea for how you can approximate ambient occlusion. For each pixel, imagine a plane perpendicular to its normal. Sample the height of its neighbours in a radius. Find the average amount the neighbours are higher than this plane. This approximates concavity, which approximates AO. Some variables you can toy with: weighting neighbours based on proximity; choosing to allow pixels under the plane to offset the effect of those above or not, whichever looks better; the amount of AO resulting from this measurement. You may want to use a max function with sea level so the ocean is treated as an occluder.
You could also, for each pixel, raycast in several directions and measure how many casts reach the sky unoccluded. More expensive. Maybe you could bake either?
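A rough GLSL sketch of the tangent-plane version, assuming the usual `getHeight(uv)` helper and the per-pixel normal already computed; the direction count, `radius` and `strength` are just illustrative knobs.
```
// Sketch only: darkens pixels whose neighbours rise above the local tangent plane.
float approxAO(vec2 uv, vec3 normal, float radius, float strength) {
    float h = getHeight(uv);
    float occlusion = 0.0;
    for (int i = 0; i < 8; i++) {
        float a = 6.2831853 * float(i) / 8.0;            // 8 directions around the pixel
        vec2 offset = vec2(cos(a), sin(a)) * radius;
        // height of the tangent plane at the neighbour's position
        float planeH = h - dot(normal.xy, offset) / max(normal.z, 0.1);
        occlusion += max(getHeight(uv + offset) - planeH, 0.0); // only terrain above the plane counts
    }
    return clamp(1.0 - strength * occlusion / 8.0, 0.0, 1.0);
}
```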
Some AO would probably add a lot to the overall look, I'll have to experiment with these methods and see what works. Thanks for the suggestions!
One thing that could be improved: right now you have lots of detail in the light, but areas in shadow end up very flat. They should still get some shading from the light from the sky itself, giving a bit more depth to the shadows.
The cheapest method: use the dot product with a straight-down light direction to give some basic shading from the sky.
Fanciest: you can pre-compute an ambient occlusion term by tracing in lots of directions to find out how much of the sky is visible to each point: the more sky it can see, the brighter it is even when it’s in shadow. This is pre-baking an ambient occlusion texture.
Fancy realtime: once you have the maximum mipmaps, you might be able to approximate the sky occlusion above by sampling the levels nearby and weighting them. Basically you want SSAO techniques, looking at papers on those might be good.
You're definitely right! I'll have to try out some AO, I think it would really add a lot! In the mean time though, as you suggest, I think I should apply the dot product shadows inside the shaded region as well!
I'll have to do some reading up on these techniques!
In addition to AO of some kind, you could try the three directional light solution in Inigo Quilez's article "outdoors lighting" from 2013. Not the most realistic, but it's extremely simple and can look pretty good
I'll give the article a read, thanks!
Really satisfying to see your progress and it's encouraging to see all the very helpful comments!
Mine will perhaps be less helpful as it might send you on a frustrating goose chase... But my challenge to you is to add floating islands! Maybe it could be done in a single heightmap by using RGBA instead of just grayscale? Or perhaps swap the ocean for big puffy white clouds.
There's a lot of appeal to the style you have going. IDK if you're planning on making it into a game, but I'm getting some kind of flying/Laputa castle-in-the-sky vibes from the bird's eye perspective. I can see it going in lots of cool directions.
I'd love to add some clouds! Though they could be a bit tricky with the way this is implemented, I'll have to give it some thought!
Thanks for the comment :)
For each pixel, working from the white pixels to the black pixels, use the pixel height and sun direction to determine the length of the cast shadow, and draw it only if the pixel under the shadow is lower than the original.
You could improve the lighting even further by also using an ambient occlusion map. This should only show in the shadowed regions. This AO map could be pre-generated using the height map, or you could have a real time version by shooting a few extra rays from every pixel, in random directions. You can control the width of the ao effect by adjusting the ray length.
Would be epic
Great idea, AO would definitely help it out I think! Thanks for the suggestion!
Would be really interesting to explore how ambient occlusion, or similar effect could add more depth to the terrain itself.
I think AO would look great, will have to explore it some more! Thanks for the comment!
If we’re talking optional extras, how about red shifting everything when the sun is closest to the horizon? I would imagine something like a sigmoid interpolation rather than a linear one might work well?
I think a day/night cycle would be really cool! Some nice red sunrises/sunsets would look great! Thanks for the comment :)
The elevation of land impacts its climate and the climates of nearby land due to sun exposure. You could randomly generate a whole cluster of biomes just from some noise maps.
One really easy optimization would be to use partial derivatives to calculate the normals from the heightmap. Essentially partial derivatives would do pretty precisely the same thing you are doing now manually but it would reduce the amount of redundant calculations by not having to calculate the height 4 times for each pixel. Partial derivatives are often used to reconstruct normals from position data for 3d rendering as well.
Oh this sounds really cool! I'll have to look it up but it sounds like it could be great! Thanks for the suggestion!
Also do ambient occlusion! You just need to do a comparison between the height of the pixel and its neighbouring heights, and if there are higher neighbouring heights in opposite directions, darken that pixel towards black (only darken pixels that are already in shadow, or else it will look weird)
Nice ! You could also try to add some Temporal Noise to your sampling and get some fake soft shadows, I think...
Great video. The animations are awesome
Thanks so much, glad you like them!
maybe adding some clouds w. shadows? :)
Ooh that could be really cool, thanks for the suggestion!
Great follow up!
Thanks!
I would love if this becomes a regular series for as long as it gets this much engagement in the comments :)
Just to see how perfect you can get this one little tech demo
I've been blown away with how helpful and encouraging all the comments have been, it certainly helps with the motivation! I'll definitely be doing some more updates in the future :)
Damn Barney codes now? That's cool but you still owe Gordon a beer 😤
Hahaha security was getting a bit dull so decided to program in my spare time ;)
I would love to see some more attention spent on the water, maybe implementing some common water shading techniques? Using the height map below the surface to get a depth value to saturate the deeper parts of the ocean could be one idea.
Yea the water is definitely lacking a bit, needs a lot of work! Hopefully will be addressed in the next iteration! Thanks for the comment :)
Great stuff, I personally don't particularly like the blurry sand that is under the water but maybe tweaking a bit would make it look better, rn it takes a lot of attention away from the main island imo
Yea I think the way I'm handling the water is a bit off. I think there would be a way to do it "properly" that would look nice, but I see what you mean. I'll have to play around with it and see if I can get it looking better!
Thanks for the comment!
@BarneyCodes - I think the issue there is just the colour used for the “land under water” it should probably be much more blue, except for the very shallowest parts.
I did a very similar project to yours a few years back, but used a Bresenham line-drawing algorithm for the ray tracing and some simple math. One aspect that really improved the overall style was emulating the color changes when sunlight is scattered by molecules in the atmosphere. This would allow for dynamic sunrise, midday and sunset cycles.
I think a day/night cycle would be really nice! I'll have to do some reading up on how it works, but this is a great suggestion, thanks!
@@BarneyCodes I would have a look at a resource:
Chapter 16. Accurate Atmospheric Scattering. By Sean O'Neil.
@JohnHilton-dz4mi thanks, I'll take a look!
i think you should keep some of the speckles, gives it a unique character in my opinion...
You could probably also implement some basic global illumination using the normals, slope value, and shading values - all things you've already calculated. Use the normals in addition to the slope to get a 3D normal vector, then check surrounding pixels for ones that are facing the surface and modulate the currently shaded fragment based on the shading and material (defined by slope) of the surrounding pixels. This could be used to give some variation and depth to your shadowed regions.
This sounds really cool! I'll have to do a bit of research into it but it sounds doable! Thanks for the suggestion!
Another thing you could do to decrease the speckles even more is to use multi-sampling. I don't remember if you can just double the resolution of each texture, enable OpenGL multi-sampling and have everything work, but in one project of mine I initially render to a 2-3x resolution texture and then down-sample it using a Lanczos kernel to create the final image.
I haven't heard of multi sampling but I'll definitely check it out! I might need to move the height map generation into a shader because at the moment it's quite slow on the CPU, but if I do that it should be pretty quick to generate it at much higher resolutions.
Thanks for the suggestion!
Generating the mipmaps can be done by hand fairly easily; you literally render each mipmap >0 while getting the max value for the 4 samples "above". You can also use compute shaders, binding the previous and current mipmaps as different images.
Also, you can just set your sampler to use linear interpolation by default? Why are you using nearest filtering on the heightmap?
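If it helps, here's what one downsample pass of that max-mip generation could look like as a plain fragment shader; `prevLevel`, `prevTexelSize` and `vUv` are placeholder names, and you'd run this once per level, drawing a full-screen quad into the half-size render target.
```
// Sketch only: previous mip level bound with nearest filtering,
// prevTexelSize = 1.0 / (previous level's size).
uniform sampler2D prevLevel;
uniform vec2 prevTexelSize;
varying vec2 vUv;

void main() {
    float h00 = texture2D(prevLevel, vUv + prevTexelSize * vec2(-0.5, -0.5)).r;
    float h10 = texture2D(prevLevel, vUv + prevTexelSize * vec2( 0.5, -0.5)).r;
    float h01 = texture2D(prevLevel, vUv + prevTexelSize * vec2(-0.5,  0.5)).r;
    float h11 = texture2D(prevLevel, vUv + prevTexelSize * vec2( 0.5,  0.5)).r;
    gl_FragColor = vec4(max(max(h00, h10), max(h01, h11)));  // keep the tallest of the four
}
```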
Thanks so much for the info! As you can clearly tell I haven't done anything with mip maps before! I'll give generating them a go :)
And now I feel silly haha I'm using P5js for this project (I think I'll change that soon though), and they do a lot of the boilerplate setup for me so I haven't explicitly set the filtering anywhere, but I think I should be able to change it!
Thanks for the comment!
@@BarneyCodes "And now I feel silly haha I'm using P5js for this project" Ah man then I was maybe a bit harsh, no intention of that, these videos are great. I thought you were using OpenGL/DirectX/Vulkan. I have no clue how much control over texture generation and sampling you have in p5.
@@stysner4580 No worries at all! I was aware that you can change the texture sampling, I just didn't think of it when I should have!
love the video! another interesting idea for the shadows would be to store the lowest distance you get to the terrain when marching towards the sun, and then add a little bit of "fuzz" to the shadows, making them blurrier, like you see in real life! what do you think? it should be fairly free as well
Note for this: a very good article on this algorithm is Inigo Quilez's "SDF Soft Shadows" article, I recommend looking at that. I'd link directly but I know YouTube hates links.
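A rough GLSL sketch of that "closest miss" bookkeeping, loosely following the soft-shadow trick from the article mentioned above; `getHeight(uv)` is assumed from the video, and `k`, the step size and the step count are just illustrative.
```
// Sketch only: 0 = fully shadowed, 1 = fully lit; k controls penumbra width.
float penumbraShadow(vec2 uv, vec3 lightDir, float k) {
    vec3 p = vec3(uv, getHeight(uv) + 0.001);
    float res = 1.0;
    float t = 0.0;
    for (int i = 0; i < 64; i++) {
        p += lightDir * 0.01;
        t += 0.01;
        float clearance = p.z - getHeight(p.xy);
        if (clearance < 0.0) return 0.0;     // terrain hit: fully in shadow
        res = min(res, k * clearance / t);   // remember the closest near-miss along the ray
    }
    return clamp(res, 0.0, 1.0);
}
```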
Yea I think adding a bit of softness to the shadows would really help the effect a lot! I'll try it out :) Thanks for the comment!
You can blur your shadows a bit, so they look a bit softer at the edges and darker towards the center. You can also go full penumbra and determine the shadow color by the length of the ray between your initial to-check pixel and the first intersected pixel.
Great suggestion, soft shadows will look much nicer I think! I'll have to look into approximating them properly!
@@BarneyCodes if you do that, you actually have a really easy way to do it accurately. The blurring of shadows is based on how far the light has traveled, as well as the size of the light source. In most rendering engines, getting that light traveled distance efficiently is quite difficult, but you already have all the data you need for it. I was looking into it a while ago for 3D and discovered a technique called PCSS, which was developed by NVidia back in 2005. Unfortunately, it seems like the technology hasn't really improved much since then, which is a shame, because it adds so much realism
You can generate your own mipmaps and set them using the level param of the glTexImage2D call.
Could probably use some 2.5D SDF magic to do AO - though there is probably an even easier, simpler method to create AO
I like that idea a lot, thanks for the comment!
Great video!
Thanks, glad you enjoyed it!
Not sure how much this idea helps, but concerning the stretching and oddities with the waves: maybe you could take your normal map and dot product it with the plane of the water (pretty much giving you a "shadow" of the vector on the water), use that to dictate the rotation of the water waves, and use the length of the vector to change the frequency of the waves, making them much more pronounced near cliffs and barely visible around low, flat beaches.
Oooh that's an interesting idea! I'll have to play around with it and see how it looks! Thanks for the suggestion!
Love love love your journey, it's fantastic on how well you did. Have you thought about adding lights/reflection? It might look cool on the water. What do you think?
The water definitely needs some work, I think adding some highlights and reflections could really make it pop! Thanks for the comment!
I'm not sure how the mipmap generation works in OpenGL, but in DirectX you can use a pixel shader with a specific mip level as render target to manually construct a mip chain
Awesome, thanks for the info! I haven't worked with mipmaps before (if you couldn't tell hahah) so this is great to know!
I’ve programmed really really reallyyy basic graphics (only a pseudo 3D “engine”, like Wolfenstein), so it’s super cool to see things that look as cool as this! I know the terrain isn’t a big concern of yours, but a tiny 5% change that could have a 50% improvement could be removing the small stone texture on the beaches of your island. Otherwise super cool stuff man
I kinda think that pseudo 3D stuff is almost cooler than actual 3D just because I find all the tricks and hacks that get it to work super interesting!
I think you're right about the terrain, I might do a bit more to make it look a bit more natural! Thanks for the comment!
This is just a thought of something that might work; I haven't tried it, but I'd like to share it anyway. Would it work to calculate the steepness of the whole map, i.e. a delta-x value for every pixel and a delta-y value for every pixel? Then for the raytracing you could find how steep the hill needs to be before light gets blocked. You could then iterate over the pixels in that direction and see if the average is above or below the threshold. If it is above, then the light would need to be blocked.
Interesting idea! Sounds similar to the "cone mapping" technique someone mentioned in another comment, I'll have to try it out
Thanks for the suggestion!
Have you explored ray marching for the shadows?
I guess you could say I'm doing a bastardised version of ray marching for the shadows; the issue is coming up with an SDF for a height map. The method I use currently essentially just approximates the SDF by only looking at the distance on one axis (the height), which obviously isn't perfect, but gets the job done!
Maybe instead of getting shadows working for height maps we could look at ways of generating terrain as SDFs? Could be an interesting area to explore!
@@BarneyCodes Yeah idk
would probably be pretty tricky and slow/no longer realtime but global illumination/bounce lighting would be very interesting.
i can think of a few ideas that would work but all of them would be quite convoluted
I've never tried to do GI, but I should look into it, thanks for the comment!
Mipmaps for HIZ like you are describing are usually generated using e.g. a compute shader
Ahh that makes a lot of sense, thanks for the info! As you can probably tell, mip maps aren't something I've used before hahaha
instead of 'mipmap', 'binary space partitioning' is exactly what you need, i think.
That checking the height above the height field seems like it's halfway to making an SDF for the height field.
It's a pretty decent approximation, but it would be great to come up with a more accurate function so that we can do proper SDF effects with it!
You could maybe add some specular reflections
Shadow falloff and water shading
I was thinking about some sort of shadow falloff where I make the shadows softer the further the ray has travelled before hitting terrain. I'll have to play around with it and see how it looks. Thanks for the suggestions!
Yeah, the shadows on the water look much too “hard”… At the very least I’d lighten and soften them a little (maybe take the water depth into account?)
Love this little project! ☺️ As someone who hasn’t coded in years, or had much to do with graphics rendering, some of it goes over my head. Looks like fun though!
If you are doing bilinear interpolation to determine the height, why not just use bilinear sampling for the texture?
Yea I sort of forgot that was an option 😂 that's the danger of using abstractions of shaders I suppose! This is a much better solution, thanks for the suggestion!
It would be cool if you could change the angle at which the light is hitting the terrain. The closer it is to 0 or 180, the more red-shifted the hues would be.
Adding some nice day/night cycle would be really cool! Great idea!
Because all of your color tones are already so warm, the previous shadow already felt cool. You didn't need to blue-shift it. Looks kinda alien imo now
Yea I think you're right, part of the problem is that the normals introduce a little bit of shadow almost everywhere, so the whole terrain is now a bit more blue, which does look a bit strange. I think what I should probably try is keeping the normal shadows and the ray cast shadows separate so they can have different colours, and that way there's also still some detail in the terrain in shaded areas
I think the shadows would look better if they got less opaque and blurrier the further away they are from their origin.
I agree, I think that would look really great! I've already got the information on how far the ray has travelled before hitting terrain so hopefully it shouldn't be too hard...
I love shaders but I didn't go to school much as a teenager :(
Going to tell my kids that you don't know what job or hobbies might present themselves as an adult lol
If you have the drive you can definitely learn all of this stuff online for free! The book of shaders is a really excellent resource for getting started and feel free to ask any questions, I'm more than happy to answer!
That video has so much post work! What tools do you use?
Thanks! I use OBS to screen record and DaVinci Resolve for editing and doing simple animations. Any of the more complex animations I just code myself using P5js! Hope that helps!
So you're doing it in 1 draw call right? Using multiple draw calls should give a big performance boost I think, by storing an intermediate result in a texture. For example, the first pass could check only the 15 closest pixels and store the maximum shadow length found for that pixel. The second pass would only do 15 checks as well, but with 15-pixel leaps. Thus you would only use 30 texture lookups per pixel instead of 200.
Interesting! I think I'd have to do the same number of texture lookups in total though to get the same result, does splitting that over multiple draw calls actually make it faster? Or maybe I'm misunderstanding!
@@BarneyCodes You would need much fewer lookups, because the calculations done in the first pass are shared by multiple pixels, the same idea as "dynamic programming".
Each pixel is independent though I think? Each one is on its own trajectory in the light direction and each ray is parallel so unfortunately I don't think you can share any information between pixels
Why didn't you use the usual ray marching optimization technique? Generate a 3D map of distances (or maybe some sparse spatial structure like an octree to save memory); you don't need that high a resolution, and it's relatively easy to generate (maybe it could even be done at run time, though I think it's better to spend some RAM instead). For example, you could take the height h of a point above the terrain as an initial guess, then check terrain points in a circle of radius h, going from the centre outwards in a spiral and checking whether the distance to any terrain point is closer than h; if it is, store that distance as the new value of h (which then limits your search radius). Having that 3D map, you can step by the stored distance in any direction. With a limit on the terrain slope, you could do some optimization to limit the search radius.
I think if I was only going to use static height maps then this would be a great optimisation for sure, but my plan is to have dynamic height maps (maybe I should have mentioned this in the video!), in which case I'd have to re-create the 3D map every frame, which would likely end up less efficient.
A cool technique though, thanks for the comment!
I think it'd look nice if the direct shoreline, where the surface of the water hits the land, was less affected by the land's colour, giving a kind of sea-foamy look.
I like that! I could pretty easily make a sort of "permanent wave" where the water meets the land, and then still have the animated waves for further out in the water
Could be faster to use the normals to create an initial shadow map, and then ray trace from the initial shadows to see where they extend to.
I hope you bring this to Star Mining Co
I was thinking that it could look pretty cool in that! Might have to try it out!
@@BarneyCodes I think so too! As the player progresses towards the center, they'll end up visually digging a ravine as they path along. It'll also give the surface of the asteroid more character.
Definitely! It would add a really nice sense of depth I think!
Also, you can take a look at the ddx and ddy functions to generate normal maps, then you don't have to sample neighbouring pixels
Sounds good! Thanks for the suggestion!
The sun is also infinitely (essentially) far away, which should make all rays come in at the exact same angle. Maybe I missed it in the video.
I fixed that since the last video but forgot to mention it! Good pick up
Why not try passing the shadows through a gaussian blur to add some quick and dirty diffusion and to smooth out those few remaining oddities?
Yeah I think that'll be part of the next update, softening the shadows will probably make it look much better!
TL;DR of this comment:
Since your setup is so specific, you can get away with simplifying your calculations a WHOLE bunch (as detailed below), which should give you a faster and more realistic frame in the end.
The details:
One way to improve the efficiency and realism at the same time should stem from the following fact:
The sun is so absurdly far away from the earth that the actual sunrays that arrive here are pretty much parallel
(As in, not like scientifically precisely, but so close that no human on earth could ever tell the difference)
Since your light source is closer, your shadows spread out. So your still frames do not look like they're lit by the sun, but more like they're lit by something exploding just off screen.
We can actually use the parallelity of sun rays to our advantage, though. Because it means that all of our rays are gonna have the same direction vector, for any single frame. So we can calculate this direction once at the beginning of a frame, and what it "means" for our rays (i.e. their step sizes in x y and height), and use those same step sizes for all of our calculations, turning most of them into additions which are notoriously fast.
We can also do the following:
Instead of casting extremely many light rays in hopes of hitting everything you can, mark all your pixels as "lit", then iterate over your pixels, casting shadow rays from each, as follows:
If the pixel we're looking at is already shaded (by another pixel from a previous pass), you should be fine not casting another ray, since the ray that marked our pixel should already have marked everything our new ray would've marked anyways. This saves us even more calculations, since we're not casting rays that we don't need to.
If, on the other hand, the pixel we're at is still marked as lit, cast a shadow ray from that pixel, into our fixed direction.
The ray should do the following:
1. Step one pixel length into its direction
2. Check if it intersects the pixel it is currently "hovering over". If so, stop calculating this ray. If not, mark that pixel as shaded and repeat from 1.
Once this is done for each pixel, your frame should be complete, and your shadows should be nice and parallel.
For the pixel iteration order, either "highest to lowest" or "closest to furthest" should be best, as we want to mark as many pixels as shaded as quickly as possible in order to not cast more rays than necessary. By "closest to furthest", I mean in terms of our light direction: If the light is going east, start from the west, because in this situation our rays can only shade pixels to the east of their starting pixel anyways.
This is a really well thought out comment, but I think unfortunately, since I'm doing things in a shader (where each pixel has to do all its calculations in isolation and data can't be shared between pixels), I'm not sure that it will work :'(
For sure if this was being done on the CPU this would be a fantastic approach though, and I really appreciate the time and effort you put into the comment, so thank you!
5:11 : Wouldn't it be easier to just use the interpolation the sampler2D is configured with in the shader? I assume you set the heightmap to the "nearest" min/mag filter; maybe set it to "linear" instead, because that does the exact same thing you calculate there. Or am I forgetting something here?
Edit: You could also provide the shader a lookup texture for height-colors. That could even extend to a full texture in the future. If you're looking into performance more, you could replace the "if/else if" color block with the lookup texture.
Yea definitely should have used bilinear for texture sampling, I completely forgot it was an option because I'm using P5js for this and it's abstracted away (it wasn't a conscious choice to use nearest!)
I think a texture lookup is a great idea, I could even do something like height on the y axis and slope on the x axis so that I can easily have different colours for cliff areas in different sections of the map?
Sure, I think that's a possibility. Maybe just use the dot product between the up direction and the normal of the heightmap to get the slope, and thus the x axis. I don't really know what this would look like from an artistic POV, but it's a possibility. Also, you should presample the normals and store them in a normal map (Float32 RGB texture). That would also enable interpolation between normals. It could even be generated on the fly using compute shaders or, a bit simpler, a render texture: simply do a draw call which renders the normals into the texture. Then you could even make changes to the terrain and it would recalculate very fast.
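A small GLSL sketch of the lookup-texture idea, assuming a 2D texture (called `colourLUT` here purely as a placeholder) with slope along x and height along y, replacing the if/else-if colour ladder:
```
// Sketch only: both lookup axes are assumed normalised to 0..1.
uniform sampler2D colourLUT;

vec3 terrainColour(float height, vec3 normal) {
    float slope = 1.0 - clamp(dot(normal, vec3(0.0, 0.0, 1.0)), 0.0, 1.0);
    return texture2D(colourLUT, vec2(slope, height)).rgb;
}
```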
I know this is super late, but shouldn't the calculation for whether a pixel is beach or not be based on both vertical AND horizontal proximity to pixels at the water level? Because getting inland valleys that are just sand or beach doesn't make a lot of sense.
That's a good idea! I think in reality it would probably just be distance to the centre, because otherwise it would add a lot more looping into the calculation to find out how far the pixel is horizontally from a water tile.
Thanks for the comment :)
Would be neat if there was a way to import already existing heightmap images,
so that you could for example, render Earth's heightmap with this shader.
This is definitely doable! The online p5 editor (that the code I linked is in) allows you to upload files, and you can then use the loadImage() function to load it into the code.
Maybe I should add an image uploader to it to make it a bit more user friendly though
Seeing earth's height map in this would be really cool!
@@BarneyCodes
Oh, many thanks!
I've been reading the code for a while and couldn't figure it out,
I'm not too familiar with Javascript coding 😅
So this is really appreciated
No worries at all! If you have any questions feel free to reach out!
now do underwater shadows with correct refraction
Definitely need to work on the water a lot, full refraction might be a bit of a stretch though 😅 something to aim for I guess?
Personally I feel like it would look better if the steep areas were dirt instead of stone, and it should probably have a minimum height as well so the sand isn't turned into something else; it looks kind of strange to me. I'm not sure what your end goal is for the height map, but I think giving the water a layer or two would also look good, like deeper areas being darker and less green. And I don't know how tall the mountains are supposed to be, but I think some snow caps would look nice. As for optimization and fixing artifacts, no clue lol
I think you're right about all of this! The stone can look a bit out of place in certain areas. I have got a snow colour in there, but it seems like, for the most part, the mountains are never high enough. I can probably push the middle up a bit to force it to be more mountainous (just like how I pull the edges down to create the island)
Why not generate an AO texture? This should add volume to the picture. Right now there is no shading at all within the shadows; I would give the normals the opportunity to darken the surface even more in the shadowed areas.
Yea absolutely agree, the shadowed areas do look pretty flat! I'll separate out the normal shadows from the ray cast ones so that they still have an effect!
They are not broken shadows. They are photons interacting with each others creating distortion. So it is not a bug, it is a feature :)
Absolutely, it's actually the double slit experiment but the slits are so small you can't see them hahaha