Great video and explanation on normal maps! Normal maps and depth maps are some of my favorite tricks when it comes to 3D modeling, and they're really powerful. It's impressive how much detail they can add without much impact.
Oh god yes, please do the one with the rooms inside buildings without them actually being geometrically there. I'm a huge Flight Simulator guy and almost all the buildings in MSFS have rooms and interiors. How does it work? I can see it's a trick when moving the camera around and looking closely, but from afar or at a quick glance, the illusion is perfect. Love your videos, thank you.
I'm taking a computer graphics class currently and literally just completed an assignment where we implemented the Phong illumination model for our ray tracer. Cool video!
I love learning about the creative and genius ways that people learn how to solve problems. It’s amazing to see how realistic graphics have become in video games
i love the videos you make. you keep things clear and concise and thats great for my smooth ADHD brain. i always feel like i’ve learned something when it ends. you’re doin great man 👌
The up/down channel is typically blue, not green. Red is the direction of U texture coordinates, green is the direction of V texture coordinates (in the plane of the polygon.)
I clicked on the thumbnail for the normal pun and stayed for the content. Great stuff as always! Also excited for the hinted at parallax mapping episode coming up.
As a student studying game art I can confirm that normal maps are insane. They can make a model look ten times more detailed than it really is geometry wise.
damn, what the heck? I had no idea this stuff was flat. I was completely convinced it was all just detailed 3d models. This is actually mind-blowing. This has to be the most incredible thing I've learned about in weeks
Great informative video explanation! This is the way a lot of games function. I would like to see texture displacement used more, especially for ground and terrain. It adds a whole new level of detailed environment.
it's kinda sad they toned that down a lot in future updates. mondstadt is FULL of them, but they're few and far between in inazuma and pretty much gone in sumeru.
FYI it's called a normal because in engineering and math, the primary way of describing things that have a magnitude and a direction is the vector. I've taken whole classes on vector calculus. Basically, a vector is something with a value and a direction. As an example, when you stand on the ground your body exerts a force vector: your mass multiplied by the acceleration of gravity, pointing toward the center of the Earth. Makes sense, gravity points towards the center of the Earth. But the fact that it points in a direction is important. In this video he describes the word 'normal'. In physics and engineering, normal means perpendicular to the face of a surface. So say you have a six-sided die. If you roll the die and six lands face up, then an imaginary arrow pointing straight up would be an arrow that is normal to the face of the six side. Basically it just means any surface has a forward and backward direction that it points in. So your countertops have a normal vector that points to the ceiling, the front of your car has a normal vector that points down your lane, and the barrel of a gun, despite being a surface, has a normal vector that points in the direction of the bullet (we are excluding gravity from this example). So it makes sense that every triangle defines a surface, and every surface defines a vector; the math gets really, really hard for your computer and your graphics card when they have to calculate millions of these vectors every second. So if your computer gets a shortcut, through some clever programming, to make guesses about these vectors before they're needed, it can provide some very beneficial increases in graphical performance.
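To make the cross-product idea behind that concrete, here's a tiny Python sketch (not any engine's real code; the vertex values are invented for illustration). Two edges of a triangle lie in its plane, and their cross product is perpendicular to both, which is exactly the surface normal:

```python
import math

def triangle_normal(a, b, c):
    """Return the unit surface normal of the triangle (a, b, c)."""
    # Two edge vectors lying in the triangle's plane.
    e1 = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    e2 = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    # The cross product is perpendicular to both edges -> the normal.
    n = (e1[1] * e2[2] - e1[2] * e2[1],
         e1[2] * e2[0] - e1[0] * e2[2],
         e1[0] * e2[1] - e1[1] * e2[0])
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return (n[0] / length, n[1] / length, n[2] / length)

# A triangle lying flat in the XY plane points straight up (+Z).
print(triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # -> (0.0, 0.0, 1.0)
```

This is the per-triangle version of the computation; engines typically precompute or interpolate these rather than redoing the math for every frame.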
The more technology develops, the more optimizations like this become possible. For example, you probably know about how higher poly models are more taxing on the hardware, which is why normal maps are so useful. It allows you to use a lower poly model that looks like it has the detail of a higher poly one. What is somewhat newer technology that has appeared in a number of games recently follows a similar principle: Resolution scaling. Just as higher poly models are more taxing, so too is running a game at a higher pixel resolution. Resolution scaling allows you to reduce this cost by running the game at a lower resolution, but then scaling that lower resolution up to the size of the screen as a borderless window. From here, filters can be applied to help mask (or highlight, depending on the situation) the added pixelation. This results in you being able to use fewer resources and improve framerates with minimal visual reduction. Though, the lower the resolution you're scaling up from, the more apparent the loss of detail is.
I love the pun in the thumbnail of "This is normal." Because, first of all, it's common practice, and second of all, the viewers of the thumbnail are looking at a normal map.
Why is everything flat in video games? Me: Excuse me Mister! How dare you say my beloved Pong looks flat! I play it for over 40 years and it still astonishes me how much detail they packed in.
What I think is really cool is how many of this game design can be actually used to understand our own mind. We are constantly doing these kind of tricks within our brain to compute the world and make sense of our surroundings
As a professional tech artist, I would approach the definition of normals in the same fashion as you just did. You do a great job of teaching + you made a pretty nice montage
2:09 Oh this is so cool. It's taught in science class (reflection from a normal). When I first noticed the 3d effects on a 2d plane in games, my first guess was "It's something to do with reflection and lighting" Thank you for this well explained video
Not going to lie. This has to be the best video I've seen for digesting the theory of normal maps. It took me a hot minute to learn all this on my own back when I started my 3d art journey. Very well made, with great visual examples. Bravo! 👏
I always tell my students its like 3D glasses for the renderer. Those old school red and blue glasses that create the illusion of depth from a 2d image are pretty close conceptually to tangent space normals.
I first noticed this when I played Dark Souls 2 years ago, everything was detailed and lighting reflected perfectly, but the surface was flat and always wondered how that worked. Thanks for the explanation!
Great video! Worth mentioning that UE5's Nanite aims to get rid of normal maps; it does some fancy geometry virtualisation that I won't pretend to understand, but it enables billions of triangles in a scene. Optimising meshes and authoring normal maps takes quite a lot of artist time in game development, so tech like this could be a very big deal when/if it's fully adopted
My main concern with games using Nanite is bloating file sizes. There is something to be said about Nintendo sticking with the aging Switch hardware: it keeps file sizes small and fast to download and forces developers to actually optimize their games. That said, Nanite is an impressive feat of technology for groundbreaking games
I first noticed this on Geralts armor in witcher 3. It was chain mail and each ring caught the light correctly from every direction. I was like there must be a whole other texture for storing lighting information and I was right.
Another newer trick artists use for video games is called displacement mapping. It operates very similarly to a normal map but instead of affecting how light is cast on it, it affects viewer occlusion of some parts of the textures giving it a very convincing 3D effect. The illusion can be broken by getting very close to the texture so you can see the edges but most people don't do that Edit: Didn't realize there would be a brief mention about displacement mapping at the end of the video lol
You are describing parallax occlusion mapping. Displacement mapping involves actually moving the geometry with the use of a displacement or height map.
What I don't see mentioned, which I absolutely love and would hate to see gone, is how character clothing has no real flexibility yet nothing is fixed either: whatever clothes the character wears are part of them and move like rubber with their bodies.
When I first started 3D modelling I had no idea what normal maps, height maps and ambient occlusion did... Thanks to our very impressive course syllabus, we skipped over substance painter entirely. Now I'm getting accustomed to Substance and planning to learn Mari
at around 2 minutes you show an example of a normal, and while it is correct that face normals can be calculated from vertex normals, and that can be useful in DCCs, I feel it's perhaps relevant to point out that in most game engines we avoid face normals because it's little gain for big pain (perf). Unless you absolutely need them, better to avoid the cost. Also, I was going to call you out for passing over parallax, but sounds like that will be a follow-up video.
I always figured it was because they could render more quickly and play back better with less geometry. Plus I guess it's cheaper and less intensive on processing. I mean, with a flat surface you can put a texture map on it and make it look pretty much the same as a complex 3D piece of geometry. It's been a long time since I was doing 3D modeling, a very long time, at least 14 years.
What an excellent video. I've played video games for 30 years and never really knew how 2d shapes looked so 3d, but this makes total sense, even though I have no real 3d animation experience.
Colors are just numbers between 0 and 255. The shaders care about the numbers, not the colors. We only use those colors to visualize the numbers because our brains can interpret the 3d effect that way (and it looks pretty fancy, ngl).
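A quick Python sketch of that remapping (the function name is made up for illustration; real shaders do the same arithmetic on the GPU). Each 0–255 channel becomes a component in [-1, 1], so a color is really just an encoded direction:

```python
def decode_normal(r, g, b):
    """Remap 0-255 texture channels to a direction with components in [-1, 1]."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))

# The classic light-blue of a "flat" normal map, (128, 128, 255),
# decodes to a vector pointing almost straight out of the surface.
print(decode_normal(128, 128, 255))
```

That's why untouched normal maps look uniformly light blue: "straight out of the surface" encodes to roughly (128, 128, 255).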
0:35 Tomb raider 1 was one of the first real 3d games and is still perfection to this day. I don’t think in 24 years there has been one game with more immersiveness of the main character with the game world. 1:10 bump maps/normal maps are more fakery. It’s still flat. Real texture uses displacement maps which is actual geometry on top of flat surfaces
IIRC putting 'details' onto smaller polygons was called projection; then, just before using them in game, you bake some of the details, like some shadows.
And the issue about normal map is that, because it has no displacement, it would actually look flat in VR. (And also stereoscopic 3D in general -- as long as you know it's a thing it is possible to spot them in 3D movies).
I first saw this trying to change a wall in the Sims. Simple enough, I thought, until I loaded the game and there was a brick texture on my floral patterned wall 😅
That being said, with UE5 and the introduction of Nanite, the logic behind using normal maps to compensate for polygon count might start to dwindle and become outdated.
Cool! So a lot of the impressive graphical fidelity we see is not necessarily due to the computational power of modern GPUs, but innovative and clever design techniques.
I don't know if you'll mention this game, but I remember Giants: Citizen Kabuto being one of the first games to use something similar, and it looked awesome.
If I don't see some Orange Deus Ex game in the next video, I'm going to riot. On my own. Because I'm pretty sure DE:HR was one of the first games that used cube maps to give those buildings some well needed depth. :D
Dont let flat-earthers know about this 💀
Earth probably isn't flat. The thing is our 7 continents are only a small part of it.
@@NikitaGal31 fun but totally useless fact; in scale, the distance between the lowest and highest points on earth's surface is so small compared to the earth's size that it'd feel smooth if you were huge enough to swipe your finger across it. So you could argue that while the earth itself isn't flat, its surface almost is lol
@@Phyllion- fun fact, that has been proven wrong, our fingers are super sensitive
It would be as smooth as a cue ball.
That would require them learning math, which is too tall of a task, especially given how math is created to control the population.
It might be too technical, but I'd love a video on why polygons are so expensive, but the light calculations that use normal maps are relatively cheaper.
The short version is that drawing an additional triangle involves a lot of math. You have to determine if the triangle should be drawn, whether it's facing the camera, and which pixels it should cover. You have to determine in what order it should be drawn so it gets drawn in front of the stuff that's further from the camera, and after the stuff that's closer. If you're using MSAA you have to do some of these computations repeatedly. You have to do all of this before you even *start* doing lighting calculations, and some of it has to be re-done for each light that touches it, depending on which overall rendering pipeline you're using.
In contrast, if you're already drawing a triangle, texturing it is pretty easy. You have three known points in a texture, three known points on a triangle, and you can just do some simple math to go grab the proper texel from the texture based on where you are on the surface of the triangle (mip levels / anisotropic filtering complicate this a little, but the point stands -- texturing is super cheap). So loading an extra texture map for the normals doesn't actually cost much. And then, when you light the mesh, instead of just using the geometry normal, you grab the texel from the normal map, apply it as an offset, and do exactly the same lighting calculation you were gonna do anyway. You still had to do the lighting calculations per pixel anyway, so adding a texture read and applying the offset costs very little.
Another way of thinking about it is that geometry is a much richer (and more costly) representation than normal maps are. Geometry has depth and form. It can occlude, it can be behind, or in front of things, it changes how it looks when your perspective changes, even in the absence of lighting. Its surface directional detail is only one element of a much larger whole. Normal maps are literally just "hey, pretend this bit is facing a different direction than it actually is when you light it." It works well for encoding subtle details on surfaces, but it's a fundamentally simpler representation. That's why it's cheaper.
Realized there's a bit of jargon here:
Texel = pixel in a texture (as opposed to onscreen)
MSAA = "A type of anti-aliasing where you re-do some of the math of drawing triangles repeatedly to avoid harsh jaggies around the edges"
Mip level / anisotropic filtering = You know how old PS1/PS2 games look super fizzy when looking at the horizon? That's because when a polygon is tilted away from the camera but still visible, it's easy to get into a situation where there's more than one texel per screen pixel, because they're jammed together by foreshortening. When you draw it, you get some arbitrary texel from the set of potential options, and if the camera moves even a tiny bit, you get a different one, which causes an ugly, noisy, unstable appearance. In real life, you just see a weighted average of all the options -- no fizz. So mipmapping / anisotropic filtering is a way to store sort of "pre-blurred" versions of the textures alongside the originals, so when you get into that situation you can grab from the blurry version that already averages that information, which solves this problem.
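To illustrate the lighting point above with a toy Python sketch (invented values, not real shader code): once you're shading a pixel anyway, swapping the geometry normal for a normal-map sample is just a texture read plus the same dot product you were going to compute regardless.

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(normal, light_dir):
    """Basic diffuse lighting term: brightness = max(0, N . L)."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

light = normalize((0.0, 0.0, 1.0))           # light shining straight at the surface
geometry_normal = (0.0, 0.0, 1.0)            # the flat triangle's actual normal
mapped_normal = normalize((0.3, 0.0, 0.95))  # "pretend this bit is tilted"

print(lambert(geometry_normal, light))  # fully lit
print(lambert(mapped_normal, light))    # slightly darker: the faked bump
```

The only extra work for the normal-mapped version is fetching and normalizing the sampled vector; the lighting formula itself is identical.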
@@AndreInfanteInc VERY informative. Thanks for sharing.
@@AndreInfanteInc thank you so much, this was all very clear and tremendously helpful
@@AndreInfanteInc I knew a few things you said but not all of it so very fascinating ! Thanks for sharing this knowledge ;D
The amount of normal mapping used in Nintendo Switch games like Mario Odyssey and BotW is crazy; they achieve real-looking 3D and are stunning, yet can run on small hardware.
The amount of normal mapping used in games.
Nintendo is always a generation behind with graphics. Sucks that they force you to buy this shit plastic to play their good games
@@idontknowwhatnameshouldipu1864 🤓
@@idontknowwhatnameshouldipu1864 🤓
Someone should tell The Pokemon Company about normal mapping.
As a 3D texture student, the "this is normal" on a thumbnail of a normal map is such a clever pun. Take my like
As a random gamer who has a math background... looking at the thumbnail, I'd basically completely forgotten that normal can also mean "not unnatural"
As someone starting 3d modeling and vfx I also found it funny
as a person who just mods games and does 3D as something like a hobby . yeh it was indeed funny
tho I often am too lazy to add normal maps to the models used in my mods. sometimes I don't even have any normal maps and I gotta make my own which uhh pain so yeh lmao
As somone who is alive i find this funny
💯💯💯
I remember the first time I noticed the existence of a normal map. I was playing GTA V, doing donuts in grass or something, when I spotted strange shadows in my tread marks. They were angled the same direction all the other shadows around me were. Now I had pretty substantial knowledge on games and the techniques they employ, however I had never even heard of a normal map before. For a bit I was _certain_ my vehicle was leaving behind a 3d imprint, but after closely examining the treads I realized they were, in fact, 2d. My mind couldn't wrap around how they could add shading to a 2d texture, and for the longest time I had no idea how they did it. In the time since I've learned of the existence of normal maps, but I'm glad to see a video going over all these details I was oblivious to.
It's normal to think that way.
Commenting on the last part of the video where you talked about depth on flat surfaces. The first time I actually noticed this was when I was playing Dark Souls on my Xbox. I was in Undead Purgatory and just so happened to look at the ground which was on a bricky surface. Seeing how they made what looked like hundreds of individual 3d bricks on a flat surface blew my mind. I knew about normal old normal maps but that... that is something that I really want to learn more about.
I guess he's talking about tessellation. As far as I know, this technique still uses the normal maps to fake details when you're far away from the 3D object. But as soon as you get closer to it, the engine will use the normal map (or another depth/height map) to break your object into smaller polygons and translate the height of these polygons according to the texture.
Basically, you're using a texture to "create details on the fly" only when needed. This shouldn't be confused with LODs. LODs do something similar, adding detail to the object, but in every direction and not specifically for height and depth. Also, your object needs to have enough geometry for tessellation to work, so I guess it's common to combine both techniques.
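A rough 1D illustration of that displacement idea in Python (the height values are invented for the example): subdivide a segment, standing in for a polygon edge, and offset each new vertex by a value sampled from a height map.

```python
# Pretend these heights were sampled from a height-map texture.
height_map = [0.0, 0.2, 0.5, 0.2, 0.0]

def displace(start_x, end_x, heights):
    """Tessellate a flat segment into len(heights) vertices, each pushed
    'upward' by the corresponding height-map sample."""
    n = len(heights)
    step = (end_x - start_x) / (n - 1)
    # Each vertex: (x, y) where y comes straight from the height map.
    return [(start_x + i * step, heights[i]) for i in range(n)]

print(displace(0.0, 4.0, height_map))
```

Real tessellation does this in 2D across a surface and pushes vertices along their normals, but the principle is the same: geometry is created on demand from texture data.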
@@luizfernandonoschang8298 the thing i was thinking of was parallax occlusion mapping, since it looked more like that with what he showed. especially rooms inside a building.
those details can be faked without tessellation
hey i discovered it in dark souls also!! i noticed it when i was in the lava area, after the discharge guy gets rid of the lava, the walls looked so crazy so i got close to them and realised how undetailed the model was and was amazed
The fact that you used genshin as an example for parallax mapping, is a great example. That game uses a lot of mapping methods to decrease processing cost. Normal maps, specular maps, parallax maps, you name it.
Yet it still only runs at 30 fps on the PS4.
Botw and half life alyx are insane for this
@@user-qw6ht7jw2b Ps4 only runs games at 30 fps
Yeah I really started to notice with the Liyue roof tiles
@@user-qw6ht7jw2b True, mindblowing that higher mid range phones run this game at 60 fps
That's something that I've always wondered about graphics in games. I would stand close to a wall in a game trying to see how the light was bouncing on some surfaces, but they only looked 2d lol.
What I love about your videos is the fact that you can manage to put 30min of knowledge in 5 minute videos 🙏
OMG, that's so true. This skill is priceless for a YouTube blogger.
@@Dany12156 Almost unprecedented 😂
I never expected this to somehow also explain Sims 4 custom content to me, but here we are. Thank you for teaching me what a normal map is!
Bro, telegram rlly seems to be popular among youtubers doing secret giveaways they never talked about in their channels
@@doodoo2065 ?
@@SimoneBellomonte i was replying to a bot impersonating this YouTuber
I already had some minimal knowledge about how this kind of stuff (game graphics, engines, textures, etc) worked, but that image at 2:30 blew my freakin mind. It puts into perspective how awesome certain revolutionary techniques are (such as normal mapping)
I recently started learning Blender, and some of the terms being thrown around were still very vague to me. This made me better understand what I'm actually doing. Great video!
Just found your channel and I wanted to say this is exactly what I’ve been looking for: simple but still technical explanations of common video game and 3D dev techniques. Cheers!
It's so cool how almost everything in video games is an illusion. 3D looking textures look 2D, low poly models used to convey detail at distance, things tend to unload when not being looked at, etc
It would've been good to mention that for unreal specifically, the B channel (Depth) of the normal map gets entirely thrown away when using the Normal Map compression, then they get regenerated when used inside a material from the R and G channels.
So if using unreal, it isn't worth trying to get fancy with the B axis since it's just going to be thrown away anyway.
What possible reason does unreal have to do that?
@@kendarr The compression type for normal maps in UE repacks R+G into RG+BA, so you get 2x the bit depth and therefore higher-precision normals.
I forget the exact bit depth now, but it'll be, for example, 16 bits per channel instead of 8, so you get much more than 0-255 values, which creates a better overall normal while still compressing nicely.
Since normals are normally normalized, you can calculate the missing channel if you know the other 2, so only 2 need to be stored which saves space. On the other hand, if you store all 3 channels, you can have a vector with variable length which determines light attenuation (or amplification), allowing you to do more fancy things with your normal map.
@@unrealization3332 I guess that's the reason we artists are not programmers lol, thanks
@@user-qw6ht7jw2b Kinda makes sense, thanks, it's just weird to think about it
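To illustrate what's being described in this thread, here's a minimal Python sketch (illustrative only, not Unreal's actual implementation) of rebuilding the missing blue channel from R and G, assuming a unit-length, outward-facing tangent-space normal; `decode_two_channel_normal` is a hypothetical helper name:

```python
import math

def decode_two_channel_normal(r8, g8):
    """Rebuild a unit normal from a two-channel (R, G) encoding.
    r8 and g8 are 8-bit texture values in [0, 255]."""
    x = r8 / 255.0 * 2.0 - 1.0   # remap [0, 255] -> [-1, 1]
    y = g8 / 255.0 * 2.0 - 1.0
    # Unit length means x^2 + y^2 + z^2 = 1, and tangent-space
    # normals face out of the surface, so take the positive root.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

# A "flat" texel (128, 128) decodes to roughly straight up (+Z).
flat = decode_two_channel_normal(128, 128)
```

Since only two channels need to be stored, the compressor can spend all of its bits on R and G, which is where the extra precision comes from. As the reply above notes, the trade-off is that a reconstructed normal is always unit length, so you give up variable-length normals.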
"Normal" or " Surface Normal" it is the best name because it comes from Physics. "In mechanics, the normal force N is the component of a contact force that is perpendicular to the surface that an object contacts"
I've done some texture mods for a game and normal mapping is something I've come across and had a very simple understanding of what they do, I'm glad I came across this in Recommended to get a higher understanding. Who knows? maybe it'll help me with my own future endeavors!
I've always wondered about the rooms in Spider-Man! I'm looking forward to the explanation
I am not an expert on this, but if I am correct...
I believe the rooms in Spider-Man are done with parallax scrolling, which in the most basic sense is taking multiple images and making them move independently of one another, creating a 3D effect. That's just what it looks like to me, however; I could be completely wrong.
I believe the main menu for Halo Reach actually did this.
@@gucenuckle you are right on the money mate
@@gucenuckle oh thats really interesting!
@ 1:54 "this line is affectionately called... the normal. Not the best name"
This is the mathematical term for it. It's not as if it were invented for video games. The normal direction is a pretty standard terminology. I wish more people would get used to it 🤣
Great video and explanation of normal maps! Normal maps and depth maps are some of my favorite tricks when it comes to 3D modeling, and they're really powerful. It's impressive how much detail they can give without much impact.
Oh god yes, please do the one about rooms inside buildings without them actually being geometrically there. I'm a huge Flight Simulator guy, and almost all the buildings in MSFS have rooms and interiors. How does it work? I can see it's a trick when moving the camera around and looking closely, but from afar or at a quick glance, the illusion is perfect.
Love your videos, thank you.
it's a perspective-changing cube map projection (basically a fancy scrolling texture). valve games use those for reflections.
This explanation of how RGB converts to direction is mind-blowing. I always wondered, but it never clicked.
I'm taking a computer graphics class currently and literally just completed an assignment where we implemented the Phong illumination model for our ray tracer. Cool video!
I've already been working in the industry for quite a while, and this is literally the best explanation of normal maps for the general population
I love learning about the creative and genius ways that people learn how to solve problems. It’s amazing to see how realistic graphics have become in video games
i love the videos you make. you keep things clear and concise and that's great for my smooth ADHD brain. i always feel like i've learned something when it ends. you're doin great man 👌
The up/down channel is typically blue, not green.
Red is the direction of U texture coordinates, green is the direction of V texture coordinates (in the plane of the polygon.)
I clicked on the thumbnail for the normal pun and stayed for the content. Great stuff as always! Also excited for the hinted at parallax mapping episode coming up.
I've seen many game files and the most detailed one is TLOU2. Normal maps and bump maps for pores and fingerprints. Detailed like crazy.
As a student studying game art I can confirm that normal maps are insane. They can make a model look ten times more detailed than it really is geometry wise.
damn, what the heck? I had no idea this stuff was flat. I was completely convinced it was all just detailed 3d models. This is actually mind-blowing. This has to be the most incredible thing I've learned about in weeks
Since I work on Unity all the time to make stuff in VRChat like avatars and worlds I immediately recognized the thumbnail as a normal map lol
As a 3D animation student, I never understood why normal maps looked like that until now. Thank you for actually explaining it!
This video was recommended to me by YouTube, and I can say this channel is a gold mine
I still don't understand how people use normal maps for Minecraft resource packs; it never worked for me
Great, informative video! This is the way a lot of games function. I would like to see texture displacement used more, especially for ground and terrain. It adds a whole new level of detail to environments.
the parallax mapping in genshin impact at the very end of the video is 100% the reason i got so interested in tech art lmao
it's kinda sad they toned that down a lot in later updates. Mondstadt is FULL of them, but they're few and far between in Inazuma and pretty much gone in Sumeru.
@@GraveUypo They'll probably use it more in Fontaine.
I really like these short insights on the technical side of video game graphics from the recent videos!111
I already find this so amazing, I can’t even prepare for what abnormal maps are capable of doing.
FYI, it's called a normal because in engineering and math, the primary way of describing things that have a direction is with vectors. I've taken whole classes on vector calculus. Basically, a vector is something with a magnitude and a direction. As an example, when you stand on the ground, your body exerts a force vector: your body mass multiplied by the acceleration of gravity, pointing toward the center of the Earth. Makes sense, gravity points toward the center of the Earth. But the fact that it points in a direction is important. In this video he explains the word "normal". In physics and engineering, normal means perpendicular to the face of a surface. So take a six-sided die: if you roll it and six lands face up, an imaginary arrow pointing straight up would be an arrow that is normal to the face of the six side. Basically, it just means any surface has a forward and backward direction that it points in. So your countertops have a normal vector that points at the ceiling, the front of your car has a normal vector that points down your lane, and the barrel of a gun, despite being a surface, has a normal vector that points in the direction of the bullet (we are excluding gravity from this example). So it makes sense that for every triangle you can create a surface, and for every surface you can create a vector, and therefore the math gets really, really hard when your computer and graphics card have to calculate millions of these vectors every second. So, if your computer gets a shortcut, through some clever programming, to make guesses about these vectors, it can provide some very beneficial increases in graphical performance.
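The "for every surface you can create a vector" part can be sketched in a few lines. This is illustrative Python, not what a GPU actually runs: a triangle's normal falls out of the cross product of two of its edges.

```python
import math

def sub(p, q):
    """Vector from q to p."""
    return (p[0] - q[0], p[1] - q[1], p[2] - q[2])

def cross(u, v):
    """Cross product: a vector perpendicular to both u and v."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def face_normal(a, b, c):
    """Unit surface normal of triangle (a, b, c).
    The winding order decides which side it points out of."""
    n = cross(sub(b, a), sub(c, a))
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return (n[0] / length, n[1] / length, n[2] / length)

# A triangle lying flat in the XY plane points straight up.
print(face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # -> (0.0, 0.0, 1.0)
```

A normal map is a cheap substitute for exactly this: instead of adding triangles and computing their normals, you store pre-made normals in a texture.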
The more technology develops, the more optimizations like this become possible. For example, you probably know how higher-poly models are more taxing on the hardware, which is why normal maps are so useful: they let you use a lower-poly model that looks like it has the detail of a higher-poly one. A somewhat newer technology that has appeared in a number of games recently follows a similar principle: resolution scaling. Just as higher-poly models are more taxing, so too is running a game at a higher pixel resolution. Resolution scaling reduces this cost by running the game at a lower resolution, then scaling that lower resolution up to the size of the screen as a borderless window. From there, filters can be applied to help mask (or highlight, depending on the situation) the added pixelation. The result is that you use fewer resources and improve framerates with minimal visual reduction. Though, the lower the resolution you're scaling up from, the more apparent the loss of detail is.
I love the pun in the thumbnail of "This is normal." Because, first of all, it's common practice, and second of all, the viewers of the thumbnail are looking at a normal map.
ALL images contain R, G and B. It's how you interpret these values that make normal maps different.
Why is everything flat in video games?
Me: Excuse me, Mister! How dare you say my beloved Pong looks flat! I've played it for over 40 years and it still astonishes me how much detail they packed in.
What I think is really cool is how many of these game design tricks can actually be used to understand our own minds. We are constantly doing these kinds of tricks within our brains to process the world and make sense of our surroundings
As a professional tech artist, I would approach the definition of normals in the same fashion as you just did. You do a great job of teaching + you made a pretty nice montage
4:19 wait until you arrive at Sumeru desert(or Sumeru in general)
Cheers on a great video. I already knew all of this, yet the thumbnail was so catchy I had to see how you would talk about it.
I feel like I'm in college again.
2:09 Oh this is so cool. It's taught in science class (reflection from a normal). When I first noticed the 3d effects on a 2d plane in games, my first guess was "It's something to do with reflection and lighting"
Thank you for this well explained video
Not going to lie. This has to be the best video I've seen for digesting the theory of normal maps. It took me a hot minute to learn all this on my own back when I started my 3d art journey. Very well made, with great visual examples. Bravo! 👏
I always tell my students its like 3D glasses for the renderer. Those old school red and blue glasses that create the illusion of depth from a 2d image are pretty close conceptually to tangent space normals.
I first noticed this when I played Dark Souls 2 years ago, everything was detailed and lighting reflected perfectly, but the surface was flat and always wondered how that worked. Thanks for the explanation!
I really love the new direction of your channel!
Great video! Worth mentioning that UE5's Nanite aims to get rid of normal maps; it does some fancy geometry virtualisation that I won't pretend to understand, but it enables billions of triangles in a scene. Optimising meshes and authoring normal maps takes quite a lot of artist time in game development, so tech like this could be a very big deal when/if it's fully adopted
My main concern with games using Nanite is bloated file sizes. There is something to be said for Nintendo sticking with the aging Switch hardware: it keeps file sizes small and fast to download, and it forces developers to actually optimize their games. That said, Nanite is an impressive feat of technology for groundbreaking games
Haven't even watched the video and I fully love and understand the thumbnail lmao
Cos flat is normal, and when they're not flat, they'll use a normal map 🤣
The pun in the thumbnail is incredible
I first noticed this on Geralt's armor in The Witcher 3. It was chain mail, and each ring caught the light correctly from every direction. I figured there must be a whole other texture for storing lighting information, and I was right.
Another newer trick artists use for video games is called displacement mapping. It operates very similarly to a normal map, but instead of affecting how light falls on the surface, it changes which parts of the texture occlude others from the viewer's perspective, giving a very convincing 3D effect. The illusion can be broken by getting very close to the texture so you can see the edges, but most people don't do that
Edit: Didn't realize there would be a brief mention about displacement mapping at the end of the video lol
You are describing parallax occlusion mapping. Displacement mapping involves actually moving the geometry with the use of a displacement or height map.
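For anyone curious, the simplest form of the parallax idea (before the fancier occlusion-mapped variant) can be sketched like this; real implementations do it per pixel in a shader, and `scale` here is just a made-up tuning constant:

```python
def parallax_offset(uv, view_ts, height, scale=0.05):
    """Basic parallax mapping: shift the texture coordinate along
    the view direction (expressed in tangent space) by an amount
    proportional to the sampled height, so taller parts of the
    texture appear to shift more as the camera moves."""
    u, v = uv
    vx, vy, vz = view_ts
    u += vx / vz * height * scale
    v += vy / vz * height * scale
    return (u, v)

# Looking straight down (view = +Z), nothing shifts, whatever the height.
print(parallax_offset((0.5, 0.5), (0.0, 0.0, 1.0), 1.0))  # -> (0.5, 0.5)

# At a grazing angle, tall texels slide noticeably across the surface.
print(parallax_offset((0.5, 0.5), (1.0, 0.0, 1.0), 1.0, scale=0.1))
```

Parallax occlusion mapping extends this by marching along the view ray through the height field, which is what lets nearer bumps actually hide the texels behind them.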
all this crazy stuff and im over here glad my renders dont have any black spots on it 💀
What I don't see mentioned, which I absolutely love and would hate to see gone, is how there's no flexibility in character movement and yet at the same time nothing is fixed.
Whatever clothes the characters wear are part of them, moving like rubber with their bodies.
Thanks, I was almost getting everything about normals just by reading docs, and you pretty much gave me the 1% I wasn't able to find!
So essentially, surfaces use different colors to bounce light differently to give the illusion of depth. Sweet
can't wait for the second part, about time this amazing trick has received some love!
Cool. I don't know why I'm watching this, because I don't plan to use this knowledge anywhere. But this is cool.
When I first started 3D modelling, I had no idea what normal maps, height maps and ambient occlusion did... Thanks to our very impressive course syllabus, we skipped over Substance Painter entirely. Now I'm getting accustomed to Substance and planning to learn Mari
i saw the thumbnail and thought, 'very clever.' great video!
Great video. Simple and yet very instructive for noobies like me.
At around 2 minutes you show an example of a normal, and while it's correct that face normals can be calculated from vertex normals, and that can be useful in DCCs, I feel it's perhaps relevant to point out that in most game engines we avoid face normals because it's little gain for big pain (perf). Unless you absolutely need them, better to avoid the cost.
Also, I was going to call you out for passing over parallax, but sounds like that will be a follow up video.
'Normal' is the technical term for the perpendicular to a surface (if you wondered why it's called a normal).
I always figured it was because they could render more quickly and play back better with less geometry. Plus I guess it's cheaper, unless it's intensive on processing. I mean, with a flat surface you can put a texture map on it and make it look pretty much the same as a complex piece of 3D geometry. It's been a long time since I was doing 3D modeling, a very long time, at least 14 years.
What an excellent video. I've played video games for 30 years and never really knew how 2d shapes looked so 3d, but this makes total sense, even though I have no real 3d animation experience.
Normal mapping is a big part of what made Halo 1 on the OG xbox look SO much better than other games at the time.
Displacement maps are even cooler because it makes the edges look realistic while with normal maps, the edges remain smooth.
I know how this stuff works inside and out, but I NEVER knew why the colors are the way they are. Pretty cool how it's all calculated from just colors.
Colors are just numbers between 0 and 255. The shaders care about the numbers, not the colors. We only use those colors to visualize the numbers because our brains can interpret the 3d effect that way (and it looks pretty fancy, ngl).
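A tiny sketch of "the shaders care about the numbers, not the colors": decode a texel into a direction and feed it straight into a diffuse (Lambert) lighting term. Illustrative Python with hypothetical helper names, not actual shader code:

```python
def texel_to_normal(rgb):
    """Map an 8-bit normal-map texel to a direction:
    each channel goes from [0, 255] to [-1, 1]."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)

def lambert(normal, light_dir):
    """Simple diffuse term: brightness is the dot product of the
    surface normal and the direction toward the light, clamped at 0."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, d)

# The typical purple-blue texel (128, 128, 255) decodes to roughly
# (0, 0, 1): a point on the surface that faces straight outward.
n = texel_to_normal((128, 128, 255))
print(lambert(n, (0.0, 0.0, 1.0)))   # fully lit when the light is head-on
print(lambert(n, (0.0, 0.0, -1.0)))  # dark when the light is behind
```

That purple-blue tint dominating most normal maps is just this encoding: "mostly facing outward" maps to high blue with mid red and green.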
0:35 Tomb Raider 1 was one of the first real 3D games and is still perfection to this day. I don't think in the 24 years since there has been one game with more immersion of the main character in the game world.
1:10 Bump maps/normal maps are more fakery. It's still flat. Real texture uses displacement maps, which are actual geometry on top of flat surfaces.
The thumbnail made me giggle. You get a like.
not just normal maps but height maps also help
IIRC, Doom 3 was the first video game to popularize the use of normal maps. I remember when that game first came out, it being hailed as a big deal.
IIRC, transferring 'details' onto lower-poly models was called projection; then, just before using them in game, you bake in some of the details, like some shadows.
I remember doing some 3D animation class, when i learned about bump maps, my mind was blown 🤯🤯
Guy who invented normal map is a ducking genius.
And the issue with normal maps is that, because there's no displacement, they actually look flat in VR (and in stereoscopic 3D in general; as long as you know it's a thing, it's possible to spot them in 3D movies).
I first saw this trying to change a wall in The Sims. Simple enough, I thought, until I loaded the game and there was a brick texture on my floral-patterned wall 😅
That being said, with UE5 and the introduction of Nanite, the logic behind using normal maps to compensate for polygon count might start to dwindle and become outdated.
Now imagine what would be possible with an unnormal map
*W O A H*
I love that the thumbnail is a normal map and it just says "normal"
Woah ur channel is underrated, glad I found it
Looking forward to the video about parallax occlusion, as I have no idea how it is actually working :D
Looking forward to the cube map video
Awesome vid; the part about fading a normal map in and out for muscle stretches is insane
SOO GLAD THIS SHOWED ON MY FEED! EASY SUB!
Cool!
So a lot of the impressive graphical fidelity we see is not necessarily due to the computational power of modern GPUs, but innovative and clever design techniques.
Might be why I enjoy ARMA so much. Natural terrain is nice. Especially for shooters
The pause between: so yea and xyz should slowly get progressively longer.
I probably haven’t even noticed when something is a map and when it’s actually modeled.
I don't know if you'll mention this game, but i remember giants: citizen kabuto being one of the first game to use something similar, and it looked awesome.
I absolutely love how the thumbnail says "this is normal".
They say the universe can be unwrapped into a 2D space, which means anything we see as 3D in reality could just be a giant normal map
If I don't see some Orange Deus Ex game in the next video, I'm going to riot. On my own.
Because I'm pretty sure DE:HR was one of the first games that used cube maps to give those buildings some well needed depth. :D
Instant Thumb-Up for the pun (is it one?) on the Thumbnail Image. 🤣