Should we be expecting a part 2 any time soon?
Yes, very soon! I'll be covering adding more detail to the shading shapes using normal and bump map workflows. But it's been very complicated to get working, because you can't use the existing normal and bump nodes (they're hard-coded to use the mesh), so I've had to create node group versions of each.
@@aVersionOfReality Yo that's great to hear. I've been studying your node setup from your example file and it's been a great resource to help understand nodes. I'm a coder, but have never worked with nodes before. I'm getting some ideas looking at your stuff though. If I have anything worth sharing I'll let you know.
@@manuelromero2339 Thanks! My series on procedural eyes also covers a ton of node stuff.
This is a very interesting approach. So far I've only considered three options: direct normal edit, shadow map, and normal data transfer.
Direct normal edit is great when it comes to static poses and predetermined camera angles. But it falls rather flat when it comes to animation, as shown in the story mode of Guilty Gear by Arc System Works; face shadows constantly popping up and disappearing all of a sudden was not a pleasing sight to see.
This is partially because direct normal edit is heavily dependent on facial topology, meaning every facial movement has to be carefully adjusted to avoid 'dirty shadow'. Also, direct normal edit prohibits vertex rotation, as it changes the normal's direction.
Arc System Works gets around this issue by essentially removing all in-between (transition) face shadows, ending up with only 6 or 7 face shadow frames in total.
On the other hand, games like Genshin Impact and Gray Raven, where players can freely rotate the camera, utilize a 'shadow normal map' to dictate which parts of the face should be lit more often and which shouldn't. This way you can generate smooth shadow transitions at all angles. Admittedly it's not a perfect solution, since there's still 'dirty shadow', especially around the mouth and chin at some angles. But I think this method is far less time consuming and more 'versatile' than direct normal edit in terms of asset re-usability.
Though, this technique also relies on vertex locations, as the texture needs vertex coordinates to work properly. In other words, this method also inhibits the freedom of facial movement somewhat, albeit to a lesser degree compared to direct normal edit.
And there's the normal data transfer method. But as far as I know, there aren't a lot of games utilizing this method for characters' faces. Uma Musume might be partially using this technique, but it's hard to tell. I suspect it's because the shadow generated by another object is often way too simple and monotonous, even for an Anime character's face.
I think if we can somehow combine this video's technique with the 'shadow map', we might have an answer for Anime's asinine face shadow once and for all.
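A rough sketch of the 'shadow normal map' idea mentioned above, purely illustrative and assuming a simple threshold texture (not any specific game's exact implementation): each texel of a grayscale map stores the head-relative light angle at which it flips into shadow, and the shader compares that stored value against the current light angle.

```python
import math

def face_lit(shadow_map_value, light_dir, head_forward, head_right):
    """Return True if this texel is lit for the given light direction.

    shadow_map_value: 0..1 sample from the threshold texture.
    light_dir, head_forward, head_right: normalized (x, y, z) tuples.
    """
    # Measure the light's yaw angle relative to the face's forward axis.
    dot_f = sum(l * f for l, f in zip(light_dir, head_forward))
    dot_r = sum(l * r for l, r in zip(light_dir, head_right))
    yaw = math.atan2(dot_r, dot_f)      # -pi .. pi
    threshold = abs(yaw) / math.pi      # 0 (front-lit) .. 1 (back-lit)
    # The texel stays lit until the light swings past its stored threshold.
    return shadow_map_value > threshold

# A cheek texel with a mid threshold, light 45 degrees to the right:
print(face_lit(0.6, (0.707, 0.0, 0.707), (0.0, 0.0, 1.0), (1.0, 0.0, 0.0)))  # True
```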
Yup, that pretty much matches my impression of things. It'll take some more study to figure out what other shapes we need for major shape characteristics, but once those are solved for, we should be able to get an idea of how far this can all go. (Although there will still be the issue of what can be done in a game engine. A complex setup can be done in a texture just fine, but deforms may still be an issue. But even if you just bake it to a tangent map to get around that, it ought to still be better than nothing in most circumstances.)
@@aVersionOfReality I'm not much of a node person because I don't have the knowledge for it. In the meantime I'm thinking about using the RBF Drivers add-on, setting up face shadows as simple shape keys and a rig. Very primitive, but worth a try imho.
Regardless thank you so much for sharing this technique. Really appreciated.
@@brownjonny2230 If that works out, please share your results!
I greatly appreciate this writeup. I'm currently trying to get toon shading to look "good enough", using blender models in Unity. I suppose I will have to read up on shadow normal maps
I'm so lost. I didn't know where the video started. For a person like me who uses 3D for fun, whose anime shaders are at most diffuse shader - color ramp - mix shader, things were kind of lost, you know? I didn't know if it was an explanation of something that already existed or a tutorial; everything was a bit lost, and it got worse when I saw that there were a few more videos. I have nothing to criticize here! You clearly did a great job, especially since this is not a 2-minute tutorial with a basic formula; this is an explanation so you can understand what is happening and make your own. I'm new to this world and this looks like senior level to understand.
Hehe yeah sorry this isn't really intro level stuff. I suggest checking out some of my older videos for more basic shader stuff, or my newer Tree shader tutorial.
Perfect for people doing cel shading for pixel art renders
Hey that's a great point, I hadn't thought of that.
@@aVersionOfReality :)
This is awesome! It's still a bit big brain for me, since I'm relatively new to Blender and the whole 3D world; but I've been really interested in NPR, and especially anime/toon style for videogame application. And THIS does seem to definitely be going somewhere. Thanks a real lot, will be on the lookout for your next videos.
Ooo nice!! I saw your tweets on this topic. This could make things so much easier for us all. Thank you for the info.
I've literally been waiting for this video for 2 years 🤩🤩🙏🙏
Most of this is actually possible in Unreal Engine via pre-skinned vertex normal information. You can pass this information into the vector math, do manipulations (like math to make procedural shapes and blend those normals in), and expose parameters in materials to choose how much influence your vector math has on these normals. I have a toon shader that does a lot of this; cool to see it done in Blender. The one caveat in UE4/5 is that Unreal material graphs don't know how to move a procedurally generated shape via vector math when your character moves, so you can end up with shadows being projected onto the entire mesh if you move it to the shaded side of the procedural shape, or vice-versa when you move it to the lit side. This can be resolved by using material parameter collections to pass head bone location information to your material, which can be used to offset procedural shape locations.
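A minimal plain-Python sketch of that head-offset fix (the names and the exact shape math are assumptions; in UE the head location would arrive through a Material Parameter Collection updated each tick):

```python
def spherized_normal(world_pos, head_world_pos, shape_offset=(0.0, 0.0, 0.0)):
    """Fake a sphere normal around a shape center that follows the head."""
    center = tuple(h + o for h, o in zip(head_world_pos, shape_offset))
    v = tuple(p - c for p, c in zip(world_pos, center))
    length = max(sum(c * c for c in v) ** 0.5, 1e-8)
    return tuple(c / length for c in v)  # normalized direction = new normal

# Shape center tracks the head bone, so the toon terminator moves with it:
print(spherized_normal((1.0, 2.0, 3.5), (1.0, 2.0, 3.0)))  # (0.0, 0.0, 1.0)
```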
Neat! Here's someone doing a version in Unity too: panthavma.com/articles/shader-blender-to-unity/
@@aVersionOfReality oh that is actually really helpful because I wanted to make some VRC models with this approach and didn't realize there was already a solution out there for unity. Thanks for the share. And great work on all your blender stuff.
@@Beyond-Studios Thanks! I'll have more Normals stuff soon, most of which will be easier to work with than this.
Do you have a screenshot of the UE4 material graph so I can try it out?
This is groundbreaking; maybe normal editing won't be needed as much.
Yeah, hopefully we can make things a bit easier on the face. Normal editing will probably still be needed in other areas. And perhaps for the non-toon shading on the face to stylize it a bit more.
I am definitely going to give this a try, since the style I'm trying to cultivate is a hybrid. Thank you for this. 😀
Just stumbled upon this. Loved it. Question: how do you bake it to a normal map?
Plug the Normals into a single diffuse shader, then in Cycles bake a normal map as usual. Note though that Cycles can introduce some odd issues when baking tangent normal maps. There are starting to be better ways to do it (but these issues are with all Cycles normal maps, not just this method.)
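For anyone who'd rather script that bake, a hedged bpy sketch of the step, assuming the custom normals are already wired into a Diffuse BSDF and an Image Texture node named "BakeTarget" (a placeholder name) is selected in the material; run inside Blender with the object selected:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'       # baking needs Cycles
scene.cycles.samples = 16            # normals don't need many samples

# Bake the shaded normals out as a tangent-space normal map.
bpy.ops.object.bake(type='NORMAL', normal_space='TANGENT', margin=16)

img = bpy.data.images.get('BakeTarget')   # placeholder image name
if img:
    img.filepath_raw = '//baked_normal.exr'
    img.file_format = 'OPEN_EXR'
    img.save()
```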
@@aVersionOfReality Thank you so much.
Wow! This has been really helpful. Thank you for taking the time to create this detailed explanation.
Thank you for such useful content; there are fewer videos about this than I thought there would be.
This is a great idea!
Looking forward to seeing part 2!
this is really important! how does it flow with animation and other parts of the body and armor mesh?
I haven't tested it out much. It should animate just fine. I don't think this method would be easy to use on any other part of the body though. (but luckily, those generally don't need their shading quite as clean and stylized to look right.)
@@aVersionOfReality thank you! I will use this in my next project.
I saw on twitter you guys figured out how to export the normals as a texture and then hand paint details over it? I'd love to see how that's done, and would even be willing to throw some funds at researching something game-engine ready.
Fantastic. Life just got better.
I feel dumb, but all in all great video, thanks! :D
Can you make an in-depth tutorial on how to apply this to another project? Because I don't use Rigify. It didn't work when I parented the null object to the head bone.
It should work the same for any rig. There's nothing specific to Rigify. But note that the Armature Constraint is better to use now. Try that. Message me if you're still having issues (contact info on the About page.)
Which day are u bringing part2?
Thanks for the video
Soon I hope, but I've been saying that for a while. I keep discovering new and better methods, so it keeps getting pushed back. But it'll be better when it comes!
Great video. Isn't it easier to spherize the normals with Abnormal?
Yes, but then it's vertex level, not pixel level. So it's not as high resolution, and it's topology dependent.
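For comparison, a rough bpy sketch of that vertex-level equivalent (roughly what a spherize in an add-on like Abnormal amounts to): point every vertex normal away from a chosen center. The center value is a placeholder; run in Object Mode on the active mesh.

```python
import bpy
from mathutils import Vector

obj = bpy.context.active_object
mesh = obj.data
center = Vector((0.0, 0.0, 1.6))  # placeholder: approximate head center, object space

# One normal per vertex, radiating from the center like a sphere.
normals = [(v.co - center).normalized() for v in mesh.vertices]
mesh.normals_split_custom_set_from_vertices(normals)
# On Blender versions before 4.1 you may also need: mesh.use_auto_smooth = True
```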
Thanks for your response.@@aVersionOfReality
Would this work in blender 4.0? If not, is there an updated method you could link?
Yes. Nothing has changed in this area.
Thanks a LOT for this!
Amazing work! thx for sharing,bro.
is it possible to export this shader to UE5? I have this same problem with my meshes, but the nodes are not the same in Unreal and I don't know how to recreate it. Is there a way to fix this problem in Unreal too? Thank you so much.
I know someone has done it, but I don't have a link for Unreal. I'll see if they have it posted somewhere. All the fundamentals are the same, you just need to figure out Unreal's equivalent. Here's a link to doing this exact setup in Unity: panthavma.com/articles/shader-blender-to-unity/
Great stuff! It was exactly what I was looking for in my own project. Looking forward to part 2. In the meantime I have a question: why not use Generated normals? To remove the need for an empty and make rigging a bit easier. It always scales with the object it's referencing, and you can even combine objects. I'm sure there is a way to correct the issue you have with the jaw as well. I used Generated normals following this method and got great results too. I'm just trying to understand the difference. Thanks!
Generated Normals are basically the same but use the mesh's undeformed bounding box. So on a full character's body it'd be difficult to isolate just the head, among other things. You can pretty much get to the same thing with either one. You do still need to rotate them based on an empty, since you don't want them to stick to the surface, which they will if you only deform the mesh. I mostly went with Object because Generated needs more remapping that differs depending on your mesh, so it's harder to teach. You are correct that they can be used to help with deforms, as they stay the same due to being based on the undeformed state.
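To make the remapping point concrete, one reasonable mapping (an assumption, not the exact node math from the video): Generated runs 0..1 across the undeformed bounding box, so you recenter and rescale it by the box's dimensions to get object-like coordinates.

```python
def generated_to_object(g, bbox_min, bbox_size):
    """Map 0..1 Generated coordinates back to object-space positions."""
    return tuple(lo + gi * s for gi, lo, s in zip(g, bbox_min, bbox_size))

# A head whose undeformed box spans 0.3 units, centered at the origin:
print(generated_to_object((0.5, 0.5, 0.5), (-0.15, -0.15, -0.15), (0.3, 0.3, 0.3)))
# -> (0.0, 0.0, 0.0): the box center maps to the object-space origin
```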
@@aVersionOfReality thanks for clearing that up. I didn't think about it affecting the entire mesh, because that is usually how characters are built, utilizing one mesh. The way I'm setting up my own character is actually separate parts, or multiple meshes, that are then controlled by a single rig. So in my case I guess it works out. And the face isn't actually built into the mesh, but rather as cards or flat planes with materials as an image sequence; but as a result light doesn't interact with them well, so I turn it off for those parts completely. I would like to see your take on the card method some day, if that's something that interests you. I'm sure you'd have great ideas on it, and it's not something I see done a lot in this niche despite how well it could work with people implementing new techniques as you've done in this video. I like your tutorial style and you are very in depth. I've watched this video specifically probably 10 times already. Keep at it!
@@riisezz0 Thanks! About to put up the first video for a new series on Normals. And feel free to send me pictures of your card setup and I'll see if I have any ideas (contacts on About page.)
big brain, but I think I can simplify this much more specifically for anime production
a lot of the "angles" are not seen in a typical anime show
Cool! Yeah I suspect there's a lot of stuff that can be done for typical anime styles that don't necessarily need full 3D space.
At 33:45, when I copy the location axis as a new driver and paste it: the empty is parented, so when the parent moves/rotates, the rotation values don't change, and as such it doesn't affect the attribute node, since it's always at zero. I tried putting in the location transforms of the head and it worked well, but later on the head will be parented to the bone, so it won't have transforms either when the bone moves. Am I missing something? I can't figure it out.
Hmm, yeah, not sure when that broke. There are a bunch of ways it can be done with Constraints, but most are annoying. The easiest solution is to make a bone with the exact same orientation as the Empty, make it a child of the head bone, and then use a Copy Transforms constraint to attach the empty.
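A bpy sketch of that workaround (object and bone names are placeholders): a helper bone matched to the empty's orientation and parented to the head bone drives the empty through a Copy Transforms constraint.

```python
import bpy

empty = bpy.data.objects['NormalsEmpty']   # placeholder empty name
rig = bpy.data.objects['Armature']         # placeholder rig name

con = empty.constraints.new(type='COPY_TRANSFORMS')
con.target = rig
con.subtarget = 'head_normals_helper'      # bone made to match the empty
```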
@@aVersionOfReality I see, I'll try that, and might also see if I can just make the bone orientation transforms themselves into the driver. Btw, couldn't find a better video to help with the normals, so thanks a lot for the video! :)
This is really interesting! Unfortunately, I have a western style (comic-booky, I guess; just see my speedpaints) in my illustrations. I'm looking into an NPR style for a webcomic project to create both characters and prop assets in my style as closely as possible (or at least at my skill level). I'm exploring the Lightning Boy Shader for shading my characters, but the shadows still have that typical weird shape.
Anyway, I plan to create my panels for the comic in Blender then touch up parts of the render to further fit my style. Thank you for this!
It's the same principles to get clean shading for any style and any shader, don't worry :)
Damn big brain, subbed
Thank a lot mate!
do the normals persist if you export as fbx and import it to unity or smth?
Only applied custom Normals will export (as in those from typical Normal editing or applying a normal transfer). Anything else has to be rebuilt in the destination engine, or baked to a Normal Map.
@@aVersionOfReality Sorry for the dumb question but is it possible then to apply those nodes you show in the video as custom normals so when exporting as fbx and porting to unity they get persisted?
@@livb4139 You cannot apply shader nodes to mesh (vertex) normals. You could make the same setup in Geometry Nodes and apply it, but once applied it won't be projecting anymore and will break on deforms, thus defeating the purpose.
The proper solution is to build the same sort of setup in the Unity shader. Here's an article showing how to do that: panthavma.com/articles/shader-blender-to-unity/
If I follow your tutorial and after that I, for example, use a camera in side orthographic view and switch to the Workbench normal map, will I be able to export it as a 2D sprite normal map?
I'm not sure exactly what you mean, but probably.
@@aVersionOfReality It's kinda hard to explain my problem in words. I've sent you a private message on Facebook, if you don't mind helping a bit.
maybe you mean an image texture map?
@@daton3630 I've been able with the help of aVersionOfReality to do what I needed.
@@FyresGames i see
I downloaded the Gumroad files, but I'm unable to implement this on a different face model. Is it possible?
It is, but you'll need to adjust the math to create the right shape, which is not easy. That's why my later videos switch to a Normal Transfer based workflow, as it's more adaptable.
@@aVersionOfReality all right, guess I have to go through each one of them and do exactly as you're doing, right?
Does this work in game engines like Unreal and Unity?
Yes. Here's an article about translating this to Unity: panthavma.com/articles/shader-blender-to-unity/
How did you actually bake the normals into a normal map? Whenever I try to bake the normals it comes out looking scuffed.
Generally you can plug them into an emission shader and bake Emission. Save as a 32-bit .exr for highest quality (the file will be huge, but check if that fixes your problem.) Make sure the Filmic color transform isn't on or anything.
@@aVersionOfReality so in my implementation, the nose group normal output is the final normal output. Do I just plug that into the color of the emission and then bake emission? Also filmic color transform, is that the filmic color management setting?
@@alecmackintosh2734 Yes and Yes. Make sure you're on Standard.
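Those settings, as a small bpy sketch (assuming the normal output is already plugged into an Emission shader's Color input): Standard view transform, an Emit bake, and a 32-bit EXR for full precision.

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.view_settings.view_transform = 'Standard'   # not Filmic
scene.render.image_settings.file_format = 'OPEN_EXR'
scene.render.image_settings.color_depth = '32'    # full float precision

bpy.ops.object.bake(type='EMIT')
```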
@@aVersionOfReality I managed to get a good looking result with the bake. But how did you convert the object based normal map into a tangent one? Because my bake result doesn't look like yours.
@@alecmackintosh2734 For a Tangent map relative to the base mesh's Normals, plug the Generated Normals into any shader's Normal socket and plug that into the output. Then bake Normals with the space set to Tangent.
But note that when used as a tangent map, it will break on deforms. Check out the new videos on Normals I'm doing to understand more about that and how it can be solved.
In the section "object coord rotation issue", why not use the "Vector Transform" node?
Good question! I initially thought this was the proper solution, but it doesn't actually work. You still get the shading stuck to the head. Here's a thread that gives more info on what it does: blender.stackexchange.com/questions/62904/how-is-the-vector-transform-node-used
I'm not entirely sure why it doesn't work, but I assume it has something to do with us deforming the mesh and having the empty parented to the head bone (which it needs to be). Our rotation fix is actually counteracting the rotation of the empty, whereas this won't do that.
@@aVersionOfReality the Vector Transform node assumes the object the material is on as the transformation target. The nodes you use instead are equivalent to the Vector Transform node BUT allow the empty's transforms to be the target
@@kolupsy Ah, thanks for the explanation!
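A small mathutils illustration of that distinction (run in Blender; the empty's name is a placeholder): the manual node math amounts to rotating a world-space vector into the empty's local space, which the Vector Transform node can't do, because it only knows the shader's own object.

```python
import bpy
from mathutils import Vector

empty = bpy.data.objects['NormalsEmpty']   # placeholder empty name
world_vec = Vector((0.0, 0.0, 1.0))

# World space -> the empty's local space (rotation only, like the node fix):
local_vec = empty.matrix_world.to_3x3().inverted() @ world_vec
print(local_vec)
```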
1:30 Oh... My... God... This is just PERFECT! I see the wiggly toon shading shadow terminators you are talking about a few seconds before... everywhere! So much so that I even thought it was just IMPOSSIBLE to have better shadows... And finally, I've seen your video... So either I am really, really bad at searching for information on the web, or you are a fucking GENIUS!!! I am just at the very beginning of the video, but I would really like to know: is this method applicable within a game engine? I'm thinking more specifically of Unreal Engine, because I'm working with it... Thank you in advance for the answer. And thank you again for sharing this awesome tutorial with us!!! ❤️❤️❤️
Hello! Yeah, there isn't much out there on this issue. And yes, this method is applicable. People have gotten it working within both Unreal and Unity. Also, I have an ongoing new series and articles on my blog about more Normals stuff that explains a lot of the issues and solutions that go beyond this video: www.aversionofreality.com/blog/2022/4/21/custom-normals-workflow
@@aVersionOfReality Oh yeah! It seems it has the answers to all my questions and even more! Tons of interesting things! Thank you so much for this insane work!! 😍🥰😘🤩
@@aelendys4401 There's still a lot to figure out and implement. The series isn't done yet. Feel free to contact me if you get stuck.
@@aVersionOfReality =D
yes but how can you export this out of blender?
panthavma.com/articles/shader-blender-to-unity/
Some investigation on making this in Unity.
I'm also working on new videos taking this all much further, which will also talk about how to bake Custom Normals of any type out to textures.
@@aVersionOfReality amazing article, but sadly I'm really bad with code. Can't this be done in unity's shaders?
@@zaidkiwan5168 I'm not sure of the details myself, but I know several people are looking into it. You could try contacting the author of that article, or following their twitter. We've also been talking about other Normals related options. We'll be tweeting about stuff as we figure it out :)
@@aVersionOfReality amazing! it's really exciting seeing the whole NPR community grow and experience so much groundbreaking stuff every time❤
GOAT
I want to export the final normal map to UE4. Can you help me? I'll pay you. My English is bad, sorry.
You can bake it to a Normal map and use it in any game engine, and it'll work pretty well. And someone has re-created the setup in Unity. You should be able to make it in UE with the same process: panthavma.com/articles/shader-blender-to-unity/
Does it work on vroid models?
It works on any model.
Wow thx uuu
I'm unable to use this :( please make a video on how to use it. In fact, the moment I append the head with rig, light, and Empty, it stops working. Great video though
Can you say more about what you're doing and what isn't working? It should just work, unless some component of the setup is left out, or the drivers or object relations are broken.
Thanks for responding! Basically the shader worked fine in the original Gumroad file I downloaded, but not when I appended the material/shader with everything set up, drivers, empties, etc. After tweaking around with the nodes, I found out the GFN toon preview node wasn't reacting with light. Any help?
Edit: I made a workaround: instead of using the toon preview node, I use a DIFFUSE node into a SHADER TO RGB node into a COLOR RAMP (constant) with the white color slider at the 0.5 position. It works all right now!!! Anyway, thanks for the amazing work, keep it up 😃
@@darshan5044 This may be to do with the light settings. If you send me your file, I'll figure out what's going on (email on About page)
@@darshan5044 Looks like the problem was the World Lighting messing up the toon shader. Setting it to 0 strength fixes it.
It is so wild that this is necessary T_T
Yeah :_:
Years later now, and I'm talking to a company that wants to hire me to implement this sort of stuff in a game engine XD There still isn't a better real-time solution!
@@aVersionOfReality sheeeeesh lol that is wiiild
this is interesting but I'm too lazy to set things up
You sound like that criminal guy from The Simpsons lol. Good tutorial, though.
Damn, you've figured me out!
Ugh, you have a good-looking setup, but where's the actual part in which you show us how to do this? What's with this lecture of why, and no how?
That's coming. I've been working on figuring out how to add more detail and make a more user friendly workflow for all this. I'm about to start putting out a series on it all. It'll have some more explaining up front, but then get into step by step walkthroughs of how to do the Normals. Also working on some stuff about modeling the face properly in the first place (better than my current one) but that'll come after.
@@aVersionOfReality I wanted to say this. I feel like you don't get enough of the appreciation that you rightfully deserve. So thank you for making this shader and demonstrating it to us. And thank you for continuously trying to improve your work, taking into consideration those who are less experienced with shaders or the 3D workflow in general.
Cool trick, but I personally don't like how it looks.
thank you for making this tutorial, however I couldn't finish it unfortunately.. I'm very sensitive to sounds, and I could hear the squishy noises inside your mouth while you were talking > your mic is too sensitive :c
Yeah, it's really tough to get audio working well, sorry.
Why yap for 40+ minutes? Just get straight to the point. I've watched over 100 videos and still can't fix the shadows on my characters!
At that point it's probably a skill issue.
@@aVersionOfReality There are people out there making videos of 1-20 minutes or less explaining it, but they're difficult to follow. Your videos are long asf and I manage to understand the shorter ones. Please try to make your videos shorter and straight to the point.
@@CGDANJI You are commenting on a several year old video. My new ones are shorter. But "getting to the point" isn't as simple as you think. Different people are looking for different information, and different levels of context. And many of these topics are too complex to cover with a few short bullet points, which is why nobody else bothers, or they chop out info to get the length down, and then you don't actually understand the topic. And while there's certainly time that could be taken off my videos with more editing or planning, it quickly becomes a bad return for me. I already have little time, and every minute I spend on these videos is time I'm not spending making new things, learning, or earning $. I mostly make these videos for other enthusiasts I talk to as a way to avoid explaining the same thing over to different people.
My videos are imperfect. And while I do try to keep improving, they are going to remain rough. Take it or leave it. If you want to make your own video that explains it all better, I'll be happy to link it and send people there instead.