is that a cg mustache?
Definitely a procedural
so real! love it!
DVDTSB yeah but it’s just a bump map
actually the mustache is the only thing that's real and he has secretly been Ian Hubert all along
Joss Whedon would be proud
I'm addicted to watching stuff about Blender , how did i end up like this?
That's the dopamine man by watching how to do it and thinking you got smarter. Information is a hell of a drug!
you've deleted the default cube of your mind, friend
Well... If you're anything like me... Your wife left you.
Same. I haven't opened Blender in literally over 10 years, but I am just so entertained by CGMatter, Ian, etc
@@fecu2394 oh i was never married ( lucky am i?)
see Squarespace sponsor:
my brain while watching CgMemeStar : "But can we create a website *procedurally* ?"
Yeah of course, we'll use :
- SVG for infinite zoom(with no detail loss, no pixelation);
- Javascript for creating SVGs on the fly !
But I'm sure @CGMatter has a way to do all that in Blender with almost no pain...
that would be php (kind of)
Yes, probably easiest using react.js
hrrrm well squarespace does have an api
... using blender
He said "see you" instead of "bye bye"
Something must be wrong
H E R E S Y
Alvaldong it's just a glitch in the CG matrix
He hacked all of our cameras using Procedural code in blender!
whats the matter cgmatter
CG Matter's outro cured my grandma's alzheimers. She finally knows who she is again.
Cured Granny Interface. seems legit.
@@parishna4882 CGI haha
Remember, when using an image texture as a normal map, set the “Non-Color Data” option on the Image Texture node. And don’t forget to have a Normal Map node to convert the coordinates from tangent space to global space before feeding them to the shader.
hey man im kind of like a noob at blender, can you tell me how to use the displacement image texture? like i plug the image texture of the displacement map into the vector displacement node and then connect that to the displacement output and nothing happens?
as a bump ?
Title: "WTF are... NORMAL MAPS?
My brain: Google Maps
🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣😆😆😆😆😆😆😆😆😆😆😆😆😆😆🤣🤣🤣🤣🤣😆🤣🤣😆🤣😆😆🤣😆🤣🤣😆🤣😆🤣🤣🤣🤣🤣😆🤣😆🤣😆🤣😆🤣
I feel like he only explained what bump/height maps do/are, but only said that normal maps have 3 channels...
for those who still want to know what normal maps do:
normal maps alter the direction in which the surface appears to be facing compared to the actual direction the geometry is facing.
the three colors (coordinates) span a normalized vector: Red is X, Green is Y and Blue is Z.
an unaltered surface is that typical light violet because it has a maximum Z (blue) component and ~50% on the other two channels (50% 'grey' encodes 0, while 0% is -1 and 100% is +1).
Imagine the vector as a pen balanced on its tip.
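A minimal sketch of that colour-to-vector decoding in Python (assuming an 8-bit tangent-space normal map; the function name is just illustrative):

```python
import numpy as np

def decode_normal(rgb_8bit):
    """Map an 8-bit RGB sample (0..255) to a unit normal vector.

    Uses the convention described above: 128 ("50% grey") encodes 0,
    0 encodes -1 and 255 encodes +1, with R = X, G = Y, B = Z.
    """
    n = np.asarray(rgb_8bit, dtype=np.float64) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n)  # re-normalize to unit length

# The "flat" light-violet pixel (128, 128, 255) decodes to roughly (0, 0, 1):
print(decode_normal([128, 128, 255]))
```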
Great! Thank you!
Yep i watch the video for understanding why i would want to use a normal map when bump mapping is so much easier to create. Still dont know lol
@@Chevifier A bump map can create magnification artifacts: if you zoom in, it'll give you pixelation. It's also slower to calculate, which matters if you're aiming for 60fps. But you can use a bump map as a start and then bake that bump map to a normal map, which gives you the best of both worlds.
Thanks
Ok so i presume you would model your hard surface, then somehow bake that into a height map that can be applied to a more basic model with fewer faces, right? How do i learn about baking textures from models? I've always made my bump maps manually, separately, in a photo editor...
That was a sick video, straight to the point, quick, informative and the ad was placed at the end without interrupting the flow. This is the first video I've seen from you and I plan on watching more 👍
Even though i don't know how to use blender because of my smooth brain, i like watching this guy because he's pretty funny.
if you do want to learn blender, don't start with CGmatter. End with CGmatter. Or like, watch CGmatter at some point anyway, but don't start with watching it because, like, it will be hard and your brain will not only become crinkled & wrinkled, it will become TOO wrinkled & crinkled and you'll end up dead, or even worse, a 3D artist.
Once you create the donut, you will be ready
@PIYUSH YADAV Junkies like to get other people hooked. You don't feel so bad about your drug habit if other people are doing it too.
@@yourhandlehere1 🤣🤣🤣
@@00O3O1B dammit, I was gonna make that joke
Why do you look young and middle-age at the same time?
the mustache
Hes actually like 50 years old
He's a youtuber
ikr he looks like a teenage boy with a fake mustache :v
That's what 25 year olds look like.
congrats on 100 dude
Congrats on 100k subs.. Well deserved.
284k now. How much did you have at the time of writing this comment? I see your first videos are 4 years old too and you have 158k now, congrats!
You forgot to say the most important point: they are simply "Normal" Maps. Ever heard of the normals every face has got, the direction it points in? That is literally part of the light-bounce equation; it takes part after a light ray hits the surface and decides which direction it bounces in. Now, "Map". We all know what maps are. Well, zoomers, I mean Google Maps: it guides lazy people who can't learn their streets where to go. A normal map simply guides a light ray in which direction it should *bounce*. Good luck with life everyone, baiii
At 2:21, the equation can be daunting at first glance, but it's nothing but "normalising" the normal vector. Basically, the perpendicular vector we get by taking the derivatives (the numerator) has some arbitrary length, and we want a unit-length (magnitude 1) vector that's perpendicular to the surface. That's why we divide by the denominator, which is just that vector's magnitude. So all it does is give us a unit-length vector.
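Written out (assuming the surface is a height field h(x, y); the exact form shown in the video may differ), that is just

$$
\mathbf{n} = \left(-\frac{\partial h}{\partial x},\; -\frac{\partial h}{\partial y},\; 1\right),
\qquad
\hat{\mathbf{n}} = \frac{\mathbf{n}}{\lVert \mathbf{n} \rVert}
= \frac{\mathbf{n}}{\sqrt{\left(\frac{\partial h}{\partial x}\right)^{2} + \left(\frac{\partial h}{\partial y}\right)^{2} + 1}},
$$

so the denominator is simply the length of the vector in the numerator.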
I can barely concentrate on the video subject with the epic music loop in the background lol
This must be the best, most informative tutorial I have seen in a long time. Other explanations are way too technical and this video explains it quickly and easily without much hassle to someone who's just curious. Thank you!
red is roughness, blue is emissive and green is metalness, gg
I still don't understand but thank you for the tutorial anyway
same
Wait, why would we use the displacement map if we already modelled the thing?
Physics simulations, for example. You could make a displacement map of a detailed object, reduce the object down to its primitives, simulate it much more cheaply since there are far fewer verts, and then, when you render out your animation, the displacement map gets applied. So simulating your object doing whatever will be way cheaper, but in the final render you will still be able to have detailed light interactions with that object.
Because game engines need to churn out 60 fps for every scene, and if you use a million triangles on a sci-fi vent that constitutes a fraction of your environment, then you can bid your career goodbye.
Basically it lowers render times.
kakashi zen but if you import the primitive into a game engine with a displacement map, wouldn't it cost just as much as the model itself? what i got from the video was that only bump maps and normal maps save performance costs
@@anmoldureja4247 they're talking about displacement maps though, not baking a normal map
Another reason you might want to do it is the adaptive subsurf stuff - if you bake your geometry into a displacement map, you can control the level of detail procedurally by giving it more or fewer verts to displace. Throw the adaptive subdivision control in there, and you've got something that adds more detail the closer the object is to the camera
So you get all your expensive detail when stuff is close enough to see it, you get simplified geometry when it's far enough that you can't tell, and you only need one base object and one displacement map to get all those levels of detail. If you did it purely with geometry, you'd need lower-detail versions of stuff, and you'd have to mess around swapping them if they ever move relative to the camera
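The level-of-detail idea in that comment boils down to something like this toy rule (not Blender's actual adaptive-subdivision code; the thresholds and names are made up):

```python
import math

def subdivision_level(distance_to_camera, base_level=6, falloff=2.0, min_level=0):
    """Toy LOD rule: drop one subdivision level every doubling of distance.

    An object at distance 1 gets `base_level` subdivisions; far-away objects
    degrade toward `min_level`, so only near-camera geometry pays the full
    displacement cost.
    """
    drop = int(math.log(max(distance_to_camera, 1.0), falloff))
    return max(base_level - drop, min_level)

for d in (1, 4, 16, 64):
    print(d, "->", subdivision_level(d))
```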
I mean it is not necessary that you always use a displacement map for a model though, you can just use it wherever you like, for terrain etc. It's super useful there. And there are times when you want to use a texture as a displacement map. He just explained it all on one object for comparison but you're right for the model in the video, displacement map doesn't make sense
I really enjoyed it, except I just feel like the whole world is on ADHD meds and I can't keep up. Everything is hyper fast. I didn't need the music; I just need to know the information, and it helps if it goes a little slower. But I'm a dinosaur, so tough luck for me. Thank you so much for this very valuable information
logged into youtube just to say how helpful this video was (and how it was fun to watch, nice editing!)
When making use of smooth shading, there are two techniques called Gouraud interpolation and Phong interpolation.
In Gouraud interpolation, the vertex colors are calculated based on their normal directions and the current lighting / material information. That color is then interpolated across the surface of the polygon that's formed by the triangle. This means that lighting calculations are done *per-vertex.*
In Phong interpolation, the vertex normals are interpolated across the surface of the polygon and the color of each fragment is calculated based on its normal direction and the current lighting / materials. This means that lighting is done *per-fragment / per-pixel.*
For obvious reasons, Phong interpolation is a lot more computationally expensive than Gouraud interpolation, but it produces much more accurate shading that captures things like specular highlights.
Modern computers are more than fast enough to do per-fragment / per-pixel lighting, as is the case with Phong interpolation. But when you use a normal map, you're skipping the interpolation of the vertex normals and simply sampling the fragment normals from a texture image. This is less computationally expensive and gives you more control over how the surface is lit because you're essentially telling the renderer which direction light should be reflected on a pixel level. You can make a very low-poly surface be lit in almost the exact same way as a high-poly surface.
Tessellating a surface into more polygons can also help lighting appear more smooth. If you do this at runtime, you can have a displacement map displace the vertices of the generated polygons to add more detail to the surface. But if you're not tessellating, you can still use the same image as a bump map, in which case you're just creating the illusion of the surface being raised or lowered via a lighting trick. This isn't as good of an effect as normal mapping, but takes about a third of the data.
The use of bump mapping or displacement mapping is a mutually exclusive choice. Do you want more detail, or just the illusion of more detail?
Using bump mapping together with normal mapping is possible, but stupid and pointless. Just choose one or the other. Do you want to save memory with a bump map, or have more detail with a normal map?
In contrast, using displacement mapping together with normal mapping makes perfect sense because displacement mapping creates actual detail, but may not be able to capture some of the more fine-grained details that normal mapping does because it's limited by the tessellation.
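To make the per-fragment point concrete, here is a minimal Lambert-only sketch in Python (not a real shader; the tangent-space to world-space transform is skipped and the names are illustrative). The fragment's normal is looked up in the normal map and fed straight into the lighting equation instead of being interpolated from vertex normals:

```python
import numpy as np

def lambert(normal, light_dir, albedo):
    """Basic diffuse term: albedo * max(0, N . L)."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * max(0.0, float(np.dot(n, l)))

def shade_fragment(normal_map, u, v, light_dir, albedo):
    """Per-fragment shading: sample the normal from the map rather than
    interpolating vertex normals."""
    h, w, _ = normal_map.shape
    texel = normal_map[int(v * (h - 1)), int(u * (w - 1))].astype(np.float64)
    normal = texel / 255.0 * 2.0 - 1.0   # same 0..255 -> -1..1 decode as above
    return lambert(normal, light_dir, albedo)

# A 1x1 "flat" normal map lit from straight above gives (almost) full brightness:
flat = np.full((1, 1, 3), (128, 128, 255), dtype=np.uint8)
print(shade_fragment(flat, 0.5, 0.5, np.array([0.0, 0.0, 1.0]), 0.8))
```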
He looks like young Chris Hadfield with a wig
interesting, but the music is hammering my head
google search gave me this.. 'Bump maps are the old school way of adding detail to low poly objects. They are used in the same way as normal maps, except they only contain height information and not angle information. Displacement maps are sometimes used to change the location of actual vertices in a mesh' now i get it. good video though glad i learnt something :)
Dude, congrats on the 100k subs!
Only about three minutes... but man... this is pure gold!
Congrats on 100k
The Andrew Price style thumbnail (And also the other memes) is why I love this channel!
Congrats on 100k subs!
KingFredrickVI he made a video about it on DefaultCube channel
284k now
Dude I love you, this shit cracked me up and it was fast-paced, catering to my ADD. Please don't stop making blender breaking news videos like this
Thanks for the clear short explanation.. just the video I was looking for :)
THANK YOU!!!
4 minutes, everything I expected to learn was in the video AND there is a sponsored ad at the end? Absolute legend
Congrats to 100k man
jesus.. for years i have been googling the definition of normal maps. Today i finally understand. That was so simple, thank you sir
i feel like every time i watch a video of his, his hair changes. yet again i've been going through old videos, informative and to the point!
Congrats on 100k dude
Outstanding job, man!
woah, tom selleck teaching me blender
wtf are uv maps? how do we tame them like we're wild jungle expedition guides?
This was incredible information. Thank you for this
This is one of the most genius guys on YouTube, doing Blender in the most creative & fun ways!
Congratulations!!
100,000 subscribers.
me who doesn't design in 3d: "i'm getting epilepsy due to the amount of data given in less than one minute"
HOLY SHIT your videos are AWESOME. You explain the shit, with no bullshit, and make it a joy to watch
never stop making these
As always, you've been great and I've been blown away. Thanks mate.
after yesterday the title should read "wtf WAS normal maps."
very quick and simple video to understand
thank you so much
Good job on 100k subs
maps are basically what prevents meshes from reaching quadrillions of vertices
I didn't even know Magnum PI did tutorials. I just outed my age.
amazing format keep it up
Thanks. I actually really needed this video as I found out about displacement yesterday. Exact timing. Thanks
Congrats for passing 100000 Subs
I like this look! Keep it up! Especially the moustache!
Happy 100k subs!
You reached 100K! Congratulations!
My entire world is in shambles, he said "See ya", where's my buh bye? I'm lost?
Congrats on 100K !!! will you do caustics on the YouTube play button ?
I always love this channel. What's more, you always learn something new every day :)
Just subscribed because content is king and you nailed that in both material and style. That being said, 99% of YouTubers/Podcasters/peoplewhorecordtheirvoicesforpeopletohear need to talk faster; you are the rare exception whose pacing could be slowed a bit, and I think it would benefit. Also, more importantly, please look into getting an actual microphone, as the audio quality was the equivalent of listening to Beethoven's Für Elise as arranged for kazoo and vuvuzela. That is to say, amusing and interesting in its own right, but I would like to drill holes in my ears to stop the incessant shrieking. Hyperbole aside, keep up the quality content and the quality mustache, I appreciate you.
Did not know partial derivatives can be used for this. This has opened me to many other ideas. Thanks.
mustach looking good... keep it up (i mean above the lips)
Since the dawn of man people have been modeling super high-poly meshes full of detail... so we need to bake them into displacements so we can use an even higher poly count for a worse result! :D
#fistbumpforbumpnode
I don't understand the point of displacement maps :/
I fail to see the point of them unless you’re generating them procedurally via nodes.
@@henryrichard7619 or they can be taken from real life objects too. Also it's easier to store a light image file than a heavy 3d model.
@@doomstere.g352 well, think about this metal thing he showed in the video. if you just want to render one of those, there is nothing gained in using a displacement map instead of a model. but what if you have, like, a huge spaceship with 100 of those plates? hard to work with. 100 planes with the displacement map on them will work way better here. and since the adaptive subsurf will raise the poly count most near the camera, the planes further away won't have a huge impact on render times either.
Congrats to 100k:*
Omg Amazing. You are really the best at quick explanations. There is just one thing left - Could you show us how to make a normal map from height map? It would be super useful!
The bump node does it for you.
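For anyone who still wants to do that conversion by hand, the usual trick is finite differences on the height map. A rough numpy sketch (assumes a single-channel float image in 0..1; the strength factor is arbitrary), while inside Blender you would normally just use the Bump node or bake a normal map as mentioned:

```python
import numpy as np

def height_to_normal_map(height, strength=1.0):
    """Convert a heightmap (H x W floats in 0..1) to an 8-bit tangent-space
    normal map using central differences."""
    dy, dx = np.gradient(height.astype(np.float64))
    # Unnormalized per-pixel normal: (-dh/dx, -dh/dy, 1/strength)
    n = np.dstack((-dx, -dy, np.full_like(dx, 1.0 / strength)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    # Re-encode -1..1 into 0..255, matching the usual normal-map colour convention
    return ((n * 0.5 + 0.5) * 255).astype(np.uint8)
```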
An old artefact from 2020 and before, when game engines weren't strong enough to render millions and millions of polygons in real time. It was used to fake detail.
You had to build the exact same model twice. Once you were finished with your detailed sculpt with millions of polygons for your table, for example, you had to remodel it with just a couple of thousand or even just hundreds of polygons. Imagine all the extra work they did back then.
I don't need to know, I already know, but it's CGMatter, so I'll still watch it.
I watched this and now I am exhausted.
This was very helpful. Thanks.
Great video! Thank you!
Explained nicely! Thanks CGMatter.
I was really interested in how Blender's bump node converts image textures into normals, so this helped. I had figured out the height map part but didn't know the normal part.
You are rocking that moustache, my man, love the style! Also, I love normal maps again, thanks!
Okay I THINK I've started to grasp the difference in normal maps vs bump maps. I'm just a little dummy and didn't realize I should probably look up what exactly normals are first. Forgive me, I'm not really a math person and I'm only just rendering my donut animation right now and am still trying to figure out what all the terms mean. Your video really helps me to conceptualize it in my head. Thank you
What a fantastic tutorial
scared me with that dramaalert intro for a sec
"I"M CGMATTER, YOU BEEN YOU"
MY GOSH YOU ARE GREAT..
I think one day this dude's gonna run the internet
Cg matter taught me so so so much about blender that I made my own channel just like him in the hopes of being as good as him one day
Into: "It's me! CGMatter!"
I use both! displacements for major details and normal map for small details like cracks so I don't need a crazy amount of subsurf modifier
I still don't understand what the point of using normal maps is if B&W bump maps look the same in the end.
they are more performant if you need realtime rendering like in a video game
There's also directional displacement maps, i.e. vector maps.
And there are ways to directly get a normal map from geometry much as you can do for bump maps.
vector displacement map: AM I A JOKE TO YOU???
Did somebody say Maya? Joke too, you?
The only guy that has 120k subs and still uses a lousy camera for recording... legend
I actually do know how to make a beautiful website, thanks for asking.
(Jokes aside, stuff like SquareSpace has become my go-to whenever distant relatives ask me to make them a website. It's far easier to show them that than to actually do it for them.)
now how can I reverse this better (asking for my 3d printer)? you can displace bumpmaps and "bake" a texture (any color->grey tex and bumpmaps work) into geometry for 3d printing, but you can't undo normal maps for printing the high-poly model. (you want all the faces you can get since they'll be sliced as paths; I have the RAM and storage, I'll make a 200mb model just to displace textures back into a model)
problems:
-the seams are broken unless you can find a perfect midpoint (you need a watertight final model, so an accurate midpoint saves many angry hours of meshing)
-baking textures is like a lithophane and it doesn't represent the height detail that the artist intended (a brick's tex =/= the heightmap) (baking textures is nice and simple if you have something like a spherical photo that you UV unwrap to a sphere; you can get really nice small globes or props)
-bumpmaps nearly guarantee bad seams (you can't print unless you remesh, you just can't)
-gets very slow for hard edges because you have to uniformly spread out your triangles to be displaced
-edges of UVs can affect some more complex models if whoever textured it didn't use a smooth island border. (fucking UV went over the edge of the image somewhere inside the model and broke the displacement modifier)
-white/null gets displaced and will make internal geometry pop out
-can't use normal maps
I know a couple of these can be alleviated with a fill tool swapping the null textures with the intended midpoint color, or by remeshing first and somehow using that new UV set to only displace the outer, watertight surface for as few seams as mathematically possible, but both of those solutions are only situational and band-aids to the main issue. Any solutions would be helpful. (the end goal is a high-poly, very-high-detail model of the fully texture-filtered, fully tessellated, fully shaded model where all the detail is available to be baked into the model when remeshed for watertightness and printed)
by "baking" I also mean the exact opposite of baking, taking that fresh normal map pie and shoving it back onto the model.
i needed this
@CgMatter please consider explaining to us plebs WTF are... Compositing Passes, especially the Position Pass.
If you google 'what is position pass', a professional Nuke teacher at his desk literally just tells you to google what it is, which gives his tutorial as a top result ∞
this made me anxious
What are your news sources?
1) Pew news
1) Blender news
Love CG mustache
This one is very well done. Good job :D
News that CG
News that Matter
Great video. Thank you.
Next video: How to get 100k subs FAST!
Congratulations on 100k man :)