The best thing I've seen in a while!
Really great stuff to see from Chris. We worked together on Sackboy years ago, so it's really awesome to see a big evolution that leverages Unreal's newer tools to solve a lot of the problems we had on that project. Also really happy to see a centralised technical art/tools pipeline being hinted at, as this was one of my biggest complaints during my time at Sumo.
Where would one get access to the Arrowhead Initiative to get the plugins? Are they available to purchase somewhere? Would be super useful to know. Thanks for the insight, awesome video. It's insane how little material layering is being talked about. I see too many assets labelled 'game ready' while they use 2x 4k maps and 2x 2k maps, and it's like, 'game ready', great, but for what game...
Hi, did you manage to find them? Searching with Google doesn't lead anywhere
Crazy !!!!!! awesome
This is super important!
Excellent talk, thanks for all the information!
This is a really inspiring presentation, thanks!
Hi, I’m currently working on optimizing my project’s materials and am exploring efficient ways to share data, such as masks, tweaked vertex colors, and UV scaling, across different material layers or within material layer blends. I understand that sharing these parameters can significantly simplify the material setup by reducing the redundancy of inputs and potentially streamlining the workflow. However, beyond simplifying parameters, I’m curious about the performance implications of this approach.
Could someone explain how to effectively share data like masks, tweaked vertex colors, and UV scaling across material layers or blends? Additionally, does this method of sharing data between layers or blends offer any performance benefits, such as reducing shader complexity or improving render times? I’m particularly interested in understanding if there are best practices or specific techniques within Unreal Engine that facilitate this kind of data sharing while also optimizing material performance.
Thank you for your insights!
Amazing presentation
Hi, I really enjoyed the lecture. One part that interests me, but that I have little to no resources on, is the value clamping inside of a packed map and being able to isolate specific gray values. I was wondering if you guys could share any resources that may help me in my endeavors; I've been poking around a few different forums and a lot of my solutions fall short of anything useful. Any help you guys could provide would be awesome and appreciated.
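For anyone else wondering about this: one common way to isolate a specific gray value from a packed channel is to clamp to a narrow band around the target value, which is essentially what an If/step setup in a material does. A minimal Python sketch of the math (the `isolate_gray` helper and the 0.25/0.5/0.75 levels are illustrative, not from the talk):

```python
# Sketch of isolating a specific gray band from a packed grayscale mask.
# Assumes mask IDs were authored at known constant levels (e.g. 0.25, 0.5, 0.75)
# and stored together in a single channel of a packed texture.

def isolate_gray(value, target, tolerance=0.05):
    """Return 1.0 where `value` is within `tolerance` of `target`, else 0.0.

    This mirrors a step-based mask extraction in a material graph:
    clamp the channel to a narrow band around the target gray value.
    """
    return 1.0 if abs(value - target) <= tolerance else 0.0

# Example: a channel packed with three mask IDs at 0.25, 0.5, 0.75.
pixels = [0.0, 0.25, 0.5, 0.51, 0.75, 1.0]
mask_for_half = [isolate_gray(p, 0.5) for p in pixels]
# Only the samples near 0.5 survive: [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
```

The tolerance matters because texture compression shifts values slightly, so exact equality tests against the authored gray level tend to break.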
32:00 Arrowhead plugin. Chris talks about it and implies that it's available to grab. Where do I go? There's no link.
Yeah, I'm looking as well!
@Floydianification have you found it by any chance?
@voldemortsplace762 have you found it?
Anyone found out? I’d like to have a look
@MarcoMariaRossiArte it's not available to us members of the public; it's for game development studios and internal use.
nice.
Hi, still waiting for more information on this. It looks pretty bad for Sumo that no one has answered any of the great questions posted below, and it's been a year now.
If this is an internal-only setup, is there anything I could achieve by creating this manually from scratch, possibly something similar/basic in Unreal? Great presentation by the way 👌.
You could use Unreal's Material Layering system, which is exactly what he's using. It sounds like he has more of a library of examples, as opposed to the system itself, which is already a part of UE.
Please share the MLBs
You can see it, but you can't touch it.)))
DXT Normal Compression yikes...
DXT normal compression... Please god no. The artists you "taste tested" this with must have been blind. It just destroys the shading and looks rough unless every normal is 4k and high texel density.
Have you actually tried it? Literally an imperceptible difference lol
@SuWoopSparrow I've seen people try it in some asset packs and it was clear as day to me, so unless they did it wrong, which I don't think they did, it is very much not imperceptible. My immediate thought was "These normals look straight out of the Source engine with how blocky they are". Again, the effect is hard to see at 4k, but everything does not need to be 4k, and your resolution just goes toward hiding those artifacts, wasting video memory when the idea was to reduce it.
I'm curious to do my own comparisons now though, so I will get back on that, because maybe they weren't reranging the normals correctly. I doubt that was it, but maybe.
@Jofoyo Assuming we are talking about a video game application here, and not close-up cinematic shots with perfectly composed lighting, the difference shouldn't be noticeable if you follow what he said:
1. DXT1 compression
2. Rerange R and G from [0, 1] to [-1, 1]
3. Rebuild the B channel (you can use the DeriveNormalZ node)
Can't speak for what asset packs are doing, but if you just plug a DXT1-compressed texture straight into the normal output it will look blocky and stronger than normal. It's a night and day difference that way, but following what the speaker said gave me a result that was basically the same in game view and in World Normal view.
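For anyone who wants to sanity-check steps 2 and 3, here's the math as a small Python sketch (the `reconstruct_normal` helper name is just illustrative; the z derivation is what UE's DeriveNormalZ node computes, assuming a unit-length tangent-space normal):

```python
import math

def reconstruct_normal(r, g):
    """Rebuild a tangent-space normal from the two stored channels.

    Step 2: rerange R and G from [0, 1] to [-1, 1].
    Step 3: derive Z as sqrt(1 - x^2 - y^2), clamped so compression
    noise pushing x^2 + y^2 slightly above 1 can't produce a NaN.
    """
    x = r * 2.0 - 1.0
    y = g * 2.0 - 1.0
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

# A flat-normal texel (0.5, 0.5) should reconstruct to (0, 0, 1).
flat = reconstruct_normal(0.5, 0.5)
```

Note this assumes the normal map only stores upward-facing hemisphere normals (z >= 0), which is the usual case for tangent-space maps.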
My god, what a BORING presentation.
I think noobs are saying they enjoyed it to look accomplished.
probably not for you