Temporal Denoising Analysis

Comments • 58

  • @MarkStead
    @MarkStead  2 months ago

    I've discovered that camera focal length (zoom) changes do not generate valid motion vectors. See the details in the video description, and a link to the Blender defect.
    This technique will fail badly if you animate the camera focal length.
    There's a fix coming in Blender 4.3.

  • @udbhavshrivastava
    @udbhavshrivastava 1 year ago +8

    This was such a thorough analysis!
    Appreciate the good work, mate.

  • @leonarddoublet1113
    @leonarddoublet1113 1 year ago +4

    Thanks for the video Mark - a lot of clear detailed work to explain the process and functions. I appreciate it.

  • @biggiesmollzdamayor
    @biggiesmollzdamayor 29 days ago +1

    Hey @MarkStead,
    Thanks for the great video and Blender node tree! The explanations are very clear. Just one question: on the Gumroad page you mention that denoising data and environment are recommended, but after looking at the file, I'm not sure how they are used. Would appreciate it if you could elaborate on this. TIA

  • @lilacui1222
    @lilacui1222 19 days ago

    Just what I have been looking for!

  • @millthor
    @millthor 3 months ago

    Thanks a lot! I appreciate all the great work you've done!

  • @matejivi
    @matejivi 1 year ago +1

    Thank you! A shadow pass would be nice indeed.

  • @Szzachraj
    @Szzachraj 1 year ago +1

    Cool video, the clear explanation helped me with my decision.

  • @ruok3351
    @ruok3351 14 days ago +1

    We need a straightforward tutorial on how to use this to render animations. As an artist and not a graphics engineer, I'm still very confused.

    • @CapitalGearGaming
      @CapitalGearGaming 11 days ago +1

      There are plenty of those out there, and this node group is somewhat simpler to use than others (but maybe better? I've yet to test).
      Look up 'Less samples, cleaner render' by Edin Spiegel to get the idea of how temporal denoising works, with step-by-step instructions.
      Also look up "De-noising flickering animation (temporal de-noising) and tips!" by Static VFX (mentioned in the previous video as well), as he also covers various methods of denoising before you do the temporal denoising, as well as his own temporal denoising (though he keeps his median temporal denoising behind a paywall).
      If you want a run-down of the whole process it goes like this:
      Animate your scene and get it render ready. Under View Layer > Passes, make sure you have at least the 'Vector' and 'Denoising Data' passes selected, and in some cases 'Z'. If doing multi-pass denoising you'll also want to select all the light passes except shadow catcher. Render the scene as an OpenEXR MultiLayer image sequence (you MUST use this file type; PNG and other file types do not support the additional layers mentioned above). Use the compositor to denoise the image during the render process using the denoising data layer mentioned before.
      Once rendered out you will have a sequence of quite large EXR images. Open a new Blender file using the same color management settings, switch over to the Compositing tab and load in this temporal denoiser node group (or any temporal denoiser node group) by appending it; then make an image sequence node and load up the EXR images you created. Duplicate that image sequence node for however many frames the temporal denoiser supports, setting their offsets as needed, and plug the 'image', 'vector' and 'depth' outputs into their respective slots on the denoiser where applicable.
      Now, under the render settings, switch the file format to something like MP4 or whichever video format you want, then click 'Render Animation' to composite out those EXR images, temporally stabilize them, and convert them to a merged video file.
      There's tons of optional stuff I'm leaving out here, and additionally these methods all seem to have some rough edges or limitations. But that's the short of it; it's quite a process.
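
      A rough bpy sketch of the pass and file-format setup described above (a sketch only, assuming Cycles and a view layer named "ViewLayer"):

      import bpy

      scene = bpy.context.scene
      view_layer = scene.view_layers["ViewLayer"]        # assumed view layer name

      # passes needed for temporal denoising
      view_layer.use_pass_vector = True                  # 'Vector'
      view_layer.use_pass_z = True                       # 'Z' (needed in some cases)
      view_layer.cycles.denoising_store_passes = True    # 'Denoising Data'

      # write every frame as a multilayer EXR so the extra passes are kept
      scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
      scene.render.filepath = "//renders/frame_"         # assumed output path

      bpy.ops.render.render(animation=True)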

  • @Mioumi
    @Mioumi 4 months ago

    Thank you! That's some real good insight

  • @Ruuubick
    @Ruuubick 4 months ago +1

    Tried this method and unfortunately the group node didn't seem to do anything? Using 4.1 for reference. At least I got to become more familiar with EXRs and some new passes as a result!

  • @timcameron9023
    @timcameron9023 1 month ago

    Thanks for this - I'm currently struggling with an animation containing a glass object with a subsurface object (candle wax in a glass jar), and I'm wondering if the world scale of the objects is a factor in Blender's calculations. Also curious whether rendering out frames twice as large with low sample rates as needed and then scaling them down in a compositor would help matters.

    • @MarkStead
      @MarkStead  1 month ago

      I'm not sure the world scale would make much difference in this case. Though it is important to get the size of the lights sensible - having a really small light, or one really close to an object (as I recall) can make fireflies worse.
      Subsurface scattering is going to cause long render times, and then you're adding glass on top.
      I guess you need to optimise those materials separately.
      You might want to look at the anisotropy setting for the subsurface, to control how deep the light penetrates.
      Glass will make it complicated for light to pass, and normally when doing window panes you use the light path node to allow light bounces to pass through easily (using TransparentBSDF).
      Once you've done all you can to tame the fireflies, then it comes down to denoising.
      If you find that the noise is very high in a specific part of your scene (the wax or glass), then you may be able to isolate and do spatial and temporal denoising on just that layer. For example you could use Light Paths to illuminate just the wax with the light(s), and then have a separate copy of those light(s) for illuminating the remainder of the scene. They can then be handled separately in the compositor. Can probably do the same using Cryptomatte.
      I'm sceptical about rendering at twice the resolution - though I have heard that mentioned before. I don't recall whether it was in the context of rendering a still image or an animation.

    • @timcameron9023
      @timcameron9023 1 month ago +1

      @@MarkStead Thanks Mark, yep I really picked a great combo lol - I've begun rendering out the various layers with holdouts (very handy) and that seems to be the way out. Appreciate your time and help.

  • @shanekarvin
    @shanekarvin 8 months ago

    Thanks Mark! This was very helpful.

  • @insertanynameyouwant5311
    @insertanynameyouwant5311 7 months ago +2

    A bit of a dilemma: enabling the vector pass only works when there's no motion blur activated. But I need motion blur as well.

    • @MarkStead
      @MarkStead  7 months ago +1

      I didn't know that. I just assumed it was possible. You could do something crazy like render out a sequence with one sample, no motion blur, and the vector pass. The problem you've got is that the vector represents the position at a single point in time, and there's no way to get the range of movement for the visible blur. (The blur may not be linear.) Maybe when the movement is very low, temporal denoising might still make sense, but then the denoising could be automatically disabled in the areas of the image with more movement and blur (where it's perhaps less noticeable anyway).
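
      A tiny sketch of that workaround - rendering a cheap extra pass with motion blur off just to capture the vectors (assumed settings, not tested):

      import bpy

      scene = bpy.context.scene
      scene.render.use_motion_blur = False                    # vector pass is only available with motion blur off
      scene.view_layers["ViewLayer"].use_pass_vector = True   # assumed view layer name
      scene.cycles.samples = 1                                # one sample - only the vectors matter here
      bpy.ops.render.render(animation=True)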

  • @aulerius
    @aulerius 1 year ago

    Do you know if there is any way to minimize the occlusion masks including edges of objects, even when they are stationary? Does it have something to do with aliasing in the render? I am using your techniques for a different purpose (in projection-mapping textures on moving scenes, to distinguish occluded regions and inpaint them)

    • @MarkStead
      @MarkStead  1 year ago +1

      Have you looked at Cryptomatte?
      At one point I was trying to use the Cryptomatte node to distinguish between different objects. The problem is that it is designed to be used with a Matte selection - so then I tried to understand how the raw Cryptomatte render layer was structured - referring to this document raw.githubusercontent.com/Psyop/Cryptomatte/master/specification/IDmattes_poster.pdf
      However it was an impossible task for me - since there is no unique object ID for a given pixel position. Specifically the Cryptomatte data represents all the source objects that create the pixel (including reflections, anti-aliasing, transparency, motion blur) and a weighting for each.
      If you're able to make a Cryptomatte selection for the occluded region, then this should give you a mask with properly anti-aliased edges.
      However (not that I understand your project exactly), perhaps you could also be looking at the Shader nodes and rendering those faces with emission and everything else transparent (perhaps using material override for the whole scene). You might be able to use Geometry Nodes to calculate the angles to the projector to give you an X/Y coordinate. GN could also calculate the facing angle and therefore the level of illumination falloff (or whether a face is occluded completely).

  • @محمدعذابمحسن
    @محمدعذابمحسن 10 months ago

    Thanks, but how do I save the temporally denoised frames after compositing them?

    • @MarkStead
      @MarkStead  10 months ago +2

      Yeah, sorry about that - all the screen captures just show the Viewer node.
      You need to add a Composite node and connect the Image input.
      Then set your Render Output settings (presumably now rendering out as H.264 using FFmpeg Video), and activate Render Animation (Ctrl+F12).
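
      A minimal bpy sketch of that setup, assuming the temporal denoiser group node is already in the compositor and named "Group":

      import bpy

      scene = bpy.context.scene
      tree = scene.node_tree

      # add a Composite node and feed it the denoised image
      composite = tree.nodes.new('CompositorNodeComposite')
      denoiser = tree.nodes["Group"]                 # assumed name of the denoiser group node
      tree.links.new(denoiser.outputs["Image"], composite.inputs["Image"])

      # render output settings: FFmpeg Video, H.264 in an MP4 container
      scene.render.image_settings.file_format = 'FFMPEG'
      scene.render.ffmpeg.format = 'MPEG4'
      scene.render.ffmpeg.codec = 'H264'
      scene.render.filepath = "//denoised_"          # assumed output path

      bpy.ops.render.render(animation=True)          # same as Render Animation (Ctrl+F12)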

  • @siufa23
    @siufa23 1 year ago

    Thanks Mark, this is a great explanation. Do you think it's possible to automate the denoise process with a Python script from the command line, without the need to open Blender?

    • @MarkStead
      @MarkStead  1 year ago

      I personally haven't done that.
      Here's the command line doco, and you can certainly perform rendering, and run Python scripts.
      docs.blender.org/manual/en/latest/advanced/command_line/arguments.html
      If you have a Blender file configured for compositing then you could presumably just render that from the command line, with no Python scripting required.
      Perhaps what you could do from a Python script is substitute node parameters for the filenames or the number of frames. You should be able to fully integrate Python with pretty much anything in Blender including adding/manipulating compositing nodes.
      For example in Blender if I modify the frame offset in the compositor, I can see in the Scripting window it has executed this command:
      bpy.data.scenes["Scene"].node_tree.nodes["Image"].frame_offset = 1
      Obviously you have a lot of extra complexity in setting up scripts and all the command line parameters. However it makes sense when you're trying to configure an automated rendering pipeline.
      Does that help?
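
      For example, a small script along these lines (the file and node names are assumptions) could be run with "blender -b compositing.blend -P set_offsets.py -a" to set parameters and then render the animation:

      # set_offsets.py (hypothetical): tweak compositor parameters before rendering
      import bpy

      nodes = bpy.data.scenes["Scene"].node_tree.nodes
      nodes["Image"].frame_offset = 1          # the same property shown in the Scripting window
      nodes["Image.001"].frame_offset = -1     # assumed name of a duplicated image sequence node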

  • @rami22958
    @rami22958 10 months ago

    Now that I have finished creating the node, do I have to convert it to an image again, or can I convert it directly to a video? Please reply.

    • @MarkStead
      @MarkStead  10 months ago +1

      👍 You can output as a video file. Just going from memory, you would (1) connect the denoised image to the Composite node, and then configure the output settings in the normal place, or alternatively (2) use the File Output node and specify the output settings in the node properties (N panel). Output using FFmpeg Video with MPEG-4/AV1/etc.
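
      A sketch of option (2), assuming the denoiser group node is named "Group" and writing to a hypothetical //denoised/ folder:

      import bpy

      tree = bpy.context.scene.node_tree

      # File Output node with its own format settings
      # (the remaining encoding options live in the node properties / N panel)
      file_out = tree.nodes.new('CompositorNodeOutputFile')
      file_out.base_path = "//denoised/"             # assumed output folder
      file_out.format.file_format = 'FFMPEG'

      denoiser = tree.nodes["Group"]                 # assumed name of the denoiser group node
      tree.links.new(denoiser.outputs["Image"], file_out.inputs[0])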

  • @pablog.511
    @pablog.511 10 months ago

    Does this method work with the PNG rendering method??? (I render the frames as PNGs first, and then combine them in a video editor.)

    • @MarkStead
      @MarkStead  10 months ago

      That's what I demonstrate in the Frame Blending part of the video. In the parts of the frames where there's movement there will be blurring. I guess you could say it's like an unsophisticated motion blur effect.

  • @Whalester
    @Whalester 4 months ago

    I can't seem to get it to work - there is still more noise in my scene than when simply using a normal denoiser node.

    • @Whalester
      @Whalester 4 months ago

      I noticed when using the debugger that to get my motion colors to show at proper exposure I have to change the intensity down from 300 to 5. I don't know how to apply this to the non-debugging denoising node.

    • @MarkStead
      @MarkStead  2 months ago

      The intent wasn't to do away with normal (spatial) denoising, but instead to give it more samples to work with so that it can deliver a more accurate and hopefully temporally stable result.
      Of course if you're rendering with lots of samples anyway, then you may be able to eliminate spatial denoising - so long as you want some fine (or at least finer) grain in the render.
      Right now I'm working on an animation with fog, and specifically using this to denoise the volumetrics pass only. There's lots of noise in the volumetrics, and the spatial denoiser completely eliminates the noise, however when you watch the animation there are noticeable brightness changes - which are exacerbated by smoothing the volumetrics completely.

    • @MarkStead
      @MarkStead  2 months ago

      The visualisation converts the distance into a luminance/colour value, where a larger distance moved results in a brighter luminance. The intensity value you're referring to controls the conversion of the movement distance (in pixels) to a luminance value - you don't want it to be too dark or too bright. It only applies when using the debugger mode.

  • @BlaBla-sf8pj
    @BlaBla-sf8pj 1 year ago

    thx for your help

  • @0rdyin
    @0rdyin 14 days ago

    How do you use temporal denoising with OptiX?

    • @MarkStead
      @MarkStead  14 days ago

      @@0rdyin If you mean using the OptiX spatial denoiser, then I don't think you can inside the compositor. Though maybe you could denoise in the render settings before passing to the compositor.
      I thought that OIDN is considered better for the final render, and it can be GPU accelerated now (depending on your GPU).

  • @blenderheadxyz2418
    @blenderheadxyz2418 6 months ago

    wow thanks a lot

  • @M_Lopez_3D_Artist
    @M_Lopez_3D_Artist 1 year ago

    Hey, I've been rendering EXR with Blender and I don't see Vector or Noisy Image, even though I have those checked in my render passes. Is there something I'm missing?

    • @MarkStead
      @MarkStead  1 year ago

      Check it's saved as a MultiLayer EXR.

    • @M_Lopez_3D_Artist
      @M_Lopez_3D_Artist 1 year ago

      I will do that right now, hope it works, I'll keep you posted.
      @@MarkStead

    • @M_Lopez_3D_Artist
      @M_Lopez_3D_Artist 1 year ago

      I figured it out - it has to be set to the Layer setting instead of Combined. When I set it to Layer it showed all the inputs that I was wanting, awesome @@MarkStead

    • @M_Lopez_3D_Artist
      @M_Lopez_3D_Artist 1 year ago

      It works, but how do I use this for a 250-frame animation? @@MarkStead

    • @MarkStead
      @MarkStead  1 year ago

      When rendering you render out your animation as MultiLayer EXR, ending up with 250 separate EXR files.
      Then import all the EXR files into a compositor session - importing as an Image Sequence (what I do is click on the first file, then press A to select them all).
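
      A rough bpy equivalent of that import step, assuming the frames were written as //renders/frame_0001.exr onwards:

      import bpy

      tree = bpy.context.scene.node_tree

      # load the rendered EXR files as an image sequence
      img = bpy.data.images.load("//renders/frame_0001.exr")   # assumed first file of the sequence
      img.source = 'SEQUENCE'

      node = tree.nodes.new('CompositorNodeImage')
      node.image = img
      node.frame_duration = 250      # number of frames in the animation
      node.frame_start = 1
      # duplicates of this node would use frame_offset = -1 / +1 for the previous/next frames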

  • @djdog465
    @djdog465 1 year ago

    cool video dad

  • @MrSofazocker
    @MrSofazocker 1 year ago

    How to get more "free" samples in Blender without blending different frames: simply render the same frame with different seeds, and combine those.
    Most of the time you can render only a third or half of the samples, which might even be faster than rendering the image once with full samples.
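
    A minimal sketch of that idea, assuming Cycles and a hypothetical //seed_renders/ output folder (the resulting frames can then be averaged, e.g. with Mix nodes in the compositor):

    import bpy

    scene = bpy.context.scene
    scene.cycles.samples = 500              # e.g. a third of the original sample budget

    # render the same frame several times, changing only the seed
    for seed in (0, 1, 2):
        scene.cycles.seed = seed
        scene.render.filepath = f"//seed_renders/frame_seed{seed}"
        bpy.ops.render.render(write_still=True)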

    • @MarkStead
      @MarkStead  1 year ago +1

      I'm not sure that really helps, though it might seem to. Rendering more samples is effectively giving more seed values because each sample has different random properties that result in light rays bouncing differently throughout the scene. In some cases a ray will randomly hit the diffuse colour, and in other cases it does a specular reflection (with a slightly different random bounce angle).

    • @MrSofazocker
      @MrSofazocker 1 year ago

      @@MarkStead Please try it - combining 3 "seed renders" of, say, 500 samples will give you a better image than rendering it once with 1500 samples, if you get what I mean.
      (I use M4CHIN3 tools, and it has that built in as a custom operator in the Render menu.)
      When rendering, each sample uses the same seed. If you have ever rendered an animation with a fixed seed, you will notice that the noise stays the same.
      Taking that to the extreme and only rendering with, say, 20 samples, you will notice the same pixels are black (not sampled at all) in the first frame as well as in the second frame.
      Now, using the same logic on a still frame and rendering it with only 20 samples but a different seed, other pixels are black (not rendered).
      Of course this difference gets smaller the more samples you start out with, but since we are not rendering to infinite samples, it will improve the clarity at low sample counts.
      It's the same effect as rendering an image at 200% resolution with half the samples: after denoising and downsampling you get a better image, as you gathered more "spatial samples", since what was one pixel is now 4 pixels to sample.

    • @MrSofazocker
      @MrSofazocker 1 year ago

      This does get a little funky since Blender doesn't let you set the rays per pixel, but just an overall sample amount (Which is pretty dumb), regardless it still works.

    • @MarkStead
      @MarkStead  1 year ago +3

      Yeah, in an earlier version of Blender (I guess 2.93 and earlier) there was Branched Path Tracing.
      This allowed you to specify how many sub-samples to use for different rays (e.g. Diffuse, Glossy, Transmission etc). So the benefit is that you can increase the samples where it matters - e.g. Glossy or Transmission. Furthermore, I guess I saw it as a way where you didn't need to recalculate all the light bounces from the camera every time.
      However in my testing way back then, I actually got better results using Branched Path Tracing and setting the sub-samples to 1 only.
      Anyway, if you're getting good results by modifying the seed value - then go for it.
      This is an excellent technique if you render a scene (particularly for a video) - then decide you should have used more samples. Just render again with a different seed - and merge the frames.

  • @LiminalLo-fi
    @LiminalLo-fi 11 months ago

    Hey Mark, it looks like you are looking for median denoising. th-cam.com/video/851cEK0Taro/w-d-xo.html - at about 8:00 minutes in he briefly goes over it, so if you have any deeper knowledge on this guy's setup I would love to know!

    • @MarkStead
      @MarkStead  11 months ago +1

      I did try using a median function, but didn't get better results. There's still a median node group implementation in the debug denoiser that you can hook up and try.
      I ended up focusing on what noisy pixels are like, where they might exhibit a different luminosity or a significant color shift. I tried a fancy (or dodgy) algorithm to apply a weighting to the hue, saturation and luminosity differences and exclude samples where the difference exceeds a threshold.
      I'd appreciate any feedback for where you see an improvement using the median function.
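
      For reference, the median blend itself is just a per-pixel median across the motion-compensated frames; a rough NumPy sketch of the idea (not the node group implementation):

      import numpy as np

      def median_blend(prev_frame, curr_frame, next_frame):
          # frames are H x W x C float arrays, already motion-compensated to the current frame
          stack = np.stack([prev_frame, curr_frame, next_frame], axis=0)
          return np.median(stack, axis=0)   # per-pixel, per-channel median of the three frames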

    • @LiminalLo-fi
      @LiminalLo-fi 11 months ago

      @@MarkStead Will let you know if I come up with anything useful. I am also looking into Blender to Unreal Engine, for its rendering speed.

    • @LiminalLo-fi
      @LiminalLo-fi 11 months ago +1

      @@MarkStead So for my current project I am getting a perfect sequence with just a single-pass denoise on each of the 3 frames - running "next" and "previous" into vector displacement, then running those two outputs and the output from the "current frame" into your median group, then out. (Just the utility median blend group, not any other parts from your package.)
      I will have to render it and see what it looks like in Premiere, but it already looks cleaner than the averaged-frame method I tried earlier. I mean it looks really good!

    • @LiminalLo-fi
      @LiminalLo-fi 11 months ago

      My scene is a pretty simple project, not heavily detailed and with minimal objects, so I'm not sure how much that plays into the final result others may have.

  • @dimigaming6476
    @dimigaming6476 1 year ago +2

    this video is much easier to digest at 1.75 speed

    • @MrKezives
      @MrKezives 1 year ago +5

      That's what you have to say after such great content?

    • @dimigaming6476
      @dimigaming6476 1 year ago +2

      @@MrKezives You're coming in with a negative mindset. The content/information is great. All I said is that it's easier to digest at a faster speed. Everyone has different methods of learning things. We're all on the same 3D journey here, you have no enemies brother.

    • @zonaeksperimen3449
      @zonaeksperimen3449 11 months ago

      Thanks dude