Wow! Very interesting video. I even understood some bits of it. 😉 Amazing stuff, really.
Thanks very much for the demonstration and explaining the theory behind it.
I really like how much effort you have put in. Keep up the good work; looking forward to seeing many more videos from you.
As well as a very useful video on how to use this module, you get philosophy too! Thanks as always.
Fascinating tech explanation, and “sharpening is a double-edged sword” made me laugh out loud. Less is most definitely more, and I'm looking forward to watching more of your videos.
Wonderful job. I really appreciate the effort to explain to both the advanced and the less interested. This will be such a useful module, because nothing else gives as much control, not just in darktable but in other software too (I'm sure). It shows once again that darktable is thought out in a way that is truly useful rather than flashy or trendy. Thanks for your painstaking efforts.
Isn't the contrast equalizer a faster, easier and equally good module for local contrast and sharpening?
After filmic and color calibration, another new module in darktable that all the PhDs in colorimetry and mathematics will love... the others will just play with the parameters without understanding what they're doing...
You just summed up photography as a whole : photo-electronics, optics and color science that nobody understands. And yet, guess what ? It doesn't prevent people from pressing shutters.
Try watercolors, I dare you. The theory is… simpler until you actually dive into it.
Looking forward to this module! Excellent work!
Thank you for the 2 hour lecture Professor Pierre!
Lens deblur is really slow but absolutely amazing!
I really appreciate the level of research that informs your work. It's amazing to have the image-processing workflow completely rethought in a way that doesn't emulate other software. I'm no professional, but I've noticed much better colours and really nice transitions in light since I started using darktable. Thank you for your generosity. Question: how would you recommend setting up diffuse or sharpen to add a touch more definition to film scans without making the grain look odd? What would your strategy be?
Thanks ! For sharpening film scans, I think one of the "lens deblur" presets (from soft to hard) should do the trick as a starting point; then tweak the number of iterations to make the effect as strong as you like.
Thanks for your great work and this video. The module looks very promising. Unfortunately, YouTube compression makes it quite hard to notice the module's effect at some moments.
If I understand correctly, the module uses frequency separation and the effect is limited to a certain pixel area that can be adjusted with the central radius and radius span sliders.
Since these sliders are fixed pixel values, am I right in assuming that, with equal settings, the effect of the module will be weaker on higher-resolution pictures and stronger on low-resolution pictures?
Yes and no. The radii are expressed in px of the full-resolution image. But, since we compute the Laplacian of a wavelet scale, the effect will also take into account the steepness of the gradients, which makes everything a bit more stable, scale-wise. It's not like directly boosting a wavelet scale.
@ But the gradient is also measured in "intensity change per pixel", so the resolution affects the result, correct? Because a pixel is, relatively speaking, bigger in an image with fewer pixels.
Wouldn't it make sense to make the "radius span" slider logarithmic? So instead of "central radius plus or minus x" it would be "central radius multiplied or divided by x". Or have the option of logarithmic sliders, as that would also make sense for the "central radius" slider.
As I have used a lot of Fourier transforms in my work, I find the logarithmic scale more intuitive in this case. But I am not sure if it makes sense for wavelets.
Gradients of wavelets are weird beasts, since they are basically gradients of spatially-varying Laplacians. Of course they are scale-dependent, but not as much as you would expect, since we apply a variance-based regularization. Also, whatever the resolution of your image and the scene-referred size of your pixels, pixels are the most basic piece of information we have, so for pixel-level things like denoising they are what we need.
We could make the radii logarithmic; I'm not sure it would make a lot of difference, though.
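For readers who want a more concrete picture of what a "wavelet scale" means in this discussion, below is a minimal à-trous decomposition sketch in Python. It only illustrates the general technique, not the darktable code; the B3-spline kernel choice, function names and parameters are my own.

```python
# Minimal a-trous ("with holes") wavelet decomposition, illustration only.
# Each detail layer i holds structures around 2**i px, which is why radii
# expressed in px of the full-resolution image behave differently on a
# low-resolution copy of the same picture.
import numpy as np
from scipy.ndimage import convolve

def atrous_decompose(image, n_scales=5):
    """Return the list of detail layers plus the low-frequency residual."""
    k1d = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0   # B3 cubic spline
    kernel = np.outer(k1d, k1d)

    details = []
    current = image.astype(float)
    for i in range(n_scales):
        # Insert 2**i - 1 zeros ("holes") between kernel taps, so the blur
        # radius doubles at each scale without resampling the image.
        step = 2 ** i
        dilated = np.zeros((4 * step + 1, 4 * step + 1))
        dilated[::step, ::step] = kernel
        blurred = convolve(current, dilated, mode="mirror")
        details.append(current - blurred)        # detail around ~2**i px
        current = blurred
    return details, current    # sum(details) + current rebuilds the image
```

Boosting a `details[i]` layer directly would be plain wavelet-scale sharpening; the point made above is that the module instead works on the Laplacian of such a layer, with a variance-based regularization, so the result depends on the steepness of the local gradients rather than on the layer amplitude alone.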
@ Thanks a lot for the explanation. Also, I can't wait to play with this new module!
Are you able to link the source code anywhere? I wanted to play around with implementing something similar in a game engine.
Here : github.com/aurelienpierreeng/ansel/blob/master/src/iop/diffuse.c But it's definitely too slow for 30 FPS, expect a good second of rendering at 4K resolution.
@ My RX 6600 seems to do it pretty fast: 20 MP with 20 iterations is nearly instantaneous. I wanted to figure out the wavelet decomposition you did. I was also thinking about downscaling the image before processing to make it 4x faster.
@@mathmage420 Depends what kind of result you want to achieve, but sharpening a downscaled version defeats the purpose.
@ No need to sharpen a computer-generated image. I'm just interested in doing bloom effects.
I enjoyed the presentation of this new module very much, and I am particularly interested in the references you mentioned are in the code. Can you tell me which folder contains the solver for this module and the corresponding references? Thanks!
Thanks !
- The code for the solver is here : github.com/darktable-org/darktable/blob/master/src/iop/diffuse.c#L714-L883
- The reference for the anisotropic heat-transfer diffusion is www.researchgate.net/publication/220663968
- The reference for the à-trous wavelets decomposition is jo.dreggn.org/home/2010_atrous.pdf (the same one is used for the contrast equalizer)
- There is some proof of the analogy between the à-trous wavelet using a cardinal cubic spline and an isotropic Laplacian on my website : eng.aurelienpierre.com/2021/03/rotation-invariant-laplacian-for-2d-grids/#Scaling-coefficient
- The inspiration for the regularization comes from Rudin-Osher-Fatemi PDE : en.wikipedia.org/wiki/Total_variation_denoising
- Though the final variance-based regularization was rather inspired by the guided filter : kaiminghe.com/eccv10/
Good luck !
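For anyone who wants a feel for the diffusion part before diving into diffuse.c, here is a textbook Perona-Malik anisotropic diffusion step in Python. It is only the classic explicit scheme with my own parameter names, not the darktable solver, which is multiscale, wavelet-based and regularized as described in the references above.

```python
# Classic Perona-Malik anisotropic diffusion, illustration only.
# A positive time step `dt` diffuses (blurs while slowing down at edges);
# running the equation backwards sharpens instead, but quickly amplifies
# noise, hence the need for the regularizations referenced above.
import numpy as np

def diffuse_step(u, dt=0.2, kappa=0.05):
    # Finite differences toward the four nearest neighbours.
    north = np.roll(u, -1, axis=0) - u
    south = np.roll(u,  1, axis=0) - u
    east  = np.roll(u, -1, axis=1) - u
    west  = np.roll(u,  1, axis=1) - u

    # Edge-stopping conductivity: diffusion slows across strong gradients.
    def g(grad):
        return np.exp(-(grad / kappa) ** 2)

    return u + dt * (g(north) * north + g(south) * south
                     + g(east) * east + g(west) * west)

def diffuse(u, iterations=20, dt=0.2, kappa=0.05):
    for _ in range(iterations):
        u = diffuse_step(u, dt, kappa)
    return u
```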
@ Thanks! I'm an applied mathematician and I like to go to the sources :)
Thanks a lot for your videos and your work on darktable!
It seems my last comment was also affected by the YouTube bug you mentioned, so I'm trying again:
I have wanted to support you for a while and have now joined your channel, hoping to also get rid of the ads this way. Unfortunately this doesn't work; I still get them. Ad removal is also not mentioned in the list of benefits, so maybe it is something you need to enable explicitly? If not, I will probably switch to Liberapay instead (do I understand correctly that no tax or other fees are deducted there?).
Sorry, it seems that you need to subscribe to YouTube Premium to get rid of ads, and there is no option for me to disable them for subscribers. Liberapay may yield higher revenues for me, but there will still be a 5% + 0.25 € cut from the money transfer platform.
Could it be possible to implement in the darkroom a way of "freezing" modules? In digital audio workstations, to reduce the CPU hit, there is always a way to freeze a track (render the audio up to a specific point in the pipeline) so as to free resources. Of course, once a module/plugin is frozen you can't change the previous modules in the pipeline without unfreezing it first, but you can still work with the modules positioned after it in the pipeline. Sorry for the OT. This new module seems amazing.
That would be great!
The output of modules is already cached, and only the next modules in the pipeline get recomputed when needed. Of course, when zooming or panning, the whole module stack gets recomputed from scratch.
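As a toy model of that caching behaviour (nothing like darktable's actual pipeline code, just an illustration with made-up names): each module memoizes its output, changing one module's parameters invalidates it and everything downstream, and a new input (zoom, pan) invalidates the whole cache.

```python
# Toy pixel-pipeline cache, illustration only.
class Pipeline:
    def __init__(self, modules):
        self.modules = modules   # list of (params, function) pairs
        self.cache = {}          # module index -> cached output

    def set_params(self, index, params):
        self.modules[index] = (params, self.modules[index][1])
        # Invalidate this module and everything after it in the pipe;
        # modules before it keep their cached outputs.
        for i in list(self.cache):
            if i >= index:
                del self.cache[i]

    def new_input(self):
        # Zooming or panning changes the input: every cache entry is stale.
        self.cache.clear()

    def render(self, image):
        out = image
        for i, (params, fn) in enumerate(self.modules):
            if i in self.cache:
                out = self.cache[i]      # reuse the previous result
            else:
                out = fn(out, params)    # recompute from the last valid stage
                self.cache[i] = out
        return out
```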
As always, Aurélien, great video, very informative! Thank you!
One comment/question: Isn't what you call "bloom" rather a "soften" effect?
No, it's a defined visual effect called bloom : en.wikipedia.org/wiki/Bloom_(shader_effect)
@ I think I didn't explain correctly what I meant.
I know that there are the effects "bloom" and "soften" in the current version of dt. Now when I see the preset "bloom" in the new module "diffuse or sharpen", it reminds me more of the effect that the module "soften" produces.
Furthermore, the "bloom" effect treats different tones differently (more diffusion on highlights), which is not the case for "diffuse or sharpen", right?
The soften module is a low-pass filter, aka a blur, applied on the L channel in CIE Lab. The things it does to color have no real-life background whatsoever. The blooming effect is also made by blurring, but if you apply it in linear, it handles highlights just right with no additional algorithmic trick. The bloom module of darktable, again, is applied on L in CIE Lab, so it needs to work harder for something that doesn't look good anyway.
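To make the "blur in linear handles highlights just right" point concrete, here is a toy bloom sketch on scene-referred linear RGB (my own illustration, not the module's bloom preset): because highlights carry much larger values than everything else in linear light, a plain Gaussian blur added back on top already glows mostly around them, with no tone mask or other trick.

```python
# Toy bloom on scene-referred (linear) RGB, illustration only.
import numpy as np
from scipy.ndimage import gaussian_filter

def bloom_linear(rgb, sigma=25.0, strength=0.15):
    """rgb: float array (H, W, 3) in linear light; values may exceed 1.0."""
    halo = gaussian_filter(rgb, sigma=(sigma, sigma, 0))  # blur each channel
    return rgb + strength * halo   # highlights dominate the halo naturally
```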
Hi Aurelien, you're probably very busy; I just wanted to note that my comment from a few days ago is not visible. Maybe it needs your approval, or you didn't like it?
Hi Marc, there is a strange bug these days on YouTube where I see new comments in the notifications, but then I find them nowhere, neither on the admin side nor under the videos. I don't have any comment in the queue and I didn't moderate anything.
Hi, I see you posted a new comment; I have it in the notifications, and I can read "Thanks Aurélien, I post my comment again: I really appreciate all your work on the software and equally the effort you put on the videos to explain and foster the level of understanding of…", but then I don't have the rest anywhere. Did you post a link or some offensive words in there that could have been spotted as spam or something ?
@ Thanks. No link, and no offensive words either, I think; just a long comment. I will split it into parts. This is my second comment on YouTube, maybe that alone is suspicious to the AI... :-)
Hi Aurélien,
I really appreciate all your work on the software, and equally the effort you put into the videos to explain darktable and your modules and foster the level of understanding among users. I am just a darktable beginner; I started trying to get into it three weeks ago. I have used Capture One as my main raw converter and image editor for many years, but I want to explore other possibilities, and darktable is an interesting alternative.
I very much like the idea of the scene-referred workflow, not so much because of the (presumably) easier adaptation to HDR monitors in the future, which remains to be seen, but rather because of the suggestion that working in linear RGB gives fewer artifacts and side effects, e.g. hue shifts and unpredictable results, when working with the tools/modules.
To me, this is a promise of ease of use and a safety measure: less complicated fiddling with software tools to remove or mitigate unwanted artifacts, and an easier path to the desired result in the end, if you are willing to take a few more steps earlier in the workflow.
Before darktable, I was actually looking for software that works in CIELab, as I had some great results on a few images with Affinity's Lab curves, namely lightness adjustments without hue shifts, but also color-contrast enhancements (steepening the a and b component curves). However, I have read your article in which you mention that the Lab implementation in darktable's modules was (retrospectively) a "mistake", as Lab was never intended as an image-editing working space and only works well on images with specific properties, e.g. muted colors and low dynamic range, if I recall correctly.
..to be continued...
More natural-looking images in terms of tone and color (at least as the basis for further edits), fewer artifacts and more predictable behavior are the main reasons for my quest for new software.
I realized that linear RGB could potentially be the fulfillment of what people thought Lab had promised them, so I am currently giving darktable a serious try.
I have read about your and your fellow developers' goal to successively re-implement the most important modules for the scene-referred (and linear RGB) workflow, as an alternative to the display-referred modules, which, besides the inherent design "flaw" of being display-referred, also don't work well in general (some of them at least), if I understood your article (about recommended modules, I think) correctly.
So it seems to be a future-oriented software project, not a stale one, which deserves a closer look.
I have watched many videos from you and Bruce, and a few from Rico and others (not all of them yet), read the darktable documentation for the modules I used, and had some good results, some slightly superior, on images previously edited with Capture One.
Slightly superior, but with a lot more trial and error, and not without pitfalls; but, as I said, I am willing to take a few more steps to get better results.
Now I have watched this video about the upcoming diffuse or sharpen module with great anticipation: presumably a great new linear RGB module for the scene-referred workflow. And I see its power, and I think I see the advantage over deconvolution; it is a highly sophisticated tool allowing precise control over the parameters of a highly sophisticated algorithm that can be used for so many things. This is a really respectable achievement from you (and contributors, maybe). Really.
... to be continued...