D-Cinelike LUT for DJI FPV Drone

  • Published on 5 Sep 2024
  • I reverse engineered the D-Cinelike tone mapping curve (used by the DJI FPV drone) using 129 sample points, and created a ridiculously precise LUT (a 65536-point curve) that maps the footage back to linear, for easier color grading (a short numeric sketch of what such a curve lookup does is included at the end of this description).
    Download LUT:
    drive.google.c...
    In Davinci Resolve, this LUT reverts tone mapping for displays with sRGB (piecewise) gamma, like accurate PC monitors. It transforms D-Cinelike gamma into sRGB gamma with a linear tone curve. For pure rec709 2.2 or 2.4 gamma, use a Color Space Transform for the additional conversion. Don't alter the input color space of your clip. Bypassing color management shouldn't change the visual effect the LUT has; if it doesn't, that confirms the LUT is being used correctly. Apply the LUT within a node in the Color workspace's node graph. This foundational technical LUT neutralizes the footage, reversing D-Cinelike's brightening, highlight compression, and elevated near blacks designed to conceal noise. This reveals an extra stop of dynamic range, which can be refined using temporal noise reduction. Perform your grading on this neutral base to enhance your footage's quality or add creative flair.
    Desmos graphing calculator document shown in the video can be found here:
    www.desmos.com...
    This LUT can also be used in HDR in the same way you would use any other SDR footage in an HDR project, but I've also created an HDR version of the LUT that will result in a linear relationship between photons hitting the camera sensor and nits displayed on the screen.
    Download HDR version of the LUT:
    drive.google.c...
    HDR comparison video:
    • D-Cinelike HDR LUT for...
    Slightly reduced resolution SDR version of the LUT in 3D cube format, for compatibility with LumaFusion (courtesy of /u/midcontphoto):
    drive.google.c...
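    A minimal numeric sketch of what applying such a 1D curve means (assuming the curve is available as a plain array of 65536 output values; the placeholder power curve below is not the real D-Cinelike inverse, just a stand-in shape):

```python
import numpy as np

# Hypothetical 65536-point curve: index = encoded D-Cinelike value scaled to 0..65535,
# value = linearized output in 0..1. A real LUT file would be parsed instead of
# synthesized like this; the power curve below is only a placeholder shape.
curve = np.linspace(0.0, 1.0, 65536) ** 2.2

def apply_1d_curve(frame, curve):
    """Apply a per-channel 1D curve to a float RGB frame with values in [0, 1]."""
    idx = frame * (len(curve) - 1)               # map 0..1 onto fractional curve indices
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, len(curve) - 1)
    t = idx - lo
    return curve[lo] * (1 - t) + curve[hi] * t   # linear interpolation between neighbours

frame = np.random.rand(4, 4, 3)                  # stand-in for D-Cinelike pixel data
linearized = apply_1d_curve(frame, curve)
```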

Comments • 44

  • @CypiXmusic
    @CypiXmusic 1 year ago +3

    Big compliment on the properly researched work not just on the lut, but also explaining the details here in the comments. You should do explanation videos of the theory behind this, I'd watch them

    • @Eagleshadow
      @Eagleshadow 1 year ago +1

      Thank you for the kind words! Much appreciated

  • @LuisGarcia-hc2mf
    @LuisGarcia-hc2mf 1 year ago

    Dude, I can't thank you enough for doing this and having it available online AND FREE?? You're a class act. I hope I can pay it forward in a big way like you did. Thanks!

  • @naturesoundup
    @naturesoundup 1 year ago +1

    My man, you are a lifesaver. My drone footage with DJI wide gamut and D-Log comes out overexposed. I appreciate your effort and kindness. I will subscribe to your channel, it's the best

  • @FilipRadelic
    @FilipRadelic 8 months ago

    Just saw this on DJI forums thanks to some spambots that brought the thread back from the dead, so thanks spambots I guess 😂
    Tried the 3D cube version in FCPX with some O3 Air Unit footage and even though it's not the exact same camera, it's probably close enough as I really like the results, great job!
    Also, hello bolnica 😄

    • @Eagleshadow
      @Eagleshadow 8 months ago +1

      I was at first disappointed to see the new activity on that thread was just spambots, but then I realized them bumping it could lead to increased exposure which is good, but I never would have guessed it would be you of all people 😆
      I've actually been a subscriber since two years ago, love your videos! You got some of the best bolnica freestyle there is.
      Awesome to see my lut is still being discovered and used. And yeah, I would expect it to work near perfectly with any D-Cinelike footage, regardless of camera.

  • @Augn-ff3dp
    @Augn-ff3dp 2 years ago

    Wow. This is fantastic. Great to see so much effort in this space!! I hope you do well with this mate, it would save lots of time for people

  • @dustydus5762
    @dustydus5762 11 months ago

    this is amazing, thank you for all your work!

  • @MJBold_1
    @MJBold_1 1 year ago

    Very much appreciated! You are a good soul! Thank you so much.

  • @SickLevel
    @SickLevel 2 years ago

    Awesome job!!! Very nice video my friend.........keep going!

  • @picchioknossus8096
    @picchioknossus8096 1 year ago +1

    I am very surprised. I am very, very surprised that this video does not have more views or comments, because it is the first real piece of information that I stumbled upon in days of searching after buying the DJI Mini 3 Pro. Coming from photography, there are a lot of things that just look confusing from a photographer's point of view. For example, when shooting 14-bit RAW on a camera I'm expecting a lot of information "above" white and "below" black, and then I can play with the raw file to recover, depending on the camera, anything between 3 and 7 stops in either direction. When shooting 10-bit with the Mini 3 Pro, several questions arise:
    First one is why would the bit depth be tied to any specific color profile?
    Then, 10 bits are not 14, but should still contain 4 times the info of an 8-bit file... and still I don't seem to be able to recover any highlights or shadows compared to the 8-bit "normal" profile. In fact, what I noticed is that once I play with the footage to make it look right, the shadows are much noisier (this applies when shooting 8-bit and 10-bit with the same ISO, S and A, which results in the 8-bit file having -0.3/-0.7EV, and when leaving everything in automatic mode at 0EV). My initial "wrong" interpretation was that I was looking at 1024 shades through a window of 256 and therefore would be able to bring down highlights, raise shadows etc.
    One other confusing concept is the "dual ISO thing". The way it is presented is as if the shadows would be shot at native ISO 800 and the highlights at native ISO 100. That in theory should provide 3 stops of additional dynamic range, which would be very welcome on such a tiny sensor... but instead what I see is that the shadows are mildly less noisy and the dynamic range is almost unchanged. I tested this both in bright daylight and indoors with little success (settings that according to DJI support dual ISO, so 30fps, non-locked ISO, a scene with high dynamic range).
    Finally hoping that you are still reading I would like to ask a question about your LUT. Should I use the HDR version with the 10 bit file or to do HDR video the drone has to be able to shoot multiple streams of frames at different exposures like it happens in photography?
    Thanks a lot for the video, for the LUT and for any clarification that you may have.
    Sidenote: I am positively impressed by the pictures that this tiny sensor is capable of shooting, and I mean both in RAW and JPG.

    • @Eagleshadow
      @Eagleshadow 1 year ago +5

      Sorry for the late reply, there's a lot to unpack here and I do have many answers for you.
      First, thank you for the praise and appreciation.
      I also came from photography into the world of videos, and had a similar experience as you. Luckily I had a more technical background in photography which helped.
      Raw file, such as .dng, is not only 14bit, but is a few more things at the same time, which 10 bit video in this case is not. And understanding this difference will explain why log profiles such as D-Cinelike became a thing in the first place. Having 14 bits of precision means that we have 16384 shades of gray per channel, which is insanely more than 256 or 1024 in 8 or 10 bit files respectively. It's so much in fact, that we can write down the signal strength for each pixel on a sensor just as it was received, linearly. Think of it as counting photons, and simply writing down the number of photons that hit the sensor for each pixel (not literally true, but true enough).
      If we were to write down such captured light and simply quantize it down to 8 bits, the resulting image would have terrible banding, it would look similar to a 2 bit image in the shadows, 4 bit image in the midtones, while the highlights would look like a 12 bit image. These numbers are rough guesses just to help visualize the idea. Since our eyes/brains perceive light in a logarithmic fashion for practical reasons, we figured out we can abuse that and encode this signal using a tone mapping curve that redistributes the values around such that bit depth is disproportionately allocated to lower values, so the darker the tone, the more bit depth is allocated to it. This curve is what we call gamma curve.
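      To make the bit-allocation argument concrete, here is a rough sketch (using a simple 1/2.2 power as a stand-in encoding curve, not DJI's actual math): quantizing a dark linear gradient straight to 8 bits leaves only a handful of distinct codes, while gamma-encoding first spreads far more codes across the same shadows.

```python
import numpy as np

# Linear scene values for a dark gradient (bottom ~5% of the sensor's range).
shadows = np.linspace(0.0, 0.05, 100_000)

# Quantize the linear values directly to 8 bits...
codes_linear = np.round(shadows * 255).astype(int)

# ...versus gamma-encoding first (a 1/2.2 power is used here as a stand-in curve).
codes_gamma = np.round((shadows ** (1 / 2.2)) * 255).astype(int)

print(len(np.unique(codes_linear)))  # 14 distinct codes cover the whole shadow range
print(len(np.unique(codes_gamma)))   # 66 distinct codes for the same range
```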
      To apply this gamma curve efficiently, we'd want to first actually have our shadows, midtones and highlights be where they ought to be. Which means we want to have the image properly exposed and likely even tone mapped for aesthetic reasons to compensate for differences in viewing environments and limitations of the brightness of our monitors compared to actual sunshine outside. So we end up doing tonemapping twice, and reversing it once.
      Our first tonemapping is what we call developing the photo. We either do this in raw editors or camera does it by itself.
      Our second tonemapping is gamma encoding, which gets reversed perfectly upon viewing the file given an accurate monitor. This happens automatically either when we save the file as .jpg or when camera saves it as .jpg.
      As you know yourself, raw photos give us more latitude, they let us "recover" highlights and shadows. But have you ever wondered why we have to recover anything in the first place? Like, why doesn't it start out recovered? The data is obviously there. It's just getting clipped in the moment of presentation. It turns out that showing the entire range of what the camera can see on a 200 nit monitor results in a dim looking image that looks washed out and unappealing. So the raw development software actually has a default which isn't neutral, and yes, even the ones called "neutral" are not neutral. To see the linear data inside of a raw file requires using DNG_Profile_Editor_win_1_0_4 to modify any of the existing camera profiles into their linear versions. Developing the raw photo in that way is required to produce the same output as the D-Cinelike LUT I created, as that's the only starting point that's truly neutral.
      Raw files are more than just 14 bit files, they are information about light signal at each sensor pixel, and color primaries of those sensors don't exactly match the srgb colors, and usually they can see colors beyond srgb. To convert those colors into the color space we can use on a PC, we need to define what's white, and that's why adjusting white balance on .jpg files kinda sucks compared to adjusting it on raw files. Because color space has been baked in, and it can't be unbaked anymore. The more white balance estimation was off at the moment of conversion, the higher the shift between individual color channels on the whole, and the worse the bit rate efficiency. At the extreme, whole channels clip or parts of them get clipped as if they were over or under exposed and recovery becomes impossible due to clipped data.
      Camera sensors are typically of a bayer design, which means they contain twice as many green pixels as red or blue. It also means they are a mosaic of different colors, and the layout of the pixels doesn't match the layout of pixels on the monitor; it's a checkerboard as opposed to stripes. So even the pixels themselves have to be interpolated into a different structure, and the heavy green tint needs to be corrected for. During this process, additional detail is extracted and usually some amount of sharpening is applied to offset the lens blur.
      So raw files are wonderful, they let us develop into different color spaces and adjust white balance non destructively etc. But understanding what we learned so far, we see how we can't just write down the raw file data into 8 or 10 bit space and expect that to work. If we did that, we wouldn't get something that video compression algorithms can work with at all, and even if it did, we just can't write down linear data into 8 or 10 bit. So we're left with choice of either writing down each frame as a separate raw file (this is called cinema dng) or developing each raw frame, going through the entire raw development process described above in the same way that camera produces a jpg file, just writing down the data as compressed video frames instead of compressed jpeg files.
      Looking at that choice, if we do the math, the storage requirement for saving a 10 minute flight at 4k60 is equivalent to writing down 36000 dng files. That's roughly 1TB of data, and requires an external SSD drive, not only for the storage size but for the required write speeds as well. Not exactly suitable for a lightweight consumer drone.
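      The arithmetic behind that estimate, as a quick sanity check (the per-frame size is an assumed ballpark for a 4K raw frame, not a measured DJI figure):

```python
frames = 10 * 60 * 60          # 10 minutes at 60 fps = 36,000 frames
mb_per_dng = 28                # assumed size of one 4K raw frame, in MB
total_tb = frames * mb_per_dng / 1_000_000
print(frames, total_tb)        # 36000 frames, ~1.0 TB
```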
      Since this has been a severe limitation in videography since the dawn of digital video, we came up with the best possible compromise. Log profiles. They solve the problem of how to get as close as possible to raw data, without it actually being raw data.
      Normally, cameras would develop the raw frame of video internally. The most destructive step there is the application of the S-curve that the raw converter uses to make the image not look flat. It's what crushes the shadows and clips the highlights in the process of boosting midtones, all in the name of increasing contrast to pleasing levels. But we're pros. We want to do that ourselves because we can do it better than the camera. So instead, when a log profile is chosen, the camera does not apply its proprietary S-curve, but applies a completely different curve that is opposite in nature, designed to retain as much info as possible. D-Cinelike is designed to be used with a bit depth of 8 bits. D-log is designed to be used with higher bit depths. Log curves tread a tradeoff between recoverability of highlights and shadows vs bit depth. If you lower the contrast and brightness to the point of everything being recoverable, then once that gets stretched back to pleasing-to-look-at levels of contrast, the image will be full of banding artifacts if the footage is 8-bit. If it's 10-bit, then much more will be recoverable. The best tradeoff depends on what you intend to do with the footage.
      Additional consideration is that grading straight from log curves is more difficult the more flat they are, as you start with a less natural looking image. Fortunately, this is only an issue when doing things the wrong way. Doing things the right way, is undoing the log curve transformation precisely, which is what my LUT is for.
      So using log curves generally presents you with an image that isn't artificially clipped, as opposed to the S-curve default you see in raw development software. There's nothing to recover because highlights past that point are in fact blown out in the internal raw file used to derive the video frame as well. Or that would be true if advanced highlight recovery techniques didn't exist. While debayering the raw file, it's actually possible to exploit statistical correlations and make some guesses in cases where one or two channels burn out and get clipped, but one does not. Doing this is hard, risky, and there's more than one way of doing it, each with different tradeoffs. This is unfortunately an element of highlight recovery that isn't really possible when approximating actual raw files with log curves. The upside is that such recovered highlights were never precise in the first place, they are kinda 50% guesswork, in hopes no one will know the difference. For best results, especially with higher end equipment and HDR deliveries, it's better to avoid relying on this when possible.

    • @Eagleshadow
      @Eagleshadow 1 year ago +4

      My comment was too long so I had to break it up into two parts, here's the rest of it:
      Unfortunately we have a small drone camera sensor, so we can't afford to underexpose much as it will introduce too much noise and tradeoff simply isn't worth it most of the time. But you can reduce exposure a little bit and it can help with this. Understanding all this should hopefully clarify a lot of confusion you had. But I'll address your individual questions now.
      > First one is why would the bit depth be tied to any specific color profile?
      As I don't have Mini 3 pro, and my DJI FPV only records in 8 bit, I wasn't even aware DJI limited the 10 bit recording only to some color profiles. That's an interesting decision. Generally there's two types of people, those who just want the video straight out of the drone and publish the video as-is. Then there's people who want to do it all themselves to arrive at the best result possible. For the first kind, 8 bit vs 10 bit doesn't make any difference whatsoever assuming the video is saved as standard sdr video and uploaded to youtube. It would however significantly increase the processing that's required by the camera, and less processing power might be available for other features then such as high frame rate, stabilization, etc... and if one was using a log profile, then presumably they want to edit the video themselves, and so that's what you'd want to use if you wanted to use 10bit. DJI can opt to enable 10 bit only for log profiles such as D-Cinelike or D-log on the basis of simplifying the interface. I personally disagree with this as I think more user choice is always better.
      > Then, 10bit are not 14 but should contain 4 times the info of an 8bit file...
      The important thing to understand here is that those 10 bits are encoding the exact same source data as the 8 bits do on the DJI FPV, for example; the only difference is that the image is auto-developed from 14 down to 10 bits as opposed to from 14 down to 8, so there are fewer banding artifacts present upon closer inspection because the quantization is less severe. This however makes a huge difference for HDR monitors, where banding artifacts can easily become apparent in bright clear skies, while using 10 bit provides just enough additional detail for the banding to be invisible. Literally, the only difference between 8 and 10 bit video is the severity of banding.
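      Putting a number on "severity of banding" (simple arithmetic, assuming a slow sky gradient that spans about 10% of the signal range):

```python
# Step size between adjacent code values, as a fraction of the full signal range.
step_8bit = 1 / 255    # ≈ 0.39% of range per step
step_10bit = 1 / 1023  # ≈ 0.10% of range per step

# A slow sky gradient spanning 10% of the range is split into roughly this many
# visible bands at each bit depth: coarser bands are what you perceive as banding.
print(0.10 / step_8bit, 0.10 / step_10bit)   # ~25 bands vs ~102 much finer bands
```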
      > what I noticed is that once playing with the footage to make it look right the shadows are much more noisier
      D-Cinelike will try to hide this by default by compressing that noise hence removing perceivable detail from it, and undoing the transformation using my LUT will of course make that noise visible again, and also allow you to see actual details in the shadows, albeit noisy details. Good news is that videos have one unique advantage over a single photo, and that's similarity between consecutive frames, which allows for temporal noise reduction. This requires Davinci Resolve Studio licence or a third party software or plugin, but it's extremely worth it as it essentially unlocks shadow detail recovery when working with videos.
      > One other confusing concept is the "dual ISO thing".
      You can say that again. But as far as I understand it, it just means that camera sensor is optimized differently than it otherwise would have been, which in practice provides a slightly less noisy image at high ISO, with the benefit usually centered around a particular ISO number. Normally all cameras have native ISO, and surprisingly that's often not ISO 100. Using camera at native ISO is supposed to allow recovery of maximum amount of highlight detail, naturally at the cost of increased noise of not being at ISO 100. Sony cameras wont even let you use their log-profile without forcing you into a higher ISO, and the ISO number depends on the log profile chosen, as they deem it doesn't make sense to use log profile, intended for recovery, if you won't use ISO which would maximize that recovery. Dual ISO is then supposed to mean dual native ISO, which means camera has two optimal ISO points for dynamic range recovery, instead of one, so it's more flexible. It's nice to know sensor is taking advantage of a modern feature that extends dynamic range ever so slightly, and if you want you can target locking to one of the particular ISO-s that are native to it, tho with these kind of drone cameras locking ISO isn't advisable. I at least personally haven't found any use for locking the ISO on DJI FPV, other than for recording that sunset, but that was an ISO test flight and it would have been better and safer if I had instead used EV compensation. More dynamic range and actually can see where you're landing (landing light saved my ass, lol).
      Looking at the DJI official description for this feature now, I see that my description fits what they call Video mode (48/50/60fps) dual ISO. While "photo mode and video mode (24/25/30fps)" dual ISO states that the sensor will algorithmically integrate the images gained by the two sets of analog amplifier circuits for a better dynamic range of actual imaging. It's unclear what they mean by that, but the frame rate limitation makes me suspect they might be shooting two consecutive instead of one, one at ISO 100 and one at 800, then merging those two different exposures into an HDR composite image of higher dynamic range than any individual image. The only thing you can expect from increasing dynamic range when dealing with video is reduced shadow noise and better color volume (more accurate color in underexposed and overexposed areas). You can then trade this off for highlight detail by reducing your exposure. Also look out for artifacts with this mode (24/25/30fps). My Insta360 One X2 has this feature, it's just named differently, and it causes noticeable trailing artifacts around highlights due to inability to perfectly merge two consecutive frames into one.
      > Should I use the HDR version with the 10 bit file or to do HDR video the drone has to be able to shoot multiple streams of frames at different exposures like it happens in photography?
      If you intend to make an SDR video, then use the SDR LUT featured in this video. To make HDR video you would need an HDR monitor, and considerable amount of additional knowledge.
      HDR when it comes to videos is different from how the term is used in photography.
      In photography, it refers to merging multiple exposures to increase dynamic range and then using local tonemapping to bring that range down to 8 bits of depth intended to be viewed on a 100-250 nit monitor.
      In the world of video production, television, monitors etc, HDR is a new standard of being able to encode videos (and images) using vastly improved gamma curve design in a storage space of 10 bits with a much wider and different color primaries, with values specified in absolute instead of relative terms. Modern cameras can already see far more than traditional displays can show (notice how we take it for granted that we can recover shadows and highlights from a single raw file). The missing ingredient for a while was having a new standard that would allow us to go from 14 to 10 bits, rather than going from 14 to 8 bits of precision. HDR is that new standard, distinct from HDR as a photography trick capturing extremely wide ranges that can't even be displayed fully on best HDR monitors without local tonemapping. And if you're unfamiliar with term local tonemapping, that's what sliders "shadows" and"highlights" in adobe camera raw and Davinci Resolve actually do. Clarity and texture options are high frequency versions of local tonemapping.
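      For reference, this is what "values specified in absolute terms" looks like for the PQ curve (SMPTE ST 2084) used by HDR10: the input is actual display luminance in nits rather than a fraction of whatever peak the display happens to have. A minimal sketch using the published constants:

```python
def pq_encode(nits: float) -> float:
    """SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in nits -> 0..1 signal."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = max(nits, 0.0) / 10000.0             # PQ is defined up to 10,000 nits
    yp = y ** m1
    return ((c1 + c2 * yp) / (1 + c3 * yp)) ** m2

# 100 nits (SDR reference white) and 1000 nits (a typical HDR highlight).
print(pq_encode(100), pq_encode(1000))       # ≈ 0.508 and ≈ 0.752
```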
      > Sidenote: I am positively impressed by the pictures that this tiny sensor is capable of shooting, and I mean both in RAW and JPG.
      That's great to hear. I personally got a GoPro11 Mini and mounted it on my DJI FPV drone to get support for 10bit video with a larger sensor, and I'm finding it to be night and day difference for producing HDR video and flying around sunset time. Really miss being able to use my LUT tho. Very glad to hear you're finding it useful.

    • @CypiXmusic
      @CypiXmusic 1 year ago +3

      @@Eagleshadow I just gotta take time to thank you not just for the LUT, but actually taking the time to write this extensive comment. I read through it all and it helped me grasp a bit more of the intricacies of digital video theory. Keep the high quality, well researched work up!

  • @vadimk1004
    @vadimk1004 1 year ago

    Amazing work!
    That LUT generates a GPU error in DR 17 when attempting to use it (AMD RX580). I managed to convert your LUT in LUTCalc into a "working on my GPU" LUT, but I don't know how much that preserved your incredible effort in making that LUT. I'm a noob at all that stuff, BTW.

    • @Eagleshadow
      @Eagleshadow 1 year ago

      Thanks. I'm glad you got it working, and I can only hope it's working properly with that approach. Another user had a similar experience when my LUT was incompatible with the Lumafusion he was using. I did compare his results, and they might apply to your case as well, so I suggest taking a look if you're interested: redd.it/sg4fgy
      For me, my LUT works with both DR 17 and 18 using an Nvidia GPU on Windows 10 so I guess this is probably related to your specific GPU or drivers, especially since DR throws a GPU error.

  • @RooftopTVHR
    @RooftopTVHR 2 years ago

    Thanks a lot, maestro!

    • @Eagleshadow
      @Eagleshadow 2 years ago +1

      I'm really glad it will be useful to our folks too :)

  • @NITRYNIA
    @NITRYNIA 8 months ago

    I'm a total noob here. Can't I just right click -> LUT -> add your LUT? How can I use it properly? (DaVinci user) Thanks. Or is it just a base LUT to add on D-Cinelike and then keep working on it after that?

    • @Eagleshadow
      @Eagleshadow 8 months ago +1

      In Davinci Resolve, this LUT reverts tone mapping for displays with sRGB (piecewise) gamma, like accurate PC monitors. It transforms D-Cinelike gamma into sRGB gamma with a linear tone curve. For pure rec709 2.2 or 2.4 gamma, use a Color Space Transform for the additional conversion. Don't alter the input color space of your clip. Bypassing color management shouldn't change the visual effect the LUT has; if it doesn't, that confirms the LUT is being used correctly. Apply the LUT within a node in the Color workspace's node graph. This foundational technical LUT neutralizes the footage, reversing D-Cinelike's brightening, highlight compression, and elevated near blacks designed to conceal noise. This reveals an extra stop of dynamic range, which can be refined using temporal noise reduction. Perform your grading on this neutral base to enhance your footage's quality or add creative flair.

  • @pkay3399
    @pkay3399 1 year ago

    Hi, thank you for the LUT. Question: when you say this LUT linearizes the footage, do you mean that the correct output of the LUT (for a CST node applied after) is sRGB/Linear? Because I'm going from sRGB/Linear to Davinci Wide Gamut, using the relevant Luminance mapping adjustments, and playing around with the Max. Input (nits) slider to pre-control the highlights before working on the grade. I assume that sRGB/2.2 is not the correct option since it's linear. 👍

    • @Eagleshadow
      @Eagleshadow 1 year ago +1

      D-Cinelike is a tone curve; this straightens it out, a.k.a. removes the transform. So linearizing the footage refers to the tone curve being linear, a.k.a. not being a curve but a straight line (even lacking OOTF). The image is still encoded in sRGB gamma, not linear gamma! This means that this LUT outputs in sRGB gamma, so if you want to convert it to anything else, you'd set up the CST to convert from sRGB gamma to whatever gamma you wish to use. There are details in the video description and even more detailed answers to other comments below. If you're still interested in any further details or unsure, feel free to ask :)
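      If it helps to see it numerically, the "additional conversion" for a pure-gamma display is just: decode the LUT's sRGB-piecewise output back to linear, then re-encode with the display gamma. A rough sketch of the gamma part of what the CST does (ignoring any gamut or luminance mapping):

```python
def srgb_decode(v: float) -> float:
    """sRGB piecewise decode: encoded 0..1 signal -> linear light."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def gamma_encode(linear: float, gamma: float = 2.4) -> float:
    """Pure power-law encode, e.g. for a Rec.709 display calibrated to gamma 2.4."""
    return linear ** (1.0 / gamma)

# One sample value out of the LUT (sRGB-encoded), converted for a gamma 2.4 display.
v = 0.5
print(gamma_encode(srgb_decode(v), 2.4))   # ≈ 0.526 for an input of 0.5
```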

    • @pkay3399
      @pkay3399 1 year ago

      @@Eagleshadow Appreciated 👍

  • @ahassona
    @ahassona 1 year ago

    That's really one of the best natural-look LUTs I've tried with the Mini 3 Pro. But what do you think about the sun being highly exposed after applying the LUT? I felt it lost exposure range; even in your video here at 00:49 the spot of the sun is more exposed, or I don't know how to call it in English! Do you feel it's easy to color grade this afterwards?

    • @Eagleshadow
      @Eagleshadow 1 year ago +3

      What you are seeing happen to the sun is that highlights are being uncompressed and you suddenly see highlight detail that you weren't able to see before. There are couple of effects at play here. In 8 bit with DJI fpv footage this results in banding, but situation should be much better with mini 3 pro as it records in 10 bit, so you hopefully won't have that problem. The other problem is that you'll sometimes see color casts appear in the highlights, but it will only happen in clipped highlights such as the sun or clouds. This is due to something that seems to be a bug in D-Cinelike, where not all channels clip at 255 as they should, and instead you get situation like this one i.imgur.com/6LPNy7X.png (R: 252 G: 255 B: 250). This doesn't make any sense since in reality if this number could have been recorded it would have been far beyond 1000 in the sun, let alone 255, so camera should have written down 255 there instead of 252 and 250, but it didn't.
      This issue usually flies under the radar, as difference between 255 and 250 isn't large enough to produce a very noticeable cast. However, when highlights are correctly uncompressed, this difference grows exponentially and the cast becomes more noticeable: i.imgur.com/cJKwNRe.png (R: 240 G: 255 B: 230). In Davinci Resolve the solution is to do a white balance with reference to that burned out highlight on the node after the LUT, which will then become R: 250 G: 250 B: 250, and then adjust the curve or gain a bit to restore that peak back to R: 255 G: 255 B: 255 i.imgur.com/vNRfTOp.png
      And if white balancing to the clipped highlights goes against your creative intent, but you still want clipped highlights to be white, use soft Luminance qualifier to adjust white balance without affecting the clipped highlights.
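      Numerically, that correction is just a per-channel gain referenced to the clipped highlight, followed by a small overall gain; a sketch using the example values above (Resolve's own controls do the equivalent internally):

```python
highlight = (240, 255, 230)    # clipped sun after the LUT, from the example above
target = 250                    # white-balance the highlight to neutral grey first

wb_gain = tuple(target / c for c in highlight)               # ≈ (1.042, 0.980, 1.087)
balanced = tuple(c * g for c, g in zip(highlight, wb_gain))  # -> (250, 250, 250)
restored = tuple(min(255, v * 255 / 250) for v in balanced)  # gain back up to (255, 255, 255)

print(wb_gain, balanced, restored)
```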
      Other than that, if you're concerned with the "look" of such a sun not feeling right, this is once again the consequence of the sun clipping, so on the LUT-corrected footage you end up seeing an exponential rise in brightness that then just terminates abruptly in a clip. In real life, our eyes don't clip brightness like that, so it looks strange to us. What looks more natural is to have highlights come to a clipping point softly, which isn't as "correct" in terms of what the camera sensor saw, but it looks pleasing to us. When working in HDR, we don't want highlights to be as compressed as they are in SDR, as this less aggressive highlight compression is a big part of what actually makes HDR video look so realistic when done right. In SDR however, we do want more aggressive highlight compression.
      D-Cinelike comes with highlights pre-compressed, and my LUT uncompresses them since it's just bringing the image back to baseline neutral physical reality of what the camera sensor saw, with absolutely no edits or tone maps applied. And then it's up to you, depending on your creative intent, and depending on if you're delivering to SDR or HDR or both, to judge what amount of highlight compression to reintroduce. In Davinci Resolve, the easiest way to introduce highlight compression is to expose your footage such that highlights clip sooner than they should, and then in the Color workspace, in the "Curves - Custom" panel you'll see "Soft Clip" area with sliders "High" and "High Soft". Just drag these two sliders to adjust how you want your highlight compression to look. If you're working with another software that doesn't have an equivalent feature, you can always use curves themselves to achieve the same effect.
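      If your software has no Soft Clip equivalent, the same idea can be built by hand with a curve; a generic soft knee looks roughly like this (my own simple roll-off for illustration, not Resolve's actual Soft Clip math):

```python
import math

def soft_clip(x: float, knee: float = 0.8) -> float:
    """Leave values below the knee alone, roll off everything above it toward 1.0."""
    if x <= knee:
        return x
    # Exponential approach to 1.0 above the knee instead of a hard clip;
    # the curve and its slope are continuous at the knee point.
    return 1.0 - (1.0 - knee) * math.exp(-(x - knee) / (1.0 - knee))

for v in (0.5, 0.9, 1.2, 2.0):    # values above 1.0 are over-exposed highlights
    print(v, round(soft_clip(v), 3))   # 0.5, 0.879, 0.973, 0.999
```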
      I think it's a personal preference if it's easier to grade using this LUT, and it might depend on what look you're trying to achieve. If you've got a lot of experience grading D-Cinelike footage, it might be easier for you to use that as a starting point. But if you're used to using a camera agnostic neutral starting point or are interested in matching the look of multiple cameras, then using this LUT should make the grading easier. I personally use my own LUT every time I grade D-Cinelike. More than anything I love that it uncompresses deep shadows, which I can then restore with temporal noise reduction and so extend the dynamic range a bit.

    • @ahassona
      @ahassona 1 year ago +1

      @@Eagleshadow Really appreciate your reply. I've been doing color grading in Premiere recently for my quick short videos, but you made me curious to try the LUT on my footage in Davinci. There's a lot of information here, I'll read more about it, and I'm really happy I found this helpful video.

  • @anttiheik
    @anttiheik 1 year ago

    This looks pretty interesting! I grade in DWG Intermediate, how should I use this (or the HDR) version of the LUT to get D-cinelike footage to the same state as my other footage from cinema cameras? I tried applying this LUT and adding a CST at the next node with input linear and output DaVinci Intermediate but it didn't look quite right. I have a timeline node going from DWG Intermediate to Rec.709 Gamma 2.4 which converts all of my material into my display space.

    • @Eagleshadow
      @Eagleshadow 1 year ago +1

      Ah, when I used the word linear, that actually referred to the footage not having any tone curve (be it filmic, log, or ootf), which results in preserving a 1:1 ratio between the photons hitting the camera sensor and the photons being emitted from the screen. These photons however have to pass through a gamma curve on their way to the screen, since the screen will expect a gamma encoded image. I calibrated the expected number of photons to srgb values for the color checker provided by xrite: xritephoto.com/documents/literature/en/ColorData-1p_EN.pdf , which means that the output of the lut is srgb gamma encoded. So it's data with a linear tone curve (in other words, a lack of tone curve) inside of srgb gamma space.
      In CST, the word linear refers to gamma itself being linear, rather than to tone curve being linear, that's why CST gave you an output that didn't look quite right.
      eotf = electro optical transfer function = the encoded signal being decompressed back into light by the display
      oetf = opto electrical transfer function = scene light being compressed into a gamma-encoded signal by the camera
      ootf = opto optical transfer function = non linearity of the overall journey of light, essentially a tone curve, standardized and defined in rec.709 as ootf, often further customized by camera manufacturers and is then called filmic tone curve, used when camera is recording directly into rec.709. While recording into log curve such as D-Cinelike essentially replaces filmic tone curve (ootf) with a log curve.
      When your footage goes from DWG Intermediate to Rec.709 Gamma 2.4, CST will automatically tick the ootf for you, which will convert all of your material into display space. So it will do two things, it will recalculate gamma from DWG (log based) gamma into Gamma 2.4, and it will also add a basic grade that lowers shadows, which makes the footage look nicer on displays than linear light would, and essentially compensates for how typical viewing environment and display brightness limitations influence our perception of emitted light. This is ideal, since my lut does not contain ootf within it.
      So matching D-Cinelike footage to the same state as your other footage from cinema cameras depends on how those cameras have encoded their video. For example, if someone recorded directly into rec.709 with all of them, they would all in theory have an ootf or filmic tone curve applied, and you wouldn't want to apply ootf at the end in the CST without first inverting that ootf, assuming the camera is not using an appropriately filmic tone curve, which it probably is. To match D-Cinelike to such cameras, you'd have to apply my LUT, then apply a CST with ootf, and even then it would be a questionable match due to the proprietary nature of the filmic tone curves used in those cameras.
      However, if all your cameras are recording to log profiles, as they more likely are, then this will depend on how you are inverting that log. If you are using a LUT to do that, then it depends on if this LUT is designed to convert footage to scene referred space (linear tone curve, no ootf) or to display referred space (it includes ootf).
      If you are using CST to convert your footage from log, or using raw video, then CST will automatically tick the ootf check boxes depending on what you most likely want given the color spaces you're converting to and from, and it will most likely contain ootf only on the last step. This would be the ideal setup, and my LUT should fit into such setup by default, since it doesn't contain an ootf.
      So if you are using davinci color management, bypass it for the D-Cinelike clip. Then as a first step, add my LUT onto the clip with the first node. This clip will now be in srgb gamma and will be displaying perfectly linear light with no tone curve or ootf in the viewport of davinci resolve using an accurate srgb monitor. For it to look the same on rec.709 TV that expects pure gamma 2.2 or 2.4, signal would have to be converted to that first, or resolve would have to be aware of the difference and compensate for it.
      If you're working with a cinema camera that shoots raw, and you set the "camera raw" window to develop to Color Space Rec.709 and Gamma sRGB, this would yield the same result as applying my D-Cinelike lut to a non color managed D-Cinelike clip. That is not to say that my LUT has anything to do with color primaries; it's that D-Cinelike's color primaries already are rec.709 primaries, and my LUT simply doesn't affect hue, only brightness, and so the footage remains in rec.709.
      So there isn't one correct answer to your question, as it depends on how exactly you are using your cinema cameras and handling their inputs. Hopefully this helps you make sense of it. Since you're grading in DWG intermediate, I imagine you'll probably want to use CST to convert the clip with the lut applied from srgb to DWG with regards to both color space and gamma.

    • @anttiheik
      @anttiheik 1 year ago

      @@Eagleshadow Wow, thanks for the very thorough response!
      I am working with node-based color management, i.e. putting all of my different cameras into separate groups and then transforming each flavor of log (mostly BMD Film gen 5) to DWG Intermediate. Then I do my clip-level grades and on the timeline level nodes I have my global look and then an output CST from DWG to Rec.709 gamma 2.4 with luminance mapping, saturation mapping and forward OOTF checked on.
      With the DJI D-Cinelike footage I now put your sRGB LUT on the first node, then a node with a CST from sRGB (gamma and color space) to DWG Intermediate, no forward or inverse OOTF, no tone mapping. I still have my timeline level CST set to output Rec.709 Gamma 2.4. This pipeline results in quite a flat looking and a very dark image. Should I just correct this via offset/HDR Global and contrast adjustment (with pivot set to DWG middle grey 0.336) or is there another way you'd recommend? I found that I have to up the exposure by about 2.5 stops to get a good looking exposure on a clip that was exposed well with pretty much all of the data between 10% and 80% on the ungraded footage.

    • @Eagleshadow
      @Eagleshadow 1 year ago

      @@anttiheik I found some of my footage that sits between 10% and 80% and did a test doing what you're doing, and this is what I'm seeing: imgur.com/a/ulpliJG
      As you probably know, viewer seen in these screenshots is inherently srgb, and isn't an accurate reference in this case, as the image actually looks as it's supposed to only when seen on gamma 2.4 TV, but that's not something I can screenshot.
      I would say that this is correct way of doing it, and as expected, I'm getting the exact same output going directly to gamma 2.4 as I do when I go through DWG first. I do get slightly different result when I enabled luminance mapping and saturation compression, which is also to be expected.
      In the demonstration video above you should be able to observe that applying this LUT reduces the exposure, so yeah, reduced exposure is to be expected. This is simply how tone mapping works. In reality highlights can often be stupidly bright, and if we scale this light down linearly and show it as such on the display (properly compressing and decompressing to and from gamma space along the way), everything below those highlights ends up looking much darker.
      The brightest cloud in that scene of mine sits at an 84% signal value as seen in Resolve's scopes looking at the Y channel (207, 218, 216). This signal value is itself in gamma space and is simply the result of dividing 213 by 255. But when we look at light as it really is in nature, outside of gamma space, like imagine you're literally counting photons, that signal value would actually be 66%. This can be seen by exporting a tiff to Photoshop and switching Photoshop to 32 bit/channel mode, which uses linear gamma. At that point values no longer go from 0 to 255, but from 0 to 1. The brightest cloud in that image is now 0.66, and with the LUT applied it becomes 0.38. If there was an airplane next to that cloud, and if that airplane was a perfect white of signal value 100%, meaning 255 in gamma space, which would be represented as 1 in linear space, applying the LUT would not decrease this value at all, it would stay at 1.
      In theory, if our airplane didn't actually clip, we can now look at this in linear gamma space and know that 38% as many photons came from our first cloud compared to our airplane. And the same goes for any shadows in the image. Looking at the grass on the ground, its average value in linear space is 0.0196. Meaning that 1.96% as many photons came from it as from the airplane (considering a single pixel of each, of course). But since we don't perceive light linearly, but logarithmically, it doesn't seem to us that the grass is actually 51 times darker than the airplane, or 19.4 times darker than the cloud. The srgb value of that grass is actually 38, which is much more in line with what the light feels like to us compared to the cloud of value 167, 4.4 times darker.
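      Those numbers can be reproduced with the standard sRGB decoding formula, nothing specific to the LUT (small rounding differences aside):

```python
def srgb_to_linear(code: int) -> float:
    """8-bit sRGB code value -> linear light fraction (0..1)."""
    v = code / 255
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

print(round(srgb_to_linear(213), 3))   # brightest cloud: ≈ 0.665 linear (the ~66% above)
print(round(srgb_to_linear(38), 4))    # grass: ≈ 0.0194 linear (the ~1.96% above)
print(round(167 / 38, 1))              # ≈ 4.4: how much darker the grass "feels" in code values
```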
      Ok, and so we gamma encode images to make them both computationally more efficient and more intuitive to us, what does this have to do with anything? Well, the effect of this is that shadows are stupidly dark, while highlights are stupidly bright. This kind of gamma encoding takes care of the shadows being too dark, encoding them more brightly than they really are, and while this helps with highlights, trouble with them is that they go up to such an extreme that we can't really preserve all that data, detail, known as highlight contrast, and so we compress it further with what's called shoulder of the tone mapping curve.
      This is a feature of the filmic tone curve, also called the S curve: it boosts midtones and compresses highlights. D-Cinelike is one such curve, except it's meant to be an intermediate, not a final delivery curve, so it's flatter; what it does is boost midtones and compress highlights, so if we are going back to linear light, we have to reduce midtones and uncompress highlights. What D-Cinelike doesn't do, compared to a regular filmic tone curve, is crush shadows, and that's why it looks so flat, and that's what allows us to decide what exactly to do with the shadows. (It also raises blacks a tiny bit for some reason.)
      To uncompress highlights without changing exposure, we'd have to map the value of 1 to values far beyond 1, like 5 or 15, but this is not something that's supported by the LUT format. A LUT simply remaps between values 0 and 1. This means that the brightest highlight has to stay at the value of 1, and everything below that has to be remapped accordingly. And since we are going back to linear light, that means those midtones that D-Cinelike raised now have to go back down to where they originally were.
      And that's why using LUT to undo log encoding results in an image that overall looks much darker. In some ways, D-Cinelike is closer to what you want to see from a graded footage, but linear light is the only neutral ground that is based in physics. The person who is grading is then supposed to raise mid tones and compress highlights to the degree that looks good and suits their artistic intent. So yeah, decrease in exposure is expected, and should be manually brought back to the level of brightness that is desired. Also light that is linear will inherently look flat in most situations, that's why OOTF exists as a baseline transform when going from scene to display referred light.
      I don't have a workflow that I can strongly recommend, as I'm not really a colorist with that much experience. I'm actually a game developer doing this on the side, so I haven't worked with proper cinema cameras, just with drones, gopros, and a Sony DSC-RX100 VII. I also work primarily with HDR video. My preferred workflow is using Davinci YRGB, keeping my project in Rec.2020 Intermediate timeline color space and Rec.2100 ST2084 output color space. I then convert my footage to scene referred linear light in PQ gamma (a linear relationship between real life nits and nits on the screen), which really just means I work without OOTF, as I find it easier to control shadows when they aren't compressed by OOTF. I haven't really tried the DWG workflow. It just somehow made sense to me to work in the same color space and gamma (PQ) that I'm viewing and delivering in. I know that tools respond differently depending on the working space, but I haven't got enough experience yet to make a decision on what kind of response I prefer.
      I see a good argument for using DWG workflow for SDR as the intermediate color space is much wider than rec.709 or for delivering to multiple formats. But Rec.2020 is already a wide color space, and so far I didn't feel I was missing anything with regards to that. Also I don't think there's one best way of working with all this, since there's infinite number of ways one can set this up and what really matters is the result. I'm still experimenting myself with different approaches, and I tend towards simplicity.

    • @anttiheik
      @anttiheik 1 year ago +2

      @@Eagleshadow I'm a commercial DP and colorist myself and I use DWG for almost all of my work, for simplicity reasons as well. I only have to input transform each camera space to my timeline space and then all of my controls will work the same on each piece of footage, without having to think about which is which tone curve and color space. It's also much larger than SDR display space, which allows for much smoother and more natural looking manipulations. I have a reference monitor that is calibrated to Rec.709 Gamma 2.4 and is getting its feed directly from Resolve via a Blackmagic Ultrastudio, bypassing all OS level color management. I like to keep my color management pipeline so that no element in there changes middle grey exposure unless I want it to. This way, when grading someone else's footage, I know what their artistic intent in exposure was on set.
      However, if you're only grading projects for a hobby and mostly from one camera at a time, Rec.2020 makes perfect sense, although I believe you could get better results by grading in a log curve instead of a PQ display curve and only doing the output transform after making your adjustments. In log space, I would use offset or global HDR wheel for exposure and white balance adjustments and adjusting contrast via either lift and gain or the contrast knob with pivot set to the log curves middle grey point (0.336 in DaVinci Intermediate). I also like to check off "use s-curve for contrast" in project setting because I don't want any clip-level tonal compression since my output transform is already performing an s-curve and I usually also have a global timeline look applied that has its own s-curve as well. If you're going to try this method, remember to choose DWG Intermediate as your timeline color space in the project settings or otherwise your HDR wheels, color warper and some other tools won't work right.
      Now that you said it, it does make perfect sense that the footage gets darker since D-Cinelike's (and GoPro Protune's) middle grey point is mapped at a much higher level than in regular display gammas of 2.2 and 2.4. This would also sort of mean that D-cinelike should be exposed fairly bright in camera, although I tend to try to keep important detail under 80% since the highlights start to compress after that. Luckily the Mini 3 Pro D-Cinelike is 10bit so it's probably much easier to pull compressed values further from each other in the grade than it would be with 8bit without adding banding or other artefacts. This highlight compression is also something that log curves don't have and it makes exposing correctly quite a bit harder for me than on regular cinema cameras shooting log or RAW, especially when combined with the significantly raised middle grey value of D-Cinelike.
      It's amazing how much you know of something that is just a hobby or a side gig for you. Thanks for being such a generous help to all of us less technical people trying to get great images out of these little drones!

    • @dominichawgood4976
      @dominichawgood4976 1 year ago

      @Eagleshadow I really appreciate the time you're putting in here it's fantastic. I'm new to the DJI mini 3 Pro and also DaVinci Resolve so it's a great way to understand about workflow within the software. Your LUT project made me think about a paper I like by Chloe Legendre and Paul Debevec called Practical Multispectral Lighting Reproduction where the camera is essentially turned into a light measuring device by going in through the back end and taking out manufacturer camera curves. Anyway, I did have a question. I've read through the descriptions of how to use the LUT to take out the D-Cinelike curve and I was wondering if you would suggest using it in conjunction with a CST transform node to move footage into DaVinci Wide Gamut and DaVinci? I was just looking for a way to get consistency between different camera footage. It seemed if I had a project setup using DaVinci YRGB (Color Science) with REC 709 as timeline output and color space, and I was using DJI mini 3 footage, then after applying your LUT, I used a CST node set to REC 709 (input), Gamma 2.4 (input gamma), DaVinci Wide Gamut (Output Colour Space), DaVinci Intermediate (Output Gamma)... then another one converting it back, that this would get me to where I want? So I've taken off the D-Cinelike curve and I've expanded into a wide colour space for grading. Is that correct?

  • @gonzarb93
    @gonzarb93 2 years ago

    Does this also work with an ND filter? When I apply the LUT it gets too "dark", how do I solve this problem? Thanks!

    • @Eagleshadow
      @Eagleshadow 2 years ago +1

      ND filter lowers exposure by linearly downscaling the amount of light prior to it hitting the sensor, it's no different than if the sun was a bit less bright that day, or if the indoor scene used weaker lightbulbs, so yes, this LUT will work correctly with ND filters, and the footage will get considerably darker on applying the LUT with or without ND filter, as can be seen in the video above. You solve this problem by grading, for example in Resolve you would increase the gain parameter as that is equivalent of changing exposure in post (essentially multiplying the amount of light by a number). But then you're likely to get burned highlights, so you might want to use curves to tame them, or instead do local tone mapping to bring them down, or you could do local tonemapping to raise shadows in the first place instead of increasing exposure and getting a brighter image that way. The whole point is that you have a choice of how to do it now, rather than have an d-cinelike "grade" already pre applied to your footage solving some of these things but giving the footage a specific look, and making it difficult to undo that "look" if you are after achieving a different look and feel. And when I mentioned local tonemapping, in Resolve this operator is found in two tools called "shadows" and "highlights".
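      Since the LUT output has a linear tone curve, "increasing gain" really is just multiplying the light values, which is why it behaves like an in-camera exposure change; a tiny sketch (Resolve's gain control is the UI for the same multiplication):

```python
import numpy as np

linear = np.array([0.02, 0.10, 0.40, 0.90])   # some linear-light pixel values after the LUT

stops = 1.5                                    # push exposure up by 1.5 stops in post
gained = linear * (2 ** stops)                 # one stop = doubling the light

clipped = np.clip(gained, 0.0, 1.0)            # values pushed past 1.0 burn out, as described
print(gained, clipped)
```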

    • @gonzarb93
      @gonzarb93 2 years ago

      @@Eagleshadow Thanks for all that, I need to study a lot more haha. The thing was, the day I recorded it was really cloudy, so I could not bring up too much gain without sacrificing image quality. Could you do a Resolve tutorial with that LUT? Would be awesome to learn from you. Thanks!

    • @Eagleshadow
      @Eagleshadow 2 years ago +1

      @@gonzarb93 After applying the lut, I'm not really doing anything special, it's really just the basic grading like any other. The way I like to do it is to first adjust white balance (sometimes i temporarily boost saturation to make this easier), then adjust gain until image is as bright as I feel it should be, then I use curves to perform global tonemapping, and "shadows" / "highlights" tools to perform local tonemapping. I do those at the same time trying to find a good balance. Striking a good balance between global and local tonemapping is key to realism and pleasant look. Not enough local tonemapping and image looks realistic but flat, and sometimes even washed out. Too much local tonemapping and it looks surreal and punchy, like the overdone "HDR" photography look. I mostly use "shadows" tool to perform local tonemapping. Each scene in the video will require a different ratio of these.
      For example, looking at the late afternoon sun will make the image look very washed out from the bloom effect, and I'll punch the "shadows" tool in the range of 60-100 to return the normal amount of contrast back into the image, but turning the drone 180 so that the sun is behind me, and I find using "shadows" values in the range of 10-30 are giving me good looking image. When you bring the shadows up a lot you'll get noise, that's normal and the only "proper" way of reducing noise is using a better camera or increasing exposure which sacrifices highlights by having them burn out more than they otherwise would have.
      In Davinci Resolve Studio however, you can also cheat by using temporal noise reduction at the expense of potential visual artifacts if you apply a lot of it. This is how I can bring up the gain without sacrificing too much quality in my DJI FPV footage. Another way of bringing up the gain without revealing noise would be to bring it up and then crush the shadows using gamma or curves. This would make the shadows pitch black and avoid noise, it wouldn't look very realistic but it might look more pleasant than a ton of noise. Also note that I do all this hundreds of times per video, optimizing each scene to look as good as it can look. That's pretty much it. If I do a tutorial video in the future I'll let you know.

  • @gonzarb93
    @gonzarb93 2 years ago

    After applying this LUT, can you color grade the footage? Or what is the procedure for using it?

    • @Eagleshadow
      @Eagleshadow 2 years ago +2

      Yes. You add the footage recorded in D-Cinelike to your timeline in your preferred video editor. You apply this lut to your footage as the first step in the process so that your footage becomes neutral looking. After that you proceed to grade your footage to taste. Using Davinci Resolve I usually grade by first adjusting white balance, then exposure (gain) and saturation, then play around with curves to perform global tone-mapping and with the "shadows" / "highlights" tools to adjust local tone-mapping. If done properly, this will result in a realistic and good looking video. Then if you wanted to do creative grading you would play around with the color wheels until you find the look you're happy with.

    • @gonzarb93
      @gonzarb93 2 years ago

      @@Eagleshadow Awesome, thanks for all that knowledge! How do you correct white balance?

    • @Eagleshadow
      @Eagleshadow 2 years ago +1

      @@gonzarb93 In Davinci Resolve this is done with two variables in the Color tab just above the color wheels, called "Temp" and "Tint". Temp stands for temperature of light and adjusts the balance between cold and warm tones (yellow-blue). Tint adjusts the balance between green and magenta. Cameras try to guess both these values and bake those colors into the video. Their guess is rarely perfect, and adjusting these two values addresses that. I often find it useful to crank up the saturation to unrealistic levels, as I find it makes any problems with white balance more obvious. If you had white balance in the drone camera settings set to auto, then you'll have to keep readjusting white balance every time the camera changed its mind about its white balance guess. If you had white balance on manual, you'll probably have less white balancing to do, but you'll still have to readjust it for any drastic light changes in the shot. I usually go to camera settings before the flight, point the drone so that the camera is looking at a representative scene, then set the white balance to auto to have it automatically guess the white balance of that scene, then I set it to manual to keep it there. You don't want to eyeball white balance through the headset, as the headset screen isn't very accurate, and you don't want to keep the manual white balance value from a previous flight, as it could damage your colors if there is a large mismatch. White balancing with non-raw video such as the one DJI FPV creates is a lossy process, so you want your source footage to be near the correct value. So basically you want the drone to choose its own white balance, but you usually want it to use a single value for the whole flight rather than have the drone keep changing it mid flight. As far as correcting it goes, just play with Temp and Tint until the video looks natural to you, or the way you want it to look.

    • @gonzarb93
      @gonzarb93 2 years ago

      @@Eagleshadow thanks for all that info I will screenshot this and try to follow your steps, thanks! You are awesome! Both your videos are great

    • @Eagleshadow
      @Eagleshadow 2 years ago +1

      @@gonzarb93 No problem. Btw, I just published the first video where I actually used this LUT as a starting point in doing the grade. I used the HDR version of the LUT since I did an HDR grade, but the video can be watched in SDR as well if you don't have an HDR monitor or TV. Either way, it's a good example of what the final video can look like when graded to look natural: th-cam.com/video/HiHjOH7SE2s/w-d-xo.html