Dithering with Floyd-Steinberg in C++

  • Published on 27 Sep 2024

Comments • 206

  • @renhoeknl
    @renhoeknl 2 years ago +12

    What I love about this channel is that you can just watch the video and learn something without actually having to code along. You certainly can if you want to, but just watching and learning some new algorithms is really nice too.

  • @The3oLand
    @The3oLand 2 years ago +71

    Dithering is still used in printing. In fact modern day printers use a clever combination of computed dithering like FS and seamless tiling like Penrose tiles.

    • @SerBallister
      @SerBallister 2 years ago +1

      Cheap LCD panels too

    • @Smittel
      @Smittel 2 years ago +1

      ^ As SerBallister says, some cheap panels use 6-bit colour, exploiting the fact that we're not as receptive to high-frequency spatial colour changes.
      Also, CG graphics sometimes use dithering to reduce banding, which even at 8 bits can be noticeable in dark gradients; it looks a bit like the quantized image, just not as exaggerated. Dithering is basically free colour precision, which I believe is similar to why printers do it as well, but I'm not too knowledgeable about them.

    • @landmanland
      @landmanland 2 years ago

      @@Smittel Printers have the problem of having only a very limited set of colors, usually 6 in your typical inkjet printer. Fortunately, printers today have an extremely high density of “pixels”, so dithering doesn't affect the end result; you only see it under a microscope.
      Straight error-diffusion dithering is actually not used on its own, as the end result can make the image muddy because of how ink droplets flow and mix with each other. I use it as a first-stage filter, but only for photos, since it's relatively CPU-expensive.

  • @Maxjoker98
    @Maxjoker98 2 years ago +28

    Dithering still has uses in "modern" applications. You can still get some extra dynamic range on a specific display using it. Floyd-Steinberg is rarely used for this nowadays, but the principle remains. Think of things like displaying higher-bit-depth images or videos on "normal" 24bpp monitors, or display stream compression, etc.
    EDIT: Also, dithering is not inherently a scanline algorithm. Floyd-Steinberg is, but not all dithering algorithms are; most of them are simple matrix operations!

    • @wes8190
      @wes8190 2 years ago

      Agreed; I used dithering just a few years ago on a graphics project to get impossibly smooth gradients with no banding. It was like magic.

    • @infinitesimotel
      @infinitesimotel 2 years ago

      If you want to see some impressive dithering, have you seen the presentation by the LucasArts guy who only used 16 colours but could get crazy colour ranges, and even cycled the palette to make images seem animated?

    • @SquallSf
      @SquallSf 2 years ago

      @@infinitesimotel The name of the guy is Mark Ferrari, and he's not at LucasArts anymore; he left long, long ago.

  • @brentgreeff1115
    @brentgreeff1115 2 years ago +5

    I love this channel - this is the year I take a few months off to actually try to implement all this code.

  • @Pariatech
    @Pariatech 2 years ago +43

    As always, a great tutorial. I like that you start with the demo; that's a nice hook to keep watching. I'm curious if I could use this algorithm to emulate old 16-bit art using high-res pictures. I'll have to try it. One more project on my bucket list, hahaha.

    • @javidx9
      @javidx9  2 years ago +9

      lol thanks! Yeah, it's a great way to make pictures look retro. Most art software will have an equivalent "filter". In fact, I tested a version of my implementation against Affinity Photo and got exactly the same results, so we know how they're doing it :D

    • @SergiuszRoszczyk
      @SergiuszRoszczyk 2 years ago

      My thought on that would be to take VGA output, dither it and connect to EGA 64-color monitor. That could be interesting. Something that back in the days video cards weren't capable of (at least not 60 times a second).

  • @tjw_
    @tjw_ 2 years ago +3

    new javidx9 video?! christmas came slightly late this year I see!

    • @kiefac
      @kiefac 2 years ago +2

      Or extremely early, if you don't handle the overflow correctly

  • @suzuran451
    @suzuran451 2 years ago +11

    Very nice! I've wanted to learn about color quantization and dithering for a long time and this video explained them in a very understandable way! Thank you!

  • @Komplexitet
    @Komplexitet 2 years ago +3

    Yay new video!

  • @wesleythomas6858
    @wesleythomas6858 2 years ago +2

    Glad to see you back!!!

    • @javidx9
      @javidx9  2 years ago

      lol cheers Wesley, not as frequent this year, but I'm hoping for once a month.

  • @PumpiPie
    @PumpiPie 3 months ago +1

    Very good video, good explanation ;D Keep up the good work :D

  • @anonanon3066
    @anonanon3066 2 years ago

    Wow. This is amazing.
    How did I not know about this?
    Never would I have imagined it to be such a simple algorithm.

  • @SergiuszRoszczyk
    @SergiuszRoszczyk 2 years ago +4

    I used this technique to display pictures on white/black/yellowish-brown E-ink display. I was limiting palette to RGB values mimicking three colors of display and then dithering the picture. Works great for photos.

  • @Cypekeh
    @Cypekeh 2 years ago +1

    love this dithering

  • @treyquattro
    @treyquattro 2 years ago

    this was another superb tutorial. Old (Robert) Floyd was certainly one of the giants of 20th century computer science (e.g. Floyd's algorithm for finding cycles in lists, the Floyd-Warshall shortest path algorithm, program correctness, work with Knuth, etc.).
    BTW, with modern C++ and class template argument deduction, if you're creating a std::array you can leave out the item count - and even the type, if the elements are all of the same type - when you're initializing from an initializer list.
    e.g. std::array a{1, 2, 3, 4, 5}; // creates a 5 element array of type int
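
    For reference, a compilable illustration of that deduction (a generic example, not code from the video):

    ```
    #include <array>

    std::array<int, 5> a1{ 1, 2, 3, 4, 5 };  // explicit element type and count
    std::array a2{ 1, 2, 3, 4, 5 };          // C++17 CTAD deduces std::array<int, 5>
    ```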

  • @davidwilliss5555
    @davidwilliss5555 2 years ago +1

    Years ago I developed dithering algorithms for printing and Floyd Steinberg is one of the algorithms we used. There was a similar algorithm called Stucki which worked the same way but distributed the error to more pixels using different weights and produced a more pleasing image.
    There's another problem that arises in printing in that often your pixels are not square and a printed pixel will overlap the neighboring white pixels so you have to weight them differently. We had one printer where this was so bad that if you printed a 50% gray by painting pixels like a checker board, the black pixels completely overlapped the white pixels and you got black. For that we ended up using a completely different algorithm.
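
    For reference, the two error-diffusion kernels mentioned above; the Floyd-Steinberg weights are the standard ones, while the Stucki weights shown are the commonly cited ones, so treat the exact variant as an assumption:

    ```
    // Error-diffusion taps as (dx, dy, weight) relative to the current pixel "*".
    //
    // Floyd-Steinberg (/16):      Stucki (/42):
    //       *  7                        *  8  4
    //    3  5  1                   2  4  8  4  2
    //                              1  2  4  2  1
    struct Tap { int dx, dy; float w; };

    const Tap kFloydSteinberg[] = {
        {  1, 0, 7.0f / 16 }, { -1, 1, 3.0f / 16 }, { 0, 1, 5.0f / 16 }, { 1, 1, 1.0f / 16 },
    };

    const Tap kStucki[] = {
        {  1, 0, 8.0f / 42 }, {  2, 0, 4.0f / 42 },
        { -2, 1, 2.0f / 42 }, { -1, 1, 4.0f / 42 }, { 0, 1, 8.0f / 42 }, { 1, 1, 4.0f / 42 }, { 2, 1, 2.0f / 42 },
        { -2, 2, 1.0f / 42 }, { -1, 2, 2.0f / 42 }, { 0, 2, 4.0f / 42 }, { 1, 2, 2.0f / 42 }, { 2, 2, 1.0f / 42 },
    };
    ```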

  • @thomas3754
    @thomas3754 2 years ago +1

    A new video! You call this 'Pog' these days, I think. Very high quality as always; excited for the next ones already.

  • @carlphilip4393
    @carlphilip4393 2 years ago +1

    Hey javid, you're a great guy! I'm currently at university and I look up to you! It's amazing that you share all your knowledge with all of us for free, and you're an excellent teacher!

    • @javidx9
      @javidx9  2 years ago

      Hey that's very kind of you Carl, good luck with your studies, and you can aim much much higher than me!

  • @radojedom8300
    @radojedom8300 2 years ago

    Excellent. Interesting and educative.

  • @crazykidsstuff
    @crazykidsstuff 2 years ago

    Best part about this weekend?
    Working through this video!
    Very entertaining and very informative. Thanks so much!

  • @arrangemonk
    @arrangemonk 2 years ago +2

    Dithering is still everywhere - in every conversion for audio/image resampling (rgb32 float -> rgb8). I also used Floyd-Steinberg to distribute a service fee applied to a whole document across its individual line positions.
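
    That last use case is just one-dimensional error diffusion. A minimal sketch of the idea (hypothetical names, not the commenter's actual code): carry the rounding error forward so the parts always sum back to the total.

    ```
    #include <cmath>
    #include <vector>

    // Split an amount (in cents) across weighted positions, diffusing the rounding
    // error forward - the 1-D analogue of Floyd-Steinberg's error propagation.
    std::vector<long long> SplitWithErrorDiffusion(long long totalCents, const std::vector<double>& weights)
    {
        double weightSum = 0;
        for (double w : weights) weightSum += w;

        std::vector<long long> parts;
        double carry = 0;                                  // running quantisation error
        for (double w : weights)
        {
            const double    exact   = totalCents * w / weightSum + carry;
            const long long rounded = std::llround(exact);
            carry = exact - rounded;                       // push the error to the next position
            parts.push_back(rounded);
        }
        return parts;
    }
    ```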

  • @vytah
    @vytah 2 years ago +1

    The company I work in uses Floyd-Steinberg dithering to allow our users to print arbitrary images on B&W thermal printers. It works reasonably well.

  • @WillBourne999
    @WillBourne999 2 years ago

    Fantastic video thanks javid.

  • @s4degh
    @s4degh 2 years ago

    I was fascinated by the last dithering showcase with only 5 colors.

  • @ianmoore322
    @ianmoore322 2 years ago

    I've always wondered how to implement this algorithm. Thank you OLC. You always have the answers I've always needed. Console game engines and pixel game engines for example

  • @aropis
    @aropis 2 years ago

    So great to have you back! Really awesome for people new to image processing. If you had linked dithering to printing you would have completed the circle - I can imagine their AHA moment, especially if you mentioned the CMYK color space. Awesome stuff! Keep it up! Really, this video opens many interesting topics regarding signal processing. Reducing a dithered image shows the limits of nearest-neighbor/bilinear filtering. This could be the starting point of an image sampling video. All the very best for 2022!

  • @clamato2010
    @clamato2010 2 years ago +2

    Greetings from Mexico teacher, I am a fan of your channel and I have learned a lot with your videos.

    • @javidx9
      @javidx9  2 years ago

      Hey thanks Sam! Greetings from the UK!

  • @anoomage
    @anoomage 2 years ago

    I just did my own Floyd-Steinberg dithering for photographs displaying on an ePaper screen :D (Connected photograph frame, where you can choose an image on your smartphone to be displayed on the ePaper, sent to the Arduino with Bluetooth)
    Can't wait to see how you did it !

  • @SoederHouse
    @SoederHouse 2 years ago

    Thanks for bringing back the youtube::olc::candy

  • @ElGnomoCuliao
    @ElGnomoCuliao 2 years ago +3

    Finally!

  • @OrangeDied
    @OrangeDied 2 years ago

    I know nothing and have no interest in image processing, but I will watch this whole thing, because yeah.

  • @geehaf
    @geehaf 2 years ago

    You're back!! Great explanation and demonstration - as ever. :)

  • @mehulajax21
    @mehulajax21 2 years ago

    David, your content is awesome... the information that you present is pure gold... keep up the good stuff.
    I have a similar background to you (minus the game development, plus 10 years of automotive development).
    However, I find a lot of the content carries over to automotive development for experimentation... I would like to know if you have some book recommendations.

  • @will1am
    @will1am 2 years ago

    The return of the King! :)

  • @teucay7374
    @teucay7374 2 years ago

    The best video I've seen since the year started. I am working on a program to produce pixel art from high def images, and this is super useful for that! Thank you javid!

  • @Unit_00
    @Unit_00 2 years ago +1

    Interesting topic as always

    • @javidx9
      @javidx9  2 years ago

      Thanks Mateo!

  • @Ethanthegrand
    @Ethanthegrand 2 years ago +1

    I love your videos man. Even though your channel is based around C, and I know nothing about it, I in fact watch a lot of these tutorials and program in Lua with my own pixel engine. That's the great thing about your videos: you visualise everything. Keep up the great work!

  • @TomCarbon
    @TomCarbon 2 years ago +1

    Another great advantage of Floyd-Steinberg is that the ratios were chosen to sum to 16, so the divisions can be done with a 4-bit shift - very efficient!
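
    A minimal sketch of that point (helper names are made up; this is not the video's code): because 7 + 3 + 5 + 1 = 16, the whole diffusion step can stay in integer arithmetic, with a 4-bit shift instead of a divide.

    ```
    // Diffuse the signed quantisation error 'err' into the neighbours using only
    // integer maths; '>> 4' replaces '/ 16' (note it rounds down for negative values).
    inline void DiffuseErrorInt(int err, int x, int y, int* img, int w, int h)
    {
        auto add = [&](int px, int py, int weightedErr)
        {
            if (px >= 0 && px < w && py >= 0 && py < h)
                img[py * w + px] += weightedErr;
        };
        add(x + 1, y,     (err * 7) >> 4);
        add(x - 1, y + 1, (err * 3) >> 4);
        add(x,     y + 1, (err * 5) >> 4);
        add(x + 1, y + 1, (err * 1) >> 4);
    }
    ```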

  • @orbik_fin
    @orbik_fin 2 years ago +1

    The brightening effect is caused by doing arithmetic with gamma-compressed values instead of linear ones. E.g. middle gray (128) actually encodes a brightness of 24%, not 50%. See sRGB on Wikipedia.
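
    A minimal sketch of that conversion, using the standard sRGB transfer function (generic code, not taken from the video): convert to linear light, dither there, then encode back to sRGB for display.

    ```
    #include <cmath>

    // sRGB-encoded value (0..1) -> linear light (0..1)
    float SrgbToLinear(float s)
    {
        return (s <= 0.04045f) ? s / 12.92f : std::pow((s + 0.055f) / 1.055f, 2.4f);
    }

    // linear light (0..1) -> sRGB-encoded value (0..1)
    float LinearToSrgb(float l)
    {
        return (l <= 0.0031308f) ? l * 12.92f : 1.055f * std::pow(l, 1.0f / 2.4f) - 0.055f;
    }
    ```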

  • @StarLink149
    @StarLink149 2 years ago

    I love your videos. :)
    I always learn something interesting and can't wait for you to release more.
    On another note, I've always found old pixel art using Bayer dithering to look very nice.

  • @barmetler
    @barmetler 2 years ago +2

    I want to point something out about pointers.
    In C++, the star is part of the declarator.
    int *i, j;
    will create one int pointer and one int.
    This is why we put the star on the right. It is not a style choice, since the star is not part of the type specification, but the declarator. The same goes for references.
    This is in contrast to unsafe C# code, where the above snippet would create two pointers.
    Hope this helps!

  • @hermannpaschulke1583
    @hermannpaschulke1583 2 years ago +2

    I'd say dithering still has uses today. Even with 24bpp you can still see banding in darker areas.

  • @ianbarton1990
    @ianbarton1990 2 years ago

    Another really good video about a subject that I've always found really intriguing. I remember the first time I came across dithering, playing around with The GIMP to convert full colour images to true black and white; I thought it was some magic voodoo algorithm that must be beyond mere mortal levels of comprehension. That's why it's so satisfying to find out that the algorithm is very accessible and intuitive to understand, while there is still a solid level of mathematical thinking and nuance behind it.
    I think there's probably a natural follow-on video about generating 'optimised' palettes (where the computer decides which colours will best approximate the source image), if you're so inclined. :)

  • @secondengineer9814
    @secondengineer9814 2 years ago

    Really cool video! Always fun to see a simple algorithm that does so much!

  • @Jade-Cat
    @Jade-Cat 2 years ago +6

    A big factor in the brightening of shadows might be not the dithering algorithm itself, but using it on sRGB (I assume) data with a linear distance function. Two pixels, one set to 0 and the other to 64, will emit more light than two pixels both set to 32.

    • @Tordek
      @Tordek 2 years ago +2

      Indeed! A gamma adjustment is necessary to linearize the image between processing steps.

    • @bubuche1987
      @bubuche1987 2 years ago

      Exactly. And to test that, you can take pictures of your screen from some distance while it is displaying either a solid 127,127,127 colour or a pattern of alternating black and white pixels.

  • @therealchonk
    @therealchonk 2 years ago +1

    Great Video. I'll try it out myself.

  • @yonis9120
    @yonis9120 2 years ago +3

    [In the voice of Cornelius Fudge in Harry Potter 5:] He's back!

  • @super_jo_nathan
    @super_jo_nathan 2 years ago +23

    Am I correct in thinking that when clamping to 0 and 255 you will lose some of the error propagation? Of course it's better than wrapping around, but wouldn't storing the altered value unclamped and only clamping when actually assessing the pixel result in better dithering?

    • @javidx9
      @javidx9  2 years ago +22

      You are! In fact I was intrigued by this too, and created a version where this doesn't happen. What I observed was the error propagation goes out of control and quickly saturates, so the bottom right of the image is garbled. I thought about including it in the video, but then I'd have to explain the new custom pixel type required and it didn't really fit. I would guess that the clamping is required to keep things under control - this could probably be achieved by other means, however, if you're prepared to go beyond just the basic Floyd-Steinberg algorithm.

    • @DFPercush
      @DFPercush 2 years ago +1

      Seems like you could have a moving window of floating point values, like maybe 2 or 3 horizontal lines at a time.

    • @super_jo_nathan
      @super_jo_nathan 2 years ago +3

      @@javidx9 thank you for the detailed response and the informative video! Hope to see more videos like this from you in the future!

    • @nobody8717
      @nobody8717 2 years ago +2

      @@javidx9 We'd probably have to hold at the place where the clamping would potentially kick in, to investigate what is happening as to information overloading or miscalculating or translating unexpectedly. Partial dividends accumulating a discrepancy from a rounding or something like that. Debug when "clamp" is used and peek the memory values of the vars.

    • @eformance
      @eformance 2 years ago +1

      @@javidx9 That makes sense, since errors would propagate and propagate diagonally, and since the algorithm's bias is towards "brightness" it would get out of control. It seems the clamping was a fortuitous side effect that the algorithm needs. Did you try altering the bias constants too, to see if you could produce something more interesting?

  • @brainxyz
    @brainxyz 2 years ago

    Very Nice! Thanks

  • @sunnymon1436
    @sunnymon1436 2 years ago

    MYST had a lot of this in it, as I recall.

  • @RockTo11
    @RockTo11 2 years ago

    I wish dithering was used these days, even with 24bit palettes. For example, the splash screen on the Hulu app (on Samsung TVs) uses a teal gradient, but has a lot of posterization banding. Dithering would eliminate that.

    • @bubuche1987
      @bubuche1987 2 years ago

      In general, I think it would be easy to have shaders (I'm talking about GLSL here, and if you don't know what that is this comment is going to make little to no sense) output colors in a much broader range. Everything is calculated not with integers between 0 and 255, but with "reals" between 0 and 1. The precision of those "reals" is invisible to the programmer, so it could be very high. Then, in the last step, when it comes time to display on a screen with only 24 bits per pixel, the GPU could dither the whole result (it would have the real result of what the color should be in those "reals", and the transformation to 24 bits would be the sampling).
      Invisible to the programmer (maybe a boolean to set to true), backwards compatible with a lot of games, and improving the result a lot.

  • @Roxor128
    @Roxor128 11 months ago

    The serial nature of Floyd-Steinberg dithering isn't the only problem with it. It's also not a good fit for animations. The way FS dithering propagates the error through the image means that if you change a single pixel, everywhere after it will change as well, resulting in shimmering noise in an animation, which looks pretty bad. An animation-safe form of dithering needs to be localised and keep its pattern still relative to the screen. A Bayer-matrix ordered dither works quite nicely. Well-enough that the software renderer for the original Unreal from 1998 uses a 2*2 version of it on environmental textures to fake bilinear filtering. Interestingly, it's not dithering between colour values, but texture coordinates. Which makes sense as a way to save on performance. Much easier to add offsets to the coordinates of the texel to look up than to do bilinear filtering. Note that it only applies to the environment. Objects such as enemies and your weapon models are unaffected. Those just use nearest-neighbour texture filtering.
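
    A minimal sketch of an ordered (Bayer) dither of the kind described above, using the standard 4x4 Bayer matrix (a generic illustration, not Unreal's actual code); every pixel is independent of its neighbours, which is why it stays stable in animation:

    ```
    #include <cstdint>

    // Standard 4x4 Bayer matrix of threshold ranks 0..15.
    const int kBayer4x4[4][4] = {
        {  0,  8,  2, 10 },
        { 12,  4, 14,  6 },
        {  3, 11,  1,  9 },
        { 15,  7, 13,  5 },
    };

    // Quantise an 8-bit value to black/white using a screen-position-dependent threshold.
    uint8_t OrderedDither1Bit(uint8_t value, int x, int y)
    {
        const int threshold = (kBayer4x4[y & 3][x & 3] * 255 + 128) / 16;
        return (value > threshold) ? 255 : 0;
    }
    ```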

  • @Ochenter
    @Ochenter 2 years ago +3

    Hello David, Daddy.
    Long time no see you, miss your lessons.
    Stay safe, Mister.

    • @javidx9
      @javidx9  2 years ago +1

      Hi Daniel, thanks as always, and yes stay safe indeed!

  • @dennisrkb
    @dennisrkb 2 years ago +1

    You should perform the dithering in a linear color space.

    • @thorham1346
      @thorham1346 1 year ago +1

      No, you need gamma correction, and the more bits per channel you have, the less gamma correction you need. sRGB to linear color space is already too much for even one bit per channel.

  • @evennot
    @evennot 2 years ago +1

    The author of "Return of the Obra Dinn" has in-depth research on dithering on his blog, if anyone wants to admire the topic even more.

  • @trim7911
    @trim7911 2 years ago +1

    RGB error dithering... but after you said there's no cross-dithering between colours, all I can think about is what happens if you dither Hue/Saturation/Brightness (HSB, or HSL, HSI or HSV if you prefer). I wonder what sort of funky things would happen... In theory it should still work, but with the possibility of rotating all the way to a complementary colour.
    But then converting to HSB and back after dithering might just be too much of a pain.
    Still, you'd get some funky results... Very much something that would have been tried on TV signals, maybe games consoles like the Sega Genesis, Super Nintendo or Amiga (assuming you're using composite out).
    Edit: wait, no, that's Y'CbCr... So much technology that's mostly gone and I'd forgotten about.

  • @watercat1248
    @watercat1248 2 years ago

    This dithering method would be amazing for hardware or software with limited color support, for example the NES, GB, GBC, etc. For people who create games or other software for those systems, I believe this information is very useful.
    Personally I'm not that good with code and algorithms, but I appreciate the video - the only way I'm able to create video games at all is because game engines exist.

  • @BudgiePanic
    @BudgiePanic 2 years ago

    Another cool video 👍

  • @catalyst5434
    @catalyst5434 2 years ago

    Amazing video - I really like your explanation, it's so clear and very easy to understand! Thanks for the nice content. I was looking for cache optimization videos but couldn't find a good one; maybe you could make a video about it, that would be awesome!

  • @RealNekoGamer
    @RealNekoGamer 2 years ago

    Been using SDL2 with my follow-alongs, as it's a tried and true frontend to the standard graphical APIs, with a few additional goodies such as a render scaling function

  • @janPolijan
    @janPolijan 2 years ago

    Hello there. I'm using plain C at the moment for basic graphics programming, so all the C++ lambda goodies feel like some sort of black magic, ha ha ha! But still, from your good explanations I understood most of your video and the Floyd-Steinberg dithering, and it's very interesting.
    While watching the B&W dithering at 18m15s when you add the clamping, I started to wonder... I understand pixel values must not wrap around when diffusing the error. But isn't the clamping a little problematic, for potentially two reasons?
    #1) A slight decrease in dithering quality, because we delete part of the error to be diffused in the next steps.
    #2) A significant number of branching operations is now added to perform the clamping of all four adjacent pixels for every pixel we scan.
    I thought perhaps it could be avoided by simply computing Floyd-Steinberg in a signed buffer? Or even, for in-place dithering, maybe one could add a simple preprocessing step to halve the intensity of the input buffer and then cast the pixel array to a signed type during processing. I dunno, maybe it sounds too "hacky", but it's an idea I'd like to explore.
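
    A small sketch of that signed-buffer idea (buffer layout and names are assumptions, not the video's code): accumulate image-plus-error in a signed working buffer and clamp only at the moment each pixel is quantised.

    ```
    #include <cstdint>
    #include <vector>

    // 1-bit Floyd-Steinberg with a signed working buffer: diffused values may leave
    // 0..255 temporarily; thresholding happens only when the pixel itself is visited.
    void DitherSigned1Bit(const uint8_t* src, uint8_t* dst, int w, int h)
    {
        std::vector<int16_t> work(src, src + w * h);            // widen to signed 16-bit

        auto diffuse = [&](int x, int y, int err, int num)
        {
            if (x >= 0 && x < w && y >= 0 && y < h)
                work[y * w + x] += (int16_t)(err * num / 16);   // no clamping here
        };

        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                const int v   = work[y * w + x];
                const int out = (v < 128) ? 0 : 255;            // quantise only here
                const int err = v - out;
                dst[y * w + x] = (uint8_t)out;
                diffuse(x + 1, y,     err, 7);
                diffuse(x - 1, y + 1, err, 3);
                diffuse(x,     y + 1, err, 5);
                diffuse(x + 1, y + 1, err, 1);
            }
    }
    ```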

  • @Moonz97
    @Moonz97 2 years ago

    Great insightful video! I wonder, how do you handle pixels that are out of bounds at 18:42?

  • @frankgrimes9299
    @frankgrimes9299 2 years ago

    We could fine-tune the lambda at 9:35: we should get better code optimization if we remove the branch and instead mask out the MSB and shift it down.
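
    A sketch of that branchless idea, assuming the lambda in question thresholds an 8-bit value to 0 or 255 at 128 (an assumption about the code at that timestamp):

    ```
    #include <cstdint>

    // Branchy:     v >= 128 ? 255 : 0
    // Branchless:  replicate the MSB of the byte into every bit.
    uint8_t QuantiseBranchless(uint8_t v)
    {
        return (uint8_t)(0u - (v >> 7));   // 0x00 if the MSB is clear, 0xFF if set
    }
    ```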

  • @mworld
    @mworld 6 months ago

    CGA is back hehe.

  • @GNARGNARHEAD
    @GNARGNARHEAD 2 years ago

    nice one; I've been meaning to go back and have a look at the optical flow video.. try and figure something out for horizon tracking on the ESP32 Cam, a nice refresher 😁

  • @nanoic2964
    @nanoic2964 2 years ago

    I've noticed that the quality on a 9th gen 2021 standard edition front facing ipad camera is quite poor, this video has shown me that it is because it dithers quite a lot.

  • @GregoryTheGr8ster
    @GregoryTheGr8ster 2 years ago +1

    Nice, but what should the algorithm do when the current pixel is the last on a scanline?

  • @bogdanstrohonov8310
    @bogdanstrohonov8310 2 years ago

    Good evening Mr. Barr, how about a video on localization in games? Greetings, B S

  • @FaridAnsari1
    @FaridAnsari1 2 years ago

    I got my first IBM-compatible PC in the early 90s, with a monitor only able to display 256 colors in Windows 3.1. I remember that when I wanted to save an image or video, I would play around with the quantization and dithering options in whatever graphics program to make it look right on my display. After watching this video, it really makes me appreciate what dithering does (approximate with far less information and still get the idea of the image across!). I think it would make for a cool post-processing effect for Pixel Game Engine based games, but I'm not sure if it is speedy enough for a decent FPS?

    • @SianaGearz
      @SianaGearz 2 years ago

      If you have something CPU-rendered, then you can make Floyd-Steinberg work, it's fine, but it also looks terrifyingly bad in motion: when you have moving and non-moving parts of the image, every little movement causes a ripple of value changes to the right of and below it (assuming you process from the top left), while everything above and to the left stays static. It distracts you from the actually moving parts of the image and pulls your attention towards noise at the bottom right. You can use blue noise instead to achieve a similar-looking dither effect. With precomputed blue noise, a diffusion-style dither look is insanely fast on the GPU (or CPU), trivially parallelisable, and you can control the behaviour: you can make it stable frame-to-frame or vary it uniformly between frames; there are even 3D or spatiotemporal blue noises specifically for the purpose. Computing optimised noise is extremely slow, but it can be precomputed such that it wraps around seamlessly and just shipped as a texture or array.
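
      A minimal sketch of that threshold-offset approach (the blue-noise tile is assumed to be precomputed elsewhere; `noise` is just a placeholder array):

      ```
      #include <cstdint>

      // Quantise to 1 bit using a precomputed, tileable noise texture as the per-pixel
      // threshold. With blue noise this approximates the look of diffusion dithering,
      // but every pixel is independent: trivially parallel and stable frame-to-frame.
      uint8_t NoiseDither1Bit(uint8_t value, int x, int y,
                              const uint8_t* noise, int noiseW, int noiseH)
      {
          const uint8_t threshold = noise[(y % noiseH) * noiseW + (x % noiseW)];
          return (value > threshold) ? 255 : 0;
      }
      ```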

  • @slavko5666
    @slavko5666 2 years ago +1

    Quote from the Linux kernel coding style:
    "Encoding the type of a function into the name (so-called Hungarian notation) is brain damaged - the compiler knows the types anyway and can check those, and it only confuses the programmer. No wonder MicroSoft makes buggy programs."

  • @BudgiePanic
    @BudgiePanic 2 years ago

    I've a feeling the YouTube compression algorithm really didn't like this video.

  • @normwaz2813
    @normwaz2813 2 years ago

    Hi, a little off topic but I wonder if you could explain anti-aliasing algorithm?

  • @akimpus
    @akimpus 2 years ago +1

    Javidx9, hi. Do you want to touch on the topic of neural networks and artificial intelligence? I think with your teaching skills, I and other viewers could easily understand this topic.

    • @javidx9
      @javidx9  2 years ago

      Thanks, but sadly no. My academic background is actually in machine learning and network construction/simulation... I'm done with it. I find it quite dull.

  • @yutdevmahmoud5271
    @yutdevmahmoud5271 2 years ago

    Can you create a video on how to set up Visual Studio, import your engine, and work with it?

    • @javidx9
      @javidx9  2 years ago

      Yes! th-cam.com/video/eTGSTTxR-Ss/w-d-xo.html

  • @jamesking2439
    @jamesking2439 3 months ago

    I've seen some recent games fake transparency by dithering stuff out.

  • @publicmmi
    @publicmmi 2 years ago

    Now a version with 50 shades of grey :)

    • @SreenikethanI
      @SreenikethanI 1 year ago

      i was searching for this comment 💀

  • @giorgioguglielmone6528
    @giorgioguglielmone6528 2 years ago

    Sorry if I write to you here. Could you do a tutorial on how to write a program in Visual Studio C++ 2022 that connects to a Firebird 4.0.1 database (maybe using Boost.Asio, or another library like SOCI or IBPP)?

  • @hackerman8364
    @hackerman8364 2 years ago

    hey can you make a video about headers?

  • @samuelecanale5463
    @samuelecanale5463 2 years ago

    Hello, I'm trying to make my own pixel game engine, but I encountered compiler error C2089 ("class too large") on the big PixelGameEngine class. Did you encounter the same error? If so, how did you solve it? Hope you will find the time to answer. Great video btw, I'm learning so much from you!

    • @javidx9
      @javidx9  2 years ago

      Thanks Samuele, sounds like you are allocating too much memory on the stack. Big areas of memory need to be allocated on heap and accessed via pointers or other appropriate interfaces.

    • @samuelecanale5463
      @samuelecanale5463 2 years ago

      @@javidx9 thank you very much. I'll try to fix it like this

  • @smartito_97
    @smartito_97 2 years ago

    Is this the algorithm that printers use?

  • @SianaGearz
    @SianaGearz 2 years ago +1

    I'm looking at my Odroid Go Advance, and it could actually benefit from dithering, the colour resolution is very low. Maybe temporal dithering though, aka FRC. Today, you feed 8-bit image to your PC monitor, but the pixel resolution is only 6-bit and with a nonlinear transfer curve, so temporal dithering is how they restore some of the requisite quality; or alternatively displays with 10-bit input and 8-bit panel. I feel the modern rendering also has something similar, where you add a noise function to your sample vector together with multiple samples or filtering to try to fight aliasing in the shaders or raytracing, so you dither/vary the sampling point instead of the colour value.
    A very unpleasant trait of Floyd Steinberg is that it looks good in a static picture, but absolutely horrendous in motion, as with minimal movement, parts of the image near the top left are stable, but near the bottom right are increasingly flickery, especially disturbing in parts of the image that haven't really changed. Blue noise was often the accepted substitute here, just like in shaders, until low-discrepancy noise functions were invented recently.
    As to its tendency to lighten up dark areas, wouldn't it make sense to convert the intermediate values to linear space from gamma space and then back? You also don't need to clamp the intermediate values then, just leave them as floats. Then the small-value diffusion that occurs would be more careful not to visibly brighten the image, i would think.

  • @oschonrock
    @oschonrock 2 years ago

    Is there a "bounds check" bug here: th-cam.com/video/lseR6ZguBNY/w-d-xo.html on line 78. ie do the coordinates of vPixel+vOffset fall off the end / beginning of the row/column? ie do they use image information from a part of the image which is VERY far away, or even worse "outside the image"... (I haven't checked the implementation of GetPixel and SetPixel to see whether, and if so how, they are bounds checked). -- Update I just downloaded, compiled and checked. SetPixel does the bounds checking and just ignores the "out of bounds" pixel coordinates... so this is "OK..."

  • @Lattamonsteri
    @Lattamonsteri 2 years ago

    I remember hearing an interview where a LucasArts employee told how, when he went to work there, dithering wasn't used in the games because it didn't compress well. But after he drew a dithered image and it looked so much better than the standard (EGA?) image, the coders were forced to implement dithering.
    Now, I wonder... is this Floyd-Steinberg dithering easy to compress? Could we use the standard posterized image as the compressed image, and then just store another array where the error amount is stored for each pixel? Then at runtime the algorithm would go through the image and recreate the dithered effect? Or is there a better way?

    • @SianaGearz
      @SianaGearz 2 years ago +1

      I don't see much use; you might as well compress the original high colour depth image instead. For the, say, 8 bits of your original colour image, if you're converting it to a 2-bit dithered representation, all you've done is store for each pixel a separate plane with 2 bits corresponding to the thresholded image, and another plane with 6 bits corresponding to the remaining error. You have also not at all decorrelated the data but duplicated it: say you store differences between neighbouring pixels and give them a variable storage size depending on the stored value; both planes then encode the same general trend, and you'd be better off doing this on the whole pixel, as when there are substantial magnitude changes you store them once rather than twice.
      At low bit depths, the dithered image itself is probably pretty much incompressible, while at higher ones, nothing speaks against compressing the dithered image directly with local differences.
      I think someone else could come up with a better approach in terms of compression, but I wager a guess it wouldn't be simple at all.
      On the other hand, if you knew you'd be diffusion-dithering the image for display, you could use a lossy compression algorithm on the high bit depth image, the artefacts of which would be particularly well hidden by the dither, as they're similar in appearance. If you know the ADPCM algorithm for compressing audio, there are 2D generalisations of it. It would even spit out data in the same order as consumed by the dithering algorithm, so you could have a pretty optimised implementation that simultaneously decodes and dithers. But I really don't know whether it would beat storing the dithered image uncompressed at lower bit depths, or trivial compression of the dithered image at higher ones. Sounds like a subject for a scientific paper or something, but maybe someone has done it before.

    • @Lattamonsteri
      @Lattamonsteri 2 years ago

      @@SianaGearz Thanks for a thorough answer :D I'm not familiar with audio compression or anything related to compression in general, but I think I understood most of what you said! x)
      As for my original idea, I forgot how many bits are needed for the error values xD I guess I thought they could also be rounded to 8 values or something, but that would probably cause very weird rounding-error artefacts!

  • @Kilohercas
    @Kilohercas 2 years ago

    Interesting: your code uses 100% of a single core while idling. A simple thread.sleep(1) equivalent in the loop would help a lot.

    • @javidx9
      @javidx9  2 years ago +1

      That's not an ideal solution, a better one is to vsync, which you can do when constructing the engine. I prefer to let it max out as it's a crude measure of algorithm performance.

    • @Kilohercas
      @Kilohercas 2 years ago

      @@javidx9 Yes, there are a lot of better ways to do it, as you say. It's just that for this application it would be nearly invisible :)

  • @densming
    @densming 2 years ago

    No Vimto can??

  • @Kaltinril
    @Kaltinril 2 years ago +2

    I wonder, if you had the input parameters (bits, ending error), whether you could use this as a lossless compression algorithm, working your way back from the bottom right to the top left.

    • @javidx9
      @javidx9  2 years ago +1

      Hmm, working backwards would require you store which direction the error comes from. I had success some time ago, dithering to low bit counts to compress, but then Gaussian blurring and a 3x3 sharpening convolution to reconstruct and this had such shockingly low error I went on to build a commercial product with it.

    • @Kaltinril
      @Kaltinril 2 years ago

      @@javidx9 that's a good point, I was forgetting about all the other error values that we don't know.

    • @allmycircuits8850
      @allmycircuits8850 2 years ago

      Once I tried to implement undo operations on an image in a "clever way": not just keeping the older version of the image, but implementing the inverse operation (as precisely as possible) and then storing the difference between the original image and the one obtained by inverting the operation. Classic "predictor/corrector". For example, when I perform an image rotation by 1 degree, it first rotates, then rotates back, subtracts from the original and stores the difference inside the "undo" structure.
      I looked at what these diff images look like: almost flat gray areas (for these images I had an offset of 128, so black is the maximum difference to one side, white to the other) with a regular grid of pretty small noise. From an original image having all levels 0 to 255 there was so little remaining that I cheered, expecting the diff image to take just 1/10 or less of the original, still lossless, leading to an "svn for images" with small overhead.
      But alas, lossless compression is a ruthless bitch. The range of each pixel was lowered from 256 to just 16, but that's not a reduction by 16 times, just by 2, because 8 bits are replaced with 4 bits.
      What's more, that "residual noise" is almost incompressible, as any white noise should be.
      I'm afraid compression based on dithering will suffer from the same problems.
      But it's a very interesting topic anyway. There is still something magical about compression algorithms...

  • @haltarys
    @haltarys 2 years ago +1

    In your Convert_RGB_To_Greyscale, where are the coefficients 0.2162, 0.7152, and 0.0722 coming from? To convert to greyscale, I would just compute the average: add the three channels together and divide by 3.

    • @javidx9
      @javidx9  2 years ago +2

      They are an established mixture based on the fact that your eye is most sensitive to green light and least sensitive to blue. Averaging gives significant advantage to what was blue, which would typically be the lowest illumination.
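
      For context, those weights are essentially the Rec. 709 luma coefficients (0.2126, 0.7152, 0.0722). A minimal greyscale conversion along those lines (a generic sketch, not the video's exact function):

      ```
      #include <cstdint>

      // Perceptual greyscale: green dominates because the eye is most sensitive to it,
      // blue contributes least.
      uint8_t RgbToGrey(uint8_t r, uint8_t g, uint8_t b)
      {
          const float y = 0.2126f * r + 0.7152f * g + 0.0722f * b;
          return (uint8_t)(y + 0.5f);
      }
      ```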

  • @samuelhulme8347
    @samuelhulme8347 2 years ago

    “We will only have 4 shades of grey” What about 50 shades of grey

  • @tbird81
    @tbird81 2 years ago

    It's quite common to dither in modern GPU graphics. It reduces banding.
    Also it's used temporally.

  • @arivaldarivald3212
    @arivaldarivald3212 2 years ago +1

    And now you know how your printer creates colorful images using just four inks :-)

  • @zxuiji
    @zxuiji 2 years ago

    12:55 - it doesn't seem that hard to parallelise on the CPU side; at worst, deliberately yield thread execution time until a column's thread has started processing before launching the next column's thread... so long as it doesn't need to access the next column's pixel anyway.

    • @SianaGearz
      @SianaGearz 2 years ago

      I don't understand how you want to accomplish that. Every pixel depends on output from processing the neighbours to the left, to the top, and diagonally in between, and recursively so.
      If you want a parallel algorithm similar in effect and appearance to diffusion dither, you just use a precomputed high-quality blue noise and apply it as threshold offset.

    • @zxuiji
      @zxuiji 2 years ago

      @@SianaGearz You would have each thread process one column. As long as each thread is launched in sequence, then by the time subsequent threads are ready for their next pixel, the pixel adjacent to it is already done with. At worst you would just add another buffer filled with counts: when a count matches the thread's pool number, it indicates all the pixels to the left are done with. Since the threads are launched in sequence, it would be very rare for a thread to need to wait for other threads to do their bit. Each thread would effectively be doing a subsection of the scanline, and only after the previous thread has already gotten to it; by waiting for the first pixel in a column to finish being processed before you let the next thread start on its column, you further reduce the chance that the count buffer would ever serve its purpose.

    • @SianaGearz
      @SianaGearz 2 years ago

      @@zxuiji So you have a triangular wavefront, where let's say you have a 4-thread pool, thread 0 processes pixel 3 of column 0, thread 1 processes pixel 2 of column 1, thread 2 processes pixel 1 of column 2, and thread 3 processes pixel 0 of column 3? Or alternatively instead of one pixel, you have each thread process a span of pixels but advancing in that same triangular wavefront?
      Yeah, with a decent batch size, it could work.

    • @zxuiji
      @zxuiji 2 years ago

      @@SianaGearz Yeah, roughly like that. Things can be further optimised by keeping all buffers in one: if the dst is at the front of the shared buffer and the src is kept directly after it, with each row & column prepended with a pixel that emits 0 light, then no thread would need to check if its column is column 0; instead they would just subtract 1 from it prior to using it for the dst image. Having the dst at the front means it can also be used directly for sending to the graphics card.

    • @zxuiji
      @zxuiji 2 years ago

      @@SianaGearz Now that I'm back home with a keyboard in front of me I'll do a pseudo example of what the thread code would roughly look like:
      ```
      void* dither_col( void *obj )
      {
          DITHER *dither = obj;
          PIXEL *dst = dither->buff, *src = dst + (dither->cols * dither->rows);
          uint *counts = (uint*)(src + ((dither->cols + 1) * (dither->rows + 1)));
          uint y = 1, X = dither->col - 1, Y = 0;
          for ( ; y < dither->rows; ++y, ++Y )
          {
              uint row = dither->cols * Y;
              PIXEL nxt = {0}, a = src[(dither->cols * y) + X], b = src[row + dither->col], c = src[row + X];
              /* wait until the thread handling the column to our left has passed this row */
              while ( counts[y] != X )
                  pthread_yield();
              ... /* compute nxt from a, b, c and the diffused error */
              dst[row + X] = nxt;
              counts[y]++;
          }
          return NULL;
      }
      ```

  • @osazeeosaro3926
    @osazeeosaro3926 2 years ago +2

    fifth

  • @santyricon
    @santyricon 2 years ago

    21:40 It would be interesting to see how the image looks using 50 shades of grey.
    Ok I better leave :(

    • @SreenikethanI
      @SreenikethanI 1 year ago +1

      i was searching for this comment 💀

  • @Jkauppa
    @Jkauppa 2 years ago

    image compression huh

    • @Jkauppa
      @Jkauppa 2 years ago

      lossless vs lossy compression

    • @Jkauppa
      @Jkauppa 2 years ago

      minimum optimal loss compression

    • @Jkauppa
      @Jkauppa 2 years ago

      png vs jpg vs dithering & like closest neighbor color palette fit

    • @Jkauppa
      @Jkauppa 2 years ago

      noise distribution instead of absolute error value

    • @Jkauppa
      @Jkauppa 2 years ago

      what do you do if you dont do histogram style analysis of the images

  • @guitart
    @guitart 2 years ago +13

    Welcome back, Maestro!

    • @javidx9
      @javidx9  2 years ago +1

      lol Thanks Perini!

  • @tedeusz83
    @tedeusz83 2 years ago +3

    I made something similar while working on my PC Amiga Raycaster engine:
    th-cam.com/video/e3M8lOWTsoI/w-d-xo.html

    • @javidx9
      @javidx9  2 years ago +2

      Thats a nice demo Mateusz, I like that dithering gives an aesthetic of its own, just a shame it gets a bit costly to do, though I suppose you only need to do it on fairly low res images anyway.

  • @ric8248
    @ric8248 2 years ago +3

    It's fascinating that you're doing some DSP! I hope you enter the world of audio effects one day.

  • @Rouverius
    @Rouverius 2 years ago

    25:30: CMYK! And sure enough it looks like a photo from a color newspaper!
    What's amazing to think about is that back in the 1930's, the first fax machines did a similar operation with vacuum tubes and used capacitors to hold the error values.

  • @aronkyleback5079
    @aronkyleback5079 2 years ago

    The temptation to critique Javid in a somewhat hateful way just to get posted in #scrapbook...

  • @kweenahlem6161
    @kweenahlem6161 2 years ago +2

    best teacher ever

  • @Miki19910723
    @Miki19910723 2 years ago

    Dithering is commonly used in games combined with temporal antialiasing. Without error accumulation i guess.