Exploring Image-to-Image with Flux 0.1 Schnell: A Deep Dive into the Latest Update in ComfyUI!

Comments • 37

  • @SebAnt
    @SebAnt 3 months ago +2

    Thanks for all these super helpful videos you are sharing, Sharvin 🙏🏼

    • @CodeCraftersCorner
      @CodeCraftersCorner  3 months ago

      Thank you so much for your support, Sebastian! I'm glad the videos are helpful.

  • @captainpike3490
    @captainpike3490 3 months ago +2

    You are a kind man. With every video you post, I learn something new. You explain so many complex topics in a simple and easy-to-understand way, without wasting a single word. Keep up the good work. I hope to see your videos more often. Thank you.

  • @pokerandphilosophy8328
    @pokerandphilosophy8328 3 months ago +2

    Very nice video and a very useful workflow! At 3:00: the Schnell model isn't limited to 4 steps; you can increase the number of steps well beyond that. It's simply optimised to generate near-optimal images (txt2img) in only 4 steps, which is what makes it faster to use, of course.
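
The point in this comment can be sketched in a few lines: the step count only divides the denoising schedule more or less finely, so nothing in the sampler pins Schnell to 4 steps; its distillation simply targets good results at 4. A minimal, illustrative sketch (a simple linear flow-matching-style schedule, not ComfyUI's exact sampler code):

```python
def make_schedule(num_steps: int) -> list[float]:
    """Evenly spaced noise levels from 1.0 (pure noise) down to 0.0 (clean)."""
    return [1.0 - i / num_steps for i in range(num_steps + 1)]

print(make_schedule(4))   # coarse schedule: what Schnell is distilled for
print(make_schedule(20))  # a finer schedule is perfectly valid, just slower
```

More steps just mean more (smaller) denoising moves along the same path, which is why the only cost is generation time.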

  • @der-zerfleischer
    @der-zerfleischer 3 months ago

    Every video is great and instructive.

  • @inanis_exe
    @inanis_exe 2 months ago

    Hi! What about turning a stylized character into a realistic one? Can it be done in the same workflow but in reverse, adjusting the prompt accordingly?

    • @CodeCraftersCorner
      @CodeCraftersCorner  2 months ago +1

      Yes, you can achieve great results by changing the style! The demo showed going from realistic to anime, but the reverse is definitely possible. Just add the Canny ControlNet to the workflow. Keep in mind that if the original stylized character has proportion issues (anatomy), the results may vary.

    • @inanis_exe
      @inanis_exe 2 months ago

      @@CodeCraftersCorner Hi, I played around and got good results with Canny, but my main workflow is shifting toward Flux. I haven't looked at how to do the same with Flux yet; I'll let you know later.

  • @contrarian8870
    @contrarian8870 2 months ago +1

    @CodeCraftersCorner An odd question: with img2img, can the AI be forced to basically reproduce the original image? If so, which setting would achieve this?

    • @CodeCraftersCorner
      @CodeCraftersCorner  2 months ago +1

      Yes, you can set the denoising strength to 0 in any img2img workflow. This works for SD 1.5 and up. Basically, the image will go from pixel space to latent space and then back to pixel space.

    • @contrarian8870
      @contrarian8870 2 months ago

      @@CodeCraftersCorner OK, thanks.
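
The denoise-0 trick discussed in this thread can be sketched numerically: img2img runs only a fraction of the sampling steps, proportional to the denoise value. An illustrative sketch (it mirrors the common behaviour of img2img samplers, not any one tool's exact code):

```python
def img2img_steps(total_steps: int, denoise: float) -> int:
    """Sampling steps actually run on the input image in img2img."""
    return round(total_steps * denoise)

# denoise=0.0: zero sampling steps -- the image is only VAE-encoded to
# latent space and decoded back, so it is reproduced almost exactly
# (up to a small VAE round-trip loss).
assert img2img_steps(20, 0.0) == 0

# denoise=1.0: a full generation; the input image no longer constrains it.
assert img2img_steps(20, 1.0) == 20
```

Intermediate values (e.g. 0.4-0.7) are what give the usual img2img behaviour: the composition of the input survives while details are regenerated.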

  • @Alexey-m5j
    @Alexey-m5j 3 months ago

    32GB of RAM and 8GB of video memory are enough even for a resolution of 2000x2600 (fp16 models). The quality is excellent; you don't even need to upscale. That said, generation with the dev model takes very long.

    • @CodeCraftersCorner
      @CodeCraftersCorner  3 months ago

      Thank you for sharing your findings!

  • @AgustinCaniglia1992
    @AgustinCaniglia1992 3 months ago

    Great! Thanks ❤

  • @CharisTsevis
    @CharisTsevis 3 months ago

    You are always so useful. Thank you.
    My regards to my beloved South Africa!

    • @CodeCraftersCorner
      @CodeCraftersCorner  3 months ago +1

      Thanks, Charis! Sending my regards too!

  • @sunlightlove1
    @sunlightlove1 3 months ago

    Thanks as always for such concise and up-to-date information.

  • @MrSmooX
    @MrSmooX 3 months ago

    You should try the fp8 safetensors models; they are a lot smaller and run faster 👍

  • @erikt81a
    @erikt81a 3 months ago

    Running Flux Schnell fp16 on 4GB of VRAM with no issues, but I have 32GB of RAM.

  • @runebinder
    @runebinder 3 months ago

    I've been running Dev. I tried Schnell but deleted it, as it's demonstrably worse quality and, at the same 24GB model size, pointless to keep. I've got a 3090 and upgraded to 64GB of RAM earlier today; I can run FP16 and haven't had any errors with it. One thing I have noticed: if I set the weight dtype to Default, it takes over 500s to generate an image. If I change it to fp8_e4m3fn, that drops to around 28s. I still have the clip set to the T5 FP16 version, but I'm not sure if the weight dtype is overriding that and setting it to FP8, and that's why the times are so different.

    • @CodeCraftersCorner
      @CodeCraftersCorner  3 months ago

      Thanks for sharing!

    • @xyzxyz324
      @xyzxyz324 a month ago

      Changing the weight dtype from Default to fp8_e4m3fn will not give you the quality of the FP16 model; that's why it takes 500s to generate. Check your Python and CUDA environment: with 25 steps, your system should take about 50 seconds to generate a 1024x1024 image.
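
A back-of-the-envelope check of the thread above, assuming Flux's transformer has roughly 12 billion parameters (an approximate figure): halving the bytes per weight with fp8_e4m3fn halves the weight footprint, which is why the model loads and runs so much more easily at fp8, at some cost in quality.

```python
PARAMS = 12e9  # approximate Flux transformer parameter count

for name, bytes_per_weight in [("fp16", 2), ("fp8_e4m3fn", 1)]:
    gb = PARAMS * bytes_per_weight / 1e9
    print(f"{name}: ~{gb:.0f} GB of weights")  # ~24 GB vs ~12 GB
```

The ~24 GB fp16 figure matches the model size mentioned earlier in this thread; the fp8 copy is what lets cards with less VRAM avoid constant offloading.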

  • @AntonioSorrentini
    @AntonioSorrentini 3 months ago

    Just a constructive suggestion: try not to dance in front of the camera. Thanks for your videos and the timeliness with which you post.

    • @CodeCraftersCorner
      @CodeCraftersCorner  3 months ago +1

      Thanks for the suggestion! Will try!

  • @geraldfabrot9961
    @geraldfabrot9961 3 months ago +1

    11 minutes for 1 image?
    And we see 1729 seconds in your screenshot, so about 29 minutes for 1 img2img?
    Is that correct, and what were your setup and GPU for these tests?
    Cheers!

    • @CodeCraftersCorner
      @CodeCraftersCorner  3 months ago +1

      Hello, yes, I mentioned it in the Flux review video: I have a GTX 1650 4GB VRAM and 32GB RAM.

    • @ryanpatrick7086
      @ryanpatrick7086 3 months ago +1

      @@CodeCraftersCorner Good lord, man, that's ancient at this point; no wonder it took you so long.

    • @CodeCraftersCorner
      @CodeCraftersCorner  3 months ago +1

      Yes, I'm just happy it ran, although slowly.

  • @MilesBellas
    @MilesBellas 3 months ago

    Why not discuss "Dev" too?

    • @CodeCraftersCorner
      @CodeCraftersCorner  3 months ago +1

      Yes, image-to-image works with the dev model too, with actually better outputs. I will cover it after some more testing.

    • @MilesBellas
      @MilesBellas 3 months ago

      @@CodeCraftersCorner
      😀👍