New Flux IMG2IMG Trick, More Upscaling & Prompt Ideas In ComfyUI

  • Published on 21 Sep 2024

Comments • 18

  • @PaoloCaracciolo
    @PaoloCaracciolo 1 day ago

    Great list of upscaling methods. I have also tried tiled diffusion and interpolating the upscaled latent with the Unsampler; these two were the best for me. Tiled diffusion is like Ultimate SD Upscaler but without any seam problems, even at high denoise (0.7), while interpolation is complex and I don't really get it, but it's the process that has given me all my best generations with Flux yet.
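The seam-avoidance idea behind tiled upscaling mentioned above can be sketched in a few lines: tiles overlap so that neighbouring tiles share a blend region instead of meeting at a hard edge. This is an illustrative sketch only, not the actual Tiled Diffusion code; `tile_starts` is a hypothetical helper of my own.

```python
# Illustrative sketch (not the Tiled Diffusion API): compute overlapping
# tile start offsets along one image edge. Overlap gives each pair of
# neighbouring tiles a shared region to blend over, which is what avoids
# visible seams even at high denoise.
def tile_starts(length, tile, overlap):
    """Start offsets of tiles of size `tile` covering `length` pixels,
    each overlapping its neighbour by `overlap` pixels."""
    if tile >= length:
        return [0]  # one tile already covers the whole edge
    stride = tile - overlap
    starts = list(range(0, length - tile, stride))
    starts.append(length - tile)  # final tile sits flush with the edge
    return starts

# A 2048-px edge covered by 1024-px tiles with 128 px of overlap:
print(tile_starts(2048, 1024, 128))  # [0, 896, 1024]
```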

  • @impactframes
    @impactframes 1 day ago +1

    Hey Nerdy great work on the video :)

    • @NerdyRodent
      @NerdyRodent 1 day ago +1

      Hey, thanks! ☺️

  • @LIMBICNATIONARTIST
    @LIMBICNATIONARTIST 1 day ago

    Amazing content! Keep up the great work!

  • @weirdscix
    @weirdscix 1 day ago

    img2img is pretty easy with Flux. I prefer fluxunchained with the Flux sampler parameters from Essentials, paired with Florence and a promptgen model. Drop denoise to 0.80 and you will get an image with the same basic composition; drop it to 0.40 and you get something very, very similar. 24 steps with a Q4 model, around 11 GB VRAM for a 1024x1024, takes around 45 seconds on a 3090. There are also Q5 and Q8 variants of the model.
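The denoise values quoted above (0.80 keeps composition, 0.40 stays very close to the input) follow from how img2img uses the step schedule. A rough sketch of one common convention (A1111-style; ComfyUI's scheduling differs in detail): denoise < 1.0 noises the input only partway, so only the tail of the schedule is executed.

```python
# Sketch of the usual img2img convention: with denoise < 1.0 the input
# image is noised only partway down the schedule, so only roughly
# (total_steps * denoise) denoising steps actually run. Fewer steps run
# means more of the original image survives.
def img2img_steps(total_steps: int, denoise: float) -> int:
    """Approximate number of denoising steps actually executed."""
    return int(total_steps * denoise)

print(img2img_steps(24, 0.80))  # 19 -> composition preserved, details reworked
print(img2img_steps(24, 0.40))  # 9  -> output stays very close to the input
```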

  • @equilibrium964
    @equilibrium964 2 days ago +1

    From my experience with Flux and SDUpscale, I think a denoising strength of 0.3 to 0.35 is the best choice. It still adds some detail, but in 95% of cases nothing odd happens to the image.

  • @ctrlartdel
    @ctrlartdel 23 hours ago

    I love how your voice is starting to sound more natural and less news-anchor-ish!

  • @wakegary
    @wakegary 2 days ago

    I can only afford ControlNet with 2 limbs, but I use the "mirror" option in MS Paint to make a fully formed character. Appreciate you helping us solve this maze.

  • @electrolab2624
    @electrolab2624 2 days ago

    Enjoyed the video! 🐁🐭
    Actually, I use Flux img2img like this: denoise always stays between 0.1 and 0.18, base_shift always at 0.5, and max_shift can vary between 2 and even 5 = amount of change.
    This way the output gets the colour influence, and the LoRA can add itself to the original, since max_shift effectively acts like denoise without being denoise. Makes sensei?
    Thought that was the trick.. Cheers!
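The max_shift trick described above has a plausible mechanical explanation. As I understand ComfyUI's ModelSamplingFlux, base_shift and max_shift are linearly interpolated (by image token count) into a shift value mu, and Flux's timestep shift then pushes every step toward the noisier end as mu grows, which is why a high max_shift behaves like extra denoise. A hedged sketch, using the Flux time-shift formula as I understand it:

```python
import math

# Sketch of how base_shift / max_shift feed the Flux sampling schedule
# (my reading of ComfyUI's ModelSamplingFlux, not its verbatim code):
# mu is interpolated between base_shift at 256 image tokens and
# max_shift at 4096 tokens (4096 = a 1024x1024 image).
def flux_mu(image_seq_len: int, base_shift: float, max_shift: float) -> float:
    m = (max_shift - base_shift) / (4096 - 256)
    return m * image_seq_len + (base_shift - m * 256)

def time_shift(mu: float, t: float) -> float:
    """Flux's timestep shift: exp(mu) / (exp(mu) + (1/t - 1))."""
    return math.exp(mu) / (math.exp(mu) + (1.0 / t - 1.0))

mu_default = flux_mu(4096, 0.5, 1.15)  # common defaults at 1024x1024
mu_high = flux_mu(4096, 0.5, 5.0)      # the commenter's aggressive max_shift
# The same mid-schedule timestep maps to much more noise with a high shift,
# so more of the image gets reworked -- denoise-like behaviour:
print(time_shift(mu_default, 0.5), time_shift(mu_high, 0.5))
```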

  • @ToddDouglas1
    @ToddDouglas1 1 day ago

    Thanks so much! Question: the Denoise node you have up there, the one that ends in the Float output. What custom node group is that from? I can't seem to find it.

    • @NerdyRodent
      @NerdyRodent 1 day ago

      It’s just a primitive

    • @ToddDouglas1
      @ToddDouglas1 1 day ago

      @NerdyRodent Ha! I'm so dumb. Thanks!!

  • @stereotyp9991
    @stereotyp9991 2 days ago +2

    I miss the speaking avatars. Great video again, though.

  • @aeit999
    @aeit999 2 days ago

    Custom sampler, I see, I see. The XLABS one is kinda shoddy, ngl.
    Also, their IPAdapter is either underdeveloped or heavily censored compared to SDXL.
    I will try your method now with i2i.
    Also, how did you get Prompts Everywhere working? For me it snaps to negative, and positive is missing.

  • @LouisGedo
    @LouisGedo 2 days ago

    👋 hi

  • @JNET_Reloaded
    @JNET_Reloaded 2 days ago

    Can't stand flows, I'd rather do the same with code!

  • @quercus3290
    @quercus3290 1 day ago

    It would be interesting to test some Vision Encoder Decoder models like ifmain/vit-gpt2-image2prompt-SD (trained on the Ar4ikov/civitai-sd-337k dataset).
    Although civitai prompts may make things worse lol.