Great list of upscaling methods! I have also tried tiled diffusion and interpolating the upscaled latent with the Unsampler; these two were the best for me. Tiled diffusion is like Ultimate SD Upscale but without any seam problems, even at high denoise (0.7), while interpolation is complex and I don't fully get it, but it's the process that has given me my best Flux generations yet.
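For anyone curious, the latent-interpolation idea reduces to a weighted blend between the original latent and the unsampled/upscaled one. This is a minimal sketch with plain Python lists (the function name and flat-list layout are illustrative, not any specific node's API; in ComfyUI these would be torch tensors):

```python
# Minimal sketch of latent interpolation: latents are treated as flat
# float lists here; real latents are [batch, channels, height, width]
# tensors, but the per-element math is identical.
def lerp_latents(latent_a, latent_b, weight):
    """Blend two latents: weight=0 returns latent_a, weight=1 returns latent_b."""
    return [(1.0 - weight) * a + weight * b for a, b in zip(latent_a, latent_b)]

# Example: halfway between an upscaled latent and its unsampled counterpart.
blended = lerp_latents([0.0, 2.0], [2.0, 4.0], 0.5)
print(blended)  # [1.0, 3.0]
```

Sweeping the weight lets you trade off how much of the unsampled structure survives versus the clean upscale.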
Hey Nerdy great work on the video :)
Hey, thanks! ☺️
Amazing content! Keep up the great work!
img2img is pretty easy with Flux. I prefer fluxunchained with the Flux sampler parameters from Essentials, paired with Florence and a promptgen model. Drop denoise to 0.80 and you get an image with the same basic composition; drop it to 0.40 and you get something very, very similar. 24 steps with a Q4 model, around 11 GB VRAM for a 1024x1024, takes around 45 seconds on a 3090. There are also Q5 and Q8 variants of the model.
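A rough illustration of why 0.80 only keeps the basic composition while 0.40 stays very close: in a typical img2img implementation, denoise (strength) decides how many of the scheduled steps are actually run. This sketch follows the diffusers-style logic as an assumption; exact node behavior may differ:

```python
def img2img_start(num_steps: int, strength: float) -> tuple:
    """Return (steps_actually_run, start_index) for an img2img pass.

    strength (denoise) scales how far back into the noise schedule we
    start: higher strength = more steps run = more of the image redone.
    """
    steps_run = min(int(num_steps * strength), num_steps)
    start_index = num_steps - steps_run
    return steps_run, start_index

print(img2img_start(24, 0.80))  # (19, 5)  -> most of the image is re-generated
print(img2img_start(24, 0.40))  # (9, 15)  -> only fine details change
```

So at 0.40 the sampler skips the high-noise steps that decide composition and only touches the low-noise steps that refine detail.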
From my experience with Flux and SDUpscale, a denoising strength of 0.3–0.35 is the best choice. It still adds some detail, but in 95% of cases nothing funny happens to the image.
I love how your voice flow is starting to be more real and not so much news anchorish!
I can only afford ControlNet with 2 limbs, but I use the "mirror" option in MSPaint to make a fully formed character. Appreciate you helping us solve this maze!
Enjoyed the video! 🐁🐭
Actually, I use Flux img2img thusly: denoise always stays between 0.1 and 0.18; base_shift stays at 0.5; max_shift can vary between 2 and even 5 = the amount of change.
This way the output gets the color influence from the input and the LoRA can still add itself to the original, since max_shift effectively acts like denoise without being denoise. Makes sensei?
Thought that was the trick.. Cheers!
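If it helps anyone, here is my understanding of what max_shift is doing, sketched after the exponential time-shift used in the Flux reference/ComfyUI sampling code (the exponent is fixed at 1 here, and treating the shift value as mu directly is an assumption on my part):

```python
import math

def time_shift(mu: float, t: float) -> float:
    """Warp a flow-matching timestep t in (0, 1]; larger mu keeps the
    schedule at higher noise levels for longer, so more of the image changes."""
    return math.exp(mu) / (math.exp(mu) + (1.0 / t - 1.0))

# Endpoints are preserved...
print(time_shift(2.0, 1.0))  # 1.0
# ...but mid-schedule noise levels rise with the shift value:
print(round(time_shift(0.5, 0.5), 3))  # 0.622
print(round(time_shift(2.0, 0.5), 3))  # 0.881
```

Which matches the intuition above: raising max_shift pushes the schedule toward higher noise, acting like extra denoise without touching the denoise slider.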
Thanks so much! Question: the Denoise node you have up there, the one that ends in a Float output, which custom node pack is that from? I can't seem to find it.
It’s just a primitive
@@NerdyRodent Ha! I'm so dumb. Thanks!!
I miss the speaking avatars. Great video again though
Custom sampler, I see. The XLabs one is kinda shitty, ngl.
Also, their IPAdapter is either underdeveloped or heavily censored compared to SDXL.
I will try your method now with i2i.
Also, how did you get Prompts Everywhere working? For me it snaps to negative, and positive is missing.
👋 hi
Can't stand node flows, I'd rather do the same with code!
Would be interesting to test some Vision Encoder-Decoder models like ifmain/vit-gpt2-image2prompt-SD (trained on the Ar4ikov/civitai-sd-337k dataset).
Although, civitai prompts may make things worse lol.