Thanks man. You know I can't always sit and learn from a YouTube video; I'm usually in the middle of something when I have technical questions, and this style of learning is easy to digest on the go. I don't even have to watch the video, the text-to-speech is clear enough. I appreciate you taking the time to walk us through it. Thanks again.
As always, a very good video: impeccable phrasing that's easy to understand even for a French person like me, top explanations, and incredible solutions that few would have thought of. Bravo, a concentrate of genius, shared with real generosity!
Great tutorial. I like the way you explain and demonstrate some of the key settings so we understand more, instead of just rushing through.
It's been a very long time since a workflow blew my mind! Thank you so much for posting!
nice to hear! I really like this one!
Outstanding work. Thank you so much again for this, by far the simplest explanation and workflow.
cheers !!!
Thank you so much for this amazing workflow 😊
thanks for commenting!
wow amazing!
thanks
Oh wow, the new ComfyUI was disabled by default on my end, thank you for bringing that up!! Really loving the new design.
Thanks a lot, bro. Another useful tutorial.
WOW! Awesome video, as always! Thank you🔥
This is the new photoshop😎
ComfyUI? It's much more.
@PixelEasel I'd say it's something else, to be more precise. Photoshop can still do a lot of things for which ComfyUI isn't, and won't be, the best tool.
How can I install it on my device? Please make a video about that.
Amazing, looking forward to test it 🙌👍
waiting to hear what you think
That's cool, another way to play with Comfy.
For the first demo, what would you do differently if you wanted the same person to remain identifiable in the end result?
That's probably what I've been looking for since the start of Stable Diffusion.
nice to hear. thanks😊
@PixelEasel one thing I don't really understand: how did you manage to send the Show Text node's string to the CLIP text encoder? The CLIP Text Encode node doesn't accept a string as input, and yet in the video I see two inputs on that node.
Thank you for sharing this! It would be awesome if we could tweak the Florence prompt beyond just replacing 'the image is'. How can we do that?
This is an amazing workflow, thanks for making it free for all of us. I'd love a follow-up workflow where we could mask the face of the foreground subject, so that everything else changes except the face. That would be so cool, maybe using ControlNet. Could you help with that, please?
you are the best one 👍🏻
Amazing, definitely better than Photoshop generative fill. I'll go with this from now on :)
What do max_shift and base_shift do in the ModelSamplingFlux node?
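For anyone else wondering: in the reference Flux sampling code, these two values set a resolution-dependent "shift" that remaps the sampler's timesteps, with more of the schedule spent at high noise for larger images. The sketch below follows the constants in that reference code (256 to 4096 latent tokens, 16-px patches); treat the exact mapping onto ComfyUI's ModelSamplingFlux widgets as an assumption.

```python
import math

def flux_mu(width, height, base_shift=0.5, max_shift=1.15):
    """Resolution-dependent shift exponent, per the reference Flux code.

    mu ramps linearly from base_shift at 256 latent tokens up to
    max_shift at 4096 tokens (one token per 16-px patch is assumed).
    """
    seq_len = (width // 16) * (height // 16)
    m = (max_shift - base_shift) / (4096 - 256)
    return base_shift + m * (seq_len - 256)

def time_shift(mu, t):
    # Remaps a timestep t in (0, 1]; a larger mu pushes more sampling
    # effort toward the high-noise end, which big images need.
    return math.exp(mu) / (math.exp(mu) + (1 / t - 1))

# Higher resolution -> larger shift exponent
print(flux_mu(512, 512), flux_mu(1024, 1024))
```

So raising max_shift mostly changes how strongly high-resolution generations get shifted, while base_shift sets the floor for small images.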
Great workflow!! Any way you can integrate a LoRA loader?
Great tutorial! I'm having some trouble with the **Image Composite Node** when trying to position objects. The **x-axis** works perfectly, but any adjustments to the **y-axis** throw an error: `too many indices for tensor of dimension 3`. Any tips on how to resolve this?
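Not the author, but that PyTorch error usually means the image tensor reaching the composite node has lost its batch dimension (3-D instead of the expected 4-D), so the y-offset code indexes it with one index too many. A minimal NumPy analog of the failure and the fix (the actual node internals aren't shown here, and the shapes are assumptions):

```python
import numpy as np

# Simulate a composite image that lost its batch dimension:
# shape (H, W, C) instead of the (1, H, W, C) most nodes expect.
img = np.zeros((64, 64, 3))

try:
    img[0, 10:20, 5:15, :] = 1.0  # four indices into a 3-D array
except IndexError as e:
    print(e)  # "too many indices for array ..." (PyTorch: "... for tensor of dimension 3")

# Fix: restore the batch dimension before compositing.
img = img[None, ...]          # shape becomes (1, 64, 64, 3)
img[0, 10:20, 5:15, :] = 1.0  # now the four-index write succeeds
print(img.shape)
```

In a workflow, that usually means checking which upstream node strips the batch dimension and re-batching the image before the composite.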
Very fascinating idea!!
thanks😊
Thanks as always 😊❤
more than welcome!
Thanks for sharing, this is pretty cool! I can't use a denoise below 0.8; anything less and the image turns to greyish noise.
Great! Thank you. Could you please create a workflow in a future lesson to stylize an image? I have trained a LoRA on Replicate on Flux dev with a trigger word to create images in a specific style. Now I'm looking for a way to apply this in img2img, so the input image gets transferred into that style. How?
just uploaded!
An amazing tutorial. I wonder if you could help me with a minor tweak. How do I approach this task: I have two different illustrations, one with a stylized hand-drawn yellow arrow pointing straight up, and another with a simple U-turn arrow. I want to transfer the style of the first onto the second, so the hand-drawn look ends up on the simple U-turn arrow.
The new UI is better, if only because jumping between workflows and saving them is easier and faster now.
Is there a way to make the input image smaller? Thank you for sharing, amazing video!
Can you add a LoRA stack node to the workflow?
Amazing video. So all these workflows would work with the dev model as well?
Thanks for sharing. May I ask if you were able to test this with the Flux GGUF variants? I'd like to know whether it works with those models too, or whether something is lost in quantization. 🙌
Didn't check yet... if you try it, please share your thoughts!
@PixelEasel It seems to work just fine; I only needed to replace your model loader with the GGUF loader. I tested with Flux Schnell Q3 GGUF and the output looked fine. Downloading Q2 now just to see the difference. 🙏
Would it be possible to use a blonde that I have already trained to create more realistic photos?
How do you change the text prompt, please? It looked ? I'm new to ComfyUI.
Have you managed to get Flux Schnell to do inpainting?
Can this be used with SDXL or SD 1.5? Flux doesn't quite like my 12 GB 3060.
Thank you for the video. Since I am a complete beginner with this, could you please clarify what we need to do with the workflow? As far as I can see, it's a .json file. Thanks in advance.
Just load it into ComfyUI and you can start working with it.
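For anyone scripting this: besides dragging the .json onto the ComfyUI canvas, a workflow exported with "Save (API Format)" can also be queued over ComfyUI's HTTP API. A minimal sketch, assuming a default local server at 127.0.0.1:8188 (note the plain UI-format .json uses a different schema and has to be loaded through the UI instead):

```python
import json
import urllib.request

def build_payload(workflow: dict) -> bytes:
    # ComfyUI's /prompt endpoint expects {"prompt": <api-format workflow>}
    return json.dumps({"prompt": workflow}).encode()

def queue_workflow(workflow: dict, server: str = "http://127.0.0.1:8188"):
    # Queues the workflow on a locally running ComfyUI instance.
    req = urllib.request.Request(
        server + "/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # contains a "prompt_id" on success

# Payload built from a trivial stand-in workflow dict (not a real graph):
payload = build_payload({"1": {"class_type": "LoadImage", "inputs": {}}})
print(len(payload))
```

For normal use, though, drag-and-drop onto the canvas is all you need.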
@@PixelEasel thank you
Thanks!
thx!
So this is a better method of image bashing?
Hello. I have everything updated, but the Manager button in the top right corner isn't there for me. What could be wrong? Thanks.
The Manager itself is installed, of course, and to use it I have to switch back to the old-style interface. :(
Update the manager
Can't FLUX handle all art styles? I even had problems with some LoRAs... whatever I do, the output photo comes out realistic... is there a workaround, and could you make a tutorial video for it? Thanks.
use lora for flux
How do I install this tool on my device, please?
It doesn't work: BRIAAI Matting. Can you tell me why? Thanks.
In the work I've done in compositing, the goal was always to keep the person the same. Your method wouldn't be useful professionally.
this workflow has another purpose
@@PixelEasel Which is? I guess I misunderstood.
They are literally trying to turn the image into another image that keeps some of the elements from the original and changes others, which is quite obvious.
And it's also the opposite of what you're describing, which is keeping a face consistent across different shots, as if it were the same character in a movie or short film.
@lucascarracedo7421 Yeah, I get that. But when you're doing composites, especially professionally, you want to keep some elements the same as the original, people being the prime example.
you just killed photoshop
lol 😆 not yet...
THIS IS JUST ALIEN TECHNOLOGY, I'M IN SHOCK