Click the link below to stay updated with the latest tutorials about AI 👇🏻:
www.youtube.com/@Jockerai?sub_confirmation=1
Thank you for making this tutorial and sharing your work. I've watched two so far and they are fantastic. Have a prosperous 2025; wishing you and yours well. 👍
Thank you so much for your kind and encouraging words! I'm glad you found the tutorials helpful. Wishing you a successful, joyful, and wonderful 2025 as well! 🌟
Thanks! This was the solution to my problem. I used to chain two of the usual LoRA loaders, and it slowed down the generation 10x or more; literally, a pic that normally takes 45 sec could take 10 minutes. This makes it work just as fast as using just one.
@@ChristerHolmMusic happy to hear that 🤩. You're welcome bro, stay tuned
I was looking for this and all of a sudden I found your video. Amazing tutorial, thank you bro😍💚
you're welcome my friend😉✨
Hello master! Thank you for a very useful lesson! I ran into a problem: if I use 4-5 LoRAs, they begin to suppress each other. For example, I have a holography LoRA and an artistic-style LoRA, and they work well together, but if I add a LoRA replica of my character, the character appears in the generation while most of the effect of the previous LoRAs is suppressed and barely shows up. I've changed a lot of parameters, but for some reason they don't work and greatly spoil the final image.
Also, another question: if I stack a lot of LoRAs, it often happens that the image becomes just terrible and corrupted. In some situations increasing the steps to 50 helps, but not always.
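The suppression described above is consistent with how LoRA works in general: every active LoRA adds its own low-rank delta onto the same base weights, so several strong LoRAs stack additively and can drown each other out, which is why lowering each LoRA's strength often helps more than raising steps. A toy NumPy sketch of that math (shapes and values are illustrative, not Flux's real weights):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy base weight matrix and two rank-4 LoRA updates (A, B factor pairs)
W = rng.normal(size=(16, 16))
A1, B1 = rng.normal(size=(4, 16)), rng.normal(size=(16, 4))
A2, B2 = rng.normal(size=(4, 16)), rng.normal(size=(16, 4))

def apply_loras(W, loras):
    """Each active LoRA simply adds strength * (B @ A) to the base weight."""
    out = W.copy()
    for strength, A, B in loras:
        out += strength * (B @ A)
    return out

# The updates are purely additive, so order doesn't matter -- they all
# pile onto the same weights, and high strengths can saturate each other.
w_a = apply_loras(W, [(0.8, A1, B1), (0.6, A2, B2)])
w_b = apply_loras(W, [(0.6, A2, B2), (0.8, A1, B1)])
assert np.allclose(w_a, w_b)
```

This is why, with 4-5 LoRAs active, dialing each strength down (e.g. from 1.0 to 0.5-0.7) usually restores the individual effects better than adding sampling steps.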
Hey! Thank you for the tutorial. I've been trying this for a week straight and I still can't get realistic pictures of me. The issue is that the body and face seem too plastic, with no hair or details. I've played around with LoRA strengths as well as descriptive prompts. I've also tried training the LoRAs of myself with just selfies of me.
Should I try upscaling? Not sure what that does, but after this workflow, can I add an upscaler or something?
@@raz0rstr you're welcome bro.
The very first point for generating personal images with a personal LoRA is the process of training your LoRA. You have to choose good-quality and very varied images, all from the same age. If you choose just selfies, which may well be low quality, the AI treats that low quality as a main part of your personal LoRA and bakes it in. It has no judgment and does not critique your images; it just puts them to work. So choose them wisely.
Next, for upscaling: yes, you can upscale your image even while using your personal LoRA.
Watch the very last video on my channel; it is an expert, new, revolutionary upscale tutorial with Flux.
It's really not bad. I tried to do the same thing again and did not succeed, but in a 7-minute video it's clear that I wouldn't understand much. Still, thanks: I did not know how to integrate two LoRAs at the same time. Now I know that it is possible, but it would be nice to show how, since I can't find the node.
I notice that when I use several LoRAs the image quality decreases considerably. Am I doing something wrong? Thanks in advance
In the video it is shown that the LoRAs are combined, but I would like to know how to use multiple characters separately, for example having a coffee together or something like that.
For now it is not possible for the AI to keep LoRAs separate; it always combines all active LoRAs together. But you can use a trick: first use one LoRA and generate an image (for example with two characters). Then fix the seed, swap in the second LoRA, and generate again. Now you have two images. Put the images on top of each other in Photoshop and mask one of the faces.
@@Jockerai How can I purchase the face swap workflow in this video?
Is it possible to use multiple LoRA models (without merging) in one image (two different characters)? If yes, please upload a tutorial. Thank you.
Yes, it is possible, but not in a single generation. You generate with one LoRA once, then fix the seed and use the other LoRA; then in Photoshop mask one of the faces and replace it.
But in this case, can one generate images of both girls, for example walking together, having a coffee, one smiling and the other upset, etc.? That is one thing I would like to do
@@adrianmunevar654 it is a great question that I'm about to make a post about😁
It is possible but it's a little tricky, because when you use two personal LoRAs at the same time, even if you write two people in the prompt, both faces will be combined, not separated.
@@adrianmunevar654 the solution is: you generate with one LoRA once, then fix the seed and use the other LoRA; then in Photoshop mask one of the faces and replace it.
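The fixed-seed trick above doesn't strictly need Photoshop: if both generations use the same prompt, seed, and resolution, a simple mask composite does the same job. A minimal sketch using Pillow (the `composite_face` helper and oval mask are illustrative, not part of the tutorial's workflow):

```python
from PIL import Image, ImageDraw

def composite_face(base_img, face_img, box):
    """Paste the face region of face_img onto base_img.

    base_img: generation made with LoRA A
    face_img: generation made with LoRA B, same prompt/seed/size
    box:      (x0, y0, x1, y1) bounding box around the face to swap
    """
    if base_img.size != face_img.size:
        raise ValueError("images must match; generate both at the same size and seed")
    # White oval over the face region: white pixels take face_img, black keep base_img
    mask = Image.new("L", base_img.size, 0)
    ImageDraw.Draw(mask).ellipse(box, fill=255)
    return Image.composite(face_img, base_img, mask)
```

Usage would look like: generate once per LoRA with identical settings, then `result = composite_face(img_lora_a, img_lora_b, (x0, y0, x1, y1))`. A feathered mask (e.g. a Gaussian-blurred ellipse) would blend the seam more smoothly.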
There must be a way to do it inside ComfyUI, maybe a ControlNet and/or IPAdapter. If it's possible in SD, it must be possible with Flux. I downloaded a workflow from Civitai that mixes SDXL with Flux; the results are amazing, but I couldn't use it yet because ComfyUI goes into debug mode and I'm not an expert at all. Do you want the link to try it?
@@adrianmunevar654 yes sure send me the link please
@@Jockerai it looks like YT erased my reply, maybe because of the link? Here it goes again in the next comment...
Thank you! Finally!
@@mozzicvisuallab8427 you're welcome 😉❤️
Hi, can you create a tutorial for video-to-video using Flux in Comfy? Thanks!
I have plans to do that, but in the future, bro. Stay tuned
I'm not getting my LoRAs added; it's like it can't find where they are located. How do I fix this?
@@ДжониФэйл where did you put them in your system?
@@Jockerai I do exactly as in the video and put them where you have shown. Thank you so much for your work; you are the best in this field that I have seen. My main problem is that I have macOS (M2 silicon) and you have Windows. But I am finding solutions to the problems.
I have a favour to ask of you. I've seen all your videos, including where you can train your LoRA. However, I don't have the ability to pay for that service right now. Maybe you can tell me where I can read or see how to make a photo with the face I want. Thank you so much for your hard work)
@@ДжониФэйл very good 👍🏻 you can also try the Power Lora Loader from the rgthree collection
@@ДжониФэйл thank you so much for your kind comment ✨🔥
Perfect Dadash
@@Maxi.J. thank you dadash❤️
good work sir!
you're welcome bro💚
Something ComfyUI lacks is a way to automatically load the tags and trigger keywords from LoRAs; that would help a lot.
👏🏻✨
For some reason I can't see rgthree to install…
@@MDanielSavio update your ComfyUI, that will fix it
@@Jockerai I finally installed it via GitHub; for some reason I can't see many nodes.
Thanks for your video, it really helped me a lot. Please make a video about ControlNet with Flux GGUF
You're welcome. I have already made one; watch the channel videos
WorkFlow link doesn't work.
@@AInfectados I checked it, no issue. When you open the link just click the downward-arrow icon at the top right of the screen
@@Jockerai i.imgur.com/noQmGdE.png
@@Jockerai i.imgur. com/noQmGdE.png
@@Jockerai I still can't see it; the page doesn't load, so I have no option.
Can you change to another shortener please?
@@AInfectados check it now, I put in the main link
What's the point, anyway? In Forge you just drop a picture onto the canvas, mask what you want to change, and you're done. Why all these nodes? Just extra hassle, IMHO
It's just up to everyone to choose what he or she prefers. I use ComfyUI, Forge, even Fooocus
pssssst buddy, your title says MULI-LoRa instead of MULTI
@@xaviere.3299 thank you buddy I fixed it😁
Can we use this GGUF workflow for the regular flux1-dev.safetensors model or do we have to use the GGUF models with this workflow?
I switched the nodes so I can use the regular flux1-dev.safetensors and the t5xxl_fp16.safetensors clip but my system keeps crashing when using this workflow.
Can you show us how to use the regular models if we have more powerful computers?
@@adapt-or-die-trying yes, you can easily use the flux.dev main model, but it needs a strong system and high VRAM to work. If it crashes for you, it means you can't use the main models on your current system and need a more powerful one
@@Jockerai I'm using an RTX A5000 with 24 GB of VRAM, 25 GB of RAM, and 12 vCPUs. Do I really need more power than that just to run the main models? How much would you say I need then?
@@adapt-or-die-trying no, that is enough. I'd have to see the full error log in your cmd; send it here please
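For a rough sense of why 24 GB can still be tight with the full-precision models (parameter counts below are approximate, from the public model cards; ComfyUI can also offload the text encoder to system RAM, which is why 24 GB often works in practice):

```python
def fp16_gib(params):
    """Approximate weight memory at 2 bytes per parameter (fp16)."""
    return params * 2 / 2**30

flux_gib = fp16_gib(12e9)   # FLUX.1-dev transformer, ~12B params (approx.)
t5_gib = fp16_gib(4.7e9)    # t5xxl_fp16 text encoder, ~4.7B params (approx.)
print(round(flux_gib, 1), round(t5_gib, 1), round(flux_gib + t5_gib, 1))
```

Roughly 22 GiB + 9 GiB of weights alone, before activations. Keeping both resident on a 24 GB card at fp16 doesn't fit, which is what GGUF-quantized models (and CPU offloading of the text encoder) are there to solve.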
Junk. Too many missing files; poorly designed
@@gar-r3w3b what missing files?