Damn, I just saw your last video got removed. Good that you have it noted on Patreon, or I wouldn't have remembered that it was "OMNIGEN" :) Gonna go see if I can figure something out myself to get that thing started; otherwise I'll wait for the video to return or just move on :D Cheers
Yes, could you share the workflow please? My PuLID somehow doesn't work because of version mismatches; I hope your workflow can fix the issue. Thanks if you can share it.
I have a 3060 and it's usable as long as you stick with the GGUF versions. Not the fastest rig, but it definitely works. Well, everything but the Image to Video option; it seems there are upscaler installation issues.
"This is actually way more powerful than you think it is." O m 🐐 I can't stop laughing, why do I crack up laughing at this every time?? It's like, "Now, listen. I don't care how powerful you think this is. You could think it's more powerful than a billion trillion imperial death stars combined, and you'd be SOOOO freakin wrong... It is actually WAYYYY more powerful than even that!"
Just learn it; it's not that complicated to get used to. I was intimidated by it at first myself, but it's really simple once you learn it. Plus, most workflows are automatically put together by other people, so you don't have to put it all together yourself.
HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx
Please update: .sh files are missing
@@Aitrepreneur custom nodes seem to not install
Thanks!
Thank you so much! 🤩
0:00 Explain
1:02 Before starting
2:55 Face Swap 1 person
4:12 Face Swap 2 people
5:02 Face Swap 3 People
5:50 FP8 PULID (InstantID Clone)
7:40 FP8 ADETAILER (automatic masking and inpainting tool)
10:27 FP8 OUTPAINTING
14:50 Image To Video
Dude, this is insanely well done! Amazing job!!!
Where is the workflow link?
patreon
He forgot to mention it was Patreon-only, I'm guessing.
Patreon
looking for the link too
Just go to the link in the pinned comment; there is a file in the free tier to download.
When using it on Linux it says missing custom nodes from the graph: workflowFlorence
workflowFlux Controlnet
workflowNodes 3
workflowXLabs Sampler
workflowAdetailer
workflowoutpaintsize1
workflowoutpaintfluxend
workflowoutpaintflor
workflowVIDEOUPSCALER
workflowvideoencodecode
workflowfaceswap2
workflowfaceswap3
workflowREACTORFACESWAP
workflowpullflo
workflowpulidtxt
workflowpluidlux (after downloading, installing, and restarting from the missing custom nodes prompt)
same for me
Same upvote
After installing all the nodes, just clear your ComfyUI canvas and load the workflow again.
@@Ceteef116 Thank you, it worked!
@@Ceteef116 Doesn't work for me :/
Thank you! This is one of the best and easiest all-in-one workflows I've come across; very neat and tidy. I'd even say it's the workflow of the year!
I just setup my runpod two days ago. I will absolutely be doing this now. Thank you so much for your hard work. It is greatly appreciated.
Thank you so much! I do, however, get some errors when installing the missing nodes, and after the reboot I still haven't got ReActor Node for ComfyUI and ComfyUI-PuLID-Flux-Enhanced to work, leaving quite a few critical nodes red. Any tips?
Missing Node Types
When loading the graph, the following node types were not found
ReActorFaceBoost (In group node 'workflow>REACTORFACESWAP')
workflow>REACTORFACESWAP
ReActorFaceSwap (In group node 'workflow>REACTORFACESWAP')
ReActorMaskHelper (In group node 'workflow>REACTORFACESWAP')
workflow>faceswap2
workflow>faceswap3
PulidFluxInsightFaceLoader (In group node 'workflow>pluidlux')
workflow>pluidlux
PulidFluxEvaClipLoader (In group node 'workflow>pluidlux')
PulidFluxModelLoader (In group node 'workflow>pluidlux')
ApplyPulidFlux (In group node 'workflow>pluidlux')
workflowFlorence
workflowFlux Controlnet
workflowXLabs Sampler
workflowoutpaintsize1
workflowoutpaintfluxend
workflowoutpaintflor
workflowfaceswap2
workflowfaceswap3
workflowpullflo
workflowpulidtxt
workflowpluidlux
workflowvideoencod
workflowvidsupscal
workflowREACTORFACESWAP
I never could get this fully working on Linux. Yet the parts that did work worked great!
Does anyone else get lots of red boxes in the custom nodes? For example, the ReActor face swap is just a red box, even though ReActor is installed, etc.
yes
yeah
same
samee
Is there a guide on how to run this workflow on RunPod or any other GPU service?
Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build).
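For reference, this "torchvision::nms ... CUDA backend" failure typically comes from a torch/torchvision pair built against different CUDA versions; that diagnosis (and the pip commands in the comments) is a common community suggestion, not something confirmed in this thread. A minimal environment check:

```python
import importlib.util

def check_installed(packages):
    """Report which of the given packages can be imported in this environment."""
    return {pkg: importlib.util.find_spec(pkg) is not None for pkg in packages}

# If either package is missing, the nms error is expected; if both are present,
# the usual culprit is mismatched builds (e.g. a CPU-only torchvision next to a
# CUDA torch).
print(check_installed(["torch", "torchvision"]))

# Hypothetical fix (a common suggestion, not confirmed in this thread):
#   pip uninstall -y torch torchvision
#   pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
```

If both report as installed, comparing `torch.__version__`, `torchvision.__version__`, and `torch.version.cuda` against PyTorch's published compatibility matrix is the next step.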
I find ComfyUI simple enough, but if you put ALL the different workflows in ONE it adds an unbelievable amount of clutter. No single person will use all of these workflows.
I suggest making ONE workflow per IDEA and putting all these workflows in one folder. There's no use to having them all in one like this. I can still use them fine but you already see people crying over the "complexity" of comfy in the comments lol.
Just trying to help out! I like watching your videos.
It's much easier to have everything in one single JSON file where you can choose what you want to use and when. I did that before, having a bunch of different JSON files, but it was way worse to manage than having one single workflow where you install everything all at once and then you are good to go.
You can easily duplicate the workflow multiple times and delete the sections you don't want.
@@tripleheadedmonkey420 Actually he shows how to use the workflows seamlessly, because not every image you generate will be perfect, and each needs different types of fixes. It is up to personal preference, and if you are truly a veteran user then just delete and save each workflow and add it to a dropdown menu. I have that for one of my ComfyUI apps. These are great workflows. Don't complain just because your brain has a hard time remembering each section.
@@Aitrepreneur Counting 10 boxes of 10 balls is far easier than counting a single box of 100. What I am saying is that for people who don't truly understand what they are looking at it is a lot more difficult to see a workflow with 100 nodes than it is if there was 10 workflows with less nodes on them. It might seem easier after you counted your 10 boxes of 10 balls to put them into a box of 100 and label them. But the next person who comes along will likely find that more difficult to deal with.
@@Aitrepreneur Brother... what is this? This is novice thinking. NO disrespect intended, seriously. This is a mess. The way you should do it is how @ponzicharlie stated. I should not need to scroll multiple times on a Samsung NEO screen to see the entire workflow. Comfy changes every week; this workflow will be out of date before Christmas. /remindme (if this was Reddit)... You are a top-notch guy; do not fall into the ignorant "I am a workflow creator, do it how I do or don't use it" mentality. Do it so the majority of your users will understand it. I understand it, and I knew right away this was overkill and not efficient.
Failed to import transformers.models.cohere.configuration_cohere because of the following error (look up to see its traceback):
No module named 'transformers.models.cohere.configuration_cohere'
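For what it's worth, this import failure usually means the installed transformers package is older than the Cohere model code some other component expects; checking the installed version is a quick first step. The helper below is a sketch, and the upgrade suggestion is an assumption rather than something stated in the video:

```python
from importlib import metadata

def installed_version(package: str):
    """Return the installed version string of `package`, or None if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

# Compare this against whatever the failing custom node's requirements ask for:
print("transformers:", installed_version("transformers"))
# If it prints None or an old version, a plausible fix (my assumption):
#   pip install -U transformers
```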
You might be interested in checking out the Omnigen.
This is a completely new image generation / editing technology where you can do all the things you have to do with controlnets / masks / inpainting by just specifying the task in the prompt.
The image quality is quite high, while the model seem to be several times smaller than modern image generation models.
Seems like it has the potential to revolutionize the image generation / editing field if given enough attention.
Hi, does anyone know how to fix "expected scalar type Half but found BFloat16"? PuLID is not working for me :((
What are the minimum hardware specs I need in order to run this workflow on my computer?
I'm missing so many nodes, but there are none available to download in the missing-node installer. What to do? I have installed a clean version of Comfy with the one-click installer.
17:30 what is super resource intensive? The video generation pipeline itself? Or the video upscale only?
would you ever consider setting up a repo that patrons can have access to so we can just pull an update?
Everything worked fine with the JSON file and then using the Manager to install all missing nodes. But only the pluidlux workflow can't be installed; in the Manager it says "(IMPORT FAILED) ComfyUI-PuLID-Flux_Enhanced".
Hello. I've followed the instructions in Manual Installation Guide, but I can't find the Workflow link. Am I missing something?
same here
Would this work with Comfy on a cloud GPU like RunPod, so that we could use a monster-VRAM GPU that we wouldn't have locally?
I keep getting errors on missing nodes, no matter what I do.
WAS node suite
ReActor Node
PuLID Flux Enhanced
Those keep failing to import no matter what. I've tried installing them manually, with the same results. I can't be the only one with this problem.
This isn't really specific to V2, but PuLID as a whole, every time I try to run this on any workflow I get this message: expected scalar type Half but found BFloat16
I think it's related to VRAM, but I'm wondering if people under 16GB of VRAM can run PuLID at all?
I keep getting the error "forward_orig() takes from 7 to 9 positional arguments but 10 were given". I was able to muscle my way through everything else, but this one really has me stumped. It shows up in several places now: PuLID and image gen.
Hi! I am quite new to the AI world. I installed ComfyUI and your workflow V2. It looks fine, but many areas are covered by a red transparent rectangle, and when I try to run some of the modules I get messages telling me that nodes are missing. However, when I run the Comfy Manager and tell it to load the missing nodes, it says "no result", as if everything were already loaded. What am I doing wrong? Thanks in advance!
Is it possible for you to upload an image with the workflow (like the default schnell one)? I've tried to go through the Manual and I haven't got it to load :(
Do you happen to have any workflows that allow you to use 2 lora characters in the same single image generation? Thanks!
i have
The workflow allows the use of multiple loras at the same time, just click on the add lora button on the pink node
@@Aitrepreneur Can both be character loras? Whenever I try, all the characters look like the same lora character, or a mix of the 2 lora characters.
@@brianmonarchcomedy normal
@@brianmonarchcomedy i presume you've got 2 different keywords to call upon them...
I use stable-diffusion-webui-forge as an alternative; it supports Flux too and has the same interface as A1111.
I believe ForgeUI supports it.
@@Ryan_Chisholm Yes, it does.
Just use sdnext
Plus it is faster and has better memory management.
Where do you get these at? Github? Not that nightmarish hole again
Does anyone know how I could get this new workflow on the runpod instance?
Nice! Is there a node to create an image sequence from pose, to have more control over the video part?
Thank you very much, another crazy step forward!
ERROR: Could not find a version that satisfies the requirement onnxruntime-gpu==1.15.1 (from versions: 1.17.0, 1.17.1, 1.18.0, 1.18.1, 1.19.0, 1.19.2, 1.20.0)
ERROR: No matching distribution found for onnxruntime-gpu==1.15.1
Trying to install without any pinned versions...
same error
did you manage to solve it?
@@sirmiluch6856 yes, just installed latest onnxruntime and onnxruntime-gpu
@@sirmiluch6856 No, ChatGPT was saying it was a Python error, but ComfyUI has embedded Python so I didn't want to mess with it. Basically I wasted my time with this vid.
Just install the latest versions (do not provide any version number).
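A sketch of what that advice amounts to: ignore the stale `==1.15.1` pin and take the newest build pip actually offers. The version list below is copied from the error output above; the `latest` helper is mine, not part of any tool:

```python
# Version list copied from the pip error; the pinned 1.15.1 is simply gone.
available = ["1.17.0", "1.17.1", "1.18.0", "1.18.1", "1.19.0", "1.19.2", "1.20.0"]

def latest(versions):
    """Pick the highest version string, compared numerically part by part
    (plain string max would wrongly rank '1.9.0' above '1.10.0')."""
    return max(versions, key=lambda v: tuple(int(p) for p in v.split(".")))

print(latest(available))  # → 1.20.0
# In practice: edit the node's requirements.txt to drop '==1.15.1', then
#   pip install onnxruntime-gpu
```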
Does someone know how to run the updated PuLID 1.1 model in ComfyUI?
Installed all extensions but still it says I am missing:
1) flowfaceswap3
2) ReactorFaceSwap
3) pluidlux
4) flowfaceswap2
What to do? Should I install some extensions manually and which one exactly?
Answering myself: Edit -> Clear Workflow and loading it again helped, after I uninstalled and reinstalled onnxruntime and onnxruntime-gpu.
I've noticed that requirements.txt files in general quite often pin the wrong onnxruntime build for my CUDA version; it would be nice if that were auto-detected somehow. I had to do the same for a few other workflow nodes lately.
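A rough sketch of the auto-detection this comment wishes for, reading the CUDA version out of nvidia-smi's banner. Everything here is a best-effort assumption; the real mapping from CUDA version to the right onnxruntime wheel is more involved than this:

```python
import re
import shutil
import subprocess

def parse_cuda_version(smi_output: str):
    """Pull the 'CUDA Version: X.Y' field out of nvidia-smi's banner text."""
    match = re.search(r"CUDA Version:\s*([\d.]+)", smi_output)
    return match.group(1) if match else None

def detect_cuda_version():
    """Best-effort detection; returns None when nvidia-smi is unavailable."""
    if shutil.which("nvidia-smi") is None:
        return None
    result = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
    return parse_cuda_version(result.stdout)

print(detect_cuda_version())
```

An installer could use the detected value to choose between CUDA-specific wheel indexes instead of hard-pinning one version for everyone.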
Where is the JSON file for the manual workflow install?
Edit: found it on Patreon.
You're the best.
How long does it take you to generate an image? Using my AMD GPU it takes me around 4-5 minutes. Am I doing something wrong?
Y'all, can I run this on an i5-12400F, RTX 3060 12GB, and 32 gigs of RAM, or should I not bother?
Can you make a workflow just for videos?? Is there a way to use Loras with videos?
any tips on how to run batches? like batch img2img?
Anyone else struggling with the error "TypeError: Cannot read properties of undefined (reading 'calcNodeInputs')"? I have all the nodes installed correctly. The problem is, if I copy even a small part of this workflow to a new file I get the same error, but if I build it from scratch it's fine.
image to video always gives me an error saying
"got prompt
Failed to validate prompt for output 1187:
* CogVideoDecode 1212:3:
- Exception when validating inner node: tuple index out of range
Output will be ignored
Failed to validate prompt for output 1191:
Output will be ignored"
Never mind, just saw that you updated the workflow to address this exact issue.
Your workflow looks fantastic. Can this be used to animate if you already have multiple frames you want to use?
Just saw the new vid went down, have it anywhere else?
Yeah, me too. Why did he delete it?
Thank you!!!
Can I run this with an RTX 2080 Ti with 11GB of VRAM?
Do you have to have patreon access to get the workflow? I've followed the manual install steps but no link to workflow?
does this work for MAC?
Yo, it says onnxruntime version 1.15.1 doesn't exist, so I used 1.17.0. It's downloading right now, but will this affect anything?
I am using an RTX 4090 and I keep getting an allocate-memory problem while creating image to video. Any idea why?
Amazing work! 🎉 I hope segment anything, background replacement, and IC-Light can be added to the next version.
ComfyUI_LayerStyle forever stop on download inference_cli-0.13.0, would you happen to know how to solve it? Thank you.
Will this one-click install set up ComfyUI as well? Will it mess with my existing ComfyUI? I already have ComfyUI up and running.
This is awesome. Thank you so much. One question. For Text2Img how do I input a negative prompt or try out different FLUX models?
Looks great, love the channel by the way. But I can't see a download link for the workflow or is it for Patreon only?
Make a version for Stability Matrix; it installs everything, for the most part, in its own folder per checkpoint, plus Python and all dependencies.
Where do I get the workflow? Please post it.
How many gigs does this add to Comfy?
Great wf! Is the workflow available on patreon without the installer? Thanks
Bro, this is insane
one thing, for the outpaint, SDXL has fixed resolutions that should be used for best generation:
640 x 1536
768 x 1344
832 x 1216
896 x 1152
1024 x 1024
1152 x 896
1216 x 832
1344 x 768
1536 x 640
I tried generating things with different resolutions, but the outpaint was... let's just say horrible: inconsistent, blurry, all gray, and sometimes badly cut.
For an easier time, you can replace the height and width nodes with CR SDXL Aspect Ratio from ComfyUI_Comfyroll_CustomNodes
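Following the list above, another way to stay on-bucket is to snap an arbitrary size to the listed resolution with the closest aspect ratio. A minimal sketch; the function name is mine, not from any node pack:

```python
# The nine SDXL training resolutions listed above, as (width, height) pairs.
SDXL_BUCKETS = [
    (640, 1536), (768, 1344), (832, 1216), (896, 1152), (1024, 1024),
    (1152, 896), (1216, 832), (1344, 768), (1536, 640),
]

def nearest_sdxl_bucket(width: int, height: int):
    """Snap an arbitrary size to the bucket with the closest aspect ratio."""
    target = width / height
    return min(SDXL_BUCKETS, key=lambda wh: abs(wh[0] / wh[1] - target))

print(nearest_sdxl_bucket(1920, 1080))  # → (1344, 768)
```

For example, a 1080p frame maps to the 1344x768 bucket, the closest to 16:9 in the list.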
I didn't see the link for the workflow. Did I miss something?
Thanks for sharing this. Is there an installer for Mac?
Really nice; most workflows are just impossible to understand, or don't work after hours of troubleshooting. If my AI overlord had feelings, they would be good right now.
Which model of Flux are you using? OG Flux had bad NSFW output. I assume this is one of the distillations/retrainings of Flux?
Thanks!
Thank you so much :)
@@Aitrepreneur One more question. Outpainting has damaged the details of the original image, especially the fabric details of the clothing. Is it possible to do outpainting like Fooocus, where it only expands the image without altering the original? Thank you.
What do you mean by UNCENSORED ?
You can get it to generate anything you want. Even NSFW stuff
@@Bandit1978 I think Flux is censored
@@C4zGames Which one isn't censored?
Good job, my overlord, badabim badaboum! 🎉 (4:58)
@@kalinwang1 I dont know, but seeing UNCENSORED in each video title is annoying
Amazing workflow! Thx Aitrepreneur
I started my first AI adventure with ComfyUI yesterday (without this workflow, but it will help me implement some parts into mine :) ). I will share more, but for starters:
My PC: 3080 10GB, Ryzen 5900X, 32GB RAM, Windows 10. I did not make any memory optimizations like using the PC as a terminal, removing browser acceleration, or closing background programs. The best model I can fully load into memory is Q4_K_M (and of course Q4_K_S etc. too). Q5_K_M also usually works well on the base model, and so does Q5_K_S (but not always, as it sometimes slows down after multiple runs for unknown reasons, probably memory problems, since it does not load fully into memory). With Q5 the speed is usually fine and I used it at the start, but sometimes I noticed slower generations (or, rarely, it wouldn't start generating at all and I needed to cancel and start again). As the CLIP/text encoder it works with the standard CLIP or ViT_L_14_improved (...). I can also run every text encoder, although fp16 takes a lot of RAM and of course cannot be loaded fully; with Q8 there are no problems loading. But I can fully load fp16 with the CLIP on CPU, although that adds about +10 seconds compared to lower versions like Q8.
Anyway, in the end I would recommend trying the Q4_K_M version on a 3080 10GB, as so far I haven't had problems with it and it fully loads into VRAM even with the better CLIP and a LoRA (though the situation may be different with multiple LoRAs). The Q5 versions (even Q5_K_M, though Q4_K_S is even easier to run) also usually worked, but queues were sometimes slower, and that will probably be even worse with LoRAs.
You could probably also decrease memory use by running the PC like a terminal; I think before I even started I had around 1.4-1.5GB of VRAM in use. I don't remember with which version, but I think with Q4_K_M I was also able to generate pictures in batches of 3 (4 too, but it slows down) and with more than 20 steps (tried 40 and 60), although the time is of course proportionally higher. Text encoders at fp16 on CPU, but with 32GB RAM or higher, as with 16GB it can move into swap in some situations. Technically it usually runs without CLIP in RAM too, but in that case it is better to use e.g. the Q8 text encoder.
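Those observations line up with a back-of-envelope VRAM estimate. The bits-per-weight figures below are approximate llama.cpp-style values, and the ~12B parameter count for the Flux transformer is my assumption, not something stated in the thread:

```python
# Approximate bits-per-weight for common GGUF quants (rough llama.cpp figures).
QUANT_BITS = {
    "Q4_K_S": 4.5, "Q4_K_M": 4.8,
    "Q5_K_S": 5.5, "Q5_K_M": 5.7,
    "Q8_0": 8.5, "fp16": 16.0,
}

def model_gb(params_billion: float, quant: str) -> float:
    """Estimated model size in GiB for a given parameter count and quant."""
    return params_billion * 1e9 * QUANT_BITS[quant] / 8 / 2**30

# Assuming a ~12B-parameter Flux transformer:
for q in ("Q4_K_M", "Q5_K_M", "Q8_0", "fp16"):
    print(f"{q}: ~{model_gb(12, q):.1f} GiB")
```

Under these assumptions Q4_K_M lands around 6.7 GiB, leaving headroom for CLIP and a LoRA on a 10GB card, while Q5_K_M pushes past 8 GiB, which matches the marginal Q5 behavior described above.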
Does it work on Mac, please?
looks pretty amazing! I guess with a bad internet connection I should download the files separately... the previous installer didn't check for files that were already downloaded, and after several failed attempts I gave up and did it manually, since the installation kept failing and always started from the beginning... anyways, this looks super cool!
One question, what is the model used for video generation, upscale and interpolation?
same question
CogVideoX
Looks great, but for the video one (and really for all of them) could you add a batch process for a folder of images? For the video, could you pull from a list of movie camera moves that get appended to the auto-prompter? The idea of sitting there and doing it manually makes me groan; I'd rather auto-process a whole folder for a movie idea overnight, then go back and redo specific ones to fill in any gaps after it finishes, or just run it multiple times with a random-select-from-folder addon. Movies usually go through a series of closeups, pans, establishing shots, etc. What would be dope is if you could run a movie script through an auto-prompter AI to extract the camera moves, then run that through a sequential folder of numbered images you set up, taking out the tedious part of doing lots of stuff.
It's a really noob question (probably even dumb), but I'm going to ask anyway: can this workflow be used with an AMD GPU? Is it even possible?
Cool. Work like this is worth paying for. Peace and prosperity.
Amazing job! Thanks!
This is insanely powerful, I love the fact that these generators were put together. The image to prompt feature is my favorite feature because most of the ones online that I've found make you pay to use that feature, so I'm glad I can do it locally for free now. I'm also in love with the PuLID feature as well as that's better than using LoRAs to generate a new picture.
Damn, I just saw your last Video got removed. Good that you have it noted on Patreon or I wouldn't be able to remember that it was "OMNIGEN" :)
Gonna go and see if I can figure something out myself to get that thing started, otherwise I'll wait for the Video to return or just move on :D Cheers
spectacular work
Gonna have nightmares from that thumbnail man lol
Thanks for the work! So access to the workflow is Premium?
Am I missing something, why does it say NSFW, flux isn't NSFW?
what I now need is a local video upscaler for whatever old videos.. ehem..
absolutely smashing, joined your Patreon for this. How about putting OpenPose into it?
Bro there is no link to the workflow
There is but you have to go to his Patreon page for it. It's free.
Yes, could you share the workflow please? My PuLID somehow doesn't work because of various version mismatches; I hope your workflow can fix the issue. Thanks if you can share.
I'd love to install this, but I don't think my 3070 will love it... Even SDXL makes it sound like it's preparing for takeoff.
I have a 3060 and it's usable as long as you stick with the GGUF versions. Not the fastest rig, but it definitely works. Well, everything but the Image to Video option; there seem to be upscaler installation issues.
"This is actually way more powerful than you think it is."
O m 🐐 I can't stop laughing, why do I crack up laughing at this every time?? It's like, "Now, listen. I don't care how powerful you think this is. You could think it's more powerful than a billion trillion imperial death stars combined, and you'd be SOOOO freakin wrong... It is actually WAYYYY more powerful than even that!"
Can't hate on that. Insta like and comment for this effort!
No forge videos? comfy isn't comfy :/
Just learn it, it's not that complicated to get used to. I was intimidated by it at first myself, but it's real simple once you learn it. Plus, most workflows are already put together by other people, so you don't have to build it all yourself.
This was incredible for someone just getting into local AI image generation. Thank you so much for this!
That face swap was disturbing 😳
Here's a challenge... now make it all run on an M-chip Mac and get back to me.
Can we get ai to create custom workflows from text? xd
Aitrepreneur makes me lose my hatred towards ComfyUI.
Pass.
huh... This is almost insane!
I am 100% sure that you would've made more money just selling the workflow rather than hiding it behind a monthly subscription.
It's free... you just need to join, you don't have to pay a cent.