Must say: It's such a pleasure to listen to your calm voice tone. Plus very much appreciated all the info. Thanks!
At first I didn't understand why you made the 06:35 Supercharge the Workflow part, but after getting a MemoryError I now know what to do. We need more thinkers like you.
People like you are so important. Too many gatekeepers out there. ❤
Thanks to you I have created my first video using ComfyUI! Your video is priceless!
I really thought this was just gonna be junk like so many other "Video/animation" ones I already tried.
And I am very impressed by it, simply because it worked.
And my video came out really nice.
Subscribed!
You're amazing!! I was lost for so long and when I found this video I was found
I've been having fun with this workflow for a few days already. It is amazing what can be done on a laptop in 2024.
This looks awesome. I gotta give it a try asap. Thanks for sharing.
Literally the answer to my prayers, have been looking for exactly this for MONTHS
Great video!! Hope you'll drop more knowledge in the future!
Thank you for this video❤❤
After following many tutorials, you are the only one who got me results in a very clear way. Thank you so much!
Hello, how did you make that effect with your eyes at 0:20 please?
Great video!
I want to ask: can we use an init image with this workflow like we do in Deforum?
I need the video to start with a specific image on the first frame, and then it should change through the prompts.
Do you know how this is possible in ComfyUI / AnimateDiff?
Thank you!
I haven't personally used Deforum, but it sounds like it's the same concept. This workflow uses 4 init images at different points during the 96 frames to guide the animation. The IPAdapter and ControlNet nodes do most of the heavy lifting, so prompts aren't really needed, but I've used them to fine-tune outputs. I'd encourage you to try it out and see if it gives you the results you're looking for.
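If it helps to picture how those 4 guide images get spread across the 96 frames, here is a rough Python sketch of the idea. It is only an illustration of evenly spaced keyframes with linear cross-fades, not the workflow's actual node code (the real scheduling is handled by the fade mask and IPAdapter batch nodes):

```python
# Illustrative only: spread 4 guide images across 96 frames with linear
# cross-fades between neighbouring keyframes (a simplified stand-in for
# what the fade-mask scheduling in the workflow achieves).

def keyframe_weights(num_images: int = 4, total_frames: int = 96):
    """Per-frame weight for each guide image; neighbouring images cross-fade."""
    peaks = [round(i * (total_frames - 1) / (num_images - 1)) for i in range(num_images)]
    weights = [[0.0] * total_frames for _ in range(num_images)]
    for f in range(total_frames):
        for i in range(num_images - 1):
            a, b = peaks[i], peaks[i + 1]
            if a <= f <= b:
                t = (f - a) / (b - a)
                weights[i][f] = 1.0 - t      # current image fades out
                weights[i + 1][f] = t        # next image fades in
                break
    return peaks, weights

peaks, w = keyframe_weights()
print(peaks)        # frames where each image dominates, e.g. [0, 32, 63, 95]
print(w[0][:5])     # image 1's weight easing off over the first few frames
```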
You about to blow up bro. Keep it going. Btw, I was subscriber #48 😁
Thanks for the sub!
Error occurred when executing VHS_LoadVideoPath:
module 'cv2' has no attribute 'VideoCapture'
your video timestamp: 04:20
Great tutorial! I am wondering... how much VRAM does this setup need?
I've heard of people running this successfully on as little as 8GB VRAM, but you'll probably need to turn off the frame interpolation. You can also try running this on the cloud at OpenArt (but your checkpoint options might be limited): openart.ai/workflows/abeatech/tutorial-morpheus---morphing-videos-using-text-or-images-txt2img2vid/fOrrmsUtKEcBfopPrMXi
@@abeatech Thank you!! Will try the two suggestions! Congrats on the channel!
Thank you for this video! Any idea what to do when the videos are blurry?
Same here, any answer?
The transitions between images are too fast for me. I couldn't find where to adjust the transition time. I'd be grateful for any advice.
Bro, you're doing a really good job. I have only one question:
in this video you did image-to-video morphing, so can you do video-to-video morphing?
Or can you make a morphing video using only text / prompts?
Thanks a lot for sharing this, very precise and complete guide ! 🥰
Cheers from France !
nice! why did you stop making more tutorials?
this is great!
Do you know if there is a way to "batch" many variations, where you give each of the four guidance images a folder and it runs through and makes a new animation with different source images multiple times?
Amazing! Thanks a lot for this!!!
btw - which nodes do I need to disable in order to get back to the original flow? (the one that is based only on input images and not on prompts)
Dude I loved this video! You explain things very well and I love how you explain in detail as you build out strings of nodes! subbed!
Nice tutorial, even if it was very hard for me to set this up. Which SD 1.5 model do you recommend installing? I just want to morph input images with a very realistic render. If someone could help :3
Good video. Where can I find the link to the additional video masks? I don't see it in the description.
Awesome tutorial! I've been getting used to the ComfyUI workflow...love the batch image generation!! However, do you have any tips on how to make LONGER text to video animations? I've seen several YT channels that have very long format morphing videos...well over an hour. I'd like to create videos that average around 1 minute, but can't sort out how to do it!
Great stuff man! Thank you 😀 What are your specs btw? I only have 8GB VRAM.
Great tutorial
What should I do to make the reference image persistent?
Hello! Big thanks to you, bro. I learned how to make different animations from your video. I watched many other tutorials, but they didn't work for me. You explained everything very clearly. Tell me, can I insert motion masks myself, or do I have to insert link addresses only? Are there any other websites with different masks? Greetings from UKRAINE!!!
Did you figure it out?
@@tadaizm Yes, I figured it out. Just copy your mask as a path and paste it. Unfortunately there are few masks, and finding other masks to download is also a problem.
Hi, does anybody know how to add more images to create a longer video?
Hi Abe, I found the final animation output is wildly different in style & aesthetic from the initial input images. Any tips for retaining overall style? Also have you got this workflow to work with SDXL?
Great tutorial! Your content is super helpful. Just wondering, where are you these days? We'd love to see more Comfy UI tutorials from you!
Insane!!!!! Ty so much!
NICE! I tried it and it works great. Thanks for the tut! Question though: I tried changing the 96 to a larger number so the changes between pictures take a bit longer, but I don't see any difference. Is there something I'm missing? Thanks!
5:30 what do you do to see the duration :)
In my case it doesn't really use the images I feed it. I've tried to find the settings that result in almost no morph, with basically all 4 original images standing still, but I can't seem to find them.
Nice tutorial, thanks!
Glad it was helpful!
nice bro. Thank you🖖
Hi, I would like the first and last frames to exactly match the images I uploaded, without being reinterpreted. Is this possible? If so, how should I do it? Thanks
It worked fine before, but now it throws an error at the Load Video Path node. Is there any update?
ipiv did an incredible job with this workflow! Thanks for the tutorial.
Everything is great, but I get a blurry result on my horizontal artwork. Any suggestions on what to check?
I have the workflow working but my videos look very uninteresting - too abstract and the matte animation is very obvious
I don't have an ipadapter folder in my models folder, should I just make one?
In my ComfyUI there is no Manager option... help please
Search on YouTube for how to install it
Could one add some kind of IPAdapter to add your own face to the transformation?
Hey, my animations come out super blurry and are nowhere near as clear as yours. I can barely make out the monkey, it's just a bunch of moving brown lol. Is there a reason for this?
Same, did you ever figure it out?
@@DanielMatotek This was a while ago, but i believe I changed models
Why can't you morph 20 or 50 pictures?
how can I fix "Motion module 'AnimateLCM_sd15_t2v.ckpt' is intended for SD1.5 models, but the provided model is type SD3."???
cool, how long did it take you?
Why is my generated animation very different from the reference images?
Help! I encountered this error while running it
Error occurred when executing IPAdapterUnifiedLoader:
Module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'
Sounds like it could be a couple of things:
a) you might be trying to use an SDXL checkpoint - in which case try an SD1.5 checkpoint instead. The AnimateDiff model in the workflow only works with SD1.5
or
b) an issue with your IPAdapter node. You can try making sure the IPAdapter model is downloaded and in the right folder, or reinstalling the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from the manager or GitHub)
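If you want a quick way to double-check the folder side of (b), here's a minimal sketch, assuming the default ComfyUI models layout (IPAdapter models under ComfyUI/models/ipadapter, the CLIP Vision model under ComfyUI/models/clip_vision); adjust the root path to wherever your install lives:

```python
# Minimal sanity check of model locations, assuming a default ComfyUI layout.
from pathlib import Path

ROOT = Path("ComfyUI/models")  # adjust to your install location

for sub in ("ipadapter", "clip_vision", "checkpoints", "loras"):
    folder = ROOT / sub
    print(f"\n{folder}:")
    if not folder.is_dir():
        print("  (missing - create the folder and drop the model file in)")
        continue
    for f in sorted(folder.iterdir()):
        if f.suffix in {".safetensors", ".ckpt", ".bin", ".pt"}:
            print(f"  {f.name}")
```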
Why do you need ControlNet at all? Can it be skipped so it morphs without any mask?
Managed to get this running. It does okay, but I am not seeing much influence from the ControlNet motion video input. Any way to make that more apparent? Also, I've noticed a Shutterstock overlay near the bottom of the clip. It is translucent but noticeable, and it kind of ruins everything. Any way to eliminate that artifact?
Working, but when I select "plus high strength", I get a CLIP Vision error. What am I missing? I downloaded everything... ViT-G is the problem for some reason?
Very helpful, thanks!
Glad it was helpful!
How can I get a progress bar at the top of the screen like you? I had to reinstall ComfyUI completely for this workflow. I installed crystools but the progress bar doesn't appear at the top :/ Thank you for your video, you are a god!
How much GPU VRAM is needed?
Brilliant video. thanks so much for sharing your knowledge.
When I use Manager > Install Missing Nodes it doesn't work and says: "To apply the installed/updated/disabled/enabled custom node, please RESTART ComfyUI. And refresh browser..." What can I do about that?
How do I add more than 4 images?
Copying the video address of the animation doesn't work, it copies a .webm link, please help :(
Error occurred when executing IPAdapterBatch:
Error(s) in loading state_dict for ImageProjModel:
size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1280]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
I followed this video exactly and am only using SD 1.5 checkpoints. I cannot find anywhere how to fix this.
Where do I get your input images?
Excellent! 👍👍👍
I'm getting errors when trying to run... a few items that say "value not in list: ckpt_name:" "value not in list: lora_name" and "value not in list: vae_name:"
I'm certain I put all the downloads in the correct folders and name everything appropriately.... Any thoughts?
I've encountered this error : Error occurred when executing RIFE VFI:
Tensor type unknown to einops
Hi! I have this error all the time: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm). Though I use 1.5 checkpoint. Please help
Hi Abe aTech, very informative and I like your work very much.
Anybody know how to scale this to more than 4 images? I've tried, but the masks from the cloned nodes are messing up the animation.
I want to make longer videos.
How do I kick it off?
Hi! Beginner's question: if I run software like ComfyUI locally, does that mean all the AI art, music, and other works I generate will be free to use for commercial purposes? Or am I violating copyright terms? I've been searching for more info about this but I get confused. Thanks in advance
how awesome is that!
but what needs to be done to get rid of these errors:
When loading the graph, the following node types were not found:
ADE_ApplyAnimateDiffModelSimple
VHS_SplitImages
SimpleMath+
ControlNetLoaderAdvanced
ADE_MultivalDynamic
VHS_VideoCombine
BatchCount+
ADE_UseEvolvedSampling
FILM VFI
RIFE VFI
Color Correct (mtb)
VHS_LoadVideoPath
IPAdapterUnifiedLoader
ACN_AdvancedControlNetApply
ADE_LoadAnimateDiffModel
ADE_LoopedUniformContextOptions
IPAdapterAdvanced
CreateFadeMaskAdvanced
Thanks for this. Do you know what could be causing this error : Error occurred when executing KSampler:
Given groups=1, weight of size [320, 5, 3, 3], expected input[16, 4, 64, 36] to have 5 channels, but got 4 channels instead
I figured out the problem, I was using the wrong ControlNet. I am having a different issue though, where my initial output is very "noisy", as if there was latent noise all over it. Is it important for the source images to be in the same aspect ratio as the output?
Ok found the solution here too, I was using a photorealistic model, which somehow the workflow doesn't seem to like. Switching to juggernaut fixed it
Goooooood
I copied pretty much everything you did and my animation outputs look super low quality?
Tried for ages and couldn't make it work; every image is very pixelated and crazy, I can't work it out.
Is there a way to increase the frames/batch size for the FadeMask? Everything over 96 messes up the FadeMask -.-''
Got it :D
Hi! Any suggestions for a missing IPAdapter? I am confused because I didn't get an error to install or update and I have all of the IPAdapter nodes installed... the process stopped on the "IPAdapter Unified Loader" node.
!!! Exception during processing!!! IPAdapter model not found.
Traceback (most recent call last):
File "/workspace/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/workspace/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/workspace/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 453, in load_models
raise Exception("IPAdapter model not found.")
Exception: IPAdapter model not found.
same problem
ip-adapter_sd15_vit-G.safetensors - install this from the manager
@@tilkitilkitam Thank you for responding.
I already had the model installed but it was not seeing it. I ended up restarting Comfy completely after I updated everything from the manager instead of only doing a hard refresh and that fixed it.
thx Abe!
Hey! I've managed to get this working but I was under the impression this workflow will animate between the given reference images.
The results I get are pretty different from the reference images.
Am I wrong in my assumption?
You're right - it uses the reference images (4 frames vs 96 total frames) as a starting point and generates additional frames, but the results should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results by using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation.
@@abeatech Is there any way to make the result more like the reference images?
VRAM required?
16GB for upscaled
@@Adrianvideoedits Could you make the videos first and then load the upscaler separately to improve the quality, or does it have to be all together so it can't be done in 2 different workflows?
@@creed4788 I don't see why not. But upscaling itself takes the most VRAM, so you would have to find an upscaler for lower-VRAM cards.
I've got an error message
Error occurred when executing IPAdapterUnifiedLoader:
IPAdapter model not found.
File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 573, in load_models
raise Exception("IPAdapter model not found.")
What should I do?
Success now!
I reinstalled ip-adapter_sd15_vit-G.safetensors from the manager.
I may be missing something, but the workflow is different so it's not working.
I finally realized I had the wrong workflow file... it's working now.
You didn't explain the most important part, which is how to run the same batch with and without upscale. It generates a new batch every time you queue the prompt, so the preview batch is a waste of time. I like the idea though.
idk maybe a seed? efficiency nodes?
Hey man, not sure, but looks like there's this node in the workflow called Seed (rgthree) and it seems clicking the bottom button on this node called Use last queued seed does the trick. Try it.
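For what it's worth, the reason re-using the seed works is that the starting latent noise is fully determined by the RNG seed, so a fixed seed reproduces the same batch and the second (upscaled) run starts from the same animation. A tiny illustrative snippet (generic PyTorch, not the workflow itself; the latent shape is arbitrary):

```python
# Illustration only: the same seed yields the same initial latent noise,
# which is why "Use last queued seed" reproduces the previous batch.
import torch

def initial_noise(seed: int, shape=(1, 4, 64, 64)):  # arbitrary latent shape
    gen = torch.Generator().manual_seed(seed)
    return torch.randn(shape, generator=gen)

print(torch.equal(initial_noise(1234), initial_noise(1234)))  # True
print(torch.equal(initial_noise(1234), initial_noise(5678)))  # False
```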
❤🔥❤🔥❤🔥
"it's nothing too intimidating" then continues to show a workflow that takes up the entire screen. Lol! thanks for this tutorial, i've been looking for something like this days now. I'm switching from A1111 to comfy UI and the changes are a little more intimidating to get a handle on things than I originally expected. Thanks for this.
I get this weird error when it gets to the ControlNet, not sure if you know what's wrong? 'ControlNet' object has no attribute 'latent_format'. I have the QR code ControlNet loaded.
@@artificiallyinspired Make sure it's the same name. A good habit I have when loading new workflows is to go through all the nodes where you select a model or LoRA and make sure the one I have locally is selected. Not everyone follows the same naming conventions. Sometimes you might download a workflow and someone has their IPAdapter named "ip-adapter_plus.safetensors" while yours is "ip-adapter-plus.safetensors". Always good to re-select.
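A quick way to do that re-checking in bulk: the sketch below (a hypothetical script, assuming the exported workflow JSON keeps model filenames in each node's widgets_values) lists every model file a workflow refers to, so you can compare the names against what's actually in your models folders:

```python
# Hedged sketch: collect model filenames referenced by a ComfyUI workflow
# JSON so you can compare them with the files you actually have on disk.
import json
import re
from pathlib import Path

WORKFLOW = Path("workflow.json")  # hypothetical filename - use your own export

data = json.loads(WORKFLOW.read_text(encoding="utf-8"))
pattern = re.compile(r"\.(safetensors|ckpt|pt|bin)$", re.IGNORECASE)

names = set()
for node in data.get("nodes", []):
    for value in node.get("widgets_values") or []:
        if isinstance(value, str) and pattern.search(value):
            names.add(value)

print("Models referenced by the workflow:")
for name in sorted(names):
    print(" ", name)
```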
Has anyone else run into the "loading 2 new models" step getting stuck at 0%? I've also had a case where "loading 3 new models" reached 9% and then stopped. What is the problem? :c
Awesome but could you slow it down please.
Help! I encountered this error while running it
Error occurred when executing IPAdapterUnifiedLoader:
Module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'
Sounds like it could be a couple of things:
a) you might be trying to use an SDXL checkpoint - in which case try an SD1.5 checkpoint instead. The AnimateDiff model in the workflow only works with SD1.5
or
b) an issue with your IPAdapter node. You can try making sure the IPAdapter model is downloaded and in the right folder, or reinstalling the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from the manager or GitHub)
@@abeatech It says in the note to install it in the clip_vision folder, but that's not it, as none of the preloaded models are there and the new one installed there doesn't appear in the dropdown selector. So if it's not that folder, where are you supposed to install it? If the node is bad, why is it used in the workflow in the first place? Shouldn't it just use the IPAdapter Plus node?
Noob question: why are my results so different from my inputs?
I mean, the images I loaded come out as a different output instead of transitioning between them.
The results will not be exactly the same, but they should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results by using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation. Also worth double-checking that you have the VAE and LCM LoRA selected in the settings module.
This is not for beginners, put that on the description mate
What are you crying about... go find a beginner class if it's too hard to understand...
did anyone else run into some errors while following this video?
I couldn't understand anything you did there! You clicked through everything really fast and then said "look what I got." You didn't show where, what, or how.
its morbing time...
Error occurred when executing CheckpointLoaderSimple:
'model.diffusion_model.input_blocks.0.0.weight'
File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI
odes.py", line 516, in load_checkpoint
out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 511, in load_checkpoint_guess_config
model_config = model_detection.model_config_from_unet(sd, diffusion_model_prefix)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 239, in model_config_from_unet
unet_config = detect_unet_config(state_dict, unet_key_prefix)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 120, in detect_unet_config
model_channels = state_dict['{}input_blocks.0.0.weight'.format(key_prefix)].shape[0]
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ :P
Thanks for this. I've followed the steps shown but I'm seeing this error message. What am I doing wrong here?
Failed to validate prompt for output 53:
* CheckpointLoaderSimple 564:
- Value not in list: ckpt_name: 'SD1.5\juggernaut_reborn.safetensors' not in ['dreamshaper_8.safetensors', 'flux1-schnell-bnb-nf4.safetensors', 'juggernaut_reborn.safetensors', 'realvisxlV50_v50LightningBakedvae.safetensors', 'revAnimated_v2Rebirth.safetensors']
* LoraLoaderModelOnly 563:
- Value not in list: lora_name: 'SD1.5\Hyper-SD15-8steps-lora.safetensors' not in ['AnimateLCM_sd15_t2v_lora.safetensors', 'Hyper-SD15-8steps-lora.safetensors', 'flux1-redux-dev.safetensors', 'v3_sd15_adapter.ckpt', 'vae-ft-mse-840000-ema-pruned.ckpt']
Output will be ignored