CG Pixel
Joined 8 Jun 2022
This channel is about creating various 3D art (manga, nature, abstract, movie FX, etc.) using Blender software.
Have fun and enjoy!
ComfyUI Tutorial: Increase Image and Video Generation Using Wave Speed #comfyui #comfyuitutorial
In this video I will show you how to use the WaveSpeed nodes, which decrease the generation time of your videos and images by using a dynamic cache that avoids recalculation during generation while maintaining the same quality. The nodes were tested with FLUX, Hunyuan, and LTXV. #comfyui #ltxvideo #stablediffusion #imagetovideo #texttovideoai #hunyuan #wavespeed
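The "dynamic cache that avoids recalculation" mentioned above can be illustrated with a toy sketch: if a block's input barely changed since the previous sampling step, reuse the previous output instead of recomputing. Everything here (the function name, the list-based stand-in for tensors, the 0.1 threshold) is a hypothetical illustration, not WaveSpeed's actual implementation.

```python
def make_cached_block(block, threshold=0.1):
    """Toy dynamic cache: skip recomputation when the input barely changed.

    `block` stands in for an expensive model sub-network; `threshold`
    is an illustrative relative-difference cutoff, not WaveSpeed's.
    """
    state = {"last_in": None, "last_out": None}

    def cached(x):  # x: list of floats, a stand-in for a latent tensor
        if state["last_in"] is not None:
            diff = sum(abs(a - b) for a, b in zip(x, state["last_in"]))
            scale = sum(abs(a) for a in state["last_in"]) + 1e-8
            if diff / scale < threshold:
                return state["last_out"]  # cache hit: reuse previous result
        out = block(x)  # cache miss: pay the full cost and remember it
        state["last_in"], state["last_out"] = list(x), out
        return out

    return cached
```

Across adjacent diffusion steps the block's input often changes very little, so many steps become cache hits; quality holds as long as the threshold stays small.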
Chapters
00:00 Intro
00:36 Installation Part
02:11 Flux Wave Speed img gen
04:28 Flux results using wavespeed
05:20 LTX Video Wavespeed
06:21 LTX Video results
07:51 Hunyuan Video Wavespeed & results
11:01 Conclusion & Outro
My Upwork Profile
www.upwork.com/freelancers/~01047908de30a2c349
Reddit Profile
www.reddit.com/user/cgpixel23
1-My Workflow
openart.ai/workflows/6EZzihtnI0c1y6hEnmHI
2-ComfyUI WaveSpeed nodes
github.com/chengzeyi/Comfy-WaveSpeed
3-LTXV0.9.1 model
huggingface.co/Lightricks/LTX-Video
4-LTXV Nodes Installation Tutorial
th-cam.com/video/x-bT_Ld7A1o/w-d-xo.html
5-HUNYUAN model
huggingface.co/city96/HunyuanVideo-gguf
6-AI Video tutorial playlist
th-cam.com/video/hr77a6otZ_0/w-d-xo.html
Views: 275
Videos
ComfyUI Tutorial: How To Use LTXV 0.9.1 with STG #comfyui #comfyuitutorial #ltxv
3.1K views · days ago
In this video I will show you how to create video from text, images, or video using the new LTXV 0.9.1 model, which includes spatio-temporal skip guidance (STG) and allows you to create more consistent video at low VRAM consumption. #comfyui #ltxvideo #stablediffusion #imagetovideo #texttovideoai #hunyuan Chapters 00:00 Intro 00:35 What is STG 02:37 Installation Part 03:30 Workflow Overview &...
Comfyui Tutorial : How To Run Hunyuan GGUF #comfyui #hunyuan #comfyuitutorial
6K views · 21 days ago
In this video I will show you how to create video from text, images, or video using the new Hunyuan GGUF model, which is dedicated to PCs with low-VRAM graphics cards. #comfyui #ltxvideo #stablediffusion #imagetovideo #texttovideoai #hunyuan Chapters 00:00 Intro 00:35 Workflow Overview 07:11 Results 07:27 Installation Part 11:05 Outro My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c349 Reddit...
ComfyUI Tutorial: Creating Video From Images or Text #comfyui #flux #ltxv
4.9K views · a month ago
openart.ai/workflows/NE2kbxQX5Z0Of7eA3olt In this tutorial I will show you how to generate video from text or images using the new LTX Video model to obtain 8-second videos with my all-in-one workflow. #comfyui #ltxvideo #stablediffusion #imagetovideo #texttovideoai Chapters 00:00 Intro 00:22 Workflow Overview 06:37 Installation Part 08:32 Conclusion & Outro My Upwork Profile www.upwork.co...
Comfyui Tutorial : testing Flux Tools for inpainting & outpainting #comfyui #flux #fluxtools
2.7K views · a month ago
In this tutorial I will show you how to use the Flux Tools update, which focuses on the Depth and Canny LoRAs and the Redux model, to create new types of images. #comfyui #flux #fluxtool #fluxgguf Chapters 00:00 Intro 00:21 Workflow Overview 03:23 Outpainting results 05:18 Inpainting results 07:11 Installation Part 08:38 Outro My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c349 Reddi...
Comfyui Tutorial : How To Use Flux Tools #comfyui #flux #stablediffusion #fluxtools
2.7K views · a month ago
In this tutorial I will show you how to use the Flux Tools update, which focuses on the Depth and Canny LoRAs and the Redux model, to create new types of images. #comfyui #flux #fluxtool #fluxgguf Chapters 00:00 Intro 00:19 What is flux tools 01:44 Workflow Overview 07:16 Installation Part 09:18 Redux Install 10:47 Redux Workflow 11:51 Outro My Upwork Profile www.upwork.com/freelancers/~01047908d...
Comfyui Tutorial : Flux Multi Area Prompting #comfyui #flux #stablediffusion
4.9K views · 2 months ago
In this tutorial I will show you how to use multi-area prompting with special nodes and the Flux Lite version. #comfyui #flux #multiareaprompt #fluxgguf Chapters 00:00 Intro 00:39 Workflow Overview 05:46 Installation Part 07:12 Results 09:14 Object positions 11:16 Outro My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c349 Reddit Profile www.reddit.com/user/cgpixel23 1-My W...
ComfyUI Tutorial : Daemon Detailers For Better Images Using Flux Lite #comfyui #flux
2K views · 2 months ago
In this tutorial I will show you how to create good-quality images using the new Flux Daemon detailers, which allow you to add details without changing your images. We will also test the new Flux 8B Lite and GGUF versions, which are super fast. #comfyui #fluxturbo #flux #fluxcontrolnet #fluxgguf #fluxlite Chapters 00:00 Intro 00:42 Installation Part 03:00 Workflow Overview 07:14 O...
Comfyui Tutorial: Testing the new SD3.5 model #comfyui #comfyuistablediffusion #stablediffusion3.5
2.1K views · 2 months ago
In this tutorial I will show you how to use the SD3.5 models, which come in Large, Turbo, and GGUF versions. #comfyui #stablediffusion #stablediffusion3.5 Chapters 00:00 Intro 01:41 Installation Part 04:02 Workflow Overview 05:47 SD3.5 models Results My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c349 Reddit Profile www.reddit.com/user/cgpixel23 1-My Workflow openart.ai/workflows/...
ComfyUI Tutorial : How To Create Images Using 8 Steps Flux Turbo Model #comfyui #flux #fluxturbo
2.7K views · 2 months ago
In this tutorial I will show you how to create good-quality images using the new Flux Turbo model. I will also show you how to use it with Ultimate SD Upscale and Depth ControlNet. #comfyui #fluxturbo #flux #fluxcontrolnet #fluxgguf Chapters 00:00 Intro 01:05 Installation Part 02:14 Workflow Overview 06:22 Results (Generation Time) 07:49 Results (Ultimate SD upscale) 08:31 Results (Depth ...
ComfyUI Tutorial : How To Create Consistent Images Using Flux Model #comfyui #flux #controlnet
7K views · 3 months ago
In this tutorial I will show you how to create a consistent character sheet using SDXL ControlNet and the Flux GGUF model. #comfyui #forge #flux #fluxnf4 #fluxgguf Chapters 00:00 Intro 00:23 Installation Part 02:54 Workflow Overview 07:47 Outro My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c349 Reddit Profile www.reddit.com/user/cgpixel23 1-My Workflow openart.ai/workflo...
ComfyUI Tutorial : How to use Flux Controlnet Upscaling #controlnet #comfyui #flux
1.7K views · 3 months ago
In this tutorial I will show you how to install and run the Flux ControlNet upscaling version, which allows you to upscale your images using the GGUF model. #comfyui #forge #flux #fluxnf4 #fluxgguf Chapters 00:00 Intro 00:26 Installation Part 03:50 Workflow Overview 07:54 Tips & Settings 08:58 Downside My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c349 Reddit Profile www.reddit.co...
Comfyui Tutorial: Outpainting using flux & SDXL lightning #comfyui #flux #outpainting #controlnet
1.6K views · 3 months ago
In this tutorial I will show you how to install and run outpainting using the Flux GGUF model and SDXL Lightning. #comfyui #forge #flux #fluxnf4 #fluxgguf Chapters 00:00 Intro 00:25 Installation Part 03:20 Workflow Overview 09:14 Layer Nodes Settings 13:33 Outro My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c349 Reddit Profile www.reddit.com/user/cgpixel23 1-My Workflow open...
Comfyui Tutorial: How To Use Controlnet Flux Inpainting #comfyui #flux #controlnet
5K views · 3 months ago
In this tutorial I will show you how to install and run both ControlNet and the ControlNet all-in-one version using the Flux GGUF model in ComfyUI. #comfyui #forge #flux #fluxnf4 #fluxgguf Chapters 00:00 Intro 00:35 Installation Part 02:32 Workflow Overview 05:08 Inpainting Results My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c349 Reddit Profile www.reddit.com/user/cgpixel...
ComfyUI tutorial: How to use Controlnet all in one using Flux GGUF model #comfyui #flux #controlnet
6K views · 4 months ago
In this tutorial I will show you how to install and run both ControlNet and the ControlNet all-in-one version using the Flux GGUF model in ComfyUI. #comfyui #forge #flux #fluxnf4 #fluxgguf Chapters 00:00 Intro 00:26 Installation Part 03:45 Workflow Overview 07:25 Controlnet canny results 08:51 Controlnet Depth & TILE results My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c34...
ComfyUI & Forge Webui Tutorial: How To Use Flux IPadapter #comfyui #flux #ipadapter #forge
7K views · 4 months ago
ComfyUI tutorial: How To Use Flux ControlnetV3 & Controlnet all in one #comfyui #flux #controlnet
7K views · 4 months ago
ComfyUI & Forge Tutorial : Testing Flux GGUF Model #comfyui #forge #flux #comfyuitutorial
4.7K views · 4 months ago
Forge Tutorial: Flux NF4 Complete Guide #flux #forge #comfyui #stablediffusion
5K views · 4 months ago
Comfyui Tutorial: New Flux-NF4 for Low Vram #comfyui #comfyuitutorial #flux
9K views · 5 months ago
Comfyui Tutorial: flux model for low Vram #comfyui #comfyuitutorial #flux
3.2K views · 5 months ago
Comfyui Tutorial : More Accurate Style Transfer With IPAdapter #comfyuitutorial #comfyui
2.4K views · 5 months ago
Comfyui Tutorial: Tile Controlnet For Image Upscaling #comfyui #comfyuitutorial #controlnettile
2.7K views · 5 months ago
Comfyui Tutorial: SDXL Controlnet Union All In One #comfyui #comfyuitutorial #controlnet
6K views · 6 months ago
ComfyUI Tutorial: Depth Anything V2 & Controlnet #comfyui #comfyuitutorial #controlnet
7K views · 6 months ago
Comfyui Tutorial : Morphing Animation Using QRControlnet #comfyui #controlnet #comfyuitutorial
1.9K views · 7 months ago
ComfyUI Tutorial: Exploring Stable Diffusion 3 #comfyui #comfyuitutorial #stablediffusion3
697 views · 7 months ago
Comfyui Tutorial : Style Transfer using IPadapter #comfyui #comfyuitutorial #ipadapter #controlnet
3K views · 7 months ago
ComfyUI Tutorial: Preserving Details with IC-Light #comfyui #comfyuitutorial @risunobushi_ai
3K views · 7 months ago
ComfyUI Tutorial: Background and Light control using IPadapter #comfyui #comfyuitutorial #ipadapter
6K views · 8 months ago
Impressive content, CG Pixel. Can't wait to see your next upload. I hit the thumbs-up icon on your video. Keep up the fantastic work. Your detailed explanation of the wave speed node's impact on image generation was enlightening. Have you considered exploring potential optimizations for video generation with high-complexity prompts in future videos?
Very good explanation, thanks a lot! Have you already tried this workflow with WaveSpeed?
I just uploaded it: th-cam.com/video/MQAPmDXe-b8/w-d-xo.html
Awesome, thanks a lot!
I don't have the Musgrave texture.
How do you connect t2v to the upscale?
My question is can you load LoRAs into specific areas and stop "bleeding". That would help a lot.
Very good teaching video, but why is the image I generated black? What should I do? Thank you. (I tried changing the VAE, but it still generated black images.)
How does the VAE GGUF file compare with the 327 MB ae.safetensors file?
Thanks, this Flux inpaint works well even with Flux Schnell in 4 steps.
How can I make the generated video longer (more than 3 seconds)?
I am getting the following error: Missing Node Types: Florence2Run, Florence2ModelLoader, ShowText|pysssss, Int Literal, UnetLoaderGGUF, GetNode, SetNode, and many more... Please help. PS: I have already added ComfyUI-Florence2 to custom nodes.
There's now a Flux.1 Dev Hyper NF4. Did you try that one? Let me know. Thanks for the information and video.
My outputs are coming out great. Just remember to prompt this: "best video quality of
thanks for the tips
You're learning :) Nice video.
You forgot to include the Florence2 download in your video/description or as a required node. It also would have helped if you expanded it in the vid, but I was able to figure out that it requires the "Florence-2-base" model via your workflow JSON. Overall a very helpful video though. Thanks!
Thanks for the advice, I will try to implement it in my next videos.
What about coloring the lineart? Where can I download the flux forge model?
I don't know if there is a workflow like that, but I can build a new one and do a tutorial about it.
@@cgpixel6745 that would be so nice
Hello, I'm new to using the tool. Is there a chat so I can contact you to help me use the workflow?
Well, it is not that complicated to get used to ComfyUI; you can reach me via Reddit.
Hello, I am from the SeaArtAI ComfyUI team. We would like to invite you to collaborate with us on the ComfyUI project. Could you kindly provide an email address so we can send you further details about the collaboration?
okay well you clearly know your way around ComfyUI... subscribed
thanks bro
Brother, you're the man! Excellent delivery, consistently on point.
Thanks bro, if you need anything I'm here.
@@cgpixel6745 Thanks bro
Is LTX okay for making stylized or illustration-style video, like AnimateDiff? I like LTX because it is fast. Just wondering, has anyone tried it for non-realistic styles?
Yes, it is suitable for both realistic and anime styles.
Even though I installed the Florence2 node, I still got the Missing Node Type error saying Florence2ModelLoader was not found. Do you have any clue how to fix that?
Replace that node with DownloadAndLoadFlorence2Model and it should work.
@@cgpixel6745 Thank you so much!!! It finally worked!
Even though I installed the Florence2 node, I still got the Missing Node Type error saying Florence2ModelLoader was not found. Do you have any clue how to fix that?
Yeah, don't use that node; uninstall it and just replace it with a regular Load CLIP and CLIP Text Encode.
You can try these nodes: github.com/pythongosssss/ComfyUI-WD14-Tagger - they do the same thing.
I got this warning: "FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers"
You can try to post it here, maybe the developer can help us: github.com/Lightricks/LTX-Video/issues
Thanks for sharing. I haven't been able to get 0.9.1 to work on my 6 GB laptop, but I'll keep trying to figure it out. Maybe your workflow might help.
I got this error "Unexpected architecture type in GGUF file, expected one of flux, sd1, sdxl, t5encoder but got 'hyvid'" ?
Update your GGUF nodes.
Great work!
thanks
Can you use a LoRA of yourself, added into ComfyUI, with the image generation currently being used for video?
Yes, you just have to create the LoRA, then add it to the checkpoint.
*This is your pilot speaking
Yeah, my mic got sick lol
Is the multiAreaConditioning node still working? I cannot get it to work :(
Try to post your problem here: github.com/Davemane42/ComfyUI_Dave_CustomNode/issues - maybe someone can help us; for me the installation was easy.
@@cgpixel6745 What version of ComfyUI are you using? I read a lot on Reddit that it stopped working for a lot of people. I use the new ComfyUI Desktop.
@cgpixel6745 I tried the same but it's not working; it failed at the KSampler step with the error "all tensors need to be on the same GPU".
Great video. Your pixelation problem might be related to a mismatch in FPS. In the T2V section you specified 24 FPS, but in Upscale 30 FPS. I changed it to 24 in the Upscale section and everything looks good.
Oh thanks buddy, I did not notice that.
Thanks, I hadn't realised you could do video-to-video with GGUF, but of course you can feed the video into the latent space. However, I don't see where you are setting the denoise value. This would seem to be important: setting it too high will just ignore the input video, and too low will keep it too much the same. Anyhow, the simple workflow still works, so it's not an issue for me. I think the regime of trying to do everything at once in a single workflow is a bad idea, but that's what most other YouTubers are doing too. Better to keep a single workflow that does one thing; most of one's results are thrown away, and the ones you want to keep can then be thrown into RIFE or an upscaler, which keeps the main process minimal.
Well, you can use the bypass muter to disable some groups. It would be redundant for me to do txt, img, and vid each in a separate video tutorial; that's why I grouped everything.
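A side note on the denoise question raised in this thread: a common mental model for vid2vid is that `denoise` controls how far into the sampling schedule the input latent is injected. The helper below is a hypothetical linear approximation of that idea; real samplers follow a sigma schedule, so the mapping is not exactly linear.

```python
def denoise_start_step(total_steps, denoise):
    """Approximate which sampler step a vid2vid run effectively starts from.

    denoise=1.0 -> start at step 0 (full re-noise; input video mostly ignored)
    denoise->0  -> start near the last step (output stays close to the input)
    The linear mapping is an assumption for illustration only.
    """
    if not 0.0 <= denoise <= 1.0:
        raise ValueError("denoise must be in [0, 1]")
    # Skip the fraction of the schedule that denoise leaves untouched.
    return round(total_steps * (1.0 - denoise))
```

With 20 steps, a mid-range denoise of 0.5 starts halfway through the schedule, which is why it keeps some of the input video's structure while still allowing change.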
Would it be possible to designate specific LoRAs for each area? E.g. couples' pictures where I can choose to use personal LoRAs for each person. Thanks for the video, subscribed.
Well, for now it is not possible, but I think it will be in future updates.
Hey, that's a great and quick video. I am not able to find the CLIPTextEncodeFlux node in the manager/custom nodes; do you know what might be the issue?
Use the normal CLIPTextEncode node; it should work.
Any suggestions after re-installing? This keeps giving me a conflicted nodes error for Florence2Model. Conflicted Nodes: DownloadAndLoadFlorence2Model [comfyui-tensorops], Florence2Run [comfyui-tensorops]
In that case, use DownloadAndLoadFlorence2Model as the model loader; maybe it will work.
Can one set a starting image for the text-to-video, making it image-to-video instead? If one does not want to use generated text, but the image itself as the base.
Yes, that's the goal of img2vid. Is that what you are looking for?
It's not true img2vid. It only uses the LLM to describe the image. It does not take the actual image and animate it.
It depends on the image resolution; in some cases it failed to animate it, but most of the time it worked for me.
I was getting the error "SamplerCustomAdvanced: Required input is missing: latent_image". I had to manually connect EmptyHunyuanLatentVideo.Latent to SamplerCustomAdvanced.latent_image. Is this the right way or am I missing something?
@@Daddydavidodave-zv5wy Yes, I forgot to plug it in, since I used the video-to-video group for my last generation.
@@cgpixel6745 Thanks! Learned a lot from this workflow.
Your videos are always exciting... except they never work, lol
I mean the workflow never fucking works
What error did you get ?
How do you add LoRAs to GGUF?
I am working on it; I will do a tutorial on that.
I was able to throw a "LoraLoaderModelOnly" node between "Unet Loader (GGUF)" and "ModelSamplingSD3" and used a Hunyuan-specific Lora and it seemed to work.
Great tutorial it was very easy to follow along! Thanks!
@@697_ Stay tuned, more tutorials on video generation are coming.
Thanks man! Quick question here: how do you show the GPU, CPU, RAM, etc. workload percentages there? I have installed Crystools, but after the update a few months ago it doesn't show in my ComfyUI.
You should activate it in the ComfyUI settings.
Background removal is OK, but not background replacement; is that a problem with Load CLIP Vision? I chose clip-vision-vit-h.safetensors.
Try to update your BRIA nodes; it is not related to CLIP Vision.
It doesn't work for me. Which file goes below "IP ADAPTER.SAFETENSORS"? I can't find it on Hugging Face or GitHub. When I change the background, it stays black.
The workflow seems to get stuck on "Requested to load MochiTEModel_" (terminal), and in the GUI it shows it's on CLIP Text Encode (Negative Prompt).
We need the new workflow version 🙏 TripleCLIPLoader does not work with the new ComfyUI (v0.3.8) and Flux; it generates a black image. If I change to DualCLIPLoader it works, but loses a dimension. How do I solve this?
In that case, change your VAE.
Hello, thanks for your work. I have two problems: 1) I get the failure message "KSampler index is out of bounds for dimension with size 0", and the Upscale Image By node has a red circle on the IMAGE parameter. What can I do? I have the same prompt as yours, but you have no failure with the Upscale Image By node. 2) What sort of MP4 do I have to provide in the Load Video (Path) node?
I'm getting the same error message: failed "KSampler index is out of bounds for dimension with size 0". It was working initially, then this error came randomly; now I'm re-downloading all the files/models to try again.
Reduce the upscaling factor to 1.5.
@ I disabled that node and all the upscaling nodes; the error occurs at the first KSampler.
@@cgpixel6745 By the way, I just found out from a Reddit post that it's a problem with AnimateDiff Advanced, and the creator might be working on an update.
What's your GPU? How much VRAM?
Literally the most important question, not answered, but in the video he reckons an RTX 3060 with 6 GB VRAM, done in 6 minutes at 100 steps.
@markdkberry Tried it myself now: 100 steps in 2-3 minutes on an RTX 3070.
Yes, I have the RTX 3060 6 GB laptop version.
Great video! I like that you added the prompt generator and frame interpolation. That makes a huge difference. AI upscale would be great too! Thumbs up!
thanks