CG Pixel
ComfyUI Tutorial: Increase Image and Video Generation Using Wave Speed #comfyui #comfyuitutorial
In this video I will show you how to use the WaveSpeed nodes, which decrease the generation time of your videos and images using a dynamic cache that avoids recalculation during generation while maintaining the same quality. The nodes are tested with FLUX, Hunyuan, and LTXV. #comfyui #ltxvideo #stablediffusion #imagetovideo #texttovideoai #hunyuan #wavespeed
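The "dynamic cache that avoids recalculation" described above can be sketched roughly as first-block caching: run the first model block each denoising step, and if its output has barely changed since the previous step, reuse the cached contribution of the remaining blocks instead of recomputing them. This is a minimal illustrative sketch, not the actual Comfy-WaveSpeed implementation; `first_block`, `remaining_blocks`, and the 0.1 threshold are all hypothetical stand-ins.

```python
import numpy as np

def first_block(x):
    # hypothetical stand-in for the first (cheap) model block
    return np.tanh(x)

def remaining_blocks(h):
    # hypothetical stand-in for the expensive remaining blocks
    return 2.0 * h + 1.0

class FirstBlockCache:
    """Reuse the remaining blocks' contribution when the first block's
    output barely changes between denoising steps."""

    def __init__(self, threshold=0.1):
        self.threshold = threshold      # hypothetical tolerance
        self.prev_first = None          # first-block output from the last full step
        self.cached_residual = None     # (full output - first-block output)
        self.hits = 0                   # steps that skipped recomputation

    def step(self, x):
        h = first_block(x)
        if self.prev_first is not None and self.cached_residual is not None:
            denom = np.abs(self.prev_first).mean() + 1e-8
            rel_change = np.abs(h - self.prev_first).mean() / denom
            if rel_change < self.threshold:
                self.hits += 1
                return h + self.cached_residual  # cache hit: skip the heavy blocks
        out = remaining_blocks(h)
        self.cached_residual = out - h
        self.prev_first = h
        return out
```

Because consecutive denoising steps produce similar activations, later steps mostly hit the cache, which is why generation gets faster while the output stays close to the uncached result.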
Chapters
00:00 Intro
00:36 Installation Part
02:11 Flux Wave Speed img gen
04:28 Flux results using wavespeed
05:20 LTX Video Wavespeed
06:21 LTX Video results
07:51 Hunyuan Video Wavespeed & results
11:01 Conclusion & Outro
My Upwork Profile
www.upwork.com/freelancers/~01047908de30a2c349
Reddit Profile
www.reddit.com/user/cgpixel23
1-My Workflow
openart.ai/workflows/6EZzihtnI0c1y6hEnmHI
2- comfyui-wave speed nodes
github.com/chengzeyi/Comfy-WaveSpeed
3-LTXV0.9.1 model
huggingface.co/Lightricks/LTX-Video
4-LTXV Nodes Installation Tutorial
th-cam.com/video/x-bT_Ld7A1o/w-d-xo.html
5-HUNYUAN model
huggingface.co/city96/HunyuanVideo-gguf
6-AI Video tutorial playlist
th-cam.com/video/hr77a6otZ_0/w-d-xo.html
275 views

Videos

ComfyUI Tutorial: How To Use LTXV 0.9.1 with STG #comfyui #comfyuitutorial #ltxv
3.1K views · 1 day ago
In this video I will show you how to create video from text, image, or video using the new LTXV 0.9.1 model, which includes spatio-temporal skip guidance (STG) and allows you to create more consistent videos at low VRAM consumption. #comfyui #ltxvideo #stablediffusion #imagetovideo #texttovideoai #hunyuan Chapters 00:00 Intro 00:35 What is STG 02:37 Installation Part 03:30 Workflow Overview &...
Comfyui Tutorial : How To Run Hunyuan GGUF #comfyui #hunyuan #comfyuitutorial
6K views · 21 days ago
In this video I will show you how to create video from text, image, or video using the new Hunyuan GGUF model, which is dedicated to PCs with low-VRAM graphics cards. #comfyui #ltxvideo #stablediffusion #imagetovideo #texttovideoai #hunyuan Chapters 00:00 Intro 00:35 Workflow Overview 07:11 Results 07:27 Installation Part 11:05 Outro My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c349 Reddit...
ComfyUI Tutorial: Creating Video From Images or Text #comfyui #flux #ltxv
4.9K views · 1 month ago
openart.ai/workflows/NE2kbxQX5Z0Of7eA3olt In this tutorial I will show you how to generate video from text or images using the new LTX Video model to obtain 8-second videos with my all-in-one workflow #comfyui #ltxvideo #stablediffusion #imagetovideo #texttovideoai Chapters 00:00 Intro 00:22 Workflow Overview 06:37 Installation Part 08:32 Conclusion & Outro My Upwork Profile www.upwork.co...
Comfyui Tutorial : testing Flux Tools for inpainting & outpainting #comfyui #flux #fluxtools
2.7K views · 1 month ago
In this tutorial I will show you how to use the Flux Tools update, focusing on the Depth, Canny LoRA, and Redux models to create new types of images #comfyui #flux #fluxtool #fluxgguf Chapters 00:00 Intro 00:21 Workflow Overview 03:23 Outpainting results 05:18 Inpainting results 07:11 Installation part 08:38 Outro My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c349 Reddi...
Comfyui Tutorial : How To Use Flux Tools #comfyui #flux #stablediffusion #fluxtools
2.7K views · 1 month ago
In this tutorial I will show you how to use the Flux Tools update, focusing on the Depth, Canny LoRA, and Redux models to create new types of images #comfyui #flux #fluxtool #fluxgguf Chapters 00:00 Intro 00:19 What is flux tools 01:44 Workflow Overview 07:16 Installation Part 09:18 Redux Install 10:47 Redux Workflow 11:51 Outro My Upwork Profile www.upwork.com/freelancers/~01047908d...
Comfyui Tutorial : Flux Multi Area Prompting #comfyui #flux #stablediffusion
4.9K views · 2 months ago
In this tutorial I will show you how to run multi-area prompting using special nodes and the Flux Lite version #comfyui #flux #multiareaprompt #fluxgguf Chapters 00:00 Intro 00:39 Workflow Overview 05:46 Installation Part 07:12 Results 09:14 Object positions 11:16 Outro My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c349 Reddit Profile www.reddit.com/user/cgpixel23 1-My W...
ComfyUI Tutorial : Deamon Detailers For Better Images Using Flux Lite #comfyui #flux
2K views · 2 months ago
In this tutorial I will show you how to create good-quality images using the new Flux daemon detailers, which let you add details without changing your images. We will also test the new Flux 8B Lite and GGUF versions, which are super fast. #comfyui #fluxturbo #flux #fluxcontrolnet #fluxgguf #fluxlite Chapters 00:00 Intro 00:42 Installation Part 03:00 Workflow Overview 07:14 O...
Comfyui Tutorial: Testing the new SD3.5 model #comfyui #comfyuistablediffusion #stablediffusion3.5
2.1K views · 2 months ago
In this tutorial I will show you how to use the SD3.5 models, which come in Large, Turbo, and GGUF versions #comfyui #stablediffusion #stablediffusion3.5 Chapters 00:00 Intro 01:41 Installation Part 04:02 Workflow Overview 05:47 SD3.5 models Results My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c349 Reddit Profile www.reddit.com/user/cgpixel23 1-My Workflow openart.ai/workflows/...
ComfyUI Tutorial : How To Create Images Using 8 Steps Flux Turbo Model #comfyui #flux #fluxturbo
2.7K views · 2 months ago
In this tutorial I will show you how to create good-quality images using the new Flux Turbo model; I will also show you how to use it with Ultimate SD Upscale and the Depth ControlNet #comfyui #fluxturbo #flux #fluxcontrolnet #fluxgguf Chapters 00:00 Intro 01:05 Installation Part 02:14 Workflow Overview 06:22 Results (Generation Time) 07:49 Results (Ultimate SD upscale) 08:31 Results (Depth ...
ComfyUI Tutorial : How To Create Consistent Images Using Flux Model #comfyui #flux #controlnet
7K views · 3 months ago
In this tutorial I will show you how to create a consistent image character sheet using an SDXL ControlNet and the Flux GGUF model #comfyui #forge #flux #fluxnf4 #fluxgguf Chapters 00:00 Intro 00:23 Installation Part 02:54 Workflow Overview 07:47 Outro My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c349 Reddit Profile www.reddit.com/user/cgpixel23 1-My Workflow openart.ai/workflo...
ComfyUI Tutorial : How to use Flux Controlnet Upscaling #controlnet #comfyui #flux
1.7K views · 3 months ago
In this tutorial I will show you how to install and run the Flux ControlNet upscaling version, which allows you to upscale your images using the GGUF model #comfyui #forge #flux #fluxnf4 #fluxgguf Chapters 00:00 Intro 00:26 Installation Part 03:50 Workflow Overview 07:54 Tips & Settings 08:58 Downside My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c349 Reddit Profile www.reddit.co...
Comfyui Tutorial: Outpainting using flux & SDXL lightning #comfyui #flux #outpainting #controlnet
1.6K views · 3 months ago
In this tutorial I will show you how to install and run outpainting using the Flux GGUF model and SDXL Lightning #comfyui #forge #flux #fluxnf4 #fluxgguf Chapters 00:00 Intro 00:25 Installation Part 03:20 Workflow Overview 09:14 Layer Nodes Settings 13:33 Outro My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c349 Reddit Profile www.reddit.com/user/cgpixel23 1-My Workflow open...
Comfyui Tutorial: How To Use Controlnet Flux Inpainting #comfyui #flux #controlnet
5K views · 3 months ago
In this tutorial I will show you how to install and run both ControlNet and the ControlNet all-in-one version using the Flux GGUF model in ComfyUI #comfyui #forge #flux #fluxnf4 #fluxgguf Chapters 00:00 Intro 00:35 Installation Part 02:32 Workflow Overview 05:08 Inpainting Results My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c349 Reddit Profile www.reddit.com/user/cgpixel...
ComfyUI tutorial: How to use Controlnet all in one using Flux GGUF model #comfyui #flux #controlnet
6K views · 4 months ago
In this tutorial I will show you how to install and run both ControlNet and the ControlNet all-in-one version using the Flux GGUF model in ComfyUI #comfyui #forge #flux #fluxnf4 #fluxgguf Chapters 00:00 Intro 00:26 Installation Part 03:45 Workflow Overview 07:25 Controlnet canny results 08:51 Controlnet Depth & TILE results My Upwork Profile www.upwork.com/freelancers/~01047908de30a2c34...
ComfyUI & Forge Webui Tutorial: How To Use Flux IPadapter #comfyui #flux #ipadapter #forge
7K views · 4 months ago
ComfyUI tutorial: How To Use Flux ControlnetV3 & Controlnet all in one #comfyui #flux #controlnet
7K views · 4 months ago
ComfyUI & Forge Tutorial : Testing Flux GGUF Model #comfyui #forge #flux #comfyuitutorial
4.7K views · 4 months ago
Forge Tutorial: Flux NF4 Complete Guide #flux #forge #comfyui #stablediffusion
5K views · 4 months ago
Comfyui Tutorial: New Flux-NF4 for Low Vram #comfyui #comfyuitutorial #flux
9K views · 5 months ago
Comfyui Tutorial: flux model for low Vram #comfyui #comfyuitutorial #flux
3.2K views · 5 months ago
Comfyui Tutorial : More Accurate Style Transfer With IPAdapter #comfyuitutorial #comfyui
2.4K views · 5 months ago
Comfyui Tutorial: Tile Controlnet For Image Upscaling #comfyui #comfyuitutorial #controlnettile
2.7K views · 5 months ago
Comfyui Tutorial: SDXL Controlnet Union All In One #comfyui #comfyuitutorial #controlnet
6K views · 6 months ago
ComfyUI Tutorial: Depth Anything V2 & Controlnet #comfyui #comfyuitutorial #controlnet
7K views · 6 months ago
Comfyui Tutorial : Morphing Animation Using QRControlnet #comfyui #controlnet #comfyuitutorial
1.9K views · 7 months ago
ComfyUI Tutorial: Exploring Stable Diffusion 3 #comfyui #comfyuitutorial #stablediffusion3
697 views · 7 months ago
Comfyui Tutorial : Style Transfert using IPadapter #comfyui #comfyuitutorial #ipadapter #controlnet
3K views · 7 months ago
ComfyUI Tutorial: Preserving Details with IC-Light #comfyui #comfyuitutorial @risunobushi_ai
3K views · 7 months ago
ComfyUI Tutorial: Background and Light control using IPadapter #comfyui #comfyuitutorial #ipadapter
6K views · 8 months ago

Comments

  • @KeyserTheRedBeard
    @KeyserTheRedBeard 4 hours ago

    Impressive content, CG Pixel. Can't wait to see your next upload. I hit the thumbs-up icon on your segment. Keep up the fantastic work. Your detailed explanation of the wave speed node's impact on image generation was enlightening. Have you considered exploring potential optimizations for video generation with high-complexity prompts in future videos?

  • @SpinAngelo137
    @SpinAngelo137 1 day ago

    Very good explanation, thanks a lot! Have you already tried this workflow with WaveSpeed?

    • @cgpixel6745
      @cgpixel6745 22 hours ago

      I just uploaded it: th-cam.com/video/MQAPmDXe-b8/w-d-xo.html

  • @gabrielmoro3d
    @gabrielmoro3d 1 day ago

    Awesome, thanks a lot!

  • @JustTimoha
    @JustTimoha 2 days ago

    I don't have the Musgrave texture.

  • @gameguru888
    @gameguru888 3 days ago

    How do you connect T2V to upscale?

  • @thanksfernuthin
    @thanksfernuthin 4 days ago

    My question is: can you load LoRAs into specific areas and stop "bleeding"? That would help a lot.

  • @S4MUEL404
    @S4MUEL404 5 days ago

    Very good teaching video, but why is the image I generated black? What should I do? Thank you. * Tried changing the VAE, but it still generates black images.

  • @play150
    @play150 5 days ago

    How does the VAE GGUF file compare with the 327 MB ae.safetensors file?

  • @emiln1977
    @emiln1977 5 days ago

    Thanks, this Flux inpaint works well even with Flux Schnell in 4 steps.

  • @ishtarnaomi7610
    @ishtarnaomi7610 6 days ago

    How do you add more time to the video generation (more than 3 seconds)?

  • @PowerPlaay
    @PowerPlaay 8 days ago

    I am getting the following error: Missing Node Types: Florence2Run Florence2ModelLoader ShowText|pysssss Int Literal UnetLoaderGGUF GetNode SetNode and many more... please help. PS: I have already added ComfyUI-Florence2 to custom nodes.

  • @Amsterdamhaze
    @Amsterdamhaze 8 days ago

    There's now a Flux 1 Dev Hyper NF4; did you try that one? Let me know. Thanks for the information and the video.

  • @DeathMasterofhell15
    @DeathMasterofhell15 9 days ago

    My outputs are coming out great; just remember to prompt with "best video quality of

    • @cgpixel6745
      @cgpixel6745 9 days ago

      thanks for the tips

  • @aivideos322
    @aivideos322 9 days ago

    You're learning :) nice video.

  • @zerilis
    @zerilis 9 days ago

    You forgot to include the Florence2 download in your video/description or as a required node. It also would have helped if you had expanded on it in the video, but I was able to figure out that it requires the "Florence-2-base" model via your workflow JSON. Overall a very helpful video though. Thanks!

    • @cgpixel6745
      @cgpixel6745 9 days ago

      Thanks for the advice, I will try to implement it in my next videos

  • @AyuK-jm1qo
    @AyuK-jm1qo 10 days ago

    What about coloring the lineart? Where can I download the Flux Forge model?

    • @cgpixel6745
      @cgpixel6745 9 days ago

      I don't know if there is a workflow like that, but I can build a new one and do a tutorial about it

    • @AyuK-jm1qo
      @AyuK-jm1qo 8 days ago

      @@cgpixel6745 that would be so nice

  • @Rafielw
    @Rafielw 10 days ago

    Hello, I'm new to using the tool. Is there a chat where I can contact you to help me use the workflow?

    • @cgpixel6745
      @cgpixel6745 9 days ago

      Well, it is not that complicated to get used to ComfyUI; you can reach me via Reddit

  • @marketing-k5g
    @marketing-k5g 11 days ago

    Hello, I am from the SeaArtAI ComfyUI team. We would like to invite you to collaborate with us on the ComfyUI project. Could you kindly provide an email address so we can send you further details about the collaboration?

  • @mickelodiansurname9578
    @mickelodiansurname9578 11 days ago

    Okay, well, you clearly know your way around ComfyUI... subscribed

    • @cgpixel6745
      @cgpixel6745 11 days ago

      thanks bro

  • @takimdigital3421
    @takimdigital3421 11 days ago

    Brother, you're the man! Excellent delivery, consistently on point.

    • @cgpixel6745
      @cgpixel6745 11 days ago

      Thanks bro, if you need anything I'm here

    • @takimdigital3421
      @takimdigital3421 10 days ago

      @@cgpixel6745 Thanks bro

  • @haya3sa
    @haya3sa 12 days ago

    Is LTX okay for making stylized or illustration-style video, like AnimateDiff? I like LTX because it is fast; just wondering, has anyone tried it for a non-realistic style?

    • @cgpixel6745
      @cgpixel6745 12 days ago

      Yes, it is suitable for both realistic and anime styles

  • @OLandry908
    @OLandry908 12 days ago

    Even though I installed the Florence2 node, I still got the Missing Node Type error saying Florence2ModelLoader was not found. Do you have any clue how to fix that?

    • @cgpixel6745
      @cgpixel6745 12 days ago

      Replace that node with DownloadAndLoadFlorence2Model and it should work

    • @OLandry908
      @OLandry908 7 days ago

      @@cgpixel6745 Thank you so much!!! It finally worked!

  • @OLandry908
    @OLandry908 12 days ago

    Even though I installed the Florence2 node, I still got the Missing Node Type error saying Florence2ModelLoader was not found. Do you have any clue how to fix that?

    • @haya3sa
      @haya3sa 12 days ago

      Yeah, don't use that node. Uninstall it and just replace it with a regular Load CLIP and CLIP Text Encode.

    • @cgpixel6745
      @cgpixel6745 12 days ago

      You can try these nodes: github.com/pythongosssss/ComfyUI-WD14-Tagger; they do the same thing

  • @averdadeestanua8570
    @averdadeestanua8570 12 days ago

    Got this warning: "FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers"

    • @cgpixel6745
      @cgpixel6745 12 days ago

      You can try to post it here; maybe the developer can help us: github.com/Lightricks/LTX-Video/issues

  • @dadadies
    @dadadies 12 days ago

    Thanks for sharing. I haven't been able to get 0.9.1 to work on my 6GB laptop, but I'll keep trying to figure it out. Maybe your workflow might help.

  • @tenaciousdean6179
    @tenaciousdean6179 12 days ago

    I got this error: "Unexpected architecture type in GGUF file, expected one of flux, sd1, sdxl, t5encoder but got 'hyvid'"?

    • @cgpixel6745
      @cgpixel6745 12 days ago

      Update your GGUF nodes

  • @gardentv7833
    @gardentv7833 13 days ago

    Great work!

    • @cgpixel6745
      @cgpixel6745 12 days ago

      thanks

  • @jriker1
    @jriker1 13 days ago

    Can you use a LoRA of yourself, added into ComfyUI, with the image creation currently being used for video?

    • @cgpixel6745
      @cgpixel6745 12 days ago

      Yes, you just have to create the LoRA, then add it to the checkpoint

  • @AlphaGirth
    @AlphaGirth 13 days ago

    *This is your pilot speaking

    • @cgpixel6745
      @cgpixel6745 12 days ago

      Yeah, my mic got sick lol

  • @ChrissyAiven
    @ChrissyAiven 14 days ago

    MultiAreaConditioning node: is it still working? I cannot get it to work :(

    • @cgpixel6745
      @cgpixel6745 12 days ago

      Try posting your problem here: github.com/Davemane42/ComfyUI_Dave_CustomNode/issues; maybe someone can help us. For me the installation was easy

    • @ChrissyAiven
      @ChrissyAiven 12 days ago

      @@cgpixel6745 What version of ComfyUI are you using? I read a lot on Reddit that it stopped working for a lot of people. I use the new ComfyUI Desktop.

  • @manikanta3977
    @manikanta3977 15 days ago

    @cgpixel6745 I tried the same but it's not working; it failed at the KSampler step with the error "all tensors need to be on the same GPU"

  • @korner8712
    @korner8712 15 days ago

    Great video. Your pixelation problem might be related to a mismatch in FPS. In the T2V section you specified 24 FPS, but in Upscale 30 FPS. I changed it to 24 in the Upscale section and everything looks good.

    • @cgpixel6745
      @cgpixel6745 12 days ago

      Oh thanks buddy, I did not notice that

  • @geoffphillips5293
    @geoffphillips5293 16 days ago

    Thanks, I hadn't realised you can do video-to-video with GGUF. But of course you can feed the video into the latent space. However, I don't see where you are setting the denoise value. This would seem to be important: setting it too high will just ignore the input video, and too low will keep it too much the same. Anyhow, the simple workflow still works, so it is not an issue for me. I think the regime of trying to do everything all at once in a single workflow is a bad idea, but that's what most other YouTubers are doing too. Better to keep a single workflow that does one thing; most of one's results are thrown away, and the ones you want to keep can then be thrown into RIFE or upscale, and that keeps the main process minimal.

    • @cgpixel6745
      @cgpixel6745 12 days ago

      Well, you can use the bypass muter to disable some groups; it would be redundant for me to cover txt, img, and vid each in a separate video tutorial, which is why I grouped everything

  • @borqi
    @borqi 17 days ago

    Would it be possible to designate specific LoRAs for each area? E.g. couples pictures where I can choose to use personal LoRAs for each person. Thanks for the video. Subscribed.

    • @cgpixel6745
      @cgpixel6745 12 days ago

      Well, for now it is not possible, but I think it will be in future updates

  • @abhinavchhabra4947
    @abhinavchhabra4947 18 days ago

    Hey, that's a great and quick video. I am not able to find the CLIPTextEncodeFlux node in the manager/custom nodes; do you know what might be the issue?

    • @cgpixel6745
      @cgpixel6745 12 days ago

      Use the normal CLIPTextEncode; it should work

  • @linoeleven
    @linoeleven 18 days ago

    Any suggestions after re-installing? This keeps giving me a conflicted nodes error for Florence2Model. Conflicted Nodes: DownloadAndLoadFlorence2Model [comfyui-tensorops] Florence2Run [comfyui-tensorops]

    • @cgpixel6745
      @cgpixel6745 12 days ago

      In that case use DownloadAndLoadFlorence2Model as the model loader; maybe it could work

  • @pokis50
    @pokis50 18 days ago

    Can one set a starting image for the text-to-video, making it image-to-video instead? If one does not want to use generated text, but the image itself as the base.

    • @cgpixel6745
      @cgpixel6745 12 days ago

      Yes, that's the goal of img2vid; is that what you are looking for?

  • @Mopantsu
    @Mopantsu 19 days ago

    It's not true img2vid. It only uses the LLM to describe the image. It does not take the actual image and animate it.

    • @cgpixel6745
      @cgpixel6745 12 days ago

      It depends on the image resolution; in some cases it failed to animate it, but most of the time it worked for me

  • @Daddydavidodave-zv5wy
    @Daddydavidodave-zv5wy 19 days ago

    I was getting the error "SamplerCustomAdvanced: Required input is missing: latent_image". I had to manually connect EmptyHunyuanLatentVideo.Latent to SamplerCustomAdvanced.latent_image. Is this the right way, or am I missing something?

    • @cgpixel6745
      @cgpixel6745 19 days ago

      @@Daddydavidodave-zv5wy Yes, I forgot to plug it in, since I used the video-to-video group for my last generation

    • @Daddydavidodave-zv5wy
      @Daddydavidodave-zv5wy 19 days ago

      @@cgpixel6745 Thanks! Learned a lot from this workflow.

  • @Dangerkabaap
    @Dangerkabaap 20 days ago

    Your videos are always exciting... except they never work, lol

    • @Dangerkabaap
      @Dangerkabaap 20 days ago

      I mean the workflow never fucking works

    • @cgpixel6745
      @cgpixel6745 20 days ago

      What error did you get ?

  • @cargas
    @cargas 20 days ago

    How do you add LoRAs to GGUF?

    • @cgpixel6745
      @cgpixel6745 12 days ago

      I am working on it; I will do a tutorial on that

    • @zerilis
      @zerilis 9 days ago

      I was able to put a "LoraLoaderModelOnly" node between "Unet Loader (GGUF)" and "ModelSamplingSD3" and used a Hunyuan-specific LoRA, and it seemed to work.

  • @697_
    @697_ 21 days ago

    Great tutorial, it was very easy to follow along! Thanks!

    • @cgpixel6745
      @cgpixel6745 20 days ago

      @@697_ Stay tuned, more tutorials on video generation are coming

  • @victorwijayakusuma
    @victorwijayakusuma 22 days ago

    Thanks man! Quick question here: how do you show the GPU, CPU, RAM, etc. workload percentages there? I have installed Crystools, but after the update a few months ago it doesn't show in my ComfyUI

    • @cgpixel6745
      @cgpixel6745 12 days ago

      You have to activate it in the ComfyUI settings

  • @alexandreb.8350
    @alexandreb.8350 24 days ago

    Background removal is OK, but not background replacement; is that a problem with Load CLIP Vision? I have chosen clip-vision-vit-h.safetensors

    • @cgpixel6745
      @cgpixel6745 12 days ago

      Try to update your BRIA nodes; it is not related to CLIP Vision

  • @alexandreb.8350
    @alexandreb.8350 24 days ago

    It doesn't work for me. Which file goes under "IP ADAPTER.SAFETENSORS"? I can't find it on Hugging Face or GitHub. When I change the background it stays black.

  • @petertucker455
    @petertucker455 29 days ago

    The workflow seems to get stuck on "Requested to load MochiTEModel_" (terminal), and in the GUI it shows it's on CLIP Text Encode (Negative Prompt)

  • @80oo0oo08
    @80oo0oo08 1 month ago

    We need the new workflow version 🙏 TripleCLIPLoader does not work with the new ComfyUI (v0.3.8) and Flux; it generates a black image. If I change to DualCLIPLoader it works, but loses a dimension. How do I solve this?

    • @cgpixel6745
      @cgpixel6745 12 days ago

      In that case change your VAE

  • @alexandreb.8350
    @alexandreb.8350 1 month ago

    Hello, thanks for your work. I have 2 problems: 1/ I get the failure message "KSampler index is out of bounds for dimension with size 0", and the UPSCALE IMAGE BY node has a red circle on the IMAGE parameter. What can I do? I have the same prompt as yours, but you have no failure with this UPSCALE IMAGE BY node. 2/ What sort of mp4 do I have to supply in the LOAD VIDEO (PATH) node?

    • @Treybradley
      @Treybradley 12 days ago

      I'm getting the same error message: failed "KSampler index is out of bounds for dimension with size 0". It was working initially; after some time this error came randomly. Now I'm re-downloading all the files/models to try again

    • @cgpixel6745
      @cgpixel6745 12 days ago

      Reduce the upscaling factor to 1.5

    • @Treybradley
      @Treybradley 12 days ago

      @ I disabled that node and all the upscaling nodes. The error occurs at the first KSampler

    • @Treybradley
      @Treybradley 9 days ago

      @@cgpixel6745 By the way, I just found out from a Reddit post that it's a problem with AnimateDiff Advanced, and the creator might be working on an update

  • @MindfulMusingsTome
    @MindfulMusingsTome 1 month ago

    What's your GPU? How much VRAM?

    • @markdkberry
      @markdkberry 1 month ago

      Literally the most important question, not answered; but in the video he reckons an RTX 3060 with 6GB VRAM, done in 6 minutes at 100 steps.

    • @MindfulMusingsTome
      @MindfulMusingsTome 1 month ago

      @markdkberry Tried it myself now: 100 steps in 2-3 minutes on an RTX 3070

    • @cgpixel6745
      @cgpixel6745 12 days ago

      Yes, I have the RTX 3060 6GB laptop version

  • @trsd8640
    @trsd8640 1 month ago

    Great video! I like that you added the prompt generator and frame interpolation. That makes a huge difference. AI upscale would be great too! Thumbs up!

    • @cgpixel6745
      @cgpixel6745 12 days ago

      thanks