JanRT
  • 11
  • 69 794
ComfyUI Stable Cascade workflow
2/20: Models updated for ComfyUI; you can switch to the usual Load Checkpoint node using the models below:
huggingface.co/stabilityai/stable-cascade/tree/main/comfyui_checkpoints
2/18: CascadeSampling node added by ComfyUI; workflow updated.
ComfyUI just added native support for Stable Cascade. This is an example workflow.
Stable Cascade Github
github.com/Stability-AI/StableCascade/tree/master?tab=readme-ov-file
Cascade model huggingface
huggingface.co/stabilityai/stable-cascade/tree/main
ComfyUI-Easy-Use
github.com/yolain/ComfyUI-Easy-Use
Workflow:
drive.google.com/file/d/1FwhBn6KQuKXMU18CMs8rRpnG9kQAhx7T/view?usp=sharing
00:00 Introduction
00:44 Workflow walkthrough
01:12 Examples with different styles and text generation
Folder structure:
ComfyUI\models\
├─clip\Stable-Cascade\model.safetensors
├─unet\stage_c_bf16.safetensors
├─unet\stage_b_bf16.safetensors
└─vae\stage_a.safetensors
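A quick sanity check for the tree above (a minimal sketch; the `ComfyUI/models` root is an assumption, point it at your own install):

```python
from pathlib import Path

# Expected Stable Cascade files, matching the folder tree above.
# "ComfyUI/models" is an assumed root - adjust for your installation.
root = Path("ComfyUI/models")
expected = [
    root / "clip/Stable-Cascade/model.safetensors",
    root / "unet/stage_c_bf16.safetensors",
    root / "unet/stage_b_bf16.safetensors",
    root / "vae/stage_a.safetensors",
]
missing = [p for p in expected if not p.exists()]
for p in missing:
    print("MISSING:", p)
```

If any line prints, the loaders will fall back to whatever default ComfyUI picks, which is a common cause of the channel-mismatch errors discussed in the comments below.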
#comfyui #stablecascade #stablediffusion
Views: 4,879

Videos

AnimateDiff SparseCtrl RGB w/ single image and Scribble control
6K views · 7 months ago
AnimateDiff v3 SparseCtrl RGB w/ a single image and Scribble control for smooth, flicker-free animation generation. This is an update of the previous ComfyUI SparseCtrl workflow that generates animation from just one image as the starting or ending frame. Also covers OpenPose guidance, SparseCtrl scribble redraw, and the effect of total frames. Updated AnimateDiff Evolved to the Gen2 node sets. A...
ComfyUI workflow for RAVE: Temporally Consistent Video Editing
2.3K views · 7 months ago
ComfyUI workflow for RAVE (Randomized Noise Shuffling for Fast and Consistent Video Editing). Lightweight, but works quite well for style transfer and object replacement, producing flicker-free results. The workflow is walked through step by step at the end. Added SDXL FaceID and ReActor. RAVE Github: github.com/rehg-lab/RAVE Comfyui RAVE: github.com/spacepxl/ComfyUI-RAVE Comfyui noise: github.com/BlenderNeko...
ComfyUI Hand Correction Workflow - HandRefiner
10K views · 8 months ago
ComfyUI workflow w/ HandRefiner for easy and convenient hand correction. HandRefiner Github: github.com/wenquanlu/HandRefiner Controlnet inpaint depth hand model: huggingface.co/hr16/ControlNet-HandRefiner-pruned/tree/main comfyui_controlnet_aux: github.com/Fannovel16/comfyui_controlnet_aux Mesh Graphormer (FYI): github.com/microsoft/MeshGraphormer Workflow: drive.google.com/file/d/11...
AnimateDiffv3 faceID and ReActor
3.1K views · 8 months ago
Happy new year everyone! This video covers AnimateDiff v3 w/ IPAdapter FaceID and ReActor for creating animations from a reference face picture, with face swap and face analysis. Both ReActor and FaceID use insightface, so please install it first. IP-Adapter-FaceID model: huggingface.co/h94/IP-Adapter-FaceID ComfyUI_IPAdapter_plus: github.com/cubiq/ComfyUI_IPAdapter_plus comfyui-reactor-node: githu...
Comfyui AnimateDiffv3 RGB image Sparse Control
7K views · 8 months ago
AnimateDiff v3 RGB image SparseCtrl example: ComfyUI workflow w/ OpenPose, IPAdapter, and Face Detailer. SparseCtrl Github: guoyww.github.io/projects/SparseCtrl/ AnimateDiff v3 model: huggingface.co/guoyww/animatediff/tree/main ComfyUI-Advanced-ControlNet: github.com/Kosinkadink/ComfyUI-Advanced-ControlNet Workflow: drive.google.com/file/d/1X5DqLOOYUvcM5z9Wsx_MJJm_-sdEtEff/view?usp=sharing 00:0...
Comfyui AnimateDiff v3 + LCM Video to Video
18K views · 8 months ago
AnimateDiff v3 released; here is one ComfyUI workflow integrating LCM (latent consistency model), ControlNet, IPAdapter, Face Detailer, and an auto folder-name parser, with animation and realistic demos. AnimateDiff v3 model: huggingface.co/guoyww/animatediff/tree/main IPadapter models: huggingface.co/h94/IP-Adapter Vit-H model (SD1.5, I renamed): huggingface.co/h94/IP-Adapter/resolve/main/models/image_enco...
LucidDreamer: 3D scene generation based on Stable Diffusion + Gaussian Splatting
3.3K views · 8 months ago
LucidDreamer, a project for 3D scene generation from just one picture and a single text prompt, through Stable Diffusion inpainting/outpainting, monocular depth estimation, and Gaussian splatting; local installation and visualization. LucidDreamer: github.com/luciddreamer-cvlab/LucidDreamer 00:00 Introduction of LucidDreamer & Gaussian splatting - Workflow & demos: 01:03 sd-webui to generate ...
ComfyUI MagicAnimate + DensePose - Local Install
7K views · 8 months ago
ComfyUI workflow for MagicAnimate and SDXL turbo Face detailer, and local install tutorial for MagicAnimate and Vid2Densepose to convert customized Dense Pose videos. 00:00 Introduction 00:44 ComfyUI MagicAnimate workflow SDXL turbo Face Detailer 03:06 Installation of MagicAnimate 04:14 Installation of Vid2Densepose (Detectron2) Workflow: drive.google.com/file/d/10cDn0w89CiXidhMkLW0XCefsn8eyMZS...
ComfyUI AnimateDiff Flicker-Free Inpainting
5K views · 9 months ago
ComfyUI workflow with AnimateDiff, Face Detailer (Impact Pack), and inpainting to generate flicker-free animation, blinking as an example in this video. Workflow: drive.google.com/file/d/1TRlhp1oLQwqwFGWZ-_gXplRIp3Uo7WXX/ Batch Prompt example: "0" :"1girl, dynamic angle, octane render, Velvia, official art, unity 8k wallpaper, ultra detailed, aesthetic, (masterpiece:1.1), (best quality:1.1), Gh...
Comfyui Realtime LCM with Photoshop, Blender, C4D, Zbrush, Maya...
3.2K views · 9 months ago
A workflow integrating Latent Consistency Models (LCM) with a screen-share custom node, generating realtime pictures from paintings in Photoshop and 3D models in Blender, C4D, ZBrush, Maya, etc. Enjoy! Workflow: drive.google.com/file/d/1RaUSzTz4pg4f3pxDDfmzH78GHzALnmyR/ comfyui-mixlab-nodes: github.com/shadowcz007/comfyui-mixlab-nodes or install through the ComfyUI Manager

Comments

  • @NinzyaCat · 7 hours ago

    the most talentless and stupid workflow I've ever seen

  • @rachinc · 19 days ago

    This workflow leaves me with so many questions that your video didn't cover. In the CR prompt text box, what am I supposed to put? Positive prompts? Right now it links back to your computer: it says D:/Program Files/ComfyUI_windows_portable/ComfyUI/output/ADiff/JanRT_P07/. What about the next CR prompt text box, "imgs, openpose," do I leave that as is? What about the Output_Folder text box? The Prefix text box? I put an image in the load image box, but where do you load the video that the face should be applied to? I hit Queue Prompt and it said "Prompt executed in 35.51 seconds", but it didn't produce anything.

  • @rachinc · 19 days ago

    Can you update your workflow? When I opened it, it told me I didn't have InsightFaceLoader, IpAdapterApplyFace and IPAdapterApply, and I was like, that's impossible! There's no way I'm missing IPAdapterApply! And I'm almost certain I've used FaceLoader recently, so how is this possible? So I double-clicked in the empty space to bring up the search bar, typed in IPAdapter FaceID, and was able to load that node box from scratch. I manually swapped in replacements for IpAdapter InsightFace Loader and IPAdapter FaceID, but I'm not sure where to put IPAdapter Apply, because the red node boxes didn't look like IpAdapter Apply yet it told me I was missing it (when I wasn't). That's the first time I realized a workflow could falsely tell me I'm missing something. Maybe it's a deprecated version?

  • @Byrdfl3wsNest · 20 days ago

    Fantastic! Thank you for the epic tutorial. This is a game changer. Liked! Subscribed!! I wonder, could some form of this workflow be used to fix hands on images that have previously been rendered?

  • @sigmareaver680 · 1 month ago

    Just to make sure, the HandRefiner Github isn't needed to use this, is it? I'm assuming that's just the repo for the white paper, but I want to make sure before I try this.

  • @fengxu6967 · 1 month ago

    Hello, I followed your instructions for installation, but the textures in the generated scene's PLY file are incorrect. Could you help me troubleshoot this issue?

  • @ShengzhuPeng · 2 months ago

    Hi! I’m interested in a business collaboration. Could you please share your email? Thanks

  • @RhapsHayden · 3 months ago

    Is this still working for anyone? I did a fresh install of Comfy + Python 3.10.10 and it still cannot load.

  • @MrMertall · 4 months ago

    I keep getting the below error despite having installed YACS already: No module named 'yacs.config'
      File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
      File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
      File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
      File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\mesh_graphormer.py", line 66, in execute
        from controlnet_aux.mesh_graphormer import MeshGraphormerDetector
      File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\mesh_graphormer\__init__.py", line 5, in
        from controlnet_aux.mesh_graphormer.pipeline import MeshGraphormerMediapipe, args
      File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\mesh_graphormer\pipeline.py", line 12, in
        from custom_mesh_graphormer.modeling.hrnet.config import config as hrnet_config
      File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mesh_graphormer\modeling\hrnet\config\__init__.py", line 7, in
        from .default import _C as config
      File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mesh_graphormer\modeling\hrnet\config\default.py", line 15, in
        from yacs.config import CfgNode as CN
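An "installed but not found" error like this usually means yacs went into the system Python rather than ComfyUI's embedded one. A sketch of the usual fix, assuming the standard Windows portable layout (paths are an assumption; adjust for your install):

```shell
# Run from the ComfyUI_windows_portable folder.
# The portable build ships its own Python, so install into it directly:
python_embeded\python.exe -m pip install yacs
```

After this, restart ComfyUI so the custom node imports again.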

  • @ahmadzaini · 4 months ago

    Thank you man, great job! But on my PC the IPAdapterApply node is missing and turns red, and when I try to replace it with the IPAdapterAdvance node, I'm missing the 'insightface' input. Do you know how to solve this problem?

  • @_gr1nchh · 4 months ago

    Getting a runtime error: "mat1 and mat2 shapes cannot be multiplied". Any idea as to what could be causing this?

  • @ryuktimo6517 · 4 months ago

    This does not work on resting hands, only raised hands.

  • @qus123 · 4 months ago

    What can you put in the prompt that goes into unsampler?

  • @voxyloids8723 · 5 months ago

    Still trying to make a mesh from it

  • @caoonghoang5060 · 5 months ago

    When loading the graph, the following node types were not found: IPAdapterApply. I have fully installed the nodes and still get this error.

  • @cinematic_monkey · 5 months ago

    This is too hard to follow. You should build it from scratch with every step shown.

    • @JanRTstudio · 5 months ago

      OK, I will consider it in the next video, thanks for the feedback

  • @Distop-IA · 5 months ago

    This channel is underrated. You're the goat @JanRTstudio

    • @JanRTstudio · 5 months ago

      Thank you, so glad to hear that!

  • @renanarchviz · 5 months ago

    On mine it appears in the Install Custom Nodes tab, with a red band showing a conflict. It does not appear on the canvas in ComfyUI.

  • @bradyee227 · 5 months ago

    Hi, I am getting this error, could you help please?
      File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 293, in _lazy_init
        raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled

    • @JanRTstudio · 5 months ago

      Hi, you are using the CUDA version of ComfyUI without torch-cuXXX installed (like the cu118 version). You can try running "your_ComfyUI_folder\update\update_comfyui_and_python_dependencies.bat". Are you using the portable version?
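If the updater doesn't resolve it, reinstalling the CUDA build of PyTorch into the embedded Python usually does. A sketch assuming the Windows portable layout and CUDA 11.8 (pick the index URL matching your CUDA version):

```shell
# Run from the ComfyUI_windows_portable folder.
python_embeded\python.exe -m pip uninstall -y torch torchvision torchaudio
python_embeded\python.exe -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```

The CPU-only wheel is what raises "Torch not compiled with CUDA enabled"; installing from the cu118 index replaces it with the GPU build.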

  • @voxyloids8723 · 5 months ago

    Can't find a practical use for it.

    • @JanRTstudio · 5 months ago

      I was trying to use it as a background in Blender, but the addon has import issues.

  • @hanygh2240 · 5 months ago

    thx

  • @MikevomMars · 5 months ago

    This workflow CREATES an image, but in most cases you'd want to load an EXISTING image to refine it 😐

    • @JanRTstudio · 5 months ago

      Yeah, similar suggestions in other comments; I will test and upload an img2img workflow.

  • @epelfeld · 5 months ago

    Thank you, it works great; there's just something wrong with the colors. They are too bright and the image is overexposed. Do you have an idea what's wrong?

    • @JanRTstudio · 5 months ago

      Sure! It might be the color match node; you can try a different reference picture for the color match.

  • @mkrl89 · 5 months ago

    Hi there! Great video though. I tried to follow it to install the MagicAnimate nodes but failed... Maybe you could help. Despite everything being downloaded, and the Manager showing MagicAnimate as installed, I am not able to find those nodes in Comfy. I even tried your workflow, but those nodes still appear as red boxes. My terminal shows this under the MagicAnimate node: "cannot import name 'PositionNet' from 'diffusers.models.embeddings'". I'd appreciate any ideas what's wrong :)

  • @mfb-ur7kz · 6 months ago

    I receive an error while trying the ScreenShare node: "Error accessing screen stream: NotAllowedError: Failed to execute 'getDisplayMedia' on 'MediaDevices': Access to the feature "display-capture" is disallowed by permission policy." Do you know what might cause this? Where can I enable display-capture? Thanks

    • @JanRTstudio · 5 months ago

      Hi, what's your system and browser? It seems the browser doesn't allow screen share. Are you using the server version or sd-webui-comfyui?

  • @leetotti3064 · 6 months ago

    When I run the workflow, it stops and shows: Error occurred when executing KSampler: mat1 and mat2 shapes cannot be multiplied (154x768 and 1280x2048)

    • @JanRTstudio · 6 months ago

      It seems the models/VAE don't match; can you please double-check that the models are in the same nodes as in the video?

    • @leetotti3064 · 6 months ago

      @JanRTstudio Thanks, I checked the models and VAE and it looks like that was the problem. It works now.

  • @kshabana_YT · 6 months ago

    you are a pro

  • @fabiotgarcia2 · 6 months ago

    Does it work on Mac M2?

    • @JanRTstudio · 6 months ago

      I think so. ComfyUI states support for M2 with any recent macOS version, and this is native support, so it should work, though I don't have a Mac to test it right now.

  • @VFXMinds · 6 months ago

    Hi, I'm getting a ComfyUI-Easy-Use "import failed" error and am not able to run the style selector node.

    • @JanRTstudio · 6 months ago

      Can you copy the error message from the command-line window regarding the import failure? That's strange if it was installed from the ComfyUI Manager.

    • @kuka7466 · 5 months ago

      @JanRTstudio SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)

    • @JanRTstudio · 5 months ago

      @kuka7466 Hi, can you check "Install Missing Custom Nodes" in the comfyui-manager menu? Generally it's missing nodes.

  • @nickchalion · 6 months ago

    Hi, and thank you for this vid. I have a very large error, can you help me please? Error occurred when executing KSampler: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 16, 24, 24] to have 4 channels, but got 16 channels instead and ...

    • @JanRTstudio · 6 months ago

      Sure! Very similar to the issue mentioned in another comment: can you double-check that the model names in the 4 black loader nodes (2 UNets, 1 VAE and 1 CLIP) are correct? Sometimes ComfyUI just inserts a default value if your model file location (0:31) is not correct.

    • @nickchalion · 6 months ago

      @JanRTstudio Thank you so much, that fixed it 😍

    • @JanRTstudio · 6 months ago

      @nickchalion Awesome!

  • @MikevomMars · 6 months ago

    The following node types were not found: StableCascade_StageB_Conditioning, StableCascade_EmptyLatentImage. Unfortunately, they aren't available in the manager 😐

    • @ischeka · 6 months ago

      Did you update ComfyUI itself? The Cascade nodes are native, not custom nodes, I believe.

    • @MikevomMars · 6 months ago

      @ischeka After updating ComfyUI, the nodes were available, BUT the workflow stops with an error: "Given groups=1, weight of size [320, 16, 1, 1], expected input[2, 64, 12, 12] to have 16 channels, but got 64 channels instead"

    • @JanRTstudio · 6 months ago

      @MikevomMars Can you double-check that stable_cascade is selected as the Load CLIP type, and reselect all the models in those 4 black loader nodes? Or just drag the downloaded workflow into ComfyUI again to reload it. I just updated ComfyUI but can't replicate your error; it seems something is wrong with the model loading.

    • @JanRTstudio · 6 months ago

      @ischeka Yep, thank you!

    • @MikevomMars · 6 months ago

      @ischeka Finally, it works - thanks for helping 😊👍 The issue was as follows: ComfyUI automatically filled the UNET, CLIP and VAE loaders, but for some strange reason it inserted the stage_a safetensor in the top UNET loader instead of stage_b. I had a hard time figuring out which safetensors go in which loader because they are so tiny in the video that they're hard to read. But it works now.

  • @Foolsjoker · 6 months ago

    Are people able to train on this yet?

    • @JanRTstudio · 6 months ago

      Yes, the training code has been released for LoRA, ControlNet and Stages B & C. You can find it under the "training" hyperlink on their GitHub page.

  • @sudabadri7051 · 6 months ago

    Lol, I was just banging my head against a wall trying to fix this. Thank you!

    • @JanRTstudio · 6 months ago

      😄 No problem my friend

  • @meywu · 6 months ago

    Please upload your videos in 4K so the node names are easier to read.

    • @JanRTstudio · 6 months ago

      I will try 1440p next time, limited by my monitor resolution 😅 Thanks for the suggestion.

  • @ehsankholghi · 7 months ago

    Thanks so much for your great tutorials. Is there any render-time limit in ComfyUI? I want to use a 32-second video at 30 fps (1000 PNGs) for video2video, but I got this error on my 3090 Ti: numpy.core._exceptions._ArrayMemoryError: Unable to allocate 6.43 GiB for an array with shape (976, 1024, 576, 3) and data type float32

    • @JanRTstudio · 7 months ago

      It seems 1000 frames might be a bit much; I haven't had a chance to try that yet, but you can try a lower fps, like 30 fps -> 10 fps, and do frame interpolation (ComfyUI-Frame-Interpolation VFI node) afterwards, so you only need to generate about 320 images at a time. I have a VFI node example in my RAVE video.
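The 6.43 GiB figure follows directly from the array shape in the error, and memory scales linearly with frame count, which is why dropping from 30 fps to 10 fps helps. A quick back-of-the-envelope check:

```python
# The frames are stacked into one float32 array of shape
# (frames, height, width, channels); float32 is 4 bytes per element.
frames, h, w, c = 976, 1024, 576, 3
gib = frames * h * w * c * 4 / 2**30
print(f"{gib:.2f} GiB")  # ~6.43 GiB, matching the error message

# A third of the frames needs a third of the allocation:
print(f"{gib / 3:.2f} GiB")
```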

    • @ehsankholghi · 6 months ago

      @JanRTstudio I got this error: numpy.core._exceptions._ArrayMemoryError: Unable to allocate 6.43 GiB for an array with shape (976, 1024, 576, 3) and data type float32

  • @jeffg4686 · 7 months ago

    Nice. Could you use Automatic1111 to train a LoRA for monkey hands using this as a base model? By "could you", I mean: do you think it's possible?

    • @JanRTstudio · 7 months ago

      I believe yes. Not sure about A1111, but you can find training with Python here: github.com/microsoft/MeshGraphormer/blob/main/docs/EXP.md

    • @jeffg4686 · 7 months ago

      @JanRTstudio - thanks

    • @JanRTstudio · 7 months ago

      @jeffg4686 No problem!

    • @jeffg4686 · 7 months ago

      @JanRTstudio - Nice. I might have to take a trip to the zoo, or even do some gens with DALL·E or something.

    • @JanRTstudio · 7 months ago

      @jeffg4686 Sounds good 😄

  • @sudabadri7051 · 7 months ago

    Another amazing video my friend ❤

    • @JanRTstudio · 7 months ago

      😀

  • @GggggQqqqqq1234 · 7 months ago

    Thank you Thank you Thank you.

    • @JanRTstudio · 7 months ago

      😀

  • @stephantual · 7 months ago

    You probably know this, but you could just use IPAdapter for the clothes at 1/0/1.0; it has a solid grasp of the image, and given that only a few frames are generated with very little movement, a simple mask will do (though COCO segmenting can also be implemented). Thank you!

    • @JanRTstudio · 7 months ago

      Right, I bypassed the IPA in the video; mask/segmenting is a good approach to try, thanks for the suggestion!

    • @stephantual · 7 months ago

      @JanRTstudio Thank you for the cool videos! Do you have an X account we can follow you on?

    • @JanRTstudio · 7 months ago

      @stephantual Sure! Just created one, JanRT111, will post updates there!

  • @ehsankholghi · 7 months ago

    I upgraded to a 3090 Ti with 24 GB. How much CPU RAM do I need for video-to-video SD? I have 32 GB.

    • @JanRTstudio · 7 months ago

      That's pretty cool! 24 GB is a lot. It's mainly GPU RAM used for rendering; if work spills over to CPU RAM, the speed drops dramatically, so 32 GB of RAM is good enough, you don't want to rely on it. With 24 GB you can try latent upscale to get high-resolution animation. I have just 12 GB of VRAM and can do 512x768, 100+ frames at a time. You are good to go!

  • @anoubhav · 7 months ago

    What is an unsampler?

    • @JanRTstudio · 7 months ago

      Sampling is a denoising process; an unsampler does the reverse and recreates the noise pattern from an image, which is then used to reconstruct the image with modified prompts.
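A toy sketch of the idea (all names and the linear stand-in "noise predictor" are invented for illustration; a real sampler calls the trained UNet here): the same deterministic DDIM-style update can walk either direction between noise levels, so unsampling an image and then sampling back approximately reconstructs it.

```python
import numpy as np

def eps_model(x):
    # stand-in noise prediction; in practice this is the trained UNet
    return 0.1 * x

def ddim_step(x, a_from, a_to):
    """One deterministic DDIM-style update from noise level a_from to a_to."""
    eps = eps_model(x)
    x0 = (x - np.sqrt(1.0 - a_from) * eps) / np.sqrt(a_from)  # predicted clean image
    return np.sqrt(a_to) * x0 + np.sqrt(1.0 - a_to) * eps

x = np.array([1.0, -2.0, 0.5])                         # "image"
noisier = ddim_step(x, a_from=0.95, a_to=0.90)         # unsample: toward noise
restored = ddim_step(noisier, a_from=0.90, a_to=0.95)  # sample: back toward image
print(np.max(np.abs(restored - x)))  # small residual: inversion is approximate
```

The round-trip is only approximate because the noise prediction is re-evaluated at a different point on the way back; changing the prompt between the two passes is what lets the reconstruction be steered.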

  • @tianxiangxu2288 · 7 months ago

    nice work

    • @JanRTstudio · 7 months ago

      Thank you!

  • @user-vj3bj7dd1x · 7 months ago

    Fantastic work! The only problem is that after running this workflow, the background is always a pure color, even if I add background info to the prompt. Could you share a fix?

    • @JanRTstudio · 7 months ago

      Thanks for the feedback! First, try bypassing the "AnimateDiff Loader" and setting "Input_Img_Cap" to 1, then run some single pictures to check whether the background is generated as you wish; change to different models if not. Or add the depth ControlNet with a very low strength, 0.2 for example. If you just want to restyle the source video, decrease the denoise value in the first KSampler, e.g. 0.6 - 0.8.

  • @typho0n5 · 7 months ago

    Error occurred when executing VHS_LoadImagesPath: directory is not valid: D:/Program Files/ComfyUI_windows_portable/ComfyUI/output/ADiff/JanRT_P05/
      File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 155, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
      File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 85, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
      File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 78, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
      File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite\videohelpersuite\load_images_nodes.py", line 143, in load_images
        raise Exception("directory is not valid: " + directory)
    I've tried this many times, but I really don't know how to fix it.

    • @JanRTstudio · 7 months ago

      You can change the first CR prompt text to your folder path "D:/ComfyUI_windows_portable/ComfyUI/output/ADiff/" and rerun from the beginning. Loading images from a folder other than the "output" folder inside ComfyUI usually causes an error; I think that's the reason.

  • @mick7727 · 7 months ago

    My brain always shuts down when I see ComfyUI. I started a week ago on A1111, so yeah, very early days!

    • @JanRTstudio · 7 months ago

      lol, yeah, get familiar with A1111 and you'll find ComfyUI is just those same options separated into nodes.

  • @ronnysempai · 7 months ago

    Good video, thanks

    • @JanRTstudio · 7 months ago

      Thank you!

  • @GfcgamerOrgon · 7 months ago

    It's unfortunate that crossed fingers are interpreted as a single hand at many angles; I wish they could fix this. Some training should probably be done, plus some signal to detect when one hand is under the other, because it deforms really badly, as if the person has only one hand! Gloves have also been a problem for me. It can still get better.

    • @JanRTstudio · 7 months ago

      Exactly, they mention this limitation. It works well for general poses, but you still need to fix things manually for crossed, overlapping, or partial hands, etc.

  • @sureshotmv8255 · 7 months ago

    Great content! Is there a guide on how SparseCtrl RGB/scribble actually works? What I mean is: how do you know it's placed on the first and last image? Can you place RGB Sparse Control on frames 1, 5, 7, 9, 15, 20? How?

    • @JanRTstudio · 7 months ago

      Thank you! Yes, that's controlled by the "Sparse method"; I am making another video about it and will cover these methods.

  • @risewithgrace · 7 months ago

    For some reason, even though I've successfully downloaded the ComfyUI Impact Pack, ComfyUI still says it's missing, so the node above SAMLoader in the Face Detailer section is red. Have you run into this issue?

    • @JanRTstudio · 7 months ago

      That's strange. Did you use the ComfyUI Manager to install Impact? Check the CMD window: during loading it will show "Import Failed" for the Impact Pack, and just before that it actually prints the error and the cause of the failure.

  • @sudabadri7051 · 7 months ago

    Is there any way to include IPAdapter FaceID SDXL in this, and a regional IPAdapter like in your AnimateDiff workflow?

    • @JanRTstudio · 7 months ago

      I haven't tried SDXL for RAVE but it should work; I will try and update the workflow if it does.

    • @JanRTstudio · 7 months ago

      I tried FaceID SDXL and it works; it seems best w/ ReActor and without Face Plus for the animation (not only the face), though that's based on just a few generations. The workflow link is added above, so you can give it a try. I will probably add the regional IPAdapter in a future post; it's in the Inspire pack, but I actually haven't used it yet.

    • @sudabadri7051 · 7 months ago

      @JanRTstudio You are awesome, I will test it and let you know how I go!

    • @sudabadri7051 · 7 months ago

      Can you add an option for me to give you money through YouTube or Patreon? Your work is excellent and I want to support you.

    • @JanRTstudio · 7 months ago

      @sudabadri7051 Thank you for your kind words; that is already great support for me, really appreciate it! I will consider adding it later.