Neuron
Germany
Joined Jan 19, 2017
This channel is all about being creative with AI tools. I love using open-source tools like ComfyUI, Automatic1111 or Blender.
FLUX TOOLS ControlNet in ComfyUI AI, DEPTH & CANNY, model & lora
This video gives a short overview of the new Flux.1 TOOLS ControlNet functions for FLUX.1 from Black Forest Labs.
Find more info on the FLUX tools here:
blackforestlabs.ai/flux-1-tools/
Download the models and put them in your ComfyUI models/unet folder:
FLUX DEPTH: huggingface.co/black-forest-labs/FLUX.1-Depth-dev
FLUX CANNY: huggingface.co/black-forest-labs/FLUX.1-Canny-dev
Download the LoRAs and put them in your ComfyUI models/loras folder:
DEPTH LORA: huggingface.co/black-forest-labs/FLUX.1-Depth-dev-lora
CANNY LORA: huggingface.co/black-forest-labs/FLUX.1-Canny-dev-lora
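The download-and-place steps above can be sketched in Python (a minimal sketch, not part of the video: the `~/ComfyUI` install path and the exact `.safetensors` filenames inside each repo are assumptions; check the Hugging Face pages for the real file names, and install `huggingface_hub` for a real run):

```python
from pathlib import Path

COMFY = Path.home() / "ComfyUI"  # assumption: adjust to your install path

# (repo id, filename, destination folder) for each download listed above;
# the filenames are guesses, verify them on the Hugging Face repo pages.
DOWNLOADS = [
    ("black-forest-labs/FLUX.1-Depth-dev", "flux1-depth-dev.safetensors", COMFY / "models" / "unet"),
    ("black-forest-labs/FLUX.1-Canny-dev", "flux1-canny-dev.safetensors", COMFY / "models" / "unet"),
    ("black-forest-labs/FLUX.1-Depth-dev-lora", "flux1-depth-dev-lora.safetensors", COMFY / "models" / "loras"),
    ("black-forest-labs/FLUX.1-Canny-dev-lora", "flux1-canny-dev-lora.safetensors", COMFY / "models" / "loras"),
]

def fetch_all(dry_run: bool = True) -> None:
    """Fetch each file into its ComfyUI folder; with dry_run it only prints."""
    for repo_id, filename, dest in DOWNLOADS:
        if dry_run:
            print(f"would fetch {repo_id}/{filename} -> {dest}")
        else:
            # requires: pip install huggingface_hub
            from huggingface_hub import hf_hub_download
            dest.mkdir(parents=True, exist_ok=True)
            hf_hub_download(repo_id=repo_id, filename=filename, local_dir=str(dest))

fetch_all()  # dry run by default; call fetch_all(dry_run=False) to download
```

The dry-run default is deliberate so the sketch can be read and tested without pulling multi-gigabyte files.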
Download FLUX here:
blackforestlabs.ai/#get-flux
Install Flux:
th-cam.com/video/JiFxw_CToFM/w-d-xo.html
Download the workflow on Patreon with BASE membership:
www.patreon.com/posts/118548571?
Support me on Patreon:
patreon.com/neuron_ai
Connect with me:
_neuron_ai
_neuron_ai
#comfyui #stablecascade #stablediffusion #tutorial #animatediff #automatic1111 #aiart #ai #inpaint #interpolation
Views: 23
Videos
FLUX TOOLS REDUX in ComfyUI AI, IPAdapter for FLUX, styletransfer
Views: 1.3K · 16 hours ago
This video gives a short overview of the new Flux.1 TOOLS REDUX functions for FLUX.1 from Black Forest Labs. It is similar to IPAdapter and we will create a simple style transfer workflow. Find more info on the FLUX tools here: blackforestlabs.ai/flux-1-tools/ Download the models: huggingface.co/black-forest-labs/FLUX.1-Redux-dev/tree/main huggingface.co/Comfy-Org/sigclip_vision_384/blob/main/s...
FLUX TOOLS FILL, inpaint & outpaint in ComfyUI AI, exchange cloth, expand background
Views: 3.9K · 14 days ago
This video gives a short overview of the new Flux.1 TOOLS fill functions for FLUX.1 from Black Forest Labs. It includes inpainting & outpainting, and I will show you how to use it to change clothes or expand the background of an image. Find more info here: blackforestlabs.ai/flux-1-tools/ Download the tools here: blackforestlabs.ai/#get-flux Install Flux: th-cam.com/video/JiFxw_CToFM/w-d-xo.html...
CogVideoX, replace missing nodes after update, repair broken workflow, ComfyUI AI, pose
Views: 2.4K · 21 days ago
In this tutorial I show you how to repair your CogVideoX pose workflows if they got broken after the recent update. The workflows I am talking about are explained here: th-cam.com/video/Ky4EZCON5ls/w-d-xo.html Be sure to install all the needed custom nodes, which are linked below. Please comment if you have questions or tell me your suggestions for future videos. Get the dancing girl video here: a...
New FLUX.1 AI TOOLS, Fill, Canny, Depth, Redux, IpAdapter for FLUX
Views: 2.2K · 28 days ago
This video gives a short overview of the new Flux.1 TOOLS functions for FLUX.1 from Black Forest Labs. It includes inpainting, outpainting, controlnets and some functions similar to IpAdapter. Find more info here: blackforestlabs.ai/flux-1-tools/ Download the tools here: blackforestlabs.ai/#get-flux Support me on Patreon: patreon.com/neuron_ai Connect with me: _neuron_ai twitter.c...
VID2VID, dancing animals with CogVideoX in ComfyUI AI, with auto pose creation
Views: 7K · a month ago
In this tutorial I walk you through the usage of CogVideoX in ComfyUI. We will create the workflow for making dancing animal videos with automatic pose creation. This is a 2 part series. The first part can be found here: th-cam.com/video/Ky4EZCON5ls/w-d-xo.html th-cam.com/video/QW7Pu9Uwlgw/w-d-xo.html Be sure to install all the needed custom nodes which are linked below. Please comment if you h...
CogVideo in ComfyUI AI, IMG2VID, free local video model, usage and installation
Views: 4.4K · a month ago
In this tutorial I walk you through the usage of CogVideoX in ComfyUI. This is a 2 part series. The second part can be found here: Pending... Be sure to install all the needed custom nodes, which are linked below. Please comment if you have questions or tell me your suggestions for future videos. The ComfyUI workflow can be downloaded with BASE membership on Patreon: www.patreon.com/posts/cogvi...
SD 3.5 MEDIUM is here, ComfyUI AI usage & comparison with SDXL
Views: 3.7K · a month ago
In this video I will show you how to install and use the new StableDiffusion 3.5 in ComfyUI. Be sure to update your ComfyUI to the latest version, otherwise you might get errors. The blog post at stability.ai and the page from the ComfyUI Github: stability.ai/news/introducing-stable-diffusion-3-5 comfyanonymous.github.io/ComfyUI_examples/sd3/ Get the models and put them in the models/c...
SD 3.5 large & turbo in ComfyUI AI, usage and installation
Views: 2K · 2 months ago
In this video I will show you how to install and use the new StableDiffusion 3.5 in ComfyUI. The blog post at stability.ai and the page from the ComfyUI Github: stability.ai/news/introducing-stable-diffusion-3-5 comfyanonymous.github.io/ComfyUI_examples/sd3/ Get the models and put them in the models/checkpoints folder of your ComfyUI installation: huggingface.co/stabilityai/stable-diffusion-3.5...
Create mockups for graphic design presentations in ComfyUI AI with FLUX 1, load prompt from textfile
Views: 2.2K · 2 months ago
In this video I will show you how to use ComfyUI to create graphic design mockups which you can use to present designs to customers or on your webpage. We will also implement functionality to load prompts from a textfile. Installation: If you haven't used FLUX so far, check out my video on how to use and install FLUX in ComfyUI before you watch this video. You will also find the needed models the...
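The load-prompts-from-a-textfile idea from this video can be sketched like this (a minimal sketch, not the actual ComfyUI node setup from the video; the file name and the comment convention are assumptions):

```python
from pathlib import Path

def load_prompts(path: str) -> list[str]:
    """Return one prompt per non-empty line; lines starting with '#' are skipped."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    return [ln.strip() for ln in lines if ln.strip() and not ln.lstrip().startswith("#")]

# Usage idea: each line of a prompts.txt becomes one prompt for a queued generation.
```

In ComfyUI the same logic typically lives in a text-file loader custom node; the sketch just shows the parsing step.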
FLUX negative prompt, how to, in ComfyUI AI
Views: 3.6K · 3 months ago
In this video I will show you how to use a negative prompt with FLUX in ComfyUI. Installation: If you haven't used FLUX so far, check out my video on how to use and install FLUX in ComfyUI before you watch this video. You will also find the needed models there: th-cam.com/video/JiFxw_CToFM/w-d-xo.html Get this workflow on Patreon with the base membership: www.patreon.com/posts/flux-negative-to-11...
Simple FLUX inpainting in ComfyUI AI
Views: 1.8K · 3 months ago
In this video I will show you how to build a simple inpaint workflow with FLUX. Installation: If you haven't used FLUX so far, check out my video on how to use and install FLUX in ComfyUI before you watch this video: th-cam.com/video/JiFxw_CToFM/w-d-xo.html Get the needed custom node: github.com/kijai/ComfyUI-KJNodes Get this workflow on Patreon with the free membership: www.patreon.com/posts/sim...
Vary subtle & vary strong functionality in ComfyUI AI, like in Midjourney
Views: 2.6K · 3 months ago
In this video I will walk you through a special workflow to recreate some Midjourney functionality I missed in ComfyUI. I will set up a workflow for subtle and strong variation. You will get full control over the amount of variation of your generated images. Please comment below if you have questions or want to tell me your suggestions for future videos. Get the needed custom node package: githu...
Deforum animation from start image with ComfyUI AI
Views: 2.1K · 3 months ago
In this video I will walk you through a deforum workflow to create an animation from a starting image inside ComfyUI. Please comment below if you have questions or want to tell me your suggestions for future videos. The deforum videos on my channel: Deforum base workflow: th-cam.com/video/zuAJExW_IPc/w-d-xo.html Deforum cadence interpolation: th-cam.com/video/fQQfMAQHc_E/w-d-xo.html Deforum IP...
FLUX V1 GGUF model in ComfyUI AI, for 8GB VRAM / GPU Ram / small VRAM
Views: 2.9K · 4 months ago
In this video I will show you how to use the Flux GGUF model version in ComfyUI. This version is optimized for small GPU VRAM and smaller GPUs with 8 GB. Installation: If you haven't used FLUX so far, check out my video on how to use and install FLUX in ComfyUI before you watch this video: th-cam.com/video/JiFxw_CToFM/w-d-xo.html Update your ComfyUI to the newest version. Update or Install the GGU...
FLUX V1 with IPAdapter in ComfyUI AI, DEV, SCHNELL, FP8
Views: 1.4K · 4 months ago
FLUX NF4 with ControlNet in ComfyUI AI, For smaller GPUs, low VRAM
Views: 4.2K · 4 months ago
FLUX FLUX FLUX - DEV & SCHNELL model with LORA in ComfyUI AI
Views: 1.5K · 4 months ago
Easy and fast text to 3D & image to 3D with TripoSR in ComfyUI AI
Views: 8K · 4 months ago
Create abstract animated video ControlNets in Blender 3D, for ComfyUI AI, Automatic1111, etc.
Views: 2.5K · 4 months ago
Insane morbid morphing animation with AnimateDiff, IpAdapter, ControlNet, in ComfyUI AI
Views: 921 · 4 months ago
Exchange objects by keyword in ComfyUI AI, SAM, Grounding Dino, Differential Diffusion
Views: 811 · 4 months ago
OpenSource AuraFlow V 0.1 & AuraSR upscaler model introduction and usage in ComfyUI AI
Views: 601 · 5 months ago
Shakker AI a CivitAI alternative for SD models, ComfyUI, A1111, new models, AI generator
Views: 957 · 5 months ago
IPAdapter Plus styletransfer with Deforum in ComfyUI AI
Views: 1.4K · 5 months ago
Cadence interpolation for Deforum in ComfyUI AI, smooth animation, consistent and coherent
Views: 1.7K · 6 months ago
Real Deforum for ComfyUI AI, infinite psychedelic zoom animation madness
Views: 9K · 7 months ago
Easy light transfer from image to image with ICLight in ComfyUI AI, Gaffer
Views: 865 · 7 months ago
Autodetect and remove unwanted objects in ComfyUI AI, impact, lama, yolo8, Ultralytics, SEGM
Views: 3.3K · 7 months ago
Animate IPadapter V2 / Plus with AnimateDiff, IMG2VID
Views: 3.3K · 7 months ago
Can I add more images (front, back, left, right, top, bottom) for a better result? Thanks
Not that I know of.
You do realize that joining for free is different from €3 a month?
I think I say base membership and not free.
List object has no attribute shape
top man!
video thumbnail dreams, video content reality
I would like to know how to change the background. Thanks for this tutorial.
Check out my change-background tutorial. Or you can simply mark the background in the mask editor and generate something new. There are also nodes like rembg which can remove the background.
Check my other videos
good simple and clear lesson!
FYI, there is an update of the CogVideoX wrapper which removes the fun sampler; the workflow needs an update.
th-cam.com/video/OcVagEvHoSA/w-d-xo.html
This node no longer works
Which one?
How can I fix KeyError: ((1, 1, 5), '|u1')?
Great video, thank you.
Thanks man, really helpful, liked & subscribed
I am probably doing something wrong. I could not get the workflow to work. Could you please share the workflow too? Thank you
Any error message? The workflow is linked below. It's available on Patreon.
@@neuron_ai thank you
Compared to Forge UI, how much slower is Flux in ComfyUI?
I can not say, I haven't used Forge so far.
comfy is faster
@@tibarry194 Thanks, I noticed a 30% faster speed in Comfy with dev bnb nf4, but now I wonder which GGUF model I should use on 12 GB VRAM, Q6 or Q8?
🥲 My TripoSR viewer is not showing the 3D. It just shows "loading scene...."
Check your ComfyUI terminal output when starting. There might be an error message for the TripoSR addon.
Anyone any idea why I get an error message saying that IC Light expects 3 channels but my input image has 4? Tried everything: removed alpha channels, changed the base model, tested the fbc and fc IC Light models. Nothing worked. My workflow does not include any other custom node.
Hey, very nice video. I have a task where I have 2 car images and I want to replace the car in one image with the car from the other. How can I do it?
You want to replace 1 car with two cars so that you have 3 cars?
@@neuron_ai I don't want to change the color of the car, I want to replace the Ferrari with the Bugatti. The input will be 2 images: the first an image in which the Ferrari is present, the second an image of the Bugatti. The output should just replace the Ferrari with the Bugatti; the environment/surroundings will stay the same as in the Ferrari image.
Nice tutorial, could you make a video on how to install everything from scratch?
You mean the whole comfyui or only the models and plugins?
@@neuron_ai Modules and plugins, I tried following you and installing the modules you linked in the description, but I don't have the same Sampler 😅
Where is the output folder?
In the ComfyUI main folder
THANK YOU!!! I couldn't figure this out for days.
Which version of Python does ComfyUI need to run under to install these nodes? Regards.
Sorry, I don't speak Spanish.
@@neuron_ai Google Translate: I simply don't exist
@@neuron_ai Mr.comment eraser
Do I need to generate a Tripo API Key for this workflow?
You don't need the API key. But the installation is difficult and the custom nodes might be broken. I will soon do a video on how to use the custom nodes which use the API key.
@@neuron_ai I have cloned the repo from github, installed CUDA and PyTorch, and the requirements from requirements.txt but when I try to run the prompt the TripoSR Viewer just says "loading scene..."
@@neuron_ai Thank you for the help. I could not get this workflow to function, but I am using another GitHub repo that worked connecting to Tripo. It's called ComfyUI_Tripo by VAST_AI_RESEARCH in case anyone needs it. It works with .png inputs or text to 3D model also
What is the use of the alpha channel as mask, any alternatives?
There is also an image-to-mask custom node
How can I make the character in a dance video perform at the original image size instead of a human size?
You mean more like a real cat? Unfortunately I didn't find a way to optimise things with CogVideoX so far.
Unfortunately I have the same problem trying to create a baby dancing on an adult dance video. It's impossible; it does not work with OpenPose or MimicMotion.
do not use sgm_uniform in sd
Why?
Thank you! Is it possible to convert OBJ meshes to GLB in ComfyUI? Thank you!
Might be possible with 3D-Pack. I would do this in Blender.
@ Thank you!
sooooooooo slowwwwwwwwww
Unfortunately :(
wtf is this? thumbnail has nothing to do with the video / clickbaiting in 2025?! seriously? this quality is crap
yes click unlike and they pay price xD
I saw your YouTube, looks great! I was wondering if you know of a Flux 1.1 dev node for LoRAs (16 b)? (ComfyUI) Happy you are showing great videos!
Thanks. I only know of the official LoRAs widely known atm.
I correctly installed the deforum "nodes", however, I receive this message in comfyui "(IMPORT FAILED) Deforum Nodes"
I sometimes need to do another refresh of the browser window. Does your manager show any message in the custom-node list at the Deforum position?
@@neuron_ai I've uninstalled it, installed it in Dovo, reset it a few times, but the message is always the same in the "manager", (IMPORT FAILED) Deforum Nodes. Try fix or uninstall. I've already clicked on "try fix" and it doesn't work either.
@@takemeon9312 Unfortunately, sometimes some custom nodes need special attention. Do you have access to a console or terminal where you can install Python packages?
It is November 6th and those nodes did the same thing to me where they just randomly connected to other nodes. I think it is something specific to KJNodes because it never happened until I installed them for your tutorial and it was the KJNodes that did it to you as well. Not a big deal of course but definitely a little annoying heheh
Yeah, it's annoying. Hope it will be fixed.
How do you get a Tripo API key to start using it in Comfy?
You get it from Tripo AI, but for the custom nodes in this tutorial you don't need it. Unfortunately this addon causes problems in the installation process, so it is not recommended at the moment.
Isn't Flowframes easier and better?
This video is about doing this in ComfyUI. There are a thousand ways of doing it differently.
Thanks for the comparison, very helpful! I guess they both have their advantages and disadvantages, but overall, on a strictly aesthetical level, I have to say I tend to prefer the SDXL outputs. For example, I found the statue faces too modern / photorealistic with 3.5, the colors overall too saturated, and, like you, the octopus very unconvincing. With regards to anatomy mistakes: on their civitai page, the SD people write: "We recommended to sample with Skip Layer Guidance for better struture and anatomy coherency." I downloaded the Skip Layer Guidance workflow they provide on their huggingface page, and I would agree that it definitely helps with anatomy, especially hands. (The workflow shows the output both with and without Skip Layer Guidance.)
Whether SD 3.5 will be in demand or not depends on ControlNet and A1111 support
Yes, and what the community is doing with it.
what about comfyui 3d pack?
I am working on this. Had some problems installing it, but will give it another try soon. Good idea.
@@neuron_ai Well, I'll watch the video for sure. I'd love a free way to get good 3D models.
Thanks. Despite the anatomical flaws it generates, SD 3.5 feels quite interesting. While it can't quite generate vehicles, multiple characters, or distant characters yet (at least based on my tests), I think overall, if the community decides to do some fine-tunes, it could probably generate even higher quality images. And at least it can already generate nice looking faces, skin shading and nature scenes, and unlike Flux Schnell, SD 3.5M is not a distilled model.
Yes, I would say the same. I hope for great community fine-tunes as well. We will see what time will bring.
It's better to wait an extra 60 seconds and not use Schnell models
@@havemoney ?
AlphaChanelAddByMask is just a big bug. Doesn't work.
Dude, how do you get the TripleCLIPLoader?
you might have to do an update
@neuron_ai I did that... that loader node doesn't exist
@@wrillywonka1320 It is a core node, so this is the way. Be sure to refresh and reload the ComfyUI page in your browser after upgrading ComfyUI; restarting the server is also needed.
@neuron_ai I did all that. Am I maybe not looking for the node in the correct location? Btw I appreciate your assistance. This kind of stuff drove me away from ComfyUI multiple times
@@wrillywonka1320 I usually double click on the background and search for it.
It gives an error message: "AlphaChanelAddByMask too many values to unpack (expected 2)" :/ Do you know any solution, please?
what kind of image do you use?
@@neuron_ai They are not your pictures. The background is a simple PNG, the "wizard" is a transparent PNG, without background.
@@neuron_ai I love this tutorial, but I can't use it. I read about this error and I tried to fix it, I even reinstalled my Python, it doesn't help. Is there any alternative to AlphaChanelAddByMask?
@@neuron_ai Would you mind replying? I want to do this so badly that I even installed a new Comfy. Thank you.
@@CsokaErno I can not reproduce what you are describing. Did you connect the mask output of the image loader to the mask input of the add-by-mask node? What kind of image are you loading? Did you try a different one?
Hi, I can't install or update the GGUF custom nodes to get the Unet Loader. I'm running ComfyUI from Pinokio. Any help please?
Fixed. I needed to update, restart, refresh
Unfortunately XmYx/deforum-comfy-nodes (IMPORT FAILED), and it seems that a few people on GitHub have issues too. Even trying the "Try fix" in Comfy does not solve the problem. I was so excited about this video. Will have to wait for a fix. Nice video btw.
Thanks. Is it possible to convert to 3D from multi-view images?
This should be possible theoretically, but I think this is not implemented and the development of these custom nodes has stalled... :(
I was looking around to find how to get rid of duplicated frames and found this. I have to say, you got the idea I need, but implemented it in an unnecessarily complicated way. First of all, you don't need a complicated diagram to load 2 adjacent frames. Just set the increment for them separately, starting with 0 and 1, and the max value at last frame - 1 and last frame. That will do the trick. Secondly, to get rid of the duplicated frame, just split the result batches and throw away the last frame. After the whole process is done, manually add the last frame. That is a lot easier and faster. Too many steps will cost more computing power, which is a big deal when we only have a low-end machine.
I might do it differently nowadays.
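The batch-merging trick described in the comment above (throw away each batch's duplicated last frame while merging, then manually add the final frame back) can be sketched as follows; a minimal sketch over plain Python lists, where in ComfyUI the same trimming would apply to image batches:

```python
def merge_batches(batches: list[list]) -> list:
    """Merge interpolated batches whose last frame equals the next
    batch's first frame: drop each batch's last frame while merging,
    then re-append the overall last frame once at the end."""
    merged = []
    for batch in batches:
        merged.extend(batch[:-1])  # throw away the duplicated last frame
    if batches:
        merged.append(batches[-1][-1])  # manually add the final frame back
    return merged
```

For example, batches [1, 2, 3], [3, 4, 5], [5, 6, 7] merge to [1, 2, 3, 4, 5, 6, 7].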
Hi, super useful. I have this error:
ComfyUI Error Report, Error Details:
Node Type: CRMPoseSampler
Exception Type: NotImplementedError
Exception Message: No operator found for `memory_efficient_attention_forward` with inputs: query/key/value shape=(1, 1024, 1, 512) (torch.float32), attn_bias: NoneType, p: 0.0.
`decoderF` is not supported because: max(query.shape[-1] != value.shape[-1]) > 128; xFormers wasn't build with CUDA support; attn_bias type is NoneType; operator wasn't built, see `python -m xformers.info` for more info.
`flshattF@0.0.0` is not supported because: max(query.shape[-1] != value.shape[-1]) > 256; xFormers wasn't build with CUDA support; dtype=torch.float32 (supported: torch.float16, torch.bfloat16); operator wasn't built.
`cutlassF` is not supported because: xFormers wasn't build with CUDA support; operator wasn't built.
`smallkF` is not supported because: max(query.shape[-1] != value.shape[-1]) > 32; xFormers wasn't build with CUDA support; operator wasn't built; unsupported embed per head: 512.
Stack Trace:
did you have errors when installing the custom nodes?
@@neuron_ai every node is visible and no problem installing. Think it requires a specific requirements combo...
Hi, I cloned KJNodes into the custom_nodes folder and did pip install -r requirements.txt and restarted SwarmUI, but I still cannot get that Grow Mask With Blur option. Please help
any error?
This has been very useful in the past couple of months. I realised that if you have a video as input or one made on the fly (like CogVideo5), you can just feed it into the interpolator and video-combine the output, and that just works without trouble. But I was making a flow that did something to the images and then wanted to interpolate that live, and couldn't work it out. So of course, I can make it save images out and then go through another stage like the above to make the interpolation. What happens is I get loads of tiny videos which are just each batch run of a single frame; what I wanted it to do was interpolate the image sequence and then save the video at the end without creating the intermediate images.
This is a step-by-step approach. Unfortunately this is not working with video output; you have to combine the frames into a video afterwards.
@@neuron_ai Thanks for the reply. After I wrote that, I tried something with "meta batch". This, combined with LoadVideo (to which it connects), had to be set to the right number of input frames in the video, and bingo, it worked!
@@geoffphillips5293 Oh, nice one. Didn't know this node. Will give it a try as well.
Can you share the workflow? I'm trying something similar but can't get it to work @@geoffphillips5293
"Value not in list: model: 'triposr.ckpt' not in (list of length 67) Output will be ignored" - what am I doing wrong?
Hey, sorry for the late response. Unfortunately TripoSR is quite tricky. This error sounds like your model is not in the right place. Did you check this?
Thanks! Actually this one seems to be working a lot better than Florence 2 for certain things.