Alex Villabon
LivePortrait For VFX - ComfyUI and Nuke
I spent the last few weeks digging into the possibilities and limitations of LivePortrait for us, the VFX crowd. In the video linked below I share all my findings and my workflow. I then hop into Nuke to improve on one of the tool's shortcomings.
My Workflow: github.com/avillabon/Tutorial_Files/tree/main/LivePortrait%20For%20VFX
LivePortrait Comfyui GitHub: github.com/kijai/ComfyUI-LivePortraitKJ
LivePortrait Non-Comfyui GitHub: github.com/KwaiVGI/LivePortrait
Paper: liveportrait.github.io/
00:00 Intro
05:00 Comfyui workflow explained
32:28 Nuke
43:06 Final Thoughts
#nuke #comfyui #vfx #liveportrait #copycat
Views: 1,133

Videos

Add Motion Blur To Your STMap Outputs - Nuke
581 views · 12 hours ago
TIMESTAMPS: 00:00 Intro 00:37 The problem 01:24 Option 1 03:43 Quick clarification 04:42 More options 07:01 The Kronos trick (my favorite) 10:00 Closing thoughts
Custom Cattery Model - Create High Detail Mattes With ViTMatte - Nuke
6K views · 14 days ago
ViTMatte Github: github.com/hustvl/ViTMatte ViTMatte for Nuke: github.com/rafaelperez/ViTMatte-for-Nuke 00:00 Intro 01:04 The Node 01:48 Example 1 06:52 Example 2 10:30 Example 3 12:54 Example 4 13:55 Example 5 14:47 Example 6 15:45 Final Thoughts
Custom Cattery Model - RIFE - Incredible Retimes
1.3K views · 21 days ago
Github: github.com/rafaelperez/RIFE-for-Nuke Paper: arxiv.org/abs/2011.06294v2
Render Different Streams Without Using Views - Nuke
577 views · 28 days ago
LivePortrait - Image to Video in ComfyUI
926 views · a month ago
Paper: liveportrait.github.io/ ComfyUI-LivePortraitKJ: github.com/kijai/ComfyUI-LivePortraitKJ KwaiVGI LivePortrait: github.com/KwaiVGI/LivePortrait Hugging Face: huggingface.co/spaces/KwaiVGI/LivePortrait 00:00 Intro 00:49 Paper 02:01 Important links 05:31 Demo in ComfyUI
Custom Cattery Model - Depth Anything V2 - Nuke
3.3K views · a month ago
Paper: github.com/DepthAnything/Depth-Anything-V2 Depth Anything V2 for nuke: github.com/rafaelperez/Depth-Anything-for-Nuke
LivePortrait... But With Video Inputs - Preview - Comfyui
983 views · a month ago
I was preparing a video on LivePortrait in ComfyUI and then I got my hands on this... LivePortrait, but inputting video instead of static images. I will cover this the moment it is fully released. This thing blew my mind.
New Cattery Models! Create Cryptomattes With Segment Anything - Nuke
1.9K views · a month ago
Download Segment Anything here: community.foundry.com/cattery/38594/segment-anything Research: segment-anything.com/
New Cattery Models! Inpaint Anything with LaMa - Nuke
3.3K views · a month ago
Get the model here: community.foundry.com/cattery/38593/lama
Render HDRIs With Unreal And Upscale Them With Comfyui
554 views · a month ago
Unreal HDR render upscaled version with workflow embedded: github.com/avillabon/Tutorial_Files/tree/main/Unreal HDR Ultra Dynamic Sky: www.unrealengine.com/marketpl... Ultra Dynamic Sky Tutorial: • Ultra Dynamic Sky - Product Video Q... Upscaler/Enhancer: github.com/roblaughter/comfyui-workflows?tab=readme-ov-file Upscaler/Enhancer Documentation: github.com/roblaughter/comfyui-workflows/blob/ma...
How I Create Skies In Unreal Engine And Enhance Them In ComfyUI
1.3K views · 2 months ago
My original render upscaled version with workflow embedded: github.com/avillabon/Tutorial_Files/tree/main/Unreal Skies To Comfyui Ultra Dynamic Sky: www.unrealengine.com/marketplace/en-US/product/ultra-dynamic-sky Ultra Dynamic Sky Tutorial: th-cam.com/video/b52npy-XUdQ/w-d-xo.html&ab_channel=EverettGunther Upscaler/Enhancer: github.com/roblaughter/comfyui-workflows?tab=readme-ov-file Upscaler/...
Nuke's Views Are Very Powerful - Not Just For Stereo
818 views · 2 months ago
Dissolve Between Cameras - Nuke
623 views · 2 months ago
Get the tool here: github.com/avillabon/Tutorial_Files/tree/main/Dissolve Between Cameras
Upscaling Footage - Nuke, Topaz Video AI and ComfyUI
2.1K views · 2 months ago
Spent a few hours looking at the different upscale options inside Nuke. Then I moved on to Topaz Video AI, and finally I show what is theoretically possible upscaling with ComfyUI SUPIR. Always curious to learn new things, so please reach out if there are other, better ways out there that I might not know about. 00:00 Intro 00:59 Reformat: Cubic 01:51 Reformat: Sinc4 05:16 TVIscale 06:20 Upscal...
Generate Normal Maps For Any Shot With ComfyUI and Nuke
6K views · 3 months ago
Render Overscan With Rayrender - Nuke
213 views · 3 months ago
Render Reflections in Nuke With RayRender - Nuke
405 views · 3 months ago
You Should Use Nuke's Cattery More - Nuke
1.4K views · 3 months ago

Comments

  • @diabeticbiker · 12 hours ago

    thanks alex

  • @destinpeters9824 · a day ago

    Thanks for taking the time to go into detail on the node widgets.

  • @incrediblesarath · a day ago

    Cool! Thank you.

  • @user-db2dl7wz8y · a day ago

    thanks for the info

  • @Odynophonia · 2 days ago

    You can also create a motion vector pass directly from the STMap. Ben McEwan has a great write-up on his blog.

    • @alexvillabon · 10 hours ago

      I saw that a while ago, but I believe (I could be wrong of course) that it's more of an approximation. I did try it, and I personally didn't find the results all that useful. Then again, this type of thing could be very shot dependent.
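
A note on the trick being discussed above: an STMap stores, for each output pixel, the normalized source coordinate it samples from, so differencing the map between consecutive frames gives an approximate motion-vector pass; it only captures motion that is encoded in the warp itself, which is likely why the results read as an approximation. A minimal NumPy sketch of the idea (the function name and array layout are illustrative assumptions, not taken from the video or the blog post):

    import numpy as np

    def motion_from_stmaps(stmap_curr, stmap_next, width, height):
        """Approximate pixel-space motion vectors from two consecutive STMaps.

        stmap_curr, stmap_next: (H, W, 2) float arrays of normalized UV lookup
        coordinates (0-1), the way an STMap stores them.
        Returns an (H, W, 2) array: how far each pixel's source point moves
        between the two frames, in pixels.
        """
        delta = stmap_next - stmap_curr   # change in UV per output pixel
        delta[..., 0] *= width            # U delta -> horizontal pixels
        delta[..., 1] *= height           # V delta -> vertical pixels
        return delta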

  • @metatrongroove2824 · 5 days ago

    Well explained! Thanks for sharing your process in detail.

  • @M_a_r_e_k_B · 5 days ago

    Hi Alex, the Motion Blur node works much better when you connect a vector generator to both inputs, one set to foreground and the other to background. Great videos by the way, all the best, cheers.

    • @alexvillabon · 3 days ago

      Thanks, there is always something new to learn!

  • @src1903 · 10 days ago

    Unfortunately I was not able to install it. I have experience installing many Cattery gizmos etc., but not this one :(

  • @mirzacreation4u · 12 days ago

    @alexvillabon How do I install ViTMatte in Nuke?

  • @mirzacreation4u · 12 days ago

    How do I install ViTMatte in Nuke 15?

  • @leeebible8956 · 13 days ago

    Please make an installation tutorial video 😂

  • @alexyauffeves3840 · 13 days ago

    It looks like a great tool! Unfortunately it is no longer possible to download it from GitHub. I'm getting this message: "There aren’t any releases here" 😢 I hope it comes back!

  • @CosmicSoundWithAI · 14 days ago

    Thank you, great tool!! And I want to ask you: which tool is shown at 10:39?

    • @alexvillabon · 14 days ago

      That would be w_hotbox. You can grab it here www.nukepedia.com/python/ui/w_hotbox/

    • @CosmicSoundWithAI · 14 days ago

      @@alexvillabon thank you <3

  • @KishoreKumarNethala · 14 days ago

    Where is it available? Send me the link.

  • @kobiohanna · 14 days ago

    Thank you! I tried it on an M1 Mac but I am getting errors right away... is this compatible with Mac as well?

    • @alexvillabon · 14 days ago

      Not sure, I've only ever used it on Windows and Linux machines.

    • @kobiohanna · 14 days ago

      @@alexvillabon OK, I tested it on Nuke 14 and it's working, but 15.1 is giving me some strange error.

  • @SteveErnst117 · 15 days ago

    I downloaded the files, including the extra ones, and put them in the vitmatte folder. When previewing the node, I get a really long error: Exception caught processing model: The following operation failed in the TorchScript interpreter ... RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same. Haven't seen anybody else with this problem, any idea what's causing it?

    • @SteveErnst117 · 15 days ago

      Ahh, okay apparently it was similar to somebody else's problem. I just had to reformat because my pixel aspect was 1.5 on this footage. Thanks!

    • @alexvillabon · 14 days ago

      Great!

    • @toxyne · 8 days ago

      @@alexvillabon I have the same error. How do I fix it?
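
As the thread above suggests, this TorchScript type error tends to show up when the footage going into the inference node has a non-square resolution or a pixel aspect other than 1; both commenters fixed it with a Reformat. A rough Nuke Python sketch of that workaround (the node names and the working-format size are placeholder assumptions, adjust them to your script):

    import nuke

    # Register a square, pixel-aspect-1 working format for the inference step.
    nuke.addFormat("2048 2048 1.0 vitmatte_square")

    read = nuke.toNode("Read1")          # your plate
    vitmatte = nuke.toNode("ViTMatte1")  # the Cattery inference node

    # Reformat the plate to the square working format before inference.
    pre = nuke.nodes.Reformat(format="vitmatte_square", resize="fit", black_outside=True)
    pre.setInput(0, read)
    vitmatte.setInput(0, pre)

The resulting matte can then be reformatted back to the original plate format downstream.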

  • @behrampatel4872 · 15 days ago

    Edit 02 - Works! I forgot to put a Roto node before the inference node. Cheers. End Edit 02
    Edit 01 - I downloaded the folder from git and put the external cat files in the same folder. Now the node is showing up in Nuke but it's not doing anything. End Edit 01
    Hi Alex. The node is not showing up in NukeX 15. The archive, which is linked to a Google Drive, only has a cat and a pt file, unlike the other folders that have icons and gizmo files. Did you also download the archive from the external link? Cheers, b

    • @alexvillabon · 14 days ago

      Nice! Happy to hear you got it working!

    • @behrampatel4872 · 14 days ago

      @@alexvillabon Here's a small hack. I got an initial pass via the segment anything node (thanks to your video) and used that as the soft outer matte described in this video. Cheers, b

    • @alexvillabon · 14 days ago

      @@behrampatel4872 smart!

  • @user-fw6qr1nr9e · 15 days ago

    Sorry, where can I find the cat file and where do I put it? I read the GitHub, but I don't understand ((

  • @andersnyman6320 · 15 days ago

    Works on CPU, but on GPU I got this message (Nuke 14, Windows): RuntimeError: Input type (CUDAFloatType) and weight type (CUDAHalfType) should be the same. Investigating what it could be...

    • @andersnyman6320 · 15 days ago

      Works now if I reformat to a square frame

    • @alexvillabon · 15 days ago

      Great!

  • @glmstudiogh · 15 days ago

    I wish there was a way it could output roto shapes so artists can take over and make corrections.

    • @papajekket · 8 days ago

      Well, technically you can try to extract an alpha from the shot and then use Auto-trace in After Effects to extract the shapes.

  • @Phuey · 15 days ago

    The Topaz AI plugin fails to load in my Nuke, what do I do?

    • @alexvillabon · 15 days ago

      I'm using Topaz as a standalone.

  • @huzainisahmawi · 15 days ago

    cool

  • @LFPAnimations · 16 days ago

    this is insanely good. Thanks for sharing

  • @LucasPfaff · 16 days ago

    I found the edge thickness to be not very useful tbh. Internally, it's just two FilterErodes - one expanding, one eroding the given matte - mixed so the larger one becomes the 0.5 the inference expects; after the inference, it gets masked by the larger one. So far, I always got better results by putting that to 0 and feeding a manually adjusted "double mask" with the "translucency range" set to 0.5.

    • @LucasPfaff · 15 days ago

      God, that sounded harsh. I meant that I, at least, get better results with a bit more setup; what you can pull out of the footage with only the most basic shapes is still insane.

    • @alexvillabon · 15 days ago

      Hey Lucas. I wasn't sure why you were annoyed at what I showed haha. Happy to hear that was not the case. Yes, what I show here just scratches the surface of what you can do. My intent with this video was to just show what you can achieve very quickly with very little effort; you can absolutely improve the results by taking a little more time and getting creative. For instance, I've had good success pairing it with CopyCat.

    • @LucasPfaff · 15 days ago

      @@alexvillabon Yeah, it was probably a bit too late to comment on YouTube, ha! Not at all annoyed. I'm really intrigued by it; today I tested a bit more with "bad mattes", and now I got an epiphany: you can do a really, really rough shape for your first slap comp and then just improve with better input later on when there's more time or whatever. I think I also commented under your last(?) video about using it to train further with CopyCat. I think this is a really mighty combo indeed!
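
Lucas's description above is essentially how a trimap is built: grow and shrink the rough matte and treat the band in between as the 0.5 "unknown" region the matting model expects. For anyone who wants to build that "double mask" by hand outside of the node, here is a small NumPy/SciPy sketch of the same idea (the function name and band sizes are illustrative, not the node's internals):

    import numpy as np
    from scipy import ndimage

    def trimap_from_matte(matte, grow=10, shrink=10):
        """Build a trimap from a rough matte: 1 = definite foreground,
        0 = definite background, 0.5 = unknown band for the matting model.

        matte: (H, W) float array in 0-1.
        grow/shrink: band size in pixels, roughly the node's edge thickness.
        """
        core = ndimage.binary_erosion(matte > 0.5, iterations=shrink)
        silhouette = ndimage.binary_dilation(matte > 0.5, iterations=grow)
        trimap = np.full(matte.shape, 0.5, dtype=np.float32)
        trimap[~silhouette] = 0.0   # well outside the grown matte: background
        trimap[core] = 1.0          # well inside the eroded matte: foreground
        return trimap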

  • @sen73nced · 16 days ago

    I think the download link is broken, like someone mentioned, it leads to the "There aren’t any releases here" page.

    • @alexvillabon · 15 days ago

      All the info is on the GitHub. Don't just jump to releases.

  • @AlexeyKorotkikh · 16 days ago

    Am I the only one who can't figure out how to download it?) The link in the description leads to "There aren’t any releases here"

    • @AlexeyKorotkikh · 16 days ago

      Well. Now I know, but it is quite tricky)

    • @alexvillabon · 15 days ago

      All the info is on the GitHub. Don't just jump to releases.

    • @sen73nced · 15 days ago

      @@AlexeyKorotkikh hey, did you find a way to download the Cattery folder?

    • @AlexeyKorotkikh · 14 days ago

      @@sen73nced Yes, there is no release zip. You need to download each file separately from the folder icon above the description.

  • @EspadaJusticeira · 16 days ago

    I am already using it to help key green screens and it works very well.

    • @alexvillabon · 16 days ago

      Indeed! I've used it on green screens before as well.

  • @ocdvfx · 16 days ago

    Even with the boiling, that's not bad at all! I would imagine you could throw it through a TemporalMedian and train CopyCat off 20-30 of the best frames and get a pretty solid result.

    • @alexvillabon · 16 days ago

      Absolutely!

  • @incrediblesarath · 16 days ago

    🤯Cool!

  • @zaferzipzip · 16 days ago

    Not working for me. I have other cat tools added as well, but this one is saying "no cat file is selected". Any solution?

    • @alexvillabon · 16 days ago

      There was another person commenting with the same issue, which has now been resolved. I have a feeling you didn't read the full instructions on the GitHub page. Take a look there.

  • @kobiohanna · 16 days ago

    Hi, thank you for the vid. I followed the instructions, but when adding the node I get the error 'no cat file is selected'. Any idea?

    • @alexvillabon · 16 days ago

      That's strange, sounds to me like you didn't download everything. Did you read and follow all the instructions on the GitHub? I've added it to a few computers and haven't had an issue.

    • @kobiohanna · 16 days ago

      @@alexvillabon Yes, you are correct, I forgot the cat file :) thanks!

    • @alexvillabon · 16 days ago

      @@kobiohanna great!

  • @LucasPfaff · 19 days ago

    Thanks for showing that off again! I had a look at his GitHub and saw ViTMatte, which does have incredible output on hair. I saw Julian Kreusser's tutorial on Advanced Removals on Foundry's Learning Channel, but instead of only using ModNet I keymixed it with ViTMatte (ViTMatte for fine detail/translucency like hair/motion blur, and ModNet for a fast solid body). Then I trained a CopyCat with that (like you also showed in the ComfyUI normal map video), and the output was intense. Using his trick of "stabilizing" it with the Denoise, I got a very decent matte for roughly an hour of work and 45 min of training on a 100-frame shot. Amazing what we can get these days.

  • @brunodelacalva6976 · 19 days ago

    Excellent. Thanks, Alex.

  • @user-vr9dj4eo8v · 23 days ago

    I love these videos

    • @alexvillabon · 23 days ago

      @@user-vr9dj4eo8v Thank you! I really enjoy making them.

  • @LFPAnimations · 27 days ago

    I have been using RIFE through flowframes for a while now. So cool to see it integrated into Nuke. It really is the best frame interpolation tool out there.

    • @alexvillabon · 23 days ago

      @@LFPAnimations I had never heard of flowframes! Thanks for pointing me in that direction :)

    • @LFPAnimations · 23 days ago

      @@alexvillabon It is a great free program for batching RIFE operations, but having RIFE in Nuke is probably even more useful

  • @user-db2dl7wz8y · 27 days ago

    looks good

  • @Osvaldsson · 29 days ago

    Versioning, it’s so easy now, thanks Alex!

  • @81sw0le · a month ago

    Just now came across your channel. Are you trying to figure out how to use comfy + ue5 + nuke? Not many are trying to innovate like that, and I'd love to know if this is the case, because I'm doing the exact same thing.

    • @alexvillabon · 16 days ago

      Hey! Yes, kind of. The equation is a bit more fluid than comfy + ue5 + nuke but I do try to make them work together where I can. Where can I see your work?

    • @81sw0le · 16 days ago

      @@alexvillabon I've only used Blender, AE, and Comfy stuff for actual projects so far. But I worked on a project called "Under the Black Rainbow" with a band called KING 810. You can watch the series here. The latest episode shows the best of what I've done.

  • @redbeard4979 · a month ago

    Thank you so much, Alex! It helped me a lot. I used RayRender for a matte painting and I had exactly this problem with overscan. I had to go back to the ScanlineRender because I didn't have time to deal with overscan for the RayRender. And now there is a solution.

    • @alexvillabon · a month ago

      @@redbeard4979 happy to hear it helped out :)

  • @RichardServello · a month ago

    For something like the runner it would work better if it could work temporally.

    • @alexvillabon · a month ago

      That's the natural evolution. So far no model does that, but I'm sure it's a matter of time.

  • @RichardServello · a month ago

    That LaMa result is very usable as a first pass. Much better than Photoshop's generative fill.

  • @behrampatel4872 · a month ago

    Hi, do you just drag the demo image into ComfyUI and let the Manager figure out the missing nodes? Or did you download the model files separately? Thanks.

    • @alexvillabon · a month ago

      I'd recommend you watch an intro to ComfyUI to get your feet wet. I didn't cover the basics of how to get ComfyUI working because there are a ton of channels out there doing this very well.

  • @ss_websurfer · a month ago

    where can I get the pmask node?

    • @alexvillabon · a month ago

      Any position mask node will do. There are a bunch on Nukepedia, such as pmatte.

  • @iamimpress · a month ago

    How much ram do you have and which GPU are you running?

    • @alexvillabon · a month ago

      @@iamimpress I have 64gb of ram and a 4090.

    • @iamimpress · a month ago

      @@alexvillabon thank you very much. Love the videos - just subscribed :)

    • @alexvillabon · a month ago

      @@iamimpress happy to hear it! Thank you

  • @THEJATOXD · a month ago

    Something I have been looking for for months; thanks a lot for the insight.

  • @kietzi · a month ago

    I see this in beauty retouches <3

  • @CarpeUniversum · a month ago

    Clever

  • @DreamsIllusions-k8t · a month ago

    Very nice and informative! Fantastic sharing, my friend!

  • @behrampatel4872 · a month ago

    I hope this gets better. However, if we get a clean matte for single frames, we could then use the output to train CopyCat. Cheers.

    • @alexvillabon · a month ago

      Agreed! I have something coming in the next couple of weeks that should be able to do just that :)

  • @AiLife115 · a month ago

    Could you please make a cat file for the "base" and larger models? Sorry for asking, I don't know any code or programming :(