All new Attention Masking nodes

  • Published on Sep 6, 2024

Comments • 165

  • @AthrunWilshire
    @AthrunWilshire 4 months ago +57

    The wizard says this isn't magic but creates pure magic anyway.

    • @latentvision
      @latentvision 4 months ago +24

      Any sufficiently advanced technology is indistinguishable from magic

  • @volli1979
    @volli1979 4 months ago +4

    6:05 "oh shit, this is so cool!" - nothing to add.

  • @NanaSun934
    @NanaSun934 4 months ago +5

    I am so thankful for your channel. I have watched countless YouTube videos about ComfyUI, but yours are definitely among the clearest, with a deep understanding of the subject. I hardly EVER leave comments, but I felt the need to write this one. I was watching and rewatching your videos and following along. It's so much fun. Thank you so much!

  • @jayd8935
    @jayd8935 4 months ago +5

    I think it was a blessing that I found your channel. These workflows spark my creativity so much.

  • @xpecto7951
    @xpecto7951 3 months ago +2

    Please continue doing more informative videos like you always do, everyone else just shows prepared workflows but you actually show how to build them. Can't thank you enough.

  • @DataMysterium
    @DataMysterium 4 months ago +10

    Awesome as always, thank you for sharing those amazing nodes with us.

  • @kf_calisthenics
    @kf_calisthenics 4 months ago +4

    Would love a video of you going into depth on the development and programming side of things!

    • @latentvision
      @latentvision 4 months ago

      maaaaybe 😄

  • @musicandhappinessbyjo795
    @musicandhappinessbyjo795 4 months ago +5

    The result looks pretty amazing. Could you maybe do a tutorial combining this with ControlNet (not sure if that's possible), just so we can also control the position of the characters?

  • @flisbonwlove
    @flisbonwlove 4 months ago +5

    Mr. Spinelli always delivering magic!! Thanks and keep the superb work 👏👏🙌🙌

  • @DarkGrayFantasy
    @DarkGrayFantasy 4 months ago +5

    Amazing stuff as always Matt3o! Can't wait for the next IPAv2 stuff you got going on!

  • @user-je7qy5ey3y
    @user-je7qy5ey3y 4 months ago +3

    Thanks Matteo, you're doing a good job

  • @contrarian8870
    @contrarian8870 4 months ago +1

    Great stuff, as always! One thing: the two girls were supposed to be "shopping" and the cat/tiger were supposed to be "playing". The subjects transferred properly (clean separation) but there's no trace of either "shopping" or "playing" in the result.

    • @latentvision
      @latentvision 4 months ago

      the first word in all prompts is "closeup" that basically overcomes anything else in the prompt

  • @d4n87
    @d4n87 4 months ago +2

    Great matt3o, your nodes are amazing! 😁👍
    This workflow in particular looks really interesting and adaptable to the issues of the various generations

  • @alessandrorusso583
    @alessandrorusso583 4 months ago +1

    Great video as always. So many interesting things. Thank you, as always, for the time you give to the community.

  • @Kentel_AI
    @Kentel_AI 4 months ago +3

    Thanks again for the great work.

  • @Foolsjoker
    @Foolsjoker 4 months ago +1

    This is going to be powerful. Good work Mat3o!

  • @aliyilmaz852
    @aliyilmaz852 4 months ago +3

    Thanks again for the great effort and explanation, Matteo. You are amazing!
    Quick question: is it possible to use controlnets with IPAdapter Regional Conditioning?

    • @latentvision
      @latentvision 4 months ago +1

      yes! absolutely!

  • @ttul
    @ttul 4 months ago +1

    Wow, this is so insanely cool. I can’t wait to play with it, Matteo.

  • @mattm7319
    @mattm7319 4 months ago

    the logic you've used in making these nodes makes it so much easier! thank you!

  • @context_eidolon_music
    @context_eidolon_music 2 months ago

    Thanks for all your hard work and genius!

    • @latentvision
      @latentvision 2 months ago

      just doing my part

  • @WhySoBroke
    @WhySoBroke 4 months ago

    An instamazing day when Maestro Latente spills his magical brilliance!!

  • @11011Owl
    @11011Owl 3 months ago +1

    The most useful videos about ComfyUI, thank you SO MUCH, I'm excited af about how cool this is

  • @elifmiami
    @elifmiami a month ago

    This is an amazing workflow! I wish we could animate it.

  • @premium2681
    @premium2681 4 months ago +1

    Angel Mateo came down from latent space again to teach the world his magic

  • @marcos13vinicius11
    @marcos13vinicius11 4 months ago +1

    it's gonna help a million times with my personal project!! thank you

  • @yql-dn1ob
    @yql-dn1ob 4 months ago +1

    Amazing work! It really improves the usability of the IPAdapter!

  • @davidb8057
    @davidb8057 4 months ago

    Brilliant stuff, thanks again, Matteo. Can't wait for the FaceID nodes to be brought to this workflow.

  • @jccluaviz
    @jccluaviz 4 months ago

    Thank you, thank you, thank you.
    Great work, my friend.
    Another masterpiece.
    Really appreciated.

    • @latentvision
      @latentvision 4 months ago

      glad to help

  • @Showdonttell-hq1dk
    @Showdonttell-hq1dk 4 months ago

    Once again, it's simply wonderful! During a few tests, I noticed that the "Mask From RGB" node needs very bright colors to work. A slightly darker green and it no longer has any effect. Everything else produced cool results on the first try. Thanks for all the work! And I'm just about to follow your ComfyUI app tutorial video to make one myself.

    • @latentvision
      @latentvision 4 months ago

      you can set thresholds for each color, you can technically grab any shade

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 4 months ago

      @@latentvision Of course I tried that. But it worked wonderfully with bright colors. It's no big deal. As I said, thanks for the great work! :)

    • @latentvision
      @latentvision 4 months ago +1

      @@Showdonttell-hq1dk using black or white and the threshold you can technically get any color. But you can probably better use the node Mask From Segmentation
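The color-threshold behavior discussed in this thread can be sketched in a few lines: a mask-from-color operation keeps the pixels whose channel-wise distance to the key color stays within a threshold, which is why a darker green drops out until the threshold is raised. This is a minimal illustration only; the function name and default threshold are assumptions, not the extension's actual code.

```python
import numpy as np

def mask_from_color(image, color, threshold=0.15):
    """Return a binary mask of pixels within `threshold` of `color`.

    image: float32 array (H, W, 3) with values in [0, 1]
    color: target RGB triplet, e.g. (0.0, 1.0, 0.0) for pure green
    """
    # Chebyshev distance: the largest per-channel deviation from the key color
    distance = np.abs(image - np.asarray(color, dtype=np.float32)).max(axis=-1)
    return (distance <= threshold).astype(np.float32)

# A pure-green pixel is caught; a darker green falls outside the default
# threshold and needs a larger one (or a re-keyed target color).
img = np.zeros((2, 2, 3), dtype=np.float32)
img[0, 0] = (0.0, 1.0, 0.0)   # bright green
img[0, 1] = (0.0, 0.6, 0.0)   # darker green
mask = mask_from_color(img, (0.0, 1.0, 0.0))
```

Raising `threshold` (e.g. to 0.5) pulls the darker shade into the mask, which matches the advice above about setting per-color thresholds.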

  • @aivideos322
    @aivideos322 4 months ago

    You should be proud of your work. Thanks for all you do. I was working on my video workflow with masked IPAdapters for multiple people... this will SOOOOOO make things easier.

  • @jacekfr3252
    @jacekfr3252 4 months ago +1

    "oh shit, this is so cool"

  • @allhailthealgorithm
    @allhailthealgorithm 4 months ago +1

    Amazing, thanks again for all your hard work!

  • @Ulayo
    @Ulayo 4 months ago +1

    Nice! More nodes to play with!

  • @skycladsquirrel
    @skycladsquirrel 4 months ago +1

    Great video! Thank you for all your hard work!

  • @AndyDeighton
    @AndyDeighton 4 months ago +1

    Genius. Fact. Again.

  • @mycelianotyours1980
    @mycelianotyours1980 4 months ago +1

    Thank you for everything!

  • @rawkeh
    @rawkeh 4 months ago

    8:01 "This is not magic," says the wizard

    • @latentvision
      @latentvision 4 months ago

      I swear it is not :P

  • @nrpacb
    @nrpacb 4 months ago

    I learned something new, happy! I want to ask: when can we get a tutorial on replacing furniture indoors, or something like that?

    • @latentvision
      @latentvision 4 months ago

      yeah that would be very interesting... I'll think about it

  • @35wangfeng
    @35wangfeng 4 months ago +1

    You rock!!!!! Thanks for the amazing job!!!!

  • @Cadmeus
    @Cadmeus 4 months ago +2

    What a cool update! This looks useful for controlling character clothing, hairstyle and that kind of thing, using reference images. Also, if you compose a 3D scene in Unreal Engine, it can output a segmented object map as colors, which could make this very powerful. You could link prompts and reference images to objects in the scene and then diffuse multiple camera angles from your scene, without any further setup.

  • @Freezasama
    @Freezasama 4 months ago +1

    what a legend

  • @helloRick618
    @helloRick618 4 months ago +1

    really cool

  • @ojciecvaader9279
    @ojciecvaader9279 4 months ago +1

    I really love your work

  • @heranzhou6976
    @heranzhou6976 4 months ago +1

    Wonderful. May I ask how I can insert FaceID into this workflow? Right now I get this error: Error occurred when executing IPAdapterFromParams: InsightFace: No face detected.

  • @PurzBeats
    @PurzBeats 4 months ago +2

    "the cat got tigerized"

  • @autonomousreviews2521
    @autonomousreviews2521 4 months ago

    Excellent! Thank you for your work and for sharing :)

  • @erdmanai
    @erdmanai 4 months ago +1

    Thank you very much man!

  • @lilien_rig
    @lilien_rig a month ago

    ahh nice tutorial, I like it, many thanks

  • @DashengSun-ki9qe
    @DashengSun-ki9qe 4 months ago +1

    Great workflow. Can you add edge control and depth to the process? I tried it but failed. Can you help me? I'm not sure how the nodes are supposed to be connected, it doesn't seem to work.

  • @matteogherardi7342
    @matteogherardi7342 3 months ago

    You're great!

  • @kaiserscharrman
    @kaiserscharrman 4 months ago

    really really cool addition. thanks

  • @GggggQqqqqq1234
    @GggggQqqqqq1234 4 months ago +1

    Thank you!

  • @Mika43344
    @Mika43344 4 months ago

    Great work as always🎉

  • @freshlesh3019754
    @freshlesh3019754 4 months ago +1

    That was awesome

  • @AnotherPlace
    @AnotherPlace 4 months ago

    Continue creating magic senpai!! ❤️

  • @FotoAntonioCanada
    @FotoAntonioCanada 4 months ago +1

    Incredible

  • @Shingo_AI_Art
    @Shingo_AI_Art 4 months ago

    Awesome stuff, as always

  • @digidope
    @digidope 4 months ago

    Just wow! Thanks a lot again!

  • @deastman2
    @deastman2 4 months ago

    This is so helpful! I’m using closeup selfies of three people to create composite band photos for promotion, and this simplifies the workflow immensely. Question: Do you have any tips to go from three headshots to a composite image which shows three people full length, head to toe? Adding that to the prompts hasn’t worked very well so far, and I’m not sure if adding separate openpose figures for each person would be the way to go? Any advice would be most appreciated!

    • @latentvision
      @latentvision 4 months ago

      that has to be done in multiple passes. there are many ways you can approach that... it's hard to give advice on such a complex matter in a YT comment

    • @deastman2
      @deastman2 4 months ago

      @@latentvision I understand. But "multiple passes" gives me an idea anyway. So I should probably generate bodies for each person first, and only then combine the three.

  • @leolis78
    @leolis78 a month ago

    Hi Matteo, thanks for your contributions to the community.
    I am trying to use Attention Masking in the process of compositing product photos. The idea is to be able to define in which zone of the image each element is located. For example, in a photo of a wine, define the location of the bottle and the location of the props, such as a wine glass, a bunch of grapes, a corkscrew, etc. But I tried the Attention Masking technique and it is not giving me good results in SDXL. Is it only for SD1.5? Do you think it is a good technique for this kind of composition for product photography, or is there a better technique? Thanks in advance for your help! 😃😃😃

    • @latentvision
      @latentvision a month ago

      this is complex to answer in a YT comment. depends on the size of the props. You probably need to upscale the image and work with either inpainting or regional prompting. Try to ask on my discord server

  • @fulldivemedia
    @fulldivemedia a month ago

    thanks,and i think you should put the "pill" word in the title :)

  • @WiremuTeKani
    @WiremuTeKani 4 months ago

    6:04 Yes, yes it is.

  • @JoeAndolina
    @JoeAndolina 4 months ago

    This workflow is amazing, thank you for sharing! I have been trying to get it to work with two characters generated from two LORAs. The LORAs have been trained on XL so they are expecting to make 1024x1024 images. I have made my whole image larger so that the mask areas are 1024x1024, but still everything is coming out kind of wonky. Have any of you explored a solution for generating two characters from separate LORAs in a single image?

  • @ceegeevibes1335
    @ceegeevibes1335 4 months ago

    love.... thank you !!!

  • @pfbeast
    @pfbeast 4 months ago

    ❤❤❤ as always best tutorial

  • @GggggQqqqqq1234
    @GggggQqqqqq1234 4 months ago

    Thank you.

  • @crazyrobinhood
    @crazyrobinhood 4 months ago +1

    Very good... very good )

  • @fukong
    @fukong 3 months ago

    Great job! I'm wondering if there's any workflow using the FaceID series of IPAdapters with regional prompting...

    • @latentvision
      @latentvision 3 months ago

      it totally works, there's nothing special to do, just use the FaceID models

    • @fukong
      @fukong 3 months ago

      @@latentvision Thanks so much for the reply!! I know I can replace the IPAdapter Unified Loader with the FaceID unified loader in this workflow, but I don't know how to receive images and adjust the v2 weight or choose a weight type while using regional conditioning for FaceID; in other words, I don't know how to create an equivalent "IPAdapter FaceID Regional Conditioning" node with existing nodes.

  • @eduger
    @eduger 4 months ago

    amazing

  • @pyyhm
    @pyyhm 4 months ago

    Hey matt3o, great stuff! I'm trying to replicate this with SDXL models but getting a blank output. Any ideas?

  • @user-yb5es8qm3k
    @user-yb5es8qm3k 4 months ago

    This video is great, but I followed along, so why does the portrait not look like the original picture?

  • @guilvalente
    @guilvalente 4 months ago

    Would this work with Animatediff? Perhaps for segmenting different clothing styles in a fashion film.

    • @latentvision
      @latentvision 4 months ago

      attention masking absolutely works with animatediff

  • @user-tx2ey4bx5l
    @user-tx2ey4bx5l 23 days ago

    Why does it show ClipVision model not found when I use it?

  • @tailongjin-yx3ki
    @tailongjin-yx3ki 4 months ago

    awesome

  • @francaleu7777
    @francaleu7777 4 months ago +1

    👏👏👏

  • @svenhinrichs4072
    @svenhinrichs4072 4 months ago

    oh shit, this is so cool.... :)

  • @stephanmodry1301
    @stephanmodry1301 4 months ago

    Absolutely incredible. Like always. BUT: Cat and tiger are not "playing". Please fix this as soon as possible.
    (just kidding, of course.) 😅

  • @Ai-dl2ut
    @Ai-dl2ut 4 months ago

    Awesome sir :)

  • @jcboisvert1446
    @jcboisvert1446 4 months ago

    Thanks

  • @user-yb5es8qm3k
    @user-yb5es8qm3k 4 months ago

    Which file does the ipadpt in the embedded group read, and how can I edit it?

  • @ai_gene
    @ai_gene 2 months ago

    Why doesn’t it work so well with the SDXL model? In my case, the result is one girl with different styles on two sides of the head.

    • @latentvision
      @latentvision 2 months ago

      try to use bigger masks, try different checkpoints, use controlnets

  • @nicolasmarnic399
    @nicolasmarnic399 4 months ago

    Hello Mateo! Excellent workflow :)
    Question: to solve the proportion issues, so that the cat is the size of a cat and the tiger is the size of a tiger, would the best solution be to edit the size of the masks?
    Thanks

    • @latentvision
      @latentvision 4 months ago +1

      no, if you need precise sizing you need a controlnet probably. To install the essentials use the Manager or download the zip and unzip it into the custom_nodes directory

  • @Zetanimo
    @Zetanimo 4 months ago

    how would you go about adding some overlap like the girl and dragon example from the beginning of the video where they are touching? Or does this process have enough leeway to let them interact?

    • @latentvision
      @latentvision 4 months ago +1

      The masks can overlap, if the description is good enough the characters can interact. SD is not very good at "interactions" but standard stuff works (hugging, boxing, cheek-to-cheek, etc...). On top you can use controlnets

    • @Zetanimo
      @Zetanimo 4 months ago

      @@latentvision Thanks a lot! Looking forward to more content!

  • @michail_777
    @michail_777 4 months ago

    Hi. Thanks for your work. I was wondering: is there any IPAdapter node that can be linked to AnimateDiff so that it works only in a certain frame range? That is, if I connect 2 input images, from frame 0 to 100 one image affects the generation, and from frame 101 the second input image affects the generation. But it would be quite nice if from frame 90 to 110 these images were blended.

    • @latentvision
      @latentvision 4 months ago +1

      yes I'm working on that

    • @michail_777
      @michail_777 4 months ago

      @@latentvision Thank you. I've added AnimateDiff and 2CN to your workflow. And it's working well.

  • @hleet
    @hleet 4 months ago

    Very good stuff, but hard to use. Thank you for this tutorial. I hope SD3 will understand prompts better and that IPAdapter will be supported on SD3 as well... but SD3 is now paid/API-only, so sad for free open source.

    • @latentvision
      @latentvision 4 months ago +1

      SD3 should be really easy to guide with images... let's see when they release the weights.

  • @n3bie
    @n3bie 3 months ago

    Woah

  • @fmfly2
    @fmfly2 4 months ago

    My ComfyUI doesn't have 🔧 Mask From RGB/CMY/BW, only Mask from color. Where do I find it?

    • @latentvision
      @latentvision 4 months ago

      you just need to upgrade the extension

  • @jerrycurly
    @jerrycurly 4 months ago

    Is there a way to use controlnets in each region? I was having issues with that.

    • @latentvision
      @latentvision 4 months ago

      yes of course! just try it

  • @thomasmiller7678
    @thomasmiller7678 2 months ago

    Hi, great stuff. Is there any way to do this kind of attention masking with LoRAs, so I can apply separate LoRAs to separate masks? There are a few things kicking around, but nothing seems to work all that well.

    • @latentvision
      @latentvision 2 months ago

      not really (it would be technically feasible probably but not easy)

    • @thomasmiller7678
      @thomasmiller7678 2 months ago

      @@latentvision hmm, this is why I have been struggling; there are some nodes for it, but from the stuff I've found I haven't had much luck yet. Might you be able to help me out or do a little digging? Maybe you can pull off some more magic! 😄

    • @divye.ruhela
      @divye.ruhela 2 months ago

      @@thomasmiller7678 But can't you just use the relevant LoRAs in a separate workflow to generate the images you like, then bring them here, apply conditioning, and combine?

    • @thomasmiller7678
      @thomasmiller7678 2 months ago

      Yes, that's possible, but it's still not a true influence like the LoRA would have if it could be implemented

  • @sino-ph2gc
    @sino-ph2gc 4 months ago

    Awesome!!

  • @walidflux
    @walidflux 4 months ago

    when are you going to do videos with an ip-adapter workflow?

    • @latentvision
      @latentvision 4 months ago

      not sure I understand

    • @walidflux
      @walidflux 4 months ago

      @@latentvision sorry, I meant animation with ip-adapter. There are many workflows out there, most famously animatediff plus ip-adapter; I just thought yours would definitely be better

    • @latentvision
      @latentvision 4 months ago +1

      @@walidflux I'll try to do more animatediff tutorials, but I need to add a new node that will help with that

  • @hashshashin000
    @hashshashin000 4 months ago +1

    is there a way to use faceidv2 with this?

    • @latentvision
      @latentvision 4 months ago +2

      I will add the faceid nodes next

    • @hashshashin000
      @hashshashin000 4 months ago

      @@latentvision ♥

  • @makristudio7358
    @makristudio7358 4 months ago

    Hi, which one is better: IPAdapter FaceID or InstantID?

    • @latentvision
      @latentvision 4 months ago

      they are different 😄 depends on the application

  • @Vincent-ce7bp
    @Vincent-ce7bp 4 months ago

    If I have a strong color distribution in my reference style image, the result seems to put the colors in the same areas in the resulting image. Is there a way around this? (IPAdapter Plus with strong setting and style transfer)

    • @latentvision
      @latentvision 4 months ago

      I'd need to see it. What do you mean by strong color distribution?

    • @Vincent-ce7bp
      @Vincent-ce7bp 4 months ago

      @@latentvision I was talking about the macro color placement of the style reference image. If, for example, the upper part of the reference image has an orange leather texture, then the resulting image is also more likely to have an orange background or orange "parts" in that upper area of the image.

    • @latentvision
      @latentvision 4 months ago +1

      @@Vincent-ce7bp in that case probably your best bet is to play with "start_at" (like 0.2) and weight.

    • @Vincent-ce7bp
      @Vincent-ce7bp 4 months ago

      @@latentvision Thank you for the reply. I don't know if it is possible, but perhaps you could code a weight_type option for the style transfer, as the IPAdapter Advanced node has a weight_type option. You could select style transfer as the first weight_type and then have a subcategory (weight_type2) for how the style transfer is applied: linear, ease in, ease out... But this is just a rough guess.

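The weight_type idea proposed above can be sketched as a simple schedule: a hypothetical helper that ramps an adapter weight in over a `start_at`/`end_at` window with a linear or eased curve. The function name and its step-based semantics are assumptions for illustration; the actual IPAdapter weight types may be applied differently.

```python
def adapter_weight(step, total_steps, weight=1.0,
                   start_at=0.0, end_at=1.0, mode="linear"):
    """Hypothetical schedule: ramp `weight` in over [start_at, end_at]."""
    t = step / max(total_steps - 1, 1)           # sampling progress in [0, 1]
    if t < start_at or t > end_at:
        return 0.0                               # adapter inactive outside window
    u = (t - start_at) / max(end_at - start_at, 1e-8)
    if mode == "ease in":                        # slow start, fast finish
        u = u * u
    elif mode == "ease out":                     # fast start, slow finish
        u = 1.0 - (1.0 - u) ** 2
    return weight * u                            # "linear" falls through unchanged
```

For example, with `start_at=0.2` the adapter contributes nothing during the first 20% of sampling, matching the `start_at` advice given earlier in the thread.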
  • @burdenedbyhope
    @burdenedbyhope 4 months ago

    is it possible to use ipadapter and attention masks for character and item interactions? like a man handing over an apple or carrying a bag

    • @latentvision
      @latentvision 4 months ago

      yes of course! why not?!

    • @burdenedbyhope
      @burdenedbyhope 4 months ago

      @@latentvision maybe my weights/start/end are not right; I always have trouble making a known character interact with another known character or a known item. "Known" in this case means using IPAdapter. Most of the examples I saw are 2 characters/subjects standing beside each other, not interacting, so I wondered.

    • @burdenedbyhope
      @burdenedbyhope 3 months ago

      @@latentvision I tested it in many cases, and the interaction works pretty well; a girl holding an apple, a girl holding a teddy bear... all work well. With 2 girls holding hands, bleeding happens from time to time, and negative prompts are not always applicable; can the regional conditioning accept a negative image?

  • @user-rx8wb4dn9c
    @user-rx8wb4dn9c 4 months ago

    thanks! For styling someone, is there a benefit to combining IPAdapter v2 with InstantID, or is IPAdapter v2 FaceID enough?
    If combining IPAdapter v2 with InstantID gets better results, is there any tutorial for that?
    Also, can a casual photo of a person taken with a normal camera get a fantasy result using the above method?

    • @latentvision
      @latentvision 4 months ago

      yes you can combine them to get better results, but don't expect huge improvements, just a tiny bit better :)

    • @user-rx8wb4dn9c
      @user-rx8wb4dn9c 4 months ago

      @@latentvision thanks. For point 2: can a normal human photo from a phone camera be turned into a styled masterpiece using ComfyUI? I can't find a video on YouTube about that.

    • @latentvision
      @latentvision 4 months ago

      @@user-rx8wb4dn9c depends what you are trying to do. too vague as a question, sorry

    • @user-rx8wb4dn9c
      @user-rx8wb4dn9c 4 months ago

      @@latentvision here is my case:
      I want to put my child's baby photo on a t-shirt.
      But the photo was taken a very long time ago and the quality is bad; the face especially is a bit vague, but it's my memory.
      Can I use ComfyUI to turn this vague photo into a picture that preserves my child's pose and face, improves the quality, and has a T-shirt art style suitable for printing, while keeping my child's face and body pose recognizable?
      How can I do that with ComfyUI?

    • @latentvision
      @latentvision 4 months ago

      @@user-rx8wb4dn9c it is possible using a combination of techniques, but it's impossible to give you a walk-through in a YouTube comment... it highly depends on the condition of the original picture

  • @a.zanardi
    @a.zanardi 4 months ago

    Matteo, FlashFace got released, will you bring it too?

    • @latentvision
      @latentvision 4 months ago +1

      I had a look at it; it's a weird cookie. A 10GB model that only works with SD1.5... I don't know...

    • @a.zanardi
      @a.zanardi 4 months ago

      @@latentvision 🤣🤣🤣🤣 Weird cookie it was really fun! Thank you so much for answering!

  • @Fernando-cj2el
    @Fernando-cj2el 4 months ago

    Matteo, I updated everything and the nodes are still red, am I the only one? 😭

  • @BuildwithAI
    @BuildwithAI 4 months ago

    could you combine this with Lora?

    • @latentvision
      @latentvision 4 months ago

      one lora per mask? no, you can't: there is only one model pipeline

  • @user-jx7bh1lx4q
    @user-jx7bh1lx4q a month ago

    Beautiful, but these are only close-up portraits; you can see from the cat and tiger example that it no longer knows how to depict them playing