Style Transfer Adapter for ControlNet (img2img)

  • Published Sep 28, 2024
  • Very cool feature for ControlNet that lets you transfer a style.
    HOW TO SUPPORT MY CHANNEL
    -Support me by joining my Patreon: / enigmatic_e
    _________________________________________________________________________
    SOCIAL MEDIA
    -Join my discord: / discord
    -Instagram: / enigmatic_e
    -Tik Tok: / enigmatic_e
    -Twitter: / 8bit_e
    - Business Contact: esolomedia@gmail.com
    _________________________________________________________________________
    Details about Adapters
    TencentARC/T2I-Adapter: T2I-Adapter (github.com)
    Models
    huggingface.co...
    EbSynth + SD
    • Stable Diffusion + EbS...
    Install SD
    • Installing Stable Diff...
    Install ControlNet
    • New Stable Diffusion E...

Comments • 83

  • @ixiTimmyixi
    @ixiTimmyixi 1 year ago +5

    I can't wait to apply this to my AI animations. This is a huge game changer. Using less in the text prompt area is a step forward for us. Having two images as the only driving factors should help a ton with cohesion/consistency in animation.

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      I agree. Definitely makes it easier in some aspects.

  • @androidgamerxc
    @androidgamerxc 1 year ago +1

    @2:47 Thank you so much for that. I was thinking of reinstalling ControlNet just because of that.

  • @clenzen9930
    @clenzen9930 1 year ago +3

    Guidance Start is about *when* it starts to take effect.

  • @User-pq2yn
    @User-pq2yn 1 year ago +3

    The color adapter works for me, but the style adapter does not. The Guidance Start value doesn't change anything. The result is the same as when ControlNet is turned off. Please tell me how to fix this. Thank you!

    • @miguelarce6489
      @miguelarce6489 1 year ago

      Same thing happens to me. Did you figure it out?

    • @User-pq2yn
      @User-pq2yn 1 year ago

      @@miguelarce6489 The total number of tokens in the prompt and negative prompt should not exceed 75.
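
A quick way to sanity-check that 75-token hint: word count roughly approximates CLIP's token count. This is a minimal shell sketch with made-up example prompt strings; real CLIP tokenization splits punctuation and rare words into more tokens, so treat the number as a lower bound.

```shell
# Rough sanity check for the 75-token limit mentioned above.
# PROMPT and NEGATIVE are made-up example strings; word count only
# approximates CLIP tokens (punctuation and rare words split further).
PROMPT="a portrait in the style of van gogh, oil painting"
NEGATIVE="blurry, low quality"
COUNT=$(echo "$PROMPT $NEGATIVE" | wc -w)
echo "approximate token count: $COUNT"
```

The web UI also shows an exact running token count in the top-right corner of the prompt box, which is the more reliable check.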

  • @snckyy
    @snckyy 1 year ago

    Incredible amount of useful information in this video. Thank YOU!!!!!!

  • @ЕкатеринаАгарская-ф3г
    @ЕкатеринаАгарская-ф3г 6 months ago

    Hi! I'm really asking for help, I'm desperate :( The Clip Vision preprocessor is not displayed (Automatic1111), and I can't find where to download it. What am I doing wrong?

  • @theairchitect
    @theairchitect 1 year ago +1

    I tried this new ControlNet extension and got no style in the generated result. I removed all prompts (using img2img with 3 ControlNets active: canny + HED + t2iadapter with the clip_vision preprocessor). During generation this error appears: "warning: StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference", and the generated result comes out with the style not applied =( Frustrating... I tried many denoising strengths in img2img and many weights on the ControlNet instances without success; the style is not applied to the final result =( I tried enabling "Enable CFG-Based guidance" in the ControlNet settings too, and it's still not working =( Anyone else have this same issue?

    • @J.l198
      @J.l198 1 year ago

      I need help, I'm having the same issue. When I generate, it just generates a random image...

  • @judgeworks3687
    @judgeworks3687 1 year ago +2

    Love your clear instructions.
    I'm following along, but my system seems to stall on the HED and Clip Vision ControlNets. Any tips for when this happens? I keep restarting.
    I'm trying the same steps, but first running only one ControlNet at a time to see if it works, then adding each additional ControlNet after it runs successfully. So far the HED is definitely slow to run. After this I'll try clip_vision/T2I style by itself (as one ControlNet tab).

    • @enigmatic_e
      @enigmatic_e 1 year ago

      What do your height and width look like? I ran into a similar problem and had to reduce the size to make certain parameters work. Might be that it can't handle it.

    • @judgeworks3687
      @judgeworks3687 1 year ago

      @@enigmatic_e 512x512. I found a video where the guy mentions some issues and how he fixed them; I'll watch it later and attached the link below in case it's of interest. The ControlNet HED tab seems to be an issue. Is there a reason you use 3 tabs of ControlNet? I'm testing one tab of clip_vision/T2I alone. It's still running. th-cam.com/video/tXaQAkOgezQ/w-d-xo.html

    • @judgeworks3687
      @judgeworks3687 1 year ago

      @@enigmatic_e In which video of yours do you show how to add git pull to the code? I think it was your video? I need to access my webui-user.bat to add something to it, but I can't recall how to do that. Thanks if you have a link to the video where you showed that.

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      @@judgeworks3687 I think this one: th-cam.com/video/qmnXBx3PcuM/w-d-xo.html

    • @judgeworks3687
      @judgeworks3687 1 year ago

      @@enigmatic_e Yes, this was it. Great video. I ended up uninstalling SD and re-installing from the video you sent me. Thank you!
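
The webui-user.bat tweak this thread refers to can be sketched as follows. This is a minimal example assuming the default Automatic1111 launcher layout; adding `git pull` before the launch line updates the web UI on every start.

```bat
@echo off
REM webui-user.bat -- the stock Automatic1111 launcher with auto-update added.

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=

REM Pull the latest web UI code before launching (remove this line to pin your version).
git pull

call webui.bat
```

Running `git pull` inside the extensions/sd-webui-controlnet folder updates the ControlNet extension the same way, which is how missing options such as the clip_vision preprocessor typically appear after an update.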

  • @digital_magic
    @digital_magic 1 year ago

    Great video :-) Thanks for sharing

  • @CompositingAcademy
    @CompositingAcademy 1 year ago

    Really cool, thanks for sharing! I wonder what would happen if you put 3D wireframes into the ControlNet lines instead of the generated ones; it could be very temporally stable.

  • @GS195
    @GS195 9 months ago

    It turned Barret into President Shinra 😂

  • @zachkrausnick5030
    @zachkrausnick5030 1 year ago

    Great video! Trying to update my xformers, it seemed to install a later version of PyTorch that no longer supports CUDA, and the version of xformers you used is no longer available. How do I fix this?

  • @J.l198
    @J.l198 1 year ago +2

    I need help. When I generate, the result is way different from the actual image I'm using.

  • @ramilgr7467
    @ramilgr7467 1 year ago

    Thank you! Very interesting!

  • @ErmilinaLight
    @ErmilinaLight 5 months ago

    Thank you!
    What should we choose as the Control Type? All?
    Also, I noticed that generating an image with txt2img ControlNet from a given image takes a veeeeery long time, though my machine is decent. Do you have the same?

    • @enigmatic_e
      @enigmatic_e 4 months ago +1

      I believe there should be a box you can check that says "Upload independent control image".

    • @ErmilinaLight
      @ErmilinaLight 4 months ago

      @@enigmatic_e THANK YOU!!!!

  • @christophervillatoro3253
    @christophervillatoro3253 1 year ago

    Hey, I was going to tell you how to get After Effects to pull in multiple PNG sequences and auto-crossfade them. You have to pull them in and make each EbSynth out folder its own sequence. Then right-click all the sequences and create a new composition; in the menu there is an option to crossfade all the imported sequences. Specify with the EbSynth settings. Voila!

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Ahh ok! I will have to try this! Thank you for the info!!

  • @tonon_AI
    @tonon_AI 1 year ago

    Any tips on how to build this with ComfyUI?

  • @iamYork_
    @iamYork_ 1 year ago

    Looks like Gen-1 will have competition…

  • @JeffFengcn
    @JeffFengcn 1 year ago

    Hi sir, thanks for making these good videos on style transfer. I have a question: is there a way to change a person's outfit based on an input pattern or picture, using style transfer and inpainting? Thanks in advance.

  • @erdbeerbus
    @erdbeerbus 1 year ago

    This is really a cool way to get it... thank you! Did you explain how to bring a whole image sequence into Comfy to get your great 0:20 result? Thx in advance!

    • @enigmatic_e
      @enigmatic_e 1 year ago

      No I haven't. I still need to get into Comfy, still haven't tried it yet.

  • @PlayerGamesOtaku
    @PlayerGamesOtaku 1 year ago

    Hi, I have created more than 70 images with Stable Diffusion, and I would like to know how I can turn these photos into a moving animation. Could you help me?

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      Other than using Premiere Pro or any other editing software, I know there are websites that can turn your image sequences into videos. I'm not sure which one is a good choice though; I'll have to look into it.

    • @PlayerGamesOtaku
      @PlayerGamesOtaku 1 year ago

      @@enigmatic_e If you create a tutorial, or find the sites you mentioned before, let me know :)

    • @enigmatic_e
      @enigmatic_e 1 year ago

      @@PlayerGamesOtaku Working on it right now, actually.

  • @BoringType
    @BoringType 11 months ago

    Thank you very much

  • @gloxmusic74
    @gloxmusic74 1 year ago

    Nice find bro!! ...Yeah, consistency is still a problem with video. I find lowering the denoising strength helps, but then you lose the style... it's a double-edged sword ⚔️

    • @enigmatic_e
      @enigmatic_e 1 year ago

      True, it's always the struggle.

  • @SHsaiko
    @SHsaiko 1 year ago

    Great video man! I've been learning so much from your vids. It might be a rookie question, but I got stuck on the third ControlNet model: when you choose clip_vision under the preprocessor, I don't seem to have that option. Is it because I have to use a certain version of SD? Thanks!

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Thanks! Have you updated everything? Like SD and ControlNet?

    • @SHsaiko
      @SHsaiko 1 year ago

      @@enigmatic_e Oops, that solved it! Thanks for the help! Looking forward to your next vid man, great work :D

    • @mikhaillavrov8275
      @mikhaillavrov8275 1 year ago

      @@SHsaiko Please describe what you did, exactly? I have updated all requirements, but there is still no clip_vision in the left drop-down menu.

  • @OsakaHarker
    @OsakaHarker 1 year ago

    Have you looked at the new Ebsynth_Utility extension for A1111?

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      Wait what??

    • @BeatoxYT
      @BeatoxYT 1 year ago

      @@enigmatic_e New enigmatic EbSynth utility video incoming

    • @melchiorao9759
      @melchiorao9759 1 year ago

      @@enigmatic_e Automates most of the process.

  • @koto9x
    @koto9x 1 year ago

    You're a legend

  • @j_shelby_damnwird
    @j_shelby_damnwird 1 year ago

    If I run more than one ControlNet tab I get the CUDA out-of-memory error (8 GB VRAM GPU). Any suggestions?

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Have you tried checking the Low VRAM option in ControlNet?

    • @j_shelby_damnwird
      @j_shelby_damnwird 1 year ago

      @@enigmatic_e Thank you for responding. Yes, to no avail :-(

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      @@j_shelby_damnwird Try lowering the dimensions, that might help.

    • @j_shelby_damnwird
      @j_shelby_damnwird 1 year ago

      @@enigmatic_e Thank you. Currently trying 1024x768; will give 768x512 a go.
      I definitely need to grab me one of those fancy new GPUs :-/

  • @dexter0010
    @dexter0010 1 year ago

    I don't have the clip_vision preprocessor. Where do I download it?

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Did you update?

    • @ragdollmaster15
      @ragdollmaster15 1 year ago

      @@enigmatic_e I also can't find it. I did git pull inside the ControlNet folder and reinstalled it twice, and I still can't find it.

  • @HopsinThaGoat
    @HopsinThaGoat 1 year ago

    Ahhh yeah

  • @ohyeah9999
    @ohyeah9999 1 year ago

    Can this make videos, and is it free? I tried Disco Diffusion; that's like a trial.

  • @K-A_Z_A-K_S_URALA
    @K-A_Z_A-K_S_URALA 1 year ago

    It doesn't work!

  • @워크-f2p
    @워크-f2p 1 year ago

    EbSynth + SD link

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      My bad, just updated the link: th-cam.com/video/47HpHOLkIDo/w-d-xo.html

  • @RonnieMirands
    @RonnieMirands 1 year ago +2

    I'm not getting great results like yours out of the box; I have to play a lot with the sliders before the style starts showing. Wondering what I'm missing here lol

    • @RonnieMirands
      @RonnieMirands 1 year ago +1

      I followed the instructions from the Aitrepreneur channel and it worked for me.

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      Happy you figured it out.

  • @clenzen9930
    @clenzen9930 1 year ago +1

    I made a post about making sure you deal with the yaml files, but I think it got deleted because it linked to Reddit. Anyway, there's some work to be done if you haven't.

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      Is it from the Stable Diffusion subreddit?

  • @sidewaysdesign
    @sidewaysdesign 1 year ago +1

    Thanks for another informative video. This style transfer feature already makes Photoshop’s Style Transfer neural filter look sad by comparison. It’s clear that Stable Diffusion’s open-source status, enabling all of these new features, is leaving MidJourney and DALL-E in the dust.

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Yea, it’s getting so good!

  • @dreamayy8360
    @dreamayy8360 1 year ago

    Shows "where to download it" with a list of .pth files...
    Then shows his folder where he's got safetensors and yaml files...
    Great tutorial... just making stuff up and not actually showing where or how you installed anything.

  • @beatemero6718
    @beatemero6718 1 year ago

    Why did you provide a link for the style adapter, but not for the clip_vision preprocessor?

  • @Fravije
    @Fravije 10 months ago

    Hello. What about style transfer between images?
    I'm looking for information about this but haven't found anything.
    For example, I want to make a series of images of animals. I have a photo of a tiger, a pencil drawing of a horse, a pencil drawing of a bull (but by a different artist), an ink drawing of a wolf, a watercolor drawing of a cheetah... and I want to transform them so that all these images are done in the same style, as if they were painted by the same artist. Is there any product that can help achieve this goal?

    • @enigmatic_e
      @enigmatic_e 10 months ago

      Unfortunately this tutorial is outdated now. I haven't messed around with style transfer lately, so I don't know what's a good alternative at the moment.

  • @BeatoxYT
    @BeatoxYT 1 year ago

    Thanks for sharing this! Very cool that they've added this style option. Excited for your next video on connecting it with EbSynth. I'll watch that next and see what I can do as well.
    But damn, these DaVinci Deflicker/Dirt Removal render times are killing me haha

    • @enigmatic_e
      @enigmatic_e 1 year ago

      I feel you on the deflicker. Sometimes stacking too many is not a good idea 😂

    • @BeatoxYT
      @BeatoxYT 1 year ago

      @@enigmatic_e Have you found a good compromise? I tried just one and it wasn't great, so I stuck with the 3 you stacked after the dirt remover. But 24 hours for a 30-second clip is unsustainable lol

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      @@BeatoxYT No, not yet. I'm sure another, faster alternative will come out soon.

  • @mayasouthmoor3339
    @mayasouthmoor3339 1 year ago

    Where do you even get Clip Vision from?

    • @enigmatic_e
      @enigmatic_e 1 year ago

      If it's not in the link I provided, it might appear when you update everything.