How to Colorize Photos with AI - Stable Diffusion + ControlNet Tutorial 2023

  • Published Jun 5, 2024
  • To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/AlbertBozesan/. The first 200 of you will get 20% off Brilliant’s annual premium subscription.
    Links:
    How to Install and Use Stable Diffusion (June 2023) - Basic Tutorial
    • How to Install and Use...
    ControlNet: github.com/Mikubill/sd-webui-...
    Model +VAE: civitai.com/models/4201/reali...
    Prompt: (RAW photo:1.5), YOUR DESCRIPTION, healthy skin, high detail, high quality, studio lighting
    Negative: zombie, cgi, 3d, 2d, render, sketch, cartoon, drawing, illustration, anime, text, cropped, out of frame, low quality, duplicate, morbid, mutilated, clone, disfigured, long neck, hard light, (black and white:1.3), b&w, saturated, pattern, saturated, ugly colors, sick, pallid
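
    For anyone scripting this instead of clicking through the UI, here is a rough sketch of the same prompt setup sent to AUTOMATIC1111's /sdapi/v1/txt2img endpoint (webui launched with --api). The ControlNet model filename is an assumption; use whichever canny model you actually installed.

        # Hedged sketch: txt2img with one ControlNet canny unit via the
        # webui API. Requires a local AUTOMATIC1111 instance with the
        # sd-webui-controlnet extension and the --api launch flag.
        import base64
        import requests

        with open("old_photo.png", "rb") as f:
            photo_b64 = base64.b64encode(f.read()).decode()

        payload = {
            "prompt": "(RAW photo:1.5), YOUR DESCRIPTION, healthy skin, "
                      "high detail, high quality, studio lighting",
            "negative_prompt": "zombie, cgi, 3d, 2d, render, sketch, cartoon, "
                               "drawing, illustration, anime, text, cropped, "
                               "out of frame, low quality, duplicate, morbid, "
                               "mutilated, clone, disfigured, long neck, "
                               "hard light, (black and white:1.3), b&w, "
                               "saturated, pattern, ugly colors, sick, pallid",
            "steps": 25,
            "sampler_name": "DPM++ 2M Karras",
            "alwayson_scripts": {
                "controlnet": {
                    "args": [{
                        "input_image": photo_b64,
                        "module": "canny",
                        "model": "control_v11p_sd15_canny",  # assumed name
                    }]
                }
            },
        }

        r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
        images = r.json()["images"]  # base64-encoded PNG results
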
    CHAPTERS
    0:00 Intro
    0:38 Tools you need
    3:26 Preparing the photo
    6:53 ControlNet
    8:08 Stable Diffusion Settings
    11:23 Sponsor Message
    12:26 First Results
    13:36 Refining
    15:23 Colorizing the photo
    17:17 img2img
    18:56 Refining
    19:58 Inpainting the face
    21:10 Final adjustments
    22:27 Results
    ---------------------------------------------
    Did you like this vid? Like & Subscribe to this Channel!
    Follow me on Twitter: / albertbozesan
    This video was sponsored by Brilliant.

Comments • 141

  • @albertbozesan
    @albertbozesan  1 year ago +2

    To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/AlbertBozesan/. The first 200 of you will get 20% off Brilliant’s annual premium subscription!

    • @ardezart
      @ardezart 1 year ago

      Thank you!

  • @joparebr
    @joparebr 1 year ago +13

    People are always hard to get right, but for landscapes, animals, and objects this is perfect.

  • @addyfalcon3656
    @addyfalcon3656 5 months ago +10

    Let's face it: it's not colorizing the image. It's a different image that looks similar.

  • @udonpraguypanya2992
    @udonpraguypanya2992 1 year ago +2

    Very nice to see all the details, and especially how to set up SD right.

  • @mohentohen2294
    @mohentohen2294 1 year ago +7

    Thanks for the video, very useful!
    To make it easier to pick good canny parameters, there is a "preview annotator result" button at the bottom of the ControlNet panel, which runs the preprocessor and shows the result after a second without starting image generation. Saves tons of time.
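
    For picking those thresholds offline, a minimal sketch (plain OpenCV, hypothetical threshold pairs) that previews the edge maps the same way the button does:

        # Preview several canny threshold pairs before generating.
        import cv2

        img = cv2.imread("old_photo.png", cv2.IMREAD_GRAYSCALE)
        for low, high in [(50, 150), (100, 200), (150, 250)]:
            edges = cv2.Canny(img, low, high)
            cv2.imwrite(f"canny_preview_{low}_{high}.png", edges)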

  • @rtberbary0101
    @rtberbary0101 1 year ago +1

    That is awesome. I've been using Adobe Photoshop's neural filters for photo restoration and coloring... it is amazing! But feeding that result into your SD process makes it even better.

  • @YongLoveFCB
    @YongLoveFCB 1 year ago +1

    More ControlNet tutorials are urgently needed! Thank you! It would also be great to see a detailed guide on how to use the local latent upscale tool (LLuL).

  • @generichuman_
    @generichuman_ 1 year ago +5

    I really like the technique for colorizing the image, very cool. For faces, I think we're still waiting for a method that really works well without changing what the person looks like. A lot of photo restorations are of family members, so having the final result still look like the person is essential. The only thing I've found that works decently well is GFPGAN when upscaling. It's still not perfect, though. Every time I try to touch the face via image gen, even with low noise values and a carefully tweaked canny, it just changes the face too much.
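
    A minimal sketch of that GFPGAN pass, following the inference example in the GFPGAN repo; the weights path is an assumption (download the model from the GFPGAN releases page first):

        # Restore faces in a colorized photo with GFPGAN.
        import cv2
        from gfpgan import GFPGANer

        restorer = GFPGANer(
            model_path="GFPGANv1.4.pth",  # assumed local path to the weights
            upscale=2,
            arch="clean",
            channel_multiplier=2,
        )

        img = cv2.imread("colorized_photo.png")  # BGR, as GFPGAN expects
        _, _, restored = restorer.enhance(
            img, has_aligned=False, only_center_face=False, paste_back=True
        )
        cv2.imwrite("restored.png", restored)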

  • @7xIkm
    @7xIkm 1 day ago

    This is so bizarrely convoluted

  • @seetreee
    @seetreee 1 year ago +4

    Didn't expect to be sold on Affinity in an SD tutorial lol. That mask preview feature is so useful.

    • @albertbozesan
      @albertbozesan  1 year ago

      Right?! Photoshop has loads of great features but just that masking alone makes Affinity my favorite for editing AI stuff.

  • @Ryuuko3
    @Ryuuko3 9 months ago +1

    This was very helpful, thank you

  • @rtoNinjaMooCow
    @rtoNinjaMooCow 8 months ago +1

    Wanted to thank you for this video. I have been having some good fun digging up historical images and updating not just the colors but also dropping in modern humans.

    • @albertbozesan
      @albertbozesan  8 months ago

      Great to hear! Thank you.

  • @iNSPODS
    @iNSPODS 9 months ago +1

    You deserve a lot more subs. Fantastic to-the-point informative video.
    Great results you got. The challenge is to do the same with family photos as it tends to change the subject far too much.

  • @erikt81a
    @erikt81a 1 year ago +4

    It would be great if you could make this same tutorial with the reference-only control as well; I'm sure you'd get great results in the colorization process.

    • @albertbozesan
      @albertbozesan  1 year ago

      I’m sure the results would be cool - do you mean just different angles of a similar person or what would the goal be?

  • @polkovnikkaryagin3395
    @polkovnikkaryagin3395 2 months ago +1

    Good work, all with a good step-by-step description, and with free tools.

    • @albertbozesan
      @albertbozesan  1 month ago

      Thank you very much!

  • @Engle777
    @Engle777 10 months ago +1

    Thank you! I was able to recolor a few pictures of my grandparents and a picture of my uncle as a kid!

    • @albertbozesan
      @albertbozesan  10 months ago

      That’s awesome!

  • @Maisonier
    @Maisonier 1 year ago +1

    Wow, this is amazing. Liked and subscribed.

  • @tclan414
    @tclan414 9 months ago +2

    You could try duplicating the photo in Affinity Photo, adding a high pass filter to it, and changing the blend mode to overlay. I usually do a merge down and repeat the process 2 or more times. This really sharpens the image and makes the lines pop, which should help create a better canny for the image. If you overdo it, it could cause some color bleed, but since you're just using it for the canny, it shouldn't be a problem. (A rough Python equivalent is sketched after this thread.)

    • @albertbozesan
      @albertbozesan  9 months ago

      Thanks for the tip!
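
    The rough Python equivalent mentioned above (sigma, file names, and repeat count are assumptions, not from the video):

        # High-pass + overlay blend to sharpen edges before the canny pass.
        import cv2
        import numpy as np

        img = cv2.imread("old_photo.png").astype(np.float32) / 255.0
        blur = cv2.GaussianBlur(img, (0, 0), sigmaX=3)  # radius is a guess
        highpass = np.clip(img - blur + 0.5, 0.0, 1.0)  # mid-gray high-pass

        # Standard overlay blend: darkens where the base is dark,
        # lightens where it is bright, which exaggerates edges.
        overlay = np.where(
            img < 0.5,
            2.0 * img * highpass,
            1.0 - 2.0 * (1.0 - img) * (1.0 - highpass),
        )

        cv2.imwrite("sharpened.png", (np.clip(overlay, 0, 1) * 255).astype(np.uint8))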

  • @polystormstudio
    @polystormstudio 1 year ago +4

    I didn't see the negative prompt in the description so here you go everybody:
    zombie, cgi, 3d, 2d, render, sketch, cartoon, drawing, illustration, anime, text, cropped, out of frame, low quality, duplicate, morbid, mutilated, clone, disfigured, long neck, hard light, (black and white:1.3), b&w, saturated, pattern, saturated, ugly colors, sick, pallid

    • @albertbozesan
      @albertbozesan  1 year ago

      Whoops! Sorry and thank you. I’ve updated the description!

  • @njdigitalagency8619
    @njdigitalagency8619 3 days ago

    Great video. Thanks. Is it possible to do the same in SeaArt?

  • @user-xb4pu1cp6c
    @user-xb4pu1cp6c 1 year ago +1

    Wow, it's cool :)

  • @roughlyEnforcing
    @roughlyEnforcing 1 year ago +1

    thanks for this one

  • @takayamayoshikazu2782
    @takayamayoshikazu2782 1 year ago +2

    Finally, a video showing a genuinely useful application of this technology.
    This is the kind of use case I've been waiting for.

  • @ItReallyIsiPOD
    @ItReallyIsiPOD 1 year ago

    This is brilliant. I wish I understood the first thing about photo editing. I can use SD and even ControlNet alright, but photo editors are too difficult for me.

    • @albertbozesan
      @albertbozesan  1 year ago

      I've gotten comments like this a few times - I might make a tutorial on the basics you need for AI art :) it's not super hard, but there are some things that are handy.

  • @fungt89
    @fungt89 5 months ago +1

    Great tutorial! My ape brain would just run iteration lotteries, but this was really useful.

  • @denismagalon-studiopublisu7705
    @denismagalon-studiopublisu7705 8 months ago +1

    Perfect tutorial! You speak like an airplane pilot but with a better mic. Thanks!!!

  • @gegenlaktose
    @gegenlaktose 2 months ago

    Does it really make sense to do the liquify step BEFORE the img2img generation, since the exact position of the eyes and other details is really only influenced by ControlNet? (It would make sense to do this in post, though, after the final generation.)

    • @albertbozesan
      @albertbozesan  1 month ago

      Depends on your denoising strength, but yeah, it might not be worth it if you go for anything above 0.35.

  • @timedriverable
    @timedriverable 6 months ago

    What's your GPU VRAM? I hear they recommend at least 12 GB. Thx in adv.

    • @albertbozesan
      @albertbozesan  6 months ago

      I had a 2070S with 8 gigs, no problem.

  • @abrahamnasiri7684
    @abrahamnasiri7684 1 year ago +1

    Hi, please can you make a similar tutorial on how to add color to a sketch?

    • @albertbozesan
      @albertbozesan  1 year ago +1

      I put it on my list, thanks :)

  • @morningwood3457
    @morningwood3457 2 months ago +1

    I'm really curious about the AITA in the third browser tab

    • @albertbozesan
      @albertbozesan  2 months ago +1

      It was a pretty good one! Here ya go: www.reddit.com/r/AmItheAsshole/s/bqY5FnooaB

    • @morningwood3457
      @morningwood3457 2 months ago

      Hahah thanks for the link, OP is definitely NTA.

  • @SatishRahi
    @SatishRahi 1 year ago

    I have CyberLink PhotoDirector, which does not seem to have Liquify. What exactly does it do? We can't layer the original with the SD-generated one; it doesn't match exactly. What will make SD create an image that is a spot-on match, facial feature by facial feature?

    • @albertbozesan
      @albertbozesan  1 year ago +1

      That's because this method isn't perfectly precise just yet. You can fiddle with your ControlNet settings to try and get closer, but the final step for now is editing your photo and pushing the facial features back to the right spot. Photoshop's liquify feature is good for this, but other software will have similar tools. I don't know what Cyberlink Photo Director is and recommend either Photopea, Affinity Photo 2 or Photoshop instead.

  • @user-oc9be7sm5e
    @user-oc9be7sm5e 1 year ago +1

    Can you make a video guide on how to add beautiful backgrounds to background-removed images using Stable Diffusion, without changing the original images?

    • @albertbozesan
      @albertbozesan  1 year ago +1

      I might! But until then my suggestion would be to mask around your foreground and inpaint the background.

    • @user-oc9be7sm5e
      @user-oc9be7sm5e 1 year ago

      @@albertbozesan The inpaint function has a lot of limitations; I have tried many times, but it's hard for detailed images. I already have many good exterior product shots with white backgrounds, and I want to use Stable Diffusion rather than other applications just to provide an appropriate background and scale. Please make an instruction video like this or give me some suggestions. Thank you so much.

    • @albertbozesan
      @albertbozesan  1 year ago +1

      @@user-oc9be7sm5e I am working on a product photography video with Kris Kashtanova right now!

  • @stablefaker
    @stablefaker 1 year ago

    I try to follow along, but my subject doesn't pose the same way as in the ControlNet picture; it's a closeup, so it's pretty much only using openpose face.

    • @albertbozesan
      @albertbozesan  11 months ago

      Do you have the latest ControlNet 1.1 models? Those work with face, the old ones don’t. huggingface.co/lllyasviel/ControlNet-v1-1/tree/main

    • @stablefaker
      @stablefaker 11 months ago

      @@albertbozesan Appreciate the reply. I've had ControlNet 1.1 for quite some time, so I don't see why my models wouldn't be updated.

  • @historyfactsshorts-7401
    @historyfactsshorts-7401 1 year ago

    The models aren't appearing for me. I downloaded and installed them as you described.

    • @albertbozesan
      @albertbozesan  1 year ago

      Have you restarted your UI? Sometimes that’s all it needs.

    • @historyfactsshorts-7401
      @historyfactsshorts-7401 1 year ago

      @@albertbozesan Several times, including a PC restart.

    • @albertbozesan
      @albertbozesan  1 year ago

      @@historyfactsshorts-7401 Strange. Are you referring to the RealisticDiffusion checkpoints or ControlNet models or both? Maybe double check that the ControlNet models are in the right folders. Does Stable Diffusion in general work for you, with other models?

    • @falko2308
      @falko2308 1 year ago +1

      I've got the same issue. No models appear in the drop-down (ControlNet section) no matter what preprocessor I choose. SD works great besides that.

    • @albertbozesan
      @albertbozesan  1 year ago

      @@falko2308 did you download the corresponding .YAML files and place them alongside?

  • @Striderly
    @Striderly 8 months ago

    Looks like the eye direction is different: the original eyes look straight ahead, following the head, while the colorized image looks in another direction.

    • @Striderly
      @Striderly 8 months ago

      The nuances also feel different: the original looks natural, while the colorized one looks very photogenic.

    • @albertbozesan
      @albertbozesan  7 months ago

      It’s not perfect, I know. It’s just another tool :)

  • @fritzchristoph8670
    @fritzchristoph8670 4 months ago

    Which GPU did you use?

    • @albertbozesan
      @albertbozesan  4 months ago +1

      2070 Super

    • @fritzchristoph8670
      @fritzchristoph8670 4 months ago

      Thank you; in that case I will try it myself.

  • @flaviangsm2
    @flaviangsm2 8 months ago

    A great, great video, but it seems some things have changed now; there are newer versions with newer options that don't match at some steps. If you ever have the patience to make a video on the newer version, showing which files are used where, that would be great. Keep it up!

    • @albertbozesan
      @albertbozesan  8 months ago +1

      Yeah, this stuff is updating constantly. Which is great!

  • @dcabral00
    @dcabral00 1 year ago +1

    I can't believe that staple discussion is free!

    • @albertbozesan
      @albertbozesan  1 year ago +2

      Wait till you hear about stumble decoration!!

    • @generichuman_
      @generichuman_ 1 year ago +1

      I missed the last staple discussion, what did they talk about?

    • @albertbozesan
      @albertbozesan  1 year ago +1

      @@generichuman_ we all got cupcakes and access to Stable Diffusion 4.0. Check your spam filter!!

  • @svytta
    @svytta 1 year ago

    I would like to try Automatic1111, but I don't have a GPU. Can I install it anyway?

    • @albertbozesan
      @albertbozesan  1 year ago +1

      Not locally. But you can use the online version with Jupyter Notebook. I don’t know how to install that, but you can search on Reddit and probably find a good guide :)

    • @sarkedev
      @sarkedev 1 year ago

      Yes, you can use it with the --use-cpu flag; it will just be really slow.
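
      --use-cpu is an AUTOMATIC1111 launch flag. A rough illustration of the same idea with the diffusers library instead (the model ID is an assumption; expect minutes per image on CPU):

          # Run Stable Diffusion entirely on the CPU with diffusers.
          import torch
          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float32
          )
          pipe = pipe.to("cpu")  # no GPU required, just very slow

          image = pipe("a color photo of a 1940s street scene").images[0]
          image.save("out.png")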

  • @mr.entezaee
    @mr.entezaee 1 year ago

    Yes, I installed it. But during the installation of Pix2Pix, when I search, nothing comes up. Last time I downloaded it from another place, the whole program ran into errors and gave many of them; I had to delete it. But I need it... There are a lot of old black-and-white photos that need to be colored in good quality, please help.
    There are thousands of low-quality black-and-white photos that should be colored and made high quality, without changing the faces, and I was confused.
    I did this with ControlNet, but the process was slow... I was hoping for a faster and better way.

  • @FelipeAbdo
    @FelipeAbdo 1 year ago

    The depth preprocessor isn't appearing for me. What am I doing wrong?

    • @albertbozesan
      @albertbozesan  1 year ago +1

      If you installed the new ControlNet 1.1, there should be several. The one I used here is the equivalent of depth-midas or similar. Do you see that one?

  • @SatishRahi
    @SatishRahi 1 year ago +1

    Does installing ControlNet require downloading all 14 models (2 GB each) and putting them in the right folder? That's what GitHub says. Anyway, I ran into an issue: only canny works, the others error out. I have no recourse; I tried uninstalling/reinstalling too. Bleeding-edge tech issues.

    • @albertbozesan
      @albertbozesan  1 year ago +1

      You don’t need to download all the models, just the ones you want to use. Maybe the beginning of my general ControlNet tut can help you solve the issue?
      th-cam.com/video/dLM2Gz7GR44/w-d-xo.html

    • @SatishRahi
      @SatishRahi 1 year ago +1

      @@albertbozesan Thanks Albert. I notice that the models you link are named differently: yours for openpose is "control_openpose-fp16.safetensors", while the documentation I followed has the model "control_v11p_sd15_openpose". Has something changed with new versions? Also, it looks like there is not one set of models but many. I followed Mikubill's GitHub for ControlNet. Maybe I will start over following your instructions. Open source seems a bit of a wild west :)

  • @alpen-gold2760
    @alpen-gold2760 6 months ago

    Can anybody help, please? I did everything as shown in the video, but Stable Diffusion generated a totally different person in the photo.

    • @albertbozesan
      @albertbozesan  6 months ago

      Did you enable ControlNet?

    • @alpen-gold2760
      @alpen-gold2760 6 months ago

      @@albertbozesan yes

  • @jzthecreator2270
    @jzthecreator2270 2 months ago

    So this can't be done on a MacBook M1?

    • @albertbozesan
      @albertbozesan  2 months ago +1

      I wouldn’t recommend it. I have one and AI takes aaaaaages. Several minutes per image rather than seconds. I recommend using an online service like RunDiffusion instead (not an ad, I just think it’s pretty good).

    • @jzthecreator2270
      @jzthecreator2270 2 months ago

      @@albertbozesan got you! Thanks!

  • @porkypuff5884
    @porkypuff5884 1 year ago

    So, weird thing: there's a ton of preprocessors but no models?

    • @albertbozesan
      @albertbozesan  1 year ago

      Did you download and install the models as I described?

    • @porkypuff5884
      @porkypuff5884 1 year ago

      @@albertbozesan Turns out it was a bug lol. Had to restart my PC, but it's all good now.

  • @Hitagara
    @Hitagara 5 months ago

    It doesn't look like restoration and colorization. It looks like a drawing based on the photo.

  •  1 year ago +2

    Great video, but I would suggest skipping (RAW photo). First of all, RAW from a camera is an unprocessed photo that looks flat and soft; no one wants a RAW look. The AI also reads the word RAW as "hard, rugged, bare, grungy, basic", as in raw concrete or raw meat. Alternatives are "colorization", "Hasselblad color photo", or "AGFA positive film" to get a photo look.
    The same goes for "high quality"; this is something an AI does not understand unless you specify what quality you want, like "detailed natural light skin with facial hair and small wrinkles around the eyes".
    I would also say that a negative prompt should be avoided and only used if you see the AI do something odd. When you use ControlNet you do not need "deformed, long neck, out of frame" and so on, and since you say it is a photo in the prompt, you do not need "cartoon, anime, 3d, render" either; just have "B&W, grayscale, sepia" as the negative prompt.
    What I got out of this video that was helpful was the VAE trick and adding extra ControlNet tabs, something that had flown over my head; I was happy to learn about it.

    • @albertbozesan
      @albertbozesan  1 year ago +2

      Glad you enjoyed it! There are things here that I disagree with, though. Stable Diffusion is trained on a *lot* of bad imagery, so negative prompting is practically always a must. While you are correct that a raw photo look is not what we want here, we can search the dataset to see what the AI thinks it means: rom1504.github.io/clip-retrieval/?back=knn.laion.ai&index=laion5B-H-14&useMclip=false&query=RAW+photo
      There is decent photography there, so it can help. Prompting is also a lot of trial and error as opposed to strict rules to follow. The prompts I use have proven better than others in my experience, so if it works in the end, who's to judge?

  • @Tferdz
    @Tferdz 1 year ago

    Your canny lines seem a bit messed up; there are too few, and too many are disconnected. Next time, try HED instead of canny. I think it gets better detail without fiddling too much with the low/high settings. :) (A sketch of both maps follows after this thread.)
    Note: try to use the Color adapter after you've got a good color palette, so later iterations focus on details instead of colors.

    • @albertbozesan
      @albertbozesan  1 year ago

      Thanks for the tips! Will try next time :)
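
    The sketch mentioned above: generating HED and canny maps outside the webui, assuming the controlnet_aux annotator package is installed (pip install controlnet_aux):

        # Compare a soft HED edge map against a hard canny edge map.
        import cv2
        from PIL import Image
        from controlnet_aux import HEDdetector

        src = Image.open("old_photo.png").convert("RGB")

        # HED: soft, connected edges; usually needs less threshold fiddling.
        hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
        hed(src).save("hed_map.png")

        # canny, for comparison: hard edges, sensitive to low/high thresholds.
        gray = cv2.imread("old_photo.png", cv2.IMREAD_GRAYSCALE)
        cv2.imwrite("canny_map.png", cv2.Canny(gray, 100, 200))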

  • @zeloguy
    @zeloguy 1 year ago +2

    I'm getting a 404 on the ControlNet URL.

    • @albertbozesan
      @albertbozesan  1 year ago

      Try this: github.com/Mikubill/sd-webui-controlnet

  • @user-ik8vy1rg8f
    @user-ik8vy1rg8f 1 year ago

    "is an art for itself": I think a good way to say it would have been "in and of itself". Might be wrong though.

    • @albertbozesan
      @albertbozesan  1 year ago +1

      In my other mother tongue, German, it's "eine Kunst für sich", which is what I (probably erroneously) translated here. Thanks for the tip!

  • @fintech1378
    @fintech1378 8 months ago

    How about DALL-E 3?

    • @albertbozesan
      @albertbozesan  8 months ago

      No ControlNet-style features right now. DALL-E is too simple for detailed pro use at the moment.

  • @TodayFreedom
    @TodayFreedom 11 months ago +1

    Fascinating stuff, but the end result looks absolutely nothing like the original individual. It's a handy tutorial for this kind of work, but the truth is nothing competes with Photoshop for professional photo restoration.

  • @mainstreamdisruption
    @mainstreamdisruption 1 month ago

    I don't have DPM++ 2M Karras available.

    • @albertbozesan
      @albertbozesan  1 month ago

      Check your settings tab, I’m sure it’s in there.

  • @Mad-Flix666
    @Mad-Flix666 10 months ago

    The best jumping-off point is to hide your face.

  • @tulebox
    @tulebox 1 year ago

    Isn't saying you colorized a photo with AI a bit misleading? You created an entirely new photo and tried to shape it into the old one.

    • @albertbozesan
      @albertbozesan  1 year ago +1

      You could say the same about any process that colorizes, or even manual restoration with a clone tool, I suppose 🤷‍♂️ there is visual information that wasn’t there before. It has to come from somewhere.

  • @Zzudwa
    @Zzudwa 1 year ago

    Well, it's been just two months and the very first link already leads to a noVAE version, which makes most of this tutorial irrelevant...

    • @albertbozesan
      @albertbozesan  1 year ago

      Most??

    • @Zzudwa
      @Zzudwa 1 year ago

      @@albertbozesan For a step-by-step guide, failing the first step means failing the entire guide. Don't get mad; for people who are more educated in the basics, maybe your manual is still useful.

    • @albertbozesan
      @albertbozesan  1 year ago +1

      @@Zzudwa just don't activate the VAE

  • @Comic_Book_Creator
    @Comic_Book_Creator 1 year ago

    Sorry, but not close... the eyes are too different.

    • @albertbozesan
      @albertbozesan  1 year ago

      I said the same in the video. But the technique can get you very far and save you a lot of work nonetheless. It’s a good tool to use along with others.

  • @meadow-maker
    @meadow-maker 1 year ago +1

    I can't get the VAE. I gave up! So incredibly frustrating. Why does it have to be so complicated? Nobody wants to read a page of text just to follow along with a video. Every other page on Civitai has a clear link to the other files you need; you don't have to plough through a load of text and still get nowhere. There's nothing to even say it was removed. I have a life; I don't want to spend forever reading text.

    • @albertbozesan
      @albertbozesan  1 year ago

      It’s the original VAE
      huggingface.co/stabilityai/sd-vae-ft-mse-original
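
      For anyone scripting instead of using the webui, a sketch of wiring the fine-tuned MSE VAE into a diffusers pipeline; stabilityai/sd-vae-ft-mse is the diffusers-format counterpart of the checkpoint linked above, and the model IDs are assumptions (in A1111 you would instead drop the file into models/VAE and select it in Settings):

          # Swap the improved VAE into a Stable Diffusion pipeline.
          import torch
          from diffusers import AutoencoderKL, StableDiffusionPipeline

          vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", vae=vae,
              torch_dtype=torch.float32,
          )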

  • @DerekSuffolk
    @DerekSuffolk 7 months ago

    There are much easier ways to colour a photo than that.

    • @albertbozesan
      @albertbozesan  7 months ago

      Enlighten us 😅

  • @al3x2006
    @al3x2006 10 months ago

    The tutorial is very wrong!!! He chose a strange method when it can be done in 5 steps.

    • @albertbozesan
      @albertbozesan  10 months ago

      Enlighten me

    • @al3x2006
      @al3x2006 10 months ago

      @@albertbozesan In that tutorial you are trying to draw someone in color, and even the tab you use is wrong; the real method is in img2img, not txt2img. You only need to activate 2 ControlNets and a textual inversion, and that's all; there's no need for something like img2txt or investigating. And there's a fork that does it in 3 clicks.

    • @albertbozesan
      @albertbozesan  10 months ago

      @@al3x2006 img2img uses color information - you’re making it harder for SD to add color if you do that, so it has zero benefit in this scenario. Check out the rest of my channel, I recommend the tut for ControlNet.

    • @al3x2006
      @al3x2006 10 months ago

      @@albertbozesan A single image for SD, and the whole process takes 15 seconds instead of at least 10 minutes trying to redraw something, plus the inspection time; it's not the best way. You can also use a trained model to do it. I can colorize 100 in 7 minutes without errors; sometimes the color is poor, but you can put them in Extras and boost the color, another 5 minutes for 100 pics.

    • @albertbozesan
      @albertbozesan  10 months ago

      @@al3x2006 please link to a detailed explanation of what you mean, I don’t understand what you are describing.

  • @aegisgfx
    @aegisgfx 1 year ago +2

    You may have glossed over how long it takes and how large an install ControlNet really is; it seems to me it's a 40-gig install, unless my calculations are off.

    • @albertbozesan
      @albertbozesan  1 year ago

      If you download all the models it’s rather large. I pick and choose which ones. But yes, it can take a while.

    • @lordsnake1988
      @lordsnake1988 1 year ago

      Did you download every model? 😮
      I just use the ones I see in tutorials:
      canny, openpose, depth, and sometimes HED.

    • @albertbozesan
      @albertbozesan  1 year ago

      @@lordsnake1988 no way! Way too much :)