Full AI Art Workflow. ControlNet & Stable diffusion.

  • Published Feb 25, 2023
  • Join me in this Seb Ross series as I show you how I make an AI thumbnail for YouTube. We'll work with ControlNet, Stable Diffusion and Photopea. All free tools.
    Support me on Patreon to get access to unique perks! / sebastiankamph
    Chat with me in our community discord: / discord
    Control Lights in Stable Diffusion
    • Control Light in AI Im...
    LIVE Pose in Stable Diffusion
    • LIVE Pose in Stable Di...
    My workflow to Perfect Images
    • Revealing my Workflow ...
    ControlNet tutorial and install guide
    • NEW ControlNet for Sta...
    Ultimate Stable diffusion guide
    • Stable diffusion tutor...
    The Rise of AI Art: A Creative Revolution
    • The Rise of AI Art - A...
    7 Secrets to writing with ChatGPT (Don't tell your boss!)
    • 7 Secrets in ChatGPT (...
    Ultimate Animation guide in Stable diffusion
    • Stable diffusion anima...
    Dreambooth tutorial for Stable diffusion
    • Dreambooth tutorial fo...
    5 tricks you're not using
    • Top 5 Stable diffusion...
    Avoid these 7 mistakes
    • Don't make these 7 mis...
    How to ChatGPT. ChatGPT explained:
    • How to ChatGPT? Chat G...
    How to fix live render preview:
    • Stable diffusion gui m...

Comments • 114

  • @sebastiankamph
    @sebastiankamph  1 year ago +1

    The FREE Prompt styles I use here:
    www.patreon.com/posts/sebs-hilis-79649068

  • @Rasukix
    @Rasukix 1 year ago +10

    tl;dr:
    1. Generate a batch of images (without Highres fix)
    2. Pick the one(s) you like
    3. Send to img2img
    4. Double the resolution (and increase the sampling steps for detail)
    5. Inpaint the changes you want (optional: use editing software)
    6. Send to Extras to upscale (you can equally use ControlNet Tile & the Ultimate SD Upscale script) — sketched in code below
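
    A minimal sketch of this loop using the Automatic1111 webui API, assuming the webui was started with the --api flag and is reachable at 127.0.0.1:7860. The endpoints are the standard /sdapi/v1 ones, but the prompts, sizes and upscaler name below are placeholders, not the exact settings from the video:

        # pip install requests
        import base64
        import requests

        URL = "http://127.0.0.1:7860"

        # 1. Generate a low-res batch in txt2img (no Highres fix) just to find a composition.
        r = requests.post(f"{URL}/sdapi/v1/txt2img", json={
            "prompt": "portrait of a cyborg woman, cinematic lighting",  # placeholder prompt
            "negative_prompt": "deformed, blurry",
            "steps": 20, "width": 512, "height": 768, "batch_size": 4,
        }).json()

        # 2. Pick the one you like (here simply the first base64 PNG in the batch).
        picked = r["images"][0]

        # 3-4. Send it to img2img at double the resolution with more sampling steps.
        r = requests.post(f"{URL}/sdapi/v1/img2img", json={
            "init_images": [picked],
            "prompt": "portrait of a cyborg woman, cinematic lighting, highly detailed",
            "steps": 40, "width": 1024, "height": 1536,
            "denoising_strength": 0.5,  # lower keeps more of the original composition
        }).json()
        detailed = r["images"][0]

        # 5. (Inpainting or external editing would happen here, by hand or via the same API.)

        # 6. Send to Extras for the final upscale.
        r = requests.post(f"{URL}/sdapi/v1/extra-single-image", json={
            "image": detailed,
            "upscaling_resize": 2,
            "upscaler_1": "R-ESRGAN 4x+",  # any upscaler installed in your webui works here
        }).json()
        with open("final.png", "wb") as f:
            f.write(base64.b64decode(r["image"]))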

    • @jonevansauthor
      @jonevansauthor 8 months ago +1

      Thanks for the inspiration Rasukix, I used this as a springboard to go through the whole thing in detail and created timestamps as well. :)

  • @dollarsignfrodofan77
    @dollarsignfrodofan77 1 year ago +50

    You're the AI version of Bob Ross. Keep it up!

  • @KkommA88
    @KkommA88 1 year ago +54

    This shows SD is still far away from "fire and forget", as some might think. You still have to fine-tune *a lot* in most cases.

    • @jagz888
      @jagz888 1 year ago +4

      Yep, I find it quicker to take my file, dump it into Photoshop, correct bad forms, and do a design I intentionally want.

    • @texx8205
      @texx8205 1 year ago +3

      Still, even with ControlNet, the result is not quite what he wanted in the beginning...

    • @legolas66106
      @legolas66106 1 year ago +2

      This will stay for a while though, people who are willing to put in actual effort and time to mess with ControlNet or img2img to actually steer the output will have much better results than just the 'prompt and post' crowd.

    • @TheHDMICable
      @TheHDMICable 1 year ago +3

      Most who say that generating images with AI is easy have no experience whatsoever, and are just parroting what others are saying. Their opinions hold little value, as they don't have anything to back their assertions.

  • @Horde1Blades
    @Horde1Blades 1 year ago +6

    Your soothing voice and energy, along with the calm explanations, really suit this style of content.
    I'd be happy to see more of it in the future, thanks.

  • @jameshughes3014
    @jameshughes3014 1 year ago +8

    Workflow is so important to getting good images. Thank you for this video, it was helpful.

    • @sebastiankamph
      @sebastiankamph  1 year ago

      You're very welcome, happy you enjoyed it 🥰

  • @usefulrandom1855
    @usefulrandom1855 1 year ago

    Would love to see more videos like this. It's rare that this sort of channel shows the workflow in such depth!

  • @DanielVagg
    @DanielVagg 1 year ago

    The comments that you make when things aren't working out are my favourite part 😂
    I laugh a lot at some of my generations.
    Thanks for sharing your process.

  • @wernerblahota6055
    @wernerblahota6055 9 months ago

    Would love to see more videos like this. It's rare that this sort of channel

  • @MrFedemoral
    @MrFedemoral 1 year ago

    My favorite video from you so far. Straight to the point/process as a workflow example

  • @Oozywolf
    @Oozywolf 1 year ago +2

    Absolutely love these videos! Thank you. Informative and relaxing 😎

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      So glad! This long format is new to me, happy you're enjoying it 😊💫

  • @samkennedy7173
    @samkennedy7173 1 year ago +1

    Ah yes, I get my ASMR fix while learning about stable diffusion workflows. Best of both worlds.

  • @asbjborg
    @asbjborg 1 year ago

    Thank you very much for this video, I could watch and listen for hours. :)

  • @StudioBleenk
    @StudioBleenk 1 year ago

    This is awesome, very similar to my own workflow, but I learned a ton of useful things to apply. Subscribed!

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Awesome, thank you! Welcome to my world of AI and dad jokes.

  • @TheAiConqueror
    @TheAiConqueror 1 year ago

    Cool that you are now making longer videos about your workflow 😁👍

    • @sebastiankamph
      @sebastiankamph  1 year ago +2

      People started calling me Seb Ross, so I got to deliver, right? 😁 Thank you for the support my friend! The mere idea of you is a national treasure 🌟

  • @lyonstyle
    @lyonstyle 1 year ago

    Thank you so much for the easy-to-follow and very detailed video of how your workflow goes. I really learned a lot from this video!

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Glad it was helpful! 😊

    • @lyonstyle
      @lyonstyle 1 year ago

      @@sebastiankamph Just subscribed, as I'm starting my Stable Diffusion and AI artwork journey. Looking forward to watching more of your videos! Much love!

  • @michaelleue7594
    @michaelleue7594 1 year ago +10

    I wish you would do more to explain your choice of models when selecting them. There are a lot of models out there and knowing which ones are good for which effects is a huge deal.

  • @morepoppunkthanpizza
    @morepoppunkthanpizza 1 year ago

    Loved the intro, you made me laugh in the first few seconds. Awesome content thanks so much.

  • @shahahahahahahaha
    @shahahahahahahaha 1 year ago +1

    Thank you for sharing your workflow. It's extremely useful. I'm trying to follow along using the same model, prompts and settings - but even from the beginning, I only get 2-3 out of 10 images with a face. Then afterwards, when I enable ControlNet and add in mechanical implants, I get no implants. What am I doing wrong?

  • @doabigcheese
    @doabigcheese 1 year ago

    Awesome video! Thanks for the insight!

    • @sebastiankamph
      @sebastiankamph  1 year ago

      My pleasure! Glad you enjoyed it 😊

  • @MyTubeAIFlaskApp
    @MyTubeAIFlaskApp 1 year ago

    Very interesting video. I am a Python programmer, and love AI stuff and image processing.

  • @deanb8191
    @deanb8191 1 year ago

    Need more vids on what models to use, what presets, that sort of thing!

  • @zephyra6248
    @zephyra6248 1 year ago

    Hi Sebastian, your tutorials have helped me learn how to use SD with Automatic1111 a lot more effectively. Thank you!
    I discovered how you can change the start-up default settings for the Automatic1111 webui:
    1) Open "[filepath]/stable-diffusion-webui/ui-config.json" with a text editor.
    2) Edit the settings to your preferred defaults, save, and restart the server to apply the changes.
    Personally, I always brainstorm in txt2img with Euler a, 35 steps, batch size 4. Changing the default settings saves me a few seconds each time I refresh the Automatic1111 webui (a small scripted example of this follows below).
    Sidenote: it may also be possible to change minimum and maximum parameter values in the config file, but when I changed the values for ControlNet Thresholds A and B it seemed to completely reset my config settings when I started the server, so I stopped experimenting with that.
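
    A minimal scripted version of the same edit, assuming a default Automatic1111 install. The key names follow the "tab/Widget label/value" pattern the webui writes into ui-config.json, but they can differ slightly between versions, so check your own file for the exact keys:

        # Edit ui-config.json to change the webui's default txt2img settings.
        import json
        from pathlib import Path

        cfg_path = Path("stable-diffusion-webui/ui-config.json")  # adjust to your install path
        cfg = json.loads(cfg_path.read_text(encoding="utf-8"))

        # Keys use the "tab/Widget label/value" pattern shown in the file itself.
        cfg["txt2img/Sampling method/value"] = "Euler a"
        cfg["txt2img/Sampling steps/value"] = 35
        cfg["txt2img/Batch size/value"] = 4

        cfg_path.write_text(json.dumps(cfg, indent=4), encoding="utf-8")
        # Restart the webui server for the new defaults to take effect.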

  • @enthuesd
    @enthuesd 1 year ago

    superb thank you

  • @76abbath
    @76abbath 1 year ago

    Thanks a lot for this video! I love seeing the workflow of someone like you, who I've been following for a long time. You can make other videos like this one, it's great! Could you do a video on the SD upscale script? Because when I try to enlarge some images, between the SD upscale script, Extras (ESRGAN...) and chaiNNer, I don't always get good enlargement quality. THANK YOU SO MUCH!

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Thank you for your support and glad to have had you around for such a long time 😍

  • @MrSongib
    @MrSongib 1 year ago

    Watching this gives me the vibe, because I had the same problem and ended up photobashing the entire image. At the end of the day it's still a tool; I just take inspiration from the tools and then do it myself again, lol.
    Now I'm waiting for the sketch model for ControlNet to come around, to see how my sketches do with it.
    And tbh, most of us have problems getting the composition we want out of these AI tools. Sometimes we get a good output but want a different background, character, pose, etc., and trying to adjust it toward the composition we liked just breaks the whole thing, so what I did was inpaint certain things, put the silhouette as close as possible to what I wanted, and press generate.
    The biggest problem for me now is trying to adjust the pose; idk how to keep the "original" character intact without breaking the composition.

  • @davewaldmancreative
    @davewaldmancreative 1 year ago

    Hi Sebastian. Great tutorials, super pro. I have a question: do you happen to know where to save images to use as init images? I've tried my drive, I've tried local. I'm running out of ideas! Thanks Sebastian. Dave

  • @fluffykitten7796
    @fluffykitten7796 1 year ago

    Thank you! 👍🏻

  • @Rasukix
    @Rasukix 1 year ago

    So cool watching start to finish

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Glad you enjoyed it! 😊

    • @Rasukix
      @Rasukix 1 year ago

      @@sebastiankamph Watching it again rn cus I'm still struggling with my workflow haha, this is my first week using AI tho, so I can't be too hard on myself.

    • @Rasukix
      @Rasukix 1 year ago

      Idk how to get high-quality images in the initial generation without Highres fix.

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      @@Rasukix The first generation doesn't need high quality images. You just need to find your composition. Then in the img2img step you'll get the high quality.

    • @Rasukix
      @Rasukix 1 year ago

      @@sebastiankamph This is what I struggle with, so I should generate without Highres fix, then go to img2img and use a script to upscale?

  • @MrDimitriVorontzov
    @MrDimitriVorontzov 1 year ago

    Useful tutorial, thank you. Question: How can I load custom styles into Automatic 1111 UI (in RunDiffusion)?

  • @inkinno
    @inkinno 1 year ago

    you are awesome!

    • @sebastiankamph
      @sebastiankamph  1 year ago

      You are! You absolute diamond you 💫

  • @ph3-lm
    @ph3-lm 1 year ago

    Well, as for me, just getting the image into Photoshop and quickly repainting or photobashing the stuff I like or don't like works much, much faster than counting on following quite random generations. After that I bring it back into img2img or ControlNet. Maybe you should try that? Though I admit I paint/draw quite proficiently, so I may be biased toward that kind of workflow.

  • @PuckDudesHockey
    @PuckDudesHockey 1 year ago

    Sebastian: "Those are the current models that I am using, but that might change... in the coming week."
    Me: "Dang, video is a whole two months old... we're probably 5 generations of model past those..." 🙂

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Deliberate, Rmada Merge, Revanimated, Colorful. Some I still use now :)

  • @elevador3128
    @elevador3128 1 year ago

    Thanks for saying I have a big brain :)

  • @KentSteinhaug
    @KentSteinhaug 1 year ago +1

    Thanks! You tend to explain what you do and why in a way that's interesting and fun. And you usually have that little bit "extra" when it comes to fixing details and common problems. I really hope you will make more videos like this one, I really enjoyed it :)
    One question. Stable diffusion accepts special characters like ( ) ] [ and lettering like 2:1 (or whatever) etc. But I have never seen a list of such commands. Can you make something about that?
    Anyway, keep up the great work :D :D

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Glad you enjoyed it! I've got that info in my videos, though not all in one video.

    • @KentSteinhaug
      @KentSteinhaug 1 year ago

      @@sebastiankamph - yes, I know. Do you know why it's "impossible" to train any models in Automatic 1111 at the moment? I have tried to follow all the tutorials, yours included, and I have so far never managed to get one single model that is good. Even worse, the newer everything gets, the worse the results :(

  • @gabriellekamph
    @gabriellekamph 1 year ago

    🤩

  • @qaesarx
    @qaesarx 1 year ago +1

    We don't make mistakes. We just have happy accidents... Make love to the canvas...You can do anything you want to do. This is your world...There's nothing wrong with having a tree as a friend.

  • @MrFedemoral
    @MrFedemoral 1 year ago

    Sebas! Can you point me in the right direction on the models you are using? Protogen and DreamShaper?

    • @sebastiankamph
      @sebastiankamph  1 year ago

      I get them mostly from Civitai. Now I use a lot of rMada Merge and Illuminati too.

  • @adrianiskandar
    @adrianiskandar 1 year ago

    Awesome video! But what is the software you are using? Do you have a video on installing all the tools you are working with?

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Yes, check my Ultimate Guide to Stable Diffusion. I've got lots of guides and install how-tos.

  • @PatrickBaitman735
    @PatrickBaitman735 10 months ago

    If you have trouble with inpainting, use a lower denoising strength: 0.4 = minor changes, 0.7 = a lot of changes. Try to maneuver in this range.
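
    A minimal sketch of sweeping that range through the Automatic1111 webui API so you can compare results side by side, assuming the webui is running with --api at 127.0.0.1:7860 and you already have an image plus a painted mask; the file names and prompt are placeholders:

        # Compare inpainting results across a few denoising strengths.
        import base64
        from pathlib import Path
        import requests

        URL = "http://127.0.0.1:7860"
        to_b64 = lambda p: base64.b64encode(Path(p).read_bytes()).decode()

        init = to_b64("picked.png")  # the image to edit (placeholder file name)
        mask = to_b64("mask.png")    # white = area to repaint (placeholder file name)

        for strength in (0.4, 0.5, 0.6, 0.7):
            r = requests.post(f"{URL}/sdapi/v1/img2img", json={
                "init_images": [init],
                "mask": mask,
                "prompt": "mechanical implants, intricate detail",  # placeholder prompt
                "steps": 30,
                "denoising_strength": strength,  # ~0.4 = subtle edit, ~0.7 = heavy edit
                "inpaint_full_res": True,        # render the masked region at full resolution
            }).json()
            Path(f"inpaint_{strength}.png").write_bytes(base64.b64decode(r["images"][0]))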

  • @hellfrozen3678
    @hellfrozen3678 1 year ago

    You can also just interrupt the batch when you don't like the images from the start and it's already about 80% done, to save some time.

  • @OlaffLudwig
    @OlaffLudwig 1 year ago

    Subscribed, for the content and because I am something of a dad joke aficionado myself.

  • @robbasgaming7044
    @robbasgaming7044 1 year ago

    When are you using ControlNet and when are you using img2img? I can't really grasp the use cases for ControlNet vs inpaint/img2img. Great video!

    • @sebastiankamph
      @sebastiankamph  1 year ago +6

      ControlNet can keep a full composition or pose but change everything else. Img2img retains a lot of the image, so has less control. But img2img is great if you have the colours, style and composition you want already.

    • @robbasgaming7044
      @robbasgaming7044 1 year ago

      @@sebastiankamph Great, thank you! I've made some great character portraits for D&D with your tips, and a model I found for D&D 🤣 thank you! 💪
      It feels like a specific model for what you're looking for is the most important thing.

    • @user-gz4od3to4c
      @user-gz4od3to4c 1 year ago

      @@robbasgaming7044 I'm also trying to make D&D stuff, do you have any recommendations? Like sites, prompts, models, workflow or anything that you use? ☺

  • @pto2k
    @pto2k 1 year ago

    Great video.
    0:36 I wonder in which Discord channel I could find these style files?
    Thank you.

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Check the video or channel description, then look inside #resources.

  • @niravelniflheim1858
    @niravelniflheim1858 1 year ago

    My poor machine only has 4gb VRAM, but I literally just noticed I can push the img size to 512x768 (2:3 ar) without it crashing (usually I do 384 x 576 locally). I'd only been using the larger resolution in my colab instance up 'till now, which I've only been using for a day or so, but it timed out. So just for the hell of it I threw its last output into the PNG Info tab on my local instance to carry on.
    Now I have an oddity where regenerating the image from that info doesn't recreate the exact same image - I'm not sure why - since it's the same model, seed and so on, copied right out of the image itself. Different hardware, different result? One thing's for sure - the cloud GPU is a lot faster!

  • @kanall103
    @kanall103 1 year ago

    I have a "ControlNet" folder with a "model" folder inside, with pixar, monaliza, etc. as .pt files. How do I use them? Please

  • @lujoviste
    @lujoviste 1 year ago

    Hey Sebastian, can you do a tutorial on LoRA style training? For the life of me I can't seem to get it to work.

  • @CienWill
    @CienWill 1 year ago

    Hi Sebastian, can you tell me your hardware setup? Your rendering is so fast!!

  • @pastuh
    @pastuh 1 year ago +1

    Funny how you keep trying to generate the image, even though the first image was the best :))
    I mean, 04:45 looks good.

  • @marcthenarc868
    @marcthenarc868 1 year ago

    Do you have a 12 GB card? So many images in a batch, and so fast. I can barely ask for 4 small ones on my 8 GB.

  • @ZeroCool22
    @ZeroCool22 1 year ago

    So, you don't use "inpainting models", just normal ones?

    • @sebastiankamph
      @sebastiankamph  1 year ago

      You can use inpainting models, but most of the time there are fewer to choose from.

  • @studio85amsterdam
    @studio85amsterdam 1 year ago

    I used to glue my fingers as a kid. Mainly to make it look like there was a cut in it.

  • @Mohammed-oo5cj
    @Mohammed-oo5cj 1 year ago +1

    How to turn negative comments into a new lesson 😂😂😂😂😂😂😂😂😂😂❤❤❤❤

  • @ericgriffin120
    @ericgriffin120 1 year ago

    You know, you should have a standard statement about your PC rig in all your posts. Or please point me to where it is. Thanks.

    • @lajsmith_
      @lajsmith_ 1 year ago

      He has a 3080 I think

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      MVP right here. Yes, a 3080. And a Ryzen 5900X.

  • @nikoleifalkon
    @nikoleifalkon 1 year ago

    2:34 Check your prompts, because they have a lot of contradictory features. For example, the positive prompt asks for hyperrealistic and depth of field but also sharp focus, which is prompted twice; it asks for a 3D render from apps like Blender, Cinema 4D and Octane, BUT the negative prompt says you don't want a render or anything 3D either; and "trending on artstation" is doubled too. Too confusing for a beginner. Maybe it's time to do a video debunking all those artists that nobody knows and all the features people put in the positive prompt to make an image. Example: wanting Studio Ghibli style does not match with Ed Blinkey's art style, but you ask for both in the same image.

    • @nikoleifalkon
      @nikoleifalkon 1 year ago

      And don't take me wrong! I've learned a lot from your knowledge of all the features and tips. Just don't copy prompts without analyzing them first; be the first YouTuber with the real answers, demystifying what is just a copy-pasted prompt from random sites. Maybe reducing opposing features could get images created faster.

    • @sebastiankamph
      @sebastiankamph  1 year ago +2

      You're 100% correct in that it's contradictory. There are lots of duplicated words and even the same stuff in the negatives as in the positives. I've had to choose where I spend my time, and it's been like a "no need to fix it, it's not really broken" kind of thing :) I have been thinking about it lately though, seeing as I link to it nowadays. Thanks for the input 😊

    • @nikoleifalkon
      @nikoleifalkon 1 year ago +1

      @@sebastiankamph thanks for the mature answer! now I know I'm in the correct channel, keep it up!

  • @gosia_kmiec
    @gosia_kmiec 1 year ago

    I came here to learn a bit about ControlNet. I'm hoping AI will become more ethical with things like that, but I can't believe that after all the backlash and pleading, AI prompters like yourself STILL use names like Greg Rutkowski in their prompts. He and a bunch of other artists clearly said no. Why don't you respect that?

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      I actually don't anymore. This video is 4 months old. I also removed all artist names from my styles prompts.

    • @gosia_kmiec
      @gosia_kmiec 1 year ago

      @@sebastiankamph I can't express how glad I am to hear that! Thank you for clarifying my mistake and for acknowledging the issues around AI. I hope more people will do the same thing!

  • @tonymassage2314
    @tonymassage2314 1 year ago

    Also, in the positive prompt you use 4k like two times, then also 8k, and an Unreal render??? LOL, I mean, do you want a 4k or an 8k result? And if you add an ENGINE, the 4k or 8k result will NOT be as good... what a lame prompt...

  • @tonymassage2314
    @tonymassage2314 1 year ago

    Sebastian, with all due respect, what's the point of using an excessive negative prompt when you actually get very lame results? You are showing people what EVERY ROOKIE does: COPY and PASTE prompts without editing them - you have so many repeated, unnecessary words, and your results show that YOUR PROMPT needs fixing... MODELS are more important than the prompt, because if a model is badly trained, it doesn't matter how LONG of a testament your prompt is, you will get lame results... I'm sorry, but I need to be honest. Here is a list of your repeated words: UGLY, DEFORMED, BAD ARTS, DISFIGURED, DEFORMED, extra limbs, DEFORMED (again, for the third time), deformed (again), extra limbs (5 times!!!), disfigured like 5 times!!! Deformed like 4 times!! Ugly like 6 times!!!
    I mean, if you get all these people following you and they don't understand this, I mean, you are showing them the WRONG way, and still you get all your fans... this shows me how people just FOLLOW ALONG and don't test for themselves or question... you just make money out of videos even if they are bad, I get it... this means that if I were someone making tutorials using REAL EXPERIENCE, I would be a millionaire on youtube...

    • @sebastiankamph
      @sebastiankamph  1 year ago

      There's no point in doing it really. I just haven't had time to clean them up over time as I spend most of my time making videos or answering viewers here or on Discord. It's extremely far down on my todo list.

  • @toututu2993
    @toututu2993 1 year ago

    Remember, people: never support AI art with your money or praise, as it is already destroying what is great about humanity, which is "creation" and quality craft made with thought and life knowledge that brought us happiness. AI art doesn't have any clue or idea of our inspiration and greatness the way a living human does.
    What corporations and AI programmers have done is not okay at all. They don't care about the quality of experience and entertainment that brought people greatness; they only care about collecting huge amounts of cash and don't care about the earth they live on. It's getting worse and worse, how they can just insult us all with these AI art techs that will make our entertainment media worse.