@@AIKnowledge2Go I wonder what your opinion is on the recent Stable Swarm UI? It's still in beta, but it basically combines A1111 and ComfyUI within one convenient web interface, with easy support for LAN use and multiple GPUs too :)
You are absolutely right, you don't need to do this in newer versions of A1111 anymore. I always add this because some of my audience may have older versions.
Sorry to hear that. Is your ControlNet inpainting working properly? Sometimes the command window is full of errors, and you wonder why it isn't working. Also, try different denoising strengths and generate at least two images; sometimes the second image really makes a difference.
Your mission, should you choose to accept it, involves keeping these "top secret" techniques under wraps. As long as you're discreet, you'll navigate this covert operation without any trouble. Welcome to the inner circle! 😉
Please don't be angry; since one of the last updates, the sampling method and schedule type have been separated. You can find the schedule type in the dropdown next to the sampler. Actually, the automatic setting should be fine.
Do you struggle with prompting? 🌟 Download a sneak peek of my prompt guide 🌟 No membership needed: ⬇ Head over to my Patreon to grab your free copy now! ⬇ www.patreon.com/posts/sneak-peek-alert-90799508?Link&
Can you change your audio setup? It is a pain to listen to your voice. Get a better mic, use filters, record audio in another room, turn down the volume, do something.
Thank you all for your feedback and support. I'm currently in the process of exploring better audio editing techniques. From here it only gets better :)
@@AIKnowledge2Go Your videos are somehow unique; you really go into detail. All the other videos on the web are beginner guides that all say the same thing, and you hardly learn anything new from them. Keep making these in-depth guides, keep up the good work.
I'm on Forge and when I do the Inpaint Upscale step, I get this error and the resulting image is completely different:

*** Error running process_before_every_sampling: D:\AI\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py
Traceback (most recent call last):
  File "D:\AI\webui_forge_cu121_torch21\webui\modules\scripts.py", line 835, in process_before_every_sampling
    script.process_before_every_sampling(p, *script_args, **kwargs)
  File "D:\AI\webui_forge_cu121_torch21\webui\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\AI\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py", line 555, in process_before_every_sampling
    self.process_unit_before_every_sampling(p, unit, self.current_params[i], *args, **kwargs)
  File "D:\AI\webui_forge_cu121_torch21\webui\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\AI\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py", line 497, in process_unit_before_every_sampling
    cond, mask = params.preprocessor.process_before_every_sampling(p, cond, mask, *args, **kwargs)
  File "D:\AI\webui_forge_cu121_torch21\webui\extensions-builtin\forge_preprocessor_inpaint\scripts\preprocessor_inpaint.py", line 27, in process_before_every_sampling
    mask = mask.round()
AttributeError: 'NoneType' object has no attribute 'round'

Do you know what is going on?
I'm sorry to hear you're encountering this issue. While I'm not deeply familiar with Forge specifics, this error suggests there might be a problem with the input mask. Since we don't use any, it could be a bug in ControlNet for Forge. Can you confirm you used the right preprocessor? If you haven't already, consider seeking advice on forums or communities dedicated to Forge or similar AI tools; they might have encountered and resolved similar issues. Good luck, and I hope you find a solution soon!
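For anyone curious what that traceback actually means: the inpaint preprocessor assumes a mask tensor was passed in and calls .round() on it, but when the ControlNet inpaint unit is used without an explicit mask, that value is None. Here is a minimal sketch of the failure mode and the kind of guard that avoids it; the function name only mirrors the traceback, and this is an illustration rather than the actual Forge patch.

```python
import torch

def process_before_every_sampling(cond, mask):
    # Hypothetical guard; the real code path lives in Forge's
    # forge_preprocessor_inpaint/scripts/preprocessor_inpaint.py.
    if mask is not None:
        mask = mask.round()  # binarize the mask only if one was actually provided
    return cond, mask

cond = torch.zeros(1, 4, 64, 64)
process_before_every_sampling(cond, None)                      # no crash: mask stays None
process_before_every_sampling(cond, torch.rand(1, 1, 64, 64))  # mask gets rounded to 0/1
```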
Make sure to visit my sponsor. Their Textify tool is revolutionary: storia.ai?
Write an email to founders@storia.ai for a 10% discount on your existing subscription for 6 months.
The prompt that made those pics is different from the one you entered. This is why you should always add all prompts in the description; how is anyone going to be sure they get it all right before doing their own?
What is in the professional scenic photography style?
long shot, professional scenic photography, closeup image of a female druid, in leather armour, sitting on rock, casting nature spellfantast00d,smiling, perfect viewpoint, highly detailed, wide-angle lens, hyper realistic, with dramatic sky, polarizing filter, natural lighting, vivid colours, everything in sharp focus, HDR, UHD, 64k,
Negative prompt: nsfw, (worst quality, low quality,2D:2), monochrome, zombie, overexposure, watermark, text, bad anatomy, bad hand, extra hands, extra fingers, too many fingers, fused fingers, bad arm, distorted arm, extra
arms, fused arms, extra legs, missing leg, disembodied leg, extra nipples, detached arm, liquid hand, inverted hand, disembodied limb, small breasts, oversized head, extra body, extra duplicate, ugly, huge eyes, text, logo, worst face, (bad and mutated hands:1.3), (blurry:2.0), horror, geometry, bad prompt, (badhands), (missing fingers), multiple limbs, bad anatomy, (interlocked fingers:1.2), Ugly Fingers, (extra digit and hands and fingers and legs and arms:1.4), ((2girl)), (deformed fingers:1.2), (long fingers:1.2),(bad-artist-anime), bad-artist, bad hand, extra legs , canvas frame, (high contrast:1.2), (oversaturated:1.2), (glossy:l.l), cartoon, 3d, ((disfigured)), ((bad art)), ((b&w)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), Photoshop, video game, ugly, tiling, poorly drawn hands, 3d render, badhandv4, bad-hands-5
Steps: 35, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1598295495, Size: 768x768, Model hash: edd7bf7340, Model: realcartoonRealistic_v14, Style Selector Enabled: True, Style Selector Randomize: False, Style Selector Style: base, Lora hashes: "fantasy00d-000015: 56defd48c8b6, add_detail: 7c6bad76eb54", Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
@@TheNexusDragoon
Here you go:
SD 1.5 & SDXL Styles:
www.patreon.com/posts/my-collection-of-87325880
@@AIKnowledge2Go Sweet, I always try to recreate the image being made to make sure it's all going right. Thanks!
This is one crazy workflow that produces incredible results. I've watched it 3 times now and compiled over a page of notes. Tons of great information here. I've seen videos on upscaling, control net inpaint, control net tile, etc. but have never seen them put together into a cohesive workflow like this. Thanks for sharing. Really exceptional info. Liked and subscribed!
Can you please share the notes?
I'm so glad to hear you found the workflow helpful and took the time to dive deep into it. Over a page of notes, that's impressive! You're right; many tend to focus on single techniques, but I believe in the power of combining them to unlock even greater potential. Your support, by liking and subscribing, means a lot to me, and it motivates me to keep sharing more exceptional info.
@@AIKnowledge2Go If I could make one suggestion for this process, it would be to skip the Ultimate SD Upscale (step 5) and instead use the Tiled Diffusion extension. Ultimate SD Upscale produces gorgeous results, but it also produces noticeable seams, and the details between the tiles can change wildly. So while the overall image looks remarkable at first glance, the end result falls apart on close inspection. Conversely, the Tiled Diffusion extension allows you to upscale your results without any noticeable seams, and the overall image will be coherent. The settings for this in I2I are pretty straightforward:
- Enable Tiled Diffusion and Keep Input Image Size. Leave Method at MultiDiffusion.
- Keep Tile Width/Height at their default (96) and Tile Overlap at its default (48).
- Set the Upscaler to 4x-UltraSharp and the Scale Factor to 2 to 4 (you can go up to 16k).
- Keep Noise Inversion OFF. Turn on Tiled VAE and leave everything at its default.
- Turn on ControlNet Tile using the same settings you would for Ultimate SD Upscale.
- Set your CFG Scale somewhere between 5 and 7 (any higher and your image will look overbaked).
- Set Denoise Strength to between 0.3 and 0.45. Anything outside this range will produce garbage results.
- Remove your prompt and simply put "8k, ultra sharp" (you can also use the Add Detail Lora with a low strength if you need more details).
Render and you'll get a gorgeous high-res image. The main thing to note with Tiled Diffusion is that it tries to adhere to your original image as closely as possible. So work out the details in your low-res image, then let Tiled Diffusion add the fine details as it scales things up.
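A quick way to get a feel for what those tile and overlap numbers mean is to count how many latent tiles a pass would process. The sketch below assumes tile width/height and overlap are given in latent units (1 latent unit = 8 pixels), matching the defaults of 96 and 48 mentioned above; treat that convention, and the helper itself, as a rough illustration rather than the extension's exact internals.

```python
import math

def tile_count(pixels: int, tile: int = 96, overlap: int = 48) -> int:
    """Estimate the number of tiles along one axis for a MultiDiffusion-style pass.

    `pixels` is the upscaled image size along this axis; tile size and overlap
    are assumed to be in latent units (1 latent unit = 8 px).
    """
    latent = pixels // 8              # latent-space size along this axis
    stride = tile - overlap           # how far each tile window advances
    return max(1, math.ceil((latent - overlap) / stride))

# Example: a 768x768 base image upscaled by 4 -> 3072x3072 pixels.
w = h = 768 * 4
print(tile_count(w) * tile_count(h), "tiles with the default 48 overlap")
print(tile_count(w, overlap=0) * tile_count(h, overlap=0), "tiles with no overlap")
```

The overlapping windows are what blend away the seams, at the cost of processing noticeably more tiles per pass.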
@@SteveWarner Awesome tip. I actually just came back to this video for a refresher on upscaling and was going through the comments; I'm going to try both methods and see which works better. I have a few images to test on. That's why I love this stuff: there are so many ways to do things, endless combinations, and finding what works best for a given scenario.
@@SteveWarner Great tip! Thank you for commenting :)
I fell in love with digital art creation quite by accident, just 1 week ago, and have been going at it alone, through trial and error. Finding your tutorial as a recommended video, I was excited that it was so clear and concise that even I could follow it. Liked and subscribed, and looking forward to this incredible adventure now that I've found your channel!
Thanks a lot for saying that. I am glad you found my content helpful. Happy creating.
Haha, I'm in the same boat as you; I just started a few days ago and I'm going mad about it, all the possibilities. Every time I learn something new about it, I discover something even greater about this tech. Lots of errors and bugs at first, but I'm getting used to it and learning new ways to do things. I feel silly being amazed already by some pictures I made with only a checkpoint, with little to no upscale, VAE or even LoRA, just a checkpoint and positive/negative prompts. And I keep discovering new ways to make things even greater: I first learned about upscaling with hires fix, then went to upscaling via img2img, quickly tried the Ultimate SD Upscaler without ControlNet, discovered ControlNet yesterday, and LoRAs not long ago as well. The thing becomes even crazier at each step, and now this. Thanks for the quality video and keep it up @AIKnowledge2Go
I just came across your channel for the first time. I'd yet to see some of these tips! Turning off restore faces on the upscale should be a big help for me; warped faces have always been my issue. Thanks!
I'm thrilled to hear you found the video helpful and insightful! Welcome to our community!
I've found that in general, at least the way I usually work, it's best to turn off restore faces after either text2img or after img2img (if you want to make significant changes to the image while keeping the same face). When I do any kind of inpainting on a face, my preferred method is to use IP-Adapter (I've heard good things about ID as well, but haven't tested it beyond a few images). IP-Adapter seems pretty good if I want to add or remove some kind of detail on a face. It keeps the face the same without removing or heavily denoising my changes; face restore, on the other hand, does not seem to play nicely with inpainting on a face.
Awesome man! Your videos are what got me into AI art, so happy to be learning a new workflow. Can't wait to try it!
That's fantastic to hear! I'm glad my videos have inspired you to dive into AI art. Enjoy exploring the new workflow!
So when I get to the stage of upscaling with "resize by", before we do the Ultimate upscale, I notice that if my prompt has a color word in it, even something like "purple hair", the whole image gets a purple tone. If I take out the color word, I don't get the purple hair, but the image looks great. Would you recommend anything so the image doesn't get the color overlay at this step? And do you perhaps know what is driving this? @@AIKnowledge2Go
In the step starting around 7:10, did you mistakenly invert the settings for denoise and ControlNet weight? Because if I put denoise at 0.9 or 1 and weight at 0.3-0.6 I get a completely new image, but if I reverse them I get the intended effect.
Hi there! Actually, no, the settings of denoising at 0.9 and control weight at 0.3 - 0.6 are indeed correct. Could you please check your A1111 Console Window for any errors that might pop up when you use this controlNet workflow and render? Sometimes, the issue might be hiding in the details there.
@@AIKnowledge2Go I wasn't getting any errors; it ended up being that I was using the i2i > inpaint tab instead of just the i2i tab, like the other commenter mentioned (who seems to have deleted their comment for some reason #shrugs lol). But like I said, even in the main i2i tab I was able to get the desired result with those options flipped. Thx!
@@AIKnowledge2Go Will using the A1111 Forge webui make a difference? I get similar results as @juggz143 when using denoising at 0.9 and control weight at 0.3 - 0.6 (i.e., my image changes completely).
@@juggz143 Same here. I don't think it could work with 0.9 denoising strength, especially when using a random seed...
I got the same results as you. If I flip them I get the desired results; otherwise it gives me something else.
Control net was the cheat code I needed
I'm glad to hear that Control Net is helping you out! It's amazing how a little tool can make such a big difference in your workflow.
At 6:15 you said you uploaded the image from step one, but it looks like you used the inpainted version instead. Is the inpainted version the correct one to upload to ControlNet?
You are absolutely right, I mixed them up. But since it's outpainting, it shouldn't impact the outcome that much. ControlNet is analyzing the image, so it looks more at colors and surroundings and how it should inpaint the new areas.
Thank you SOOOO much for the guide on your Patreon, it has been enlightening, so much good advice; I really, really appreciate it!!
Glad you enjoy it!
Thanks for the video. Used your flow, and everything looks good until inpaint upscale, then it goes off the rails. I have the same exact settings, can't figure out what's going on. I can only get a normal looking image if I lower the denoising.
You are welcome. Make sure that your ControlNet unit works correctly. Check your console window for errors when you start the upscale. If it works correctly, then increase ControlNet weight and lower denoising.
@@AIKnowledge2Go Thanks! Ya, tried all that. It only works if I drop denoising to ~0.2, and then I'm losing detail. Not sure how you get it to work with denoising so high, I've been trying everything. I'm in forge, so maybe that's it? Although, it should be 100% auto111.
@@Al-Storm I've got the same problem with Forge
Just discovered your channel...subscribed! Great Vid.
Welcome aboard!
This is great stuff! When you turn the image into wide screen, can you add more characters into that space? If so, how?
You would first have to outpaint the image (into widescreen), then send it to inpaint, mask the area where the character you want to add should stand, and use a high denoising strength, 0.7 - 0.9. Use ControlNet inpaint with either lama or global_harmonious as the preprocessor. Remove everything from the prompt that has nothing to do with the render; "Photorealistic, 4K, HDR" etc. has to stay in order to give you the same look. Working with character LoRAs here can be tricky. I have a guide on my Patreon in my 3 Dollar tier, but here is a free version that can get you started: www.patreon.com/posts/99183367
Hope that helps.
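If you prefer to script that inpainting step instead of clicking through the UI, here is a minimal sketch of an img2img inpaint call against a locally running A1111/Forge instance started with --api. The endpoint and base payload keys are standard API fields, but the file names, prompt and values are placeholders, and the ControlNet inpaint unit is assumed to be configured in the UI as described above rather than in this payload.

```python
import base64
import requests

URL = "http://127.0.0.1:7860"  # assumes a local A1111/Forge instance launched with --api

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("outpainted_widescreen.png")],   # placeholder file names
    "mask": b64("character_area_mask.png"),               # white = area to repaint
    "prompt": "photorealistic, 4K, HDR, a knight standing on the left",
    "negative_prompt": "worst quality, low quality",
    "denoising_strength": 0.8,     # high, 0.7-0.9 as suggested above
    "inpainting_fill": 1,          # 1 = use original content as the starting point
    "inpaint_full_res": False,     # repaint in the context of the whole picture
    "steps": 30,
    "cfg_scale": 7,
}

r = requests.post(f"{URL}/sdapi/v1/img2img", json=payload, timeout=600)
r.raise_for_status()
print(len(r.json()["images"]), "image(s) returned")
```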
@@AIKnowledge2Go I have another question. When putting the denoising strength at 0.9 for the image detail increase, I sometimes see changes to the image that I didn't want compared to the original, but it's random. What is a good way to keep the image 100% the same but still add the detail you did in this video?
Amazing video! My final image even has too many details, it's crazy! Thanks a lot!
Glad it helped! Happy creating
So many good ideas! But now I am working with SDXL, and I am unable to find any inpaint ControlNet module 😭😭. Do you have any solution??
Not having a ControlNet inpaint module is one of the biggest drawbacks of SDXL. I know your pain. I've heard of a software called Fooocus that is good at inpainting. It's from lllyasviel, who has done a lot of work on the original ControlNet models. But I haven't tried it myself.
github.com/lllyasviel/Fooocus
It's an area I will definitely look into in further videos but I can't promise you when.
I only started using SD a few weeks ago, and all the videos on TH-cam are from a year ago. Did everyone stop using A1111? Or has nothing really changed since then?
I'm wondering the same, really odd. Also seems to be mainly Forge now?
Yes, un(forge)unately... bad pun... unfortunately no one knows what the timeline for new A1111 versions is. It doesn't support Flux, and I may be wrong, but it also doesn't support Stable Diffusion 3.5 atm. These are both the go-to models for local image generation. I still use both, mainly because some extensions do not work inside Forge.
If you want to try it and don't want to manage multiple downloads / model folders, here I show how you can install A1111, Forge, ComfyUI and many more with just simple clicks by using Stability Matrix.
th-cam.com/video/bwPk-NXggp0/w-d-xo.html
What's a good way to make the end result less blurry and more detailed?
Unfortunately, there is no one-shot answer, as it strongly depends on the Stable Diffusion version, checkpoint, LoRA usage, settings, etc. Sometimes it helps to increase denoising, but that can of course mess up the composition. On my Patreon you can find my FREE workflow guide. Maybe it can shed some light in the darkness. 100% free, no membership needed! www.patreon.com/posts/get-your-free-99183367
@@AIKnowledge2Go I found that upping the tile width in USDU to 1024 (keeping the scale at 2) and putting the denoising at 0.25 to maintain consistency will output some really nice pictures with way less blurring. Occasionally I'll up it to 2048 if the picture is already really good, for a really nice composition.
Your method of downloading, although simple, skips the steps on how you are supposed to enable these in A1111. Do they go in the extensions folder? I did that after searching the web, but there doesn't seem to be a way to enable ControlNet unless I install it via the A1111 webui.
Even then, I can't be sure I'm doing/downloading the same things you did.
Hi, it's been a while since I created this video. I am not sure what you mean by my method of download.
All links are provided in the video description. The LoRAs go into the Lora folder and the inpaint model into the ControlNet folder.
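In case it helps, this is roughly where those files end up in a default A1111 install (the install path is just an example; the ControlNet extension itself is installed from the Extensions tab of the webui):

```
stable-diffusion-webui/                example install folder
  extensions/
    sd-webui-controlnet/               ControlNet extension, installed via the Extensions tab
  models/
    Stable-diffusion/                  checkpoints (.safetensors)
    Lora/                              LoRA files from the description go here
    ControlNet/                        ControlNet models, e.g. the inpaint and tile models
```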
Absolutely great video man. Thank you very much.
Glad you enjoyed it! Happy creating.
Could you drop your negative prompt? I would like to save it as a template.
You can find my style collection here, 100% free, no membership needed:
www.patreon.com/posts/my-collection-of-87325880
I also have a free sneak peek of my workflow and a beginners guide if you are interested, also 100% free:
www.patreon.com/posts/get-your-free-99183367
www.patreon.com/posts/sneak-peek-alert-90799508
Have fun with it, happy creating!
A super workflow. I've run through it several times now, and the results are really top-notch. But at the last upscaling step (with the script) I lose the face I brought in with ReActor every time... how can I prevent that?
Hi, thanks for the feedback. You would then have to reduce the denoising strength, which of course gives you less filigree detail. It's also important that you have "After Detailer" and "Face restoration" disabled. You could also try working with an IP-Adapter. I'm currently working on a tutorial on that, but unfortunately it will probably only come as the video after next. I also don't know how well IP-Adapter works together with Tile Upscale.
@@AIKnowledge2Go Face restoration is off; I'll try with less denoising. For the IP-Adapter I still have to find a suitable model. Thanks in advance!
@@SHPjealousy Always happy to help.
How do you have the sampling method DPM++ 2M Karras, while I only have DPM++ 2M? Are they the same or not?
The video was recorded with an older version; sampler and scheduler are now separated. So if you want, you can select Karras from the new "Schedule type" dropdown, but if it's set to Automatic it should pick Karras by default.
@@AIKnowledge2Go thank you God bless you!
One question about checkpoint versions, or versions in particular: you are using version 11 of the RealCartoon checkpoint although there is a version 17 on the screen. Doesn't a higher version mean it is newer or "better"?
Hi, yeah, usually newer is better in terms of checkpoints. Some checkpoint creators push out new versions on a daily basis. When I came up with the idea for the video, 11 was the latest, and I don't change versions while working on a project. This is why it's still version 11.
@@AIKnowledge2Go Thanks for the quick reply :D
Thanks a lot, it's a very good workflow. I tried it with another checkpoint, and it's perfect with your values!
Great to hear! Happy creating.
Hi there, may I ask why you use A1111 and not, for example, Fooocus? Thanks
You may... at the time I recorded the video, I wasn't that convinced by Fooocus yet. In fact, one of my upcoming projects is an inpainting tutorial where I'll introduce Fooocus.
Hey !
Is it still up to date (in December of 2024), and does it work if I'm using reForge? Thanks! :)
Yes it is, the techniques in this are timeless. It works in Forge UI. I haven't used reForge yet.
@@AIKnowledge2Go Thanks
However, the step at 5:50 doesn't work for me: when I change the ratio from 768x768 to 1024x768, the image is stretched... I have these settings: ControlNet, enable, upload independent control image, inpaint, inpaint_only+lama, thibaud_xl_openpose (because the ControlNet models are not working for reForge), ControlNet is more important, resize and fill, with the control weight at 1 and the denoising strength at 0.9.
Edit: never mind, found the issue! It was because I forgot to switch back from "inpaint" to "img2img".
Need to know how you'd do this in SDXL! There's no inpaint or tile ControlNet model that I can find for SDXL :(
I know how you feel. There are these models from different people: civitai.com/models/136070/controlnetxl-cnxl. Unfortunately, ControlNet models for SDXL are inferior compared to SD 1.5. That's why I still use SD 1.5 in certain scenarios.
@@AIKnowledge2Go Oh wow! I didn't know any existed for tile and inpaint. Have you tried applying your inpainting ControlNet wizardry with these models you've shared? It would be amazing to get something similar to the inpaint method working in SDXL.
I've been using the ControlNet Tile / Ultimate SD Upscale (USDU) workflow for months, but I still get a lot of artifacts using ControlNet. My solution has been to just use USDU with noise set to 2, and always use DPM++ 2M SDE Exponential, even if the original was done with Euler A.
Great workaround! I've also shifted from Euler A to experimenting with DPM++ 2M Karras and DPM++ SDE in Dreamshaper XL Lightning setups. It's fascinating to see how different settings impact the final output.
@@AIKnowledge2Go Have you tried JuggernautXL Lightning? For me it's the best model, but I couldn't make it work properly with your tutorial.
Are you using restore faces? I can't get my faces to even look a fraction of that quality.
Edit: Saw that you were and turned it off later. HOWEVER, I'm now running into an issue where the hands are getting worse after the resize, and the inpaint doesn't support 2736x1536 (it only shows 2048 as the max). Also, the face distorted just a tiny bit, so I'd like to fix that as well if possible. Any recommendations?
@Bpmf-g3u Thanks, I ended up spending the day repeatedly inpainting until I got it decent. Tried a few other things but none worked at all for some reason.
When doing Tile Upscale, it's important to turn off restore faces, because it's going tile by tile; that's why you need to turn it off. Regarding your problem: usually you want to fix everything with inpainting at lower resolutions and then upscale. If anything in the image is distorted, decrease the denoising strength.
@@AIKnowledge2Go Funny enough, it looked fine before the upscale; perhaps I needed to change the denoising. I've noticed I added TOO much detail after the upscale and tiling. Would lowering or increasing the denoising fix that? Sorry for all the questions, I've never touched upscaling before, so this is all new to me. Appreciate the response!!
For some reason, the masked area gets darker than the original pic, and I don't know how to solve it.
Strange. Can you describe in more detail what you did and what model you are using? Maybe I can help.
The image stretches after generating at a higher res, and I have resize and fill selected. Is there something else I'm not doing right?
I'm sorry to hear that you're having trouble with my workflow. I assume you are talking about the outpainting step? If so, can you please confirm that you are using resize by instead of resize to? Also, can you please check your command window for any errors that pop up while rendering?
Great video. How can we make anime-type videos? Could you bring out another video regarding this, please?
Yes we can. Have you checked my th-cam.com/video/IcQvTb1jVAM/w-d-xo.html&lc=UgyPOEGpbzo_PNSjhmN4AaABAg video yet?
Just change the prompt to anime-related styles.
CTRL + ENTER... Thank you
You are welcome, I figured it out by accident 😊
If I want to make AI pictures of a person and make many images of the same person, how do I do that?
Hi, I have a consistent character tutorial in the works, but unfortunately it won't be my next video. Look online for an IP-Adapter tutorial.
@@AIKnowledge2Go Looking forward to that consistent character tutorial 🎉
@@alonius12 The script is finished. Unfortunately I got sidetracked by Flux.1, but it will be one of my next videos, big promise.
Would it be possible to upload all the images you created/used in each stage of this guide to an online host? I would be interested to see the difference between each stage of the process, and it may help other users by letting them see the difference too and put some visual elements to the settings and options you have used.
Nevertheless, excellent guide, detailed explanation, and for what it's worth the accent makes it 100 times better; you just earned my sub!
Thank you for the suggestion! I'll definitely consider uploading the images for each stage to provide a visual reference for viewers.
Thank you for the guide, it led to a fun discovery when i misremembered some of the settings. Turns out that SD ultimate upscale can be abused to make photomosaics. The results were both horrifying and beautiful. 10/10, would make a person made out of other people again.
You're welcome! It's fascinating how sometimes, what starts as an accidental setting can lead to both horrifying and wonderfully unique creations. I love that you're embracing the experimental side of things and discovering new possibilities.
Update for SDXL/Pony/Flux?
Believe it or not, I am currently working on exactly that: a best workflow for Flux / SDXL in Forge UI. It will still take a couple of days to finish.
@@AIKnowledge2Go SCORE! BTW your videos have helped me progress in this hobby immensely.
I've installed the latest version of stable diffusion from GitHub and I don't have Karras sampling at all.
Can someone please help me?
Hi, this is now a new dropdown called "Schedule type". You can leave it on automatic, but you can also set it by hand. I hope that helps, happy creating.
Yeah, I have the same problem as someone else I read in the comments. When I get to the inpaint upscale part, my image changes entirely; I lose the scenery, the character, everything, while yours seems almost unchanged except for the added detail. I'm not sure what's going on, so I can only run really low denoising. Not sure how yours doesn't change... nothing weird shows up in the terminal, so I think it's working correctly...
That does sound strange indeed. Could you please confirm if you're using the latest versions of A1111 and ControlNet? Also, it would be helpful to know which checkpoint you're currently using. I want to try it out for myself.
@@AIKnowledge2Go I'm not sure you got my answer; I wrote it some hours ago but it seems to have vanished from the comments, idk why. Anyway, I think I'm up to date, as it has not even been a week since I started using AI image creation tools. From the System Info extension I got this info:
app: stable-diffusion-webui-forge
updated: 2024-03-08
device: NVIDIA GeForce RTX 3080 (1) (sm_90) (8, 6)
cuda: 12.1
cudnn: 8801
driver: 551.52
python: 3.10.11
xformers: 0.0.25
diffusers: 0.25.0
transformers: 4.30.2
configured: base:realcartoonRealistic_v14.safetensors [edd7bf7340] refiner: vae:kl-f8-anime2.ckpt
loaded: base:C:\stable-diffusion-webui-forge\models\Stable-diffusion\realcartoonRealistic_v14.safetensors refiner: vae:C:\stable-diffusion-webui-forge\models\VAE\kl-f8-anime2.ckpt
So with this you should have my A1111 version, some of my specs, and the model I used when I tried your tutorial. I was following closely, so I also used the LoRAs you use, the fantasy one and the add-detail one.
I still managed to get it done from where I was stuck (the inpaint upscale step), but only after a lot of time and trial and error testing various values to get it to work, and in the end I got nothing I was happy with. Then I let it go and went with an img2img Ultimate Upscaler solution, which again took some time to find a value I was happy with.
As I said, I'm pretty new to this, so maybe I made some mistakes, idk, or a combination of factors made this not work as expected.
Are you on Forge UI? I am getting the same thing. I think you can drop the denoise to 50% and get similar results.
This works with the new SDXL controlnet scripts too, except the preprocessor MUST be set to none
Helpful insight on the SDXL controlnet scripts and the importance of setting the preprocessor to none. I'm on the hunt for an effective upscale workflow, so this is great info. Thanks!
Very nice and informative, but your audio sounds hollow; maybe you can upscale it!
Hello, thanks for your feedback. Somehow I haven't found the right settings with the new microphone yet.
So why not hires fix, exactly? I feel this could be considerably faster: hires fix, inpaint if you have to, then ControlNet Tile with Tiled VAE up to 16k, done.
Experiment with ControlNet Upscale versus the high-res fix and you'll notice a difference. ControlNet tends to produce finer details, mainly because you have the option to increase the denoising strength. While this might not hold for every checkpoint, it's true for roughly 90% of them. Additionally, when fine-tuning your prompts, consider generating a large batch of images, say around 50, and then selecting the top 3. This process is considerably slower using the high-res fix. Inpainting works more efficiently with lower-resolution images too.
Before you think about upscaling, ensure your composition feels complete. This foundational step is crucial for achieving the best overall quality in your final image.
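The "generate a large batch, then hand-pick the best few before upscaling" step is easy to automate. Below is a minimal sketch against a local A1111 instance started with --api; the endpoint and base keys are standard API fields, while the prompt, output folder and counts are only example values.

```python
import base64
import pathlib
import requests

URL = "http://127.0.0.1:7860"          # local A1111/Forge launched with --api
OUT = pathlib.Path("candidates")       # example output folder
OUT.mkdir(exist_ok=True)

payload = {
    "prompt": "professional scenic photography, closeup image of a female druid",
    "negative_prompt": "worst quality, low quality",
    "width": 768,
    "height": 768,
    "steps": 35,
    "cfg_scale": 7,
    "batch_size": 5,    # images generated per batch
    "n_iter": 10,       # number of batches -> 50 candidates in total
}

r = requests.post(f"{URL}/sdapi/v1/txt2img", json=payload, timeout=3600)
r.raise_for_status()
for i, img in enumerate(r.json()["images"]):
    (OUT / f"candidate_{i:03d}.png").write_bytes(base64.b64decode(img))

# Review the folder by eye, keep the best two or three, and only upscale those.
```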
@@AIKnowledge2Go Mate, I have over 9k images uploaded to Civitai. Like, I'm generally of the understanding that when upscaling it's better to start with as high a quality base image as possible, hence why you want to hires fix. And when you say "Inpainting works more efficiently with lower-resolution images", what do you mean? Do you really notice any difference, compared to hires with ADetailer first? I dunno man, I think hires fix, possibly using ADetailer if needed, then a 2nd pass in img2img and upscale with Tiled VAE and Tiled Diffusion, which can also hook into CN for using the tile preprocessor.
@@quercus3290 No offense, but I'm genuinely interested: at what point in your process do you apply outpainting to achieve a 16:9 aspect ratio? Do you begin with an image already in 16:9 resolution? I'd love to understand your process for applying the high-resolution fix, as I'm eager to test it myself.
It's not that I'm questioning the effectiveness of your workflow, but based on recent surveys, about 90% of my audience is using Nvidia 2080 graphics cards or older. Because of this, time efficiency is crucial. Inpainting at high resolutions is significantly slower and requires more patience. If the outcomes are unsatisfactory and numerous adjustments need to be made, it can become quite frustrating.
For the majority of my viewers, the workflow I've showcased appears to be more practical. Given the volume of images you’ve uploaded, am I correct in assuming you’re working with at least a GeForce RTX 4080 or something more advanced?
This is a really interesting back-and-forth. Not sure which method works best for me. I do think that having to start with a square image actually limits the range of compositions that you can get SD to produce, though. SD will try to fit all the relevant elements into that initial square, and the outpainting potentially just adds "filler" content. Starting with a non-square image might not yield the absolute best quality, but it might offer better composition?
Great video. I love your dialect so much! 😅😂
I've already learned a lot from you, thank you very much! ^^
Have you ever thought about uploading the videos again in German as well?
Hi, thanks for your feedback! Glad you like my dialect. Most Germans unfortunately see it differently... 😊 I do have a German channel (KIwissen2go), but at the moment I don't have the time to translate the videos. Thanks for your support!
@@AIKnowledge2Go I love it! :D I'll check out your other channel right away! Say, could you make a video explaining how to create two people independently of each other?
@@MurphysPuppet Thanks for enjoying my videos and for your interest in my other channel! Creating two independent people with Stable Diffusion can indeed be challenging. I'm afraid it's a bit short for a whole video, but I'll be experimenting with Shorts soon for quick tips like this, so maybe I'll make it a topic. The easiest way is with inpainting. If you have a man and a woman in the image, you can try prompts like: image of 1 woman 1 man, he wears a black coat, she wears a red dress. With two people of the same sex, however, that tends to work poorly.
Good video... the German accent is a hit.
Thanks, anyone can do it without one 😂
The process is so complex that I'm no longer surprised people move from A1111 to ComfyUI. Great vid anyway! Cheers :)
I completely understand where you're coming from. The process can indeed get complex, which is why I alternate between ComfyUI and Automatic1111 based on the specific needs of each project. For tasks like AnimateDiff, ComfyUI is my go-to. However, SDXL Lightning in ComfyUI presents some challenges for me when it comes to upscaling. I've tried Ultimate SD Upscale, KSampler upscaling, and SUPIR, but the results are still mediocre. I'm always on the lookout for tips and tricks to streamline these workflows, so if you have any suggestions or need advice on a particular aspect, feel free to share! Cheers and thanks for the support :)
@@AIKnowledge2Go I wonder what your opinion is on the recent Stable Swarm UI? It's still in beta, but it basically combines A1111 and ComfyUI within one convenient web interface, with easy support for LAN use and multiple GPUs too :)
Supportive comment. Thanks for the guide! ❤🔥
I am glad you liked it. Happy creating
I thought yaml files did not need to be downloaded
You are absolutely right, you don't need to do this in newer versions of A1111 anymore. I always add this because some of my audience may have older versions.
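For anyone still on an older install, the usual convention, as far as I know (worth double-checking against your version), is simply to place the .yaml next to the matching ControlNet model with the same base name, for example:

# Assumed layout for older A1111 installs that still need the config file;
# newer versions resolve it automatically. Path and file names are examples.
models/ControlNet/
    control_v11p_sd15_inpaint.pth
    control_v11p_sd15_inpaint.yaml   # same base name as the model file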
@@AIKnowledge2Go I was wondering about that! Tried using versions 1.6, 1.7, and 1.5.2. I noticed some differences. What's the best one? I'm not liking 1.8.0.
I tried to use your method to fix deformed fingers but failed miserably. But hey, the model looks awesome!!
Sorry to hear that. Is your ControlNet inpainting working properly? Sometimes the command window is full of errors, and you wonder why it isn't working. Also, try different denoising strengths and generate at least two images; sometimes the second image really makes a difference.
Top secret? I wonder if I will get in trouble for watching this? :O)
Your mission, should you choose to accept it, involves keeping these "top secret" techniques under wraps. As long as you're discreet, you'll navigate this covert operation without any trouble. Welcome to the inner circle! 😉
German? Thanks for the video, it helps a lot.
Yep, I can't hide my accent 😂. I'm glad it was helpful!
Thanks for sharing, bro. Keep it clear and honest, you're doing an amazing job. 🖤
I appreciate that, thanks
I wish there was a way to like a video more than once.
Your support means the world to me, and I wish I could like your comment more than once too!
I don't have DPM++ 2M Karras and can't find it anywhere, please help, I'm angryyyyy
Please don't be angry. Since one of the last updates, the sampling method and schedule type have been separated; you can find Karras in the dropdown next to the sampler. Actually, the Automatic setting should be fine.
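If you are scripting against the API rather than using the UI, the split shows up the same way; a tiny illustrative fragment, assuming a recent A1111 build (field names may differ on older versions):

# Illustrative payload fragment only - the sampler and the schedule type are
# now separate fields, mirroring the two dropdowns in the UI.
payload_fragment = {
    "sampler_name": "DPM++ 2M",   # sampling method
    "scheduler": "Karras",        # schedule type (previously part of the sampler name)
}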
Do you struggle with prompting? 🌟 Download a sneak peek of my prompt guide 🌟 No membership needed: ⬇ Head over to my Patreon to grab your free copy now! ⬇ www.patreon.com/posts/sneak-peek-alert-90799508?Link&
Upscale is at 7:06 if anyone wanted to know ♡
Can you change your audio setup? It is a pain to listen to your voice. Get a better mic, use filters, record audio in another room, turn down the volume, do something.
He started not long ago... equipment costs money. You will have to live with it for now, until he can afford it.
It’s not even that bad at all
Thank you all for your feedback and support. I'm currently in the process of exploring better audio editing techniques. From here it gets only better :)
@@AIKnowledge2Go Your videos are somehow unique, you go into the details. All the other videos online are beginner guides that all say the same thing, and you hardly learn anything new. Keep making these in-depth guides, keep up the good work.
I'm on Forge and when I do the Inpaint Upscale step, I get this error and the resulting image is completely different:
*** Error running process_before_every_sampling: D:\AI\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py
Traceback (most recent call last):
File "D:\AI\webui_forge_cu121_torch21\webui\modules\scripts.py", line 835, in process_before_every_sampling
script.process_before_every_sampling(p, *script_args, **kwargs)
File "D:\AI\webui_forge_cu121_torch21\webui\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py", line 555, in process_before_every_sampling
self.process_unit_before_every_sampling(p, unit, self.current_params[i], *args, **kwargs)
File "D:\AI\webui_forge_cu121_torch21\webui\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py", line 497, in process_unit_before_every_sampling
cond, mask = params.preprocessor.process_before_every_sampling(p, cond, mask, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\webui_forge_cu121_torch21\webui\extensions-builtin\forge_preprocessor_inpaint\scripts\preprocessor_inpaint.py", line 27, in process_before_every_sampling
mask = mask.round()
^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'round'
Do you know what is going on?
I'm sorry to hear you're encountering this issue. While I'm not deeply familiar with Forge specifics, this error suggests there might be a problem with the input mask. Since we don't use one, it could be a bug in ControlNet for Forge. Can you confirm you used the right preprocessor? If you haven't already, consider seeking advice on forums or communities dedicated to Forge or similar AI tools; they might have encountered and resolved similar issues. Good luck, and I hope you find a solution soon!
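Purely as an illustration of where the traceback points: a defensive guard around the line it names in preprocessor_inpaint.py would at least avoid the crash when no mask is supplied. Whether this is the right fix for Forge is an assumption; the real cause may be the ControlNet unit configuration.

# Illustrative sketch only, not a confirmed upstream fix.
if mask is not None:
    mask = mask.round()   # line 27 in the traceback crashes here when mask is None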
Where are all the Germans saying HOHO HE'S GERMAN?
Maybe they're still on Mallorca? 😂
@@AIKnowledge2Go Thanks for the video, brother xD