Map Bashing - NEW Technique for PERFECT Composition - ControlNET A1111

  • Published Jun 11, 2023
  • Map Bashing is a NEW technique for combining ControlNet maps for full control. It lets you create amazing art and gives you full artistic control over your AI works: you can define exactly where the elements in your image go. At the same time you keep full prompt control, because the ControlNet maps carry no color, daylight, weather, or other such information, so you can create many variations from the same composition.
    #### Links from the Video ####
    Make Ads in A1111: • Make AI Ads in Flair.A...
    Woman Sitting unsplash.com/photos/b9Z6TOnHtXE
    Goose unsplash.com/photos/eObAZAgVAcc
    Pillar www.pexels.com/photo/a-brown-...
    explorer: unsplash.com/photos/8tY7wHckcM8
    castle: unsplash.com/photos/8tY7wHckcM8
    mountains unsplash.com/photos/lSXpV8bDeMA
    Ruins unsplash.com/photos/d57A7x85f3w
    #### Join and Support me ####
    Buy me a Coffee: www.buymeacoffee.com/oliviotu...
    Join my Facebook Group: / theairevolution
    Join my Discord Group: / discord
  • Howto & Style

Comments • 155

  • @OlivioSarikas
    @OlivioSarikas  1 year ago +8

    #### Links from the Video ####
    Make Ads in A1111: th-cam.com/video/LBTAT5WhFko/w-d-xo.html
    Woman Sitting unsplash.com/photos/b9Z6TOnHtXE
    Goose unsplash.com/photos/eObAZAgVAcc
    Pillar www.pexels.com/photo/a-brown-concrete-ruined-structure-near-a-city-under-blue-sky-5484812/
    explorer: unsplash.com/photos/8tY7wHckcM8
    castle: unsplash.com/photos/8tY7wHckcM8
    mountains unsplash.com/photos/lSXpV8bDeMA
    Ruins unsplash.com/photos/d57A7x85f3w

    • @aeit999
      @aeit999 1 year ago

      Latent couple when?

    • @xiawilly8902
      @xiawilly8902 1 year ago

      looks like the explorer image and castle image are the same.

  • @ainosekai
    @ainosekai 1 year ago +74

    Sir, there's no need to check 'Restore faces'. If you use a 2.5D/animated base model, it makes faces look weird.
    You can use an extension called 'After Detailer' (ADetailer) instead. It can fix your characters' faces flawlessly (based on your model), and it also works perfectly with character (face) LoRAs. There are also models for it that can fix hands/fingers and bodies.
    Give it a try~

    • @hacknslashpro9056
      @hacknslashpro9056 1 year ago +1

      How do you put your own face into a picture generated in SD? It needs the same style and matching lighting. Should we use inpaint, or what?

    • @ryry9780
      @ryry9780 1 year ago +3

      ​As a birthday gift to my sister three months ago, I made a picture featuring her and one of her favorite characters.
      The way it worked was I trained models of both the character and my sister. My sister's models had to be done in two steps: first with IRL pictures, then second with generated animated pictures.
      Once that was done, it was a matter of compositing them all together in one pic via OpenPose + Canny + Depth and hours of inpainting, with a little Photopea.
      Took me 20 work-hours.
      Idk how much of this process has changed since Auto1111 is now at v1.3.2 and ControlNet at 1.1.

    • @samc5933
      @samc5933 1 year ago +1

      What are these “other models” that fix hands? If you can point me in the right direction, I’d be grateful!

    • @Feelix420
      @Feelix420 1 year ago

      @@samc5933 Until AI learns to draw hands and feet, I wouldn't worry so much about AI the way Elon is now.

    • @cleverestx
      @cleverestx 1 year ago +1

      ADetailer is amazing; it comes standard on Vladmandic. It can be set to detect and fix hands as well if you choose the hand model instead of the face model, but only mildly. It's not as effective on hands as it is on faces, but it can still save a picture from time to time!

  • @Maria_Nette
    @Maria_Nette 1 year ago +6

    ControlNet gets even better with every new update.

    • @aeit999
      @aeit999 1 year ago +1

      It is. But this method is as old as ControlNet itself.

  • @jason-sk9oi
    @jason-sk9oi 1 year ago +13

    Tremendous human artistic control while maintaining the AI creativity as well. Nice!

    • @paulodonovanmusic
      @paulodonovanmusic 1 year ago

      Exactly. I think a lot of traditional artists, particularly those with at least basic desktop publishing skills (or basic doodling skills), would love how empowering this is. A1111 is such a wonderful art tool; it's a pity that it can be so technically challenging to set up. I hope this gets solved soon and the solution becomes more accessible to the unwashed masses.

    • @chickenmadness1732
      @chickenmadness1732 1 year ago

      @@paulodonovanmusic Yeah, it's very close to how a real concept artist for movies and games works.
      The main difference is that they use a collage of photos to get a rough composition and then paint over it.

  • @mikerhinos
    @mikerhinos 1 year ago +1

    This is amazing, as so often... one of the most underrated YouTube accounts for A1111 tutorials!

  • @neeqstock8617
    @neeqstock8617 1 year ago +24

    Tried it, and this is probably the most simple, creative, and effort-effective technique I've come across. It's so easy to edit edge maps, even with simple image editing software. Thank you Olivio! :D

  • @akanekomi
    @akanekomi 1 year ago +3

    I have been using similar techniques for a while now; the AI dance animations I make are a lot more complex. Glad you made a tutorial on this. I'll redirect anyone who asks for SD tutorials to your channel. Thanks Olivio❤❤

  • @ex0stasis72
    @ex0stasis72 1 year ago +3

    I'm so excited to use this technique. I was getting frustrated with the limitations of openpose not being detailed enough. But this soft edge thing looks really powerful as long as I'm willing to do a little manual photo editing beforehand.

  • @eddiedixon1356
    @eddiedixon1356 1 year ago +1

    This is exactly what I was looking for. I still have a few things to piece together but this was huge, thank you so Much for your time.

  • @AZTECMAN
    @AZTECMAN 1 year ago +2

    One very similar method I've been exploring is creating depth maps via digital painting.
    Additionally, I've experimented with using an inference-based map and then modifying it by hand to get more unusual results.
    Mixing 3D-based maps (rendered), inference-based maps (preprocessed), and digital painting methods, while utilizing img2img and multi-ControlNet, highlights the power of this tech.
    "Map Bashing" is a great term.

  • @jacque1331
    @jacque1331 1 year ago

    Olivio, you're a Rockstar! Been following you for a while. Extremely grateful to have found your channel.

  • @boyanfg
    @boyanfg 1 year ago

    Hi Olivio! I am amazed about the master level at which you use the tools. Thank you for sharing this with us!

  • @frostreaper1607
    @frostreaper1607 1 year ago

    Oh wow, this actually solves the composition and color issues, great find Olivio thanks !

  • @soothingtunes6780
    @soothingtunes6780 11 months ago

    You are a lot more amazing than Stable Diffusion XL bro, what good is a tool if we don't have people like you to show us how to use it properly!!!

  • @travislrogers
    @travislrogers 1 year ago

    Amazing process! Thanks for sharing this!

  • @BruceMorgan1979
    @BruceMorgan1979 1 year ago +1

    Fantastic, and well detailed video Olivio. Look forward to trying this.

  • @aicarpool
    @aicarpool 1 year ago +2

    Who’s da man? You da man!

  • @CCoburn3
    @CCoburn3 1 year ago +1

    Great video. I'm particularly happy that you used Affinity Photo to create your maps.

  • @GREATMAGICIANLYNEY
    @GREATMAGICIANLYNEY 1 year ago

    I've been contemplating how best to bash up source images to create a final composition for SD rendering and this looks like a grand solution! Thanks for sharing.

  • @trickydicky8488
    @trickydicky8488 1 year ago +1

    Watched your live stream over this last night. Highly enjoyed it.

  • @luke2642
    @luke2642 1 year ago +15

    You could also use a background-removal step to preprocess each image, or, as others suggested, non-destructive masking when cutting them out.

    • @TorQueMoD
      @TorQueMoD 1 year ago +3

      You don't even need to do any sort of masking. When both images have a black background and white strokes, just set the top layers to Linear Dodge blend and they will seamlessly blend together.
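      If it helps, the Linear Dodge (Add) trick described above is easy to reproduce outside a photo editor too. A minimal NumPy sketch, assuming you've already loaded each white-on-black map as a grayscale array:

```python
import numpy as np

def bash_maps(top: np.ndarray, bottom: np.ndarray) -> np.ndarray:
    """Combine two white-on-black ControlNet maps with Linear Dodge (Add).

    Black (0) is the additive identity, so strokes from both layers
    survive without any masking; overlapping strokes clip at white (255).
    """
    total = top.astype(np.uint16) + bottom.astype(np.uint16)
    return np.clip(total, 0, 255).astype(np.uint8)
```

      Load each exported soft-edge map as a grayscale array (e.g. with Pillow), bash them together, and feed the result to ControlNet as the combined map.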

  • @ronnykhalil
    @ronnykhalil 1 year ago

    this is brilliant! thanks for sharing. opens up so many possibilities, and also helps me grasp the infinitely vast world of controlnet a little better

  • @dm4life579
    @dm4life579 1 year ago

    This will take my non-existent photo bashing skills to the next level. Thanks!

  • @monteeaglevision5505
    @monteeaglevision5505 1 year ago

    You are a legend!!! Thank you sooooo much for this. Game changer. I will check back and let you know how it goes!

  • @ex0stasis72
    @ex0stasis72 1 year ago +1

    I recommend playing around with adding this to your positive prompt: "depth of field, bokeh, (wide angle lens:1.2)"
    Without the double quotes, of course.
    Wide angle lens is a trick that allows the subject's face to take up more of the area of the image while still fitting in enough context of the area around the subject. And the more pixels you allow it to generate the face with, the more details you'll generally get. Although, if you already have ControlNet dictating the composition of the image, adding wide angle lens to your prompt will likely have no effect and therefore reduce the effectiveness of everything else in your prompt.
    The depth of field and bokeh are just ways to make it feel like a photo shot professionally by a photographer rather than by an average person with automatic camera settings.

  • @joywritr
    @joywritr 1 year ago +9

    This was very useful, thank you. I was considering drawing outlines over photos and 3D renders to do something similar, but using the masks generated by the AI should work as well and save a lot of time.

  • @mysterious_monolith_
    @mysterious_monolith_ 1 year ago

    That was incredible! I love what you do. I don't have ControlNET but if I could get it I would study your methods even more.

  • @destructiveeyeofdemi
    @destructiveeyeofdemi 1 year ago

    Thorough brother.
    Peace and love from Cape Town.

  • @bjax2085
    @bjax2085 1 year ago

    Brilliant!! Thanks!

  • @ctrlartdel
    @ctrlartdel 11 months ago

    This is one of your best videos, and you have a lot of really good videos!

  • @jonmichaelgalindo
    @jonmichaelgalindo 1 year ago +5

    I've been using this for ages! ❤
    NOTE!: RevAnimated is *terrible* at obeying controlnet! (It is my favorite model for composition, but... I wouldn't use it like this.)
    I inpaint after the initial render. Same map bash controlnet, +inpaint controlnet (no image), inpaint her face w/ "face" prompt, pillar w/ "pillar" prompt, etc.
    No final full-image upscale; SD can't handle more than 3 large-scale concepts.
    You can get hires details in a 4k canvas by cropping a section, inpainting more detail, then blending the section back in w/ photoediting software. (This takes some extra lighting-control steps; there are tutorials on how to control lighting in SD.)
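    The "blending the section back in" step can be approximated with a feathered alpha composite. A rough NumPy sketch; the linear ramp geometry is my own illustrative choice, not necessarily the commenter's exact workflow:

```python
import numpy as np

def blend_patch(canvas: np.ndarray, patch: np.ndarray, y: int, x: int,
                feather: int = 16) -> np.ndarray:
    """Paste a re-detailed crop back into a large canvas with soft edges.

    A linear alpha ramp `feather` pixels wide at the patch border hides
    the seam, so the inpainted section blends into the surrounding image.
    """
    h, w = patch.shape[:2]
    # Distance of each pixel to the nearest patch border, clipped to `feather`
    dy = np.minimum(np.arange(h), np.arange(h)[::-1])[:, None]
    dx = np.minimum(np.arange(w), np.arange(w)[::-1])[None, :]
    alpha = np.clip(np.minimum(dy, dx) / max(feather, 1), 0.0, 1.0)
    if patch.ndim == 3:  # broadcast alpha over color channels
        alpha = alpha[..., None]
    out = canvas.astype(float).copy()
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = region * (1.0 - alpha) + patch.astype(float) * alpha
    return out.astype(canvas.dtype)
```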

    • @foxmp1585
      @foxmp1585 10 months ago

      Could you clarify the "extra lighting-control steps" you mentioned? Is that the map we painted in black & white and then fed into the img2img tab?
      Thank you in advance!

    • @jonmichaelgalindo
      @jonmichaelgalindo 10 months ago

      @@foxmp1585 I barely remember my workflow from back then... SDXL is fantastic at figuring out what sketches mean in img2img. Right now, I block out a color paint sketch with a large brush, then run it through img2img with the prompt, then paint over the output, and run it through again and repeat, eventually upscaling and inpainting region by region with the same process. I have just about perfect control over composition, facial expressions, lighting, and style. :-)

  • @morizanova
    @morizanova 1 year ago

    Thanks... smart trick to make the machine function as our helper, not just our overlord.

  • @Aisaaax
    @Aisaaax 9 months ago

    This is a great video! Thank you! 😮

  • @yadav-r
    @yadav-r 1 year ago

    wow, learned a new thing today. Thank you for sharing.

  • @Braunfeltd
    @Braunfeltd 1 year ago

    Love your stuff, learning lots. this is awesome

  • @minhhaipham9527
    @minhhaipham9527 1 year ago +1

    Awesome, please make more videos like this. Thanks!

  • @ericvictor8113
    @ericvictor8113 1 year ago +1

    Incredible video, as always. Grats!

  • @coloryvr
    @coloryvr 1 year ago

    Super helpful as always! Big FAT FANX!

  • @Carolingio
    @Carolingio 1 year ago

    👏👏👏👏👏
    Nice, Thanks Olivio

  • @amj2048
    @amj2048 1 year ago

    this was really cool, thanks for sharing!

  • @heikohesse4666
    @heikohesse4666 1 year ago

    very cool video - thanks for it

  • @spoonikle
    @spoonikle 1 year ago

    Holy smokes. This changes the flow

  • @MadazzaMusik
    @MadazzaMusik 1 year ago

    Brilliant stuff

  • @ysy69
    @ysy69 1 year ago

    Beautiful

  • @PhilippSeven
    @PhilippSeven 1 year ago +2

    Thank you for this technique! It's really useful. As advice from my side, I suggest using alternative methods for fixing faces (ADetailer, inpaint, etc.) instead of "Restore faces". It uses one model for every face, and as a result the faces turn out too generic.

  • @accy1337
    @accy1337 1 year ago

    You are amazing!

  • @TheGalacticIndian
    @TheGalacticIndian 1 year ago

    I love it!♥♥

  • @adastra231
    @adastra231 1 year ago

    wonderful

  • @starmanmia
    @starmanmia 6 months ago

    Hello future me: remember to use IP-Adapter for faces and bodies, and keep ADetailer as a backup. Works well x

  • @WolfCatalyst
    @WolfCatalyst 1 year ago

    This was a great tutorial on affinity

  • @williamuria4048
    @williamuria4048 1 year ago

    WOW I like It!

  • @blood505
    @blood505 11 months ago

    Thanks for the video 👍

  • @Marcus_Ramour
    @Marcus_Ramour 11 months ago +1

    Brilliant video and thanks for sharing your workflow. I have been doing something similar but using blender & daz studio to build the composition first (although this does take a lot longer I think!).

  • @Grimmona
    @Grimmona 1 year ago +3

    I installed Automatic1111 last week and now I'm watching one video after another from you, so I can get ready to become an AI artist😁

  • @mayalarskov
    @mayalarskov 1 year ago +1

    hi Olivio, the image of the castle has the same link as the explorer image. Great video!

  • @glssjg
    @glssjg 1 year ago +40

    You need to familiarize yourself with masks in your image editor so that you're using a nondestructive process. Rasterizing and then resizing things loses quality, and if you erase things you won't have any way to undo it other than the undo button.

    • @theSato
      @theSato 1 year ago +20

      In a way, I agree with you - but honestly, the whole point of a workflow like this is (and AI/SD in general I think) that its as quick/efficient as possible. Going in and using more "proper" methods like masking/mask management, more layers, etc is nice, but it takes more time and more clicks to do, and for the purposes of making a quick map for ControlNet like this, likely not even worth bothering (in my opinion).

    • @glssjg
      @glssjg 1 year ago +18

      @@theSato I mean, once you learn to use masks it is so much quicker. For example, he had to resize the girl larger because he wanted to make sure the quality was best. If he used a mask, he could have just erased with a black paint brush (hit X to switch to a white brush to correct a mistake), or used the free-selection method and, instead of pressing Delete, filled with the foreground color by hitting Option+Delete. It's a super small thing, as you said, but it will make your workflow faster, your mistakes less damaging (resizing a rasterized image over and over decreases its quality), and lastly it will just make your images better.
      Sorry for writing a book; once you learn masks you will never not use them again.

    • @jonmichaelgalindo
      @jonmichaelgalindo 1 year ago +2

      I've found myself saving intermediate steps less and less. Something about AI just changes the way you feel about data. (Also, Infinite Painter doesn't have masks, and I can make great art just fine.)

    • @blakecasimir
      @blakecasimir 1 year ago +2

      @@theSato I agree with this. The bashing part of the process isn't so much about precision as about giving SD a rough visual guide to what you want.

    • @theSato
      @theSato 1 year ago +8

      @@ayaneagano6059 I know how to use masks, don't get me wrong. But it's an unnecessary extra step when you're just trying to spend 30 seconds bashing some maps or elements together for SD/ControlNet. The precision is redundant and I have no need to sit there and get it all just right.
      For purposes other than the one shown in the video, yes, use masks and it'll save time long term. But for the use in the video, it just costs more time when it's meant to be done quickly, one and done, and the quality loss from resizing is irrelevant.

  • @ddiva1973
    @ddiva1973 1 year ago

    @14:43 mind blown 🤯😵🎉

  • @kyoko703
    @kyoko703 1 year ago +1

    Holy bananas!!!!!!!!!!!!!!!!!

  • @SergeGolikov
    @SergeGolikov 1 year ago +4

    Brilliant results! If also a very convoluted workflow, beyond the reach of all but the most dedicated; but as the saying goes, no pain, no gain 🍷
    Would it not be simpler to create the control maps right in Affinity Photo by using the Filter > Detect Edges command on your source images? Just a thought.
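    For context, a photo editor's Detect Edges filter is typically a plain gradient operator, whereas ControlNet's soft-edge preprocessors (HED/PiDiNet) are learned models, so the two maps won't match exactly. A Sobel-magnitude sketch in NumPy shows what the filter route produces:

```python
import numpy as np

def detect_edges(gray: np.ndarray) -> np.ndarray:
    """Sobel gradient magnitude, a stand-in for a photo editor's
    'Detect Edges' filter. Unlike learned soft-edge preprocessors,
    this yields thin, hard strokes, so ControlNet results may differ."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    pad = np.pad(gray.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            window = pad[i:i + h, j:j + w]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag = mag / mag.max() * 255  # normalize strokes to full white
    return mag.astype(np.uint8)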

  • @KryptLynx
    @KryptLynx 1 year ago

    Those fingers, though :D

  • @novabk2729
    @novabk2729 1 year ago

    Super useful!!!!! thx

  • @EmilioNorrmann
    @EmilioNorrmann 1 year ago

    nice

  • @Pianist7137Gaming
    @Pianist7137Gaming 1 year ago

    For iOS users on iOS 16 and above, there's an easy way to crop out the image: transfer the image to your phone (Google Photos or something), save the image, then press and hold on the area you want captured. Tap Share and save the image, then transfer it back to your PC.

  • @d1m18
    @d1m18 1 year ago

    This is very valuable content, but may I suggest you alter the title a bit? It is not very enticing to users who are not fully in the know about AI and prompts.
    Keep up the great work!

  • @AlfredLua
    @AlfredLua 1 year ago

    Hi Olivio, thank you for the super cool video! Curious: if you were using a depth map instead of soft edge for the woman, how would you edit it in Affinity to remove the background? It seems trickier for a depth map, since the background might be a shade of gray instead of absolute black. Thanks.
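    One possible answer, offered as an assumption to verify: depth preprocessors render near objects bright and the far background dark, so rather than erasing, you can threshold everything below a cutoff to pure black. The cutoff value here is hypothetical and needs tuning per image:

```python
import numpy as np

def clear_depth_background(depth: np.ndarray, cutoff: int = 60) -> np.ndarray:
    """Force the far (dark) background of a depth map to pure black.

    Near objects are bright in ControlNet depth maps, so any pixel
    darker than `cutoff` is treated as background and zeroed out.
    """
    out = depth.copy()
    out[out < cutoff] = 0
    return out
```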

  • @rodrigoundaa
    @rodrigoundaa 1 year ago

    Amazing video, as usual! I'm still not getting where to do it. Is it local on your PC? Do you need a very powerful GPU? Or is it online?

  • @nspc69
    @nspc69 1 year ago +4

    It can be easier to fuse the layers with an "additive" blend mode

  • @DJHUNTERELDEBASTADOR
    @DJHUNTERELDEBASTADOR 1 year ago

    That was my method for creating art 😊

  • @Kal-el23
    @Kal-el23 1 year ago

    It would be interesting to see what your outcome is without the maps, and just using the prompts as a comparison.

  • @shipudlink
    @shipudlink 1 year ago

    like always

  • @TorQueMoD
    @TorQueMoD 1 year ago

    This is great! What's the AI program you're using called? It's obviously not Midjourney.

  • @nsrakin
    @nsrakin 1 year ago

    You're a legend... Are you available on LinkedIn?

  • @yoavco99
    @yoavco99 1 year ago

    To fix faces automatically you can use the adetailer extension.

  • @hugoruix_yt995
    @hugoruix_yt995 1 year ago

    Oh I see, I misunderstood. The name makes more sense now

  • @hngjoe
    @hngjoe 1 year ago

    Hi. Thanks for sharing your smart notes on every new thing; I really appreciate it. I have one question: after checking for updates in SD's extensions tab, the system responds that I have the latest ControlNet (caf54076 (Tue Jun 13 07:39:32 2023)). However, I can't find a SoftEdge control model in the dropdown list, though I do have the SoftEdge ControlNet type and preprocessor. What might be wrong?

  • @merion297
    @merion297 1 year ago +1

    Cool! Now what if we make an animation, e.g. in Blender, but only for the line art, then input each frame to ControlNet and generate the final animation frame by frame? I wonder when it will become consistent enough that we can consider it a real animation.

  • @bjax2085
    @bjax2085 1 year ago

    Still searching for this AI tool for comic book and children's book creators: 1. AI draws an actor using prompts. 2. Option to convert the selected character to a simple, clean 3D frame (no background); the character can be rotated. 3. The limbs, head, eyelids, etc. can be repositioned using many pivot points. 4. Then we can ask for the character to be completely regenerated using the face and clothing of the original. Once we are satisfied, we can save and paste the character into a background graphic.

  • @gwcstudio
    @gwcstudio 1 year ago +1

    How do you control a scene with 2 people in it? Say, fighting. Do a map bash and then a colored version of the map with separate prompts?

  • @NERvshrd
    @NERvshrd 1 year ago

    Have you watched the log while running hires fix with upscale by 1? I tried doing so as you noted, but it just ignores the process; on or off, no difference in output. Might just be because I'm using Vlad's fork. Worth double-checking, though.

  • @ValicsLehel
    @ValicsLehel 1 year ago

    OK to use A1111 to get the outline, but a Photoshop filter can do this too, and at any resolution. So I think these first steps can be done with filters to get the outline picture and bash it. You can even do the mix roughly first and then apply the filter; it won't speed up the process, but you can see what you are doing more easily.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      I don't think Photoshop has filters for depth maps, normal maps, or OpenPose. For the soft edge filter there is an option, but there are 4 options in ControlNet, and does the PS version look exactly the same as the ControlNet version?

  • @cryptobullish
    @cryptobullish 1 year ago

    Crazy cool! How can I retain the face if I wanted to use my own face? What’s the best prompt to use to ensure the closest resemblance? Thanks!

    • @wykydytron
      @wykydytron 1 year ago +2

      Make a LoRA of your face, then use ADetailer.

  • @honestgoat
    @honestgoat 1 year ago

    Great video Olivio. What extension or setting are you using that allows you @ 11:13 to select the VAE and clip skip right there on the txt2img page?

    • @forifdeflais2051
      @forifdeflais2051 1 year ago

      I would like to know as well

    • @addermoth
      @addermoth 1 year ago +1

      In Auto1111, go to Settings > User Interface and look down the page for "[info] Quicksettings list". From there, go to the arrow on the right, then highlight and check (a tick mark will appear) both 'sd_vae' and 'CLIP_stop_at_last_layers'. Restart the UI and they will be where Olivio has them. Hope that helps.

    • @forifdeflais2051
      @forifdeflais2051 1 year ago

      @@addermoth Thank you!

  • @anim8or
    @anim8or 1 year ago

    What version of SD are you using? Have you upgraded to 2.0+? (If so do you have a video on how to upgrade?)

  • @Shoopps
    @Shoopps 1 year ago

    I'm happy AI still struggles with hands.

  • @lsd250
    @lsd250 1 year ago

    Hi all, can someone answer a question for me?
    How much GPU do I need to run A1111? I'm mostly using Midjourney because I have a really old PC.

  • @MONTY-YTNOM
    @MONTY-YTNOM 1 year ago

    How do you see the 'quality' from that drop-down menu?

  • @serizawa3844
    @serizawa3844 1 year ago

    0:01 six fingers ahushauhsuahsua

  • @rajendrameena150
    @rajendrameena150 1 year ago

    Is there any way to render the render elements inside a 3D application (masking ID, Z-depth, ambient occlusion, material ID and other channels) to add information in Stable Diffusion for making more variations out of it?

    • @foxmp1585
      @foxmp1585 10 months ago

      Currently SD can properly read Z-depth (depth map), material ID (segmentation map), and normal maps.
      It depends on the app of your choice (Blender, Max, Maya, C4D, ...).
      Each of these apps has its own way of rendering/exporting these maps; you'll need to find that out yourself. It'll take time, but it's worth it!

  • @hakandurgut
    @hakandurgut 1 year ago +1

    It would have been much easier with Photoshop's Select Subject. I wonder if edge detection would do the same for soft edge.

  • @springheeledjackofthegurdi2117
    @springheeledjackofthegurdi2117 1 year ago

    Could this be done all in Automatic using mini paint?

  • @itchykami
    @itchykami 1 year ago

    Everyone wants to give bird wings. I might try using a peacock spider instead.

  • @Shandypur
    @Shandypur 1 year ago

    There's a close button at the bottom right of the preview image. I feel a little anxiety that you didn't click it, haha.

  • @TheElement2k7
    @TheElement2k7 1 year ago

    How did you get two tabs of ControlNet?

  • @andu896
    @andu896 1 year ago

    Remove the background first with AI (or right-click on a Mac), then do the depth maps.

  • @electricdreamer
    @electricdreamer 1 year ago

    Can you do this with Invoke AI?

  • @maxeremenko
    @maxeremenko 1 year ago +1

    The image is not generated from the mask I created, only based on the prompt. I have set all the settings as in the video. What could be the problem?

    • @jibcot8541
      @jibcot8541 1 year ago

      Have you clicked the "Enable" checkbox in the ControlNet panel? I'm often missing that!

    • @maxeremenko
      @maxeremenko 1 year ago

      @@jibcot8541 Thank you. Yes, I clicked on enable. Unfortunately, it keeps generating random results. It feels like I have something not installed.

    • @maxeremenko
      @maxeremenko 1 year ago

      @@jibcot8541 The problem was solved by removing the segment-anything extension.

  • @serena-yu
    @serena-yu 1 year ago

    Looks like rendering of hands is still the Achilles' heel.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      Hands are just really hard to create and understand. Even for actual artists, this is one of the hardest things to get right.

  • @moomoodad
    @moomoodad 1 year ago

    How do you fix finger deformities, multiple fingers, and bifurcation?

  • @robbasgaming7044
    @robbasgaming7044 1 year ago

    Can this be used for commercial use? The base is someone else's intellectual property 🤔

  • @ericvictor8113
    @ericvictor8113 1 year ago

    Almost FIRST?