Hey Tim, You can use --version or --v while in the inpainting box. It doesn't fail, but it doesn't really work either, since going from v5.2 to v5.1 should reveal a different aesthetic and it doesn't seem to. But it's a minor thing, since most people stick to the engine they're in instead of jumping around.
You can use Ctrl-click to remove specific masks you've set up, either in the current inpainting session or in the last one when you reopen it to retry. This can be easier than undoing back through your history of masking.
To use it on a pan, just do a Custom Zoom on the pan with zoom set to 1, and set the aspect ratio to the ratio of your current pan. Then upscale that result and you're good to go.
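For instance (the prompt text here is just a placeholder), if a left pan left you with a 3:2 panorama, the Custom Zoom box might end up reading something like:

  a foggy mountain village at dawn --ar 3:2 --zoom 1

That re-renders the pan as a single editable image at the same shape, which you can then upscale.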
Good tips here! The Ctrl-Click (someone else said right-click) will likely be a huge timesaver! So, I did try Version 4, and maybe that was too far back? But in general, I think you're right: inpainting to different versions is likely something most would never bother doing. I still maintain that Version 4 had something super unique about it, though. I'd love to be able to use it as an effect.
V4 is still my fav, it's just such a shame about the low-res blurriness @@TheoreticallyMedia
Great video, Tim! A nice trick I accidentally stumbled upon: if you right-click a selection, it will delete it, making it easy to remove a specific selection. I often had the issue where I was getting nothing in my inpainting outputs; my selection windows may have been too small. Something else I've been experimenting with for cohesiveness is building out an image with inpainting, then rerunning that image as an image prompt. I hope eventually we get --iw 100 or something, so we can just inpaint on our own images. PS generative fill is awesome but has its own styles and limitations.
Hey Andy!! That's a hot tip with the right-clicking! I'll give that a shot! I had thought about adding a bit where I'd experiment with running an inpainted output as an image reference to see what would happen, but the video was running kind of long. One of the Discord peeps actually got image prompting to work in inpainting! Although it does not work very well. I think it might be a (currently) unsupported feature that has a backdoor. I think for sure it is coming, though!
I appreciate that you always take the time to explain the why and get into the details. I have used this extensively to remove garbled watermarks/text, or to fix hands or eyes, without breaking my entire composition.
Thank you so much! I couldn't agree more: all those little things that kept one roll from being perfect, we can now just zap them away!
Is it possible to use this new functionality with images uploaded to MJ, or only with images generated in MJ?
You can use image references in Vary (Region) in MJ. Also, after panning, do a custom zoom 1 with the same aspect ratio to get back to a clean image with all of the Vary options.
Yeah, it's funny: the documentation says no... except the OTHER documentation lists --iw! A pal on the Discord did mention you could use image prompting, but that Ctrl-V didn't work; you had to right-click and paste the link in (at least that was their experience). But overall, the image prompting isn't yielding good results.
I have the feeling it will eventually be a feature, but it wasn't ready for prime time yet-- but naturally, users found the backdoor....as we always do!
Good tip on the panning workaround!
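For anyone who wants to try the image-prompt backdoor anyway: image prompts in Midjourney go at the front of the prompt as a URL, so the Vary (Region) box would presumably look something like this (placeholder URL, and no guarantees, since it's unsupported):

  https://example.com/reference.png a red balloon floating near the ceiling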
I don't know why that came in with a strikethrough? That's weird... --weird 100
I get a funny thing where I have to first delete a character in the prompt while in the edit window before I can enter text. But I have used an image reference to add a graphic to a T-shirt on a subject. I placed it first (just like a normal prompt), and it turned out alright. @@TheoreticallyMedia
@@AnthonySell Yeah, I think it "works," it just isn't supported right now. I did end up chopping that part out of the video, though, since so many people were commenting on it.
Dude, thank you so much for that vary-after-panning tip!
Great video! I was able to get image prompts working for in-painting and hence consistent characters. It works really well to do face-swapping. I wrote up a Medium Post on it: "Creating Consistent Characters using Midjourney’s new In-Painting Tool"
Awesome man! I think you ping'd me on Twitter (still trying to bring myself to say X), I'll check it out! How do you feel it compares to Insight Face Swapper?
Sorry to bug you. This is an inpainting question. I was able to re-create your BACKROOMS image, balloon and all. I decided to also inpaint a dog sitting by the girl. Worked like a charm. Next, I wanted to add a bird sitting on the desk. It would not create the bird. Tried a cat; it did not create that either. So my question: after adding a balloon and a dog, did I reach a limit on adding objects with inpainting? Not sure.
It shouldn’t, as far as I’m aware. Try taking a large area and doing an inpaint with something really stupid, like a lawnmower, and see if that works.
You might be running into the issue I had with the wolf, later in the video.
I might have tried to generate a white wolf in a similar background and then blended the two. Maybe I'll try that later to see if there is any effect. Thanks for all the tips. Love the guitars.
Let me know how it turns out! And thanks on the guitar front! You play as well, I presume? What’s your setup? (Love guitar nerd talk!)
@@TheoreticallyMedia I have a Fender HSS Strat that I fitted with a Seymour Duncan Alnico II Pro Trembucker in the bridge. A Performer Telecaster with a humbucker and a Yosemite bridge pickup, and then I do have a Fender Tonemaster Deluxe, but I rarely use it anymore, I pump my GT-1000 into a Headrush 108.
Really excited to try this new feature out in my workflow. Thanks again for your excellent deep dive videos! I try and treat them as required viewing 👌
Oh, thank you so much!! Really a day uplifter to hear that!
Anytime! I know I'm not the only one who appreciates your format and your concise (and entertaining) breakdowns of how to use these amazing tools. We live in exciting times!
The smile looks off, because when the mouth smiles, the whole face changes. The eyes especially move too. So if you want to make it realistic, select the whole face, or just the eyes and mouth.
I've also experimented a lot and realized you can try literally writing just the thing you're trying to add as your prompt, especially in the case of the white wolf. MJ is pretty good at understanding how to blend it with the rest.
For sure, the bounding box takes elements from what surrounds it and makes calls based on it. Which is why you don't get animated looks in photographs... uhh, maybe unless you prompt for it? Hmmm, might want to try that just for fun!
@@TheoreticallyMedia I've been able to accomplish that with zoom and pan. I think it could be possible with Vary (Region). Could try some Roger Rabbit images. If I were to give it a try, I'd add --niji 5 at the end (because I think you can technically change between any v5 versions for Vary (Region)) and also try --no photo, something like the sketch below. I've also generally found, at least in my experience, that MJ often seems to be better at replacing existing elements than adding new ones.
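To spell that out, the Vary (Region) prompt box might read something like this (purely a sketch, untested, and the subject is just an example):

  a cartoon rabbit detective in a trench coat --niji 5 --no photo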
Thanks, Tim, I was hoping you'd review this new feature. Your typical thorough explanation! Thanks!
My pleasure!! I'm just happy it dropped after I got back from vacation! I mean, it dropped on the MONDAY I got back from vacation... so I had to hit the ground running. But hey, at least I got my week in the sun!
can I add in a custom image tho? like on say a dog artwork I paste my dog's face??
So... sorta. Midjourney always obscures faces (even dogs!) and never gives you a full 1:1. I think that was due to deepfake concerns. Although it's possible you'll end up with a dog's face that is similar to yours, you won't get the exact one.
And as a dog person myself, I know you’ll see the difference!
Already used it to fix hands! Let’s go!
/imagine The Six Fingered Man from Princess Bride....heyyyyyyy...wait a minute!!
I hope it is OK to ask you another Midjourney question. Please know that I do try to find the answer, be it research or Discord, before bugging you. By far, your knowledge level is way higher than others I work with. OK... here is my question.
*****BELOW IS MIDJOURNEY DOCUMENTATION FOR QUALITY*****
Quality
The --quality or --q parameter changes how much time is spent generating an image. Higher-quality settings take longer to process and produce more details. Higher values also mean more GPU minutes are used per job.
The default --quality value is 1.
--quality only accepts the values: .25, .5, and 1 for the current model. Larger values are rounded down to 1.
--quality only influences the initial image generation.
*****END OF DOCUMENTATION*****
No matter how many times I read it, I find it confusing. Why? Let me explain.
It says a higher quality setting produces more details (makes sense). It also says the accepted values are .25, .5, and 1, and that larger values are rounded down to 1. The default value is already set at the highest possible value, which is 1. So what's the purpose? If the default is already the highest acceptable value, why bother with this parameter at all? I guess I could set it to .25 or .5, but that gives less detail; no thank you.
It also says in the documentation that quality only influences the initial image generation. I have no idea what that means.
Am I missing something, or am I not understanding how quality works?
I swear I've seen prompts with it set to --q 2, but according to the documentation this would be rounded down to 1.
Any thoughts or help would be appreciated.
You know what, let me dig into this. This sounds like it might make for a good video, because you are totally right, the documentation makes no sense here.
I’ll put it on the docket and either do a full video, or pop it in as a segment. That’s a great topic!
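In the meantime, my hedged read of those docs: --q is a speed/cost dial, not an enhancer. Since 1 is both the default and the max, the only reason to touch it is to render faster and cheaper at lower detail, e.g. (the prompt is just an example):

  a gothic cathedral interior --q .25 (quick, rough pass)
  a gothic cathedral interior (same as --q 1, full detail)

And "only influences the initial image generation" seems to mean it applies to the first 2x2 grid render, not to upscales or variations. I believe --q 2 was accepted by some older model versions, which is probably where the prompts you saw came from.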
This was amazing, I can't wait to utilize this. In fact, a little bit of tears came out, probably like Joan of Arc had, but for totally different reasons lol. Keep 'em coming Tim! Thank you.
Thank you so much! Haha, yeah-- I can only imagine that's the smile you make when the voice in your head says: "By the way, you're going to be burned at the stake..."
Actually, it does accept image references, as long as they were previously uploaded. The results are another question, but in some cases it does a nice job 👍
Yeah, so I actually cut that section out. According to the documentation it isn't available, but as a few others pointed out, you can do it. I think it's an unsupported feature (given that it seems to be a bit unpredictable), and everyone just found the backdoor, which apparently was left wide open!!
@@TheoreticallyMedia That's right. It usually doesn't work well more than 50% of the time, and you can never predict the exact results...
I have noticed some differences when trying versions, also. For instance, I changed a little girl's ear to an elf ear by invoking Niji, which doesn't usually happen in normal v5. So I think it might actually work.
Yeah, Niji has some weird results. I don't think it is "plugged in" to the inpainting model yet. I think when you inpaint in Niji, it's just the standard model that is being utilized.
Dude…. Been going back and forth about grabbing the Midjourney subscription. Your channel totally sold me! Great content!! Subbed.
That’s awesome to hear! It really is my daily AI driver. If you need tips, the video I did on Prompting (the thumb is /imagine) is a pretty good place to start!
Let me know if you have any questions! Enjoy!
💪@@TheoreticallyMedia
Great info Tim! I haven't been using Midjourney for the past month because I'm in the middle of doing a video for a big online conference for Fine Art Connoisseur magazine, as well as a solo show, so that's got me bogged down. But I will be jumping back in in November. I just keep watching your videos; you give great information. 😊
Hey Nanci! That's awesome about the solo show, best of luck! I'm sure you're a seasoned pro at those things, so the only tip I have is "Lots of Wine loosens the Coin Purse!" And congrats on the conference! Happy to keep you informed so you aren't overwhelmed when you return in November! I'm sure there will be a TON of new features by then!
@@TheoreticallyMedia Tim, I heard you talk about your Patreon page, so I'm going to check that out when the dust clears in November, and I may be one of your people on there, but it all depends on how much it is. I have so much going on, but if I had to take Patreon lessons with anybody, it would be you. You're my favorite.
@@NanciFranceVaz_artist Oh, thank you so much!! To be honest, the Patreon is a little barebones. I'm trying to get some time to do more with it. Right now it's just $5 a month and is mostly for being able to watch the videos ad free, and PDFs are provided. It's more a "support" platform than anything. That said, I do need to spend some time creating separate tiers and doing more with it. Hoping to have it built up by the time Nov rolls around for you!
Totally happy to schedule something with you outside of any platform as well!
(Also, I'm blushing at the favorite thing...haha!)
Great run through, thanks as always Tim! I'm going to need more time with it, but happy to see so many getting great and interesting results from it.
It's just a ton of fun. Weird and quirky at times, but that's also a feature not a bug for me!
Totally! I refer to them as "undocumented features" @@TheoreticallyMedia 😄
Welcome back! Hope you had a great vacation. Thank you for the video. 👍 Glad it's working for you. Whenever I click on the prompt area after selecting a region, it jumps back out to Discord. Still, a wonderful step taken by them, and _in_ Discord? That's impressive!
That's weird-- might be a little buggy on their part? Maybe try writing the prompt before you select a bounding area? Maybe you can trick it into working for the time being?
@@TheoreticallyMedia Currently working on it. Thank you for replying.
I see it in my settings and click it, but it's not coming up after I upscale the picture. Fairly new and learning! Thank you for your help!
@@AG_before 100%! Let me know how it goes! A weird suggestion: But have you tried closing and reopening Discord? I noticed there have been updates that require a reboot. That might help?
@@TheoreticallyMedia Not a weird one at all. So here's the thing: I was using Discord in Chrome. I had logged out and back in, and there was no fix.
I downloaded Discord directly, updated that, and then retried logging into the browser version, and now it works.
Hope this helps someone, and thank you yet again. 👍
Really good work of yours, accurate and clear, thank you very much!
Hey! Thanks so much for the watch and the comment!!
Let's say you create an --ar 1:1 image; you'll get 1024x1024 px after upscale. If you pan left, you'll get 1536x1024, so you actually get 1.5x a normal Midjourney image, at --ar 3:2. If you now zoom that with factor 1, you'll get one normal Midjourney image again, so it'll be 1344x896 px. Zooming a panorama with zoom factor 1 converts the panorama into an editable Midjourney image, but that will always reduce the resolution to the max pixel count supported by MJ (around 1 megapixel). Only then is the image editable (you can vary it or inpaint it). You cannot edit a panorama directly, because it's multiple images stitched together, potentially very, very large, and the model doesn't support that.
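To make the arithmetic concrete (assuming the roughly 1.2-megapixel editable cap that those numbers imply): the 1:1 upscale is 1024 x 1024 = 1,048,576 px; panning left adds half a square, 1536 x 1024 = 1,572,864 px, which is over the cap and therefore not editable; zooming at factor 1 re-renders the same 3:2 shape under the cap, 1344 x 896 = 1,204,224 px. So you trade roughly a quarter of the pixels for editability.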
Ah, that makes sense! Well, perhaps we'll see it in version 7! Ha, actually by then we'll be complaining that we can't inpaint on video!
Just purchased 3 of your awesome PDFs brah! 🎉
Oh man! You rule!! Email/comments/Discord is always open if you have any questions or need anything! Thank you so much!!
Good tips man. Playing with it now.
Hope you have a blast! There's a lot of fun to be discovered here!
Between this and panning, Midjourney is perfect.
Zoom as well! There are a handful of things left: Consistent Characters, Correctly Spelled Words, and something like a Pose to Image...but yeah, I'm not complaining! We're in a real wonderland era!
I really like the image on the cover (the orange-haired girl with blue eyes). Is it made with some Midjourney prompt?
I can find it for you if you’d like, but the general format I use for these Midjourney tutorials is something like “stunning, beautiful, (some other adjectives), (topic)”
Depending on what I get, I’ll adjust. Like for this one, I might have started off with (topic) as “painting” but ended up with a woman actually painting. So, I would adjust to other ideas like watercolor, or something of the sort.
But honestly, I think the “amazing, splendid, stunning” prompt tokens at the start are doing the heavy lifting here.
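So a starting prompt for one of these might look something like this (purely illustrative, not the exact one I used):

  stunning, beautiful, ethereal, dreamlike, watercolor painting of a woman --ar 16:9

...and then I swap tokens in and out depending on what comes back.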
@@TheoreticallyMedia Great, thank you very much. I would love to know how to get that style of image; they inspire me a lot for my paintings. I'm new to Midjourney, but I'll try to get them with your advice.
Phenomenal breakdown!
Thank you Sway!! Means a lot!
image prompting 100% does work for me inside the inpainting feature. why do you think it doesn't work?
According to the documentation it doesn't work, so I (foolishly) didn't try it. The documentation says no... except the OTHER documentation lists --iw! A pal on the Discord did mention you could use image prompting, but that Ctrl-V didn't work; you had to right-click and paste the link in (at least that was their experience). But overall, the image prompting isn't yielding good results.
I have the feeling it will eventually be a feature, but it wasn't ready for prime time yet-- but naturally, users found the backdoor....as we always do!
Thanks for the vid. I turned on Remix in /settings but don't see an option to open up a separate inpainting section in Discord. I must be missing something. The fact that the Remix setting exists in Discord makes me think the option should show up alongside the other options at the bottom of an uploaded or generated image.
A few people ran into this (myself included): try closing Discord entirely and then re-opening it. There's an update it needs to make that's likely preventing the inpainting popup. Hopefully you've already got that sorted!!
@@TheoreticallyMedia I didn't initially realize the "Vary (Region)" option was for inpainting. I thought it was another auto-variation button, but they did put a paint brush there. Just needed to take a minute to process what I was looking at. Thanks for the videos.
@@garypick to be honest, I don’t know why they just didn’t call it Inpainting. It isn’t like it’s trademarked somewhere…
Thank you for the update.
Thank you for the watch and the comment!!
At the moment, when I try to write something in the area I want to change, the image gets closed... but I can still create an area I want to change...
Great and excellent tutorial! I have an unrelated question, but you're gifted on these subjects: did GitHub remove ControlNet from their repository? I finally installed Automatic1111 and about 12 extensions, but ControlNet was not available. I'm a newb, so maybe I'm losing my mind and not seeing it. Can someone comment and tell me the ControlNet extension for Auto1111 is there? Anyone can comment on this question, thanks.
I'd be surprised if that were the case. Is this it? github.com/Mikubill/sd-webui-controlnet
Granted, I'm not a local SD guy-- at some point in the near future I'll build a PC to play with it...maybe later this year. As is, being a Mac guy, SD is a pain to install and maintain.
I had to install twice but got it to work
In my testing with it, I feel the infilled part doesn't match the art style of the rest of the image 100%. I have, however, done an inpaint, gotten a better overall image, and then done a subtle variation to get the whole image looking more cohesive.
Yeah, I was going to note that in the Bond Niji image where I got Emily The Strange as a Ninja. MJ picked up: "This is an illustrated look" but not the overall aesthetic. But I do think that'll improve over time...
Is it possible to use Vary (Region) with uploaded images, or only with generated images?
Technically no. You could upload a reference image, run a prompt on it, and then Vary that output. But, Midjourney will only be “inspired” by your original image, so it won’t be a 1:1.
@@TheoreticallyMedia wow, BIG Thanks!!!
Great video. Thanks for the info this was super informative. Subbed!
Maybe this would be good for reference images that happen to have text that just gets scrambled: either to try to get rid of it, or to have Midjourney turn it into another element of the picture.
I’m curious to see how MJ reacts to all this. V6, in my opinion, should really have something like this in it.
If MJ can get an LCM working, along with their rendering models? I mean, it will be immensely powerful.
Thanks so much, Tim, really great stuff!
That’s fantastic to hear! Thank you so much for the comment!
Thank you. I was trying to figure this out.
Your thumbnail looks so much like some images I generated that I thought It was actually one of mine.
Haha, I like your taste in that case!!
@@TheoreticallyMedia
It's a striking image. I layered a screenshot of your thumbnail over one of my images. They're not completely identical, of course, but the face overlays perfectly, feature for feature, and the style and color are the same. No surprise there really. With so many people cranking out images there's going to be overlap. In fact, I compared a few other images, with the same looking into the camera pose and expression, and the faces are identical except color. It gives credence to the critics of AI "art" who say it's just copy pasting from existing works. It also gives credence to my long stated theory that there are only about five hundred original faces in the world, all others being a copy paste with random artifacts inserted.
My images were generated from an image prompt, a sunrise photo I took, with the simple text prompt "watercolor". Midjourney inserted a face, and I ran with the variations after first trying to get Midjourney to stop inserting faces.
Thanks for your video! Question: can we import an image from our PC and edit it with inpainting?
So, we cannot. MJ will always alter the initial image. For that, you're looking more at Photoshop's Generative Fill.
Kinda got the same question. I don't think it's possible, but it'd be good if there were a way.
You mean the AI in Photoshop, right? With all the money the MJ team is making, they still didn't put that option in...
Can you tell the devs to let us upload an image of a character and have inpainting solve the character consistency problem, without needing to select the area? I know of another way to do it, but it's limited to 50 uses a day. Many thanks 👍👍👍
Yeah, good 'ole Inswapper. I know consistent characters has been a long-awaited feature, and the devs have mentioned that it's a project they're working on. Hopefully we'll see it soon!
Hmmm, I don't seem to have this yet and I'm using 5.2
try closing Discord and reopening it. That should solve the issue. There's an update it needs to make.
Thanks for the :: slider tip.
It’s pretty interesting to use in a non-Inpainting context. And apparently you CAN weight in Inpainting as well! They backtracked on the documentation! Haha, I think the idea was to discourage users from using it until it was ready- but apparently I’m the only one that actually reads those docs! Ugh…
How'd you zoom out like that??
After you upscale an image, there should be two buttons under it labeled Zoom Out 2x and Zoom Out 1.5x. Just hit one of those! Also, you can do a Custom Zoom and set --zoom to a number lower than 2... say, 1.2 or something.
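For the Custom Zoom route, the box that pops up contains your current prompt, and you just edit the parameters at the end, something like (the prompt is illustrative):

  a lighthouse at dusk --ar 1:1 --zoom 1.2

As far as I know, --zoom takes anything between 1.0 and 2.0 there.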
thanks for this!
100%!! Thanks for the comment!!
Hi, thanks for yet another very helpful and good video. 👍 Just a little note that you might want to fix your email address on the gumroad product.
Ahhh, thank you! I forgot to get back to you to thank you for the correction on the last PDF. You might be my unofficial copy editor!
@TheoreticallyMedia you're welcome, it's something my eyes cannot seem to not notice 🤭 I can keep sending you notes if I notice any in the future if you'd like.
@@MaureenAstrid 100%! And I promise I'll keep you in the back of my head as I'm spell-checking! Or... sometimes I'll admit to having ChatGPT spell-check for me!
Link to discord server is invalid
Updated! Thank you!! Here's the link for you: discord.gg/nj294sEWqD
So exciting!
I'm really super stoked about it! It is SO much fun!
There’s nowhere to type the commands in mobile after selecting vary button :(
Just tried it; it's at the bottom for me. What device are you on? iOS here... Um, maybe try closing the Discord app and reopening it? That seems to be solving a lot of issues.
You have to turn "REMIX MODE" ON. /remix
@@SecondLifeAround thanks for that!! Nailed it!!!
Even though your video is about inpainting, you did address using text (prompt) weights. I've seen the weight format done in 3 different ways:
red car::3 blue car::1 (no spaces)
red car ::3 blue car ::1 (space after car)
red car:: 3 blue car:: 1 (space after colon)
On Discord, the above were examples I got for doing text weights. The last one seems to be the most effective, but I think you did the weight format differently. The Midjourney documentation is not helpful in giving us the straight scoop.
It's true! I think my original video on weighting is "wrong." I've seen it so many different ways as well, and like you said, the documentation is inconsistent too.
I’ll dig into this and see if I can get a straight answer!
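In the meantime, one hedged note: the number after :: is a relative weight, so red car::3 blue car::1 is asking for roughly a 3-to-1 emphasis on the red car concept. The open question is really just which spacing variants the parser honors.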
Testing the 3 different formats, I liked this one the best: red car:: 3 blue car:: 1 (space after colon), but I don't believe that is the correct format. There may not be a wrong way of doing it, not sure. That one gave me a good mixture of red and blue; the other two gave all red cars with no blue showing.
Can anyone tell me what a cinematic "still" means? What is the word "still" supposed to mean? Does it basically mean a snapshot of a video clip? Thanks, and sorry, not a native English speaker.
Yup! That’s basically exactly it! Like a screen grab from a movie. Let me know if I can help any more!
@@TheoreticallyMedia
Thanks! For now , no more questions:)
@@TheoreticallyMedia well, actually. Now that I think about it, where can I watch a video on your channel about the slider method that you mention in the end ?
@@johnnydeppsky3510 oh, let me look around. Haha; so many videos I can’t remember what is in what!
BTW, that image of the elegant 1960s man in the Californian diner is begging for a Chris Isaak face-swap.
It would be a Wicked Thing to face swap....
that face swap is only gonna break your heart
Free PDF Workbook on Gumroad is here: theoreticallymedia.gumroad.com/l/inpainting
👋
1:00
😆 🤣 😂 Love that image, BTW
I've found the inpaint to be pretty bad; it just makes whatever I select blurry and worse.
It can for sure be hit or miss. The one suggestion I might have for you is to try to make a bigger bounding box. Sometimes, with more room to play, it'll generate better.
Why would you use a picture as thumbnail that you are not elaborating on???
The smug and blend kinda ruin the image though. It doesn’t look good.
Do you mean Smudge? I think that typo'd as Smug. Yeah, I think it'll take some battling when MJ refuses to give you what you're looking for. I'm thinking about some alternate ways to tackle it, including going back to the old photobash method.
Double limbs in a horror story isn't a mistake 😂😂😂
haha, that's a GOOD point! Girl in a creepy room holding a balloon is unnerving, but girl with three arms holding a red balloon is TERRIFYING!
LoL. Joan of Arc's smile is like Kristen Stewart's.
Oh, 100% thinking the same. There are some Twilight stills in that training data for sure!
5:12 she looks like she's standing in a hole, particularly in relation to the stove behind her. Perspective is waaaay off.
Yeah, not gonna lie, it wouldn't have been my 100% this-is-it image either... but I've got a certain amount of time to do these videos, so I had to take it. I think with inpainting and a few other rolls, I would have gotten it there. But for demo purposes, I think it worked. I did like the lighting on the subject, though. That felt "stage natural."
@TheoreticallyMedia 100% know what you mean. It's crazy that sometimes it nails perspective, lighting, reflections, etc., but then sometimes it'll just bury someone's feet in the ground. It'll be amazing when we've got the granularity to say "scale her up 30%, rotate her 1/8 turn counterclockwise," etc. Probably won't be too long at the rate they're developing this stuff.
@@robertdouble559 Agreed! I don't engage with the anti-AI crowd very often (too little time for that), but one tentpole I have is: it really isn't just typing some words in. I mean, it CAN be, but for those of us trying to achieve very specific things? It's a TON more work that requires a lot of practice across different disciplines.
Come on, Joan of Arc was never happy.
Haha, Historically Accurate!!
The in-painting feature is gimmicky and does not work well...what a disappointing pathetic MJ downgrade. Leonardo AI's in-painting is far superior.
I was talking about this earlier, and I think I even had it in the video at one point, but it got cut for time. Yeah, Leo's and SD's inpainting is for sure more robust, but it has also been around a lot longer. This is, like, day 2 for MJ inpainting... give it some time. The first step needed to be taken.
I'm curious why anyone would systematically troll all of the MJ YT tutorials, trashing this feature, (as you have).