Midjourney does struggle when the face is a small portion of the image; sometimes upscaling can help. But the consistent characters feature can exaggerate certain facial features, almost like a mini-caricature. Depending on the reference images you use, this can make a big difference.
I haven't tried that; the --cref feature gives the best results for photorealistic images. For black and white cartoons the results won't be as consistent, although it might be good enough depending on what you expect.
Hey Tao, question: when I'm using the snippet tool to save the photos, when I go to re-enter them into Midjourney I seem to get a noticeable loss in picture quality/resolution. Why is this happening? Am I missing something? And thanks for all your tutorials!
Hey, the snipping tool will only preserve the resolution of your monitor screen. Instead, you can download the image from Midjourney, open it at full resolution, and then use the snipping tool to keep the original resolution of the image.
Hi, thanks a lot for such a helpful video. I have a question. I have the paid Midjourney alpha version, and the issue is with creating consistent characters in it. I am writing a kids' story book, so I need at least two consistent characters for the entire book, with different emotions, poses, and outfits. Midjourney alpha is not helping here, or rather, I am not able to apply this knowledge there. There are no --cref or --cw options. Can you help?
The website has a different interface for using reference images. Here's a recent video tutorial I made for the alpha website: th-cam.com/video/N36oEnxD6QI/w-d-xo.html
Hi, I add in --cw followed by a number to change my character weight, but it says unrecognized parameter and I've tried several times, it still came out the same. I did add my cref reference before typing in --cw, but the problem persists. What can I do?
Midjourney appears to have made some changes to the vary region box and that's messing up the /prefer_option_set command. A temporary workaround is to just attach the image_url directly after cref instead of using /prefer_option_set. "--cref image_url_1 image_url_2 --cw 50"
I noticed that when I use /prefer_option_set to create my --cref and save it, mine doesn't say "--cref", it says "cref" in the custom option I set. What did I do wrong?
I followed the instructions until the part where you did 'region' variation using the set name, but I always got this error: ❌ Submission Error! SyntaxError: Unexpected token 'I', "Internal S"... is not valid JSON. What did I miss?...Thank you.
Hey, the /prefer_option_set shortcut is broken in the vary region prompt box right now. A workaround is to just use --cref with the image_url, so: "your prompt --cref image_url"
Is it possible to have 2 or more consistent characters in different scenes? I've tried this using the preferred option set for two characters and with two --cref at the end of the prompt, and haven't had much luck.
Hey, you may want to try using the vary region inpainting tool. Generate an image with 2 people in it. Then use the inpainting tool on their faces with --cref to try and create multiple consistent characters. I'm planning on testing this out myself soon.
This had been working fine for months, but it's suddenly stopped working. Now whenever I try to vary a region with one of my characters, it says "Unrecognized parameter(s): `--charactername`". Any idea what's gone wrong?
A lot of people are having trouble with vary region. A workaround is to just use the image_url links directly instead of /prefer_option_set. So inside the vary region prompt box: "--cref image_url1 image_url2".
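For anyone assembling these prompts programmatically, the workaround above is just string concatenation. Here's a minimal sketch in Python (the helper name and example URLs are hypothetical, not part of any Midjourney API):

```python
def build_cref_prompt(prompt, ref_urls, cw=None):
    """Build the text to paste into the /imagine or vary region box.

    Appends the character reference URLs after --cref and, optionally,
    a character weight via --cw. This only constructs a string; it does
    not call any Midjourney API.
    """
    parts = [prompt, "--cref", *ref_urls]
    if cw is not None:
        parts.append(f"--cw {cw}")
    return " ".join(parts)

# Hypothetical example: two reference images, character weight 50
print(build_cref_prompt(
    "a woman reading in a cafe",
    ["https://example.com/ref1.png", "https://example.com/ref2.png"],
    cw=50,
))
```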
I did the preferred option set a few different times, and every time I use it, it gives me a result of a totally different character (I mixed 5 almost identical images like you advised). I seem to get better results when I simply use one photo link reference. Why?
I haven't used discord for a while, but in the website if you upload multiple image references of a character and set them to be used as "character reference" it works pretty well
Sure thing, I have a video guide on exactly that here: th-cam.com/video/n4UIyb9Aln4/w-d-xo.html. Also, if you use that video's method: the /prefer_option_set shortcut is broken in the vary region prompt box right now. A workaround is to just use --cref with the image_url, so: "your prompt --cref image_url"
I can't use /prefer_option_set, it gives me an error. My option is set to the name ayana and this is the error I'm getting: Unrecognized parameter(s): `--ayana`, `photo`
Are you still having this issue? /prefer_option_set seems to work fine for me. Are you sure you entered the name and value the same way as I did in the video?
@@ReadingGal You can sign up for Midjourney here: www.midjourney.com/ You may need to set up through Discord if you are a new user; some new users may be allowed access through the website.
The image address link certainly doesn't work. You talked about those using desktop, but you never covered the majority of devices people use... phones!
In my case, even if I use the --cref function, the character changes a lot between the result and the original. I can't use the result, it's not similar enough. I made a little monster character.
This will work best with characters originally created in Midjourney. It will extract the major features of the character and try to replicate them, but they do not always look similar to the original. Sometimes it takes a few tries with different original characters before you get one that works well. However, I noticed that the generated consistent characters tend to all look similar even if they do not match the original.
To remove a saved option set, use prefer option set and enter the name you want to delete and leave the value box empty. If you enter that command it will delete that option set.
This is the coolest feature Midjourney has released!
If you also want to generate MULTIPLE consistent characters in the same scene, here's a new video guide I made for that th-cam.com/video/n4UIyb9Aln4/w-d-xo.html
I get asked this question a lot:
Q: Is it possible for me to use my own reference photos to create consistent characters in Midjourney, instead of generating reference photos inside of Midjourney?
A: Sort of.
A: AI methods that can use your own reference photos, such as DreamBooth or LoRA, actually have to make small updates to the AI model itself in order to use your reference photos properly.
Think of it as creating a customized version of the AI model specifically for your reference photos.
Midjourney doesn't do this. It uses the exact same AI model regardless of your references. So when you upload your own references to generate a consistent character, Midjourney essentially finds the "closest" thing inside the currently existing model and generates that instead.
The more similar your reference photos are to Midjourney's training dataset, the easier it will be for Midjourney to replicate those references.
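To make that "small updates to the model" idea concrete, here is a toy numeric sketch of a LoRA-style low-rank update in plain Python (an illustration of the general technique only; the numbers are made up, and this has nothing to do with Midjourney's internals):

```python
# DreamBooth/LoRA-style methods keep the pretrained weights W frozen and
# learn a small low-rank correction B @ A that is added on top of them.

def matmul(X, Y):
    """Plain-Python matrix multiply, just for this sketch."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

d, r = 4, 1                        # layer width, LoRA rank (r << d)
W = [[1.0] * d for _ in range(d)]  # frozen pretrained weights (dummy values)
B = [[0.1] for _ in range(d)]      # trainable d x r factor
A = [[0.2] * d]                    # trainable r x d factor

delta = matmul(B, A)               # low-rank update: only d*r + r*d numbers
W_adapted = [[w + dw for w, dw in zip(w_row, d_row)]
             for w_row, d_row in zip(W, delta)]

# The full matrix has d*d = 16 parameters; the correction only needs 8.
print(len(W) * len(W[0]), d * r + r * d)
```

The key point is that the base model's weights are genuinely modified (W plus a learned correction), which is exactly what Midjourney's reference feature does not do.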
Q: So there's no hope?
A: There's hope!
Image inpainting (vary region) and outpainting (pan+zoom) on your own images are definitely possible in Midjourney from a technological perspective. It just depends on when the Midjourney team decides to release them.
Q: What should I do then?
A: The best way to embed your own photos into Midjourney that I know of is using the insight face swap bot, although it is limited in what it can do.
I have a short tutorial for that here.
th-cam.com/video/PvN-nhRMdm0/w-d-xo.html
Source: I work in the AI industry.
The Insight face swap bot might already be quite outdated. Do you know of any other face swap tool that is better?
I haven't seen another one that works for Midjourney@@puja1985
Can you please make a longer video creating multiple consistent characters using your own reference face images? This will make it easier for us to understand how to create a narrative through the InsightFace application.
Can you actually make consistent characters on niji?
You are definitely the best MJ tutorial provider, practical and in-depth, with great analysis of its advantages and limitations.
Thank you, I try to break things down and go in depth with my analysis. This feature is really good and I'm excited to see how much it improves over the next 6 months.
@Gabriecielo I concur!
I love your vids but I can't seem to duplicate the results lol. Some anime characters have detailed markings on their faces, and no matter what I do, MJ always puts their facial markings (makeup/tattoos etc.) in a different spot or changes them in easily noticeable ways. It seems impossible to get a consistent character if there's a lot of detail. Also, clothes constantly change.
If you also want to generate MULTIPLE consistent characters in the same scene, here's a new video guide I made for that 👉👉th-cam.com/video/n4UIyb9Aln4/w-d-xo.html
I've been playing around with cref & sref the past couple of days. Feels so fkin good to not have to create a whole set of pictures & set them as a prefer option; it was never as accurate as cref. Definitely a huge W for Midjourney with this update.
They've been on a roll lately with the consistency style & character features. This was definitely needed to give us something new to work with.
Thank you, Tao! I always look forward to your videos. Your tutorials always address what I need. They're always structured, detailed, and make it easy for me to understand.
Thank you, I try to take my time to make the best tutorials possible. Great to hear they are convenient for you to use.
While I was just looking for how well this new feature works, I happened to learn so many things about Midjourney! You're awesome. Keep it up!
Glad to hear this was helpful! There are always new things to learn about how Midjourney works; I often spot tips I didn't know before when I watch other people's videos.
Dude! Thanks so much! This tutorial is the only one that gave a clear explanation of how to create a consistent character! Definitely a thumbs up!
Glad to help man, consistent characters are a great Midjourney feature
Easy to understand, my friend. I'd been trying to understand this for a while. Thanks for your contributions.
Great to hear you enjoyed this tutorial! I'm sure they'll release more updates for this feature. I'll keep you all posted when they do!
Awesome video. I've been trying to figure out how to use midjourney for a week now. I haven't found a video that actually demonstrates consistent characters as well as you do. Thank you.
Glad this was helpful 👍. There's a lot of different ways you can use consistent characters, I tried to cover as many as I could here
Wow man. I've watched tons of videos looking for each of the answers that you cover in this video! So friggin' awesome! Keep up the good work!
Thanks Alan, character consistency is a great feature, there's still a lot of room for exploration for what's possible with it. I'll keep you updated!
You're a really great guy, you're just trying to help people with AI and not just for money. Turkey loves you 🖤🖤
Thank you 🙏, I really appreciate that. I enjoy making these videos a lot, so thanks for supporting.
Thank you! Thank you really for all the knowledge you have passed on to us.
For sure man, glad these videos are helpful 🙏
God bless you ma guy! Saving my life out here!
For sure 👍. Consistent characters is one of the best features they've released.
This tutorial is amazing. Thank you!
Thanks man! I appreciate that 🙏
I was waiting for this video of you to drop🔥🔥🔥
This was a great update! I'm planning on follow-up videos soon.
Thank you very much, one more time. 👍 In my opinion with this release midjourney will keep as number one and unbeatable!
Yeah, this update was really needed for them. Consistent characters was the most requested feature by far in the past year or so. Now they can focus on improving it and bringing other new features for us.
I've been waiting a week for you to do this one!
Thanks for waiting! This was a huge update, I'll plan on making a few follow-up videos for character consistency soon.
@@taoprompts No one on TH-cam has mentioned consistency of setting - like how do I ensure my characters in different shots are in the same kitchen? Also, I hear more than one consistent character in a shot can be done, but it's problematic.
Consistent scenes are hard to do, here's a twitter post from Chase Lean that shares a method with some potential: twitter.com/chaseleantj/status/1693246015124713634 twitter.com/chaseleantj/status/1707009949241626789 . It's worth a shot. I will have to test out multiple consistent characters to see how it works. @@suzannecarter445
This was great and, as usual, better than any of the other TH-cam instructive videos on using the cref feature - thanks!
Thanks Suzanne, this one took a lot of testing to make!
So helpful, thanks, I'll check out the rest of your vids.
Glad this helped! Consistent characters are easier now with Flux finetuning. I have a guide for that here: th-cam.com/video/5Z8fwEeWfRg/w-d-xo.html
Your tutorials are the best! Thanks
I was struggling to use the cref feature recently, but always got some bad results. Your course saved my life!
It's a great feature! I think there's a lot more that can be done with it that hasn't been discovered yet.
That's some gold right there. Thanks man
Glad to help man 🙏
Appreciate your videos. Terrific knowledge you are sharing. Thank you.
This video provides crazy value, thank you so much!
Thank you 👍, I'm glad you found this helpful
A value packed guide.
Thank you very much
I added in as much as I could think of, great to hear this was helpful 👍
Tao bro, you make my night as usual when I see your latest vid and have only seen the thumbnail so far! 🎉
I appreciate that man, Midjourney is on a roll. I'm super impressed at how quickly they've been pushing out new features lately.
sooooo much fun!!! thanks for this wonderful video
You're welcome! This is a really nice feature to play around with.
Your tutorials are very helpful, Tao. Thank you very much man.
Thank you, I appreciate that. These tutorials take a lot of time to make, it's good to know they are helpful.
@@taoprompts you're welcome Tao.
this is a fantastic tutorial thank you
Great tutorial!! exactly what I needed.
Glad this helped man! And thank you.
Another great video!
Thanks Val!
Fantastic explanation
Thanks for a very nice, useful, and great video!
Another excellent tutorial!
Thank you!
Thank you for this great content. Helped me a lot on my project! Awesome epic stuff! Keep up the great content.
Happy to help with your project! I've got many more video ideas I'm working on 👍
Man, your tips are amazing. A huge thank you. If I may abuse it, could you make a video on the lenses and cameras to mention in midjourney to get good results please ?
I would add that it works better when the reference photos are of "very" low quality
Strangely 🤔🤔
Of course, here's a guide I made that covers all types of camera types and film stock you can use in Midjourney th-cam.com/video/Cv0J6fGS5Cg/w-d-xo.html
@@taoprompts Perfect , thanks 🙏👍
Big thanks!
Digging your channel hard buddy. Amazing work
Thanks Dorian, I really appreciate the support.
really thanks !!
Amazing tutorial bro❤
Thank you 🙏. This feature is going to keep getting improved.
First off, you are by far the best teacher on YouTube for MJ. Second, do you have any tips for preventing your character from generating the crazy freckles everywhere?
Thanks! You could try using negative prompting, so after your prompt add in "--no freckles" to reduce freckles being generated.
@@taoprompts Gonna try it, thanks for replying!
Great Video!!! Thanks
Great stuff 🎉
amazing!!!
Thanks a lot for accepting my request
For sure, keep those requests coming, I try to make videos for the most popular ones.
@@taoprompts Thanks a lot, may God bless you
Thank you so much
I'm always happy to help!
Yeah this video is crazy good
Thanks! This took a lot of time to put together.
You are god level man!!!!!!
I just do a lot of testing! Midjourney starts to get easier the more you use it
@@taoprompts Yes, you helped me a lot! One thing I noticed about character consistency is that it works well for people belonging to certain ethnicities. Have you found something like that?
@@ramanandh1261 I haven't tested it out that thoroughly yet. It seems to work fine for me when trying people of different ethnicities. How good the result will be depends on the training data as well. If it's trained on a lot of data from one particular group, it will be better at generating consistent images of people in that group.
Thank you so so much
Hi. Thanks for the amazing video. I started creating a consistent character, but I ran into a problem. I made several characters and already used them in several generations, and everything worked well. But now when I generate a picture and select the character's face to generate a consistent character, I write the prompt with --name_of_the_character and get an error: "Unrecognized parameter(s)". Can you please tell me what to do?
Did you change anything in the saved options? You could try resetting the option for the character name and parameters, and make sure the parameters were entered correctly.
@@taoprompts No, I didn't change anything. I've already tried changing the option, but it didn't help either. If you generate using /imagine, then everything works as it should, but if you use vary (region), it does not work. The most interesting thing is that if you copy the links to the pictures that I used when generating the character and use these pictures for vary (region), then everything works, but it doesn't work with my option for the character name
@@vyacha9544 It's weird that it works if you use "/imagine" but not with the vary region tool, I didn't get that error before. I'm not sure what's causing it in that case.
Did you solve it? I'm having this exact same problem.
Thanks 👍👍👍👍
Hi Tao! Very interesting! I wanted to define a photo of a character and then transpose it into different photos with a certain style. Should the photo used with --cref be neutral, or is it better if it has the same style as the destination?
Thanks, thank you very much!!!
You could try using a character reference photo and then a style reference photo for the specific visual style you want.
This is a great tutorial thanks! I finally made my characters.
One question though - can one image have 2 consistent characters?
You can use cref to inpaint multiple consistent characters into an image, I made a new video guide for that here: th-cam.com/video/n4UIyb9Aln4/w-d-xo.html
I love this...
is there a way to add 2 more more characters in an image with the same steps you followed here. for different characters in 1 image, for a comic? like --cref 1 , --cref 2 , after our process i need both --1and --2 in single image together in different situation. is it possible?
Hey, you could try generating an image with 2 people in it. Then use the vary region tool on each of their faces individually with --cref. I will try this out myself and post an update video if I get good results.
@@taoprompts That will be wonderful... can't wait to see it. I'm planning on making comic strip videos, for example about Loki and Thor. But in the process I can't get the two of them together with the call parameters and such. Sometimes I get two Lokis instead of Thor and Loki, sometimes two Thors... sometimes I get Thor and Loki in the same dress. Not getting the result which I wanted for my comic strips.
If you want images showing popular characters, you can always try prompting for them directly:
"Three superheroes sit side-by-side at a diner. On the left is the Hulk wearing a red hoodie. In the middle is Black Widow wearing a light blue collared shirt. On the right is Tony Stark wearing a black leather jacket. There are many plates of half eaten food in front of them. There are reporters in the background taking photos of them. Shot with Kodak Portra 400 film" @@kisukeurahara007
I'm still learning this stuff. I love how methodical you are, like: "well, MJ is merging her hat and hair... so I'll use the vary region option". This kind of methodology is still not clear to me at present.
There's a lot of little details in Midjourney like that to make it work, I'll try to give more hints like that in my videos.
can you make a pdf? your tutorials are so good. It would be great to have a pdf for each tutorial
Thanks! I am planning on making more pdfs / notion workbooks for my videos. They take a while to make, so I'll probably do them for the more popular videos.
Thanks
Amazing video. What subscription plan are you using?
I have the standard annual plan
@@taoprompts great!! I'm new to this ai stuff and your videos have been amazing and educating. I saw something like 200~ a month subscription. Does this means I can only generate 200 picture a month?
@@alexarthur6268 For the basic plan you can generate 200 prompts a month. Each generation will create 4 images based on the prompt.
@@taoprompts okay thank you for the info brother ♥️♥️
Hello Tao. Your tutorials are incredible, congratulations on your work.
Can you help me with this? I have a character made with CGI, but I would like to optimize the work by also creating some images of her with AI. I've tried to get MJ to recreate it but the result is never satisfactory. Do you have any idea how I can solve this?
The character consistency feature is only designed to work with images from Midjourney, so it will be hard to have it recreate your own characters. It may be possible to get a reasonable replication of your character using Stable Diffusion + training a LoRA-type model, although it will take quite a bit of model tuning depending on how consistent you want the images to be.
Is niji also recommended if I want to generate fantasy rpg (like dnd) artwork? Or should I use regular midjourney v6?
I think both can work. The regular Midjourney version is also great for fantasy/splash art type of images. Niji is a bit more specialized for the Anime style, but the regular Midjourney can do pretty much any style.
This is amazing, thank you so much!! What do you do, though, if on MJ the option to copy the image link doesn't come up when you right-click? Any suggestions?
That's weird. If you go to the Midjourney website and head to the archive tab on the left, you can download all your images there.
Where have you been all this time? I've been searching for this for like 3 days lol
I got you man, glad this helped you out.
Do you know if it's possible to separate two characters with the character commands? For example, I created a character called "nina" and wanted her to stand in front of another person who would look totally different, but when I use --nina, it applies it to both characters.
Here's a recent guide that shows how to make multiple characters in Midjourney & AI Video
Thank you! Is there a way to address scene consistency? You managed to solve the character consistency issue before the --cref feature was introduced; you're truly a genius.
Here's a post from Chase Lean on scene consistency: twitter.com/chaseleantj/status/1707009949241626789 . It's worth trying, although Midjourney is still not great at this yet.
Thanks for the new tutorial. The full-body shot method works well for anime figures but unfortunately not for photorealistic images, because the faces are distorted. However, that is due to MJ v6 and not to your tutorial. ;-)
Midjourney does struggle when the face is a small portion of the image; sometimes upscaling can help. But the consistent characters feature can exaggerate certain facial features, almost like a mini-caricature. Depending on the reference images you use, this can make a big difference.
Also, in your opinion, is it possible to use --cref to create a series of consistent characters suitable for colouring books?
I haven't tried that; the --cref feature gives the best results for photorealistic images. For black and white cartoons the results won't be as consistent, although it might be good enough depending on what you expect.
Does it work effectively with your own photos if you upload a full-body pose? Good point mentioning the feet/action and the missing accessories.
This feature isn't designed for your own photos, it will work best if you use Midjourney generated photos as references.
Hey Tao, question: when I'm using the snipping tool to save the photos, I seem to get a noticeable loss in picture quality/resolution when I go to re-enter them into Midjourney. Why is this happening? Am I missing something? And thanks for all your tutorials!
Hey, the snipping tool will only preserve the resolution of your monitor screen. So you can always download the image from Midjourney, open it at full resolution, and then use the snipping tool to keep the original resolution of the image.
@@taoprompts Thanks!
Can this be used in YouTube videos?
yeah, should be fine
Hi, thanks a lot for such a helpful video. I have a question. I have the Midjourney alpha paid version, and the issue is with creating consistent characters in it. I'm writing a kids' story book, so I need at least two consistent characters for the entire book, with different emotions, poses, and dresses. Midjourney alpha isn't helping here. Rather, I should say I'm not able to apply this knowledge there: there are no --cref or --cw options. Can you help?
The website has a different interface for using reference images. Here's a recent video tutorial I made for the alpha website: th-cam.com/video/N36oEnxD6QI/w-d-xo.html
@@taoprompts you made it easy. Thanks 🙂👍
Hi, I add --cw followed by a number to change my character weight, but it says unrecognized parameter. I've tried several times and it still comes out the same. I did add my --cref reference before typing --cw, but the problem persists. What can I do?
Midjourney appears to have made some changes to the vary region box, and that's messing up the /prefer_option_set command. A temporary workaround is to attach the image URL directly after --cref instead of using /prefer_option_set:
"--cref image_url_1 image_url_2 --cw 50"
I get different results with the "split into 6 different images" part.
It may take a few tries to get a result you want. Keep the stylize value low and try using a wide 16:9 aspect ratio
I noticed that when I use /prefer_option_set to create my --cref and save it, mine doesn't say "--cref", it says "cref" in the custom option I set to --cref.
What did I do wrong?
Fixed it
I followed the instructions until the part where you did the vary region variation using the set name, but I always got this error: ❌ Submission Error!
SyntaxError: Unexpected token 'I', "Internal S"... is not valid JSON.
What did I miss?...Thank you.
Hey, the /prefer_option_set shortcut is broken in the vary region prompt box right now. A workaround is to just use --cref with the image URL, like so:
"your prompt --cref image_url"
I can't copy the image on the browser version to put in the --cref. How can I do this on their new website platform?
Right click -> share and save -> copy image url
In your Leonardo tutorial you mentioned it, but can you use your own photos here?
No, unfortunately this feature doesn't work well with your own images
Is it possible to have 2 or more consistent characters in different scenes? I've tried this using the preferred option set for two characters and with two --cref at the end of the prompt, and haven't had much luck.
Hey, you may want to try using the vary region inpainting tool. Generate an image with 2 people in it. Then use the inpainting tool on their faces with --cref to try and create multiple consistent characters. I'm planning on testing this out myself soon.
you're Worldwide
We're going to keep growing 👊
This had been working fine for months but it's suddenly stopped working. Now whenever I try to vary a region with one of my characters, it says "Unrecognized parameter(s): `--charactername`". Any idea what's gone wrong?
A lot of people are having trouble with vary region. A workaround is to just directly use the image_url links instead of prefer_option_set.
So inside the vary region prompt box: "--cref image_url1 image_url2".
I did the prefer option set a few different times, and every time I use it, it gives me a totally different character (I mix 5 almost identical images like you advised). I seem to get better results when I simply use one photo link reference. Why?
I haven't used Discord for a while, but on the website, if you upload multiple image references of a character and set them to be used as "character reference", it works pretty well.
nice
Hi Tao, I want to ask how to make 2 or more people in one photo using different consistent characters?
Sure thing, I have a video guide on that exactly here.
th-cam.com/video/n4UIyb9Aln4/w-d-xo.html
-> Also if you use that video's method:
The /prefer_option_set shortcut is broken in the vary region prompt box right now. A workaround is to just use --cref with the image URL, like so:
"your prompt --cref image_url"
What about 2 consistent characters in the same picture? Or even 3 or 4?
It should be possible with 2 or more characters using the vary region inpainting tool, I will test that out soon.
❤
I can't use /prefer option set, it gives me an error. My option is set to the name ayana and this is the error I'm getting:
Unrecognized parameter(s): `--ayana`, `photo`
Are you still having this issue? /prefer option set seems to work fine for me. Are you sure you entered the name and value the same way I did in the video?
Can we produce consistent characters in cartoon style, like illustrations in kids' story books?
Sure, this feature works best with photorealistic images, but it can be applied to cartoon characters too.
@@taoprompts Could you please send me a direct link to download Midjourney v6, the one you were giving the tutorials with?
@@ReadingGal You can sign up for Midjourney here: www.midjourney.com/
You may need to set up through Discord if you are a new user. Some new users may be allowed direct access to the website.
The image address link certainly doesn't work. You talked about those using desktop, but you never covered the majority of devices people use... phones!
Mobile works mostly the same; you just press and hold on an image to get a menu with the option to copy the media link.
Is there any command if I want to use an object for reference, not a character?
If you want to try using an object, you can use the same --cref parameter. It may not work quite as well, though.
If I want several consistent characters to appear in one screen, can this be achieved?
It's definitely possible, I have made a video guide for multiple consistent characters here: th-cam.com/video/n4UIyb9Aln4/w-d-xo.html
The freckles and moles, it doesn't create them the same way, so I would go for someone who has clear skin since it's easier to replicate.
That's true, Midjourney focuses mostly on the larger features and doesn't always get the fine-grained features correct.
In my case, even if I use the --cref function, the character changes a lot between the result and the original. I can't use the result; it's not similar enough. I made a little monster character.
This will work best with characters originally created in Midjourney. It will extract the major features of the character and try to replicate them but they do not always look similar to the original.
Sometimes it takes a few tries with different original characters before you get one that works well.
However, I have noticed that the generated consistent characters tend to all look similar, even when they do not match the original.
Can you explain why it's not good to use our own photos as reference? Thx!
Hey, I updated the pinned comment on the video that explains why your own photos don't work so well.
@@taoprompts thanks!!!
Is there a way to do Western-style comics, not just Niji mode?
Yes, just use a character reference image of a Western-style cartoon, and maybe use a style reference with a Western cartoon as well.
How can I delete a created /prefer_option_set if I did it wrong?
To remove a saved option set, use /prefer option set, enter the name you want to delete, and leave the value box empty. Entering that command will delete the option set.
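As a quick sketch of that lifecycle in the Discord slash-command form (the option name "nina" and the image URL are placeholders):

```
/prefer option set option:nina value:--cref <image_url> --cw 100    ← create the shortcut
a woman walking on a beach --nina                                   ← use it in a prompt
/prefer option set option:nina                                      ← delete it (value left empty)
```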
What about multiple characters in same image?
I will test that out this week, and post an update video soon.
Midjourney v6 is free for how many pictures? I mean, how many pictures can I produce for free?
I don't think there is a free trial right now. You can always purchase a subscription and cancel it if you don't like it.
Can you make a video on story book illustrations?
What kinds of story books illustrations are you interested in?
@@taoprompts Pixar style or 3D cartoon, etc. Thank you
what if you want to include 2 characters?
Here's a video guide I made for getting 2 consistent characters: th-cam.com/video/n4UIyb9Aln4/w-d-xo.html
Would you please share how to do 3D Pixar-style animation using Midjourney, please?
Do you mean like a prompt guide?
@@taoprompts Yes, prompt guidance