Here's an UPDATED video for the new --CREF Character Consistency feature in Midjourney: th-cam.com/video/Vi5KQUZSKkM/w-d-xo.html
And another incredibly valuable tutorial. Thank you Tao!
Thanks! It looks like they have released a new consistent characters feature so I'll make a video testing that out soon!
Do you still like this technique given the new Midjourney "--cref" parameter? Would you change anything or still use this technique when it comes to getting different angles?
It works pretty well if you use --cref and just change the prompt by adding in "high angle shot from above" or something like that. I'm making a video for that now
Fantastic vid as always Tao! 🎉
Thanks man! A lot of new updates are coming to Midjourney soon!
Liked, subscribed, saved and downloaded the gumroad PDFs (including the paid one and donations for the other two). Very helpful, man. Thanks.
I appreciate that man! I've got more pdf/notion guides planned, just have to get around to making them.
Great tutorial! Thnx! This must have taken a lot of time to figure out. Thank you for sharing!
Thanks man! Yeah it took a while to make this work.
Best MJ tutorial I've seen.
Thank you! Glad to know you enjoyed it.
AWESOME! Love it! Thank you for creating this tutorial. 🙂
You're welcome! I'm glad you liked it.
Hey Tao. Thanks for this very useful tutorial. Exactly what I was looking for!
For sure! Midjourney is supposed to be coming out with a consistent character feature soon, so let's see if that can make this process easier.
10K subs!! Congrats pal
Thanks man! Excited for you to get 50k soon
Thanks, interesting and useful! 🙏
Glad you liked it!
Best tutorial, you rock!
Thanks for supporting man!
Great video, thanks for the tips!
Glad you liked this! By the way, this method is outdated; there's a new consistent characters feature that makes it a lot easier to get different camera angles. I've got a video tutorial here that shows how to use it: th-cam.com/video/Vi5KQUZSKkM/w-d-xo.html
@@taoprompts oh sweet! Thanks so much. I’ve got all your vids opened in tabs to get through 🫠 I find your method super straight forward and easy to follow. Keep it up!
so impressive!
Thank you! It took a while to make this work.
My friend Tao, Midjourney released --cref for consistent characters.
Yeah, I saw that, looks like I had some bad timing for this video.
@@taoprompts Don't worry, you always do the best!
Great video, but I think character reference should be out this week, so I will wait for a video from you about it to see how good it is.
I did hear that they are planning to release the character consistency feature. I'll try it out when they release it and test how much more it can do.
@@taoprompts waiting for your video about the character reference
Great video! I learned a lot. Do you have tutorials on using Leonardo Ai and DALL-E for this? Especially for making coloring book images with style consistency? Thanks much!
Thanks! I haven't spent that much time using Leonardo Ai or Dall-E, so I don't have guides for those. I've seen a lot of videos like those on other channels though.
Midjourney JUST had an update 4 days ago for consistent characters. Easier than ever now!!
Yeah, the new cref update is really nice, you can do everything from camera angles, expressions, and different activities with it. I made an updated guide for that feature here: th-cam.com/video/Vi5KQUZSKkM/w-d-xo.html
Fantastic video! I have a question: If I already have a centered face, how can I generate multiple angles from it?
Hey to generate multiple face angles try using the new character references feature --cref. I made a video tutorial for that here: th-cam.com/video/Vi5KQUZSKkM/w-d-xo.html
Thank you❤❤❤
Thanks Tim, you're welcome 👍
Very interesting and useful
Thank you! I hope this helps.
@@taoprompts yes definitely ❤
@@taoprompts Will you please tell us how to create an African tales story step by step with consistency on Midjourney?
I'll make a video on the new consistent characters feature, maybe that will help!@@Farrahkhan789
Great tutorial, I have a question here. While trying to use the lasso tool and then insert --picture, the output is an error message: "expected at least one argument". Could you please look into it? I have followed each step carefully, thanks in advance, Tao Prompts.
Hey,
First there's a new consistent characters feature "--cref" I would recommend you use for creating consistent characters. I have a tutorial for that here that shows how to get different camera angles and facial expressions: th-cam.com/video/Vi5KQUZSKkM/w-d-xo.html
Midjourney has made some changes to its interface. A temporary workaround is to just link the image URLs directly in the vary region prompt box, instead of trying to use /prefer_option_set.
Brother Tao, -cref just released and it’s a game changer
Yeah, I saw that announcement. I'm working on a video guide for that now.
@@taoprompts Awesome. Your content is highly underrated and practical. Saves me hours on Krita. Great channel
Your video tutorials have helped get me off to a great start on Midjourney. I've been trying to use the "Vary (Region)" tool, but it is not showing. I have scoured the Midjourney tutorials, Google, and YouTube, but have not found anything more than making sure my remix is on and that I am using V6. Do you know anything that may help? Do I need to have over 1000 images in my gallery before I can use the tool? Does the tool work on Chromebooks?
I appreciate any advice. Thank you so much for your incredibly helpful videos.
I think everybody should be able to use that tool. If you're using Midjourney version 6 it should be there. Do you have the newest version of Discord?
@taoprompts I needed to install Linux onto my chromebook before getting the web app version for Discord. Now I have some new bugs to figure out, but at least I can start using the vary region tool. Just in time for the character reference tool drop. It really is amazing technology.
Tao!
🫵🙏
Hi Tao! Great tutorial! I have a question. I created a shortcut for my character, and it worked well. However, when I used it in Vary Region (--cref --kim), MJ didn't recognize the prompt. It always says 'cref', 'expected at least one argument'. Does anyone know how to fix this error? Many thanks!
Hey, a lot of people have been having trouble with that issue right now. Midjourney appears to have made some changes to the vary region box, and that's messing up the /prefer_option_set command. A temporary workaround is to just attach the image_url directly after --cref instead of using /prefer_option_set.
"--cref image_url_1 image_url_2"
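To make that concrete, a full workaround prompt pasted into the vary region box might look something like this (the prompt text and URL below are placeholders for illustration, not a real image link — you'd copy the link of your own reference image from Discord):

```
a young detective in a trench coat, high angle shot --cref https://cdn.example.com/my_character.png
```

The key point is that the URL goes straight after --cref in the prompt box itself, with no saved shortcut name involved.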
@@taoprompts thank you so much for your response!
Thank you ! Is there a way to address scene consistency? You managed to solve the character consistency issue before the --cref feature was introduced; you're truly a genius.
That's tricky to do. Here's a Twitter post from Chase Lean that details one method to get multiple shots with the same scene; it's worth trying out: twitter.com/chaseleantj/status/1693246015124713634
Great video we are looking for --cref ❤
Thanks! That feature should be out soon, I'll make an update vid about it.
Wow, after all the work of making this video, MJ released --cref yesterday. Sorry.
Yeah, I saw that update. I guess I had bad timing for this video. But good news is I've posted an updated video on --cref here: th-cam.com/video/Vi5KQUZSKkM/w-d-xo.html
All your videos are good, but you do know consistent character cref is due for release this week?
Yeah, I heard they have been working on this for a while. I'm definitely going to try it and see what it can do. Like if it can do both people and cartoons.
@taoprompts let's hope it's good, with pika's new sound effects and lipsync it's the final hurdle to creating proper movie scenes. If not I'll definitely generate characters using your method as it looks like the best method I have seen.
Cool bro 🎉🎉
@Tao I found one problem with this method: consistency with the rest of the image, e.g. lighting. But I'm still looking for a solution.
Yes, it sometimes doesn't match the lighting of the environment image. Asking for specific lighting in the prompt, like "dim and desaturated colors", can help a bit.
Hi, is it possible to train an existing face (mine for example) like this?
You can try, but Midjourney is not designed to do that. The best way to embed your own face into an AI model is through Stable Diffusion with a LoRA. However, you can use the InsightFace swapper in Midjourney to do some basic face swapping. I have a simple guide for that here: th-cam.com/video/PvN-nhRMdm0/w-d-xo.html
Bro, amazing video. But I'm facing a problem with this method: I have created some characters, and whenever I try to use them in the "vary region" prompt it shows "unrecognized parameter". But when generating directly with "/imagine" it generates images fine. So what's the issue? P.S. I have checked the spelling, it's right.
Hey, in case you didn't know yet, there's a new consistent characters --cref feature that makes this process a lot easier.
I have a video tutorial on that here: th-cam.com/video/Vi5KQUZSKkM/w-d-xo.html
The vary region box has changed and doesn't work with /prefer_option_set and --cref right now. A temporary workaround is to just attach the image_url directly after --cref.
"your prompt --cref image_url"
If I want to delete saved data like this, what command should I use? Please let me know.
Just use /prefer_option_set and enter the name you want to delete inside the option box. Leave the value blank. Then when you press enter, it will delete whatever you saved to that name.
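For example, assuming you had saved a character shortcut under the name "kim" (a placeholder name here), the delete flow would look roughly like this sketch of the Discord command fields, not exact UI text:

```
/prefer option set
  option: kim
  value:  (leave blank, then submit)
```

Filling the option field and leaving the value empty is what tells Midjourney to clear that saved shortcut.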
@@taoprompts tks bro
With the latest version of Midjourney on the website, I just cannot get this to work at the beginning stage. It will just not make separate images the way it does for you. I've been trying to get it to work for the past 2 days. Very frustrating. Not sure why.
Hey, this video is outdated. The best way now is to use the --cref character reference feature to attach a character image and prompt for a specific emotion.
Why doesn't my vary region work. Can you help me please?
What doesn't work about it? By the way, there has been a new consistent characters feature released which makes this process a lot easier. I have a guide for that here: th-cam.com/video/Vi5KQUZSKkM/w-d-xo.html
@@taoprompts when I go to vary region it doesn't allow me to mark face, or anything
That's weird, are you using the newest version of Discord? Also, in the bottom left of the vary region interface, do you see a lasso selection tool and a square selection tool?@@amumtomum
Sorry Tao, it works now. But it didn't for weeks. Thanks for the video. I'll try now😊
wow
How do you do this on mobile? I have an iPad but I can't figure out how to do the parameter part.
Do you mean "/prefer option set"? Which part are you having problems with? Are you able to upload reference images into Midjourney?
Brilliant video! Hey Tao - you are now my favorite instructor for things like Midjourney and Pika - you don't waste my time. But just for some feedback that might be helpful (because I want your channel to thrive). When I first started tuning into your channel, I was kind of bored by your deadpan delivery. It took a while for me to appreciate your gifts as a communicator and your knowledge because you didn't seem enthusiastic about your subject. Now, you've got to be you, but if you just want an example of someone whose enthusiasm for his subject is over the top, take a look at someone like Matt Wolfe. Even if his subject was watching paint dry, I'd still watch because his enthusiasm is infectious. You are obviously in love with AI imaging but you might show it a bit more. Please keep up the good work, you deserve 1M subscribers.
Personally I like it. Most MJ tubers have attitudes; one I reached out to actually stole my idea and didn't give credit. This dude seems cool and legit. I'm here for the lessons, not performance. I do get where you're coming from, but again, he beats the rest hands down.
Well, as for me, one of the things I like is that Tao is chill, no "American style" of trying to sell you something or playing with basic emotions to hook you. He goes straight to the point, no BS, clear info. As I said, at least for me that's an asset and a more unique style.
Thanks for the feedback! That's definitely something I'm working on. It's a little weird to talk to a camera, so it takes time to get used to, but I do try to improve my delivery a bit every video.
Matt's really good on camera; even in his old videos from a year and a half ago, when he first started doing AI content, you could see that he was a very charismatic speaker.
Thanks, I appreciate that. It sucks that someone took your idea, I know that must feel bad.
I appreciate that, I try to make my guides simple and concise.
Why not use the envelope emoji to take separate images?
There's definitely other ways of doing it, I tried to make this guide as easy to follow as possible.
@@taoprompts cool!
Hi, --cref --anne or just --anne is not working, can you please share the possible reason?
The vary region box has changed and doesn't work with /prefer_option_set and --cref right now. A temporary workaround is to just attach the image_url directly after --cref.
"your prompt --cref image_url"
Is there an iPad/iOS equivalent to the "snipping tool"?
I haven't used an iPad before, maybe you could take a screenshot and then crop it somehow
@@taoprompts I did, but it's cumbersome compared to the snipping tool.
There may be a snipping tool on iPad that I'm not aware of. Midjourney did release a new consistent characters feature though. They posted it in the Discord announcements and it's much easier to use. I'll post a video about that soon.@@randybrown4649
CMD+SHIFT+4 if you have a keyboard attached
Well done, but it seems not to work for illustration-style characters.
Which part of the guide doesn't work for that? I will test out a version that works for cartoon styles in the future.
I wish MJ had a more user-friendly way of making consistent characters. The fact that you have to employ convoluted workarounds like this is annoying.
I read they are working on a consistent characters feature now, I think that should make things a bit easier.
Anyone know how to make your character face away from viewer?
Do you mean see the back of their head? In that case I wouldn't use an image reference, because it will always try to generate some facial features. Instead, just prompt for the back of the head directly with the details of that person.
@@taoprompts Wow thanks, I am such an idiot, I kept trying to make him face away with a frontal --cref, but like you said, it would always try everything to show the face, even break his neck. 🤣 I got around it by using another of your videos, which explained asking for a character reference sheet of "a full turnaround" when you prompt for a character. I managed to get my character facing away, and I can use that reference when I need him/her to show the back of their head. Thanks again! Love your videos.
@@gnoel5722 I ran into the same issue myself before; good to know you were able to get a reference of the behind-the-back view. That's probably the best way of doing it 👍.
🫵👌