Fantastic tutorial, thank you!
Perfect!
Thank you for this very useful video, Vladimir. When applying Vary (Region), is there a way to save that selection so it can quickly be reused with different prompts, or do you have to redraw the Vary (Region) selection each time you change the prompt?
Hello, my friend,
Firstly, I want to congratulate you on the rich content in the Midjourney image video. I've just entered the world of A.I., even taking a course on Midjourney, but you went WAY beyond the course I took and helped a lot.
Still, if possible, I'd like to ask a few questions:
Much of the work I get from clients is in the field of image modification and creating alternative versions, whether in Disney style, anime, caricature, or even realistic, like the cyberpunk you did. However, even after applying what I learned from your video and my course, I still run into a significant barrier: the face Midjourney creates doesn't closely resemble the photo I provide. In other words:
1) Do you recommend any software/program/app that makes this blending easier?
2) You used a photo in the video where, in the first prompt + blend, the image was practically PERFECT. Even using the same settings, I didn't come close to anything similar in the first 10 variations. Is this because you used a high-quality photo or because it's a photo of someone who is already public (I imagine, being a model, she has several photos on social media that Midjourney can use as a base)?
3) What is the minimum recommended quality and size of a face photo to be used effectively?
4) Do you have any courses or would you recommend any focused on the niche I mentioned?
5) Besides the prompts and settings you used in the video, is there any additional information you use externally to create these very similar images?
Well, I appreciate the video again and any answers you can provide.
Oh... sorry for the English assisted by ChatGPT, hahaha.
Big hug,
Claudio
1. If you want a unique look, I recommend a local installation of Stable Diffusion with your own custom-trained models to reflect a specific style and look (a minimal example follows after this reply).
2. All the photos I use with models are my own; I am a photographer, and those photos of the models are from my sessions. It is possible some are in a database (I do sell some on stock websites), but the ones used in the video were shown only on my FB page, so it is very unlikely they are in a scraped database. You can try sending me the photo you are trying to use in MJ; search for geekatplay or Vladimir Chopine on Google.
3. 512x512; this is the image size used for training models in SD.
4. I have a few on Udemy and am working on a new one specifically about Photoshop/AI, high-end retouching, and compositing. I also have live courses on digitalartlive.com/
5. Use /describe on an image to see how MJ would describe the image you want to use.
Thank you!
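For anyone who wants to try the local Stable Diffusion route from point 1, here is a minimal sketch using the Hugging Face diffusers library. The local model path and the "sks person" trigger token are placeholder assumptions for your own DreamBooth-style fine-tune, not anything taken from the video.

```python
# Minimal sketch: generate a 512x512 portrait from a locally fine-tuned
# Stable Diffusion 1.x model (e.g. DreamBooth-trained on 512x512 face crops).
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical path to your own fine-tuned checkpoint
pipe = StableDiffusionPipeline.from_pretrained(
    "./my-custom-face-model",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "portrait of sks person in cyberpunk style, photorealistic, detailed",
    height=512,   # SD 1.x models are trained at 512x512, as noted in point 3
    width=512,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("custom_face_cyberpunk.png")
```

The same fine-tuned checkpoint can also be loaded in a ComfyUI or AUTOMATIC1111 workflow; the point is that a face trained into the model stays consistent across prompts, which helps with the face-similarity barrier discussed above.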
Hello Vladimir, any tips for consistency in ComfyUI?
I am working on multiple videos about using ComfyUI.
Can this method compare to DreamBooth?
How can two faces be converted at the same time?
💯💥👍