ComfyUI is fantastic, and you are showing a great example of it. These tools and workflows, like most parts of this workflow, exist thanks to the community behind them and the sharing they do. Selling this feels like exploiting the work of others.
Well, selling their work is not good. However, if you use somebody else's workflows to learn, eventually build your own, and then sell that, I see nothing wrong with it. Think about Linux: a lot of Linux-based software is sold commercially.
As for the models that people let you download: if the license lets you use them commercially, then it's fine; otherwise the owner won't allow it.
Just read the license and use the models appropriately.
@@photigy Fair enough. If you have developed your own unique workflow, and potentially your own nodes, on top of others' work, it becomes derivative work.
I can see doing food photography: a whole table spread with different dishes in a lovely ambiance and setting. All the food, with the right lighting, textures, and colour, would be shot as originals, with the rest, including props, generated by AI. Hey, expand your "prop library" with AI! And maybe AI could inform a "Downton Abbey" look. You can create a "visual brief" in collaboration with your client using internet images to describe different aspects of a photo shoot, including: style, specific location, historic era, concepts, visual elements such as texture or shape, camera angle, lighting and shadows, colour palette, background elements, hand positions, wardrobe, props, etc. Such a visual brief could inform both the "actual shoot" and the AI-generated portion. Could be very useful for social media work with brands...
Maybe in the AI for photography project we could do a shoot of, "Tea (dinner) at Downton Abbey." Just a thought.
Interesting ideas, thank you for sharing!
Just a tip to improve your video: use a pop filter on the microphone and don't compress the voice so hard. It will be easier to listen to.
Yep, agree. thank you!
Can you create realistic models holding the product as well?
Can you provide a masterclass for creating these types of workflows?
Thanks for sharing, a real hands-on demo of the app, much better than polished content where you never see how the result was actually achieved. I would like consistent results from my prompts, not slight variations, especially in areas that already look right, unless I tell the system to create those variations. In other words, I want to be able to produce repeated results that look exactly the same until I want variations to occur.
Yes, consistency is a big deal when dealing with AI. However, ControlNet and IPAdapter can help a lot with this, and fixing the sampler seed (in ComfyUI, set the KSampler's control_after_generate to "fixed") makes repeated runs come out identical. I'll be posting more about workflows that I create; I think there is a lot to improve.
Love it.
where can I use this workflow?
Do you have it for sale?
Nice video, thank you. Do you teach how to build a prompt like that?
Yes, I do. Check out bootcamp, we are starting this Monday, and I have 2 more places available: www.aimasterytools.com/ai-bootcamp-for-photographers
Thanks for the vid! Can this comfyUI system be called via an API/webhook if not hosted locally?
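On the API question: ComfyUI does serve an HTTP API on the same port as its web UI, and POST /prompt accepts a workflow in the exported "API format" JSON. A minimal sketch (assuming a default install on port 8188; the workflow fragment and node ids below are made up for illustration):

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # ComfyUI's default local address; swap in your cloud host

def build_prompt_request(workflow: dict, client_id: str = "demo") -> urllib.request.Request:
    """Package an API-format ComfyUI workflow as a POST to the /prompt endpoint."""
    payload = json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")
    return urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Illustrative fragment of an exported workflow (node id and values are hypothetical):
workflow = {"3": {"class_type": "KSampler", "inputs": {"seed": 42, "steps": 20}}}
req = build_prompt_request(workflow)
print(req.full_url)  # http://127.0.0.1:8188/prompt
# urllib.request.urlopen(req)  # uncomment once a ComfyUI server is actually reachable
```

A webhook-style flow would point COMFY_URL at the hosted instance and then poll the /history endpoint (or listen on ComfyUI's websocket) for the finished images.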
Would that somehow work with 3D product files? So if I have a 3D file of my product, can I generate pictures with that?
Hi, this is great stuff and resonates with me. I have been doing a mix of photography and video in multiple genres as a people and product imagist, and have been using AI tools only since December. I use them to help enhance some shots, including changing models' clothes and shoes, adding accessories, putting products into different backgrounds, and doing a shoot based on a concept that involves an AI background for the model or product.
As you have mentioned, I have already been able to tackle some lower-budget projects that I may not normally have taken on; with AI tools as a copilot I am able to give those clients an additional production boost and capabilities.
Censorship has been an issue, as many models/actresses/singers want different levels of glamour and Photoshop, and most AI tools hinder my work.
I have tried installing Stable Diffusion twice before, but my system is not powerful enough: a PC with 32 GB RAM and a 1650 Super NVIDIA card is just not enough, so most tools are cloud-based for me. That could be the way to go until I allow myself to splash out on a new machine and GPU specifically for my AI, video, motion, and 3D work.
Thank you for your comment. I see huge potential in this technology as well. I suggest you try running ComfyUI in the cloud. You can use this solution; I use it for my students on the bootcamp and it's great: www.thinkdiffusion.com/?via=alex
Appreciate the feedback. I'll take a look at this; it seems like a potential solution until I can gift myself a more powerful PC setup!
Hi Alex, could this take a product rotation on white background and change the background to a static scene or even to a changing/animated scene that loops with the product rotation?
Yes, it can rotate the subject. I have an example video made from one of my photos (a burger) done like this.
Thanks, but I was meaning can it work on an existing product rotation made with the turntable technique I learned from your site, and change the background?
Hi there, thank you for the content. Do you sell this workflow as it is shown here?
Hi, I couldn't figure out how to register for the free webinar on your site.
Found it but it says "We can't seem to find that page. "
Is there a video that walks through this workflow, or do you just do a webinar?
It's a part of our bootcamp for photographers at www.aimasterytools.com
Very nice, but I did not understand: what is free?
I made images using Midjourney working in the reverse way from this, and the results are much better than what you have shown here. Stable Diffusion probably has much more potential, but I have never used it, so I can only speak to what I see in this video. I would like to show you my images so you can see what I'm talking about.
Interesting to see. Send me an email at admin@photigy.com and I'll take a look.
Most likely you used Photoshop, or your subject didn't have text in it. In any case, I would love to see your work.
I'm also interested in seeing your pictures.. 😎
Where can I see them?
Thanks💪🏻✌🏻
Same here, if you have a link to see them?
I watched the videos, thank you very much. What if I have rings, necklaces and earrings, and I want to create photos in which the virtual model is wearing these pieces?
Thanks for sharing your experience with Stable Diffusion; it seems useful in my area of interior design. Could you please also share information about the powerful cloud computers you mentioned? I would like to try 3D video visualization for interiors in Stable Diffusion, but I only have a laptop. Thank you in advance.
Thanks for sharing. What is your bigger picture for this app, Alex?
As soon as we have a release, a beta version of it, you'll see for yourself. There is huge potential in it.
I have checked Stability AI, but it says that for commercial use I need to subscribe to a paid membership ($20 per month). Maybe something has changed? The free membership does not allow commercial use. Have I missed something? :D
It's free to use until you start making $1 million per year; I'm talking about their models. The $20 per month probably gets you computing power on their cloud servers.
I'm using it locally on my PC, so it's free for me.
I have checked Stability AI, but for commercial use I have to subscribe to a paid membership ($20 per month); the free membership does not permit commercial use. Have I missed something? :)
If you run it via any of the open-source GUIs such as A1111, ComfyUI, or others, it's free for commercial use until you start making $1M per year. After that, you need to pay Stability AI if you'll be using their models.
@@photigy Perfect! :)
I would love to do this, but I can't seem to find out how to download it. Do you have any advice?
It's not that easy, I will show you how
The whole censorship issue will kill the commercial AI image generator platforms sooner or later. Professionals, early adopters, and tech-savvy people especially are highly sensitive to this BS and don't like being patronized like this at all. DALL-E in particular refuses to create almost anything that it 'thinks' is not suitable for children, even if you pay for the ChatGPT 4 version with a credit card, which normally suggests you must be an adult anyway. At the moment many ordinary people cannot afford a dedicated PC with an expensive gamer GPU just for fiddling with a locally installed image AI like SD, but that will change rapidly. Even now you can use ChatGPT to create optimised prompts for SD; you just need to add the NSFW parts manually before using the prompt in SD. Those companies obviously don't realize, or don't care, that when it comes to media and entertainment it has ALWAYS been the NSFW material especially that advanced the technology.
Yep, 100%
Thank you for your efforts !!
What program is this? I can't find it, haha.
Thank you for your interest. We have a new course for AI photographers. You may check it here: learn.photigy.com/ai-smart-creator-course
The problem is that you need a really powerful computer to use Stable Diffusion properly; Midjourney is a good alternative for those who can't.
Unfortunately, MidJourney is not an alternative; it doesn't come even close to what you can do in Stable Diffusion.
And similar to MidJourney, where you are bound to using their own servers to compute, there are plenty of paid solutions for running Stable Diffusion in the cloud. So I don't think you can compare them, and Stable Diffusion doesn't have the disadvantages that MidJourney has.
and then you can also put it in something like KREA AI or Magnific and get even more details ;)
Bro, in Photoshop you can do all of this with Generative Fill in two clicks, without the extra suffering.
Bro, you cannot do this in Photoshop; Adobe's AI often falls short. I've tried, and it's not even close.
I also liked the workflow.. 💪🏻😎 Yea, I would like to see more of them. So I follow you now. 😁✌🏻
Sounds good, thank you. Will do more.
Is this workflow not free?
I made this workflow for my students on AI Boot Camp, so technically, it's not free to download and use.
Bro, Stable Diffusion does not allow commercial image generation; you need to get a license if you use the models.
You could grow old before this works the way it should 😅
haha:-) Not really, you can do it faster!
Let me tell you the real reason why MidJourney sucks:
it's mostly luck, and you have no control! Don't start with the "it's about your prompt" debate.
Copy and paste the same prompt that got others great results, and yours will turn out to be trash if you aren't lucky. 100% true.
Same prompt, same settings, whatever: luck!
Yes, agree. With SD you have way more control over the result. Not 100% control, there is still "luck", but still more than with MJ.
Wow... this is basically Blender-style node operation... I hate it...
Why hate it? It's so flexible; I love it.