I gave GPT-4 that prompt you had, and told it to make opposite negatives for them as well. It's generating prompts that produce frankly the best photos I've ever seen generated.
It's a pretty great level up, imho. Cheers!
@@sedetweiler @smashachu How did you access GPT-4? I have a Plus subscription, but every time I click to change the GPT model, it changes to "undefined"? Thanks
Can you explain the workflow a little? Will it generate a perfect negative prompt, or can you at least tell me exactly what I should instruct it to do?
Thanks for this tutorial, I enjoyed it a lot. My main problem is this error: "- Value not in list: model: 'gpt-3.5-turbo' not in []". It happens to me because I'm using a personal free ChatGPT account (it still provides API keys), but it's still not working.
Nobody knows the solution?
I have the same problem. Any solutions?
As soon as you pay, it will start working.
I am following your tutorial but I get an error message:
Prompt outputs failed validation
ChatGPT Simple _O: - Value not in list: model: 'gpt-3.5-turbo' not in []
There is no config in D:\ComfyUI-master\ComfyUI_windows_portable\ComfyUI\custom_nodes. I have an API key but can't find the config or 'gpt-3.5-turbo'. Any help would be appreciated.
pip install openai==0.28.1
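Pinning the old library version helps because these nodes were written against the pre-1.0 openai Python package, whose module-level API was removed in 1.x. A rough way to reason about which side of that break an install is on (the helper below is hypothetical; the version-1.0 cutoff is the real breaking change):

```python
# Hypothetical helper: decide whether an installed openai package version
# still exposes the legacy pre-1.0 API (module-level openai.ChatCompletion)
# that older ComfyUI nodes such as the 'O' suite were written against.
def has_legacy_openai_api(version: str) -> bool:
    major = int(version.split(".")[0])
    return major < 1

print(has_legacy_openai_api("0.28.1"))  # True: old nodes should work
print(has_legacy_openai_api("1.3.0"))   # False: old nodes need updating
```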
Lol, that's cool. I was already doing similar things, just using Python to prompt the OpenAI endpoints and a locally hosted ComfyUI. It's easier to keep track of what I'm doing, with all my relevant data in my own program, than letting ComfyUI prompt for me, and I can do my own text sanitization on the OpenAI response before feeding it to ComfyUI. The OpenAI endpoints also accept so many arguments that affect the outcome (presence penalty, frequency penalty, temperature, top_p, etc.) that it feels very easy to interact with them directly in the completion request. In general, I feel it's more versatile to prompt OpenAI from code and feed that data to Comfy via its API.
You should consider making a node based on your work! Sounds great!
@sedetweiler A bit before GPT-4 first came out, I started a "use ChatGPT to learn Python while making individual tools that might become useful if I find a way to combine them" project, and that has led me this far. I haven't fully familiarized myself with ComfyUI yet; for me, it's an access point to its API, where I can drop whatever I want into the workflow JSON as my API payload. If you want to talk or share anything, I'm down, but I'm very new and learning as I go.
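The approach described in this thread — prompting OpenAI from your own Python and submitting workflows to ComfyUI over its local API — might look roughly like this. A sketch only: the sanitizer is a made-up example of the cleanup step, and the `/prompt` endpoint on port 8188 is ComfyUI's default.

```python
import json
import urllib.request

def sanitize(text: str) -> str:
    # Example cleanup before the text goes into a CLIPTextEncode node:
    # drop wrapping quotes and collapse newlines/extra whitespace.
    return " ".join(text.replace('"', "").split())

def queue_workflow(workflow: dict, host: str = "http://127.0.0.1:8188") -> None:
    # ComfyUI accepts a workflow saved in API format as JSON at POST /prompt.
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/prompt", data=data,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

print(sanitize('"a moody sunset\nover the mountains"'))
```

In practice you would paste the sanitized LLM response into the positive-prompt field of the workflow dict before calling queue_workflow.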
Your tutorials are fantastic! Please keep making them! Thank you!
That's the plan! 🥂
I second this. This series is the definitive masterclass in stable diffusion!
Thank you for the great video, but why am I getting this error?
Prompt outputs failed validation
ChatGPT Simple _O:
- Value not in list: model: 'gpt-3.5-turbo' not in []
I got the same error.
@@gardentv7833 I had the same error. I just saved the flow, closed ComfyUI, and reopened it, and it worked.
Same Error
I'm also getting that same error.
I had actually typed my API key in wrong, so you all might want to check and recheck it.
Haven't even watched the video yet, but this is an area I'm definitely interested in. Advanced Hallucinations from the "mind" of ChatGPT: Let the Games Begin!
I have been having a blast with it.
you do awesome work man! thank you for showing these!
Glad you like them! Cheers!
Looks like "O" has changed the ChatGPT nodes completely (or I must be missing something). The components mentioned in the video are gone, and the new ones are less obvious to use. Love the idea. I'm going to figure out where "O" went with the update.
I will have to check it out. I am actually surprised there are not other ChatGPT options out there yet. I might have to do another search as this one is fine, but often adds weird tags that are not coming from ChatGPT.
Cool, but can we do this with open-source models, perhaps with Ollama?
Here is what I tried and the results are outstanding! "How will the best photograph ever made be described"
Fantastic!
Hello, thanks for coming back. I know this is for ComfyUI, and I am going through that now (the learning curve takes time). But a quick question, since you work for them and would know: before A1111 1.6, every sampling method had the same set of parameter sliders and numerical values to modify. With 1.6, the sampling-method UI is different. How do I get that old grouping of sliders and values back in 1.6? I have taken large amounts of notes for my images and sampling methods, and I looked through the settings but could not find it. Help? Sorry, not to hijack the ComfyUI thread, where I know those controls exist. Thanks.
You can rename the category (e.g. "OpenAI" instead of "O") if you're willing to edit the node's Python source and change the string there. If you "git commit" your change into the cloned repo, it should merge fine with later updates, up to a point. Ultimately, if the author decides to change the category name as well, the merge would likely result in a conflict needing resolution.
Yeah, I just didn't want to deal with it. I am hoping there is an addition at some point to allow us to organize these as the list is going to get long!
@@sedetweiler Not to mention duplicate nodes!
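For reference, a ComfyUI node's menu group is just the CATEGORY class attribute in its Python source, so the rename suggested above is a one-string change. The class below is an illustrative skeleton, not the actual 'O' code:

```python
# Illustrative skeleton of a ComfyUI custom node; the real classes in the
# 'O' suite differ, but CATEGORY works the same way in any node.
class ChatGptSimpleExample:
    CATEGORY = "OpenAI"  # was "O" -- this string names the right-click menu group
    RETURN_TYPES = ("STRING",)
    FUNCTION = "run"

    def run(self, prompt: str):
        return (prompt,)

print(ChatGptSimpleExample.CATEGORY)
```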
When I load in the Concat Text _O it has 13 text prompt areas and then the separator. Is there a way to make it just two paragraphs like what you have in this video? I realize it's been almost a year, so things probably have changed, but I need some help.
Prompt outputs failed validation
ChatGPT Simple _O:
- Value not in list: model: 'gpt-3.5-turbo' not in []
I have no idea what to do next. Please help me fix this problem. Thanks.
Try the newer custom node 'ComfyUI-Chat-GPT-Integration', which is a branch of 'O'. The API key/prompt node works. Use the rest of the nodes from 'O' to form the flow.
I desperately want this to work, but I can't really afford the $30 AUD a month for the GPT subscription.
Thank you for these videos Scott. You have got me past the "it's too overwhelming" roadblock for comfy.
I understand. I am sure someone will make one that works locally, and it might already be out there.
Is there a way to have the prompt that ChatGPT generates visible before it's fed into the encoder? I'd like to use those prompts again, or at least see what is being created. TY, I joined your channel as you've provided the best tutorials I've found on YT.
Agreed. Someone should make a node that shows and saves ChatGPT's prompts. It's frustrating when you get an almost-perfect image but cannot fine-tune the prompt later if you close ComfyUI.
I found a node suite called "tiny" something and it has a debug node that is updated on the graph, so it saves with the image.
Tiny Terra Nodes has a textDebug node that shows the output in the workflow as well as (optionally) the console
Also, I have used a shedload of WAS text nodes so that all my prompt details get written to a .txt file. Handy for batching random prompt generation.
Oh, thank you! I will have to check those out!
There is also a Preview Text node, I think it is from the creator that has a snake icon on all his nodes
@@lukeovermind I looked up what you were referring to and found it. The repo is pythongosssss / ComfyUI-Custom-Scripts and it is amazing! It's not only a collection of helpful nodes like the ShowText node you were talking about, but so many other helpful ComfyUI quality of life features and settings like simply being able to see your generated image under the UI menu, so no matter where you are in the graph you can see if your image is done. Highly recommended! Thanks for bringing it up.
Sounds like he is selling something, doesn't it? :-)
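Until a dedicated node does it, the logging idea from this thread is only a few lines of Python: append each generated prompt and its seed to a text file so they survive closing ComfyUI. The file name and line format here are arbitrary choices.

```python
from datetime import datetime
from pathlib import Path

def log_prompt(prompt: str, seed: int, path: str = "prompt_log.txt") -> None:
    # One tab-separated line per generation, so a good image can be
    # reproduced later from its seed and exact prompt text.
    line = f"{datetime.now().isoformat()}\tseed={seed}\t{prompt}\n"
    with Path(path).open("a", encoding="utf-8") as f:
        f.write(line)

log_prompt("a surreal crimson sunset, eerie, intense colors", seed=123456)
```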
Is there a way to do this with a local LLM? Text-generation-webui and, I think, KoboldAI both have APIs. I'm guessing calling and using the API would be the same as ChatGPT?
Side note: REALLY interesting videos)
Not yet, but I can't wait until there is!
@@sedetweiler Hopefully soon, I like to keep things all local if i can.
Wow, I'm totally addicted to this Stable Diffusion node workflow, and your channel. Love it! I have not used Auto1111 that much, because I feel nodes are more powerful even though they can be slower to set up. I feel ComfyUI is Auto1111 on steroids, but it requires more knowledge to operate. Am I right, or should I invest time into Auto1111 as well? Keep up the great work, Scott.
AUTO1111 is a great tool, and at the end of the day they both create great images. However, you are really working with that workflow and whatever they decide to bake into the product. Comfy is for those that want to perhaps do something a bit more specific as well as learn how all of this works. Comfy's goal was to learn how SD works, and for a lot of people they just want to make pictures and not really learn it, which is fine. I drive a car and don't need to get under the hood.
Automatic1111 at the office (just get stuff done, accept the limitations.. also easier for coworkers), ComfyUI at home (tinkering) :)
Hi, did you try installing Roop on ComfyUI? I can't find any tutorial about it.
Thanks!! This was a killer idea and a killer video! I'm having fun with this. It's a great source of inspiration. Now I'm trying to figure out how I can capture the random seed along with the ChatGPT prompt it generates, so I can rework the scene later.
There is another node group that has a text field that can be populated, but I didn't know about it at the time. "tiny" something, and it worked okay.
This is too much fun!
I am still playing with it! :-)
Do you know if comfyui has any parallel nodes or are they all sequential?
Sequential. However, if you are using the same prompt and all that, most of the start gets cached so running other forks on the same graph is pretty fast.
@@sedetweiler That is really too bad, as a few of us had an idea for a mixer, but for a real mixer we need parallelism.
Can I create custom nodes with my own code? I actually want to add SAM to my pipeline.
Is the ChatGPT node already there when you install Comfy, or did you forget to tell us how to download it? Or do you have a separate video on how to set up ChatGPT in Comfy?
It is in that node suite from "Omar" that I show; adding that will enable it.
@@sedetweiler I apologize, I misunderstood. So the model is also downloaded along with the GPT nodes? Or does it work over the internet? I don't want to install it yet, because I don't understand what kind of model it is and what folder it is placed in during setup.
Thanks a lot for your tutorial ! I have this issue in Comfy, any idea to solve it please ? "Error occurred when executing ChatGPT Simple _O: You exceeded your current quota, please check your plan and billing details."
Sounds like the error is telling you what you need to know. You can't do this on the free version for long.
Anyone know how I go about getting the gpt-3.5-turbo model?
I just got a response from ChatGPT with "(Gelbooru tags: sunset, surreal, crimson clouds, eerie, intense colors)" so it must have some idea what it's being asked.
Oh, it does that if you have the OpenAI mode set to "tags" versus "description"
I was looking at the code for the node, and there are initialization messages for both modes. Both mention Gelbooru tags, but when you use "tags", ChatGPT is told to use as many tags as possible.
Hi, I have a paid account with OpenAI, and the program keeps telling me that my API key is invalid. How do I fix this?
There is a file in the custom node you need to edit. You can check the git repository if my video didn't help you find it.
Unfortunately, the ChatGPT Simple _O node doesn't work anymore, as it no longer authenticates. The reason for the "- Value not in list: model: 'gpt-3.5-turbo' not in []" error is that the node can no longer query OpenAI for a list of models; 'gpt-3.5-turbo' is just the default text shown in that field, hence the "undefined" when you try to click it. The newer custom node 'ComfyUI-Chat-GPT-Integration', which is a branch of 'O', works, so you can use it in conjunction with the other helpful nodes in 'O'.
No luck here either
I tried, but it didn't work.
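To see what the node itself is (failing to) do: the model dropdown is filled from OpenAI's model-list endpoint, and when that call fails (bad key, or no billing set up) the list comes back empty, producing the "not in []" validation error. A minimal sketch of the same call, assuming a standard API key:

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/models"

def auth_headers(api_key: str) -> dict:
    # OpenAI authenticates with a Bearer token in the Authorization header.
    return {"Authorization": f"Bearer {api_key}"}

def list_models(api_key: str) -> list:
    # Same kind of call the node makes to populate its model dropdown; if it
    # raises (e.g. HTTP 401/429), the dropdown stays empty: [].
    req = urllib.request.Request(API_URL, headers=auth_headers(api_key))
    with urllib.request.urlopen(req) as resp:
        return sorted(m["id"] for m in json.load(resp)["data"])

print(auth_headers("sk-...")["Authorization"])
```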
Prompt outputs failed validation
ChatGPT Simple _O:
- Value not in list: model: 'gpt-3.5-turbo' not in []
I receive this error. I opened the JSON file with an editor, put in the key, and saved. I ran the workflow but still receive this error. Any ideas?
Are you using the free version, by chance?
@@sedetweiler Hello Mr. Detweiler, thank you for taking the time for my question. In OpenAI, I have deposited 15 USD for this purpose. For ChatGPT, I use the normal version, but I think it supports 3.5. I heard from another person that he has the same problem and thinks there have been some changes on OpenAI's side.
Is it possible to do this with a locally downloaded LLM, not an API?
I have not looked, but I am betting someone has or soon will make one that works locally.
It was an enjoyable tutorial
Thanks!
Hey, where did you download the Save (API Format) node from?
that is coming soon. :-)
@@sedetweiler I'm interested!
@@sedetweiler Looking forward to it!
My GPT models don't show?
Do you have this problem? Failed to auto update `Quality of Life Suit`
Yeah, I uninstalled and re-installed it and it was fine again.
@@sedetweiler Thank you, wonderful tutorials. Easy to understand.
Prompt outputs failed validation
ChatGptPrompt:
- Value not in list: model: 'gpt-3.5-turbo-1106' not in []
The developer has a bug and it isn't yet handled. Nothing we can do about it until he fixes it. I did make a comment on his git several months ago.
@@sedetweiler OK! Thanks for the reply and for the tutorial. I will continue to follow in your footsteps learning ComfyUI, and I look forward to your new tutorials. Thank you!
Is it possible to use Claude 2 instead of ChatGPT?
I would love that as well, but I don't know that anyone has made a node for that yet.
That would be cool. I actually started a project before my ADHD kicked in where I use Claude as a muse: it can generate prompts, but it also acts as a creative collaborator. I even got it to explain why it uses certain keywords.
oh nice! let me know when you get it done!
required input is missing: model
Then add the link to the model :-)
@@sedetweiler The problem was related to GPT payment; I have solved it, thank you. I am now looking for a T2I-Adapter style model for SDXL. Do you have any information on whether anyone has released a T2I style model for SDXL?
@@sedetweiler ChatGPT models are hard to locate! They are not in the models folder, so where are they?
👋
:-)
Ok, this is great. But now is there a way to do it for free?
Not yet, but I spent a total of $0.06 yesterday, so even the time you spend trying to find a free option will literally cost you more than that.
You could find an open-source LLM, host it, then either edit the "simple openai" code to prompt that instead, or write your own Python to prompt your locally running LLM and your locally running ComfyUI.
But it might just be worth the pennies it costs to use the OpenAI API endpoints; they are super cheap.
@@sedetweiler got it. Thanks.
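For anyone wanting the free/local route discussed above: local servers such as Ollama and text-generation-webui expose an OpenAI-compatible chat endpoint, so the same request shape works against a local model. A sketch — the base URL is Ollama's default port, and the model name is a placeholder for whatever you have pulled locally:

```python
import json
import urllib.request

def build_payload(prompt: str, model: str) -> dict:
    # OpenAI-style chat body, also understood by local OpenAI-compatible servers.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def local_chat(prompt: str, model: str = "llama3",
               base_url: str = "http://localhost:11434/v1") -> str:
    # POST to the local server's /chat/completions and return the reply text.
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/chat/completions", data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

print(build_payload("describe the best photograph ever made", "llama3")["model"])
```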