Support the channel: paypal.me/ArchViz007/50 | buymeacoffee.com/ArchViz007 - Thanks!
I wonder more about parts like:
If I already have rendered out some 360 image sequences, for example from Unreal Engine,
and I want to enhance those images all with the SAME style, so AI enhances all rendered images and MAINTAINS object locations.
Only use AI for the fancy concept art, but in the end try to get the same results later in the program (in rendering stages like textures, shaders etc.).
Good point! You're absolutely right, AI tools can't yet maintain consistent object locations or apply the exact same style across a 360 image sequence. Tools like DaVinci Resolve, Adobe Premiere, or After Effects are still essential for ensuring consistency in sequences like this. I'm guessing you're already familiar with these.
Thanks for the video, looking forward to more
You're welcome @siddiaz1623!
Hi, the link to Hugging Face where you downloaded the model/checkpoint doesn't work. Do you know why? And thanks for this video!
Hi @DominikaKalinowska. Yes, you can read about it here: www.reddit.com/r/StableDiffusion/comments/1f5mvsg/stable_diffusion_15_model_disappeared_from/?rdt=62580
I have made an alternative download link here: www.cadman.dk/upload/v1-5-pruned-emaonly.rar
Just follow the instructions from the video at 07:06 (Install model/checkpoint). Hope it helps!
A very carefully prepared and informative video. Thank you for your efforts. Will you make a video about ControlNet?
Great, @hasanogut1848. Yes, at some point. Very busy at the moment though! Take care
thank you for a great video
Welcome @pouyarezaei4679, take care!
thank you for a great video, keep up the good work
Thx @Adnan!
Exactly, "latent nothing" is a form of eraser. With a higher blur value you can work the skies, as you said!
Hi, from the "Download model / checkpoint" link the page cannot be found on Hugging Face, it gives Error 404. Is there an alternative link? Thank you
Hi. Yes, you can read about it here: www.reddit.com/r/StableDiffusion/comments/1f5mvsg/stable_diffusion_15_model_disappeared_from/?rdt=62580
I have made an alternative download link here: www.cadman.dk/upload/v1-5-pruned-emaonly.rar
Just follow the instructions from the video at 07:06 (Install model/checkpoint).
Which is better, SD or FLUX?
I’m not sure about its use in architecture yet, but FLUX looks very impressive in many ways. It could definitely be a topic for a future video once I’ve studied it more. Take care
@@ArchViz007 Good idea, we will wait)
I get "Error code 1" when I run webui-user.bat.
Any solutions?
"Error code 1" usually indicates a setup issue. Here are some possible fixes:
First, check your Python version. Stable Diffusion WebUI typically requires Python 3.10.x. Download and install it from the official Python site (www.python.org/downloads/release/python-310x/), and make sure to add Python to your system PATH during installation.
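For example, you can check which version is currently on your PATH by opening a command prompt and running:
python --version
If it reports something other than 3.10.x, you can point the WebUI at a specific install by editing the PYTHON line in webui-user.bat, e.g. set PYTHON=C:\Python310\python.exe (that path is just an example, use wherever you installed 3.10).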
Next, ensure you have all required dependencies. Open a command prompt in your WebUI folder and run pip install -r requirements.txt to install or update the necessary libraries.
If you're using a GPU, ensure your NVIDIA drivers are up to date and compatible with the CUDA version required by PyTorch. You can update your drivers from NVIDIA's site (www.nvidia.com/Download/index.aspx) and install PyTorch compatible with your setup from pytorch.org/get-started/locally/.
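As a quick sanity check (assuming the WebUI has already created its venv folder on a previous run), you can activate that environment and ask PyTorch whether it sees your GPU:
venv\Scripts\activate
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
If it prints False, the installed PyTorch build doesn't match your driver/CUDA setup.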
Another possibility is a corrupted installation. If you suspect this, re-clone the repository by running git clone github.com/AUTOMATIC1111/stable-diffusion-webui.git and then try running webui-user.bat again.
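A typical clean re-install from a command prompt looks roughly like this (assuming git is installed; the folder name is just the default clone location):
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
webui-user.bat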
Also, check permissions. Make sure to run webui-user.bat as an administrator by right-clicking on it and selecting "Run as Administrator."
Finally, ensure you’ve placed the correct model file (like .ckpt or .safetensors) in the models/Stable-diffusion folder. Without a proper model file, the WebUI won't run.
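For reference, the expected layout is roughly this (the file name is just an example, e.g. the v1.5 checkpoint from the download links above):
stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors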
If none of these solutions work, check the error logs in the command prompt for more details or add --debug to the webui-user.bat launch arguments for more troubleshooting information.
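For reference, the stock webui-user.bat only sets a few variables, so adding --debug (or other flags) is a one-line change, roughly:
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
rem extra launch arguments go on the line below
set COMMANDLINE_ARGS=--debug
call webui.bat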
Hope it helps!
@ArchViz007 Thank you very much for your response.
I did as you advised, and Error 1 disappeared, but a new issue has appeared.
I opened the program interface in the browser, but it doesn’t generate any images.
I type the prompt, press the generate button, and nothing happens.
I’m sure the model is fine.
In cmd Panel there's this warning : warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
I don't want to bother you. If you're busy, feel free to ignore my reply.
Thank you very much.
@@tahershowkal5482 You're welcome!
The timm.layers warning is harmless and unrelated. If images aren’t generating, it could be due to VRAM (GPU memory) limits or a missing dependency.
In webui-user.bat, add --medvram (or --lowvram) to the COMMANDLINE_ARGS line to reduce VRAM usage (see the example arguments line below) and run it again.
Ensure your model file is in models/Stable-diffusion and has a supported extension (.ckpt or .safetensors).
Try generating smaller image resolutions (e.g., 512x512) as a test.
Check the command prompt for specific errors when you press "Generate."
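For example, a low-VRAM arguments line in webui-user.bat could look roughly like this (pick whichever flag matches your card; these are standard WebUI launch options, not something specific to your setup):
set COMMANDLINE_ARGS=--medvram
or, for cards with very little VRAM:
set COMMANDLINE_ARGS=--lowvram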
Hope it works now 😊
@ArchViz007 thank you very much, I'll try it tomorrow.
@ArchViz007 Thank you very much, I added this line: set COMMANDLINE_ARGS=--no-gradio-queue
and it worked 👍
Just name the things in the negative prompt, you don't have to write "no human"; that actually means it has to have a human. Also try using a Blender model for shape and vertex colors for masks... you can keep the shape that way and just change the texture. I'm rambling, but you can do a lot with 3D and ControlNets etc...
:) yeah I noticed that also after upload! lol. I know you can do a lot with a 3d model also. Thx @KILABANANA