Im a former data scientist (doing the whole medical school thing now), and this channel gives me the Data Science Developer Fix I need sometimes. Thank you for your content. Us Tech nerds love you more than we can comment. now back to studying lol
The pace of this video is brilliant. Quick but with all the relevant information.
I tried
Awesome guide, thanks a lot Alex! Tried A1111 a month or so ago, but today learned ComfyUI also supports Apple Silicon. Turns out it's more optimised and much faster! It doesn't use as much RAM as A1111, and ComfyUI can even run models that were crashing on A1111 (on my GPU-poor 8GB base Mac). The setup and usage is slightly more advanced, but not by much, and a guide from you would be appreciated!
I like that Alex reacts to comments quite fast. Thanks, this video is useful.
You're welcome!
Once again, you are taking the perfect approach to achieving success! Thank you.
Insane how underrated your channel is. Kinda like it that way tbh... But seriously, I love you Alex.
I really enjoy watching your video. It is informative in a vibe of fun. Thx for your effort!
Glad you enjoyed it!
I love to watch your videos! It's so fun and at the same time informative as well.
Oh thank you!
My M1 MBA 8/256 will definitely die trying to generate any of these (
Probably need at least 16gb RAM machine
RIP 8gb
idk. try and let us know.
Any luck? Let us know the result
M1 only works great with the phi3 model for text generation. You need 16GB of RAM for image generation and bigger models.
Thanks Alex.
What are your thoughts on M4 and how it will speed up inference when released to Macs?
May 2024 has been a watershed month. For the first time in my life, I really felt I've fallen behind. M4, Gemini 1.5, ChatGPT4o, Copilot+PC with Snapdragon X Elite. So much tech that my M1 MBA 8GB RAM will fail to take advantage of and fail to compete with. All this including local LLMs that are too resource intensive to even try out. My next laptop could well be a Windows PC if Apple doesn't address the RAM situation in their base models.
This is awesome!! Thank you for sharing this 🤯
Glad you enjoyed it!
Thank you so much for these videos. They’re perfect for me as a new Mac user with little knowledge of the terminal commands.
can we get a follow up on how to train our own models?
can’t wait for custom MLX models to show up 😉
Do a video on TTS please
After copying the downloaded models to the Stable-diffusion folder and restarting, the new models are reflected in the Stable Diffusion URL, but they're not showing in the default image models in Open WebUI.
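For anyone hitting the same thing: A1111 only scans its own checkpoints folder, and Open WebUI fetches the model list from the A1111 API. A minimal sketch of the usual flow (paths assume the default `stable-diffusion-webui` clone location, and `my-model.safetensors` is a placeholder name — adjust both to your setup):

```shell
# A1111 looks for checkpoints in models/Stable-diffusion inside its own folder.
# (Path assumes the default clone location; adjust to where you installed it.)
mv ~/Downloads/my-model.safetensors \
   ~/stable-diffusion-webui/models/Stable-diffusion/

# Restart the web UI so it rescans the folder. The --api flag is what
# lets Open WebUI query it for available models.
cd ~/stable-diffusion-webui && ./webui.sh --api
```

After that, reopening the image settings in Open WebUI and refreshing the model list should pick up the new checkpoints, since the list comes from A1111 rather than from Open WebUI itself.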
thanks for videos! can you do one about the music/samples generation?
Amazing content. Loving these tutorials!
This is really useful. Thanks a bunch for the gotchas and tutorial
Hi Alex, can you benchmark the new M4 macbooks pro against the similar AI use cases? Thanks
Nice one. Is it possible to use Ollama directly to generate images?
Thank you!!! The free models look like Dalle from last year, so maybe next year the free models will look like Dalle this year. I've spent way too much free time messing with ollama and open web ui because of your last video. Could you look into the RAG and the web search features? I've never gotten web stuff to work, but I feel like RAG documents do work, but I haven't been too successful messing with it. There's not a lot of content on it, but it seems like a perfect way to put my existing repo or repos so that the model can pick up on conventions and context.
There are way better models available. Just that he's not using the right ones. He's using random models from SD1.5 which was forever ago.
I'd recommend something like RealVisXL based on SDXL and there are super fast lightning models too.
BTW A1111 already creates its own Python virtual environment when running anyway
Hi Alex. What do you think is good for an IT newb: MacBook Air M3 24GB 1TB (15") or MacBook Pro M3 18GB 1TB (14")? (coding, Parallels, all the Adobe programs, maybe machine learning) Thanks, spasibo.
I'm starting from zero, with no experience coding or using the terminal. Can you give someone like me step-by-step directions to get this working?
Great video!! How about local text-to-speech for the local WebUI too? Combined with image recognition, we'd have a local ChatGPT-4o :) Thanks!
I don't see images under settings in openwebui - was it disabled?
"You shouldn't be doing this at work anyway!"
LOL.
This IS my work.
🙂
Thank you for the videos. Would this work on an Intel-based Mac?
Is there a way to use the image AIs without all the frontend overhead? :)
I have the source code of a product (bash scripts, Python, C++). Could you use Llama to read the source code and help troubleshoot problems in the log files?
This is some cool stuff. Keep it coming!
An overview of Pinokio would make for a good video
I'm running Open WebUI in a Docker container and it can't access localhost. How can I work around this?
Same here :(
I used the “host.docker.internal” for connection.
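That's the usual fix: inside a container, `localhost` refers to the container itself, while Docker Desktop maps `host.docker.internal` to the Mac host. A sketch of the run command, assuming Ollama is on its default port 11434 and using Open WebUI's documented defaults (adjust ports and volume name to your setup):

```shell
# Run Open WebUI in Docker and point it at Ollama running on the Mac host.
# "localhost" inside the container is the container itself, so use
# host.docker.internal, which Docker Desktop resolves to the host machine.
docker run -d \
  -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

The same hostname works in the Open WebUI settings page if you'd rather change the Ollama URL there instead of via the environment variable.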
Or you could just install something like Mochi Diffusion or Guernika?
Just use diffusionbee
Brilliant, Can you do a windows one?
Really nice
Can I run this from external storage?
Automatic1111 is pretty bad on my 16" MacBook Pro; it performs better on Nvidia. I think the code isn't optimized for Apple Silicon. DiffusionBee is a lot faster, but most models are pretty bad and some models from Civitai don't work. Is there any way to optimize these and make them faster on Apple Silicon?
Hi Alex, thank you, you teach us in such a fun way. I like it. But it would be better if you made it a Docker image, or taught us how to make one, because I don't want to create chaos with different Python environments.
great tutorial,thanks
Can we run this model on the M3 Pro base variant, Alex?
What about the Fooocus project?
what about Lora?
amazing stuff
Why not NPU ?
I don't have web ui db
Now I'm curious. I'll try it on my M1 Air; hopefully it won't toast my machine 😂
Are you from NYC?
nope. although i grew up in buffalo
The last boom! was not enough
Use Forge instead, it's much faster than A1111
Forge is not updating
It can also run on Linux :) and Windows 10/11. Let's see how it runs on Win11 ARM :)
I never told you, but I once met an Austrian cousin of Mr. Schwarzenegger. A really skinny, short guy. I guess Mr. Schwarzenegger had good bones.
🤣 did he look like the stable diffused guy?
@@AZisk It was Biofach 2004, and we had a booth at the fair. There were organic foods, cosmetics, and various products from all over Europe. One of the companies there manufactured mattresses made from organic seeds. A salesman from that company, who was a really nice person, stood out. However, his Austrian coworkers kept making fun of him because he was a cousin of the famous Arnold Schwarzenegger. Although he was short and skinny, he shared many facial features with Mr. Schwarzenegger, but in a more rugged way. I'm sorry if my English doesn't allow me to describe it better.
boom!
💥
Thanks Microsoft😊
You meant a photo of a "Lama" not a "llama". 😂
I don‘t think Arnold Schwarzenegger is an animal.😅😅😅
Your placards are all scratched up. You should make new ones
Bro shows us some 1.5 models like its 2022 💀
Bro didn't have patience to watch the video. Tik tok gen.
Lolz @ FAST :P