Excellent video. Short and sweet and gives practical advice on each approach.
Curious why you did not talk about Hugging Face, where you can also run models completely locally after saving them.
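For context, a minimal sketch of what "completely locally" can look like with the Hugging Face transformers library; the model name and save directory are just illustrative choices, not anything from the video:

```python
# Sketch: download a Hugging Face model once, save it, then run it offline.
# "distilgpt2" and "./local-distilgpt2" are example placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "distilgpt2"
save_dir = "./local-distilgpt2"

# First run (online): download and save the tokenizer and weights to disk.
AutoTokenizer.from_pretrained(model_id).save_pretrained(save_dir)
AutoModelForCausalLM.from_pretrained(model_id).save_pretrained(save_dir)

# Later runs (offline): load strictly from the local directory.
tokenizer = AutoTokenizer.from_pretrained(save_dir, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(save_dir, local_files_only=True)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Running models locally is", max_new_tokens=30)[0]["generated_text"])
```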
Thanks for helping to make us aware of options that are out there.
Ollama is the best one :)
Hi! Thank you for this video. I will need to try one or more of these out 😊
Great list! I have a quick question... I want to run any of these on a server, create a no-code app using APIs, and have users access that application, kind of creating it for a local group of users. How do you think this will work in terms of machine requirements? Please guide me on whether this is a good approach! 🙏🏻
Ollama is runnable on Windows by using WSL2.
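Assuming Ollama is installed and serving inside WSL2 with a model already pulled, WSL2's localhost forwarding usually makes its REST API reachable from Windows on port 11434. A minimal sketch (the model name "llama2" is just an example of one you have pulled):

```python
# Sketch: call the Ollama REST API running inside WSL2 from a Python script.
import json
import urllib.request

payload = {
    "model": "llama2",              # example model; use one you have pulled
    "prompt": "Why is the sky blue?",
    "stream": False,                # return one JSON object instead of a stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```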
Nice video. LM Studio should be the sixth one: a good Windows version with a REST API.
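A minimal sketch of using that REST API, assuming LM Studio's local server is started in the app (it exposes an OpenAI-compatible endpoint, by default on http://localhost:1234/v1); "local-model" is a placeholder for whatever model you have loaded:

```python
# Sketch: query LM Studio's local OpenAI-compatible server with the openai client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio uses the currently loaded model
    messages=[{"role": "user", "content": "Give me one tip for running LLMs locally."}],
)
print(completion.choices[0].message.content)
```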
Orca 2, developed by Microsoft, is also a good LLM :)
Do any of these run strictly in the browser client with no server, e.g. using WebGPU?
Which method is the best?
Services are expensive? When I was going to use your API, I built my own audio-to-text generator using Amazon S3 instead, for a fraction of the price.
How do you create a personal chatbot that can answer questions based on a text document fed to it previously? Could someone kindly share some ideas?
LM Studio
Did you play Hitler in Kung Fury??? Can you really shoot people through a landline? I love that movie.