Matt, this video was incredibly helpful! I was one of those poor souls lost in the Discord trying to figure out updates. You just saved me (and probably a lot of others) a ton of time and frustration. The step-by-step for each platform was clear and concise - even *I* could understand it! Keep up the amazing work, and looking forward to more Ollama wisdom in your course. Subscribed! 😄 👍
hi Matt! Some time ago I asked you about how to integrate LLMs directly in an app. I mean, for example, for an Electron (desktop) app, how to include an LLM inside it. You helped me a lot by pointing me to a post (on Discord) about how to use the binaries, the way llama.cpp does. However, I haven't achieved the performance that Ollama has. Do you think it's possible to use Ollama as something like a "package/container" and include it directly in an Electron app? So instead of recreating Ollama, just use it directly. Thanks for all the content you post on your channel, it's really helpful!
thanks Matt, is there a way to do the update without losing the ollama.service configuration each time, e.g. on Ubuntu? (of course I have a backup and I paste my old config back in, but I'd really like to keep what's inside ollama.service 🙂)
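For anyone else hitting this: on systemd distros you can keep custom settings in a drop-in file instead of editing ollama.service directly, so upgrades that replace the unit file don't wipe them. A minimal sketch (the `OLLAMA_HOST` value is just an example setting, not something from the video):

```shell
# Open an editor for a drop-in override; systemd saves it to
# /etc/systemd/system/ollama.service.d/override.conf
sudo systemctl edit ollama

# In the editor, add your custom settings, e.g.:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"

# Apply the change
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

Since the drop-in lives in a separate directory, reinstalling or updating Ollama can overwrite the main unit file without touching your overrides.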
epic ending, Matt. Thanks, amazing job you're doing over here/there.
First introduction: presentation quality is like Tim Cook's, and the video content is even better...
Idk why I didn’t think about just using docker 😂 thank you 🙏
using ollama in docker makes it slowww
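For the folks updating via Docker: there's no in-container updater, so updating means pulling the new image and recreating the container. A sketch based on the commands from Ollama's Docker docs (the container name `ollama` and volume are just the conventional choices):

```shell
# Pull the latest image
docker pull ollama/ollama

# Replace the running container; the named volume keeps your models
docker stop ollama && docker rm ollama
docker run -d --name ollama \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama
```

Because the models live on the `ollama` volume rather than in the container, recreating the container doesn't re-download anything.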
Very clear and helpful content. Thank you.
I have it installed on Win 11, and I really like how simple the app is. I've had some setbacks with the automatic updates, but so far they've always been fixable.
Hi, I am running Ollama on Android and it seems to run fine for models of 8B and smaller.
@technovangelist why not use WSL? You didn't give a reason why native is faster.
Using WSL means you are running Ollama in an Ubuntu container inside the WSL VM on top of Windows. Native is 20-30% faster for most folks.
Thanks Matt ❤
Do you happen to know how to retrieve or change your WebUI password on Linux? I've been trying and asked on Discord, but to no avail.
How do you configure Ollama for Windows to use a proxy? My PC is behind a corporate proxy, but Ollama for Windows isn't using it.
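Not Matt, but Ollama's FAQ says it respects the `HTTPS_PROXY` environment variable for downloads. A sketch for Windows (the proxy address is a placeholder you'd swap for your corporate proxy, and Ollama has to be restarted from the tray to pick it up):

```shell
# PowerShell: persist HTTPS_PROXY for your user account
setx HTTPS_PROXY "http://proxy.example.com:8080"

# Then quit Ollama from the system tray and start it again
# so the new environment variable is picked up
```

If your proxy uses a self-signed certificate, the proxy's CA cert also needs to be trusted by the system, or downloads will still fail.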
Funny this pops up as I'm trying to figure out how to build the newest version on NixOS 😂😝
How do you make Ollama auto download and update?
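On macOS and Windows the app checks for updates itself; on Linux there's no built-in auto-update, but rerunning the official install script upgrades in place, so one sketch is to schedule it (the weekly timing here is just an example):

```shell
# Linux: rerunning the install script upgrades an existing install.
# To automate it, open your crontab:
crontab -e

# and add a line like this (runs Sundays at 03:00):
0 3 * * 0 curl -fsSL https://ollama.com/install.sh | sh
```

Whether unattended upgrades are a good idea depends on your setup; some people prefer to update manually so a new version doesn't land mid-task.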
I do wish the ollama team would build and distribute via the standard package managers. That said I understand it’s not the core work.
❤❤❤
First!