Ollama UI Tutorial - Incredible Local LLM UI With EVERY Feature

  • Published Jun 4, 2024
  • Open WebUI is a fantastic front end for any LLM inference engine you want to run.
    Sign up for a free 14-day trial at aura.com/matthewberman
    Aura is sponsoring this video
    Join My Newsletter for Regular AI Updates 👇🏼
    www.matthewberman.com
    Need AI Consulting? 📈
    forwardfuture.ai/
    My Links 🔗
    👉🏻 Subscribe: / @matthew_berman
    👉🏻 Twitter: / matthewberman
    👉🏻 Discord: / discord
    👉🏻 Patreon: / matthewberman
    👉🏻 Instagram: / matthewberman_ai
    👉🏻 Threads: www.threads.net/@matthewberma...
    Media/Sponsorship Inquiries ✅
    bit.ly/44TC45V
    Links:
    github.com/open-webui/open-webui
  • Science & Technology

Comments • 210

  • @guinea_horn 24 days ago +64

    As a contributor (merged one single PR 😊) but mostly a very early adopter of this project, I'm always stoked to see people talking about open-webui

    • @AZisk 24 days ago +2

      great project! I just covered it on my channel too. congrats!

    • @aladinmovies 24 days ago +1

      Nice

    • @fire17102 23 days ago

      What was your PR about? ❤
      Watching this video makes me wanna go add some stuff too,
      Things I caught from Matthew: RAG loading indicator until files are avail. Get new models in UI rather than term.
      Things I wish to add: litellm, groq adding pipelines and flows not just prompts, adding #folder to context, easier installer that checks docker etc with Houston assistant. OpenInterpreter integration. OpenUI integration + Developer Mode, Tester Mode, Rapid Feedback, Obvsf API Pools, Import ChatGPT & Claude History, Stars, Topics (connected threads), Timelines, Search Hist, Actions, Prompt Refinements, Self Improvement, Skill Library, Image Gen & Editing. Song Gen, Incognito thread, global/group chats. Sign in from phone/multi device. Agents and Clients Dashs.

    • @snuwan 23 days ago

      I have been using it for a while. A great UI

    • @jarad4621 23 days ago

@fire17102 awesome, yeah, please add all that stuff

  • @AZisk 24 days ago +48

    This is definitely my local go-to now. What an amazing project.

    • @nickiascerinschi206 24 days ago

Can I use llmstudio there?

    • @stickmanland 23 days ago

      Wow, you too?

  • @user-bp6dx1mm2q 24 days ago +28

    I've been with you for over a year and it's been amazing watching you dominate this LLM news space. For example your snake game has become a standard in the industry now!!! Like you, I'm constantly in the LLM lab and I'm constantly coming across your name with a quote regarding a large language model. Awesome job carving out a niche - MUCH LOVE FROM NEW ORLEANS 🔥💪

  • @jimigoodmojo 24 days ago +14

    Few more cool features
    1. Image generation. I hooked this up to both local automatic1111 and DallE3 with api key. It's a bit of an odd workflow. You prompt it and the response will have a little pic button under it. I loaded a model finetuned for image prompts so response is cool.
    2. Hook up openai models for chat choices with your api key.
    3. Pull any llava model and you can hit the plus button to load a picture and ask questions about it.

  • @retromancer8262 24 days ago +17

You can actually skip the git clone step; everything is contained in the Docker image.
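For reference, this matches the quick-start in the Open WebUI README: the published container image is self-contained, so a single `docker run` with no repo checkout is enough. A sketch, assuming Ollama is already running on the host:

```shell
# Run Open WebUI straight from the published image; no git clone needed.
# --add-host lets the container reach an Ollama server on the host machine;
# the named volume keeps chats and settings across container restarts.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```

Then open http://localhost:3000 in a browser.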

  • @jlccVPServ 24 days ago +5

    I don’t usually post messages, but your video changed that. Very well done! I followed your steps, and within minutes, I had LLama3 running on Open-webUI with Docker on Ollama on a Windows computer. Thank you, Sir. Keep up the great work!

  • @kyrilgarcia 23 days ago

    just got this installed, thanks for the quick tutorial 🙂
    can't wait to explore all the features

  • @PeterSchuller-ng6nk 21 days ago

Thank you for your inspiration! That's awesome!

  • @PJ-hi1gz 6 days ago

    Thank you. Best tutorial on youtube. Very clear.

  • @garibacha 24 days ago +14

    I was fishing yesterday... unbelievably, thank you! Ps.: Can you make a video about WebUI + Open Interpreter + LLM Local or LM Studio? Thanks

  • @spencerezralow7136 24 days ago +5

    Really awesome. This is something I’ve been looking for for a long time. The one I built myself is terrible.

  • @SiliconSouthShow 24 days ago

I've been using this combo for a very long time; as well, I've edited my Docker container with OWU and customized the UI with my own titles, features, etc. Thx for the vid, ppl r going to love this.

    • @karanbagul2536 22 days ago

Can you tell me how you did it? It would be a great help. I want to customise the UI.

  • @metobabba 24 days ago +1

For those who use Raycast on Mac, there is an extension that does most of these features with a single shortcut. It's very cool. You can use custom modelfiles too.

  • @benben2846 23 days ago

Thanks, I managed to install it on Linux Mint a few days ago and it's really great for local use!

  • @DaveEtchells 24 days ago +1

    Another superb, insanely useful how-to vid!
A noob question: Setting this up with Docker vs Anaconda, the config with the Python version and all the supporting libraries and packages becomes the default for that user account on your computer, correct? (I'm on a Mac like you, and don't think this is an issue at all for me, just wanted to understand the setup better.)

  • @johnwalker348 14 days ago

    amazing tutorial, thanks!!
    how do you know the chunking methods for the embedding?

  • @takstravelvlog7357 23 days ago

    Thanks for creating this video with clear instructions!!
After running this UI with Llama 3, the responses I get are very slow, even to simple questions, e.g. "how are you?". Is having a GPU in my machine a must to use this at proper speed?

  • @zippytechnologies 24 days ago +1

    Yes I knew talking about anticipation would put you over the top to release😂

  • @user-bd8jb7ln5g 23 days ago

What we need is multi-prompt templates (a series of prompts, one at a time), including step repeat. This way we can have the LLM reflect on its previous answer before executing the next step in the series.
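The feature described above is easy to prototype outside the UI. A minimal sketch of such a prompt series, where `llm` is any text-in/text-out callable (for example a wrapper around Ollama's HTTP API; the names here are illustrative, not part of Open WebUI):

```python
def run_chain(llm, templates, question):
    """Run prompt templates in order; each step sees the previous output.

    llm: callable mapping a prompt string to a completion string.
    templates: prompt strings containing a {previous} placeholder.
    """
    output = question
    for template in templates:
        output = llm(template.format(previous=output))
    return output

# A two-step "answer, then reflect" series:
reflect_steps = [
    "Answer this question concisely: {previous}",
    "Reflect on the following answer and improve it: {previous}",
]
```

Repeating the second template in `templates` implements the "step repeat" idea.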

  • @VAS.T 22 days ago

What GPU server provider do you use for your language model deployment, or do you only need GPU power for training?
Can you create chat agents from those Ollama-deployed chat models?
I'm just getting into this stuff, and some of your videos are nice to follow.

  • @jackgaleras 11 days ago

Thanks a lot, well explained.

  • @xDARKMATT3Rx 22 days ago

    Another extremely informative video! I have an idea for a follow-up video: I would love to see a tutorial on how to secure the docker container with https / ssl. I am trying to figure this out through experimentation and have so far been unsuccessful. Thank you again for your great content!

  • @perschistence2651 24 days ago

    Amazing project, can I also connect it to external APIs? Like groq?

  • @lwjunior2 24 days ago

Can it handle the processing of documents with tables? Will an OCR like Tesseract be needed? Any suggestions?

  • @hicamajig 24 days ago +3

    Supposedly they are working on implementing a perplexity style search too! Pretty slick

    • @jarad4621 23 days ago

      Like web?

    • @stickmanland 23 days ago

      @@jarad4621 Yes web

    • @ihaveacutenose 15 days ago

      If they get something like that it's game over for the rest of the opensource interfaces!

    • @ihaveacutenose 15 days ago

      Can you use page-assist with openweb ui?

  • @nixdroid9895 24 days ago

Great work, Matt. Can we get it to run the Rabbit R1 locally?

  • @polishandrew3955 24 days ago

    Anyone know which terminal software Matthew uses for his? That auto complete is a nice feature and would love to add it to mine

  • @michai333 24 days ago

    How do you feel this compares to the newest LM Studio? It looks like it’s lacking on the advanced settings.

  • @trezero 24 days ago +5

It would be great to see a comparison between Ollama and LM Studio explaining the benefits and reasoning of when to use each. The one thing I haven't seen much of is how to leverage (if possible) other models from Hugging Face within Ollama. This is easy to do in LM Studio. For most other things I prefer Ollama, but I tend to use LM Studio to test new models that Ollama might not have readily available.

    • @xTheToolx 24 days ago

Same here. I prefer LM Studio and use it across networks, hosted from a server. Not quite the same with Ollama. Going to experiment with open-webui and see if it can connect to LM Studio with some code tuning.

    • @jarad4621 23 days ago

Ask Perplexity Pro's new model about it; it's really good now, based on Llama 70B.

  • @lenderzconstable 24 days ago

I don't know very much, but I have a passion for this. Now, to run this as you are doing, one must have a PC that packs quite a punch, is that right?

  • @mpvincent7 24 days ago

Will try it this coming week! Have you or anyone tried "Chat with RTX" by NVIDIA? I am trying to go through the setup, giving it directories to intake, BUT it failed on most, expecting zip files... I recall you had covered "Private GPT" before; would it be able to do the same thing intended for Chat with RTX? I bought a new system with a 4060 Ti 16 GB and 32 GB RAM, 1 TB SSD with 2 TB HDD, Windows 11... It should run well but is currently crawling while parsing nodes... any guidance would be greatly appreciated!

  • @mrthp 24 days ago

Man, thank you, this is the most comprehensive, easiest way to install this. I had Ollama before, but with Open WebUI it's a game changer. Do I need to run the Docker app each time, or will it auto-boot when the computer starts?

    • @darkskinnedpimp 24 days ago

      You can change the services that start with the computer by typing MSCONFIG after hitting Windows+R. If you see Docker in the list it will boot with computer

  • @hotlineoperator 24 days ago

Impressive, this has everything, except support for my local language (which ChatGPT does have).

  • @jarad4621 23 days ago

Please can somebody tell me the difference between the Llama 3 latest/normal one, for example, and the instruct version? They are separate on Ollama.

  • @quatre1559 23 days ago +1

Can you link the video where you install Docker? I can't seem to find it.

  • @AdamMoreyPlus 6 days ago

    Great video. Thanks! fyi - In the install section, it's not necessary to clone the repo before running docker.

  • @galdakaMusic 23 days ago

Is it possible to install this on an RPi 5 with Coral and use it from multiple sources, like Home Assistant or my custom app?

  • @verasalem5071 21 days ago

Hey Matt, can you tell us how you set up your terminal (which terminal software you use, addons, etc.)? I was also hoping you could create a video testing which LLMs do the best at text summarization for large documents/transcripts.

  • @doubs22 24 days ago +1

We have been using Chatbot-UI and enjoy the built-in tools and assistants libraries.
Keeping an eye on these other repos until a true leader emerges.

    • @jarad4621 23 days ago

      Does it have web?

    • @doubs22 18 days ago

      @jarad4621 web search? It needs to be added as a tool/skill.

  • @DarinLawsonHosking 24 days ago +1

This comes so close.
Quick question: I am looking for a single installation that runs LLMs locally, has a front end that includes text-to-speech and speech-to-text, and a backend that lets other machines on the local network attach to it for LLM services. The kicker: I would like all interactions, from both front end and back end services, added into a RAG to allow "learning".
The idea is to have one PC providing all my AI needs locally for home automation, assistance, and various library/research/content-generation services on my local network, continuing to learn about me and my needs as time passes (ultimately to be my interface with the internet in general). Any help/lead would be greatly appreciated.

  • @matikaevur6299 24 days ago +1

Thanks! Great video!
Can you do a slightly different scenario: the LLM engine (Ollama, llama.cpp, or something else) running on dedicated hardware with GPU acceleration, and the frontend on a local machine (VM/container/conda env)?
I've been running Ollama on hardware with GPUs and frontends in a VM or container since they started supporting the OpenAI API, but with very mixed results. The idea is that open-source LLM backends are quite stable now: few upgrades, new features, and bugfixes; just download a new model when something interesting appears.
But frontends and other implementations are still the Wild West and better kept in an easily recyclable format :)

  • @Phanboy 23 days ago +1

Does it have a memory feature? I wonder if it can be used as an ongoing "database" per se.

  • @claudio2081 24 days ago

    Can you do function calling with open webui and design ai agent workflows?

  • @annonymous3326 17 days ago +3

    Wow I just built a virtual AI girlfriend using ollama. I'm trying it on Llama3 model, and recently migrated to one of the uncensored models. Good-bye wife, hello AI - LOL.

  • @thekingofallblogs 23 days ago

In case this helps someone: to run Linux images on Windows, you need to switch to Linux containers, which will use WSL2. You do this by right-clicking on Docker running in the taskbar and selecting "Switch to Linux containers".

  • @aamir122a 24 days ago +1

I have tried the bulk load feature in the documents; it does not work. I cannot change the embedding model either; after a restart it keeps defaulting to the original. There is a ticket on GitHub for exactly this, but I have not heard back from the team.

  • @Devsterinator 4 days ago

    Does anyone know how to make the open-webui interface accessible from my phone on the same network? I'm used to being able to add --listen to a gradio webui and access it anywhere on the network, but I haven't seen anyone do that with this one.

  • @denijane89 23 days ago +1

I used yarn on the plane to discuss astrophysics with Phi-3. It was epic. At some point it becomes slow (Llama 3 started out slow), so I would read something else for a minute, then return to its response and continue the chat. It was great. I so hate the offline time on planes, but having a GPT (or 10) on your computer really helps. Also, it's very funny how my battery would drop in bursts every time I pressed enter, lol. It takes juice generating those responses. But again, it's pretty cool. I think at some point I wanted to install the Ollama UI, but the Docker part lost me.

  • @Gl0we22 22 days ago

Have you tried SillyTavern? It's more geared towards characters and roleplay, but it has some nice features and a fully customizable UI.

  • @Stealthy_Sloth 24 days ago

I tried Docker on Windows 11; within the instance it would work, but not across the machine, and I used the proper switch to run it.

  • @markg5891 24 days ago +1

Question about these embedding models. I _thought_ that adding your own document makes an "embedding" loaded into the LLM for you to search your document. But what are those embedding models that you can define in the GUI settings? I can't quite figure out why I'd want them or what they are used for.

    • @jarad4621 23 days ago

I just learned this myself. Apparently embedding is a separate thing (an app or code) that does that part, so you need the vector database and an embedding model only to be able to save stuff into it. You don't need the LLM yet; only once the data is saved by the embedder can you retrieve it from the DB with your LLM and RAG. The main LLM doesn't save the data, it only fetches it later.

    • @markg5891 21 days ago

@jarad4621 Thank you for your reply! I've read it about 10 times but still don't think I'm following you entirely. I get that you don't train your main LLM on your custom data; that's what the whole RAG thing is about. But with RAG you end up with a vector database that your main LLM can use to provide better results for specific subjects. Thus far I get it.
But what is that "embeddings LLM" like "nomic-embed-text"? I can't figure out how you use it, where you use it, or what to do with it. Just guessing here: is there some kind of model hierarchy? Does it follow a path like llama -> nomic-embed-text -> own vector DB? In which case "nomic-embed-text" would be the one interacting with my data, not llama like it would be in a classic RAG setup? 99% guesses here, I just don't know.
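To make the division of labor in this exchange concrete, here is a toy sketch (hypothetical names throughout; `fake_embed` is a hash-based stand-in for a real embedding model such as nomic-embed-text). The embedding model turns document chunks into vectors at indexing time, retrieval ranks chunks by cosine similarity, and only then are the winning chunks handed to the chat LLM as prompt context:

```python
import hashlib
import math

def fake_embed(text: str, dim: int = 16) -> list[float]:
    # Stand-in for a real embedding model: deterministic pseudo-vector
    # derived from a hash, normalized to unit length.
    digest = hashlib.sha256(text.encode()).digest()
    v = [digest[i % len(digest)] - 127.5 for i in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

# The "vector database": chunks stored with their embeddings at indexing time.
chunks = [
    "Ollama runs models locally.",
    "Open WebUI is a front end.",
    "RAG retrieves document chunks.",
]
index = [(c, fake_embed(c)) for c in chunks]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Cosine similarity; vectors are unit-length, so a dot product suffices.
    q = fake_embed(query)
    ranked = sorted(index, key=lambda item: -sum(a * b for a, b in zip(q, item[1])))
    return [c for c, _ in ranked[:k]]
```

The chat model only enters afterwards: the retrieved chunks are pasted into its prompt, which is why the embedding model, not the chat LLM, is what touches your documents.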

  • @user-tg6vq1kn6v 24 days ago +3

    My go to channel for learning how not to write the game snake in python

  • @ShadowDoggie 18 days ago

There is no voice calling feature in it like ChatGPT now has on the phone, right?

  • @abdullahazeem113 24 days ago +3

Does it support image processing, or is there any way in which you could add image processing capabilities?

    • @JoeBrigAI 24 days ago

Under settings there is a way to link this UI to an existing image generator like Automatic1111 or ComfyUI.

    • @jimigoodmojo 24 days ago

      Llava model allows you to ask questions about images. Hit + next to prompt field to upload pic. Then ask away. Wasn't great generating code from mock, but answered general questions ok.

  • @mendthedivide 24 days ago +2

It's on Pinokio as well.

  • @WhyAreYouFindingMe 24 days ago +2

    Hey! Love your videos! and yes, I'm your first commenter of this video

  • @drlordbasil 23 days ago

    I clicked hella fast because I thought I missed an update from ollama XD love openui though!

  • @brunodangelo1146 24 days ago +2

    What's the advantage of this over LMStudio?
    Not asking which one is better, but on what use cases they excel over the other.

    • @juanjesusligero391 24 days ago +1

For one thing, Open WebUI is open source, which means you'll be able to use it forever for free, for commercial or personal use (that, and all the other advantages of it being FOSS, like more security, a nice and supportive community, etc.). That being said, LM Studio is easier to install.

  • @raghuvallikkat3384 22 days ago +1

Do we really need to do the git clone? I think it is enough to do docker run.

  • @14types 24 days ago

What are the minimum requirements? Can I install it on a simple VPS?

  • @OpusG5 24 days ago

    I was hoping someone else would handle all the heavy lifting with the UI, as it was somewhat of a turn-off for me to read replies in the shell. Thank you for sharing it!

  • @jasonsalgado4917 24 days ago +1

    Why would I use this over autogen? Looks pretty similar

  • @tiredlocke 23 days ago

Do these solutions have an API available, like OpenAI? I'd like to easily replace OpenAI for some of my projects.

    • @SooDamGood 22 days ago +1

      yes

  • @sherpya 23 days ago

Just make sure you don't use the update-models functionality if you have some models, as the Ollama author pointed out, until they implement model hash comparisons.

  • @theworddoner 24 days ago

    I prefer ooba text Gen webui as it’s more customizable. The RAG implementation is not something I’ve tried before. I guess I’ll give it a try.

  • @user-yi2mo9km2s 24 days ago

    Is it able to access SQL, web search engines?

  • @jabeztadesse 13 days ago

    I have other videos where I show you ... please link them in the description!!!

  • @icegiant1000 24 days ago

    Two dumb dumb questions: I just read about Gradient AI offering Llama with a 1M and 4M context window. Would that work with this? Is that a factor of what they did to it? Really would like to have a 2nd alternative to the 1M context window at Gemini 1.5 Pro. Final question, I have a pretty strong gaming laptop, CPU MARK of around 23,000, but no uber graphics card worth noting, it does have an nVidia chip in there somewhere. How fast would this setup be? Maple syrup, or ChatGPT 4 speed? (And does that RAG repo act like a context window, or it just provides documents when required?)

    • @jimigoodmojo 23 days ago

Using a 256K context window requires at least 64 GB of memory. Using a 1M+ context window requires significantly more (100 GB+). Check out the model page in the Ollama library for more details.

    • @jimigoodmojo 23 days ago

As for your PC: try it. It will depend on the models, but some of these will run OK on just CPU, and I think you might eke something from your GPU. I think you'll be fine with Phi-3, Llama 3, and Mistral.

  • @certilremy 24 days ago

Can I have Ollama, Llama 3, and the Open WebUI on a MacBook Pro M1 with 16 GB of RAM?

    • @capoyeti 24 days ago

      That’s exactly what I’m running, so yeah, you can

    • @certilremy 23 days ago

      @@capoyeti thank you

  • @dylan_curious 23 days ago

    Great tutorial!

  • @Termonia 24 days ago +1

Matthew, it could be awesome to do a podcast with David Ondrej; he is inviting you in his videos. I'd love for you both to talk about these tools. I'm using AnythingLLM with LM Studio. What is the difference with Ollama plus Ollama UI? Thank you for your videos.

  • @torarinvik4920 24 days ago +1

    There is an agentic system that runs on Groq, might be worth checking out, I can't remember the name.

  • @CM-zl2jw 24 days ago +1

    Thank you. I just NEED to decide on which GPU to buy.
    Btw. Why do 6% of the people in your poll believe GPUs should be regulated?

    • @damienhughes4559 24 days ago +2

      Maybe 6% of his audience are OpenAI employees. 😂

  • @Maisonier 24 days ago +3

    How does it compare with Anything LLM?

    • @spillledcarryout 24 days ago +1

AnythingLLM is way clunkier; all the models I have tried act quite dumb.

  • @orthodox_gentleman 11 days ago

    I wonder how it could be best accessed via iPhone GUI?

  • @spillledcarryout 24 days ago

Matthew, as a doctor and entrepreneur in private practice, I am grateful for the insights and skills I have learned from your YouTube videos. I do not usually comment on videos, but yours is exceptional. Your content is practical and directly applicable to my profession, where we are constantly analyzing massive documents with sensitive data. And now, this is possible locally on my machine, fully HIPAA compliant! Thank you again and I cannot wait for what happens next :)

  • @volaaz 24 days ago

We need local GPTs. Do you have one?

  •  24 days ago

    Perfect!

  • @daithi007 23 days ago

    Great video

  • @JohnSmith-vo8is 14 days ago

    Could this work on Jan?

  • @ksem1337 24 days ago

Can you add an uncensored model to it?

  • @adamrodriguez7598 24 days ago

    Can this work on an Intel Mac?

  • @SiliconSouthShow 24 days ago +1

HEADS UP: your first account is your admin account. If you update OWU, Ollama, or Docker, make sure you follow the instructions, or it will lose your username and password and you won't have admin anymore. It's a pain to straighten out; doable (I did it), but a pain.

  • @aa-xn5hc 23 days ago +1

    Did not explain how to update and still keep all the settings intact!

  • @brainwithani5693 23 days ago

I don't think I understand. Can this be used as a UI for a particular platform or device, or is it just a standalone thing? I mean, would this work as a UI instead of Windows or Android, or is this a toy?

    • @SooDamGood 22 days ago

      its local on your pc

  • @jakovnikolic8622 23 days ago

Your video about this software comes way too late, to be honest. The guy behind this project made this thing awesome a year and a half ago. But it is nice that you did it anyway; people need to know about this project and support it, since it's really, really good. Keep up the good work ;)

  • @sephirothcloud3953 24 days ago

Does this version support Llama 3's 1M context?

  • @mountee 24 days ago

I prefer Lobe Chat, as multiple local and remote APIs can be configured at the same time.

    • @abdullahazeem113 24 days ago +1

Can you recommend some other chat UI which supports models like Llama 3, etc.?

    • @cavalrycome 24 days ago

      Open WebUI allows you to configure multiple local and remote APIs simultaneously. I have it set up with Ollama but also OpenAI, Mistral and Groq all at once.

    • @nahomb2784 24 days ago

The UI looks nice, but UX-wise it's still not mature.

  • @hamidmohamadzade1920 24 days ago

How large is the context window?

    • @jimigoodmojo 24 days ago

It depends on the model you're running. I believe each model comes with its own default, and you have to start mucking with Ollama model files if you don't like the default. Gradient Llama 3 is supposed to have quite a large one.
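For reference, the knob this reply alludes to is `num_ctx` in an Ollama Modelfile. A minimal sketch (the `llama3-8k` name and the 8192 value are just examples):

```shell
# Create a llama3 variant with a larger context window (num_ctx),
# then run it like any other local model.
cat > Modelfile <<'EOF'
FROM llama3
PARAMETER num_ctx 8192
EOF
ollama create llama3-8k -f Modelfile
ollama run llama3-8k
```

The new model then shows up in Open WebUI's model picker alongside the original.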

  • @marcelbloch9516 20 days ago +1

If you're using Docker, you don't have to clone the git repo to run it.

  • @mernik5599 24 days ago

How do you enable web access and function calling for this?

    • @cavalrycome 24 days ago

      It's not possible at the moment.

  • @pedrodaniel3161 20 days ago

Can I connect it with Groq?

  • @aa-xn5hc 23 days ago

    Show how to use the local installation from mobile phone

  • @RamonGuthrie 24 days ago +2

Is this better than LM Studio? Does it support RAG and web access?

    • @juanjesusligero391 24 days ago

Yeah, it supports RAG and web access (but not web search!).

  • @abdelhakkhalil7684 24 days ago

    I have already downloaded many models that I use with Oobabooga and LM Studio (Thank you for suggesting LM Studio, it's a good software. Shame it does not support EXL2). But, I don't want to download models specific to Ollama.

  • @JustLennyBenny 24 days ago

    youtube TellSell channel for AI services :P

  • @dreamyrhodes 24 days ago

    How is the API?

  • @joe_limon 24 days ago

Are there any GPT-4-level multimodal models in the open-source community yet? I want to stick a several-hundred-page ASME code document into one of these models.

    • @abdullahazeem113 24 days ago

Yep, Llama 3 70B and Command R+; it also beats some of GPT-4's predecessors.

    • @joe_limon 24 days ago

@abdullahazeem113 Tyvm, didn't realize it was multimodal.

    • @abdullahazeem113 24 days ago

@joe_limon oh, you can make your own, or use Reka (I use it), or you can use Hugging Face Chat or Coral by Cohere; it supports both web searching and text-based documents, etc.

    • @abdullahazeem113 24 days ago

@joe_limon oh okay, use Reka, Coral by Cohere, etc.

  • @Pitraptor 13 days ago

    You were going to say we could have multiple llamas loaded weren't you?

  • @tukanhamen 24 days ago

    I really wish people would start being more original with their ui. If I wanted my front end to look just like ChatGPT then I would just use ChatGPT free lol. That’s why I like LMStudio. Only thing holding me back from using it more often is the lack of voice features.