Vincent Codes Finance
Use this free ChatGPT clone for local private chats with documents and web search
Msty 1.0: My favorite AI chat app gets new features, including access to the web!
In this video, I present an overview of the features of Msty, my favorite local AI chat app, including the latest features such as real-time web search, improved document and image attachments, and parallel chat. I also discuss Knowledge Stacks, a built-in RAG solution that makes your documents or even Obsidian vaults accessible to your LLM models.
Msty is the most user-friendly solution that I have seen to download and run local open-source LLMs such as Meta's Llama 3, Mistral's Mixtral, Google's Gemma and Microsoft's WizardLM 2. It also supports multimodal models such as Llava.
Don't send your private data to OpenAI's ChatGPT or Anthropic's Claude.ai; keep it private on your PC or Mac. Or do if you want: it also supports cloud models such as GPT-4o and Claude.
Msty makes all this possible by bundling Ollama behind a nice UI.
If you want to install Ollama directly, or try out Open WebUI, an open-source web UI for AI chat, check out my previous video on how to install Ollama and Open WebUI: th-cam.com/video/UmUDpxnmLW4/w-d-xo.html
👍 Please like if you found this video helpful, and subscribe to stay updated with my latest tutorials. 🔔
❤️ You can support this channel by buying me a ☕: buymeacoffee.com/codesfinance
🔖 Chapters:
00:00 Intro
00:35 Installation
02:03 Install local models
03:25 Install cloud models
04:02 Chat
06:11 Split Chat
08:59 Web Access
10:40 Chat with Documents
12:12 Chat with Images
13:31 Prompts Library
15:00 Knowledge Stacks
18:38 Final thoughts
🔗 Video links:
Msty: msty.app/
Ollama: ollama.com/
OpenAI API (GPT-4o): platform.openai.com/docs/overview
Anthropic API (Claude): www.anthropic.com/api
🐍 More Vincent Codes Finance:
- ✍🏻 Blog: vincent.codes.finance
- 🐦 X: CodesFinance
- 🧵 Threads: www.threads.net/@codesfinance
- 😺 GitHub: github.com/Vincent-Codes-Finance
- 📘 Facebook: people/Vincent-Codes-Finance/61559283113665/
- 👨‍💼 LinkedIn: www.linkedin.com/company/vincent-codes-finance/
- 🎓 Academic website: www.vincentgregoire.com/
#llama3 #msty #ollama #chatbot #aichatgpt #aichat #gpt4o #rag #mixtral #dbrx #obsidian #wizardlm2 #chatgpt #llm #largelanguagemodels #openwebui #gpt #opensource #cohere #databricks #opensourceai #llama2 #mistral #bigdata #research #researchtips #professor #datascience #dataanalytics #dataanalysis #uncensored #private #mac #macbookpro #claude #anthropic
Views: 393

Videos

Playwright: Advanced Web Scraping in Python
262 views · a month ago
In this video, we'll see how you can scrape complex webpages, including pages that use hydration (i.e. load their data in the browser using JavaScript), in Python using the Playwright framework. This lets you scrape websites that cannot be scraped using simpler tools such as the requests library. Using Playwright, you can also take screenshots, scrape websites that require authentication, and mu...
Use Dev Containers in VS Code for Safe and Replicable Data Analysis in Python
121 views · a month ago
Use Dev Containers in VS Code for Safe and Replicable Analysis in Python. Development containers in Visual Studio Code make it easier for data scientists to use open-source packages safely while making their analysis easier to replicate. All the code runs inside a lightweight virtual machine called a container, which isolates it from your local OS. This also helps make sure that all your wo...
This FREE ChatGPT Clone Just Got Better: MSTY with Parallel Chat, Knowledge Stacks and More!
2.3K views · a month ago
The Best Local AI Chat MSTY Just Got Better - Parallel Chat, Knowledge Stacks and More! In this video, I take a second look at my favorite local AI chat app Msty, especially at the new features released in the latest version: parallel chats and knowledge stacks. Knowledge Stacks is the new RAG solution that makes your documents or even Obsidian vaults accessible to your LLM models. Msty is the ...
Explore Data Like a Pro in VS Code with Data Wrangler, Pandas and Python
406 views · 2 months ago
Explore Data Like a Pro in Visual Studio Code with Data Wrangler, Pandas, and Python. Data Wrangler is a Visual Studio Code extension that makes it easy for data scientists to explore and clean data. You can preview every step, reverse any changes, and export all your work as a Python function that can replicate your steps in a reproducible way using pandas. This video will walk you through how ...
AI Writing Assistant in Visual Studio Code with Ollama, and Continue - Custom, Private and Free
881 views · 2 months ago
Set up a Custom AI Writing Assistant in VS Code with Continue.dev. SEE BELOW FOR THE CUSTOM COMMANDS USED IN THE VIDEO. In this video, we'll see how you can use Ollama and Continue to run a customizable GitHub Copilot clone and turn it into an AI writing assistant that you can use with LaTeX, Markdown, or Quarto. Run it locally for free using open source large language models (LLMs) such as Meta'...
Chat with your Documents Privately with Local AI using Ollama and AnythingLLM
2.9K views · 2 months ago
Chat with your Documents Privately with Local AI using Ollama and AnythingLLM. In this video, we'll see how you can install and use AnythingLLM, a desktop app that provides a ChatGPT-style AI chat that runs locally on your machine. While not the most user-friendly UI I have seen, the built-in document chat feature works very well. Use it with local LLMs such as Meta's Llama 3, Mistral's Mixtral...
Private Chat GPT-clone with LLama3, Ollama and Msty - Free and Easier than Open WebUI
2.3K views · 2 months ago
Private Llama3 AI Chat, Easy and Free with Msty and Ollama - Easier than Open WebUI. In this video, we'll see how you can install and use Msty, a desktop app that provides a ChatGPT-style AI chat that runs locally on your machine. Msty is the most user-friendly solution that I have seen to download and run local LLMs such as Meta's Llama 3, Mistral's Mixtral, Google's Gemma and Microsoft's Wiza...
Can my Laptop run Meta's Llama 3, WizardLM 2, DBRX, Mixtral 8x22b, and Command R+?
2.2K views · 3 months ago
Can my Laptop run Llama 3, WizardLM 2, DBRX, Mixtral 8x22b, and Command R+? In this video, I look at the models that were made available by Ollama this week and see if they can run on my Apple M3 MacBook Pro with 64GB of RAM. Specifically, I look at Meta's Llama3 (70b and 8b), Mistral's Mixtral 8x22b, Databricks' DBRX, Cohere's Command R+ and Microsoft's WizardLM2 (8x22b and 7b). If you want to ...
Sentiment Analysis of Financial News in Python - 3 Ways using Dictionary, FinBert and LLMs
775 views · 3 months ago
In this video, we'll see 3 different methods to perform sentiment analysis of financial news in Python: - A dictionary-based approach using the Loughran and McDonald dictionary - FinBert, a BERT model fine-tuned for sentiment analysis of financial text using the transformers library from Hugging Face - Using large language models with the langchain library. The code is available here: github.co...
Scrape Financial Data from SEC Edgar with Python
1.9K views · 3 months ago
In this video, we'll see how you can scrape financial reporting data from SEC Edgar using Python and the Edgartools library. After watching this video, you should be able to access and scrape many types of corporate disclosure such as 10-K (annual reports), 10-Q (quarterly reports), 8-K (material events), 13F (portfolio holdings), and many more. All that data can easily be extracted as strings ...
Summarize PDFs with a Local AI (Private GPT) in Python
2.5K views · 3 months ago
In this video, we'll see how you can code your own python web app to summarize and query PDFs with a local private AI large language model (LLM) using Ollama, Langchain, and Streamlit. The code is available here: github.com/Vincent-Codes-Finance/documents-llm A written version of the tutorial is available here: vincent.codes.finance/posts/documents-llm/ The paper used as example is here: doi.or...
Faster NumPy on Mac GPU with MLX
854 views · 4 months ago
Accelerate your NumPy Scientific Workflows on Apple Silicon with MLX. In this video, I compare the execution speed of NumPy and Numba with MLX, a Python library that executes code on Mac GPUs and provides a NumPy-compatible API. For my tests, I simulate a large number of time series that follow an AR(3)-GARCH(1,1) process and compute the t-stat for the H0 that mean returns are 0 for each sample pat...
Free and Private GitHub Copilot Clone for VS Code Using Ollama and Continue
9K views · 4 months ago
Install a Private AI Coding Assistant in VS Code for Free Using Ollama and Continue. In this video, we'll see how you can use Ollama and Continue to run a private GitHub Copilot clone locally for free using open source large language models (LLMs) such as codellama and deepseek-coder. This lets you try out different models, and even use uncensored models. All this with a private GPT. Don't send yo...
Run your Own Private Chat GPT, Free and Uncensored, with Ollama + Open WebUI
23K views · 4 months ago
5 tips for reading large CSV files faster
507 views · 4 months ago
Setup VS Code for Scientific Writing with LaTeX
7K views · 5 months ago
Use Github For Academic Research Projects: Track Changes Like a Pro
496 views · 5 months ago
Time Series Regressions in Python with Statsmodels
119 views · 5 months ago
Panel Regressions in Python with linearmodels
483 views · 5 months ago
How to Install Python 3.12 on mac OS (2024) + VS Code and Poetry with Homebrew
665 views · 6 months ago

Comments

  • @GEORGE.M.M · 5 days ago

    Great tutorial, Msty is fantastic, it provides some better features than WebUI. However, I am not sure how much better RAG works with Msty's local embeddings compared to WebUI. What local embeddings have you seen yield the best results for large research papers?

  • @luisderivas6005 · 5 days ago

    Sure... it also requires a VERY beefy machine with plenty of GPU resources, unless you are OK with waiting for 5 minutes between responses.

    • @VincentCodesFinance · 5 days ago

      You're right that your local resources will limit the models you can use. For Mac users, an entry-level M1 MacBook Air should run quantized versions of Llama3 at a reasonable speed.

  • @aminmoeinian · 13 days ago

    Amazing videos, thanks a lot professor

    • @VincentCodesFinance · 12 days ago

      Thanks Amin, glad you find them useful!

  • @unklebonehead · 13 days ago

    Msty is nothing short of amazing!

  • @danielbrzezicki5880 · 13 days ago

    Does it work with my Android phone?

    • @VincentCodesFinance · 13 days ago

      You can try, but it will probably drain your battery at record speed. The installation instructions are different from those in my long-form video; have a look at this blog post for Android: davidefornelli.com/posts/posts/LLM%20on%20Android.html For Open WebUI as well, you will have to follow the regular instructions; I wouldn't use Docker.

  • @stephenzzz · 14 days ago

    Thanks Vincent. Question for you: I want to have a customer paywall for my knowledge content, with a RAG of sorts to answer questions. Which system out there do you think would work best, that is low-code? Then I could figure out how to put it behind a paywall.

    • @VincentCodesFinance · 14 days ago

      That's a good question. I have only looked at local solutions for RAG, both built-in solutions and DIY in Python using langchain and vector DBs. So far the Msty implementation is the best that I have come across, but it's local only. I did make a video on AnythingLLM, which also offers a cloud version, but to be honest I was unimpressed at the time. If I were to go at it myself, I would probably build a custom solution in Python with a vector database to have more control. I don't know any no-code solutions, but I'm curious, so please let me know if you find one that works well!
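
      A minimal sketch of that DIY route, using the chromadb library as the vector database (the collection name, documents, and query are placeholders, and Chroma's built-in default embedding model is assumed):

      ```python
      import chromadb

      # In-memory client; chromadb.PersistentClient(path="db/") keeps the index on disk.
      client = chromadb.Client()
      docs = client.create_collection("knowledge")  # placeholder collection name

      # Index some placeholder documents; Chroma embeds them with its default model.
      docs.add(
          ids=["doc1", "doc2"],
          documents=["First article of gated content...", "Second article..."],
      )

      # Retrieve the most relevant passages, then pass them to an LLM as context.
      hits = docs.query(query_texts=["What does the content say about fees?"], n_results=2)
      print(hits["documents"])
      ```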

  • @siddharthkandwal6514 · 17 days ago

    Where are the content and chats saved on Mac?

    • @VincentCodesFinance · 16 days ago

      Most of the data is saved in the "Library/Application Support/Msty/" directory. It is usually hidden on Mac, but you can access it in Finder using the menu item Go->Go to Folder... Some of the data will also end up in Library/Caches. The downloaded models are stored in the default Ollama directory, "~/.ollama/models".
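
      A quick way to check those locations from Python (the paths are exactly the ones listed above; this is just a convenience sketch):

      ```python
      from pathlib import Path

      # The three places named above: app data, caches, and the shared Ollama model store.
      for p in [
          Path.home() / "Library/Application Support/Msty",
          Path.home() / "Library/Caches",
          Path.home() / ".ollama/models",
      ]:
          print(p, "->", "exists" if p.exists() else "not found")
      ```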

  • @YashGupta-ul2ds · 24 days ago

    Very good video! If I want to get the 10-K filings for a given company ticker to use for RAG training in an LLM, would the approach in the video suffice?

    • @VincentCodesFinance · 24 days ago

      Yes, kind of. It will let you download all the filing information. But keep in mind that 10-Ks contain more than just the filing text. To get financials, you have to extract the XBRL information from the filing. Most filings also have attachments. Edgartools will let you download all of these; the hardest part for RAG is not the scraping, it will be structuring all that data in a useful way for querying.
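
      A hedged sketch of that first step with Edgartools (the identity string is a placeholder required by SEC EDGAR, and `filing.text()` as the plain-text accessor is an assumption; check the library docs):

      ```python
      from edgar import Company, set_identity

      # SEC EDGAR requires you to identify yourself (placeholder identity).
      set_identity("Your Name your.email@example.com")

      company = Company("AAPL")
      for filing in company.get_filings(form="10-K"):
          # Assumed accessor: plain text of the filing body, ready to chunk and embed for RAG.
          text = filing.text()
          print(filing.filing_date, len(text))
          break  # just the most recent filing for this sketch
      ```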

  • @englishmimics · 28 days ago

    Vincent, I just updated Msty and now I am unable to run the LLMs! I keep receiving the following error message: "llama runner process has terminated: exit status 0xc0000135". Did it happen to you as well?

    • @englishmimics · 28 days ago

      I have tried uninstalling and reinstalling Msty, but the issue still persists!

  • @englishmimics · 28 days ago

    Hey Vincent, hope all's good with you! I was wondering if there's a way to save the large language models we got from Msty in case I ever need to reinstall Windows. Just want to make sure we have a backup plan in place.

    • @VincentCodesFinance · 28 days ago

      Models downloaded directly with Ollama should be in the "C:/Users/username/.ollama" directory on Windows. If you copy the content, it should be OK. I presume Msty also saves them there, but I don't have a Windows machine to confirm.

    • @englishmimics · 28 days ago

      @VincentCodesFinance Thank you for your prompt response. I couldn't locate the models there, so I looked in the AppData folder and found them here: C:\Users\username\AppData\Roaming\Msty\models

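      A small backup sketch in Python (the source path is the one found above; the destination is hypothetical):

      ```python
      import shutil
      from pathlib import Path

      # Source: where the models were found in this thread.
      # Destination: a hypothetical backup location; change it to suit your setup.
      src = Path.home() / "AppData/Roaming/Msty/models"
      dst = Path("D:/backups/msty-models")

      shutil.copytree(src, dst, dirs_exist_ok=True)  # copies the whole tree, keeping the layout
      print("Backed up to:", dst)
      ```
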
  • @robwin0072 · a month ago

    Thank you for this walk-through. At 01:43 you spoke of 'chat' tags. Two things: 1. There was a 'text' line three lines above 'chat'; what benefits come with the 'text' command-line instruction? 2. I did not notice at what point you copied and pasted the 'chat' command-line instruction.

    • @VincentCodesFinance · 28 days ago

      1. Text models are optimized for text completion instead of chat-style querying. In most use cases à la ChatGPT that we usually think of, the chat variant is the one you want. 2. You can copy any of the ones that are there (you should use llama3 now, not llama2, or one of the newest uncensored models). The only thing to be aware of is that the command that gets copied is "ollama run modelname" instead of "ollama pull modelname". The run command will trigger a pull if necessary and the model will be downloaded, but run will also load the model in memory and make it available in the command line.

  • @josersleal · a month ago

    Humans will stop writing research or anything else because other humans do not read it anymore. Machines will create knowledge for machines in a format unreadable by humans (why should they bother otherwise, it's slow). These AI experts are children with a new toy, like Oppenheimer and co... idiots

  • @JaiRaj26 · a month ago

    What about the autocomplete option? Can this model autocomplete code like Copilot?

    • @VincentCodesFinance · a month ago

      They added this feature (still in beta) in the new version. I've had mixed results with it using local models; it was not as good as Copilot. For it to be useful, you have to use a very small model, otherwise latency becomes an issue. Their documentation is a useful guide on picking the right model: docs.continue.dev/walkthroughs/tab-autocomplete

  • @tarkanh2519 · a month ago

    This is an amazing video demonstrating Edgartools. If I had the chance to like it 1000 times, I would. Thanks again. This series must go on...

    • @VincentCodesFinance · a month ago

      Thanks for the kind words! I have a few more scraping-related videos planned in the near future.

  • @rahuldinesh2840 · a month ago

    What is the configuration of your PC?

    • @VincentCodesFinance · a month ago

      I'm using a MacBook Pro with an M3 Max CPU and 64GB of RAM. If using a PC, you'll want a fast GPU with a decent amount of RAM on the GPU, otherwise Ollama will run on the CPU (much slower).

  • @englishmimics · a month ago

    Vincent, could you please create some tutorials on how to download and install LLMs for OobaBooga WebUI? It seamlessly integrates with SillyTavern, which is very interesting in itself.

  • @maximt1401 · a month ago

    I have a large JSON file I would like to extract insights from. What is going to be the best way to do this?

    • @VincentCodesFinance · a month ago

      Depends on the JSON file. You can try to embed it in a knowledge stack, but this is meant for querying, i.e. letting your LLM search the stack for the answer you're looking for. To process the whole file, I would first check if it can fit within the context of the model (llama 3 supports 8k tokens, I think). If so, you can include the full file in your query. Otherwise, you could chunk your file and process it using a map-reduce approach like I do in this video: th-cam.com/video/Tnu_ykn1HmI/w-d-xo.html
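
      A bare-bones sketch of that map-reduce idea against a local Ollama server (the default port is assumed, and the file name, chunk size, and prompts are placeholders):

      ```python
      import json
      import requests

      OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

      def ask(prompt: str, model: str = "llama3") -> str:
          r = requests.post(OLLAMA_URL, json={"model": model, "prompt": prompt, "stream": False})
          r.raise_for_status()
          return r.json()["response"]

      # Map: summarize each chunk separately so every call fits in the context window.
      text = json.dumps(json.load(open("data.json")))
      chunks = [text[i:i + 6000] for i in range(0, len(text), 6000)]
      notes = [ask("Summarize the key facts in this JSON fragment:\n" + c) for c in chunks]

      # Reduce: merge the partial summaries into one final answer.
      print(ask("Combine these notes into a single summary:\n" + "\n".join(notes)))
      ```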

    • @maximt1401 · a month ago

      @VincentCodesFinance Thank you so much for the detailed reply. I'm going to have a go at both of these methods 🙏😸

  • @Threecommaaclub · a month ago

    At 3:24 you stated that running Apple.Financials would yield the most recent data; however, the data scraped was from 2023. Is there a way to ensure that the data provided when running this code is the most recent data available?

    • @VincentCodesFinance · a month ago

      I should have been more precise. This gives you the financial data from the latest annual report (form 10-K). In the case of Apple, the latest financials available would be from the quarterly report issued earlier this month. You can get the latest quarterly financials using: `apple.get_filings(form="10-Q").latest().obj().financials`. Note that companies only file reports quarterly, so there is always some lag in the data.
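
      Put together as a runnable snippet (the identity string is a placeholder required by SEC EDGAR; the accessors are the ones used in this thread):

      ```python
      from edgar import Company, set_identity

      set_identity("Your Name your.email@example.com")  # required by SEC EDGAR

      apple = Company("AAPL")
      # The most recent quarterly (10-Q) financials, as suggested above:
      quarterly = apple.get_filings(form="10-Q").latest().obj().financials
      print(quarterly)
      ```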

  • @XXSinanovichXX · a month ago

    Thank you for this video. Is there a way to calculate Newey-West standard errors in 2SLS IV panel regressions using linearmodels?

    • @VincentCodesFinance · a month ago

      It should work if you call .fit(cov_type="kernel"). By default it will use the Bartlett kernel (same as Newey-West). You can specify the bandwidth using fit(cov_type="kernel", bandwidth=3), since covariance options are passed as keyword arguments: bashtage.github.io/linearmodels/devel/iv/iv/linearmodels.iv.model.IV2SLS.fit.html#linearmodels.iv.model.IV2SLS.fit
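
      A self-contained sketch on toy data (the data and the bandwidth value are arbitrary):

      ```python
      import numpy as np
      import pandas as pd
      from linearmodels.iv import IV2SLS

      # Toy data: y depends on an endogenous x, instrumented by z.
      rng = np.random.default_rng(0)
      n = 500
      z = rng.standard_normal(n)
      x = z + rng.standard_normal(n)
      y = 1 + 2 * x + rng.standard_normal(n)
      df = pd.DataFrame({"const": 1.0, "y": y, "x": x, "z": z})

      # IV2SLS(dependent, exog, endog, instruments)
      model = IV2SLS(df["y"], df[["const"]], df[["x"]], df[["z"]])
      # Kernel (HAC) covariance; the Bartlett kernel is the default, i.e. Newey-West.
      res = model.fit(cov_type="kernel", bandwidth=3)
      print(res.summary)
      ```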

  • @englishmimics · a month ago

    Hey Vincent, can you figure out how to pass on those cool AI language models that Msty downloaded to other programs such as SillyTavern and Oobabooga TextGen WebUI?

    • @VincentCodesFinance · a month ago

      I don't know these two tools well. From a quick glance at their docs, it seems that they do not support Ollama, which is what Msty uses. Msty has a built-in Ollama instance, but it will share models with a stand-alone Ollama install if you have one. However, as far as I know, it cannot share models with other local LLM servers. In another video I show how to set up Open WebUI, which is similar to TextGen WebUI and supports Ollama.

    • @englishmimics · a month ago

      @VincentCodesFinance I can't wait for your next tutorial!

  • @stephenzzz · a month ago

    Thanks Vincent

  • @englishmimics · 2 months ago

    Vincent, Lately, I've been playing around with 2D and 3D animation for my clients, but now I'm eager to try my hand at AI stuff. Yep, total newbie here! I'm super pumped about all these cool AI-powered tools popping up everywhere. So, today I stumbled upon Silly Tavern. Have you heard of it? It's basically a frontend tool that can't load models on its own but can work with models loaded by other backends like text-generation-webui, koboldcpp, and others. I'm wondering if you could lend me a hand in figuring out how to connect Msty to Silly Tavern. I want to use Msty LLMs as the source for it. Any ideas?

  • @englishmimics · 2 months ago

    Wow, Vincent, I just wanted to say a massive thank you! I've been really looking forward to this tutorial. Your input in the AI community is greatly appreciated. I have to say, your TH-cam channel is like a hidden treasure, and I'm thrilled that I found it!

    • @VincentCodesFinance · a month ago

      Glad you like it! Thanks for the kind words.

  • @englishmimics · 2 months ago

    Hello Vincent, I hope you're well. I noticed that Msty has updated some software features and added RAG to their LLMs. Could you help us understand this? You're always my go-to for these things.

    • @VincentCodesFinance · 2 months ago

      Already working on it! It will likely be more than one video given all the new features.

  • @tvandang3234 · 2 months ago

    I work for a dental business and I want to import all their documents, like spreadsheets, PDFs, docs, text files, etc., into Open WebUI as a knowledge base. Can I do that and have it saved locally, so that when I restart I do not have to import them again?

    • @VincentCodesFinance · 2 months ago

      I haven't tried to build a setup as involved as this. I think you can save the loaded documents, but I only tried it as a single user, so I'm not sure if sharing is possible, or what types of files are supported besides PDF. Be aware, however, that this is a recent open-source project under active development; I would be careful before loading any sensitive medical data into server software that has not undergone a security audit.

  • @tiffanyw3794 · 2 months ago

    Thank you, this is the best video explaining how to do this!

  • @dogan1318 · 2 months ago

    Hi, can we set the Ollama provider URL? I want to use an Ollama instance that I serve on my own server.

    • @VincentCodesFinance · 2 months ago

      Yes! You can specify the URL with the apiBase parameter. See docs.continue.dev/reference/Model%20Providers/ollama for an example.

    • @dogan1318 · 2 months ago

      @VincentCodesFinance Thank you 🙏

  • @wunjo8586 · 2 months ago

    Hi, nice video. I'm new to coding; how can I extract the cash flow for different dates, let's say 2005? Thank you!

    • @VincentCodesFinance · 2 months ago

      I haven't done this specifically, but you have to look at the filings that would have that information. For most companies, cash would be available on a quarterly basis in the 10-Q form, as part of the XBRL data (I'm not sure of the exact XBRL field). I would approach this by searching for all filings of type "10-Q" that I'm interested in (based on year and company), then retrieving the XBRL data for each filing and extracting the cash flow column.
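
      Sketched in code (the year filter relies on `filing.filing_date`, and the exact financials accessor is an assumption; check the Edgartools docs):

      ```python
      from edgar import Company, set_identity

      set_identity("Your Name your.email@example.com")  # required by SEC EDGAR

      company = Company("AAPL")
      for filing in company.get_filings(form="10-Q"):
          # Keep only the quarters from the year of interest (2005 in the question).
          if filing.filing_date.year != 2005:
              continue
          financials = filing.obj().financials  # XBRL-based statements
          print(filing.filing_date, financials)  # inspect these for the cash-flow items
      ```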

  • @wrOngplan3t · 2 months ago

    This was pretty cool! I'm barely a hobby coder, some Arduino or Processing Java stuff once in a blue moon lol. I've barely scratched the surface with this extension. I've yet to see how much better this is than just copy-paste to / from an LLM, but it certainly has potential! Seems pretty helpful, although you also have to be skeptical of its outputs and claims (same as ever); sometimes it seems to hallucinate a bit, or assume some stuff that's not quite right. But it's alright as long as one is aware of it. Using Linux Mint 21.3, and VSCodium. Already had Ollama present. I found the Continue add-on, and thanks to your config.json editing I got it working. So far with Codegemma. Great video, thanks!

    • @VincentCodesFinance · 2 months ago

      Thanks! I guess the main benefit of having it directly in VS Code is convenience: it lets you stay within VS Code, is integrated with the UI, and the buttons and slash commands will automatically wrap prompts around your code. In terms of output quality, it will mostly depend on the model you use.

  • @drkvaladao776 · 2 months ago

    Very nice, easy to follow and set up even if you are not a programmer like me. I'm using it with Llama 3 at the moment. Subscribed for more content like this.

  • @englishmimics · 2 months ago

    Vincent, I've seen all your tutorials on ChatGPT-like clones, but I'm still unsure which one to install on my computer! Could you please advise me on which one you recommend, considering I'm not a coding pro? I just want to avoid any unnecessary headaches!

    • @VincentCodesFinance · 2 months ago

      So far the easiest to install and use is Msty. However, it does not support documents (yet); it is just pure chat for now.

  • @Giacint · 2 months ago

    Hi, thank you for the video. Can you tell me if there is any way I can load a separately downloaded GGUF model into Msty? Just specifying the models folder does not show them in the list.

    • @VincentCodesFinance · 2 months ago

      That I don't know. You should ask Ashok (the Msty developer) on their Discord channel; he is very responsive to user questions: discord.gg/2QBw6XxkCC

    • @Giacint · 2 months ago

      @VincentCodesFinance Thanks for the response

  • @englishmimics · 2 months ago

    Thank you so much Vincent for taking the time to create and share that amazing video with us. Your effort is truly appreciated.

  • @Alex29196 · 2 months ago

    AnythingLLM won't be able to outperform other local inference UIs: it lacks voice output, unlike Ollama Web UI and others, and inference runs much slower than Msty, the Ollama CLI, and LM Studio.

    • @VincentCodesFinance · 2 months ago

      I agree that the UI is inferior to pretty much all the other ones I tried. As for inference speed, I configured it to use my local Ollama installation, so the speed is the same. The one use case where I found AnythingLLM better than WebUI is for document search (Msty doesn't have it yet). I haven't tried LM Studio yet, but it's high on my list.

    • @Alex29196 · 2 months ago

      @VincentCodesFinance Try Msty, inference speed is really fast; I run a low-end laptop with 4GB VRAM and 16GB RAM. Hope it helps.

  • @englishmimics · 2 months ago

    Vincent, I must say that your TH-cam channel is truly exceptional and stands out among the rest. I just hit the subscribe button and I am eagerly looking forward to seeing more of your amazing content.

    • @VincentCodesFinance · 2 months ago

      Thanks for the kind words, it's always nice to hear!

  • @englishmimics · 2 months ago

    Thanks for sharing! Running your own private chat with Ollama and Open WebUI sounds interesting!

    • @VincentCodesFinance · 2 months ago

      It is! In my latest video I show a new, even simpler UI called Msty, a desktop app that comes bundled with Ollama (or can use your existing Ollama). If you don't mind using a closed-source UI, it's the nicest one I've seen so far: th-cam.com/video/REEYqYEtqAc/w-d-xo.html

    • @englishmimics · 2 months ago

      @VincentCodesFinance That sounds cool! It's awesome that you're checking out simpler UI options. Thanks for sharing!

  • @my_yt_ · 2 months ago

    Thanks for researching and posting these video gems. I'm adding this one to my "toolbox" for sure.

  • @mohammedsaleh-ck8jf · 2 months ago

    Can you make a video about how to use local LLMs with MetaGPT or TaskWeaver?

    • @VincentCodesFinance · 2 months ago

      Thanks for the tips. I've never used them but will look into them. TaskWeaver seems like something that would be useful for my research work.

  • @mohammedsaleh-ck8jf · 2 months ago

    Keep going, you are the best!

  • @JeffersonAmaral-id2en · 2 months ago

    Congratulations, thanks for sharing!!

  • @LauraLanford · 2 months ago

    When I installed Docker it had some error associated with WSL, and then when I try to run it (10:00) it shows me an error and I can't proceed.

    • @VincentCodesFinance · 2 months ago

      Hmm, I don't have much experience with WSL. Could it be related to this issue? github.com/docker/for-win/issues/13845

  • @mikedoyle9908 · 2 months ago

    Great work - thanks!! Can I ask what spec machine you have to run those models?

    • @VincentCodesFinance · 2 months ago

      Thanks! I have a MacBook Pro M3 Max with 64GB of RAM. The Apple chips are great for running these models because the RAM is shared between the CPU and GPU, so the GPU can use all of it when needed. In my most recent video, I test the new models released this month to see which ones will run smoothly on my laptop: th-cam.com/video/0ujRg04fDW4/w-d-xo.html

  • @KevlarMike · 2 months ago

    Thanks, this is exactly what I was looking for.

    • @VincentCodesFinance · 2 months ago

      Happy to help! Make sure to try the new Llama 3 that was recently released; it's a big step up from the llama2 used in the video.