How To Install Jan AI on Linux Mint

  • Published on Nov 16, 2024

Comments • 32

  • @theit-unicorn1873
    @theit-unicorn1873  3 months ago +7

    Do you prefer the privacy of local LLMs?

  • @timothyhayes5741
    @timothyhayes5741 3 months ago +4

    Thank you for this series. It is nice to see you troubleshoot and explain why/how you get an open source project to work.

    • @theit-unicorn1873
      @theit-unicorn1873  3 months ago

      @@timothyhayes5741 Thank you. I really appreciate that feedback. It means a lot.

  • @DigitalWerkstattDE
    @DigitalWerkstattDE 20 days ago +1

    It's my first time on Linux and I'm learning so much.

  • @sneekeruk
    @sneekeruk 3 months ago +1

    Great set of videos, working my way through them at the moment. One small thing that would save you loads of work: you don't need to write the scripts or make the .desktop file under Mint, just go to the install directory, hold Ctrl+Shift, and drag it to the desktop.
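
    For anyone who still prefers the manual launcher route used in the video, a minimal .desktop entry is sketched below. The Exec and Icon paths are assumptions (they assume an AppImage install under the home directory); point them at wherever your copy of Jan actually lives.

        # Contents of ~/.local/share/applications/jan.desktop (create with any text editor)
        # Exec and Icon below are assumed paths; adjust them to your actual Jan install
        [Desktop Entry]
        Type=Application
        Name=Jan
        Comment=Local AI assistant
        Exec=/home/YOUR_USER/Applications/jan.AppImage
        Icon=/home/YOUR_USER/Applications/jan.png
        Terminal=false
        Categories=Utility;

    After saving, the entry should appear in the Mint application menu; the Ctrl+Shift drag tip above achieves the same result with less typing.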

  • @fluffoid12
    @fluffoid12 a month ago +1

    Thank you for these informative application videos, an awesome birthday gift!

  • @DigitalWerkstattDE
    @DigitalWerkstattDE 20 days ago +1

    I love these AI tools.

  • @DigitalWerkstattDE
    @DigitalWerkstattDE 20 days ago +1

    You are the best one so far!

  • @ngbizvn1300
    @ngbizvn1300 3 months ago +2

    Great series for beginners. Love the detailed step-by-step setup. I use Ollama but not Jan AI.
    One other project that I recently 'discovered' that has high potential is llamafile (by Mozilla, open source). I think it is by far the easiest to run, as they pre-package models inside and run like Windows portable programs. I like this especially for the future, where I can have my own model fine-tuned to what I want and have it "frozen" with all the dependencies baked inside. The base app can also run GGUF files from Hugging Face if a pre-packaged "llamafile" version is not available. The coolest thing is I am able to run a 1.1B model on a mid-tier Android phone! (painfully slow, something like 1.5 tok/s), but with future small capable models it is very promising indeed. Worth keeping an eye on the project. Not as mature as Ollama and Jan AI but getting there (also uses the OpenAI API format and can be used as a backend).
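
    A minimal sketch of the llamafile flow described above, for anyone who wants to try it. The file name is a placeholder; substitute any real .llamafile release (for example from Mozilla's llamafile project or Hugging Face).

        # Mark the downloaded single-file model as executable and run it;
        # by default it starts a local server and opens a chat UI in the browser
        chmod +x TinyLlama-1.1B.Q4_K_M.llamafile
        ./TinyLlama-1.1B.Q4_K_M.llamafile

        # The same runtime can also load a separate GGUF file downloaded from Hugging Face
        ./TinyLlama-1.1B.Q4_K_M.llamafile -m some-other-model.gguf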

    • @theit-unicorn1873
      @theit-unicorn1873  3 months ago

      Very cool, I love to hear about this kind of stuff. Thank you! Any other stand-alone solutions outside of the LLM frontends that you like for this type of project?

    • @ngbizvn1300
      @ngbizvn1300 3 months ago

      @@theit-unicorn1873 Yes, ultimately once we have our own private LLM on our machine, especially on our phone, it can be used in the background for privately processing our applications, e.g. RAG over our medical history data; and when the models are more capable, they will be able to analyse our own custom data for us. All without leaking to big tech. I am very concerned about big tech's closed-source models harvesting our data, especially our domain-specific data. There are tons of applications that can be built once we are able to fine-tune them or use them with agents. Exciting times ahead for self-sovereign AI, especially running on CPU-only systems (which I am a big supporter of). BTW, I do have a GPU, but inference on CPU is so much more flexible, especially when running larger models, e.g. 70B parameters (my dual-Xeon has 256GB ECC RAM).

  • @fernleaf07
    @fernleaf07 3 months ago +1

    Will you be looking at TensorFlow?

    • @theit-unicorn1873
      @theit-unicorn1873  3 months ago

      Perhaps down the road we will. I'll be honest, I only recently looked at TensorFlow, so I'll need to familiarize myself before trying to do a video on it. Any tips for a noob? Thanks!

  • @AjaySingh-228
    @AjaySingh-228 3 months ago +1

    Looks impressive... I installed it just now.

    • @theit-unicorn1873
      @theit-unicorn1873  3 months ago +1

      Nice! Let us know how it goes.

    • @AjaySingh-228
      @AjaySingh-228 3 months ago

      @@theit-unicorn1873 I don't know how to set up the ChatGPT API in it.

  • @jeffhughes729
    @jeffhughes729 3 months ago +1

    Hmmm. Followed the guide and everything appears to work OK, but I just don't get any responses back from my inputs. Any ideas?

    • @theit-unicorn1873
      @theit-unicorn1873  3 months ago

      When you asked the first question did you see the model loading? What model are you using?

    • @jeffhughes729
      @jeffhughes729 3 months ago

      @@theit-unicorn1873 I downloaded the same ones as you did in the video and both give the same results. When I ask it something I can briefly see the model loading.
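
      One generic way to see what is actually failing in cases like this is to launch Jan from a terminal so the backend's output is visible while you send a prompt. The AppImage path below is only an example; use however you normally start the app.

          # Run Jan from a shell and capture everything it prints while you ask a question
          ~/Applications/jan.AppImage 2>&1 | tee ~/jan-debug.log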

  • @DigitalWerkstattDE
    @DigitalWerkstattDE 19 days ago

    A video on what extensions can be used on Linux, please.

  • @theunismulder7119
    @theunismulder7119 3 months ago +1

    Thanks, love the step-by-step process so far for the AI series!!! [From the 3rd video so far.]
    I have installed it on my PC, with one problem. I am not getting any responses with either of the models indicated to install in the guide. [That is Llama 8B Q4 & Mistral Instruct 7B Q4.]
    I went from Assistant to Model and then selected each one of them on a PC with 16GB RAM.
    The thread was a short "What is the meaning of the name Hildegard" -
    It failed and came up with the following message - Apologies, something's amiss!
    Jan's in beta. Access troubleshooting assistance now.
    I will try to follow the troubleshooting assistance document.
    Step 1
    Follow our troubleshooting guide for step-by-step solutions.

    • @theit-unicorn1873
      @theit-unicorn1873  3 months ago

      Have you tried a different question just to be sure it's not the query? I doubt it is, but worth a try to rule that out.

    • @theunismulder7119
      @theunismulder7119 3 months ago +1

      @@theit-unicorn1873 I have noticed that the model does start up after I type in the same query as you typed.
      Still no response. I did notice that when I go to the settings at the bottom left of the screen, both models are indicated as "Inactive". I tried starting them up there as well; still no response for the same query.

    • @theit-unicorn1873
      @theit-unicorn1873  3 months ago

      @@theunismulder7119 Any logs?

    • @benjaminwestlake3502
      @benjaminwestlake3502 3 months ago

      @@theunismulder7119 Getting the same thing. I have the same environment as in the video (Linux Mint VM with 8 cores, 16 GB RAM). Getting an error when trying to activate the model (doesn't matter which model I try). The errors are "cortex exited with code: null" and "Error: Load model failed with error TypeError: fetch failed."

    • @theunismulder7119
      @theunismulder7119 2 months ago

      @benjaminwestlake3502 In my case, I have found that the CPU does not meet the required specs for Jan: no support for AVX2.
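
      A quick way to check whether a CPU (or a VM's virtual CPU) exposes AVX2, which the reply above identifies as a hard requirement for Jan:

          # Prints "avx2" once if the instruction set is available; no output means it is missing
          grep -o -m1 avx2 /proc/cpuinfo

          # Alternative: look for avx2 in the flags reported by lscpu
          lscpu | grep -o avx2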

  • @ScottG420
    @ScottG420 a month ago +1

    Can we get all the AIs to have a conversation with each other?

    • @theit-unicorn1873
      @theit-unicorn1873  a month ago

      Funny you should ask, lol. I am working on an AI animatronic and I had an idea of creating a second one and letting them debate. We could use local AI, uncensored, and let them go at it. I might set something like that up in the future.

  • @MrBoboka12
    @MrBoboka12 3 months ago +1

    You forgot to mention how those models can give uncensored answers.

    • @RandomPickles
      @RandomPickles a month ago

      Everybody knows