Do you prefer the privacy of local LLMs?
Thank you for this series. It is nice to see you troubleshoot and explain why/how you get an open source project to work.
@@timothyhayes5741 Thank you. I really appreciate that feedback. It means a lot.
It's my first time on Linux and I'm learning so much.
Great set of videos, working my way through them at the moment. One small thing that would save you loads of work: you don't need to write the scripts or make the .desktop file under Mint. Just go to the install directory, hold Ctrl+Shift, and drag it to the desktop.
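For anyone who prefers the terminal: the Ctrl+Shift drag in Mint's file manager just creates a symlink, which can also be done by hand. The paths below are illustrative assumptions, not Jan's actual install location.

```shell
# Illustrative paths only: point APP at wherever the AppImage actually lives.
APP="$HOME/Applications/Jan.AppImage"
mkdir -p "$HOME/Desktop"
# -s makes a symbolic link; -f replaces any existing link of the same name
ln -sf "$APP" "$HOME/Desktop/Jan.AppImage"
```

The symlink behaves like the dragged shortcut: double-clicking it launches the original file.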
Thank you for these informative application videos, an awesome birthday gift!
I love these AI tools.
You are the best one so far!
Great series for beginners. Love the detailed step-by-step setup. I use Ollama but not Jan AI.
One other project that I recently discovered that has high potential is llamafile (by Mozilla, open source). I think it is by far the easiest to run, as the models are pre-packaged inside and run like Windows portable programs. I especially like this for the future, where I can have my own model fine-tuned to what I want and "frozen" with all the dependencies baked inside. The base app can also run GGUF files from Hugging Face if they weren't pre-packaged with the "llamafile" extension. The coolest thing is that I am able to run a 1.1B model on a mid-tier Android phone (painfully slow, something like 1.5 tok/s), but with future small capable models it is very promising indeed. Worth keeping an eye on the project. Not as mature as Ollama and Jan AI, but getting there (it also uses the OpenAI API format and can be used as a backend).
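Since both Jan and llamafile speak the OpenAI API format, a local backend can be queried with a plain HTTP request. Here is a minimal sketch of building such a request; the port, URL, and model name are assumptions to adapt to your own server's settings.

```python
import json

# Assumed endpoint: llamafile's built-in server defaults vary; check yours.
API_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "local-model") -> dict:
    """Build an OpenAI-format chat completion payload for a local backend."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

body = json.dumps(build_chat_request("Hello from a local LLM!"))
# To actually send it, POST `body` to API_URL with
# Content-Type: application/json (e.g. via urllib.request or curl).
```

Because the payload shape is the same one the OpenAI cloud API uses, most existing client libraries can be pointed at the local URL unchanged.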
Very cool, I love to hear about this kind of stuff. Thank you! Any other standalone solutions outside of the LLM frontends that you like for this type of project?
@@theit-unicorn1873 Yes, ultimately once we have our own private LLM on our machine, especially on our phone, it can be used in the background to process our applications privately, e.g. RAG with our medical history data; and when the models are more capable, to analyse our own custom data for us. All without leaking to big tech. I am very concerned about big tech's closed-source models harvesting our data, especially our domain-specific data. There are tons of applications possible once we are able to fine-tune the models or use them with agents. Exciting times ahead for self-sovereign AI, especially running on CPU-only systems (of which I am a big supporter). BTW, I do have a GPU, but inference on CPU is so much more flexible, especially when running larger models, e.g. 70B parameters (my dual-Xeon has 256 GB ECC RAM).
Will you be looking at TensorFlow?
Perhaps down the road we will. I'll be honest, I only recently looked at TensorFlow, so I'll need to familiarize myself before trying to do a video on it. Any tips for a noob? Thanks!
Looks impressive... I installed it just now.
Nice! Let us know how it goes.
@@theit-unicorn1873 I don't know how to set up the ChatGPT API in it.
Hmmm. Followed the guide and everything appears to work OK, but I just don't get any responses back from my inputs. Any ideas?
When you asked the first question did you see the model loading? What model are you using?
@@theit-unicorn1873 I downloaded the same ones as you did in the video and both give the same results. When I ask it something I can briefly see the model loading.
A video on which extensions can be used on Linux, please.
Thanks, love the step-by-step process so far for the AI series!!! [On the 3rd video so far.]
I have installed it on a PC, with one problem: I am not getting any responses from either of the models the guide says to install. [That is, Llama 8B Q4 and Mistral Instruct 7B Q4.]
I went from Assistant to Model and then selected each one of them on a PC with 16 GB RAM.
The thread was a short "What is the meaning of the name Hildegard".
It failed and came up with the following message: "Apologies, something's amiss! Jan's in beta. Access troubleshooting assistance now."
I will try to follow the troubleshooting assistance document.
Step 1: Follow our troubleshooting guide for step-by-step solutions.
Have you tried a different question just to be sure it's not the query? I doubt it is, but worth a try to rule that out.
@@theit-unicorn1873 I have noticed that the model does start up after I type in the same query you typed.
Still no response. I did notice that when I go to the settings at the bottom left of the screen, both models are showing "Inactive". I tried starting them up from there as well; still no response to the same query.
@@theunismulder7119 any logs?
@@theunismulder7119 Getting the same thing. I have the same environment as in the video (Linux Mint VM with 8 cores, 16 GB RAM). Getting an error when trying to activate the model (doesn't matter which model I try). The error is "cortex exited with code: null" and "Error: Load model failed with error TypeError: fetch failed."
@benjaminwestlake3502 In my case, I found that the CPU does not meet the required specs for Jan: no support for AVX2.
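For reference, AVX2 support can be checked on any Linux box before installing; the kernel exposes CPU feature flags in /proc/cpuinfo, so no extra tools are needed.

```shell
# The avx2 flag appears in /proc/cpuinfo on CPUs that support it.
if grep -q avx2 /proc/cpuinfo; then
    echo "AVX2 supported"
else
    echo "AVX2 not supported on this CPU"
fi
```

On a VM, note that the hypervisor may mask host CPU flags, so a host with AVX2 can still present a guest without it; checking inside the VM is what matters.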
Can we get all the AIs to have a conversation with each other?
Funny you should ask, lol. I am working on an AI animatronic and I had an idea of creating a second one and letting them debate. We could use local AI, uncensored, and let them go at it. I might set something like that up in the future.
You forgot to mention how those models can give uncensored answers.
Everybody knows