Good 👍
Your videos are very helpful and informative. Please create a video about building an AI agent using NO code, bro 🎉
I'll put it on the list! 👍
Nice job! This will become a good reference.
amazing!
Can we store the prompt and response in SQL? If you have any ideas about that, please share.
Are you referring to text-to-SQL?
Super bro! Please also upload a video on how to use Hugging Face.
Super bro, please upload a Chainlit tutorial when you can.
Hi bro, Ollama now works on Windows too, but how do we use it for a particular domain?
Yes bro. Ollama recently added Windows support. Once you install Ollama and download any local LLM, you can easily adapt it to your domain either via prompt engineering (a system prompt) or via LoRA fine-tuning. Most of the latest LLMs like Llama 2, Mistral, Mixtral, Dolphin, Gemma etc. are versatile across domains. You can make them work with little prompt-engineering effort.
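A minimal sketch of the "system prompt" route the comment describes, using Ollama's Modelfile format (the base model, assistant name, and prompt text here are placeholders, not from the thread):

```
# Modelfile -- build a domain-steered variant of a base model
FROM llama2

# Every answer is steered toward your domain via the system prompt
SYSTEM "You are a helpful assistant for Tamil-language cooking questions. Answer only cooking questions."

# Optional sampling tweak
PARAMETER temperature 0.7
```

Then create and run it with `ollama create my-assistant -f Modelfile` followed by `ollama run my-assistant`. LoRA fine-tuning is a separate training step done outside Ollama.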
@@conceptsintamil Thank you so much bro.
@@conceptsintamil Bro, next put up a tutorial for it. This will really help non-technical people fine-tune an LLM for a particular domain.
@@mithunkumar25557 Yes, it's on the way 😊
@@conceptsintamil Thanks a lot bro. It will be very helpful. I am also currently trying to fine-tune an LLM for my domain in Tamil.
Bro, the output is taking very long to come.
Any tips for this?
Does your machine have only a CPU, bro? If so, performance will be limited. Try using a machine that has either an NVIDIA or AMD GPU.
Bro, please make a video like this on preparing Llama with our own data using an API key.
You don't need an API key at all 😃. That's the beauty of hosting LLMs locally. If you use OpenAI's LLMs (like ChatGPT), then you need to pay OpenAI and get an API key. With a local LLM, it's free, private and secure.
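To make the "no API key" point concrete, here's a small sketch of calling a locally running Ollama server from Python — note there is no key anywhere in the request. The model name `llama2` and the helper names are illustrative; Ollama's default endpoint is `http://localhost:11434/api/generate`.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint


def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint (no API key needed)."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()


def ask_local_llm(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the generated text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Usage (with Ollama running and a model pulled): `ask_local_llm("llama2", "Hello!")`.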
I'm using Windows, so how do I do it?
Ollama doesn't support Windows, bro. One option is to run a Linux VM and run Ollama inside it. The chatbot can still run on Windows.
I'm getting this error: "Ollama call failed with status code 404". What should I do?
Ollama should be running, bro. Which OS are you using? I can't paste external links here. Can you join either the CIT Telegram or Discord and ask your question there? Happy to help!
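A quick checklist for that 404, assuming a default Ollama install — a 404 on a call usually means either the server isn't running or the requested model hasn't been pulled yet (the model name `llama2` is just an example):

```shell
# 1. Start the Ollama server if it isn't already running
ollama serve &

# 2. Pull the model your code is asking for -- a missing model returns 404
ollama pull llama2

# 3. Verify the server responds and lists the model (default port 11434)
curl http://localhost:11434/api/tags
```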
@@conceptsintamil Bro, I asked on Discord. Please reply.
Brother, also post a video on how to fine-tune Llama.
Bro, please share your LinkedIn ID.