Would be very cool to see you continue this by implementing RAG along with Stripe so we can try to sell this. It's quite unique, pretty much no one has done that yet, and such content can only be done by someone like you who has a solid background in Gen AI along with the rest of the tech stack.
Question for you - wouldn't we need a backend instead of doing everything in JavaScript (frontend), just because we can't keep our key in our frontend files? It will be easy to hack after the extension gets published to the Google store.
Thanks for sharing. I have a special use case, a "form auto filler": the extension should be able to fill forms automatically.
Some useful points:
* store the user's data in a vector database
* when the data is needed, pull it from the database
* make some changes and fill the form
Can you help me do it?
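For reference, the store → retrieve → fill flow described above could be sketched like this. This is a minimal illustration only: an in-memory array stands in for a real vector database, and the toy bag-of-words "embeddings" stand in for a real embedding model. All names and sample data are made up.

```javascript
// Hypothetical stored user data: each record pairs a field description
// with the value that should be filled into matching form fields.
const records = [
  { text: "full name of the user", value: "Jane Doe" },
  { text: "email address for contact", value: "jane@example.com" },
  { text: "phone number mobile", value: "+1-555-0100" },
];

// Toy embedding: a word-count vector over a shared vocabulary.
// A real implementation would call an embedding model instead.
function embed(text, vocab) {
  const words = text.toLowerCase().split(/\W+/).filter(Boolean);
  return vocab.map((w) => words.filter((x) => x === w).length);
}

// Cosine similarity between two vectors.
function cosine(a, b) {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const na = Math.sqrt(a.reduce((s, x) => s + x * x, 0));
  const nb = Math.sqrt(b.reduce((s, x) => s + x * x, 0));
  return na && nb ? dot / (na * nb) : 0;
}

// "Store" step: build the vocabulary and index the records once.
const vocab = [...new Set(records.flatMap((r) => r.text.toLowerCase().split(/\W+/)))].filter(Boolean);
const index = records.map((r) => ({ ...r, vec: embed(r.text, vocab) }));

// "Pull" step: retrieve the best-matching stored value for a field label.
function retrieveValue(label) {
  const q = embed(label, vocab);
  let best = null;
  let bestScore = -1;
  for (const r of index) {
    const s = cosine(q, r.vec);
    if (s > bestScore) { bestScore = s; best = r; }
  }
  return best.value;
}

// "Fill" step: in a content script you would then do something like
// document.querySelectorAll("input").forEach((el) => {
//   el.value = retrieveValue(el.labels?.[0]?.textContent || el.name);
// });
```

In a real extension the lookup would run in a content script against the page's inputs, with the vector store hosted behind an API rather than bundled in the extension.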
Can you please make more extension videos with a free LLM?
Great tutorial! But if possible, another one with Ollama with other functions like summarizing or chatting with GitHub, because I don't know how Ollama models can interact with the web. Thanks!
Great suggestion!
Thank you for the amazing video :) I am trying to code along with you. And nice T-shirt. 👍
Happy to hear that!
How can we verify whether the LLM response is accurate in checking whether a site is a phishing site? Would there be response bias, given that most of the sites/examples used for this use case are not phishing sites? Thanks. Great video as always.
I tried using two local models. TinyLlama responded No, but the response was Yes after I logged into the OpenAI website and Groq (actual examples). Llama2 responded No to both. In short, the response is not definitive and depends on model quality. Just sharing. 😄
Is it possible to try with any other LLM model? Can you give me a reference please? @m4tthias
Thank you sir
Welcome
Is it possible to write a Chrome extension that runs an AI model without a backend?
It is possible, but it poses a security vulnerability since anyone can see your API keys. Or you could use something like AWS Lambda and store your API keys there, so you don't have to build a full backend.
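To illustrate the AWS Lambda idea: the extension never holds the key; a small serverless function reads it from an environment variable and forwards the request to the LLM provider. This is a rough sketch, not a tested deployment; the endpoint URL and model name are assumptions, and `buildProxyRequest` is a hypothetical helper.

```javascript
// Build the outgoing request to the LLM provider, attaching the secret
// key server-side so it never ships inside the extension bundle.
function buildProxyRequest(prompt, apiKey) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model: "gpt-4o-mini",
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// Hypothetical Lambda handler (Node.js 18+, which has global fetch).
// The key comes from process.env, which users of the published
// extension cannot inspect. Export it as exports.handler for Lambda.
async function handler(event) {
  const { prompt } = JSON.parse(event.body || "{}");
  if (!prompt) {
    return { statusCode: 400, body: JSON.stringify({ error: "prompt required" }) };
  }
  const { url, options } = buildProxyRequest(prompt, process.env.OPENAI_API_KEY);
  const res = await fetch(url, options);
  return { statusCode: res.status, body: await res.text() };
}
```

The extension then calls the Lambda function URL instead of the provider directly, so publishing to the store exposes no secrets. You would still want rate limiting or auth on the function so strangers can't burn your quota.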
Please make a video on creating an LLM-powered mobile app.
Bro, I need some help, please.
Please reply 😢😢😢😢😢