Would be very cool to see you continue this by implementing RAG along with Stripe so we can try to sell it. It's quite unique, pretty much no one has done that yet, and such content can only be made by someone like you with a solid background in Gen AI and the rest of the tech stack.
Thanks for sharing. I have a special use case, a "form auto filler": this extension should be able to fill forms automatically.
Some useful points:
* store user data in a vector database
* when the data is needed, pull it from the database
* make some changes and fill the form
Can you help me do it?
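The steps above can be sketched roughly like this. This is a toy in-memory vector store standing in for a real vector database (Chroma, Pinecone, etc.), with a hypothetical character-frequency "embedding"; a real extension would use a proper sentence-embedding model and have its content script send each form label to this lookup, then write the returned value into the matching input field:

```python
import math

class InMemoryVectorStore:
    """Toy stand-in for a real vector database."""

    def __init__(self, embed):
        self.embed = embed   # function: text -> list[float]
        self.items = []      # (vector, field_name, stored_value)

    def add(self, field_name, value):
        self.items.append((self.embed(field_name), field_name, value))

    def query(self, text):
        """Return the stored value whose field name is most similar to `text`."""
        q = self.embed(text)

        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb) if na and nb else 0.0

        return max(self.items, key=lambda item: cosine(q, item[0]))[2]

# Hypothetical embedding: a 26-bin character-frequency vector.
# A real app would call an embedding model here instead.
def toy_embed(text):
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

store = InMemoryVectorStore(toy_embed)
store.add("email address", "jane@example.com")
store.add("phone number", "555-0100")

print(store.query("e-mail"))  # -> jane@example.com
print(store.query("phone"))   # -> 555-0100
```

The key design point is that form labels rarely match your stored field names exactly ("e-mail" vs. "email address"), which is why similarity search is a better fit than exact-key lookup.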
Can you please make more extension videos with a free LLM?
Thank you for the amazing video :) I am coding along with you. Nice T-shirt. 👍
Happy to hear that!
Is it possible to write a Chrome extension that runs an AI model without the backend?
Great tutorial! But if possible, could you make another one with Ollama covering other functions, like summarizing or chatting with GitHub? I don't know how Ollama models can interact with the web. Thanks!
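To the question of how Ollama models interact with the web: they don't browse on their own. The extension (or a small backend) extracts the page text and passes it into the prompt. A minimal sketch, assuming a locally running Ollama server with its default `/api/generate` endpoint (the model name and prompt wording are illustrative):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_prompt(page_text, max_sentences=3):
    """Wrap extracted page text in a summarization instruction."""
    return (
        f"Summarize the following web page in at most {max_sentences} sentences:\n\n"
        f"{page_text}"
    )

def summarize(page_text, model="llama2"):
    """Send the page text to the local Ollama server and return the summary.

    The model never fetches the URL itself; the caller supplies the text.
    """
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(page_text),
        "stream": False,  # ask for one complete JSON response, not a token stream
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Usage would be `summarize(extracted_article_text)` after the extension's content script grabs the page body; the same pattern works for "chat with a GitHub repo" by feeding in file contents instead of page text.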
Great suggestion!
How can we verify that the LLM's response is accurate when checking whether a site is a phishing site? Would there be response bias, given that most of the sites/examples used for this use case are not phishing sites? Thanks. Great video as always.
I tried using two local models. TinyLlama responded "No", but the response was "Yes" after I logged into the OpenAI website and Groq (actual examples). Llama2 responded "No" to both. In short, the response is not definitive and depends on model quality. Just sharing. 😄
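One way to put numbers on the accuracy and bias questions raised above is to score each model's yes/no verdicts against a small hand-labeled set of URLs and report per-class recall, not just overall accuracy. A minimal sketch (the sample labels below are hypothetical, not real measurements):

```python
def evaluate(predictions, labels):
    """Score yes/no phishing verdicts against ground truth.

    `predictions` and `labels` are parallel lists of booleans
    (True = "phishing"). Returning recall per class exposes a model
    that just answers "No" to everything: it scores high accuracy on
    a benign-heavy test set but has phishing recall of 0.0.
    """
    correct = sum(p == l for p, l in zip(predictions, labels))
    phishing = [p for p, l in zip(predictions, labels) if l]
    benign = [p for p, l in zip(predictions, labels) if not l]
    return {
        "accuracy": correct / len(labels),
        "phishing_recall": sum(phishing) / len(phishing) if phishing else None,
        "benign_recall": sum(not p for p in benign) / len(benign) if benign else None,
    }

# Hypothetical run: a model that answers "No" to every site looks
# 80% accurate on this benign-heavy set but catches zero phishing pages.
labels      = [False, False, False, False, True]
predictions = [False, False, False, False, False]
print(evaluate(predictions, labels))
```

This also makes model comparisons (TinyLlama vs. Llama2 vs. a larger model) concrete: run the same labeled set through each and compare phishing recall rather than eyeballing individual answers.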
Is it possible to try with any other LLM model? Could you give me a reference, please? @m4tthias
Thank you sir
Welcome
Please make a video on creating an LLM-powered mobile app.
Bro, I need some help, please.
Please reply 😢😢😢😢😢