Great video, Tarun. Just a fix... the upload-image and camera-image options in the code will always return wrong output, because the Streamlit uploader object you are passing to analyze_image() is not a file path; it is a BytesIO-like UploadedFile. As a result, even if you provide a legitimate image it will always fail to analyse it. We need to temporarily save the image and then pass its path, which can be done by adding:

with NamedTemporaryFile(dir='.', suffix='.jpg') as f:
    f.write(uploaded_file.getbuffer())
    analyze_image(f.name)

Don't forget --> from tempfile import NamedTemporaryFile
Great work BTW, keep rocking.
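For anyone following along, here is a minimal sketch of that fix in context. It assumes the app's analyze_image() takes a file path and that the upload/camera widgets are st.file_uploader and st.camera_input as in the video; the stub below stands in for the real function.

from tempfile import NamedTemporaryFile

import streamlit as st


def analyze_image(image_path: str) -> None:
    # Placeholder for the app's real analysis function, which calls the agent.
    st.write(f"Analysing {image_path}...")


uploaded_file = st.file_uploader("Upload a product image", type=["jpg", "jpeg", "png"])
camera_file = st.camera_input("Or take a photo")

image_file = uploaded_file or camera_file
if image_file is not None:
    # The widgets return a BytesIO-like UploadedFile, not a path, so write the
    # bytes to a temporary file and hand the path to the analysis function.
    with NamedTemporaryFile(dir=".", suffix=".jpg") as f:
        f.write(image_file.getbuffer())
        analyze_image(f.name)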
Good catch. Let me fix this ASAP 😅. Thank you for pointing this out; I will make the changes and try to pin this comment or add a description of the changes to the code.
Thank you.
@@AIwithTarun Also add "Please do not analyse any other type of images." to the system prompt, else it will analyse any type of image. 😊 Nothing wrong though; it can be generalised.
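A hedged sketch of where that sentence could go; the constant name and the rest of the prompt text are illustrative, not the repo's actual wording.

# Hypothetical system prompt; only the final guard sentence comes from the comment above.
SYSTEM_PROMPT = (
    "You are an expert food product analyst. "
    "Given an image of a product's ingredient label, explain each ingredient, "
    "its purpose, and any health concerns. "
    "Please do not analyse any other type of images."
)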
The app is updated: github.com/lucifertrj/Product-Ingredient-Agent/
Thank you :)
@@AIwithTarun Great! Off topic: what software do you use to create videos? I want to create a few videos as well, if you don't mind telling me.
@@digiIntuitions Sure. I use an iPhone to capture my face, QuickTime Player to screen record, and finally Final Cut Pro to edit and merge the video.
Initially I was using Zoom.
Great work!! Keep helping with your rocking videos!
Great video, Tarun! It’s really helpful. Can we have a mobile version where we can scan product details and get reviews/ratings to decide whether to accept or reject the product when I’m at Dmart? The details should be available with just a button click, so I can read them later. This should follow the quality control guidelines set by the Government of India. Just sharing some ideas.👋👍
Thank you. Yes, the app is deployed and can be tested directly on mobile as well (ingredients-analyzer.streamlit.app/). You just need to upload the image and you get the results accordingly.
Regarding the reviews and ratings, as of now it's not implemented, but yes, it's easy to achieve that along with the implementation of quality-control guidelines.
Which IDE are you using, and how do you get this kind of terminal output (17:06)?
I am using VS Code. For the terminal it's zsh. When you run print_response you get that kind of result.
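For context, the formatted panels in the terminal come from the framework's print_response helper rather than from the shell itself. A minimal sketch, assuming the phidata Agent API used in this series (module paths differ on newer releases, and the Gemini model id is an assumption):

from phi.agent import Agent
from phi.model.google import Gemini

# Requires GOOGLE_API_KEY in the environment.
agent = Agent(
    model=Gemini(id="gemini-2.0-flash-exp"),  # assumed model id
    markdown=True,
)

# print_response renders the streamed answer as rich, boxed output in the terminal.
agent.print_response("Summarise why ingredient labels matter.", stream=True)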
wow perfect 🥰🥰🥰
Thank you for this series! You are a great teacher 🫶
Thank you 🚀 We are just getting started; it's only been 3 videos so far. More videos are on the way. Keep supporting!
Thank you brother, I always watch your videos.
Thank you, brother. Keep supporting and watching the videos. I hope you build some cool projects with this 🚀
@@AIwithTarun Yes brother, keep uploading videos regularly.
Thank you very much, Tarun. When I try the first approach I get only a few-line message like "The image shows a product package for Bournvita, a nutritional supplement. The package is primarily orange and white," etc., not the full details like your output.
Can you let me know why? I used the same System_prompt, Instructions, etc.
Interesting. Can you set temperature = 0? And may I know which LLM you are using?
@@AIwithTarun Thank you for your prompt response. I used Gemini 2.0 Flash.
@ Can you add temperature = 0 and rerun the code?
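For reference, a minimal sketch of pinning the temperature when calling Gemini through the google-generativeai SDK directly (the video's code may set this via its framework wrapper instead; the model id and image path are placeholders):

import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_GOOGLE_API_KEY")  # placeholder key

model = genai.GenerativeModel(
    "gemini-2.0-flash",  # assumed model id
    generation_config=genai.GenerationConfig(temperature=0),  # removes sampling randomness between runs
)

image = Image.open("product_label.jpg")  # placeholder path
response = model.generate_content(
    ["Analyse the ingredient label in this image in detail.", image]
)
print(response.text)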
Please make more end-to-end RAG LangChain projects with Groq, the Gemini API, and Ollama.
@@subhashchandra3318 We already have 8-9 videos on that on my channel. But yes, project-based videos are pending. Maybe the 2nd week of January.
Do you provide AI agent building services?
@@nikith15 As of now, no. Maybe from next month or February.
Bro, any video on building agents for CRM work?
I haven't planned for it; I need to think about it. Meanwhile, if you have any questions on building it, join our Discord channel and we can have a discussion over there.
I want to create a LocalRAG system (chat with PDF) using Llama 3.2 and text embeddings. However, the results often include hallucinated information. Do you have any suggestions on how to train and test the model to ensure the system provides accurate answers?
There are various factors to check when you are working on RAG using open-source LLMs:
- Have you used the prompt template that Llama 3.2 expects? If your context is being retrieved, you need to augment your prompt to reduce hallucinations (this is not 100% reliable, but it reduces the risk); see the sketch after this reply.
- On the retriever part, you need to check whether the relevant documents are retrieved for the user query. This is where you can try CRAG or re-ranking to improve performance.
You can join our Discord server; we can take this discussion further to see where things are going wrong.
Here is my repo: github.com/lucifertrj/Awesome-RAG/
I have most of the Colab notebooks there, and they use open-source LLMs themselves.
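A minimal sketch of the prompt-augmentation point above, assuming the published Llama 3.x chat-template special tokens and a plain-string prompt; the grounding instruction wording is illustrative:

def build_rag_prompt(context: str, question: str) -> str:
    # Wrap retrieved context and the user question in the Llama 3.x chat template,
    # with an explicit instruction to answer only from the provided context.
    system = (
        "You are a helpful assistant. Answer ONLY using the provided context. "
        "If the answer is not in the context, say you do not know."
    )
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"Context:\n{context}\n\nQuestion: {question}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Hypothetical retrieved chunk and question, just to show the shape of the prompt.
print(build_rag_prompt("Llama 3.2 was released by Meta in 2024.", "Who released Llama 3.2?"))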
Bro, instead of Tavily can we use DuckDuckGo?
@@Rits1804-l4r Yes, we can. Sometimes DuckDuckGo gives a rate-limit error, so I picked Tavily.
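A hedged sketch of that swap, assuming the phidata agent/tools API used in this series (module paths differ on the newer agno package, and the Gemini model id is an assumption):

from phi.agent import Agent
from phi.model.google import Gemini
from phi.tools.duckduckgo import DuckDuckGo

agent = Agent(
    model=Gemini(id="gemini-2.0-flash-exp"),  # assumed model id; requires GOOGLE_API_KEY
    tools=[DuckDuckGo()],  # keyless drop-in replacement for the Tavily search tool
    markdown=True,
)

# DuckDuckGo occasionally rate-limits; if that happens, retry after a short pause
# or switch back to Tavily (which needs TAVILY_API_KEY).
agent.print_response("Find recent articles on food additive safety.", stream=True)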