Thanks a lot, Madam!!! You are awesome at explaining things in a very calm and simple way, whereas some YouTubers exaggerate. :)
If I had the option to subscribe 1M times, I would do so. But each ID gets only one subscription. You are awesome!!!!
Thank you so much, madam. You really considered my comment regarding RAG implementation without any secret key. Thank you so much again. Keep posting and keep growing!! I will definitely share this video with my whole network. Happy coding!
You are most welcome 🙂
I have only one like option, but I keep trying to like these videos again and again. Really helpful.
Glad my videos helped you 🙂
Thanks.
You're welcome
Very helpful for me
Glad it helped
Very Helpful
Keep it up, mam!!
Thanks a lot
Thanks Aarohi Mam for your valuable video. Can I change the prompt response format in such a way that it fills the details into a fixed template, like filling tender fields from the requirements/specifications in a PDF file? Please guide further through the details!
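A minimal sketch of one way to force a fixed template in the answer, assuming a RetrievalQA-style chain like the one in the video; the tender field names below, and the llm and retriever variables, are hypothetical placeholders assumed to come from the existing RAG setup:

from langchain.prompts import PromptTemplate
from langchain.chains import RetrievalQA

# Hypothetical fixed template: the model is told to fill the named tender
# fields from the retrieved PDF context and nothing else.
template = """Use only the context below to fill in the template.
If a field is not present in the context, write "NOT FOUND".

Context: {context}

Question: {question}

Template:
Tender ID:
Submission deadline:
Eligibility criteria:
Estimated value:
"""

prompt = PromptTemplate(template=template, input_variables=["context", "question"])

# llm and retriever are assumed to already exist from the RAG setup.
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=retriever,
    chain_type_kwargs={"prompt": prompt},
)
print(qa_chain.invoke({"query": "Fill the tender template from the specifications."}))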
Good work
Thank you so much 😀
Very nice and thank you very much.
Please help with ways or examples for training models.
You are the best
@@GradientPlayz Thank you
Thank you very much for your effort.
I want to ask if I can use a Colab or Kaggle notebook instead of running the code on my local machine?
Yes, you can
Please post regular videos about Generative AI - a full course.
I will try my best.
How can I convert my unstructured data into structured data?
Please post a video about how to fine-tune with the "Claude 3.5 Sonnet" API - a full course video for developers.
Noted!
Mam, please create an intelligent chatbot using Streamlit and Langchain (RAG), where the chatbot can receive voice input in Urdu/Hindi, process it, and return both text and audio responses in Urdu/Hindi. The chatbot should be able to interact with users fluently, allowing for seamless audio-to-text and text-to-audio communication in Urdu/Hindi.
Workflow:
● Build the Streamlit interface for real-time Urdu/Hindi audio input and output.
● Integrate Langchain (RAG) with an LLM (Language Model) API to generate dynamic responses based on the user's input (use PDF files only).
● Ensure the chatbot responds not only with a text-based answer in Urdu/Hindi but also converts that response back to audio and plays it for the user.
Noted!
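Not from the video, but a rough sketch of how the audio wrapper around such a RAG setup could look, assuming the SpeechRecognition and gTTS libraries for the speech legs and Streamlit's file uploader for input; rag_answer() is a hypothetical placeholder for whatever LangChain RAG chain is built over the PDF files:

import streamlit as st
import speech_recognition as sr
from gtts import gTTS

def rag_answer(question: str) -> str:
    # Placeholder: call the LangChain RAG chain built over the PDFs here,
    # e.g. qa_chain.invoke({"query": question})["result"].
    return "..."

st.title("Urdu/Hindi voice RAG chatbot (sketch)")

audio_file = st.file_uploader("Upload your question as audio (WAV)", type=["wav"])
if audio_file is not None:
    recognizer = sr.Recognizer()
    with sr.AudioFile(audio_file) as source:
        audio = recognizer.record(source)
    # Speech-to-text; "hi-IN" (Hindi) or "ur-PK" (Urdu) are Google recognizer language codes.
    question = recognizer.recognize_google(audio, language="hi-IN")
    st.write("You asked:", question)

    answer = rag_answer(question)
    st.write("Answer:", answer)

    # Text-to-speech: synthesise the answer and play it back.
    gTTS(text=answer, lang="hi").save("answer.mp3")
    st.audio("answer.mp3")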
Please, it said that I have to download the model and it took 9.5 GB. Is that true? Or is there another method without downloading it?
You need to download the pretrained model. You can try using another LLM that is smaller compared to this model.
@@CodeWithAarohi OK thank u so so so much 💗💗
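One possible way to act on that advice: a sketch of loading a much smaller model through a local Transformers pipeline and wrapping it for LangChain; google/flan-t5-small (roughly a 300 MB download) is only an illustrative choice, not the model used in the video:

from transformers import pipeline
from langchain_huggingface import HuggingFacePipeline

# Load a small seq2seq model locally instead of the large LLM from the video.
pipe = pipeline(
    "text2text-generation",
    model="google/flan-t5-small",  # illustrative small model
    max_new_tokens=256,
)
llm = HuggingFacePipeline(pipeline=pipe)

# This llm can then be passed into the same LangChain RAG chain as before,
# e.g. RetrievalQA.from_chain_type(llm=llm, retriever=retriever).
print(llm.invoke("Summarise retrieval-augmented generation in one sentence."))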
I am not able to load the HuggingFaceEmbeddings. It shows this error:
The specified module could not be found. Error loading "C:\Users\aj441\anaconda3\envs\llmenv\lib\site-packages\torch\lib\fbgemm.dll" or one of its dependencies.
@@amalkuttu8274 Are you running it through Anaconda or the command prompt?
@@CodeWithAarohi anaconda
I also faced the same issue. Instead of using the requirements.txt file, just install the packages directly using these commands:
1. conda create -n env_langchain2 python=3.10
2. conda activate env_langchain2
3. conda install pytorch torchvision torchaudio cpuonly -c pytorch
4. pip install transformers
5. pip install sentence-transformers
6. pip install langchain langchain_community langchain-huggingface langchain_experimental langchain_chroma langchainhub
7. pip install streamlit
8. conda install jupyter
9. jupyter notebook
Then test your installation by running this script in Jupyter Notebook:
import torch
import transformers
import sentence_transformers
import langchain
print("PyTorch version:", torch.__version__)
print("Transformers version:", transformers.__version__)
print("Sentence Transformers version:", sentence_transformers.__version__)
print("LangChain version:", langchain.__version__)
It worked for me! Let me know if you still face issues.
@@eashan2405 I will surely look into that.