PGVector Video: ruclips.net/video/FDBnyJu_Ndg/видео.html
☕ Buy a coffee: ko-fi.com/bugbytes
⭐Top resource to learn Python - datacamp.pxf.io/kOjKkV ⭐
Finally something worth watching, without tons of additional libraries or SDK wrappers. Thank you!
Glad you liked the video, tried to keep it simple here and steer away from other SDKs/langchain etc. Thanks for the comment!
This covers RAG, end to end. Thank you
Thanks for watching!
this is very clean and informative.
Thanks very much!
Thanks so much, I followed your tutorial and got Chroma DB working. My config was slightly different, as I wanted to download the MiniLM model into a certain directory first. Tried it with a few sentences using all-MiniLM-L6-v2, and it was really exciting to have your own model working on your machine. Hmm, makes you want to go out and get that NVIDIA GPU 😀. This is definitely a go-to channel to stay updated on Python.
Thanks a lot! And agreed, it's really cool to have a good LLM available locally for easy (and free) usage.
Thanks for this video. I was looking into vector DBs recently.
Hope the video was helpful!
Great as always: simple and effective.
Thanks a lot Marko!
Excellent presentation.
@@SteveCarroll2011 thanks very much!
Great Tutorial
@@abhisheknigam3768 thanks!
@@bugbytes3923 Can you please make a video on RAG and vector databases? Also on Gen AI with Databricks, if possible.
Great session. Thank you
Thanks a lot!
Thanks for this video.
No problem, thanks for watching!
Very useful session
Thank you. I owe you a "pot of coffee" or your favorite beverage!
Thank you!
Very interesting video! Could you please make a tutorial based on this database you created, explaining how to generate an answer to the query with a generative model from OpenAI or Ollama?
It really helped me a lot, thanks!
Glad to hear that, thank you!
🎉🎉🎉
Thanks!
Many thanks, very nicely explained. Any plans to create a tutorial video on a Streamlit app with ChromaDB in the future?
Thanks a lot! For sure, that's in the near future!
Your accent is cool af my guy.
Haha! Thanks a lot bro!
Failed to build chroma-hnswlib
ERROR: Could not build wheels for chroma-hnswlib, which is required to install pyproject.toml-based projects
How do I resolve this error?
Install Anaconda Navigator and, in the Anaconda command prompt, run: conda install -c conda-forge chromadb
This should work. Verify by opening a Python environment and running "import chromadb" after a successful installation.
Many thanks, this helped a lot! However, I've been confused about not getting the right answer from my query even when I ask the exact same question... (My contents are in simple table form, like "today's breakfast - fruit", and there are a lot of specialized words in my content.) Should I train my embedding model first, i.e. make it embed the words following some rules? Thanks a lot.
Could you make a tutorial on Circumeo hosting, please? How to set up a domain name?
I am trying to embed documents using the Together embedding model and the ChromaDB vector database. After embedding, the LLM is still not able to answer some questions that are already in the PDF files I embedded. I am using the Harry Potter part 1 PDF.
You used OpenAI embeddings, but the default solution is free. Will there be a tutorial with a free embedding model?
With the free embedding model, you can just pass the documents (and the query) to the model as demonstrated at the beginning of the video. The Chroma client will automatically create embeddings with the default model.
My issue isn't adding the data, it's deleting it:
try:
    debug_var = collection.delete(where={"source": document_id}, ids=[document_id])
    print(f"\nDEBUG VAR: {debug_var}\n")
    return jsonify({"message": "Curriculum deleted successfully"})
This try block always passes, even when the document doesn't exist. And if it exists, I can try to delete the same document 999 times and it will print "Curriculum deleted successfully" every time. Also, the value of the debug var is always None.
Cool video, but a very strong accent. On par with a strong Indian accent.