Hey y'all, in case you didn't get good full-text search results like me, the CEO of Supabase (Paul Copplestone) sent me this to use instead: supabase.com/docs/guides/database/extensions/pgroonga
Watching this 2 months later. Great video, thanks for sharing
Glad you enjoyed it!
3 months in and I'm here enjoying it
4 months here, love it. Gonna put this in my graduate thesis
Thanks for sharing, bro! Greetings from Colombia
My pleasure!
Nice video sir. I have already been experimenting with the Colab - sincerest thanks
Great to hear!
very good workshop. straight to the point
Very informative! A great resource. Thanks for sharing your wealth of knowledge!!
Nice workshop, thank you for sharing! You mentioned early on that you tried decomposing your queries if they were multi-hop or abstract queries. Would you still suggest that approach, or is there any new research specifically on this matter? Imagine a query in which a user wants to retrieve information from multiple documents and get a comparison or summarization.
I'm still doing the same for my app, and what I'm hoping to do eventually is to prompt the query expansion step so it expands in a coherent way. E.g. if the question is about how X affects Y -> find X, find Y
@devlearnllm Thank you for your response. How exactly would you go about this? Have you played with knowledge graphs (GraphRAG) like Neo4j etc.?
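A minimal sketch of how that prompted decomposition step could look; the model name, the JSON-list output format, and the `retrieve()` placeholder are assumptions here, not anything from the workshop:

```python
# Hypothetical sketch: ask an LLM to split a multi-hop question into
# standalone sub-queries, retrieve per sub-query, then answer over the union.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def decompose(question: str) -> list[str]:
    """Return standalone search queries, one per fact the question needs."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": (
                "Split the user's question into standalone search queries, "
                "one per fact needed. Reply with only a JSON list of strings."
            )},
            {"role": "user", "content": question},
        ],
    )
    # Parsing raw model output as JSON is brittle; fine for a sketch.
    return json.loads(resp.choices[0].message.content)

# "How does X affect Y?" -> ["what is X", "what is Y", "how X relates to Y"]
sub_queries = decompose("How does caffeine affect sleep quality?")
# contexts = [c for q in sub_queries for c in retrieve(q)]  # your retriever here
```

A GraphRAG-style setup could replace the flat `retrieve()` calls with graph traversal, but the decomposition step would stay the same idea.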
For experimenting, I would recommend using no database at all. You can simply use cosine similarity (e.g. from torch.nn.functional) or quickly implement it yourself, and you are nearly done. Just use argsort to get the best matches. It's about five lines of code. For easy store/load you can use pickle to serialize/deserialize the object that holds the embeddings. It's fast on CPU too, but of course you can run it on GPU without any bigger changes. No services required.
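A sketch of that setup, assuming you already have an embedding model; the tensor shapes and file name are arbitrary stand-ins:

```python
# In-memory retrieval as described above: cosine similarity + argsort,
# pickle for persistence. No database, no services.
import pickle
import torch
import torch.nn.functional as F

corpus_emb = torch.randn(25000, 384)  # stand-in for your document embeddings
query_emb = torch.randn(384)          # stand-in for an embedded query

# Cosine similarity of the query against every document, then take the top k.
scores = F.cosine_similarity(corpus_emb, query_emb.unsqueeze(0), dim=1)
top_k = torch.argsort(scores, descending=True)[:5]
print(top_k, scores[top_k])

# Store/load with pickle, as suggested.
with open("embeddings.pkl", "wb") as f:
    pickle.dump(corpus_emb, f)
with open("embeddings.pkl", "rb") as f:
    corpus_emb = pickle.load(f)
```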
good point
Nice workshop! I'll definitely try out the hybrid search. Do you reckon it'll work with nomic text embeddings and Ollama?
Most likely!
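For reference, a hedged sketch of what the embedding step could look like against a local Ollama server; the endpoint and response shape are assumptions based on Ollama's REST API, and only the dense half of hybrid search changes:

```python
# Swap the embedder for Ollama + nomic-embed-text; the rest of the
# pipeline (cosine search, full-text search in Postgres) stays the same.
import requests
import torch

def embed(text: str) -> torch.Tensor:
    resp = requests.post(
        "http://localhost:11434/api/embeddings",  # default local Ollama port
        json={"model": "nomic-embed-text", "prompt": text},
    )
    resp.raise_for_status()
    return torch.tensor(resp.json()["embedding"])

vec = embed("how does hybrid search combine dense and sparse scores?")
print(vec.shape)
```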
Incredible content. Thank you.
Much appreciated!
Hey, great content! Thanks for sharing your knowledge. However, instead of just using tsvector in PostgreSQL, you could leverage sparse vector search with the pg_search extension, right?
Yup, they're both full-text search. Or use PGroonga
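For comparison, a sketch of the plain tsvector route, assuming a hypothetical `docs(id, content)` table; pg_search or PGroonga would swap in their own match operators and indexes, not change the surrounding code:

```python
# Rank documents with Postgres full-text search (tsvector + websearch query).
import psycopg2

conn = psycopg2.connect("dbname=rag user=postgres")  # placeholder DSN
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT id, ts_rank(to_tsvector('english', content), q) AS rank
        FROM docs, websearch_to_tsquery('english', %s) AS q
        WHERE to_tsvector('english', content) @@ q
        ORDER BY rank DESC
        LIMIT 5;
        """,
        ("hybrid search postgres",),
    )
    for row in cur.fetchall():
        print(row)
```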
You should've tried Qdrant.
Hi. I have a video request. Is there a way to contact you?
tally.so/r/n9djRQ
@devlearnllm done. Thanks
What happens if we use all 25,000 cases? Will it work?
Most likely. Pinecone, Weaviate and pgvector are very performant.
great
Is it possible to run it with Ollama?
Most likely
Is OpenAI's embedding v3 model better than BERT?
Hard to tell unless experiments are run.
huggingface.co/spaces/mteb/leaderboard
I just use Google 🙃