👋🏼 Check out the article version of the video here:
www.pinecone.io/learn/sagemaker-rag/
James can teach a 9-year-old what RAG is!
I try my best haha
This channel provides so much valuable information for free and I really appreciate it!
glad to hear :)
Amazing video, thank you very much! It's obvious there was a lot of work involved to make it in such a well-structured way. Very easy to follow, you know how to teach :)
I want to thank you for this walkthrough. This was very informative. And I know it must have taken quite a lot of time and effort to make it. So thank you!!
Thank you for the well-described demo. The recommended vector DB for this stack is probably OpenSearch, which does the same as Pinecone, but you have more control and you own it, and it's more expensive.
meh, OpenSearch doesn't scale well beyond 1M vectors and their vector search implementation is nothing special - if you want open source I'd recommend Qdrant (also Rust, like Pinecone) or Weaviate
@jamesbriggs Thanks for the video, James! I was wondering what issues you've experienced with scaling OpenSearch? We're considering it for a large-scale business use case and had thought it would be a good fit.
Thank you so much... really appreciate it... love from India
Thank you for your presentation. I clicked the Subscribe button, although I didn't delve into the video content. During your talk, I recall you mentioning the open-source LLM and discussing AWS pricing. This led me to prioritize a cost-effective solution that allows for scalability. Have you considered running an Ollama model locally and setting up a tunnel with a port endpoint for a public URL? I appreciate any feedback you can provide. 😊
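Running Ollama locally is definitely a cheap way to prototype - as a rough sketch (not from the video), calling its local REST API from Python looks something like this; a tunnel would just swap localhost for your public URL, and the model name is only an example:

import requests

# Ollama serves a REST API on localhost:11434 by default
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Explain RAG in one sentence.", "stream": False},
)
print(resp.json()["response"])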
Thank you for your great effort 🤗
Awesome content, thank you so much. Very good explanation. I love watching your videos. I try to follow them and learn 😊
happy to hear that! :)
Thanks for your valuable videos as always. Can you discuss fine-tuning Llama 2 7B or 13B on a dataset and deploying it in SageMaker?
Great video, and I like how you referred to your flow chart diagram. I am working on a "corpus" of publicly available engineering technical standards documents that are only available as PDF or Word documents. I want to encode the words (tokens) in those documents into a vector database, run them through an LLM (GPT transformer architecture), and then use RAG to focus only on the tokens (words) in that corpus of engineering standards. Why? Because right now I do a "Control-F search," which takes forever with my clients, to find the standards, which include both words and diagrams/pictures (different modalities) -- so instead of spending hours on Control-F, my plan is to convert those documents into a vector database and enable "generative search" in natural language instead of a Control-F search. Does this make sense? Your video is giving me the pathway to success.
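That's essentially the standard RAG pattern. As a rough sketch of the "generative search" step (assuming an already-populated Pinecone index and the MiniLM embedding model; the index name and query are made up for illustration):

from sentence_transformers import SentenceTransformer
from pinecone import Pinecone

model = SentenceTransformer("all-MiniLM-L6-v2")
pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("engineering-standards")  # hypothetical index name

# embed the natural-language question and retrieve the most relevant chunks
query = "What is the minimum flange thickness for class 150 piping?"
xq = model.encode(query).tolist()
res = index.query(vector=xq, top_k=3, include_metadata=True)

# ground the LLM in the retrieved text instead of searching manually
context = "\n".join(m["metadata"]["text"] for m in res["matches"])
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

The retrieved chunks then go into the LLM prompt, replacing the Control-F step entirely.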
Well just too good
Excellent
Great video
Will there be a video on deployment? Great video btw.
Thank you! This is very informative! When we put our embeddings into the Pinecone vector DB, is our data going outside? I would be OK pushing my sensitive data to an AWS S3 bucket, but where does that Pinecone DB reside?
@jamesbriggs Hi James, thank you for the amazing video. I have a question: is it possible to deploy both models (embedding and LLM) on the same endpoint? Just to save money, considering that in RAG pipelines the embedding step and the retrieval are sequential steps.
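If cost is the main concern, one option (not the setup from the video) is a SageMaker multi-model endpoint, which serves several model artifacts from one endpoint - a rough sketch, with names and paths as placeholders:

import sagemaker
from sagemaker.model import Model
from sagemaker.multidatamodel import MultiDataModel

role = sagemaker.get_execution_role()

# one container definition shared by every model on the endpoint
base_model = Model(image_uri="<inference-container-uri>", role=role)

mme = MultiDataModel(
    name="rag-multi-model",                      # hypothetical endpoint name
    model_data_prefix="s3://my-bucket/models/",  # hypothetical S3 prefix of *.tar.gz artifacts
    model=base_model,
)
predictor = mme.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")

# each request is routed to one artifact under the prefix
emb = predictor.predict({"inputs": "embed this"}, target_model="minilm.tar.gz")

The catch is that all models must run in the same container, so in practice an embedding model and an LLM often still end up on separate endpoints.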
You need to define your llm in step 2 ("asking the model directly"):

from sagemaker.huggingface import HuggingFacePredictor

llm = HuggingFacePredictor(
    endpoint_name="flan-t5-demo"  # use the name of your deployed endpoint
)
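Once the predictor is defined, querying it is just a predict call - a rough sketch, since the exact payload format depends on how the endpoint was deployed:

result = llm.predict({
    "inputs": "Which instances can I use with managed spot training in SageMaker?"
})
print(result)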
How to create a vector database for PDF documents?
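One possible pipeline, as a rough sketch (assuming pypdf, sentence-transformers, and the Pinecone client; file name, index name, and chunk size are made up for illustration):

from pypdf import PdfReader
from sentence_transformers import SentenceTransformer
from pinecone import Pinecone

# extract raw text from the PDF
reader = PdfReader("document.pdf")
text = "\n".join(page.extract_text() for page in reader.pages)

# naive fixed-size chunking - a real pipeline would split more carefully
chunks = [text[i:i + 1000] for i in range(0, len(text), 1000)]

# embed each chunk and upsert it into a Pinecone index
model = SentenceTransformer("all-MiniLM-L6-v2")
pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("pdf-demo")  # hypothetical index name
index.upsert(vectors=[
    (str(i), model.encode(chunk).tolist(), {"text": chunk})
    for i, chunk in enumerate(chunks)
])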
What is the best way to learn deep learning fundamentals via implementation (say, picking a trivial problem like building a movie recommendation system) using PyTorch, as of Aug 26, 2023?
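In case a concrete starting point helps, a minimal matrix-factorization recommender in PyTorch might look like this sketch (toy data and sizes, purely for illustration):

import torch
import torch.nn as nn

class MatrixFactorization(nn.Module):
    def __init__(self, n_users, n_movies, dim=16):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.movie_emb = nn.Embedding(n_movies, dim)

    def forward(self, users, movies):
        # predicted rating = dot product of user and movie embeddings
        return (self.user_emb(users) * self.movie_emb(movies)).sum(dim=1)

# toy interactions: (user_id, movie_id, rating)
users = torch.tensor([0, 0, 1, 2])
movies = torch.tensor([1, 2, 0, 1])
ratings = torch.tensor([5.0, 3.0, 4.0, 1.0])

model = MatrixFactorization(n_users=3, n_movies=3)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(users, movies), ratings)
    loss.backward()
    opt.step()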
Neat, but why? Is SageMaker just LangChain hosted on AWS?
no, it's more like Colab + ML infra; you can also use LangChain with SageMaker - the "why" is the infra component, hosting open-source LLMs is super easy
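For anyone curious what LangChain + SageMaker looks like, a rough sketch (endpoint name, region, and payload format are assumptions - the content handler has to match how your model was deployed):

import json
from langchain.llms import SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler

class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt, model_kwargs):
        return json.dumps({"inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output):
        return json.loads(output.read().decode("utf-8"))[0]["generated_text"]

llm = SagemakerEndpoint(
    endpoint_name="flan-t5-demo",  # hypothetical endpoint name
    region_name="us-east-1",
    content_handler=ContentHandler(),
)
print(llm("Explain RAG in one sentence."))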
your code doesn't run for shit, bro
In the article you use the wrong model, 'HF_MODEL_ID':'meta-llama/Llama-2-7b', but it's supposed to be MiniLM.
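For reference, deploying the MiniLM embedding model would look roughly like this sketch (framework versions and instance type are assumptions and may need adjusting):

import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()

hub = {
    "HF_MODEL_ID": "sentence-transformers/all-MiniLM-L6-v2",
    "HF_TASK": "feature-extraction",
}
embedding_model = HuggingFaceModel(
    env=hub,
    role=role,
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
)
encoder = embedding_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)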