Thank you for watching!
Resource links:
Neon DB - fyi.neon.tech/gs1
AWS Transcribe: aws.amazon.com/pm/transcribe
Create an AI teacher using swarm agents: a personal AI teacher for all subjects, with proper accuracy, that modifies the content according to my level of understanding and preferred format.
Just what I was looking for!!
Building a project with a similar specification. This video helped me understand the system design aspect of it.
Cheers!
Wow, superb!! Keep up the smart work, waiting for your next video.
Very nicely explained. Just right amount of information.
Thank you!
Keep UP the GOOD WORK , your videos are very good and of good quality
Thank you!
Very cool and concise video. Was not aware of this and thanks for sharing.
Glad it was helpful!
Thank you very much, Sir. I was planning a small project as I am jobless now, and this will be a perfect fit if I can use it properly. In case I get into any design issues, I will consult here. Thanks again. Have a great time learning and teaching. All the best.
All the best!
Bro thankyou for this concept😊😊😊
Great video. Loved your humour: "depending on your salary you can set up the instance you want..." xD
😁
I had this challenge previously. Thank you so much for this.
Cheers!
Very informative. Thanks for this.
Throttled my network to read the tech bits on your website.
Couldn’t find a Community menu, I think that will engage users. More users on website may lead to more conversion.
Hahaha, that's a nice way to read the tech bits 😁
I am looking to add a community section in 2025. Thanks for the feedback!
a muchhh needed video, thankss a lottt sirrr😭😭😭😭😭😭😭😭😭😭😭😭
Cheers!
Amazing! I want to do a personal project based on nearly this concept but I want everything to be locally handled including the chatgpt answer generation part. Although it's not related to text but majorly images. Great video anyways.
Go for it!
The generation is also possible if you have your own transformers.
I have one query: what are you essentially caching? Is it only the semantic query relevance when multiple users ask similar queries?
Would appreciate it if you could throw some light on this.
Great video, Gaurav. Also, from where do I get that cool tshirt?
Thanks!
I got it as a gift, so I am not sure where. I think it's from a creator shop in Instagram though.
Hi Gaurav, did you consider elasticsearch as well? I mean just for benchmarks
I have used ElasticSearch (OpenSearch) earlier: it's very expensive and not as good as a vector DB like PGVector.
The problem with text search is that it relies too much on written words instead of the context between them. Maybe I could have used the OpenSearch better, but my experience with Neon (PGVector) is better.
Hey Gaurav, excellent explanation as always. I have a couple of questions I wanted to ask:
1. How do you create a vector database? Does Neon DB assist in this process? Do we simply pass the transcript to Neon DB, and it returns the vector file for that transcript?
2. Is this what our request to ChatGPT looks like? -> "What is load balancing?" and here are some vector databases for context: transcript-vector.txt.
3. So, does ChatGPT receive the query from the user and the vector database for the transcript provided by Neon DB?
4. I also don't understand how Neon DB selects which transcript vector to send to ChatGPT.
Thank you for the video.
Thanks Raymax!
1. Creating a database is a one-click operation in Neon. No, we have to first get an embedding for the transcript using a model. There are libraries like LlamaIndex for this.
2. User prompt: "What is ChatGPT?" System prompt: "You are a system design teacher who answers in 2-3 sentences."
Something like the above.
3. ChatGPT talks to our server, the InterviewReady server. Our server finds files most relevant to a query from NeonDB and uses them to add context to the query.
4. It's a vector embedding search. There are various search algorithms for this, like k-nearest neighbours and HNSW.
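The retrieval step in point 4 can be sketched in a few lines of plain Python: a brute-force k-nearest-neighbour search over cosine similarity. (HNSW is an approximate index that speeds this up at scale; the file names and tiny 3-dimensional "embeddings" below are made up for illustration.)

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def k_nearest(query_vec, docs, k=2):
    # docs: list of (name, embedding) pairs; returns the k most similar names.
    ranked = sorted(docs, key=lambda d: cosine_similarity(query_vec, d[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Toy vectors standing in for real embedding-model output.
transcripts = [
    ("load-balancing.txt", [0.9, 0.1, 0.0]),
    ("caching.txt",        [0.1, 0.9, 0.0]),
    ("sharding.txt",       [0.8, 0.2, 0.1]),
]
query = [0.85, 0.15, 0.05]  # embedding of "What is load balancing?"
print(k_nearest(query, transcripts))  # → ['load-balancing.txt', 'sharding.txt']
```

The most relevant transcripts come back first, and those are what get injected into the prompt as context.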
@@gkcs thankyou sir !! 🫡
Thank you Gaurav 😁
Thank you!
Hey Gaurav,
can we also store images in a vector database?
For example, can I store YouTube thumbnails, and while prompting GPT, use those as references to get a better outcome?
Love your content 🙌🏽
Thanks!
You can store the files. The images can be converted to vectors, yes. But OpenAI is not suitable for this type of embedding, by my limited understanding.
A multimodal system will do better.
@@gkcs thanks 👍
Great explanation, and thanks a lot for this concept! I have one doubt: will the data in Neon DB be static? How are you storing the data in the DB for a particular transcript?
Thanks! Yes the data is static for a transcript. I can update the vector embeddings by clearing the data and replacing all the vectors again.
@@gkcs thanks a lot for the reply 😇
1. Is Neon DB hosted in your VPC?
2. How do you maintain files/embeddings in the vector DB? (Say, for another RAG project where users want to chat with their own documents: how do we query the vector DB when multiple users each have their own documents?)
3. Do you think running an LLM instance locally (e.g. Ollama) is a better option?
Ammmaazing boi... thanks
Cheers :D
Small doubt: at the beginning, you mentioned using ChatGPT to store transcripts in the vector database. I believe you meant to say that you used a text embedding model from OpenAI to generate and store vector embeddings.
That's right. We got the embeddings with OpenAI:
platform.openai.com/docs/api-reference/embeddings
No. He stores the transcripts. He uses the "Retrieval" part, i.e. getting similar video files using Neon, and tells the GPT model via the API to consider files with indices [i, j, k, ...] for augmentation when generating the answer. I intentionally broke the RAG down across the sentence so that you know it is not something fancy.
I really really like your videos
Thank you!
Sorry, may I kindly know what steps are involved in augmentation (the A in RAG)?
helpful, thank you.
Thank you :D
Hi, so am I correct in understanding that after feeding ChatGPT all your transcript files, queries with no context didn't yield good responses; then with context pointing to a specific file they yielded better responses; and with context from multiple related files they yielded even better responses?
That's right 😁
great application :)
can you please share the source code so that the extensions can be made out of it.
Thanks Gaurav
Cheers 😁
Can we utilise the Not Diamond AI tool, which provides free APIs for different AI models? Does it work the same way?
Awesome 👍
Thank you!
I am using a DigitalOcean droplet to host Postgres.
Can I use Neon?
How do you rate Neon?
I found Neon fast and good.
@ I want an overall rating, sir.
Like: can we trust it for a production-ready app, which may cater to around a lakh users?
@@BloggerVikash Read the fine manual
@@PrabhakarKumar97 yes i have gone through
Currently avoiding Neon because of the region: an Indian region is not available.
4:18 uff the humour
:p
Another way to get transcripts is to use YouTube itself: it generates transcripts whenever a video is uploaded.
That's a great idea, thank you!
Are you suggesting we pass all the matching files to get queries answered? Would that not be very expensive?
When a question is asked, it needs to be sent to Neon DB, where similar context transcripts are retrieved from the database. These transcripts are then sent to OpenAI, and the response is returned and displayed to the user. I want to know if this process happens instantly, or if I am comprehending it incorrectly.
It doesn't happen instantly, it takes about 6 seconds for OpenAI to respond.
We are working on showing a loader while this happens, so the UX is good.
While working with AWS Transcribe for audio queries, it's very slow and time-consuming. Is there any way to improve the performance of this process, especially when using an audio .mp3 file from an S3 bucket?
Adobe takes time too and so does Vimeo. I found AWS easier since it's an API and runs async.
Waao very useful
So basically:
you are storing raw files using the OpenAI Files API,
storing vector embeddings in the Neon vector DB,
and when a user makes a prompt you fetch the similar vectors from Neon and inject them into the prompt sent to the OpenAI text generation API.
What if a user deletes a file? We can delete it from OpenAI Files, but how do we delete the vector embedding of that particular file?
I have the same doubt too.
@deesiInGermany I found that we have to attach metadata to each vector, and when the file is deleted, we have to search for embeddings that have the same file name in their metadata.
I also wondered: if we cannot use uploaded files directly in the prompt, like "refer to file {file1}", then what's the point of storing them on the OpenAI server? We should store them on our own server for better flexibility in the future.
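The metadata-based deletion idea above can be sketched in memory like this (the `filename` and `chunk` keys are made up for illustration; in Neon/pgvector the same idea becomes a `DELETE` filtered on a metadata column):

```python
# Each stored vector carries metadata pointing back at its source file.
vector_store = [
    {"embedding": [0.1, 0.2], "metadata": {"filename": "video1.txt", "chunk": 0}},
    {"embedding": [0.3, 0.4], "metadata": {"filename": "video1.txt", "chunk": 1}},
    {"embedding": [0.5, 0.6], "metadata": {"filename": "video2.txt", "chunk": 0}},
]

def delete_embeddings_for(filename, store):
    # Drop every embedding whose metadata names the deleted file.
    return [row for row in store if row["metadata"]["filename"] != filename]

vector_store = delete_embeddings_for("video1.txt", vector_store)
print(len(vector_store))  # → 1 (only video2's chunk remains)
```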
@@dhruvwills Thanks, Dhruv, for your reply.
I am starting my AI journey with a project. Just confused here: if we are storing our files in the vector DB and will only share part of the context with the question to GPT, then what's the point of storing the files in ChatGPT? Also, storing these files will consume tokens, which means extra cost.
@@deesiInGermany The primary objective of files in GPT is to fine-tune the model, but if you do that, it will generalise across all users.
That is why we are providing the content in context. In this case, storing files on OpenAI is just acting as a database, which is not necessary: you can store files on your own server, generate embeddings using the OpenAI API, and then store those embeddings in Neon DB.
Here is how it would work:
you send the text that was stored in the file to the OpenAI API, and OpenAI returns an embedding, which you store in the Neon vector DB.
Earlier there used to be Answers and Search APIs in OpenAI, but those are now deprecated. They suggest using embeddings now.
So basically, attach the embeddings as context, and you are good to go.
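That workflow can be sketched like this. The model name, table name, and helper functions below are assumptions for illustration; the actual HTTPS call to OpenAI and the database write are left out, and only the request body and the parametrized pgvector insert are shown:

```python
import json

def build_embedding_request(text, model="text-embedding-3-small"):
    # Body for a POST to OpenAI's /v1/embeddings endpoint (model name assumed).
    return {"model": model, "input": text}

def build_insert_sql(table="transcript_embeddings"):
    # Parametrized pgvector insert; table and column names are illustrative.
    return f"INSERT INTO {table} (filename, embedding) VALUES (%s, %s);"

payload = build_embedding_request(
    "Load balancing distributes traffic across servers.")
print(json.dumps(payload))
print(build_insert_sql())
```

You would send `payload` with your API key, take the embedding vector from the response, and run the insert with the file name and vector as parameters.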
@dhruvwills perfect Dhruv. I'll give it a try. Thank you
would you be able to open source the entire solution? Like a git repo?
Bro, how do we make embeddings in the vector database? Do we use the OpenAI embeddings API for that, or does Neon do it?
You can use openAI embeddings, yes.
You can also use other open-source embedding models. We used OpenAI because it's something we have heard of :p
Thanks
🎉🎉 Amazing
Cheers!
How do you check that your system is not used for anything other than asking questions related to your videos? Amazon also got this wrong on its first try, where it wrote Python code for the user.
People like me who do not have money, or do not want to spend money on ChatGPT, can use Ollama to run an LLM on a local instance: run Llama 3 or Mistral for RAG.
Yes that's right.
like marvel, the post credits 🤣🤣🤣
:D
PG-vector 💀
Woke up to this
Good morning!
💖
pg vector = PostgreSQL ?
@@Hercules159 Yes, it's a vector database extension for Postgres.
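For a sense of what the extension gives you, here is a minimal sketch of the SQL involved, generated as Python strings. The table and column names are made up; `<=>` is pgvector's cosine-distance operator (`<->` is Euclidean distance):

```python
# Enabling the extension is a one-time statement per database.
ENABLE_PGVECTOR = "CREATE EXTENSION IF NOT EXISTS vector;"

def nearest_transcripts_sql(table="transcripts", column="embedding", k=3):
    # Order rows by cosine distance to a query-vector parameter, closest first.
    return (
        f"SELECT filename FROM {table} "
        f"ORDER BY {column} <=> %s::vector LIMIT {k};"
    )

print(nearest_transcripts_sql())
```

You would run the generated query with the query embedding as the parameter, and the `LIMIT k` rows are the context passed to the model.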
@gkcs thx a lot
Osm
Thank you!
hahaha ! that laugh for 100$ was epic
Bootstrapped budget constraints :P
Waooo
Thank you!
Awesome 👌
Thanks!