Thanks! Which route is more logical to take for a project built with gpt-4o-mini? The SQL pipeline/chain from the previous video, or the Knowledge Graph approach in this video? 💯
Thanks for the support, @cevat_kelle!
The choice between approaches (agents, RAG, knowledge graph, etc.) depends on your data type and goals. If you need to connect multiple databases where relationships are key, the knowledge graph is a solid option. For Q&A with unstructured data without building a graph, RAG works best. If you're querying SQL databases and interacting with tabular data, agent-based approaches are ideal. It all comes down to your objectives and data structure.
I recommend checking my next video (it will be uploaded in a couple of days). It is a more advanced version of what I have covered so far. There, I'll cover how to design an agentic system that connects to multiple databases (vectorDB, SQL DB, etc.) and selects the right one automatically for answering questions (RAG and SQL agents will work together). I’ll also demonstrate querying large SQL databases when agents can't retrieve the correct answer. I assume it will be an interesting scenario for industrial applications.
Also, based on experience, gpt-4o-mini does not work well as a SQL agent. I recommend using gpt-3.5-turbo or gpt-4, depending on your budget. 4o-mini does well on RAG, though. I will discuss this as well in the next video.
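To give a rough picture before that video is out, here is a minimal routing sketch, assuming LangChain with a Chroma vector store and a SQLite database. All paths, names, and tool descriptions are hypothetical, and this is not the upcoming video's code:

```python
# A hedged sketch of a "router" agent: two tools (RAG over a vector DB,
# a SQL agent over a relational DB) and a top-level LLM that picks one
# per question. All file paths and names here are assumptions.
from langchain.agents import AgentType, Tool, initialize_agent
from langchain_community.agent_toolkits import create_sql_agent
from langchain_community.utilities import SQLDatabase
from langchain_community.vectorstores import Chroma
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

llm = ChatOpenAI(model="gpt-4", temperature=0)

# Tool 1: RAG over a vector DB of unstructured documents.
vectordb = Chroma(persist_directory="vectordb", embedding_function=OpenAIEmbeddings())
retriever = vectordb.as_retriever(search_kwargs={"k": 3})

# Tool 2: a SQL agent over a relational DB.
sql_db = SQLDatabase.from_uri("sqlite:///company.db")
sql_agent = create_sql_agent(llm=llm, db=sql_db, agent_type="openai-tools")

tools = [
    Tool(
        name="document_search",
        func=lambda q: "\n".join(d.page_content for d in retriever.invoke(q)),
        description="Use for questions about the unstructured documents.",
    ),
    Tool(
        name="sql_database",
        func=lambda q: sql_agent.invoke({"input": q})["output"],
        description="Use for questions that require querying the SQL database.",
    ),
]

# The top-level agent reads the tool descriptions and routes each question.
router = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS)
print(router.invoke({"input": "How many orders were placed last month?"})["output"])
```

The key design point is that routing happens through the tool descriptions, so the quality of those descriptions matters as much as the model itself.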
@@airoundtable Fantastic! I am looking forward to it. Great content, please keep up the good work!
Is it possible to stream the result in chunks while using the GraphCypherQAChain?
Thank you, Farzad-R, for providing such insightful content on the latest advancements in AI, including RAG using Knowledge Graphs and LLM Agents; it has been incredibly informative and inspiring!
Thanks for the kind words! I'm glad the content was helpful
Indeed, OMG thank you very much for the incredible detail and your patient and thorough explanations. I am excited for the remainder of the series.
You have no idea how useful this tutorial is... thank you very much.
Thanks, I am glad it helped!
Hello, could you please share the link to the previous video with the SQL agent that was mentioned at the beginning? Many thanks!
Previous video of this series: ruclips.net/video/ZtltjSjFPDg/видео.htmlsi=xtdg3UOq3uFJsih4
Next video of this series: ruclips.net/video/xsCedrNP9w8/видео.htmlsi=Tj9ddmIvqqYMuShW
In this series you've used the OpenAI API throughout, but I want to use an open-source LLM like Llama 3.2. Would it give me results like the OpenAI API, or similar to these?
It depends on the LLM. If you choose a powerful model, you can get good results. But the challenge would be on the hardware side.
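For a rough idea of what the swap looks like, here is a hedged sketch using Ollama to serve Llama 3.2 locally with LangChain. It assumes the `langchain-ollama` package and a local Ollama install with the model pulled; it is not code from the series:

```python
# A minimal sketch of swapping OpenAI for a local open-source LLM.
# Assumes: `pip install langchain-ollama` and `ollama pull llama3.2`.
from langchain_ollama import ChatOllama

# Only the LLM object changes; the chains/agents built around it stay the same.
llm = ChatOllama(model="llama3.2", temperature=0)
print(llm.invoke("In one sentence, what is a knowledge graph?").content)
```

Just keep in mind that smaller open-source models often struggle with the structured outputs (Cypher, SQL) these agents depend on, so results will vary with model size.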
You are super!!! No 15-minute BS that others call projects. Thank you so much.
Appreciate it! I am glad you liked the video. Thanks for watching
Thank you for the thorough course. You saved me a lot of time and effort in starting to work with Neo4j.
Here's an idea for improvement: some questions may require thinking step by step, and each step may need its own query to retrieve information from a database. So even though there is a single question, answering it may take several queries, in some cases against a vector database and in others against a graph database.
You might consider using function calling and further improving the prompt
Thanks for watching! I am glad the video was helpful.
You are right, and that is a very good point. As I mentioned in the video, all these agents can be combined in bigger systems to solve more complex problems such as the one you mentioned.
Thanks for the insight!
We have a project with hundreds of SQL tables, each with dozens of columns. We create the context for the SQL agent by first retrieving schema info and catalogue examples from a vector database. Do you think a graph is a good approach to improve the agent in this scenario, or could the other approach for tabular data fit better? Thanks
It really depends on what you want to achieve from the system. If the relationships between the contents of the tables are important, and if you cannot create the connection by implementing logic and using SQL agents directly on those tables, then a knowledge graph can be a good solution. But again, it is hard to give a precise answer without knowing the details of the project and the objectives.
@@airoundtable thank you!!
A++ video. Very informative and detailed.
Thanks! I am glad you liked the video
This video is really helpful to those who are stuck with RAG and tabular data. Quick question: when would you use a Graph agent, and when would you use a SQL agent? And what do you do if you have a mix of text and tabular data?
Thanks!
The main difference is that a SQL LLM agent is good for querying databases. However, if you are planning to extract specific information and details from a series of data, and you are looking to connect some data points to create more meaningful data paths, a knowledge graph (KG) is the way to go. KG is for more specialized use cases, imo.
How about connecting to an existing database rather than creating one as you showed? And also, what if the existing database contains both numeric values and letters? For example: What is the status of the customer order? How many orders did the customer request? What framework is best to use?
It depends on the structure of the data. If it is tabular or SQL data, the best way to interact with it is using SQL agents; I have a detailed video on those agents. If answering a question requires combining knowledge across multiple databases, then graph agents can be the better choice. And finally, graph agents only work with graph databases. In case you have other databases such as SQL, I recommend you watch my other video that is focused on those databases:
"Chat with tabular data using sql agents"
Video full of knowledge and well explained. I am looking forward to seeing your channel grow more!
Thanks! I appreciate the kind words. I am glad that the video was helpful
Hi, can you recommend a vector DB for RAG using CSV data? I have 1000+ rows. I also want to deploy the chatbot on a server, so which database should I use, Pinecone or Chroma? If I use Chroma, it will save embeddings locally, right? So can it be used on a server, or should I use Pinecone instead?
It depends on your server. You can also use Chroma on a server. To find the best choice, I'd need much more info about the project. But both Chroma and Pinecone are great choices, so whichever works for you would be a good choice.
@@airoundtable I am using Azure services for deployment, and I am working on a multi-agent chatbot that includes conversational RAG along with some extra function calls for added functionality. My dataset is a CSV that contains 1000+ rows, as I mentioned.
As usual, a useful and powerful video. This is exactly what we need. I have a suggestion for a detailed video: Chat with documents using a knowledge graph database | converting a document to a KG and querying with Cypher. I personally need it, because I'm working on my Master's project with critical data that doesn't accept any LLM hallucination, in addition to RAG's retrieval limitations... kindly keep that in mind; I'm waiting for it.
Thanks for your effort :)
Thanks! Happy to hear the content was useful. That is indeed the subject of the next video (RAG with Knowledge Graph on PDF and text files). I am writing the code for it. Just to mention, it won't be focused on Q&A, as I already described how we can perform Q&A with documents using a knowledge graph through the Microsoft project I explained in this video. The next video will be focused on RAG, so it will have some uncertainty at the end due to the intrinsic characteristics of RAG. But if you want your project to be accurate, I recommend an approach similar to the Microsoft project on text data.
Is there a video talking about how to combine RAG, SQL agents, and Knowledge Graphs?
I don't have a video about combining them. But to combine them, first there needs to be a logical use case that requires a specific roadmap and mapping, since combining them will not solve a general problem. But in case you have a use case in mind, you can definitely do it.
omg this is such an underrated video i was dying for such content tytytytyt
Thanks :)) I am glad you liked the content.
Which Python version are you using for this? Some notebooks crash, so I'm suspecting Python version issues. Very useful tutorial, by the way! Great work!
Thanks!
I use 3.11
Hi, I can't find the URL of Neo4j Desktop for the connection.
I don't have Neo4j on my desktop anymore, but there was a way that I could access the database's information in the desktop app. I think I was using the 3 dots next to the graph database name but I am not sure. If you get it right, it will open a new window and give you access to do a bunch of stuff (ex. querying the database) and on that page, you can also see the URL. Hope this helps.
Can we use the Grok API, or will the performance not be the same as OpenAI?
I never tested Grok. I am not sure which one would perform better (I have a sense that GPT-4 would perform better). But please feel free to test and check the results.
Awesome tutorial! 👍
Thanks!
What if I have data like a company's annual report, which consists of lots of tables, graphs, and text? How do I store this data in a vector database? As you can guess, almost every table will have its own format, column names, etc. I am able to store and use the text part properly, but how do I go about the tabular data?
There are multiple ways to properly handle tabular data, but they very much depend on the nature of your data and how hard you want to go at the problem.
1. There are some libraries that you can use for extracting information from tabular data. Example: camelot (see the sketch after this list).
2. Depending on the nature of your data, if you can prepare a dataset and fine-tune multi-modal LLMs, they are able to parse data from your tables. You can convert a table to an image, for instance by using the pymupdf library, and then pass it to the model for extracting information. My next video is about this topic, and I will fine-tune a model to extract JSON data from images of receipts. (The video will be out in one or two weeks.)
3. The `unstructured` library has also put a lot of effort into this task, but the last time I tested it, the approach was not as efficient as needed.
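As a rough illustration of option 1, here is a hedged sketch with `camelot`, assuming `pip install camelot-py[cv]` and a text-based (not scanned) PDF; the file name is hypothetical:

```python
# Option 1 sketch: extract tables from a PDF and serialize them for indexing.
import camelot

# "annual_report.pdf" is a hypothetical file name.
tables = camelot.read_pdf("annual_report.pdf", pages="1-end")
print(f"Found {tables.n} tables")

# Each table comes back as a pandas DataFrame; from here it can be saved
# to CSV or rendered to text/markdown before embedding it in a vector DB.
for i, table in enumerate(tables):
    table.df.to_csv(f"table_{i}.csv", index=False)
```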
Great video!
Question: I want to use a graph DB to combine my unstructured knowledge in PDFs with the structured user data in MongoDB. My aim is that the LLM can retrieve both the necessary data and the knowledge needed to solve the problem when approaching a user request. The only problem is that the user data is changing. Is there a way to update my graph DB every time I add/change something in my MongoDB (which my application is essentially running on)?
Thanks!
Yes, you can achieve this. You should create an automated pipeline that monitors changes in your MongoDB database and updates the GraphDB accordingly. Instead of recreating the GraphDB from scratch each time, you should implement a mechanism that reflects only the recent changes from MongoDB.
To do this, you need logic that detects changes in MongoDB and updates the GraphDB appropriately. For example, you can set up multiple checks, and for each check, use different Cypher queries to add or modify content in GraphDB based on the changes detected in MongoDB.
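As a rough sketch of that pipeline, assuming MongoDB change streams (which require a replica set) and the official Python drivers; all connection strings, collection names, and node labels here are hypothetical:

```python
# A hedged sketch of a MongoDB -> Neo4j sync loop using change streams.
from pymongo import MongoClient
from neo4j import GraphDatabase

mongo = MongoClient("mongodb://localhost:27017")
users = mongo["app_db"]["users"]  # hypothetical DB/collection names

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def upsert_user(tx, doc):
    # MERGE reflects inserts/updates without rebuilding the graph from scratch.
    tx.run(
        "MERGE (u:User {mongo_id: $id}) SET u.name = $name, u.status = $status",
        id=str(doc["_id"]), name=doc.get("name"), status=doc.get("status"),
    )

# Change streams require MongoDB to run as a replica set.
with users.watch(full_document="updateLookup") as stream:
    for change in stream:
        if change["operationType"] in ("insert", "update", "replace"):
            with driver.session() as session:
                session.execute_write(upsert_user, change["fullDocument"])
```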
@@airoundtable Thanks for the answer. But I realized a graph DB is probably not the best design choice for me, because in my case the LLM will have to make the correlation between the user data and the knowledge to be able to interpret the data. I decided to have separate DBs: a vector DB that holds the solution source for the possible problems, and MongoDB that holds the user data that essentially points to the problem. This made the architecture a little complicated, but I believe it will give better output than a graph DB. What do you think?
@@airoundtable Could I approach you through mail? I would love a feedback on my complete architecture.
I see you are creating a vector index at 48:56. But are you using that anywhere? I see that you are using the embeddings created further down the program, so I wonder why you create the vector index.
I am adding the vector embeddings of my data to that vector index at 49:50. Then I use that vector index for RAG in this chatbot
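To spell out the query-time side, here is a hedged sketch assuming Neo4j 5.x (`db.index.vector.queryNodes`) and the OpenAI embeddings API; the index name follows the video, the rest (connection details, embedding model, property names) are assumptions:

```python
# Query-time sketch: embed the question, then use the vector index for k-NN.
from neo4j import GraphDatabase
from openai import OpenAI

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
client = OpenAI()

question = "What movies are about love?"
# Must use the same embedding model that produced the stored tagline embeddings.
q_emb = client.embeddings.create(
    model="text-embedding-ada-002", input=question
).data[0].embedding

# The index makes this nearest-neighbor lookup fast; without it, Neo4j would
# have to scan every node's embedding property.
cypher = """
CALL db.index.vector.queryNodes('movie_tagline_embeddings', 5, $q_emb)
YIELD node, score
RETURN node.title AS title, node.tagline AS tagline, score
"""
with driver.session() as session:
    for record in session.run(cypher, q_emb=q_emb):
        print(record["title"], round(record["score"], 3))
```

The retrieved taglines then go into the LLM prompt as context, which is the RAG step.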
@@airoundtable First of all, great stuff, really helpful and an awesome crash course on the subject with the right amount of details (plus w/o any BS), so thank you.
I had the same question as @sajinmohammed.p.e.5. You created the embeddings and stored them in the graph DB, but I didn't understand how the empty vector index you created with "CREATE VECTOR INDEX movie_tagline_embeddings" is used. If I'm getting it right, you're creating an embedding for every question, finding the closest embedding from the graph DB, and sending it to the LLM to get the match. Would you mind explaining that part, please?
PS: Sorry, @sajinmohammed.p.e.5, for hijacking your thread. But if you have the same question as me, then you're welcome. :P
So +/- 60% accuracy from graphRAG with tabular info? Not there yet!
I didn't understand the +/-60% accuracy. But if you are referring to the accuracy of this technique, I should say this approach is probably at the frontline of retrieval techniques, and it is very new. So I expect it to become better and better over time with the advancements in the field. I should also say that between GraphRAG with tabular data and using graph agents to directly query the DB, I prefer the second approach and would use agents.
The video was great and amazing. There is one issue: you only use some paid LLMs. Please also use open-source LLMs; this will help us more.
Thanks for the suggestion. I will try to include more applications with open-source LLMs.
In the example you took a single table for the movie database. Can you please give an example with multiple tables?
I talked about it in the tutorial. Having multiple tables is not the challenge here. In that scenario, the way that you want to connect those tables is the main aspect. In the video, I talked about how I assume that I have another table with some extra information and I want to merge it into the knowledge graph. Typically you would need a connection point between those tables (e.g., a mutual column for mapping), and then you can start building the knowledge graph. I talk about it at 35:00.
@@airoundtable Hi Farzad, thanks for the prompt response. What I wanted to ask is: do we always need to merge the multiple tables into a single table to generate the graph? Can we create the graph through the hybrid approach without merging the multiple tables into a single table? If I have a lot of tables, it will not always be possible to combine them into one.
@@mayankgoyal4213 Sure, we can. You don't need to merge the databases, but you need to implement the logic that creates the proper graph DB for you, considering the constraints (e.g., the number and size of the databases). The data pipeline can be designed based on your needs.
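To make that concrete, here is a hedged sketch of building the graph from two tables without merging them, assuming pandas and the Neo4j Python driver; the file names, columns, labels, and the mutual column (`movie_id`) are all hypothetical:

```python
# Each table becomes its own node type; the mutual column becomes the edge.
import pandas as pd
from neo4j import GraphDatabase

movies = pd.read_csv("movies.csv")    # hypothetical columns: movie_id, title
reviews = pd.read_csv("reviews.csv")  # hypothetical columns: review_id, movie_id, text

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    for _, row in movies.iterrows():
        session.run(
            "MERGE (m:Movie {id: $id}) SET m.title = $title",
            id=int(row["movie_id"]), title=row["title"],
        )
    for _, row in reviews.iterrows():
        # The mutual column links the two tables as a relationship in the graph.
        session.run(
            "MERGE (r:Review {id: $rid}) SET r.text = $text "
            "WITH r MATCH (m:Movie {id: $mid}) MERGE (r)-[:REVIEWS]->(m)",
            rid=int(row["review_id"]), text=row["text"], mid=int(row["movie_id"]),
        )
```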
Appreciate it, brother. Looking forward to learning more together.
Thanks!
Can you give me the link to the details of the medical prompts / chatbot?
Here is the repo:
github.com/neo4j-partners/neo4j-generative-ai-azure
I was under the assumption that a graph-DB-based knowledge graph can potentially outperform a tabular representation of a knowledge graph.
That is very probable. Graph DBs also make it easier to scale.
Is Llama also usable with it?
Yes. You need to change the agent's LLM though. I haven't tested it myself but the following URLs are good starts to understand how to design langchain agents using LLAMA. Just make sure to use a very powerful model that can handle the complexity of this task.
- medium.com/@sandyshah1990/langchain-agents-and-function-calling-using-llama-2-locally-29ce057e4789
- github.com/pinecone-io/examples/blob/master/learn/generation/llm-field-guide/llama-2/llama-2-70b-chat-agent.ipynb
@@airoundtable The next step in this series I am dreaming of would be an option to control terminal commands with simple prompts, with the RAG already knowing all your necessary system files. Or at least having the RAG no longer ask about your system when you search for help administering system- and server-specific stuff via the terminal. Do you coincidentally want to become my dreamcatcher :D ?
@@RealLexable :))) That sounds like a very interesting idea! I might not have the time to dive into it in the next two months, but if you're working on something like this and run into any challenges, I'd be happy to help!
Great video! Totally worth watching 💯
Thanks! I am glad you liked it
really great content, useful, focused, original !
Thanks! I am glad you liked the video and the content
Can you do a series on llama-index? They have a lot of tools and it’s so different from building using langchain
You are right, llama-index is great and has a lot of tools. I will check it again soon. My last interaction with it was for a video in which I compared llama-index and LangChain for RAG. It was a while ago, and I know they have evolved the framework a lot.
Thank you. There aren’t enough llamas-index tutorials and I think you’ll explain them better than anyone! Learning so much from your videos.
@@sujit5013 I appreciate the kind words. I am working on two videos now, but I will keep llama-index in mind and check it out later for sure. Thanks again for the suggestion!
Excellent video. thank you so much!
Thanks! Glad it was helpful!
Solid content as usual 🙂🚀
Thanks!
Well done
Thanks, Farzad.
Thanks Mohsen!
Thanks!
Could you be my mentor?? Your knowledge is awesome; I would like to learn that way.
Hello Darkmatter9583. Thanks! I answered your other message
Keep it up!
Hi,
Is there a way to reach you by email?
Thanks
Hi. Yes, you can find my email and social media links here: farzad-r.github.io/
@@airoundtable Appreciate that. I just sent you an email. Looking forward to hearing from you.
can you be my teacher? 🙏
Hi Darkmatter9583. I would be happy to help. You can go through the tutorials and ask your questions. I work on multiple projects at the moment but I will respond to questions whenever I can. In case you would like a head start in the field, send me a description of your background and what you want to accomplish. I will try to guide you in the right direction. You can send me a message on Linkedin.