Your videos are very helpful. Thanks for making detailed videos.
Thank you.
Great video thank you!
I'm wondering, how long did it take to get a query response?
Do you know how much data was sent?
And how much did it cost per answer?
Thanks again.
Thanks. Microsoft GraphRAG is extremely slow, and it cost me $7 to build the index using the example PDF book in the demo. There are LangChain and LlamaIndex versions of GraphRAG, which are faster and cheaper. I suggest you try them with open-source LLMs, for example using Ollama. Don't use OpenAI.
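For context on cost figures like the $7 above: indexing cost is roughly token volume times per-token price, and GraphRAG makes many LLM calls (entity extraction, summaries, community reports), so tokens add up fast. A back-of-the-envelope sketch; the prices and token counts here are placeholders, not OpenAI's actual rates:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  in_price_per_m: float = 2.50,
                  out_price_per_m: float = 10.00) -> float:
    """Rough USD cost of an indexing run. Prices are illustrative
    (USD per million tokens); substitute your model's real rates."""
    return (input_tokens / 1_000_000) * in_price_per_m \
         + (output_tokens / 1_000_000) * out_price_per_m

# e.g. a book-sized index sending ~2M input and ~0.3M output tokens:
cost = estimate_cost(2_000_000, 300_000)
```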
Outstanding video, thank you for putting this together.
Thanks for your support. Please consider sharing it in communities that might benefit. Your support is greatly appreciated :)
Great content and very practical. Well done! It ran fine, but when I ran python -m graphrag.index --init --root . , all folders appeared except output (which I added manually). In the second step, artifacts and reports do not appear, only the automatically generated docs that you describe.
Yeah, there are a lot of issues with it, as you can see from the git repo issues, but it works. All the best.
Who are you, my man? You bring high-quality content consistently.
Thanks for your support :)
This is excellent. Thank you.
thanks for your support :)
Currently GraphRAG only works on txt files? So it cannot perform multimodal GraphRAG, for example with images etc.?
Yes, it's text only. Please check - ruclips.net/video/YYaYYSXNa0Y/видео.html
I have not gotten it to work with Ollama so far. Any reports of it actually working with open-source models?
It worked for me. What error are you getting? Check these - github.com/microsoft/graphrag/issues?q=is%3Aissue+is%3Aopen+ollama
Can I use a knowledge graph (RDF format) as input, or what process do I need to follow?
GraphRAG builds its own custom knowledge graph with some new concepts like communities and summaries. We can't use RDF KGs with GraphRAG unless you modify the source code substantially. I think the best approach would be to build a RAG on top of your RDF KG. With all these RAGs, the main thing is extracting the information relevant to the query and providing it to an LLM to create an answer.
@SridharKumarKannam thanks for the answer. :) Yes, since it generates the KG internally, it's not a good idea to modify the source code right now, as GraphRAG already has many open issues. I didn't find any good implementation source for building a RAG on top of my RDF KG.
Can we use a knowledge graph directly without passing in text? For example, if I have a knowledge graph, can I use it directly?
No, you cannot. The GraphRAG query engine expects the graph to be in a specific format, for example with communities and summaries, etc. You need to rebuild the graph.
@SridharKumarKannam Could you please explain the rebuilding process a bit? How do I do it? I have a KG in RDF (.ttl) format.
GraphRAG builds its own custom knowledge graph with some new concepts like communities and summaries. We can't use RDF KGs with GraphRAG unless you modify the source code substantially. I think the best approach would be to build a RAG on top of your RDF KG. With all these RAGs, the main thing is extracting the information relevant to the query and providing it to an LLM to create an answer.
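To make the "build a RAG on top of your RDF KG" suggestion concrete, here is a minimal sketch. It assumes the .ttl has already been parsed into (subject, predicate, object) triples (e.g. with rdflib, not shown), and the retrieval is naive keyword matching; a real system would use embeddings or SPARQL instead:

```python
# Toy stand-in for triples parsed from a .ttl file.
triples = [
    ("GraphRAG", "createdBy", "Microsoft"),
    ("GraphRAG", "usesConcept", "communities"),
    ("Neo4j", "usedFor", "visualisation"),
]

def retrieve_context(triples, query):
    """Keep triples that mention any word of the query (naive retrieval)."""
    words = {w.lower() for w in query.split()}
    hits = [" ".join(t) for t in triples
            if any(w in " ".join(t).lower() for w in words)]
    return "\n".join(hits)

context = retrieve_context(triples, "who created GraphRAG")
prompt = (f"Use only the context below to answer.\n\n"
          f"Context:\n{context}\n\nQuestion: who created GraphRAG")
# 'prompt' would then be sent to your generator LLM (call not shown).
```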
Is no graph database, for example Neo4j, involved in GraphRAG?
No, it's not mandatory. Only if you want to visualise your KG would you use Neo4j...
👍👍👍
Great explanation!
Thanks for your support :)
Hi, how can I configure it so that it only responds based on the context of my content and not additional information? Thanks :)
If you build your own RAG system, you can say in the prompt that the generator LLM should use only the context provided to create an answer and not use any self-knowledge.
In GraphRAG such things are already configured; if you want, you can modify the prompt that is sent to the generator.
If you found this content helpful, please consider sharing it with others who might benefit. Your support is greatly appreciated :)
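As a concrete illustration of the "context only" instruction described above (this is a hypothetical template, not GraphRAG's built-in prompt):

```python
GROUNDED_TEMPLATE = """Answer the question using ONLY the context below.
If the context does not contain the answer, reply "I don't know."
Do not use any of your own knowledge.

Context:
{context}

Question: {question}"""

def make_prompt(context: str, question: str) -> str:
    """Fill the grounded template before sending it to the generator LLM."""
    return GROUNDED_TEMPLATE.format(context=context, question=question)
```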
You are making peak content
@siddeshsakhalkar6117 Do you mean it's not useful?
great video
Thanks for your support :)
Do you have a GitHub where you put your code?
The link describing the steps is in the description.
Do you also have a git repo you can share? I can't find the description with the prompt and other things. Your code is a simple one to get started with. Following the Microsoft documentation involves creating a registry and API Management, and I want to quickly do a POC on this one. Great effort though. Thanks for sharing.
You mentioned that it cost $6 to process, whose API key did the money go to?
OpenAI
I tried creating a Microsoft GraphRAG; on running the root query it shows an error in forming the final community reports. Can you please help me out?
What's the error? Please check for your error message in this issues thread - github.com/microsoft/graphrag/issues
I have an Azure OpenAI subscription and added that config under llm. Do I also need to add embeddings in the settings.yaml file?
No, by default it's OpenAI embeddings; it should be fine...
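For reference, an Azure OpenAI setup in settings.yaml typically looks something like the sketch below. Field names follow GraphRAG's documented Azure configuration but may differ between versions, and the resource and deployment names are placeholders; check your version's docs before copying:

```yaml
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: azure_openai_chat
  model: gpt-4o
  api_base: https://<your-resource>.openai.azure.com
  api_version: "2024-02-15-preview"
  deployment_name: <your-chat-deployment>

embeddings:
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: azure_openai_embedding
    model: text-embedding-3-small
    api_base: https://<your-resource>.openai.azure.com
    api_version: "2024-02-15-preview"
    deployment_name: <your-embedding-deployment>
```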
@chrispioline8469 I built a graph using Neo4j.
Should I create an embedding?
Or are the embeddings created automatically?
If you index different documents at different points in time, we end up with multiple artifacts in the output folder.
How should one do a search over all outputs, like in a production-level application?
As far as I know, the current architecture doesn't support incremental indexing; if we add new docs, we need to reindex all the docs again.