Want to HIRE us to implement AI into your Business or Workflow? Fill out this work form: td730kenue7.typeform.com/to/WndMD5l7
💗 Thank you so much for watching, guys! I would highly appreciate it if you subscribe (turn on the notification bell), like, and comment what else you want to see!
📆 Book a 1-On-1 Consulting Call With Me: calendly.com/worldzofai/ai-consulting-call-1
🔥 Become a Patron (Private Discord): patreon.com/WorldofAi
🧠 Follow me on Twitter: twitter.com/intheworldofai
Love y'all and have an amazing day, fellas. Thank you so much, guys!
Great video! Still, could you show a short snippet of code for how to use it with Ollama?
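For anyone else wondering: GraphRAG reads its model settings from a `settings.yaml` in the project root, and since Ollama exposes an OpenAI-compatible endpoint on port 11434, a commonly reported approach is to point `api_base` at it. A hedged sketch (field names and model names are assumptions and may vary across graphrag versions):

```yaml
llm:
  api_key: "ollama"          # placeholder; Ollama ignores the key
  type: openai_chat
  model: llama3              # any chat model you have pulled locally
  api_base: http://localhost:11434/v1

embeddings:
  llm:
    api_key: "ollama"
    type: openai_embedding
    model: nomic-embed-text  # an embedding model served by Ollama
    api_base: http://localhost:11434/v1
```

Note that some graphrag versions have trouble with non-OpenAI embedding endpoints, so your mileage may vary.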
Moshi AI: Real-Time Personal AI Voice Assistant - Beats GPT-4o!: ruclips.net/video/hvP8mUWx7Rw/видео.html
[Must Watch]:
Verba: Ultimate RAG Engine - Semantic Search, Embeddings, Vector Search, & More!: ruclips.net/video/3LLlORBJ72w/видео.htmlsi=g1mO3CAzXRaovCzw
Gemini Code Interpreter: Handle Code Tasks Autonomously!: ruclips.net/video/8wVLNGu4AT4/видео.htmlsi=a2fkEk63omrrMb3M
Maestro: Text-To-Application - Create Software With A Single Prompt!: ruclips.net/video/u-9sgBPcTCs/видео.htmlsi=XpHQvFWQn29zmwYt
HybridRAG: Ultimate RAG Engine - Knowledge Graphs + Vector Retrieval! Better Than GraphRAG! - ruclips.net/video/rtmDQO3ESoE/видео.html
Phidata: Build a Team of Autonomous AI Agents! - ruclips.net/video/BF00mIAavvM/видео.html
LAgent: Opensource AI Agentic Framework - Enables Code Interpreter, Function Calling, and More! - ruclips.net/video/SFAVp7aJSvA/видео.html
RagFlow: Ultimate RAG Engine - Semantic Search, Embeddings, Vector Search + Supports Graph!: ruclips.net/video/ApA-7G7FGRc/видео.html
awesome video thank you
How do you think we can use this in a production application? I noticed indexing documents took around 3 minutes when I used gpt-3.5-turbo.
It's not created for production use. It's an example implementation based on the "From Local to Global" paper. Indexing takes a long time because many LLM calls are needed to extract entities, relationships, and community summaries for the communities detected via the Leiden algorithm. In fact, it's easy to spend 10 to 20 euros just indexing a few documents. They do use caching, so a second indexing run doesn't consume tokens as long as you don't change the chunk size, etc.
Do we need to reindex all documents every time we add a new document?
Is there any way to run it programmatically?
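Since graphrag ships as a Python package with CLI entry points, one simple route is to drive those entry points from your own code via `subprocess`. A minimal sketch (the `graphrag.query` module path and its flags are from the early 0.x CLI and may differ in your installed version):

```python
import subprocess

def graphrag_query_cmd(root: str, method: str, question: str) -> list[str]:
    """Build the CLI invocation for a GraphRAG query.

    method is "global" (community-summary search) or "local"
    (entity-neighborhood search). Running the command requires the
    graphrag package to be installed and the index already built.
    """
    return [
        "python", "-m", "graphrag.query",
        "--root", root,
        "--method", method,
        question,
    ]

cmd = graphrag_query_cmd("./ragtest", "global", "What are the main themes?")
print(cmd)
# To actually execute it:
# subprocess.run(cmd, check=True)
```

Newer releases also expose a Python API directly, so check the version you have installed before wiring up a wrapper like this.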
I wonder why this isn't used by the LLM providers themselves.
What do you mean?
I tried it; it just gave me prompts.
Maybe that's in the backlog
Nice video! Keep it up, please!
Is it better than Verba?
I never got Verba to ingest properly.