AUDIO: With the automatic audio dubbing from YouTube/Google you hear a synthetic voice in your regional language. To hear my original voice in English, switch to "Default" or "English" in the settings. Thank you.
Great video again! I have a special request: you referenced several papers and earlier videos. Could you mention them in the description? It helps us look back later without interrupting the flow of the rest of the video.
I second this.
Third!
I love the enthusiasm and cheer you bring to the research process
Thanks for a great journey into the future of knowledge sharing!
Another fantastic video on a wonderful channel. Hugely appreciated.
It’s interesting perhaps to consider Wolfram’s ideas on hypergraphs.
Any links that we can look at in the meantime?
+1 to hypergraphs
Reminds me a lot of a plan I had a few years ago, which I hope to finally try this year: use a graph database like Neo4j to arrange knowledge for the LLM. This was before I knew what RAG was.
This is my year for AI, and I plan to test out a lot of things. Slowly building up a better and better AI. Definitely adding this to the list.
(I tend to do themes instead of New Year's resolutions, so AI being the theme is a big deal. Last year was Chrysalis, to try to grab the healing I got from 2020's year of Cocoon. And I'm hoping 2026 will be robotics.)
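The "graph database as LLM memory" idea in the comment above can be sketched in miniature. This is a minimal sketch, not a definitive implementation: networkx stands in for Neo4j so the example runs without a database server (with Neo4j you would issue an equivalent Cypher MATCH query), and all the entities and relations are made-up illustrations.

```python
# Minimal sketch: store facts as a graph, retrieve a neighborhood,
# and ground an LLM prompt in the retrieved triples (RAG-style).
# networkx stands in for Neo4j; the facts are illustrative only.
import networkx as nx

# Nodes are entities; each edge carries a relation label.
kg = nx.DiGraph()
kg.add_edge("Marie Curie", "polonium", relation="discovered")
kg.add_edge("Marie Curie", "radium", relation="discovered")
kg.add_edge("Marie Curie", "Nobel Prize in Physics", relation="won")
kg.add_edge("polonium", "Poland", relation="named_after")

def retrieve_facts(graph, entity, hops=1):
    """Collect (subject, relation, object) triples within `hops` of an entity."""
    nodes = {entity}
    frontier = {entity}
    for _ in range(hops):
        nxt = set()
        for n in frontier:
            nxt |= set(graph.successors(n)) | set(graph.predecessors(n))
        nodes |= nxt
        frontier = nxt
    sub = graph.subgraph(nodes)
    return [(u, d["relation"], v) for u, v, d in sub.edges(data=True)]

def build_prompt(question, facts):
    """Format retrieved triples as grounding context for the LLM."""
    context = "\n".join(f"- {s} {r} {o}" for s, r, o in facts)
    return f"Known facts:\n{context}\n\nQuestion: {question}"

facts = retrieve_facts(kg, "Marie Curie", hops=1)
print(build_prompt("What did Marie Curie discover?", facts))
```

The retrieval step is the interesting design choice: instead of pulling text chunks by embedding similarity, you pull a structured neighborhood around the entities in the question, which is essentially what the graph-RAG systems in the video do at scale.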
While I agree with you that KGs are going to rapidly increase in importance, and FastoG looks promising, isn't this what Dimitry at Infranodus has been doing for a long time?
The paper mentions GraphGPT but seems to have ignored Infranodus.
My goodness this tech is moving so fast. Thank you for your videos. Amazing content.
Thanks for the update on graph RAG systems. How about performance and flexibility when adding new information? It would be nice if you provided the link to the GitHub repo.
Totally agree on structure being the X factor.
@code4AI Would you please link to the papers and media that you're presenting in your otherwise wonderful videos?
A video on DeepSeek-V3 anytime soon? Must be in the oven as I type this :D
❤
Woo knowledge graphs and MCTS! Only thing that could make this better is if you combined with evolutionary algorithms 🔥💜
Did you enjoy the recent Jeff Clune article on this topic?
@christopherd.winnan8701 it's not on my radar. What's the title? Is it the October SWE-search?
Ok.. 36, I better get started with my 2nd.. jesus... Can you walk us through an implementation using SVD and smoothing with spectral algebra?
Thank god I have a live voice agent on hand that can translate all this stuff into ordinary ELI5 English.
Do you have a video about LightRAG?
No, Discover AI does not have any videos specifically focusing on LightRAG. The techniques discussed, such as RAG, grokking, TextGrad, DSPy, and multi-agent systems, offer alternative approaches for improving LLM performance and handling external knowledge, which often addresses the same goals as LightRAG (reducing computational cost and latency). While LightRAG itself isn't covered, the videos provide valuable insights and tools for building efficient and powerful AI systems.
@irbsurfer1585 Please do a video about LightRAG and other practical RAGs.
I want a language model that runs on my smartphone so I can help build an open source and decentralized global platform for collective terrestrial intelligence, CTI.
Just a decent smartphone that lived up to the hype would be sufficient for many of us...