NLP & AI database integration | Get Insights from database using NLP | Chat with database | AI | NLP
- Published: 22 Jul 2024
- Get ready for some exciting tech!
In this video we're building a Streamlit app that helps you get insights from a SQL database using natural language processing (NLP)! Imagine being able to ask questions like "What's the total sales?" or "Which products are most popular?" and getting instant answers from your SQL database, all without leaving your local machine!
We are using Llama 3, an open-source Large Language Model (LLM) that runs locally on our machine. This means we keep our data safe and secure within our own network. Here is how you can set up Ollama and OpenWebUI for the local LLM setup:
Ollama: β’ How to run LLM Locally...
OpenWebUI: β’ Build custom private c...
Link to AI Playlist: hnawaz007.github.io/ai.html
DBT series for database development: hnawaz007.github.io/mds.html
How to install Postgres & restore sample database: β’ How to install Postgre...
Follow the step-by-step guide on how to build this app and unlock the power of natural language insights from your data.
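The core idea described above is that a local LLM turns a plain-English question into a SQL query. A minimal sketch of that first step is below; the prompt wording and helper names are illustrative assumptions, and `fake_llm` stands in for a real local Llama 3 call (e.g. via Ollama), since models often wrap their answer in a Markdown fence that has to be stripped before execution.

```python
import re

# Prompt template in the spirit of a SQL-generation chain
# (wording is an assumption, not the exact prompt from the video).
SQL_PROMPT = (
    "You are a SQL expert. Given the schema below, write a single "
    "PostgreSQL query that answers the question.\n"
    "Schema: {schema}\nQuestion: {question}\nSQL:"
)

def extract_sql(llm_reply: str) -> str:
    """Strip Markdown fences/labels that LLMs often wrap around SQL."""
    match = re.search(r"```(?:sql)?\s*(.*?)```", llm_reply, re.DOTALL)
    sql = match.group(1) if match else llm_reply
    return sql.strip().rstrip(";") + ";"

def fake_llm(prompt: str) -> str:
    # Stand-in for a local Llama 3 call; a real app would hit Ollama here.
    return "```sql\nSELECT SUM(amount) FROM sales\n```"

prompt = SQL_PROMPT.format(schema="sales(amount numeric)",
                           question="What's the total sales?")
print(extract_sql(fake_llm(prompt)))  # SELECT SUM(amount) FROM sales;
```

In a real chain the cleaned query is then handed to the database step rather than printed.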
Link to GitHub repo: github.com/hnawaz007/pythonda...
#ai #chatwithdatabase #opensourceai
Link to Channel's site:
hnawaz007.github.io/
--------------------------------------------------------------
Subscribe to our channel:
/ haqnawaz
Links
-----------------------------------------
Follow me on social media!
GitHub: github.com/hnawaz007
Instagram: / bi_insights_inc
LinkedIn: / haq-nawaz
/ hnawaz100
hnawaz007.github.io/
-----------------------------------------
Topics in this video (click to jump around):
==================================
0:00 - Overview of the App
1:18 - Custom LLM for SQL
2:40 - Develop LangChain Chain
3:52 - First Chain to Generate SQL
3:53 - Second Chain database interface
5:55 - Streamlit App
6:46 - Test the NLP and AI database integration
8:01 - Use Cases for this App
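The chapter list sketches a two-chain design: the first chain generates SQL, the second runs it against the database and phrases an answer. A minimal mock of the second chain is below, using the stdlib's sqlite3 in place of the Postgres database from the video; the function names and the templated answer (a real app would use a second LLM call) are illustrative assumptions.

```python
import sqlite3

def run_query(conn: sqlite3.Connection, sql: str) -> list:
    """The second chain's database step: execute the generated SQL."""
    return conn.execute(sql).fetchall()

def format_answer(question: str, rows: list) -> str:
    """In the app an LLM phrases the final answer; this template is a stand-in."""
    return f"{question} -> {rows[0][0]}"

# Tiny in-memory stand-in for the sample database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("widget", 10.0), ("gadget", 25.5)])

sql = "SELECT SUM(amount) FROM sales;"  # e.g. output of the first chain
rows = run_query(conn, sql)
print(format_answer("What's the total sales?", rows))
# What's the total sales? -> 35.5
```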
Wonderful as always and just in time. Was going to build a similar use case that auto generates database docs for business users next week. This comes in handy.
Thank you again and again
Glad it was helpful! Happy coding.
Great video
What if the response from the database exhausts the context window of the model?
Thanks. If you are hitting the model's maximum context length, you can try the following.
1. Choose a different LLM that supports a larger context window.
2. Brute force: chunk the document and extract content from each chunk.
3. RAG: chunk the document and only extract content from the subset of chunks that look "relevant".
Here's an example of these approaches from LangChain:
js.langchain.com/v0.1/docs/use_cases/extraction/how_to/handle_long_text/
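Option 2 above boils down to splitting the oversized result into overlapping windows that each fit the context limit and querying the model once per window. A stdlib-only sketch (chunk size, overlap, and the lambda standing in for the LLM call are all illustrative):

```python
def chunk_text(text: str, size: int = 200, overlap: int = 20) -> list:
    """Split text into overlapping windows that fit the context limit."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def extract_from_chunks(text: str, ask_llm) -> list:
    """Query the model once per chunk and collect the answers."""
    return [ask_llm(chunk) for chunk in chunk_text(text)]

# Illustrative stand-in: a database dump too large for the context window.
long_result = "row " * 200
answers = extract_from_chunks(long_result,
                              lambda chunk: f"{len(chunk)} chars seen")
print(len(answers))  # 5
```

The RAG variant (option 3) would rank the chunks against the question first and only send the top few to the model.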
Can we run Llama 3 locally on any simple VPS server, or do we need GPUs?
Hi, you'd need a GPU to run an LLM. By the way, VPS servers can have GPUs.