There are no words to say thanks for what you've given to the developer community. This video and the code provided have helped me a lot and saved me tons of hours of research.
Sincerely.
Thank you, that means a lot!
This is incredible work! I already loved v1, and I'm sure I'll love this one even more. Thank you!
I just tested both versions this evening on the same Azure SQL DB (Northwind) and the same Azure OpenAI GPT-4 model.
Simple prompts work well on both versions.
But prompts like "please tell me orders sent to European countries and summarize it by customer and countries and display total sales" work fine on v1, while on v2 I often get errors (Error: Incorrect syntax near the keyword 'Order', ...).
Thanks! That's interesting to hear, because both use the same core logic for the AI communication; I'll have to tinker with that. Hoping to get a video up for v3 soon.
@alexthecodewolf I noticed a potential issue with the Azure OpenAI deployment name in the v2 code. It seems the app consistently looks for a deployment named "wolf" instead of the custom name that is configured. This might be due to the line: ChatClient chatClient = aiClient.GetChatClient("wolfo");.
Additionally, could we consider inputting and storing AI context about the database content to enhance the AI results?
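For anyone hitting this, here's a minimal sketch of the kind of fix being suggested: read the deployment name (and the other Azure OpenAI settings) from configuration instead of hardcoding it. The configuration keys below are assumptions for illustration, not the repo's actual setting names.

```csharp
using System;
using Azure;
using Azure.AI.OpenAI;
using Microsoft.Extensions.Configuration;
using OpenAI.Chat;

static ChatClient CreateChatClient(IConfiguration config)
{
    // Build the Azure OpenAI client from configuration values (assumed keys).
    var aiClient = new AzureOpenAIClient(
        new Uri(config["AzureOpenAI:Endpoint"]!),
        new AzureKeyCredential(config["AzureOpenAI:Key"]!));

    // Use the configured deployment name rather than a hardcoded literal like "wolfo".
    return aiClient.GetChatClient(config["AzureOpenAI:Deployment"]);
}
```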
Thanks so much for this, it has made my path easier. I have it working against Ollama running Llama 3.1, and it runs really well locally.
Nice, local AI models are awesome.
Hi there, are you using Llama 3.1 8B?
Thanks, I built both your apps this weekend and went through the code. Your summary at the end about RAG vs. passing the schema directly, and whether the schema can be cached, was very good. Maybe that's where manually-run SQL comes in: I'm going to add a second option that uses it to "train" a RAG index, which can then be passed back in so the responses get even better.
Thanks so much!
Can one of you set this up for me? I am not a technical person or a founder - I can pay.
@mwaikangemotinga1424 You just need OpenAI keys to substitute in the code, then run the database (maybe using SSMS), connect to the database with a connection string, and voilà.
Thanks for taking the time to share with us!
Thank you very much!
Great video! Can you show how we can send the query data to the AI to generate data analytics and charts?
This is a good suggestion, I'll look into it.
@alexthecodewolf What about Vanna AI?
Thank you. Would love to see a demo with a local LLM and SQL.
My local AI video has a demo of wiring up the V1 version of this app to a local LLM: ruclips.net/video/177qX6mpyMg/видео.html
I need to find some time to play around with this. I have 15 years of experience in T-SQL and am a C# developer. I might find time to send suggestions
Huge thanks for contributing to the community
Excellent work @alex. Thanks for sharing. May I know how to add a proxy in the code? My network is behind a company firewall, so I'm unable to reach OpenAI.
Hi, thanks for sharing, it is very helpful. I tried your application with codellama, but the response time is very slow. Also, do we need to pass the schema and other details in the prompt for each execution, or can we send them only when the user changes the schema?
Hey, thanks for the comments here. Unfortunately I haven't figured out a good way to "cache" the schema with the AI model or service; currently you do have to send the schema with every prompt. Models usually aren't able to store data on their own for reference, regardless of the service - even those that support "conversation history" require you to send the whole prompt thread, or in the case of cloud services, they're storing it for you somewhere.
In terms of performance, I haven't used codellama beyond some basic testing, but local AI model performance will depend heavily on your PC, and particularly your GPU. If you don't have a decent GPU, local AI models will be very slow - for example, if you're using a laptop with onboard graphics.
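To make the "send the schema every time" point concrete, here's a rough sketch of what that request shape can look like with the OpenAI .NET chat client. The prompt wording and method name are illustrative, not the app's exact code.

```csharp
using System.Threading.Tasks;
using OpenAI.Chat;

static async Task<string> GetSqlAsync(ChatClient chatClient, string schema, string userPrompt)
{
    var messages = new ChatMessage[]
    {
        // The schema has to ride along with every request; the model keeps no state of its own.
        ChatMessage.CreateSystemMessage(
            $"You generate SQL queries for the following database schema:\n{schema}"),
        ChatMessage.CreateUserMessage(userPrompt)
    };

    ChatCompletion completion = await chatClient.CompleteChatAsync(messages);
    return completion.Content[0].Text;
}
```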
Hey :) Thank you for the great stuff - could you let me know how we can connect this to a PostgreSQL database in Supabase? Where would the changes need to be made?
TIA
Hey, thanks - I believe this should work if you just paste in a Postgres connection string like I do in the video with SQL Server, but I haven't tested it.
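For anyone trying the Postgres route, an untested sketch of the swap: Npgsql exposes the same ADO.NET shape as the SQL Server client, so the connection and command code is where a change would show up. The connection string values below are placeholders.

```csharp
using System;
using Npgsql;

const string connectionString =
    "Host=db.YOUR-PROJECT.supabase.co;Port=5432;Database=postgres;Username=postgres;Password=YOUR-PASSWORD";

// Open a Postgres connection and run a trivial query to confirm connectivity.
await using var connection = new NpgsqlConnection(connectionString);
await connection.OpenAsync();

await using var command = new NpgsqlCommand("SELECT version();", connection);
var version = (string?)await command.ExecuteScalarAsync();
Console.WriteLine(version);
```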
Would it be possible to use the AI to check whether the column contents are related to the initial question? Or is the AI only used to build the query, so the matching is more literal?
Thank you for this amazingly useful tool, it has so much potential!
A question that may be relevant to many devs maintaining legacy systems: the tables in my database have meaningless names (e.g. instead of Customers the name would be TBL003). Would it be possible to give the app context about the data each table contains, so that when a user asks about customers it knows to refer to TBL003? Maybe by using a very comprehensive system prompt?
Again, thank you so much for this contribution!
I bet you can do that: currently the code sends the database schema along with the prompt so the LLM can generate the query. But because you'd be adding more context on top of the database schema, you might easily exceed the context window.
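One way to sketch that idea (the table names and descriptions here are made up for illustration): keep a small map of business meanings and append it to the schema text that already gets sent with each prompt, keeping the notes short so the total stays inside the context window.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical descriptions that give the model business meaning for opaque table names.
var tableDescriptions = new Dictionary<string, string>
{
    ["TBL003"] = "Customers: one row per customer account",
    ["TBL017"] = "Orders: one row per order, linked to TBL003 by customer id"
};

string BuildSchemaContext(string rawSchema)
{
    var notes = string.Join(
        Environment.NewLine,
        tableDescriptions.Select(kvp => $"- {kvp.Key}: {kvp.Value}"));

    // This combined text would go into the system message alongside the user's question.
    return $"{rawSchema}\n\nTable meanings:\n{notes}";
}
```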
Excellent piece! Thank you very much for this
Glad you enjoyed it!
Great video! Can you make one on how to create and use an OpenAI account?
Thanks! Do you mean how to use this same app/scenario but connect to non-Azure OpenAI, or do you just mean a general OpenAI or Azure OpenAI intro?
@alexthecodewolf I meant a general OpenAI or Azure OpenAI intro, thanks!
Hi, when I send the whole SQL schema, table definitions, and column definitions, I hit the token limit problem. The token limit for gpt-35-turbo is 4,096 tokens, and the limits for gpt-4 and gpt-4-32k are 8,192 and 32,768 tokens, respectively. These limits include the token count from both the message list sent and the model response. In my case, the whole schema with all definitions accounts for over 366,256 tokens. Do you have any solutions for that? I have tried both GPT-3.5 Turbo and GPT-4, but I get an inference error due to the token limit.
Use Gemini Pro; I had the same problem. Gemini Pro has a 2M-token context window.
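Besides switching to a larger-context model, another option is to prune the schema before sending it. Here's a crude, purely illustrative sketch that keeps only the tables whose names appear in the user's question; a real implementation might use embeddings or the table-description idea above.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// tableDefinitions maps table name -> DDL text for that table (an assumed shape).
static IEnumerable<string> PruneSchema(
    IReadOnlyDictionary<string, string> tableDefinitions,
    string userPrompt)
{
    return tableDefinitions
        .Where(t => userPrompt.Contains(t.Key, StringComparison.OrdinalIgnoreCase))
        .Select(t => t.Value);
}
```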
Hello, I have a couple of very complex SQL queries that fill reports. How can I train the AI to execute those queries when users ask for that information, for example "show me the sales report of 2024"? Is there some kind of binding?
You'd probably want to use a feature called function calling - this is where you can configure the AI to call specific functions in your code if it thinks they're the best option for a specific prompt response. You can read more about it in the link below; I'm also hoping to make a video about this:
learn.microsoft.com/en-us/azure/ai-services/openai/how-to/function-calling
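A rough sketch of that idea with the OpenAI .NET chat client (the function name, parameters, and report logic here are made up for illustration; see the linked docs for the full pattern):

```csharp
using System;
using System.Threading.Tasks;
using OpenAI.Chat;

static async Task HandleReportPromptAsync(ChatClient chatClient, string userPrompt)
{
    // Describe the canned report query as a tool the model can choose to call.
    ChatTool salesReportTool = ChatTool.CreateFunctionTool(
        functionName: "get_sales_report",
        functionDescription: "Returns the prebuilt sales report for a given year.",
        functionParameters: BinaryData.FromString("""
        {
          "type": "object",
          "properties": { "year": { "type": "integer" } },
          "required": ["year"]
        }
        """));

    var options = new ChatCompletionOptions { Tools = { salesReportTool } };
    ChatCompletion completion = await chatClient.CompleteChatAsync(
        new[] { ChatMessage.CreateUserMessage(userPrompt) }, options);

    if (completion.FinishReason == ChatFinishReason.ToolCalls)
    {
        foreach (ChatToolCall call in completion.ToolCalls)
        {
            // call.FunctionName will be "get_sales_report"; parse call.FunctionArguments
            // for the year, then run your existing report SQL and return the results.
        }
    }
}
```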
Useful and well created app and video :)
Hi, I am currently building an app like Anki using VSC. I'm a beginner and need some help to finish my project. I need help sorting databases and all the basic stuff. Any videos or tips would be much appreciated! (P.S. The deadline is 22 August.)
Hello, what AI model do you use? Thanks in advance.
I'm using GPT-4 in the video, but it should work with other advanced models as well. For example, I have another video that shows how to set this up using Phi-3 and Ollama locally, towards the end of the video: ruclips.net/video/177qX6mpyMg/видео.html
Does this work only with Azure OpenAI, or can I try ChatGPT directly as well?
It works with regular OpenAI as well, or even other similar models, such as local models like Phi-3. Whatever you use, though, you'll need a fairly advanced model that understands structure.
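For the local-model route, here's an untested sketch of how the OpenAI .NET client can be pointed at an OpenAI-compatible endpoint such as Ollama's. The port and model name are Ollama defaults/examples, and the API key is a dummy value since Ollama ignores it.

```csharp
using System;
using System.ClientModel;
using OpenAI;
using OpenAI.Chat;

// Point the standard OpenAI client at a local Ollama server instead of the cloud service.
var client = new OpenAIClient(
    new ApiKeyCredential("not-used-by-ollama"),
    new OpenAIClientOptions { Endpoint = new Uri("http://localhost:11434/v1") });

// "phi3" must already be pulled in Ollama (e.g. via `ollama pull phi3`).
ChatClient chatClient = client.GetChatClient("phi3");
```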
Excellent Tutorial!! Super Thanks
Is it safe to use a private database with the database chat app?
How can I make this work with a Firebird database on a local server?
Sorry, I have downloaded the code, but how do I run or execute it?
Can any kind soul guide me, please?
Hey, the easiest way to run the project assumes you have Visual Studio and .NET 8 installed. Just go to the root of the folder you downloaded and double-click the DbChatPro.sln file to open it in Visual Studio. Once it loads in Visual Studio, right-click on the DbChatPro project (not the .Client one) and select "Set as Startup Project". Then select the green run button at the top of Visual Studio. This project assumes you have some experience working with .NET projects, but those steps should hopefully get you going.
Can this be done in Node.js?
Yes, it can definitely be done in Node; there's an Azure OpenAI SDK for Node and plenty of JavaScript UI frameworks.
Thanks
Thank you very much!
Great job!
How do I run it?