Finally!!! I was waiting for it!!
I hope you find the tutorial helpful!
I like the idea of putting the few shot examples into a vector database. That would be a nice video to make.
I'll definitely consider making it. Stay tuned!
Please upload the next part with the few-shot examples stored in a vector DB; it would be really helpful :-)
Thank you for the comment! I'll be making this video soon.
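In the meantime, here is a rough sketch of the idea: instead of hard-coding every few-shot example into the prompt, retrieve only the examples most relevant to the current question. A real setup would store embeddings in a vector store (e.g. Chroma or FAISS); in this self-contained sketch, `difflib`'s string similarity stands in for embedding search, and the example pool and column names are hypothetical.

```python
# Sketch: retrieve the most relevant few-shot examples per question.
# difflib.SequenceMatcher stands in for an embedding-based vector search;
# the example questions and SQL below are illustrative placeholders.
from difflib import SequenceMatcher

example_pool = [
    {"question": "How many customers are there?",
     "sql": "SELECT COUNT(*) FROM customers;"},
    {"question": "What is the total revenue per month?",
     "sql": "SELECT strftime('%Y-%m', date) AS month, SUM(amount) "
            "FROM orders GROUP BY month;"},
]

def select_examples(question: str, k: int = 1) -> list[dict]:
    """Return the k pooled examples most similar to the user's question."""
    scored = sorted(
        example_pool,
        key=lambda ex: SequenceMatcher(None, question, ex["question"]).ratio(),
        reverse=True,
    )
    return scored[:k]
```

With a real vector store, only `select_examples` changes: the pool is embedded once, and each incoming question is embedded and matched by cosine similarity instead of string overlap.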
You teach really well, teacher!
Thank you so much!
Will it work if I have more tables in the database?
Yes, you can add as many tables as you like. The function that retrieves the schema will provide all the columns and tables as input to the LLM. You only need to add a few example SQL queries (few shots) for those tables so the LLM can understand how to JOIN them if necessary.
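To illustrate, a minimal sketch of how such few-shot JOIN examples could sit alongside the retrieved schema in the prompt. The table names (`customers`, `orders`) and the `build_prompt` helper are assumptions for illustration, not the tutorial's actual code.

```python
# Sketch: few-shot examples covering a JOIN, combined with the retrieved
# schema into a single prompt for the LLM. Table/column names are hypothetical.
few_shots = [
    {"question": "How many orders did each customer place?",
     "sql": "SELECT c.name, COUNT(o.id) AS order_count "
            "FROM customers c JOIN orders o ON o.customer_id = c.id "
            "GROUP BY c.name;"},
    {"question": "How many customers are there?",
     "sql": "SELECT COUNT(*) FROM customers;"},
]

def build_prompt(schema: str, question: str) -> str:
    """Combine the retrieved schema, few-shot examples, and the user question."""
    examples = "\n\n".join(
        f"Question: {ex['question']}\nSQL: {ex['sql']}" for ex in few_shots
    )
    return (f"Schema:\n{schema}\n\nExamples:\n{examples}\n\n"
            f"Question: {question}\nSQL:")
```

Seeing at least one JOIN example per new table pairing is usually enough for the model to generalize the join keys from the schema.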
How does the second model know the initial question if only the SQL response was provided?
That's a good remark. Currently, the second model makes an assumption about the initial question based solely on the SQL response provided. For a robust approach, the initial question needs to be added to the prompt of the chain_query function. By including both the initial question and the SQL response as input fields, the final answer will be more accurate.
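A minimal sketch of that fix, assuming a simple string template (the tutorial's `chain_query` function and its actual prompt may differ):

```python
# Sketch: the second model's prompt takes BOTH the user's initial question
# and the SQL response, rather than the SQL response alone.
# The template wording is an illustrative assumption, not the tutorial's code.
ANSWER_TEMPLATE = (
    "Given the user's question and the SQL query result, "
    "write a natural-language answer.\n\n"
    "Question: {question}\n"
    "SQL result: {sql_response}\n"
    "Answer:"
)

def build_answer_prompt(question: str, sql_response: str) -> str:
    """Fill both input fields so the model no longer guesses the question."""
    return ANSWER_TEMPLATE.format(question=question, sql_response=sql_response)
```

In the chain itself, this just means threading the original question through as a second input field instead of dropping it after the SQL step.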
Can you come up with a SQL agent chat with Llama3?
Yes, that's a valid approach.