Nice video, informative and the pacing is great.
Is there any sanitization of the LLM's input into the query sent to the db?
Thanks for the comment. I appreciate it! As of right now there is no SQL sanitization that would filter out malicious queries. This is a very good point and something that could be explored. One factor that offers some protection against injection is that the LLM itself creates the query from the prompt, and the prompt contains constraints that restrict what can go into the query. That said, I'm sure some very creative prompt-injection strategies could get the LLM to produce unexpected and undesirable behaviour. It's something to experiment with and research further.
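As a rough illustration (not what the video implements), a pre-flight check on the generated SQL could look something like the sketch below. The function name, the keyword allow-list, and the example queries are all made up for the sketch, and a simple keyword filter like this is known to be easy to evade:

```python
import re

# Naive guard: only let a single read-only SELECT statement reach the
# database. Keyword matching is crude (it can false-positive on string
# literals and miss obfuscated SQL), so treat this as a sketch only.
FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|create|truncate|grant|attach|pragma)\b",
    re.IGNORECASE,
)

def is_safe_select(sql: str) -> bool:
    """Return True only if the LLM-generated SQL looks like one SELECT."""
    statements = [s.strip() for s in sql.strip().rstrip(";").split(";") if s.strip()]
    if len(statements) != 1:                  # reject stacked queries
        return False
    if not statements[0].lower().startswith("select"):
        return False
    if FORBIDDEN.search(statements[0]):       # reject write/DDL keywords
        return False
    return True

print(is_safe_select("SELECT name FROM students WHERE year = 2024"))  # True
print(is_safe_select("SELECT 1; DROP TABLE students"))                # False
```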
@aarondunn-zt7ev at the very least I suppose limiting the database privileges of the user account running the LLM's queries would prevent any little Bobby Tables incidents 😅
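For example, with SQLite that can be as simple as opening the connection the LLM's queries run through in read-only mode; the equivalent in Postgres/MySQL would be a role granted only SELECT on the relevant tables. The file and table names below are made up for illustration, and the video's actual stack may differ:

```python
import sqlite3

# Set up a throwaway database so the example is self-contained.
setup = sqlite3.connect("school.db")
setup.execute("CREATE TABLE IF NOT EXISTS students (name TEXT)")
setup.execute("INSERT INTO students VALUES ('Robert')")
setup.commit()
setup.close()

# The connection used for LLM-generated SQL is opened read-only,
# so even a generated DROP TABLE is rejected at the database level.
readonly = sqlite3.connect("file:school.db?mode=ro", uri=True)
try:
    readonly.execute("DROP TABLE students")   # little Bobby Tables strikes
except sqlite3.OperationalError as err:
    print("Write rejected:", err)             # attempt to write a readonly database

for row in readonly.execute("SELECT name FROM students"):
    print(row)
readonly.close()
```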