I have used the standard Open WebUI pipeline, and it looks like I can't put more than one table in the DB_table field. That's too big a downside! Did you come across a solution?
Hi Jordan, thanks. I am missing the steps where you created the custom "Database Rag Pipeline with Display". From the Pipelines page you completed the database details and set the text-to-SQL model to Llama3, but where do you configure the pipeline valves so that "Database Rag Pipeline with Display" shows up as a selectable option?
@@martinsmuts2557 it’s a single .py file that is uploaded to the pipelines container. I’ll cover that in more detail in a future video
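For anyone searching before that video lands: the Pipelines server just loads a Python class from that .py file. Below is a rough, simplified sketch of the shape such a file takes (the class/field names follow the public Open WebUI pipelines examples, and the real ones use pydantic for Valves; a dataclass is used here to keep the sketch dependency-free, so treat the details as assumptions, not Jordan's actual code):

```python
# Simplified sketch of an Open WebUI pipeline file.
# The Valves fields become editable settings in the admin UI,
# and `name` is what appears in the model selector.
from dataclasses import dataclass


class Pipeline:
    @dataclass
    class Valves:
        # Hypothetical database settings exposed as "valves".
        DB_HOST: str = "localhost"
        DB_PORT: int = 5432
        DB_TABLE: str = "orders"
        TEXT_TO_SQL_MODEL: str = "llama3"

    def __init__(self):
        # The name shown in the Open WebUI model dropdown.
        self.name = "Database RAG Pipeline with Display"
        self.valves = self.Valves()

    def pipe(self, user_message: str, model_id: str,
             messages: list, body: dict) -> str:
        # The real pipeline would turn user_message into SQL with the
        # text-to-SQL model, run it against the DB, and return the result.
        return f"Would query {self.valves.DB_TABLE} for: {user_message}"
```

Once uploaded via the Pipelines admin page, the class is instantiated and its `name` becomes selectable in the UI.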
@@jordannanos Do create this video soon!
@@KunaalNaik @martinsmuts2557 just posted a video reviewing the code: ruclips.net/video/iLVyEgxGbg4/видео.html
repo is here: github.com/JordanNanos/example-pipelines
Hi. Could you link us to the source code of the pipeline?
code is here: github.com/JordanNanos/example-pipelines
video reviewing the code: ruclips.net/video/iLVyEgxGbg4/видео.html
Thanks Jordan. I have a single-GPU RunPod setup; would you recommend just adding a Dockerized PostgreSQL to the existing pod? And is the Python code using LangChain stored in the pod's pipeline settings? This sort of reminds me of AWS serverless Lambda, but simpler.
@@RedCloudServices if you’d like to save money, I would run Postgres in Docker on the same VM you’ve already got. That will also simplify networking.
Over time you might want to start/stop those services independently in the event of an upgrade to docker or your VM. Or you might want to scale independently. In that case you might want a separate VM for your DB and a separate one for your UI. Or you might consider running kubernetes.
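The single-VM option above can be sketched as a compose file (a minimal example; service names, ports, and the password are placeholders, not anything from the video):

```yaml
# Hypothetical docker-compose.yml adding Postgres next to an
# existing Open WebUI + Pipelines setup on the same VM.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme   # placeholder; use a real secret
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
volumes:
  pgdata:
```

Keeping the DB in the same compose project means the pipeline can reach it over the shared Docker network by service name (`db`) instead of a public address.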
Yes, the Python code is all contained within the pipelines container, and it uses LlamaIndex, not LangChain (though you could use LangChain too). Just a choice I made.
@@RedCloudServices in other words, you’ll need to pip install the packages that the pipeline depends on, inside the pipelines container. Watch the other video I linked for more detail on how to do this.
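In practice that can look like the following (the container name `pipelines` and the package list are assumptions; check `docker ps` for your actual container name and the pipeline's imports for its real dependencies):

```shell
# Install the pipeline's dependencies inside the running pipelines container.
docker exec -it pipelines pip install llama-index psycopg2-binary

# Restart so the pipeline reloads with the new packages available.
docker restart pipelines
```

Note that packages installed this way are lost if the container is recreated; baking them into a custom image is the durable option.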
@@jordannanos yep! Just watched it. I also learned that Open WebUI does not allow vision-only models or multimodal LLMs like Gemini. Was hoping to set up a pipeline using a vision model 🤷♂️ Also, it’s not clear how to edit or set up whatever vector DB it’s using.
Hey Jordan!
Can I adapt your pipelines to work with SQL Server?
@@renatopaschoalim1209 yes, it’s tested with Postgres and MySQL. If you know how to connect to SQL Server with Python, you’ll be able to use the pipeline.
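A minimal sketch of what that swap might look like, assuming pyodbc is installed in the pipelines container (the helper function, server name, and credentials below are hypothetical, not part of the repo):

```python
# Hypothetical helper: build a pyodbc-style connection string for SQL Server,
# to replace the pipeline's Postgres connection settings.

def sqlserver_conn_string(host: str, port: int, database: str,
                          user: str, password: str,
                          driver: str = "ODBC Driver 18 for SQL Server") -> str:
    """Assemble a SQL Server ODBC connection string."""
    return (
        f"DRIVER={{{driver}}};SERVER={host},{port};DATABASE={database};"
        f"UID={user};PWD={password};TrustServerCertificate=yes"
    )


# Usage (placeholder credentials):
cs = sqlserver_conn_string("db.example.com", 1433, "sales", "reader", "secret")
# import pyodbc                 # requires: pip install pyodbc + ODBC driver
# conn = pyodbc.connect(cs)     # then hand the connection to the pipeline
```

The actual connect call is left commented out since it needs the ODBC driver installed on the host; only the string-building part is shown runnable.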
Nice video. Saw the link on Twitter. My question is: is there a way to speed up the results after you ask it a question?
Yes, I’m working to improve the LLM response time and SQL query time.
hi