What an outstanding idea and tutorial! You've given wings to the Bubble!
Thanks for the kind words!
MPL, your channel is the top of the top in AI!!!
Wow, this is great! I'm loving the Bubble series. Would it be possible for you to create a video in the future that demonstrates how the vector search result could work using this workflow? For instance, a search for the ideal gift based on the user's preferences, where the results are filtered by OpenAI. The system would filter through a CMS list and provide the perfect gift suggestion based on the user's selected preferences, and also include an image/link to take you to the page.
Great stuff, Misbah... thank you.
Good to see you
Please upload such good content with consistency 😊😊
Hello. This is great. I have this all up and running at a basic level.
What is the best way to keep the last uploaded document in the search window (a Bubble config?)
Where/how is the question prompt set up? I ask a question and it appears not to use the whole document. Is that in the GitHub file? I asked how long the document is and it thinks it is 10 sentences.
Thanks
Thanks for the informative video. Would it be possible to make an extended version of this video where you go through every step (e.g. in Bubble) and explain the code? As a no-coder I understand the rough concept, but making it work is another story.
Sure, that could be done. Might make it more like a series of videos to go from a blank canvas to a full app.
@@menloparklab Thanks!
Are text-embedding-ada-002's embeddings compatible with embeddings made from Cohere's multilingual model (for doing cosine search)?
Basically, some of my documents are non-English but the search would be done in English. So I was wondering what the result would be of using Cohere to embed the source documents but letting Ada handle the embedding of the input prompt for similarity matching, so that instead of spending 0.001 on Cohere per prompt it costs a fraction of that for the 20 or so tokens used by text-embedding-ada-002.
Ada's embeddings have a different dimensionality than Cohere's, so they won't be compatible. For a multilingual setup, I would highly suggest testing with Cohere; their performance is quite nice. In this code, we are using Cohere for embedding and OpenAI for the final answer generation.
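For anyone who wants to see what that split looks like, here is a minimal sketch (not the exact repo code) using LangChain's CohereEmbeddings, the Qdrant wrapper, and an OpenAI LLM; the model name, URLs, and keys are placeholder assumptions:

```python
from langchain.document_loaders import UnstructuredPDFLoader
from langchain.embeddings import CohereEmbeddings
from langchain.vectorstores import Qdrant
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA

# Load and chunk the source document (splitting happens inside load_and_split).
docs = UnstructuredPDFLoader("example.pdf").load_and_split()

# Cohere produces the (multilingual) embeddings for both documents and queries.
embeddings = CohereEmbeddings(
    model="multilingual-22-12",            # assumed multilingual embed model name
    cohere_api_key="YOUR_COHERE_KEY",
)

# The vectors live in Qdrant, which handles the similarity search.
qdrant = Qdrant.from_documents(
    docs,
    embeddings,
    url="https://YOUR-QDRANT-URL",
    api_key="YOUR_QDRANT_KEY",
    collection_name="my_collection",
)

# OpenAI only sees the retrieved chunks and writes the final answer.
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0, openai_api_key="YOUR_OPENAI_KEY"),
    chain_type="stuff",
    retriever=qdrant.as_retriever(),
)
print(qa.run("What is this document about?"))
```

Because the same Cohere model embeds both the documents and the incoming query, the vectors land in the same space, which is exactly why mixing Ada for queries with Cohere for documents won't work.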
@@menloparklab Thanks for the reply. Also, could you do more elaborate tutorials on how to work with Cohere models (especially hosting the backend and frontend locally for personal use)? There are people like us who barely know any code, and unfortunately, while YouTube is flooded with GPT tutorials, your videos are the only ones on Cohere. It seems like there's no other public GitHub repo (except yours) or material on Google for working with Cohere. Thank you.
P.S.: unfortunately I couldn't get this repo to work, as I couldn't work through the errors I was getting while trying to initialize the calls. That's why I'm desperate for more videos on Cohere lol
Very good!! Do you plan to create more videos using Bubble and LangChain?
Thanks. Yes, planning to make more vids on LangChain x Bubble. Feel free to suggest any topic of interest.
Nice video, thanks! I have a question: is it possible to connect this with the normal ChatGPT, so that when we ask a question about a document it can also give us information about something found on the internet on the same subject? I would appreciate it if that's possible (and built in Bubble, of course).
Yes, one simple way is to increase the temperature setting so it can be a bit more creative, plus modify the prompt (currently it asks the model to only answer from the provided search results).
@@menloparklab Thanks for your reply. To modify the prompt, do I do it in Bubble (is it in the body section)? And what would you write in the modified prompt so it also uses the internet for its search?
That would actually be in the source code; you might have to modify the GitHub codebase a bit.
You would write something like: "Answer the question based on the provided context. If the answer is not available in the context, answer it from your own knowledge accurately, without making things up."
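If it helps, here is a rough sketch of wiring that instruction into the chain with LangChain's PromptTemplate; the llm and retriever are assumed to already exist (as in the sketch a few replies above), and the wording is only an example:

```python
from langchain.prompts import PromptTemplate
from langchain.chains import RetrievalQA

template = """Answer the question based on the provided context.
If the answer is not available in the context, answer it from your own
knowledge accurately, without making anything up.

Context: {context}

Question: {question}
Answer:"""

prompt = PromptTemplate(template=template, input_variables=["context", "question"])

qa = RetrievalQA.from_chain_type(
    llm=llm,                                # assumed: OpenAI LLM created earlier
    chain_type="stuff",
    retriever=retriever,                    # assumed: Qdrant retriever created earlier
    chain_type_kwargs={"prompt": prompt},   # overrides the default QA prompt
)
```

Keep in mind this only lets the model fall back on what it already knows; it doesn't actually browse the internet.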
Very nice, subscribed :) Could you expand on this and make an example where it searches multiple documents, returns the relevant documents, and adds a reference button for each document so you can just click it to view the document?
For that, you might have to change the code in the app.py file. One of the last lines has return_only_outputs=True; you'll have to change it to return_only_outputs=False. This should include the sources in the API response so they can be used in your frontend. Hope this helps.
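For illustration, here is a hedged sketch of what a sources-aware call could look like, using LangChain's RetrievalQAWithSourcesChain (which may differ from the exact chain in app.py); llm and retriever are assumed from the earlier sketch:

```python
from langchain.chains import RetrievalQAWithSourcesChain

chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=llm,                 # assumed: OpenAI LLM from earlier
    chain_type="stuff",
    retriever=retriever,     # assumed: Qdrant retriever from earlier
)

# return_only_outputs=False keeps the inputs alongside the outputs;
# this chain's outputs include an "answer" plus a "sources" string.
result = chain({"question": "What is this document about?"}, return_only_outputs=False)
answer = result["answer"]
sources = result.get("sources", "")
```

The sources string could then be returned by the Flask endpoint and rendered in Bubble as clickable reference buttons.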
How can I delete the uploaded documents from Qdrant through Bubble (add a /delete route or something like that in the Render code)? It's for safety reasons for my users.
That would be an API call directly to Qdrant with a filter in place to target the right document. They have good documentation on that.
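As a rough illustration with the qdrant-client Python package; the payload key below is an assumption (LangChain's Qdrant wrapper typically nests document metadata under a "metadata" key), so check how your points are actually stored first:

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient(url="https://YOUR-QDRANT-URL", api_key="YOUR_QDRANT_KEY")

# Delete every point whose stored source matches the document to remove.
client.delete(
    collection_name="my_collection",
    points_selector=models.FilterSelector(
        filter=models.Filter(
            must=[
                models.FieldCondition(
                    key="metadata.source",                        # assumed payload key
                    match=models.MatchValue(value="https://example.com/file.pdf"),
                )
            ]
        )
    ),
)
```

You could wrap something like this in a /delete route in the Flask app and call it from Bubble's API Connector, just like the embed and retrieve calls.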
Great tutorial, thank you for this! I am curious: why did you go with Cohere rather than OpenAI?
Thanks! Cohere offers a multilingual embedding API that supports 100+ languages and performs better in a multilingual setting.
Instead of using Qdrant and Render to process the file, could you process the file from Bubble with Flowise, using the Flowise API?
Yes, that’s possible as well
Why does it require GitHub and Render to get endpoints? Doesn't Cohere have its own endpoints that can be used directly?
They don’t have an endpoint to search a document
Where can I get this slide, please?
This is an awesome tutorial. Thank you so much for this. I know this might be stupid, but how do I get it to check multiple files in one query? Thanks again!
If you are sending the files using the file uploader in Bubble, this workflow works well. But if you use a multi-file uploader, you might have to split the workflow: one workflow creates a "Document" data type for each document selected in the multi-file uploader, and another workflow runs the embeddings API call on each new "Document" saved in the database.
This is awesome. I looked a little bit at your code; why didn't you use a text splitter? Is it done directly in the loader? I'm trying to replicate this using a URL loader.
Yes, that's right. It's done directly in the loader. You can override the default RecursiveCharacterTextSplitter by passing an argument to the load_and_split function as follows: load_and_split(text_splitter=splitter_of_your_choice). Hope this helps.
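For example, a quick sketch of overriding the default splitter (the loader class and chunk sizes here are just illustrative choices; swap in your URL loader):

```python
from langchain.document_loaders import UnstructuredPDFLoader
from langchain.text_splitter import CharacterTextSplitter

loader = UnstructuredPDFLoader("example.pdf")       # or your URL loader of choice
splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

# load_and_split uses RecursiveCharacterTextSplitter by default;
# passing text_splitter= swaps in your own splitter.
docs = loader.load_and_split(text_splitter=splitter)
```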
Hey! I have two questions. First, is the reason I can't clone this Bubble app that I'm on the free version? Second, if I add another API instead, will the application still run successfully?
Also, I want to type a prompt to customize it. Where should I add it?
This is just a view-only version of the Bubble app, not because of the free version. I'm working on building a template.
Yes, you could add as many APIs as needed, no issues. The retrieval blocks have an "additional parameter" where you could add a system message.
Is it possible to make a Bubble app where each user can chat with their documents like in this video, but chat with all of the documents, not only one, and that also has memory so it can hold a conversation?
It would help me and others like me if you could make a video on that!
Yes, that's also possible. You will need to add a filter parameter.
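A rough sketch of both pieces (memory plus a per-user filter), assuming LangChain's ConversationalRetrievalChain and a metadata filter on the Qdrant retriever; the plain-dict filter format and the user_id key are assumptions and may need adapting to how your metadata is stored:

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

# Scope retrieval to everything one user has uploaded (assumed metadata key).
retriever = qdrant.as_retriever(
    search_kwargs={"filter": {"user_id": "bubble-user-123"}}
)

# Memory keeps the chat history so follow-up questions make sense.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

chat = ConversationalRetrievalChain.from_llm(
    llm=llm,                 # assumed: OpenAI LLM from earlier
    retriever=retriever,
    memory=memory,
)

result = chat({"question": "Summarize everything I have uploaded about pricing."})
print(result["answer"])
```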
On initialization of the Embed API call I get this error:
"There was an issue setting up your call. Raw response for the API. Status code 500. Internal Server Error: The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application."
What can it be?
There could be different reasons: 1) (most likely) the free instance type on Render goes to sleep after 15 minutes of inactivity, so you might want to refresh the web app in a separate tab before initializing the API call; 2) an error in the API config (header, type, etc.) in the API Connector; 3) improper JSON format.
Let me know if the error still persists.
I was able to fix it... My mistake was typing the names of the environment variables wrong
@automatizapedidos6078 - Thanks for sharing. @kennyron7600, let us know if fixing the environment variable names solves the error.
@@coffeeandteaYT What does the error say? Could you try it with a different PDF file link? Maybe it's not reading the link properly.
I'm trying to get this running. I followed all the steps but get a Status 400 on the API Connector. Do you know what could be causing that?
If you click on the link provided by Render, does it open a webpage with {"Hello":"World"}? If so, then your backend is deployed properly and the problem is likely with the API call. Make sure you have your header set up and the JSON code correct. I will add the JSON code to the video description to make it easy.
Not allowed to add to the video description, so adding here:
Embed JSON for Bubble: {"collection_name": "", "file_url": ""}
Retrieve JSON for Bubble: {"collection_name": "", "query": ""}
@@menloparklab Yeah, I do see the {"Hello":"World"} on my link, but now I'm getting a status 500 on the call.
@@Therealishmatt Try refreshing your web app link in a separate tab before initializing the API call in Bubble. Hope that solves it.
Why am I getting "this service has been suspended by the owner" when I press process file??
You might want to confirm the API link used in your API Connector.
Hi, would it be possible to install langchain-cohere-qdrant-retrieval on my own VPS at digitalocean.com instead of deploying it on render.com?
Oh yes, definitely. You will probably have to set it up using similar steps to those listed in the readme file. Maybe create a virtual env first and then install all the dependencies into it.
Hello, did you get this working on DigitalOcean? Thanks.
Very nice tutorial, Menlo. When I install from requirements.txt, I get the error: "Could not find a version that satisfies the requirement detectron2==0.6". How should I fix it?
Thanks. I've had dependency issues when the Python version is < 3.8. I would suggest trying with a Python version above 3.8 and letting me know if it works. I tested with 3.11.1 in my case.
@@menloparklab I tested with Python 3.11.1 as you recommended.
Are you deploying on Render? If so, is it a first deployment or a redeployment?
@@menloparklab Yes, I'm deploying on Render, just following your tutorial. It was a first deployment. If I omit detectron2==0.6, the deployment is successful.
Trying to build such apps, I found a huge difficulty with Arabic text. I am not even intermediate in coding and tech, but I am trying with ChatGPT's help... What's the best text loader for Arabic? And what's the best way to preserve the Arabic script when chunking and vectorizing? I tried this using the UnstructuredPDF reader/loader, but it turned the text into gibberish. Thanks!
That's a good question. I haven't tried many different loaders for different languages, but will let you know once I get a chance to test.
@@menloparklab Thanks. This is the problem that's blocking me. There is no info on the net, as the Arabic community is not very active in this area. I appreciate it, buddy.
Thank you for sharing good content! I have a problem with "openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 4393". Could you please help? It happens when I use the retrieve request.
Usually this happens when your question is too long, or when the search returned a lot of results that are all sent to OpenAI. You can change the k parameter in retrieve if you are familiar with the code.
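For instance, a minimal tweak, assuming the retriever is built from the Qdrant vector store as in the earlier sketches:

```python
# Return fewer chunks per query so the stuffed prompt stays under the
# model's 4097-token context limit (LangChain's retriever defaults to k=4).
retriever = qdrant.as_retriever(search_kwargs={"k": 2})
```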
@@menloparklab thank you so much for your answer 🙏🙏🙏
Hello, how do I fix this when deploying on Render?
Apr 29 04:53:01 PM ERROR: No matching distribution found for Flask==2.3.1 (from -r requirements.txt (line 13))
Apr 29 04:53:01 PM WARNING: You are using pip version 20.1.1; however, version 23.1.2 is available.
Apr 29 04:53:01 PM You should consider upgrading via the '/opt/render/project/src/.venv/bin/python -m pip install --upgrade pip' command.
Apr 29 04:53:01 PM ==> Build failed 😞
Apr 29 04:53:01 PM ==> Generating container image from build. This may take a few minutes...
Hey there! Is the Python version set to 3.11.1 in the environment variables?
@@menloparklab Yes
Now it worked! Thank you for this amazing tutorial... Congrats on creating this
@@automatizapedidos6078 glad it works! Thanks for the kind words.
Hey, it's AMAZING! The first no-code frontend video I've seen so far. Is it possible to make a template with the Bubble logic out of this and share it?
I added the link to the Bubble app to the video description now; hope it helps. A Discord link has been added as well.
Thank you for your amazing video. I'm getting this error message when initializing the Embed API call:
"There was an issue setting up your call. Raw response for the API. Status code 503."
"Service Suspended: This service has been suspended by its owner."
Now I get this:
"There was an issue setting up your call. Raw response for the API. Status code 500. Internal Server Error: The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application."
Now this one: "There was an issue setting up your call. Raw error for the API: ESOCKETTIMEDOUT."
I think it's the free instance.
Hey there! The 500 error might be due to the PDF file used for testing; I would suggest trying a different PDF link.
@@menloparklab I did and it worked, thank you very much
I need to search 13 PDFs at once. It's software documentation.
Yes, that's doable with this setup. Give it a try! Let me know how it goes. ;)