Your tutorials are concise and to the point. Pure knowledge and no BS. Keep making these amazing videos as they are very helpful.
Incredible feedback. Thank you.
@@leonvanzyl Whenever I type a question in the chat I'm getting an error (request failed with status code 249).
Do you have any idea how to fix it?
Thank God I came across your videos. You explain every single detail that others leave out. Your videos are awesome!
Awesome comment. Thank you 🙏
Wow, there is finally some powerful and flexible tech that is easy for the end user. Completely game changing
It's really quick and easy to use as well once you get the hang of it.
Wow, you are the most genius teacher in this AI app field. These days I concentrate on your video clips and it makes me happy. I'm looking forward to seeing the LangChain tutorial. Thanks.
Awesome comment. Thank you! 🙏
Thanks!
This is incredible, thank you very much for the support!!
This is explained so well. I’m going to have to binge watch the rest of the series!
Thanks 🙏
Just wanted to say this video appeared just as I needed it. So thank you!
That is awesome!! Glad I could help ☺️
Thank you so much! This was a fantastic explanation. Concise, clear, visual guides and then an example. Perfect.
Thank you 🙏
Thank you, Leon, for the clear explanations and use cases. From my side, doing more practical use cases at increasing levels of complexity would certainly help. Thanks, Leon.
You're welcome 🤗
Hello Leon sir,
Thank you so much for creating this amazing series. The videos are very informative; they cleared up my basics about LangChain modules and are a great help in understanding Flowise.
Thank you for the amazing feedback 🙏
@@leonvanzyl What's the reason it can't see some of the information in the text file? I did everything the same way, but it misses some of the content. I put contacts in there and it doesn't find them. What could be the reason?
Fantastic, this video is exactly what I was looking for. Awesome series. Thank you.
You're welcome 🤗
Dude you are a G. Thanks for these vids. I turned on notifications just on your vids
Thank you!
Thanks, your videos are great: direct, to the point, and they explain all the details in a very clear way. It's so helpful.
Thank you for the amazing comment
The best Flowise tutorials! Thank you
Thank you!
Respectful work, kudos. Neat and to the point, with fewer assumptions about the listener's skills. Good video editing work as well.
Thank you for the feedback. It definitely helps me to know what I'm doing right or wrong.
Am I permitted to send queries? If yes, how should I send them? Email?
Thank you so much Leon! Your videos are such a great contribution to so many people's lives. Cheers mate!
Thank you so much for the feedback!
Love your work here. It is so well done!
Thank you
Awesome content! Really like your style. Short, sweet and structured.
Thank you for the feedback 👍
Hey thanks a lot for explaining how it operates beneath the surface!
You're welcome 🤗.
More videos of flowiseai please. Hoping to see other functionalities.
Double endorse. Why aren't there more concise, diverse and thorough videos like Leon van Zyl's?
So clear and well explained, thank you!
Thank you for the feedback 🙏
Perfect tutorial, glad I found it, thank you!
Thank you 🙏
Lower chunk sizes like 200 with an overlap of 20 seem to give much more accurate results than something like 500 with a 200 overlap. I've been testing a bunch. Thank you.
Awesome feedback. Thank you!
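For anyone who wants to try those splitter settings outside the Flowise UI, here is a minimal Python LangChain sketch; the file name is a placeholder, and Flowise exposes the same Chunk Size and Chunk Overlap fields on its splitter nodes.

# Minimal sketch (assumed input file "story.txt"): comparing the smaller chunk
# settings mentioned above with larger ones, using LangChain's recursive splitter.
from langchain.text_splitter import RecursiveCharacterTextSplitter

text = open("story.txt", encoding="utf-8").read()

small_chunks = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=20).split_text(text)
large_chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=200).split_text(text)

# Smaller chunks produce more, tighter snippets for the retriever to match against.
print(len(small_chunks), len(large_chunks))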
Thank you for this awesome playlist.
You're welcome 🤗
Thank you, such clear explanations! It's just that the answers are not always so good. I created a story in English about a man and a woman who met in Thailand. When I asked "Who is John?" (the man), it answered me in Portuguese, and other times in French... no idea why. It also struggles to answer some questions precisely. When I paste the same text into ChatGPT (3.5) and ask the same questions, it's far more accurate and precise.
Maybe there is some content in the documents that is confusing the model.
On the agent you can click on Advanced and change the system prompt. Try setting a system prompt in your preferred language.
Excellent tutorials. This is really helpful, thanks.
Thank you
Thanks for making these videos!
You're welcome 🤗
Cool, nice chatbot, I liked it :0
This is awesome! Thank you for this!
You're welcome 🤗
Leon, I know this is an older video, and Flowise is now 2.0, but I was working through it and have a question. How do I delete one data file and use another? Delete the In-Memory Vector Store? The videos are great, keep making them. Wayne
Hey Wayne, the in memory store will get cleared whenever you restart the server / Flowise 👍.
Flowise does not offer any way to delete entries from a vector store directly.
Very good introductory video on Flowise, quality content. Keep it up, Leon. If you have a chance, please try to cover AutoGen. It's going for a revolutionary change in AI.
Thank you for the feedback.
Definitely going to cover AutoGen. Been playing with it.
Thank you, I love your video.
Thank you 😊
Leon, your examples are the best. Can't wait for the next tutorial. Please also explain how to upload a mix of docs, i.e. a mix of PDF, CSV, TXT, etc. Thanks.
That's awesome feedback. Thank you 🙏.
Will definitely do more videos on Flowise.
Great, very clear and helpful videos, thank you so much Leon. One issue I haven't been able to figure out in Flowise is how to find the folder path for the "Folder with Files" document loader. It keeps telling me there is an error: the file doesn't exist. Could you please suggest how to give it a local directory with the precise path name? (I'm using macOS and am not a natural coder :-) )
Thank you for the feedback.
I had that same issue once. Was sorted after updating Flowise.
Hi Leon, whatever I do, I don't get an answer pertaining to the document. Always "Hmm..."
How can I use the Folder with Files loader and the appropriate splitters? For example, loader = DirectoryLoader('../../../../../', glob="**/*.py", loader_cls=PythonLoader) to load just the Python files in a certain path (on which I can then use the proper Code Text Splitter)?
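A rough sketch of that loader-plus-splitter combination in Python LangChain (Flowise's Folder with Files node wraps the JS equivalent, so its field names may differ; the path and chunk sizes below are placeholders):

# Sketch: load only .py files from a folder, then split them with a code-aware splitter.
from langchain.document_loaders import DirectoryLoader, PythonLoader
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

loader = DirectoryLoader("path/to/project", glob="**/*.py", loader_cls=PythonLoader)
docs = loader.load()

code_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=500, chunk_overlap=50
)
chunks = code_splitter.split_documents(docs)
print(f"{len(docs)} files -> {len(chunks)} chunks")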
These videos are great. It would be amazing if you could show the use cases for the examples on the marketplace.
Thank you!
I think you're going to love the video releasing on Sunday/Monday 😉.
ur a legend
Thank you! 😊
@leonvanzyl Great video, you're the best.
One simple question: does Flowise expose some sort of API for loading files (PDF, CSV, etc.) after the bot has already been created? Long story short: updating the bot's knowledge on demand.
Thanks 🙏
Thank you for the feedback!
I'm actually not sure. Will reach out to Flowise devs to see what's possible.
Will create a video on it 👍
Thanks for the knowledge you are providing. I had an issue with some documents where the bot all of a sudden switched to Spanish, even though all the ingested data was in English! How do I address this issue? Thanks again!
I haven't seen that myself. Interesting 🤔.
dude Leon, I really like the pace of your videos. Very valuable. Can I download the flow somewhere?
Thank you for the feedback!
I haven't uploaded the flows anywhere, but you can find some variant of them in the marketplace 👍
I just have a question. Is it possible to use file loaders, a text splitter, embeddings and a vector store, but also include a prompt template to give the agent a specific role and some rules? I'll keep trying, but it has been a little confusing for me to implement.
How do I add a custom system prompt to this type of workflow, so that in addition to chatting with the PDF document the AI has context for providing answers?
Hey Leon, thanks for the video. Enjoying this series! I would be interested in seeing how you would handle recursive querying of a JSON file and then saving that output to another file in a predetermined format. I've been trying to experiment with these types of flows, and Flowise seems perfect for these small projects.
Thanks for the feedback!
That is an awesome use-case.
@@leonvanzyl Do you think Flowise is suited to these types of flows, or is it better to just use LangChain?
I would imagine that a prompt chain would suffice.
Basically, chain one will load the documents / fetch from the vector store based on your query.
Chain two will then take the results from chain one and then reformat the results.
There are some other aspects of this that need to be clarified (like writing a file to the filesystem), but in a nutshell that is the approach I would go with.
This could make for an interesting tutorial. Do you perhaps have a good example?
@@leonvanzyl Thanks - I do have a good example. May I send you an email with a brief outline?
@@robjjohnsen Absolutely. I think you can find my email on the About page of the channel.
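For anyone wanting to experiment with that two-chain idea in code rather than in Flowise, here is a hedged Python LangChain sketch; the sample texts, prompt wording, and output file name are invented for illustration.

# Sketch: chain one answers from the vector store, chain two reformats that answer,
# and the file write happens outside the chains.
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA, LLMChain
from langchain.prompts import PromptTemplate

vectorstore = FAISS.from_texts(
    ["Order 1001: 3 widgets, shipped", "Order 1002: 5 gadgets, pending"],
    OpenAIEmbeddings(),
)
llm = ChatOpenAI(temperature=0)

# Chain one: retrieve the relevant chunks and answer the query.
qa_chain = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())
answer = qa_chain.run("Which orders are still pending?")

# Chain two: reformat the answer into a predetermined structure.
reformat_prompt = PromptTemplate.from_template(
    "Rewrite this answer as a JSON object with keys 'summary' and 'items':\n{answer}"
)
structured = LLMChain(llm=llm, prompt=reformat_prompt).run(answer=answer)

with open("output.json", "w", encoding="utf-8") as f:
    f.write(structured)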
Amazing tutorials, thank you very much. As a suggestion for what's next: is it possible to let the end user upload their own document, so we can share the chat with them and allow them to upload files on the fly?
Thank you for the feedback ☺️
Thank you for the video. I'm curious: when answering requires retrieving several vectors, will it do so? I asked in the LangChain Discord and was told they have these kwargs parameters for that, but it's still not clear how it works. What if, to answer the question, it needs to do some analysis to understand which embeddings might be needed?
Thank you for the feedback.
I'll see if I can find an answer for you.
Can we replace both the OpenAI model and the OpenAI embeddings with a LocalAI model and LocalAI embeddings in the above Flowise example?
Thanks a lot.
I have one question. Is there a way to load documents dynamically? I mean, if I want to use the cURL API, can I send documents dynamically?
Thanks for making these videos. These are what I was looking for.
Can you please make some videos on other tools also?
Thank you!
I've started a series on Langflow, which you might be interested in.
Might cover Botpress in combination with Flowise and Langflow as well, if there's an interest.
What is the process for updating a document or file that was upserted? If I want to delete and update the knowledge base, do I just delete the file?
Is there a way you could do this where the user uploads the PDF file they want to talk to?
Thanks for the tutorial! A question: at 11:39 it says that "There is no information provided about what Lucas does for a living or his occupation". However, at 11:11 I can see that the document says "Emily's eyes met those of a rugged and handsome farmer named Lucas." Why doesn't it know that Lucas is a farmer? How would you go about fixing this?
This is the nature of building these bots. We possibly would have had better inference with a model like GPT-4.
In practice, when building chatbots that are trained on client / business data, you'll experience this a lot. This is then solved by adding additional context to the knowledge base or trying different models.
Tutorials are great. I tried a couple of the web scrapers, but the results weren't impressive. I am scraping a simple page which has a list of books with title, date, author and presenter for a book club. If I ask a simple question like "any books with China in the title?", it will only produce one result when there are 5 in the table. Can I do anything with the parameter settings to make this better? Thanks 😀😀
Try to increase the chunk size and the number of docs returned from the vector store 👍
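To illustrate the second suggestion, this is roughly what the "number of docs returned" knob looks like in Python LangChain terms; in Flowise it corresponds to the Top K field on the vector store / retriever node, and the sample texts and value of 8 are placeholders.

# Sketch: ask the vector store for more chunks per question so list-style answers
# (e.g. all five books with "China" in the title) are more likely to be covered.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

vectorstore = FAISS.from_texts(
    ["Book: China in Ten Words, presented March", "Book: Wild Swans, presented June"],
    OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever(search_kwargs={"k": 8})  # default is typically 4
docs = retriever.get_relevant_documents("any books with China in the title?")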
Thanks for the clear and concise video!
Do you know how to restrict the context of the answers to only the provided document(s)?
You're welcome!
The model would most likely just tell you that it doesn't have the answer if it couldn't find it in the docs.
Hi, thanks for the video! I want to use Folder with Files, but I don't understand how to apply the URL / path in Flowise so it can read the files. I have everything set up in Render + Pinecone. Could you please help?
Will see what I can do 😉
Hello Leon sir,
I had a doubt regarding the file loader that scrapes a website from a URL. On one YouTube channel a guy commented that the scraping tool only scrapes the homepage, and when I tried it, it was only scraping the homepage as well. If that's true, sir, is there a way to scrape the entire website and not just the home page?
I'm looking into a solution for this. Will create a video on it.
Great explanation! Are we going to have a no. 4 tutorial? I would like to know how to deploy Flowise on a server. Thanks!
Thank you!
I'm working on a deployment tutorial.
Instead of OpenAI, can we use LocalAI in the above example?
Leon thanks for a great introductory video. Was extremely useful to me. I have a question. Will the upsert document when run on the same webpage using a scraping tool like Cheerio continuously increase the Pinecone database with ever larger datasets, reproducing the same data multiple times? In other words, do these Pinecone apis somehow prevent duplication of data and unnecessary growth of the underlying database?
Brilliant question, and I think I actually do answer it in the following video.
Every time you run the flow, it will upload the documents to the vector DB, creating duplicates. A better solution is to separate the ingest and retrieval processes into separate flows.
@@leonvanzyl Understood. But a typical website will update a bit of its data from time to time, with only minor changes. So is the only option to delete the entire database each time before rewriting the newly upserted data? That seems grossly inefficient, but maybe it's the only way.
Also, can one use 2 or 3 Cheerio modules within the same scraping session? The idea is not to have to scrape an entire website, but say 2 or 3 subdirectories.
@@tradejolt9580 I actually agree with you, it seems inefficient. However, this is exactly how enterprise level apps like Voiceflow do it as well.
You cannot "update" the knowledge base. Instead, you have to delete it and reload the latest set.
You could add metadata to the data so that you only delete and re-ingest specific items.
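For what that metadata idea could look like outside Flowise, here is a hedged sketch using the older pinecone-client style Python API; the index name, metadata key, vector ID, and values are invented, and metadata-filtered deletes are not supported on every Pinecone index type, so verify against your plan.

# Sketch: tag each vector with its source so you can delete and re-upsert
# only one scraped subdirectory instead of wiping the whole index.
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
index = pinecone.Index("website-knowledge")

# Delete only the vectors that came from the /blog section...
index.delete(filter={"source": "https://example.com/blog"})

# ...then re-upsert the freshly scraped chunks carrying the same metadata tag.
index.upsert(vectors=[
    ("blog-0001", [0.01] * 1536, {"source": "https://example.com/blog"}),
])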
Can you use Flowise to fetch predefined questions (pulled from a market research interview guide) in a research interview transcript and then show the respondent's answer to that question from the transcript? I'm learning that fetching this data using OpenAI uses a large number of tokens. I'm not a dev person, but I'm working with a dev team and trying to find cost-saving solutions.
You can save on tokens by splitting the transcript, e.g. with a TextSplitter.
If the transcript includes some indicator of who is speaking, you could add a prompt / system message to tell the model to only include the response from person X.
Good day Leon. Thanks again for another awesome video. How does this work with a CSV agent? TIA
Thank you!
You could just replace the text uploader with a CSV uploader, or am I missing something? 😁
@@leonvanzyl I was overcomplicating things. You are correct.
Hi, your videos and explanations are awesome. I wish to learn about how we can train the model with our data. Is that possible with Flowise? Please create a video on this topic.
Thank you for the feedback 🙏.
I'm not convinced that training (a.k.a. fine-tuning) is worth it for most businesses.
I already have videos in this series that show how you can upload files containing your business information, and the bot will use that as the knowledge base.
Thank you for your prompt reply. Please share the video link to learn about the bot and PDF as knowledge base.
@@dr.s.gomathi6100 lol, it's the very same video that you commented on 😁. You can use the PDF loader instead of the Text Loader. Same thing.
@@leonvanzyl Oh god 😂😂 I had only watched a few minutes. Will surely watch it completely. Thank you so much. Have a great day ahead.
How do I stop it from searching for answers from the internet if it can't find the answer in the PDF? It's giving me answers from the internet if it can't find answers in the provided PDF.
That's exactly the tutorial I was looking for, thanks. One question: could I also use this vector base as a dataset for fine-tuning or training?
Glad I could help 😃.
Good question. The vectors are basically an array of numbers. I'll need to look into this.
Do you have any video using Folder with Files? Thanks so much.
Thank you! Great tutorial! I followed along and understood it all as you demonstrated. However, when I ran the chat to test the app I got an error message: "Error: Request failed with status code 401". Any idea why, or what I can do to resolve it?
Hi Leon, first of all, like all the other comments: perfect and well explained.
I have an issue: I put in a Dutch PDF (local vector base), but when I ask in Dutch the bot responds in English. I'm moving to Pinecone (as in your upcoming video), but is there a way to put a prompt section in the flow where I can set things like a preferred language, and fallbacks like "if not found, say this or that"? At this point I'm running it locally and not in the cloud, just to get the flow right.
Secondly, just curious: could I have several PDFs with overlapping items in one flow, so their content can be combined in the outcome? So, like, ten flows that give the answer to a question on one topic in the chat?
Thanks for the info. The language issue seems to crop up all the time in the comments.
I've been trying to reach out to the developers to see if they can look into it.
I suspect that the agent is being primed in English.
Flowise is still actively being worked on though, so let's hope this gets resolved soon
@@leonvanzyl Is there a way to put a prompt field in the chain where I can give instructions to the bot, like in your previous video? Then I could define the language and instructions for the outcome...
Hi! Amazing video! Please tell me, how much information can be in the file? I mean in tokens, and when using the In-Memory Vector Store.
Thank you!
There really isn't a limit on the amount of information.
I think there is a file size upload limit, but you could just split the information across multiple files and use the folder uploader.
Thank you! @@leonvanzyl
How can you determine whether the Conversational Retrieval QA Chain used the knowledge within the document itself vs. general ChatOpenAI to inform its response?
Hey there, I released an update to this video a few days ago. In that video I show exactly how to get the source documents in the response.
@@leonvanzyl very cool will check it out
Can you help me? I am reproducing this tutorial and, on chat, I receive this error: "TypeError: Cannot read properties of undefined (reading 'replace')". Where can I find directions to understand and solve it? Thanks.
Did you find a solution yet?
Best is to ensure that you have the latest version of Flowise installed. Also ensure that you save the flow before testing.
Hi, the chatbot is answering me with "Hmm, I'm not sure." when I ask "Who is Emily?" or "Who is Lucas".
Any help?
Wow, great stuff. Can you cover use cases where you automate with Make? 🎉
Thank you! I have a few videos on integrating Make, like the appointment booking one.
Hi Leon, I used Flowise locally, but it seems to use a lot of my C: drive space. Do you know how I can remove the database that was used locally? I will be starting to use Pinecone instead.
I'll try and find a solution for you. Which node did you use for the vector DB?
@@leonvanzyl I used the In-Memory Vector Store. Thanks a lot for your help, Leon!
I want to OCR my PDF and then do RAG on it using Flowise Multi-Document QnA. Is there a way to do it?
How do I host this on a web server if I want to serve it to clients?
I'm working on a deployment video 👍
Have you been able to use the 'Folder With Files' loader? I can't find any docs.
Yes, I think I used it in one of my more recent videos. Could be the Mr Beast clone, actually.
hi, do we need a prompt template for this?
How can we combine this and the previous video? An overall virtual assistant that can answer questions from documents if needed, and from ChatGPT otherwise?
Within Tools, there is an LLMChain tool that you can add to the agent. Will create a video on it soon.
Great Video thank you so much ☺️
Can you build a flow with Flowise that extracts data from a PDF to email, Excel, or maybe Google Sheets?
Hi, is there a way to split the output text into multiple messages, for example via a line break?
I don't think so, but let me look into it. Will include this in the upcoming series.
Can anyone help me with Flowise? It repeats the answer, e.g. when I say thanks after I receive the answer. I'm trying to have a normal conversation and I'm using the in-memory vector store.
Can we use both pinecone and in memory vector stores in the same chat flow? I'm trying to connect both, but it won't take. Perhaps there is a workaround?
I don't think so.
Can you share the reason for why you'd like to do this?
How can you store more information as metadata, then retrieve this metadata and use it in the chat? Example: I want to display an image in the chat; the image URL is metadata that I will store with the text in the vector database.
Working on an updated Flowise series that will cover metadata and advanced retrieval.
How does one add a prompt for the document retrieval QA chain? The LLM kind of hallucinates without a prompt.
I believe the Agent node allows you to set the system message. Click on Additional Parameters.
What type of embeddings do I use here? My model is the Groq llama3-70b-8192
thanks
You're welcome 🤗
A question: can you link multiple Docx files to a single In-Memory Vector Store?
Absolutely. There is no limit to how many documents you can upload.
Amazing work. Is it possible to upload a txt file using curl?
Don't think it's possible.
What's the difference between using the in-memory vector store and Pinecone?
In-memory will work just fine for prototypes and pet projects.
For production you need to consider storage space, scaling, performance and backups. You might need to use a service provider like Pinecone.
When a Flowise application goes live, does it run through every block in the diagram every time a message is sent in the chat? Like for loading data from documents and putting it into Pinecone: does that run the first time you deploy the app, or every time a message is sent in the chat?
It seems to run through all blocks. I therefore recommend splitting the loading of the data from the chatbot itself. I cover that in one of the videos in this series.
@@leonvanzyl yup saw that as I finished your other videos. Thanks!
@@jacobriedel5326 hehe, cool 😎
Hi Leon, I want to build a flow with a PDF file, but I also want to receive responses directly from OpenAI in cases where the answer is not in the PDF. How can I construct this flow?
It's an either-or situation.
You would need two separate Chatflows.
In your scenario you might be better off using something like Botpress or Voiceflow, where you can have a fallback response if the answer is not in the knowledge base.
@@leonvanzyl Thank you for the great tutorial and your answer
Hello Leon, could you help me with something please? I cannot yet understand how "Format Prompt Values" on the Prompt Template works. What would be a useful flow to integrate that functionality? I know it's a broad question, but some orientation would be very helpful. Thanks.
Hey there. A practical example would be a translator.
The prompt template could then format an input language to an output language. Both of these values could be variables.
Hope that helps 😃.
It does take some practice though, so don't be afraid to experiment.
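As a small illustration of that translator example, here is what the equivalent prompt template looks like in Python LangChain; in Flowise, "Format Prompt Values" is where you would supply these variable values, and the variable names here are my own.

# Sketch: both the input and output language are variables that get formatted
# into the prompt before it is sent to the model.
from langchain.prompts import PromptTemplate

template = PromptTemplate(
    input_variables=["input_language", "output_language", "text"],
    template="Translate the following {input_language} text into {output_language}:\n\n{text}",
)

print(template.format(input_language="English", output_language="Dutch", text="Good morning!"))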
Amazing content, man. I can tell you know what you're doing.
But I have a small question: how can I add a prompt to a QA chatbot that answers from a document?
I tried adding another chain and a prompt, but it didn't work.
Thank you for the feedback!
I don't think there is a way to add a prompt template to a QA Retrieval chain (and agents) at this stage.
Maybe someone in the comments have a workaround..
Hi, thanks a lot for the helpful tutorial. It works, but when I ask it something like "Write a blog post about {character in the story}" it just says "I don't know". Is there any way to solve this?
Hey there. The bot is behaving correctly. Remember that the purpose of the bot is to answer questions related to the knowledge base, and not to generate / come up with its own content.
Therefore it is saying that "write a blog about" does not exist in your knowledge base.
@@leonvanzyl Thanks. Can Flowise combine answers from multiple documents? E.g. if part of the answer is in PDF 1 and another part is in PDF 15, can it reference both answers together, or can it only reference one file at a time?
Hello, thank you very much for the video. Could I integrate several document loaders to load many files into the database? For example, several Text File loaders to enter many books in that format on a certain subject, and connect them to the same Pinecone node. Thank you.
I realized that I can upload a complete folder 😅
Awesome, glad you came right 😁
@@hernandocastroarana6206 How do you do that?
Hey sir, nice tutorial. I have a small problem: I use Render to deploy my Flowise workflow, and I did the same steps as in this video, but each time I ask a question I get this message: "Error: Request failed with status code 429".
Can you please tell me how to overcome it?
Thank you!
Not sure about that error. Ensure that the API key is valid, the flow is saved and that you haven't reached your free credit limit on OpenAI.
Thank you for this amazing series. I tried this workflow but with a local LLM, and it gives an error:
TypeError: Cannot read properties of undefined (reading '0')
Which embedding node and model are you using? You won't be able to combine OpenAI embedding with most local models.
@@leonvanzyl I use ChatLocalAI with LocalAI Embeddings, via LM Studio and the Zephyr model.
Hello Leonvanzyl,
I am getting the same error (TypeError: Cannot read properties of undefined (reading '0')) with different document loader nodes. I have tried the Docx File, Folder with Files, and File Loader nodes, but I get the same error.
I am using the ChatAnthropic model with VoyageAI embeddings. Any suggestions?
Thank you
Please make a video on how to connect this chatbot with Botpress and configure input and output from Botpress.
Coincidentally this is on my backlog 😄👍
How can we add a PowerPoint here? I only see PDF and text files here.
I don't think that a PowerPoint loader exists... yet.
These loaders are constantly being updated though.
@@leonvanzyl Great work! Keep adding more useful stuff like this. 1. Advantages and disadvantages of various vector databases and how to use each of them. 2. How to use local AI chat models in Flowise, etc.
@@nishantkumar-lw6ce greatly appreciate your feedback 🙏
Good afternoon Leon.
I am using GPT-3.5 Turbo and uploaded a 10 MB document. When replying in chat I get an error:
"Error: Request failed with status code 429". What does this mean, and how can I avoid it?
Not sure. Could be a file size limit..