Combining Multiple Chains (Prompt Chaining) - FlowiseAI Tutorial #3
- Published: 30 Jun 2024
- #flowiseai #flowise #openai #langchain
We can combine the power of multiple AI models together using Prompt Chaining.
This can be useful for conditionally calling certain models in certain scenarios. We will look at how to conditionally run chains in a future video, but this is an important step for getting familiar with the fundamentals.
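The core idea can be sketched in plain Python, independent of the Flowise canvas: the output of chain #1 is injected into chain #2's prompt template. This is a minimal sketch with a stubbed model; `fake_llm` stands in for a real model call and is not part of any real API.

```python
# Minimal prompt-chaining sketch: the output of chain #1 becomes
# an input variable of chain #2's prompt template.

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; just echoes the prompt for illustration.
    return f"MODEL OUTPUT for: {prompt}"

def run_chain(template: str, llm, **inputs) -> str:
    prompt = template.format(**inputs)   # fill the prompt template
    return llm(prompt)                   # call the model

# Chain #1: generate a recipe from a dish name.
recipe = run_chain("Write a recipe for {dish}.", fake_llm, dish="pancakes")

# Chain #2: the recipe (chain #1's output) feeds the critic's template.
review = run_chain("Critique this recipe:\n{recipe}", fake_llm, recipe=recipe)

print(review)
```

The same wiring happens visually in Flowise when you connect one chain's output to another chain's prompt variable.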
☕ Buy me a coffee:
www.buymeacoffee.com/leonvanzyl
📑 Useful Links:
OpenAI API Keys: platform.openai.com/api-keys
💬 Chat with Like-Minded Individuals on Discord:
/ discord
🧠 I can build your chatbots for you!
www.cognaitiv.ai
🕒 TIMESTAMPS:
00:00 - Introduction to Prompt Chains
00:34 - Chain #1 Adjusting the Prompt Template
01:32 - Adding a second chain
02:46 - Chain #2 Prompt Template
03:09 - Connecting multiple chains
04:14 - Testing Prompt Chain
04:33 - Changing AI Model
06:02 - A note about the final output
Enjoy guys! Prompt Chaining is a lot of fun and makes it possible to call chains conditionally. We will look at the IfElse node in this series, and this video will lay the foundation.
Please hit LIKE to support my channel 🙏🙏
Many times I need to get creative with prompt writing to get good results. Now I can just let the AI write the prompts for me.
This is awesome. Both the project and the tutorials. When I was trying to learn CrewAI I thought how cool it would be to have all those fields and parameters in a UI instead of hard coded text. And here we are!
I'm from Brazil and I'm reviewing and applying your Flowise playlist for the second time.
Your free content is better than the courses I bought here.
Thank you very much for sharing your knowledge.
Thank you!
Just amazing, thank you! Also, I really appreciate how you explain those little beginner details and avoid assumptions in your explanations - a great tutorial, and a masterclass on how to create a tutorial, much appreciated!
Thank you!
Again great tutorial. Looking forward to the rest of the series!
Thank you! Working on the rest. Anything you would like to see?
This is brilliant. The possibilities are endless. Thank you!
Glad it was helpful!
God bless you man! I have been battling the UI and couldn't get this to work until I saw your excellent video series! Subbed! Doing god's work!
You're welcome 🤗
You are an AMAZING teacher. Thanks!
Thank you 😊
@leonvanzyl Awesome content.
One thing I'd mention is that 'gpt-3.5-turbo-instruct' is not a weak model.
In fact, "instruct" models are simply different from "chat" models.
Chat models are optimized for conversational flow, whereas instruct models are optimized to follow instructions.
Because of that, instruct models expect a specific format and a bit more detail in the set of instructions they need to follow, and they often end up giving more deterministic output than chat models.
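The distinction above also shows up in how the two model families are called: instruct models take a single `prompt` string via OpenAI's completions endpoint, while chat models take a `messages` list via the chat completions endpoint. A rough sketch of the two request payload shapes (payloads only, no request is sent; the prompt text is made up):

```python
# Request payload shapes for the two OpenAI model families.

# Instruct models (e.g. gpt-3.5-turbo-instruct) -> /v1/completions
instruct_payload = {
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "Summarize the following recipe in one sentence:\n...",
    "max_tokens": 256,
}

# Chat models (e.g. gpt-3.5-turbo) -> /v1/chat/completions
chat_payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful cooking assistant."},
        {"role": "user",
         "content": "Summarize the following recipe in one sentence:\n..."},
    ],
    "max_tokens": 256,
}
```

In Flowise this is why the OpenAI (LLM) node and the ChatOpenAI node are separate nodes.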
Great content!
Thank you
Fantastic video, thanks heaps!
You're welcome 🤗
Thank you very much for introducing this amazing tool. I've been bashing through various agent systems and UIs, and the flexibility here looks amazing. It's time for me to pick an approach/UI I can go deeper on, and LangChain looks like it will be one of the survivors. Amazing flexibility here re open source, etc. Looking forward to more tutorials, especially with Assistants and Tools.
I agree with you. LangChain and Flowise have stood the test of time for me, and they are really good frameworks for building solid apps.
This is a great video and very helpful!
You're welcome 😁
Thank you for all of your videos. Can you explain how to add Supabase Retrieval to Prompt Chains? I can't seem to find anything that allows for this.
Great video, thank you! I always get both the recipe and the critic. Any idea why? You mentioned in the video that it would return only the critic. Thanks again!
Excellent, Leon! This video is clearer than the previous course!
Which nodes should the 'output parsers' in Flowise connect to? Or can they just stand alone?
Thank you
Next video will be about output parsers 👍
Thanks!
Thank you very much for the support 💓
which console are you using and where do I go to review the data?
Thank you very much for the explanation! How can we combine ConversationalRetrieverQAChains with that? They don't use templates.
Nice video. Do you cover anywhere how to send the intermediate output to the user, not just the final output?
Or how to route to a specific chain based on user inputs? I.e., similar to detecting intents in other chatbot systems.
Hey there.
I think you need something like Voiceflow or Botpress instead, where you can guide the conversation.
You could still integrate Flowise into those applications in order to generate responses for certain steps.
I have a query. You remember you did a Pokemon name and Pokemon ID example using Flowise? What I have done is create two separate chatflows: one for generating a random Pokemon name and another for getting the ID of a Pokemon. What I want is to call the first chatflow (the random Pokemon name) to give me a random Pokemon name, then pass that name to my Pokemon ID flow, so that the second flow's chat window shows the ID and name of the Pokemon generated by the first flow. Any ideas how I can do this?
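One way to chain two chatflows like this outside the canvas is to call them back-to-back through Flowise's Prediction API (`POST /api/v1/prediction/<chatflow-id>` with a JSON body containing `question`). A sketch under those assumptions, with made-up chatflow IDs and a local Flowise instance; the HTTP transport is injectable so the chaining logic runs without a server:

```python
import json
from urllib.request import Request, urlopen

BASE_URL = "http://localhost:3000"  # assumption: local Flowise instance

def predict(chatflow_id: str, question: str, send=None) -> str:
    """Call a Flowise chatflow via the Prediction API and return its text."""
    url = f"{BASE_URL}/api/v1/prediction/{chatflow_id}"
    payload = {"question": question}
    if send is None:
        # Real HTTP call (requires a running Flowise server).
        req = Request(url, data=json.dumps(payload).encode(),
                      headers={"Content-Type": "application/json"})
        with urlopen(req) as resp:
            return json.load(resp)["text"]
    return send(url, payload)["text"]  # injected transport for testing

def random_pokemon_with_id(name_flow_id: str, id_flow_id: str, send=None) -> str:
    # Flow #1 generates a random Pokemon name...
    name = predict(name_flow_id, "Give me a random Pokemon name", send=send)
    # ...which is then passed as the question to flow #2.
    return predict(id_flow_id, f"What is the Pokedex ID of {name}?", send=send)
```

The chatflow IDs (`name_flow_id`, `id_flow_id`) would be the IDs shown in each chatflow's API endpoint dialog in Flowise.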
Firstly, thank you for all that you do, Leon.
I have a specific question: is it possible to create a chatflow within Flowise where I could create one AutoGPT supervisory agent and six AutoGPT agents, all linked together in a process where each agent has a specific category of responsibility, and the process goes through continuing iterations until the final outcome is acceptable?
I know this could be done with chains as you have shown us in this tutorial, but could it be done with AutoGPTs, which for my application would be desirable in order to capture their greater reasoning capabilities?
Nah, not that I'm aware of. I think you need something like CrewAI where you can create agent "teams" that work collaboratively to achieve a goal.
This is worth looking into though. I'll check with the Flowise team and will create a video once I figure this out.
Question: so instead of an LLM we can use chat models? For the chat model, if we keep ChatGPT set to gpt-3.5-turbo-instruct, it will give the same results, correct?
I have a question:
Does the context also follow through to the output prediction, or is just the answer posted?
I did exactly the same and it worked well. However, I would like to add a PDF file that is in the Pinecone vector store at the last node. Is that possible? How?
By the way, congratulations on the video series. It's helping a lot 🙏
Thank you Leon, can you please make a tutorial about chaining different Chatflows where each Chatflow has its own functions, knowledge and purpose. Thank you!
Fun use case. Will see what I can put together
Thanks
You're welcome
All your tutorials are extremely helpful and very well presented. Thanks! Can you please do a tutorial on uploading a document in Indonesian and translating it to English (or any two languages)? I can't work out the logic of the flow. Thanks again!
i.e. I ask questions in English and results are in English even though the document is Indonesian.
I actually did something similar for a German client whereby the KB was in German, but they wanted users to be able to chat in other languages (like English) and the bot should translate the information and respond in English as well.
It's really simple. Change the system prompt to instruct the model to respond in English. Click on additional settings on the chain / agent node to change the system prompt.
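As a concrete illustration, such a system prompt override might look like the following. The wording and the `kb_language` / `reply_language` names are hypothetical examples, not taken from the video:

```python
# Hypothetical system prompt forcing the reply language, regardless of
# the language the knowledge base (KB) is written in.
SYSTEM_PROMPT_TEMPLATE = (
    "You are a helpful assistant. The knowledge base may be written in "
    "{kb_language}. Always translate the relevant information and answer "
    "the user in {reply_language}, no matter which language they use."
)

system_prompt = SYSTEM_PROMPT_TEMPLATE.format(
    kb_language="German", reply_language="English"
)
print(system_prompt)
```

In Flowise you would paste text like this into the system message field under the chain or agent node's additional settings.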
Hi Leon, quick question: is there any way in Flowise to view your token usage?
You can attach an analytics platform like Langsmith. Highly recommend you sign up for their waitlist. We will cover Langsmith in this series as well.
Great content! I would like to see how to ingest an entire website (like for a church or small business) and store it in Pinecone (or another store) for use by OpenAI. I want to use a prompt template to provide clear instructions to the AI. Not sure how to do this in Flowise.
Thank you! We'll definitely look at retrieval chains soon, where you can load custom data as a knowledge source 👍
Great video as always.
How do you get both the recipe and the critic in the result (not only in the console)?
In the critic prompt, tell the model to also repeat the recipe before writing the review.
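Concretely, the critic's prompt template could be adjusted along these lines. The wording is a hypothetical example; `{recipe}` is the variable fed in from the first chain:

```python
# Hypothetical critic prompt: first repeat the recipe verbatim,
# then append the review, so both appear in the final output.
CRITIC_TEMPLATE = (
    "Here is a recipe:\n{recipe}\n\n"
    "First, repeat the recipe above word for word. "
    "Then write a short critical review of it."
)

prompt = CRITIC_TEMPLATE.format(recipe="1. Mix flour and eggs.\n2. Fry.")
print(prompt)
```

Note that the model may not follow such instructions reliably, which matches the mixed results reported in this thread.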
Thanks. I should have figured that out by myself.
I guess I was lazy 😊
@@GilbertMizrahi no worries, it was a great question.
That didn't work for me. The result was that the critic listed the last few lines of the recipe, followed by the review. Also, even when following your exact steps, without instructing the critic to list the recipe, it always lists at least one to three lines of the recipe before the review. Any ideas?
@GilbertMizrahi Did you ever get this to work? Mine ignores the requests and still only shows the final result.
Awesome tutorials. Which LLMs or chat models could we use from Hugging Face?
Thank you!
There are a few, but I really like the Llama 2 models for chatbots.
@@leonvanzyl Do you have tutorials on that?
Thank you for your well-organized, detailed videos. I want to ask a few questions. 1. How can we manage agent conversations if we have an agent team instead of sequential agents? For example, software developer, architect, UI designer, and manager agents that have to talk to each other, with a hierarchical and collaborative relationship between them. 2. How can we make a custom node? 3. Will there be a sub-flow node soon? How can we do sub-flow nodes?
Hey there, thank you for the feedback.
I'm not sure that I understand points 2 and 3. Do you have an example to explain those points please?
As for point 1, Flowise does not allow for collaborative agents as far as I'm aware. Think you'll have to use frameworks like AutoGen and CrewAI (I have videos on those as well).
@leonvanzyl I saw Node-RED and React Flow, which can handle flow charts. I guess Flowise uses some of these. I think sub-flows could be added, along with much more functionality that doesn't exist yet.
Would be great to see how this could help with applying moderation.
Noted
🎯 Key Takeaways for quick navigation:
00:00 *🍳 We can combine multiple conversation chains to create powerful, interactive chatbots.*
01:32 *📝 The output of one chain can be passed as input to another chain, allowing you to create a logical sequence of interactions.*
04:22 *💡 Output quality can vary depending on the model used; more advanced models can significantly improve response quality.*
06:21 *🛠️ The console is useful for debugging conversation flows, showing the output of each chain.*
Made with HARPA AI
Hello, after changing the LLM model to the chat model gpt-3.5 1106, it still shows the answer cut in half; it doesn't show the first part of the answer. Do you know why this happens?
Try increasing the max token size perhaps
Is there a site where I can share these workflows? Like Civitai for image generation.
If there is such a site, please let me know!
Unfortunately not.
Is there any way to introduce a prompt module in a RAG flow?
No, but you can change the system message. Same thing.
@@leonvanzyl I'll try and let you know. Thanks
did anyone else's chat output show both the recipe and critic response?
Can you create a guide on how to use open-source LLMs?
Will do 😊
Thank you
Great, but I have to pay no matter what, right, to interact with all the GPT models? 😐
You can use local models as well, but slightly more complex to set up. Also not really "free" once you go to production either.
Besides, OpenAI is really cheap to use. You also won't be able to use the Open Source models once we delve into more complex use cases, like when we go into function calling and tools.
@leonvanzyl I'm from Latin America; there is not much Flowise content, and that attracts me. If you recommend buying ChatGPT access, I will do it, but how, and by how much, should I limit the tokens for a web chatbot?
Hello again. I have an educational chatbot project that needs to examine PDF, text, URL, and Excel files thoroughly (tabular information from a URL may not be taken into account, so I want it to review large files with high quality) and respond based on the information in them, using the gpt-3.5-turbo-1106 model. I also connect my database (MSSQL) to this bot, so the relational information in the database needs to be analyzed correctly alongside the files. I want to build an advanced Q&A chatbot that examines all the files and the database and gives correct answers, and I want to customize its behavior by adding instructions in my code. Can you help me, for a fee? I use LangChain as well.
This sounds like a really cool and complex project Enes. It involves quite a lot of different loaders and functions. I would suggest that you use GPT4 Turbo instead of GPT3.5 for improved results and function calling.
@leonvanzyl Thank you, Leon, for your advice. If possible, I would like your help improving my code. I also sent you an email :)
Great content, but you have not answered my question.
Lol, what's the question that I missed in the video?
@leonvanzyl How can I develop this with Flowise? Which nodes do I use? Please guide me:
1. **Google Sheets Trigger**: Detect when a new row is added to the "MAIN" Google Sheet.
2. **Keyword Analysis**: Use the Serpstat API to fetch keywords, semantically similar keywords, and related questions for the identified keyword in the sheet.
3. **Google Drive Organization**:
- Create a new folder named after the keyword (e.g., "FIVER").
- Inside this folder, create a new Google Sheet with the same name.
- Write the fetched keywords into a tab named "SERP".
- Create a subfolder named "Content".
4. **Data Management**:
- Remove the processed row from the "MAIN" sheet.
- Use GPT Assistant A to analyze the keyword sheet and identify the top 10 broad single-word keywords, noting them in a tab titled "Pillar Keywords".
- Utilize GPT Assistant B to generate blog post titles for each "Pillar Keyword" and store them in a tab named "Pillar Titles".
5. **Long-Tail Keyword Generation**:
- Have GPT Assistant A derive 10 long-tail keywords for each "Pillar Title".
- Store these in a tab titled "LT Keywords".
6. **Supporting Blog Titles**:
- Instruct GPT Assistant B to create 7-10 supporting blog titles for each long-tail keyword.
- Add these to a new tab called "Supporting Blog Titles".
7. **Content Outlining**:
- Ask GPT Assistant C to draft outlines for each "Pillar Article Title", incorporating keywords from the "LT Keywords" tab.
- Repeat the process for each "Supporting Blog Article Title".
- Combine all outlines in a tab named "Outline".
8. **Silo Structure Improvement**:
- Employ GPT Assistant D to enhance the silo structure and develop an interlinking schema, based on the "Outline" tab.
9. **Image Generation with Dalle-3**:
- Generate images relevant to the first title in the "Pillar Article" tab.
- Upscale one image and save both the upscaled and original images to the "Content" folder.
- Document image details in the "Content Img" tab.
10. **Content Creation**:
- Task GPT Assistant E with writing a blog article based on the first title from the "Outline" tab.
- Integrate tables and markdown formatting as required.
11. **Additional Content Elements**:
- Create FAQs related to the article with GPT Assistant F.
- Develop meta titles, descriptions, and excerpts with GPT Assistant G.
- Instruct GPT Assistant H to write YAML markdown for each article.
12. **Finalizing and Publishing**:
- Save the articles in both markdown and HTML formats.
- Discuss options for syncing to AWS S3, Vercel, or WordPress.
13. **Repeat Process**:
- Apply the same steps for subsequent pillar and supporting articles.
- Continuously loop until all rows in the "MAIN" sheet are processed.
@leonvanzyl I am not able to see your reply above. My question is how I can develop the workflow above with Flowise.
@awaisTecnical11 This video is about prompt chaining though 🤷‍♂️.
I can assist you with your individual queries through my agency if you need assistance. It's really difficult to cover all permutations in videos. Happy to help you in person. Link in description.