The ability to direct the conversation while still using custom GPTs is incredible! My biggest frustration with designing a chatbot that could “do everything” was that it can’t, at least not in repeatable ways. Voiceflow has changed everything for me
Thats awesome to hear!
Really loving all the content around the Assistants API. Will definitely give it a shot using Voiceflow without it. Great stuff, you’re an excellent teacher: very clear and easy to understand. Please keep them coming!
Thank you for the feedback! Would love to hear how it goes. Keep us posted :)
Hope this was helpful! It's important to know what you're looking to build so you can figure out which method works better:
1. Unstructured, single assistant - Assistants API directly via the API step
2. Structured, multiple assistants - Voiceflow directly using the methods described
I really enjoyed this. This gave me a much better understanding of the Assistants API vs Assistant Agent builders like Voiceflow. Thank you.
It will come down to use case and as you mentioned structured and unstructured data but overall I still love the ability to design a conversation making it more versatile for use cases.
Awesome to hear! That was the goal :)
Exactly at 1:18, did anyone else see “provided in JSON string”? I know this can be dealt with. 😅
Hello! Let's say I have 2 paths in my workflow, one going to the Assistants API and the other going to an intent. Say I begin my conversation and go down the AI assistant path first and ask all my questions. When I finish and want to go down the other path, how do I move out of the AI assistant path and into the other one, please?
This guy is the best content creator from Voiceflow ❤
Template here: www.voiceflow.com/templates/assistant-api-alternative-google-maps
Excellent video, Daniel. So many concepts! Is there any way to structure these videos into a step-by-step Voiceflow learning path? I'm a little bit lost.
How do I add feedback like thumbs up and down to each generated answer for the user, similar to what ChatGPT does or what you do with the Voiceflow bot?
Thank you. You have changed my world. I will let you know how it goes.
This approach is going to be awesome once the intent and entity functionality uses AI to make it more consistently accurate (as you mentioned). Looking forward to trying it out once that is in place. Thanks for a great video. 🙂
Appreciate it! We're working on improving it. V2 of the NLU will be releasing in January.
I was wondering why, in this video, there are multiple moments of "black screen" and the audio is glitched. Then I realised that it seems to happen during the phrase "assistants API"... and the syllable "ass..." is muted. I wonder if this was an intentional language filter being applied?
Hey, I just made a chatbot with the Assistants API, but it is just too slow to respond! Is Voiceflow better in this respect, or is it the same?
Using Voiceflow will be much faster than this; that's why we removed the other tutorial, which uses the Assistants API directly. There were too many issues with timeouts on the OpenAI side.
Relevant Video: Accessing Memory in Voiceflow
ruclips.net/video/7ApcugbrvfY/видео.html
This video should be pinned to the top!
Why is the option either AI model or Knowledge Base? Why not both? My frustration with Voiceflow is growing as the answers provided are just not as good as OpenAI Assistants (but you guys broke that template…)
Great question - we did that to limit hallucinations in case users don't want the assistant to generate an answer.
To get both, you would just connect the 'not found' path to an AI response step that uses memory.
It gives you more granular control over how the assistant responds.
@Voiceflow So just add the option "Both" for those who want the best of both worlds
Great video! Many would highly appreciate it if you could produce a video discussing a multilingual voice chatbot using Voiceflow.
Is it possible to capture a WhatsApp location sent by the user in a flow?
Thanks, very helpful! Can you share this template?
www.voiceflow.com/templates/assistant-api-alternative-google-maps
Next level stuff here! 👍
Awesome to hear!
Can it be implemented in an app using React Native?
Yes! You can do that using our Dialog API: developer.voiceflow.com/docs/get-started
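For anyone wiring this into a React Native app, a minimal sketch of one Dialog API turn might look like the following. The endpoint and payload shape follow the Dialog API docs linked above; the user ID and API key here are placeholders you'd swap for your own.

```typescript
// Sketch: one user turn against the Voiceflow Dialog API from a React Native app.
const RUNTIME_URL = "https://general-runtime.voiceflow.com";

interface InteractRequest {
  url: string;
  options: { method: "POST"; headers: Record<string, string>; body: string };
}

// Build the fetch arguments for a single text turn from a given user.
function buildInteract(userID: string, apiKey: string, text: string): InteractRequest {
  return {
    url: `${RUNTIME_URL}/state/user/${encodeURIComponent(userID)}/interact`,
    options: {
      method: "POST",
      headers: {
        Authorization: apiKey, // Dialog API key from your Voiceflow workspace
        "Content-Type": "application/json",
      },
      // The user's message is sent as a "text" action.
      body: JSON.stringify({ action: { type: "text", payload: text } }),
    },
  };
}

// In the app:
//   const req = buildInteract(userID, apiKey, "hello");
//   const res = await fetch(req.url, req.options);
//   const traces = await res.json(); // render the speak/text traces in your UI
```

Splitting request construction out of the fetch call keeps the networking testable and works the same in React Native, the browser, or Node.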
Thank you for all the content you share. I learned to use Voiceflow only with your videos 😅😅
I am building a chatbot for a nutritionist. I must store information for each patient, and it is based on this information that the chatbot will offer them advice throughout all subsequent discussions. If, as you say, Voiceflow only keeps the last 10 messages in memory, how can I get around this problem? I would also like to know how I can transcribe the history of all the discussions a user has had with the bot, both to improve the quality of the chatbot and to show the nutritionist which advice the bot gave to their patient.
Thank you so much ☺️
Can I have an answer, please? 😊
If possible
@junioruseni5986 You need a DB like Airtable with a record for each patient, and a login and password for each one. When the patient talks to the bot, an API call fetches their specific information from Airtable; you pass it to the AI model in Voiceflow, and you have a chatbot with specific information for each user 😉
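As a rough sketch of the lookup half of that pattern: the request below finds one patient record in Airtable by their login, using Airtable's documented `filterByFormula` query parameter. The base ID, table name, and `{Login}` field are hypothetical placeholders for whatever your base actually uses.

```typescript
// Sketch: build the Airtable request a Voiceflow API step (or a proxy) would make
// to fetch one patient's record by login.
const AIRTABLE_API = "https://api.airtable.com/v0";

function buildPatientLookup(baseId: string, table: string, token: string, login: string) {
  // {Login} is an assumed field name in the patients table.
  const formula = `{Login} = "${login}"`;
  const url =
    `${AIRTABLE_API}/${baseId}/${encodeURIComponent(table)}` +
    `?maxRecords=1&filterByFormula=${encodeURIComponent(formula)}`;
  return { url, headers: { Authorization: `Bearer ${token}` } };
}

// The returned record's fields (diet, allergies, previous advice, ...) can then be
// interpolated into the AI step's prompt so each patient gets personalized answers.
```

Because Voiceflow's built-in memory only spans recent turns, pulling durable facts from a store like this on each session is how you keep long-term, per-user context.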
What was your experience with the speed difference between OpenAI and voiceflow features?
Much faster, because you don't have to keep waiting for thread retrieval to complete on the OpenAI side. Voiceflow automates the entire process of creating a thread, appending messages to it, and running it, handling all of that for you with the memory object.
This method works better if you want to build in a more structured, granular fashion, since you can structure it with 'multiple' assistants. But if that isn't important to you and one main assistant is better, then the other method of working directly with the Assistants API also works.
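For context on what is being automated here, this is roughly the per-turn round-trip sequence the raw Assistants API requires. The endpoint list follows OpenAI's Assistants API docs; the generic poller below is an illustrative helper (not Voiceflow code), and the polling wait is exactly where the latency and timeout issues come from.

```typescript
// Status values a run moves through on the OpenAI side (simplified).
type RunStatus = "queued" | "in_progress" | "completed" | "failed";

// Poll a run until it leaves the queued/in_progress states.
// `getStatus` would wrap GET /v1/threads/{threadId}/runs/{runId} in real use.
async function pollRun(
  getStatus: () => Promise<RunStatus>,
  intervalMs = 500,
  maxTries = 60,
): Promise<RunStatus> {
  for (let i = 0; i < maxTries; i++) {
    const status = await getStatus();
    if (status !== "queued" && status !== "in_progress") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Run timed out");
}

// Per user turn, the raw API needs roughly:
//   POST /v1/threads                      (once per conversation)
//   POST /v1/threads/{id}/messages        (append the user message)
//   POST /v1/threads/{id}/runs            (start the run)
//   GET  /v1/threads/{id}/runs/{runId}    (poll via pollRun until terminal)
//   GET  /v1/threads/{id}/messages        (read the assistant's reply)
```

Voiceflow's memory object collapses all of these round trips into a single step, which is where the speed difference comes from.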
@Voiceflow And we can get almost the same functionality? What about setting token count, etc.? How do I match the Assistants API?
Awesome