This explanation is great! I thought I was just doing everything manually in Voiceflow instead of using the built-in drag-and-drop functions, but it turns out I'm actually building a small hybrid intent + RAG architecture to make the answers to users' queries accurate. Thanks, Denys!
With all this context, can we get a template or .vf file to see this in practice?
Thanks for the feedback. In this case we used a customer dataset. I'll add a .vf file to the blog post where we show the intent splitting.
We'll include .vf files for future releases.
I didn't understand anything. Honest question: are we talking about Voiceflow's chatbot, or is this about another product?
I appreciate these videos and your hard work, but I'm struggling to grasp what's going on. It seems like an enhancement of intents, possibly classifying them to better suit the RAG architecture. Speaking from the user's perspective, without use cases or comparisons to the old logic/approach, I don't think the information really sticks in people's minds.
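For anyone still unsure what the "hybrid intent + RAG" pattern mentioned above looks like, here is a minimal sketch: known, well-defined intents get routed to a scripted flow, and everything else falls back to retrieval over a knowledge base. The keyword classifier, the retrieve/generate stubs, and all names below are illustrative assumptions, not Voiceflow's actual implementation.

```python
# Illustrative sketch of the hybrid intent + RAG routing idea discussed above.
# The keyword classifier and the retrieval stub are assumptions for clarity only.

from dataclasses import dataclass


@dataclass
class Turn:
    reply: str
    source: str  # "intent" if a scripted flow answered, "rag" if retrieval did


# Scripted answers for high-confidence, well-defined intents.
INTENT_FLOWS = {
    "cancel_subscription": "You can cancel any time from Settings > Billing.",
    "reset_password": "Use the 'Forgot password' link on the sign-in page.",
}

# Toy keyword matcher standing in for a real NLU/intent model.
INTENT_KEYWORDS = {
    "cancel_subscription": ["cancel", "unsubscribe"],
    "reset_password": ["password", "reset"],
}


def classify_intent(query: str) -> str | None:
    text = query.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return intent
    return None


def rag_answer(query: str, documents: list[str]) -> str:
    # Stand-in retrieval: pick the document sharing the most words with the query,
    # then "generate" by quoting it. A real system would embed, rank, and call an LLM.
    best = max(
        documents,
        key=lambda d: len(set(query.lower().split()) & set(d.lower().split())),
    )
    return f"Based on our docs: {best}"


def handle(query: str, documents: list[str]) -> Turn:
    intent = classify_intent(query)
    if intent is not None:
        # Known intent: route to the deterministic, scripted flow for accuracy.
        return Turn(reply=INTENT_FLOWS[intent], source="intent")
    # Unknown intent: fall back to retrieval over the knowledge base.
    return Turn(reply=rag_answer(query, documents), source="rag")


if __name__ == "__main__":
    docs = [
        "Refunds are processed within 5 business days.",
        "Our API rate limit is 100 requests per minute.",
    ]
    print(handle("How do I reset my password?", docs))  # answered by the intent flow
    print(handle("What is the API rate limit?", docs))  # answered by RAG fallback
```

The point of the split is accuracy: queries that match a known intent get a deterministic, hand-written answer, while open-ended questions are handled by retrieval instead of forcing them into an intent that doesn't quite fit.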