Fantastic demo. Could you go into how many credits are consumed with some examples? Is it one credit per query, does the credit consumption change based on the LLM being used, for your use cases is the essentials plan enough or do you need more, things like that. Thanks
Thanks! Great questions, I'll cover this in depth in an upcoming video. The short answer is that credits are opaque right now. They are not per query; from what I can tell, the main factors of credit usage are: 1. the amount of input tokens (i.e. a small prompt vs. a large transcript), 2. the amount of output tokens (i.e. a short reply vs. rewriting a long blog post), 3. the model chosen (i.e. GPT-3.5 uses fewer credits than GPT-4, Claude 3, etc.). I'm sure there is some markup vs. using the models directly, but that's in return for having it see all of your knowledge bases, swap models with one click, etc. Within workflows, different steps can use different models, so once we get a workflow working well with an expensive model and generate some good output examples, we add those examples to the prompt and try it with a cheaper model.
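To make the three factors above concrete, here's a toy sketch of how a token-and-model-based credit formula might behave. The actual Cassidy credit formula isn't public, so the multipliers and the per-1k-token unit below are invented purely for illustration:

```python
# Hypothetical credit estimator -- illustrative only.
# Cassidy's real formula is not public; these multipliers are made up.
MODEL_MULTIPLIER = {
    "gpt-3.5": 1,   # assumed cheapest tier
    "claude-3": 8,  # assumed higher tier
    "gpt-4": 10,    # assumed priciest tier
}

def estimate_credits(input_tokens: int, output_tokens: int, model: str) -> float:
    """Toy estimate: credits scale with total tokens and the chosen model."""
    base = (input_tokens + output_tokens) / 1000  # per-1k-token unit (assumed)
    return base * MODEL_MULTIPLIER[model]

# A short prompt on a cheap model vs. a long transcript on a pricey one:
print(estimate_credits(200, 100, "gpt-3.5"))    # -> 0.3
print(estimate_credits(20000, 2000, "gpt-4"))   # -> 220.0
```

The point of the sketch is just the shape of the curve: large transcripts into expensive models can cost orders of magnitude more than short queries to cheap ones, which is why swapping a tuned workflow down to a cheaper model pays off.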
Thanks a lot!
Could you put your computer screen in dark mode? I can't see anything in light mode. Thank you.
That's crazy
If you add more content to the Notion database, does it automatically sync up to Cassidy?
Yep, it does automatically sync! For all the plans except enterprise, it syncs once a day. With their enterprise plan, it will sync closer to real time.