How to Stream Responses from the OpenAI API
- Published: 18 Jul 2023
- Learning how to stream responses is essential to good UX in AI-powered applications, but it can be intimidating for beginners. Let's walk through a sample implementation!
NOTE: as of this writing, GPT-4 is 'generally' available to current paying customers of the OpenAI API. If that's not you and GPT-4 isn't working, use 'gpt-3.5-turbo' instead of 'gpt-4'.
Repo with streaming implementation: github.com/Adam-Thometz/OpenA...
Repo for exam generator: github.com/Adam-Thometz/Exam-...
API reference for ChatCompletion API: platform.openai.com/docs/api-...
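The streaming flow the video walks through can be sketched roughly like this, assuming the openai Python SDK (the v0.x API current when this video was published) and an `OPENAI_API_KEY` environment variable. `collect_stream` and `stream_chat` are illustrative names, not taken from the linked repo:

```python
import os

def collect_stream(chunks):
    """Assemble the text deltas from a stream of chunk dicts into one string."""
    text = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        # The first chunk carries only the role; later chunks carry content,
        # and the final chunk has an empty delta.
        text.append(delta.get("content", ""))
    return "".join(text)

def stream_chat(prompt, model="gpt-4"):
    """Print a chat completion token-by-token as it arrives."""
    import openai  # assumes `pip install openai` (the v0.x API)
    openai.api_key = os.environ["OPENAI_API_KEY"]
    # stream=True makes the API yield chunks instead of one final response
    response = openai.ChatCompletion.create(
        model=model,  # fall back to "gpt-3.5-turbo" if GPT-4 is unavailable
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in response:
        print(chunk["choices"][0]["delta"].get("content", ""), end="", flush=True)
    print()
```

In a UI you would append each delta to the page as it arrives rather than print it; the chunk-handling loop is the same either way.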
Thanks! Loved the demo
Glad you liked it!
Thank you!!! I was wondering how it works and I am super new to UI/UX stuff. Gonna try that :)) Thanks again!!
Glad you found it helpful!
Hey, thank you for the video, it helped me out so much. I was able to get it running, but some prompts keep stopping abruptly and I need to tell it to "continue" to get it going again. Is there any way around this? Sorry if this is a dumb question
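For context on this question: a reply that stops mid-sentence usually means the completion hit the token cap, which the API reports as a finish_reason of "length"; you can raise max_tokens or re-send the partial reply and ask the model to continue. A minimal sketch, assuming the v0.x ChatCompletion response shape (the helper names are illustrative):

```python
def was_truncated(response):
    """True when the completion stopped because it hit the token limit."""
    return response["choices"][0]["finish_reason"] == "length"

def continuation_messages(history, reply_so_far):
    """Build a follow-up request: prior messages, the partial reply, then
    an explicit ask to continue where it left off."""
    return history + [
        {"role": "assistant", "content": reply_so_far},
        {"role": "user", "content": "Please continue."},
    ]
```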
Great content!
Thanks!
What is the output token limit for the GPT-3.5 Turbo 16k context model?
I know I'm late, but it's 4096
do you have a discord channel?
One day :)
@3:06 lol, after looking at the source code, I know why its response said "remember, you owe me" and asked for a chocolate bar. 🍫
[spoiler]Because of your default System prompt![/spoiler]
Haha glad you noticed! :)