Each new generative UI example from you guys is implemented in a different pattern, from manually intercepting response types to the latest `streamUI` + tools. Does the team feel the present pattern is mature, or are they unhappy with it and will be redoing it next month?
Nice observation
Yeah, but this is why it's experimental. Use at your own risk!
Hey, AI SDK maintainer here! I'm not sure what you're referring to regarding "intercepting response types", but `streamUI` is pretty stable. It's the same as the previous experimental `render` function, but with a name consistent with other APIs like `streamText` and `streamObject`.
Also worth mentioning that the Generative UI APIs are designed to be general enough to fit into any UI pattern and AI pipeline, which means that there isn't only one way to do Generative UI. For example, you can just use `streamUI` + tools to handle LLM + UI, or combine low-level utilities like `createStreamableUI`/`createStreamableValue` with your existing pipeline for flexibility.
Happy to answer any questions!
@shuding What are your recommendations for maintaining type-safety in the application, especially surrounding the server actions and use of the `useActions` hook? Unless I'm missing something, it seems like the required use of the hook defeats one of the major benefits of server actions: end-to-end type safety. Is there maybe a lower-level approach that I could take to bypass the hook?
@shuding Can you use libraries like shadcn and NextUI for the streamed components? Last time I tried, it wasn't working.
Thanks for the walkthrough! Looking forward to building generative UI stuff :)
I tried out their streaming UI a few months ago! That was pretty dope!
Was just about to send you this video for our reference. Glad to see you already watched it!
Could you please do a tutorial on the combined use of the AI SDK with the LangChain adapter that utilizes, e.g., simple RAG? From your documentation it is not clear how to implement it properly.
U need vectors
Great Overview!
A Video incorporating RAG with Vercel AI SDK would be awesome!
Thanks for the suggestion - this is on our list!
Hyped! 🥳
Vercel always coming through for developers
Thanks! Great job simplifying it.
Great explanation of Vercel's AI SDK. It's really helpful and makes building applications easy with the predefined functions and methods.
Is it just me, or do this tutorial and the repo not work? I got: Error: `useUIState` must be used inside an <AI> provider. To resolve it, add import { AI } from "./action"; to the root layout:
export default function RootLayout({
  children,
}: Readonly<{ children: React.ReactNode }>) {
  return (
    <AI>{children}</AI>
  );
}
If you are calling getAIState or useUIState, it has to happen inside of <AI>. So you could create a new component and do something like <AI><Chat /></AI>, and inside of Chat, that's where you would use useUIState.
This is cool and all, but it seems that around every corner in this SDK (especially with the RSC stuff) all the types are just 'any'. In my opinion, you can't really call your library "The AI Framework for TypeScript" and then not have strong types. This is especially annoying because in my eyes it defeats one of the major benefits of server actions: end-to-end type safety. Is there a way to bypass some of the abstractions, like the useActions hook?
Thanks for this great video! Looking forward to trying my hands out on the latest release!
Kudos to the Vercel AI team too!
I love the AI SDK and Kirimase both!
Let me recall the kirimase dream I had
Thank you guys!
how does it stream a structured object? doesn't the stream come back as JSON? how can it parse it if it's not fully complete?
fantastic!
Very good explanation. If possible, could you connect streamUI with assistants? I also had a lot of difficulty separating the tools into other files.
For example, an AI has a getWeather tool.
The conversation should achieve the following effect:
user: hello
ai: Hello! How can I assist you today?
user: How's the weather today?
tool: getWeather("local")
tool_result: {"weather":"sunny","maxTemperature":35,"minTemperature":0}
ai: The weather is good today, but the temperature difference is a bit large, so please keep warm.
I hope that on the client page, users can see, in sequence: that the AI wants to use the getWeather tool, the result of the getWeather call, and the AI's answer based on that result.
How can I achieve this?
Will it work for React Native? I want to build my AI chatbot mobile app.
i have the same question hehe
when you stream an object, you get a partial. how do you get the final/full response (not partial)?
I could not find an example in the docs where the model can use tools and return RSC while also being able to stream a response when no tool is used.
All the examples I could find use generateText() or streamUI(), so the text response is not a stream.
Should I use a combination of streamText() + tools + createStreamableUI() to stream text and have tools that can return RSC?
Great question, I'm interested to know too.
Hey! With `streamUI`, if no tool is used, the text response is streamed via the component returned from the `text` function. Is that what you're looking to do?
@nicoalbanese10 Yes, that's correct. I would like to stream the text token by token when the model does not use a tool. Can I achieve that using streamUI?
Awesome!
Hey, I'm struggling a bit with useChat for multiple conversations. How can I keep multiple conversations active at once? Any tips?
NICO!!!!!
Anyone have a great example on how to get the user's actual location here?
You can run any asynchronous JavaScript code within a tool's execute function. So you would first want to find the exact location based on the search query (e.g. OpenStreetMap). Then pass that to a weather API (e.g. Open-Meteo) and return the resulting temperature 😊
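To make that concrete, here is a sketch of that two-step lookup against the public Nominatim (OpenStreetMap) and Open-Meteo endpoints. The URL shapes below are assumptions based on their public docs, so double-check them before relying on this:

```typescript
// Pure helpers that build the request URLs (endpoint shapes assumed
// from the public Nominatim and Open-Meteo documentation).
export function buildGeocodeUrl(query: string): string {
  return `https://nominatim.openstreetmap.org/search?q=${encodeURIComponent(query)}&format=json&limit=1`;
}

export function buildForecastUrl(lat: number, lon: number): string {
  return `https://api.open-meteo.com/v1/forecast?latitude=${lat}&longitude=${lon}&current_weather=true`;
}

// Inside a tool's execute function you could then do something like:
export async function lookupTemperature(query: string): Promise<number> {
  // 1. Geocode the free-text query to coordinates.
  const places = await (await fetch(buildGeocodeUrl(query))).json();
  const { lat, lon } = places[0];
  // 2. Fetch the current weather for those coordinates.
  const forecast = await (await fetch(buildForecastUrl(lat, lon))).json();
  return forecast.current_weather.temperature;
}
```

Both services are free for light use, but check their usage policies (Nominatim in particular requires a descriptive User-Agent for production traffic).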
@nicoalbanese10 Thanks for the suggestion, Nico!
Hey, awesome demo - thanks. We're using llamaindex in Python for our LLM backend that uses RAG. I want to use tools that pass react components to the frontend - how would I accomplish this? Thank you
Is there an example with streamUI and error handling for things like finishReason and usage?
Vercel AI SDK + RAG tutorial
Is it still not possible to use both the regular tools to fetch data and the tools to return components?
I added $10 to test OpenAI and the AI SDK, and I had 100% "unknown error" calls and $8 used?! What in the world, it wasn't like this before.
Loads of retries in the background (you should be able to opt out from the start).
I can't concentrate because they keep saying next
Sorry about that, will work on it for the next one!
dope!!
Does anyone know how to improve the responses when using the OpenAI API? It seems like the results are a lot better in the ChatGPT web app.
The API returns very similar responses; in this case, asking "tell me a joke" gets the same answer over and over again.
Another great API btw.
is this compatible with sveltekit 5?
yes
The new version of the ai package has too many abstractions.
Agreed. It destroys the type-safety, especially with RSC and the useActions hook.
Where's Lee?
I'm here :)
how do you make the code animations?
I'm just gonna leave a comment here so I'm notified if someone responds 👀
Great use of zod ❤
+1
sick
god i wish i knew about vercel ai like 6 months ago lol
No need to add any OpenAI API key?
When will i be able to use this with langchain?
Can I use the Vercel AI SDK in Vue.js?
How do you create coding videos with animations like this? Kindly create a video on that as well @vercel.
Can I use this in next.js out of the box?
Can we fine-tune this model?
Where do you keep the API Key?
How do I pass the API key?
Just add it to the env vars; the lib will do the rest.
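For the OpenAI provider specifically, the SDK reads the key from the environment, so a `.env.local` in the project root along these lines should be enough (the key value here is a placeholder):

```
# .env.local — the @ai-sdk/openai provider picks this up automatically
OPENAI_API_KEY=your-key-here
```

Next.js loads `.env.local` on the server automatically; make sure the file is gitignored so the key never gets committed.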
Can it work with Ollama?
Ugh, I hate using GPT. Why isn't anyone doing Gemini, especially since it's in free beta?
But how do you npm?
I made 8 subscribers 😊
train without stops
is it free?
It's just a library, so using it is free. However, using the models, like Gemini and GPT-4o, requires an API key, which will almost always cost money.
@mohammednasser2159 thank you
@mohammednasser2159 thank you! Yes exactly.
🤩❣...
Is this a real human or synthetic?
Looks like a bio robot to me.
As always with Next: over-engineered and over-complicated. We just need 4 functions: streamUI, receiveUI, streamText, receiveText. Everything else is much easier to do without your helper functions.
I wish I could be part of the Next.js team. This is insane 😩 Why am I just seeing this? Thanks @team_nextjs