Comments •

  • @GAllium14 4 months ago +7

    That's crazy 🔥🔥🔥

    • @raj_talks_tech 4 months ago +2

      Ikr. Best use-case for RSCs just got unlocked!

  • @sucoder 3 months ago +1

    Generative UI 🎉 hearing it for the first time here

  • @anupkubade2486 3 months ago +2

    Does it mean the SDK returns the React component, like Weather or StockPrice, directly to the client? Or does it just return the supporting data to create your own Weather or StockPrice components?

    • @raj_talks_tech 3 months ago +1

      It streams the component to the client!
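What "streams the component" means can be sketched in plain TypeScript: a hypothetical async generator stands in for the SDK's server-side tool, yielding a loading placeholder first and the finished component payload second. The names here (streamWeatherCard, UIChunk) are illustrative assumptions, not the actual Vercel AI SDK API.

```typescript
// Sketch of "streaming a component": the server yields UI states over time.
// All names below are hypothetical, not the real SDK API.

type UIChunk =
  | { kind: "loading"; text: string }
  | { kind: "component"; name: string; props: Record<string, unknown> };

// Server-side generator: yields a placeholder, then the resolved component payload.
async function* streamWeatherCard(city: string): AsyncGenerator<UIChunk> {
  yield { kind: "loading", text: `Fetching weather for ${city}...` };
  const temperature = 21; // stand-in for a real weather API call
  yield { kind: "component", name: "Weather", props: { city, temperature } };
}

// Client side: consume each chunk as it arrives, replacing the previous render.
async function collectChunks(city: string): Promise<UIChunk[]> {
  const chunks: UIChunk[] = [];
  for await (const chunk of streamWeatherCard(city)) chunks.push(chunk);
  return chunks;
}
```

The point is that the wire carries serialized UI states rather than raw data the client must interpret; the client swaps the loading chunk for the component chunk when it lands.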

  • @jamespratama9730 2 months ago

    This video is great!
    How can the AI responses be saved to the backend? Do you just save the content as a string, and presumably it will be formatted as React code so that when the user returns it can be fetched and rendered as before?

    • @raj_talks_tech 2 months ago +1

      You don't have to save the rendered code, just the responses, e.g. as jsonb. Should work for most parts.
      The rendering is handled by the framework, so you don't have to worry about it.
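The advice above (store the message data, for example in a jsonb column, and let the framework re-render from it) can be sketched like this; the message shape and names are assumptions for illustration, not something the video or SDK prescribes.

```typescript
// Persist chat messages as plain data (what you'd put in a jsonb column),
// not rendered React code. The shape below is an assumption for illustration.

type StoredMessage =
  | { role: "user" | "assistant"; content: string }
  // For generative UI, store which tool ran and its props; the framework
  // re-renders the matching component from this data on the next visit.
  | { role: "tool"; tool: "Weather" | "StockPrice"; props: Record<string, unknown> };

// What would go into e.g. INSERT INTO messages (payload) VALUES ($1::jsonb)
function serialize(messages: StoredMessage[]): string {
  return JSON.stringify(messages);
}

function deserialize(json: string): StoredMessage[] {
  return JSON.parse(json) as StoredMessage[];
}

const history: StoredMessage[] = [
  { role: "user", content: "Weather in Berlin?" },
  { role: "tool", tool: "Weather", props: { city: "Berlin", temperature: 21 } },
];

// Round-trips cleanly: the UI can be rebuilt from the data alone.
const restored = deserialize(serialize(history));
```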

    • @jamespratama9730 1 month ago +1

      @raj_talks_tech Thank you Raj! I watched your other videos on creating virtualized lists. Wondering if you've found the best approach to creating a virtualized list in combination with AI UI state? With Virtuoso, it seems impossible to invert endReached and put it on top unless you pay for their premium chat license. Plus, in UIState the messages have complex UIs that aren't as simple as a regular list, with different generated UIs depending on the message.

    • @raj_talks_tech 1 month ago

      @jamespratama9730 Nope, I believe Virtuoso is completely open-source. Can you check this codebase for the startReached code? codesandbox.io/p/sandbox/adoring-bhabha-hl168?file=%2Fsrc%2FApp.tsx
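For a reverse-infinite-scroll chat, the usual react-virtuoso pattern combines startReached with a firstItemIndex that decreases as older messages are prepended (startReached and firstItemIndex are real Virtuoso props; the state shape below is an assumption). A minimal sketch of that bookkeeping in plain TypeScript:

```typescript
// Bookkeeping behind Virtuoso-style reverse infinite scroll: when startReached
// fires and older messages are prepended, firstItemIndex must decrease by the
// number of prepended items so the visible scroll position stays stable.

interface ChatListState<T> {
  firstItemIndex: number; // passed to <Virtuoso firstItemIndex={...}>
  messages: T[];
}

function prependOlder<T>(state: ChatListState<T>, older: T[]): ChatListState<T> {
  return {
    firstItemIndex: state.firstItemIndex - older.length,
    messages: [...older, ...state.messages],
  };
}

// Start with a large index so there is room to prepend older pages.
let state: ChatListState<string> = { firstItemIndex: 10_000, messages: ["msg3", "msg4"] };
state = prependOlder(state, ["msg1", "msg2"]); // a startReached handler would call this
```

Complex per-message UIs don't change this bookkeeping; Virtuoso only needs stable indices, while what each item renders is up to the itemContent callback.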

    • @jamespratama9730 1 month ago

      @raj_talks_tech Awesome, thanks so much!

  • @user-ih5gm7mp9w 3 months ago

    It's technically cool, but can someone explain the use case? If I have a service or site, what is the benefit of having LLM-generated components? Is it separation of concerns with dynamic display? (I.e. you could have thousands of possible components depending on the context, a level of dynamism not practical natively?)
    I'm new to all this, so I hope to hear from the wise old-timers.

    • @raj_talks_tech 3 months ago +2

      It's definitely not ground-breaking, rather more of an experience improvement. Next.js has been pushing the idea of "fetching data on the server" for quite some time. This lets us skip streaming raw data to the client and building the UI there; instead, the user experience improves by streaming components as the data becomes available.
      It could also mean rendering a view in parts. In the past we used to think of data and UI as separate entities; now the lines are blurry.
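The idea of "rendering a view in parts" can be sketched in plain TypeScript: each section is emitted as soon as its own data resolves, instead of waiting for the slowest fetch. In Next.js this role is played by RSC streaming and Suspense boundaries; all names and timings below are illustrative assumptions.

```typescript
// "Rendering a view in parts": emit each UI section in completion order,
// not request order, so fast sections paint before slow ones.

type Part = { section: string; html: string };

const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function fetchHeader(): Promise<Part> {
  await delay(10); // fast data source
  return { section: "header", html: "<h1>Dashboard</h1>" };
}

async function fetchStocks(): Promise<Part> {
  await delay(30); // slower data source
  return { section: "stocks", html: "<ul>AAPL, MSFT</ul>" };
}

// Yield parts as they settle, using a shrinking map of pending promises.
async function* streamParts(parts: Promise<Part>[]): AsyncGenerator<Part> {
  const pending = new Map<number, Promise<{ i: number; v: Part }>>();
  parts.forEach((p, i) => pending.set(i, p.then((v) => ({ i, v }))));
  while (pending.size > 0) {
    const { i, v } = await Promise.race(pending.values());
    pending.delete(i);
    yield v;
  }
}

async function renderInParts(): Promise<string[]> {
  const order: string[] = [];
  for await (const part of streamParts([fetchHeader(), fetchStocks()])) {
    order.push(part.section); // paint this section immediately
  }
  return order;
}
```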

  • @zezhenxu9113 3 days ago +1

    Just with how Vercel is currently implementing it, it is more of a framework for calling LLMs. The idea of picking a widget to display is nothing new pre-LLM. I guess the idea of streaming the UI is new, but if I need to define the UI components beforehand, I don't see a reason to move to Next.js and SSR just for that. Presumably it brings some build-time and run-time optimizations, since you won't have all of the components on the client side, but the trade-off of mixing too much frontend logic into the backend really won't work for large applications.
    Also, unless the LLM is generating the UI itself, I really cannot see the difference between asking the backend to return the UI vs. returning the data representing the state of the UI. I doubt the LLM will be capable of generating interactive UI as well.

    • @raj_talks_tech 3 days ago

      @zezhenxu9113 Agreed. It just helps you think about building UI with streaming in mind.

  • @justbemeditation1860 3 months ago +4

    Please wake me up 😢

  • @maskedvillainai 3 months ago +2

    Please start by learning machine learning fundamentals before popping out a bunch of regurgitated AI models, for which the goal of solving any problem at all in products has become a fairy tale. This solves nothing more than playing with tools that did the legwork already and want you to think you're doing something complicated and brag-worthy.

    • @raj_talks_tech 3 months ago

      I am just the messenger here 😂

  • @evanethens 3 months ago +1

    What about code privacy? How do you scale here?

    • @raj_talks_tech 3 months ago +2

      As long as you don't send any data to the ChatGPT APIs, privacy shouldn't be a concern here. If you are sending some data to ChatGPT or any LLM API abstraction layer, then you most likely have to take care of it yourself.
      With regards to scale, it depends on your system architecture.