LLM as Next-Gen Operating System (MemGPT, Berkeley)

  • Published: Dec 11, 2024

Comments • 10

  • @headrobotics
    @headrobotics 1 year ago +3

    Ideally one or more LLMs could be available as an interface on top of all OSs and then those models would have access to all the underlying functions to accomplish the tasks.
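The setup described above, an LLM sitting on top of the OS with access to its underlying functions, is essentially a tool-dispatch loop. A minimal sketch (the tool registry and the structured call format are illustrative assumptions; in practice the call dict would be emitted by a real LLM, which is stubbed out here):

```python
# Sketch of an LLM-over-OS tool-dispatch loop.
# A real LLM would emit the structured call; here one step is hard-coded.
import os
import subprocess

# Registry of OS-level functions the model is allowed to invoke.
TOOLS = {
    "list_dir": lambda path=".": os.listdir(path),
    "read_file": lambda path: open(path).read(),
    "run": lambda cmd: subprocess.run(
        cmd, shell=True, capture_output=True, text=True
    ).stdout,
}

def dispatch(call: dict):
    """Route a structured call {'tool': name, 'args': {...}} to the OS."""
    fn = TOOLS[call["tool"]]
    return fn(**call.get("args", {}))

# A real model would produce this JSON from the user's request.
result = dispatch({"tool": "list_dir", "args": {"path": "."}})
print(result)
```

The registry acts as the permission boundary: the model can only reach OS functions that were explicitly exposed to it.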

    • @oldmankatan7383
      @oldmankatan7383 1 year ago +2

      If you haven't, check out the recent research on Mixture of Experts! 😊
      It sounds like the best solution for many problems will be several models, where only the models with expertise in the subject area get "a vote" on the answer. Like a team of people in the system that have a meeting every time you make a request.
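The "only experts in the subject area get a vote" idea is the gating step of a mixture of experts. A toy sketch of that routing, with keyword-overlap scoring standing in for a learned gate (experts, keywords, and answers are all invented for illustration):

```python
# Toy mixture-of-experts routing: a gate scores each expert's
# relevance to the query, and only the top-k experts answer.

def gate_scores(query: str, experts: dict) -> dict:
    """Score each expert by keyword overlap with the query (toy gate)."""
    words = set(query.lower().split())
    return {name: len(words & kws) for name, kws in experts.items()}

def route(query: str, experts: dict, answers: dict, k: int = 2) -> list:
    """Let only the top-k scoring experts 'vote' on the answer."""
    scores = gate_scores(query, experts)
    chosen = sorted(scores, key=scores.get, reverse=True)[:k]
    return [answers[name] for name in chosen]

experts = {
    "medicine": {"drug", "dose", "symptom"},
    "law": {"contract", "liability", "clause"},
    "code": {"python", "bug", "function"},
}
answers = {"medicine": "ask a doctor", "law": "ask a lawyer",
           "code": "fix the bug"}

print(route("why does my python function have a bug",
            experts, answers, k=1))
# prints ['fix the bug']
```

In real MoE systems the gate is a trained network and the experts are model components, but the shape is the same: score, select, combine.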

  • @Nickolas_Create
    @Nickolas_Create 1 year ago +1

    Microsoft already has an AI orchestration SDK, which is called Semantic Kernel.

  • @jayakrishnanp5988
    @jayakrishnanp5988 1 year ago

    There you go…
    I have this vision of such a different setup,
    taking the evolution of communication to the next level….
    ❤❤❤

  • @dono4516
    @dono4516 1 year ago +1

    And now we just need something open source that can be run privately.

  • @olegt3978
    @olegt3978 1 year ago

    Please make a video on fine-tuning for generating structured data formats like SVG for posters and flyers. How much training data is needed? Where to get such training data, synthetic?
    There is still no generation of posters and other vector graphics, which are normally created in Illustrator.

  • @AGIBreakout
    @AGIBreakout 1 year ago

    What new OS services (memory I/O, disk I/O, API calls, etc.) does an LLM-managing app need that other apps don't?

  • @Newlighte
    @Newlighte 1 year ago

    Any video on vLLM?

  • @The_Conspiracy_Analyst
    @The_Conspiracy_Analyst 9 months ago +1

    What a colossally bad idea. I can't think of a buggier and more unreliable technology on which to base an operating system. I'd rather have an OS based on coin tosses or dice rolls. At least THEN I could BOUND the error (BPP) and have SOME assurances. LLMs just give you byzantine faults.
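The BPP point above is that a coin-flip decider's error can be bounded: repeat the randomized run an odd number of times and take a majority vote, and the error probability shrinks exponentially. A small simulation of that amplification (the decider and its 2/3 success probability are toy assumptions):

```python
# Sketch of BPP-style error amplification: a randomized decider that
# is right with probability 2/3 on one run becomes almost always right
# after many independent runs plus a majority vote (Chernoff bound).
import random

def noisy_decider(truth: bool, p_correct: float = 2 / 3) -> bool:
    """One run: returns the true answer with probability p_correct."""
    return truth if random.random() < p_correct else not truth

def amplified(truth: bool, runs: int = 101) -> bool:
    """Majority vote over many independent runs."""
    votes = sum(noisy_decider(truth) for _ in range(runs))
    return votes > runs // 2

random.seed(0)
trials = 1000
errors = sum(amplified(True) != True for _ in range(trials))
print(f"error rate after majority vote: {errors / trials:.3f}")
```

A single run is wrong one time in three; after 101 votes the empirical error rate is near zero, which is exactly the "bounded error" guarantee the comment is contrasting with an LLM's unbounded failure modes.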

  • @bubbajones5873
    @bubbajones5873 1 year ago

    First! 🎉