Augmented Language Models (LLM Bootcamp)

  • Published: 8 Sep 2024

Comments • 17

  • @ayanghosh8226
    @ayanghosh8226 1 year ago +12

    Love the logical sequence of presenting the complications in advanced LLM applications. One of the best resources on the web, if one wants a solid mental map of how and when to augment LLMs.

  • @loic7572
    @loic7572 1 year ago +7

    This is the best bootcamp I've ever watched. I only wish I had known about the channel sooner.

  • @RohanKumar-vx5sb
    @RohanKumar-vx5sb 1 year ago +6

    You're the best. This has been the single most useful and up-to-date analysis of LLM advancements.

  • @jeromeeusebius
    @jeromeeusebius 11 months ago

    Great resource for understanding RAG and the various ways to improve the reliability and accuracy of LLMs. Thanks for sharing.

  • @robertcormia7970
    @robertcormia7970 1 year ago

    Another fantastic video (webinar) helping to build on foundational knowledge of LLMs. Clear explanations of chains, tools, APIs, and "process". Can't wait to watch the next one (LLMOps).

  • @lukeliem9216
    @lukeliem9216 1 year ago

    This talk is very informative about building LLM-based apps with proprietary datasets.

  •  1 year ago +2

    Awesome! Thanks Josh for the presentation!

  • @saratbhargavachinni5544
    @saratbhargavachinni5544 1 year ago +1

    Great talk! Thanks for sharing.

  • @za_daleko
    @za_daleko 11 months ago

    Thanks for this knowledge. Greetings from Poland.

  • @fudanjx
    @fudanjx 1 year ago +3

    Quick Summary:
    Introduction:
    Language models are powerful but lack knowledge of the world. We can augment them by providing relevant context and data.
    Techniques:
    - Retrieval: Searching a corpus and providing relevant documents as context.
    - Chains: Using one language model to develop context for another.
    - Tools: Giving models access to APIs and external data.
    Details:
    Retrieval:
    - Simplest way is adding relevant facts to context window.
    - As corpus scales, treat it as an information retrieval problem.
    - Embeddings and vector databases can improve retrieval.
    Chains:
    - Use one language model to develop context for another.
    - Can help encode complex reasoning and get around token limits.
    - Tools like Langchain provide examples of chain patterns.
    Tools:
    - Give models access to APIs and external data.
    - Chains involve manually designing tool use.
    - Plugins let models decide when to use tools.
    Key Takeaways:
    - Start with rules and heuristics to provide context.
    - As knowledge base scales, think about information retrieval.
    - Chains can help with complex reasoning and token limits.
    - Tools give models access to external knowledge.
    Conclusion:
    Augmenting language models with relevant context and data can significantly improve their capabilities. There are a variety of techniques to provide that augmentation, each with trade-offs around flexibility, reliability,
    and complexity.
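
    The retrieval pattern in the summary — add the most relevant facts to the context window, treating a growing corpus as an information retrieval problem — can be sketched in a few lines. This is an illustrative toy, not the talk's implementation: the "embedding" below is a bag-of-words counter so the example stays dependency-free, where a real system would use a learned embedding model and a vector database.

    ```python
    # Toy retrieval-augmented prompting: embed the query and corpus,
    # rank documents by cosine similarity, and prepend the top hits
    # to the prompt as context.
    import math
    from collections import Counter

    def embed(text: str) -> Counter:
        """Toy embedding: bag-of-words term counts (stand-in for a real model)."""
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * \
               math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
        """Return the k corpus documents most similar to the query."""
        q = embed(query)
        return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

    def build_prompt(query: str, corpus: list[str]) -> str:
        """Add retrieved facts to the context window before the question."""
        context = "\n".join(retrieve(query, corpus))
        return f"Context:\n{context}\n\nQuestion: {query}"

    corpus = [
        "Chains use one language model to develop context for another.",
        "Vector databases store embeddings for fast similarity search.",
    ]
    print(build_prompt("What do vector databases store?", corpus))
    ```

    Swapping `embed` for a real embedding model and `retrieve` for a vector-database query gives the scaled-up version the summary describes; the prompt-assembly step stays the same.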

  • @user-cq5ue1cj3n
    @user-cq5ue1cj3n 1 year ago +1

    Great. I had to listen at 0.75× speed so as not to miss anything.

  • @deeplearningpartnership
    @deeplearningpartnership 1 year ago

    Cool

  • @domlahaix
    @domlahaix 1 year ago

    Crocodile, Ball.... unless you're working for Lacoste 😀

  • @kennethcarvalho3684
    @kennethcarvalho3684 10 months ago

    Isn't this just search, like Google?

  • @SavanVyas91
    @SavanVyas91 1 year ago

    What's his name? Where can I find him?