How to Construct Domain Specific LLM Evaluation Systems: Hamel Husain and Emil Sedgh

  • Published: Nov 4, 2024
  • Many failed AI products share a common root cause: a failure to create robust evaluation systems. Evaluation systems allow you to improve your AI quickly and systematically, and unlock superpowers like the ability to curate data for fine-tuning. However, many practitioners struggle with how to construct evaluation systems that are specific to their problems.
    In this talk, we walk through a detailed example of how to construct domain-specific evaluation systems (a minimal illustrative sketch appears below, after the speaker bios).
    Recorded live in San Francisco at the AI Engineer World's Fair. See the full schedule of talks at www.ai.enginee... and join us at the AI Engineer World's Fair in 2025! Get your tickets today at ai.engineer/2025
    About Hamel
    Hamel Husain started working with language models five years ago when he led the team that created CodeSearchNet, a precursor to GitHub Copilot. Since then, he has seen many successful and unsuccessful approaches to building LLM products. Hamel is also an active maintainer of and contributor to a wide range of open-source ML/AI projects. He is currently an independent consultant.
    About Emil
    Emil is CTO at Rechat, where he leads the development of Lucy, an AI personal assistant designed to support real estate agents.
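
    To give a rough flavor of what a domain-specific evaluation system can look like, here is a minimal sketch in Python. It is not the system presented in the talk: run_assistant is a hypothetical stand-in for your LLM application, and the real-estate test cases and checks are invented assertion-style examples.

```python
# Hypothetical sketch of a domain-specific evaluation harness.
# Not the system from the talk: run_assistant stands in for your
# LLM application, and the cases below are invented examples.
import re
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str                   # input to the assistant
    check: Callable[[str], bool]  # domain-specific assertion
    label: str                    # human-readable test name

def run_assistant(prompt: str) -> str:
    """Placeholder; replace with a call to your LLM application."""
    return "Sure, I scheduled a showing for 123 Main St at 3pm."

CASES = [
    EvalCase(
        prompt="Schedule a showing for 123 Main St at 3pm.",
        check=lambda out: "123 Main St" in out and "3pm" in out,
        label="showing-keeps-address-and-time",
    ),
    EvalCase(
        prompt="Draft a listing email, but do not mention the price.",
        check=lambda out: re.search(r"\$\s*\d", out) is None,
        label="email-omits-price",
    ),
]

def run_evals(cases: list[EvalCase]) -> None:
    """Run every case and print a pass/fail report."""
    passed = 0
    for case in cases:
        output = run_assistant(case.prompt)
        ok = case.check(output)
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}  {case.label}")
    print(f"{passed}/{len(cases)} checks passed")

if __name__ == "__main__":
    run_evals(CASES)
```

    Assertion-style checks like these are cheap enough to run on every change; graders that are harder to express as code (for example, LLM-as-judge rubrics) can be layered on top.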

Comments • 2

  • @hosseinderakhshan8632 · A month ago · +5

    Can't say enough how proud I am of you!

  • @maxjesch · A month ago · +1

    super relevant content! Thanks for sharing!