Ray Observability 2.0: How to Debug Your Ray Applications with New Observability Tooling

  • Published: 18 Oct 2024
  • While Ray seamlessly scales offline and online ML computation, it also brings its own set of challenges, particularly when it comes to debugging large-scale ML workloads. To address these challenges, we will highlight recent advancements in observability tooling within Ray and Anyscale and show how users can apply these new tools to effectively debug both offline (preprocessing, training, tuning, inference) and online (serving) ML workloads.
    In this talk, we will discuss the various tools available to Ray/Anyscale users on their journey of developing and deploying an ML application in production. We will demo developing an ML workload, bringing it to production, and using the many tools Anyscale and Ray provide along the way. By the end of the talk, beginner users will understand the most valuable and fundamental observability tools available, and advanced users will get a glimpse of some of the more advanced functionality for debugging particularly tricky errors.
    Takeaways: An introduction to the new observability tools in Ray/Anyscale, and guidance on how to use them when developing real-world workloads.
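    The description does not list the specific tools demoed in the talk, but as a minimal sketch of Ray's built-in observability, Ray 2.x exposes a state API (`ray.util.state`) for programmatically listing tasks and their states alongside the Ray Dashboard. The `flaky` task and its injected failure below are illustrative assumptions, not part of the talk:

    ```python
    # Minimal sketch: surfacing a failed task with Ray's state observability API.
    import ray
    from ray.util.state import list_tasks

    ray.init()  # start a local Ray cluster

    @ray.remote
    def flaky(i):
        # Deliberately fail on one input to produce a FAILED task to inspect.
        if i == 3:
            raise ValueError(f"bad input: {i}")
        return i * 2

    refs = [flaky.remote(i) for i in range(5)]

    # Gather results, tolerating the injected failure.
    results = []
    for r in refs:
        try:
            results.append(ray.get(r))
        except ray.exceptions.RayTaskError as e:
            print("task failed:", e)

    # List recent tasks with their states (e.g. FINISHED, FAILED),
    # the same data the `ray list tasks` CLI and the Dashboard show.
    for t in list_tasks():
        print(t.name, t.state)
    ```

    The same state data is available from the CLI (`ray list tasks`, `ray summary tasks`) and in the Ray Dashboard, which is typically where a debugging session would start.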
    Find the slide deck here: drive.google.c...
    About Anyscale
    ---
    Anyscale is the AI Application Platform for developing, running, and scaling AI.
    www.anyscale.com/
    If you're interested in a managed Ray service, check out:
    www.anyscale.c...
    About Ray
    ---
    Ray is the most popular open source framework for scaling and productionizing AI workloads. From Generative AI and LLMs to computer vision, Ray powers the world’s most ambitious AI workloads.
    docs.ray.io/en...
    #llm #machinelearning #ray #deeplearning #distributedsystems #python #genai
