Kari Briski On Overcoming The Complexities Of Deploying Generative AI

  • Published: 2 Oct 2024
  • The last year has seen significant progress in training state-of-the-art foundation LLMs. Enterprises have rushed to deploy these models in their products, which means scaling up inference in their infrastructure. But deploying inference brings its own complexity: teams must choose and orchestrate across hardware, software, and the requirements of the products that depend on the inference. In this video, Kari Briski of NVIDIA explores how NIM abstracts away this complexity for technical users, tightly coupling and simplifying the SDKs, hardware, and model, and doing the hard work for the team implementing the inference pipeline. Learn how NIM runs natively in any environment without data entering or leaving it.
    Subscribe for more! www.snowflake.c...
    Explore sample code, download tools, and connect with peers: developers.sno...
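
Once deployed, a NIM microservice exposes an OpenAI-compatible HTTP API, which is part of how it hides the hardware and SDK details described above. A minimal sketch of building a chat-completions request for a locally hosted endpoint; the model name and URL are illustrative assumptions, not values from the video:

```python
import json

# NIM endpoints follow the OpenAI chat-completions request shape.
# The URL and model below are assumptions for illustration only.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-compatible chat-completions payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("meta/llama3-8b-instruct",
                             "Summarize NIM in one sentence.")
body = json.dumps(payload)  # this JSON body would be POSTed to NIM_URL
```

Because the API surface matches OpenAI's, existing client code can typically be pointed at the NIM endpoint by changing only the base URL.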
