Accelerate AI training workloads with Google Cloud TPUs and GPUs

  • Published: 16 Jul 2024
  • Training large AI models at scale requires high-performance, purpose-built infrastructure. This session will guide you through the key considerations for choosing tensor processing units (TPUs) and graphics processing units (GPUs) for your training needs. Explore the strengths of each accelerator for various workloads, such as large language models and generative AI models. Discover best practices for training and optimizing your training workflow on Google Cloud using TPUs and GPUs. Understand the performance and cost implications, along with cost-optimization strategies at scale. (An illustrative training-step sketch follows the session details below.)
    Speakers: Vaibhav Singh, Rob Martin, Amanpreet Singh, Erik Nijkamp
    Watch more:
    All sessions from Google Cloud Next → goo.gle/next24
    #GoogleCloudNext
    ARC219

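The sketch below is not from the session itself; it is a minimal JAX example of the kind of accelerator-backed training step the abstract refers to. JAX transparently targets whichever accelerator backs the runtime (TPU, GPU, or CPU), and the model, loss, and data here are placeholders chosen for brevity.

# Minimal illustrative sketch (not from the session): detect the attached
# accelerator and run an XLA-compiled training step with JAX.
import jax
import jax.numpy as jnp

# Lists the devices JAX sees in the current runtime (TPU, GPU, or CPU).
print("Devices:", jax.devices())

def loss_fn(params, x, y):
    # Simple linear-regression loss as a stand-in for a real model.
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@jax.jit  # Compiles the step with XLA for the attached TPU or GPU.
def train_step(params, x, y, lr=0.1):
    grads = jax.grad(loss_fn)(params, x, y)
    # Plain SGD update applied to every leaf of the parameter tree.
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (256, 4))
y = x @ jnp.ones((4,)) + 0.5
params = {"w": jnp.zeros((4,)), "b": jnp.zeros(())}

for _ in range(100):
    params = train_step(params, x, y)

print("Final loss:", loss_fn(params, x, y))

The same code runs unchanged on a Cloud TPU VM or a GPU instance; only the runtime's attached accelerator differs, which is the portability point the session abstract highlights.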