How to define a storage infrastructure for AI and analytical workloads

  • Published: 30 Jun 2024
  • This session is for AI/ML and data practitioners who want to build AI/ML data pipelines at scale and select the right combination of block, file, and object storage solutions for their use case. Learn how to optimize AI/ML workloads such as data preparation, training, tuning, inference, and serving with the best-suited storage solution, and how to easily integrate them into your Compute Engine, Google Kubernetes Engine, or Vertex workflows. We'll also dive into how to optimize analytics workloads with Cloud Storage and Anywhere Cache.
    Speakers: David Stiver, Alex Bain, Jason Wu, Yusuke Yachide
    Watch more:
    All sessions from Google Cloud Next → goo.gle/next24
    #GoogleCloudNext
    ARC306
