LLM Training on Run:ai with NVIDIA NeMo Megatron 💥

  • Published: 26 Jul 2023
  • Omer Dayan shows what it is like to train large models on 32 A100 80GB GPUs with Run:ai: optimized infrastructure, an optimized software stack, and the fastest training possible,
    all on Kubernetes with Run:ai and NVIDIA’s NeMo Megatron. A rough sketch of what that GPU layout looks like in code follows below.
    Thanks to our partners at NVIDIA for the collaboration on this one.
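The video itself shows no code, but as a rough illustration of the scale described (4 nodes × 8 A100 80GB GPUs = 32 GPUs), here is a minimal sketch of how that layout is typically expressed in PyTorch Lightning, the trainer framework NeMo Megatron builds on. The `TinyLM` module, the random dataset, and every hyperparameter below are placeholders invented for this sketch, not anything taken from the video or from Run:ai's actual setup.

```python
# Minimal sketch (assumptions, not the video's setup): a 4-node x 8-GPU
# data-parallel layout expressed via PyTorch Lightning, which NeMo builds on.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class TinyLM(pl.LightningModule):
    """Placeholder module standing in for a real NeMo Megatron GPT model."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(128, 128)

    def training_step(self, batch, batch_idx):
        (x,) = batch
        # Dummy loss just to make the example self-contained.
        return self.layer(x).pow(2).mean()

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=1e-4)


if __name__ == "__main__":
    data = DataLoader(TensorDataset(torch.randn(1024, 128)), batch_size=32)
    trainer = pl.Trainer(
        accelerator="gpu",
        devices=8,       # 8 GPUs per node
        num_nodes=4,     # 4 nodes -> 32 A100s total
        strategy="ddp",  # plain data parallelism; NeMo layers tensor/pipeline parallelism on top
        max_steps=100,
    )
    trainer.fit(TinyLM(), data)
```

In the setup shown in the video, Run:ai handles scheduling the distributed workload across the Kubernetes cluster, while NeMo Megatron supplies the optimized model and parallelism stack on top of a launcher like the one above.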
