WWDC24: Deploy machine learning and AI models on-device with Core ML | Apple

  • Published: Sep 28, 2024
  • Learn new ways to optimize speed and memory performance when you convert and run machine learning and AI models through Core ML. We’ll cover new options for model representations, performance insights, execution, and model stitching, which can be used together to create compelling and private on-device experiences. A minimal loading-and-prediction sketch follows the resource list below.
    Discuss this video on the Apple Developer Forums:
    developer.appl...
    Explore related documentation, sample code, and more:
    Stable Diffusion with Core ML on Apple Silicon: machinelearnin...
    Core ML: developer.appl...
    Introducing Core ML: developer.appl...
    Improve Core ML integration with async prediction: developer.appl...
    Use Core ML Tools for machine learning model compression: developer.appl...
    Convert PyTorch models to Core ML: developer.appl...
    00:00 - Introduction
    01:07 - Integration
    03:29 - MLTensor
    08:30 - Models with state
    12:33 - Multifunction models
    15:27 - Performance tools
    More Apple Developer resources:
    Video sessions: apple.co/Video...
    Documentation: apple.co/Devel...
    Forums: apple.co/Devel...
    App: apple.co/Devel...
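
The session itself walks through these APIs in detail; as a rough orientation, the sketch below shows the general shape of loading a compiled Core ML model and running an asynchronous prediction in Swift. The model file name ("Classifier.mlmodelc") and the "image" input feature name are hypothetical placeholders, not taken from the video.

```swift
import Foundation
import CoreML
import CoreVideo

// A minimal sketch, assuming a compiled model named "Classifier.mlmodelc"
// bundled with the app and a single image input feature named "image".
func classify(pixelBuffer: CVPixelBuffer) async throws -> MLFeatureProvider {
    // Locate the compiled model in the app bundle.
    guard let modelURL = Bundle.main.url(forResource: "Classifier",
                                         withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }

    // Choose which compute units the model may use (CPU, GPU, Neural Engine).
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .all

    // Load the model asynchronously.
    let model = try await MLModel.load(contentsOf: modelURL,
                                       configuration: configuration)

    // Wrap the input in a feature provider keyed by the model's input name.
    let input = try MLDictionaryFeatureProvider(
        dictionary: ["image": MLFeatureValue(pixelBuffer: pixelBuffer)])

    // Run the prediction and return the output features.
    return try await model.prediction(from: input)
}
```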
