Deploy a custom Machine Learning model to mobile

  • Published: 18 Sep 2024
  • Walk through the steps to author, optimize, and deploy a custom TensorFlow Lite model to mobile using best practices and the latest developer tooling. This includes using the model authoring APIs, applying and debugging model optimization techniques such as quantization, benchmarking on a real device, and deploying to Android.
    Resource:
    Converting your model → goo.gle/3M5yux0
    Analyze your converted model → goo.gle/3L76HLc
    TensorFlow Model Optimization → goo.gle/39T3iCX
    Quantization Debugger → goo.gle/3N4uE7h
    Performance best practices → goo.gle/3Lgcxdv
    TensorFlow Lite website → goo.gle/37BhVdk
    TensorFlow Forum → goo.gle/3L0RxY0
    TensorFlow website → goo.gle/3KejoUZ
    Follow on Twitter → goo.gle/3sq7a4C
    Speakers: Arun Venkatesan, Yu-Cheng Ling, Adam Koch
    Watch more:
    All Google I/O 2022 Sessions → goo.gle/IO22_A...
    ML/AI at I/O 2022 playlist → goo.gle/IO22_M...
    All Google I/O 2022 technical sessions → goo.gle/IO22_S...
    Subscribe to TensorFlow → goo.gle/Tensor...
    #GoogleIO
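The conversion and quantization steps the session describes can be sketched roughly as follows. This is a minimal illustration, not the exact code from the talk: it assumes TensorFlow 2.x, and the tiny Keras model is a stand-in for your own trained model.

```python
# Sketch of the convert + quantize step from the session (assumes TF 2.x).
import tensorflow as tf

# Stand-in model: a single dense layer. Replace with your trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
])

# Convert to TensorFlow Lite, enabling post-training dynamic-range
# quantization via the default optimization flag.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()  # serialized FlatBuffer as bytes

# Write the .tflite file you would bundle into an Android app's assets.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

For full-integer quantization, benchmarking, and per-layer debugging of accuracy loss, see the Quantization Debugger and performance links above.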

Comments • 2

  • @alexanderschober2709
    A year ago +2

    Thanks for the nice introduction to the TFLite pipeline. You were speaking about how this framework supports you with deploying a model into production. But I would like to learn more about the last part :) Are there resources where I can learn more about the remote deployment of my models onto edge devices and monitoring their performance?

  • @andreamonicque8663
    A year ago

    Thanks for that!!! This stuff helped me a lot!