Building with AI on Android | Spotlight Week

  • Published: 23 Dec 2024

Comments • 6

  • @AndroidDevelopers  2 months ago +7

    Watch more Spotlight Weeks → goo.gle/SpotlightWeeks

  • @JorgeNonell-p9p  2 months ago +4

    Great presentation guys! From my reading, it seems like one advantage of LiteRT over MediaPipe is finer-grained control over which models are used, including the ability to quantize them. One use case I'm considering for adding on-device AI to our Android app is supporting a range of hardware configurations, for example devices with more or less memory. Do you think it would be possible to switch at runtime to the model that performs best on that particular device's hardware? Would that take a ton of work to implement, in your opinion?

    • @OliGaymondAtGoogle  2 months ago +1

      Absolutely, it's very common for apps to ship more than one model to target different device capabilities. For example, you might use a high-quality model for real-time effects on recent devices and a smaller model for older devices.
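      To make that idea concrete, here is a minimal Kotlin sketch of picking a model variant at runtime from the device's memory profile. The asset names, the 6 GB threshold, and the use of the LiteRT/TFLite Interpreter and support-library FileUtil are illustrative assumptions, not details from the video.

      import android.app.ActivityManager
      import android.content.Context
      import org.tensorflow.lite.Interpreter
      import org.tensorflow.lite.support.common.FileUtil

      // Pick a bundled model asset based on how much RAM the device has.
      // Thresholds and file names are placeholders for your own models.
      fun chooseModelAsset(context: Context): String {
          val activityManager =
              context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
          val memoryInfo = ActivityManager.MemoryInfo()
          activityManager.getMemoryInfo(memoryInfo)

          val totalRamGb = memoryInfo.totalMem / (1024.0 * 1024.0 * 1024.0)
          return when {
              activityManager.isLowRamDevice -> "effect_int8_small.tflite"
              totalRamGb < 6.0 -> "effect_int8.tflite"
              else -> "effect_fp16_large.tflite"
          }
      }

      // Load the chosen asset into an interpreter; the rest of the inference
      // code can stay the same as long as the models share input/output shapes.
      fun loadInterpreter(context: Context): Interpreter {
          val modelBuffer = FileUtil.loadMappedFile(context, chooseModelAsset(context))
          return Interpreter(modelBuffer)
      }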

  • @shubhamjain608  2 months ago +4

    Nice conversation 👍

  • @StreetsOfBoston  2 months ago

    At 13:24, it's suggested to use the Gemini models in the cloud.
    Since this is an Android discussion, not a web discussion, reaching the Gemini cloud API would mean embedding/shipping your API key with your APK... not very safe, and subject to reverse engineering. Having your API key stolen or leaked can get costly.

    • @iurysza  1 month ago +1

      For that use case you can use the Firebase Vertex AI SDK.
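      As a minimal sketch of that suggestion: with the Vertex AI in Firebase Kotlin SDK (the firebase-vertexai dependency plus your app's normal Firebase setup), calls are authorized through your Firebase project rather than a Gemini API key shipped in the APK. The model name and prompt below are illustrative; check the current Firebase docs for the exact artifacts and model versions.

      import com.google.firebase.Firebase
      import com.google.firebase.vertexai.vertexAI

      // Calls Gemini in the cloud through Firebase; no Gemini API key is
      // bundled with the APK, and access can be further restricted with App Check.
      suspend fun summarize(text: String): String? {
          val model = Firebase.vertexAI.generativeModel("gemini-1.5-flash")
          val response = model.generateContent("Summarize this text: $text")
          return response.text
      }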