How do you use on-device Generative AI for your Android app?
Let us know below!
Is it available for the Samsung Galaxy A15? I commented before you did.
Will older devices with much less capable hardware be supported in the future?
@byunghwara With older hardware, you are probably better off running it remotely. LLMs need a lot of memory and processing power.
I have tested text-to-image generation with OnnxStream and Stable Diffusion, and that can run with only 512 MB, but I haven't seen anything similar for LLMs. Unless you can find a very small LLM, and then it will be limited in functionality.
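For comparison, the remote route is only a few lines of code. Here is a rough Kotlin sketch assuming the Google AI client SDK for Android; the model name and API-key handling are just my own example choices, not something from the talk:

```kotlin
import com.google.ai.client.generativeai.GenerativeModel

// Sketch of the remote alternative for older hardware, assuming the
// Google AI client SDK for Android. Model name is an example only.
suspend fun generateRemotely(apiKey: String, prompt: String): String? {
    val model = GenerativeModel(
        modelName = "gemini-1.5-flash", // example model name
        apiKey = apiKey                 // supply your own key however you manage secrets
    )
    // Inference runs in Google's cloud, so the phone only needs a network connection.
    return model.generateContent(prompt).text
}
```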
@LivingLinux Thanks for the explanation! Very helpful!
1:26
They forgot to mention that Gemini Nano only runs on Pixel 9+, which limits its practicality for app developers at the moment.
It would also be helpful to know the hardware requirements for running MediaPipe LLM models.
My point is: it's great to have all these models, but where can we actually use them beyond just experimentation?
Really, only Pixel 9? I am building a simple chat app using Gemini Nano based on their documentation on an S24 Ultra, and I'm running into a problem: AICore seems to be failing.
@piopanjaitan Looks like MediaPipe is more suitable in this case.
Pixel 8 got Nano.
This needs to be opened up to all Gemini Nano devices like Pixel 8 and Galaxy S.
Pixel 8 Pro:
AICore failed with error type 2-INFERENCE_ERROR and error code 8-NOT_AVAILABLE: Required LLM feature not found
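In the meantime I'm guarding the call so the app doesn't hard-fail on unsupported devices. Rough Kotlin sketch of that pattern; the two lambdas are placeholders for whatever on-device (AICore / Gemini Nano) and remote implementations you actually use, not a real API:

```kotlin
import android.util.Log

// Sketch: try the on-device path first and fall back to a remote model
// when the device reports the LLM feature as unavailable.
suspend fun generateWithFallback(
    prompt: String,
    runOnDevice: suspend (String) -> String,  // your Gemini Nano / AICore call
    runRemote: suspend (String) -> String     // your cloud fallback
): String =
    try {
        runOnDevice(prompt)
    } catch (e: Exception) {
        // On unsupported devices AICore surfaces errors like
        // "NOT_AVAILABLE: Required LLM feature not found".
        Log.w("GenAI", "On-device inference unavailable, falling back to remote", e)
        runRemote(prompt)
    }
```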
How does on-device inference affect mobile phone battery consumption and usage time?
Mobile is one giant game of functionality vs battery for everything, so this isn't any different :) I haven't played with Gemini Nano yet really, but I can give you an answer for MediaPipe: depending on the model and its processing needs, you'll see more or less battery drain. On my Pixel 8 Pro I see less than 1% battery drop per query with Gemma 2b.
Right now I think we're still in the exciting very early stages of "we've made this work!", and next is multiple improvement stages where things work better and more efficiently (especially on battery usage). By the time everything works really well, it'll be less exciting because people will have had LLMs in their apps for a while so it'll be old news, but that's just the cycle of introducing new tech that we (developers) have always seen.
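If you want to measure the per-query drain yourself, here is roughly what a single query looks like with the MediaPipe LLM Inference API. The model path and option values are examples only (point it at wherever your converted Gemma model actually lives), and the exact API details may differ from the current docs:

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Sketch of one MediaPipe LLM Inference query, e.g. with a converted Gemma 2B model.
fun runSingleQuery(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma-2b-it-gpu-int4.bin") // example path
        .setMaxTokens(256)
        .build()
    val llm = LlmInference.createFromOptions(context, options)
    val response = llm.generateResponse(prompt)
    llm.close() // release the model when you're done with it
    return response
}
```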
Does this work on the Samsung S24 Plus?
Great presentation, guys. Really excited to try adding some local AI to my company's mobile app. Towards the end of the presentation Terence said MediaPipe will be more for researchers looking to play around with the latest models, but is there any technical limitation why we wouldn't be able to ship an app using one of the smaller Llama models (for example) via MediaPipe if it fits our use case better than Gemini Nano? I'm guessing Nano is probably going to be the most efficient model running on Google hardware, but are there any other differences to consider when deciding to start a project with Nano vs. MediaPipe?
Nope, no limitations. Generally the restriction is that models need to be downloaded by the app and stored on the device. If that's OK for your use case, feel free to do it.
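To make the download step concrete, something like this before you initialize the runtime works; the URL and file name are placeholders (MediaPipe doesn't download models for you), and in a real app you'd do this off the main thread and verify a checksum:

```kotlin
import android.content.Context
import java.io.File
import java.net.URL

// Sketch: one-time download of the model file into app-private storage,
// so the LLM runtime can load it from a local path afterwards.
fun ensureModelOnDevice(
    context: Context,
    modelUrl: String,                   // wherever you host your converted model
    fileName: String = "llm_model.bin"  // placeholder name
): File {
    val target = File(context.filesDir, fileName)
    if (!target.exists()) {
        // Simple blocking copy; run on a background dispatcher in practice.
        URL(modelUrl).openStream().use { input ->
            target.outputStream().use { output -> input.copyTo(output) }
        }
    }
    return target
}
```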
Bring it to the Pixel 7 series if you can.
Love it...
Google always doing the Lord's work. Thank you.
👍
Why isn't there Persian support? There's no API for the LLM, so a person can't tell the difference. And they didn't even leave an email contact. Language understanding could have been 10 times better.
Good
🍎😕
🫶🏽
Gemini Nano, Kaggle, Medium
👍