A walkthrough for Android’s on-device GenAI solutions | Spotlight Week

  • Published Jan 16, 2025

Comments • 31

  • @AndroidDevelopers  3 months ago  +11

    How do you use on-device Generative AI for your Android app?
    Let us know below!

    • @VICTORdiula  3 months ago  +2

      Does it work on the Samsung Galaxy A15? I commented before you did.

    • @byunghwara  3 months ago  +1

      Will older devices with much less capable hardware be supported in the future?

    • @LivingLinux  3 months ago  +1

      @byunghwara With older hardware, you're probably better off running it remotely. LLMs need a lot of memory and processing power.
      I've tested text-to-image generation with OnnxStream and Stable Diffusion, and that can run with only 512 MB, but I haven't seen anything similar for LLMs. Unless you can find a very small LLM, but that would mean it's limited in functionality.

    • @byunghwara  3 months ago

      @LivingLinux Thanks for the explanation! Very helpful!

    • @SamyBo-me7yj  3 months ago

      1:26

  • @MrSomethingggg  3 months ago  +1

    They forgot to mention that Gemini Nano only runs on Pixel 9+, which limits its practicality for app developers at the moment.
    It would also be helpful to know the hardware requirements for running MediaPipe LLM models.
    My point is: it's great to have all these models, but where can we actually use them beyond just experimentation?

    • @piopanjaitan  3 months ago

      Really only the Pixel 9?
      I'm building a simple chat app using Gemini Nano based on their documentation, on an S24 Ultra, and I'm running into a problem:
      AICore seems to fail.

    • @MrSomethingggg  3 months ago

      @piopanjaitan Looks like MediaPipe is more suitable in this case.

    • @vap0rtranz  25 days ago

      The Pixel 8 got Nano.

  • @andyfelber442  2 months ago  +1

    Does this work on the Samsung S24 Plus?

  • @liujuncn  3 months ago  +2

    How does on-device inference affect mobile phone battery consumption and usage time?

    • @ptruiz_google  3 months ago  +3

      Mobile is one giant game of functionality vs. battery for everything, so this isn't any different :) I haven't really played with Gemini Nano yet, but I can give you an answer for MediaPipe: depending on the model and its processing needs, you'll see more or less battery drain. On my Pixel 8 Pro I see less than 1% battery drop per query with Gemma 2B.
      Right now I think we're still in the exciting, very early "we've made this work!" stage, and next come multiple rounds of improvement where things work better and more efficiently (especially on battery usage). By the time everything works really well it'll be less exciting, because people will have had LLMs in their apps for a while and it'll be old news, but that's just the cycle of introducing new tech that we developers have always seen.

  • @Hana2736_  2 months ago

    This needs to be opened up to all Gemini Nano devices, like the Pixel 8 and Galaxy S. On a Pixel 8 Pro:
    AICore failed with error type 2-INFERENCE_ERROR and error code 8-NOT_AVAILABLE: Required LLM feature not found

  • @JorgeNonell-p9p  3 months ago

    Great presentation, guys. Really excited to try adding some local AI to my company's mobile app. Towards the end of the presentation Terence said MediaPipe will be more for researchers looking to play around with the latest models, but is there any technical limitation why we wouldn't be able to ship an app using one of the smaller Llama models (for example) via MediaPipe if it fits our use case better than Gemini Nano? I'm guessing Nano is probably going to be the most efficient model running on Google hardware, but are there any other differences to consider when deciding to start a project with Nano vs. MediaPipe?

    • @paultr88  3 months ago  +2

      Nope, no limitations. Generally the restriction is that the app needs to pull the model down remotely and store it on the device. If that's OK for your use case, though, feel free to do it.

  • @shavinthafernando7197  2 months ago

    Bring it to the Pixel 7 series if you can.

  • @ragingFlameCreations  3 months ago  +1

    Google, always doing the Lord's work. Thank you.

  • @VICTORdiula  3 months ago  +1

    👍

  • @wazupp2641  3 months ago

    Love it...

  • @MariusSerban-q8j  3 months ago

    Good

  • @Max86421  3 months ago

    🍎😕

  • @BarberMood  29 days ago

    Why isn't it in Persian? There's no API for an LLM ... a person can't tell ... then a quantum engine ... they didn't include ... language understanding would be 10 times better.

  • @Dev8S9  3 months ago

    🫶🏽

  • @brewerclan4059  3 months ago

    Gemini Nano kaggle medium

  • @andreialcaza  3 months ago

    👍