PocketPal AI vs. Private LLM: Large Context and Reasoning with Llama 3.2 3B

  • Published: Jan 11, 2025

Comments • 3

  • @TheFelixAlabi 9 days ago

    Can I have this on iPhone 13 Pro? Will it make my device hot while running? Do I need to download models separately after downloading on App Store? And most importantly, does your model have access to the internet if I do actually need it?

    • @PrivateLLM 4 days ago

      The iPhone 13 Pro, with 6GB RAM, can comfortably run smaller 3B models like Qwen 2.5 / Llama 3.2 / Gemma 2 2B. For best performance, we recommend devices with 8GB RAM, such as the iPhone 15 Pro or iPhone 16, to run larger models like Qwen 2.5 7B or Llama 3.1 8B. Local LLM inference is computationally intensive, so your device may get warm depending on usage.
      Models need to be downloaded separately after installing the app (the app ships with a base model pre-installed). The app processes everything locally, and the models don't have internet access.

    • @TheFelixAlabi 4 days ago

      @PrivateLLM thanks for clarifying
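
For readers curious about the RAM-based recommendation in the reply above, here is a minimal Swift sketch of how an app might pick a model tier from the device's physical memory, using Foundation's ProcessInfo API. The tier names and the 8 GB cutoff are illustrative assumptions, not Private LLM's actual selection logic.

    import Foundation

    // Illustrative model tiers; names mirror the models mentioned
    // in the thread above, but the grouping is an assumption.
    enum ModelTier {
        case small   // ~2-3B params, e.g. Llama 3.2 3B, Gemma 2 2B
        case large   // ~7-8B params, e.g. Qwen 2.5 7B, Llama 3.1 8B
    }

    func recommendedTier() -> ModelTier {
        // Total physical RAM in bytes: 6 GB on an iPhone 13 Pro,
        // 8 GB on an iPhone 15 Pro / iPhone 16.
        let ramGB = Double(ProcessInfo.processInfo.physicalMemory) / 1_073_741_824
        // Hypothetical cutoff: larger models are only comfortable
        // on 8 GB devices, per the reply above.
        return ramGB >= 8 ? .large : .small
    }

In practice an app would also account for memory already in use and the quantization of each model, but the basic idea of gating model size on physical RAM follows directly from the advice in the reply.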