Local LLM-Powered Voice Assistant on Raspberry Pi

  • Published: 24 Aug 2024
  • The recent GPT-4o audio demo proved that an LLM-powered voice assistant done correctly could create an awe-inspiring experience. Unfortunately, GPT-4o's audio feature is not publicly available. Instead of busy waiting, we will show you how to build a voice assistant of the same caliber right now in less than 400 lines of Python using Picovoice's on-device voice AI and local LLM stacks.
    Why use Picovoice's tech stack instead of OpenAI? Picovoice is on-device, meaning voice processing and LLM inference are performed locally, without the user's data traveling to a third-party API. Products built with Picovoice are private by design, compliant (GDPR, HIPAA, ...), and real-time, with no unreliable network (API) latency. A minimal code sketch of the pipeline follows the links below.
    Blog: picovoice.ai/b...
    GitHub: github.com/Pic...
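
    Below is a minimal, illustrative sketch of the wake-word → speech-to-text → LLM → text-to-speech loop described above, assuming the pvporcupine, pvcheetah, picollm, pvorca, and pvrecorder Python packages. It is not the full ~400-line demo from the GitHub repository; constructor arguments, return types, and the model file name are assumptions and may differ from the released SDKs.

    import picollm
    import pvcheetah
    import pvorca
    import pvporcupine
    from pvrecorder import PvRecorder

    ACCESS_KEY = 'YOUR_PICOVOICE_ACCESS_KEY'  # from the Picovoice Console
    MODEL_PATH = 'model.pllm'                 # any picoLLM model file (hypothetical name)

    # Create the on-device engines: wake word, streaming STT, local LLM, and TTS.
    porcupine = pvporcupine.create(access_key=ACCESS_KEY, keywords=['picovoice'])
    cheetah = pvcheetah.create(access_key=ACCESS_KEY)
    pllm = picollm.create(access_key=ACCESS_KEY, model_path=MODEL_PATH)
    orca = pvorca.create(access_key=ACCESS_KEY)

    # Porcupine and Cheetah both consume 512-sample, 16 kHz frames.
    recorder = PvRecorder(frame_length=porcupine.frame_length)
    recorder.start()

    try:
        while True:
            # 1. Block until the wake word is detected.
            while porcupine.process(recorder.read()) == -1:
                pass

            # 2. Transcribe audio until Cheetah reports the end of the utterance.
            transcript = ''
            while True:
                partial, is_endpoint = cheetah.process(recorder.read())
                transcript += partial
                if is_endpoint:
                    transcript += cheetah.flush()
                    break

            # 3. Run the transcript through the local LLM.
            response = pllm.generate(transcript)
            print(response.completion)

            # 4. Synthesize the reply with Orca (audio playback omitted for brevity;
            #    the return type of synthesize() varies across Orca versions).
            orca.synthesize(response.completion)
    finally:
        recorder.delete()
        porcupine.delete()
        cheetah.delete()
        pllm.release()
        orca.delete()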

Comments • 4

  • @zopenzop2225 · 2 months ago · +1

    What about Raspberry Pi 3, can it run on that? Or is a 4 needed?

    • @picovoice · 2 months ago

      picoLLM supports Raspberry Pi 4 and 5 at the moment.

  • @IronSpidey02 · 2 months ago · +2

    Can it run on a Raspberry Pi 4?