Run Ollama in Raspberry Pi 5 vs 4 | Large Language Models in Small Computers

  • Published: 26 Jan 2025

Comments • 8

  • @DenzilFerreira • 1 month ago +3

    If you do “ollama run llama3.1 --verbose” you get the tokens/second after the answer 😉
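
    A minimal sketch of that check, assuming a recent Ollama build (the verbose flag takes two dashes) and that llama3.1 has already been pulled:

        ollama run llama3.1 --verbose
        # After the model's answer, Ollama prints timing statistics,
        # including an "eval rate" line in tokens/s.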

  • @Rdv_1981 • 1 month ago +1

    I can't even run Ollama without 100% CPU usage on my Dell PowerEdge R630 server.

    • @DevXplaining • 1 month ago

      Hahaha, yes, if you have too much processing power at your fingertips, running LLMs locally is certainly the modern solution to that problem :)

  • @NoProg • 1 month ago +1

    Ollama on a Pi 5. While possible, it's idiotic.

    • @DevXplaining • 1 month ago +1

      Haha, depends on the definition of idiotic - there are multiple useful use cases when the models are lightweight enough, for example the better Alexa I'm working on.
      But crazy it definitely is! :)

    • @JesseHuang-w1k • 1 month ago +1

      EDIT: Now that I read it again, it does sound a little bit idiotic...
      It is not idiotic. A lot of us are making cyberdecks with a Pi 4 or 5, and we expect to use them while "surviving" in the wild, a war zone, or something similar (which will probably never happen, hopefully).
      And it is fun. I have made a cyberdeck with a Pi 5, and I am running Ollama offline, along with a full archived copy of Wikipedia and tons of classified pictures of mushrooms, plants, etc. I am also training a model so I can use the Pi camera to take a picture of a mushroom and it can tell me whether it is poisonous.
      This is not idiotic at all!
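
      As a rough illustration of the offline setup described above, a model that has already been pulled can be queried locally over Ollama's HTTP API on its default port 11434, with no internet connection required (the model name and prompt here are just placeholders):

          curl http://localhost:11434/api/generate -d '{
            "model": "llama3.1",
            "prompt": "Which common mushrooms are poisonous?",
            "stream": false
          }'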