If you do "ollama run llama3.1 --verbose" you get the tokens/second after the answer 😉
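For reference, on the Ollama builds I've used, the stats block printed after the answer looks something like this (numbers here are purely illustrative):

    total duration:       29.1s
    load duration:        1.1s
    prompt eval count:    11 token(s)
    prompt eval duration: 1.16s
    prompt eval rate:     9.48 tokens/s
    eval count:           128 token(s)
    eval duration:        26.5s
    eval rate:            4.83 tokens/s

The "eval rate" line is the generation speed you usually care about.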
Ooo, lovely! I'll try that!
I can't even run Ollama without 100% CPU usage on my Dell PowerEdge R630 server.
Hahaha, yes, if you have too much processing power at your fingertips, running LLMs locally is the modern solution to that problem for sure :)
Ollama on a Pi 5. While possible, it's idiotic.
Haha, depends on the definition of idiotic - it covers multiple useful use cases when the models are lightweight enough, for example the better Alexa I'm working on.
But crazy it definitely is! :)
EDIT: Now that I read it again, it does sound a little bit idiotic...
It is not idiotic. A lot of us are making cyberdecks with a Pi 4 or 5, and we expect to use them when we are "surviving" in the wild or in a war zone or something similar (which will hopefully never happen).
And it is fun. I have made a cyberdeck with a Pi 5, and I am running Ollama offline, along with a full Wikipedia archive and tons of classified pictures of mushrooms, plants, etc. I am also training a model so I can use the Pi camera to take a picture of a mushroom and have it tell me whether it is poisonous.
This is not idiotic at all!
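If anyone wants to try something similar, here is a rough Python sketch of the inference side, assuming a TFLite image classifier. The file names (mushroom_classifier.tflite, labels.txt, photo.jpg) are placeholders for whatever you train and capture yourself, not the exact code I run:

    # Rough sketch of on-device classification on a Pi, assuming a
    # TFLite model with a float32 [1, H, W, 3] input. File names are
    # placeholders for whatever you actually train.
    import numpy as np
    from PIL import Image
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path="mushroom_classifier.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Resize the Pi camera photo to whatever input size the model expects.
    _, height, width, _ = inp["shape"]
    img = Image.open("photo.jpg").convert("RGB").resize((width, height))
    x = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)

    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]

    labels = open("labels.txt").read().splitlines()  # e.g. edible / poisonous
    print(labels[int(np.argmax(scores))], "confidence:", float(scores.max()))

TFLite keeps the whole thing light enough to run alongside Ollama on the same Pi.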