$120 Raspberry Pi 5 Can Run 14-Billion-Parameter LLM Models … Slowly
It is possible to load and run 14-billion-parameter LLMs on a Raspberry Pi 5 with 16 GB of memory ($120). However, they are slow, generating about 0.6 tokens per second. A 13-billion-parameter model runs faster, at 1.36 tokens per second. Improved firmware with better SDRAM timings raised these numbers.
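At these rates, even short replies take a while. A minimal sketch, using only the token rates quoted above, to estimate wall-clock generation time (the 500-token reply length is a hypothetical example, not from the post):

```python
def generation_time_seconds(num_tokens: int, tokens_per_second: float) -> float:
    """Estimate wall-clock time to generate num_tokens at a given rate."""
    return num_tokens / tokens_per_second

# Token rates reported above for a 16 GB Raspberry Pi 5
RATE_14B = 0.6   # tokens/s, 14B-parameter model
RATE_13B = 1.36  # tokens/s, 13B-parameter model

# Minutes to produce a 500-token reply at each rate
print(round(generation_time_seconds(500, RATE_14B) / 60, 1))  # → 13.9
print(round(generation_time_seconds(500, RATE_13B) / 60, 1))  # → 6.1
```

So a 500-token answer takes roughly 14 minutes on the 14B model versus about 6 minutes on the 13B model, which is why the post calls this usable but slow.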

