只用CPU跑「小型」語言模型可行嗎? / Is Running "Small" Language Models on CPUs Only Feasible?
布丁布丁吃布丁
很多人都說跑大型語言模型需要很高級的GPU,其實相對於門檻較高的大型語言模型,小型語言模型也一直在如火如荼地發展。最近我嘗試用12核CPU跟32GB的RAM來跑Gemma2:2B,意外地很順利呢。
Many people say that running large language models requires high-end GPUs. In contrast to large language models and their higher barrier to entry, however, small language models have also been developing rapidly. Recently, I tried running Gemma2:2B on a 12-core CPU with 32GB of RAM, and it went surprisingly smoothly.
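This excerpt doesn't show the exact setup used. As a minimal sketch, assuming the model is served locally through Ollama (the tag "gemma2:2b" follows Ollama's naming convention, though the post excerpt doesn't confirm the tool) together with the official ollama Python client, a CPU-only chat request could look like this; the prompt text is purely illustrative:

```python
# Minimal sketch: local CPU-only inference with Gemma2:2B via Ollama.
# Assumptions: the Ollama server is running locally and the ollama
# Python client is installed (pip install ollama). When no GPU is
# detected, Ollama falls back to CPU inference automatically.
import ollama

# Download the quantized gemma2:2b model if it is not already cached.
ollama.pull('gemma2:2b')

# Send a single chat turn; on a machine without a GPU this runs
# entirely on the CPU, using system RAM for the model weights.
response = ollama.chat(
    model='gemma2:2b',
    messages=[
        {'role': 'user',
         'content': 'Why can small language models run on CPUs alone?'},
    ],
)

print(response['message']['content'])
```

With no GPU available, the practical requirement is simply enough RAM to hold the quantized weights plus the context, which is why a 2B-parameter model fits comfortably within 32GB.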