[Ollama](https://github.com/ollama/ollama) enables you to easily run large language models (LLMs) locally. It supports Llama 3, Mistral, Gemma and [many others](https://ollama.com/library).
<blockquote class="twitter-tweet" data-media-max-width="560"><p lang="en" dir="ltr">❄️You can now perform LLM inference with Ollama in services-flake!<a href="https://t.co/rtHIYdnPfb">https://t.co/rtHIYdnPfb</a> <a href="https://t.co/1hBqMyViEm">pic.twitter.com/1hBqMyViEm</a></p>— NixOS Asia (@nixos_asia) <a href="https://twitter.com/nixos_asia/status/1800855562072322052?ref_src=twsrc%5Etfw">June 12, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
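
To give a feel for what this looks like in practice, here is a minimal sketch of a flake that wires the Ollama service into a [process-compose-flake](https://github.com/Platonic-Systems/process-compose-flake) project via services-flake. The service name `ollama1` and the `models` list are illustrative choices, not required values; check the services-flake documentation for the full set of options:

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    flake-parts.url = "github:hercules-ci/flake-parts";
    process-compose-flake.url = "github:Platonic-Systems/process-compose-flake";
    services-flake.url = "github:juspay/services-flake";
  };
  outputs = inputs:
    inputs.flake-parts.lib.mkFlake { inherit inputs; } {
      systems = [ "x86_64-linux" "aarch64-darwin" ];
      imports = [ inputs.process-compose-flake.flakeModule ];
      perSystem = { ... }: {
        # Defines a `default` process-compose group runnable with `nix run`.
        process-compose."default" = {
          imports = [ inputs.services-flake.processComposeModules.default ];
          # "ollama1" is an arbitrary instance name chosen for this sketch.
          services.ollama."ollama1" = {
            enable = true;
            # Models to pull on startup; "llama3" is an example choice.
            models = [ "llama3" ];
          };
        };
      };
    };
}
```

Running `nix run` in the flake directory should then start the Ollama server (and pull the listed models) under process-compose.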