From 4cb3824de9628a6d4ffa4ec08031c76ca1a0a2ab Mon Sep 17 00:00:00 2001
From: Sridhar Ratnakumar <3998+srid@users.noreply.github.com>
Date: Thu, 13 Jun 2024 14:12:56 -0400
Subject: [PATCH] chore(docs): ollama: better intro, and GPU pointers

---
 doc/ollama.md | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/doc/ollama.md b/doc/ollama.md
index f605f91..779ae89 100644
--- a/doc/ollama.md
+++ b/doc/ollama.md
@@ -1,6 +1,6 @@
 # Ollama
 
-[Ollama](https://github.com/ollama/ollama) enables you to get up and running with Llama 3, Mistral, Gemma, and other large language models.
+[Ollama](https://github.com/ollama/ollama) enables you to easily run large language models (LLMs) locally. It supports Llama 3, Mistral, Gemma and [many others](https://ollama.com/library).
 
 ## Getting Started
 
@@ -15,7 +15,9 @@
 By default Ollama uses the CPU for inference. To enable GPU acceleration:
 
-### Cuda
+### CUDA
+
+For NVIDIA GPUs.
 
 ```nix
 # In `perSystem.process-compose.`
 {
@@ -29,6 +31,8 @@ By default Ollama uses the CPU for inference. To enable GPU acceleration:
 
 ### ROCm
 
+For Radeon GPUs.
+
 ```nix
 # In `perSystem.process-compose.`
 {
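
The Nix blocks in the diff above are cut off after their opening brace. For context, a completed block might look like the following sketch. The module path, the service name `ollama1`, and the `acceleration` option are assumptions based on services-flake's ollama service, not content from this patch:

```nix
# Hypothetical sketch (not part of this patch): enabling GPU
# acceleration for an Ollama process in a process-compose flake module.
{
  perSystem = { ... }: {
    process-compose."my-ollama" = {
      # Assumes services-flake is a flake input providing the ollama service.
      imports = [ inputs.services-flake.processComposeModules.default ];
      services.ollama."ollama1" = {
        enable = true;
        # "cuda" for NVIDIA GPUs; "rocm" for Radeon GPUs.
        acceleration = "cuda";
      };
    };
  };
}
```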