From adea3811ea53148a223efdb8fa900d7ee5f9b879 Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Mon, 25 Mar 2024 11:38:38 -0400
Subject: [PATCH] docs: fix mention of Q6_K quantization in README

Signed-off-by: Jared Van Bortel
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 1b418646..a04833a6 100644
--- a/README.md
+++ b/README.md
@@ -47,7 +47,7 @@ A GPT4All model is a 3GB - 8GB file that you can download and plug into the GPT4
 ### What's New ([Issue Tracker](https://github.com/orgs/nomic-ai/projects/2))
 - **October 19th, 2023**: GGUF Support Launches with Support for:
   - Mistral 7b base model, an updated model gallery on [gpt4all.io](https://gpt4all.io), several new local code models including Rift Coder v1.5
-  - [Nomic Vulkan](https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan) support for Q4_0, Q6 quantizations in GGUF.
+  - [Nomic Vulkan](https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan) support for Q4\_0 and Q4\_1 quantizations in GGUF.
   - Offline build support for running old versions of the GPT4All Local LLM Chat Client.
 - **September 18th, 2023**: [Nomic Vulkan](https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan) launches supporting local LLM inference on AMD, Intel, Samsung, Qualcomm and NVIDIA GPUs.
 - **August 15th, 2023**: GPT4All API launches allowing inference of local LLMs from docker containers.