mirror of https://github.com/nomic-ai/gpt4all.git (synced 2024-10-26 22:00:32 +03:00)
Update README.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
parent 89a59e7f99
commit 3444a47cad
@@ -40,7 +40,7 @@ Learn more in the [documentation](https://docs.gpt4all.io).
 
 A GPT4All model is a 3GB - 8GB file that you can download and plug into the GPT4All open-source ecosystem software. **Nomic AI** supports and maintains this software ecosystem to enforce quality and security alongside spearheading the effort to allow any person or enterprise to easily train and deploy their own on-edge large language models.
 
-### What's New
+### What's New ([Issue Tracker](https://github.com/orgs/nomic-ai/projects/2))
 
 - **October 19th, 2023**: GGUF Support Launches with Support for:
   - Mistral 7b base model, an updated model gallery on [gpt4all.io](https://gpt4all.io), several new local code models including Rift Coder v1.5
   - [Nomic Vulkan](https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan) support for Q4_0, Q6 quantizations in GGUF.
@@ -49,6 +49,7 @@ A GPT4All model is a 3GB - 8GB file that you can download and plug into the GPT4
 - **August 15th, 2023**: GPT4All API launches allowing inference of local LLMs from docker containers.
 - **July 2023**: Stable support for LocalDocs, a GPT4All Plugin that allows you to privately and locally chat with your data.
 
 ### Chat Client
 
 Run any GPT4All model natively on your home desktop with the auto-updating desktop chat client. See <a href="https://gpt4all.io">GPT4All Website</a> for a full list of open-source models you can run with this powerful desktop application.
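The GGUF format mentioned in the October 19th entry is the model container format used by llama.cpp-based runtimes; every GGUF file begins with the 4-byte magic `GGUF`, followed by a little-endian version field. As a minimal sketch (the helper name is hypothetical, and the header here is fabricated for illustration), a downloaded model file's type can be sanity-checked like this:

```python
import struct

GGUF_MAGIC = b"GGUF"  # first four bytes of every GGUF file

def looks_like_gguf(header: bytes) -> bool:
    """Return True if the buffer starts with the GGUF magic bytes."""
    return header[:4] == GGUF_MAGIC

# Fake header for illustration only: magic + version 3 as little-endian uint32.
fake_header = GGUF_MAGIC + struct.pack("<I", 3)
print(looks_like_gguf(fake_header))
```

In practice one would read the first few bytes of the model file itself (e.g. `open(path, "rb").read(8)`) before handing it to an inference runtime.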