Update private-llm.md: Typos Fixed (#1015)

* Update intro.md

* Update qa.md

* Update private-llm.md
Aryan Malik 2023-08-23 13:39:21 +05:30 committed by GitHub
parent 2b74ebc1f0
commit 0c568ac978


@@ -6,7 +6,7 @@ sidebar_position: 1
 Quivr now has the capability to use a private LLM model powered by GPT4All (other open source models coming soon).
-This is simular to the functionality provided by the PrivateGPT project.
+This is similar to the functionality provided by the PrivateGPT project.
 This means that your data never leaves the server. The LLM is downloaded to the server and runs inference on your question locally.
@@ -14,10 +14,10 @@ This means that your data never leaves the
 Set the 'private' flag to True in the /backend/.env file. You can also set other model parameters in the .env file.
-Download the GPT4All model from [here](https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin) and place it in the /backend/local_models folder. Or you can download any model from their ecosystem on there [website](https://gpt4all.io/index.html).
+Download the GPT4All model from [here](https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin) and place it in the /backend/local_models folder. Or you can download any model from their ecosystem on their [website](https://gpt4all.io/index.html).
 ## Future Plans
 We are planning to add more models to the private LLM feature. We are also planning on using a local embedding model from Hugging Face to reduce our reliance on OpenAI's API.
-We will also be adding the ability to use a private LLM model from in the frontend and api. Currently it is only available if you self host the backend.
+We will also be adding the ability to use a private LLM model from the frontend and api. Currently it is only available if you self host the backend.
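For context, the setup the patched doc describes (a 'private' flag plus model parameters in /backend/.env, and a GPT4All model file under /backend/local_models) might look roughly like this. The variable names below are illustrative assumptions, not confirmed by the diff; check Quivr's own .env.example for the exact keys:

```shell
# Hypothetical /backend/.env entries for private LLM mode.
# Exact variable names may differ -- see Quivr's .env.example.
PRIVATE=True
MODEL_PATH=./local_models/ggml-gpt4all-j-v1.3-groovy.bin
MODEL_N_CTX=1000    # context window size (illustrative value)
MODEL_N_BATCH=8     # inference batch size (illustrative value)
```

With the flag set and the .bin file in place, the backend loads the model locally instead of calling OpenAI, so questions are answered without data leaving the server.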