docs: update Quivr doc (#1531)
Issue: https://github.com/StanGirard/quivr/issues/1526
Parent: 91a3faffaa
Commit: be87177366
@@ -1,8 +0,0 @@
{
  "label": "API",
  "position": 1,
  "link": {
    "type": "generated-index",
    "description": "How does the backend works?"
  }
}
@@ -1,8 +0,0 @@
{
  "label": "API",
  "position": 2,
  "link": {
    "type": "generated-index",
    "description": "How does the API works ?"
  }
}
@@ -1,9 +0,0 @@
{
  "label": "Brains",
  "position": 3,
  "link": {
    "type": "generated-index",
    "description": "What are brains?"
  }
}
@@ -1,39 +0,0 @@
---
sidebar_position: 1
---

# Introduction to Brains

Quivr has a concept of "Brains". They are ring fenced bodies of information that can be used to provide context to Large Language Models (LLMs) to answer questions on a particular topic.

LLMs are trained on a large variety of data, but to answer a question on a specific topic, or to make deductions around a specific topic, they need to be supplied with the context of that topic.

Quivr uses brains as an intuitive way to provide that context.

When a brain is selected in Quivr, the LLM will be provided with only the context of that brain. This allows users to build brains for specific topics and then use them to answer questions about that topic.

In the future there will be functionality to share brains with other users of Quivr.

## How to use Brains

To use a brain, open the menu using the Brain icon in the header at the top right of the Quivr interface.

You can create a new brain by clicking the "Create Brain" button. You will be prompted to enter a name for the brain. If you wish, you can also just use the default brain for your account.

To switch to a different brain, simply click on the brain name in the menu and select the brain you wish to use.

If you have not chosen a brain, you can assume that any documentation you upload will be added to the default brain.

**Note: If you are having problems with the chat functionality, try selecting a brain from the menu. The default brain is not always selected automatically, and you need a brain selected to use the chat functionality.**

## Using Resend API

We have integrated [Resend](https://resend.com/docs/introduction), an email API for developers, in our application to handle sharing brains with an email invitation.

Two environment variables have been introduced to handle this integration:

- RESEND_API_KEY: This is the unique API key provided by Resend for our application. It allows us to communicate with the Resend platform in a secure way.
- RESEND_EMAIL_ADDRESS: This is the email address we use as the sender address when sending emails through Resend.

After fetching our Resend API key and email address from environment variables, we use it to send an email via the resend.Emails.send method.
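A minimal sketch of that call with the Resend Python SDK might look like this (the recipient address and message content are placeholders, not the application's actual invitation template):

```python
# Sketch of the Resend integration described above; recipient and body are placeholders.
import os
import resend

resend.api_key = os.environ["RESEND_API_KEY"]

email = resend.Emails.send({
    "from": os.environ["RESEND_EMAIL_ADDRESS"],
    "to": ["invitee@example.com"],
    "subject": "You've been invited to a Quivr brain",
    "html": "<p>Click the link in this email to accept the invitation.</p>",
})
```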
docs/docs/Developers/contribution/_category_.json (new file, 8 lines)
@@ -0,0 +1,8 @@
{
  "label": "⌨️ Contribute to Quivr",
  "position": 1,
  "link": {
    "type": "generated-index",
    "description": "Want to contribute to Quivr? Here's how to get started."
  }
}
@@ -1,6 +1,6 @@
---
-sidebar_position: 1
-title: Architecture
+sidebar_position: 3
+title: 🏛️ Architecture
---

Quivr uses FastAPI to provide a RESTful API for the backend. The API is currently in beta and is subject to change. It is available at [https://api.quivr.app](https://api.quivr.app).
@@ -11,14 +11,12 @@ You can find the Swagger documentation for the API at [https://api.quivr.app/docs](https://api.quivr.app/docs)

This documentation outlines the key points and usage instructions for interacting with the API backend. Please follow the guidelines below to use the backend services effectively.

## FastAPI

FastAPI is a modern, fast (high-performance) web framework for building APIs with Python 3.6+ based on standard Python type hints. It is built on top of Starlette and Pydantic.

We chose FastAPI because it is modern, fast, and easy to use, with excellent documentation, a rich feature set, and strong community support.

## Authentication

The API uses API keys for authentication. You can generate an API key by signing in to the frontend application and navigating to the `/config` page. The API key will be required to authenticate your requests to the backend.
@@ -31,5 +29,4 @@ Authorization: Bearer {api_key}

Replace `{api_key}` with the generated API key obtained from the frontend.

-You can find more information in the [Authentication](/docs/Developers/backend/api/getting_started) section of the documentation.
+You can find more information in the [Authentication](/docs/Developers/useQuivr/get_your_api_key) section of the documentation.
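For example, a request with the header set might look like this (the endpoint is illustrative; see the Swagger docs for the full list):

```bash
curl -H "Authorization: Bearer YOUR_API_KEY" https://api.quivr.app/chat
```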
@@ -1,6 +1,6 @@
{
-  "label": "Chains",
-  "position": 4,
+  "label": "⛓️ Chains",
+  "position": 5,
  "link": {
    "type": "generated-index",
    "description": "What are chains?"
@@ -1,6 +1,6 @@
{
-  "label": "Frontend",
-  "position": 2,
+  "label": "💻 Frontend",
+  "position": 4,
  "link": {
    "type": "generated-index"
  }
@@ -1,11 +1,14 @@
---
-sidebar_position: 4
-title: 🆘 Contributing
+sidebar_position: 1
+title: 🆘 Guidelines
---

# Contributing to Quivr

Thanks for your interest in contributing to Quivr! Here you'll find guidelines and steps for contributing.

### Repo: [Quivr Github](https://github.com/stanGirard/quivr)

## Table of Contents

- [Contributing to Quivr](#contributing-to-quivr)
docs/docs/Developers/contribution/install.md (new file, 97 lines)
@@ -0,0 +1,97 @@
---
sidebar_position: 2
title: 🧑‍💻 Install Quivr
---

# Prerequisites 📋

Before you begin, make sure you have the following tools and accounts installed and set up:

- Docker
- Docker Compose
- A Supabase account with:
  - A new Supabase project
  - Supabase Project API key
  - Supabase Project URL

## Installation Steps 💽

Follow these steps to install and set up the Quivr project:

### Step 0: Installation Video (Optional)

If needed, you can watch the installation process on YouTube [here](https://www.youtube.com/watch?v=rC-s4QdfY80&feature=youtu.be).

### Step 1: Clone the Repository

Use one of the following commands to clone the Quivr repository:

- If you don't have an SSH key set up:

```bash
git clone https://github.com/StanGirard/Quivr.git
cd Quivr
```

- If you have an SSH key set up, or want to add one (guide [here](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account)):

```bash
git clone git@github.com:StanGirard/Quivr.git
cd Quivr
```

### Step 2: Use the Install Helper Script

Run the install_helper.sh script to automate the setup process. This script will help you set up your environment files and execute the necessary migrations. Ensure you have the following prerequisites installed:

```bash
brew install gum        # Windows (via Scoop): scoop install charm-gum
brew install postgresql # Windows (via Scoop): scoop install postgresql
```

```bash
chmod +x install_helper.sh
./install_helper.sh
```

If you prefer manual setup, follow the steps below instead.

### Step 2 - Additional Configuration: Copy Environment Files

Copy the environment files as follows:

- Copy `.backend_env.example` to `backend/.env`
- Copy `.frontend_env.example` to `frontend/.env`

### Step 3: Update Environment Variables

Edit the `backend/.env` and `frontend/.env` files with the following information:

- `supabase_service_key`: Found in your Supabase dashboard under Project Settings -> API (use the anon public key from the Project API keys section).
- `JWT_SECRET_KEY`: Found in your Supabase settings under Project Settings -> API -> JWT Settings -> JWT Secret.
- `NEXT_PUBLIC_BACKEND_URL`: Set to localhost:5050 for Docker. Update it if your backend is running on a different machine.
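For reference, the edited files might end up looking something like this (placeholder values; keep the variable names exactly as they appear in the copied example files):

```
# backend/.env
supabase_service_key=eyJhbGciOi...your-anon-public-key...
JWT_SECRET_KEY=your-jwt-secret

# frontend/.env
NEXT_PUBLIC_BACKEND_URL=http://localhost:5050
```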
### Step 4: Run Migration Scripts

Run the migration.sh script to execute the migration scripts. You have two options:

- `Create all tables`: for a first-time setup.
- `Run migrations`: when updating your database.

You can also run the scripts on the Supabase database via the web interface (SQL Editor -> New query -> paste the script -> Run). All migration scripts can be found in the scripts folder.

If you're migrating from an old version of Quivr, run the migration scripts in chronological order to update your data.

### Step 5: Launch the Application

Run the following command to launch the application:

```bash
docker compose -f docker-compose.dev.yml up --build
```

### Step 6: Navigate to localhost:3000 in your browser

Open your web browser and navigate to [localhost:3000](http://localhost:3000).
@@ -1,6 +1,6 @@
{
  "label": "LLM",
-  "position": 3,
+  "position": 6,
  "link": {
    "type": "generated-index",
    "description": "How does the LLM (Large Language Model) work?"
@@ -1,6 +1,6 @@
---
-sidebar_position: 1
-title: Testing Strategies
+sidebar_position: 7
+title: 🧪 Testing Strategies
---

## Backend
@@ -1,6 +1,6 @@
---
-sidebar_position: 2
-title: Using Quivr fully locally
+sidebar_position: 3
+title: 📍 Run Quivr fully locally
---

# Using Quivr fully locally
@@ -8,16 +8,18 @@ title: Using Quivr fully locally

## Headers

The following is a guide to set up everything for using Quivr locally:

##### Table of Contents

- [Database](#database)
- [Embeddings](#embeddings)
- [LLM for inference](#llm)

This is a first, working setup, but a lot of work remains, e.g. to find the appropriate settings for the model.

Importantly, this will currently only work on tag v0.0.46.

The guide was put together in collaboration with members of the Quivr Discord, **Using Quivr fully locally** thread. That is a good place to discuss it.

This worked for me, but I sometimes got strange results (the output contains repeating answers/questions). Maybe because `stopping_criteria=stopping_criteria` must be uncommented in `transformers.pipeline`. Will update this page as I continue learning.
@@ -28,9 +30,10 @@ This worked for me, but I sometimes got strange results (the output contains repeating answers/questions).

Instead of relying on a remote Supabase instance, we have to set it up locally. Follow the instructions on https://supabase.com/docs/guides/self-hosting/docker.

Troubleshooting:

- If the Quivr backend container cannot reach Supabase on port 8000, change the Quivr backend container to use the host network.
- If the email service does not work, add a user using the Supabase web UI and check "Auto Confirm User?".
  - http://localhost:8000/project/default/auth/users

<a name="embeddings"/>
@@ -39,16 +42,19 @@ Troubleshooting:

First, let's get local embeddings to work with GPT4All. Instead of relying on OpenAI for generating embeddings of both the prompt and the documents we upload, we will use a local LLM for this.

Remove any existing data from the postgres database:

- `supabase/docker $ docker compose down -v`
- `supabase/docker $ rm -rf volumes/db/data/`
- `supabase/docker $ docker compose up -d`

Change the vector dimensions in the necessary Quivr SQL files (a one-line replacement is shown below):

- Replace all occurrences of 1536 by 768 in Quivr's `scripts/tables.sql`
- Run tables.sql in the Supabase web UI SQL editor: http://localhost:8000
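One quick way to do that replacement from the command line (keeping a backup of the original file):

```
$ sed -i.bak 's/1536/768/g' scripts/tables.sql
```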
Change the Quivr code to use the local LLM (GPT4All) and local embeddings:

- Add code to `backend/core/llm/private_gpt4all.py`:

```python
from langchain.embeddings import HuggingFaceEmbeddings
...
```
@@ -73,18 +79,19 @@ Update Quivr `backend/core/.env`'s Private LLM Variables:

Download the GPT4All model:

- `$ cd backend/core/local_models/`
- `$ wget https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin`

Ensure the Quivr backend docker container has CUDA and the GPT4All package:

```Dockerfile
FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-devel
#FROM python:3.11-bullseye

ARG DEBIAN_FRONTEND=noninteractive
ENV DEBIAN_FRONTEND=noninteractive

RUN pip install gpt4all
```
@@ -112,9 +119,9 @@ $ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update

$ sudo apt-get install -y nvidia-container-toolkit

$ nvidia-ctk --version

$ sudo systemctl restart docker
```
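After restarting Docker, you can sanity-check that containers see the GPUs; the CUDA image tag below is just an example:

```
$ docker run --rm --gpus all nvidia/cuda:11.7.1-base-ubuntu22.04 nvidia-smi
```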
@@ -170,7 +177,7 @@ Update the Quivr backend dockerfile:

```Dockerfile
ENV HUGGINGFACEHUB_API_TOKEN=hf_XXX

RUN pip install accelerate
```
@@ -186,10 +193,10 @@ Update the `private_gpt4all.py` file as follows:

from langchain.llms import HuggingFacePipeline
from langchain.embeddings import HuggingFaceEmbeddings
...

model_id = "stabilityai/StableBeluga-13B"
...

def _create_llm(
    self,
    model,
@@ -213,7 +220,7 @@ Update the `private_gpt4all.py` file as follows:
    logger.info("--- model path %s", model_path)

    model_id = "stabilityai/StableBeluga-13B"

    llm = transformers.AutoModelForCausalLM.from_pretrained(
        model_id,
        use_cache=True,
docs/docs/Developers/useQuivr/_category_.json (new file, 8 lines)
@@ -0,0 +1,8 @@
{
  "label": "🔗 Use Quivr Backend",
  "position": 2,
  "link": {
    "type": "generated-index",
    "description": "Quivr is 100% API driven, so you can use it with any frontend framework or language."
  }
}
docs/docs/Developers/useQuivr/brain/_category_.json (new file, 8 lines)
@@ -0,0 +1,8 @@
{
  "label": "🧠 Brain",
  "position": 3,
  "link": {
    "type": "generated-index",
    "description": "A brain groups all your knowledge in a single place. You can create as many brains as you want, and each brain can have its own set of knowledge."
  }
}
docs/docs/Developers/useQuivr/brain/create_a_brain.md (new file, 42 lines)
@@ -0,0 +1,42 @@
---
sidebar_position: 1
title: 🆕 Create a brain
---

To create a brain, you need to make a POST request to the `/brains/` endpoint. This endpoint requires authentication, and you can provide the following parameters in the request body:

- `name` (Optional): The name of the brain. If not provided, it defaults to "Default brain".
- `description` (Optional): A description of the brain. If not provided, it defaults to "This is a description".
- `status` (Optional): The status of the brain, which can be "private" or another value of your choice. If not provided, it defaults to "private".
- `model` (Optional): The model to use for the brain.
- `temperature` (Optional): The temperature setting for the brain. If not provided, it defaults to 0.0.
- `max_tokens` (Optional): The maximum number of tokens for the output. If not provided, it defaults to 256.
- `openai_api_key` (Optional): An API key for OpenAI. If not provided, it defaults to None.
- `prompt_id` (Optional): A UUID associated with a prompt.

Here's an example request:

```http
POST /brains/ HTTP/1.1
Host: your-api-url
Authorization: Bearer YOUR_ACCESS_TOKEN
Content-Type: application/json

{
  "name": "My Custom Brain",
  "description": "This is my brain description",
  "status": "private",
  "model": "gpt-3.5-turbo",
  "temperature": 0.8,
  "max_tokens": 512,
  "openai_api_key": "YOUR_OPENAI_API_KEY",
  "prompt_id": "YOUR_PROMPT_UUID"
}
```
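As a `curl` command, the same request might look like this (host and token are placeholders):

```bash
curl -X POST "https://your-api-url/brains/" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "My Custom Brain",
    "description": "This is my brain description",
    "status": "private",
    "model": "gpt-3.5-turbo",
    "temperature": 0.8,
    "max_tokens": 512,
    "openai_api_key": "YOUR_OPENAI_API_KEY",
    "prompt_id": "YOUR_PROMPT_UUID"
  }'
```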
@@ -0,0 +1,22 @@
---
sidebar_position: 4
title: 🧠 Get a Brain
---

To get a brain, you need to make a GET request to one of the following endpoints.

### Retrieve a Specific Brain by ID

To retrieve details of a specific brain by its ID, make a GET request to the following endpoint:

```http
GET /brains/{brain_id}/
```

### Retrieve the Default Brain

To work with the default brain for the current user, make a GET request to the following endpoint:

```http
GET /brains/default/
```
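For example, fetching the default brain with `curl` (host and token are placeholders):

```bash
curl -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  "https://your-api-url/brains/default/"
```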
docs/docs/Developers/useQuivr/brain/set_defaut_brain.md (new file, 22 lines)
@@ -0,0 +1,22 @@
---
sidebar_position: 3
title: 🫵 Set a Default Brain
---

To set a brain as the default for the current user, you need to make a POST request to the following endpoint:

`/brains/{brain_id}/default`

Replace `{brain_id}` with the unique identifier of the brain you want to set as the default.

### Request Parameters

You should include the following parameters in the request:

- **brain_id**: The unique identifier (UUID) of the brain you want to set as the default.

### Example Request

```http
POST /brains/{brain_id}/default HTTP/1.1
Host: your-api-host.com
Authorization: Bearer YOUR_ACCESS_TOKEN
```
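The equivalent `curl` command (host, token, and brain ID are placeholders):

```bash
curl -X POST "https://your-api-host.com/brains/{brain_id}/default" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"
```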
docs/docs/Developers/useQuivr/brain/update_a_brain.md (new file, 56 lines)
@@ -0,0 +1,56 @@
---
sidebar_position: 2
title: ✍️ Update a Brain
---

To update a brain, you need to make a PUT request to the following endpoint:

`/brains/{brain_id}/`

Replace `{brain_id}` with the unique identifier of the brain you want to update.

### Request Parameters

You should include the following parameters in the request:

- **brain_id**: The unique identifier (UUID) of the brain you want to update.
- **Authorization Header**: You must include a valid bearer token in the Authorization header to authenticate the request. This token can be obtained by following the authentication process.
- **Brain Update Data**: In the request body, provide the data you want to update for the brain. You can include the following optional fields:
  - **name**: The name of the brain.
  - **description**: A description of the brain.
  - **temperature**: The temperature setting for the brain.
  - **model**: The model used by the brain.
  - **max_tokens**: The maximum number of tokens for generated responses.
  - **openai_api_key**: An optional API key associated with the brain.
  - **status**: The status of the brain, which can be "public" or "private".
  - **prompt_id**: An optional UUID that associates the brain with a specific prompt.

### Example Request

```http
PUT /brains/{brain_id}/ HTTP/1.1
Host: your-api-host.com
Authorization: Bearer {your_access_token}
Content-Type: application/json

{
  "name": "Updated Brain Name",
  "description": "Updated brain description.",
  "temperature": 0.7,
  "model": "gpt-3.5-turbo",
  "max_tokens": 150,
  "openai_api_key": "your-api-key",
  "status": "private",
  "prompt_id": "123e4567-e89b-12d3-a456-426655440000"
}
```
@@ -1,9 +1,8 @@
---
-sidebar_position: 2
+sidebar_position: 4
title: 🤖 Chat system
---

# Chat system

**URL**: https://api.quivr.app/chat

**Swagger**: https://api.quivr.app/docs
@@ -43,7 +42,7 @@ Users can create multiple chat sessions, each with its own set of chat messages.

   - Description: This endpoint allows adding a new question to a chat. It generates an answer for the question using different models based on the provided model type.

     Models like gpt-4-0613 and gpt-3.5-turbo-0613 use a custom OpenAI function-based answer generator.
-    ![Function based answer generator](../../../../static/img/answer_schema.png)
+    ![Function based answer generator](../../../static/img/answer_schema.png)

6. **Get the chat history:**
   - HTTP method: GET
@@ -1,9 +1,8 @@
---
-sidebar_position: 3
+sidebar_position: 5
title: 😩 Error Handling
---

# Error Handling

**URL**: https://api.quivr.app/chat

**Swagger**: https://api.quivr.app/docs
docs/docs/Developers/useQuivr/get_your_api_key.md (new file, 9 lines)
@@ -0,0 +1,9 @@
---
sidebar_position: 1
title: 🔐 Get your API key
---

To use the Quivr API, you need an API key. You can get one by following these steps:

1. Go to the [user settings page](https://www.quivr.app/user)
2. Generate a new API key by clicking the "Create new Key" button
@@ -1,9 +1,8 @@
---
-sidebar_position: 1
+sidebar_position: 2
title: ❓ How to use the API
---

# How to use the API

**URL**: https://api.quivr.app

**Swagger**: https://api.quivr.app/docs
@@ -1,4 +0,0 @@
{
  "position": 4,
  "label": "📚 Reference"
}
@@ -1,5 +1,5 @@
{
-  "label": "🕺 User Guide",
+  "label": "📚 User Guide",
  "position": 2,
  "link": {
    "type": "generated-index",
docs/docs/User_Guide/intro.md (new file, 14 lines)
@@ -0,0 +1,14 @@
---
sidebar_position: 1
title: 🕺 Getting started
---

## How to get started? 👀

:::tip
It takes less than **5 seconds** to get started with Quivr. You can even use your Google account to sign up.
:::

- Create an account on [Quivr](https://quivr.app)
- Upload your files
- Ask questions to Quivr
@@ -1,253 +0,0 @@
---
sidebar_position: 2
title: Using Quivr fully locally
---

# Using Quivr fully locally

## Headers

The following is a guide to set up everything for using Quivr locally:
##### Table of Contents
* [Database](#database)
* [Embeddings](#embeddings)
* [LLM for inference](#llm)

It is a first, working setup, but a lot of work has to be done to e.g. find the appropriate settings for the model.

Importantly, this will currently only work on tag v0.0.46.

The guide was put together in collaboration with members of the Quivr Discord, **Using Quivr fully locally** thread. That is a good place to discuss it.

This worked for me, but I sometimes got strange results (the output contains repeating answers/questions). Maybe because `stopping_criteria=stopping_criteria` must be uncommented in `transformers.pipeline`. Will update this page as I continue learning.

<a name="database"/>

## Local Supabase

Instead of relying on a remote Supabase instance, we have to set it up locally. Follow the instructions on https://supabase.com/docs/guides/self-hosting/docker.

Troubleshooting:
* If the Quivr backend container cannot reach Supabase on port 8000, change the Quivr backend container to use the host network.
* If the email service does not work, add a user using the Supabase web UI, and check "Auto Confirm User?".
  * http://localhost:8000/project/default/auth/users

<a name="embeddings"/>

## Local embeddings

First, let's get local embeddings to work with GPT4All. Instead of relying on OpenAI for generating embeddings of both the prompt and the documents we upload, we will use a local LLM for this.

Remove any existing data from the postgres database:
* `supabase/docker $ docker compose down -v`
* `supabase/docker $ rm -rf volumes/db/data/`
* `supabase/docker $ docker compose up -d`

Change the vector dimensions in the necessary Quivr SQL files:
* Replace all occurrences of 1536 by 768 in Quivr's `scripts/tables.sql`
* Run tables.sql in the Supabase web UI SQL editor: http://localhost:8000

Change the Quivr code to use the local LLM (GPT4All) and local embeddings:
* Add code to `backend/core/llm/private_gpt4all.py`:

```python
from langchain.embeddings import HuggingFaceEmbeddings
...
def embeddings(self) -> HuggingFaceEmbeddings:
    emb = HuggingFaceEmbeddings(
        model_name="sentence-transformers/all-mpnet-base-v2",
        model_kwargs={'device': 'cuda'},
        encode_kwargs={'normalize_embeddings': False}
    )
    return emb
```

Note that there may be better models out there for generating the embeddings: https://huggingface.co/spaces/mteb/leaderboard

Update Quivr `backend/core/.env`'s Private LLM Variables:

```
#Private LLM Variables
PRIVATE=True
MODEL_PATH=./local_models/ggml-gpt4all-j-v1.3-groovy.bin
```

Download the GPT4All model:
* `$ cd backend/core/local_models/`
* `$ wget https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin`

Ensure the Quivr backend docker container has CUDA and the GPT4All package:

```Dockerfile
FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-devel
#FROM python:3.11-bullseye

ARG DEBIAN_FRONTEND=noninteractive
ENV DEBIAN_FRONTEND=noninteractive

RUN pip install gpt4all
```

Modify the docker-compose yml file (for the backend container). The following example is for using 2 GPUs:

```yaml
...
network_mode: host
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 2
          capabilities: [gpu]
```

Install the nvidia container toolkit on the host, https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html:

```
$ wget https://nvidia.github.io/nvidia-docker/gpgkey --no-check-certificate
$ sudo apt-key add gpgkey
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update
$ sudo apt-get install -y nvidia-container-toolkit
$ nvidia-ctk --version
$ sudo systemctl restart docker
```

At this moment, if we try to upload a pdf, we get an error:

```
backend-core | 1989-01-01 21:51:41,211 [ERROR] utils.vectors: Error creating vector for document {'code': '22000', 'details': None, 'hint': None, 'message': 'expected 768 dimensions, not 1536'}
```

This can be remedied by using local embeddings for document embeddings. In `backend/core/utils/vectors.py`, replace:

```python
# def create_vector(self, doc, user_openai_api_key=None):
#     logger.info("Creating vector for document")
#     logger.info(f"Document: {doc}")
#     if user_openai_api_key:
#         self.commons["documents_vector_store"]._embedding = OpenAIEmbeddings(
#             openai_api_key=user_openai_api_key
#         )  # pyright: ignore reportPrivateUsage=none
#     try:
#         sids = self.commons["documents_vector_store"].add_documents([doc])
#         if sids and len(sids) > 0:
#             return sids
#     except Exception as e:
#         logger.error(f"Error creating vector for document {e}")

def create_vector(self, doc, user_openai_api_key=None):
    logger.info("Creating vector for document")
    logger.info(f"Document: {doc}")
    self.commons["documents_vector_store"]._embedding = HuggingFaceEmbeddings(
        model_name="sentence-transformers/all-mpnet-base-v2",
        model_kwargs={'device': 'cuda'},
        encode_kwargs={'normalize_embeddings': False}
    )  # pyright: ignore reportPrivateUsage=none
    logger.info('||| creating embedding')
    try:
        sids = self.commons["documents_vector_store"].add_documents([doc])
        if sids and len(sids) > 0:
            return sids
    except Exception as e:
        logger.error(f"Error creating vector for document {e}")
```

<a name="llm"/>

## Local LLM

The final step is to use a local model from HuggingFace for inference. (The HF token is optional, only required for certain models on HF.)

Update the Quivr backend dockerfile:

```Dockerfile
ENV HUGGINGFACEHUB_API_TOKEN=hf_XXX

RUN pip install accelerate
```

Update the `private_gpt4all.py` file as follows:

```python
import langchain
langchain.debug = True
langchain.verbose = True

import os
import transformers
from langchain.llms import HuggingFacePipeline
from langchain.embeddings import HuggingFaceEmbeddings
...

model_id = "stabilityai/StableBeluga-13B"
# HF token for gated models; set via HUGGINGFACEHUB_API_TOKEN in the dockerfile
hf_auth = os.environ.get("HUGGINGFACEHUB_API_TOKEN")
...

def _create_llm(
    self,
    model,
    streaming=False,
    callbacks=None,
) -> BaseLLM:
    """
    Override the _create_llm method to enforce the use of a private model.
    :param model: Language model name to be used.
    :param streaming: Whether to enable streaming of the model
    :param callbacks: Callbacks to be used for streaming
    :return: Language model instance
    """

    model_path = self.model_path

    logger.info("Using private model: %s", model)
    logger.info("Streaming is set to %s", streaming)
    logger.info("--- model %s", model)
    logger.info("--- model path %s", model_path)

    model_id = "stabilityai/StableBeluga-13B"

    llm = transformers.AutoModelForCausalLM.from_pretrained(
        model_id,
        use_cache=True,
        load_in_4bit=True,
        device_map='auto',
        #use_auth_token=hf_auth
    )
    logger.info('<<< transformers.AutoModelForCausalLM.from_pretrained')

    llm.eval()
    logger.info('<<< eval')

    tokenizer = transformers.AutoTokenizer.from_pretrained(
        model_id,
        use_auth_token=hf_auth
    )
    logger.info('<<< transformers.AutoTokenizer.from_pretrained')

    generate_text = transformers.pipeline(
        model=llm, tokenizer=tokenizer,
        return_full_text=True,  # langchain expects the full text
        task='text-generation',
        # we pass model parameters here too
        #stopping_criteria=stopping_criteria,  # without this model rambles during chat
        temperature=0.5,  # 'randomness' of outputs, 0.0 is the min and 1.0 the max
        max_new_tokens=512,  # max number of tokens to generate in the output
        repetition_penalty=1.1  # without this output begins repeating
    )
    logger.info('<<< generate_text = transformers.pipeline(')

    result = HuggingFacePipeline(pipeline=generate_text)

    logger.info('<<< created llm HuggingFace')
    return result
```
@@ -1,12 +1,32 @@
---
-title: Concept of Brain
+sidebar_position: 2
+title: 🧠 Concept of Brain
---

:::info
A few brains were harmed in the making of this documentation 🤯😏
:::

# Introduction to Brains

A **brain** is a concept that we created to allow you to **create** and **organize** your knowledge in Quivr.

Quivr has a concept of "Brains". They are ring-fenced bodies of information that can be used to provide context to Large Language Models (LLMs) to answer questions on a particular topic.

LLMs are trained on a large variety of data, but to answer a question on a specific topic, or to make deductions around a specific topic, they need to be supplied with the context of that topic.

Quivr uses brains as an intuitive way to provide that context.

When a brain is selected in Quivr, the LLM will be provided with only the context of that brain. This allows users to build brains for specific topics and then use them to answer questions about that topic.

In the future there will be functionality to share brains with other users of Quivr.

## How to use Brains

To use a brain, open the menu using the Brain icon in the header at the top right of the Quivr interface.

You can create a new brain by clicking the "Create Brain" button. You will be prompted to enter a name for the brain. If you wish, you can also just use the default brain for your account.

To switch to a different brain, simply click on the brain name in the menu and select the brain you wish to use.

If you have not chosen a brain, you can assume that any documentation you upload will be added to the default brain.

**Note: If you are having problems with the chat functionality, try selecting a brain from the menu. The default brain is not always selected automatically, and you need a brain selected to use the chat functionality.**