feat(docs): reworked the website (#1523)

Docs time!
Stan Girard 2023-10-30 17:08:15 +01:00 committed by GitHub
parent 9be4a57979
commit 6323931a8b
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
30 changed files with 121 additions and 238 deletions


@ -0,0 +1,8 @@
{
"label": "🧑‍💻 Developer Docs",
"position": 3,
"link": {
"type": "generated-index",
"description": "How to use Quivr as a Dev ?"
}
}


@ -1,8 +1,8 @@
{
"label": "Quivr's Brain",
"position": 3,
"label": "API",
"position": 1,
"link": {
"type": "generated-index",
"description": "How does the backend works?"
}
}


@ -43,7 +43,7 @@ Users can create multiple chat sessions, each with its own set of chat messages.
- Description: This endpoint adds a new question to a chat and generates an answer for it, using a different answer generator depending on the model specified in the request (see the request sketch after this list).
Models such as gpt-4-0613 and gpt-3.5-turbo-0613 use a custom OpenAI function-based answer generator.
![Function based answer generator](../../../static/img/answer_schema.png)
![Function based answer generator](../../../../static/img/answer_schema.png)
6. **Get the chat history:**
- HTTP method: GET
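For illustration, here is a minimal request sketch using Python's `requests`. The route paths, payload field names, and local backend URL below are assumptions (they are not specified in this excerpt), so check the API reference for the exact schema.

```python
# Illustrative only -- routes, payload fields, and URL are assumptions.
import requests

BASE_URL = "http://localhost:5050"   # assumed default backend URL
API_KEY = "your-api-key"             # API key generated from the frontend
CHAT_ID = "your-chat-id"             # id of an existing chat session

headers = {"Authorization": f"Bearer {API_KEY}"}

# Add a question to the chat; the model field selects which answer generator is used.
answer = requests.post(
    f"{BASE_URL}/chat/{CHAT_ID}/question",
    headers=headers,
    json={"question": "What is Quivr?", "model": "gpt-3.5-turbo-0613"},
)
print(answer.json())

# Retrieve the history of the same chat session.
history = requests.get(f"{BASE_URL}/chat/{CHAT_ID}/history", headers=headers)
print(history.json())
```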


@ -4,7 +4,7 @@ sidebar_position: 1
# Introduction to Chains
Quivr uses a framework called [Langchain](https://python.langchain.com/docs/get_started/introduction.html) for the majority of the interaction with the Large Language Models (LLMs).
Quivr uses a framework called [Langchain](https://python.langchain.com/docs/introduction.html) for the majority of the interaction with the Large Language Models (LLMs).
Langchain provides the functionality to connect components such as LLMs, document retrievers, and other tools together to form a "chain" of components.
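As a minimal illustration of the idea (this is not Quivr's actual chain), the sketch below links a prompt template and a chat model into a single chain; the model name and the `OPENAI_API_KEY` requirement are assumptions made for the example.

```python
# A minimal LangChain chain: prompt template -> chat model.
# Assumes the `langchain` and `openai` packages are installed and
# that OPENAI_API_KEY is set in the environment.
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

# Component 1: a prompt that combines retrieved context with the user's question.
prompt = PromptTemplate(
    input_variables=["context", "question"],
    template="Answer using only this context:\n{context}\n\nQuestion: {question}",
)

# Component 2: the LLM that produces the answer.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

# The chain wires the two components together: format the prompt, then call the LLM.
chain = LLMChain(llm=llm, prompt=prompt)

print(chain.run(context="Quivr stores documents as embeddings.", question="What does Quivr store?"))
```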


@ -31,5 +31,5 @@ Authorization: Bearer {api_key}
Replace `{api_key}` with the API key generated from the frontend.
You can find more information in the [Authentication](/docs/backend/api/getting_started) section of the documentation.
You can find more information in the [Authentication](/docs/Developers/backend/api/getting_started) section of the documentation.
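For example, an authenticated request might look like the sketch below; the `/chat` route and the local backend URL are assumptions for illustration, and any protected endpoint is called the same way.

```python
# Minimal sketch of an authenticated API call (route and URL are assumptions).
import requests

api_key = "paste-the-key-generated-in-the-frontend"
response = requests.get(
    "http://localhost:5050/chat",
    headers={"Authorization": f"Bearer {api_key}"},
)
response.raise_for_status()
print(response.json())
```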


@ -1,7 +1,7 @@
{
"label": "Quivr's Face",
"position": 4,
"label": "Frontend",
"position": 2,
"link": {
"type": "generated-index"
}
}


@ -1,6 +1,6 @@
{
"label": "LLM",
"position": 2,
"position": 3,
"link": {
"type": "generated-index",
"description": "How does the LLM (Large Language Model Work)?"


@ -0,0 +1,36 @@
---
sidebar_position: 1
title: Hugging Face Integration 🤗
---
# Private Language Models with Quivr
Quivr can integrate private Large Language Models (LLMs) served through Hugging Face. This keeps your data confidential, because inference runs against a dedicated model deployment that you control rather than a shared third-party API.
## Running Mistral with a Hugging Face Inference Endpoint
### 1. Deploy the Model
- Navigate to the [Mistral AI model page](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on Hugging Face.
- Select the 'Inference Endpoints' option.
- We recommend the Mistral 7B Instruct model, which is tailored for chat applications.
### 2. Create the Endpoint
- Give your endpoint a custom name if you like.
- Select a region and use the recommended instance size.
- Click to confirm and create your endpoint.
### 3. Obtain Credentials
- Allow some time for your instance to initialize.
- Securely copy both the API URL and your Bearer Token for future use.
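Optionally, you can sanity-check the credentials from step 3 before wiring them into Quivr. The sketch below assumes a text-generation endpoint and the standard `{"inputs": ...}` payload of Hugging Face Inference Endpoints; the URL and token are placeholders for the values you just copied.

```python
# Quick check that the Inference Endpoint responds (optional, not part of the Quivr setup).
import requests

ENDPOINT_URL = "https://your-endpoint.endpoints.huggingface.cloud"  # API URL from step 3
HF_TOKEN = "hf_..."                                                 # Bearer token from step 3

response = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {HF_TOKEN}", "Content-Type": "application/json"},
    # Mistral-7B-Instruct expects the [INST] ... [/INST] chat format.
    json={"inputs": "[INST] What is Quivr? [/INST]"},
)
print(response.json())
```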
### 4. Install Quivr
- To set up Quivr, follow the concise 3-step installation guide in our [readme.md](https://github.com/Quivr/README.md).
- Important: configure the environment variables in your backend/.env file, including the Hugging Face API key.
### 5. Configure Your Supabase Instance
- Within your Supabase instance, locate the user_settings table.
- There, enter the following model identifier: "huggingface/mistralai/Mistral-7B-Instruct-v0.1".
As a result, you'll have your own Quivr instance running with Mistral hosted via Hugging Face. For a hassle-free experience, visit [Quivr.app](https://quivr.app) to use Mistral at no cost, thanks to Hugging Face. The source code for this setup is [available here](https://github.com/Quivr/SourceCode).
Experience the enhanced privacy and control with Quivr's Private LLM feature today!


@ -1,3 +1,4 @@
{
"position": 4
"position": 4,
"label": "📚 Reference"
}


@ -1,8 +1,8 @@
{
"label": "Getting Started",
"position": 1,
"label": "🕺 User Guide",
"position": 2,
"link": {
"type": "generated-index",
"description": "How to start using Quivr"
}
}


@ -1,23 +0,0 @@
---
sidebar_position: 1
---
# Private LLM
Quivr now has the capability to use a private LLM powered by GPT4All (other open-source models coming soon).
This is similar to the functionality provided by the PrivateGPT project.
This means that your data never leaves the server. The LLM is downloaded to the server and runs inference on your question locally.
## How to use
Set the 'private' flag to True in the /backend/.env file. You can also set other model parameters in the .env file.
Download the GPT4All model from [here](https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin) and place it in the /backend/local_models folder, or download any other model from the GPT4All ecosystem on their [website](https://gpt4all.io/index.html).
## Future Plans
We are planning to add more models to the private LLM feature. We also plan to use a local embedding model from Hugging Face to reduce our reliance on OpenAI's API.
We will also add the ability to use a private LLM model from the frontend and the API. Currently, it is only available if you self-host the backend.
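For illustration, here is a minimal sketch of running inference against the downloaded model using LangChain's GPT4All wrapper. This is not Quivr's internal wiring; the model path and the extra `gpt4all` Python dependency are assumptions for the example.

```python
# Local inference sketch -- assumes `langchain` and the `gpt4all` package are installed.
from langchain.llms import GPT4All

# Path to the model downloaded into /backend/local_models (see above).
llm = GPT4All(model="backend/local_models/ggml-gpt4all-j-v1.3-groovy.bin")

# The prompt is answered entirely on the local machine; no data leaves the server.
print(llm("What is a second brain?"))
```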


@ -1,21 +1,31 @@
---
sidebar_position: 4
title: 🆘 Contributing
---
# Contributing to Quivr
Thanks for your interest in contributing to Quivr! Here you'll find the contribution guidelines and the steps for getting started.
## Table of Contents
- [Community](#community)
- [Roadmap](#roadmap-and-issues)
- [How to Contribute](#how-to-contribute)
- [Reporting Bugs](#reporting-bugs)
- [Feature Requests](#feature-requests)
- [Code Contributions](#code-contributions)
- [Submission Guidelines](#submission-guidelines)
- [Coding Rules](#coding-rules)
- [Frontend Guidelines](#frontend-guidelines)
- [Backend Guidelines](#backend-guidelines)
- [Making a Pull Request](#making-a-pull-request)
- [Contact](#contact)
- [Contributing to Quivr](#contributing-to-quivr)
- [Table of Contents](#table-of-contents)
- [Community](#community)
- [Roadmap and Issues](#roadmap-and-issues)
- [How to Contribute](#how-to-contribute)
- [Reporting Bugs](#reporting-bugs)
- [Feature Requests](#feature-requests)
- [Code Contributions](#code-contributions)
- [Submission Guidelines](#submission-guidelines)
- [Coding Rules](#coding-rules)
- [Frontend Guidelines](#frontend-guidelines)
- [Coding Conventions](#coding-conventions)
- [Testing](#testing)
- [Backend Guidelines](#backend-guidelines)
- [Coding Conventions](#coding-conventions-1)
- [Testing](#testing-1)
- [Making a Pull Request](#making-a-pull-request)
- [Contact](#contact)
## Community


@ -1,144 +0,0 @@
---
sidebar_position: 1
title: Getting Started
---
# Intro
Quivr, your second brain, utilizes the power of Generative AI to store and retrieve unstructured information. Think of it as Obsidian, but turbocharged with AI capabilities.
## Key Features 🎯
- **Universal Data Acceptance**: Quivr can handle almost any type of data you throw at it. Text, images, code snippets, we've got you covered.
- **Generative AI**: Quivr employs advanced AI to assist you in generating and retrieving information.
- **Fast and Efficient**: Designed with speed and efficiency at its core. Quivr ensures rapid access to your data.
- **Secure**: Your data, your control. Always.
- **File Compatibility**:
- Text
- Markdown
- PDF
- Powerpoint
- Excel
- Word
- Audio
- Video
- **Open Source**: Freedom is beautiful, so is Quivr. Open source and free to use.
## Demo Highlights 🎥
### **Demo**:
<video width="640" height="480" controls>
<source src="https://github.com/StanGirard/quivr/assets/19614572/a6463b73-76c7-4bc0-978d-70562dca71f5" type="video/mp4"/>
Your browser does not support the video tag.
</video>
## Getting Started: 🚀
Follow these instructions to get a copy of the project up and running on your local machine for development and testing purposes.
You can find everything in the documentation [here](https://brain.quivr.app/)
### Prerequisites 📋
Before you proceed, ensure you have the following installed:
- Docker
- Docker Compose
Additionally, you'll need a [Supabase](https://supabase.com/) account for:
- Creating a new Supabase project
- Supabase Project API key
- Supabase Project URL
### Installation Steps 💽
- **Step 0**: If needed, the installation is explained on YouTube [here](https://youtu.be/rC-s4QdfY80)
- **Step 1**: Clone the repository using **one** of these commands:
- If you don't have an SSH key set up:
```bash
git clone https://github.com/StanGirard/Quivr.git && cd Quivr
```
- If you have an SSH key set up or want to add it ([guide here](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account))
```bash
git clone git@github.com:StanGirard/Quivr.git && cd Quivr
```
- **Step 2**: Copy the `.XXXXX_env` files
```bash
cp .backend_env.example backend/.env
cp .frontend_env.example frontend/.env
```
- **Step 3**: Update the `backend/.env` and `frontend/.env` files
> _Your `supabase_service_key` can be found in your Supabase dashboard under Project Settings -> API. Use the `anon` `public` key found in the `Project API keys` section._
> _Your `JWT_SECRET_KEY` can be found in your Supabase settings under Project Settings -> API -> JWT Settings -> JWT Secret._
> _The `NEXT_PUBLIC_BACKEND_URL` is set to localhost:5050 for Docker. Update it if you are running the backend on a different machine._
> _To activate Vertex AI with PaLM from GCP, follow the instructions [here](https://python.langchain.com/en/latest/modules/models/llms/integrations/google_vertex_ai_palm.html) and update `backend/.env`. This is an advanced feature; please be an expert in GCP before trying to use it._
- [ ] Change variables in `backend/.env`
- [ ] Change variables in `frontend/.env`
- **Step 4**: Create your database tables and functions with one of these two options:
a. Run the following migration scripts on the Supabase database via the web interface (SQL Editor -> `New query`)
[Creation Script 1](https://github.com/stangirard/quivr/tree/main/scripts/tables.sql)
b. Use the `migration.sh` script to run the migration scripts
```bash
chmod +x migration.sh
./migration.sh
```
Choose `create_scripts` if it's your first time, or `migrations` if you are updating your database.
All the scripts can be found in the [scripts](https://github.com/stangirard/quivr/tree/main/scripts) folder.
> _If you come from an old version of Quivr, run the scripts in the [migration scripts](https://github.com/stangirard/quivr/tree/main/scripts) folder, in date order, to migrate your data to the new version._
- **Step 5**: Launch the app
```bash
docker compose up --build
```
- **Step 6**: Navigate to `localhost:3000` in your browser
- **Step 7**: Want to contribute to the project?
```bash
docker compose -f docker-compose.dev.yml up --build
```
## Contributors ✨
Thanks goes to these wonderful people:
<a href="https://github.com/stangirard/quivr/graphs/contributors">
<img src="https://contrib.rocks/image?repo=stangirard/quivr" />
</a>
## Contribute 🤝
Got a pull request? Open it, and we'll review it as soon as possible. Check out our project board [here](https://github.com/users/StanGirard/projects/5) to see what we're currently focused on, and feel free to bring your fresh ideas to the table!
- [Open Issues](https://github.com/StanGirard/quivr/issues)
- [Open Pull Requests](https://github.com/StanGirard/quivr/pulls)
- [Good First Issues](https://github.com/StanGirard/quivr/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
- [Frontend Issues](https://github.com/StanGirard/quivr/issues?q=is%3Aopen+is%3Aissue+label%3Afrontend)
- [Backend Issues](https://github.com/StanGirard/quivr/issues?q=is%3Aopen+is%3Aissue+label%3Abackend)
## License 📄
This project is licensed under the Apache 2.0 License - see the [LICENSE.md](https://github.com/StanGirard/quivr/blob/main/LICENSE) file for details.

docs/docs/intro.md Normal file

@ -0,0 +1,34 @@
---
sidebar_position: 1
title: 🚀 Welcome to Quivr
---
# Intro
Quivr, your second brain, utilizes the power of Generative AI to store and retrieve unstructured information. Think of it as Obsidian, but turbocharged with AI capabilities.
## Key Features 🎯
- **Universal Data Acceptance**: Quivr can handle almost any type of data you throw at it. Text, images, code snippets, we've got you covered.
- **Generative AI**: Quivr employs advanced AI to assist you in generating and retrieving information.
- **Fast and Efficient**: Designed with speed and efficiency at its core. Quivr ensures rapid access to your data.
- **Secure**: Your data, your control. Always.
- **File Compatibility**:
- Text
- Markdown
- PDF
- Powerpoint
- Excel
- Word
- Audio
- Video
- **Open Source**: Freedom is beautiful, so is Quivr. Open source and free to use.
## Demo Highlights 🎥
### **Demo**:
<video width="640" height="480" controls>
<source src="https://github.com/StanGirard/quivr/assets/19614572/a6463b73-76c7-4bc0-978d-70562dca71f5" type="video/mp4"/>
Your browser does not support the video tag.
</video>


@ -1,6 +1,6 @@
---
sidebar_position: 6
title: Privacy Policy
title: 👀 Privacy Policy
---
## Privacy Policy for Quivr


@ -1,48 +1,9 @@
---
sidebar_position: 5
title: 🎯 Roadmap
---
# Roadmap
## 🚀 What's next?
<iframe width="800" height="450" src="https://whimsical.com/embed/U1XffvPhZxXtNT5Y2ucGvg"></iframe>
## The Vision
Quivr is a platform for building second brains and personal assistants. It is a tool for thinking, learning, and creating.
- [X] **v0.1** - the Webapp
- [x] Basic CRUD operations
- [X] Basic authentication
- [X] One brain per user
- [X] Multiple chats
- [X] Use own keys
- [ ] **v0.2** - Time to share
- [X] Improved Documentation
- [ ] Add a tutorial
- [ ] Add a FAQ
- [X] Create a website - https://brain.quivr.app
- [X] Create public & private brains
- [ ] Allow users to share/subscribe to brains
- [ ] Improved UX/UI
- [ ] Stream response
- [X] Better error handling
- [X] Mobile friendly
- [ ] Chat with one or more files
- [X] Better API Interface
- [ ] Refactor backend & frontend for better development experience
- [ ] **v0.3** - Make it smarter & private
- [ ] Private Brains
- [X] Use PrivateGPT as LLM and Embedding
- [ ] Better Storage
- [ ] Use a vectorstore for storing embeddings
- [ ] Add metadata
- [ ] Improve metadata for files
- [ ] Autonomous Agents - Eureka!
- [ ] Create AGI agents tools
- [ ] Summarization
Good for now?! 😙
A personal assistant that can help you with your daily tasks, such as scheduling meetings, taking notes, and answering questions.
![](/img/north-star-quivr.png)


@ -95,7 +95,7 @@ const config = {
items: [
{
label: 'Get Started',
to: '/docs/get_started/intro',
to: '/docs/intro',
},
],
},


@ -1,5 +1,5 @@
import { Redirect } from "@docusaurus/router";
import React from "react";
export default function Home() {
return <Redirect to="docs/get_started/intro.html" />;
return <Redirect to="docs/intro.html" />;
}

docs/static/img/north-star-quivr.png vendored Normal file
Binary file not shown.