docs(Examples): Add documentation for chatbot, chatbot_voice and quivr-whisper examples (#3502)

# Description

Added documentation for chatbot, chatbot_voice and quivr-whisper
examples

## Checklist before requesting a review

Please delete options that are not relevant.

- [ ] My code follows the style guidelines of this project
- [ ] I have performed a self-review of my code
- [ ] I have commented hard-to-understand areas
- [ ] I have ideally added tests that prove my fix is effective or that
my feature works
- [ ] New and existing unit tests pass locally with my changes
- [ ] Any dependent changes have been merged

## Screenshots (if appropriate):

---------

Co-authored-by: Stan Girard <girard.stanislas@gmail.com>
Aditya Nandan 2024-11-28 16:32:09 +05:30 committed by GitHub
parent d1d608d19f
commit 541e4f0593
7 changed files with 333 additions and 4 deletions



@@ -0,0 +1,105 @@
# Chatbot with Chainlit
This example demonstrates a simple chatbot using **Quivr** and **Chainlit**, where users can upload a `.txt` file and ask questions based on its content.
---
## Prerequisites
- **Python**: Version 3.8 or higher.
- **OpenAI API Key**: Ensure you have a valid OpenAI API key.
---
## Installation
1. Clone the repository and navigate to the appropriate directory:
   ```bash
   git clone https://github.com/QuivrHQ/quivr
   cd examples/chatbot
   ```
2. Set the OpenAI API key as an environment variable:
   ```bash
   export OPENAI_API_KEY='<your-key-here>'
   ```
3. Install the required dependencies:
   ```bash
   pip install -r requirements.lock
   ```
---
## Running the Chatbot
1. Start the Chainlit server:
   ```bash
   chainlit run main.py
   ```
2. Open your web browser and navigate to the URL displayed in the terminal (default: `http://localhost:8000`).
---
## Using the Chatbot
### File Upload
1. On the chatbot interface, upload a `.txt` file when prompted.
2. Ensure the file size is under **20MB**.
3. After uploading, the file is processed, and you will be notified when the chatbot is ready.
### Asking Questions
1. Type your questions into the chat input and press Enter.
2. The chatbot will respond based on the content of the uploaded file.
3. Relevant file sources for the answers are displayed in the chat.
---
## How It Works
1. **File Upload**:
- Users upload a `.txt` file, which is temporarily saved.
- The chatbot processes the file using Quivr to create a "brain."
2. **Session Handling**:
- Chainlit manages the session to retain the file path and brain context.
3. **Question Answering**:
- The chatbot uses the `ask_streaming` method from Quivr to process user queries (a code sketch of this flow follows the list).
- Responses are streamed incrementally for faster feedback.
- Relevant file excerpts (sources) are extracted and displayed.
4. **Retrieval Configuration**:
- A YAML file (`basic_rag_workflow.yaml`) defines retrieval parameters for Quivr.
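For orientation, here is a minimal sketch of how these pieces fit together. It assumes quivr-core's `Brain.from_files` constructor and Chainlit's `AskFileMessage`, `user_session`, and streaming message APIs; the `answer` attribute on streamed chunks is also an assumption. Treat it as an outline of `main.py`, not a copy of it.
```python
# Outline of the chatbot flow (assumed signatures; see examples/chatbot/main.py
# for the authoritative version).
import chainlit as cl
from quivr_core import Brain  # assumed import path for quivr-core


@cl.on_chat_start
async def on_chat_start():
    # 1. Wait for a .txt file (max 20MB), as described above.
    files = await cl.AskFileMessage(
        content="Please upload a .txt file to begin.",
        accept=["text/plain"],
        max_size_mb=20,
    ).send()

    # 2. Build a "brain" from the uploaded file; the real example also loads
    #    retrieval parameters from basic_rag_workflow.yaml (omitted here).
    brain = Brain.from_files(name="user-brain", file_paths=[files[0].path])

    # 3. Keep the brain in the session so later messages can reuse it.
    cl.user_session.set("brain", brain)
    await cl.Message(content="File processed, you can now ask questions!").send()


@cl.on_message
async def on_message(message: cl.Message):
    brain = cl.user_session.get("brain")
    reply = cl.Message(content="")

    # ask_streaming is assumed to be an async generator yielding partial
    # answers; each chunk is streamed to the UI as it arrives.
    async for chunk in brain.ask_streaming(message.content):
        await reply.stream_token(chunk.answer)

    await reply.send()
```
Source display is left out of this sketch; the shipped example additionally collects the relevant excerpts mentioned above and renders them alongside the answer.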
---
## Workflow
### Chat Start
1. Waits for the user to upload a `.txt` file.
2. Processes the file and creates a "brain."
3. Notifies the user when the system is ready for questions.
### On User Message
1. Retrieves the "brain" from the session.
2. Processes the user's question with Quivr.
3. Streams the response and displays it in the chat.
4. Extracts and shows relevant sources from the file.
---
## Features
1. **File Processing**: Creates a context-aware "brain" from the uploaded file.
2. **Streaming Responses**: Delivers answers incrementally for a better user experience.
3. **Source Highlighting**: Displays file excerpts relevant to the answers.
---
Enjoy interacting with your text files in a seamless Q&A format!


@@ -0,0 +1,107 @@
# Voice Chatbot with Chainlit
This example demonstrates how to create a voice-enabled chatbot using **Quivr** and **Chainlit**. The chatbot lets users upload a text file, ask questions about its content, and interact using speech.
---
## Prerequisites
- **Python**: Version 3.8 or higher.
- **OpenAI API Key**: Ensure you have a valid OpenAI API key.
---
## Installation
1. Clone the repository and navigate to the appropriate directory:
   ```bash
   git clone https://github.com/QuivrHQ/quivr
   cd examples/chatbot_voice
   ```
2. Set the OpenAI API key as an environment variable:
   ```bash
   export OPENAI_API_KEY='<your-key-here>'
   ```
3. Install the required dependencies:
   ```bash
   pip install -r requirements.lock
   ```
---
## Running the Chatbot
1. Start the Chainlit server:
   ```bash
   chainlit run main.py
   ```
2. Open your web browser and navigate to the URL displayed in the terminal (default: `http://localhost:8000`).
---
## Using the Chatbot
### File Upload
1. Once the interface loads, the chatbot will prompt you to upload a `.txt` file.
2. Click on the upload area or drag and drop a text file. Ensure the file size is under **20MB**.
3. After processing, the chatbot will notify you that it's ready for interaction.
### Asking Questions
1. Type your questions in the input box or upload an audio file containing your question.
2. If using text input, the chatbot will respond with an answer derived from the uploaded file's content.
3. If using audio input:
- The chatbot converts speech to text using OpenAI Whisper.
- It processes the text query and provides a response.
- It converts the response to audio, enabling hands-free interaction.
---
## Features
1. **Text File Processing**: Creates a "brain" for the uploaded file using Quivr for question answering.
2. **Speech-to-Text (STT)**: Transcribes user-uploaded audio queries using OpenAI Whisper.
3. **Text-to-Speech (TTS)**: Converts chatbot responses into audio for a seamless voice chat experience.
4. **Source Display**: Shows relevant file sources for each response.
5. **Real-Time Updates**: Uses streaming for live feedback during processing.
---
## How It Works
1. **File Upload**: The user uploads a `.txt` file, which is temporarily saved and processed into a "brain" using Quivr.
2. **Session Handling**: Chainlit manages user sessions to retain the uploaded file and brain context.
3. **Voice Interaction**:
- Audio queries are processed via the OpenAI Whisper API (see the speech sketch after this list).
- Responses are generated and optionally converted into audio for playback.
4. **Streaming**: The chatbot streams its answers incrementally, improving response speed.
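To make the voice round-trip concrete, below is a small, self-contained sketch of the speech-to-text and text-to-speech calls using the OpenAI Python SDK. The helper names (`transcribe_audio`, `synthesize_speech`), the output path, and the `whisper-1`/`tts-1` model choices are illustrative, not taken from the example's code.
```python
# Illustrative STT/TTS helpers (names are not from the example itself).
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def transcribe_audio(audio_path: str) -> str:
    """Speech-to-text: send recorded audio to Whisper and return the transcript."""
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )
    return transcript.text


def synthesize_speech(text: str, out_path: str = "answer.mp3") -> str:
    """Text-to-speech: turn the chatbot's answer into a playable audio file."""
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=text)
    Path(out_path).write_bytes(speech.content)
    return out_path
```
In the example, the transcript feeds the same brain-based question answering used for typed input, and the synthesized audio is sent back to the chat for playback.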
---
## Workflow
### Chat Start
1. Waits for a text file upload.
2. Processes the file into a "brain."
3. Notifies the user when ready for interaction.
### On User Message
1. Extracts the "brain" and queries it using the message content.
2. Streams the response back to the user.
3. Displays file sources related to the response.
### Audio Interaction
1. Captures and processes audio chunks during user input (a buffering sketch follows this list).
2. Converts captured audio into text using Whisper.
3. Queries the brain and provides both text and audio responses.
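Step 1 relies on Chainlit's audio hooks. The sketch below buffers incoming chunks in the user session and hands the finished recording to Whisper; the hook names exist in Chainlit, but chunk attributes and callback signatures vary between Chainlit versions, so treat it as an outline rather than the example's exact code.
```python
# Approximate audio-capture flow (Chainlit hook signatures vary by version).
import io

import chainlit as cl
from openai import OpenAI

client = OpenAI()  # uses OPENAI_API_KEY from the environment


@cl.on_audio_chunk
async def on_audio_chunk(chunk):
    # First chunk of a recording: start a fresh in-memory buffer.
    if chunk.isStart:
        buffer = io.BytesIO()
        buffer.name = f"input.{chunk.mimeType.split('/')[1]}"
        cl.user_session.set("audio_buffer", buffer)
    # Append every chunk so the full recording is available at the end.
    cl.user_session.get("audio_buffer").write(chunk.data)


@cl.on_audio_end
async def on_audio_end(elements):
    # Recording finished: transcribe the buffered audio with Whisper, then
    # treat the transcript exactly like a typed question.
    buffer = cl.user_session.get("audio_buffer")
    buffer.seek(0)
    transcript = client.audio.transcriptions.create(model="whisper-1", file=buffer)
    await cl.Message(content=f"Transcribed question: {transcript.text}").send()
```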
---
Enjoy interacting with your documents in both text and voice modes!


@@ -0,0 +1,114 @@
# Voice Chatbot with Flask
This example demonstrates a simple chatbot using **Flask** and **Quivr**, where users can upload a `.txt` file and ask questions based on its content. It supports speech-to-text and text-to-speech capabilities for a seamless interactive experience.
<video style="width:100%" muted="" controls="" alt="type:video">
<source src="../assets/chatbot_voice_flask.mp4" type="video/mp4">
</video>
---
## Prerequisites
- **Python**: Version 3.8 or higher.
- **OpenAI API Key**: Ensure you have a valid OpenAI API key.
---
## Installation
1. Clone the repository and navigate to the project directory:
   ```bash
   git clone https://github.com/QuivrHQ/quivr
   cd examples/quivr-whisper
   ```
2. Set the OpenAI API key as an environment variable:
   ```bash
   export OPENAI_API_KEY='<your-key-here>'
   ```
3. Install the required dependencies:
   ```bash
   pip install -r requirements.lock
   ```
---
## Running the Application
1. Start the Flask server:
   ```bash
   python app.py
   ```
2. Open your web browser and navigate to the URL displayed in the terminal (default: `http://localhost:5000`).
---
## Using the Chatbot
### File Upload
1. On the interface, upload a `.txt` file.
2. Ensure the file format is supported and its size is manageable.
3. The file will be processed, and a "brain" instance will be created.
### Asking Questions
1. Use the microphone to record your question (audio upload).
2. The chatbot will process your question and respond with an audio answer.
---
## How It Works
### File Upload
- Users upload a `.txt` file.
- The file is saved to the `uploads` directory and used to create a "brain" using **Quivr**.
### Session Management
- Each session is associated with a unique ID, allowing the system to cache the user's "brain."
### Speech-to-Text
- User audio files are processed with OpenAI's **Whisper** model to generate transcripts.
### Question Answering
- The "brain" processes the transcribed text, retrieves relevant answers, and generates a response.
### Text-to-Speech
- The answer is converted to audio using OpenAI's text-to-speech model and returned to the user (a combined Flask sketch follows).
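The following is a compressed, hypothetical outline of how these steps could be wired together in Flask. The route names, the in-memory `brains` cache, and the quivr-core calls are assumptions for illustration; the shipped `app.py` is the reference implementation.
```python
# Hypothetical outline of the Flask app; see examples/quivr-whisper/app.py
# for the real routes and helpers.
import base64
import os
import uuid

from flask import Flask, jsonify, request, session
from openai import OpenAI
from quivr_core import Brain  # assumed import path for quivr-core
from werkzeug.utils import secure_filename

app = Flask(__name__)
app.secret_key = os.urandom(24)
client = OpenAI()

os.makedirs("uploads", exist_ok=True)
brains = {}  # session ID -> cached Brain instance


@app.route("/upload", methods=["POST"])
def upload():
    # Save the .txt file and build a "brain" for this session.
    uploaded = request.files["file"]
    path = os.path.join("uploads", secure_filename(uploaded.filename))
    uploaded.save(path)

    session_id = session.setdefault("id", str(uuid.uuid4()))
    brains[session_id] = Brain.from_files(name=session_id, file_paths=[path])
    return jsonify({"status": "ready"})


@app.route("/ask", methods=["POST"])
def ask():
    brain = brains[session["id"]]

    # Speech-to-text: transcribe the recorded question with Whisper.
    audio = request.files["audio"]
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=(audio.filename, audio.read()),
    )

    # Question answering against the cached brain (synchronous ask assumed).
    answer = brain.ask(transcript.text).answer

    # Text-to-speech: synthesize the answer and return it Base64-encoded.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
    return jsonify({"audio": base64.b64encode(speech.content).decode("utf-8")})
```
Error handling and upload validation are omitted to keep the outline short.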
---
## Workflow
1. **Upload File**:
- The user uploads a `.txt` file.
- A "brain" is created and cached for the session.
2. **Ask Questions**:
- The user uploads an audio file containing a question.
- The question is transcribed, processed, and answered using the "brain."
3. **Answer Delivery**:
- The answer is converted to audio and returned to the user as a Base64-encoded string.
---
## Features
1. **File Upload and Processing**:
- Creates a context-aware "brain" from the uploaded text file.
2. **Audio-based Interaction**:
- Supports speech-to-text for input and text-to-speech for responses.
3. **Session Management**:
- Retains user context throughout the interaction.
4. **Integration with OpenAI**:
- Uses OpenAI models for transcription, answer generation, and audio synthesis.
---
Enjoy interacting with your text files through an intuitive voice-based interface!


@@ -89,4 +89,7 @@ nav:
   - Examples:
     - examples/index.md
     - examples/custom_storage.md
+    - examples/chatbot.md
+    - examples/chatbot_voice.md
+    - examples/chatbot_voice_flask.md
   - Enterprise: https://docs.quivr.app/


@@ -25,8 +25,6 @@ anthropic==0.36.1
 anyio==4.6.2.post1
     # via anthropic
     # via httpx
-appnope==0.1.4
-    # via ipykernel
 asttokens==2.4.1
     # via stack-data
 attrs==24.2.0
@@ -78,6 +76,8 @@ fsspec==2024.9.0
     # via huggingface-hub
 ghp-import==2.1.0
     # via mkdocs
+greenlet==3.1.1
+    # via sqlalchemy
 griffe==1.2.0
     # via mkdocstrings-python
 h11==0.14.0


@@ -25,8 +25,6 @@ anthropic==0.36.1
 anyio==4.6.2.post1
     # via anthropic
     # via httpx
-appnope==0.1.4
-    # via ipykernel
 asttokens==2.4.1
     # via stack-data
 attrs==24.2.0
@@ -78,6 +76,8 @@ fsspec==2024.9.0
    # via huggingface-hub
 ghp-import==2.1.0
     # via mkdocs
+greenlet==3.1.1
+    # via sqlalchemy
 griffe==1.2.0
     # via mkdocstrings-python
 h11==0.14.0