docs: document OpenLIT integration (#2386)

Signed-off-by: patcher9 <patcher99@dokulabs.com>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
patcher9 2024-06-05 20:35:21 +05:30 committed by GitHub
parent d3d777bc51
commit d43bfa0a53
3 changed files with 49 additions and 0 deletions


@@ -67,6 +67,7 @@ An alternative way to install GPT4All is to use one of the offline installers av
* :parrot::link: [Langchain](https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html)
* :card_file_box: [Weaviate Vector Database](https://github.com/weaviate/weaviate) - [module docs](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules/text2vec-gpt4all)
* :telescope: [OpenLIT (OTel-native Monitoring)](https://github.com/openlit/openlit) - [Docs](https://docs.openlit.io/latest/integrations/gpt4all)
## Contributing


@@ -0,0 +1,47 @@
# Monitoring
Leverage OpenTelemetry to perform real-time monitoring of your LLM application using [OpenLIT](https://github.com/openlit/openlit). OpenLIT collects data on user interactions, performance metrics, and other key information that can help you improve the functionality and reliability of your GPT4All-based LLM application.
## How it works
OpenLIT adds automatic OpenTelemetry (OTel) instrumentation to the GPT4All SDK. It covers the `generate` and `embedding` functions, tracking LLM usage by gathering inputs and outputs. This allows you to monitor and evaluate the performance and behavior of your LLM application in different environments. You can view and analyze the resulting traces and metrics in the OpenLIT UI, or export them to widely used observability tools like Grafana and DataDog for further analysis and visualization.
## Getting Started
Here's a straightforward guide to help you set up and start monitoring your application:
### 1. Install the OpenLIT SDK
Open your terminal and run:
```shell
pip install openlit
```
### 2. Set up Monitoring for your Application
In your application, initialize OpenLIT as outlined below:
```python
from gpt4all import GPT4All
import openlit

openlit.init()  # Initialize OpenLIT monitoring

model = GPT4All(model_name='orca-mini-3b-gguf2-q4_0.gguf')

# Start a chat session and send queries
with model.chat_session():
    response1 = model.generate(prompt='hello', temp=0)
    response2 = model.generate(prompt='write me a short poem', temp=0)
    response3 = model.generate(prompt='thank you', temp=0)

    print(model.current_chat_session)
```
This setup wraps your GPT4All model interactions, capturing valuable data about each request and response.
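OpenLIT's instrumentation also covers the embedding path mentioned above. The following is a minimal sketch, assuming the same zero-configuration `openlit.init()` call captures `Embed4All` usage; the input text is only illustrative.
```python
from gpt4all import Embed4All
import openlit

openlit.init()  # Initialize OpenLIT monitoring

# Generate an embedding; the call is recorded alongside your generation traces
embedder = Embed4All()
embedding = embedder.embed("The quick brown fox jumps over the lazy dog")
print(len(embedding))
```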
### Visualize
Once you've set up data collection with OpenLIT, you can visualize and analyze this information to better understand your application's performance:
- **Using OpenLIT UI:** Connect to OpenLIT's UI to start exploring performance metrics. Visit the OpenLIT [Quickstart Guide](https://docs.openlit.io/latest/quickstart) for step-by-step details.
- **Integrate with existing Observability Tools:** If you use tools like Grafana or DataDog, you can integrate the data collected by OpenLIT. For instructions on setting up these connections, check the OpenLIT [Connections Guide](https://docs.openlit.io/latest/connections/intro). A minimal export sketch follows this list.
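As a starting point for exporting telemetry to an external backend, the sketch below assumes `openlit.init()` accepts an `otlp_endpoint` argument pointing at an OpenTelemetry collector that forwards data to your tool of choice; confirm the exact parameters and endpoint in the Connections Guide.
```python
import openlit

# Assumption: an OpenTelemetry collector listens on this OTLP/HTTP endpoint
# and forwards traces and metrics to Grafana, DataDog, or another backend.
openlit.init(otlp_endpoint="http://127.0.0.1:4318")
```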


@@ -14,6 +14,7 @@ nav:
- 'GPT4All in Python':
- 'Generation': 'gpt4all_python.md'
- 'Embedding': 'gpt4all_python_embedding.md'
- 'Monitoring with OpenLIT': 'gpt4all_monitoring.md'
- 'GPT4ALL in NodeJs': 'gpt4all_nodejs.md'
- 'gpt4all_cli.md'
- 'Wiki':