Merge pull request #2304 from kqlio67/main

Add new provider, enhance functionality, and update docs
Tekky 2024-10-30 09:54:54 +01:00 committed by GitHub
commit 1c8061af55
22 changed files with 760 additions and 547 deletions

View File

@ -1,12 +1,11 @@
![248433934-7886223b-c1d1-4260-82aa-da5741f303bb](https://github.com/xtekky/gpt4free/assets/98614666/ea012c87-76e0-496a-8ac4-e2de090cc6c9)
<a href="https://trendshift.io/repositories/1692" target="_blank"><img src="https://trendshift.io/api/badge/repositories/1692" alt="xtekky%2Fgpt4free | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
---
Written by [@xtekky](https://github.com/xtekky)
<p align="center"><strong>Written by <a href="https://github.com/xtekky">@xtekky</a></strong></p>
<div id="top"></div>
@ -17,7 +16,7 @@ Written by [@xtekky](https://github.com/xtekky)
> _"gpt4free"_ serves as a **PoC** (proof of concept), demonstrating the development of an API package with multi-provider requests, with features like timeouts, load balance and flow control.
> [!NOTE]
> <sup><strong>Lastet version:</strong></sup> [![PyPI version](https://img.shields.io/pypi/v/g4f?color=blue)](https://pypi.org/project/g4f) [![Docker version](https://img.shields.io/docker/v/hlohaus789/g4f?label=docker&color=blue)](https://hub.docker.com/r/hlohaus789/g4f)
> <sup><strong>Latest version:</strong></sup> [![PyPI version](https://img.shields.io/pypi/v/g4f?color=blue)](https://pypi.org/project/g4f) [![Docker version](https://img.shields.io/docker/v/hlohaus789/g4f?label=docker&color=blue)](https://hub.docker.com/r/hlohaus789/g4f)
> <sup><strong>Stats:</strong></sup> [![Downloads](https://static.pepy.tech/badge/g4f)](https://pepy.tech/project/g4f) [![Downloads](https://static.pepy.tech/badge/g4f/month)](https://pepy.tech/project/g4f)
```sh
@ -30,10 +29,11 @@ docker pull hlohaus789/g4f
## 🆕 What's New
- **For comprehensive details on new features and updates, please refer to our [Releases](https://github.com/xtekky/gpt4free/releases) page**
- **Installation Guide for Windows (.exe):** 💻 [#installation-guide-for-windows](#installation-guide-for-windows-exe)
- **Installation Guide for Windows (.exe):** 💻 [Installation Guide for Windows (.exe)](#installation-guide-for-windows-exe)
- **Join our Telegram Channel:** 📨 [telegram.me/g4f_channel](https://telegram.me/g4f_channel)
- **Join our Discord Group:** 💬 [discord.gg/XfybzPXPH5](https://discord.gg/XfybzPXPH5)
## 🔻 Site Takedown
Is your site listed in this repository and do you want it taken down? Send an email to takedown@g4f.ai with proof that it is yours, and it will be removed as quickly as possible. To prevent reproduction, please secure your API. 😉
@ -53,33 +53,32 @@ Is your site on this repository and you want to take it down? Send an email to t
- [ ] 🚧 Improve compatibility and error handling
## 📚 Table of Contents
- [🆕 What's New](#-whats-new)
- [📚 Table of Contents](#-table-of-contents)
- [🛠️ Getting Started](#-getting-started)
- [Docker Container Guide](#docker-container-guide)
- [Installation Guide for Windows (.exe)](#installation-guide-for-windows-exe)
- [Use python](#use-python)
- [Prerequisites](#prerequisites)
- [Install using PyPI package:](#install-using-pypi-package)
- [Install from source:](#install-from-source)
- [Install using Docker:](#install-using-docker)
- [💡 Usage](#-usage)
- [Text Generation](#text-generation)
- [Image Generation](#image-generation)
- [Web UI](#web-ui)
- [Interference API](docs/interference.md)
- [Local inference](docs/local.md)
- [Configuration](#configuration)
- [🚀 Providers and Models](docs/providers-and-models.md)
- [🔗 Powered by gpt4free](#-powered-by-gpt4free)
- [🤝 Contribute](#-contribute)
- [How do i create a new Provider?](#guide-how-do-i-create-a-new-provider)
- [How can AI help me with writing code?](#guide-how-can-ai-help-me-with-writing-code)
- [🙌 Contributors](#-contributors)
- [©️ Copyright](#-copyright)
- [⭐ Star History](#-star-history)
- [📄 License](#-license)
- [🆕 What's New](#-whats-new)
- [📚 Table of Contents](#-table-of-contents)
- [🛠️ Getting Started](#-getting-started)
- [Docker Container Guide](#docker-container-guide)
- [Installation Guide for Windows (.exe)](#installation-guide-for-windows-exe)
- [Use python](#use-python)
- [Prerequisites](#prerequisites)
- [Install using PyPI package](#install-using-pypi-package)
- [Install from source](#install-from-source)
- [Install using Docker](#install-using-docker)
- [💡 Usage](#-usage)
- [Text Generation](#text-generation)
- [Image Generation](#image-generation)
- [Web UI](#web-ui)
- [Interference API](#interference-api)
- [Local Inference](docs/local.md)
- [Configuration](#configuration)
- [🚀 Providers and Models](docs/providers-and-models.md)
- [🔗 Powered by gpt4free](#-powered-by-gpt4free)
- [🤝 Contribute](#-contribute)
- [How do I create a new Provider?](#guide-how-do-i-create-a-new-provider)
- [How can AI help me with writing code?](#guide-how-can-ai-help-me-with-writing-code)
- [🙌 Contributors](#-contributors)
- [©️ Copyright](#-copyright)
- [⭐ Star History](#-star-history)
- [📄 License](#-license)
## 🛠️ Getting Started
@ -123,7 +122,7 @@ To ensure the seamless operation of our application, please follow the instructi
By following these steps, you should be able to successfully install and run the application on your Windows system. If you encounter any issues during the installation process, please refer to our Issue Tracker or reach out on Discord for assistance.
Run the **Webview UI** on other Platfroms:
Run the **Webview UI** on other Platforms:
- [/docs/guides/webview](docs/webview.md)
@ -771,10 +770,10 @@ set G4F_PROXY=http://host:port
We welcome contributions from the community. Whether you're adding new providers or features, or simply fixing typos and making small improvements, your input is valued. Creating a pull request is all it takes; our co-pilot will handle the code review process. Once all changes have been addressed, we'll merge the pull request into the main branch and release the updates at a later time.
###### Guide: How do I create a new Provider?
- Read: [/docs/guides/create_provider](docs/guides/create_provider.md)
- Read: [Create Provider Guide](docs/guides/create_provider.md)
###### Guide: How can AI help me with writing code?
- Read: [/docs/guides/help_me](docs/guides/help_me.md)
- Read: [AI Assistance Guide](docs/guides/help_me.md)
## 🙌 Contributors
A list of all contributors is available [here](https://github.com/xtekky/gpt4free/graphs/contributors)
@ -866,4 +865,7 @@ This project is licensed under <a href="https://github.com/xtekky/gpt4free/blob/
</tr>
</table>
---
<p align="right">(<a href="#top">🔼 Back to top</a>)</p>

View File

@ -10,6 +10,7 @@ The G4F async client API is designed to be compatible with the OpenAI API, makin
- [Key Features](#key-features)
- [Getting Started](#getting-started)
- [Initializing the Client](#initializing-the-client)
- [Creating Chat Completions](#creating-chat-completions)
- [Configuration](#configuration)
- [Usage Examples](#usage-examples)
- [Text Completions](#text-completions)
@ -51,6 +52,28 @@ client = Client(
)
```
## Creating Chat Completions
**Here's an improved example of creating chat completions:**
```python
response = await async_client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{
"role": "user",
"content": "Say this is a test"
}
]
# Add other parameters as needed
)
```
**This example:**
- Asks a specific question: `Say this is a test`
- Keeps the default settings; parameters such as `temperature` and `max_tokens` can be added for more control over the output
- Returns a single complete (non-streamed) response

You can adjust these parameters based on your specific needs, as in the sketch below.
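For comparison, here is a hedged sketch that sets those parameters explicitly (the values are illustrative, and it is assumed that `temperature` and `max_tokens` are forwarded to the selected provider):

```python
response = await async_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Say this is a test"}
    ],
    temperature=0.7,  # illustrative: lower values give more deterministic output
    max_tokens=100,   # illustrative: caps the length of the reply
    stream=False      # return one complete response instead of chunks
)
print(response.choices[0].message.content)
```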
### Configuration
@ -164,7 +187,7 @@ async def main():
response = await client.images.async_generate(
prompt="a white siamese cat",
model="dall-e-3"
model="flux"
)
image_url = response.data[0].url
@ -185,7 +208,7 @@ async def main():
response = await client.images.async_generate(
prompt="a white siamese cat",
model="dall-e-3",
model="flux",
response_format="b64_json"
)
@ -217,7 +240,7 @@ async def main():
)
task2 = client.images.async_generate(
model="dall-e-3",
model="flux",
prompt="a white siamese cat"
)

View File

@ -7,6 +7,7 @@
- [Getting Started](#getting-started)
- [Switching to G4F Client](#switching-to-g4f-client)
- [Initializing the Client](#initializing-the-client)
- [Creating Chat Completions](#creating-chat-completions)
- [Configuration](#configuration)
- [Usage Examples](#usage-examples)
- [Text Completions](#text-completions)
@ -56,6 +57,28 @@ client = Client(
# Add any other necessary parameters
)
```
## Creating Chat Completions
**Here's an improved example of creating chat completions:**
```python
response = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{
"role": "user",
"content": "Say this is a test"
}
]
# Add any other necessary parameters
)
```
**This example:**
- Asks a specific question: `Say this is a test`
- Keeps the default settings; parameters such as `temperature` and `max_tokens` can be added for more control over the output
- Returns a single complete (non-streamed) response

You can adjust these parameters based on your specific needs, as in the sketch below.
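As above, a hedged sketch with the parameters set explicitly (illustrative values; it is assumed the client forwards `temperature` and `max_tokens` to the provider):

```python
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Say this is a test"}
    ],
    temperature=0.7,  # illustrative: lower values give more deterministic output
    max_tokens=100,   # illustrative: caps the length of the reply
    stream=False      # return one complete response instead of chunks
)
print(response.choices[0].message.content)
```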
## Configuration
@ -129,7 +152,7 @@ from g4f.client import Client
client = Client()
response = client.images.generate(
model="dall-e-3",
model="flux",
prompt="a white siamese cat"
# Add any other necessary parameters
)
@ -139,6 +162,23 @@ image_url = response.data[0].url
print(f"Generated image URL: {image_url}")
```
#### Base64 Response Format
```python
from g4f.client import Client
client = Client()
response = client.images.generate(
model="flux",
prompt="a white siamese cat",
response_format="b64_json"
)
base64_text = response.data[0].b64_json
print(base64_text)
```
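To save the result, the base64 payload can be decoded and written to disk. A minimal sketch, assuming `b64_json` holds a plain base64-encoded image (the filename and format are illustrative):

```python
import base64

# Decode the base64 payload returned above and write it to a file
with open("generated_image.webp", "wb") as f:
    f.write(base64.b64decode(base64_text))
```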
### Creating Image Variations

View File

@ -1,23 +1,30 @@
# G4F - Interference API Usage Guide
## Table of Contents
- [Introduction](#introduction)
- [Running the Interference API](#running-the-interference-api)
- [From PyPI Package](#from-pypi-package)
- [From Repository](#from-repository)
- [Usage with OpenAI Library](#usage-with-openai-library)
- [Usage with Requests Library](#usage-with-requests-library)
- [Using the Interference API](#using-the-interference-api)
- [Basic Usage](#basic-usage)
- [With OpenAI Library](#with-openai-library)
- [With Requests Library](#with-requests-library)
- [Key Points](#key-points)
- [Conclusion](#conclusion)
## Introduction
The Interference API allows you to serve other OpenAI integrations with G4F. It acts as a proxy, translating requests to the OpenAI API into requests to the G4F providers.
The G4F Interference API is a powerful tool that allows you to serve other OpenAI integrations using G4F (Gpt4free). It acts as a proxy, translating requests intended for the OpenAI API into requests compatible with G4F providers. This guide will walk you through the process of setting up, running, and using the Interference API effectively.
## Running the Interference API
**You can run the Interference API in two ways:** using the PyPI package or from the repository.
### From PyPI Package
**You can run the Interference API directly from the G4F PyPI package:**
**To run the Interference API directly from the G4F PyPI package, use the following Python code:**
```python
from g4f.api import run_api
@ -25,37 +32,80 @@ run_api()
```
### From Repository
Alternatively, you can run the Interference API from the cloned repository.
**If you prefer to run the Interference API from the cloned repository, you have two options:**
**Run the server with:**
1. **Using the command line:**
```bash
g4f api
```
or
2. **Using Python:**
```bash
python -m g4f.api.run
```
**Once running, the API will be accessible at:** `http://localhost:1337/v1`
## Usage with OpenAI Library
## Using the Interference API
### Basic Usage
**You can interact with the Interference API using curl commands for both text and image generation:**
**For text generation:**
```bash
curl -X POST "http://localhost:1337/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
"messages": [
{
"role": "user",
"content": "Hello"
}
],
"model": "gpt-3.5-turbo"
}'
```
**For image generation:**
1. **url:**
```bash
curl -X POST "http://localhost:1337/v1/images/generate" \
-H "Content-Type: application/json" \
-d '{
"prompt": "a white siamese cat",
"model": "flux",
"response_format": "url"
}'
```
2. **b64_json:**
```bash
curl -X POST "http://localhost:1337/v1/images/generate" \
-H "Content-Type: application/json" \
-d '{
"prompt": "a white siamese cat",
"model": "flux",
"response_format": "b64_json"
}'
```
### With OpenAI Library
**You can use the Interference API with the OpenAI Python library by changing the `base_url`:**
```python
from openai import OpenAI
client = OpenAI(
api_key="",
# Change the API base URL to the local interference API
base_url="http://localhost:1337/v1"
base_url="http://localhost:1337/v1"
)
response = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[{"role": "user", "content": "write a poem about a tree"}],
messages=[{"role": "user", "content": "Write a poem about a tree"}],
stream=True,
)
@ -68,20 +118,20 @@ else:
content = token.choices[0].delta.content
if content is not None:
print(content, end="", flush=True)
```
## Usage with Requests Library
You can also send requests directly to the Interference API using the requests library.
### With Requests Library
**Send a POST request to `/v1/chat/completions` with the request body containing the model and other parameters:**
**You can also send requests directly to the Interference API using the `requests` library:**
```python
import requests
url = "http://localhost:1337/v1/chat/completions"
body = {
"model": "gpt-3.5-turbo",
"model": "gpt-3.5-turbo",
"stream": False,
"messages": [
{"role": "assistant", "content": "What can you do?"}
@ -92,18 +142,20 @@ json_response = requests.post(url, json=body).json().get('choices', [])
for choice in json_response:
print(choice.get('message', {}).get('content', ''))
```
## Key Points
- The Interference API translates OpenAI API requests into G4F provider requests
- You can run it from the PyPI package or the cloned repository
- It supports usage with the OpenAI Python library by changing the `base_url`
- Direct requests can be sent to the API endpoints using libraries like `requests`
- The Interference API translates OpenAI API requests into G4F provider requests.
- It can be run from either the PyPI package or the cloned repository.
- The API supports usage with the OpenAI Python library by changing the `base_url`.
- Direct requests can be sent to the API endpoints using libraries like `requests`.
- Both text and image generation are supported.
**_The Interference API allows easy integration of G4F with existing OpenAI-based applications and tools._**
## Conclusion
The G4F Interference API provides a seamless way to integrate G4F with existing OpenAI-based applications and tools. By following this guide, you should now be able to set up, run, and use the Interference API effectively. Whether you're using it for text generation, image creation, or as a drop-in replacement for OpenAI in your projects, the Interference API offers flexibility and power for your AI-driven applications.
---

View File

@ -51,6 +51,7 @@ This document provides an overview of various AI providers and models, including
|[free.netfly.top](https://free.netfly.top)|`g4f.Provider.FreeNetfly`|✔|❌|❌|?|![Cloudflare](https://img.shields.io/badge/Cloudflare-f48d37)|❌|
|[gemini.google.com](https://gemini.google.com)|`g4f.Provider.Gemini`|✔|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|✔|
|[ai.google.dev](https://ai.google.dev)|`g4f.Provider.GeminiPro`|✔|❌|✔|?|![Active](https://img.shields.io/badge/Active-brightgreen)|✔|
|[app.giz.ai](https://app.giz.ai/assistant/)|`g4f.Provider.GizAI`|`gemini-flash, gemini-pro, gpt-4o-mini, gpt-4o, claude-3.5-sonnet, claude-3-haiku, llama-3.1-70b, llama-3.1-8b, mistral-large`|`sdxl, sd-1.5, sd-3.5, dalle-3, flux-schnell, flux1-pro`|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[developers.sber.ru](https://developers.sber.ru/gigachat)|`g4f.Provider.GigaChat`|✔|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|✔|
|[gprochat.com](https://gprochat.com)|`g4f.Provider.GPROChat`|`gemini-pro`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[console.groq.com/playground](https://console.groq.com/playground)|`g4f.Provider.Groq`|✔|❌|❌|?|![Active](https://img.shields.io/badge/Active-brightgreen)|✔|
@ -63,10 +64,7 @@ This document provides an overview of various AI providers and models, including
|[app.myshell.ai/chat](https://app.myshell.ai/chat)|`g4f.Provider.MyShell`|✔|❌|?|?|![Disabled](https://img.shields.io/badge/Disabled-red)|❌|
|[nexra.aryahcr.cc/bing](https://nexra.aryahcr.cc/documentation/bing/en)|`g4f.Provider.NexraBing`|✔|❌|❌|✔|![Disabled](https://img.shields.io/badge/Disabled-red)|❌|
|[nexra.aryahcr.cc/blackbox](https://nexra.aryahcr.cc/documentation/blackbox/en)|`g4f.Provider.NexraBlackbox`|`blackboxai` |❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[nexra.aryahcr.cc/chatgpt](https://nexra.aryahcr.cc/documentation/chatgpt/en)|`g4f.Provider.NexraChatGPT`|`gpt-4, gpt-3.5-turbo, gpt-3` |❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[nexra.aryahcr.cc/chatgpt](https://nexra.aryahcr.cc/documentation/chatgpt/en)|`g4f.Provider.NexraChatGPT4o`|`gpt-4o` |❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[nexra.aryahcr.cc/chatgpt](https://nexra.aryahcr.cc/documentation/chatgpt/en)|`g4f.Provider.NexraChatGptV2`|`gpt-4` |❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[nexra.aryahcr.cc/chatgpt](https://nexra.aryahcr.cc/documentation/chatgpt/en)|`g4f.Provider.NexraChatGptWeb`|`gpt-4` |❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[nexra.aryahcr.cc/chatgpt](https://nexra.aryahcr.cc/documentation/chatgpt/en)|`g4f.Provider.NexraChatGPT`|`gpt-4, gpt-3.5-turbo, gpt-3, gpt-4o` |❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[nexra.aryahcr.cc/dall-e](https://nexra.aryahcr.cc/documentation/dall-e/en)|`g4f.Provider.NexraDallE`|❌|`dalle`|❌|❌|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[nexra.aryahcr.cc/dall-e](https://nexra.aryahcr.cc/documentation/dall-e/en)|`g4f.Provider.NexraDallE2`|❌|`dalle-2`|❌|❌|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
|[nexra.aryahcr.cc/emi](https://nexra.aryahcr.cc/documentation/emi/en)|`g4f.Provider.NexraEmi`|❌|`emi`|❌|❌|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
@ -108,18 +106,18 @@ This document provides an overview of various AI providers and models, including
|-------|---------------|-----------|---------|
|gpt-3|OpenAI|1+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-base)|
|gpt-3.5-turbo|OpenAI|5+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-3-5-turbo)|
|gpt-4|OpenAI|33+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
|gpt-4-turbo|OpenAI|2+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
|gpt-4o|OpenAI|7+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o)|
|gpt-4|OpenAI|7+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
|gpt-4-turbo|OpenAI|3+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
|gpt-4o|OpenAI|10+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o)|
|gpt-4o-mini|OpenAI|14+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o-mini)|
|o1|OpenAI|1+ Providers|[platform.openai.com](https://openai.com/index/introducing-openai-o1-preview/)|
|o1-mini|OpenAI|1+ Providers|[platform.openai.com](https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/)|
|o1-mini|OpenAI|2+ Providers|[platform.openai.com](https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/)|
|llama-2-7b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-2-7b)|
|llama-2-13b|Meta Llama|1+ Providers|[llama.com](https://www.llama.com/llama2/)|
|llama-3-8b|Meta Llama|4+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3/)|
|llama-3-70b|Meta Llama|4+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3/)|
|llama-3.1-8b|Meta Llama|7+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.1-70b|Meta Llama|13+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.1-70b|Meta Llama|14+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.1-405b|Meta Llama|5+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.2-1b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-1B)|
|llama-3.2-3b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/blog/llama32)|
@ -127,17 +125,17 @@ This document provides an overview of various AI providers and models, including
|llama-3.2-90b|Meta Llama|2+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)|
|llamaguard-7b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/LlamaGuard-7b)|
|llamaguard-2-8b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Meta-Llama-Guard-2-8B)|
|mistral-7b|Mistral AI|5+ Providers|[mistral.ai](https://mistral.ai/news/announcing-mistral-7b/)|
|mistral-7b|Mistral AI|4+ Providers|[mistral.ai](https://mistral.ai/news/announcing-mistral-7b/)|
|mixtral-8x7b|Mistral AI|6+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-of-experts/)|
|mixtral-8x22b|Mistral AI|3+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-8x22b/)|
|mistral-nemo|Mistral AI|1+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)|
|mistral-large|Mistral AI|1+ Providers|[mistral.ai](https://mistral.ai/news/mistral-large-2407/)|
|mistral-nemo|Mistral AI|2+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)|
|mistral-large|Mistral AI|2+ Providers|[mistral.ai](https://mistral.ai/news/mistral-large-2407/)|
|mixtral-8x7b-dpo|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)|
|yi-34b|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)|
|hermes-3|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B)|
|hermes-3|NousResearch|2+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B)|
|gemini|Google DeepMind|1+ Providers|[deepmind.google](http://deepmind.google/technologies/gemini/)|
|gemini-flash|Google DeepMind|3+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
|gemini-pro|Google DeepMind|9+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/pro/)|
|gemini-flash|Google DeepMind|4+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
|gemini-pro|Google DeepMind|10+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/pro/)|
|gemma-2b|Google|5+ Providers|[huggingface.co](https://huggingface.co/google/gemma-2b)|
|gemma-2b-9b|Google|1+ Providers|[huggingface.co](https://huggingface.co/google/gemma-2-9b)|
|gemma-2b-27b|Google|2+ Providers|[huggingface.co](https://huggingface.co/google/gemma-2-27b)|
@ -145,10 +143,10 @@ This document provides an overview of various AI providers and models, including
|gemma-2|Google|2+ Providers|[huggingface.co](https://huggingface.co/blog/gemma2)|
|gemma_2_27b|Google|1+ Providers|[huggingface.co](https://huggingface.co/blog/gemma2)|
|claude-2.1|Anthropic|1+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-2)|
|claude-3-haiku|Anthropic|3+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-haiku)|
|claude-3-haiku|Anthropic|4+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-haiku)|
|claude-3-sonnet|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)|
|claude-3-opus|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)|
|claude-3.5-sonnet|Anthropic|5+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-5-sonnet)|
|claude-3.5-sonnet|Anthropic|6+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-5-sonnet)|
|blackboxai|Blackbox AI|2+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)|
|blackboxai-pro|Blackbox AI|1+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)|
|yi-1.5-9b|01-ai|1+ Providers|[huggingface.co](https://huggingface.co/01-ai/Yi-1.5-9B)|
@ -196,11 +194,12 @@ This document provides an overview of various AI providers and models, including
### Image Models
| Model | Base Provider | Providers | Website |
|-------|---------------|-----------|---------|
|sdxl|Stability AI|2+ Providers|[huggingface.co](https://huggingface.co/docs/diffusers/en/using-diffusers/sdxl)|
|sdxl|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/docs/diffusers/en/using-diffusers/sdxl)|
|sdxl-lora|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/blog/lcm_lora)|
|sdxl-turbo|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/stabilityai/sdxl-turbo)|
|sd-1.5|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/runwayml/stable-diffusion-v1-5)|
|sd-3|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_3)|
|sd-3.5|Stability AI|1+ Providers|[stability.ai](https://stability.ai/news/introducing-stable-diffusion-3-5)|
|playground-v2.5|Playground AI|1+ Providers|[huggingface.co](https://huggingface.co/playgroundai/playground-v2.5-1024px-aesthetic)|
|flux|Black Forest Labs|2+ Providers|[github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)|
|flux-pro|Black Forest Labs|2+ Providers|[github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)|
@ -210,10 +209,9 @@ This document provides an overview of various AI providers and models, including
|flux-disney|Flux AI|1+ Providers|[]()|
|flux-pixel|Flux AI|1+ Providers|[]()|
|flux-4o|Flux AI|1+ Providers|[]()|
|flux-schnell|Black Forest Labs|1+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-schnell)|
|flux-schnell|Black Forest Labs|2+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-schnell)|
|dalle|OpenAI|1+ Providers|[openai.com](https://openai.com/index/dall-e/)|
|dalle-2|OpenAI|1+ Providers|[openai.com](https://openai.com/index/dall-e-2/)|
|dalle-3|OpenAI|2+ Providers|[openai.com](https://openai.com/index/dall-e-3/)|
|emi||1+ Providers|[]()|
|any-dark||1+ Providers|[]()|
|midjourney|Midjourney|1+ Providers|[docs.midjourney.com](https://docs.midjourney.com/docs/model-versions)|

View File

@ -10,7 +10,7 @@ from .helper import format_prompt
class AI365VIP(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://chat.ai365vip.com"
api_endpoint = "/api/chat"
working = True
working = False
default_model = 'gpt-3.5-turbo'
models = [
'gpt-3.5-turbo',

View File

@ -59,10 +59,6 @@ class AiMathGPT(AsyncGeneratorProvider, ProviderModelMixin):
async with ClientSession(headers=headers) as session:
data = {
"messages": [
{
"role": "system",
"content": ""
},
{
"role": "user",
"content": format_prompt(messages)

View File

@ -51,7 +51,6 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
'ReactAgent',
'XcodeAgent',
'AngularJSAgent',
'RepoMap',
]
agentMode = {
@ -78,7 +77,6 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
'ReactAgent': {'mode': True, 'id': "React Agent"},
'XcodeAgent': {'mode': True, 'id': "Xcode Agent"},
'AngularJSAgent': {'mode': True, 'id': "AngularJS Agent"},
'RepoMap': {'mode': True, 'id': "repomap"},
}
userSelectedModel = {
@ -174,7 +172,7 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
proxy: Optional[str] = None,
image: ImageType = None,
image_name: str = None,
websearch: bool = False,
web_search: bool = False,
**kwargs
) -> AsyncGenerator[Union[str, ImageResponse], None]:
"""
@ -186,7 +184,7 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
proxy (Optional[str]): Proxy URL, if needed.
image (ImageType): Image data to be processed, if any.
image_name (str): Name of the image file, if an image is provided.
websearch (bool): Enables or disables web search mode.
web_search (bool): Enables or disables web search mode.
**kwargs: Additional keyword arguments.
Yields:
@ -276,7 +274,7 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
"clickedForceWebSearch": False,
"visitFromDelta": False,
"mobileClient": False,
"webSearchMode": websearch,
"webSearchMode": web_search,
"userSelectedModel": cls.userSelectedModel.get(model, model)
}
@ -313,7 +311,7 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
else:
yield cleaned_response
else:
if websearch:
if web_search:
match = re.search(r'\$~~~\$(.*?)\$~~~\$', cleaned_response, re.DOTALL)
if match:
source_part = match.group(1).strip()
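For reference, a minimal sketch of calling the provider with the renamed parameter, assuming the async generator interface defined above (model name and prompt are illustrative):

```python
import asyncio
from g4f.Provider import Blackbox

async def main():
    # web_search replaces the old `websearch` keyword
    async for item in Blackbox.create_async_generator(
        model="blackboxai",
        messages=[{"role": "user", "content": "What's new in Python?"}],
        web_search=True,
    ):
        print(item)

asyncio.run(main())
```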

g4f/Provider/GizAI.py (new file, 151 lines)
View File

@ -0,0 +1,151 @@
from __future__ import annotations
import json
from aiohttp import ClientSession
from ..typing import AsyncResult, Messages
from ..image import ImageResponse
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt
class GizAI(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://app.giz.ai/assistant/"
api_endpoint = "https://app.giz.ai/api/data/users/inferenceServer.infer"
working = True
supports_system_message = True
supports_message_history = True
# Chat models
default_model = 'chat-gemini-flash'
chat_models = [
default_model,
'chat-gemini-pro',
'chat-gpt4m',
'chat-gpt4',
'claude-sonnet',
'claude-haiku',
'llama-3-70b',
'llama-3-8b',
'mistral-large',
'chat-o1-mini'
]
# Image models
image_models = [
'flux1',
'sdxl',
'sd',
'sd35',
]
models = [*chat_models, *image_models]
model_aliases = {
# Chat model aliases
"gemini-flash": "chat-gemini-flash",
"gemini-pro": "chat-gemini-pro",
"gpt-4o-mini": "chat-gpt4m",
"gpt-4o": "chat-gpt4",
"claude-3.5-sonnet": "claude-sonnet",
"claude-3-haiku": "claude-haiku",
"llama-3.1-70b": "llama-3-70b",
"llama-3.1-8b": "llama-3-8b",
"o1-mini": "chat-o1-mini",
# Image model aliases
"sd-1.5": "sd",
"sd-3.5": "sd35",
"flux-schnell": "flux1",
}
@classmethod
def get_model(cls, model: str) -> str:
if model in cls.models:
return model
elif model in cls.model_aliases:
return cls.model_aliases[model]
else:
return cls.default_model
@classmethod
def is_image_model(cls, model: str) -> bool:
return model in cls.image_models
@classmethod
async def create_async_generator(
cls,
model: str,
messages: Messages,
proxy: str = None,
**kwargs
) -> AsyncResult:
model = cls.get_model(model)
headers = {
'Accept': 'application/json, text/plain, */*',
'Accept-Language': 'en-US,en;q=0.9',
'Cache-Control': 'no-cache',
'Connection': 'keep-alive',
'Content-Type': 'application/json',
'Origin': 'https://app.giz.ai',
'Pragma': 'no-cache',
'Sec-Fetch-Dest': 'empty',
'Sec-Fetch-Mode': 'cors',
'Sec-Fetch-Site': 'same-origin',
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36',
'sec-ch-ua': '"Not?A_Brand";v="99", "Chromium";v="130"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"Linux"'
}
async with ClientSession() as session:
if cls.is_image_model(model):
# Image generation
prompt = messages[-1]["content"]
data = {
"model": model,
"input": {
"width": "1024",
"height": "1024",
"steps": 4,
"output_format": "webp",
"batch_size": 1,
"mode": "plan",
"prompt": prompt
}
}
async with session.post(
cls.api_endpoint,
headers=headers,
data=json.dumps(data),
proxy=proxy
) as response:
response.raise_for_status()
response_data = await response.json()
if response_data.get('status') == 'completed' and response_data.get('output'):
for url in response_data['output']:
yield ImageResponse(images=url, alt="Generated Image")
else:
# Chat completion
data = {
"model": model,
"input": {
"messages": [
{
"type": "human",
"content": format_prompt(messages)
}
],
"mode": "plan"
},
"noStream": True
}
async with session.post(
cls.api_endpoint,
headers=headers,
data=json.dumps(data),
proxy=proxy
) as response:
response.raise_for_status()
result = await response.json()
yield result.get('output', '')
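A quick usage sketch for the new provider, assuming the standard `g4f.client` interface; per the alias table above, `gemini-flash` should resolve to `chat-gemini-flash`:

```python
from g4f.client import Client
from g4f.Provider import GizAI

client = Client(provider=GizAI)
response = client.chat.completions.create(
    model="gemini-flash",  # resolved via GizAI.model_aliases
    messages=[{"role": "user", "content": "Say this is a test"}]
)
print(response.choices[0].message.content)
```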

View File

@ -47,6 +47,7 @@ from .FreeChatgpt import FreeChatgpt
from .FreeGpt import FreeGpt
from .FreeNetfly import FreeNetfly
from .GeminiPro import GeminiPro
from .GizAI import GizAI
from .GPROChat import GPROChat
from .HuggingChat import HuggingChat
from .HuggingFace import HuggingFace

View File

@ -1,45 +1,52 @@
from __future__ import annotations
import asyncio
import json
import requests
from typing import Any, Dict
from ...typing import CreateResult, Messages
from ..base_provider import ProviderModelMixin, AbstractProvider
from ...typing import AsyncResult, Messages
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..helper import format_prompt
class NexraChatGPT(AbstractProvider, ProviderModelMixin):
class NexraChatGPT(AsyncGeneratorProvider, ProviderModelMixin):
label = "Nexra ChatGPT"
url = "https://nexra.aryahcr.cc/documentation/chatgpt/en"
api_endpoint = "https://nexra.aryahcr.cc/api/chat/gpt"
api_endpoint_nexra_chatgpt = "https://nexra.aryahcr.cc/api/chat/gpt"
api_endpoint_nexra_chatgpt4o = "https://nexra.aryahcr.cc/api/chat/complements"
api_endpoint_nexra_chatgpt_v2 = "https://nexra.aryahcr.cc/api/chat/complements"
api_endpoint_nexra_gptweb = "https://nexra.aryahcr.cc/api/chat/gptweb"
working = True
supports_system_message = True
supports_message_history = True
supports_stream = True
default_model = 'gpt-3.5-turbo'
models = ['gpt-4', 'gpt-4-0613', 'gpt-4-0314', 'gpt-4-32k-0314', default_model, 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-0613', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0301', 'text-davinci-003', 'text-davinci-002', 'code-davinci-002', 'gpt-3', 'text-curie-001', 'text-babbage-001', 'text-ada-001', 'davinci', 'curie', 'babbage', 'ada', 'babbage-002', 'davinci-002']
nexra_chatgpt = [
'gpt-4', 'gpt-4-0613', 'gpt-4-0314', 'gpt-4-32k-0314',
default_model, 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-0613', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0301',
'text-davinci-003', 'text-davinci-002', 'code-davinci-002', 'gpt-3', 'text-curie-001', 'text-babbage-001', 'text-ada-001', 'davinci', 'curie', 'babbage', 'ada', 'babbage-002', 'davinci-002'
]
nexra_chatgpt4o = ['gpt-4o']
nexra_chatgptv2 = ['chatgpt']
nexra_gptweb = ['gptweb']
models = nexra_chatgpt + nexra_chatgpt4o + nexra_chatgptv2 + nexra_gptweb
model_aliases = {
"gpt-4": "gpt-4-0613",
"gpt-4": "gpt-4-32k",
"gpt-4": "gpt-4-0314",
"gpt-4": "gpt-4-32k-0314",
"gpt-4-32k": "gpt-4-32k-0314",
"gpt-3.5-turbo": "gpt-3.5-turbo-16k",
"gpt-3.5-turbo": "gpt-3.5-turbo-0613",
"gpt-3.5-turbo": "gpt-3.5-turbo-16k-0613",
"gpt-3.5-turbo": "gpt-3.5-turbo-0301",
"gpt-3.5-turbo-0613": "gpt-3.5-turbo-16k-0613",
"gpt-3": "text-davinci-003",
"gpt-3": "text-davinci-002",
"gpt-3": "code-davinci-002",
"gpt-3": "text-curie-001",
"gpt-3": "text-babbage-001",
"gpt-3": "text-ada-001",
"gpt-3": "text-ada-001",
"gpt-3": "davinci",
"gpt-3": "curie",
"gpt-3": "babbage",
"gpt-3": "ada",
"gpt-3": "babbage-002",
"gpt-3": "davinci-002",
"text-davinci-002": "code-davinci-002",
"text-curie-001": "text-babbage-001",
"text-ada-001": "davinci",
"curie": "babbage",
"ada": "babbage-002",
"davinci-002": "davinci-002",
"chatgpt": "chatgpt",
"gptweb": "gptweb"
}
@classmethod
@ -50,40 +57,229 @@ class NexraChatGPT(AbstractProvider, ProviderModelMixin):
return cls.model_aliases[model]
else:
return cls.default_model
@classmethod
def create_completion(
async def create_async_generator(
cls,
model: str,
messages: Messages,
stream: bool = False,
proxy: str = None,
markdown: bool = False,
**kwargs
) -> AsyncResult:
if model in cls.nexra_chatgpt:
async for chunk in cls._create_async_generator_nexra_chatgpt(model, messages, proxy, **kwargs):
yield chunk
elif model in cls.nexra_chatgpt4o:
async for chunk in cls._create_async_generator_nexra_chatgpt4o(model, messages, stream, proxy, markdown, **kwargs):
yield chunk
elif model in cls.nexra_chatgptv2:
async for chunk in cls._create_async_generator_nexra_chatgpt_v2(model, messages, stream, proxy, markdown, **kwargs):
yield chunk
elif model in cls.nexra_gptweb:
async for chunk in cls._create_async_generator_nexra_gptweb(model, messages, proxy, **kwargs):
yield chunk
@classmethod
async def _create_async_generator_nexra_chatgpt(
cls,
model: str,
messages: Messages,
proxy: str = None,
markdown: bool = False,
**kwargs
) -> CreateResult:
) -> AsyncResult:
model = cls.get_model(model)
headers = {
'Content-Type': 'application/json'
"Content-Type": "application/json"
}
prompt = format_prompt(messages)
data = {
"messages": [],
"prompt": format_prompt(messages),
"messages": messages,
"prompt": prompt,
"model": model,
"markdown": markdown
}
response = requests.post(cls.api_endpoint, headers=headers, json=data)
return cls.process_response(response)
loop = asyncio.get_event_loop()
try:
response = await loop.run_in_executor(None, cls._sync_post_request, cls.api_endpoint_nexra_chatgpt, data, headers, proxy)
filtered_response = cls._filter_response(response)
for chunk in filtered_response:
yield chunk
except Exception as e:
print(f"Error during API request (nexra_chatgpt): {e}")
@classmethod
def process_response(cls, response):
async def _create_async_generator_nexra_chatgpt4o(
cls,
model: str,
messages: Messages,
stream: bool = False,
proxy: str = None,
markdown: bool = False,
**kwargs
) -> AsyncResult:
model = cls.get_model(model)
headers = {
"Content-Type": "application/json"
}
prompt = format_prompt(messages)
data = {
"messages": [
{
"role": "user",
"content": prompt
}
],
"stream": stream,
"markdown": markdown,
"model": model
}
loop = asyncio.get_event_loop()
try:
response = await loop.run_in_executor(None, cls._sync_post_request, cls.api_endpoint_nexra_chatgpt4o, data, headers, proxy, stream)
if stream:
async for chunk in cls._process_streaming_response(response):
yield chunk
else:
for chunk in cls._process_non_streaming_response(response):
yield chunk
except Exception as e:
print(f"Error during API request (nexra_chatgpt4o): {e}")
@classmethod
async def _create_async_generator_nexra_chatgpt_v2(
cls,
model: str,
messages: Messages,
stream: bool = False,
proxy: str = None,
markdown: bool = False,
**kwargs
) -> AsyncResult:
model = cls.get_model(model)
headers = {
"Content-Type": "application/json"
}
prompt = format_prompt(messages)
data = {
"messages": [
{
"role": "user",
"content": prompt
}
],
"stream": stream,
"markdown": markdown,
"model": model
}
loop = asyncio.get_event_loop()
try:
response = await loop.run_in_executor(None, cls._sync_post_request, cls.api_endpoint_nexra_chatgpt_v2, data, headers, proxy, stream)
if stream:
async for chunk in cls._process_streaming_response(response):
yield chunk
else:
for chunk in cls._process_non_streaming_response(response):
yield chunk
except Exception as e:
print(f"Error during API request (nexra_chatgpt_v2): {e}")
@classmethod
async def _create_async_generator_nexra_gptweb(
cls,
model: str,
messages: Messages,
proxy: str = None,
markdown: bool = False,
**kwargs
) -> AsyncResult:
model = cls.get_model(model)
headers = {
"Content-Type": "application/json"
}
prompt = format_prompt(messages)
data = {
"prompt": prompt,
"markdown": markdown,
}
loop = asyncio.get_event_loop()
try:
response = await loop.run_in_executor(None, cls._sync_post_request, cls.api_endpoint_nexra_gptweb, data, headers, proxy)
for chunk in response.iter_content(1024):
if chunk:
decoded_chunk = chunk.decode().lstrip('_')
try:
response_json = json.loads(decoded_chunk)
if response_json.get("status"):
yield response_json.get("gpt", "")
except json.JSONDecodeError:
continue
except Exception as e:
print(f"Error during API request (nexra_gptweb): {e}")
@staticmethod
def _sync_post_request(url: str, data: Dict[str, Any], headers: Dict[str, str], proxy: str = None, stream: bool = False) -> requests.Response:
proxies = {
"http": proxy,
"https": proxy,
} if proxy else None
try:
response = requests.post(url, json=data, headers=headers, proxies=proxies, stream=stream)
response.raise_for_status()
return response
except requests.RequestException as e:
print(f"Request failed: {e}")
raise
@staticmethod
def _process_non_streaming_response(response: requests.Response) -> str:
if response.status_code == 200:
try:
            content = response.text.lstrip('_')
            data = json.loads(content)
            return data.get('message', '')
except json.JSONDecodeError:
return "Error: Unable to decode JSON response"
else:
return f"Error: {response.status_code}"
@staticmethod
async def _process_streaming_response(response: requests.Response):
full_message = ""
for line in response.iter_lines(decode_unicode=True):
if line:
try:
                line = line.lstrip('_')
data = json.loads(line)
if data.get('finish'):
break
message = data.get('message', '')
if message:
yield message[len(full_message):]
full_message = message
except json.JSONDecodeError:
pass
@staticmethod
def _filter_response(response: requests.Response) -> str:
response_json = response.json()
return response_json.get("gpt", "")
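Since the provider is now an async generator, direct invocation would look roughly like this sketch (assuming `NexraChatGPT` is re-exported from `g4f.Provider` like the other Nexra providers; model and prompt are illustrative):

```python
import asyncio
from g4f.Provider import NexraChatGPT

async def main():
    # "gpt-4o" routes to the nexra_chatgpt4o branch above
    async for chunk in NexraChatGPT.create_async_generator(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello"}],
        stream=False,
    ):
        print(chunk, end="")

asyncio.run(main())
```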

View File

@ -1,86 +0,0 @@
from __future__ import annotations
import json
import requests
from ...typing import CreateResult, Messages
from ..base_provider import ProviderModelMixin, AbstractProvider
from ..helper import format_prompt
class NexraChatGPT4o(AbstractProvider, ProviderModelMixin):
label = "Nexra ChatGPT4o"
url = "https://nexra.aryahcr.cc/documentation/chatgpt/en"
api_endpoint = "https://nexra.aryahcr.cc/api/chat/complements"
working = True
supports_stream = True
default_model = "gpt-4o"
models = [default_model]
@classmethod
def get_model(cls, model: str) -> str:
return cls.default_model
@classmethod
def create_completion(
cls,
model: str,
messages: Messages,
stream: bool,
proxy: str = None,
markdown: bool = False,
**kwargs
) -> CreateResult:
model = cls.get_model(model)
headers = {
'Content-Type': 'application/json'
}
data = {
"messages": [
{
"role": "user",
"content": format_prompt(messages)
}
],
"stream": stream,
"markdown": markdown,
"model": model
}
response = requests.post(cls.api_endpoint, headers=headers, json=data, stream=stream)
if stream:
return cls.process_streaming_response(response)
else:
return cls.process_non_streaming_response(response)
@classmethod
def process_non_streaming_response(cls, response):
if response.status_code == 200:
try:
content = response.text.lstrip('')
data = json.loads(content)
return data.get('message', '')
except json.JSONDecodeError:
return "Error: Unable to decode JSON response"
else:
return f"Error: {response.status_code}"
@classmethod
def process_streaming_response(cls, response):
full_message = ""
for line in response.iter_lines(decode_unicode=True):
if line:
try:
line = line.lstrip('')
data = json.loads(line)
if data.get('finish'):
break
message = data.get('message', '')
if message and message != full_message:
yield message[len(full_message):]
full_message = message
except json.JSONDecodeError:
pass

View File

@ -1,92 +0,0 @@
from __future__ import annotations
import json
import requests
from ...typing import CreateResult, Messages
from ..base_provider import ProviderModelMixin, AbstractProvider
from ..helper import format_prompt
class NexraChatGptV2(AbstractProvider, ProviderModelMixin):
label = "Nexra ChatGPT v2"
url = "https://nexra.aryahcr.cc/documentation/chatgpt/en"
api_endpoint = "https://nexra.aryahcr.cc/api/chat/complements"
working = True
supports_stream = True
default_model = 'chatgpt'
models = [default_model]
model_aliases = {"gpt-4": "chatgpt"}
@classmethod
def get_model(cls, model: str) -> str:
if model in cls.models:
return model
elif model in cls.model_aliases:
return cls.model_aliases[model]
else:
return cls.default_model
@classmethod
def create_completion(
cls,
model: str,
messages: Messages,
stream: bool,
proxy: str = None,
markdown: bool = False,
**kwargs
) -> CreateResult:
model = cls.get_model(model)
headers = {
'Content-Type': 'application/json'
}
data = {
"messages": [
{
"role": "user",
"content": format_prompt(messages)
}
],
"stream": stream,
"markdown": markdown,
"model": model
}
response = requests.post(cls.api_endpoint, headers=headers, json=data, stream=stream)
if stream:
return cls.process_streaming_response(response)
else:
return cls.process_non_streaming_response(response)
@classmethod
def process_non_streaming_response(cls, response):
if response.status_code == 200:
try:
content = response.text.lstrip('')
data = json.loads(content)
return data.get('message', '')
except json.JSONDecodeError:
return "Error: Unable to decode JSON response"
else:
return f"Error: {response.status_code}"
@classmethod
def process_streaming_response(cls, response):
full_message = ""
for line in response.iter_lines(decode_unicode=True):
if line:
try:
line = line.lstrip('')
data = json.loads(line)
if data.get('finish'):
break
message = data.get('message', '')
if message:
yield message[len(full_message):]
full_message = message
except json.JSONDecodeError:
pass

View File

@ -1,64 +0,0 @@
from __future__ import annotations
import json
import requests
from ...typing import CreateResult, Messages
from ..base_provider import ProviderModelMixin, AbstractProvider
from ..helper import format_prompt
class NexraChatGptWeb(AbstractProvider, ProviderModelMixin):
label = "Nexra ChatGPT Web"
url = "https://nexra.aryahcr.cc/documentation/chatgpt/en"
working = True
default_model = "gptweb"
models = [default_model]
model_aliases = {"gpt-4": "gptweb"}
api_endpoints = {"gptweb": "https://nexra.aryahcr.cc/api/chat/gptweb"}
@classmethod
def get_model(cls, model: str) -> str:
if model in cls.models:
return model
elif model in cls.model_aliases:
return cls.model_aliases[model]
else:
return cls.default_model
@classmethod
def create_completion(
cls,
model: str,
messages: Messages,
proxy: str = None,
markdown: bool = False,
**kwargs
) -> CreateResult:
model = cls.get_model(model)
api_endpoint = cls.api_endpoints.get(model, cls.api_endpoints[cls.default_model])
headers = {
'Content-Type': 'application/json'
}
data = {
"prompt": format_prompt(messages),
"markdown": markdown
}
response = requests.post(api_endpoint, headers=headers, json=data)
return cls.process_response(response)
@classmethod
def process_response(cls, response):
if response.status_code == 200:
try:
content = response.text.lstrip('_')
json_response = json.loads(content)
return json_response.get('gpt', '')
except json.JSONDecodeError:
return "Error: Unable to decode JSON response"
else:
return f"Error: {response.status_code}"

View File

@ -1,9 +1,6 @@
from .NexraBing import NexraBing
from .NexraBlackbox import NexraBlackbox
from .NexraChatGPT import NexraChatGPT
from .NexraChatGPT4o import NexraChatGPT4o
from .NexraChatGptV2 import NexraChatGptV2
from .NexraChatGptWeb import NexraChatGptWeb
from .NexraDallE import NexraDallE
from .NexraDallE2 import NexraDallE2
from .NexraEmi import NexraEmi

View File

@ -14,17 +14,18 @@ from starlette.status import HTTP_422_UNPROCESSABLE_ENTITY, HTTP_401_UNAUTHORIZE
from fastapi.encoders import jsonable_encoder
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
from typing import Union, Optional
from typing import Union, Optional, Iterator
import g4f
import g4f.debug
from g4f.client import Client
from g4f.client import Client, ChatCompletion, ChatCompletionChunk, ImagesResponse
from g4f.typing import Messages
from g4f.cookies import read_cookie_files
def create_app():
def create_app(g4f_api_key: str = None):
app = FastAPI()
api = Api(app)
# Add CORS middleware
app.add_middleware(
CORSMiddleware,
allow_origin_regex=".*",
@ -32,18 +33,19 @@ def create_app():
allow_methods=["*"],
allow_headers=["*"],
)
api = Api(app, g4f_api_key=g4f_api_key)
api.register_routes()
api.register_authorization()
api.register_validation_exception_handler()
# Read cookie files if not ignored
if not AppConfig.ignore_cookie_files:
read_cookie_files()
return app
def create_app_debug():
g4f.debug.logging = True
return create_app()
class ChatCompletionsForm(BaseModel):
class ChatCompletionsConfig(BaseModel):
messages: Messages
model: str
provider: Optional[str] = None
@ -55,15 +57,12 @@ class ChatCompletionsForm(BaseModel):
web_search: Optional[bool] = None
proxy: Optional[str] = None
class ImagesGenerateForm(BaseModel):
model: Optional[str] = None
provider: Optional[str] = None
class ImageGenerationConfig(BaseModel):
prompt: str
response_format: Optional[str] = None
api_key: Optional[str] = None
proxy: Optional[str] = None
model: Optional[str] = None
response_format: str = "url"
class AppConfig():
class AppConfig:
ignored_providers: Optional[list[str]] = None
g4f_api_key: Optional[str] = None
ignore_cookie_files: bool = False
@ -74,16 +73,23 @@ class AppConfig():
for key, value in data.items():
setattr(cls, key, value)
list_ignored_providers: list[str] = None
def set_list_ignored_providers(ignored: list[str]):
global list_ignored_providers
list_ignored_providers = ignored
class Api:
def __init__(self, app: FastAPI) -> None:
def __init__(self, app: FastAPI, g4f_api_key=None) -> None:
self.app = app
self.client = Client()
self.g4f_api_key = g4f_api_key
self.get_g4f_api_key = APIKeyHeader(name="g4f-api-key")
def register_authorization(self):
@self.app.middleware("http")
async def authorization(request: Request, call_next):
if AppConfig.g4f_api_key and request.url.path in ["/v1/chat/completions", "/v1/completions"]:
if self.g4f_api_key and request.url.path in ["/v1/chat/completions", "/v1/completions", "/v1/images/generate"]:
try:
user_g4f_api_key = await self.get_g4f_api_key(request)
except HTTPException as e:
@ -92,22 +98,26 @@ class Api:
status_code=HTTP_401_UNAUTHORIZED,
content=jsonable_encoder({"detail": "G4F API key required"}),
)
if not secrets.compare_digest(AppConfig.g4f_api_key, user_g4f_api_key):
if not secrets.compare_digest(self.g4f_api_key, user_g4f_api_key):
return JSONResponse(
status_code=HTTP_403_FORBIDDEN,
content=jsonable_encoder({"detail": "Invalid G4F API key"}),
)
return await call_next(request)
response = await call_next(request)
return response
def register_validation_exception_handler(self):
@self.app.exception_handler(RequestValidationError)
async def validation_exception_handler(request: Request, exc: RequestValidationError):
details = exc.errors()
modified_details = [{
"loc": error["loc"],
"message": error["msg"],
"type": error["type"],
} for error in details]
modified_details = []
for error in details:
modified_details.append({
"loc": error["loc"],
"message": error["msg"],
"type": error["type"],
})
return JSONResponse(
status_code=HTTP_422_UNPROCESSABLE_ENTITY,
content=jsonable_encoder({"detail": modified_details}),
@ -121,25 +131,23 @@ class Api:
@self.app.get("/v1")
async def read_root_v1():
return HTMLResponse('g4f API: Go to '
'<a href="/v1/chat/completions">chat/completions</a> '
'or <a href="/v1/models">models</a>.')
'<a href="/v1/chat/completions">chat/completions</a>, '
'<a href="/v1/models">models</a>, or '
'<a href="/v1/images/generate">images/generate</a>.')
@self.app.get("/v1/models")
async def models():
model_list = {
model: g4f.models.ModelUtils.convert[model]
model_list = dict(
(model, g4f.models.ModelUtils.convert[model])
for model in g4f.Model.__all__()
}
)
model_list = [{
'id': model_id,
'object': 'model',
'created': 0,
'owned_by': model.base_provider
} for model_id, model in model_list.items()]
return JSONResponse({
"object": "list",
"data": model_list,
})
return JSONResponse(model_list)
@self.app.get("/v1/models/{model_name}")
async def model_info(model_name: str):
@ -155,7 +163,7 @@ class Api:
return JSONResponse({"error": "The model does not exist."})
@self.app.post("/v1/chat/completions")
async def chat_completions(config: ChatCompletionsForm, request: Request = None, provider: str = None):
async def chat_completions(config: ChatCompletionsConfig, request: Request = None, provider: str = None):
try:
config.provider = provider if config.provider is None else config.provider
if config.api_key is None and request is not None:
@ -164,17 +172,27 @@ class Api:
auth_header = auth_header.split(None, 1)[-1]
if auth_header and auth_header != "Bearer":
config.api_key = auth_header
# Use the asynchronous create method and await it
response = await self.client.chat.completions.async_create(
# Create the completion response
response = self.client.chat.completions.create(
**{
**AppConfig.defaults,
**config.dict(exclude_none=True),
},
ignored=AppConfig.ignored_providers
)
if not config.stream:
# Check if the response is synchronous or asynchronous
if isinstance(response, ChatCompletion):
# Synchronous response
return JSONResponse(response.to_json())
if not config.stream:
# If the response is an iterator but not streaming, collect the result
response_list = list(response) if isinstance(response, Iterator) else [response]
return JSONResponse(response_list[0].to_json())
# Streaming response
async def streaming():
try:
async for chunk in response:
@ -185,41 +203,38 @@ class Api:
logging.exception(e)
yield f'data: {format_exception(e, config)}\n\n'
yield "data: [DONE]\n\n"
return StreamingResponse(streaming(), media_type="text/event-stream")
except Exception as e:
logging.exception(e)
return Response(content=format_exception(e, config), status_code=500, media_type="application/json")
@self.app.post("/v1/images/generate")
async def generate_image(config: ImageGenerationConfig):
try:
response: ImagesResponse = await self.client.images.async_generate(
prompt=config.prompt,
model=config.model,
response_format=config.response_format
)
# Convert Image objects to dictionaries
response_data = [image.to_dict() for image in response.data]
return JSONResponse({"data": response_data})
except Exception as e:
logging.exception(e)
return Response(content=format_exception(e, config), status_code=500, media_type="application/json")
@self.app.post("/v1/completions")
async def completions():
return Response(content=json.dumps({'info': 'Not working yet.'}, indent=4), media_type="application/json")
@self.app.post("/v1/images/generations")
async def images_generate(config: ImagesGenerateForm, request: Request = None, provider: str = None):
try:
config.provider = provider if config.provider is None else config.provider
if config.api_key is None and request is not None:
auth_header = request.headers.get("Authorization")
if auth_header is not None:
auth_header = auth_header.split(None, 1)[-1]
if auth_header and auth_header != "Bearer":
config.api_key = auth_header
# Use the asynchronous generate method and await it
response = await self.client.images.async_generate(
**config.dict(exclude_none=True),
)
return JSONResponse(response.to_json())
except Exception as e:
logging.exception(e)
return Response(content=format_exception(e, config), status_code=500, media_type="application/json")
def format_exception(e: Exception, config: ChatCompletionsForm) -> str:
def format_exception(e: Exception, config: Union[ChatCompletionsConfig, ImageGenerationConfig]) -> str:
last_provider = g4f.get_last_provider(True)
return json.dumps({
"error": {"message": f"{e.__class__.__name__}: {e}"},
"model": last_provider.get("model") if last_provider else config.model,
"provider": last_provider.get("name") if last_provider else config.provider
"model": last_provider.get("model") if last_provider else getattr(config, 'model', None),
"provider": last_provider.get("name") if last_provider else getattr(config, 'provider', None)
})
def run_api(
@ -228,18 +243,22 @@ def run_api(
bind: str = None,
debug: bool = False,
workers: int = None,
use_colors: bool = None
use_colors: bool = None,
g4f_api_key: str = None
) -> None:
print(f'Starting server... [g4f v-{g4f.version.utils.current_version}]' + (" (debug)" if debug else ""))
if use_colors is None:
use_colors = debug
if bind is not None:
host, port = bind.split(":")
if debug:
g4f.debug.logging = True
uvicorn.run(
f"g4f.api:create_app{'_debug' if debug else ''}",
host=host, port=int(port),
workers=workers,
use_colors=use_colors,
factory=True,
"g4f.api:create_app",
host=host,
port=int(port),
workers=workers,
use_colors=use_colors,
factory=True,
reload=debug
)
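Taken together, the new `g4f_api_key` parameter enables a protected deployment along these lines (a sketch with an illustrative key, assuming the key is forwarded to `create_app`; per the middleware above, clients authenticate via the `g4f-api-key` header):

```python
from g4f.api import run_api

# Server side: protect /v1/chat/completions, /v1/completions
# and /v1/images/generate with a shared key.
run_api(bind="0.0.0.0:1337", g4f_api_key="my-secret-key")
```

A matching client call, from a separate process:

```python
import requests

resp = requests.post(
    "http://localhost:1337/v1/chat/completions",
    headers={"g4f-api-key": "my-secret-key"},  # must match the server key
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(resp.json())
```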

View File

@ -531,3 +531,4 @@ class Images:
async def create_variation(self, image: Union[str, bytes], model: str = None, response_format: str = "url", **kwargs):
# Existing implementation, adjust if you want to support b64_json here as well
pass

View File

@ -224,28 +224,35 @@
</div>
</div>
<div class="buttons">
<div class="field">
<select name="model" id="model">
<option value="">Model: Default</option>
<option value="gpt-4">gpt-4</option>
<option value="gpt-3.5-turbo">gpt-3.5-turbo</option>
<option value="llama-3-70b-chat">llama-3-70b-chat</option>
<option value="llama-3.1-70b">llama-3.1-70b</option>
<option value="gemini-pro">gemini-pro</option>
<option value="">----</option>
</select>
<select name="model2" id="model2" class="hidden"></select>
</div>
<div class="field">
<select name="provider" id="provider">
<option value="">Provider: Auto</option>
<option value="Bing">Bing</option>
<option value="OpenaiChat">OpenAI ChatGPT</option>
<option value="Gemini">Gemini</option>
<option value="Liaobots">Liaobots</option>
<option value="MetaAI">Meta AI</option>
<option value="You">You</option>
<option value="">----</option>
<div class="field">
<select name="model" id="model">
<option value="">Model: Default</option>
<option value="gpt-4">gpt-4</option>
<option value="gpt-4o">gpt-4o</option>
<option value="gpt-4o-mini">gpt-4o-mini</option>
<option value="llama-3.1-70b">llama-3.1-70b</option>
<option value="llama-3.1-70b">llama-3.1-405b</option>
<option value="llama-3.1-70b">mixtral-8x7b</option>
<option value="gemini-pro">gemini-pro</option>
<option value="gemini-flash">gemini-flash</option>
<option value="claude-3-haiku">claude-3-haiku</option>
<option value="claude-3.5-sonnet">claude-3.5-sonnet</option>
<option value="">----</option>
</select>
<select name="model2" id="model2" class="hidden"></select>
</div>
<div class="field">
<select name="provider" id="provider">
<option value="">Provider: Auto</option>
<option value="OpenaiChat">OpenAI ChatGPT</option>
<option value="Gemini">Gemini</option>
<option value="MetaAI">Meta AI</option>
<option value="DeepInfraChat">DeepInfraChat</option>
<option value="Blackbox">Blackbox</option>
<option value="HuggingChat">HuggingChat</option>
<option value="DDG">DDG</option>
<option value="Pizzagpt">Pizzagpt</option>
<option value="">----</option>
</select>
</div>
</div>

View File

@ -338,6 +338,14 @@ const prepare_messages = (messages, message_index = -1) => {
messages = messages.filter((_, index) => message_index >= index);
}
let new_messages = [];
if (systemPrompt?.value) {
new_messages.push({
"role": "system",
"content": systemPrompt.value
});
}
// Remove history, if it's selected
if (document.getElementById('history')?.checked) {
if (message_index == null) {
@ -347,13 +355,6 @@ const prepare_messages = (messages, message_index = -1) => {
}
}
let new_messages = [];
if (systemPrompt?.value) {
new_messages.push({
"role": "system",
"content": systemPrompt.value
});
}
messages.forEach((new_message) => {
// Include only messages that were not regenerated
if (new_message && !new_message.regenerate) {
@ -366,6 +367,7 @@ const prepare_messages = (messages, message_index = -1) => {
return new_messages;
}
async function add_message_chunk(message) {
if (message.type == "conversation") {
console.info("Conversation used:", message.conversation)

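The `chat.v1.js` hunk moves the system-prompt push ahead of the history check, so the system message survives even when history is disabled. A rough Python re-expression of the reordered logic, with hypothetical inputs:

```python
def prepare_messages(messages, system_prompt=None, keep_history=True):
    # System message is appended first, before any history trimming,
    # matching the reordered JS above.
    new_messages = []
    if system_prompt:
        new_messages.append({"role": "system", "content": system_prompt})
    if not keep_history:
        messages = messages[-1:]  # keep only the most recent message
    for message in messages:
        # Include only messages that were not regenerated, as in the JS loop.
        if message and not message.get("regenerate"):
            new_messages.append(message)
    return new_messages

print(prepare_messages(
    [{"role": "user", "content": "hi"}],
    system_prompt="Be concise.",
    keep_history=False,
))
# [{'role': 'system', 'content': 'Be concise.'}, {'role': 'user', 'content': 'hi'}]
```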
View File

@ -2,12 +2,11 @@ from __future__ import annotations
import logging
import os
import os.path
import uuid
import asyncio
import time
from aiohttp import ClientSession
from typing import Iterator, Optional
from typing import Iterator, Optional, AsyncIterator, Union
from flask import send_from_directory
from g4f import version, models
@ -20,21 +19,20 @@ from g4f.Provider import ProviderType, __providers__, __map__
from g4f.providers.base_provider import ProviderModelMixin, FinishReason
from g4f.providers.conversation import BaseConversation
conversations: dict[str, dict[str, BaseConversation]] = {}
# Define the directory for generated images
images_dir = "./generated_images"
# Function to ensure the images directory exists
def ensure_images_dir():
if not os.path.exists(images_dir):
os.makedirs(images_dir)
conversations: dict[str, dict[str, BaseConversation]] = {}
class Api:
@staticmethod
def get_models() -> list[str]:
"""
Return a list of all models.
Fetches and returns a list of all available models in the system.
Returns:
List[str]: A list of model names.
"""
return models._all_models
@staticmethod
@ -82,9 +80,6 @@ class Api:
@staticmethod
def get_providers() -> list[str]:
"""
Return a list of all working providers.
"""
return {
provider.__name__: (
provider.label if hasattr(provider, "label") else provider.__name__
@ -99,12 +94,6 @@ class Api:
@staticmethod
def get_version():
"""
Returns the current and latest version of the application.
Returns:
dict: A dictionary containing the current and latest version.
"""
try:
current_version = version.utils.current_version
except VersionNotFoundError:
@ -115,18 +104,10 @@ class Api:
}
def serve_images(self, name):
ensure_images_dir()
return send_from_directory(os.path.abspath(images_dir), name)
def _prepare_conversation_kwargs(self, json_data: dict, kwargs: dict):
"""
Prepares arguments for chat completion based on the request data.
Reads the request and prepares the necessary arguments for handling
a chat completion request.
Returns:
dict: Arguments prepared for chat completion.
"""
model = json_data.get('model') or models.default
provider = json_data.get('provider')
messages = json_data['messages']
@ -134,7 +115,7 @@ class Api:
if api_key is not None:
kwargs["api_key"] = api_key
if json_data.get('web_search'):
if provider in ("Bing", "HuggingChat"):
if provider:
kwargs['web_search'] = True
else:
from .internet import get_search_message
@ -159,13 +140,11 @@ class Api:
result = ChatCompletion.create(**kwargs)
first = True
if isinstance(result, ImageResponse):
# If the result is an ImageResponse, handle it as a single item
if first:
first = False
yield self._format_json("provider", get_last_provider(True))
yield self._format_json("content", str(result))
else:
# If the result is iterable, handle it as before
for chunk in result:
if first:
first = False
@ -181,7 +160,6 @@ class Api:
elif isinstance(chunk, ImagePreview):
yield self._format_json("preview", chunk.to_string())
elif isinstance(chunk, ImageResponse):
# Handle ImageResponse
images = asyncio.run(self._copy_images(chunk.get_list(), chunk.options.get("cookies")))
yield self._format_json("content", str(ImageResponse(images, chunk.alt)))
elif not isinstance(chunk, FinishReason):
@ -190,8 +168,8 @@ class Api:
logging.exception(e)
yield self._format_json('error', get_error_message(e))
# Add this method to the Api class
async def _copy_images(self, images: list[str], cookies: Optional[Cookies] = None):
ensure_images_dir()
async with ClientSession(
connector=get_connector(None, os.environ.get("G4F_PROXY")),
cookies=cookies
@ -212,16 +190,6 @@ class Api:
return await asyncio.gather(*[copy_image(image) for image in images])
def _format_json(self, response_type: str, content):
"""
Formats and returns a JSON response.
Args:
response_type (str): The type of the response.
content: The content to be included in the response.
Returns:
str: A JSON formatted string.
"""
return {
'type': response_type,
response_type: content
@ -229,15 +197,6 @@ class Api:
def get_error_message(exception: Exception) -> str:
"""
Generates a formatted error message from an exception.
Args:
exception (Exception): The exception to format.
Returns:
str: A formatted error message string.
"""
message = f"{type(exception).__name__}: {exception}"
provider = get_last_provider()
if provider is None:

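`_copy_images` now calls `ensure_images_dir()` before fanning out one download task per URL with `asyncio.gather`. A compact sketch of the same pattern without the g4f internals; the file-naming scheme is an assumption:

```python
import asyncio
import os
import uuid

from aiohttp import ClientSession  # matches the import used in the diff

images_dir = "./generated_images"  # same directory the diff defines

async def copy_images(urls):
    # Same shape as Api._copy_images: make sure the target directory exists,
    # then download every URL concurrently and return the local file paths.
    os.makedirs(images_dir, exist_ok=True)
    async with ClientSession() as session:
        async def copy_one(url):
            target = os.path.join(images_dir, f"{uuid.uuid4()}.jpeg")  # assumed naming
            async with session.get(url) as response:
                with open(target, "wb") as f:
                    f.write(await response.read())
            return target
        return await asyncio.gather(*[copy_one(url) for url in urls])

# asyncio.run(copy_images(["https://example.com/image.jpeg"]))  # usage sketch
```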
View File

@ -27,6 +27,10 @@ class Website:
'function': redirect_home,
'methods': ['GET', 'POST']
},
'/images/': {
'function': redirect_home,
'methods': ['GET', 'POST']
},
}
def _chat(self, conversation_id):
@ -35,4 +39,4 @@ class Website:
return render_template('index.html', chat_id=conversation_id)
def _index(self):
return render_template('index.html', chat_id=str(uuid.uuid4()))
return render_template('index.html', chat_id=str(uuid.uuid4()))

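The new `/images/` entry extends the same route table the `Website` class already uses. A minimal sketch of wiring such a dict into Flask, with hypothetical names and home target:

```python
from flask import Flask, redirect

app = Flask(__name__)

def redirect_home():
    return redirect("/chat/")  # assumed home target

routes = {
    "/images/": {"function": redirect_home, "methods": ["GET", "POST"]},
}

for path, spec in routes.items():
    # Endpoint names must be unique per rule; reuse the path as the name.
    app.add_url_rule(path, endpoint=path, view_func=spec["function"],
                     methods=spec["methods"])
```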
View File

@ -23,7 +23,6 @@ from .Provider import (
DDG,
DeepInfra,
DeepInfraChat,
DeepInfraImage,
Editee,
Free2GPT,
FreeChatgpt,
@ -31,6 +30,7 @@ from .Provider import (
FreeNetfly,
Gemini,
GeminiPro,
GizAI,
GigaChat,
GPROChat,
HuggingChat,
@ -42,9 +42,6 @@ from .Provider import (
NexraBing,
NexraBlackbox,
NexraChatGPT,
NexraChatGPT4o,
NexraChatGptV2,
NexraChatGptWeb,
NexraDallE,
NexraDallE2,
NexraEmi,
@ -87,6 +84,8 @@ class Model:
"""Returns a list of all model names."""
return _all_models
### Default ###
default = Model(
name = "",
base_provider = "",
@ -113,6 +112,8 @@ default = Model(
])
)
############
### Text ###
############
@ -136,13 +137,13 @@ gpt_35_turbo = Model(
gpt_4o = Model(
name = 'gpt-4o',
base_provider = 'OpenAI',
best_provider = IterListProvider([NexraChatGPT4o, Blackbox, ChatGptEs, AmigoChat, DarkAI, Editee, Liaobots, Airforce, OpenaiChat])
best_provider = IterListProvider([NexraChatGPT, Blackbox, ChatGptEs, AmigoChat, DarkAI, Editee, GizAI, Airforce, Liaobots, OpenaiChat])
)
gpt_4o_mini = Model(
name = 'gpt-4o-mini',
base_provider = 'OpenAI',
best_provider = IterListProvider([DDG, ChatGptEs, FreeNetfly, Pizzagpt, MagickPen, AmigoChat, RubiksAI, Liaobots, Airforce, ChatgptFree, Koala, OpenaiChat, ChatGpt])
best_provider = IterListProvider([DDG, ChatGptEs, FreeNetfly, Pizzagpt, MagickPen, AmigoChat, RubiksAI, Liaobots, Airforce, GizAI, ChatgptFree, Koala, OpenaiChat, ChatGpt])
)
gpt_4_turbo = Model(
@ -154,7 +155,7 @@ gpt_4_turbo = Model(
gpt_4 = Model(
name = 'gpt-4',
base_provider = 'OpenAI',
best_provider = IterListProvider([Chatgpt4Online, Ai4Chat, NexraBing, NexraChatGPT, NexraChatGptV2, NexraChatGptWeb, Airforce, Bing, OpenaiChat, gpt_4_turbo.best_provider, gpt_4o.best_provider, gpt_4o_mini.best_provider])
best_provider = IterListProvider([Chatgpt4Online, Ai4Chat, NexraBing, NexraChatGPT, Airforce, Bing, OpenaiChat, gpt_4_turbo.best_provider, gpt_4o.best_provider, gpt_4o_mini.best_provider])
)
# o1
@ -167,7 +168,7 @@ o1 = Model(
o1_mini = Model(
name = 'o1-mini',
base_provider = 'OpenAI',
best_provider = AmigoChat
best_provider = IterListProvider([AmigoChat, GizAI])
)
@ -216,13 +217,13 @@ llama_3_70b = Model(
llama_3_1_8b = Model(
name = "llama-3.1-8b",
base_provider = "Meta Llama",
best_provider = IterListProvider([Blackbox, DeepInfraChat, ChatHub, Cloudflare, Airforce, PerplexityLabs])
best_provider = IterListProvider([Blackbox, DeepInfraChat, ChatHub, Cloudflare, Airforce, GizAI, PerplexityLabs])
)
llama_3_1_70b = Model(
name = "llama-3.1-70b",
base_provider = "Meta Llama",
best_provider = IterListProvider([DDG, HuggingChat, Blackbox, FreeGpt, TeachAnything, Free2GPT, DeepInfraChat, DarkAI, Airforce, AiMathGPT, RubiksAI, HuggingFace, PerplexityLabs])
best_provider = IterListProvider([DDG, HuggingChat, Blackbox, FreeGpt, TeachAnything, Free2GPT, DeepInfraChat, DarkAI, Airforce, AiMathGPT, RubiksAI, GizAI, HuggingFace, PerplexityLabs])
)
llama_3_1_405b = Model(
@ -299,7 +300,7 @@ mistral_nemo = Model(
mistral_large = Model(
name = "mistral-large",
base_provider = "Mistral",
best_provider = Editee
best_provider = IterListProvider([Editee, GizAI])
)
@ -347,13 +348,13 @@ phi_3_5_mini = Model(
gemini_pro = Model(
name = 'gemini-pro',
base_provider = 'Google DeepMind',
best_provider = IterListProvider([GeminiPro, Blackbox, AIChatFree, GPROChat, NexraGeminiPro, AmigoChat, Editee, Liaobots, Airforce])
best_provider = IterListProvider([GeminiPro, Blackbox, AIChatFree, GPROChat, NexraGeminiPro, AmigoChat, Editee, GizAI, Airforce, Liaobots])
)
gemini_flash = Model(
name = 'gemini-flash',
base_provider = 'Google DeepMind',
best_provider = IterListProvider([Blackbox, Liaobots, Airforce])
best_provider = IterListProvider([Blackbox, GizAI, Airforce, Liaobots])
)
gemini = Model(
@ -424,14 +425,14 @@ claude_3_sonnet = Model(
claude_3_haiku = Model(
name = 'claude-3-haiku',
base_provider = 'Anthropic',
best_provider = IterListProvider([DDG, Airforce, Liaobots])
best_provider = IterListProvider([DDG, Airforce, GizAI, Liaobots])
)
# claude 3.5
claude_3_5_sonnet = Model(
name = 'claude-3.5-sonnet',
base_provider = 'Anthropic',
best_provider = IterListProvider([Blackbox, Editee, AmigoChat, Airforce, Liaobots])
best_provider = IterListProvider([Blackbox, Editee, AmigoChat, Airforce, GizAI, Liaobots])
)
@ -753,14 +754,14 @@ sdxl_lora = Model(
sdxl = Model(
name = 'sdxl',
base_provider = 'Stability AI',
best_provider = IterListProvider([ReplicateHome, DeepInfraImage])
best_provider = IterListProvider([ReplicateHome])
)
sd_1_5 = Model(
name = 'sd-1.5',
base_provider = 'Stability AI',
best_provider = NexraSD15
best_provider = IterListProvider([NexraSD15, GizAI])
)
@ -771,6 +772,13 @@ sd_3 = Model(
)
sd_3_5 = Model(
name = 'sd-3.5',
base_provider = 'Stability AI',
best_provider = GizAI
)
### Playground ###
playground_v2_5 = Model(
name = 'playground-v2.5',
@ -791,7 +799,7 @@ flux = Model(
flux_pro = Model(
name = 'flux-pro',
base_provider = 'Flux AI',
best_provider = IterListProvider([AmigoChat, NexraFluxPro])
best_provider = IterListProvider([NexraFluxPro, AmigoChat])
)
@ -840,7 +848,7 @@ flux_4o = Model(
flux_schnell = Model(
name = 'flux-schnell',
base_provider = 'Flux AI',
best_provider = ReplicateHome
best_provider = IterListProvider([ReplicateHome, GizAI])
)
@ -1123,6 +1131,7 @@ class ModelUtils:
'sdxl-turbo': sdxl_turbo,
'sd-1.5': sd_1_5,
'sd-3': sd_3,
'sd-3.5': sd_3_5,
### Playground ###
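With `sd-3.5` registered in `ModelUtils.convert`, the model becomes addressable by name through the client API. A usage sketch, assuming the new GizAI provider is reachable:

```python
from g4f.client import Client

client = Client()
# 'sd-3.5' resolves through ModelUtils.convert to the Model whose
# best_provider is GizAI, per the registration above.
response = client.images.generate(model="sd-3.5", prompt="a lighthouse at dusk")
print(response.data[0].url)
```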