Mirror of https://github.com/xtekky/gpt4free.git (synced 2024-12-23 02:52:29 +03:00)

Merge pull request #2465 from hlohaus/neww
Use custom user data dir for each provider

This commit is contained in: commit 5969983d83

README.md (22 changed lines)
@@ -132,17 +132,27 @@ To ensure the seamless operation of our application, please follow the instructions

By following these steps, you should be able to successfully install and run the application on your Windows system. If you encounter any issues during the installation process, please refer to our Issue Tracker or try to get in contact over Discord for assistance.

Run the **Webview UI** on other Platforms:

---

- [/docs/webview](docs/webview.md)

### Learn More About the GUI

##### Use your smartphone:

For detailed instructions on how to set up, configure, and use the GPT4Free GUI, refer to the **GUI Documentation**:

Run the Web UI on Your Smartphone:

- [GUI Documentation](docs/gui.md)

- [/docs/guides/phone](docs/guides/phone.md)

This guide includes step-by-step details on provider selection, managing conversations, using advanced features like speech recognition, and more.

#### Use python

---

### Use Your Smartphone

Run the Web UI on your smartphone for easy access on the go. Check out the dedicated guide to learn how to set up and use the GUI on your mobile device:

- [Run on Smartphone Guide](docs/guides/phone.md)

---

### Use python

##### Prerequisites:
docs/gui.md (new file, 147 lines)

@@ -0,0 +1,147 @@
# G4F - GUI Documentation

## Overview

The G4F GUI is a self-contained, user-friendly interface designed for interacting with multiple AI models from various providers. It allows users to generate text, code, and images effortlessly. Advanced features such as speech recognition, file uploads, conversation backup/restore, and more are included. Both the backend and frontend are fully integrated into the GUI, making setup simple and seamless.

## Features

### 1. **Multiple Providers and Models**
- **Provider/Model Selection via Dropdown:** Use the select box to choose a specific **provider/model combination**.
- **Pinning Provider/Model Combinations:** After selecting a provider and model from the dropdown, click the **pin button** to add the combination to the pinned list.
- **Remove Pinned Combinations:** Each pinned provider/model combination is displayed as a button. Clicking the button removes it from the pinned list.
- **Send Requests to Multiple Providers:** You can pin multiple provider/model combinations and send requests to all of them simultaneously, enabling fast and comprehensive content generation.

### 2. **Text, Code, and Image Generation**
- **Text and Code Generation:** Enter prompts to generate text or code outputs.
- **Image Generation:** Provide text prompts to generate images, which are shown as thumbnails. Clicking a thumbnail opens the image in a lightbox view.

### 3. **Gallery Functionality**
- **Image Thumbnails:** Generated images appear as small thumbnails within the conversation.
- **Lightbox View:** Clicking a thumbnail opens the image in full size, along with the prompt used to generate it.
- **Automatic Image Download:** Enable automatic downloading of generated images through the settings.

### 4. **Conversation Management**
- **Message Reuse:** While messages can't be edited, you can copy and reuse them.
- **Message Deletion:** Conversations can be deleted for a cleaner workspace.
- **Conversation List:** The left sidebar displays a list of active and past conversations for easy navigation.
- **Change Conversation Title:** By clicking the three dots next to a conversation title, you can either delete the conversation or change its title.
- **Backup and Restore Conversations:** Back up and restore all conversations and messages as a JSON file (accessible via the settings).

### 5. **Speech Recognition and Synthesis**
- **Speech Input:** Use speech recognition to input prompts by speaking instead of typing.
- **Speech Output (Text-to-Speech):** The generated text can be read aloud using speech synthesis.
- **Custom Language Settings:** Configure the language used for speech recognition to match your preference.

### 6. **File Uploads**
- **Image Uploads:** Upload images that will be appended to your message and sent to the AI provider.
- **Text File Uploads:** Upload text files, and their contents will be added to the message to provide more detailed input to the AI.

### 7. **Web Access and Settings**
- **DuckDuckGo Web Access:** Enable web access through DuckDuckGo for privacy-focused browsing.
- **Theme Toggle:** Switch between **dark mode** and **light mode** in the settings.
- **Provider Visibility:** Hide unused providers in the settings using toggle buttons.
- **Log Access:** View application logs, including error messages and debug logs, through the settings.

### 8. **Authentication**
- **Basic Authentication:** Set a password for Basic Authentication using the `--g4f-api-key` argument when starting the web server.
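For example, using the startup command shown in the Setup section below, a password-protected server can be launched like this:

```bash
# Basic Authentication: clients must supply this key as the password
python -m g4f --port 8080 --g4f-api-key "my-secret-password"
```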
## Installation

You can install the G4F GUI either as a full stack or in a lightweight version:

1. **Full Stack Installation** (includes all packages, including browser support and drivers):
   ```bash
   pip install -U g4f[all]
   ```

2. **Slim Installation** (does not include browser drivers, suitable for headless environments):
   ```bash
   pip install -U g4f[slim]
   ```

- **Full Stack Installation:** This installs all necessary dependencies, including browser support for web-based interactions.
- **Slim Installation:** This version is lighter, with no browser support, ideal for environments where browser interactions are not required.
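Note: in shells like zsh, square brackets are glob characters, so the extras spec needs quoting:

```bash
pip install -U "g4f[all]"
```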
## Setup

### Setting the Environment Variable

It is **recommended** to set a `G4F_API_KEY` environment variable for authentication. You can do this as follows:

On **Linux/macOS**:
```bash
export G4F_API_KEY="your-api-key-here"
```

On **Windows** (Command Prompt):
```bash
set G4F_API_KEY="your-api-key-here"
```
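In **Windows PowerShell** (as opposed to Command Prompt, which the `set` example above targets), the equivalent is:

```powershell
$env:G4F_API_KEY = "your-api-key-here"
```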
### Start the GUI and Backend

Run the following command to start both the GUI and backend services based on the G4F client:

```bash
python -m g4f --debug --port 8080 --g4f-api-key $G4F_API_KEY
```

This starts the GUI at `http://localhost:8080` with all necessary backend components running seamlessly.
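Alternatively, the GUI can be launched from Python. This is a minimal sketch assuming the `run_gui` helper exported by `g4f.gui`; its exact signature may vary between g4f versions:

```python
from g4f.gui import run_gui

# Serve the GUI and backend on port 8080, mirroring the CLI command above
run_gui(host="0.0.0.0", port=8080, debug=True)
```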
### Access the GUI

Once the server is running, open your browser and navigate to:

```
http://localhost:8080/chat/
```
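You can also verify from the command line that the backend is up. The server exposes an OpenAI-compatible REST API alongside the GUI (the `/v1/models` route appears in the API code later in this commit), so listing the available models is a quick smoke test:

```bash
curl -H "Authorization: Bearer $G4F_API_KEY" http://localhost:8080/v1/models
```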
## Using the Interface

1. **Select and Manage Providers/Models:**
   - Use the **select box** to view the list of available providers and models.
   - Select a **provider/model combination** from the dropdown.
   - Click the **pin button** to add the combination to your pinned list.
   - To **unpin** a combination, click the corresponding button in the pinned list.

2. **Input a Prompt:**
   - Enter your prompt manually or use **speech recognition** to dictate it.
   - You can also upload **images** or **text files** to be included in the prompt.

3. **Generate Content:**
   - Click the "Generate" button to produce the content.
   - The AI will generate text, code, or images depending on the prompt.

4. **View and Interact with Results:**
   - **For Text/Code:** The generated content will appear in the conversation window.
   - **For Images:** Generated images will be shown as thumbnails. Click on them to view in full size.

5. **Backup and Restore Conversations:**
   - Back up all your conversations as a **JSON file** and restore them at any time via the settings.

6. **Manage Conversations:**
   - Delete or rename any conversation by clicking the three dots next to the conversation title.

### Gallery Functionality

- **Image Thumbnails:** All generated images are shown as thumbnails within the conversation window.
- **Lightbox View:** Clicking a thumbnail opens the image in a larger view along with the associated prompt.
- **Automatic Image Download:** Enable automatic downloading of generated images in the settings.

## Settings Configuration

1. **API Key:** Set your API key when starting the server by defining the `G4F_API_KEY` environment variable.
2. **Provider Visibility:** Hide unused providers through the settings.
3. **Theme:** Toggle between **dark mode** and **light mode**. Disabling dark mode switches to a white theme.
4. **DuckDuckGo Access:** Enable DuckDuckGo for privacy-focused web browsing.
5. **Speech Recognition Language:** Set your preferred language for speech recognition.
6. **Log Access:** View logs, including error and debug messages, from the settings menu.
7. **Automatic Image Download:** Enable this feature to automatically download generated images.

## Known Issues

- **Gallery Loading:** Large images may take time to load depending on system performance.
- **Speech Recognition Accuracy:** Accuracy may vary depending on microphone quality or speech clarity.
- **Provider Downtime:** Some AI providers may experience downtime or disruptions.

[Return to Home](/)
@@ -35,7 +35,7 @@ class TestBackendApi(unittest.TestCase):

     def test_get_providers(self):
         response = self.api.get_providers()
-        self.assertIsInstance(response, dict)
+        self.assertIsInstance(response, list)
         self.assertTrue(len(response) > 0)

     def test_search(self):
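(This test change mirrors the backend change at the end of this commit, where `Api.get_providers` now returns a list of provider dicts instead of a dict keyed by provider name.)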
@@ -29,13 +29,9 @@ from .. import debug

 class Conversation(BaseConversation):
     conversation_id: str
-    cookie_jar: CookieJar
-    access_token: str

-    def __init__(self, conversation_id: str, cookie_jar: CookieJar, access_token: str = None):
+    def __init__(self, conversation_id: str):
         self.conversation_id = conversation_id
-        self.cookie_jar = cookie_jar
-        self.access_token = access_token

 class Copilot(AbstractProvider, ProviderModelMixin):
     label = "Microsoft Copilot"
@@ -50,6 +46,9 @@ class Copilot(AbstractProvider, ProviderModelMixin):

     websocket_url = "wss://copilot.microsoft.com/c/api/chat?api-version=2"
     conversation_url = f"{url}/c/api/conversations"

+    _access_token: str = None
+    _cookies: CookieJar = None

     @classmethod
     def create_completion(
@@ -69,42 +68,43 @@ class Copilot(AbstractProvider, ProviderModelMixin):
             raise MissingRequirementsError('Install or update "curl_cffi" package | pip install -U curl_cffi')

         websocket_url = cls.websocket_url
-        access_token = None
         headers = None
-        cookies = conversation.cookie_jar if conversation is not None else None
         if cls.needs_auth or image is not None:
-            if conversation is None or conversation.access_token is None:
+            if cls._access_token is None:
                 try:
-                    access_token, cookies = readHAR(cls.url)
+                    cls._access_token, cls._cookies = readHAR(cls.url)
                 except NoValidHarFileError as h:
                     debug.log(f"Copilot: {h}")
                     try:
                         get_running_loop(check_nested=True)
-                        access_token, cookies = asyncio.run(get_access_token_and_cookies(cls.url, proxy))
+                        cls._access_token, cls._cookies = asyncio.run(get_access_token_and_cookies(cls.url, proxy))
                     except MissingRequirementsError:
                         raise h
-            else:
-                access_token = conversation.access_token
-            debug.log(f"Copilot: Access token: {access_token[:7]}...{access_token[-5:]}")
-            websocket_url = f"{websocket_url}&accessToken={quote(access_token)}"
-            headers = {"authorization": f"Bearer {access_token}"}
+            debug.log(f"Copilot: Access token: {cls._access_token[:7]}...{cls._access_token[-5:]}")
+            websocket_url = f"{websocket_url}&accessToken={quote(cls._access_token)}"
+            headers = {"authorization": f"Bearer {cls._access_token}"}

         with Session(
             timeout=timeout,
             proxy=proxy,
             impersonate="chrome",
             headers=headers,
-            cookies=cookies,
+            cookies=cls._cookies,
         ) as session:
+            if cls._access_token is not None:
+                cls._cookies = session.cookies.jar
             response = session.get("https://copilot.microsoft.com/c/api/user")
             raise_for_status(response)
-            debug.log(f"Copilot: User: {response.json().get('firstName', 'null')}")
+            user = response.json().get('firstName')
+            if user is None:
+                cls._access_token = None
+            debug.log(f"Copilot: User: {user or 'null'}")
             if conversation is None:
                 response = session.post(cls.conversation_url)
                 raise_for_status(response)
                 conversation_id = response.json().get("id")
                 if return_conversation:
-                    yield Conversation(conversation_id, session.cookies.jar, access_token)
+                    yield Conversation(conversation_id)
                 prompt = format_prompt(messages)
                 debug.log(f"Copilot: Created conversation: {conversation_id}")
             else:
@@ -162,7 +162,7 @@ class Copilot(AbstractProvider, ProviderModelMixin):
         raise RuntimeError(f"Invalid response: {last_msg}")

 async def get_access_token_and_cookies(url: str, proxy: str = None, target: str = "ChatAI",):
-    browser = await get_nodriver(proxy=proxy)
+    browser = await get_nodriver(proxy=proxy, user_data_dir="copilot")
     page = await browser.get(url)
     access_token = None
     while access_token is None:
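The recurring change in this commit is the new `user_data_dir` argument passed to `get_nodriver` ("copilot", "gemini", "designer", "chatgpt"), giving each provider its own persistent browser profile so logins no longer collide. The helper itself is not part of this diff; the following is only a rough sketch of what such a wrapper might look like — the directory layout and the exact `nodriver` invocation are assumptions, not the actual g4f implementation:

```python
from pathlib import Path
import nodriver

async def get_nodriver(proxy: str = None, user_data_dir: str = "nodriver", **kwargs):
    # Keep one Chrome profile per provider under the user's config dir,
    # e.g. ~/.config/g4f-copilot, so cookies and logins persist independently.
    profile_dir = Path.home() / ".config" / f"g4f-{user_data_dir}"
    profile_dir.mkdir(parents=True, exist_ok=True)
    browser_args = [f"--proxy-server={proxy}"] if proxy else None
    return await nodriver.start(
        user_data_dir=str(profile_dir),
        browser_args=browser_args,
        **kwargs,
    )
```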
g4f/Provider/Flux.py (new file, 58 lines)

@@ -0,0 +1,58 @@
from __future__ import annotations

import json
from aiohttp import ClientSession

from ..typing import AsyncResult, Messages
from ..image import ImageResponse, ImagePreview
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin

class Flux(AsyncGeneratorProvider, ProviderModelMixin):
    label = "Flux Provider"
    url = "https://black-forest-labs-flux-1-dev.hf.space"
    api_endpoint = "/gradio_api/call/infer"
    working = True
    default_model = 'flux-1-dev'
    models = [default_model]
    image_models = [default_model]

    @classmethod
    async def create_async_generator(
        cls, model: str, messages: Messages, prompt: str = None, api_key: str = None, proxy: str = None, **kwargs
    ) -> AsyncResult:
        headers = {
            "Content-Type": "application/json",
            "Accept": "application/json",
        }
        if api_key is not None:
            headers["Authorization"] = f"Bearer {api_key}"
        async with ClientSession(headers=headers) as session:
            prompt = messages[-1]["content"] if prompt is None else prompt
            data = {
                "data": [prompt, 0, True, 1024, 1024, 3.5, 28]
            }
            async with session.post(f"{cls.url}{cls.api_endpoint}", json=data, proxy=proxy) as response:
                response.raise_for_status()
                event_id = (await response.json()).get("event_id")
                async with session.get(f"{cls.url}{cls.api_endpoint}/{event_id}") as event_response:
                    event_response.raise_for_status()
                    event = None
                    async for chunk in event_response.content:
                        if chunk.startswith(b"event: "):
                            event = chunk[7:].decode(errors="replace").strip()
                        if chunk.startswith(b"data: "):
                            if event == "error":
                                raise RuntimeError(f"GPU token limit exceeded: {chunk.decode(errors='replace')}")
                            if event in ("complete", "generating"):
                                try:
                                    data = json.loads(chunk[6:])
                                    if data is None:
                                        continue
                                    url = data[0]["url"]
                                except (json.JSONDecodeError, KeyError, TypeError) as e:
                                    raise RuntimeError(f"Failed to parse image URL: {chunk.decode(errors='replace')}", e)
                                if event == "generating":
                                    yield ImagePreview(url, prompt)
                                else:
                                    yield ImageResponse(url, prompt)
                                    break
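A quick usage sketch for the new provider. This is a hypothetical example: the `AsyncClient` image API follows the g4f client code changed later in this commit, but argument names may differ between versions:

```python
import asyncio
from g4f.client import AsyncClient
from g4f.Provider import Flux

async def main():
    client = AsyncClient(image_provider=Flux)
    # "flux-1-dev" is the provider's default_model defined above
    response = await client.images.generate(
        model="flux-1-dev",
        prompt="a lighthouse at dawn, watercolor",
        response_format="url",
    )
    print(response.data[0].url)

asyncio.run(main())
```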
@@ -39,6 +39,7 @@ from .TeachAnything import TeachAnything
 from .Upstage import Upstage
 from .You import You
 from .Mhystical import Mhystical
+from .Flux import Flux

 import sys

@@ -59,4 +60,4 @@ __map__: dict[str, ProviderType] = dict([
 ])

 class ProviderUtils:
-    convert: dict[str, ProviderType] = __map__
+    convert: dict[str, ProviderType] = __map__
@@ -9,11 +9,11 @@ from ..bing.create_images import create_images, create_session

 class BingCreateImages(AsyncGeneratorProvider, ProviderModelMixin):
     label = "Microsoft Designer in Bing"
     parent = "Bing"
     url = "https://www.bing.com/images/create"
     working = True
     needs_auth = True
-    image_models = ["dall-e"]
+    image_models = ["dall-e-3"]
     models = image_models

     def __init__(self, cookies: Cookies = None, proxy: str = None, api_key: str = None) -> None:
         if api_key is not None:
@@ -69,7 +69,7 @@ class Gemini(AsyncGeneratorProvider):
             if debug.logging:
                 print("Skip nodriver login in Gemini provider")
             return
-        browser = await get_nodriver(proxy=proxy)
+        browser = await get_nodriver(proxy=proxy, user_data_dir="gemini")
         login_url = os.environ.get("G4F_LOGIN_URL")
         if login_url:
             yield f"Please login: [Google Gemini]({login_url})\n\n"
@@ -12,6 +12,7 @@ from ...typing import CreateResult, Messages, Cookies
 from ...errors import MissingRequirementsError
 from ...requests.raise_for_status import raise_for_status
 from ...cookies import get_cookies
+from ...image import ImageResponse
 from ..base_provider import ProviderModelMixin, AbstractProvider, BaseConversation
 from ..helper import format_prompt
 from ... import debug
@@ -26,10 +27,12 @@ class HuggingChat(AbstractProvider, ProviderModelMixin):
     working = True
     supports_stream = True
     needs_auth = True
-    default_model = "meta-llama/Meta-Llama-3.1-70B-Instruct"
+    default_model = "Qwen/Qwen2.5-72B-Instruct"
+    image_models = [
+        "black-forest-labs/FLUX.1-dev"
+    ]
     models = [
-        'Qwen/Qwen2.5-72B-Instruct',
+        default_model,
         'meta-llama/Meta-Llama-3.1-70B-Instruct',
         'CohereForAI/c4ai-command-r-plus-08-2024',
         'Qwen/QwQ-32B-Preview',
@@ -39,8 +42,8 @@ class HuggingChat(AbstractProvider, ProviderModelMixin):
         'NousResearch/Hermes-3-Llama-3.1-8B',
         'mistralai/Mistral-Nemo-Instruct-2407',
         'microsoft/Phi-3.5-mini-instruct',
+        *image_models
     ]

     model_aliases = {
         "qwen-2.5-72b": "Qwen/Qwen2.5-72B-Instruct",
         "llama-3.1-70b": "meta-llama/Meta-Llama-3.1-70B-Instruct",
@@ -52,6 +55,7 @@ class HuggingChat(AbstractProvider, ProviderModelMixin):
         "hermes-3": "NousResearch/Hermes-3-Llama-3.1-8B",
         "mistral-nemo": "mistralai/Mistral-Nemo-Instruct-2407",
         "phi-3.5-mini": "microsoft/Phi-3.5-mini-instruct",
+        "flux-dev": "black-forest-labs/FLUX.1-dev",
     }

     @classmethod
@@ -109,7 +113,7 @@ class HuggingChat(AbstractProvider, ProviderModelMixin):
             "is_retry": False,
             "is_continue": False,
             "web_search": web_search,
-            "tools": []
+            "tools": ["000000000000000000000001"] if model in cls.image_models else [],
         }

         headers = {
@@ -162,14 +166,18 @@ class HuggingChat(AbstractProvider, ProviderModelMixin):

                 elif line["type"] == "finalAnswer":
                     break
-        full_response = full_response.replace('<|im_end|', '').replace('\u0000', '').strip()
+                elif line["type"] == "file":
+                    url = f"https://huggingface.co/chat/conversation/{conversation.conversation_id}/output/{line['sha']}"
+                    yield ImageResponse(url, alt=messages[-1]["content"], options={"cookies": cookies})

+        full_response = full_response.replace('<|im_end|', '').replace('\u0000', '').strip()
         if not stream:
             yield full_response

     @classmethod
     def create_conversation(cls, session: Session, model: str):
+        if model in cls.image_models:
+            model = cls.default_model
         json_data = {
             'model': model,
         }
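With these changes, image generation can be requested through the chat interface via the new "flux-dev" alias. A hypothetical call (authentication cookies for HuggingChat are assumed to be available):

```python
import g4f
from g4f.Provider import HuggingChat

# The provider swaps in its image tool when the model is in image_models,
# and yields an ImageResponse for the "file" event shown above.
response = g4f.ChatCompletion.create(
    model="flux-dev",
    provider=HuggingChat,
    messages=[{"role": "user", "content": "a watercolor fox in the snow"}],
)
print(response)
```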
@@ -1,21 +1,25 @@
 from __future__ import annotations

 import json
+import base64
+import random

 from ...typing import AsyncResult, Messages
 from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
 from ...errors import ModelNotFoundError
 from ...requests import StreamSession, raise_for_status
+from ...image import ImageResponse

 from .HuggingChat import HuggingChat

 class HuggingFace(AsyncGeneratorProvider, ProviderModelMixin):
     url = "https://huggingface.co/chat"
     working = True
     needs_auth = True
     supports_message_history = True
     default_model = HuggingChat.default_model
-    models = HuggingChat.models
+    default_image_model = "black-forest-labs/FLUX.1-dev"
+    models = [*HuggingChat.models, default_image_model]
+    image_models = [default_image_model]
     model_aliases = HuggingChat.model_aliases

     @classmethod
@@ -29,6 +33,7 @@ class HuggingFace(AsyncGeneratorProvider, ProviderModelMixin):
         api_key: str = None,
         max_new_tokens: int = 1024,
         temperature: float = 0.7,
+        prompt: str = None,
         **kwargs
     ) -> AsyncResult:
         model = cls.get_model(model)
@@ -50,16 +55,22 @@ class HuggingFace(AsyncGeneratorProvider, ProviderModelMixin):
         }
         if api_key is not None:
             headers["Authorization"] = f"Bearer {api_key}"
-        params = {
-            "return_full_text": False,
-            "max_new_tokens": max_new_tokens,
-            "temperature": temperature,
-            **kwargs
-        }
-        payload = {"inputs": format_prompt(messages), "parameters": params, "stream": stream}
+        if model in cls.image_models:
+            stream = False
+            prompt = messages[-1]["content"] if prompt is None else prompt
+            payload = {"inputs": prompt, "parameters": {"seed": random.randint(0, 2**32)}}
+        else:
+            params = {
+                "return_full_text": False,
+                "max_new_tokens": max_new_tokens,
+                "temperature": temperature,
+                **kwargs
+            }
+            payload = {"inputs": format_prompt(messages), "parameters": params, "stream": stream}
         async with StreamSession(
             headers=headers,
-            proxy=proxy
+            proxy=proxy,
+            timeout=600
         ) as session:
             async with session.post(f"{api_base.rstrip('/')}/models/{model}", json=payload) as response:
                 if response.status == 404:
@@ -78,7 +89,12 @@ class HuggingFace(AsyncGeneratorProvider, ProviderModelMixin):
                         if chunk:
                             yield chunk
                 else:
-                    yield (await response.json())[0]["generated_text"].strip()
+                    if response.headers["content-type"].startswith("image/"):
+                        base64_data = base64.b64encode(b"".join([chunk async for chunk in response.iter_content()]))
+                        url = f"data:{response.headers['content-type']};base64,{base64_data.decode()}"
+                        yield ImageResponse(url, prompt)
+                    else:
+                        yield (await response.json())[0]["generated_text"].strip()

 def format_prompt(messages: Messages) -> str:
     system_messages = [message["content"] for message in messages if message["role"] == "system"]
@@ -142,7 +142,7 @@ def readHAR(url: str) -> tuple[str, str]:
     return api_key, user_agent

 async def get_access_token_and_user_agent(url: str, proxy: str = None):
-    browser = await get_nodriver(proxy=proxy)
+    browser = await get_nodriver(proxy=proxy, user_data_dir="designer")
     page = await browser.get(url)
     user_agent = await page.evaluate("navigator.userAgent")
     access_token = None
@@ -510,7 +510,7 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):

     @classmethod
     async def nodriver_auth(cls, proxy: str = None):
-        browser = await get_nodriver(proxy=proxy)
+        browser = await get_nodriver(proxy=proxy, user_data_dir="chatgpt")
         page = browser.main_tab
         def on_request(event: nodriver.cdp.network.RequestWillBeSent):
             if event.request.url == start_url or event.request.url.startswith(conversation_url):
@@ -276,16 +276,12 @@ class Api:
         HTTP_200_OK: {"model": List[ModelResponseModel]},
     })
     async def models():
-        model_list = dict(
-            (model, g4f.models.ModelUtils.convert[model])
-            for model in g4f.Model.__all__()
-        )
         return [{
             'id': model_id,
             'object': 'model',
             'created': 0,
             'owned_by': model.base_provider
-        } for model_id, model in model_list.items()]
+        } for model_id, model in g4f.models.ModelUtils.convert.items()]

     @self.app.get("/v1/models/{model_name}", responses={
         HTTP_200_OK: {"model": ModelResponseModel},
@@ -74,7 +74,7 @@ def iter_response(
             finish_reason = "stop"

         if stream:
-            yield ChatCompletionChunk.construct(chunk, None, completion_id, int(time.time()))
+            yield ChatCompletionChunk.model_construct(chunk, None, completion_id, int(time.time()))

         if finish_reason is not None:
             break
@@ -84,12 +84,12 @@ def iter_response(
     finish_reason = "stop" if finish_reason is None else finish_reason

     if stream:
-        yield ChatCompletionChunk.construct(None, finish_reason, completion_id, int(time.time()))
+        yield ChatCompletionChunk.model_construct(None, finish_reason, completion_id, int(time.time()))
     else:
         if response_format is not None and "type" in response_format:
             if response_format["type"] == "json_object":
                 content = filter_json(content)
-        yield ChatCompletion.construct(content, finish_reason, completion_id, int(time.time()))
+        yield ChatCompletion.model_construct(content, finish_reason, completion_id, int(time.time()))

 # Synchronous iter_append_model_and_provider function
 def iter_append_model_and_provider(response: ChatCompletionResponseType) -> ChatCompletionResponseType:
@@ -138,7 +138,7 @@ async def async_iter_response(
             finish_reason = "stop"

         if stream:
-            yield ChatCompletionChunk.construct(chunk, None, completion_id, int(time.time()))
+            yield ChatCompletionChunk.model_construct(chunk, None, completion_id, int(time.time()))

         if finish_reason is not None:
             break
@@ -146,12 +146,12 @@ async def async_iter_response(
         finish_reason = "stop" if finish_reason is None else finish_reason

         if stream:
-            yield ChatCompletionChunk.construct(None, finish_reason, completion_id, int(time.time()))
+            yield ChatCompletionChunk.model_construct(None, finish_reason, completion_id, int(time.time()))
         else:
             if response_format is not None and "type" in response_format:
                 if response_format["type"] == "json_object":
                     content = filter_json(content)
-            yield ChatCompletion.construct(content, finish_reason, completion_id, int(time.time()))
+            yield ChatCompletion.model_construct(content, finish_reason, completion_id, int(time.time()))
     finally:
         await safe_aclose(response)

@@ -422,7 +422,7 @@ class Images:
         last_provider = get_last_provider(True)
         if response_format == "url":
             # Return original URLs without saving locally
-            images = [Image.construct(url=image, revised_prompt=response.alt) for image in response.get_list()]
+            images = [Image.model_construct(url=image, revised_prompt=response.alt) for image in response.get_list()]
         else:
             # Save locally for None (default) case
             images = await copy_images(response.get_list(), response.get("cookies"), proxy)
@@ -430,11 +430,11 @@ class Images:
             async def process_image_item(image_file: str) -> Image:
                 with open(os.path.join(images_dir, os.path.basename(image_file)), "rb") as file:
                     image_data = base64.b64encode(file.read()).decode()
-                    return Image.construct(b64_json=image_data, revised_prompt=response.alt)
+                    return Image.model_construct(b64_json=image_data, revised_prompt=response.alt)
             images = await asyncio.gather(*[process_image_item(image) for image in images])
         else:
-            images = [Image.construct(url=f"/images/{os.path.basename(image)}", revised_prompt=response.alt) for image in images]
-        return ImagesResponse.construct(
+            images = [Image.model_construct(url=f"/images/{os.path.basename(image)}", revised_prompt=response.alt) for image in images]
+        return ImagesResponse.model_construct(
             created=int(time.time()),
             data=images,
             model=last_provider.get("model") if model is None else model,
@@ -10,7 +10,7 @@ try:
 except ImportError:
     class BaseModel():
         @classmethod
-        def construct(cls, **data):
+        def model_construct(cls, **data):
             new = cls()
             for key, value in data.items():
                 setattr(new, key, value)
@@ -19,6 +19,13 @@ except ImportError:
         def __init__(self, **config):
             pass

+class BaseModel(BaseModel):
+    @classmethod
+    def model_construct(cls, **data):
+        if hasattr(super(), "model_construct"):
+            return super().model_construct(**data)
+        return cls.construct(**data)
+
 class ChatCompletionChunk(BaseModel):
     id: str
     object: str
@@ -28,21 +35,21 @@ class ChatCompletionChunk(BaseModel):
     choices: List[ChatCompletionDeltaChoice]

     @classmethod
-    def construct(
+    def model_construct(
         cls,
         content: str,
         finish_reason: str,
         completion_id: str = None,
         created: int = None
     ):
-        return super().construct(
+        return super().model_construct(
             id=f"chatcmpl-{completion_id}" if completion_id else None,
             object="chat.completion.chunk",
             created=created,
             model=None,
             provider=None,
-            choices=[ChatCompletionDeltaChoice.construct(
-                ChatCompletionDelta.construct(content),
+            choices=[ChatCompletionDeltaChoice.model_construct(
+                ChatCompletionDelta.model_construct(content),
                 finish_reason
             )]
         )
@@ -52,8 +59,8 @@ class ChatCompletionMessage(BaseModel):
     content: str

     @classmethod
-    def construct(cls, content: str):
-        return super().construct(role="assistant", content=content)
+    def model_construct(cls, content: str):
+        return super().model_construct(role="assistant", content=content)

 class ChatCompletionChoice(BaseModel):
     index: int
@@ -61,8 +68,8 @@ class ChatCompletionChoice(BaseModel):
     finish_reason: str

     @classmethod
-    def construct(cls, message: ChatCompletionMessage, finish_reason: str):
-        return super().construct(index=0, message=message, finish_reason=finish_reason)
+    def model_construct(cls, message: ChatCompletionMessage, finish_reason: str):
+        return super().model_construct(index=0, message=message, finish_reason=finish_reason)

 class ChatCompletion(BaseModel):
     id: str
@@ -78,21 +85,21 @@ class ChatCompletion(BaseModel):
     }])

     @classmethod
-    def construct(
+    def model_construct(
         cls,
         content: str,
         finish_reason: str,
         completion_id: str = None,
         created: int = None
     ):
-        return super().construct(
+        return super().model_construct(
             id=f"chatcmpl-{completion_id}" if completion_id else None,
             object="chat.completion",
             created=created,
             model=None,
             provider=None,
-            choices=[ChatCompletionChoice.construct(
-                ChatCompletionMessage.construct(content),
+            choices=[ChatCompletionChoice.model_construct(
+                ChatCompletionMessage.model_construct(content),
                 finish_reason
             )],
             usage={
@@ -107,8 +114,8 @@ class ChatCompletionDelta(BaseModel):
     content: str

     @classmethod
-    def construct(cls, content: Optional[str]):
-        return super().construct(role="assistant", content=content)
+    def model_construct(cls, content: Optional[str]):
+        return super().model_construct(role="assistant", content=content)

 class ChatCompletionDeltaChoice(BaseModel):
     index: int
@@ -116,8 +123,8 @@ class ChatCompletionDeltaChoice(BaseModel):
     finish_reason: Optional[str]

     @classmethod
-    def construct(cls, delta: ChatCompletionDelta, finish_reason: Optional[str]):
-        return super().construct(index=0, delta=delta, finish_reason=finish_reason)
+    def model_construct(cls, delta: ChatCompletionDelta, finish_reason: Optional[str]):
+        return super().model_construct(index=0, delta=delta, finish_reason=finish_reason)

 class Image(BaseModel):
     url: Optional[str]
@@ -125,8 +132,8 @@ class Image(BaseModel):
     revised_prompt: Optional[str]

     @classmethod
-    def construct(cls, url: str = None, b64_json: str = None, revised_prompt: str = None):
-        return super().construct(**filter_none(
+    def model_construct(cls, url: str = None, b64_json: str = None, revised_prompt: str = None):
+        return super().model_construct(**filter_none(
             url=url,
             b64_json=b64_json,
             revised_prompt=revised_prompt
@@ -139,10 +146,10 @@ class ImagesResponse(BaseModel):
     created: int

     @classmethod
-    def construct(cls, data: List[Image], created: int = None, model: str = None, provider: str = None):
+    def model_construct(cls, data: List[Image], created: int = None, model: str = None, provider: str = None):
         if created is None:
             created = int(time())
-        return super().construct(
+        return super().model_construct(
             data=data,
             model=model,
             provider=provider,
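The renames above track pydantic's v2 API, where `BaseModel.construct()` was deprecated in favor of `model_construct()`; the `BaseModel(BaseModel)` shim keeps the stubs working on pydantic v1, where only `construct()` exists. A minimal illustration of the v2 call these stubs rely on:

```python
from pydantic import BaseModel  # pydantic v2

class Delta(BaseModel):
    role: str
    content: str

# model_construct() builds an instance without running validation,
# matching the old v1 construct() behavior:
delta = Delta.model_construct(role="assistant", content="hi")
print(delta.content)  # -> "hi"
```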
@@ -34,8 +34,8 @@ try:

     browsers = [
         _g4f,
-        chrome, chromium, opera, opera_gx,
-        brave, edge, vivaldi, firefox,
+        chrome, chromium, firefox, opera, opera_gx,
+        brave, edge, vivaldi,
     ]
     has_browser_cookie3 = True
 except ImportError:
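(The reordering moves Firefox ahead of Opera and Brave; since cookie lookup iterates this list in order, this presumably raises Firefox's priority when several browsers hold cookies for the same domain.)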
@@ -20,6 +20,7 @@
     <script src="/static/js/chat.v1.js" defer></script>
     <script src="https://cdn.jsdelivr.net/npm/markdown-it@13.0.1/dist/markdown-it.min.js"></script>
     <link rel="stylesheet" href="/static/css/dracula.min.css">
+    <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/photoswipe/dist/photoswipe.css">
     <script>
       MathJax = {
         chtml: {
@@ -37,9 +38,40 @@
     </script>
     <script src="https://cdn.jsdelivr.net/npm/gpt-tokenizer/dist/cl100k_base.js" async></script>
     <script src="/static/js/text_to_speech/index.js" async></script>
     <!--
     <script src="/static/js/whisper-web/index.js" async></script>
     -->
+    <script type="module" async>
+        import PhotoSwipeLightbox from 'https://cdn.jsdelivr.net/npm/photoswipe/dist/photoswipe-lightbox.esm.js';
+        const lightbox = new PhotoSwipeLightbox({
+            gallery: '#messages',
+            children: 'a:has(img)',
+            secondaryZoomLevel: 2,
+            pswpModule: () => import('https://cdn.jsdelivr.net/npm/photoswipe'),
+        });
+        lightbox.addFilter('itemData', (itemData, index) => {
+            const img = itemData.element.querySelector('img');
+            itemData.width = img.naturalWidth;
+            itemData.height = img.naturalHeight;
+            return itemData;
+        });
+        lightbox.on('uiRegister', function() {
+            lightbox.pswp.ui.registerElement({
+                name: 'custom-caption',
+                order: 9,
+                isButton: false,
+                appendTo: 'root',
+                html: 'Caption text',
+                onInit: (el, pswp) => {
+                    lightbox.pswp.on('change', () => {
+                        const currSlideElement = lightbox.pswp.currSlide.data.element;
+                        let captionHTML = '';
+                        if (currSlideElement) {
+                            el.innerHTML = currSlideElement.querySelector('img').getAttribute('alt');
+                        }
+                    });
+                }
+            });
+        });
+        lightbox.init();
+    </script>
     <script>
         const user_image = '<img src="/static/img/user.png" alt="your avatar">';
         const gpt_image = '<img src="/static/img/gpt.png" alt="your avatar">';
@@ -261,16 +293,17 @@
                 <option value="">Provider: Auto</option>
                 <option value="OpenaiChat">OpenAI ChatGPT</option>
                 <option value="Copilot">Microsoft Copilot</option>
-                <option value="ChatGpt">ChatGpt</option>
-                <option value="Gemini">Gemini</option>
-                <option value="MetaAI">Meta AI</option>
-                <option value="DeepInfraChat">DeepInfraChat</option>
-                <option value="Blackbox">Blackbox</option>
+                <option value="Gemini">Google Gemini</option>
                 <option value="DDG">DuckDuckGo</option>
                 <option value="Pizzagpt">Pizzagpt</option>
                 <option disabled="disabled">----</option>
             </select>
         </div>
+        <div class="field">
+            <button id="pin">
+                <i class="fa-solid fa-thumbtack"></i>
+            </button>
+        </div>
+        <div id="pin_container" class="field"></div>
     </div>
 </div>
 <div class="log hidden"></div>
@@ -63,6 +63,7 @@
     --conversations-hover: #c7a2ff4d;
     --scrollbar: var(--colour-3);
     --scrollbar-thumb: var(--blur-bg);
+    --button-hover: var(--colour-5);
 }

 :root {
@@ -533,7 +534,7 @@ body.white .gradient{

 .stop_generating, .toolbar .regenerate {
     position: absolute;
-    z-index: 1000000;
+    z-index: 100000;
     top: 0;
     right: 0;
 }
@@ -729,13 +730,8 @@ label[for="camera"] {

 select {
-    -webkit-border-radius: 8px;
-    -moz-border-radius: 8px;
     border-radius: 8px;
-
-    -webkit-backdrop-filter: blur(20px);
     backdrop-filter: blur(20px);
-
     cursor: pointer;
     background-color: var(--colour-1);
     border: 1px solid var(--blur-border);
@@ -745,11 +741,47 @@ select {
     overflow: hidden;
     outline: none;
     padding: 8px 16px;
-
     appearance: none;
     width: 160px;
 }

+.buttons button {
+    border-radius: 8px;
+    backdrop-filter: blur(20px);
+    cursor: pointer;
+    background-color: var(--colour-1);
+    border: 1px solid var(--blur-border);
+    color: var(--colour-3);
+    padding: 8px;
+}
+
+.buttons button.pinned span {
+    max-width: 160px;
+    overflow: hidden;
+    text-wrap: nowrap;
+    margin-right: 16px;
+    display: block;
+    text-overflow: ellipsis;
+}
+
+.buttons button.pinned i {
+    position: absolute;
+    top: 10px;
+    right: 6px;
+}
+
 select:hover,
+.buttons button:hover,
 .stop_generating button:hover,
 .toolbar .regenerate button:hover,
 #send-button:hover {
     background-color: var(--button-hover);
 }

+#provider option:disabled[value], #model option:disabled[value] {
+    display: none;
+}
+
 #systemPrompt, .settings textarea {
     font-size: 15px;
     width: 100%;
@@ -761,6 +793,39 @@ select {
     resize: vertical;
 }

+.pswp {
+    --pswp-placeholder-bg: #000 !important;
+}
+.pswp img {
+    object-fit: contain;
+}
+.pswp__img--placeholder--blank{
+    display: none !important;
+}
+.pswp__custom-caption {
+    opacity: 0 !important;
+    background: rgba(0, 0, 0, 0.3);
+    font-size: 16px;
+    color: #fff;
+    width: calc(100% - 32px);
+    max-width: 400px;
+    padding: 2px 8px;
+    border-radius: 4px;
+    position: absolute;
+    left: 50%;
+    bottom: 16px;
+    transform: translateX(-50%);
+    max-height: 100px;
+    overflow: auto;
+}
+.pswp__custom-caption:hover {
+    opacity: 1 !important;
+}
+.pswp__custom-caption a {
+    color: #fff;
+    text-decoration: underline;
+}
+
 .slide-systemPrompt {
     position: absolute;
     top: 0;
@@ -1112,6 +1177,7 @@ ul {
     --colour-3: #212529;
     --scrollbar: var(--colour-1);
     --scrollbar-thumb: var(--gradient);
+    --button-hover: var(--colour-4);
 }

 .white .message .assistant .fa-xmark {
@@ -3,7 +3,7 @@ const message_box = document.getElementById(`messages`);
 const messageInput = document.getElementById(`message-input`);
 const box_conversations = document.querySelector(`.top`);
 const stop_generating = document.querySelector(`.stop_generating`);
-const regenerate = document.querySelector(`.regenerate`);
+const regenerate_button = document.querySelector(`.regenerate`);
 const sidebar = document.querySelector(".conversations");
 const sidebar_button = document.querySelector(".mobile-sidebar");
 const sendButton = document.getElementById("send-button");
@@ -21,7 +21,7 @@ const chat = document.querySelector(".conversation");
 const album = document.querySelector(".images");
 const log_storage = document.querySelector(".log");

-const optionElements = document.querySelectorAll(".settings input, .settings textarea, #model, #model2, #provider")
+const optionElementsSelector = ".settings input, .settings textarea, #model, #model2, #provider";

 let provider_storage = {};
 let message_storage = {};
@@ -364,7 +364,7 @@ const handle_ask = async () => {
     }
     </div>
     <div class="count">
-        ${count_words_and_tokens(message, get_selected_model())}
+        ${count_words_and_tokens(message, get_selected_model()?.value)}
         <i class="fa-solid fa-volume-high"></i>
         <i class="fa-regular fa-clipboard"></i>
         <a><i class="fa-brands fa-whatsapp"></i></a>
@@ -375,7 +375,19 @@ const handle_ask = async () => {
     </div>
     `;
     highlight(message_box);
-    await ask_gpt(message_id);
+
+    const all_pinned = document.querySelectorAll(".buttons button.pinned")
+    if (all_pinned.length > 0) {
+        all_pinned.forEach((el, idx) => ask_gpt(
+            idx == 0 ? message_id : get_message_id(),
+            -1,
+            idx != 0,
+            el.dataset.provider,
+            el.dataset.model
+        ));
+    } else {
+        await ask_gpt(message_id);
+    }
 };

 async function safe_remove_cancel_button() {
@@ -387,16 +399,21 @@ async function safe_remove_cancel_button() {
     stop_generating.classList.add("stop_generating-hidden");
 }

-regenerate.addEventListener("click", async () => {
-    regenerate.classList.add("regenerate-hidden");
-    setTimeout(()=>regenerate.classList.remove("regenerate-hidden"), 3000);
-    await hide_message(window.conversation_id);
-    await ask_gpt(get_message_id());
+regenerate_button.addEventListener("click", async () => {
+    regenerate_button.classList.add("regenerate-hidden");
+    setTimeout(()=>regenerate_button.classList.remove("regenerate-hidden"), 3000);
+    const all_pinned = document.querySelectorAll(".buttons button.pinned")
+    if (all_pinned.length > 0) {
+        all_pinned.forEach((el) => ask_gpt(get_message_id(), -1, true, el.dataset.provider, el.dataset.model));
+    } else {
+        await hide_message(window.conversation_id);
+        await ask_gpt(get_message_id());
+    }
 });

 stop_generating.addEventListener("click", async () => {
     stop_generating.classList.add("stop_generating-hidden");
-    regenerate.classList.remove("regenerate-hidden");
+    regenerate_button.classList.remove("regenerate-hidden");
     let key;
     for (key in controller_storage) {
         if (!controller_storage[key].signal.aborted) {
@@ -487,6 +504,8 @@ async function add_message_chunk(message, message_id) {
         p.innerText = message.error;
         log_storage.appendChild(p);
     } else if (message.type == "preview") {
+        if (content_map.inner.clientHeight > 200)
+            content_map.inner.style.height = content_map.inner.clientHeight + "px";
         content_map.inner.innerHTML = markdown_render(message.preview);
     } else if (message.type == "content") {
         message_storage[message_id] += message.content;
@@ -505,6 +524,7 @@ async function add_message_chunk(message, message_id) {
         content_map.inner.innerHTML = html;
         content_map.count.innerText = count_words_and_tokens(message_storage[message_id], provider_storage[message_id]?.model);
         highlight(content_map.inner);
+        content_map.inner.style.height = "";
     } else if (message.type == "log") {
         let p = document.createElement("p");
         p.innerText = message.log;
@@ -538,7 +558,11 @@ imageInput?.addEventListener("click", (e) => {
     }
 });

-const ask_gpt = async (message_id, message_index = -1) => {
+const ask_gpt = async (message_id, message_index = -1, regenerate = false, provider = null, model = null) => {
+    if (!model && !provider) {
+        model = get_selected_model()?.value || null;
+        provider = providerSelect.options[providerSelect.selectedIndex].value;
+    }
     let messages = await get_messages(window.conversation_id);
     messages = prepare_messages(messages, message_index);
     message_storage[message_id] = "";
@@ -553,7 +577,7 @@ const ask_gpt = async (message_id, message_index = -1) => {

     const message_el = document.createElement("div");
     message_el.classList.add("message");
-    if (message_index != -1) {
+    if (message_index != -1 || regenerate) {
         message_el.classList.add("regenerate");
     }
     message_el.innerHTML += `
@@ -593,14 +617,13 @@ const ask_gpt = async (message_id, message_index = -1) => {
     try {
         const input = imageInput && imageInput.files.length > 0 ? imageInput : cameraInput;
         const file = input && input.files.length > 0 ? input.files[0] : null;
-        const provider = providerSelect.options[providerSelect.selectedIndex].value;
         const auto_continue = document.getElementById("auto_continue")?.checked;
         const download_images = document.getElementById("download_images")?.checked;
         let api_key = get_api_key_by_provider(provider);
         await api("conversation", {
             id: message_id,
             conversation_id: window.conversation_id,
-            model: get_selected_model(),
+            model: model,
             web_search: document.getElementById("switch").checked,
             provider: provider,
             messages: messages,
@@ -632,7 +655,8 @@ const ask_gpt = async (message_id, message_index = -1) => {
             message_storage[message_id],
             message_provider,
             message_index,
-            synthesize_storage[message_id]
+            synthesize_storage[message_id],
+            regenerate
         );
         await safe_load_conversation(window.conversation_id, message_index == -1);
     } else {
@@ -645,7 +669,7 @@ const ask_gpt = async (message_id, message_index = -1) => {
     await safe_remove_cancel_button();
     await register_message_buttons();
     await load_conversations();
-    regenerate.classList.remove("regenerate-hidden");
+    regenerate_button.classList.remove("regenerate-hidden");
 };

 async function scroll_to_bottom() {
@@ -848,7 +872,7 @@ const load_conversation = async (conversation_id, scroll=true) => {
     message_box.innerHTML = elements;
     register_message_buttons();
     highlight(message_box);
-    regenerate.classList.remove("regenerate-hidden");
+    regenerate_button.classList.remove("regenerate-hidden");

     if (scroll) {
         message_box.scrollTo({ top: message_box.scrollHeight, behavior: "smooth" });
@@ -960,7 +984,8 @@ const add_message = async (
     conversation_id, role, content,
     provider = null,
     message_index = -1,
-    synthesize_data = null
+    synthesize_data = null,
+    regenerate = false
 ) => {
     const conversation = await get_conversation(conversation_id);
     if (!conversation) return;
@@ -972,6 +997,9 @@ const add_message = async (
     if (synthesize_data) {
         new_message.synthesize = synthesize_data;
     }
+    if (regenerate) {
+        new_message.regenerate = true;
+    }
     if (message_index == -1) {
         conversation.items.push(new_message);
     } else {
@@ -1118,6 +1146,7 @@ function open_album() {
 }

 const register_settings_storage = async () => {
+    const optionElements = document.querySelectorAll(optionElementsSelector);
     optionElements.forEach((element) => {
         if (element.type == "textarea") {
             element.addEventListener('input', async (event) => {
@@ -1145,6 +1174,7 @@ const register_settings_storage = async () => {
 }

 const load_settings_storage = async () => {
+    const optionElements = document.querySelectorAll(optionElementsSelector);
     optionElements.forEach((element) => {
         if (!(value = appStorage.getItem(element.id))) {
             return;
@@ -1226,7 +1256,7 @@ const count_input = async () => {
     if (timeoutId) clearTimeout(timeoutId);
     timeoutId = setTimeout(() => {
         if (countFocus.value) {
-            inputCount.innerText = count_words_and_tokens(countFocus.value, get_selected_model());
+            inputCount.innerText = count_words_and_tokens(countFocus.value, get_selected_model()?.value);
         } else {
             inputCount.innerText = "";
         }
@@ -1267,6 +1297,38 @@ async function on_load() {
     load_conversations();
 }

+const load_provider_option = (input, provider_name) => {
+    if (input.checked) {
+        modelSelect.querySelectorAll(`option[data-disabled_providers*="${provider_name}"]`).forEach(
+            (el) => {
+                el.dataset.disabled_providers = el.dataset.disabled_providers ? el.dataset.disabled_providers.split(" ").filter((provider) => provider!=provider_name).join(" ") : "";
+                el.dataset.providers = (el.dataset.providers ? el.dataset.providers + " " : "") + provider_name;
+                modelSelect.querySelectorAll(`option[value="${el.value}"]`).forEach((o)=>o.removeAttribute("disabled", "disabled"))
+            }
+        );
+        providerSelect.querySelectorAll(`option[value="${provider_name}"]`).forEach(
+            (el) => el.removeAttribute("disabled")
+        );
+        providerSelect.querySelectorAll(`option[data-parent="${provider_name}"]`).forEach(
+            (el) => el.removeAttribute("disabled")
+        );
+    } else {
+        modelSelect.querySelectorAll(`option[data-providers*="${provider_name}"]`).forEach(
+            (el) => {
+                el.dataset.providers = el.dataset.providers ? el.dataset.providers.split(" ").filter((provider) => provider!=provider_name).join(" ") : "";
+                el.dataset.disabled_providers = (el.dataset.disabled_providers ? el.dataset.disabled_providers + " " : "") + provider_name;
+                if (!el.dataset.providers) modelSelect.querySelectorAll(`option[value="${el.value}"]`).forEach((o)=>o.setAttribute("disabled", "disabled"))
+            }
+        );
+        providerSelect.querySelectorAll(`option[value="${provider_name}"]`).forEach(
+            (el) => el.setAttribute("disabled", "disabled")
+        );
+        providerSelect.querySelectorAll(`option[data-parent="${provider_name}"]`).forEach(
+            (el) => el.setAttribute("disabled", "disabled")
+        );
+    }
+};
+
 async function on_api() {
     let prompt_lock = false;
     messageInput.addEventListener("keydown", async (evt) => {
@@ -1292,22 +1354,44 @@ async function on_api() {
         await handle_ask();
     });
     messageInput.focus();

-    register_settings_storage();

+    let provider_options = [];
     try {
         models = await api("models");
         models.forEach((model) => {
             let option = document.createElement("option");
-            option.value = option.text = model;
+            option.value = model.name;
+            option.text = model.name + (model.image ? " (Image Generation)" : "");
+            option.dataset.providers = model.providers.join(" ");
             modelSelect.appendChild(option);
         });
         providers = await api("providers")
-        Object.entries(providers).forEach(([provider, label]) => {
+        providers.sort((a, b) => a.label.localeCompare(b.label));
+        providers.forEach((provider) => {
             let option = document.createElement("option");
-            option.value = provider;
-            option.text = label;
+            option.value = provider.name;
+            option.dataset.label = provider.label;
+            option.text = provider.label
+                + (provider.vision ? " (Image Upload)" : "")
+                + (provider.image ? " (Image Generation)" : "")
+                + (provider.webdriver ? " (Webdriver)" : "")
+                + (provider.auth ? " (Auth)" : "");
+            if (provider.parent)
+                option.dataset.parent = provider.parent;
             providerSelect.appendChild(option);

+            if (!provider.parent) {
+                option = document.createElement("div");
+                option.classList.add("field");
+                option.innerHTML = `
+                    <div class="field">
+                        <span class="label">Enable ${provider.label}</span>
+                        <input id="Provider${provider.name}" type="checkbox" name="Provider${provider.name}" checked="">
+                        <label for="Provider${provider.name}" class="toogle" title="Remove provider from dropdown"></label>
+                    </div>`;
+                option.querySelector("input").addEventListener("change", (event) => load_provider_option(event.target, provider.name));
+                settings.querySelector(".paper").appendChild(option);
+                provider_options[provider.name] = option;
+            }
         });
         await load_provider_models(appStorage.getItem("provider"));
     } catch (e) {
@@ -1316,8 +1400,11 @@ async function on_api() {
         document.location.href = `/chat/error`;
        }
    }

+    register_settings_storage();
     await load_settings_storage()
+    Object.entries(provider_options).forEach(
+        ([provider_name, option]) => load_provider_option(option.querySelector("input"), provider_name)
+    );

     const hide_systemPrompt = document.getElementById("hide-systemPrompt")
     const slide_systemPrompt_icon = document.querySelector(".slide-systemPrompt i");
@@ -1455,9 +1542,12 @@ systemPrompt?.addEventListener("input", async () => {

 function get_selected_model() {
     if (modelProvider.selectedIndex >= 0) {
-        return modelProvider.options[modelProvider.selectedIndex].value;
+        return modelProvider.options[modelProvider.selectedIndex];
     } else if (modelSelect.selectedIndex >= 0) {
-        return modelSelect.options[modelSelect.selectedIndex].value;
+        model = modelSelect.options[modelSelect.selectedIndex];
+        if (model.value) {
+            return model;
+        }
     }
 }

@@ -1554,6 +1644,7 @@ async function load_provider_models(providerIndex=null) {
     models.forEach((model) => {
         let option = document.createElement('option');
         option.value = model.model;
+        option.dataset.label = model.model;
         option.text = `${model.model}${model.image ? " (Image Generation)" : ""}${model.vision ? " (Image Upload)" : ""}`;
         option.selected = model.default;
         modelProvider.appendChild(option);
@@ -1564,6 +1655,32 @@ async function load_provider_models(providerIndex=null) {
     }
 };
 providerSelect.addEventListener("change", () => load_provider_models());
+document.getElementById("pin").addEventListener("click", async () => {
+    const pin_container = document.getElementById("pin_container");
+    let selected_provider = providerSelect.options[providerSelect.selectedIndex];
+    selected_provider = selected_provider.value ? selected_provider : null;
+    const selected_model = get_selected_model();
+    if (selected_provider || selected_model) {
+        const pinned = document.createElement("button");
+        pinned.classList.add("pinned");
+        if (selected_provider) pinned.dataset.provider = selected_provider.value;
+        if (selected_model) pinned.dataset.model = selected_model.value;
+        pinned.innerHTML = `
+            <span>
+                ${selected_provider ? selected_provider.dataset.label || selected_provider.text : ""}
+                ${selected_provider && selected_model ? "/" : ""}
+                ${selected_model ? selected_model.dataset.label || selected_model.text : ""}
+            </span>
+            <i class="fa-regular fa-circle-xmark"></i>`;
+        pinned.addEventListener("click", () => pin_container.removeChild(pinned));
+        let all_pinned = pin_container.querySelectorAll(".pinned");
+        while (all_pinned.length > 4) {
+            pin_container.removeChild(all_pinned[0])
+            all_pinned = pin_container.querySelectorAll(".pinned");
+        }
+        pin_container.appendChild(pinned);
+    }
+});

 function save_storage() {
     let filename = `chat ${new Date().toLocaleString()}.json`.replaceAll(":", "-");
@@ -8,11 +8,12 @@ from flask import send_from_directory
 from inspect import signature
 
 from g4f import version, models
-from g4f import get_last_provider, ChatCompletion
+from g4f import get_last_provider, ChatCompletion, get_model_and_provider
 from g4f.errors import VersionNotFoundError
 from g4f.image import ImagePreview, ImageResponse, copy_images, ensure_images_dir, images_dir
 from g4f.Provider import ProviderType, __providers__, __map__
 from g4f.providers.base_provider import ProviderModelMixin
+from g4f.providers.retry_provider import IterListProvider
 from g4f.providers.response import BaseConversation, FinishReason, SynthesizeData
 from g4f.client.service import convert_to_provider
 from g4f import debug
@@ -23,7 +24,15 @@ conversations: dict[dict[str, BaseConversation]] = {}
 class Api:
     @staticmethod
     def get_models():
-        return models._all_models
+        return [{
+            "name": model.name,
+            "image": isinstance(model, models.ImageModel),
+            "providers": [
+                getattr(provider, "parent", provider.__name__)
+                for provider in providers
+            ]
+        }
+        for model, providers in models.__models__.values()]
 
     @staticmethod
     def get_provider_models(provider: str, api_key: str = None):
@@ -47,15 +56,15 @@ class Api:
 
     @staticmethod
     def get_providers() -> dict[str, str]:
-        return {
-            provider.__name__: (provider.label if hasattr(provider, "label") else provider.__name__)
-            + (" (Image Generation)" if getattr(provider, "image_models", None) else "")
-            + (" (Image Upload)" if getattr(provider, "default_vision_model", None) else "")
-            + (" (WebDriver)" if "webdriver" in provider.get_parameters() else "")
-            + (" (Auth)" if provider.needs_auth else "")
-            for provider in __providers__
-            if provider.working
-        }
+        return [{
+            "name": provider.__name__,
+            "label": provider.label if hasattr(provider, "label") else provider.__name__,
+            "parent": getattr(provider, "parent", None),
+            "image": getattr(provider, "image_models", None) is not None,
+            "vision": getattr(provider, "default_vision_model", None) is not None,
+            "webdriver": "webdriver" in provider.get_parameters(),
+            "auth": provider.needs_auth,
+        } for provider in __providers__ if provider.working]
 
     @staticmethod
     def get_version() -> dict:
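
With this change, `get_providers` returns a JSON array of structured provider entries instead of a name-to-label mapping, which is exactly the shape the updated `providers.forEach` loop in the frontend consumes. A minimal sketch of reading the new shape from the `/backend-api/v2/providers` route (host and port are assumptions; the field names come from `get_providers` above):

```python
import requests  # third-party HTTP client, assumed installed

# Assumes a locally running GUI backend; adjust host/port as needed.
providers = requests.get("http://localhost:8080/backend-api/v2/providers").json()

for provider in sorted(providers, key=lambda p: p["label"]):
    # Rebuild the suffix tags the dropdown shows, from the structured flags.
    tags = [tag for flag, tag in [
        (provider["vision"], "(Image Upload)"),
        (provider["image"], "(Image Generation)"),
        (provider["webdriver"], "(Webdriver)"),
        (provider["auth"], "(Auth)"),
    ] if flag]
    print(provider["name"], "-", provider["label"], *tags)
```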
@@ -114,47 +123,49 @@ class Api:
             print(text)
         debug.log_handler = log_handler
         proxy = os.environ.get("G4F_PROXY")
         provider = kwargs.get("provider")
+        model, provider_handler = get_model_and_provider(
+            kwargs.get("model"), provider,
+            stream=True,
+            ignore_stream=True
+        )
+        first = True
         try:
-            result = ChatCompletion.create(**kwargs)
-            first = True
-            if isinstance(result, ImageResponse):
+            result = ChatCompletion.create(**{**kwargs, "model": model, "provider": provider_handler})
+            for chunk in result:
                 if first:
                     first = False
-                    yield self._format_json("provider", get_last_provider(True))
-                yield self._format_json("content", str(result))
-            else:
-                for chunk in result:
-                    if first:
-                        first = False
-                        yield self._format_json("provider", get_last_provider(True))
-                    if isinstance(chunk, BaseConversation):
-                        if provider:
-                            if provider not in conversations:
-                                conversations[provider] = {}
-                            conversations[provider][conversation_id] = chunk
-                        yield self._format_json("conversation", conversation_id)
-                    elif isinstance(chunk, Exception):
-                        logger.exception(chunk)
-                        yield self._format_json("message", get_error_message(chunk))
-                    elif isinstance(chunk, ImagePreview):
-                        yield self._format_json("preview", chunk.to_string())
-                    elif isinstance(chunk, ImageResponse):
-                        images = chunk
-                        if download_images:
-                            images = asyncio.run(copy_images(chunk.get_list(), chunk.get("cookies"), proxy))
-                            images = ImageResponse(images, chunk.alt)
-                        yield self._format_json("content", str(images))
-                    elif isinstance(chunk, SynthesizeData):
-                        yield self._format_json("synthesize", chunk.to_json())
-                    elif not isinstance(chunk, FinishReason):
-                        yield self._format_json("content", str(chunk))
-                    if debug.logs:
-                        for log in debug.logs:
-                            yield self._format_json("log", str(log))
-                        debug.logs = []
+                    yield self.handle_provider(provider_handler, model)
+                if isinstance(chunk, BaseConversation):
+                    if provider is not None:
+                        if provider not in conversations:
+                            conversations[provider] = {}
+                        conversations[provider][conversation_id] = chunk
+                    yield self._format_json("conversation", conversation_id)
+                elif isinstance(chunk, Exception):
+                    logger.exception(chunk)
+                    yield self._format_json("message", get_error_message(chunk))
+                elif isinstance(chunk, ImagePreview):
+                    yield self._format_json("preview", chunk.to_string())
+                elif isinstance(chunk, ImageResponse):
+                    images = chunk
+                    if download_images:
+                        images = asyncio.run(copy_images(chunk.get_list(), chunk.get("cookies"), proxy))
+                        images = ImageResponse(images, chunk.alt)
+                    yield self._format_json("content", str(images))
+                elif isinstance(chunk, SynthesizeData):
+                    yield self._format_json("synthesize", chunk.to_json())
+                elif not isinstance(chunk, FinishReason):
+                    yield self._format_json("content", str(chunk))
+                if debug.logs:
+                    for log in debug.logs:
+                        yield self._format_json("log", str(log))
+                    debug.logs = []
         except Exception as e:
             logger.exception(e)
             yield self._format_json('error', get_error_message(e))
+        if first:
+            yield self.handle_provider(provider_handler, model)
 
     def _format_json(self, response_type: str, content):
         return {
@@ -162,9 +173,12 @@ class Api:
             response_type: content
         }
 
+    def handle_provider(self, provider_handler, model):
+        if isinstance(provider_handler, IterListProvider):
+            provider_handler = provider_handler.last_provider
+        if issubclass(provider_handler, ProviderModelMixin) and provider_handler.last_model is not None:
+            model = provider_handler.last_model
+        return self._format_json("provider", {**provider_handler.get_dict(), "model": model})
+
 def get_error_message(exception: Exception) -> str:
-    message = f"{type(exception).__name__}: {exception}"
-    provider = get_last_provider()
-    if provider is None:
-        return message
-    return f"{provider.__name__}: {message}"
+    return f"{type(exception).__name__}: {exception}"

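The streaming rewrite resolves the model and provider once via `get_model_and_provider`, reports the provider through the new `handle_provider` helper (so the frame reflects the `last_model` actually used), and emits a final provider frame even when no chunk arrived. Each yielded frame is a dict of the form `{"type": t, t: content}`, as built by `_format_json`. A hypothetical consumer, for illustration only:

```python
# Hypothetical frame handler; frame shapes follow _format_json above.
def handle_frame(frame: dict) -> None:
    kind = frame["type"]
    if kind == "provider":
        # Emitted by handle_provider(): provider details plus the model actually used.
        info = frame["provider"]
        print("provider:", info.get("name"), "model:", info.get("model"))
    elif kind == "content":
        print(frame["content"], end="")  # streamed text or image markdown
    elif kind in ("message", "error", "log"):
        print(f"[{kind}]", frame[kind])
```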
@@ -55,6 +55,12 @@ class Backend_Api(Api):
                 return jsonify(response)
             return response
 
+        def jsonify_providers(**kwargs):
+            response = self.get_providers(**kwargs)
+            if isinstance(response, list):
+                return jsonify(response)
+            return response
+
         self.routes = {
             '/backend-api/v2/models': {
                 'function': jsonify_models,
@@ -65,7 +71,7 @@ class Backend_Api(Api):
                 'methods': ['GET']
             },
             '/backend-api/v2/providers': {
-                'function': self.get_providers,
+                'function': jsonify_providers,
                 'methods': ['GET']
             },
             '/backend-api/v2/version': {

@@ -40,6 +40,7 @@ def fix_url(url: str) -> str:
 def fix_title(title: str) -> str:
     if title:
         return title.replace("\n", "").replace('"', '')
+    return ""
 
 def to_image(image: ImageType, is_svg: bool = False) -> Image:
     """
@@ -229,6 +230,8 @@ def format_images_markdown(images: Union[str, list], alt: str, preview: Union[st
     Returns:
         str: The formatted markdown string.
     """
+    if isinstance(images, list) and len(images) == 1:
+        images = images[0]
     if isinstance(images, str):
         result = f"[![{fix_title(alt)}]({fix_url(preview.replace('{image}', images) if preview else images)})]({fix_url(images)})"
     else:
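
The two-line addition to `format_images_markdown` collapses a single-element image list to its sole URL, so one generated image renders as a single linked thumbnail rather than a one-item gallery. A small sketch of the effect (placeholder URL; any wrapper markers around the result may differ):

```python
from g4f.image import format_images_markdown

# One image in a list is now treated like a plain string URL.
markdown = format_images_markdown(["https://example.com/cat.png"], "a cat")
# The result should contain a single linked image:
# [![a cat](https://example.com/cat.png)](https://example.com/cat.png)
print(markdown)
```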
@@ -38,6 +38,7 @@ from .Provider import (
     RubiksAI,
     TeachAnything,
     Upstage,
+    Flux,
 )
 
 @dataclass(unsafe_hash=True)
@@ -59,6 +60,9 @@ class Model:
         """Returns a list of all model names."""
         return _all_models
 
+class ImageModel(Model):
+    pass
+
 ### Default ###
 default = Model(
     name = "",
@@ -559,100 +563,98 @@ any_uncensored = Model(
 #############
 
 ### Stability AI ###
-sdxl = Model(
+sdxl = ImageModel(
     name = 'sdxl',
     base_provider = 'Stability AI',
     best_provider = IterListProvider([ReplicateHome, Airforce])
-
 )
 
-sd_3 = Model(
+sd_3 = ImageModel(
     name = 'sd-3',
     base_provider = 'Stability AI',
     best_provider = ReplicateHome
-
 )
 
 ### Playground ###
-playground_v2_5 = Model(
+playground_v2_5 = ImageModel(
     name = 'playground-v2.5',
     base_provider = 'Playground AI',
     best_provider = ReplicateHome
-
 )
 
 ### Flux AI ###
-flux = Model(
+flux = ImageModel(
     name = 'flux',
     base_provider = 'Flux AI',
     best_provider = IterListProvider([Blackbox, Airforce])
 )
 
-flux_pro = Model(
+flux_pro = ImageModel(
     name = 'flux-pro',
     base_provider = 'Flux AI',
     best_provider = Airforce
 )
 
-flux_dev = Model(
+flux_dev = ImageModel(
     name = 'flux-dev',
     base_provider = 'Flux AI',
-    best_provider = AmigoChat
+    best_provider = IterListProvider([Flux, AmigoChat, HuggingChat, HuggingFace])
 )
 
-flux_realism = Model(
+flux_realism = ImageModel(
     name = 'flux-realism',
     base_provider = 'Flux AI',
     best_provider = IterListProvider([Airforce, AmigoChat])
 )
 
-flux_anime = Model(
+flux_anime = ImageModel(
     name = 'flux-anime',
     base_provider = 'Flux AI',
     best_provider = Airforce
 )
 
-flux_3d = Model(
+flux_3d = ImageModel(
     name = 'flux-3d',
     base_provider = 'Flux AI',
     best_provider = Airforce
 )
 
-flux_disney = Model(
+flux_disney = ImageModel(
     name = 'flux-disney',
     base_provider = 'Flux AI',
     best_provider = Airforce
 )
 
-flux_pixel = Model(
+flux_pixel = ImageModel(
     name = 'flux-pixel',
     base_provider = 'Flux AI',
     best_provider = Airforce
 )
 
-flux_4o = Model(
+flux_4o = ImageModel(
     name = 'flux-4o',
     base_provider = 'Flux AI',
     best_provider = Airforce
 )
 
 ### OpenAI ###
-dall_e_3 = Model(
+dall_e_3 = ImageModel(
     name = 'dall-e-3',
     base_provider = 'OpenAI',
     best_provider = IterListProvider([Airforce, CopilotAccount, OpenaiAccount, MicrosoftDesigner, BingCreateImages])
 )
 
 ### Recraft ###
-recraft_v3 = Model(
+recraft_v3 = ImageModel(
     name = 'recraft-v3',
     base_provider = 'Recraft',
     best_provider = AmigoChat
 )
 
 ### Other ###
-any_dark = Model(
+any_dark = ImageModel(
     name = 'any-dark',
     base_provider = 'Other',
     best_provider = Airforce
@@ -863,4 +865,17 @@ class ModelUtils:
     'any-dark': any_dark,
 }
 
-_all_models = list(ModelUtils.convert.keys())
+# Create a list of all working models
+__models__ = {model.name: (model, providers) for model, providers in [
+    (model, [provider for provider in providers if provider.working])
+    for model, providers in [
+        (model, model.best_provider.providers
+            if isinstance(model.best_provider, IterListProvider)
+            else [model.best_provider]
+            if model.best_provider is not None
+            else [])
+        for model in ModelUtils.convert.values()]
+] if providers}
+# Update the ModelUtils.convert with the working models
+ModelUtils.convert = {model.name: model for model, _ in __models__.values()}
+_all_models = list(ModelUtils.convert.keys())
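
The new `__models__` mapping pairs each model with its working providers and drops models that have none, and `ModelUtils.convert` plus `_all_models` are rebuilt from that pruned set. A rough way to inspect the result, assuming `g4f` is importable:

```python
from g4f import models

# __models__ maps model name -> (Model instance, [working provider classes]).
for name, (model, providers) in list(models.__models__.items())[:5]:
    kind = "image" if isinstance(model, models.ImageModel) else "text"
    print(f"{name} [{kind}]:", [p.__name__ for p in providers])

# _all_models now only lists models that have at least one working provider.
print(len(models._all_models), "working models")
```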
@@ -98,7 +98,7 @@ class AbstractProvider(BaseProvider):
             default_value = f'"{param.default}"' if isinstance(param.default, str) else param.default
             args += f" = {default_value}" if param.default is not Parameter.empty else ""
+            args += ","
 
 
         return f"g4f.Provider.{cls.__name__} supports: ({args}\n)"
 
 class AsyncProvider(AbstractProvider):
@@ -240,6 +240,7 @@ class ProviderModelMixin:
     models: list[str] = []
     model_aliases: dict[str, str] = {}
     image_models: list = None
+    last_model: str = None
 
     @classmethod
     def get_models(cls) -> list[str]:
@@ -255,5 +256,6 @@ class ProviderModelMixin:
             model = cls.model_aliases[model]
         elif model not in cls.get_models() and cls.models:
             raise ModelNotSupportedError(f"Model is not supported: {model} in: {cls.__name__}")
+        cls.last_model = model
         debug.last_model = model
         return model

@@ -78,7 +78,7 @@ class BaseProvider(ABC):
         Returns:
             Dict[str, str]: A dictionary with provider's details.
         """
-        return {'name': cls.__name__, 'url': cls.url}
+        return {'name': cls.__name__, 'url': cls.url, 'label': getattr(cls, 'label', None)}
 
 class BaseRetryProvider(BaseProvider):
     """

@@ -174,10 +174,10 @@ def merge_cookies(cookies: Iterator[Morsel], response: Response) -> Cookies:
     for cookie in response.cookies.jar:
         cookies[cookie.name] = cookie.value
 
-async def get_nodriver(proxy: str = None, **kwargs)-> Browser:
+async def get_nodriver(proxy: str = None, user_data_dir = "nodriver", **kwargs)-> Browser:
     if not has_nodriver:
         raise MissingRequirementsError('Install "nodriver" package | pip install -U nodriver')
-    user_data_dir = user_config_dir("g4f-nodriver") if has_platformdirs else None
+    user_data_dir = user_config_dir(f"g4f-{user_data_dir}") if has_platformdirs else None
     debug.log(f"Open nodriver with user_dir: {user_data_dir}")
     return await nodriver.start(
         user_data_dir=user_data_dir,
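
The final hunk is the change the merge title describes: `get_nodriver` now accepts a `user_data_dir` name, so each provider can keep an isolated browser profile under the platform config directory (e.g. `g4f-Copilot` instead of the shared `g4f-nodriver`). A hedged usage sketch; the provider name and page URL are illustrative:

```python
import asyncio

from g4f.requests import get_nodriver  # requires the optional nodriver package

async def open_provider_page():
    # Passing a provider-specific name yields a per-provider profile directory
    # (resolved via platformdirs when available, e.g. ~/.config/g4f-Copilot).
    browser = await get_nodriver(user_data_dir="Copilot")
    page = await browser.get("https://copilot.microsoft.com")
    return page

# asyncio.run(open_provider_page())
```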