diff --git a/README.md b/README.md
index 3b847e49..3bff730b 100644
--- a/README.md
+++ b/README.md
@@ -132,17 +132,27 @@ To ensure the seamless operation of our application, please follow the instructi
By following these steps, you should be able to successfully install and run the application on your Windows system. If you encounter any issues during the installation process, please refer to our Issue Tracker or reach out on Discord for assistance.
-Run the **Webview UI** on other Platforms:
+---
-- [/docs/webview](docs/webview.md)
+### Learn More About the GUI
-##### Use your smartphone:
+For detailed instructions on how to set up, configure, and use the GPT4Free GUI, refer to the **GUI Documentation**:
-Run the Web UI on Your Smartphone:
+- [GUI Documentation](docs/gui.md)
-- [/docs/guides/phone](docs/guides/phone.md)
+This guide includes step-by-step instructions on provider selection, conversation management, advanced features such as speech recognition, and more.
-#### Use python
+---
+
+### Use Your Smartphone
+
+Run the Web UI on your smartphone for easy access on the go. Check out the dedicated guide to learn how to set up and use the GUI on your mobile device:
+
+- [Run on Smartphone Guide](docs/guides/phone.md)
+
+---
+
+### Use Python
##### Prerequisites:
diff --git a/docs/gui.md b/docs/gui.md
new file mode 100644
index 00000000..aa946bb6
--- /dev/null
+++ b/docs/gui.md
@@ -0,0 +1,147 @@
+# G4F - GUI Documentation
+
+## Overview
+The G4F GUI is a self-contained, user-friendly interface designed for interacting with multiple AI models from various providers. It allows users to generate text, code, and images effortlessly. Advanced features such as speech recognition, file uploads, conversation backup/restore, and more are included. Both the backend and frontend are fully integrated into the GUI, making setup simple and seamless.
+
+## Features
+
+### 1. **Multiple Providers and Models**
+ - **Provider/Model Selection via Dropdown:** Use the select box to choose a specific **provider/model combination**.
+ - **Pinning Provider/Model Combinations:** After selecting a provider and model from the dropdown, click the **pin button** to add the combination to the pinned list.
+ - **Remove Pinned Combinations:** Each pinned provider/model combination is displayed as a button. Clicking on the button removes it from the pinned list.
+ - **Send Requests to Multiple Providers:** You can pin multiple provider/model combinations and send requests to all of them simultaneously, enabling fast and comprehensive content generation.
+
+### 2. **Text, Code, and Image Generation**
+ - **Text and Code Generation:** Enter prompts to generate text or code outputs.
+ - **Image Generation:** Provide text prompts to generate images, which are shown as thumbnails. Clicking on a thumbnail opens the image in a lightbox view.
+
+### 3. **Gallery Functionality**
+ - **Image Thumbnails:** Generated images appear as small thumbnails within the conversation.
+ - **Lightbox View:** Clicking a thumbnail opens the image in full size, along with the prompt used to generate it.
+ - **Automatic Image Download:** Enable automatic downloading of generated images through the settings.
+
+### 4. **Conversation Management**
+ - **Message Reuse:** While messages can't be edited, you can copy and reuse them.
+   - **Conversation Deletion:** Conversations can be deleted for a cleaner workspace.
+ - **Conversation List:** The left sidebar displays a list of active and past conversations for easy navigation.
+ - **Change Conversation Title:** By clicking the three dots next to a conversation title, you can either delete or change its title.
+ - **Backup and Restore Conversations:** Backup and restore all conversations and messages as a JSON file (accessible via the settings).
+
+### 5. **Speech Recognition and Synthesis**
+ - **Speech Input:** Use speech recognition to input prompts by speaking instead of typing.
+ - **Speech Output (Text-to-Speech):** The generated text can be read aloud using speech synthesis.
+ - **Custom Language Settings:** Configure the language used for speech recognition to match your preference.
+
+### 6. **File Uploads**
+ - **Image Uploads:** Upload images that will be appended to your message and sent to the AI provider.
+ - **Text File Uploads:** Upload text files, and their contents will be added to the message to provide more detailed input to the AI.
+
+### 7. **Web Access and Settings**
+ - **DuckDuckGo Web Access:** Enable web access through DuckDuckGo for privacy-focused browsing.
+ - **Theme Toggle:** Switch between **dark mode** and **light mode** in the settings.
+ - **Provider Visibility:** Hide unused providers in the settings using toggle buttons.
+ - **Log Access:** View application logs, including error messages and debug logs, through the settings.
+
+### 8. **Authentication**
+ - **Basic Authentication:** Set a password for Basic Authentication using the `--g4f-api-key` argument when starting the web server.
+
+## Installation
+
+You can install the G4F GUI either as the full stack or as a lightweight "slim" version:
+
+1. **Full Stack Installation** (includes all packages, including browser support and drivers):
+ ```bash
+ pip install -U g4f[all]
+ ```
+
+2. **Slim Installation** (does not include browser drivers, suitable for headless environments):
+ ```bash
+ pip install -U g4f[slim]
+ ```
+
+   Choose the slim installation for headless or server environments where browser-based providers are not needed.
+
+## Setup
+
+### Setting the Environment Variable
+
+It is **recommended** to set a `G4F_API_KEY` environment variable for authentication. You can do this as follows:
+
+On **Linux/macOS**:
+```bash
+export G4F_API_KEY="your-api-key-here"
+```
+
+On **Windows** (Command Prompt):
+```cmd
+set G4F_API_KEY=your-api-key-here
+```
+
+### Start the GUI and Backend
+
+Run the following command to start both the GUI and backend services based on the G4F client:
+
+```bash
+python -m g4f --debug --port 8080 --g4f-api-key $G4F_API_KEY
+```
+
+This starts the GUI at `http://localhost:8080`, with the backend running in the same process.
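+
+If you prefer starting it from Python instead of the command line, here is a minimal sketch (this assumes the `run_gui` helper and its `port`/`debug` parameters are available in your installed g4f version):
+
+```python
+from g4f.gui import run_gui
+
+# Start the bundled GUI and backend on port 8080 with debug logging enabled
+run_gui(port=8080, debug=True)
+```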
+
+### Access the GUI
+
+Once the server is running, open your browser and navigate to:
+
+```
+http://localhost:8080/chat/
+```
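+
+To verify from a script that the server is reachable, a quick check (assuming the default port 8080):
+
+```python
+import urllib.request
+
+# Expect HTTP 200 once the GUI server is up
+with urllib.request.urlopen("http://localhost:8080/chat/") as resp:
+    print(resp.status)
+```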
+
+## Using the Interface
+
+1. **Select and Manage Providers/Models:**
+ - Use the **select box** to view the list of available providers and models.
+ - Select a **provider/model combination** from the dropdown.
+ - Click the **pin button** to add the combination to your pinned list.
+ - To **unpin** a combination, click the corresponding button in the pinned list.
+
+2. **Input a Prompt:**
+ - Enter your prompt manually or use **speech recognition** to dictate it.
+ - You can also upload **images** or **text files** to be included in the prompt.
+
+3. **Generate Content:**
+ - Click the "Generate" button to produce the content.
+ - The AI will generate text, code, or images depending on the prompt.
+
+4. **View and Interact with Results:**
+ - **For Text/Code:** The generated content will appear in the conversation window.
+ - **For Images:** Generated images will be shown as thumbnails. Click on them to view in full size.
+
+5. **Backup and Restore Conversations:**
+ - Backup all your conversations as a **JSON file** and restore them at any time via the settings.
+
+6. **Manage Conversations:**
+ - Delete or rename any conversation by clicking the three dots next to the conversation title.
+
+### Gallery Functionality
+
+- **Image Thumbnails:** All generated images are shown as thumbnails within the conversation window.
+- **Lightbox View:** Clicking a thumbnail opens the image in a larger view along with the associated prompt.
+- **Automatic Image Download:** Enable automatic downloading of generated images in the settings.
+
+## Settings Configuration
+
+1. **API Key:** Set your API key when starting the server by defining the `G4F_API_KEY` environment variable.
+2. **Provider Visibility:** Hide unused providers through the settings.
+3. **Theme:** Toggle between **dark mode** and **light mode**. Disabling dark mode switches to a white theme.
+4. **DuckDuckGo Access:** Enable DuckDuckGo for privacy-focused web browsing.
+5. **Speech Recognition Language:** Set your preferred language for speech recognition.
+6. **Log Access:** View logs, including error and debug messages, from the settings menu.
+7. **Automatic Image Download:** Enable this feature to automatically download generated images.
+
+## Known Issues
+
+- **Gallery Loading:** Large images may take time to load depending on system performance.
+- **Speech Recognition Accuracy:** Accuracy may vary depending on microphone quality or speech clarity.
+- **Provider Downtime:** Some AI providers may experience downtime or disruptions.
+
+[Return to Home](/)
\ No newline at end of file
diff --git a/etc/unittest/backend.py b/etc/unittest/backend.py
index a90bf253..75ab6b47 100644
--- a/etc/unittest/backend.py
+++ b/etc/unittest/backend.py
@@ -35,7 +35,7 @@ class TestBackendApi(unittest.TestCase):
def test_get_providers(self):
response = self.api.get_providers()
- self.assertIsInstance(response, dict)
+ self.assertIsInstance(response, list)
self.assertTrue(len(response) > 0)
def test_search(self):
diff --git a/g4f/Provider/Copilot.py b/g4f/Provider/Copilot.py
index ee9daf33..23f175ac 100644
--- a/g4f/Provider/Copilot.py
+++ b/g4f/Provider/Copilot.py
@@ -29,13 +29,9 @@ from .. import debug
class Conversation(BaseConversation):
conversation_id: str
- cookie_jar: CookieJar
- access_token: str
- def __init__(self, conversation_id: str, cookie_jar: CookieJar, access_token: str = None):
+ def __init__(self, conversation_id: str):
self.conversation_id = conversation_id
- self.cookie_jar = cookie_jar
- self.access_token = access_token
class Copilot(AbstractProvider, ProviderModelMixin):
label = "Microsoft Copilot"
@@ -50,6 +46,9 @@ class Copilot(AbstractProvider, ProviderModelMixin):
websocket_url = "wss://copilot.microsoft.com/c/api/chat?api-version=2"
conversation_url = f"{url}/c/api/conversations"
+
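+    # Cache the access token and cookie jar at class level so they are reused across requests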
+ _access_token: str = None
+ _cookies: CookieJar = None
@classmethod
def create_completion(
@@ -69,42 +68,43 @@ class Copilot(AbstractProvider, ProviderModelMixin):
raise MissingRequirementsError('Install or update "curl_cffi" package | pip install -U curl_cffi')
websocket_url = cls.websocket_url
- access_token = None
headers = None
- cookies = conversation.cookie_jar if conversation is not None else None
if cls.needs_auth or image is not None:
- if conversation is None or conversation.access_token is None:
+ if cls._access_token is None:
try:
- access_token, cookies = readHAR(cls.url)
+ cls._access_token, cls._cookies = readHAR(cls.url)
except NoValidHarFileError as h:
debug.log(f"Copilot: {h}")
try:
get_running_loop(check_nested=True)
- access_token, cookies = asyncio.run(get_access_token_and_cookies(cls.url, proxy))
+ cls._access_token, cls._cookies = asyncio.run(get_access_token_and_cookies(cls.url, proxy))
except MissingRequirementsError:
raise h
- else:
- access_token = conversation.access_token
- debug.log(f"Copilot: Access token: {access_token[:7]}...{access_token[-5:]}")
- websocket_url = f"{websocket_url}&accessToken={quote(access_token)}"
- headers = {"authorization": f"Bearer {access_token}"}
+ debug.log(f"Copilot: Access token: {cls._access_token[:7]}...{cls._access_token[-5:]}")
+ websocket_url = f"{websocket_url}&accessToken={quote(cls._access_token)}"
+ headers = {"authorization": f"Bearer {cls._access_token}"}
with Session(
timeout=timeout,
proxy=proxy,
impersonate="chrome",
headers=headers,
- cookies=cookies,
+ cookies=cls._cookies,
) as session:
+ if cls._access_token is not None:
+ cls._cookies = session.cookies.jar
response = session.get("https://copilot.microsoft.com/c/api/user")
raise_for_status(response)
- debug.log(f"Copilot: User: {response.json().get('firstName', 'null')}")
+ user = response.json().get('firstName')
+ if user is None:
+ cls._access_token = None
+ debug.log(f"Copilot: User: {user or 'null'}")
if conversation is None:
response = session.post(cls.conversation_url)
raise_for_status(response)
conversation_id = response.json().get("id")
if return_conversation:
- yield Conversation(conversation_id, session.cookies.jar, access_token)
+ yield Conversation(conversation_id)
prompt = format_prompt(messages)
debug.log(f"Copilot: Created conversation: {conversation_id}")
else:
@@ -162,7 +162,7 @@ class Copilot(AbstractProvider, ProviderModelMixin):
raise RuntimeError(f"Invalid response: {last_msg}")
async def get_access_token_and_cookies(url: str, proxy: str = None, target: str = "ChatAI",):
- browser = await get_nodriver(proxy=proxy)
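+    # A named user_data_dir keeps a persistent, per-provider browser profile for the login session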
+ browser = await get_nodriver(proxy=proxy, user_data_dir="copilot")
page = await browser.get(url)
access_token = None
while access_token is None:
diff --git a/g4f/Provider/Flux.py b/g4f/Provider/Flux.py
new file mode 100644
index 00000000..05983678
--- /dev/null
+++ b/g4f/Provider/Flux.py
@@ -0,0 +1,58 @@
+from __future__ import annotations
+
+import json
+from aiohttp import ClientSession
+
+from ..typing import AsyncResult, Messages
+from ..image import ImageResponse, ImagePreview
+from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
+
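+# Image generation via the public FLUX.1-dev Gradio Space on Hugging Face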
+class Flux(AsyncGeneratorProvider, ProviderModelMixin):
+ label = "Flux Provider"
+ url = "https://black-forest-labs-flux-1-dev.hf.space"
+ api_endpoint = "/gradio_api/call/infer"
+ working = True
+ default_model = 'flux-1-dev'
+ models = [default_model]
+ image_models = [default_model]
+
+ @classmethod
+ async def create_async_generator(
+ cls, model: str, messages: Messages, prompt: str = None, api_key: str = None, proxy: str = None, **kwargs
+ ) -> AsyncResult:
+ headers = {
+ "Content-Type": "application/json",
+ "Accept": "application/json",
+ }
+ if api_key is not None:
+ headers["Authorization"] = f"Bearer {api_key}"
+ async with ClientSession(headers=headers) as session:
+ prompt = messages[-1]["content"] if prompt is None else prompt
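+            # Positional Gradio inputs; assumed order: prompt, seed, randomize seed, width, height, guidance scale, inference steps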
+ data = {
+ "data": [prompt, 0, True, 1024, 1024, 3.5, 28]
+ }
+ async with session.post(f"{cls.url}{cls.api_endpoint}", json=data, proxy=proxy) as response:
+ response.raise_for_status()
+ event_id = (await response.json()).get("event_id")
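+                # Poll the Gradio event stream; progress and results arrive as server-sent events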
+ async with session.get(f"{cls.url}{cls.api_endpoint}/{event_id}") as event_response:
+ event_response.raise_for_status()
+ event = None
+ async for chunk in event_response.content:
+ if chunk.startswith(b"event: "):
+ event = chunk[7:].decode(errors="replace").strip()
+ if chunk.startswith(b"data: "):
+ if event == "error":
+ raise RuntimeError(f"GPU token limit exceeded: {chunk.decode(errors='replace')}")
+ if event in ("complete", "generating"):
+ try:
+ data = json.loads(chunk[6:])
+ if data is None:
+ continue
+ url = data[0]["url"]
+ except (json.JSONDecodeError, KeyError, TypeError) as e:
+                                    raise RuntimeError(f"Failed to parse image URL: {chunk.decode(errors='replace')}") from e
+ if event == "generating":
+ yield ImagePreview(url, prompt)
+ else:
+ yield ImageResponse(url, prompt)
+ break
\ No newline at end of file
diff --git a/g4f/Provider/__init__.py b/g4f/Provider/__init__.py
index 98ad1041..ee366117 100644
--- a/g4f/Provider/__init__.py
+++ b/g4f/Provider/__init__.py
@@ -39,6 +39,7 @@ from .TeachAnything import TeachAnything
from .Upstage import Upstage
from .You import You
from .Mhystical import Mhystical
+from .Flux import Flux
import sys
@@ -59,4 +60,4 @@ __map__: dict[str, ProviderType] = dict([
])
class ProviderUtils:
- convert: dict[str, ProviderType] = __map__
+ convert: dict[str, ProviderType] = __map__
\ No newline at end of file
diff --git a/g4f/Provider/needs_auth/BingCreateImages.py b/g4f/Provider/needs_auth/BingCreateImages.py
index b95a78c3..c2d403d7 100644
--- a/g4f/Provider/needs_auth/BingCreateImages.py
+++ b/g4f/Provider/needs_auth/BingCreateImages.py
@@ -9,11 +9,11 @@ from ..bing.create_images import create_images, create_session
class BingCreateImages(AsyncGeneratorProvider, ProviderModelMixin):
label = "Microsoft Designer in Bing"
- parent = "Bing"
url = "https://www.bing.com/images/create"
working = True
needs_auth = True
- image_models = ["dall-e"]
+ image_models = ["dall-e-3"]
+ models = image_models
def __init__(self, cookies: Cookies = None, proxy: str = None, api_key: str = None) -> None:
if api_key is not None:
diff --git a/g4f/Provider/needs_auth/Gemini.py b/g4f/Provider/needs_auth/Gemini.py
index 3c842f3c..85b4ad0b 100644
--- a/g4f/Provider/needs_auth/Gemini.py
+++ b/g4f/Provider/needs_auth/Gemini.py
@@ -69,7 +69,7 @@ class Gemini(AsyncGeneratorProvider):
if debug.logging:
print("Skip nodriver login in Gemini provider")
return
- browser = await get_nodriver(proxy=proxy)
+ browser = await get_nodriver(proxy=proxy, user_data_dir="gemini")
login_url = os.environ.get("G4F_LOGIN_URL")
if login_url:
yield f"Please login: [Google Gemini]({login_url})\n\n"
diff --git a/g4f/Provider/needs_auth/HuggingChat.py b/g4f/Provider/needs_auth/HuggingChat.py
index fc50e4d8..2f3dbb57 100644
--- a/g4f/Provider/needs_auth/HuggingChat.py
+++ b/g4f/Provider/needs_auth/HuggingChat.py
@@ -12,6 +12,7 @@ from ...typing import CreateResult, Messages, Cookies
from ...errors import MissingRequirementsError
from ...requests.raise_for_status import raise_for_status
from ...cookies import get_cookies
+from ...image import ImageResponse
from ..base_provider import ProviderModelMixin, AbstractProvider, BaseConversation
from ..helper import format_prompt
from ... import debug
@@ -26,10 +27,12 @@ class HuggingChat(AbstractProvider, ProviderModelMixin):
working = True
supports_stream = True
needs_auth = True
- default_model = "meta-llama/Meta-Llama-3.1-70B-Instruct"
-
+ default_model = "Qwen/Qwen2.5-72B-Instruct"
+ image_models = [
+ "black-forest-labs/FLUX.1-dev"
+ ]
models = [
- 'Qwen/Qwen2.5-72B-Instruct',
+ default_model,
'meta-llama/Meta-Llama-3.1-70B-Instruct',
'CohereForAI/c4ai-command-r-plus-08-2024',
'Qwen/QwQ-32B-Preview',
@@ -39,8 +42,8 @@ class HuggingChat(AbstractProvider, ProviderModelMixin):
'NousResearch/Hermes-3-Llama-3.1-8B',
'mistralai/Mistral-Nemo-Instruct-2407',
'microsoft/Phi-3.5-mini-instruct',
+ *image_models
]
-
model_aliases = {
"qwen-2.5-72b": "Qwen/Qwen2.5-72B-Instruct",
"llama-3.1-70b": "meta-llama/Meta-Llama-3.1-70B-Instruct",
@@ -52,6 +55,7 @@ class HuggingChat(AbstractProvider, ProviderModelMixin):
"hermes-3": "NousResearch/Hermes-3-Llama-3.1-8B",
"mistral-nemo": "mistralai/Mistral-Nemo-Instruct-2407",
"phi-3.5-mini": "microsoft/Phi-3.5-mini-instruct",
+ "flux-dev": "black-forest-labs/FLUX.1-dev",
}
@classmethod
@@ -109,7 +113,7 @@ class HuggingChat(AbstractProvider, ProviderModelMixin):
"is_retry": False,
"is_continue": False,
"web_search": web_search,
- "tools": []
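+            # The fixed id below is assumed to be HuggingChat's built-in image-generation tool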
+ "tools": ["000000000000000000000001"] if model in cls.image_models else [],
}
headers = {
@@ -162,14 +166,18 @@ class HuggingChat(AbstractProvider, ProviderModelMixin):
elif line["type"] == "finalAnswer":
break
-
- full_response = full_response.replace('<|im_end|', '').replace('\u0000', '').strip()
+ elif line["type"] == "file":
+ url = f"https://huggingface.co/chat/conversation/{conversation.conversation_id}/output/{line['sha']}"
+ yield ImageResponse(url, alt=messages[-1]["content"], options={"cookies": cookies})
+ full_response = full_response.replace('<|im_end|', '').replace('\u0000', '').strip()
if not stream:
yield full_response
@classmethod
def create_conversation(cls, session: Session, model: str):
+ if model in cls.image_models:
+ model = cls.default_model
json_data = {
'model': model,
}
diff --git a/g4f/Provider/needs_auth/HuggingFace.py b/g4f/Provider/needs_auth/HuggingFace.py
index 1884f415..94530252 100644
--- a/g4f/Provider/needs_auth/HuggingFace.py
+++ b/g4f/Provider/needs_auth/HuggingFace.py
@@ -1,21 +1,25 @@
from __future__ import annotations
import json
+import base64
+import random
from ...typing import AsyncResult, Messages
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ...errors import ModelNotFoundError
from ...requests import StreamSession, raise_for_status
+from ...image import ImageResponse
from .HuggingChat import HuggingChat
class HuggingFace(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://huggingface.co/chat"
working = True
- needs_auth = True
supports_message_history = True
default_model = HuggingChat.default_model
- models = HuggingChat.models
+ default_image_model = "black-forest-labs/FLUX.1-dev"
+ models = [*HuggingChat.models, default_image_model]
+ image_models = [default_image_model]
model_aliases = HuggingChat.model_aliases
@classmethod
@@ -29,6 +33,7 @@ class HuggingFace(AsyncGeneratorProvider, ProviderModelMixin):
api_key: str = None,
max_new_tokens: int = 1024,
temperature: float = 0.7,
+ prompt: str = None,
**kwargs
) -> AsyncResult:
model = cls.get_model(model)
@@ -50,16 +55,22 @@ class HuggingFace(AsyncGeneratorProvider, ProviderModelMixin):
}
if api_key is not None:
headers["Authorization"] = f"Bearer {api_key}"
- params = {
- "return_full_text": False,
- "max_new_tokens": max_new_tokens,
- "temperature": temperature,
- **kwargs
- }
- payload = {"inputs": format_prompt(messages), "parameters": params, "stream": stream}
+ if model in cls.image_models:
+ stream = False
+ prompt = messages[-1]["content"] if prompt is None else prompt
+ payload = {"inputs": prompt, "parameters": {"seed": random.randint(0, 2**32)}}
+ else:
+ params = {
+ "return_full_text": False,
+ "max_new_tokens": max_new_tokens,
+ "temperature": temperature,
+ **kwargs
+ }
+ payload = {"inputs": format_prompt(messages), "parameters": params, "stream": stream}
async with StreamSession(
headers=headers,
- proxy=proxy
+ proxy=proxy,
+ timeout=600
) as session:
async with session.post(f"{api_base.rstrip('/')}/models/{model}", json=payload) as response:
if response.status == 404:
@@ -78,7 +89,12 @@ class HuggingFace(AsyncGeneratorProvider, ProviderModelMixin):
if chunk:
yield chunk
else:
- yield (await response.json())[0]["generated_text"].strip()
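+                    # Image models return raw bytes; wrap them in a base64 data: URI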
+ if response.headers["content-type"].startswith("image/"):
+ base64_data = base64.b64encode(b"".join([chunk async for chunk in response.iter_content()]))
+ url = f"data:{response.headers['content-type']};base64,{base64_data.decode()}"
+ yield ImageResponse(url, prompt)
+ else:
+ yield (await response.json())[0]["generated_text"].strip()
def format_prompt(messages: Messages) -> str:
system_messages = [message["content"] for message in messages if message["role"] == "system"]
diff --git a/g4f/Provider/needs_auth/MicrosoftDesigner.py b/g4f/Provider/needs_auth/MicrosoftDesigner.py
index 715f11ac..57b96e2d 100644
--- a/g4f/Provider/needs_auth/MicrosoftDesigner.py
+++ b/g4f/Provider/needs_auth/MicrosoftDesigner.py
@@ -142,7 +142,7 @@ def readHAR(url: str) -> tuple[str, str]:
return api_key, user_agent
async def get_access_token_and_user_agent(url: str, proxy: str = None):
- browser = await get_nodriver(proxy=proxy)
+ browser = await get_nodriver(proxy=proxy, user_data_dir="designer")
page = await browser.get(url)
user_agent = await page.evaluate("navigator.userAgent")
access_token = None
diff --git a/g4f/Provider/needs_auth/OpenaiChat.py b/g4f/Provider/needs_auth/OpenaiChat.py
index d9ed6078..3ce5b1a1 100644
--- a/g4f/Provider/needs_auth/OpenaiChat.py
+++ b/g4f/Provider/needs_auth/OpenaiChat.py
@@ -510,7 +510,7 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
@classmethod
async def nodriver_auth(cls, proxy: str = None):
- browser = await get_nodriver(proxy=proxy)
+ browser = await get_nodriver(proxy=proxy, user_data_dir="chatgpt")
page = browser.main_tab
def on_request(event: nodriver.cdp.network.RequestWillBeSent):
if event.request.url == start_url or event.request.url.startswith(conversation_url):
diff --git a/g4f/api/__init__.py b/g4f/api/__init__.py
index 1d79b7d0..b036603e 100644
--- a/g4f/api/__init__.py
+++ b/g4f/api/__init__.py
@@ -276,16 +276,12 @@ class Api:
HTTP_200_OK: {"model": List[ModelResponseModel]},
})
async def models():
- model_list = dict(
- (model, g4f.models.ModelUtils.convert[model])
- for model in g4f.Model.__all__()
- )
return [{
'id': model_id,
'object': 'model',
'created': 0,
'owned_by': model.base_provider
- } for model_id, model in model_list.items()]
+ } for model_id, model in g4f.models.ModelUtils.convert.items()]
@self.app.get("/v1/models/{model_name}", responses={
HTTP_200_OK: {"model": ModelResponseModel},
diff --git a/g4f/client/__init__.py b/g4f/client/__init__.py
index d95618f1..52349e72 100644
--- a/g4f/client/__init__.py
+++ b/g4f/client/__init__.py
@@ -74,7 +74,7 @@ def iter_response(
finish_reason = "stop"
if stream:
- yield ChatCompletionChunk.construct(chunk, None, completion_id, int(time.time()))
+ yield ChatCompletionChunk.model_construct(chunk, None, completion_id, int(time.time()))
if finish_reason is not None:
break
@@ -84,12 +84,12 @@ def iter_response(
finish_reason = "stop" if finish_reason is None else finish_reason
if stream:
- yield ChatCompletionChunk.construct(None, finish_reason, completion_id, int(time.time()))
+ yield ChatCompletionChunk.model_construct(None, finish_reason, completion_id, int(time.time()))
else:
if response_format is not None and "type" in response_format:
if response_format["type"] == "json_object":
content = filter_json(content)
- yield ChatCompletion.construct(content, finish_reason, completion_id, int(time.time()))
+ yield ChatCompletion.model_construct(content, finish_reason, completion_id, int(time.time()))
# Synchronous iter_append_model_and_provider function
def iter_append_model_and_provider(response: ChatCompletionResponseType) -> ChatCompletionResponseType:
@@ -138,7 +138,7 @@ async def async_iter_response(
finish_reason = "stop"
if stream:
- yield ChatCompletionChunk.construct(chunk, None, completion_id, int(time.time()))
+ yield ChatCompletionChunk.model_construct(chunk, None, completion_id, int(time.time()))
if finish_reason is not None:
break
@@ -146,12 +146,12 @@ async def async_iter_response(
finish_reason = "stop" if finish_reason is None else finish_reason
if stream:
- yield ChatCompletionChunk.construct(None, finish_reason, completion_id, int(time.time()))
+ yield ChatCompletionChunk.model_construct(None, finish_reason, completion_id, int(time.time()))
else:
if response_format is not None and "type" in response_format:
if response_format["type"] == "json_object":
content = filter_json(content)
- yield ChatCompletion.construct(content, finish_reason, completion_id, int(time.time()))
+ yield ChatCompletion.model_construct(content, finish_reason, completion_id, int(time.time()))
finally:
await safe_aclose(response)
@@ -422,7 +422,7 @@ class Images:
last_provider = get_last_provider(True)
if response_format == "url":
# Return original URLs without saving locally
- images = [Image.construct(url=image, revised_prompt=response.alt) for image in response.get_list()]
+ images = [Image.model_construct(url=image, revised_prompt=response.alt) for image in response.get_list()]
else:
# Save locally for None (default) case
images = await copy_images(response.get_list(), response.get("cookies"), proxy)
@@ -430,11 +430,11 @@ class Images:
async def process_image_item(image_file: str) -> Image:
with open(os.path.join(images_dir, os.path.basename(image_file)), "rb") as file:
image_data = base64.b64encode(file.read()).decode()
- return Image.construct(b64_json=image_data, revised_prompt=response.alt)
+ return Image.model_construct(b64_json=image_data, revised_prompt=response.alt)
images = await asyncio.gather(*[process_image_item(image) for image in images])
else:
- images = [Image.construct(url=f"/images/{os.path.basename(image)}", revised_prompt=response.alt) for image in images]
- return ImagesResponse.construct(
+ images = [Image.model_construct(url=f"/images/{os.path.basename(image)}", revised_prompt=response.alt) for image in images]
+ return ImagesResponse.model_construct(
created=int(time.time()),
data=images,
model=last_provider.get("model") if model is None else model,
diff --git a/g4f/client/stubs.py b/g4f/client/stubs.py
index 414651de..57532769 100644
--- a/g4f/client/stubs.py
+++ b/g4f/client/stubs.py
@@ -10,7 +10,7 @@ try:
except ImportError:
class BaseModel():
@classmethod
- def construct(cls, **data):
+ def model_construct(cls, **data):
new = cls()
for key, value in data.items():
setattr(new, key, value)
@@ -19,6 +19,13 @@ except ImportError:
def __init__(self, **config):
pass
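+# Compatibility shim: pydantic v2 renamed construct() to model_construct();
+# prefer model_construct() when available and fall back to v1's construct().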
+class BaseModel(BaseModel):
+ @classmethod
+ def model_construct(cls, **data):
+ if hasattr(super(), "model_construct"):
+ return super().model_construct(**data)
+ return cls.construct(**data)
+
class ChatCompletionChunk(BaseModel):
id: str
object: str
@@ -28,21 +35,21 @@ class ChatCompletionChunk(BaseModel):
choices: List[ChatCompletionDeltaChoice]
@classmethod
- def construct(
+ def model_construct(
cls,
content: str,
finish_reason: str,
completion_id: str = None,
created: int = None
):
- return super().construct(
+ return super().model_construct(
id=f"chatcmpl-{completion_id}" if completion_id else None,
object="chat.completion.cunk",
created=created,
model=None,
provider=None,
- choices=[ChatCompletionDeltaChoice.construct(
- ChatCompletionDelta.construct(content),
+ choices=[ChatCompletionDeltaChoice.model_construct(
+ ChatCompletionDelta.model_construct(content),
finish_reason
)]
)
@@ -52,8 +59,8 @@ class ChatCompletionMessage(BaseModel):
content: str
@classmethod
- def construct(cls, content: str):
- return super().construct(role="assistant", content=content)
+ def model_construct(cls, content: str):
+ return super().model_construct(role="assistant", content=content)
class ChatCompletionChoice(BaseModel):
index: int
@@ -61,8 +68,8 @@ class ChatCompletionChoice(BaseModel):
finish_reason: str
@classmethod
- def construct(cls, message: ChatCompletionMessage, finish_reason: str):
- return super().construct(index=0, message=message, finish_reason=finish_reason)
+ def model_construct(cls, message: ChatCompletionMessage, finish_reason: str):
+ return super().model_construct(index=0, message=message, finish_reason=finish_reason)
class ChatCompletion(BaseModel):
id: str
@@ -78,21 +85,21 @@ class ChatCompletion(BaseModel):
}])
@classmethod
- def construct(
+ def model_construct(
cls,
content: str,
finish_reason: str,
completion_id: str = None,
created: int = None
):
- return super().construct(
+ return super().model_construct(
id=f"chatcmpl-{completion_id}" if completion_id else None,
object="chat.completion",
created=created,
model=None,
provider=None,
- choices=[ChatCompletionChoice.construct(
- ChatCompletionMessage.construct(content),
+ choices=[ChatCompletionChoice.model_construct(
+ ChatCompletionMessage.model_construct(content),
finish_reason
)],
usage={
@@ -107,8 +114,8 @@ class ChatCompletionDelta(BaseModel):
content: str
@classmethod
- def construct(cls, content: Optional[str]):
- return super().construct(role="assistant", content=content)
+ def model_construct(cls, content: Optional[str]):
+ return super().model_construct(role="assistant", content=content)
class ChatCompletionDeltaChoice(BaseModel):
index: int
@@ -116,8 +123,8 @@ class ChatCompletionDeltaChoice(BaseModel):
finish_reason: Optional[str]
@classmethod
- def construct(cls, delta: ChatCompletionDelta, finish_reason: Optional[str]):
- return super().construct(index=0, delta=delta, finish_reason=finish_reason)
+ def model_construct(cls, delta: ChatCompletionDelta, finish_reason: Optional[str]):
+ return super().model_construct(index=0, delta=delta, finish_reason=finish_reason)
class Image(BaseModel):
url: Optional[str]
@@ -125,8 +132,8 @@ class Image(BaseModel):
revised_prompt: Optional[str]
@classmethod
- def construct(cls, url: str = None, b64_json: str = None, revised_prompt: str = None):
- return super().construct(**filter_none(
+ def model_construct(cls, url: str = None, b64_json: str = None, revised_prompt: str = None):
+ return super().model_construct(**filter_none(
url=url,
b64_json=b64_json,
revised_prompt=revised_prompt
@@ -139,10 +146,10 @@ class ImagesResponse(BaseModel):
created: int
@classmethod
- def construct(cls, data: List[Image], created: int = None, model: str = None, provider: str = None):
+ def model_construct(cls, data: List[Image], created: int = None, model: str = None, provider: str = None):
if created is None:
created = int(time())
- return super().construct(
+ return super().model_construct(
data=data,
model=model,
provider=provider,
diff --git a/g4f/cookies.py b/g4f/cookies.py
index 52f7a40f..832e31c9 100644
--- a/g4f/cookies.py
+++ b/g4f/cookies.py
@@ -34,8 +34,8 @@ try:
browsers = [
_g4f,
- chrome, chromium, opera, opera_gx,
- brave, edge, vivaldi, firefox,
+ chrome, chromium, firefox, opera, opera_gx,
+ brave, edge, vivaldi,
]
has_browser_cookie3 = True
except ImportError:
diff --git a/g4f/gui/client/index.html b/g4f/gui/client/index.html
index 301de1b8..cd1f72a6 100644
--- a/g4f/gui/client/index.html
+++ b/g4f/gui/client/index.html
@@ -20,6 +20,7 @@
+
-
+