Mirror of https://github.com/xtekky/gpt4free.git, synced 2024-12-23 11:02:40 +03:00

Merge pull request #2463 from hlohaus/neww: Add Authentication Setup Guide
Commit 0cb9ed0cbb

README.md (10 lines changed)
@@ -107,7 +107,7 @@ docker run \
   hlohaus789/g4f:latest-slim \
   rm -r -f /app/g4f/ \
   && pip install -U g4f[slim] \
-  && python -m g4f.cli api --gui --debug
+  && python -m g4f --debug
 ```
 
 It also updates the `g4f` package at startup and installs any new required dependencies.
@@ -134,7 +134,7 @@ By following these steps, you should be able to successfully install and run the
 
 Run the **Webview UI** on other Platforms:
 
-- [/docs/guides/webview](docs/webview.md)
+- [/docs/webview](docs/webview.md)
 
 ##### Use your smartphone:
 
@@ -204,7 +204,7 @@ image_url = response.data[0].url
 print(f"Generated image URL: {image_url}")
 ```
 
-[![Image with cat](/docs/cat.jpeg)](docs/client.md)
+[![Image with cat](/docs/images/cat.jpeg)](docs/client.md)
 
 #### **Full Documentation for Python API**
 - **New:**
@@ -241,6 +241,10 @@ This API is designed for straightforward implementation and enhanced compatibili
 
 ### Configuration
 
+#### Authentication
+
+Refer to the [G4F Authentication Setup Guide](docs/authentication.md) for detailed instructions on setting up authentication.
+
 #### Cookies
 
 Cookies are essential for using Meta AI and Microsoft Designer to create images.
 
@@ -145,7 +145,7 @@ async def main():
         provider=g4f.Provider.CopilotAccount
     )
 
-    image = requests.get("https://raw.githubusercontent.com/xtekky/gpt4free/refs/heads/main/docs/cat.jpeg", stream=True).raw
+    image = requests.get("https://raw.githubusercontent.com/xtekky/gpt4free/refs/heads/main/docs/images/cat.jpeg", stream=True).raw
 
     response = await client.chat.completions.create(
         model=g4f.models.default,
docs/authentication.md (new file, 139 lines)
@@ -0,0 +1,139 @@
# G4F Authentication Setup Guide

This documentation explains how to set up Basic Authentication for the GUI and API key authentication for the API when running the G4F server.

## Prerequisites

Before proceeding, ensure you have the following installed:
- Python 3.x
- G4F package installed (ensure it is set up and working)
- Basic knowledge of using environment variables on your operating system

## Steps to Set Up Authentication

### 1. API Key Authentication for Both GUI and API

To secure both the GUI and the API, you'll authenticate using an API key. The API key should be injected via an environment variable and passed to both the GUI (via Basic Authentication) and the API.

#### Steps to Inject the API Key Using Environment Variables:

1. **Set the environment variable** for your API key:

   On Linux/macOS:
   ```bash
   export G4F_API_KEY="your-api-key-here"
   ```

   On Windows (Command Prompt):
   ```cmd
   set G4F_API_KEY=your-api-key-here
   ```

   On Windows (PowerShell):
   ```powershell
   $env:G4F_API_KEY="your-api-key-here"
   ```

   Replace `your-api-key-here` with your actual API key.
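On the client side, the same variable can be read back with the standard library — a minimal sketch (the fallback string is a placeholder so the snippet also runs when the variable is unset):

```python
import os

# Read the API key injected via the environment; fall back to the
# placeholder value when G4F_API_KEY is not set.
api_key = os.environ.get("G4F_API_KEY", "your-api-key-here")
print(f"Using an API key of length {len(api_key)}")
```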
2. **Run the G4F server with the API key injected**:

   Use the following command to start the G4F server. The API key will be passed to both the GUI and the API:

   ```bash
   python -m g4f --debug --port 8080 --g4f-api-key $G4F_API_KEY
   ```

   - `--debug` enables debug mode for more verbose logs.
   - `--port 8080` specifies the port on which the server will run (you can change this if needed).
   - `--g4f-api-key` specifies the API key for both the GUI and the API.

#### Example:

```bash
export G4F_API_KEY="my-secret-api-key"
python -m g4f --debug --port 8080 --g4f-api-key $G4F_API_KEY
```

Now, both the GUI and API will require the correct API key for access.
---

### 2. Accessing the GUI with Basic Authentication

The GUI uses **Basic Authentication**, where the **username** can be any value, and the **password** is your API key.

#### Example:

To access the GUI, open your web browser and navigate to `http://localhost:8080/chat/`. You will be prompted for a username and password.

- **Username**: You can use any username (e.g., `user` or `admin`).
- **Password**: Enter your API key (the same key you set in the `G4F_API_KEY` environment variable).
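Under the hood, Basic Authentication is just a base64-encoded `username:password` pair in the `Authorization` header. A stdlib-only sketch of the header a browser sends (the username and key below are placeholder values):

```python
import base64

# Encode "username:password" as the Basic Authentication token; with
# G4F, the password slot carries the API key.
username = "admin"
api_key = "my-secret-api-key"
token = base64.b64encode(f"{username}:{api_key}".encode()).decode()
auth_header = {"Authorization": f"Basic {token}"}
print(auth_header)
```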
---

### 3. Python Example for Accessing the API

To interact with the API, you can send requests by including the `g4f-api-key` in the headers. Here's an example of how to do this using the `requests` library in Python.

#### Example Code to Send a Request:

```python
import requests

url = "http://localhost:8080/v1/chat/completions"

# Body of the request
body = {
    "model": "your-model-name",  # Replace with your model name
    "provider": "your-provider",  # Replace with the provider name
    "messages": [
        {
            "role": "user",
            "content": "Hello"
        }
    ]
}

# API Key (can be set as an environment variable)
api_key = "your-api-key-here"  # Replace with your actual API key

# Send the POST request
response = requests.post(url, json=body, headers={"g4f-api-key": api_key})

# Check the response
print(response.status_code)
print(response.json())
```

In this example:
- Replace `"your-api-key-here"` with your actual API key.
- `"model"` and `"provider"` should be replaced with the appropriate model and provider you're using.
- The `messages` array contains the conversation you want to send to the API.

#### Response:

The response will contain the output of the API request, such as the model's completion or other relevant data, which you can then process in your application.
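Rather than hard-coding the key as above, the client can pull it from the same `G4F_API_KEY` variable used to start the server, so both sides share one source of truth — a stdlib-only sketch (URL and model name are illustrative placeholders):

```python
import os

# Compose the pieces of an API request; the key comes from G4F_API_KEY,
# with a placeholder fallback so the snippet runs when it is unset.
url = "http://localhost:8080/v1/chat/completions"
headers = {"g4f-api-key": os.environ.get("G4F_API_KEY", "your-api-key-here")}
body = {
    "model": "your-model-name",
    "messages": [{"role": "user", "content": "Hello"}],
}
print(url, body["model"])
```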
---

### 4. Testing the Setup

- **Accessing the GUI**: Open a web browser and navigate to `http://localhost:8080/chat/`. The GUI will now prompt you for a username and password. You can enter any username (e.g., `admin`), and for the password, enter the API key you set up in the environment variable.

- **Accessing the API**: Use the Python code example above to send requests to the API. Ensure the correct API key is included in the `g4f-api-key` header.

---

### 5. Troubleshooting

- **GUI Access Issues**: If you're unable to access the GUI, ensure that you are using the correct API key as the password.
- **API Access Issues**: If the API is rejecting requests, verify that the `G4F_API_KEY` environment variable is correctly set and passed to the server. You can also check the server logs for more detailed error messages.

---

## Summary

By following the steps above, you will have successfully set up Basic Authentication for the G4F GUI (using any username and the API key as the password) and API key authentication for the API. This ensures that only authorized users can access both the interface and make API requests.

[Return to Home](/)
@@ -181,7 +181,7 @@ client = Client(
 )
 
 response = client.images.create_variation(
-    image=open("cat.jpg", "rb"),
+    image=open("docs/images/cat.jpg", "rb"),
     model="dall-e-3",
     # Add any other necessary parameters
 )
@@ -235,7 +235,7 @@ client = Client(
 )
 
 image = requests.get("https://raw.githubusercontent.com/xtekky/gpt4free/refs/heads/main/docs/cat.jpeg", stream=True).raw
-# Or: image = open("docs/cat.jpeg", "rb")
+# Or: image = open("docs/images/cat.jpeg", "rb")
 
 response = client.chat.completions.create(
     model=g4f.models.default,
Renamed image files (sizes unchanged): 8.5 KiB, 5.7 KiB, 8.1 KiB
@@ -6,6 +6,7 @@ from aiohttp import ClientSession
 
 from ..typing import AsyncResult, Messages
 from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
+from .. import debug
 
 class Blackbox2(AsyncGeneratorProvider, ProviderModelMixin):
     url = "https://www.blackbox.ai"
@@ -13,7 +14,7 @@ class Blackbox2(AsyncGeneratorProvider, ProviderModelMixin):
     working = True
     supports_system_message = True
     supports_message_history = True
+    supports_stream = False
     default_model = 'llama-3.1-70b'
     models = [default_model]
 
@@ -62,8 +63,8 @@ class Blackbox2(AsyncGeneratorProvider, ProviderModelMixin):
                     raise KeyError("'prompt' key not found in the response")
             except Exception as e:
                 if attempt == max_retries - 1:
-                    yield f"Error after {max_retries} attempts: {str(e)}"
+                    raise RuntimeError(f"Error after {max_retries} attempts: {str(e)}")
                 else:
                     wait_time = delay * (2 ** attempt) + random.uniform(0, 1)
-                    print(f"Attempt {attempt + 1} failed. Retrying in {wait_time:.2f} seconds...")
+                    debug.log(f"Attempt {attempt + 1} failed. Retrying in {wait_time:.2f} seconds...")
                     await asyncio.sleep(wait_time)
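The retry scheme above is exponential backoff with jitter: the wait grows as `delay * 2**attempt` plus a random offset, and the final failure is raised instead of yielded. A self-contained sketch of the same pattern (names are illustrative; the delay and jitter are scaled down for the demo):

```python
import asyncio
import random

async def retry_with_backoff(task, max_retries=3, delay=0.01):
    """Retry an async task, sleeping delay * 2**attempt plus jitter
    between failures and raising once the last attempt fails."""
    for attempt in range(max_retries):
        try:
            return await task()
        except Exception as e:
            if attempt == max_retries - 1:
                raise RuntimeError(f"Error after {max_retries} attempts: {e}")
            wait_time = delay * (2 ** attempt) + random.uniform(0, 0.01)
            await asyncio.sleep(wait_time)

# A task that fails twice, then succeeds.
calls = {"count": 0}
async def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ValueError("transient failure")
    return "ok"

result = asyncio.run(retry_with_backoff(flaky))
print(result)
```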
@@ -76,9 +76,8 @@ class Gemini(AsyncGeneratorProvider):
             page = await browser.get(f"{cls.url}/app")
             await page.select("div.ql-editor.textarea", 240)
             cookies = {}
-            for c in await page.browser.cookies.get_all():
-                if c.domain.endswith(".google.com"):
-                    cookies[c.name] = c.value
+            for c in await page.send(nodriver.cdp.network.get_cookies([cls.url])):
+                cookies[c.name] = c.value
             await page.close()
             cls._cookies = cookies
 
@@ -92,7 +91,6 @@ class Gemini(AsyncGeneratorProvider):
         connector: BaseConnector = None,
         image: ImageType = None,
         image_name: str = None,
-        response_format: str = None,
         return_conversation: bool = False,
         conversation: Conversation = None,
         language: str = "en",
@@ -113,7 +111,7 @@ class Gemini(AsyncGeneratorProvider):
                 async for chunk in cls.nodriver_login(proxy):
                     yield chunk
             except Exception as e:
-                raise MissingAuthError('Missing "__Secure-1PSID" cookie', e)
+                raise MissingAuthError('Missing or invalid "__Secure-1PSID" cookie', e)
         if not cls._snlm0e:
             if cls._cookies is None or "__Secure-1PSID" not in cls._cookies:
                 raise MissingAuthError('Missing "__Secure-1PSID" cookie')
@@ -153,7 +151,7 @@ class Gemini(AsyncGeneratorProvider):
         ) as response:
             await raise_for_status(response)
             image_prompt = response_part = None
-            last_content_len = 0
+            last_content = ""
             async for line in response.content:
                 try:
                     try:
@@ -171,32 +169,26 @@ class Gemini(AsyncGeneratorProvider):
                         yield Conversation(response_part[1][0], response_part[1][1], response_part[4][0][0])
                     content = response_part[4][0][1][0]
                 except (ValueError, KeyError, TypeError, IndexError) as e:
-                    print(f"{cls.__name__}:{e.__class__.__name__}:{e}")
+                    debug.log(f"{cls.__name__}:{e.__class__.__name__}:{e}")
                     continue
                 match = re.search(r'\[Imagen of (.*?)\]', content)
                 if match:
                     image_prompt = match.group(1)
                     content = content.replace(match.group(0), '')
-                yield content[last_content_len:]
-                last_content_len = len(content)
-                if image_prompt:
-                    try:
-                        images = [image[0][3][3] for image in response_part[4][0][12][7][0]]
-                        if response_format == "b64_json":
-                            yield ImageResponse(images, image_prompt, {"cookies": cls._cookies})
-                        else:
-                            resolved_images = []
-                            preview = []
-                            for image in images:
-                                async with client.get(image, allow_redirects=False) as fetch:
-                                    image = fetch.headers["location"]
-                                async with client.get(image, allow_redirects=False) as fetch:
-                                    image = fetch.headers["location"]
-                                resolved_images.append(image)
-                                preview.append(image.replace('=s512', '=s200'))
-                            yield ImageResponse(resolved_images, image_prompt, {"orginal_links": images, "preview": preview})
-                    except TypeError:
-                        pass
+                pattern = r"http://googleusercontent.com/image_generation_content/\d+"
+                content = re.sub(pattern, "", content)
+                if last_content and content.startswith(last_content):
+                    yield content[len(last_content):]
+                else:
+                    yield content
+                last_content = content
+                if image_prompt:
+                    try:
+                        images = [image[0][3][3] for image in response_part[4][0][12][7][0]]
+                        image_prompt = image_prompt.replace("a fake image", "")
+                        yield ImageResponse(images, image_prompt, {"cookies": cls._cookies})
+                    except TypeError:
+                        pass
 
     @classmethod
     async def synthesize(cls, params: dict, proxy: str = None) -> AsyncIterator[bytes]:
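The change above swaps length-based slicing for prefix-based delta streaming: each snapshot is compared against the previous one, and only the new suffix is yielded unless the snapshot was rewritten entirely. A standalone sketch of that logic (function and variable names are illustrative):

```python
def stream_deltas(snapshots):
    """Yield only the new text from successive content snapshots. When
    a snapshot extends the previous one, yield the suffix; otherwise
    yield the whole snapshot (the content was rewritten)."""
    last_content = ""
    for content in snapshots:
        if last_content and content.startswith(last_content):
            yield content[len(last_content):]
        else:
            yield content
        last_content = content

chunks = list(stream_deltas(["Hel", "Hello", "Hello world", "Rewritten"]))
print(chunks)
```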
@@ -5,4 +5,8 @@ from .OpenaiChat import OpenaiChat
 class OpenaiAccount(OpenaiChat):
     needs_auth = True
     parent = "OpenaiChat"
-    image_models = ["dall-e"]
+    image_models = ["dall-e-3", "gpt-4", "gpt-4o"]
+    default_vision_model = "gpt-4o"
+    default_image_model = "dall-e-3"
+    models = [*OpenaiChat.fallback_models, default_image_model]
+    model_aliases = {default_image_model: default_vision_model}
@@ -7,6 +7,7 @@ import json
 import base64
 import time
 import requests
+import random
 from copy import copy
 
 try:
@@ -77,11 +78,8 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
     supports_message_history = True
     supports_system_message = True
     default_model = "auto"
-    default_vision_model = "gpt-4o"
-    default_image_model = "dall-e-3"
-    fallback_models = [default_model, "gpt-4", "gpt-4o", "gpt-4o-mini", "gpt-4o-canmore", "o1-preview", "o1-mini", default_image_model]
+    fallback_models = [default_model, "gpt-4", "gpt-4o", "gpt-4o-mini", "gpt-4o-canmore", "o1-preview", "o1-mini"]
     vision_models = fallback_models
-    image_models = fallback_models
     synthesize_content_type = "audio/mpeg"
 
     _api_key: str = None
@@ -97,7 +95,6 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
             response.raise_for_status()
             data = response.json()
             cls.models = [model.get("slug") for model in data.get("models")]
-            cls.models.append(cls.default_image_model)
         except Exception:
             cls.models = cls.fallback_models
         return cls.models
@@ -184,7 +181,6 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
             "content": {"content_type": "text", "parts": [message["content"]]},
             "id": str(uuid.uuid4()),
             "create_time": int(time.time()),
-            "id": str(uuid.uuid4()),
             "metadata": {"serialization_metadata": {"custom_symbol_offsets": []}, "system_hints": system_hints},
         } for message in messages]
 
@@ -295,28 +291,29 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
         Raises:
             RuntimeError: If an error occurs during processing.
         """
-        if model == cls.default_image_model:
-            model = cls.default_model
         if cls.needs_auth:
             await cls.login(proxy)
 
         async with StreamSession(
             proxy=proxy,
             impersonate="chrome",
             timeout=timeout
         ) as session:
+            image_request = None
             if not cls.needs_auth:
-                cls._create_request_args(cookies)
-                RequestConfig.proof_token = get_config(cls._headers.get("user-agent"))
-                async with session.get(cls.url, headers=INIT_HEADERS) as response:
-                    cls._update_request_args(session)
-                    await raise_for_status(response)
+                if cls._headers is None:
+                    cls._create_request_args(cookies)
+                async with session.get(cls.url, headers=INIT_HEADERS) as response:
+                    cls._update_request_args(session)
+                    await raise_for_status(response)
+            else:
+                async with session.get(cls.url, headers=cls._headers) as response:
+                    cls._update_request_args(session)
+                    await raise_for_status(response)
             try:
                 image_request = await cls.upload_image(session, cls._headers, image, image_name) if image else None
             except Exception as e:
-                image_request = None
                 debug.log("OpenaiChat: Upload image failed")
                 debug.log(f"{e.__class__.__name__}: {e}")
             model = cls.get_model(model)
             if conversation is None:
                 conversation = Conversation(conversation_id, str(uuid.uuid4()) if parent_id is None else parent_id)
@@ -348,6 +345,8 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
                     raise MissingAuthError("No arkose token found in .har file")
 
             if "proofofwork" in chat_requirements:
+                if RequestConfig.proof_token is None:
+                    RequestConfig.proof_token = get_config(cls._headers.get("user-agent"))
                 proofofwork = generate_proof_token(
                     **chat_requirements["proofofwork"],
                     user_agent=cls._headers.get("user-agent"),
@@ -363,13 +362,22 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
                     "messages": None,
                     "parent_message_id": conversation.message_id,
                     "model": model,
-                    "paragen_cot_summary_display_override": "allow",
-                    "history_and_training_disabled": history_disabled and not auto_continue and not return_conversation,
-                    "conversation_mode": {"kind":"primary_assistant"},
+                    "timezone_offset_min":-60,
+                    "timezone":"Europe/Berlin",
+                    "history_and_training_disabled": history_disabled and not auto_continue and not return_conversation or not cls.needs_auth,
+                    "conversation_mode":{"kind":"primary_assistant","plugin_ids":None},
+                    "force_paragen":False,
+                    "force_paragen_model_slug":"",
+                    "force_rate_limit":False,
+                    "reset_rate_limits":False,
                     "websocket_request_id": str(uuid.uuid4()),
-                    "supported_encodings": ["v1"],
-                    "supports_buffering": True,
-                    "system_hints": ["search"] if web_search else None
+                    "system_hints": ["search"] if web_search else None,
+                    "supported_encodings":["v1"],
+                    "conversation_origin":None,
+                    "client_contextual_info":{"is_dark_mode":False,"time_since_loaded":random.randint(20, 500),"page_height":578,"page_width":1850,"pixel_ratio":1,"screen_height":1080,"screen_width":1920},
+                    "paragen_stream_type_override":None,
+                    "paragen_cot_summary_display_override":"allow",
+                    "supports_buffering":True
                 }
                 if conversation.conversation_id is not None:
                     data["conversation_id"] = conversation.conversation_id
@@ -408,7 +416,7 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
                     async for line in response.iter_lines():
                         async for chunk in cls.iter_messages_line(session, line, conversation):
                             yield chunk
-                    if not history_disabled:
+                    if not history_disabled and RequestConfig.access_token is not None:
                         yield SynthesizeData(cls.__name__, {
                             "conversation_id": conversation.conversation_id,
                             "message_id": conversation.message_id,
@@ -495,7 +503,8 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
             cls._set_api_key(RequestConfig.access_token)
         except NoValidHarFileError:
             if has_nodriver:
-                await cls.nodriver_auth(proxy)
+                if RequestConfig.access_token is None:
+                    await cls.nodriver_auth(proxy)
             else:
                 raise
 
@@ -505,7 +514,6 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
         page = browser.main_tab
         def on_request(event: nodriver.cdp.network.RequestWillBeSent):
             if event.request.url == start_url or event.request.url.startswith(conversation_url):
-                RequestConfig.access_request_id = event.request_id
                 RequestConfig.headers = event.request.headers
             elif event.request.url in (backend_url, backend_anon_url):
                 if "OpenAI-Sentinel-Proof-Token" in event.request.headers:
@@ -527,25 +535,25 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
         await page.send(nodriver.cdp.network.enable())
         page.add_handler(nodriver.cdp.network.RequestWillBeSent, on_request)
         page = await browser.get(cls.url)
-        try:
-            if RequestConfig.access_request_id is not None:
-                body = await page.send(get_response_body(RequestConfig.access_request_id))
-                if isinstance(body, tuple) and body:
-                    body = body[0]
-                if body:
-                    match = re.search(r'"accessToken":"(.*?)"', body)
-                    if match:
-                        RequestConfig.access_token = match.group(1)
-        except KeyError:
-            pass
-        for c in await page.send(nodriver.cdp.network.get_cookies([cls.url])):
-            RequestConfig.cookies[c.name] = c.value
         user_agent = await page.evaluate("window.navigator.userAgent")
         await page.select("#prompt-textarea", 240)
+        while True:
+            if RequestConfig.access_token:
+                break
+            body = await page.evaluate("JSON.stringify(window.__remixContext)")
+            if body:
+                match = re.search(r'"accessToken":"(.*?)"', body)
+                if match:
+                    RequestConfig.access_token = match.group(1)
+                    break
+            await asyncio.sleep(1)
         while True:
             if RequestConfig.proof_token:
                 break
             await asyncio.sleep(1)
+        RequestConfig.data_build = await page.evaluate("document.documentElement.getAttribute('data-build')")
+        for c in await page.send(nodriver.cdp.network.get_cookies([cls.url])):
+            RequestConfig.cookies[c.name] = c.value
         await page.close()
         cls._create_request_args(RequestConfig.cookies, RequestConfig.headers, user_agent=user_agent)
         cls._set_api_key(RequestConfig.access_token)
@@ -25,7 +25,6 @@ conversation_url = "https://chatgpt.com/c/"
 class RequestConfig:
     cookies: dict = None
     headers: dict = None
-    access_request_id: str = None
     access_token: str = None
     proof_token: list = None
     turnstile_token: str = None
@@ -33,6 +32,7 @@ class RequestConfig:
     arkose_token: str = None
     headers: dict = {}
     cookies: dict = {}
+    data_build: str = "prod-697873d7e78bb14df6e13af3a91fa237cc4db415"
 
 class arkReq:
     def __init__(self, arkURL, arkBx, arkHeader, arkBody, arkCookies, userAgent):
@@ -14,6 +14,8 @@ from datetime import (
     timezone
 )
 
+from .har_file import RequestConfig
+
 cores = [16, 24, 32]
 screens = [3000, 4000, 6000]
 maxAttempts = 500000
@@ -386,7 +388,7 @@ def get_config(user_agent):
         random.random(),
         user_agent,
         None,
-        "prod-0b673b9a04fb6983c1417b587f2f31173eafa605", #document.documentElement.getAttribute("data-build"),
+        RequestConfig.data_build, #document.documentElement.getAttribute("data-build"),
         "en-US",
         "en-US,es-US,en,es",
         0,
@@ -396,7 +398,8 @@ def get_config(user_agent):
         time.perf_counter(),
         str(uuid.uuid4()),
         "",
-        8
+        8,
+        int(time.time()),
     ]
 
     return config
g4f/__main__.py (new file, 9 lines)
@@ -0,0 +1,9 @@
+from __future__ import annotations
+
+from .cli import get_api_parser, run_api_args
+
+parser = get_api_parser()
+args = parser.parse_args()
+if args.gui is None:
+    args.gui = True
+run_api_args(args)
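The new entry point enables the GUI by default for `python -m g4f`: a flag that defaults to `None` lets it distinguish "omitted" from "explicitly set" before applying the default. A sketch of that pattern with the standard library (the parser here is illustrative, not g4f's actual `get_api_parser`):

```python
import argparse

# --gui defaults to None so "flag omitted" is distinguishable from
# "flag given"; the entry point then defaults the omitted case to True.
parser = argparse.ArgumentParser(prog="g4f")
parser.add_argument("--gui", action="store_const", const=True, default=None)

args = parser.parse_args([])  # simulate `python -m g4f` with no flags
if args.gui is None:
    args.gui = True
print(args.gui)
```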
@@ -23,7 +23,7 @@ from starlette.status import (
     HTTP_500_INTERNAL_SERVER_ERROR,
 )
 from fastapi.encoders import jsonable_encoder
-from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
+from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials, HTTPBasic
 from fastapi.middleware.cors import CORSMiddleware
 from starlette.responses import FileResponse
 from pydantic import BaseModel, Field
@ -50,7 +50,7 @@ logger = logging.getLogger(__name__)
|
|||||||
|
|
||||||
DEFAULT_PORT = 1337
|
DEFAULT_PORT = 1337
|
||||||
|
|
||||||
def create_app(g4f_api_key: str = None):
|
def create_app():
|
||||||
app = FastAPI()
|
app = FastAPI()
|
||||||
|
|
||||||
# Add CORS middleware
|
# Add CORS middleware
|
||||||
@ -62,7 +62,7 @@ def create_app(g4f_api_key: str = None):
|
|||||||
allow_headers=["*"],
|
allow_headers=["*"],
|
||||||
)
|
)
|
||||||
|
|
||||||
api = Api(app, g4f_api_key=g4f_api_key)
|
api = Api(app)
|
||||||
|
|
||||||
if AppConfig.gui:
|
if AppConfig.gui:
|
||||||
@app.get("/")
|
@app.get("/")
|
||||||
@ -86,9 +86,14 @@ def create_app(g4f_api_key: str = None):
|
|||||||
|
|
||||||
return app
|
return app
|
||||||
|
|
||||||
def create_app_debug(g4f_api_key: str = None):
|
def create_app_debug():
|
||||||
g4f.debug.logging = True
|
g4f.debug.logging = True
|
||||||
return create_app(g4f_api_key)
|
return create_app()
|
||||||
|
|
||||||
|
def create_app_with_gui_and_debug():
|
||||||
|
g4f.debug.logging = True
|
||||||
|
AppConfig.gui = True
|
||||||
|
return create_app()
|
||||||
|
|
||||||
class ChatCompletionsConfig(BaseModel):
|
class ChatCompletionsConfig(BaseModel):
|
||||||
messages: Messages = Field(examples=[[{"role": "system", "content": ""}, {"role": "user", "content": ""}]])
|
messages: Messages = Field(examples=[[{"role": "system", "content": ""}, {"role": "user", "content": ""}]])
|
||||||
@ -112,7 +117,7 @@ class ImageGenerationConfig(BaseModel):
|
|||||||
prompt: str
|
prompt: str
|
||||||
model: Optional[str] = None
|
model: Optional[str] = None
|
||||||
provider: Optional[str] = None
|
provider: Optional[str] = None
|
||||||
response_format: str = "url"
|
response_format: Optional[str] = None
|
||||||
api_key: Optional[str] = None
|
api_key: Optional[str] = None
|
||||||
proxy: Optional[str] = None
|
proxy: Optional[str] = None
|
||||||
|
|
||||||
@ -156,8 +161,8 @@ class ErrorResponse(Response):
|
|||||||
return cls(format_exception(exception, config), status_code)
|
return cls(format_exception(exception, config), status_code)
|
||||||
|
|
||||||
@classmethod
|
@classmethod
|
||||||
def from_message(cls, message: str, status_code: int = HTTP_500_INTERNAL_SERVER_ERROR):
|
def from_message(cls, message: str, status_code: int = HTTP_500_INTERNAL_SERVER_ERROR, headers: dict = None):
|
||||||
return cls(format_exception(message), status_code)
|
return cls(format_exception(message), status_code, headers=headers)
|
||||||
|
|
||||||
def render(self, content) -> bytes:
|
def render(self, content) -> bytes:
|
||||||
return str(content).encode(errors="ignore")
|
return str(content).encode(errors="ignore")
|
||||||
@ -184,26 +189,57 @@ def set_list_ignored_providers(ignored: list[str]):
|
|||||||
list_ignored_providers = ignored
|
list_ignored_providers = ignored
|
||||||
|
|
||||||
class Api:
|
class Api:
|
||||||
def __init__(self, app: FastAPI, g4f_api_key=None) -> None:
|
def __init__(self, app: FastAPI) -> None:
|
||||||
self.app = app
|
self.app = app
|
||||||
self.client = AsyncClient()
|
self.client = AsyncClient()
|
||||||
self.g4f_api_key = g4f_api_key
|
|
||||||
self.get_g4f_api_key = APIKeyHeader(name="g4f-api-key")
|
self.get_g4f_api_key = APIKeyHeader(name="g4f-api-key")
|
||||||
self.conversations: dict[str, dict[str, BaseConversation]] = {}
|
self.conversations: dict[str, dict[str, BaseConversation]] = {}
|
||||||
|
|
||||||
security = HTTPBearer(auto_error=False)
|
security = HTTPBearer(auto_error=False)
|
||||||
|
basic_security = HTTPBasic()
|
||||||
|
|
||||||
|
async def get_username(self, request: Request):
|
||||||
|
credentials = await self.basic_security(request)
|
||||||
|
current_password_bytes = credentials.password.encode()
|
||||||
|
is_correct_password = secrets.compare_digest(
|
||||||
|
current_password_bytes, AppConfig.g4f_api_key.encode()
|
||||||
|
)
|
||||||
|
if not is_correct_password:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=HTTP_401_UNAUTHORIZED,
|
||||||
|
detail="Incorrect username or password",
|
||||||
|
headers={"WWW-Authenticate": "Basic"},
|
||||||
|
)
|
||||||
|
return credentials.username
|
||||||
|
|
||||||
def register_authorization(self):
|
def register_authorization(self):
|
||||||
|
if AppConfig.g4f_api_key:
|
||||||
|
print(f"Register authentication key: {''.join(['*' for _ in range(len(AppConfig.g4f_api_key))])}")
|
||||||
@self.app.middleware("http")
|
@self.app.middleware("http")
|
||||||
async def authorization(request: Request, call_next):
|
async def authorization(request: Request, call_next):
|
||||||
if self.g4f_api_key and request.url.path not in ("/", "/v1"):
|
if AppConfig.g4f_api_key is not None:
|
||||||
try:
|
try:
|
||||||
user_g4f_api_key = await self.get_g4f_api_key(request)
|
user_g4f_api_key = await self.get_g4f_api_key(request)
|
||||||
except HTTPException as e:
|
except HTTPException:
|
||||||
if e.status_code == 403:
|
user_g4f_api_key = None
|
||||||
|
if request.url.path.startswith("/v1"):
|
||||||
|
if user_g4f_api_key is None:
|
||||||
return ErrorResponse.from_message("G4F API key required", HTTP_401_UNAUTHORIZED)
|
return ErrorResponse.from_message("G4F API key required", HTTP_401_UNAUTHORIZED)
|
||||||
if not secrets.compare_digest(self.g4f_api_key, user_g4f_api_key):
|
if not secrets.compare_digest(AppConfig.g4f_api_key, user_g4f_api_key):
|
||||||
return ErrorResponse.from_message("Invalid G4F API key", HTTP_403_FORBIDDEN)
|
return ErrorResponse.from_message("Invalid G4F API key", HTTP_403_FORBIDDEN)
|
||||||
|
else:
|
||||||
|
path = request.url.path
|
||||||
|
if user_g4f_api_key is not None and path.startswith("/images/"):
|
||||||
|
if not secrets.compare_digest(AppConfig.g4f_api_key, user_g4f_api_key):
|
||||||
|
return ErrorResponse.from_message("Invalid G4F API key", HTTP_403_FORBIDDEN)
|
||||||
|
elif path.startswith("/backend-api/") or path.startswith("/images/") or path.startswith("/chat/") and path != "/chat/":
|
||||||
|
try:
|
||||||
|
username = await self.get_username(request)
|
||||||
|
except HTTPException as e:
|
||||||
|
return ErrorResponse.from_message(e.detail, e.status_code, e.headers)
|
||||||
|
response = await call_next(request)
|
||||||
|
response.headers["X-Username"] = username
|
||||||
|
return response
|
||||||
return await call_next(request)
|
return await call_next(request)
|
||||||
|
|
||||||
def register_validation_exception_handler(self):
|
def register_validation_exception_handler(self):
|
||||||
@ -370,9 +406,9 @@ class Api:
|
|||||||
model=config.model,
|
model=config.model,
|
||||||
provider=AppConfig.image_provider if config.provider is None else config.provider,
|
provider=AppConfig.image_provider if config.provider is None else config.provider,
|
||||||
**filter_none(
|
**filter_none(
|
||||||
response_format = config.response_format,
|
response_format=config.response_format,
|
||||||
api_key = config.api_key,
|
api_key=config.api_key,
|
||||||
proxy = config.proxy
|
proxy=config.proxy
|
||||||
)
|
)
|
||||||
)
|
)
|
||||||
for image in response.data:
|
for image in response.data:
|
||||||
@ -512,8 +548,12 @@ def run_api(
|
|||||||
host, port = bind.split(":")
|
host, port = bind.split(":")
|
||||||
if port is None:
|
if port is None:
|
||||||
port = DEFAULT_PORT
|
port = DEFAULT_PORT
|
||||||
|
if AppConfig.gui and debug:
|
||||||
|
method = "create_app_with_gui_and_debug"
|
||||||
|
else:
|
||||||
|
method = "create_app_debug" if debug else "create_app"
|
||||||
uvicorn.run(
|
uvicorn.run(
|
||||||
f"g4f.api:create_app{'_debug' if debug else ''}",
|
f"g4f.api:{method}",
|
||||||
host=host,
|
host=host,
|
||||||
port=int(port),
|
port=int(port),
|
||||||
workers=workers,
|
workers=workers,
|
||||||
|
17
g4f/cli.py
17
g4f/cli.py
@ -1,19 +1,18 @@
|
|||||||
from __future__ import annotations
|
from __future__ import annotations
|
||||||
|
|
||||||
import argparse
|
import argparse
|
||||||
|
from argparse import ArgumentParser
|
||||||
|
|
||||||
from g4f import Provider
|
from g4f import Provider
|
||||||
from g4f.gui.run import gui_parser, run_gui_args
|
from g4f.gui.run import gui_parser, run_gui_args
|
||||||
import g4f.cookies
|
import g4f.cookies
|
||||||
|
|
||||||
def main():
|
def get_api_parser():
|
||||||
parser = argparse.ArgumentParser(description="Run gpt4free")
|
api_parser = ArgumentParser(description="Run the API and GUI")
|
||||||
subparsers = parser.add_subparsers(dest="mode", help="Mode to run the g4f in.")
|
|
||||||
api_parser = subparsers.add_parser("api")
|
|
||||||
api_parser.add_argument("--bind", default=None, help="The bind string. (Default: 0.0.0.0:1337)")
|
api_parser.add_argument("--bind", default=None, help="The bind string. (Default: 0.0.0.0:1337)")
|
||||||
api_parser.add_argument("--port", default=None, help="Change the port of the server.")
|
api_parser.add_argument("--port", "-p", default=None, help="Change the port of the server.")
|
||||||
api_parser.add_argument("--debug", "-d", action="store_true", help="Enable verbose logging.")
|
api_parser.add_argument("--debug", "-d", action="store_true", help="Enable verbose logging.")
|
||||||
api_parser.add_argument("--gui", "-g", default=False, action="store_true", help="Add gui to the api.")
|
api_parser.add_argument("--gui", "-g", default=None, action="store_true", help="Add gui to the api.")
|
||||||
api_parser.add_argument("--model", default=None, help="Default model for chat completion. (incompatible with --reload and --workers)")
|
api_parser.add_argument("--model", default=None, help="Default model for chat completion. (incompatible with --reload and --workers)")
|
||||||
api_parser.add_argument("--provider", choices=[provider.__name__ for provider in Provider.__providers__ if provider.working],
|
api_parser.add_argument("--provider", choices=[provider.__name__ for provider in Provider.__providers__ if provider.working],
|
||||||
default=None, help="Default provider for chat completion. (incompatible with --reload and --workers)")
|
default=None, help="Default provider for chat completion. (incompatible with --reload and --workers)")
|
||||||
@ -29,6 +28,12 @@ def main():
|
|||||||
api_parser.add_argument("--cookie-browsers", nargs="+", choices=[browser.__name__ for browser in g4f.cookies.browsers],
|
api_parser.add_argument("--cookie-browsers", nargs="+", choices=[browser.__name__ for browser in g4f.cookies.browsers],
|
||||||
default=[], help="List of browsers to access or retrieve cookies from. (incompatible with --reload and --workers)")
|
default=[], help="List of browsers to access or retrieve cookies from. (incompatible with --reload and --workers)")
|
||||||
api_parser.add_argument("--reload", action="store_true", help="Enable reloading.")
|
api_parser.add_argument("--reload", action="store_true", help="Enable reloading.")
|
||||||
|
return api_parser
|
||||||
|
|
||||||
|
def main():
|
||||||
|
parser = argparse.ArgumentParser(description="Run gpt4free")
|
||||||
|
subparsers = parser.add_subparsers(dest="mode", help="Mode to run the g4f in.")
|
||||||
|
subparsers.add_parser("api", parents=[get_api_parser()], add_help=False)
|
||||||
subparsers.add_parser("gui", parents=[gui_parser()], add_help=False)
|
subparsers.add_parser("gui", parents=[gui_parser()], add_help=False)
|
||||||
|
|
||||||
args = parser.parse_args()
|
args = parser.parse_args()
|
||||||
|
@ -292,6 +292,7 @@ class Images:
|
|||||||
if proxy is None:
|
if proxy is None:
|
||||||
proxy = self.client.proxy
|
proxy = self.client.proxy
|
||||||
|
|
||||||
|
e = None
|
||||||
response = None
|
response = None
|
||||||
if isinstance(provider_handler, IterListProvider):
|
if isinstance(provider_handler, IterListProvider):
|
||||||
for provider in provider_handler.providers:
|
for provider in provider_handler.providers:
|
||||||
@ -300,7 +301,7 @@ class Images:
|
|||||||
if response is not None:
|
if response is not None:
|
||||||
provider_name = provider.__name__
|
provider_name = provider.__name__
|
||||||
break
|
break
|
||||||
except (MissingAuthError, NoValidHarFileError) as e:
|
except Exception as e:
|
||||||
debug.log(f"Image provider {provider.__name__}: {e}")
|
debug.log(f"Image provider {provider.__name__}: {e}")
|
||||||
else:
|
else:
|
||||||
response = await self._generate_image_response(provider_handler, provider_name, model, prompt, **kwargs)
|
response = await self._generate_image_response(provider_handler, provider_name, model, prompt, **kwargs)
|
||||||
@ -314,6 +315,8 @@ class Images:
|
|||||||
provider_name
|
provider_name
|
||||||
)
|
)
|
||||||
if response is None:
|
if response is None:
|
||||||
|
if e is not None:
|
||||||
|
raise e
|
||||||
raise NoImageResponseError(f"No image response from {provider_name}")
|
raise NoImageResponseError(f"No image response from {provider_name}")
|
||||||
raise NoImageResponseError(f"Unexpected response type: {type(response)}")
|
raise NoImageResponseError(f"Unexpected response type: {type(response)}")
|
||||||
|
|
||||||
@ -362,7 +365,7 @@ class Images:
|
|||||||
image: ImageType,
|
image: ImageType,
|
||||||
model: str = None,
|
model: str = None,
|
||||||
provider: Optional[ProviderType] = None,
|
provider: Optional[ProviderType] = None,
|
||||||
response_format: str = "url",
|
response_format: Optional[str] = None,
|
||||||
**kwargs
|
**kwargs
|
||||||
) -> ImagesResponse:
|
) -> ImagesResponse:
|
||||||
return asyncio.run(self.async_create_variation(
|
return asyncio.run(self.async_create_variation(
|
||||||
@ -374,7 +377,7 @@ class Images:
|
|||||||
image: ImageType,
|
image: ImageType,
|
||||||
model: Optional[str] = None,
|
model: Optional[str] = None,
|
||||||
provider: Optional[ProviderType] = None,
|
provider: Optional[ProviderType] = None,
|
||||||
response_format: str = "url",
|
response_format: Optional[str] = None,
|
||||||
proxy: Optional[str] = None,
|
proxy: Optional[str] = None,
|
||||||
**kwargs
|
**kwargs
|
||||||
) -> ImagesResponse:
|
) -> ImagesResponse:
|
||||||
@ -384,6 +387,7 @@ class Images:
|
|||||||
proxy = self.client.proxy
|
proxy = self.client.proxy
|
||||||
prompt = "create a variation of this image"
|
prompt = "create a variation of this image"
|
||||||
|
|
||||||
|
e = None
|
||||||
response = None
|
response = None
|
||||||
if isinstance(provider_handler, IterListProvider):
|
if isinstance(provider_handler, IterListProvider):
|
||||||
# File pointer can be read only once, so we need to convert it to bytes
|
# File pointer can be read only once, so we need to convert it to bytes
|
||||||
@ -394,7 +398,7 @@ class Images:
|
|||||||
if response is not None:
|
if response is not None:
|
||||||
provider_name = provider.__name__
|
provider_name = provider.__name__
|
||||||
break
|
break
|
||||||
except (MissingAuthError, NoValidHarFileError) as e:
|
except Exception as e:
|
||||||
debug.log(f"Image provider {provider.__name__}: {e}")
|
debug.log(f"Image provider {provider.__name__}: {e}")
|
||||||
else:
|
else:
|
||||||
response = await self._generate_image_response(provider_handler, provider_name, model, prompt, image=image, **kwargs)
|
response = await self._generate_image_response(provider_handler, provider_name, model, prompt, image=image, **kwargs)
|
||||||
@ -402,10 +406,11 @@ class Images:
|
|||||||
if isinstance(response, ImageResponse):
|
if isinstance(response, ImageResponse):
|
||||||
return await self._process_image_response(response, response_format, proxy, model, provider_name)
|
return await self._process_image_response(response, response_format, proxy, model, provider_name)
|
||||||
if response is None:
|
if response is None:
|
||||||
|
if e is not None:
|
||||||
|
raise e
|
||||||
raise NoImageResponseError(f"No image response from {provider_name}")
|
raise NoImageResponseError(f"No image response from {provider_name}")
|
||||||
raise NoImageResponseError(f"Unexpected response type: {type(response)}")
|
raise NoImageResponseError(f"Unexpected response type: {type(response)}")
|
||||||
|
|
||||||
|
|
||||||
async def _process_image_response(
|
async def _process_image_response(
|
||||||
self,
|
self,
|
||||||
response: ImageResponse,
|
response: ImageResponse,
|
||||||
@ -414,21 +419,21 @@ class Images:
|
|||||||
model: Optional[str] = None,
|
model: Optional[str] = None,
|
||||||
provider: Optional[str] = None
|
provider: Optional[str] = None
|
||||||
) -> ImagesResponse:
|
) -> ImagesResponse:
|
||||||
|
last_provider = get_last_provider(True)
|
||||||
if response_format == "url":
|
if response_format == "url":
|
||||||
# Return original URLs without saving locally
|
# Return original URLs without saving locally
|
||||||
images = [Image.construct(url=image, revised_prompt=response.alt) for image in response.get_list()]
|
images = [Image.construct(url=image, revised_prompt=response.alt) for image in response.get_list()]
|
||||||
elif response_format == "b64_json":
|
|
||||||
images = await copy_images(response.get_list(), response.options.get("cookies"), proxy)
|
|
||||||
async def process_image_item(image_file: str) -> Image:
|
|
||||||
with open(os.path.join(images_dir, os.path.basename(image_file)), "rb") as file:
|
|
||||||
image_data = base64.b64encode(file.read()).decode()
|
|
||||||
return Image.construct(b64_json=image_data, revised_prompt=response.alt)
|
|
||||||
images = await asyncio.gather(*[process_image_item(image) for image in images])
|
|
||||||
else:
|
else:
|
||||||
# Save locally for None (default) case
|
# Save locally for None (default) case
|
||||||
images = await copy_images(response.get_list(), response.options.get("cookies"), proxy)
|
images = await copy_images(response.get_list(), response.get("cookies"), proxy)
|
||||||
images = [Image.construct(url=f"/images/{os.path.basename(image)}", revised_prompt=response.alt) for image in images]
|
if response_format == "b64_json":
|
||||||
last_provider = get_last_provider(True)
|
async def process_image_item(image_file: str) -> Image:
|
||||||
|
with open(os.path.join(images_dir, os.path.basename(image_file)), "rb") as file:
|
||||||
|
image_data = base64.b64encode(file.read()).decode()
|
||||||
|
return Image.construct(b64_json=image_data, revised_prompt=response.alt)
|
||||||
|
images = await asyncio.gather(*[process_image_item(image) for image in images])
|
||||||
|
else:
|
||||||
|
images = [Image.construct(url=f"/images/{os.path.basename(image)}", revised_prompt=response.alt) for image in images]
|
||||||
return ImagesResponse.construct(
|
return ImagesResponse.construct(
|
||||||
created=int(time.time()),
|
created=int(time.time()),
|
||||||
data=images,
|
data=images,
|
||||||
@ -529,7 +534,7 @@ class AsyncImages(Images):
|
|||||||
image: ImageType,
|
image: ImageType,
|
||||||
model: str = None,
|
model: str = None,
|
||||||
provider: ProviderType = None,
|
provider: ProviderType = None,
|
||||||
response_format: str = "url",
|
response_format: Optional[str] = None,
|
||||||
**kwargs
|
**kwargs
|
||||||
) -> ImagesResponse:
|
) -> ImagesResponse:
|
||||||
return await self.async_create_variation(
|
return await self.async_create_variation(
|
||||||
|
@ -744,7 +744,11 @@ const delete_conversation = async (conversation_id) => {
|
|||||||
};
|
};
|
||||||
|
|
||||||
const set_conversation = async (conversation_id) => {
|
const set_conversation = async (conversation_id) => {
|
||||||
history.pushState({}, null, `/chat/${conversation_id}`);
|
try {
|
||||||
|
history.pushState({}, null, `/chat/${conversation_id}`);
|
||||||
|
} catch (e) {
|
||||||
|
console.error(e);
|
||||||
|
}
|
||||||
window.conversation_id = conversation_id;
|
window.conversation_id = conversation_id;
|
||||||
|
|
||||||
await clear_conversation();
|
await clear_conversation();
|
||||||
@ -898,7 +902,11 @@ async function add_conversation(conversation_id, content) {
|
|||||||
items: [],
|
items: [],
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
history.pushState({}, null, `/chat/${conversation_id}`);
|
try {
|
||||||
|
history.pushState({}, null, `/chat/${conversation_id}`);
|
||||||
|
} catch (e) {
|
||||||
|
console.error(e);
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
async function save_system_message() {
|
async function save_system_message() {
|
||||||
@ -1287,23 +1295,29 @@ async function on_api() {
|
|||||||
|
|
||||||
register_settings_storage();
|
register_settings_storage();
|
||||||
|
|
||||||
models = await api("models");
|
try {
|
||||||
models.forEach((model) => {
|
models = await api("models");
|
||||||
let option = document.createElement("option");
|
models.forEach((model) => {
|
||||||
option.value = option.text = model;
|
let option = document.createElement("option");
|
||||||
modelSelect.appendChild(option);
|
option.value = option.text = model;
|
||||||
});
|
modelSelect.appendChild(option);
|
||||||
|
});
|
||||||
providers = await api("providers")
|
providers = await api("providers")
|
||||||
Object.entries(providers).forEach(([provider, label]) => {
|
Object.entries(providers).forEach(([provider, label]) => {
|
||||||
let option = document.createElement("option");
|
let option = document.createElement("option");
|
||||||
option.value = provider;
|
option.value = provider;
|
||||||
option.text = label;
|
option.text = label;
|
||||||
providerSelect.appendChild(option);
|
providerSelect.appendChild(option);
|
||||||
})
|
});
|
||||||
|
await load_provider_models(appStorage.getItem("provider"));
|
||||||
|
} catch (e) {
|
||||||
|
console.error(e)
|
||||||
|
if (document.location.pathname == "/chat/") {
|
||||||
|
document.location.href = `/chat/error`;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
await load_settings_storage()
|
await load_settings_storage()
|
||||||
await load_provider_models(appStorage.getItem("provider"));
|
|
||||||
|
|
||||||
const hide_systemPrompt = document.getElementById("hide-systemPrompt")
|
const hide_systemPrompt = document.getElementById("hide-systemPrompt")
|
||||||
const slide_systemPrompt_icon = document.querySelector(".slide-systemPrompt i");
|
const slide_systemPrompt_icon = document.querySelector(".slide-systemPrompt i");
|
||||||
@ -1465,7 +1479,7 @@ async function api(ressource, args=null, file=null, message_id=null) {
|
|||||||
const url = `/backend-api/v2/${ressource}`;
|
const url = `/backend-api/v2/${ressource}`;
|
||||||
const headers = {};
|
const headers = {};
|
||||||
if (api_key) {
|
if (api_key) {
|
||||||
headers.authorization = `Bearer ${api_key}`;
|
headers.x_api_key = api_key;
|
||||||
}
|
}
|
||||||
if (ressource == "conversation") {
|
if (ressource == "conversation") {
|
||||||
let body = JSON.stringify(args);
|
let body = JSON.stringify(args);
|
||||||
|
@ -110,8 +110,10 @@ class Api:
|
|||||||
def _create_response_stream(self, kwargs: dict, conversation_id: str, provider: str, download_images: bool = True) -> Iterator:
|
def _create_response_stream(self, kwargs: dict, conversation_id: str, provider: str, download_images: bool = True) -> Iterator:
|
||||||
def log_handler(text: str):
|
def log_handler(text: str):
|
||||||
debug.logs.append(text)
|
debug.logs.append(text)
|
||||||
print(text)
|
if debug.logging:
|
||||||
|
print(text)
|
||||||
debug.log_handler = log_handler
|
debug.log_handler = log_handler
|
||||||
|
proxy = os.environ.get("G4F_PROXY")
|
||||||
try:
|
try:
|
||||||
result = ChatCompletion.create(**kwargs)
|
result = ChatCompletion.create(**kwargs)
|
||||||
first = True
|
first = True
|
||||||
@ -139,7 +141,7 @@ class Api:
|
|||||||
elif isinstance(chunk, ImageResponse):
|
elif isinstance(chunk, ImageResponse):
|
||||||
images = chunk
|
images = chunk
|
||||||
if download_images:
|
if download_images:
|
||||||
images = asyncio.run(copy_images(chunk.get_list(), chunk.options.get("cookies")))
|
images = asyncio.run(copy_images(chunk.get_list(), chunk.get("cookies"), proxy))
|
||||||
images = ImageResponse(images, chunk.alt)
|
images = ImageResponse(images, chunk.alt)
|
||||||
yield self._format_json("content", str(images))
|
yield self._format_json("content", str(images))
|
||||||
elif isinstance(chunk, SynthesizeData):
|
elif isinstance(chunk, SynthesizeData):
|
||||||
|
@ -153,7 +153,7 @@ class Backend_Api(Api):
|
|||||||
return response
|
return response
|
||||||
|
|
||||||
def get_provider_models(self, provider: str):
|
def get_provider_models(self, provider: str):
|
||||||
api_key = None if request.authorization is None else request.authorization.token
|
api_key = request.headers.get("x_api_key")
|
||||||
models = super().get_provider_models(provider, api_key)
|
models = super().get_provider_models(provider, api_key)
|
||||||
if models is None:
|
if models is None:
|
||||||
return "Provider not found", 404
|
return "Provider not found", 404
|
||||||
|
27
g4f/image.py
27
g4f/image.py
@ -7,8 +7,7 @@ import uuid
|
|||||||
from io import BytesIO
|
from io import BytesIO
|
||||||
import base64
|
import base64
|
||||||
import asyncio
|
import asyncio
|
||||||
from aiohttp import ClientSession
|
from aiohttp import ClientSession, ClientError
|
||||||
|
|
||||||
try:
|
try:
|
||||||
from PIL.Image import open as open_image, new as new_image
|
from PIL.Image import open as open_image, new as new_image
|
||||||
from PIL.Image import FLIP_LEFT_RIGHT, ROTATE_180, ROTATE_270, ROTATE_90
|
from PIL.Image import FLIP_LEFT_RIGHT, ROTATE_180, ROTATE_270, ROTATE_90
|
||||||
@ -20,6 +19,7 @@ from .typing import ImageType, Union, Image, Optional, Cookies
|
|||||||
from .errors import MissingRequirementsError
|
from .errors import MissingRequirementsError
|
||||||
from .providers.response import ResponseType
|
from .providers.response import ResponseType
|
||||||
from .requests.aiohttp import get_connector
|
from .requests.aiohttp import get_connector
|
||||||
|
from . import debug
|
||||||
|
|
||||||
ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'gif', 'webp', 'svg'}
|
ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'gif', 'webp', 'svg'}
|
||||||
|
|
||||||
@ -277,12 +277,14 @@ def ensure_images_dir():
|
|||||||
if not os.path.exists(images_dir):
|
if not os.path.exists(images_dir):
|
||||||
os.makedirs(images_dir)
|
os.makedirs(images_dir)
|
||||||
|
|
||||||
async def copy_images(images: list[str], cookies: Optional[Cookies] = None, proxy: Optional[str] = None):
|
async def copy_images(
|
||||||
|
images: list[str],
|
||||||
|
cookies: Optional[Cookies] = None,
|
||||||
|
proxy: Optional[str] = None
|
||||||
|
):
|
||||||
ensure_images_dir()
|
ensure_images_dir()
|
||||||
async with ClientSession(
|
async with ClientSession(
|
||||||
connector=get_connector(
|
connector=get_connector(proxy=proxy),
|
||||||
proxy=os.environ.get("G4F_PROXY") if proxy is None else proxy
|
|
||||||
),
|
|
||||||
cookies=cookies
|
cookies=cookies
|
||||||
) as session:
|
) as session:
|
||||||
async def copy_image(image: str) -> str:
|
async def copy_image(image: str) -> str:
|
||||||
@ -291,10 +293,15 @@ async def copy_images(images: list[str], cookies: Optional[Cookies] = None, prox
|
|||||||
with open(target, "wb") as f:
|
with open(target, "wb") as f:
|
||||||
f.write(extract_data_uri(image))
|
f.write(extract_data_uri(image))
|
||||||
else:
|
else:
|
||||||
async with session.get(image) as response:
|
try:
|
||||||
with open(target, "wb") as f:
|
async with session.get(image) as response:
|
||||||
async for chunk in response.content.iter_chunked(4096):
|
response.raise_for_status()
|
||||||
f.write(chunk)
|
with open(target, "wb") as f:
|
||||||
|
async for chunk in response.content.iter_chunked(4096):
|
||||||
|
f.write(chunk)
|
||||||
|
except ClientError as e:
|
||||||
|
debug.log(f"copy_images failed: {e.__class__.__name__}: {e}")
|
||||||
|
return image
|
||||||
with open(target, "rb") as f:
|
with open(target, "rb") as f:
|
||||||
extension = is_accepted_format(f.read(12)).split("/")[-1]
|
extension = is_accepted_format(f.read(12)).split("/")[-1]
|
||||||
extension = "jpg" if extension == "jpeg" else extension
|
extension = "jpg" if extension == "jpeg" else extension
|
||||||
|
@ -18,7 +18,7 @@ def is_cloudflare(text: str) -> bool:
|
|||||||
return '<div id="cf-please-wait">' in text or "<title>Just a moment...</title>" in text
|
return '<div id="cf-please-wait">' in text or "<title>Just a moment...</title>" in text
|
||||||
|
|
||||||
def is_openai(text: str) -> bool:
|
def is_openai(text: str) -> bool:
|
||||||
return "<p>Unable to load site</p>" in text
|
return "<p>Unable to load site</p>" in text or 'id="challenge-error-text"' in text
|
||||||
|
|
||||||
async def raise_for_status_async(response: Union[StreamResponse, ClientResponse], message: str = None):
|
async def raise_for_status_async(response: Union[StreamResponse, ClientResponse], message: str = None):
|
||||||
if response.status in (429, 402):
|
if response.status in (429, 402):
|
||||||
@ -27,8 +27,10 @@ async def raise_for_status_async(response: Union[StreamResponse, ClientResponse]
|
|||||||
if response.status == 403 and is_cloudflare(message):
|
if response.status == 403 and is_cloudflare(message):
|
||||||
raise CloudflareError(f"Response {response.status}: Cloudflare detected")
|
raise CloudflareError(f"Response {response.status}: Cloudflare detected")
|
||||||
elif response.status == 403 and is_openai(message):
|
elif response.status == 403 and is_openai(message):
|
||||||
raise ResponseStatusError(f"Response {response.status}: Bot are detected")
|
raise ResponseStatusError(f"Response {response.status}: OpenAI Bot detected")
|
||||||
elif not response.ok:
|
elif not response.ok:
|
||||||
|
if "<html>" in message:
|
||||||
|
message = "HTML content"
|
||||||
raise ResponseStatusError(f"Response {response.status}: {message}")
|
raise ResponseStatusError(f"Response {response.status}: {message}")
|
||||||
|
|
||||||
def raise_for_status(response: Union[Response, StreamResponse, ClientResponse, RequestsResponse], message: str = None):
|
def raise_for_status(response: Union[Response, StreamResponse, ClientResponse, RequestsResponse], message: str = None):
|
||||||
|
4
setup.py
4
setup.py
@ -8,6 +8,10 @@ here = os.path.abspath(os.path.dirname(__file__))
|
|||||||
with codecs.open(os.path.join(here, 'README.md'), encoding='utf-8') as fh:
|
with codecs.open(os.path.join(here, 'README.md'), encoding='utf-8') as fh:
|
||||||
long_description = '\n' + fh.read()
|
long_description = '\n' + fh.read()
|
||||||
|
|
||||||
|
long_description = long_description.replace("[!NOTE]", "")
|
||||||
|
long_description = long_description.replace("(docs/images/", "(https://raw.githubusercontent.com/xtekky/gpt4free/refs/heads/main/docs/images/")
|
||||||
|
long_description = long_description.replace("(docs/", "(https://github.com/xtekky/gpt4free/blob/main/docs/")
|
||||||
|
|
||||||
INSTALL_REQUIRE = [
|
INSTALL_REQUIRE = [
|
||||||
"requests",
|
"requests",
|
||||||
"aiohttp",
|
"aiohttp",
|
||||||
|
Loading…
Reference in New Issue
Block a user