Use custom user data dir for each provider

Reuse cookies and access token in Copilot
Send messages in the GUI to multiple providers at once
Add GUI documentation
Heiner Lohaus 2024-12-07 19:38:04 +01:00
parent 486190d838
commit 6a624acf55
15 changed files with 481 additions and 123 deletions


@ -132,17 +132,27 @@ To ensure the seamless operation of our application, please follow the instructi
By following these steps, you should be able to successfully install and run the application on your Windows system. If you encounter any issues during the installation process, please refer to our Issue Tracker or try to get in contact via Discord for assistance.
Run the **Webview UI** on other Platforms:
---
- [/docs/webview](docs/webview.md)
### Learn More About the GUI
##### Use your smartphone:
For detailed instructions on how to set up, configure, and use the GPT4Free GUI, refer to the **GUI Documentation**:
Run the Web UI on Your Smartphone:
- [GUI Documentation](docs/gui.md)
- [/docs/guides/phone](docs/guides/phone.md)
This guide includes step-by-step details on provider selection, managing conversations, using advanced features like speech recognition, and more.
#### Use python
---
### Use Your Smartphone
Run the Web UI on your smartphone for easy access on the go. Check out the dedicated guide to learn how to set up and use the GUI on your mobile device:
- [Run on Smartphone Guide](docs/guides/phone.md)
---
### Use python
##### Prerequisites:

docs/gui.md (new file, 147 lines)

@ -0,0 +1,147 @@
# G4F - GUI Documentation
## Overview
The G4F GUI is a self-contained, user-friendly interface designed for interacting with multiple AI models from various providers. It allows users to generate text, code, and images effortlessly. Advanced features such as speech recognition, file uploads, conversation backup/restore, and more are included. Both the backend and frontend are fully integrated into the GUI, making setup simple and seamless.
## Features
### 1. **Multiple Providers and Models**
- **Provider/Model Selection via Dropdown:** Use the select box to choose a specific **provider/model combination**.
- **Pinning Provider/Model Combinations:** After selecting a provider and model from the dropdown, click the **pin button** to add the combination to the pinned list.
- **Remove Pinned Combinations:** Each pinned provider/model combination is displayed as a button. Clicking on the button removes it from the pinned list.
- **Send Requests to Multiple Providers:** You can pin multiple provider/model combinations and send requests to all of them simultaneously, enabling fast and comprehensive content generation.
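The fan-out to several pinned provider/model combinations can be sketched outside the GUI as well. The snippet below is a minimal illustration only — `ask` is a placeholder, not the GUI's actual backend call:

```python
from concurrent.futures import ThreadPoolExecutor

def ask(provider: str, model: str, prompt: str) -> str:
    # Placeholder for a single provider/model request.
    return f"[{provider}/{model}] {prompt}"

def fan_out(pinned: list[tuple[str, str]], prompt: str) -> dict[str, str]:
    # Send the same prompt to every pinned provider/model at once.
    with ThreadPoolExecutor(max_workers=len(pinned)) as pool:
        futures = {f"{p}/{m}": pool.submit(ask, p, m, prompt) for p, m in pinned}
        return {key: fut.result() for key, fut in futures.items()}

results = fan_out([("Copilot", "gpt-4"), ("Gemini", "gemini-pro")], "Hello")
```

Each pinned combination runs as its own request, so a slow provider does not block the others.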
### 2. **Text, Code, and Image Generation**
- **Text and Code Generation:** Enter prompts to generate text or code outputs.
- **Image Generation:** Provide text prompts to generate images, which are shown as thumbnails. Clicking on a thumbnail opens the image in a lightbox view.
### 3. **Gallery Functionality**
- **Image Thumbnails:** Generated images appear as small thumbnails within the conversation.
- **Lightbox View:** Clicking a thumbnail opens the image in full size, along with the prompt used to generate it.
- **Automatic Image Download:** Enable automatic downloading of generated images through the settings.
### 4. **Conversation Management**
- **Message Reuse:** While messages can't be edited, you can copy and reuse them.
- **Message Deletion:** Conversations can be deleted for a cleaner workspace.
- **Conversation List:** The left sidebar displays a list of active and past conversations for easy navigation.
- **Change Conversation Title:** By clicking the three dots next to a conversation title, you can either delete or change its title.
- **Backup and Restore Conversations:** Backup and restore all conversations and messages as a JSON file (accessible via the settings).
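Backup and restore amount to serializing all conversations to a single JSON file and reading it back. The shape below is purely illustrative — the GUI's real backup schema may differ:

```python
import json

# Illustrative structure only, not the GUI's actual backup format.
conversations = {
    "conv-1": {
        "title": "First chat",
        "items": [
            {"role": "user", "content": "Hi"},
            {"role": "assistant", "content": "Hello!"},
        ],
    },
}

backup = json.dumps(conversations)   # what "Backup" writes to a file
restored = json.loads(backup)        # what "Restore" reads back
```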
### 5. **Speech Recognition and Synthesis**
- **Speech Input:** Use speech recognition to input prompts by speaking instead of typing.
- **Speech Output (Text-to-Speech):** The generated text can be read aloud using speech synthesis.
- **Custom Language Settings:** Configure the language used for speech recognition to match your preference.
### 6. **File Uploads**
- **Image Uploads:** Upload images that will be appended to your message and sent to the AI provider.
- **Text File Uploads:** Upload text files, and their contents will be added to the message to provide more detailed input to the AI.
### 7. **Web Access and Settings**
- **DuckDuckGo Web Access:** Enable privacy-focused web search via DuckDuckGo so responses can draw on current results.
- **Theme Toggle:** Switch between **dark mode** and **light mode** in the settings.
- **Provider Visibility:** Hide unused providers in the settings using toggle buttons.
- **Log Access:** View application logs, including error messages and debug logs, through the settings.
### 8. **Authentication**
- **Basic Authentication:** Set a password for Basic Authentication using the `--g4f-api-key` argument when starting the web server.
## Installation
You can install the G4F GUI either as a full stack or in a lightweight version:
1. **Full Stack Installation** (all dependencies, including browser support and drivers for web-based interactions):
```bash
pip install -U "g4f[all]"
```
2. **Slim Installation** (no browser drivers; ideal for headless environments where browser interactions are not required):
```bash
pip install -U "g4f[slim]"
```
## Setup
### Setting the Environment Variable
It is **recommended** to set a `G4F_API_KEY` environment variable for authentication. You can do this as follows:
On **Linux/macOS**:
```bash
export G4F_API_KEY="your-api-key-here"
```
On **Windows** (Command Prompt):
```cmd
set G4F_API_KEY=your-api-key-here
```
In PowerShell, use `$env:G4F_API_KEY = "your-api-key-here"` instead. Note that `set` treats quotes as part of the value, so omit them in Command Prompt.
### Start the GUI and Backend
Run the following command to start both the GUI and backend services based on the G4F client:
```bash
python -m g4f --debug --port 8080 --g4f-api-key $G4F_API_KEY
```
On Windows Command Prompt, reference the variable as `%G4F_API_KEY%` instead of `$G4F_API_KEY`.
This starts the GUI at `http://localhost:8080` with all necessary backend components running seamlessly.
### Access the GUI
Once the server is running, open your browser and navigate to:
```
http://localhost:8080/chat/
```
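If the server also exposes g4f's OpenAI-compatible API (an assumption — the exact routes depend on how the server was started), a client would pass the key as a Bearer token. A sketch of building such a request, without actually sending it:

```python
import os

def build_chat_request(prompt: str, model: str = "gpt-4o-mini") -> tuple[str, dict, dict]:
    # Assumed endpoint; adjust to the routes your server actually serves.
    url = "http://localhost:8080/v1/chat/completions"
    headers = {"Authorization": f"Bearer {os.environ.get('G4F_API_KEY', '')}"}
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return url, headers, payload

url, headers, payload = build_chat_request("Hello")
```

Pass `url`, `headers`, and `payload` to any HTTP client of your choice.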
## Using the Interface
1. **Select and Manage Providers/Models:**
- Use the **select box** to view the list of available providers and models.
- Select a **provider/model combination** from the dropdown.
- Click the **pin button** to add the combination to your pinned list.
- To **unpin** a combination, click the corresponding button in the pinned list.
2. **Input a Prompt:**
- Enter your prompt manually or use **speech recognition** to dictate it.
- You can also upload **images** or **text files** to be included in the prompt.
3. **Generate Content:**
- Click the "Generate" button to produce the content.
- The AI will generate text, code, or images depending on the prompt.
4. **View and Interact with Results:**
- **For Text/Code:** The generated content will appear in the conversation window.
- **For Images:** Generated images will be shown as thumbnails. Click on them to view in full size.
5. **Backup and Restore Conversations:**
- Backup all your conversations as a **JSON file** and restore them at any time via the settings.
6. **Manage Conversations:**
- Delete or rename any conversation by clicking the three dots next to the conversation title.
### Gallery Functionality
- **Image Thumbnails:** All generated images are shown as thumbnails within the conversation window.
- **Lightbox View:** Clicking a thumbnail opens the image in a larger view along with the associated prompt.
- **Automatic Image Download:** Enable automatic downloading of generated images in the settings.
## Settings Configuration
1. **API Key:** Set your API key when starting the server by defining the `G4F_API_KEY` environment variable.
2. **Provider Visibility:** Hide unused providers through the settings.
3. **Theme:** Toggle between **dark mode** and **light mode**. Disabling dark mode switches to a white theme.
4. **DuckDuckGo Access:** Enable DuckDuckGo for privacy-focused web browsing.
5. **Speech Recognition Language:** Set your preferred language for speech recognition.
6. **Log Access:** View logs, including error and debug messages, from the settings menu.
7. **Automatic Image Download:** Enable this feature to automatically download generated images.
## Known Issues
- **Gallery Loading:** Large images may take time to load depending on system performance.
- **Speech Recognition Accuracy:** Accuracy may vary depending on microphone quality or speech clarity.
- **Provider Downtime:** Some AI providers may experience downtime or disruptions.
[Return to Home](/)


@ -29,13 +29,9 @@ from .. import debug
class Conversation(BaseConversation):
conversation_id: str
cookie_jar: CookieJar
access_token: str
def __init__(self, conversation_id: str, cookie_jar: CookieJar, access_token: str = None):
def __init__(self, conversation_id: str):
self.conversation_id = conversation_id
self.cookie_jar = cookie_jar
self.access_token = access_token
class Copilot(AbstractProvider, ProviderModelMixin):
label = "Microsoft Copilot"
@ -50,6 +46,9 @@ class Copilot(AbstractProvider, ProviderModelMixin):
websocket_url = "wss://copilot.microsoft.com/c/api/chat?api-version=2"
conversation_url = f"{url}/c/api/conversations"
_access_token: str = None
_cookies: CookieJar = None
@classmethod
def create_completion(
@ -69,42 +68,43 @@ class Copilot(AbstractProvider, ProviderModelMixin):
raise MissingRequirementsError('Install or update "curl_cffi" package | pip install -U curl_cffi')
websocket_url = cls.websocket_url
access_token = None
headers = None
cookies = conversation.cookie_jar if conversation is not None else None
if cls.needs_auth or image is not None:
if conversation is None or conversation.access_token is None:
if cls._access_token is None:
try:
access_token, cookies = readHAR(cls.url)
cls._access_token, cls._cookies = readHAR(cls.url)
except NoValidHarFileError as h:
debug.log(f"Copilot: {h}")
try:
get_running_loop(check_nested=True)
access_token, cookies = asyncio.run(get_access_token_and_cookies(cls.url, proxy))
cls._access_token, cls._cookies = asyncio.run(get_access_token_and_cookies(cls.url, proxy))
except MissingRequirementsError:
raise h
else:
access_token = conversation.access_token
debug.log(f"Copilot: Access token: {access_token[:7]}...{access_token[-5:]}")
websocket_url = f"{websocket_url}&accessToken={quote(access_token)}"
headers = {"authorization": f"Bearer {access_token}"}
debug.log(f"Copilot: Access token: {cls._access_token[:7]}...{cls._access_token[-5:]}")
websocket_url = f"{websocket_url}&accessToken={quote(cls._access_token)}"
headers = {"authorization": f"Bearer {cls._access_token}"}
with Session(
timeout=timeout,
proxy=proxy,
impersonate="chrome",
headers=headers,
cookies=cookies,
cookies=cls._cookies,
) as session:
if cls._access_token is not None:
cls._cookies = session.cookies.jar
response = session.get("https://copilot.microsoft.com/c/api/user")
raise_for_status(response)
debug.log(f"Copilot: User: {response.json().get('firstName', 'null')}")
user = response.json().get('firstName')
if user is None:
cls._access_token = None
debug.log(f"Copilot: User: {user or 'null'}")
if conversation is None:
response = session.post(cls.conversation_url)
raise_for_status(response)
conversation_id = response.json().get("id")
if return_conversation:
yield Conversation(conversation_id, session.cookies.jar, access_token)
yield Conversation(conversation_id)
prompt = format_prompt(messages)
debug.log(f"Copilot: Created conversation: {conversation_id}")
else:
@ -162,7 +162,7 @@ class Copilot(AbstractProvider, ProviderModelMixin):
raise RuntimeError(f"Invalid response: {last_msg}")
async def get_access_token_and_cookies(url: str, proxy: str = None, target: str = "ChatAI",):
browser = await get_nodriver(proxy=proxy)
browser = await get_nodriver(proxy=proxy, user_data_dir="copilot")
page = await browser.get(url)
access_token = None
while access_token is None:


@ -9,11 +9,11 @@ from ..bing.create_images import create_images, create_session
class BingCreateImages(AsyncGeneratorProvider, ProviderModelMixin):
label = "Microsoft Designer in Bing"
parent = "Bing"
url = "https://www.bing.com/images/create"
working = True
needs_auth = True
image_models = ["dall-e"]
image_models = ["dall-e-3"]
models = image_models
def __init__(self, cookies: Cookies = None, proxy: str = None, api_key: str = None) -> None:
if api_key is not None:


@ -69,7 +69,7 @@ class Gemini(AsyncGeneratorProvider):
if debug.logging:
print("Skip nodriver login in Gemini provider")
return
browser = await get_nodriver(proxy=proxy)
browser = await get_nodriver(proxy=proxy, user_data_dir="gemini")
login_url = os.environ.get("G4F_LOGIN_URL")
if login_url:
yield f"Please login: [Google Gemini]({login_url})\n\n"


@ -142,7 +142,7 @@ def readHAR(url: str) -> tuple[str, str]:
return api_key, user_agent
async def get_access_token_and_user_agent(url: str, proxy: str = None):
browser = await get_nodriver(proxy=proxy)
browser = await get_nodriver(proxy=proxy, user_data_dir="designer")
page = await browser.get(url)
user_agent = await page.evaluate("navigator.userAgent")
access_token = None


@ -510,7 +510,7 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
@classmethod
async def nodriver_auth(cls, proxy: str = None):
browser = await get_nodriver(proxy=proxy)
browser = await get_nodriver(proxy=proxy, user_data_dir="chatgpt")
page = browser.main_tab
def on_request(event: nodriver.cdp.network.RequestWillBeSent):
if event.request.url == start_url or event.request.url.startswith(conversation_url):


@ -20,6 +20,7 @@
<script src="/static/js/chat.v1.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/markdown-it@13.0.1/dist/markdown-it.min.js"></script>
<link rel="stylesheet" href="/static/css/dracula.min.css">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/photoswipe/dist/photoswipe.css">
<script>
MathJax = {
chtml: {
@ -37,9 +38,34 @@
</script>
<script src="https://cdn.jsdelivr.net/npm/gpt-tokenizer/dist/cl100k_base.js" async></script>
<script src="/static/js/text_to_speech/index.js" async></script>
<!--
<script src="/static/js/whisper-web/index.js" async></script>
-->
<script type="module" async>
import PhotoSwipeLightbox from 'https://unpkg.com/photoswipe/dist/photoswipe-lightbox.esm.js';
const lightbox = new PhotoSwipeLightbox({
gallery: '#messages',
children: 'a:has(img)',
showHideAnimationType: 'none',
pswpModule: () => import('https://unpkg.com/photoswipe'),
});
lightbox.on('uiRegister', function() {
lightbox.pswp.ui.registerElement({
name: 'custom-caption',
order: 9,
isButton: false,
appendTo: 'root',
html: 'Caption text',
onInit: (el, pswp) => {
lightbox.pswp.on('change', () => {
const currSlideElement = lightbox.pswp.currSlide.data.element;
let captionHTML = '';
if (currSlideElement) {
el.innerHTML = currSlideElement.querySelector('img').getAttribute('alt');
}
});
}
});
});
lightbox.init();
</script>
<script>
const user_image = '<img src="/static/img/user.png" alt="your avatar">';
const gpt_image = '<img src="/static/img/gpt.png" alt="your avatar">';
@ -261,16 +287,16 @@
<option value="">Provider: Auto</option>
<option value="OpenaiChat">OpenAI ChatGPT</option>
<option value="Copilot">Microsoft Copilot</option>
<option value="ChatGpt">ChatGpt</option>
<option value="Gemini">Gemini</option>
<option value="MetaAI">Meta AI</option>
<option value="DeepInfraChat">DeepInfraChat</option>
<option value="Blackbox">Blackbox</option>
<option value="Gemini">Google Gemini</option>
<option value="DDG">DuckDuckGo</option>
<option value="Pizzagpt">Pizzagpt</option>
<option disabled="disabled">----</option>
</select>
</div>
<div class="field">
<button id="pin">
<i class="fa-solid fa-thumbtack"></i>
</button>
</div>
</div>
</div>
<div class="log hidden"></div>


@ -63,6 +63,7 @@
--conversations-hover: #c7a2ff4d;
--scrollbar: var(--colour-3);
--scrollbar-thumb: var(--blur-bg);
--button-hover: var(--colour-5);
}
:root {
@ -533,7 +534,7 @@ body.white .gradient{
.stop_generating, .toolbar .regenerate {
position: absolute;
z-index: 1000000;
z-index: 100000;
top: 0;
right: 0;
}
@ -729,13 +730,8 @@ label[for="camera"] {
select {
-webkit-border-radius: 8px;
-moz-border-radius: 8px;
border-radius: 8px;
-webkit-backdrop-filter: blur(20px);
backdrop-filter: blur(20px);
cursor: pointer;
background-color: var(--colour-1);
border: 1px solid var(--blur-border);
@ -745,11 +741,47 @@ select {
overflow: hidden;
outline: none;
padding: 8px 16px;
appearance: none;
width: 160px;
}
.buttons button {
border-radius: 8px;
backdrop-filter: blur(20px);
cursor: pointer;
background-color: var(--colour-1);
border: 1px solid var(--blur-border);
color: var(--colour-3);
padding: 8px;
}
.buttons button.pinned span {
max-width: 160px;
overflow: hidden;
text-wrap: nowrap;
margin-right: 16px;
display: block;
text-overflow: ellipsis;
}
.buttons button.pinned i {
position: absolute;
top: 10px;
right: 6px;
}
select:hover,
.buttons button:hover,
.stop_generating button:hover,
.toolbar .regenerate button:hover,
#send-button:hover {
background-color: var(--button-hover);
}
#provider option:disabled[value] {
display: none;
}
#systemPrompt, .settings textarea {
font-size: 15px;
width: 100%;
@ -761,6 +793,39 @@ select {
resize: vertical;
}
.pswp {
--pswp-placeholder-bg: #000 !important;
}
.pswp img {
object-fit: contain;
}
.pswp__img--placeholder--blank{
display: none !important;
}
.pswp__custom-caption {
opacity: 0 !important;
background: rgba(0, 0, 0, 0.3);
font-size: 16px;
color: #fff;
width: calc(100% - 32px);
max-width: 400px;
padding: 2px 8px;
border-radius: 4px;
position: absolute;
left: 50%;
bottom: 16px;
transform: translateX(-50%);
max-height: 100px;
overflow: auto;
}
.pswp__custom-caption:hover {
opacity: 1 !important;
}
.pswp__custom-caption a {
color: #fff;
text-decoration: underline;
}
.slide-systemPrompt {
position: absolute;
top: 0;
@ -1112,6 +1177,7 @@ ul {
--colour-3: #212529;
--scrollbar: var(--colour-1);
--scrollbar-thumb: var(--gradient);
--button-hover: var(--colour-4);
}
.white .message .assistant .fa-xmark {


@ -3,7 +3,7 @@ const message_box = document.getElementById(`messages`);
const messageInput = document.getElementById(`message-input`);
const box_conversations = document.querySelector(`.top`);
const stop_generating = document.querySelector(`.stop_generating`);
const regenerate = document.querySelector(`.regenerate`);
const regenerate_button = document.querySelector(`.regenerate`);
const sidebar = document.querySelector(".conversations");
const sidebar_button = document.querySelector(".mobile-sidebar");
const sendButton = document.getElementById("send-button");
@ -21,7 +21,7 @@ const chat = document.querySelector(".conversation");
const album = document.querySelector(".images");
const log_storage = document.querySelector(".log");
const optionElements = document.querySelectorAll(".settings input, .settings textarea, #model, #model2, #provider")
const optionElementsSelector = ".settings input, .settings textarea, #model, #model2, #provider";
let provider_storage = {};
let message_storage = {};
@ -364,7 +364,7 @@ const handle_ask = async () => {
}
</div>
<div class="count">
${count_words_and_tokens(message, get_selected_model())}
${count_words_and_tokens(message, get_selected_model()?.value)}
<i class="fa-solid fa-volume-high"></i>
<i class="fa-regular fa-clipboard"></i>
<a><i class="fa-brands fa-whatsapp"></i></a>
@ -375,7 +375,19 @@ const handle_ask = async () => {
</div>
`;
highlight(message_box);
await ask_gpt(message_id);
const all_pinned = document.querySelectorAll(".buttons button.pinned")
if (all_pinned.length > 0) {
all_pinned.forEach((el, idx) => ask_gpt(
idx == 0 ? message_id : get_message_id(),
-1,
idx != 0,
el.dataset.provider,
el.dataset.model
));
} else {
await ask_gpt(message_id);
}
};
async function safe_remove_cancel_button() {
@ -387,16 +399,21 @@ async function safe_remove_cancel_button() {
stop_generating.classList.add("stop_generating-hidden");
}
regenerate.addEventListener("click", async () => {
regenerate.classList.add("regenerate-hidden");
setTimeout(()=>regenerate.classList.remove("regenerate-hidden"), 3000);
await hide_message(window.conversation_id);
await ask_gpt(get_message_id());
regenerate_button.addEventListener("click", async () => {
regenerate_button.classList.add("regenerate-hidden");
setTimeout(()=>regenerate_button.classList.remove("regenerate-hidden"), 3000);
const all_pinned = document.querySelectorAll(".buttons button.pinned")
if (all_pinned.length > 0) {
all_pinned.forEach((el) => ask_gpt(get_message_id(), -1, true, el.dataset.provider, el.dataset.model));
} else {
await hide_message(window.conversation_id);
await ask_gpt(get_message_id());
}
});
stop_generating.addEventListener("click", async () => {
stop_generating.classList.add("stop_generating-hidden");
regenerate.classList.remove("regenerate-hidden");
regenerate_button.classList.remove("regenerate-hidden");
let key;
for (key in controller_storage) {
if (!controller_storage[key].signal.aborted) {
@ -538,7 +555,11 @@ imageInput?.addEventListener("click", (e) => {
}
});
const ask_gpt = async (message_id, message_index = -1) => {
const ask_gpt = async (message_id, message_index = -1, regenerate = false, provider = null, model = null) => {
if (!model && !provider) {
model = get_selected_model()?.value || null;
provider = providerSelect.options[providerSelect.selectedIndex].value;
}
let messages = await get_messages(window.conversation_id);
messages = prepare_messages(messages, message_index);
message_storage[message_id] = "";
@ -553,7 +574,7 @@ const ask_gpt = async (message_id, message_index = -1) => {
const message_el = document.createElement("div");
message_el.classList.add("message");
if (message_index != -1) {
if (message_index != -1 || regenerate) {
message_el.classList.add("regenerate");
}
message_el.innerHTML += `
@ -593,14 +614,13 @@ const ask_gpt = async (message_id, message_index = -1) => {
try {
const input = imageInput && imageInput.files.length > 0 ? imageInput : cameraInput;
const file = input && input.files.length > 0 ? input.files[0] : null;
const provider = providerSelect.options[providerSelect.selectedIndex].value;
const auto_continue = document.getElementById("auto_continue")?.checked;
const download_images = document.getElementById("download_images")?.checked;
let api_key = get_api_key_by_provider(provider);
await api("conversation", {
id: message_id,
conversation_id: window.conversation_id,
model: get_selected_model(),
model: model,
web_search: document.getElementById("switch").checked,
provider: provider,
messages: messages,
@ -632,7 +652,8 @@ const ask_gpt = async (message_id, message_index = -1) => {
message_storage[message_id],
message_provider,
message_index,
synthesize_storage[message_id]
synthesize_storage[message_id],
regenerate
);
await safe_load_conversation(window.conversation_id, message_index == -1);
} else {
@ -645,7 +666,7 @@ const ask_gpt = async (message_id, message_index = -1) => {
await safe_remove_cancel_button();
await register_message_buttons();
await load_conversations();
regenerate.classList.remove("regenerate-hidden");
regenerate_button.classList.remove("regenerate-hidden");
};
async function scroll_to_bottom() {
@ -848,7 +869,7 @@ const load_conversation = async (conversation_id, scroll=true) => {
message_box.innerHTML = elements;
register_message_buttons();
highlight(message_box);
regenerate.classList.remove("regenerate-hidden");
regenerate_button.classList.remove("regenerate-hidden");
if (scroll) {
message_box.scrollTo({ top: message_box.scrollHeight, behavior: "smooth" });
@ -960,7 +981,8 @@ const add_message = async (
conversation_id, role, content,
provider = null,
message_index = -1,
synthesize_data = null
synthesize_data = null,
regenerate = false
) => {
const conversation = await get_conversation(conversation_id);
if (!conversation) return;
@ -972,6 +994,9 @@ const add_message = async (
if (synthesize_data) {
new_message.synthesize = synthesize_data;
}
if (regenerate) {
new_message.regenerate = true;
}
if (message_index == -1) {
conversation.items.push(new_message);
} else {
@ -1118,6 +1143,7 @@ function open_album() {
}
const register_settings_storage = async () => {
const optionElements = document.querySelectorAll(optionElementsSelector);
optionElements.forEach((element) => {
if (element.type == "textarea") {
element.addEventListener('input', async (event) => {
@ -1145,6 +1171,7 @@ const register_settings_storage = async () => {
}
const load_settings_storage = async () => {
const optionElements = document.querySelectorAll(optionElementsSelector);
optionElements.forEach((element) => {
if (!(value = appStorage.getItem(element.id))) {
return;
@ -1226,7 +1253,7 @@ const count_input = async () => {
if (timeoutId) clearTimeout(timeoutId);
timeoutId = setTimeout(() => {
if (countFocus.value) {
inputCount.innerText = count_words_and_tokens(countFocus.value, get_selected_model());
inputCount.innerText = count_words_and_tokens(countFocus.value, get_selected_model()?.value);
} else {
inputCount.innerText = "";
}
@ -1267,6 +1294,24 @@ async function on_load() {
load_conversations();
}
const load_provider_option = (input, provider_name) => {
if (input.checked) {
providerSelect.querySelectorAll(`option[value="${provider_name}"]`).forEach(
(el) => el.removeAttribute("disabled")
);
providerSelect.querySelectorAll(`option[data-parent="${provider_name}"]`).forEach(
(el) => el.removeAttribute("disabled")
);
} else {
providerSelect.querySelectorAll(`option[value="${provider_name}"]`).forEach(
(el) => el.setAttribute("disabled", "disabled")
);
providerSelect.querySelectorAll(`option[data-parent="${provider_name}"]`).forEach(
(el) => el.setAttribute("disabled", "disabled")
);
}
};
async function on_api() {
let prompt_lock = false;
messageInput.addEventListener("keydown", async (evt) => {
@ -1292,22 +1337,42 @@ async function on_api() {
await handle_ask();
});
messageInput.focus();
register_settings_storage();
let provider_options = [];
try {
models = await api("models");
models.forEach((model) => {
let option = document.createElement("option");
option.value = option.text = model;
option.value = option.text = option.dataset.label = model;
modelSelect.appendChild(option);
});
providers = await api("providers")
Object.entries(providers).forEach(([provider, label]) => {
providers.sort((a, b) => a.label.localeCompare(b.label));
providers.forEach((provider) => {
let option = document.createElement("option");
option.value = provider;
option.text = label;
option.value = provider.name;
option.dataset.label = provider.label;
option.text = provider.label
+ (provider.vision ? " (Image Upload)" : "")
+ (provider.image ? " (Image Generation)" : "")
+ (provider.webdriver ? " (Webdriver)" : "")
+ (provider.auth ? " (Auth)" : "");
if (provider.parent)
option.dataset.parent = provider.parent;
providerSelect.appendChild(option);
if (!provider.parent) {
option = document.createElement("div");
option.classList.add("field");
option.innerHTML = `
<div class="field">
<span class="label">Enable ${provider.label}</span>
<input id="Provider${provider.name}" type="checkbox" name="Provider${provider.name}" checked="">
<label for="Provider${provider.name}" class="toogle" title="Remove provider from dropdown"></label>
</div>`;
option.querySelector("input").addEventListener("change", (event) => load_provider_option(event.target, provider.name));
settings.querySelector(".paper").appendChild(option);
provider_options[provider.name] = option;
}
});
await load_provider_models(appStorage.getItem("provider"));
} catch (e) {
@ -1316,8 +1381,11 @@ async function on_api() {
document.location.href = `/chat/error`;
}
}
register_settings_storage();
await load_settings_storage()
Object.entries(provider_options).forEach(
([provider_name, option]) => load_provider_option(option.querySelector("input"), provider_name)
);
const hide_systemPrompt = document.getElementById("hide-systemPrompt")
const slide_systemPrompt_icon = document.querySelector(".slide-systemPrompt i");
@ -1455,9 +1523,12 @@ systemPrompt?.addEventListener("input", async () => {
function get_selected_model() {
if (modelProvider.selectedIndex >= 0) {
return modelProvider.options[modelProvider.selectedIndex].value;
return modelProvider.options[modelProvider.selectedIndex];
} else if (modelSelect.selectedIndex >= 0) {
return modelSelect.options[modelSelect.selectedIndex].value;
model = modelSelect.options[modelSelect.selectedIndex];
if (model.value) {
return model;
}
}
}
@ -1554,6 +1625,7 @@ async function load_provider_models(providerIndex=null) {
models.forEach((model) => {
let option = document.createElement('option');
option.value = model.model;
option.dataset.label = model.model;
option.text = `${model.model}${model.image ? " (Image Generation)" : ""}${model.vision ? " (Image Upload)" : ""}`;
option.selected = model.default;
modelProvider.appendChild(option);
@ -1564,6 +1636,32 @@ async function load_provider_models(providerIndex=null) {
}
};
providerSelect.addEventListener("change", () => load_provider_models());
document.getElementById("pin").addEventListener("click", async () => {
const pin_container = document.getElementById("pin").parentElement;
let selected_provider = providerSelect.options[providerSelect.selectedIndex];
selected_provider = selected_provider.value ? selected_provider : null;
const selected_model = get_selected_model();
if (selected_provider || selected_model) {
const pinned = document.createElement("button");
pinned.classList.add("pinned");
if (selected_provider) pinned.dataset.provider = selected_provider.value;
if (selected_model) pinned.dataset.model = selected_model.value;
pinned.innerHTML = `
<span>
${selected_provider ? selected_provider.dataset.label || selected_provider.text : ""}
${selected_provider && selected_model ? "/" : ""}
${selected_model ? selected_model.dataset.label || selected_model.text : ""}
</span>
<i class="fa-regular fa-circle-xmark"></i>`;
pinned.addEventListener("click", () => pin_container.removeChild(pinned));
let all_pinned = pin_container.querySelectorAll(".pinned");
while (all_pinned.length > 4) {
pin_container.removeChild(all_pinned[0])
all_pinned = pin_container.querySelectorAll(".pinned");
}
pin_container.appendChild(pinned);
}
});
function save_storage() {
let filename = `chat ${new Date().toLocaleString()}.json`.replaceAll(":", "-");


@ -8,11 +8,12 @@ from flask import send_from_directory
from inspect import signature
from g4f import version, models
from g4f import get_last_provider, ChatCompletion
from g4f import get_last_provider, ChatCompletion, get_model_and_provider
from g4f.errors import VersionNotFoundError
from g4f.image import ImagePreview, ImageResponse, copy_images, ensure_images_dir, images_dir
from g4f.Provider import ProviderType, __providers__, __map__
from g4f.providers.base_provider import ProviderModelMixin
from g4f.providers.retry_provider import BaseRetryProvider
from g4f.providers.response import BaseConversation, FinishReason, SynthesizeData
from g4f.client.service import convert_to_provider
from g4f import debug
@ -47,15 +48,15 @@ class Api:
@staticmethod
def get_providers() -> dict[str, str]:
return {
provider.__name__: (provider.label if hasattr(provider, "label") else provider.__name__)
+ (" (Image Generation)" if getattr(provider, "image_models", None) else "")
+ (" (Image Upload)" if getattr(provider, "default_vision_model", None) else "")
+ (" (WebDriver)" if "webdriver" in provider.get_parameters() else "")
+ (" (Auth)" if provider.needs_auth else "")
for provider in __providers__
if provider.working
}
return [{
"name": provider.__name__,
"label": provider.label if hasattr(provider, "label") else provider.__name__,
"parent": getattr(provider, "parent", None),
"image": getattr(provider, "image_models", None) is not None,
"vision": getattr(provider, "default_vision_model", None) is not None,
"webdriver": "webdriver" in provider.get_parameters(),
"auth": provider.needs_auth,
} for provider in __providers__ if provider.working]
@staticmethod
def get_version() -> dict:
@@ -115,43 +116,44 @@ class Api:
debug.log_handler = log_handler
proxy = os.environ.get("G4F_PROXY")
try:
result = ChatCompletion.create(**kwargs)
model, provider = get_model_and_provider(
kwargs.get("model"), kwargs.get("provider"),
stream=True,
ignore_stream=True
)
result = ChatCompletion.create(**{**kwargs, "model": model, "provider": provider})
first = True
if isinstance(result, ImageResponse):
for chunk in result:
if first:
first = False
yield self._format_json("provider", get_last_provider(True))
yield self._format_json("content", str(result))
else:
for chunk in result:
if first:
first = False
yield self._format_json("provider", get_last_provider(True))
if isinstance(chunk, BaseConversation):
if provider:
if provider not in conversations:
conversations[provider] = {}
conversations[provider][conversation_id] = chunk
yield self._format_json("conversation", conversation_id)
elif isinstance(chunk, Exception):
logger.exception(chunk)
yield self._format_json("message", get_error_message(chunk))
elif isinstance(chunk, ImagePreview):
yield self._format_json("preview", chunk.to_string())
elif isinstance(chunk, ImageResponse):
images = chunk
if download_images:
images = asyncio.run(copy_images(chunk.get_list(), chunk.get("cookies"), proxy))
images = ImageResponse(images, chunk.alt)
yield self._format_json("content", str(images))
elif isinstance(chunk, SynthesizeData):
yield self._format_json("synthesize", chunk.to_json())
elif not isinstance(chunk, FinishReason):
yield self._format_json("content", str(chunk))
if debug.logs:
for log in debug.logs:
yield self._format_json("log", str(log))
debug.logs = []
if isinstance(provider, BaseRetryProvider):
provider = provider.last_provider
yield self._format_json("provider", {**provider.get_dict(), "model": model})
if isinstance(chunk, BaseConversation):
if provider:
if provider not in conversations:
conversations[provider] = {}
conversations[provider][conversation_id] = chunk
yield self._format_json("conversation", conversation_id)
elif isinstance(chunk, Exception):
logger.exception(chunk)
yield self._format_json("message", get_error_message(chunk))
elif isinstance(chunk, ImagePreview):
yield self._format_json("preview", chunk.to_string())
elif isinstance(chunk, ImageResponse):
images = chunk
if download_images:
images = asyncio.run(copy_images(chunk.get_list(), chunk.get("cookies"), proxy))
images = ImageResponse(images, chunk.alt)
yield self._format_json("content", str(images))
elif isinstance(chunk, SynthesizeData):
yield self._format_json("synthesize", chunk.to_json())
elif not isinstance(chunk, FinishReason):
yield self._format_json("content", str(chunk))
if debug.logs:
for log in debug.logs:
yield self._format_json("log", str(log))
debug.logs = []
except Exception as e:
logger.exception(e)
yield self._format_json('error', get_error_message(e))

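With this change, `get_providers()` returns a list of dicts (one entry per working provider) instead of a name-to-label mapping, so the GUI can show capability flags per provider. A minimal sketch of the new entry shape — `DemoProvider` is an illustrative stand-in, not a real provider class:

```python
class DemoProvider:
    # Stand-in provider class, for illustration only.
    label = "Demo"
    parent = None
    image_models = ["flux"]
    default_vision_model = None
    needs_auth = False
    working = True

    @classmethod
    def get_parameters(cls):
        return {"model": str, "messages": list}

def provider_entry(provider) -> dict:
    # Same shape as the new get_providers() list entries above.
    return {
        "name": provider.__name__,
        "label": getattr(provider, "label", provider.__name__),
        "parent": getattr(provider, "parent", None),
        "image": getattr(provider, "image_models", None) is not None,
        "vision": getattr(provider, "default_vision_model", None) is not None,
        "webdriver": "webdriver" in provider.get_parameters(),
        "auth": provider.needs_auth,
    }
```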
View File

@@ -55,6 +55,12 @@ class Backend_Api(Api):
return jsonify(response)
return response
def jsonify_providers(**kwargs):
response = self.get_providers(**kwargs)
if isinstance(response, list):
return jsonify(response)
return response
self.routes = {
'/backend-api/v2/models': {
'function': jsonify_models,
@@ -65,7 +71,7 @@ class Backend_Api(Api):
'methods': ['GET']
},
'/backend-api/v2/providers': {
'function': self.get_providers,
'function': jsonify_providers,
'methods': ['GET']
},
'/backend-api/v2/version': {

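The `jsonify_providers` wrapper serializes the payload only when the API returns a plain list; anything else (for example an error response) passes through unchanged. The pattern can be sketched like this, using `json.dumps` as a stand-in for Flask's `jsonify`:

```python
import json

def jsonify_if_list(get_payload):
    # Sketch of the jsonify_providers pattern: serialize plain lists,
    # pass any other response object through unchanged.
    def wrapper(**kwargs):
        response = get_payload(**kwargs)
        if isinstance(response, list):
            return json.dumps(response)  # Flask would use jsonify() here
        return response
    return wrapper
```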
View File

@@ -40,6 +40,7 @@ def fix_url(url: str) -> str:
def fix_title(title: str) -> str:
if title:
return title.replace("\n", "").replace('"', '')
return ""
def to_image(image: ImageType, is_svg: bool = False) -> Image:
"""
@@ -229,6 +230,8 @@ def format_images_markdown(images: Union[str, list], alt: str, preview: Union[st
Returns:
str: The formatted markdown string.
"""
if isinstance(images, list) and len(images) == 1:
images = images[0]
if isinstance(images, str):
result = f"[![{fix_title(alt)}]({fix_url(preview.replace('{image}', images) if preview else images)})]({fix_url(images)})"
else:

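The added unwrap means a one-element image list now renders as a single linked image rather than a one-item gallery. A simplified sketch of that behavior — the real function also handles preview URLs and sanitizes titles, and the multi-image numbering here is illustrative:

```python
def format_images_markdown_sketch(images, alt: str) -> str:
    # Simplified sketch: a one-element list is unwrapped so it renders
    # as a single linked image instead of a one-item gallery.
    if isinstance(images, list) and len(images) == 1:
        images = images[0]
    if isinstance(images, str):
        return f"[![{alt}]({images})]({images})"
    # Illustrative multi-image gallery rendering.
    return "\n".join(
        f"[![#{i + 1} {alt}]({image})]({image})"
        for i, image in enumerate(images)
    )
```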
View File

@@ -78,7 +78,7 @@ class BaseProvider(ABC):
Returns:
Dict[str, str]: A dictionary with provider's details.
"""
return {'name': cls.__name__, 'url': cls.url}
return {'name': cls.__name__, 'url': cls.url, 'label': getattr(cls, 'label', None)}
class BaseRetryProvider(BaseProvider):
"""

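`get_dict()` now also exposes the provider's optional `label`, which the GUI uses as a display name. A sketch of the new shape on an illustrative stand-in class:

```python
class LabeledProvider:
    # Stand-in provider class, for illustration only.
    url = "https://example.com"
    label = "Demo Provider"

    @classmethod
    def get_dict(cls) -> dict:
        # Mirrors the change above: include the optional display label.
        return {
            "name": cls.__name__,
            "url": cls.url,
            "label": getattr(cls, "label", None),
        }
```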
View File

@@ -174,10 +174,10 @@ def merge_cookies(cookies: Iterator[Morsel], response: Response) -> Cookies:
for cookie in response.cookies.jar:
cookies[cookie.name] = cookie.value
async def get_nodriver(proxy: str = None, **kwargs)-> Browser:
async def get_nodriver(proxy: str = None, user_data_dir = "nodriver", **kwargs)-> Browser:
if not has_nodriver:
raise MissingRequirementsError('Install "nodriver" package | pip install -U nodriver')
user_data_dir = user_config_dir("g4f-nodriver") if has_platformdirs else None
user_data_dir = user_config_dir(f"g4f-{user_data_dir}") if has_platformdirs else None
debug.log(f"Open nodriver with user_dir: {user_data_dir}")
return await nodriver.start(
user_data_dir=user_data_dir,
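With this change, each provider can request its own browser profile directory (for example `g4f-Copilot`), so cookies and access tokens persist per provider instead of sharing the single `g4f-nodriver` profile. The naming scheme can be sketched as follows, with a plain home-directory path standing in for `platformdirs.user_config_dir()`:

```python
import os

def resolve_user_data_dir(name: str = "nodriver") -> str:
    # Sketch of the per-provider profile naming: each provider gets its
    # own "g4f-<name>" config directory so its cookies and tokens
    # persist independently. The hard-coded ~/.config fallback stands
    # in for platformdirs' platform-aware user_config_dir().
    return os.path.join(os.path.expanduser("~"), ".config", f"g4f-{name}")
```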