Merge pull request #1274 from hlohaus/webdriver

Webdriver module, translate readme, support stream in create_async
This commit is contained in:
Tekky 2023-11-20 18:27:38 +00:00 committed by GitHub
commit e8d88c955f
44 changed files with 1111 additions and 628 deletions

726
README-DE.md Normal file
View File

@ -0,0 +1,726 @@
<a href="./README.md">
<img src="https://img.shields.io/badge/open in-🇬🇧 english-blue.svg" alt="Open in EN">
</a>
![248433934-7886223b-c1d1-4260-82aa-da5741f303bb](https://github.com/xtekky/gpt4free/assets/98614666/ea012c87-76e0-496a-8ac4-e2de090cc6c9)
<a href='https://ko-fi.com/xtekky' target='_blank'><img height='35' style='border:0px;height:46px;' src='https://az743702.vo.msecnd.net/cdn/kofi3.png?v=0' border='0' alt='Buy Me a Coffee at ko-fi.com' />
<div id="top"></div>
> By using this repository or any code related to it, you agree to the [legal notice](LEGAL_NOTICE.md). The author is not responsible for any copies, forks, re-uploads made by other users, or anything else related to GPT4Free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license this repository uses.
```sh
pip install -U g4f
```
## 🆕 What's New
- Join our Telegram channel: [t.me/g4f_channel](https://telegram.me/g4f_channel)
- Join our Discord group: [discord.gg/XfybzPXPH5](https://discord.gg/XfybzPXPH5)
- Explore the g4f documentation (incomplete): [g4f.mintlify.app](https://g4f.mintlify.app) | Contribute to the docs: [github.com/xtekky/gpt4free-docs](https://github.com/xtekky/gpt4free-docs)
## 📚 Table of Contents
- [🆕 What's New](#-whats-new)
- [📚 Table of Contents](#-table-of-contents)
- [🛠️ Getting Started](#-getting-started)
  - [Prerequisites:](#prerequisites)
  - [Setting up the project:](#setting-up-the-project)
    - [Install using pypi](#install-using-pypi)
    - [or](#or)
    - [Setting up with Docker:](#setting-up-with-docker)
- [💡 Usage](#-usage)
  - [The `g4f` Package](#the-g4f-package)
    - [ChatCompletion](#chatcompletion)
    - [Completion](#completion)
    - [Providers](#providers)
    - [Cookies Required](#cookies-required)
    - [Async Support](#async-support)
    - [Proxy and Timeout Support](#proxy-and-timeout-support)
  - [Interference openai-proxy API (use with openai python package)](#interference-openai-proxy-api-use-with-openai-python-package)
    - [Run interference API from PyPi package](#run-interference-api-from-pypi-package)
    - [Run interference API from repo](#run-interference-api-from-repo)
- [🚀 Providers and Models](#-providers-and-models)
  - [GPT-4](#gpt-4)
  - [GPT-3.5](#gpt-35)
  - [Other](#other)
  - [Models](#models)
- [🔗 Related GPT4Free Projects](#-related-gpt4free-projects)
- [🤝 Contribute](#-contribute)
  - [Create Provider with AI Tool](#create-provider-with-ai-tool)
  - [Create Provider](#create-provider)
- [🙌 Contributors](#-contributors)
- [©️ Copyright](#-copyright)
- [⭐ Star History](#-star-history)
- [📄 License](#-license)
## 🛠️ Getting Started
#### Prerequisites:
1. [Download and install Python](https://www.python.org/downloads/) (version 3.10+ is recommended).
#### Setting up the project:
##### Install using pypi
```
pip install -U g4f
```
##### or
1. Clone the GitHub repository:
```
git clone https://github.com/xtekky/gpt4free.git
```
2. Navigate to the project directory:
```
cd gpt4free
```
3. (Recommended) Create a Python virtual environment:
You can follow the [official Python documentation](https://docs.python.org/3/tutorial/venv.html) for virtual environments.
```
python3 -m venv venv
```
4. Activate the virtual environment:
- On Windows:
```
.\venv\Scripts\activate
```
- On macOS and Linux:
```
source venv/bin/activate
```
5. Install the required Python packages from `requirements.txt`:
```
pip install -r requirements.txt
```
6. Create a `test.py` file in the root folder and start using the repo; further instructions are below.
```py
import g4f
...
```
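For instance, a minimal `test.py` might look like the following sketch; it simply mirrors the `ChatCompletion` call shown in the Usage section below (model and prompt are placeholders):
```python
import g4f

# A minimal sketch: let g4f pick a working provider automatically
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)
```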
##### Setting up with Docker:
If you have Docker installed, you can easily set up and run the project without manually installing dependencies.
1. First, ensure you have both Docker and Docker Compose installed.
- [Install Docker](https://docs.docker.com/get-docker/)
- [Install Docker Compose](https://docs.docker.com/compose/install/)
2. Clone the GitHub repo:
```bash
git clone https://github.com/xtekky/gpt4free.git
```
3. Navigate to the project directory:
```bash
cd gpt4free
```
4. Build the Docker image:
```bash
docker-compose build
```
5. Start the service using Docker Compose:
```bash
docker-compose up
```
Your server will now be running at `http://localhost:1337`. You can interact with the API or run your tests as you would normally.
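For example, since the container exposes the openai-compatible interference API described later in this README, a quick smoke test could look like this sketch (the empty API key is a placeholder; a key is only needed for embeddings):
```python
import openai

# A sketch: point the openai client at the local container
openai.api_base = "http://localhost:1337/v1"
openai.api_key = ""  # only needed for embeddings

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```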
To stop the Docker containers, simply run:
```bash
docker-compose down
```
> [!Note]
> When using Docker, any changes you make to your local files will be reflected in the Docker container thanks to the volume mapping in the `docker-compose.yml` file. If you add or remove dependencies, however, you will need to rebuild the Docker image using `docker-compose build`.
## 💡 Usage
### The `g4f` Package
#### ChatCompletion
```python
import g4f
g4f.debug.logging = True  # Enable logging
g4f.check_version = False  # Disable automatic version checking
print(g4f.version)  # Check the version
print(g4f.Provider.Ails.params)  # Supported arguments

# Automatic selection of provider
# Streamed completion
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)

for message in response:
    print(message, flush=True, end='')

# Normal response
response = g4f.ChatCompletion.create(
    model=g4f.models.gpt_4,
    messages=[{"role": "user", "content": "Hello"}],
)  # Alternative model setting

print(response)
```
##### Completion
```python
import g4f
allowed_models = [
    'code-davinci-002',
    'text-ada-001',
    'text-babbage-001',
    'text-curie-001',
    'text-davinci-002',
    'text-davinci-003'
]

response = g4f.Completion.create(
    model='text-davinci-003',
    prompt='say this is a test'
)

print(response)
```
##### Providers
```python
import g4f
from g4f.Provider import (
    AItianhu,
    Aichat,
    Bard,
    Bing,
    ChatBase,
    ChatgptAi,
    OpenaiChat,
    Vercel,
    You,
    Yqcloud,
)

# Set the provider
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    provider=g4f.Provider.Aichat,
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)

for message in response:
    print(message)
```
##### Using Browser
Some providers use a browser to bypass the bot protection.
They use the selenium webdriver to control the browser.
The browser settings and the login data are saved in a custom directory.
If the headless mode is enabled, the browser windows are loaded invisibly.
For performance reasons, it is recommended to reuse the browser instances
and close them yourself at the end:
```python
import g4f
from undetected_chromedriver import Chrome, ChromeOptions
from g4f.Provider import (
    Bard,
    Poe,
    AItianhuSpace,
    MyShell,
    Phind,
    PerplexityAi,
)

options = ChromeOptions()
options.add_argument("--incognito")
webdriver = Chrome(options=options, headless=True)
for idx in range(10):
    response = g4f.ChatCompletion.create(
        model=g4f.models.default,
        provider=g4f.Provider.Phind,
        messages=[{"role": "user", "content": "Suggest me a name."}],
        webdriver=webdriver
    )
    print(f"{idx}:", response)
webdriver.quit()
```
##### Cookies Required
Cookies are essential for the proper functioning of some service providers. It is imperative to maintain an active session, typically achieved by logging into your account.
When running the g4f package locally, the package automatically retrieves cookies from your web browser using the `get_cookies` function. However, if you are not running it locally, you will need to provide the cookies manually by passing them via the `cookies` parameter.
```python
import g4f
from g4f.Provider import (
    Bing,
    HuggingChat,
    OpenAssistant,
)

# Usage
response = g4f.ChatCompletion.create(
    model=g4f.models.default,
    messages=[{"role": "user", "content": "Hello"}],
    provider=Bing,
    #cookies=g4f.get_cookies(".google.com"),
    cookies={"cookie_name": "value", "cookie_name2": "value2"},
    auth=True
)
```
##### Async Support
To enhance speed and overall performance, execute providers asynchronously. The total execution time will be determined by the duration of the slowest provider's execution.
```python
import g4f
import asyncio
_providers = [
    g4f.Provider.Aichat,
    g4f.Provider.ChatBase,
    g4f.Provider.Bing,
    g4f.Provider.GptGo,
    g4f.Provider.You,
    g4f.Provider.Yqcloud,
]

async def run_provider(provider: g4f.Provider.BaseProvider):
    try:
        response = await g4f.ChatCompletion.create_async(
            model=g4f.models.default,
            messages=[{"role": "user", "content": "Hello"}],
            provider=provider,
        )
        print(f"{provider.__name__}:", response)
    except Exception as e:
        print(f"{provider.__name__}:", e)

async def run_all():
    calls = [
        run_provider(provider) for provider in _providers
    ]
    await asyncio.gather(*calls)

asyncio.run(run_all())
```
##### Proxy and Timeout Support
All providers support specifying a proxy and increasing the timeout in the create functions.
```python
import g4f
response = g4f.ChatCompletion.create(
    model=g4f.models.default,
    messages=[{"role": "user", "content": "Hello"}],
    proxy="http://host:port",
    # or socks5://user:pass@host:port
    timeout=120,  # in seconds
)

print("Result:", response)
```
### Interference openai-proxy API (use with the openai python package)
#### Run the interference API from the PyPi package
```python
from g4f.api import run_api
run_api()
```
#### Run the interference API from the repo
If you want to use the embedding function, you need a Hugging Face token. You can get one at [Hugging Face Tokens](https://huggingface.co/settings/tokens). Make sure your role is set to write. If you have your token, simply use it instead of the OpenAI api-key.
Run the server:
```sh
g4f api
```
oder
```sh
python -m g4f.api
```
```python
import openai
# Set your Hugging Face token as the API key if you use embeddings
# If you don't use embeddings, leave it empty
openai.api_key = "YOUR_HUGGING_FACE_TOKEN"  # Replace with your actual token

# Set the API base URL if needed, e.g. for a local development environment
openai.api_base = "http://localhost:1337/v1"

def main():
    chat_completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "write a poem about a tree"}],
        stream=True,
    )
    if isinstance(chat_completion, dict):
        # Not streamed
        print(chat_completion.choices[0].message.content)
    else:
        # Streamed
        for token in chat_completion:
            content = token["choices"][0]["delta"].get("content")
            if content is not None:
                print(content, end="", flush=True)

if __name__ == "__main__":
    main()
```
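If you use the embedding feature, the same client setup applies. The following is only a sketch: it assumes the proxy exposes the openai-style `Embedding` endpoint and that the model name below is accepted; neither is confirmed by this README, so adjust as needed:
```python
import openai

openai.api_key = "YOUR_HUGGING_FACE_TOKEN"  # embeddings require the Hugging Face token
openai.api_base = "http://localhost:1337/v1"

# Hypothetical model name, used for illustration only
embedding = openai.Embedding.create(
    model="text-embedding-ada-002",
    input="Hello world",
)
print(embedding["data"][0]["embedding"][:8])
```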
## 🚀 Providers and Models
### GPT-4
| Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
| ------ | ------- | ------- | ----- | ------ | ------ | ---- |
| [bing.com](https://bing.com/chat) | `g4f.Provider.Bing` | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat.geekgpt.org](https://chat.geekgpt.org) | `g4f.Provider.GeekGpt` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [gptchatly.com](https://gptchatly.com) | `g4f.Provider.GptChatly` | ✔️ | ✔️ | ❌ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [liaobots.site](https://liaobots.site) | `g4f.Provider.Liaobots` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [www.phind.com](https://www.phind.com) | `g4f.Provider.Phind` | ❌ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [raycast.com](https://raycast.com) | `g4f.Provider.Raycast` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
### GPT-3.5
| Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
| ------ | ------- | ------- | ----- | ------ | ------ | ---- |
| [www.aitianhu.com](https://www.aitianhu.com) | `g4f.Provider.AItianhu` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chat3.aiyunos.top](https://chat3.aiyunos.top/) | `g4f.Provider.AItianhuSpace` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [e.aiask.me](https://e.aiask.me) | `g4f.Provider.AiAsk` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chat-gpt.org](https://chat-gpt.org/chat) | `g4f.Provider.Aichat` | ✔️ | ❌ | ❌ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [www.chatbase.co](https://www.chatbase.co) | `g4f.Provider.ChatBase` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatforai.store](https://chatforai.store) | `g4f.Provider.ChatForAi` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chatgpt.ai](https://chatgpt.ai) | `g4f.Provider.ChatgptAi` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatgptx.de](https://chatgptx.de) | `g4f.Provider.ChatgptX` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chat-shared2.zhile.io](https://chat-shared2.zhile.io) | `g4f.Provider.FakeGpt` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [freegpts1.aifree.site](https://freegpts1.aifree.site/) | `g4f.Provider.FreeGpt` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [gptalk.net](https://gptalk.net) | `g4f.Provider.GPTalk` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [ai18.gptforlove.com](https://ai18.gptforlove.com) | `g4f.Provider.GptForLove` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [gptgo.ai](https://gptgo.ai) | `g4f.Provider.GptGo` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [hashnode.com](https://hashnode.com) | `g4f.Provider.Hashnode` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [app.myshell.ai](https://app.myshell.ai/chat) | `g4f.Provider.MyShell` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [noowai.com](https://noowai.com) | `g4f.Provider.NoowAi` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chat.openai.com](https://chat.openai.com) | `g4f.Provider.OpenaiChat` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [theb.ai](https://theb.ai) | `g4f.Provider.Theb` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [sdk.vercel.ai](https://sdk.vercel.ai) | `g4f.Provider.Vercel` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [you.com](https://you.com) | `g4f.Provider.You` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat9.yqcloud.top](https://chat9.yqcloud.top/) | `g4f.Provider.Yqcloud` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chat.acytoo.com](https://chat.acytoo.com) | `g4f.Provider.Acytoo` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [aibn.cc](https://aibn.cc) | `g4f.Provider.Aibn` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [ai.ls](https://ai.ls) | `g4f.Provider.Ails` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chatgpt4online.org](https://chatgpt4online.org) | `g4f.Provider.Chatgpt4Online` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chat.chatgptdemo.net](https://chat.chatgptdemo.net) | `g4f.Provider.ChatgptDemo` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chatgptduo.com](https://chatgptduo.com) | `g4f.Provider.ChatgptDuo` | ✔️ | ❌ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chatgptfree.ai](https://chatgptfree.ai) | `g4f.Provider.ChatgptFree` | ✔️ | ❌ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chatgptlogin.ai](https://chatgptlogin.ai) | `g4f.Provider.ChatgptLogin` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [cromicle.top](https://cromicle.top) | `g4f.Provider.Cromicle` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [gptgod.site](https://gptgod.site) | `g4f.Provider.GptGod` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [opchatgpts.net](https://opchatgpts.net) | `g4f.Provider.Opchatgpts` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chat.ylokh.xyz](https://chat.ylokh.xyz) | `g4f.Provider.Ylokh` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
### Other
| Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
| ------ | ------- | ------- | ----- | ------ | ------ | ---- |
| [bard.google.com](https://bard.google.com) | `g4f.Provider.Bard` | ❌ | ❌ | ❌ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [deepinfra.com](https://deepinfra.com) | `g4f.Provider.DeepInfra` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [huggingface.co](https://huggingface.co/chat) | `g4f.Provider.HuggingChat` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [www.llama2.ai](https://www.llama2.ai) | `g4f.Provider.Llama2` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [open-assistant.io](https://open-assistant.io/chat) | `g4f.Provider.OpenAssistant` | ❌ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ✔️ |
### Models
| Model | Base Provider | Provider | Website |
| --------------------------------------- | ------------- | ------------------- | ------------------------------------------- |
| palm | Google | g4f.Provider.Bard | [bard.google.com](https://bard.google.com/) |
| h2ogpt-gm-oasst1-en-2048-falcon-7b-v3 | Hugging Face | g4f.Provider.H2o | [www.h2o.ai](https://www.h2o.ai/) |
| h2ogpt-gm-oasst1-en-2048-falcon-40b-v1 | Hugging Face | g4f.Provider.H2o | [www.h2o.ai](https://www.h2o.ai/) |
| h2ogpt-gm-oasst1-en-2048-open-llama-13b | Hugging Face | g4f.Provider.H2o | [www.h2o.ai](https://www.h2o.ai/) |
| claude-instant-v1 | Anthropic | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| claude-v1 | Anthropic | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| claude-v2 | Anthropic | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| command-light-nightly | Cohere | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| command-nightly | Cohere | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| gpt-neox-20b | Hugging Face | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| oasst-sft-1-pythia-12b | Hugging Face | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| oasst-sft-4-pythia-12b-epoch-3.5 | Hugging Face | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| santacoder | Hugging Face | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| bloom | Hugging Face | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| flan-t5-xxl | Hugging Face | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| code-davinci-002 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| gpt-3.5-turbo-16k | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| gpt-3.5-turbo-16k-0613 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| gpt-4-0613 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| text-ada-001 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| text-babbage-001 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| text-curie-001 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| text-davinci-002 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| text-davinci-003 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| llama13b-v2-chat | Replicate | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| llama7b-v2-chat | Replicate | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
## 🔗 Related GPT4Free Projects
<table>
<thead align="center">
<tr>
<td><b>🎁 Projects</b></td>
<td><b>⭐ Stars</b></td>
<td><b>📚 Forks</b></td>
<td><b>🛎 Issues</b></td>
<td><b>📬 Pull requests</b></td>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/xtekky/gpt4free"><b>gpt4free</b></a></td>
<td><a href="https://github.com/xtekky/gpt4free/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/xtekky/gpt4free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xtekky/gpt4free/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/xtekky/gpt4free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xtekky/gpt4free/issues"><img alt="Issues" src="https://img.shields.io/github/issues/xtekky/gpt4free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xtekky/gpt4free/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/xtekky/gpt4free?style=flat-square&labelColor=343b41"/></a></td>
</tr>
<td><a href="https://github.com/xiangsx/gpt4free-ts"><b>gpt4free-ts</b></a></td>
<td><a href="https://github.com/xiangsx/gpt4free-ts/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/xiangsx/gpt4free-ts?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xiangsx/gpt4free-ts/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/xiangsx/gpt4free-ts?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xiangsx/gpt4free-ts/issues"><img alt="Issues" src="https://img.shields.io/github/issues/xiangsx/gpt4free-ts?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xiangsx/gpt4free-ts/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/xiangsx/gpt4free-ts?style=flat-square&labelColor=343b41"/></a></td>
</tr>
<tr>
<td><a href="https://github.com/zukixa/cool-ai-stuff/"><b>Free AI API's & Potential Providers List</b></a></td>
<td><a href="https://github.com/zukixa/cool-ai-stuff/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/zukixa/cool-ai-stuff?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/zukixa/cool-ai-stuff/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/zukixa/cool-ai-stuff?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/zukixa/cool-ai-stuff/issues"><img alt="Issues" src="https://img.shields.io/github/issues/zukixa/cool-ai-stuff?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/zukixa/cool-ai-stuff/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/zukixa/cool-ai-stuff?style=flat-square&labelColor=343b41"/></a></td>
</tr>
<tr>
<td><a href="https://github.com/xtekky/chatgpt-clone"><b>ChatGPT-Clone</b></a></td>
<td><a href="https://github.com/xtekky/chatgpt-clone/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/xtekky/chatgpt-clone?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xtekky/chatgpt-clone/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/xtekky/chatgpt-clone?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xtekky/chatgpt-clone/issues"><img alt="Issues" src="https://img.shields.io/github/issues/xtekky/chatgpt-clone?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xtekky/chatgpt-clone/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/xtekky/chatgpt-clone?style=flat-square&labelColor=343b41"/></a></td>
</tr>
<tr>
<td><a href="https://github.com/mishalhossin/Discord-Chatbot-Gpt4Free"><b>ChatGpt Discord Bot</b></a></td>
<td><a href="https://github.com/mishalhossin/Discord-Chatbot-Gpt4Free/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/mishalhossin/Discord-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/mishalhossin/Discord-Chatbot-Gpt4Free/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/mishalhossin/Discord-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/mishalhossin/Discord-Chatbot-Gpt4Free/issues"><img alt="Issues" src="https://img.shields.io/github/issues/mishalhossin/Discord-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/mishalhossin/Coding-Chatbot-Gpt4Free/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/mishalhossin/Discord-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
</tr>
<tr>
<td><a href="https://github.com/SamirXR/Nyx-Bot"><b>Nyx-Bot (Discord)</b></a></td>
<td><a href="https://github.com/SamirXR/Nyx-Bot/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/SamirXR/Nyx-Bot?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/SamirXR/Nyx-Bot/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/SamirXR/Nyx-Bot?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/SamirXR/Nyx-Bot/issues"><img alt="Issues" src="https://img.shields.io/github/issues/SamirXR/Nyx-Bot?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/SamirXR/Nyx-Bot/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/SamirXR/Nyx-Bot?style=flat-square&labelColor=343b41"/></a></td>
</tr>
<tr>
<td><a href="https://github.com/MIDORIBIN/langchain-gpt4free"><b>LangChain gpt4free</b></a></td>
<td><a href="https://github.com/MIDORIBIN/langchain-gpt4free/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/MIDORIBIN/langchain-gpt4free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/MIDORIBIN/langchain-gpt4free/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/MIDORIBIN/langchain-gpt4free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/MIDORIBIN/langchain-gpt4free/issues"><img alt="Issues" src="https://img.shields.io/github/issues/MIDORIBIN/langchain-gpt4free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/MIDORIBIN/langchain-gpt4free/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/MIDORIBIN/langchain-gpt4free?style=flat-square&labelColor=343b41"/></a></td>
</tr>
<tr>
<td><a href="https://github.com/HexyeDEV/Telegram-Chatbot-Gpt4Free"><b>ChatGpt Telegram Bot</b></a></td>
<td><a href="https://github.com/HexyeDEV/Telegram-Chatbot-Gpt4Free/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/HexyeDEV/Telegram-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/HexyeDEV/Telegram-Chatbot-Gpt4Free/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/HexyeDEV/Telegram-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/HexyeDEV/Telegram-Chatbot-Gpt4Free/issues"><img alt="Issues" src="https://img.shields.io/github/issues/HexyeDEV/Telegram-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/HexyeDEV/Telegram-Chatbot-Gpt4Free/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/HexyeDEV/Telegram-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
</tr>
<tr>
<td><a href="https://github.com/Lin-jun-xiang/chatgpt-line-bot"><b>ChatGpt Line Bot</b></a></td>
<td><a href="https://github.com/Lin-jun-xiang/chatgpt-line-bot/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/Lin-jun-xiang/chatgpt-line-bot?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/Lin-jun-xiang/chatgpt-line-bot/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/Lin-jun-xiang/chatgpt-line-bot?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/Lin-jun-xiang/chatgpt-line-bot/issues"><img alt="Issues" src="https://img.shields.io/github/issues/Lin-jun-xiang/chatgpt-line-bot?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/Lin-jun-xiang/chatgpt-line-bot/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/Lin-jun-xiang/chatgpt-line-bot?style=flat-square&labelColor=343b41"/></a></td>
</tr>
<tr>
<td><a href="https://github.com/Lin-jun-xiang/action-translate-readme"><b>Action Translate Readme</b></a></td>
<td><a href="https://github.com/Lin-jun-xiang/action-translate-readme/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/Lin-jun-xiang/action-translate-readme?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/Lin-jun-xiang/action-translate-readme/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/Lin-jun-xiang/action-translate-readme?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/Lin-jun-xiang/action-translate-readme/issues"><img alt="Issues" src="https://img.shields.io/github/issues/Lin-jun-xiang/action-translate-readme?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/Lin-jun-xiang/action-translate-readme/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/Lin-jun-xiang/action-translate-readme?style=flat-square&labelColor=343b41"/></a></td>
</tr>
<tr>
<td><a href="https://github.com/Lin-jun-xiang/docGPT-streamlit"><b>Langchain Document GPT</b></a></td>
<td><a href="https://github.com/Lin-jun-xiang/docGPT-streamlit/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/Lin-jun-xiang/docGPT-streamlit?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/Lin-jun-xiang/docGPT-streamlit/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/Lin-jun-xiang/docGPT-streamlit?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/Lin-jun-xiang/docGPT-streamlit/issues"><img alt="Issues" src="https://img.shields.io/github/issues/Lin-jun-xiang/docGPT-streamlit?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/Lin-jun-xiang/docGPT-streamlit/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/Lin-jun-xiang/docGPT-streamlit?style=flat-square&labelColor=343b41"/></a></td>
</tr>
</tbody>
</table>
## 🤝 Contribute
#### Create Provider with AI Tool
Call in your terminal the `create_provider.py` script:
```bash
python etc/tool/create_provider.py
```
1. Enter your name for the new provider.
2. Copy and paste the `cURL` command from the developer tools of your browser.
3. Let the AI create the provider for you.
4. Customize the provider according to your needs.
#### Create Provider
1. Check out the current [list of potential providers](https://github.com/zukixa/cool-ai-stuff#ai-chat-websites), or find your own provider source!
2. Create a new file in [g4f/Provider](./g4f/Provider) with the name of the provider.
3. Implement a class that extends [BaseProvider](./g4f/Provider/base_provider.py).
```py
from __future__ import annotations

from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider

class HogeService(AsyncGeneratorProvider):
    url = "https://chat-gpt.com"
    supports_gpt_35_turbo = True
    working = True

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        yield ""
```
4. Here you can adjust the settings, for example if the website supports streaming, set `supports_stream` to `True`...
5. Write code to request the provider in `create_async_generator` and `yield` the response, _even if_ it is a one-time response; do not hesitate to look at other providers for inspiration.
6. Add the provider name in [`g4f/Provider/__init__.py`](./g4f/Provider/__init__.py)
```py
from .HogeService import HogeService

__all__ = [
    HogeService,
]
```
7. You are done! Test the provider by calling it:
```py
import g4f

response = g4f.ChatCompletion.create(model='gpt-3.5-turbo', provider=g4f.Provider.PROVIDERNAME,
                                     messages=[{"role": "user", "content": "test"}], stream=g4f.Provider.PROVIDERNAME.supports_stream)

for message in response:
    print(message, flush=True, end='')
```
## 🙌 Contributors
A list of contributors is available [here](https://github.com/xtekky/gpt4free/graphs/contributors).
The file [`Vercel.py`](https://github.com/xtekky/gpt4free/blob/main/g4f/Provider/Vercel.py) contains code from [vercel-llm-api](https://github.com/ading2210/vercel-llm-api) by [@ading2210](https://github.com/ading2210), which is licensed under the [GNU GPL v3](https://www.gnu.org/licenses/gpl-3.0.txt).
Top 1 contributor: [@hlohaus](https://github.com/hlohaus)
## ©️ Copyright
This program is licensed under the [GNU GPL v3](https://www.gnu.org/licenses/gpl-3.0.txt)
```
xtekky/gpt4free: Copyright (C) 2023 xtekky
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
```
## ⭐ Star History
<a href="https://github.com/xtekky/gpt4free/stargazers">
<img width="500" alt="Star History Chart" src="https://api.star-history.com/svg?repos=xtekky/gpt4free&type=Date">
</a>
## 📄 License
<table>
<tr>
<td>
<p align="center"> <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/9/93/GPLv3_Logo.svg/1200px-GPLv3_Logo.svg.png" width="80%"></img>
</td>
<td>
<img src="https://img.shields.io/badge/Lizenz-GNU_GPL_v3.0-rot.svg"/> <br>
Dieses Projekt steht unter der <a href="./LICENSE">GNU_GPL_v3.0-Lizenz</a>.
</td>
</tr>
</table>
<p align="right">(<a href="#top">🔼 Zurück nach oben</a>)</p>

View File

@ -1,3 +1,7 @@
<a href="./README-DE.md">
<img src="https://img.shields.io/badge/öffnen in-🇩🇪 deutsch-bleu.svg" alt="Öffnen en DE">
</a>
![248433934-7886223b-c1d1-4260-82aa-da5741f303bb](https://github.com/xtekky/gpt4free/assets/98614666/ea012c87-76e0-496a-8ac4-e2de090cc6c9)
<a href='https://ko-fi.com/xtekky' target='_blank'><img height='35' style='border:0px;height:46px;' src='https://az743702.vo.msecnd.net/cdn/kofi3.png?v=0' border='0' alt='Buy Me a Coffee at ko-fi.com' />
@ -245,12 +249,7 @@ for message in response:
##### Using Browser
Some providers use a browser to bypass the bot protection.
They use the selenium webdriver to control the browser.
The browser settings and the login data are saved in a custom directory.
If the headless mode is enabled, the browser windows are loaded invisibly.
For performance reasons, it is recommended to reuse the browser instances
and close them yourself at the end:
Some providers use a browser to bypass the bot protection. They use the selenium webdriver to control the browser. The browser settings and the login data are saved in a custom directory. If the headless mode is enabled, the browser windows are loaded invisibly. For performance reasons, it is recommended to reuse the browser instances and close them yourself at the end:
```python
import g4f
@ -266,16 +265,16 @@ from g4f.Provider import (
options = ChromeOptions()
options.add_argument("--incognito");
browser = Chrome(options=options, headless=True)
webdriver = Chrome(options=options, headless=True)
for idx in range(10):
    response = g4f.ChatCompletion.create(
        model=g4f.models.default,
        provider=g4f.Provider.Phind,
        messages=[{"role": "user", "content": "Suggest me a name."}],
        browser=browser
        webdriver=webdriver
    )
    print(f"{idx}:", response)
browser.quit()
webdriver.quit()
```
##### Cookies Required
@ -605,7 +604,7 @@ if __name__ == "__main__":
#### Create Provider with AI Tool
Call in your terminal the "create_provider" script:
Call in your terminal the `create_provider.py` script:
```bash
python etc/tool/create_provider.py
```
@ -628,8 +627,8 @@ from .base_provider import AsyncGeneratorProvider
class HogeService(AsyncGeneratorProvider):
url = "https://chat-gpt.com"
supports_gpt_35_turbo = True
working = True
supports_gpt_35_turbo = True
@classmethod
async def create_async_generator(
@ -644,7 +643,7 @@ class HogeService(AsyncGeneratorProvider):
4. Here, you can adjust the settings, for example, if the website does support streaming, set `supports_stream` to `True`...
5. Write code to request the provider in `create_async_generator` and `yield` the response, _even if_ it's a one-time response, do not hesitate to look at other providers for inspiration
6. Add the Provider Name in [g4f/Provider/**init**.py](./g4f/Provider/__init__.py)
6. Add the Provider Name in [`g4f/Provider/__init__.py`](./g4f/Provider/__init__.py)
```py
from .HogeService import HogeService
@ -708,7 +707,7 @@ along with this program. If not, see <https://www.gnu.org/licenses/>.
</td>
<td>
<img src="https://img.shields.io/badge/License-GNU_GPL_v3.0-red.svg"/> <br>
This project is licensed under <a href="./LICENSE">GNU_GPL_v3.0</a>. <img width=2300/>
This project is licensed under <a href="./LICENSE">GNU_GPL_v3.0</a>.
</td>
</tr>
</table>

View File

@ -0,0 +1,88 @@
import sys
from pathlib import Path
import asyncio
sys.path.append(str(Path(__file__).parent.parent.parent))
import g4f
g4f.debug.logging = True
from g4f.debug import access_token
provider = g4f.Provider.OpenaiChat

# Language settings for the generated translation (file name matches README-DE.md)
iso = "DE"
language = "german"
translate_prompt = f"""
Translate this markdown document to {language}.
Don't translate or change inline code examples.
```md
"""
keep_note = "Keep this: [!Note] as [!Note].\n"
# Sections whose body stays in English; only the headline is translated
blacklist = [
    '## ©️ Copyright',
    '## 🚀 Providers and Models',
    '## 🔗 Related GPT4Free Projects'
]
# Sub-headlines inside blacklisted sections that are still translated
whitelist = [
    "### Other",
    "### Models"
]
def read_text(text):
    """Extract the text between the first and the last code fence."""
    start = end = 0
    new = text.strip().split('\n')
    for i, line in enumerate(new):
        if line.startswith('```'):
            if not start:
                start = i + 1
            end = i
    return '\n'.join(new[start:end]).strip()
async def translate(text):
    """Ask the provider to translate one markdown chunk, restoring a lost closing fence if needed."""
    prompt = translate_prompt + text.strip() + '\n```'
    if "[!Note]" in text:
        prompt = keep_note + prompt
    result = read_text(await provider.create_async(
        model="",
        messages=[{"role": "user", "content": prompt}],
        access_token=access_token
    ))
    if text.endswith("```") and not result.endswith("```"):
        result += "\n```"
    return result
async def translate_part(part, i):
    blacklisted = False
    for headline in blacklist:
        if headline in part:
            blacklisted = True
    if blacklisted:
        # Translate only the headline and any whitelisted sub-headlines
        lines = part.split('\n')
        lines[0] = await translate(lines[0])
        part = '\n'.join(lines)
        for trans in whitelist:
            if trans in part:
                part = part.replace(trans, await translate(trans))
    else:
        part = await translate(part)
    print(f"[{i}] translated")
    return part
async def translate_readme(readme) -> str:
    # Split into "## " sections and translate them concurrently
    parts = readme.split('\n## ')
    print(f"{len(parts)} parts...")
    parts = await asyncio.gather(
        *[translate_part("## " + part, i) for i, part in enumerate(parts)]
    )
    return "\n\n".join(parts)
with open("README.md", "r") as fp:
readme = fp.read()
print("Translate readme...")
readme = asyncio.run(translate_readme(readme))
file = f"README-{iso}.md"
with open(file, "w") as fp:
fp.write(readme)
print(f'"{file}" saved')

View File

@ -71,21 +71,9 @@ class AItianhu(AsyncGeneratorProvider):
if "detail" not in line:
raise RuntimeError(f"Response: {line}")
content = line["detail"]["choices"][0]["delta"].get("content")
content = line["detail"]["choices"][0]["delta"].get(
"content"
)
if content:
yield content
@classmethod
@property
def params(cls):
params = [
("model", "str"),
("messages", "list[dict[str, str]]"),
("stream", "bool"),
("proxy", "str"),
("temperature", "float"),
("top_p", "int"),
]
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"

View File

@ -5,7 +5,8 @@ import random
from ..typing import CreateResult, Messages
from .base_provider import BaseProvider
from .helper import WebDriver, WebDriverSession, format_prompt, get_random_string
from .helper import format_prompt, get_random_string
from .webdriver import WebDriver, WebDriverSession
from .. import debug
class AItianhuSpace(BaseProvider):
@ -24,7 +25,7 @@ class AItianhuSpace(BaseProvider):
        domain: str = None,
        proxy: str = None,
        timeout: int = 120,
        web_driver: WebDriver = None,
        webdriver: WebDriver = None,
        headless: bool = True,
        **kwargs
    ) -> CreateResult:
@ -39,7 +40,7 @@ class AItianhuSpace(BaseProvider):
url = f"https://{domain}"
prompt = format_prompt(messages)
with WebDriverSession(web_driver, "", headless=headless, proxy=proxy) as driver:
with WebDriverSession(webdriver, "", headless=headless, proxy=proxy) as driver:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

View File

@ -58,15 +58,4 @@ class ChatBase(AsyncGeneratorProvider):
                for incorrect_response in cls.list_incorrect_responses:
                    if incorrect_response in response_data:
                        raise RuntimeError("Incorrect response")
                yield stream.decode()

    @classmethod
    @property
    def params(cls):
        params = [
            ("model", "str"),
            ("messages", "list[dict[str, str]]"),
            ("stream", "bool"),
        ]
        param = ", ".join([": ".join(p) for p in params])
        return f"g4f.provider.{cls.__name__} supports: ({param})"
                yield stream.decode()

View File

@ -57,16 +57,6 @@ class ChatForAi(AsyncGeneratorProvider):
raise RuntimeError(f"Response: {chunk.decode()}")
yield chunk.decode()
@classmethod
@property
def params(cls):
params = [
("model", "str"),
("messages", "list[dict[str, str]]"),
("stream", "bool"),
]
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"
def generate_signature(timestamp: int, message: str, id: str):
buffer = f"{timestamp}:{id}:{message}:7YN8z6d6"

View File

@ -47,16 +47,6 @@ class FreeGpt(AsyncGeneratorProvider):
raise RuntimeError("Rate limit reached")
yield chunk
@classmethod
@property
def params(cls):
params = [
("model", "str"),
("messages", "list[dict[str, str]]"),
("stream", "bool"),
]
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"
def generate_signature(timestamp: int, message: str, secret: str = ""):
data = f"{timestamp}:{message}:{secret}"

View File

@ -70,16 +70,4 @@ class GeekGpt(BaseProvider):
                raise RuntimeError(f'error | {e} :', json_data)
            if content:
                yield content

    @classmethod
    @property
    def params(cls):
        params = [
            ('model', 'str'),
            ('messages', 'list[dict[str, str]]'),
            ('stream', 'bool'),
            ('temperature', 'float'),
        ]
        param = ', '.join([': '.join(p) for p in params])
        return f'g4f.provider.{cls.__name__} supports: ({param})'
                yield content

View File

@ -97,17 +97,3 @@ class Liaobots(AsyncGeneratorProvider):
            async for stream in response.content.iter_any():
                if stream:
                    yield stream.decode()

    @classmethod
    @property
    def params(cls):
        params = [
            ("model", "str"),
            ("messages", "list[dict[str, str]]"),
            ("stream", "bool"),
            ("proxy", "str"),
            ("auth", "str"),
        ]
        param = ", ".join([": ".join(p) for p in params])
        return f"g4f.provider.{cls.__name__} supports: ({param})"

View File

@ -4,7 +4,8 @@ import time, json
from ..typing import CreateResult, Messages
from .base_provider import BaseProvider
from .helper import WebDriver, WebDriverSession, format_prompt
from .helper import format_prompt
from .webdriver import WebDriver, WebDriverSession
class MyShell(BaseProvider):
url = "https://app.myshell.ai/chat"
@ -20,10 +21,10 @@ class MyShell(BaseProvider):
        stream: bool,
        proxy: str = None,
        timeout: int = 120,
        web_driver: WebDriver = None,
        webdriver: WebDriver = None,
        **kwargs
    ) -> CreateResult:
        with WebDriverSession(web_driver, "", proxy=proxy) as driver:
        with WebDriverSession(webdriver, "", proxy=proxy) as driver:
            from selenium.webdriver.common.by import By
            from selenium.webdriver.support.ui import WebDriverWait
            from selenium.webdriver.support import expected_conditions as EC
@ -52,15 +53,16 @@ response = await fetch("https://api.myshell.ai/v1/bot/chat/send_message", {
"body": '{body}',
"method": "POST"
})
window.reader = response.body.getReader();
window._reader = response.body.pipeThrough(new TextDecoderStream()).getReader();
"""
driver.execute_script(script.replace("{body}", json.dumps(data)))
script = """
chunk = await window.reader.read();
if (chunk['done']) return null;
text = (new TextDecoder()).decode(chunk['value']);
chunk = await window._reader.read();
if (chunk['done']) {
return null;
}
content = '';
text.split('\\n').forEach((line, index) => {
chunk['value'].split('\\n').forEach((line, index) => {
if (line.startsWith('data: ')) {
try {
const data = JSON.parse(line.substring('data: '.length));

View File

@ -56,16 +56,4 @@ class Opchatgpts(AsyncGeneratorProvider):
if line["type"] == "live":
yield line["data"]
elif line["type"] == "end":
break
@classmethod
@property
def params(cls):
params = [
("model", "str"),
("messages", "list[dict[str, str]]"),
("stream", "bool"),
("proxy", "str"),
]
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"
break

View File

@ -4,7 +4,8 @@ import time
from ..typing import CreateResult, Messages
from .base_provider import BaseProvider
from .helper import WebDriver, WebDriverSession, format_prompt
from .helper import format_prompt
from .webdriver import WebDriver, WebDriverSession
class PerplexityAi(BaseProvider):
url = "https://www.perplexity.ai"
@ -20,12 +21,12 @@ class PerplexityAi(BaseProvider):
        stream: bool,
        proxy: str = None,
        timeout: int = 120,
        web_driver: WebDriver = None,
        webdriver: WebDriver = None,
        virtual_display: bool = True,
        copilot: bool = False,
        **kwargs
    ) -> CreateResult:
        with WebDriverSession(web_driver, "", virtual_display=virtual_display, proxy=proxy) as driver:
        with WebDriverSession(webdriver, "", virtual_display=virtual_display, proxy=proxy) as driver:
            from selenium.webdriver.common.by import By
            from selenium.webdriver.support.ui import WebDriverWait
            from selenium.webdriver.support import expected_conditions as EC

View File

@ -5,7 +5,8 @@ from urllib.parse import quote
from ..typing import CreateResult, Messages
from .base_provider import BaseProvider
from .helper import WebDriver, WebDriverSession, format_prompt
from .helper import format_prompt
from .webdriver import WebDriver, WebDriverSession
class Phind(BaseProvider):
url = "https://www.phind.com"
@ -21,11 +22,11 @@ class Phind(BaseProvider):
        stream: bool,
        proxy: str = None,
        timeout: int = 120,
        web_driver: WebDriver = None,
        webdriver: WebDriver = None,
        creative_mode: bool = None,
        **kwargs
    ) -> CreateResult:
        with WebDriverSession(web_driver, "", proxy=proxy) as driver:
        with WebDriverSession(webdriver, "", proxy=proxy) as driver:
            from selenium.webdriver.common.by import By
            from selenium.webdriver.support.ui import WebDriverWait
            from selenium.webdriver.support import expected_conditions as EC
@ -34,40 +35,38 @@ class Phind(BaseProvider):
driver.get(f"{cls.url}/search?q={prompt}&source=searchbox")
# Register fetch hook
driver.execute_script("""
source = """
window._fetch = window.fetch;
window.fetch = (url, options) => {
// Call parent fetch method
const result = window._fetch(url, options);
window.fetch = async (url, options) => {
const response = await window._fetch(url, options);
if (url != "/api/infer/answer") {
return result;
return response;
}
// Load response reader
result.then((response) => {
if (!response.body.locked) {
window._reader = response.body.getReader();
}
});
// Return dummy response
return new Promise((resolve, reject) => {
resolve(new Response(new ReadableStream()))
});
copy = response.clone();
window._reader = response.body.pipeThrough(new TextDecoderStream()).getReader();
return copy;
}
""")
"""
driver.execute_cdp_cmd("Page.addScriptToEvaluateOnNewDocument", {
"source": source
})
        # Need to change settings
        if model.startswith("gpt-4") or creative_mode:
            wait = WebDriverWait(driver, timeout)
        wait = WebDriverWait(driver, timeout)

        def open_dropdown():
            # Open settings dropdown
            wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "button.text-dark.dropdown-toggle")))
            driver.find_element(By.CSS_SELECTOR, "button.text-dark.dropdown-toggle").click()
            # Wait for dropdown toggle
            wait.until(EC.visibility_of_element_located((By.XPATH, "//button[text()='GPT-4']")))

        # Enable GPT-4
        if model.startswith("gpt-4") or creative_mode:
            # Enable GPT-4
            if model.startswith("gpt-4"):
                open_dropdown()
                driver.find_element(By.XPATH, "//button[text()='GPT-4']").click()
            # Enable creative mode
            if creative_mode or creative_mode == None:
                open_dropdown()
                driver.find_element(By.ID, "Creative Mode").click()
            # Submit changes
            driver.find_element(By.CSS_SELECTOR, ".search-bar-input-group button[type='submit']").click()
@ -78,10 +77,11 @@ window.fetch = (url, options) => {
chunk = driver.execute_script("""
if(window._reader) {
chunk = await window._reader.read();
if (chunk['done']) return null;
text = (new TextDecoder()).decode(chunk['value']);
if (chunk['done']) {
return null;
}
content = '';
text.split('\\r\\n').forEach((line, index) => {
chunk['value'].split('\\r\\n').forEach((line, index) => {
if (line.startsWith('data: ')) {
line = line.substring('data: '.length);
if (!line.startsWith('<PHIND_METADATA>')) {

View File

@ -4,7 +4,7 @@ import time, json, time
from ..typing import CreateResult, Messages
from .base_provider import BaseProvider
from .helper import WebDriver, WebDriverSession
from .webdriver import WebDriver, WebDriverSession
class TalkAi(BaseProvider):
url = "https://talkai.info"
@ -19,10 +19,10 @@ class TalkAi(BaseProvider):
        messages: Messages,
        stream: bool,
        proxy: str = None,
        web_driver: WebDriver = None,
        webdriver: WebDriver = None,
        **kwargs
    ) -> CreateResult:
        with WebDriverSession(web_driver, "", virtual_display=True, proxy=proxy) as driver:
        with WebDriverSession(webdriver, "", virtual_display=True, proxy=proxy) as driver:
            from selenium.webdriver.common.by import By
            from selenium.webdriver.support.ui import WebDriverWait
            from selenium.webdriver.support import expected_conditions as EC

View File

@ -55,21 +55,4 @@ class Ylokh(AsyncGeneratorProvider):
                    yield content
            else:
                chat = await response.json()
                yield chat["choices"][0]["message"].get("content")

    @classmethod
    @property
    def params(cls):
        params = [
            ("model", "str"),
            ("messages", "list[dict[str, str]]"),
            ("stream", "bool"),
            ("proxy", "str"),
            ("timeout", "int"),
            ("temperature", "float"),
            ("top_p", "float"),
        ]
        param = ", ".join([": ".join(p) for p in params])
        return f"g4f.provider.{cls.__name__} supports: ({param})"
                yield chat["choices"][0]["message"].get("content")

View File

@ -3,6 +3,8 @@ from __future__ import annotations
from asyncio import AbstractEventLoop
from concurrent.futures import ThreadPoolExecutor
from abc import ABC, abstractmethod
from inspect import signature, Parameter
from types import NoneType
from .helper import get_event_loop, get_cookies, format_prompt
from ..typing import CreateResult, AsyncResult, Messages
@ -52,17 +54,42 @@ class BaseProvider(ABC):
            executor,
            create_func
        )
    @classmethod
    @property
    def params(cls) -> str:
        params = [
            ("model", "str"),
            ("messages", "list[dict[str, str]]"),
            ("stream", "bool"),
        ]
        param = ", ".join([": ".join(p) for p in params])
        return f"g4f.provider.{cls.__name__} supports: ({param})"

        if issubclass(cls, AsyncGeneratorProvider):
            sig = signature(cls.create_async_generator)
        elif issubclass(cls, AsyncProvider):
            sig = signature(cls.create_async)
        else:
            sig = signature(cls.create_completion)

        def get_type_name(annotation: type) -> str:
            if hasattr(annotation, "__name__"):
                annotation = annotation.__name__
            elif isinstance(annotation, NoneType):
                annotation = "None"
            return str(annotation)

        args = ""
        for name, param in sig.parameters.items():
            if name in ("self", "kwargs"):
                continue
            if name == "stream" and not cls.supports_stream:
                continue
            if args:
                args += ", "
            args += "\n"
            args += " " + name
            if name != "model" and param.annotation is not Parameter.empty:
                args += f": {get_type_name(param.annotation)}"
            if param.default == "":
                args += ' = ""'
            elif param.default is not Parameter.empty:
                args += f" = {param.default}"
        return f"g4f.Provider.{cls.__name__} supports: ({args}\n)"
class AsyncProvider(BaseProvider):

View File

@ -39,18 +39,6 @@ class Aibn(AsyncGeneratorProvider):
            response.raise_for_status()
            async for chunk in response.iter_content():
                yield chunk.decode()

    @classmethod
    @property
    def params(cls):
        params = [
            ("model", "str"),
            ("messages", "list[dict[str, str]]"),
            ("stream", "bool"),
            ("temperature", "float"),
        ]
        param = ", ".join([": ".join(p) for p in params])
        return f"g4f.provider.{cls.__name__} supports: ({param})"

def generate_signature(timestamp: int, message: str, secret: str = "undefined"):

View File

@ -77,19 +77,6 @@ class Ails(AsyncGeneratorProvider):
                    yield token

    @classmethod
    @property
    def params(cls):
        params = [
            ("model", "str"),
            ("messages", "list[dict[str, str]]"),
            ("stream", "bool"),
            ("temperature", "float"),
        ]
        param = ", ".join([": ".join(p) for p in params])
        return f"g4f.provider.{cls.__name__} supports: ({param})"

def _hash(json_data: dict[str, str]) -> SHA256:
    base_string: str = f'{json_data["t"]}:{json_data["m"]}:WI,2rU#_r:r~aF4aJ36[.Z(/8Rv93Rf:{len(json_data["m"])}'

View File

@ -69,16 +69,4 @@ class Aivvm(BaseProvider):
            try:
                yield chunk.decode("utf-8")
            except UnicodeDecodeError:
                yield chunk.decode("unicode-escape")

    @classmethod
    @property
    def params(cls):
        params = [
            ('model', 'str'),
            ('messages', 'list[dict[str, str]]'),
            ('stream', 'bool'),
            ('temperature', 'float'),
        ]
        param = ', '.join([': '.join(p) for p in params])
        return f'g4f.provider.{cls.__name__} supports: ({param})'
                yield chunk.decode("unicode-escape")

View File

@ -44,15 +44,4 @@ class ChatgptDuo(AsyncProvider):
    @classmethod
    def get_sources(cls):
        return cls._sources

    @classmethod
    @property
    def params(cls):
        params = [
            ("model", "str"),
            ("messages", "list[dict[str, str]]"),
            ("stream", "bool"),
        ]
        param = ", ".join([": ".join(p) for p in params])
        return f"g4f.provider.{cls.__name__} supports: ({param})"
        return cls._sources

View File

@ -47,17 +47,4 @@ class CodeLinkAva(AsyncGeneratorProvider):
                    break
                line = json.loads(line[6:-1])
                if content := line["choices"][0]["delta"].get("content"):
                    yield content

    @classmethod
    @property
    def params(cls):
        params = [
            ("model", "str"),
            ("messages", "list[dict[str, str]]"),
            ("stream", "bool"),
            ("temperature", "float"),
        ]
        param = ", ".join([": ".join(p) for p in params])
        return f"g4f.provider.{cls.__name__} supports: ({param})"
                    yield content

View File

@ -60,18 +60,3 @@ class DfeHub(BaseProvider):
if b"content" in chunk:
data = json.loads(chunk.decode().split("data: ")[1])
yield (data["choices"][0]["delta"]["content"])
@classmethod
@property
def params(cls):
params = [
("model", "str"),
("messages", "list[dict[str, str]]"),
("stream", "bool"),
("temperature", "float"),
("presence_penalty", "int"),
("frequency_penalty", "int"),
("top_p", "int"),
]
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"

View File

@ -87,21 +87,4 @@ class EasyChat(BaseProvider):
splitData = chunk.decode().split("data:")
if len(splitData) > 1:
yield json.loads(splitData[1])["choices"][0]["delta"]["content"]
@classmethod
@property
def params(cls):
params = [
("model", "str"),
("messages", "list[dict[str, str]]"),
("stream", "bool"),
("temperature", "float"),
("presence_penalty", "int"),
("frequency_penalty", "int"),
("top_p", "int"),
("active_server", "int"),
]
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"


@@ -66,15 +66,4 @@ class Equing(BaseProvider):
if b'content' in line:
line_json = json.loads(line.decode('utf-8').split('data: ')[1])
if token := line_json['choices'][0]['delta'].get('content'):
yield token
@classmethod
@property
def params(cls):
params = [
("model", "str"),
("messages", "list[dict[str, str]]"),
("stream", "bool"),
]
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"


@@ -74,15 +74,4 @@ class FastGpt(BaseProvider):
):
yield token
except:
continue
@classmethod
@property
def params(cls):
params = [
("model", "str"),
("messages", "list[dict[str, str]]"),
("stream", "bool"),
]
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"


@@ -55,22 +55,6 @@ class GetGpt(BaseProvider):
line_json = json.loads(line.decode('utf-8').split('data: ')[1])
yield (line_json['choices'][0]['delta']['content'])
@classmethod
@property
def params(cls):
params = [
('model', 'str'),
('messages', 'list[dict[str, str]]'),
('stream', 'bool'),
('temperature', 'float'),
('presence_penalty', 'int'),
('frequency_penalty', 'int'),
('top_p', 'int'),
('max_tokens', 'int'),
]
param = ', '.join([': '.join(p) for p in params])
return f'g4f.provider.{cls.__name__} supports: ({param})'
def _encrypt(e: str):
t = os.urandom(8).hex().encode('utf-8')


@@ -86,22 +86,4 @@ class H2o(AsyncGeneratorProvider):
f"{cls.url}/conversation/{conversationId}",
proxy=proxy,
) as response:
response.raise_for_status()
@classmethod
@property
def params(cls):
params = [
("model", "str"),
("messages", "list[dict[str, str]]"),
("stream", "bool"),
("temperature", "float"),
("truncate", "int"),
("max_new_tokens", "int"),
("do_sample", "bool"),
("repetition_penalty", "float"),
("return_full_text", "bool"),
]
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"


@@ -48,16 +48,4 @@ class Lockchat(BaseProvider):
if b"content" in token:
token = json.loads(token.decode("utf-8").split("data: ")[1])
if token := token["choices"][0]["delta"].get("content"):
yield (token)
@classmethod
@property
def params(cls):
params = [
("model", "str"),
("messages", "list[dict[str, str]]"),
("stream", "bool"),
("temperature", "float"),
]
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"


@@ -98,18 +98,6 @@ class Myshell(AsyncGeneratorProvider):
raise RuntimeError(f"Received unexpected message: {data_type}")
@classmethod
@property
def params(cls):
params = [
("model", "str"),
("messages", "list[dict[str, str]]"),
("stream", "bool"),
]
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"
def generate_timestamp() -> str:
return str(
int(


@@ -58,17 +58,4 @@ class V50(BaseProvider):
)
if "https://fk1.v50.ltd" not in response.text:
yield response.text
@classmethod
@property
def params(cls):
params = [
("model", "str"),
("messages", "list[dict[str, str]]"),
("stream", "bool"),
("temperature", "float"),
("top_p", "int"),
]
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"


@@ -50,18 +50,4 @@ class Vitalentum(AsyncGeneratorProvider):
break
line = json.loads(line[6:-1])
if content := line["choices"][0]["delta"].get("content"):
yield content
@classmethod
@property
def params(cls):
params = [
("model", "str"),
("messages", "list[dict[str, str]]"),
("stream", "bool"),
("proxy", "str"),
("temperature", "float"),
]
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"


@@ -54,15 +54,4 @@ class Wuguokai(BaseProvider):
if len(_split) > 1:
yield _split[1].strip()
else:
yield _split[0].strip()
@classmethod
@property
def params(cls):
params = [
("model", "str"),
("messages", "list[dict[str, str]]"),
("stream", "bool")
]
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"


@@ -6,7 +6,6 @@ import webbrowser
import random
import string
import secrets
import time
from os import path
from asyncio import AbstractEventLoop
from platformdirs import user_config_dir
@@ -21,26 +20,8 @@ from browser_cookie3 import (
firefox,
BrowserCookieError
)
try:
from selenium.webdriver.remote.webdriver import WebDriver
except ImportError:
class WebDriver():
pass
try:
from undetected_chromedriver import Chrome, ChromeOptions
except ImportError:
class Chrome():
def __init__():
raise RuntimeError('Please install the "undetected_chromedriver" package')
class ChromeOptions():
def add_argument():
pass
try:
from pyvirtualdisplay import Display
except ImportError:
pass
from ..typing import Dict, Messages, Union, Tuple
from ..typing import Dict, Messages
from .. import debug
# Change event loop policy on windows
@@ -135,74 +116,11 @@ def format_prompt(messages: Messages, add_special_tokens=False) -> str:
return f"{formatted}\nAssistant:"
def get_browser(
user_data_dir: str = None,
headless: bool = False,
proxy: str = None,
options: ChromeOptions = None
) -> Chrome:
if user_data_dir == None:
user_data_dir = user_config_dir("g4f")
if proxy:
if not options:
options = ChromeOptions()
options.add_argument(f'--proxy-server={proxy}')
return Chrome(options=options, user_data_dir=user_data_dir, headless=headless)
class WebDriverSession():
def __init__(
self,
web_driver: WebDriver = None,
user_data_dir: str = None,
headless: bool = False,
virtual_display: bool = False,
proxy: str = None,
options: ChromeOptions = None
):
self.web_driver = web_driver
self.user_data_dir = user_data_dir
self.headless = headless
self.virtual_display = virtual_display
self.proxy = proxy
self.options = options
def reopen(
self,
user_data_dir: str = None,
headless: bool = False,
virtual_display: bool = False
) -> WebDriver:
if user_data_dir == None:
user_data_dir = self.user_data_dir
self.default_driver.quit()
if not virtual_display and self.virtual_display:
self.virtual_display.stop()
self.default_driver = get_browser(user_data_dir, headless, self.proxy)
return self.default_driver
def __enter__(self) -> WebDriver:
if self.web_driver:
return self.web_driver
if self.virtual_display == True:
self.virtual_display = Display(size=(1920,1080))
self.virtual_display.start()
self.default_driver = get_browser(self.user_data_dir, self.headless, self.proxy, self.options)
return self.default_driver
def __exit__(self, exc_type, exc_val, exc_tb):
if self.default_driver:
self.default_driver.close()
time.sleep(0.1)
self.default_driver.quit()
if self.virtual_display:
self.virtual_display.stop()
def get_random_string(length: int = 10) -> str:
return ''.join(
random.choice(string.ascii_lowercase + string.digits)
for _ in range(length)
)
def get_random_hex() -> str:
return secrets.token_hex(16).zfill(32)


@@ -4,7 +4,8 @@ import time
from ...typing import CreateResult, Messages
from ..base_provider import BaseProvider
from ..helper import WebDriver, WebDriverSession, format_prompt
from ..helper import format_prompt
from ..webdriver import WebDriver, WebDriverSession
class Bard(BaseProvider):
url = "https://bard.google.com"
@@ -18,13 +19,13 @@ class Bard(BaseProvider):
messages: Messages,
stream: bool,
proxy: str = None,
web_driver: WebDriver = None,
webdriver: WebDriver = None,
user_data_dir: str = None,
headless: bool = True,
**kwargs
) -> CreateResult:
prompt = format_prompt(messages)
session = WebDriverSession(web_driver, user_data_dir, headless, proxy=proxy)
session = WebDriverSession(webdriver, user_data_dir, headless, proxy=proxy)
with session as driver:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
@@ -36,8 +37,8 @@ class Bard(BaseProvider):
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div.ql-editor.textarea")))
except:
# Reopen browser for login
if not web_driver:
driver = session.reopen(headless=False)
if not webdriver:
driver = session.reopen()
driver.get(f"{cls.url}/chat")
wait = WebDriverWait(driver, 240)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div.ql-editor.textarea")))
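Because the argument is now called `webdriver` (instead of `web_driver`), an already authenticated browser can be passed in and reused across calls. A hedged sketch; the model name and prompt are placeholders:

```python
import g4f
from g4f.Provider.webdriver import get_browser

# Reuse one Chrome instance (including its Google login) for several
# requests; passing "webdriver" makes Bard skip reopening a browser.
driver = get_browser()  # requires the undetected-chromedriver package
response = g4f.ChatCompletion.create(
    model="palm",  # placeholder for the model served by Bard
    messages=[{"role": "user", "content": "Hello"}],
    provider=g4f.Provider.Bard,
    webdriver=driver,
)
print(response)
driver.quit()
```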


@@ -59,17 +59,4 @@ class HuggingChat(AsyncGeneratorProvider):
break
async with session.delete(f"{cls.url}/conversation/{conversation_id}", proxy=proxy) as response:
response.raise_for_status()
@classmethod
@property
def params(cls):
params = [
("model", "str"),
("messages", "list[dict[str, str]]"),
("stream", "bool"),
("proxy", "str"),
]
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"


@@ -87,15 +87,3 @@ class OpenAssistant(AsyncGeneratorProvider):
}
async with session.delete("https://open-assistant.io/api/chat", proxy=proxy, params=params) as response:
response.raise_for_status()
@classmethod
@property
def params(cls):
params = [
("model", "str"),
("messages", "list[dict[str, str]]"),
("stream", "bool"),
("proxy", "str"),
]
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"


@@ -6,7 +6,8 @@ from asyncstdlib.itertools import tee
from async_property import async_cached_property
from ..base_provider import AsyncGeneratorProvider
from ..helper import get_browser, get_event_loop
from ..helper import get_event_loop
from ..webdriver import get_browser
from ...typing import AsyncResult, Messages
from ...requests import StreamSession
@@ -38,7 +39,10 @@ class OpenaiChat(AsyncGeneratorProvider):
**kwargs
) -> Response:
if prompt:
messages.append({"role": "user", "content": prompt})
messages.append({
"role": "user",
"content": prompt
})
generator = cls.create_async_generator(
model,
messages,
@@ -49,12 +53,9 @@ class OpenaiChat(AsyncGeneratorProvider):
response_fields=True,
**kwargs
)
fields: ResponseFields = await anext(generator)
if "access_token" not in kwargs:
kwargs["access_token"] = cls._access_token
return Response(
generator,
fields,
await anext(generator),
action,
messages,
kwargs
@@ -87,7 +88,6 @@ class OpenaiChat(AsyncGeneratorProvider):
headers = {
"Accept": "text/event-stream",
"Authorization": f"Bearer {access_token}",
"Cookie": 'intercom-device-id-dgkjq2bp=0f047573-a750-46c8-be62-6d54b56e7bf0; ajs_user_id=user-iv3vxisaoNodwWpxmNpMfekH; ajs_anonymous_id=fd91be0b-0251-4222-ac1e-84b1071e9ec1; __Host-next-auth.csrf-token=d2b5f67d56f7dd6a0a42ae4becf2d1a6577b820a5edc88ab2018a59b9b506886%7Ce5c33eecc460988a137cbc72d90ee18f1b4e2f672104f368046df58e364376ac; _cfuvid=gt_mA.q6rue1.7d2.AR0KHpbVBS98i_ppfi.amj2._o-1700353424353-0-604800000; cf_clearance=GkHCfPSFU.NXGcHROoe4FantnqmnNcluhTNHz13Tk.M-1700353425-0-1-dfe77f81.816e9bc2.714615da-0.2.1700353425; __Secure-next-auth.callback-url=https%3A%2F%2Fchat.openai.com; intercom-session-dgkjq2bp=UWdrS1hHazk5VXN1c0V5Q1F0VXdCQmsyTU9pVjJMUkNpWnFnU3dKWmtIdGwxTC9wbjZuMk5hcEc0NWZDOGdndS0tSDNiaDNmMEdIL1RHU1dFWDBwOHFJUT09--f754361b91fddcd23a13b288dcb2bf8c7f509e91; _uasid="Z0FBQUFBQmxXVnV0a3dmVno4czRhcDc2ZVcwaUpSNUdZejlDR25YSk5NYTJQQkpyNmRvOGxjTHMyTlAxWmJhaURrMVhjLXZxQXdZeVpBbU1aczA5WUpHT2dwaS1MOWc4MnhyNWFnbGRzeGdJcGFKT0ZRdnBTMVJHcGV2MGNTSnVQY193c0hqUWIycHhQRVF4dENlZ3phcDdZeHgxdVhoalhrZmtZME9NbWhMQjdVR3Vzc3FRRk0ybjJjNWMwTWtIRjdPb19lUkFtRmV2MDVqd1kwWU11QTYtQkdZenEzVHhLMGplY1hZM3FlYUt1cVZaNWFTRldleEJETzJKQjk1VTJScy1GUnMxUVZWMnVxYklxMjdockVZbkZyd1R4U1RtMnA1ZzlSeXphdmVOVk9xeEdrRkVOSjhwTVd1QzFtQjhBcWdDaE92Q1VlM2pwcjFQTXRuLVJNRVlZSGpIdlZ0aGV3PT0="; _dd_s=rum=0&expire=1700356244884; __Secure-next-auth.session-token=eyJhbGciOiJkaXIiLCJlbmMiOiJBMjU2R0NNIn0..3aK6Fbdy2_8f07bf.8eT2xgonrCnz7ySY6qXFsg3kzL6UQfXKAYaw3tyn-6_X9657zy47k9qGvmi9mF0QKozj5jau3_Ca62AQQ7FmeC6Y2F1urtzqrXqwTTsQ2LuzFPIQkx6KKb2DXc8zW2-oyEzJ_EY5yxfLB2RlRkSh3M7bYNZh4_ltEcfkj38s_kIPGMxv34udtPWGWET99MCjkdwQWXylJag4s0fETA0orsBAKnGCyqAUNJbb_D7BYtGSV-MQ925kZMG6Di_QmfO0HQWURDYjmdRNcuy1PT_xJ1DJko8sjL42i4j3RhkNDkhqCIqyYImz2eHFWHW7rYKxTkrBhlCPMS5hRdcCswD7JYPcSBiwnVRYgyOocFGXoFvQgIZ2FX9NiZ3SMEVM1VwIGSE-qH0H2nMa8_iBvsOgOWJgKjVAvzzyzZvRVDUUHzJrikSFPNONVDU3h-04c1kVL4qIu9DfeTPN7n8AvNmYwMbro0L9-IUAeXNo4-pwF0Kt-AtTsamqWvMqnK4O_YOyLnDDlvkmnOvDC2d5uinwlQIxr6APO6qFfGLlHiLZemKoekxEE1Fx70dl-Ouhk1VIzbF3OC6XNNxeBm9BUYUiHdL0wj2H9rHgX4cz6ZmS_3VTgpD6UJh-evu5KJ2gIvjYmVbyzEN0aPNDxfvBaOm-Ezpy4bUJ2bUrOwNn-0knWkDiTvjYmNhCyefPCtCF6rpKNay8PCw_yh79C4SdEP6Q4V7LI0Tvdi5uz7kLCiBC4AT9L0ao1WDX03mkUOpjvzHDvPLmj8chW3lTVm_kA0eYGQY4wT0jzleWlfV0Q8rB2oYECNLWksA3F1zlGfcl4lQjprvTXRePkvAbMpoJEsZD3Ylq7-foLDLk4-M2LYAFZDs282AY04sFjAjQBxTELFCCuDgTIgTXSIskY_XCxpVXDbdLlbCJY7XVK45ybwtfqwlKRp8Mo0B131uQAFc-migHaUaoGujxJJk21bP8F0OmhNYHBo4FQqE1rQm2JH5bNM7txKeh5KXdJgVUVbRSr7OIp_OF5-Bx_v9eRBGAIDkue26E2-O8Rnrp5zQ5TnvecQLDaUzWavCLPwsZ0_gsOLBxNOmauNYZtF8IElCsQSFDdhoiMxXsYUm4ZYKEAy3GWq8HGTAvBhNkh1hvnI7y-d8-DOaZf_D_D98-olZfm-LUkeosLNpPB9rxYMqViCiW3KrXE9Yx0wlFm5ePKaVvR7Ym_EPhSOhJBKFPCvdTdMZSNPUcW0ZJBVByq0A9sxD51lYq3gaFyqh94S4s_ox182AQ3szGzHkdgLcnQmJG9OYvKxAVcd43eg6_gODAYhx02GjbMw-7JTAhyXSeCrlMteHyOXl8hai-3LilC3PmMzi7Vbu49dhF1s4LcVlUowen5ira44rQQaB26mdaOUoQfodgt66M3RTWGPXyK1Nb72AzSXsCKyaQPbzeb6cN0fdGSdG4ktwvR04eFNEkquo_3aKu2GmUKTD0XcRx9dYrfXjgY-X1DDTVs1YND2gRhdx7FFEeBVjtbj2UqmG3Rvd4IcHGe7OnYWw2MHDcol68SsR1KckXWwWREz7YTGUnDB2M1kx_H4W2mjclytnlHOnYU3RflegRPeSTbdzUZJvGKXCCz45luHkQWN_4DExE76D-9YqbFIz-RY5yL4h-Zs-i2xjm2K-4xCMM9nQIOqhLMqixIZQ2ldDAidKoYtbs5ppzbcBLyrZM96bq9DwRBY3aacqWdlRd-TfX0wv5KO4fo0sSh5FsuhuN0zcEV_NNXgqIEM_p14EcPqgbrAvCBQ8os70TRBQLXiF0EniSofGjxwF8kQvUk3C6Wfc8cTTeN-E6GxCVTn91HBwA1iSEZlRLMVb8_BcRJNqwbgnb_07jR6-eo42u88CR3KQdAWwbQRdMxsURFwZ0ujHXVGG0Ll6qCFBcHXWyDO1x1yHdHnw8_8yF26pnA2iPzrFR-8glMgIA-639sLuGAxjO1_ZuvJ9CAB41Az9S_jaZwaWy215Hk4-BRYD-MKmHtonwo3rrxhE67WJgbbu14efsw5nT6ow961pffgwXov5VA1Rg7nv1E8RvQOx7umWW6o8R4W6L8f2COsmPTXfgwIjoJKkjhUqAQ8ceG7cM0ET-38yaC0ObU8EkXfdGGgxI28qTEZWczG66_iM4hw7Q
EGCY5Cz2kbO6LETAiw9OsSigtBvDS7f0Ou0bZ41pdK7G3FmvdZAnjWPjObnDF4k4uWfn7mzt0fgj3FyqK20JezRDyGuAbUUhOvtZpc9sJpzxR34eXEZTouuALrHcGuNij4z6rx51FrQsaMtiup8QVrhtZbXtKLMYnWYSbkhuTeN2wY-xV1ZUsQlakIZszzGF7kuIG87KKWMpuPMvbXjz6Pp_gWJiIC6aQuk8xl5g0iBPycf_6Q-MtpuYxzNE2TpI1RyR9mHeXmteoRzrFiWp7yEC-QGNFyAJgxTqxM3CjHh1Jt6IddOsmn89rUo1dZM2Smijv_fbIv3avXLkIPX1KZjILeJCtpU0wAdsihDaRiRgDdx8fG__F8zuP0n7ziHas73cwrfg-Ujr6DhC0gTNxyd9dDA_oho9N7CQcy6EFmfNF2te7zpLony0859jtRv2t1TnpzAa1VvMK4u6mXuJ2XDo04_6GzLO3aPHinMdl1BcIAWnqAqWAu3euGFLTHOhXlfijut9N1OCifd_zWjhVtzlR39uFeCQBU5DyQArzQurdoMx8U1ETsnWgElxGSStRW-YQoPsAJ87eg9trqKspFpTVlAVN3t1GtoEAEhcwhe81SDssLmKGLc.7PqS6jRGTIfgTPlO7Ognvg; __cf_bm=VMWoAKEB45hQSwxXtnYXcurPaGZDJS4dMi6dIMFLwdw-1700355394-0-ATVsbq97iCaTaJbtYr8vtg1Zlbs3nLrJLKVBHYa2Jn7hhkGclqAy8Gbyn5ePEhDRqj93MsQmtayfYLqY5n4WiLY=; __cflb=0H28vVfF4aAyg2hkHFH9CkdHRXPsfCUf6VpYf2kz3RX'
}
async with StreamSession(
proxies={"https": proxy},
@@ -95,24 +95,22 @@ class OpenaiChat(AsyncGeneratorProvider):
headers=headers,
timeout=timeout
) as session:
data = {
"action": action,
"arkose_token": await get_arkose_token(proxy, timeout),
"conversation_id": conversation_id,
"parent_message_id": parent_id,
"model": models[model],
"history_and_training_disabled": history_disabled and not auto_continue,
}
if action != "continue":
data["messages"] = [{
"id": str(uuid.uuid4()),
"author": {"role": "user"},
"content": {"content_type": "text", "parts": [messages[-1]["content"]]},
}]
first = True
end_turn = EndTurn()
while first or auto_continue and not end_turn.is_end:
first = False
while not end_turn.is_end:
data = {
"action": action,
"arkose_token": await get_arkose_token(proxy, timeout),
"conversation_id": conversation_id,
"parent_message_id": parent_id,
"model": models[model],
"history_and_training_disabled": history_disabled and not auto_continue,
}
if action != "continue":
data["messages"] = [{
"id": str(uuid.uuid4()),
"author": {"role": "user"},
"content": {"content_type": "text", "parts": [messages[-1]["content"]]},
}]
async with session.post(f"{cls.url}/backend-api/conversation", json=data) as response:
try:
response.raise_for_status()
@@ -120,43 +118,38 @@ class OpenaiChat(AsyncGeneratorProvider):
raise RuntimeError(f"Error {response.status_code}: {await response.text()}")
last_message = 0
async for line in response.iter_lines():
if line.startswith(b"data: "):
line = line[6:]
if line == b"[DONE]":
break
try:
line = json.loads(line)
except:
continue
if "message" not in line:
continue
if "error" in line and line["error"]:
raise RuntimeError(line["error"])
if "message_type" not in line["message"]["metadata"]:
continue
if line["message"]["author"]["role"] != "assistant":
continue
if line["message"]["metadata"]["message_type"] in ("next", "continue", "variant"):
conversation_id = line["conversation_id"]
parent_id = line["message"]["id"]
if response_fields:
response_fields = False
yield ResponseFields(conversation_id, parent_id, end_turn)
new_message = line["message"]["content"]["parts"][0]
yield new_message[last_message:]
last_message = len(new_message)
if "finish_details" in line["message"]["metadata"]:
if line["message"]["metadata"]["finish_details"]["type"] == "max_tokens":
end_turn.end()
data = {
"action": "continue",
"arkose_token": await get_arkose_token(proxy, timeout),
"conversation_id": conversation_id,
"parent_message_id": parent_id,
"model": models[model],
"history_and_training_disabled": False,
}
if not line.startswith(b"data: "):
continue
line = line[6:]
if line == b"[DONE]":
break
try:
line = json.loads(line)
except:
continue
if "message" not in line:
continue
if "error" in line and line["error"]:
raise RuntimeError(line["error"])
if "message_type" not in line["message"]["metadata"]:
continue
if line["message"]["author"]["role"] != "assistant":
continue
if line["message"]["metadata"]["message_type"] in ("next", "continue", "variant"):
conversation_id = line["conversation_id"]
parent_id = line["message"]["id"]
if response_fields:
response_fields = False
yield ResponseFields(conversation_id, parent_id, end_turn)
new_message = line["message"]["content"]["parts"][0]
yield new_message[last_message:]
last_message = len(new_message)
if "finish_details" in line["message"]["metadata"]:
if line["message"]["metadata"]["finish_details"]["type"] == "stop":
end_turn.end()
if not auto_continue:
break
action = "continue"
await asyncio.sleep(5)
@classmethod
@@ -167,7 +160,7 @@ class OpenaiChat(AsyncGeneratorProvider):
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = get_browser("~/openai", proxy=proxy)
driver = get_browser(proxy=proxy)
except ImportError:
return
try:
@@ -193,18 +186,6 @@ class OpenaiChat(AsyncGeneratorProvider):
raise RuntimeError("Read access token failed")
return cls._access_token
@classmethod
@property
def params(cls):
params = [
("model", "str"),
("messages", "list[dict[str, str]]"),
("stream", "bool"),
("proxy", "str"),
("access_token", "str"),
]
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"
async def get_arkose_token(proxy: str = None, timeout: int = None) -> str:
config = {
@@ -293,7 +274,7 @@ class Response():
async def variant(self, **kwargs) -> Response:
if self.action != "next":
raise RuntimeError("Can't create variant with continue or variant request.")
raise RuntimeError("Can't create variant from continue or variant request.")
return await OpenaiChat.create(
**self._options,
messages=self._messages,
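The reworked loop above now drives multi-part answers itself: it keeps sending `continue` actions until `finish_details.type` is `"stop"`. A hedged sketch of using it; the access token value is a placeholder:

```python
import asyncio
from g4f.Provider import OpenaiChat

async def main():
    # auto_continue=True lets the generator above stitch together
    # replies that were cut off with finish_details.type "max_tokens".
    async for chunk in OpenaiChat.create_async_generator(
        "gpt-3.5-turbo",
        [{"role": "user", "content": "Write a long story"}],
        access_token="eyJ...",  # placeholder; normally read via browser login
        auto_continue=True,
    ):
        print(chunk, end="", flush=True)

asyncio.run(main())
```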


@@ -4,7 +4,8 @@ import time
from ...typing import CreateResult, Messages
from ..base_provider import BaseProvider
from ..helper import WebDriver, WebDriverSession, format_prompt
from ..helper import format_prompt
from ..webdriver import WebDriver, WebDriverSession
models = {
"meta-llama/Llama-2-7b-chat-hf": {"name": "Llama-2-7b"},
@@ -33,7 +34,7 @@ class Poe(BaseProvider):
messages: Messages,
stream: bool,
proxy: str = None,
web_driver: WebDriver = None,
webdriver: WebDriver = None,
user_data_dir: str = None,
headless: bool = True,
**kwargs
@@ -44,7 +45,7 @@ class Poe(BaseProvider):
raise ValueError(f"Model are not supported: {model}")
prompt = format_prompt(messages)
session = WebDriverSession(web_driver, user_data_dir, headless, proxy=proxy)
session = WebDriverSession(webdriver, user_data_dir, headless, proxy=proxy)
with session as driver:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
@@ -80,8 +81,8 @@ class Poe(BaseProvider):
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "textarea[class^='GrowingTextArea']")))
except:
# Reopen browser for login
if not web_driver:
driver = session.reopen(headless=False)
if not webdriver:
driver = session.reopen()
driver.get(f"{cls.url}/{models[model]['name']}")
wait = WebDriverWait(driver, 240)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "textarea[class^='GrowingTextArea']")))


@@ -60,18 +60,3 @@ class Raycast(BaseProvider):
token = completion_chunk['text']
if token != None:
yield token
@classmethod
@property
def params(cls):
params = [
("model", "str"),
("messages", "list[dict[str, str]]"),
("stream", "bool"),
("temperature", "float"),
("top_p", "int"),
("model", "str"),
("auth", "str"),
]
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"


@@ -4,7 +4,8 @@ import time
from ...typing import CreateResult, Messages
from ..base_provider import BaseProvider
from ..helper import WebDriver, WebDriverSession, format_prompt
from ..helper import format_prompt
from ..webdriver import WebDriver, WebDriverSession
models = {
"theb-ai": "TheB.AI",
@@ -44,14 +45,14 @@ class Theb(BaseProvider):
messages: Messages,
stream: bool,
proxy: str = None,
web_driver: WebDriver = None,
webdriver: WebDriver = None,
virtual_display: bool = True,
**kwargs
) -> CreateResult:
if model in models:
model = models[model]
prompt = format_prompt(messages)
web_session = WebDriverSession(web_driver, virtual_display=virtual_display, proxy=proxy)
web_session = WebDriverSession(webdriver, virtual_display=virtual_display, proxy=proxy)
with web_session as driver:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
@@ -61,22 +62,16 @@ class Theb(BaseProvider):
# Register fetch hook
script = """
window._fetch = window.fetch;
window.fetch = (url, options) => {
window.fetch = async (url, options) => {
// Call parent fetch method
const result = window._fetch(url, options);
const response = await window._fetch(url, options);
if (!url.startsWith("/api/conversation")) {
return result;
}
// Load response reader
result.then((response) => {
if (!response.body.locked) {
window._reader = response.body.getReader();
}
});
// Return dummy response
return new Promise((resolve, reject) => {
resolve(new Response(new ReadableStream()))
});
// Copy response
copy = response.clone();
window._reader = response.body.pipeThrough(new TextDecoderStream()).getReader();
return copy;
}
window._last_message = "";
"""
@@ -97,7 +92,6 @@ window._last_message = "";
wait = WebDriverWait(driver, 240)
wait.until(EC.visibility_of_element_located((By.ID, "textareaAutosize")))
time.sleep(200)
try:
driver.find_element(By.CSS_SELECTOR, ".driver-overlay").click()
driver.find_element(By.CSS_SELECTOR, ".driver-overlay").click()
@@ -134,9 +128,8 @@ if(window._reader) {
if (chunk['done']) {
return null;
}
text = (new TextDecoder()).decode(chunk['value']);
message = '';
text.split('\\r\\n').forEach((line, index) => {
chunk['value'].split('\\r\\n').forEach((line, index) => {
if (line.startsWith('data: ')) {
try {
line = JSON.parse(line.substring('data: '.length));

92
g4f/Provider/webdriver.py Normal file

@@ -0,0 +1,92 @@
from __future__ import annotations
import time
from platformdirs import user_config_dir
try:
from selenium.webdriver.remote.webdriver import WebDriver
except ImportError:
class WebDriver():
pass
try:
from undetected_chromedriver import Chrome, ChromeOptions
except ImportError:
class Chrome():
    def __init__(self, *args, **kwargs):
        raise RuntimeError('Please install the "undetected_chromedriver" package')
class ChromeOptions():
    def add_argument(self, argument: str):
        pass
try:
from pyvirtualdisplay import Display
has_pyvirtualdisplay = True
except ImportError:
has_pyvirtualdisplay = False
def get_browser(
user_data_dir: str = None,
headless: bool = False,
proxy: str = None,
options: ChromeOptions = None
) -> Chrome:
if user_data_dir is None:
user_data_dir = user_config_dir("g4f")
if proxy:
if not options:
options = ChromeOptions()
options.add_argument(f'--proxy-server={proxy}')
return Chrome(options=options, user_data_dir=user_data_dir, headless=headless)
class WebDriverSession():
def __init__(
self,
webdriver: WebDriver = None,
user_data_dir: str = None,
headless: bool = False,
virtual_display: bool = False,
proxy: str = None,
options: ChromeOptions = None
):
self.webdriver = webdriver
self.user_data_dir = user_data_dir
self.headless = headless
self.virtual_display = None
if has_pyvirtualdisplay and virtual_display:
self.virtual_display = Display(size=(1920,1080))
self.proxy = proxy
self.options = options
self.default_driver = None
def reopen(
self,
user_data_dir: str = None,
headless: bool = False,
virtual_display: bool = False
) -> WebDriver:
if user_data_dir is None:
user_data_dir = self.user_data_dir
if self.default_driver:
self.default_driver.quit()
if not virtual_display and self.virtual_display:
self.virtual_display.stop()
self.virtual_display = None
self.default_driver = get_browser(user_data_dir, headless, self.proxy)
return self.default_driver
def __enter__(self) -> WebDriver:
if self.webdriver:
return self.webdriver
if self.virtual_display:
self.virtual_display.start()
self.default_driver = get_browser(self.user_data_dir, self.headless, self.proxy, self.options)
return self.default_driver
def __exit__(self, exc_type, exc_val, exc_tb):
if self.default_driver:
try:
self.default_driver.close()
except:
pass
time.sleep(0.1)
self.default_driver.quit()
if self.virtual_display:
self.virtual_display.stop()
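A short usage sketch for the new module; the proxy URL is a placeholder, and `virtual_display=True` only takes effect when pyvirtualdisplay (plus a local Xvfb) is installed:

```python
from g4f.Provider.webdriver import WebDriverSession

# The context manager opens an undetected Chrome on enter and tears
# down both the driver and any virtual display on exit.
with WebDriverSession(headless=True, proxy="http://localhost:8080") as driver:
    driver.get("https://example.com")
    print(driver.title)
```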


@@ -1,8 +1,8 @@
from __future__ import annotations
from requests import get
from .models import Model, ModelUtils, _all_models
from .Provider import BaseProvider, RetryProvider
from .typing import Messages, CreateResult, Union, List
from .Provider import BaseProvider, AsyncGeneratorProvider, RetryProvider
from .typing import Messages, CreateResult, AsyncResult, Union, List
from . import debug
version = '0.1.8.7'
@@ -80,13 +80,15 @@ class ChatCompletion:
messages : Messages,
provider : Union[type[BaseProvider], None] = None,
stream : bool = False,
ignored : List[str] = None, **kwargs) -> str:
if stream:
raise ValueError('"create_async" does not support "stream" argument')
ignored : List[str] = None,
**kwargs) -> Union[AsyncResult, str]:
model, provider = get_model_and_provider(model, provider, False, ignored)
if stream:
if isinstance(provider, type) and issubclass(provider, AsyncGeneratorProvider):
return await provider.create_async_generator(model.name, messages, **kwargs)
raise ValueError(f'{provider.__name__} does not support "stream" argument')
return await provider.create_async(model.name, messages, **kwargs)
class Completion:
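`create_async` now honors `stream=True` for providers that implement `AsyncGeneratorProvider`, returning the async generator instead of rejecting the argument. A hedged sketch:

```python
import asyncio
import g4f

async def main():
    # With stream=True the call returns the provider's async generator;
    # providers that cannot stream still raise ValueError.
    chunks = await g4f.ChatCompletion.create_async(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello"}],
        stream=True,
    )
    async for chunk in chunks:
        print(chunk, end="", flush=True)

asyncio.run(main())
```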


@@ -1,6 +1,6 @@
requests
pycryptodome
curl_cffi
curl_cffi>=0.5.10b4
aiohttp
certifi
browser_cookie3
@@ -23,6 +23,6 @@ flask
py-arkose-generator
asyncstdlib
async-property
selenium
undetected-chromedriver
asyncstdlib
async_property