Add Replicate Provider

Fix bugs in OpenaiChat and MetaAI
Add reading of cookie and .har files
Use factory for api app
Heiner Lohaus 2024-04-21 22:39:00 +02:00
parent a26421bcd8
commit 3a23e81de9
14 changed files with 303 additions and 174 deletions

.gitignore (vendored, 1 change)

@@ -55,6 +55,7 @@ local.py
 image.py
 .buildozer
 hardir
+har_and_cookies
 node_modules
 models
 projects/windows/g4f

README.md (147 changes)

@@ -91,7 +91,7 @@ As per the survey, here is a list of improvements to come
 ```sh
 docker pull hlohaus789/g4f
-docker run -p 8080:8080 -p 1337:1337 -p 7900:7900 --shm-size="2g" -v ${PWD}/hardir:/app/hardir hlohaus789/g4f:latest
+docker run -p 8080:8080 -p 1337:1337 -p 7900:7900 --shm-size="2g" -v ${PWD}/hardir:/app/har_and_cookies hlohaus789/g4f:latest
 ```
 3. **Access the Client:**
@@ -217,10 +217,11 @@ Access with: http://localhost:1337/v1
 #### Cookies

-You need cookies for BingCreateImages and the Gemini Provider.
-From Bing you need the "_U" cookie and from Gemini you need the "__Secure-1PSID" cookie.
-Sometimes you doesn't need the "__Secure-1PSID" cookie, but some other auth cookies.
-You can pass the cookies in the create function or you use the `set_cookies` setter before you run G4F:
+Cookies are essential for using Meta AI and Microsoft Designer to create images.
+Additionally, cookies are required for the Google Gemini and WhiteRabbitNeo Provider.
+From Bing, ensure you have the "_U" cookie, and from Google, all cookies starting with "__Secure-1PSID" are needed.
+You can pass these cookies directly to the create function or set them using the `set_cookies` method before running G4F:

 ```python
 from g4f.cookies import set_cookies
@@ -228,10 +229,25 @@ from g4f.cookies import set_cookies
 set_cookies(".bing.com", {
   "_U": "cookie value"
 })
 set_cookies(".google.com", {
   "__Secure-1PSID": "cookie value"
 })
-...
 ```
+
+Alternatively, you can place your .har and cookie files in the `/har_and_cookies` directory. To export a cookie file, use the EditThisCookie extension available on the Chrome Web Store: [EditThisCookie Extension](https://chromewebstore.google.com/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg).
+
+You can also create .har files to capture cookies. If you need further assistance, refer to the next section.
+
+```bash
+python -m g4f.cli api --debug
+```
+```
+Read .har file: ./har_and_cookies/you.com.har
+Cookies added: 10 from .you.com
+Read cookie file: ./har_and_cookies/google.json
+Cookies added: 16 from .google.com
+Starting server... [g4f v-0.0.0] (debug)
+```
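For reference, a cookie file in the shape the new loader accepts is sketched below: a JSON array of objects that each carry at least `domain`, `name`, and `value` keys (the format EditThisCookie exports; all values here are placeholders):

```json
[
  {
    "domain": ".google.com",
    "name": "__Secure-1PSID",
    "value": "cookie value"
  },
  {
    "domain": ".bing.com",
    "name": "_U",
    "value": "cookie value"
  }
]
```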
 #### .HAR File for OpenaiChat Provider

@@ -249,7 +265,7 @@ To utilize the OpenaiChat provider, a .har file is required from https://chat.openai.com
 ##### Storing the .HAR File

-- Place the exported .har file in the `./hardir` directory if you are using Docker. Alternatively, you can store it in any preferred location within your current working directory.
+- Place the exported .har file in the `./har_and_cookies` directory if you are using Docker. Alternatively, you can store it in any preferred location within your current working directory.

 Note: Ensure that your .har file is stored securely, as it may contain sensitive information.
@@ -273,13 +289,13 @@ set G4F_PROXY=http://host:port
 | Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
 | ------ | ------- | ------- | ----- | ------ | ------ | ---- |
-| [bing.com](https://bing.com/chat) | `g4f.Provider.Bing` | ❌ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
+| [bing.com](https://bing.com/chat) | `g4f.Provider.Bing` | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
-| [chatgpt.ai](https://chatgpt.ai) | `g4f.Provider.ChatgptAi` | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [chatgpt.ai](https://chatgpt.ai) | `g4f.Provider.ChatgptAi` | ❌ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
-| [liaobots.site](https://liaobots.site) | `g4f.Provider.Liaobots` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [liaobots.site](https://liaobots.site) | `g4f.Provider.Liaobots` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
-| [chat.openai.com](https://chat.openai.com) | `g4f.Provider.OpenaiChat` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
+| [chat.openai.com](https://chat.openai.com) | `g4f.Provider.OpenaiChat` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌+✔️ |
 | [raycast.com](https://raycast.com) | `g4f.Provider.Raycast` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
 | [beta.theb.ai](https://beta.theb.ai) | `g4f.Provider.Theb` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
-| [you.com](https://you.com) | `g4f.Provider.You` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
+| [you.com](https://you.com) | `g4f.Provider.You` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |

 ## Best OpenSource Models

 While we wait for gpt-5, here is a list of new models that are at least better than gpt-3.5-turbo. **Some are better than gpt-4**. Expect this list to grow.
@@ -296,19 +312,24 @@ While we wait for gpt-5, here is a list of new models that are at least better than gpt-3.5-turbo
 | Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
 | ------ | ------- | ------- | ----- | ------ | ------ | ---- |
 | [chat3.aiyunos.top](https://chat3.aiyunos.top/) | `g4f.Provider.AItianhuSpace` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
+| [chat10.aichatos.xyz](https://chat10.aichatos.xyz) | `g4f.Provider.Aichatos` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
-| [chatforai.store](https://chatforai.store) | `g4f.Provider.ChatForAi` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [chatforai.store](https://chatforai.store) | `g4f.Provider.ChatForAi` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
-| [chatgpt4online.org](https://chatgpt4online.org) | `g4f.Provider.Chatgpt4Online` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [chatgpt4online.org](https://chatgpt4online.org) | `g4f.Provider.Chatgpt4Online` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
-| [chatgpt-free.cc](https://www.chatgpt-free.cc) | `g4f.Provider.ChatgptNext` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [chatgpt-free.cc](https://www.chatgpt-free.cc) | `g4f.Provider.ChatgptNext` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
-| [chatgptx.de](https://chatgptx.de) | `g4f.Provider.ChatgptX` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [chatgptx.de](https://chatgptx.de) | `g4f.Provider.ChatgptX` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
+| [f1.cnote.top](https://f1.cnote.top) | `g4f.Provider.Cnote` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [duckduckgo.com](https://duckduckgo.com/duckchat) | `g4f.Provider.DuckDuckGo` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [ecosia.org](https://www.ecosia.org) | `g4f.Provider.Ecosia` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [feedough.com](https://www.feedough.com) | `g4f.Provider.Feedough` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
-| [flowgpt.com](https://flowgpt.com/chat) | `g4f.Provider.FlowGpt` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [flowgpt.com](https://flowgpt.com/chat) | `g4f.Provider.FlowGpt` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
-| [freegptsnav.aifree.site](https://freegptsnav.aifree.site) | `g4f.Provider.FreeGpt` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
+| [freegptsnav.aifree.site](https://freegptsnav.aifree.site) | `g4f.Provider.FreeGpt` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
-| [gpttalk.ru](https://gpttalk.ru) | `g4f.Provider.GptTalkRu` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [gpttalk.ru](https://gpttalk.ru) | `g4f.Provider.GptTalkRu` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
-| [koala.sh](https://koala.sh) | `g4f.Provider.Koala` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [koala.sh](https://koala.sh) | `g4f.Provider.Koala` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
 | [app.myshell.ai](https://app.myshell.ai/chat) | `g4f.Provider.MyShell` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
 | [perplexity.ai](https://www.perplexity.ai) | `g4f.Provider.PerplexityAi` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
 | [poe.com](https://poe.com) | `g4f.Provider.Poe` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
 | [talkai.info](https://talkai.info) | `g4f.Provider.TalkAi` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
-| [chat.vercel.ai](https://chat.vercel.ai) | `g4f.Provider.Vercel` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [chat.vercel.ai](https://chat.vercel.ai) | `g4f.Provider.Vercel` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
 | [aitianhu.com](https://www.aitianhu.com) | `g4f.Provider.AItianhu` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [chatgpt.bestim.org](https://chatgpt.bestim.org) | `g4f.Provider.Bestim` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
 | [chatbase.co](https://www.chatbase.co) | `g4f.Provider.ChatBase` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
@@ -328,48 +349,68 @@ While we wait for gpt-5, here is a list of new models that are at least better than gpt-3.5-turbo
 | Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
 | ------ | ------- | ------- | ----- | ------ | ------ | ---- |
-| [openchat.team](https://openchat.team) | `g4f.Provider.Aura` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [openchat.team](https://openchat.team) | `g4f.Provider.Aura` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
-| [bard.google.com](https://bard.google.com) | `g4f.Provider.Bard` | ❌ | ❌ | ❌ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
+| [blackbox.ai](https://www.blackbox.ai) | `g4f.Provider.Blackbox` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [cohereforai-c4ai-command-r-plus.hf.space](https://cohereforai-c4ai-command-r-plus.hf.space) | `g4f.Provider.Cohere` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
 | [deepinfra.com](https://deepinfra.com) | `g4f.Provider.DeepInfra` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
-| [free.chatgpt.org.uk](https://free.chatgpt.org.uk) | `g4f.Provider.FreeChatgpt` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [free.chatgpt.org.uk](https://free.chatgpt.org.uk) | `g4f.Provider.FreeChatgpt` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
 | [gemini.google.com](https://gemini.google.com) | `g4f.Provider.Gemini` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
 | [ai.google.dev](https://ai.google.dev) | `g4f.Provider.GeminiPro` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
 | [gemini-chatbot-sigma.vercel.app](https://gemini-chatbot-sigma.vercel.app) | `g4f.Provider.GeminiProChat` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
+| [developers.sber.ru](https://developers.sber.ru/gigachat) | `g4f.Provider.GigaChat` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
+| [console.groq.com](https://console.groq.com/playground) | `g4f.Provider.Groq` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
 | [huggingface.co](https://huggingface.co/chat) | `g4f.Provider.HuggingChat` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
 | [huggingface.co](https://huggingface.co/chat) | `g4f.Provider.HuggingFace` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
-| [llama2.ai](https://www.llama2.ai) | `g4f.Provider.Llama2` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [llama2.ai](https://www.llama2.ai) | `g4f.Provider.Llama` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
+| [meta.ai](https://www.meta.ai) | `g4f.Provider.MetaAI` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌+✔️ |
+| [openrouter.ai](https://openrouter.ai) | `g4f.Provider.OpenRouter` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
 | [labs.perplexity.ai](https://labs.perplexity.ai) | `g4f.Provider.PerplexityLabs` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
-| [pi.ai](https://pi.ai/talk) | `g4f.Provider.Pi` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
+| [pi.ai](https://pi.ai/talk) | `g4f.Provider.Pi` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
+| [replicate.com](https://replicate.com) | `g4f.Provider.ReplicateImage` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
-| [theb.ai](https://theb.ai) | `g4f.Provider.ThebApi` | ❌ | ❌ | ❌ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
+| [theb.ai](https://theb.ai) | `g4f.Provider.ThebApi` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
-| [open-assistant.io](https://open-assistant.io/chat) | `g4f.Provider.OpenAssistant` | ❌ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ✔️ |
+| [whiterabbitneo.com](https://www.whiterabbitneo.com) | `g4f.Provider.WhiteRabbitNeo` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
+| [bard.google.com](https://bard.google.com) | `g4f.Provider.Bard` | ❌ | ❌ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ✔️ |

 ### Models

 | Model | Base Provider | Provider | Website |
-|-----------------------------| ------------- | -------- | ------- |
+| ----- | ------------- | -------- | ------- |
-| gpt-3.5-turbo | OpenAI | 5+ Providers | [openai.com](https://openai.com/) |
+| gpt-3.5-turbo | OpenAI | 8+ Providers | [openai.com](https://openai.com/) |
 | gpt-4 | OpenAI | 2+ Providers | [openai.com](https://openai.com/) |
 | gpt-4-turbo | OpenAI | g4f.Provider.Bing | [openai.com](https://openai.com/) |
 | Llama-2-7b-chat-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
 | Llama-2-13b-chat-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
 | Llama-2-70b-chat-hf | Meta | 3+ Providers | [llama.meta.com](https://llama.meta.com/) |
-| Meta-Llama-3-8b | Meta | 3+ Providers | [llama.meta.com](https://llama.meta.com/) |
+| Meta-Llama-3-8b-instruct | Meta | 1+ Providers | [llama.meta.com](https://llama.meta.com/) |
-| Meta-Llama-3-70b | Meta | 3+ Providers | [llama.meta.com](https://llama.meta.com/) |
+| Meta-Llama-3-70b-instruct | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
-| CodeLlama-34b-Instruct-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
+| CodeLlama-34b-Instruct-hf | Meta | g4f.Provider.HuggingChat | [llama.meta.com](https://llama.meta.com/) |
 | CodeLlama-70b-Instruct-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
 | Mixtral-8x7B-Instruct-v0.1 | Huggingface | 4+ Providers | [huggingface.co](https://huggingface.co/) |
-| Mistral-7B-Instruct-v0.1 | Huggingface | 4+ Providers | [huggingface.co](https://huggingface.co/) |
+| Mistral-7B-Instruct-v0.1 | Huggingface | 3+ Providers | [huggingface.co](https://huggingface.co/) |
+| Mistral-7B-Instruct-v0.2 | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
+| zephyr-orpo-141b-A35b-v0.1 | Huggingface | 2+ Providers | [huggingface.co](https://huggingface.co/) |
 | dolphin-2.6-mixtral-8x7b | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
-| lzlv_70b_fp16_hf | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
-| airoboros-70b | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
-| airoboros-l2-70b-gpt4-1.4.1 | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
-| openchat_3.5 | Huggingface | 2+ Providers | [huggingface.co](https://huggingface.co/) |
 | gemini | Google | g4f.Provider.Gemini | [gemini.google.com](https://gemini.google.com/) |
 | gemini-pro | Google | 2+ Providers | [gemini.google.com](https://gemini.google.com/) |
 | claude-v2 | Anthropic | 1+ Providers | [anthropic.com](https://www.anthropic.com/) |
 | claude-3-opus | Anthropic | g4f.Provider.You | [anthropic.com](https://www.anthropic.com/) |
 | claude-3-sonnet | Anthropic | g4f.Provider.You | [anthropic.com](https://www.anthropic.com/) |
+| lzlv_70b_fp16_hf | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
+| airoboros-70b | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
+| openchat_3.5 | Huggingface | 2+ Providers | [huggingface.co](https://huggingface.co/) |
 | pi | Inflection | g4f.Provider.Pi | [inflection.ai](https://inflection.ai/) |
+### Image Models
+
+| Label | Provider | Model | Website |
+| ----- | -------- | ----- | ------- |
+| Microsoft Designer | Bing | dall-e | [bing.com](https://www.bing.com/images/create) |
+| OpenAI ChatGPT | Openai | dall-e | [chat.openai.com](https://chat.openai.com) |
+| You.com | You | dall-e | [you.com](https://you.com) |
+| DeepInfraImage | DeepInfra | stability-ai/sdxl | [deepinfra.com](https://deepinfra.com) |
+| ReplicateImage | Replicate | stability-ai/sdxl | [replicate.com](https://replicate.com) |
+| Gemini | Gemini | gemini | [gemini.google.com](https://gemini.google.com) |
+| Meta AI | MetaAI | meta | [meta.ai](https://www.meta.ai) |

 ## 🔗 Powered by gpt4free

g4f/Provider/MetaAI.py

@@ -89,7 +89,7 @@ class MetaAI(AsyncGeneratorProvider):
         headers = {}
         headers = {
             'content-type': 'application/x-www-form-urlencoded',
-            'cookie': format_cookies(cookies),
+            'cookie': format_cookies(self.cookies),
             'origin': 'https://www.meta.ai',
             'referer': 'https://www.meta.ai/',
             'x-asbd-id': '129477',

g4f/Provider/Replicate.py (new file, 84 lines)

@@ -0,0 +1,84 @@
from __future__ import annotations

from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt, filter_none
from ..typing import AsyncResult, Messages
from ..requests import raise_for_status
from ..requests.aiohttp import StreamSession
from ..errors import ResponseError, MissingAuthError

class Replicate(AsyncGeneratorProvider, ProviderModelMixin):
    url = "https://replicate.com"
    working = True
    default_model = "meta/meta-llama-3-70b-instruct"

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        api_key: str = None,
        proxy: str = None,
        timeout: int = 180,
        system_prompt: str = None,
        max_new_tokens: int = None,
        temperature: float = None,
        top_p: float = None,
        top_k: float = None,
        stop: list = None,
        extra_data: dict = {},
        headers: dict = {
            "accept": "application/json",
        },
        **kwargs
    ) -> AsyncResult:
        model = cls.get_model(model)
        if cls.needs_auth and api_key is None:
            raise MissingAuthError("api_key is missing")
        if api_key is not None:
            headers["Authorization"] = f"Bearer {api_key}"
            api_base = "https://api.replicate.com/v1/models/"
        else:
            api_base = "https://replicate.com/api/models/"
        async with StreamSession(
            proxy=proxy,
            headers=headers,
            timeout=timeout
        ) as session:
            data = {
                "stream": True,
                "input": {
                    "prompt": format_prompt(messages),
                    **filter_none(
                        system_prompt=system_prompt,
                        max_new_tokens=max_new_tokens,
                        temperature=temperature,
                        top_p=top_p,
                        top_k=top_k,
                        stop_sequences=",".join(stop) if stop else None
                    ),
                    **extra_data
                },
            }
            url = f"{api_base.rstrip('/')}/{model}/predictions"
            async with session.post(url, json=data) as response:
                message = "Model not found" if response.status == 404 else None
                await raise_for_status(response, message)
                result = await response.json()
                if "id" not in result:
                    raise ResponseError(f"Invalid response: {result}")
                async with session.get(result["urls"]["stream"], headers={"Accept": "text/event-stream"}) as response:
                    await raise_for_status(response)
                    event = None
                    async for line in response.iter_lines():
                        if line.startswith(b"event: "):
                            event = line[7:]
                            if event == b"done":
                                break
                        elif event == b"output":
                            if line.startswith(b"data: "):
                                new_text = line[6:].decode()
                                if new_text:
                                    yield new_text
                                else:
                                    yield "\n"

g4f/Provider/__init__.py

@@ -9,7 +9,6 @@ from .deprecated import *
 from .not_working import *
 from .selenium import *
 from .needs_auth import *
-from .unfinished import *

 from .Aichatos import Aichatos
 from .Aura import Aura
@@ -46,6 +45,7 @@ from .MetaAI import MetaAI
 from .MetaAIAccount import MetaAIAccount
 from .PerplexityLabs import PerplexityLabs
 from .Pi import Pi
+from .Replicate import Replicate
 from .ReplicateImage import ReplicateImage
 from .Vercel import Vercel
 from .WhiteRabbitNeo import WhiteRabbitNeo

g4f/Provider/needs_auth/OpenaiChat.py

@@ -340,9 +340,8 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
         Raises:
             RuntimeError: If an error occurs during processing.
         """
         async with StreamSession(
-            proxies={"all": proxy},
+            proxy=proxy,
             impersonate="chrome",
             timeout=timeout
         ) as session:
@@ -364,26 +363,27 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
                 api_key = cls._api_key = None
                 cls._create_request_args()
                 if debug.logging:
-                    print("OpenaiChat: Load default_model failed")
+                    print("OpenaiChat: Load default model failed")
                     print(f"{e.__class__.__name__}: {e}")

         arkose_token = None
         if cls.default_model is None:
+            error = None
             try:
                 arkose_token, api_key, cookies, headers = await getArkoseAndAccessToken(proxy)
                 cls._create_request_args(cookies, headers)
                 cls._set_api_key(api_key)
             except NoValidHarFileError as e:
-                ...
+                error = e
             if cls._api_key is None:
                 await cls.nodriver_access_token()
             if cls._api_key is None and cls.needs_auth:
-                raise e
+                raise error
             cls.default_model = cls.get_model(await cls.get_default_model(session, cls._headers))

         async with session.post(
             f"{cls.url}/backend-anon/sentinel/chat-requirements"
-            if not cls._api_key else
+            if cls._api_key is None else
             f"{cls.url}/backend-api/sentinel/chat-requirements",
             json={"conversation_mode_kind": "primary_assistant"},
             headers=cls._headers
@@ -412,7 +412,8 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
                     print("OpenaiChat: Upload image failed")
                     print(f"{e.__class__.__name__}: {e}")

-        model = cls.get_model(model).replace("gpt-3.5-turbo", "text-davinci-002-render-sha")
+        model = cls.get_model(model)
+        model = "text-davinci-002-render-sha" if model == "gpt-3.5-turbo" else model
         if conversation is None:
             conversation = Conversation(conversation_id, str(uuid.uuid4()) if parent_id is None else parent_id)
         else:

g4f/Provider/unfinished/Replicate.py (deleted, 78 lines)

@@ -1,78 +0,0 @@
from __future__ import annotations

import asyncio
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..helper import format_prompt, filter_none
from ...typing import AsyncResult, Messages
from ...requests import StreamSession, raise_for_status
from ...image import ImageResponse
from ...errors import ResponseError, MissingAuthError

class Replicate(AsyncGeneratorProvider, ProviderModelMixin):
    url = "https://replicate.com"
    working = True
    default_model = "mistralai/mixtral-8x7b-instruct-v0.1"
    api_base = "https://api.replicate.com/v1/models/"

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        api_key: str = None,
        proxy: str = None,
        timeout: int = 180,
        system_prompt: str = None,
        max_new_tokens: int = None,
        temperature: float = None,
        top_p: float = None,
        top_k: float = None,
        stop: list = None,
        extra_data: dict = {},
        headers: dict = {},
        **kwargs
    ) -> AsyncResult:
        model = cls.get_model(model)
        if api_key is None:
            raise MissingAuthError("api_key is missing")
        headers["Authorization"] = f"Bearer {api_key}"
        async with StreamSession(
            proxies={"all": proxy},
            headers=headers,
            timeout=timeout
        ) as session:
            data = {
                "stream": True,
                "input": {
                    "prompt": format_prompt(messages),
                    **filter_none(
                        system_prompt=system_prompt,
                        max_new_tokens=max_new_tokens,
                        temperature=temperature,
                        top_p=top_p,
                        top_k=top_k,
                        stop_sequences=",".join(stop) if stop else None
                    ),
                    **extra_data
                },
            }
            url = f"{cls.api_base.rstrip('/')}/{model}/predictions"
            async with session.post(url, json=data) as response:
                await raise_for_status(response)
                result = await response.json()
                if "id" not in result:
                    raise ResponseError(f"Invalid response: {result}")
                async with session.get(result["urls"]["stream"], headers={"Accept": "text/event-stream"}) as response:
                    await raise_for_status(response)
                    event = None
                    async for line in response.iter_lines():
                        if line.startswith(b"event: "):
                            event = line[7:]
                        elif event == b"output":
                            if line.startswith(b"data: "):
                                yield line[6:].decode()
                            elif not line.startswith(b"id: "):
                                continue#yield "+"+line.decode()
                        elif event == b"done":
                            break

g4f/api/__init__.py

@@ -1,3 +1,5 @@
+from __future__ import annotations
+
 import logging
 import json
 import uvicorn
@@ -8,14 +10,19 @@ from fastapi.exceptions import RequestValidationError
 from starlette.status import HTTP_422_UNPROCESSABLE_ENTITY
 from fastapi.encoders import jsonable_encoder
 from pydantic import BaseModel
-from typing import List, Union, Optional
+from typing import Union, Optional

 import g4f
 import g4f.debug
 from g4f.client import AsyncClient
 from g4f.typing import Messages

-app = FastAPI()
+def create_app() -> FastAPI:
+    app = FastAPI()
+    api = Api(app)
+    api.register_routes()
+    api.register_validation_exception_handler()
+    return app

 class ChatCompletionsConfig(BaseModel):
     messages: Messages
@@ -29,16 +36,19 @@ class ChatCompletionsConfig(BaseModel):
     web_search: Optional[bool] = None
     proxy: Optional[str] = None

+list_ignored_providers: list[str] = None
+
+def set_list_ignored_providers(ignored: list[str]):
+    global list_ignored_providers
+    list_ignored_providers = ignored
+
 class Api:
-    def __init__(self, list_ignored_providers: List[str] = None) -> None:
-        self.list_ignored_providers = list_ignored_providers
+    def __init__(self, app: FastAPI) -> None:
+        self.app = app
         self.client = AsyncClient()

-    def set_list_ignored_providers(self, list: list):
-        self.list_ignored_providers = list
-
     def register_validation_exception_handler(self):
-        @app.exception_handler(RequestValidationError)
+        @self.app.exception_handler(RequestValidationError)
         async def validation_exception_handler(request: Request, exc: RequestValidationError):
             details = exc.errors()
             modified_details = []
@@ -54,17 +64,17 @@ class Api:
             )

     def register_routes(self):
-        @app.get("/")
+        @self.app.get("/")
         async def read_root():
             return RedirectResponse("/v1", 302)

-        @app.get("/v1")
+        @self.app.get("/v1")
         async def read_root_v1():
             return HTMLResponse('g4f API: Go to '
                                 '<a href="/v1/chat/completions">chat/completions</a> '
                                 'or <a href="/v1/models">models</a>.')

-        @app.get("/v1/models")
+        @self.app.get("/v1/models")
         async def models():
             model_list = dict(
                 (model, g4f.models.ModelUtils.convert[model])
@@ -78,7 +88,7 @@ class Api:
             } for model_id, model in model_list.items()]
             return JSONResponse(model_list)

-        @app.get("/v1/models/{model_name}")
+        @self.app.get("/v1/models/{model_name}")
         async def model_info(model_name: str):
             try:
                 model_info = g4f.models.ModelUtils.convert[model_name]
@@ -91,7 +101,7 @@ class Api:
             except:
                 return JSONResponse({"error": "The model does not exist."})

-        @app.post("/v1/chat/completions")
+        @self.app.post("/v1/chat/completions")
         async def chat_completions(config: ChatCompletionsConfig, request: Request = None, provider: str = None):
             try:
                 config.provider = provider if config.provider is None else config.provider
@@ -103,7 +113,7 @@ class Api:
                     config.api_key = auth_header
                 response = self.client.chat.completions.create(
                     **config.dict(exclude_none=True),
-                    ignored=self.list_ignored_providers
+                    ignored=list_ignored_providers
                 )
             except Exception as e:
                 logging.exception(e)
@@ -125,14 +135,10 @@ class Api:
             return StreamingResponse(streaming(), media_type="text/event-stream")

-        @app.post("/v1/completions")
+        @self.app.post("/v1/completions")
         async def completions():
             return Response(content=json.dumps({'info': 'Not working yet.'}, indent=4), media_type="application/json")

-api = Api()
-api.register_routes()
-api.register_validation_exception_handler()
-
 def format_exception(e: Exception, config: ChatCompletionsConfig) -> str:
     last_provider = g4f.get_last_provider(True)
     return json.dumps({
@@ -156,4 +162,4 @@ def run_api(
     host, port = bind.split(":")
     if debug:
         g4f.debug.logging = True
-    uvicorn.run("g4f.api:app", host=host, port=int(port), workers=workers, use_colors=use_colors)
+    uvicorn.run("g4f.api:create_app", host=host, port=int(port), workers=workers, use_colors=use_colors, factory=True)

g4f/cli.py

@@ -1,7 +1,11 @@
+from __future__ import annotations
+
 import argparse

 from g4f import Provider
 from g4f.gui.run import gui_parser, run_gui_args
+from g4f.cookies import read_cookie_files
+from g4f import debug

 def main():
     parser = argparse.ArgumentParser(description="Run gpt4free")
@@ -10,28 +14,36 @@ def main():
     api_parser.add_argument("--bind", default="0.0.0.0:1337", help="The bind string.")
     api_parser.add_argument("--debug", action="store_true", help="Enable verbose logging.")
     api_parser.add_argument("--workers", type=int, default=None, help="Number of workers.")
-    api_parser.add_argument("--disable_colors", action="store_true", help="Don't use colors.")
+    api_parser.add_argument("--disable-colors", action="store_true", help="Don't use colors.")
+    api_parser.add_argument("--ignore-cookie-files", action="store_true", help="Don't read .har and cookie files.")
     api_parser.add_argument("--ignored-providers", nargs="+", choices=[provider for provider in Provider.__map__],
                             default=[], help="List of providers to ignore when processing request.")
     subparsers.add_parser("gui", parents=[gui_parser()], add_help=False)

     args = parser.parse_args()
     if args.mode == "api":
-        import g4f.api
-        g4f.api.api.set_list_ignored_providers(
-            args.ignored_providers
-        )
-        g4f.api.run_api(
-            bind=args.bind,
-            debug=args.debug,
-            workers=args.workers,
-            use_colors=not args.disable_colors
-        )
+        run_api_args(args)
     elif args.mode == "gui":
         run_gui_args(args)
     else:
         parser.print_help()
         exit(1)

+def run_api_args(args):
+    if args.debug:
+        debug.logging = True
+    if not args.ignore_cookie_files:
+        read_cookie_files()
+    import g4f.api
+    g4f.api.set_list_ignored_providers(
+        args.ignored_providers
+    )
+    g4f.api.run_api(
+        bind=args.bind,
+        debug=args.debug,
+        workers=args.workers,
+        use_colors=not args.disable_colors
+    )

 if __name__ == "__main__":
     main()
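Putting the new flags together, a typical invocation might look like this (the ignored provider name is illustrative):

```bash
# Verbose logging, skip reading ./har_and_cookies, and ignore one provider.
python -m g4f.cli api --debug --ignore-cookie-files --ignored-providers OpenaiChat
```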

g4f/cookies.py

@@ -2,6 +2,7 @@ from __future__ import annotations

 import os
 import time
+import json

 try:
     from platformdirs import user_config_dir
@@ -38,6 +39,7 @@ def get_cookies(domain_name: str = '', raise_requirements_error: bool = True, single_browser: bool = False) -> Dict[str, str]:
     Returns:
         Dict[str, str]: A dictionary of cookie names and values.
     """
+    global _cookies
     if domain_name in _cookies:
         return _cookies[domain_name]

@@ -46,6 +48,7 @@ def get_cookies(domain_name: str = '', raise_requirements_error: bool = True, single_browser: bool = False) -> Dict[str, str]:
     return cookies

 def set_cookies(domain_name: str, cookies: Cookies = None) -> None:
+    global _cookies
     if cookies:
         _cookies[domain_name] = cookies
     elif domain_name in _cookies:
+def read_cookie_files(dirPath: str = "./har_and_cookies"):
+    global _cookies
+    harFiles = []
+    cookieFiles = []
+    for root, dirs, files in os.walk(dirPath):
+        for file in files:
+            if file.endswith(".har"):
+                harFiles.append(os.path.join(root, file))
+            elif file.endswith(".json"):
+                cookieFiles.append(os.path.join(root, file))
+    _cookies = {}
+    for path in harFiles:
+        with open(path, 'rb') as file:
+            try:
+                harFile = json.load(file)
+            except json.JSONDecodeError:
+                # Error: not a HAR file!
+                continue
+            if debug.logging:
+                print("Read .har file:", path)
+            new_cookies = {}
+            for v in harFile['log']['entries']:
+                v_cookies = {}
+                for c in v['request']['cookies']:
+                    if c['domain'] not in v_cookies:
+                        v_cookies[c['domain']] = {}
+                    v_cookies[c['domain']][c['name']] = c['value']
+                for domain, c in v_cookies.items():
+                    _cookies[domain] = c
+                    new_cookies[domain] = len(c)
+            if debug.logging:
+                for domain, new_values in new_cookies.items():
+                    print(f"Cookies added: {new_values} from {domain}")
+    for path in cookieFiles:
+        with open(path, 'rb') as file:
+            try:
+                cookieFile = json.load(file)
+            except json.JSONDecodeError:
+                # Error: not a json file!
+                continue
+            if not isinstance(cookieFile, list):
+                continue
+            if debug.logging:
+                print("Read cookie file:", path)
+            new_cookies = {}
+            for c in cookieFile:
+                if isinstance(c, dict) and "domain" in c:
+                    if c["domain"] not in new_cookies:
+                        new_cookies[c["domain"]] = {}
+                    new_cookies[c["domain"]][c["name"]] = c["value"]
+            for domain, new_values in new_cookies.items():
+                if debug.logging:
+                    print(f"Cookies added: {len(new_values)} from {domain}")
+                _cookies[domain] = new_values

 def _g4f(domain_name: str) -> list:
     """
     Load cookies from the 'g4f' browser (if exists).
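A short sketch of how the new loader composes with the existing getters (the directory path is the default from the README):

```python
from g4f.cookies import read_cookie_files, get_cookies

# Scan ./har_and_cookies for .har and .json files and cache their cookies.
read_cookie_files("./har_and_cookies")

# Domains read from those files are now served from the in-memory cache.
print(get_cookies(".google.com"))
```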

g4f/gui/__init__.py

@@ -12,9 +12,6 @@ def run_gui(host: str = '0.0.0.0', port: int = 8080, debug: bool = False) -> None:
     if import_error is not None:
         raise MissingRequirementsError(f'Install "gui" requirements | pip install -U g4f[gui]\n{import_error}')

-    if debug:
-        from g4f import debug
-        debug.logging = True
     config = {
         'host' : host,
         'port' : port,

g4f/gui/gui_parser.py

@@ -5,4 +5,5 @@ def gui_parser():
     parser.add_argument("-host", type=str, default="0.0.0.0", help="hostname")
     parser.add_argument("-port", type=int, default=8080, help="port")
     parser.add_argument("-debug", action="store_true", help="debug mode")
+    parser.add_argument("--ignore-cookie-files", action="store_true", help="Don't read .har and cookie files.")
     return parser

g4f/gui/run.py

@@ -1,6 +1,12 @@
 from .gui_parser import gui_parser
+from ..cookies import read_cookie_files
+import g4f.debug

 def run_gui_args(args):
+    if args.debug:
+        g4f.debug.logging = True
+    if not args.ignore_cookie_files:
+        read_cookie_files()
     from g4f.gui import run_gui
     host = args.host
     port = args.port