Mirror of https://github.com/xtekky/gpt4free.git, synced 2024-11-22 15:05:57 +03:00

Improve readme, add smartphone guide
This commit is contained in:
parent 5807179605
commit 7953560303

README.md (89 changes)
@@ -19,10 +19,10 @@ pip install -U g4f
docker pull hlohaus789/g4f
```

## 🆕 What's New
## 🆕 What's New 🚀
- How do I use my smartphone 📱 to run g4f? [/docs/guides/phone](/docs/guides/phone.md)
- Join our Telegram Channel: [t.me/g4f_channel](https://telegram.me/g4f_channel)
- Join our Discord Group: [discord.gg/XfybzPXPH5](https://discord.gg/XfybzPXPH5)
- Explore the g4f Documentation (unfinished): [g4f.mintlify.app](https://g4f.mintlify.app) | Contribute to the docs via: [github.com/xtekky/gpt4free-docs](https://github.com/xtekky/gpt4free-docs)

## Site Takedown
Is your site listed in this repository and do you want it taken down? Email takedown@g4f.ai with proof that it is yours and it will be removed as quickly as possible. To prevent reproduction, please secure your API. ;)
@@ -34,13 +34,13 @@ You can always leave some feedback here: https://forms.gle/FeWV9RLEedfdkmFN6

As per the survey, here is a list of improvements to come
- [x] Update the repository to include the new openai library syntax (e.g. the `OpenAI()` class) | completed, use `g4f.client.Client`
- [ ] golang implementation
- [ ] Improve Documentation (on g4f.mintlify.app) & Do video tutorials
- [ ] Improve the provider status list & updates
- [ ] 🚧 Improve Documentation (in /docs, Guides, Howtos) & do video tutorials
- [x] Improve the provider status list & updates
- [ ] Tutorials on how to reverse sites to write your own wrapper (PoC only ofc)
- [ ] Improve the Bing wrapper (might write a new wrapper in golang, as it is very fast)
- [ ] Write a standard provider performance test to improve stability
- [ ] Potential support and development of local models
- [ ] Improve compatibility and error handling
- [ ] 🚧 Improve compatibility and error handling

## 📚 Table of Contents

@@ -49,22 +49,17 @@ As per the survey, here is a list of improvements to come

- [🛠️ Getting Started](#-getting-started)
  + [Docker container](#docker-container)
    - [Quick start](#quick-start)
  + [Use python package](#use-python-package)
  + [Use python](#use-python)
    - [Prerequisites](#prerequisites)
    - [Install using pypi](#install-using-pypi)
  + [Docker for Developers](#docker-for-developers)
    - [Install using PyPI package:](#install-using-pypi-package-)
    - [Install from source:](#install-from-source-)
    - [Install using Docker:](#install-using-docker-)
- [💡 Usage](#-usage)
  * [The Web UI](#the-web-ui)
  * [The `g4f` Package](#the-g4f-package)
    + [ChatCompletion](#chatcompletion)
      - [Completion](#completion)
      - [Providers](#providers)
      - [Using Browser](#using-browser)
      - [Async Support](#async-support)
      - [Proxy and Timeout Support](#proxy-and-timeout-support)
  * [Interference openai-proxy API](#interference-openai-proxy-api-use-with-openai-python-package-)
    + [Run interference API from PyPi package](#run-interference-api-from-pypi-package)
    + [Run interference API from repo](#run-interference-api-from-repo)
  * [Text Generation](#text-generation)
  * [Image Generation](#image-generation)
  * [Interference API](#interference-api)
  * [Configuration](#configuration)
- [🚀 Providers and Models](#-providers-and-models)
  * [GPT-4](#gpt-4)
  * [GPT-3.5](#gpt-35)
@@ -96,6 +91,11 @@ docker run -p 8080:8080 -p 1337:1337 -p 7900:7900 --shm-size="2g" hlohaus789/g4f

or set the api base in your client to: [http://localhost:1337/v1](http://localhost:1337/v1)
4. (Optional) If you need to log in to a provider, you can view the desktop from the container here: http://localhost:7900/?autoconnect=1&resize=scale&password=secret

##### Use your smartphone:

Run the Web UI on your smartphone:
- [/docs/guides/phone](/docs/guides/phone.md)

#### Use python

##### Prerequisites:
@@ -109,18 +109,19 @@ or set the api base in your client to: [http://localhost:1337/v1](http://localhost:1337/v1)
pip install -U g4f[all]
```

Or use partial requirements.
How do I install only parts or disable parts?
Use partial requirements: [/docs/requirements](/docs/requirements.md)

See: [/docs/requirements](/docs/requirements.md)
##### Install from source:

##### Install from source using git:

See: [/docs/git](/docs/git.md)
How do I load the project using git and install the project requirements?
Read this tutorial and follow it step by step: [/docs/git](/docs/git.md)


##### Install using Docker for Developers:
##### Install using Docker:

See: [/docs/docker](/docs/docker.md)
How do I build and run the compose image from source?
Use docker-compose: [/docs/docker](/docs/docker.md)

## 💡 Usage
@@ -139,6 +140,10 @@ response = client.chat.completions.create(
print(response.choices[0].message.content)
```

```
Hello! How can I assist you today?
```

#### Image Generation

```python
@@ -159,8 +164,8 @@ image_url = response.data[0].url

**See also:**

- Documentation for the new Client: [/docs/client](/docs/client.md)
- Documentation for the legacy API: [docs/leagcy](/docs/leagcy.md)
- Documentation for the new Client API: [/docs/client](/docs/client.md)
- Documentation for the legacy API: [/docs/leagcy](/docs/leagcy.md)


#### Web UI
@@ -228,11 +233,10 @@ set G4F_PROXY=http://host:port

| Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
| ------ | ------- | ------- | ----- | ------ | ------ | ---- |
| [bing.com](https://bing.com/chat) | `g4f.Provider.Bing` | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [free.chatgpt.org.uk](https://free.chatgpt.org.uk) | `g4f.Provider.FreeChatgpt` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [liaobots.site](https://liaobots.site) | `g4f.Provider.Liaobots` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat.openai.com](https://chat.openai.com) | `g4f.Provider.OpenaiChat` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [chat.openai.com](https://chat.openai.com) | `g4f.Provider.OpenaiChat` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [raycast.com](https://raycast.com) | `g4f.Provider.Raycast` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [beta.theb.ai](https://beta.theb.ai) | `g4f.Provider.Theb` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [you.com](https://you.com) | `g4f.Provider.You` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat.geekgpt.org](https://chat.geekgpt.org) | `g4f.Provider.GeekGpt` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |

### GPT-3.5
@@ -242,14 +246,13 @@ set G4F_PROXY=http://host:port

| [chat3.aiyunos.top](https://chat3.aiyunos.top/) | `g4f.Provider.AItianhuSpace` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [aichatonline.org](https://aichatonline.org) | `g4f.Provider.AiChatOnline` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [openchat.team](https://openchat.team) | `g4f.Provider.Aura` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatbase.co](https://www.chatbase.co) | `g4f.Provider.ChatBase` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatbase.co](https://www.chatbase.co) | `g4f.Provider.ChatBase` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chatforai.store](https://chatforai.store) | `g4f.Provider.ChatForAi` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chatgpt.ai](https://chatgpt.ai) | `g4f.Provider.ChatgptAi` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chat.chatgptdemo.net](https://chat.chatgptdemo.net) | `g4f.Provider.ChatgptDemo` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chatgpt-free.cc](https://www.chatgpt-free.cc) | `g4f.Provider.ChatgptNext` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chat.3211000.xyz](https://chat.3211000.xyz) | `g4f.Provider.Chatxyz` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [gptalk.net](https://gptalk.net) | `g4f.Provider.GPTalk` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [geminiprochat.com](https://geminiprochat.com) | `g4f.Provider.GeminiProChat` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [gpt6.ai](https://gpt6.ai) | `g4f.Provider.Gpt6` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [gptchatly.com](https://gptchatly.com) | `g4f.Provider.GptChatly` | ✔️ | ❌ | ❌ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [ai18.gptforlove.com](https://ai18.gptforlove.com) | `g4f.Provider.GptForLove` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
@@ -261,7 +264,6 @@ set G4F_PROXY=http://host:port

| [perplexity.ai](https://www.perplexity.ai) | `g4f.Provider.PerplexityAi` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [poe.com](https://poe.com) | `g4f.Provider.Poe` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [talkai.info](https://talkai.info) | `g4f.Provider.TalkAi` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [you.com](https://you.com) | `g4f.Provider.You` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [aitianhu.com](https://www.aitianhu.com) | `g4f.Provider.AItianhu` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [e.aiask.me](https://e.aiask.me) | `g4f.Provider.AiAsk` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chatgpt.bestim.org](https://chatgpt.bestim.org) | `g4f.Provider.Bestim` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
@@ -284,12 +286,16 @@ set G4F_PROXY=http://host:port

| ------ | ------- | ------- | ----- | ------ | ------ | ---- |
| [bard.google.com](https://bard.google.com) | `g4f.Provider.Bard` | ❌ | ❌ | ❌ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [deepinfra.com](https://deepinfra.com) | `g4f.Provider.DeepInfra` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [gemini.google.com](https://gemini.google.com) | `g4f.Provider.Gemini` | ❌ | ❌ | ❌ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [huggingface.co](https://huggingface.co/chat) | `g4f.Provider.HuggingChat` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [gemini.google.com](https://gemini.google.com) | `g4f.Provider.Gemini` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [ai.google.dev](https://ai.google.dev) | `g4f.Provider.GeminiPro` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [gemini-chatbot-sigma.vercel.app](https://gemini-chatbot-sigma.vercel.app) | `g4f.Provider.GeminiProChat` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [huggingface.co](https://huggingface.co/chat) | `g4f.Provider.HuggingChat` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [llama2.ai](https://www.llama2.ai) | `g4f.Provider.Llama2` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [labs.perplexity.ai](https://labs.perplexity.ai) | `g4f.Provider.PerplexityLabs` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [phind.com](https://www.phind.com) | `g4f.Provider.Phind` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [pi.ai](https://pi.ai/talk) | `g4f.Provider.Pi` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [phind.com](https://www.phind.com) | `g4f.Provider.Phind` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [pi.ai](https://pi.ai/talk) | `g4f.Provider.Pi` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [beta.theb.ai](https://beta.theb.ai) | `g4f.Provider.Theb` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [free.chatgpt.org.uk](https://free.chatgpt.org.uk) | `g4f.Provider.FreeChatgpt` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [theb.ai](https://theb.ai) | `g4f.Provider.ThebApi` | ❌ | ❌ | ❌ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [open-assistant.io](https://open-assistant.io/chat) | `g4f.Provider.OpenAssistant` | ❌ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ✔️ |

@@ -300,10 +306,11 @@ set G4F_PROXY=http://host:port

| gpt-3.5-turbo | OpenAI | 5+ Providers | [openai.com](https://openai.com/) |
| gpt-4 | OpenAI | 2+ Providers | [openai.com](https://openai.com/) |
| gpt-4-turbo | OpenAI | g4f.Provider.Bing | [openai.com](https://openai.com/) |
| Llama-2-7b-chat-hf | Huggingface | 2+ Providers | [huggingface.co](https://huggingface.co/) |
| Llama-2-13b-chat-hf | Huggingface | 2+ Providers | [huggingface.co](https://huggingface.co/) |
| Llama-2-70b-chat-hf | Huggingface | 4+ Providers | [huggingface.co](https://huggingface.co/) |
| CodeLlama-34b-Instruct-hf | Huggingface | 3+ Providers | [huggingface.co](https://huggingface.co/) |
| Llama-2-7b-chat-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Llama-2-13b-chat-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Llama-2-70b-chat-hf | Meta | 4+ Providers | [llama.meta.com](https://llama.meta.com/) |
| CodeLlama-34b-Instruct-hf | Meta | 3+ Providers | [llama.meta.com](https://llama.meta.com/) |
| CodeLlama-70b-Instruct-hf | Meta | g4f.Provider.DeepInfra | [llama.meta.com](https://llama.meta.com/) |
| Mixtral-8x7B-Instruct-v0.1 | Huggingface | 3+ Providers | [huggingface.co](https://huggingface.co/) |
| Mistral-7B-Instruct-v0.1 | Huggingface | 3+ Providers | [huggingface.co](https://huggingface.co/) |
| dolphin-2.6-mixtral-8x7b | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
docs/client.md

@@ -1,4 +1,4 @@
### G4F - Client API (Beta Version)
### G4F - Client API

#### Introduction

@@ -39,7 +39,7 @@ client = Client(

## Configuration

You can set an "api_key" for your provider in client.
You can set an "api_key" for your provider in the client.
And you also have the option to define a proxy for all outgoing requests:

```python
@@ -108,13 +108,35 @@ response = client.images.create_variation(

image_url = response.data[0].url
```

#### Visual Examples

Original / Variant:

[![Original Image](/docs/cat.jpeg)](/docs/client.md) [![Variant Image](/docs/cat.webp)](/docs/client.md)

#### Use a list of providers with RetryProvider

```python
from g4f.client import Client
from g4f.Provider import RetryProvider, Phind, FreeChatgpt, Liaobots

import g4f.debug
g4f.debug.logging = True

client = Client(
    provider=RetryProvider([Phind, FreeChatgpt, Liaobots], shuffle=False)
)
response = client.chat.completions.create(
    model="",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

```
Using RetryProvider provider
Using Phind provider
How can I assist you today?
```
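The fallback behavior shown above can be sketched in a few lines of plain Python. This is a simplified illustration of the retry idea only, not g4f's actual `RetryProvider` implementation; `broken` and `working` are hypothetical stand-in providers:

```python
# Simplified sketch of the RetryProvider idea: try providers in order
# (optionally shuffled) and return the first successful answer.
import random

def retry_create(providers, prompt, shuffle=False):
    providers = list(providers)
    if shuffle:
        random.shuffle(providers)
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # a real implementation would narrow this
            errors.append((provider.__name__, exc))
    raise RuntimeError(f"All providers failed: {errors}")

# Stand-in "providers": one that fails, one that answers.
def broken(prompt):
    raise ConnectionError("provider unreachable")

def working(prompt):
    return f"echo: {prompt}"

print(retry_create([broken, working], "Hello"))  # prints "echo: Hello"
```

With `shuffle=False`, as in the example above, the order of the list is the order of preference.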

#### Advanced example using GeminiProVision

```python
@@ -128,13 +150,16 @@ client = Client(
response = client.chat.completions.create(
    model="gemini-pro-vision",
    messages=[{"role": "user", "content": "What are on this image?"}],
    image=open("docs/cat.jpeg", "rb")
    image=open("docs/waterfall.jpeg", "rb")
)
print(response.choices[0].message.content)
```
**Question:** What are on this image?

![Waterfall](/docs/waterfall.jpeg)

```
A cat is sitting on a window sill looking at a bird outside the window.
User: What are on this image?
```
```
Bot: There is a waterfall in the middle of a jungle. There is a rainbow over...
```

[Return to Home](/)
docs/guides/phone.jpeg (BIN, new file, 13 KiB)
docs/guides/phone.md (new file, 50 lines)

@@ -0,0 +1,50 @@
### Guide: Running the G4F GUI on Your Smartphone

Running Python applications on your smartphone is possible with specialized apps such as Pydroid. This tutorial walks you through the process using an Android smartphone with Pydroid. Note that the steps may vary slightly for iPhone users due to differences in app names and availability.

<p align="center">
The first screenshot shows <strong>Pydroid</strong>, and the second shows the <strong>Web UI</strong> in a browser.
</p>

<p align="center">
<img src="/docs/guides/phone.png" />
<img src="/docs/guides/phone2.jpeg" />
</p>
1. **Install Pydroid from the Google Play Store:**
   - Navigate to the Google Play Store and search for "Pydroid 3 - IDE for Python 3", or use the following link: [Pydroid 3 - IDE for Python 3](https://play.google.com/store/apps/details/Pydroid_3_IDE_for_Python_3).

2. **Install the Pydroid Repository Plugin:**
   - To enhance functionality, install the Pydroid repository plugin. Find it on the Google Play Store or use this link: [Pydroid Repository Plugin](https://play.google.com/store/apps/details?id=ru.iiec.pydroid3.quickinstallrepo).

3. **Adjust App Settings:**
   - In Pydroid's app settings, disable power-saving mode and make sure the option to pause when not in use is also disabled. This ensures uninterrupted operation of your Python scripts.

4. **Install Required Packages:**
   - Open Pip within the Pydroid app and install the necessary packages:
   ```
   pip install g4f flask pillow beautifulsoup4
   ```

5. **Create a New Python Script:**
   - Within Pydroid, create a new Python script with the following content:
   ```python
   from g4f import set_cookies

   set_cookies(".bing.com", {
       "_U": "cookie value"
   })

   from g4f.gui import run_gui

   run_gui("0.0.0.0", 8080, debug=True)
   ```
   Replace `"cookie value"` with your actual Bing cookie value if you intend to create images using Bing.

6. **Execute the Script:**
   - Run the script by tapping the play button or selecting the run option.

7. **Access the GUI:**
   - Wait for the server to start, and once it is running, open the GUI at the URL shown in the output: [http://localhost:8080/chat/](http://localhost:8080/chat/)

By following these steps, you can run the G4F GUI on your smartphone using Pydroid and interact with it directly from your device.
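Step 7 uses localhost, which only works in a browser on the phone itself. Because the script binds to `0.0.0.0`, the UI is also reachable from other devices on the same network via the phone's LAN address; a small helper like the following can print it (a generic sketch, not part of g4f — the `8.8.8.8` address is only used to select the outgoing interface, no packets are sent):

```python
import socket

def local_ip() -> str:
    """Best-effort guess of this device's LAN IPv4 address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # Connecting a UDP socket sends nothing; it merely picks the
        # outgoing interface so we can read its address.
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"
    finally:
        s.close()

print(f"Open http://{local_ip()}:8080/chat/ from another device")
```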

docs/guides/phone.png (BIN, new file, 88 KiB)
docs/guides/phone2.jpeg (BIN, new file, 21 KiB)
docs/waterfall.jpeg (BIN, new file, 8.1 KiB)
@@ -87,7 +87,8 @@ def print_models():
        "openai": "OpenAI",
        "huggingface": "Huggingface",
        "anthropic": "Anthropic",
        "inflection": "Inflection"
        "inflection": "Inflection",
        "meta": "Meta"
    }
    provider_urls = {
        "google": "https://gemini.google.com/",
@@ -95,6 +96,7 @@ def print_models():
        "huggingface": "https://huggingface.co/",
        "anthropic": "https://www.anthropic.com/",
        "inflection": "https://inflection.ai/",
        "meta": "https://llama.meta.com/"
    }

    lines = [
@@ -120,6 +122,6 @@ def print_models():
    print("\n".join(lines))

if __name__ == "__main__":
    #print_providers()
    #print("\n", "-" * 50, "\n")
    print_providers()
    print("\n", "-" * 50, "\n")
    print_models()
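For context, `print_models` renders one markdown table row per model from the two dictionaries patched above. A reduced sketch of that row assembly (simplified: the real script also walks `g4f.models` and resolves best providers):

```python
# Reduced sketch of how a readme table row is assembled from the
# base-provider name and URL dictionaries (simplified field set).
provider_names = {
    "openai": "OpenAI",
    "huggingface": "Huggingface",
    "anthropic": "Anthropic",
    "inflection": "Inflection",
    "meta": "Meta",
}
provider_urls = {
    "openai": "https://openai.com/",
    "huggingface": "https://huggingface.co/",
    "anthropic": "https://www.anthropic.com/",
    "inflection": "https://inflection.ai/",
    "meta": "https://llama.meta.com/",
}

def model_row(model: str, base_provider: str, providers: str) -> str:
    name = provider_names[base_provider]
    url = provider_urls[base_provider]
    netloc = url.split("//")[1].rstrip("/")  # link text, e.g. "llama.meta.com"
    return f"| {model} | {name} | {providers} | [{netloc}]({url}) |"

print(model_row("Llama-2-7b-chat-hf", "meta", "2+ Providers"))
```

The printed row matches the "Models" table format in the README, which is why adding the `"meta"` entries to both dictionaries was required once the models were relabeled.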

@@ -4,24 +4,13 @@ import json, random

from aiohttp import ClientSession

from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin

models = {
    "claude-v2": "claude-2.0",
    "claude-v2.1":"claude-2.1",
    "gemini-pro": "google-gemini-pro"
}
urls = [
    "https://free.chatgpt.org.uk",
    "https://ai.chatgpt.org.uk"
]

class FreeChatgpt(AsyncGeneratorProvider):
class FreeChatgpt(AsyncGeneratorProvider, ProviderModelMixin):
    url = "https://free.chatgpt.org.uk"
    working = True
    supports_gpt_35_turbo = True
    supports_gpt_4 = True
    supports_message_history = True
    default_model = "google-gemini-pro"

    @classmethod
    async def create_async_generator(
@@ -31,11 +20,6 @@ class FreeChatgpt(AsyncGeneratorProvider):
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        if model in models:
            model = models[model]
        elif not model:
            model = "gpt-3.5-turbo"
        url = random.choice(urls)
        headers = {
            "Accept": "application/json, text/event-stream",
            "Content-Type":"application/json",
@@ -51,16 +35,15 @@ class FreeChatgpt(AsyncGeneratorProvider):
        }
        async with ClientSession(headers=headers) as session:
            data = {
                "messages":messages,
                "stream":True,
                "model":model,
                "temperature":0.5,
                "presence_penalty":0,
                "frequency_penalty":0,
                "top_p":1,
                **kwargs
                "messages": messages,
                "stream": True,
                "model": cls.get_model(""),
                "temperature": kwargs.get("temperature", 0.5),
                "presence_penalty": kwargs.get("presence_penalty", 0),
                "frequency_penalty": kwargs.get("frequency_penalty", 0),
                "top_p": kwargs.get("top_p", 1)
            }
            async with session.post(f'{url}/api/openai/v1/chat/completions', json=data, proxy=proxy) as response:
            async with session.post(f'{cls.url}/api/openai/v1/chat/completions', json=data, proxy=proxy) as response:
                response.raise_for_status()
                started = False
                async for line in response.content:
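The payload rewrite in this hunk replaces a blind `**kwargs` spread with explicit `kwargs.get(...)` lookups. The practical difference: a spread forwards every stray keyword (such as a `proxy` option meant for the HTTP layer) into the JSON payload, while explicit lookups only read known options. A toy comparison (illustration only, not the provider's real code):

```python
def build_payload_spread(messages, model, **kwargs):
    # Old style: every stray keyword leaks into the request payload.
    return {"messages": messages, "model": model, "temperature": 0.5, **kwargs}

def build_payload_explicit(messages, model, **kwargs):
    # New style: only known options are read, each with a safe default.
    return {
        "messages": messages,
        "model": model,
        "temperature": kwargs.get("temperature", 0.5),
        "top_p": kwargs.get("top_p", 1),
    }

msgs = [{"role": "user", "content": "Hello"}]
old = build_payload_spread(msgs, "gpt-3.5-turbo", proxy="http://host:1337")
new = build_payload_explicit(msgs, "gpt-3.5-turbo", proxy="http://host:1337")
print("proxy" in old, "proxy" in new)  # → True False
```

The explicit form still honors caller overrides (`temperature=0.9` is picked up) while keeping unknown keywords out of the request body.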

@@ -11,7 +11,6 @@ from .base_provider import AsyncGeneratorProvider

class GeminiProChat(AsyncGeneratorProvider):
    url = "https://gemini-chatbot-sigma.vercel.app"
    working = True
    supports_gpt_35_turbo = True

    @classmethod
    async def create_async_generator(
@@ -99,31 +99,31 @@ gpt_4_turbo = Model(

llama2_7b = Model(
    name = "meta-llama/Llama-2-7b-chat-hf",
    base_provider = 'huggingface',
    base_provider = 'meta',
    best_provider = RetryProvider([Llama2, DeepInfra])
)

llama2_13b = Model(
    name = "meta-llama/Llama-2-13b-chat-hf",
    base_provider = 'huggingface',
    base_provider = 'meta',
    best_provider = RetryProvider([Llama2, DeepInfra])
)

llama2_70b = Model(
    name = "meta-llama/Llama-2-70b-chat-hf",
    base_provider = "huggingface",
    base_provider = "meta",
    best_provider = RetryProvider([Llama2, DeepInfra, HuggingChat, PerplexityLabs])
)

codellama_34b_instruct = Model(
    name = "codellama/CodeLlama-34b-Instruct-hf",
    base_provider = "huggingface",
    base_provider = "meta",
    best_provider = RetryProvider([HuggingChat, PerplexityLabs, DeepInfra])
)

codellama_70b_instruct = Model(
    name = "codellama/CodeLlama-70b-Instruct-hf",
    base_provider = "huggingface",
    base_provider = "meta",
    best_provider = DeepInfra
)
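The hunk above relabels each model's `base_provider` from `huggingface` to `meta`; that label is what the readme table generator reads. A minimal sketch of the registry pattern, using hypothetical simplified types rather than g4f's real classes:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RetryProvider:
    # Stand-in: the real RetryProvider holds provider classes, not names.
    providers: List[str]

@dataclass
class Model:
    name: str
    base_provider: str      # label used when rendering the readme table
    best_provider: object   # a single provider or a RetryProvider chain

llama2_70b = Model(
    name="meta-llama/Llama-2-70b-chat-hf",
    base_provider="meta",
    best_provider=RetryProvider(["Llama2", "DeepInfra", "HuggingChat", "PerplexityLabs"]),
)
print(llama2_70b.base_provider)  # → meta
```

Keeping `base_provider` as a plain label (rather than a provider class) is what makes the relabeling a one-word change per model.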