Update docs / readme, Improve Gemini auth

Heiner Lohaus 2024-02-21 17:02:54 +01:00
parent f560bac946
commit 0a0698c7f3
8 changed files with 253 additions and 192 deletions

README.md

@ -100,72 +100,43 @@ or set the api base in your client to: [http://localhost:1337/v1](http://localhost:1337/v1)
##### Install using PyPI:
Install all supported tools and all used packages:
```
pip install -U g4f[all]
```
Or use partial requirements.
See: [/docs/requirements](/docs/requirements.md)
##### Install from source:
See: [/docs/git](/docs/git.md)
##### Install using Docker
See: [/docs/docker](/docs/docker.md)
## 💡 Usage
#### Text Generation
**with Python**
```python
from g4f.client import Client
client = Client()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say this is a test"}],
    ...
)
print(response.choices[0].message.content)
```
#### Image Generation
**with Python**
```python
from g4f.client import Client
...
```
@ -182,9 +153,7 @@ Result:
[![Image with cat](/docs/cat.jpeg)](/docs/client.md)
**See also for Python:**
- [Documentation for new Client](/docs/client.md)
- [Documentation for legacy API](/docs/leagcy.md)
@ -192,19 +161,31 @@ and more:
#### Web UI
To start the web interface, run the following code in Python:
```python
from g4f.gui import run_gui
run_gui()
```
or run this command in the command line:
```bash
python -m g4f.cli gui -port 8080 -debug
```
### Interference API
You can use the Interference API to serve other OpenAI integrations with G4F.
See: [/docs/interference](/docs/interference.md)
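For a quick check you can, for instance, hit the local endpoint with curl once the API is running (a sketch; it assumes the default port 1337 used throughout these docs):
```bash
curl http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello"}]}'
```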
### Configuration
##### Cookies / Access Token
For generating images with Bing and for the OpenAI Chat you need cookies or a token from your browser session. From Bing you need the "_U" cookie and from OpenAI the "access_token". You can pass the cookies / the access token in the create function, or use the `set_cookies` setter before you run G4F:
```python
from g4f.cookies import set_cookies
set_cookies(".bing.com", {
"_U": "cookie value"
@ -212,124 +193,30 @@ set_cookies(".bing.com", {
set_cookies("chat.openai.com", {
"access_token": "token value"
})
set_cookies(".google.com", {
"__Secure-1PSID": "cookie value"
})
from g4f.gui import run_gui
run_gui()
...
```
Alternatively, G4F reads the cookies with `browser_cookie3` from your browser
or it starts a browser instance with selenium `webdriver` for logging in.
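For illustration, a minimal sketch of reading cookies yourself with G4F's cookie helper; the positional flag values mirror the call in the Gemini provider further down in this commit and may differ between versions:
```python
from g4f.cookies import get_cookies

# read the Bing cookies from a locally installed browser
# (flag values as used by the Gemini provider in this commit)
cookies = get_cookies(".bing.com", False, True)
print(cookies.get("_U"))
```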
##### Using Proxy
If you want to hide or change your IP address for the providers, you can set a proxy globally via an environment variable:
- On macOS and Linux:
```bash
export G4F_PROXY="http://host:port"
```
- On Windows:
```bash
set G4F_PROXY=http://host:port
```
## 🚀 Providers and Models
### GPT-4


@ -43,11 +43,23 @@ client = Client(
You can use the `ChatCompletions` endpoint to generate text completions as follows:
```python
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say this is a test"}],
    ...
)
print(response.choices[0].message.content)
```
Streaming is also supported:
```python
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say this is a test"}],
    stream=True,
    ...
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```


@ -1,38 +1,37 @@
### G4F - Docker Setup
Easily set up and run the G4F project using Docker without the hassle of manual dependency installation.
1. **Prerequisites:**
- [Install Docker](https://docs.docker.com/get-docker/)
- [Install Docker Compose](https://docs.docker.com/compose/install/)
2. **Clone the Repository:**
```bash
git clone https://github.com/xtekky/gpt4free.git
```
3. **Navigate to the Project Directory:**
```bash
cd gpt4free
```
4. **Build the Docker Image:**
```bash
docker pull selenium/node-chrome
docker-compose build
```
5. **Start the Service:**
```bash
docker-compose up
```
Your server will now be accessible at `http://localhost:1337`. Interact with the API or run tests as usual.
To stop the Docker containers, simply run:
```bash
docker-compose down
```
> [!Note]
> Changes made to local files reflect in the Docker container due to volume mapping in `docker-compose.yml`. However, if you add or remove dependencies, rebuild the Docker image using `docker-compose build`.
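A typical cycle after changing dependencies might look like this (a sketch reusing the commands above):
```bash
docker-compose down    # stop the running containers
docker-compose build   # rebuild the image with the new dependencies
docker-compose up      # start the service again
```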
[Return to Home](/)

docs/git.md

@ -0,0 +1,66 @@
### G4F - Installation Guide
Follow these steps to install G4F from the source code:
1. **Clone the Repository:**
```bash
git clone https://github.com/xtekky/gpt4free.git
```
2. **Navigate to the Project Directory:**
```bash
cd gpt4free
```
3. **(Optional) Create a Python Virtual Environment:**
It's recommended to isolate your project dependencies. You can follow the [Python official documentation](https://docs.python.org/3/tutorial/venv.html) for virtual environments.
```bash
python3 -m venv venv
```
4. **Activate the Virtual Environment:**
- On Windows:
```bash
.\venv\Scripts\activate
```
- On macOS and Linux:
```bash
source venv/bin/activate
```
5. **Install Minimum Requirements:**
Install the minimum required packages:
```bash
pip install -r requirements-min.txt
```
6. **Or Install All Packages from `requirements.txt`:**
If you prefer, you can install all packages listed in `requirements.txt`:
```bash
pip install -r requirements.txt
```
7. **Start Using the Repository:**
You can now create Python scripts and utilize the G4F functionalities. Create a `test.py` file in the root folder and start with a basic example:
```python
import g4f
# Your code here
```
[Return to Home](/)

docs/interference.md

@ -0,0 +1,69 @@
### Interference openai-proxy API
#### Run interference API from PyPI package
```python
from g4f.api import run_api
run_api()
```
#### Run interference API from repo
Run server:
```sh
g4f api
```
or
```sh
python -m g4f.api.run
```
```python
from openai import OpenAI
client = OpenAI(
    api_key="",
    # Change the API base URL to the local interference API
    base_url="http://localhost:1337/v1"
)
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a poem about a tree"}],
    stream=True,
)

if isinstance(response, dict):
    # Not streaming
    print(response.choices[0].message.content)
else:
    # Streaming
    for token in response:
        content = token.choices[0].delta.content
        if content is not None:
            print(content, end="", flush=True)
```
#### API usage (POST)
Send a POST request to /v1/chat/completions with a body containing the `model` parameter. This example uses Python with the requests library:
```python
import requests
url = "http://localhost:1337/v1/chat/completions"
body = {
    "model": "gpt-3.5-turbo-16k",
    "stream": False,
    "messages": [
        {"role": "assistant", "content": "What can you do?"}
    ]
}
json_response = requests.post(url, json=body).json().get('choices', [])
for choice in json_response:
    print(choice.get('message', {}).get('content', ''))
```
[Return to Home](/)


@ -179,4 +179,22 @@ async def run_all():
```python
asyncio.run(run_all())
```
##### Proxy and Timeout Support
All providers support specifying a proxy and increasing timeout in the create functions.
```python
import g4f
response = g4f.ChatCompletion.create(
    model=g4f.models.default,
    messages=[{"role": "user", "content": "Hello"}],
    proxy="http://host:port",
    # or socks5://user:pass@host:port
    timeout=120,  # in secs
)
print("Result:", response)
```
[Return to Home](/)


@ -6,15 +6,19 @@ You can install requirements partially or completely. So G4F can be used as you
#### Options
Install g4f with all possible dependencies:
```
pip install -U g4f[all]
```
Or install only g4f and the required packages for the OpenaiChat provider:
```
pip install -U g4f[openai]
```
Install required packages for the Interference API:
```
pip install -U g4f[api]
```
Install required packages for the Web UI:
```
pip install -U g4f[gui]
```
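Extras can also be combined in one install if you need several features at once; for example (a sketch; check the project's setup for the exact extras available):
```
pip install -U g4f[api,gui]
```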


@ -50,7 +50,6 @@ class Gemini(AsyncGeneratorProvider):
url = "https://gemini.google.com"
needs_auth = True
working = True
    @classmethod
    async def create_async_generator(
@ -64,10 +63,9 @@ class Gemini(AsyncGeneratorProvider):
        **kwargs
    ) -> AsyncResult:
        prompt = format_prompt(messages)
        # use the given cookies or read them from an installed browser
        cookies = cookies if cookies else get_cookies(".google.com", False, True)
        # try to fetch the SNlM0e auth token with these cookies; fall back to a browser login
        snlm0e = await cls.fetch_snlm0e(cookies, proxy) if cookies else None
        if not snlm0e:
            driver = None
            try:
                driver = get_browser(proxy=proxy)
@ -90,8 +88,12 @@ class Gemini(AsyncGeneratorProvider):
                if driver:
                    driver.close()
if "__Secure-1PSID" not in cookies:
raise MissingAuthError('Missing "__Secure-1PSID" cookie')
        if not snlm0e:
            if "__Secure-1PSID" not in cookies:
                raise MissingAuthError('Missing "__Secure-1PSID" cookie')
            # retry with the cookies picked up during the browser login
            snlm0e = await cls.fetch_snlm0e(cookies, proxy)
        if not snlm0e:
            raise RuntimeError("Invalid auth. SNlM0e not found")
        image_url = await cls.upload_image(to_bytes(image), image_name, proxy) if image else None
@ -99,14 +101,6 @@ class Gemini(AsyncGeneratorProvider):
            cookies=cookies,
            headers=REQUEST_HEADERS
        ) as session:
            params = {
                'bl': REQUEST_BL_PARAM,
                '_reqid': random.randint(1111, 9999),
@ -205,3 +199,15 @@ class Gemini(AsyncGeneratorProvider):
        ) as response:
            response.raise_for_status()
            return await response.text()
    @classmethod
    async def fetch_snlm0e(cls, cookies: Cookies, proxy: str = None):
        # the SNlM0e value from the Gemini start page is required for authenticated requests
        async with ClientSession(
            cookies=cookies,
            headers=REQUEST_HEADERS
        ) as session:
            async with session.get(cls.url, proxy=proxy) as response:
                text = await response.text()
                match = re.search(r'SNlM0e\":\"(.*?)\"', text)
                if match:
                    return match.group(1)
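For reference, a minimal usage sketch of the provider once the Google cookies are in place; the explicit `provider` argument and default model are illustrative assumptions, not taken from this diff:
```python
import g4f

# assumes the "__Secure-1PSID" cookie was set via set_cookies or can be read from a browser
response = g4f.ChatCompletion.create(
    model=g4f.models.default,
    provider=g4f.Provider.Gemini,
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)
```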