Merge pull request #1988 from hlohaus/kessh

Improve async client readme, Fix print styling, Add image api example
This commit is contained in:
H Lohaus 2024-05-20 16:22:10 +02:00 committed by GitHub
commit e4b3b2692e
GPG Key ID: B5690EEEBB952194
3 changed files with 51 additions and 4 deletions

View File

@ -9,15 +9,39 @@ Designed to maintain compatibility with the existing OpenAI API, the G4F AsyncCl
The G4F AsyncClient API offers several key features:
- **Custom Providers:** The G4F Client API allows you to use custom providers. This feature enhances the flexibility of the API, enabling it to cater to a wide range of use cases.
- **ChatCompletion Interface:** The G4F package provides an interface for interacting with chat models through the ChatCompletion class. This class provides methods for creating both streaming and non-streaming responses.
- **Streaming Responses:** The ChatCompletion.create method can return responses incrementally, as they are received, when the stream parameter is set to True (see the sketch after this list).
- **Non-Streaming Responses:** The ChatCompletion.create method can also generate non-streaming responses.
- **Image Generation and Vision Models:** The G4F Client API also supports image generation and vision models, expanding its utility beyond text-based interactions.
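As a quick preview of the streaming behaviour described above, here is a minimal sketch. It assumes the OpenAI-style `chat.completions.create` interface shown later in this document and uses `"gpt-3.5-turbo"` purely as a placeholder model name; exact attribute names may vary between versions, and client setup is covered in the next sections.
```python
import asyncio
from g4f.client import AsyncClient

async def main():
    client = AsyncClient()
    # With stream=True the call yields chunks as they arrive instead of one final response.
    stream = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": "Write a haiku about streaming."}],
        stream=True,
    )
    async for chunk in stream:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")

asyncio.run(main())
```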
## Initializing the Client
To utilize the G4F AsyncClient, create a new instance. Below is an example showcasing custom providers:
```python
from g4f.client import AsyncClient
from g4f.Provider import BingCreateImages, OpenaiChat, Gemini
client = AsyncClient(
    provider=OpenaiChat,
    image_provider=Gemini,
    ...
)
```
## Configuration
You can set an `api_key` for your provider in the client. You can also define a proxy for all outgoing requests:
```python
from g4f.client import AsyncClient
client = AsyncClient(
    api_key="...",
    proxies="http://user:pass@host",
    ...
)
```
## Using AsyncClient
@ -62,6 +86,17 @@ response = await client.images.generate(
image_url = response.data[0].url
```
#### Base64 as the response format
```python
response = await client.images.generate(
    prompt="a cool cat",
    response_format="b64_json"
)
base64_text = response.data[0].b64_json
```
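If you need the raw image bytes, the `b64_json` string can be decoded with the standard library. A minimal sketch, assuming the field holds a plain base64-encoded image as in the OpenAI API (the output filename is just an example):
```python
import base64

# Decode the base64 string returned above and write the image to disk.
with open("cool_cat.png", "wb") as f:
    f.write(base64.b64decode(base64_text))
```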
### Example usage with asyncio.gather
Start two tasks at the same time:
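The full example lies outside this diff; as a rough sketch, running two requests concurrently could look like the following, assuming the `chat.completions.create` and `images.generate` calls shown above (model name and prompts are placeholders):
```python
import asyncio
from g4f.client import AsyncClient

async def main():
    client = AsyncClient()

    # Both calls return awaitables, so asyncio.gather can run them concurrently.
    chat_task = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": "Say hello."}],
    )
    image_task = client.images.generate(
        prompt="a cool cat",
        response_format="url",
    )

    chat_response, image_response = await asyncio.gather(chat_task, image_task)
    print(chat_response.choices[0].message.content)
    print(image_response.data[0].url)

asyncio.run(main())
```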

View File

@ -0,0 +1,9 @@
import requests

# Send an image generation request to a locally running g4f API server.
url = "http://localhost:1337/v1/images/generations"
body = {
    "prompt": "heaven for dogs",
    "provider": "OpenaiAccount",
    "response_format": "b64_json",
}
data = requests.post(url, json=body, stream=True).json()
print(data)

View File

@ -1143,4 +1143,7 @@ a:-webkit-any-link {
    .message .user {
        display: none;
    }
    .message.regenerate {
        opacity: 1;
    }
}