Commit Graph

222 Commits

Author SHA1 Message Date
Luneye
23127acab2
Update Bing.py - Removed unnecessary "await" statements that could potentially lead to errors 2023-11-04 17:52:59 +01:00
H Lohaus
85ca16d77f
Merge pull request #1181 from hlohaus/arkose
Use asyncio subprocess in OpenaiChat
2023-10-29 19:58:56 +01:00
Luneye
b993bc00fa
Update ChatBase.py - Added jailbreak (enabled by default), removed the list of incorrect responses 2023-10-29 18:34:12 +01:00
Heiner Lohaus
cc301a3dd8 Use asyncio subprocess in OpenaiChat 2023-10-28 19:02:39 +02:00
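
For context, this is roughly what an asyncio subprocess call looks like (a minimal sketch; the function name and command are illustrative, not taken from the OpenaiChat code):

```python
import asyncio

# Minimal, illustrative sketch of running an external command with asyncio
# instead of a blocking subprocess call.
async def run_command(*args: str) -> str:
    process = await asyncio.create_subprocess_exec(
        *args,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    stdout, stderr = await process.communicate()
    if process.returncode != 0:
        raise RuntimeError(stderr.decode())
    return stdout.decode()

# Example usage: print(asyncio.run(run_command("node", "--version")))
```
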
Heiner Lohaus
dc04ca9306 Add arkose_token to OpenaiChat 2023-10-28 07:21:00 +02:00
Heiner Lohaus
79cf039a88 Update config supports_message_history 2023-10-27 22:59:14 +02:00
Heiner Lohaus
0d1ae405cc Add Llama2 Providers / Models 2023-10-26 21:43:20 +02:00
Tekky
ffa36c49e4
Merge pull request #1153 from AndPim4912/ChatBase-incorrect-responses
Extract keywords from incorrect responses
2023-10-25 16:55:36 +01:00
Tekky
a167970d76
Merge pull request #1149 from Luneye/patch-4
[suggestion] Adding new parameter to check if a provider 'natively' supports message history
2023-10-25 14:07:40 +01:00
razrab
5ad48d9181 Extract keywords from incorrect responses
The text of error responses is generated dynamically by the LLM, so incorrect responses need to be identified by keywords for more precise detection.
2023-10-25 13:04:34 +03:00
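
A minimal sketch of what that keyword matching could look like (the keyword list here is hypothetical, not the one used in the ChatBase provider):

```python
# Hypothetical keywords; the real list lives in the provider code.
ERROR_KEYWORDS = ("unable to provide", "cannot assist", "not available")

def is_incorrect_response(text: str) -> bool:
    # The error text is generated dynamically by the LLM, so we look for
    # characteristic keywords instead of comparing against fixed strings.
    lowered = text.lower()
    return any(keyword in lowered for keyword in ERROR_KEYWORDS)
```
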
Luneye
4bb751d989
Indicated support of message history in GptForLove.py 2023-10-24 23:46:54 +02:00
Luneye
2f539d0601
Indicated support of message history in Bing.py 2023-10-24 23:44:44 +02:00
Luneye
e93887aff8
Indicated support of message history in ChatBase.py 2023-10-24 23:43:08 +02:00
Luneye
7a2c8e4cd3
Indicated support of message history in FreeGpt.py 2023-10-24 23:42:16 +02:00
Luneye
0b43c13268
Indicated support of message history in GPTalk.py 2023-10-24 23:41:08 +02:00
Luneye
c43f82e966
Indicated support of message history in Yqcloud.py 2023-10-24 23:40:15 +02:00
Luneye
c839597c6d
Indicated support of message history in You.py 2023-10-24 23:39:29 +02:00
Luneye
aee8d5e628
Indicated support of message history in FakeGpt.py 2023-10-24 23:37:59 +02:00
Luneye
7f6d85f861
Indicated support of message history in ChatForAi.py 2023-10-24 23:36:48 +02:00
Luneye
dc798b520d
Indicated support of message history in ChatgptX.py 2023-10-24 23:30:07 +02:00
Tekky
6363353670
Merge pull request #1146 from AndPim4912/GetGpt-debian-compat
Update GetGpt provider for Debian python3-pycryptodome compatibility
2023-10-24 19:42:11 +01:00
Tekky
4c276c7ed6
Merge pull request #1145 from AndPim4912/chatbase-invalid-response
Add support for detecting incorrect responses in ChatBase API requests.
2023-10-24 19:41:56 +01:00
Heiner Lohaus
979904166f
Update MyShell.py 2023-10-24 18:58:12 +02:00
razrab
87f8007345 Update GetGpt provider for Debian python3-pycryptodome compatibility
Try to import AES from Cryptodome.Cipher if importing from Crypto.Cipher fails.
2023-10-24 19:30:57 +03:00
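
A minimal sketch of that import fallback (Debian's python3-pycryptodome package installs under the Cryptodome namespace):

```python
# Fall back to the Cryptodome namespace used by Debian's
# python3-pycryptodome package when the Crypto namespace is unavailable.
try:
    from Crypto.Cipher import AES
except ImportError:
    from Cryptodome.Cipher import AES
```
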
razrab
fd2b52823b Add support for detecting incorrect responses in ChatBase API requests. 2023-10-24 18:30:24 +03:00
Luneye
63ae5bb2cd
[suggestion] Adding new parameter to check if provider supports message history
What are your thoughts on introducing a parameter that allows us to promptly verify whether the provider supports message history? I also considered adding a parameter to indicate whether a provider can perform web searches.
2023-10-24 16:35:45 +02:00
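
A minimal sketch of what such a capability flag could look like as a provider class attribute, following the existing supports_* naming convention (the class and helper below are illustrative, not the actual g4f code):

```python
# Illustrative provider declaring the proposed capability flags.
class ExampleProvider:
    working = True
    supports_message_history = True  # provider natively keeps conversation context
    supports_web_search = False      # second flag suggested above

def providers_with_history(providers):
    # Callers can then filter providers by the capability they need.
    return [p for p in providers if getattr(p, "supports_message_history", False)]
```
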
Shubh Gajjar
f0f5cb05f9
Update FreeGpt.py
Replaced the old domain URL with the new working URL
2023-10-24 13:47:55 +05:30
Luneye
21e56a1af8
Bugfix Bing.py - Resolved issues with the system prompt and Bing personalities, and enabled all supported user requests
I used this repository (https://github.com/waylaidwanderer/node-chatgpt-api/) as a reference to fix all the bugs related to Bing "personality." I included all the required fields in the allowedMessageTypes and optionsSets (as well as sliceIds) to allow it to respond to any requests it actually supports.

I will also finish the code to fully implement the image generation functionality.
2023-10-23 14:00:36 +02:00
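
For illustration, the fields mentioned above sit in the chat request payload roughly like this; the values are placeholders, and the real lists are in Bing.py and the linked reference:

```python
# Illustrative shape only; the concrete entries live in Bing.py.
conversation_arguments = {
    "optionsSets": ["..."],          # feature flags that control the Bing "personality"
    "allowedMessageTypes": ["..."],  # message types Bing is allowed to return
    "sliceIds": ["..."],             # A/B-test slice identifiers
    "message": {"author": "user", "text": "Hi, how are you?"},
}
```
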
ⲘrṨhส∂ow
3982f39424
'Refactored by Sourcery' (#1125)
Co-authored-by: Sourcery AI <>
2023-10-23 09:46:25 +02:00
Tekky
955fb4bbaa
Merge pull request #1124 from hlohaus/fake
Improve helper
2023-10-22 22:55:32 +01:00
Tekky
33fcf907b6
Merge pull request #1122 from Luneye/patch-2
Major Update for Bing - Supports latest bundle version and image analysis
2023-10-22 22:54:14 +01:00
Heiner Lohaus
598255fa26 Debug logging support
Async browse access token
2023-10-22 23:53:18 +02:00
Heiner Lohaus
3ae90b57ed Improve get_cookies helper 2023-10-22 20:01:14 +02:00
Heiner Lohaus
fc15181110 Fix ChatgptAi Provider 2023-10-22 17:13:13 +02:00
Luneye
c400d02024
Major Update for Bing - Supports latest bundle version and image analysis
Here it is, a much-needed update to this service which offers numerous functionalities that the old code was unable to deliver to us.

As you may know, ChatGPT Plus subscribers now have the opportunity to request image analysis directly from GPT within the chat bar. Bing has also integrated this feature into its chatbot. With this new code, you can now provide an image using a data URI, with all the following supported extensions: jpg, jpeg, png, and gif!

**What is a data URI and how can I provide an image to Bing?**

Just to clarify, a data URI is a method for encoding data directly into a URI (Uniform Resource Identifier). It is typically used for embedding small data objects like images, text, or other resources within web pages or documents. Data URIs are widely used in web applications.

To provide an image from your desktop and retrieve it as a data URI, you can use this code: [GitHub link](https://gist.github.com/jsocol/1089733).
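
As an alternative to the gist above, a minimal Python sketch that turns a local image file into a data URI could look like this (the helper name is illustrative):

```python
import base64
import mimetypes

def image_to_data_uri(path: str) -> str:
    # Guess the MIME type from the extension (image/jpeg, image/png, ...).
    mime_type, _ = mimetypes.guess_type(path)
    with open(path, "rb") as image_file:
        encoded = base64.b64encode(image_file.read()).decode("ascii")
    return f"data:{mime_type};base64,{encoded}"

# Example: image = image_to_data_uri("photo.jpg")
```
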

Now, here is a code snippet you can use to provide images to Bing:

```python
import g4f

provider = g4f.Provider.Bing
user_message = [{"role": "user", "content": "Hi, describe this image."}]

response = g4f.ChatCompletion.create(
    model = g4f.models.gpt_4,
    provider = provider,  # use the Bing provider defined above
    messages = user_message,
    stream = True,
    image = "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEASABIAAD/4RiSRXhpZgAASUkqAAg..."  # Insert your full data URI image here
)

for message in response:
    print(message, flush=True, end='')
```

If you don't want to analyze the image, just do not specify the image parameter.

Regarding the implementation, the image is preprocessed within the Bing.py code, which can be resource-intensive for a server-side deployment. When you use the Bing chatbot in your web browser, the image is preprocessed on your computer (rotation, compression, and so on) before being sent to the server. Although the current implementation works, it would be more efficient to delegate that preprocessing to the client, as the real Bing web client does. I will try to provide JavaScript code for that at a later time.

As you saw, I did mention in the title that it is in Beta. The way the code is written, Bing can sometimes mess up its answers. Indeed, Bing does not really stream its responses as the other providers do. Bing sends its answers like this on each iteration:

"Hi,"
"Hi, this,"
"Hi, this is,"
"Hi, this is Bing."

Instead of sending each new segment on its own, Bing resends the whole accumulated answer on every iteration. So, to simulate a normal streaming response, other contributors made the code compare each iteration with the previous one and yield only the newly added segment (see the sketch below). However, this approach overlooks something else that Bing does.
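
A minimal sketch of that delta trick (illustrative only, not the exact Bing.py code):

```python
# Each iteration from Bing contains the whole answer so far, so to fake
# token streaming we remember what was already yielded and emit only the
# newly appended suffix.
async def stream_deltas(cumulative_messages):
    yielded = ""
    async for full_text in cumulative_messages:
        if full_text.startswith(yielded):
            new_part = full_text[len(yielded):]
            if new_part:
                yield new_part
            yielded = full_text
        # If the accumulated text no longer starts with what we already
        # yielded (Bing sometimes rewrites earlier parts of the answer, as
        # explained below), this simple approach stalls.
```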

Bing runs its responses through a markdown detector that searches for links while the AI is answering. When it finds a link, it saves it and waits until the answer is finished before appending all the found links at the very end. So if the AI is in the middle of writing a link and completes it on a later iteration, the link is removed from the answer body and only reappears at the very end. Example:

"Here is your link reference ["
"Here is your link reference [^"
"Here is your link reference [^1"
"Here is your link reference [^1^"

At that point the streamed response gets stuck, because the markdown detector deletes this link reference from the next iteration and only re-adds it at the very end once the AI has finished.

For this reason, I am working on an update to anticipate the markdown detector.
So please, if you guys notice any bugs with this new implementation, I would greatly appreciate it if you could report them on the issue tab of this repo. Thanks in advance, and I hope that all these explanations were clear to you!
2023-10-22 15:59:56 +02:00
Heiner Lohaus
78f93bb737 Add rate limit error messages 2023-10-22 15:15:43 +02:00
Heiner Lohaus
63cda8d779 Fix increase timeout
Add Hashnode Provider
Fix Yqcloud Provider
2023-10-22 14:22:33 +02:00
Heiner Lohaus
4225a39a49 Enable Liaobots and ChatForAi again 2023-10-22 09:04:14 +02:00
Heiner Lohaus
13e89d6ab9 Fix MyShell Provider 2023-10-22 08:57:31 +02:00
Heiner Lohaus
a3af9fac3e Add FakeGpt Provider
Update providers in models
2023-10-22 01:22:25 +02:00
abc
ae8dae82cf ~ | g4f v-0.1.7.2
patch / unpatch providers
2023-10-21 00:52:19 +01:00
abc
dad69d24ce ~
minor changes
2023-10-20 19:28:46 +01:00
abc
d4ab83a45b ~
automatic models fetching in GUI.
2023-10-19 15:14:48 +01:00
hs_junxiang
042ee7633b Fix: debug.logging does not work in retry provider 2023-10-19 10:15:38 +08:00
ostix360
24f7495f24 Add timeout 2023-10-17 09:29:12 +02:00
abc
5b240665fb ~ | add g4f.Provider.GeekGpt 2023-10-16 14:34:00 +01:00
abc
4a3b663ccd ~ | remove non-working providers 2023-10-16 00:47:10 +01:00
Heiner Lohaus
c1adfbee8e Add Llama2 and NoowAi Provider 2023-10-15 19:10:25 +02:00
Tekky
8bdbb9e9cd
~ | Merge pull request #1068 from hlohaus/fre
Fix Opchatgpts and ChatForAi Provider
2023-10-14 14:36:47 +01:00
abc
1f8293250e ~
fix chatbase (bad) and remove from auto selection
2023-10-14 14:36:24 +01:00