g4f v0.1.7.2

patch / unpatch providers
This commit is contained in:
abc 2023-10-21 00:52:19 +01:00
parent 29e2302fb2
commit ae8dae82cf
20 changed files with 195 additions and 140 deletions


@@ -2,7 +2,7 @@
By using this repository or any code related to it, you agree to the [legal notice](./LEGAL_NOTICE.md). The author is not responsible for any copies, forks, reuploads made by other users, or anything else related to gpt4free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license this Repository uses.
- latest pypi version: [`0.1.7.1`](https://pypi.org/project/g4f/0.1.7.1)
- latest pypi version: [`0.1.7.2`](https://pypi.org/project/g4f/0.1.7.2)
```sh
pip install -U g4f
```
@@ -412,37 +412,34 @@ if __name__ == "__main__":
| Website| Provider| gpt-3.5 | Stream | Async | Status | Auth |
| ------ | ------- | ------- | --------- | --------- | ------ | ---- |
| [www.aitianhu.com](https://www.aitianhu.com) | `g4f.Provider.AItianhu` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat3.aiyunos.top](https://chat3.aiyunos.top/) | `g4f.Provider.AItianhuSpace` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [e.aiask.me](https://e.aiask.me) | `g4f.Provider.AiAsk` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat-gpt.org](https://chat-gpt.org/chat) | `g4f.Provider.Aichat` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [www.chatbase.co](https://www.chatbase.co) | `g4f.Provider.ChatBase` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatforai.store](https://chatforai.store) | `g4f.Provider.ChatForAi` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatgpt4online.org](https://chatgpt4online.org) | `g4f.Provider.Chatgpt4Online` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatgpt.ai](https://chatgpt.ai/) | `g4f.Provider.ChatgptAi` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat.chatgptdemo.net](https://chat.chatgptdemo.net) | `g4f.Provider.ChatgptDemo` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatgptlogin.ai](https://chatgptlogin.ai) | `g4f.Provider.ChatgptLogin` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatgptfree.ai](https://chatgptfree.ai) | `g4f.Provider.ChatgptFree` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatgptx.de](https://chatgptx.de) | `g4f.Provider.ChatgptX` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [freegpts1.aifree.site](https://freegpts1.aifree.site/) | `g4f.Provider.FreeGpt` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [gptalk.net](https://gptalk.net) | `g4f.Provider.GPTalk` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [ai18.gptforlove.com](https://ai18.gptforlove.com) | `g4f.Provider.GptForLove` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [gptgo.ai](https://gptgo.ai) | `g4f.Provider.GptGo` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [gptgod.site](https://gptgod.site) | `g4f.Provider.GptGod` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [www.llama2.ai](https://www.llama2.ai) | `g4f.Provider.Llama2` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [noowai.com](https://noowai.com) | `g4f.Provider.NoowAi` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [opchatgpts.net](https://opchatgpts.net) | `g4f.Provider.Opchatgpts` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat.openai.com](https://chat.openai.com) | `g4f.Provider.OpenaiChat` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [theb.ai](https://theb.ai) | `g4f.Provider.Theb` | ✔️ | ✔️ | ❌ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [sdk.vercel.ai](https://sdk.vercel.ai) | `g4f.Provider.Vercel` | ✔️ | ✔️ | ❌ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [you.com](https://you.com) | `g4f.Provider.You` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat9.yqcloud.top](https://chat9.yqcloud.top/) | `g4f.Provider.Yqcloud` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [www.aitianhu.com](https://www.aitianhu.com) | `g4f.Provider.AItianhu` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chat.acytoo.com](https://chat.acytoo.com) | `g4f.Provider.Acytoo` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [aiservice.vercel.app](https://aiservice.vercel.app/) | `g4f.Provider.AiService` | ✔️ | ❌ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [aibn.cc](https://aibn.cc) | `g4f.Provider.Aibn` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [ai.ls](https://ai.ls) | `g4f.Provider.Ails` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chataigpt.org](https://chataigpt.org) | `g4f.Provider.ChatAiGpt` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chatforai.store](https://chatforai.store) | `g4f.Provider.ChatForAi` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chatgpt4online.org](https://chatgpt4online.org) | `g4f.Provider.Chatgpt4Online` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chat.chatgptdemo.net](https://chat.chatgptdemo.net) | `g4f.Provider.ChatgptDemo` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chatgptduo.com](https://chatgptduo.com) | `g4f.Provider.ChatgptDuo` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chatgptfree.ai](https://chatgptfree.ai) | `g4f.Provider.ChatgptFree` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chatgptlogin.ai](https://chatgptlogin.ai) | `g4f.Provider.ChatgptLogin` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [ava-ai-ef611.web.app](https://ava-ai-ef611.web.app) | `g4f.Provider.CodeLinkAva` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [cromicle.top](https://cromicle.top) | `g4f.Provider.Cromicle` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chat.dfehub.com](https://chat.dfehub.com/) | `g4f.Provider.DfeHub` | ✔️ | ✔️ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
@@ -451,8 +448,10 @@ if __name__ == "__main__":
| [chat9.fastgpt.me](https://chat9.fastgpt.me/) | `g4f.Provider.FastGpt` | ✔️ | ✔️ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [forefront.com](https://forefront.com) | `g4f.Provider.Forefront` | ✔️ | ✔️ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chat.getgpt.world](https://chat.getgpt.world/) | `g4f.Provider.GetGpt` | ✔️ | ✔️ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [gptgod.site](https://gptgod.site) | `g4f.Provider.GptGod` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [komo.ai](https://komo.ai/api/ask) | `g4f.Provider.Komo` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [ai.okmiku.com](https://ai.okmiku.com) | `g4f.Provider.MikuChat` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [opchatgpts.net](https://opchatgpts.net) | `g4f.Provider.Opchatgpts` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [www.perplexity.ai](https://www.perplexity.ai) | `g4f.Provider.PerplexityAi` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [talkai.info](https://talkai.info) | `g4f.Provider.TalkAi` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [p5.v50.ltd](https://p5.v50.ltd) | `g4f.Provider.V50` | ✔️ | ❌ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
@@ -460,6 +459,7 @@ if __name__ == "__main__":
| [wewordle.org](https://wewordle.org) | `g4f.Provider.Wewordle` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chat.wuguokai.xyz](https://chat.wuguokai.xyz) | `g4f.Provider.Wuguokai` | ✔️ | ❌ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chat.ylokh.xyz](https://chat.ylokh.xyz) | `g4f.Provider.Ylokh` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chat9.yqcloud.top](https://chat9.yqcloud.top/) | `g4f.Provider.Yqcloud` | ✔️ | ✔️ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
### Other Models


@@ -10,7 +10,7 @@ from .base_provider import AsyncGeneratorProvider, format_prompt, get_cookies
class AItianhu(AsyncGeneratorProvider):
url = "https://www.aitianhu.com"
working = False
working = True
supports_gpt_35_turbo = True
@classmethod
@@ -23,9 +23,9 @@ class AItianhu(AsyncGeneratorProvider):
timeout: int = 120, **kwargs) -> AsyncResult:
if not cookies:
cookies = browser_cookie3.chrome(domain_name='www.aitianhu.com')
cookies = get_cookies(domain_name='www.aitianhu.com')
if not cookies:
raise RuntimeError(f"g4f.provider.{cls.__name__} requires cookies")
raise RuntimeError(f"g4f.provider.{cls.__name__} requires cookies [refresh https://www.aitianhu.com on chrome]")
data = {
"prompt": format_prompt(messages),
@@ -68,7 +68,6 @@ class AItianhu(AsyncGeneratorProvider):
if b"platform's risk control" in line:
raise RuntimeError("Platform's Risk Control")
print(line)
line = json.loads(line)
if "detail" in line:


@@ -1,7 +1,7 @@
from __future__ import annotations
import random, json
from ..debug import logging
from ..typing import AsyncResult, Messages
from ..requests import StreamSession
from .base_provider import AsyncGeneratorProvider, format_prompt, get_cookies
@@ -17,37 +17,37 @@ class AItianhuSpace(AsyncGeneratorProvider):
supports_gpt_35_turbo = True
@classmethod
async def create_async_generator(
cls,
async def create_async_generator(cls,
model: str,
messages: Messages,
proxy: str = None,
domain: str = None,
cookies: dict = None,
timeout: int = 120,
**kwargs
) -> AsyncResult:
timeout: int = 10, **kwargs) -> AsyncResult:
if not model:
model = "gpt-3.5-turbo"
elif not model in domains:
raise ValueError(f"Model is not supported: {model}")
if not domain:
chars = 'abcdefghijklmnopqrstuvwxyz0123456789'
rand = ''.join(random.choice(chars) for _ in range(6))
domain = f"{rand}.{domains[model]}"
if logging:
print(f"AItianhuSpace | using domain: {domain}")
if not cookies:
cookies = get_cookies(domain)
cookies = get_cookies('.aitianhu.space')
if not cookies:
raise RuntimeError(f"g4f.provider.{cls.__name__} requires cookies")
raise RuntimeError(f"g4f.provider.{cls.__name__} requires cookies [refresh https://{domain} on chrome]")
url = f'https://{domain}'
async with StreamSession(
proxies={"https": proxy},
cookies=cookies,
timeout=timeout,
impersonate="chrome110",
verify=False
) as session:
async with StreamSession(proxies={"https": proxy},
cookies=cookies, timeout=timeout, impersonate="chrome110", verify=False) as session:
data = {
"prompt": format_prompt(messages),
"options": {},

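The hunk above picks a random six-character subdomain per model before connecting. A minimal sketch of that scheme (the `domains` mapping here is a hypothetical stand-in for the provider's module-level table):

```python
import random
import string

# Hypothetical stand-in for the provider's model -> base-domain mapping.
domains = {"gpt-3.5-turbo": "aitianhu.space"}

def random_domain(model: str) -> str:
    """Build a random 6-character subdomain, as the provider's generator does."""
    chars = string.ascii_lowercase + string.digits
    rand = "".join(random.choice(chars) for _ in range(6))
    return f"{rand}.{domains[model]}"
```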

@@ -4,7 +4,8 @@ from aiohttp import ClientSession
from ..typing import Messages
from .base_provider import AsyncProvider, format_prompt
from .helper import get_cookies
from ..requests import StreamSession
class Aichat(AsyncProvider):
url = "https://chat-gpt.org/chat"
@@ -15,27 +16,34 @@ class Aichat(AsyncProvider):
async def create_async(
model: str,
messages: Messages,
proxy: str = None,
**kwargs
) -> str:
proxy: str = None, **kwargs) -> str:
cookies = get_cookies('chat-gpt.org') if not kwargs.get('cookies') else kwargs.get('cookies')
if not cookies:
raise RuntimeError(f"g4f.provider.Aichat requires cookies, [refresh https://chat-gpt.org on chrome]")
headers = {
"authority": "chat-gpt.org",
"accept": "*/*",
"cache-control": "no-cache",
"content-type": "application/json",
"origin": "https://chat-gpt.org",
"pragma": "no-cache",
"referer": "https://chat-gpt.org/chat",
"sec-ch-ua-mobile": "?0",
"sec-ch-ua-platform": '"macOS"',
"sec-fetch-dest": "empty",
"sec-fetch-mode": "cors",
"sec-fetch-site": "same-origin",
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36",
'authority': 'chat-gpt.org',
'accept': '*/*',
'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
'content-type': 'application/json',
'origin': 'https://chat-gpt.org',
'referer': 'https://chat-gpt.org/chat',
'sec-ch-ua': '"Chromium";v="118", "Google Chrome";v="118", "Not=A?Brand";v="99"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"macOS"',
'sec-fetch-dest': 'empty',
'sec-fetch-mode': 'cors',
'sec-fetch-site': 'same-origin',
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36',
}
async with ClientSession(
headers=headers
) as session:
async with StreamSession(headers=headers,
cookies=cookies,
timeout=6,
proxies={"https": proxy} if proxy else None,
impersonate="chrome110", verify=False) as session:
json_data = {
"message": format_prompt(messages),
"temperature": kwargs.get('temperature', 0.5),
@@ -43,13 +51,14 @@ class Aichat(AsyncProvider):
"top_p": kwargs.get('top_p', 1),
"frequency_penalty": 0,
}
async with session.post(
"https://chat-gpt.org/api/text",
proxy=proxy,
json=json_data
) as response:
async with session.post("https://chat-gpt.org/api/text",
json=json_data) as response:
response.raise_for_status()
result = await response.json()
if not result['response']:
raise Exception(f"Error Response: {result}")
return result["message"]


@@ -10,7 +10,7 @@ from .base_provider import AsyncGeneratorProvider
class ChatForAi(AsyncGeneratorProvider):
url = "https://chatforai.store"
working = True
working = False
supports_gpt_35_turbo = True
@classmethod


@@ -10,7 +10,7 @@ from .base_provider import AsyncGeneratorProvider
class Chatgpt4Online(AsyncGeneratorProvider):
url = "https://chatgpt4online.org"
supports_gpt_35_turbo = True
working = True
working = False
@classmethod
async def create_async_generator(
@@ -31,6 +31,7 @@ class Chatgpt4Online(AsyncGeneratorProvider):
"newMessage": messages[-1]["content"],
"stream": True
}
async with session.post(cls.url + "/wp-json/mwai-ui/v1/chats/submit", json=data, proxy=proxy) as response:
response.raise_for_status()
async for line in response.content:


@@ -10,7 +10,7 @@ from .helper import format_prompt
class ChatgptDemo(AsyncGeneratorProvider):
url = "https://chat.chatgptdemo.net"
supports_gpt_35_turbo = True
working = True
working = False
@classmethod
async def create_async_generator(


@@ -5,6 +5,7 @@ from __future__ import annotations
import re
from aiohttp import ClientSession
from ..requests import StreamSession
from ..typing import Messages
from .base_provider import AsyncProvider
from .helper import format_prompt, get_cookies
@@ -13,7 +14,7 @@ from .helper import format_prompt, get_cookies
class ChatgptFree(AsyncProvider):
url = "https://chatgptfree.ai"
supports_gpt_35_turbo = True
working = False
working = True
_post_id = None
_nonce = None
@@ -23,40 +24,50 @@ class ChatgptFree(AsyncProvider):
model: str,
messages: Messages,
proxy: str = None,
cookies: dict = None,
**kwargs
) -> str:
if not cookies:
cookies = get_cookies('chatgptfree.ai')
if not cookies:
raise RuntimeError(f"g4f.provider.{cls.__name__} requires cookies [refresh https://chatgptfree.ai on chrome]")
headers = {
"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/118.0",
"Accept": "*/*",
"Accept-Language": "de,en-US;q=0.7,en;q=0.3",
"Accept-Encoding": "gzip, deflate, br",
"Origin": cls.url,
"Alt-Used": "chatgptfree.ai",
"Connection": "keep-alive",
"Referer": f"{cls.url}/",
"Sec-Fetch-Dest": "empty",
"Sec-Fetch-Mode": "cors",
"Sec-Fetch-Site": "same-origin",
"Pragma": "no-cache",
"Cache-Control": "no-cache",
"TE": "trailers"
'authority': 'chatgptfree.ai',
'accept': '*/*',
'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
'origin': 'https://chatgptfree.ai',
'referer': 'https://chatgptfree.ai/chat/',
'sec-ch-ua': '"Chromium";v="118", "Google Chrome";v="118", "Not=A?Brand";v="99"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"macOS"',
'sec-fetch-dest': 'empty',
'sec-fetch-mode': 'cors',
'sec-fetch-site': 'same-origin',
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36',
}
async with ClientSession(headers=headers) as session:
async with StreamSession(headers=headers,
impersonate="chrome107", proxies={"https": proxy}, timeout=10) as session:
if not cls._nonce:
async with session.get(f"{cls.url}/",
proxy=proxy, cookies=cookies) as response:
async with session.get(f"{cls.url}/", cookies=cookies) as response:
response.raise_for_status()
response = await response.text()
result = re.search(r'data-post-id="([0-9]+)"', response)
if not result:
raise RuntimeError("No post id found")
cls._post_id = result.group(1)
result = re.search(r'data-nonce="(.*?)"', response)
if not result:
raise RuntimeError("No nonce found")
cls._nonce = result.group(1)
prompt = format_prompt(messages)
data = {
"_wpnonce": cls._nonce,
@@ -66,6 +77,8 @@ class ChatgptFree(AsyncProvider):
"message": prompt,
"bot_id": "0"
}
async with session.post(cls.url + "/wp-admin/admin-ajax.php", data=data, proxy=proxy) as response:
async with session.post(cls.url + "/wp-admin/admin-ajax.php",
data=data, cookies=cookies) as response:
response.raise_for_status()
return (await response.json())["data"]

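ChatgptFree scrapes a WordPress nonce and post id from the landing page with regular expressions before posting to `admin-ajax.php`. The extraction step can be sketched standalone (the sample HTML below is made up for illustration):

```python
import re

def extract_wp_tokens(html: str) -> tuple:
    """Pull data-post-id and data-nonce attributes from page HTML,
    mirroring the provider's scraping and its error handling."""
    post_id = re.search(r'data-post-id="([0-9]+)"', html)
    if not post_id:
        raise RuntimeError("No post id found")
    nonce = re.search(r'data-nonce="(.*?)"', html)
    if not nonce:
        raise RuntimeError("No nonce found")
    return post_id.group(1), nonce.group(1)

sample = '<div data-post-id="106" data-nonce="ab12cd34ef"></div>'
# extract_wp_tokens(sample) -> ("106", "ab12cd34ef")
```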

@@ -13,7 +13,7 @@ from .helper import format_prompt
class ChatgptLogin(AsyncGeneratorProvider):
url = "https://chatgptlogin.ai"
supports_gpt_35_turbo = True
working = True
working = False
_user_id = None
@classmethod


@@ -7,8 +7,7 @@ from ..requests import StreamSession
from .base_provider import AsyncGeneratorProvider
domains = [
'https://k.aifree.site',
'https://p.aifree.site'
'https://r.aifree.site'
]
class FreeGpt(AsyncGeneratorProvider):


@@ -2,8 +2,7 @@
from __future__ import annotations
from aiohttp import ClientSession
from ..requests import StreamSession
from ..typing import Messages
from .base_provider import AsyncProvider
from .helper import get_cookies
@@ -13,7 +12,7 @@ class GptChatly(AsyncProvider):
url = "https://gptchatly.com"
supports_gpt_35_turbo = True
supports_gpt_4 = True
working = False
working = True
@classmethod
async def create_async(
@@ -22,9 +21,9 @@ class GptChatly(AsyncProvider):
messages: Messages,
proxy: str = None, cookies: dict = None, **kwargs) -> str:
cookies = get_cookies('gptchatly.com') if not cookies else cookies
if not cookies:
cookies = get_cookies('gptchatly.com')
raise RuntimeError(f"g4f.provider.GptChatly requires cookies, [refresh https://gptchatly.com on chrome]")
if model.startswith("gpt-4"):
chat_url = f"{cls.url}/fetch-gpt4-response"
@@ -32,25 +31,26 @@ class GptChatly(AsyncProvider):
chat_url = f"{cls.url}/fetch-response"
headers = {
"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/118.0",
"Accept": "*/*",
"Accept-Language": "de,en-US;q=0.7,en;q=0.3",
"Accept-Encoding": "gzip, deflate, br",
"Referer": f"{cls.url}/",
"Content-Type": "application/json",
"Origin": cls.url,
"Connection": "keep-alive",
"Sec-Fetch-Dest": "empty",
"Sec-Fetch-Mode": "cors",
"Sec-Fetch-Site": "same-origin",
"Pragma": "no-cache",
"Cache-Control": "no-cache",
"TE": "trailers",
'authority': 'gptchatly.com',
'accept': '*/*',
'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
'content-type': 'application/json',
'origin': 'https://gptchatly.com',
'referer': 'https://gptchatly.com/',
'sec-ch-ua': '"Chromium";v="118", "Google Chrome";v="118", "Not=A?Brand";v="99"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"macOS"',
'sec-fetch-dest': 'empty',
'sec-fetch-mode': 'cors',
'sec-fetch-site': 'same-origin',
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36',
}
async with ClientSession(headers=headers) as session:
async with StreamSession(headers=headers,
proxies={"https": proxy}, cookies=cookies, impersonate='chrome110') as session:
data = {
"past_conversations": messages
}
async with session.post(chat_url, json=data, proxy=proxy) as response:
async with session.post(chat_url, json=data) as response:
response.raise_for_status()
return (await response.json())["chatGPTResponse"]


@@ -8,7 +8,7 @@ from .helper import format_prompt
class GptGod(AsyncGeneratorProvider):
url = "https://gptgod.site"
supports_gpt_35_turbo = True
working = True
working = False
@classmethod
async def create_async_generator(
@@ -18,6 +18,7 @@ class GptGod(AsyncGeneratorProvider):
proxy: str = None,
**kwargs
) -> AsyncResult:
headers = {
"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/118.0",
"Accept": "text/event-stream",
@@ -32,6 +33,7 @@ class GptGod(AsyncGeneratorProvider):
"Pragma": "no-cache",
"Cache-Control": "no-cache",
}
async with ClientSession(headers=headers) as session:
prompt = format_prompt(messages)
data = {
@@ -42,6 +44,8 @@ class GptGod(AsyncGeneratorProvider):
response.raise_for_status()
event = None
async for line in response.content:
print(line)
if line.startswith(b'event: '):
event = line[7:-1]
elif event == b"data" and line.startswith(b"data: "):

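The generator loop above dispatches on `event:` and `data:` prefixes of a server-sent-events byte stream. A minimal standalone parser in the same shape:

```python
def parse_sse(lines):
    """Yield payloads of 'data' events from an SSE byte stream,
    tracking the current event name as the provider's loop does."""
    event = None
    for line in lines:
        if line.startswith(b"event: "):
            event = line[7:-1]          # strip prefix and trailing newline
        elif event == b"data" and line.startswith(b"data: "):
            yield line[6:-1]

stream = [b"event: data\n", b"data: hello\n",
          b"event: done\n", b"data: ignored\n"]
# list(parse_sse(stream)) -> [b"hello"]
```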

@@ -30,7 +30,7 @@ models = {
class Liaobots(AsyncGeneratorProvider):
url = "https://liaobots.site"
working = True
working = False
supports_gpt_35_turbo = True
supports_gpt_4 = True
_auth_code = None


@@ -10,16 +10,15 @@ from .base_provider import AsyncGeneratorProvider
class Opchatgpts(AsyncGeneratorProvider):
url = "https://opchatgpts.net"
supports_gpt_35_turbo = True
working = True
working = False
@classmethod
async def create_async_generator(
cls,
model: str,
messages: Messages,
proxy: str = None,
**kwargs
) -> AsyncResult:
proxy: str = None, **kwargs) -> AsyncResult:
headers = {
"User-Agent" : "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36",
"Accept" : "*/*",
@@ -58,7 +57,6 @@ class Opchatgpts(AsyncGeneratorProvider):
elif line["type"] == "end":
break
@classmethod
@property
def params(cls):
@@ -71,6 +69,5 @@ class Opchatgpts(AsyncGeneratorProvider):
param = ", ".join([": ".join(p) for p in params])
return f"g4f.provider.{cls.__name__} supports: ({param})"
def random_string(length: int = 10):
return ''.join(random.choice(string.ascii_lowercase + string.digits) for _ in range(length))


@@ -22,8 +22,6 @@ class Vercel(BaseProvider):
stream: bool,
proxy: str = None, **kwargs) -> CreateResult:
print(model)
if not model:
model = "gpt-3.5-turbo"


@@ -9,7 +9,7 @@ from .base_provider import AsyncGeneratorProvider, format_prompt
class Yqcloud(AsyncGeneratorProvider):
url = "https://chat9.yqcloud.top/"
working = True
working = False
supports_gpt_35_turbo = True
@staticmethod


@@ -1,11 +1,15 @@
from __future__ import annotations
import asyncio
import sys
from asyncio import AbstractEventLoop
import asyncio
import webbrowser
import http.cookiejar
from os import path
from ..typing import Dict, List, Messages
import browser_cookie3
from asyncio import AbstractEventLoop
from ..typing import Dict, Messages
from browser_cookie3 import chrome, chromium, opera, opera_gx, brave, edge, vivaldi, firefox, BrowserCookieError
# Change event loop policy on windows
if sys.platform == 'win32':
@@ -39,18 +43,46 @@ def get_event_loop() -> AbstractEventLoop:
'Use "create_async" instead of "create" function in a running event loop. Or install the "nest_asyncio" package.'
)
def init_cookies():
urls = [
'https://chat-gpt.org',
'https://www.aitianhu.com',
'https://chatgptfree.ai',
'https://gptchatly.com',
'https://bard.google.com',
'https://huggingface.co/chat',
'https://open-assistant.io/chat'
]
browsers = ['google-chrome', 'chrome', 'firefox', 'safari']
def open_urls_in_browser(browser):
b = webbrowser.get(browser)
for url in urls:
b.open(url, new=0, autoraise=True)
for browser in browsers:
try:
open_urls_in_browser(browser)
break
except webbrowser.Error:
continue
# Load cookies for a domain from all supported browsers.
# Cache the results in the "_cookies" variable.
def get_cookies(cookie_domain: str) -> Dict[str, str]:
if cookie_domain not in _cookies:
_cookies[cookie_domain] = {}
def get_cookies(domain_name=''):
cj = http.cookiejar.CookieJar()
for cookie_fn in [chrome, chromium, opera, opera_gx, brave, edge, vivaldi, firefox]:
try:
for cookie in browser_cookie3.load(cookie_domain):
_cookies[cookie_domain][cookie.name] = cookie.value
except:
for cookie in cookie_fn(domain_name=domain_name):
cj.set_cookie(cookie)
except BrowserCookieError:
pass
return _cookies[cookie_domain]
_cookies[domain_name] = {cookie.name: cookie.value for cookie in cj}
return _cookies[domain_name]
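The rewritten `get_cookies` tries every browser_cookie3 loader in turn, swallows `BrowserCookieError`, merges the results into one jar, and caches a name-to-value dict. The merge logic can be sketched with stub loader callables standing in for the real browser readers:

```python
import http.cookiejar

_cookies = {}

def load_cookies(domain_name, loaders):
    """Merge cookies from several loader callables into one dict,
    skipping loaders that fail - mirrors the new get_cookies()."""
    cj = http.cookiejar.CookieJar()
    for loader in loaders:
        try:
            for cookie in loader(domain_name=domain_name):
                cj.set_cookie(cookie)
        except Exception:  # stands in for BrowserCookieError
            continue
    _cookies[domain_name] = {c.name: c.value for c in cj}
    return _cookies[domain_name]
```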
def format_prompt(messages: Messages, add_special_tokens=False) -> str:

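`format_prompt`, used throughout these providers, flattens the message list into a single chat transcript. Assuming the implementation of that era, it behaves roughly like:

```python
def format_prompt(messages, add_special_tokens=False):
    """Render [{'role': ..., 'content': ...}] dicts as 'Role: content' lines
    ending with an 'Assistant:' cue (a sketch of the helper's behavior)."""
    if not add_special_tokens and len(messages) <= 1:
        return messages[0]["content"]
    formatted = "\n".join(
        f"{m['role'].capitalize()}: {m['content']}" for m in messages
    )
    return f"{formatted}\nAssistant:"
```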

@@ -5,7 +5,7 @@ from .Provider import BaseProvider, RetryProvider
from .typing import Messages, CreateResult, Union, List
from .debug import logging
version = '0.1.7.1'
version = '0.1.7.2'
version_check = True
def check_pypi_version() -> None:
@@ -22,7 +22,8 @@ def check_pypi_version() -> None:
def get_model_and_provider(model : Union[Model, str],
provider : Union[type[BaseProvider], None],
stream : bool,
ignored : List[str] = None) -> tuple[Model, type[BaseProvider]]:
ignored : List[str] = None,
ignore_working: bool = False) -> tuple[Model, type[BaseProvider]]:
if isinstance(model, str):
if model in ModelUtils.convert:
@@ -39,7 +40,7 @@ def get_model_and_provider(model : Union[Model, str],
if not provider:
raise RuntimeError(f'No provider found for model: {model}')
if not provider.working:
if not provider.working and not ignore_working:
raise RuntimeError(f'{provider.__name__} is not working')
if not provider.supports_stream and stream:
@@ -59,9 +60,10 @@ class ChatCompletion:
provider : Union[type[BaseProvider], None] = None,
stream : bool = False,
auth : Union[str, None] = None,
ignored : List[str] = None, **kwargs) -> Union[CreateResult, str]:
ignored : List[str] = None,
ignore_working: bool = False, **kwargs) -> Union[CreateResult, str]:
model, provider = get_model_and_provider(model, provider, stream, ignored)
model, provider = get_model_and_provider(model, provider, stream, ignored, ignore_working)
if provider.needs_auth and not auth:
raise ValueError(

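The new `ignore_working` flag lets callers bypass the working check when a provider has been patched out. Stripped of the surrounding plumbing, the gate reduces to (`ProviderStub` is a hypothetical record for illustration):

```python
class ProviderStub:
    """Hypothetical provider record used to illustrate the check."""
    def __init__(self, name, working):
        self.__name__ = name
        self.working = working

def check_provider(provider, ignore_working=False):
    """Raise unless the provider is marked working or the caller opts out,
    as get_model_and_provider() now does."""
    if not provider.working and not ignore_working:
        raise RuntimeError(f"{provider.__name__} is not working")
    return provider
```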

@@ -16,6 +16,7 @@ from .Provider import (
GeekGpt,
Myshell,
FreeGpt,
Cromicle,
NoowAi,
Vercel,
Aichat,
@@ -72,7 +73,7 @@ gpt_35_turbo = Model(
base_provider = 'openai',
best_provider=RetryProvider([
ChatgptX, ChatgptDemo, GptGo, You,
NoowAi, GPTalk, GptForLove, Phind, ChatBase
NoowAi, GPTalk, GptForLove, Phind, ChatBase, Cromicle
])
)
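`gpt_35_turbo` routes through a `RetryProvider` over the list above, which tries providers until one answers. The fallback pattern is essentially (a sketch; providers here are plain callables, and whether the real class shuffles by default is an assumption):

```python
import random

def retry_create(providers, prompt, shuffle=True):
    """Try each provider in (optionally shuffled) order and return the
    first successful result - a sketch of RetryProvider's behavior."""
    providers = list(providers)
    if shuffle:
        random.shuffle(providers)
    exceptions = {}
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as e:
            exceptions[getattr(provider, "__name__", repr(provider))] = e
    raise RuntimeError(f"All providers failed: {exceptions}")
```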


@@ -11,7 +11,7 @@ with codecs.open(os.path.join(here, "README.md"), encoding="utf-8") as fh:
with open("requirements.txt") as f:
required = f.read().splitlines()
VERSION = "0.1.7.1"
VERSION = "0.1.7.2"
DESCRIPTION = (
"The official gpt4free repository | various collection of powerful language models"
)