Commit Graph

8 Commits

Bennet Bo Fenner
f413ea90bf
assistant: Fix Google AI provider not respecting low_speed_timeout_in_seconds (#17423)
Release Notes:

- Fixed an issue where the `low_speed_timeout_in_seconds` setting was not
respected when using Google Gemini models
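
For reference, a minimal sketch of where this setting might live in a user's
`settings.json`; the nesting under `language_models` is shown here as an
assumption for illustration, not a schema defined by this commit:

```json
{
  "language_models": {
    "google": {
      "low_speed_timeout_in_seconds": 120
    }
  }
}
```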
2024-09-05 18:16:30 +02:00
Antonio Scandurra
d6bdaa8a91
Simplify LLM protocol (#15366)
In this pull request, we change the zed.dev protocol so that we pass the
raw JSON for the specified provider directly to our server. This avoids
the need to define a protobuf message that is a superset of every
provider's format.
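
Illustratively, this means forwarding something like an Anthropic-style
request body untouched. The payload below is a sketch of what such raw
provider JSON looks like, offered as an assumption for illustration rather
than the exact wire format defined in this PR:

```json
{
  "model": "claude-3-opus-20240229",
  "max_tokens": 1024,
  "messages": [
    { "role": "user", "content": "Summarize this file." }
  ]
}
```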

@bennetbo: We also changed the settings for `available_models` under
zed.dev to be a flat format, because the nesting seemed too confusing.
Can you help us upgrade the local provider configuration to be
consistent with this? We should do whatever we need when parsing the
settings to keep this simple for users, even if it's a bit more complex
on our end. We want to use versioning to avoid breaking existing users,
but we need to keep making progress.

```json
"zed.dev": {
  "available_models": [
    {
      "provider": "anthropic",
        "name": "some-newly-released-model-we-havent-added",
        "max_tokens": 200000
      }
  ]
}
```
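
To illustrate the consistency we're after, here is a sketch of how a local
provider such as `openai` might mirror the same flat shape; the key name and
fields are assumptions about the intended format, not the final schema from
this PR:

```json
"openai": {
  "available_models": [
    {
      "name": "some-newly-released-model-we-havent-added",
      "max_tokens": 200000
    }
  ]
}
```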

Release Notes:

- N/A

---------

Co-authored-by: Nathan <nathan@zed.dev>
2024-07-28 11:07:10 +02:00
Marshall Bowers
02c43a5bf2
Add missing workspace lints (#15237)
This PR adds the missing workspace lint configuration for the following
crates that were missing it:

- `google_ai`
- `open_ai`
- `tab_switcher`

Release Notes:

- N/A
2024-07-25 19:52:24 -04:00
Mikayla Maki
855048041d
Update http crate name (#15041)
Release Notes:

- N/A
2024-07-23 15:01:05 -07:00
Conrad Irwin
5515ba6043
Extract http from util (#11680)
This avoids the CLI linking against libssl, etc.

Release Notes:

- N/A
2024-05-10 15:50:20 -06:00
Antonio Scandurra
9ab7a22fa8 Fix licensing errors 2024-03-20 15:52:02 +01:00
Antonio Scandurra
f2394c76f5 Fix licensing 2024-03-20 13:03:13 +01:00
Nathan Sobo
8ae5a3b61a
Allow AI interactions to be proxied through Zed's server so you don't need an API key (#7367)
Co-authored-by: Antonio <antonio@zed.dev>

Resurrected this from some assistant work I did in Spring of 2023.
- [x] Resurrect streaming responses
- [x] Use streaming responses to enable AI via Zed's servers by default
(but preserve API key option for now)
- [x] Simplify protobuf
- [x] Proxy to OpenAI on zed.dev
- [x] Proxy to Gemini on zed.dev
- [x] Improve UX for switching between OpenAI and Google models
  - We currently disallow cycling when setting a custom model, but we need a
    better solution to keep OpenAI models available while testing the Google
    ones
- [x] Show remaining tokens correctly for Google models
- [x] Remove semantic index
- [x] Delete `ai` crate
- [x] CloudFront so we can ban abuse
- [x] Rate-limiting
- [x] Fix panic when using inline assistant
- [x] Double-check that the upgraded `AssistantSettings` are
backwards-compatible (see the settings sketch after this list)
- [x] Add hosted LLM interaction behind a `language-models` feature
flag.
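
As a hypothetical sketch of the "preserve API key option" and the upgraded
`AssistantSettings`, a user sticking with their own OpenAI-backed provider
might have settings along these lines; the field names and values are
assumptions for illustration, not the exact schema from this PR:

```json
{
  "assistant": {
    "version": "1",
    "provider": {
      "name": "openai",
      "default_model": "gpt-4-turbo-preview",
      "api_url": "https://api.openai.com/v1"
    }
  }
}
```

The API key itself would still be supplied out of band (for example via an
environment variable or the system keychain) rather than stored in settings.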

Release Notes:

- We are temporarily removing the semantic index in order to redesign it
from scratch.

---------

Co-authored-by: Antonio <antonio@zed.dev>
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Thorsten <thorsten@zed.dev>
Co-authored-by: Max <max@zed.dev>
2024-03-19 19:22:26 +01:00