# zed/crates/assistant/Cargo.toml

[package]
name = "assistant"
version = "0.1.0"
edition = "2021"
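# This crate is internal to Zed; `publish = false` keeps it off crates.io.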
publish = false
license = "GPL-3.0-or-later"

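# Lint levels are inherited from [workspace.lints] in the workspace root manifest.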
[lints]
workspace = true

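# The crate root is src/assistant.rs rather than the default src/lib.rs;
# doctests are disabled for this crate.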
[lib]
path = "src/assistant.rs"
doctest = false

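# Enabling "test-support" also enables the test-support features of the
# crates listed below. A dependent crate would opt in with, for example
# (hypothetical usage):
#   assistant = { workspace = true, features = ["test-support"] }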
[features]
test-support = [
    "editor/test-support",
    "language/test-support",
    "project/test-support",
    "text/test-support",
]

[dependencies]
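# anthropic, ollama, and open_ai (below) are LLM provider clients; their
# "schemars" feature enables JSON Schema derives via the schemars crate.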
anthropic = { workspace = true, features = ["schemars"] }
anyhow.workspace = true
assets.workspace = true
assistant_slash_command.workspace = true
async-watch.workspace = true
cargo_toml.workspace = true
chrono.workspace = true
client.workspace = true
clock.workspace = true
collections.workspace = true
command_palette_hooks.workspace = true
completion.workspace = true
editor.workspace = true
fs.workspace = true
futures.workspace = true
fuzzy.workspace = true
gpui.workspace = true
heed.workspace = true
html_to_markdown.workspace = true
http_client.workspace = true
indexed_docs.workspace = true
indoc.workspace = true
language.workspace = true
language_model.workspace = true
log.workspace = true
menu.workspace = true
multi_buffer.workspace = true
ollama = { workspace = true, features = ["schemars"] }
open_ai = { workspace = true, features = ["schemars"] }
ordered-float.workspace = true
parking_lot.workspace = true
paths.workspace = true
picker.workspace = true
project.workspace = true
regex.workspace = true
rope.workspace = true
schemars.workspace = true
search.workspace = true
semantic_index.workspace = true
serde.workspace = true
serde_json.workspace = true
settings.workspace = true
similar.workspace = true
smol.workspace = true
strsim.workspace = true
telemetry_events.workspace = true
terminal.workspace = true
terminal_view.workspace = true
theme.workspace = true
toml.workspace = true
ui.workspace = true
util.workspace = true
uuid.workspace = true
workspace.workspace = true

[dev-dependencies]
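# Test-only dependencies; several enable their "test-support" fixtures.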
completion = { workspace = true, features = ["test-support"] }
ctor.workspace = true
editor = { workspace = true, features = ["test-support"] }
env_logger.workspace = true
language = { workspace = true, features = ["test-support"] }
log.workspace = true
project = { workspace = true, features = ["test-support"] }
rand.workspace = true
text = { workspace = true, features = ["test-support"] }
unindent.workspace = true