quivr/core/tests/test_config.py
Jacopo Chevallard 285fe5b960
feat: websearch, tool use, user intent, dynamic retrieval, multiple questions (#3424)
# Description

This PR includes far too many new features:

- detection of user intent (closes CORE-211)
- treating multiple questions in parallel (closes CORE-212)
- using the chat history when answering a question (closes CORE-213)
- filtering of retrieved chunks by relevance threshold (closes CORE-217)
- dynamic retrieval of chunks (closes CORE-218)
- enabling web search via Tavily (closes CORE-220)
- enabling the agent / assistant to activate tools when relevant to complete
the user task (closes CORE-224)

Also closes CORE-205
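
The relevance-threshold filtering above can be sketched roughly as follows. This is an illustrative assumption, not the actual quivr_core API: the `Chunk` dataclass, its `score` field, and the `threshold` default are all hypothetical stand-ins for whatever the retriever returns.

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    """Hypothetical retrieved chunk with a retriever similarity score."""
    text: str
    score: float


def filter_chunks(chunks: list[Chunk], threshold: float = 0.5) -> list[Chunk]:
    # Keep only chunks whose relevance score meets the threshold,
    # preserving the original retrieval order.
    return [c for c in chunks if c.score >= threshold]
```

The same idea extends to the dynamic-retrieval feature: instead of a fixed top-k, the caller can keep widening the search until enough chunks clear the threshold.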

## Checklist before requesting a review

Please delete options that are not relevant.

- [ ] My code follows the style guidelines of this project
- [ ] I have performed a self-review of my code
- [ ] I have commented hard-to-understand areas
- [ ] I have added tests that prove my fix is effective or that
my feature works
- [ ] New and existing unit tests pass locally with my changes
- [ ] Any dependent changes have been merged

## Screenshots (if appropriate):

---------

Co-authored-by: Stan Girard <stan@quivr.app>
2024-10-31 17:57:54 +01:00


from quivr_core.rag.entities.config import LLMEndpointConfig, RetrievalConfig


def test_default_llm_config():
    config = LLMEndpointConfig()

    # The default config should match an explicitly constructed one,
    # field for field; comparing model_dump() outputs checks every field at once.
    assert (
        config.model_dump()
        == LLMEndpointConfig(
            model="gpt-4o",
            llm_base_url=None,
            llm_api_key=None,
            max_context_tokens=2000,
            max_output_tokens=2000,
            temperature=0.7,
            streaming=True,
        ).model_dump()
    )


def test_default_retrievalconfig():
    config = RetrievalConfig()

    assert config.max_files == 20
    assert config.prompt is None
    assert config.llm_config == LLMEndpointConfig()
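
The dict-comparison pattern these tests rely on can be shown with a self-contained stdlib analogue. `MyLLMConfig` below is a hypothetical stand-in for `LLMEndpointConfig`, using `dataclasses.asdict` where the pydantic model uses `model_dump()`:

```python
from dataclasses import dataclass, asdict


@dataclass
class MyLLMConfig:
    """Hypothetical stand-in for LLMEndpointConfig (not the real class)."""
    model: str = "gpt-4o"
    temperature: float = 0.7
    streaming: bool = True


# Serializing to a dict and comparing checks every field at once,
# mirroring the model_dump() comparison in the test above.
default = MyLLMConfig()
custom = MyLLMConfig(temperature=0.2)

assert asdict(default) == asdict(MyLLMConfig())
assert asdict(default) != asdict(custom)
```

Comparing serialized dicts (rather than attribute by attribute) means the test keeps covering new fields automatically as the config class grows.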