v0 of the Semantic Index evaluate test suite

Release Notes:

- Added eval.rs as an example to the semantic-index crates
- Generates test metrics for two small projects, as a starting point to systematically evaluate retrieval quality
Semantic Index
Evaluation
Metrics
nDCG@k:
- "The value of NDCG is determined by comparing the relevance of the items returned by the search engine to the relevance of the item that a hypothetical "ideal" search engine would return.
- "The relevance of result is represented by a score (also known as a 'grade') that is assigned to the search query. The scores of these results are then discounted based on their position in the search results -- did they get recommended first or last?"
MRR@k:
- "Mean reciprocal rank quantifies the rank of the first relevant item found in teh recommendation list."
MAP@k:
- "Mean average precision averages the precision@k metric at each relevant item position in the recommendation list.
Resources: