quivr/frontend/lib/helpers/defineMaxTokens.ts
Stan Girard b330370d8c
feat: 🎸 max-token (#1538)
Added a 4k max-token limit for gpt-4.

2023-11-01 08:52:49 +01:00


import { Model, PaidModels } from "../types/brainConfig";

export const defineMaxTokens = (model: Model | PaidModels): number => {
  // At the moment only OpenAI models are evaluated
  switch (model) {
    case "gpt-3.5-turbo":
      return 1000;
    case "gpt-3.5-turbo-16k":
      return 4000;
    case "gpt-4":
      return 4000;
    default:
      return 500;
  }
};
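
As a rough illustration (not part of the repository file), the helper might be consumed when assembling a completion request for the selected model. The import path, the askQuestion shape, and the max_tokens field name below are hypothetical; only defineMaxTokens and its return values come from the file above.

// Hypothetical usage sketch, assuming the selected model string is a valid Model value.
import { defineMaxTokens } from "../helpers/defineMaxTokens";

const model = "gpt-4";
const maxTokens = defineMaxTokens(model); // 4000 for gpt-4, 500 for unknown models

// Illustrative request body only; field names are assumptions, not Quivr's API.
const requestBody = {
  model,
  max_tokens: maxTokens,
  question: "What is Quivr?",
};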