Adam Treat
b5edaa2656
Revert "add tokenizer readme w/ instructions for convert script"
This reverts commit 5063c2c1b2.
2023-05-30 12:58:18 -04:00
aaron miller
5063c2c1b2
add tokenizer readme w/ instructions for convert script
2023-05-30 12:05:57 -04:00
Aaron Miller
d59c77ac55
buf_ref.into() can be const now
2023-05-30 12:05:57 -04:00
Aaron Miller
bbcee1ced5
New tokenizer implementation for MPT and GPT-J
Improves output quality by making these tokenizers more closely
match the behavior of the huggingface `tokenizers` based BPE
tokenizers these models were trained with.
Featuring:
* Fixed unicode handling (via ICU)
* Fixed BPE token merge handling
* Complete added vocabulary handling
2023-05-30 12:05:57 -04:00
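The BPE merge handling fixed in the commit above can be illustrated with a minimal sketch: repeatedly merge the adjacent pair of pieces with the lowest merge rank until no ranked pair remains. All names here are hypothetical; this is a sketch of the general algorithm, not the gpt4all implementation.

```cpp
#include <limits>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Sketch of greedy BPE merging: find the adjacent pair with the lowest
// rank in the merge table, fuse it, and repeat until nothing merges.
std::vector<std::string> bpe_merge(
    std::vector<std::string> pieces,
    const std::map<std::pair<std::string, std::string>, int> &ranks)
{
    while (pieces.size() > 1) {
        int bestRank = std::numeric_limits<int>::max();
        size_t bestIdx = 0;
        for (size_t i = 0; i + 1 < pieces.size(); ++i) {
            auto it = ranks.find({pieces[i], pieces[i + 1]});
            if (it != ranks.end() && it->second < bestRank) {
                bestRank = it->second;
                bestIdx = i;
            }
        }
        if (bestRank == std::numeric_limits<int>::max())
            break; // no mergeable pair left
        pieces[bestIdx] += pieces[bestIdx + 1];
        pieces.erase(pieces.begin() + bestIdx + 1);
    }
    return pieces;
}
```

Merging strictly by lowest rank (rather than left-to-right) is what makes the output match `tokenizers`-style BPE behavior.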
Adam Treat
474c5387f9
Get the backend as well as the client building/working with MSVC.
2023-05-25 15:22:45 -04:00
Adam Treat
9bfff8bfcb
Add new reverse prompt for the localdocs context feature.
2023-05-25 11:28:06 -04:00
Juuso Alasuutari
ef052aed84
llmodel: constify some casts in LLModelWrapper
2023-05-22 08:54:46 -04:00
Juuso Alasuutari
81fdc28e58
llmodel: constify LLModel::threadCount()
2023-05-22 08:54:46 -04:00
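Constifying an accessor like `threadCount()` lets callers query it through a `const` reference or pointer. A minimal sketch of the pattern, with hypothetical names (not the actual LLModel class):

```cpp
#include <cstdint>

// Sketch of a const accessor: marking threadCount() const promises it
// does not mutate the object, so const contexts can call it.
class LLModelExample {
public:
    int32_t threadCount() const { return m_threadCount; }
    void setThreadCount(int32_t n) { m_threadCount = n; }

private:
    int32_t m_threadCount = 4;
};
```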
Juuso Alasuutari
08ece43f0d
llmodel: fix wrong and/or missing prompt callback type
Fix occurrences of the prompt callback being incorrectly specified, or
the response callback's prototype being incorrectly used in its place.
Signed-off-by: Juuso Alasuutari <juuso.alasuutari@gmail.com>
2023-05-21 16:02:11 -04:00
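Why mixing up the two callback prototypes matters can be sketched as follows (hypothetical signatures, not the actual llmodel C API): the prompt callback receives only a token id, while the response callback also receives decoded text, so substituting one prototype for the other misreads arguments even if a cast makes it compile.

```cpp
#include <cstdint>

// Two distinct callback prototypes; they must not be used interchangeably.
typedef bool (*prompt_callback_t)(int32_t token_id);
typedef bool (*response_callback_t)(int32_t token_id, const char *response);

// Example implementations conforming to each prototype.
static bool on_prompt(int32_t /*token_id*/) { return true; }
static bool on_response(int32_t /*token_id*/, const char *text) {
    return text != nullptr; // stop generation if text is missing
}
```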
Adam Treat
8204c2eb80
Only default mlock on macOS where swap seems to be a problem.
2023-05-21 10:27:04 -04:00
Adam Treat
aba1147a22
Always default mlock to true.
2023-05-20 21:16:15 -04:00
aaron miller
e6fd0a240d
backend: fix buffer overrun in repeat penalty code
Caught with AddressSanitizer while running a basic prompt test against llmodel
standalone. This fix allows ASan builds to complete a simple prompt
without illegal accesses, though notably several leaks remain.
2023-05-17 07:54:10 -04:00
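The overrun class of bug fixed above can be sketched as a repeat-penalty scan whose window is clamped to the number of tokens actually generated: reading `repeat_last_n` tokens unconditionally runs past the start of the history early in a generation. This is a hypothetical sketch, not the backend's actual code.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch: penalize logits of recently generated tokens, clamping the
// lookback window so we never index before last_tokens.begin().
void apply_repeat_penalty(std::vector<float> &logits,
                          const std::vector<int> &last_tokens,
                          size_t repeat_last_n, float penalty)
{
    size_t window = std::min(repeat_last_n, last_tokens.size());
    for (size_t i = last_tokens.size() - window; i < last_tokens.size(); ++i) {
        float &l = logits[last_tokens[i]];
        l = l > 0 ? l / penalty : l * penalty;
    }
}
```

Without the `std::min` clamp, a window larger than the current history underflows the start index, exactly the kind of illegal access ASan reports.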
kuvaus
26cb31c4e6
Bugfix in llmodel_model_create function
Fixes the bug where llmodel_model_create prints "Invalid model file" even though the model is loaded correctly. Credits and thanks to @serendipity for the fix.
2023-05-17 07:49:32 -04:00
kuvaus
3cb6dd7a66
gpt4all-backend: Add llmodel create and destroy functions (#554)
* Add llmodel create and destroy functions
* Fix capitalization
* Fix capitalization
* Fix capitalization
* Update CMakeLists.txt
---------
Co-authored-by: kuvaus <kuvaus@users.noreply.github.com>
2023-05-16 11:36:46 -04:00
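The create/destroy pattern added above pairs an opaque handle with allocation and free functions so C callers never touch the C++ type directly. A minimal sketch with hypothetical names (not the actual llmodel API):

```cpp
// Opaque handle plus paired create/destroy functions, the usual shape
// of a C-compatible wrapper around a C++ object.
struct llmodel_example {
    int loaded; // 0 until a model file is loaded
};
typedef struct llmodel_example *llmodel_example_t;

llmodel_example_t llmodel_example_create(void) {
    return new llmodel_example{0};
}

void llmodel_example_destroy(llmodel_example_t m) {
    delete m; // safe on nullptr as well
}
```

Every successful `create` must be matched by exactly one `destroy`; the opaque pointer keeps the ABI stable across C++ implementation changes.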
kuvaus
507e913faf
gpt4all-backend: Add MSVC support to backend (#595)
* Add MSVC compatibility
* Add _MSC_VER macro
---------
Co-authored-by: kuvaus <kuvaus@users.noreply.github.com>
2023-05-16 11:35:33 -04:00
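`_MSC_VER` is the predefined macro MSVC sets, so guarding on it is the standard way to select compiler-specific syntax. A sketch of the pattern (the macro name `LLM_NOINLINE` and its use here are assumptions for illustration; the backend's actual guards may differ):

```cpp
// Select the compiler-specific no-inline attribute: MSVC uses
// __declspec(noinline), GCC/Clang use __attribute__((noinline)).
#if defined(_MSC_VER)
#define LLM_NOINLINE __declspec(noinline)
#else
#define LLM_NOINLINE __attribute__((noinline))
#endif

LLM_NOINLINE int add_one(int x) { return x + 1; }
```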
Aaron Miller
d14936bfd6
backend: dedupe tokenizing code in mpt/gptj
2023-05-16 10:30:19 -04:00
Aaron Miller
6182026c70
backend: dedupe tokenizing code in gptj/mpt
2023-05-16 10:30:19 -04:00
Aaron Miller
4cd8bdf9a1
backend: make initial buf_size const in model impls
More unification of the mpt and gptj code - this value is never written to,
so also changing the name to be clearer.
2023-05-16 10:30:19 -04:00
Aaron Miller
08402a1b64
mpt: use buf in model struct (thread safety)
2023-05-16 10:30:19 -04:00
AT
4920816c90
Update README.md
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-05-14 15:26:00 -04:00
Zach Nussbaum
1ed71fbbf8
fix: use right conversion script
2023-05-11 11:20:43 -04:00
Adam Treat
d918b02c29
Move the llmodel C API to new top-level directory and version it.
2023-05-10 11:46:40 -04:00
Richard Guo
02d1bdb0be
mono repo structure
2023-05-01 15:45:23 -04:00