Commit Graph

143 Commits

Author · SHA1 · Message · Date
Adam Treat
a8a0f4635a
ci: upload installer repo as compressed archive (#2636)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2024-07-10 16:23:31 -04:00
Adam Treat
ef4e362d92
ci: downgrade CUDA dep to 11.8 for compatibility (#2639)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-07-10 15:29:44 -04:00
John W. Parent
4c26726f3e
MacOS Build Online: no offline on (#2509)
Signed-off-by: John Parent <john.parent@kitware.com>
2024-07-01 20:03:00 -04:00
John W. Parent
f751d206bb
Online workflow (#2505)
Adds a CircleCI workflow to build and sign online
installers on Windows and macOS

Signed-off-by: John Parent <john.parent@kitware.com>
2024-07-01 19:14:19 -04:00
John W. Parent
47015278f4
Ensure dotnet path in Windows signing job (#2508)
Signed-off-by: John Parent <john.parent@kitware.com>
2024-07-01 19:08:47 -04:00
John W. Parent
c0d311bc66
Add initial template windows signing flow (#2443)
Adds a workflow that signs Windows installers with an
EV certificate from Azure Key Vault via
AzureSignTool

Adds CMake logic to sign Windows binaries as they're processed

Installs dotnet 8, as required by AzureSignTool

Signed-off-by: John Parent <john.parent@kitware.com>
2024-07-01 17:40:02 -04:00
mcembalest
125b8d50bd
mkdocs imaging requirements (#2500)
Signed-off-by: Max Cembalest <max@nomic.ai>
Signed-off-by: mcembalest <70534565+mcembalest@users.noreply.github.com>
2024-07-01 13:34:23 -04:00
mcembalest
7127539146
markdown captions (#2499)
Signed-off-by: Max Cembalest <max@nomic.ai>
2024-07-01 13:18:18 -04:00
John W. Parent
23e8b187a4
Add basic signing of app bundle and binaries (#2472)
Adds verification functionality to the codesign script
Adds the required context so that Xcode can perform the signing
Adds an install-time check and signing for all binaries
Adds instructions allowing macdeployqt to sign the finalized app bundle

Signed-off-by: John Parent <john.parent@kitware.com>
2024-06-28 14:21:18 -04:00
John W. Parent
30febbe3d2
Add basic Macos signing + notarizing workflow (#2319)
Adds a basic CircleCI workflow to sign, notarize,
and staple the macOS app bundle and associated DMG,
then publishes the signed binary in CircleCI artifacts

Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-06-25 20:31:51 -04:00
Jared Van Bortel
88d85be0f9
chat: fix build on Windows and Nomic Embed path on macOS (#2467)
* chat: remove unused oscompat source files

These files are no longer needed now that the hnswlib index is gone.
This fixes an issue with the Windows build as there was a compilation
error in oscompat.cpp.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* llm: fix pragma to be recognized by MSVC

Replaces this MSVC warning:
C:\msys64\home\Jared\gpt4all\gpt4all-chat\llm.cpp(53,21): warning C4081: expected '('; found 'string'

With this:
C:\msys64\home\Jared\gpt4all\gpt4all-chat\llm.cpp : warning : offline installer build will not check for updates!

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
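
A minimal sketch of the parenthesized #pragma message form that MSVC requires, illustrating the fix above; the macro name and guard are illustrative, not the exact llm.cpp code:

```cpp
// pragma_message_sketch.cpp -- hypothetical illustration of the fix above.
// MSVC only recognizes #pragma message with parentheses; the unparenthesized
// GCC/Clang-style form ("#pragma message \"text\"") triggers warning C4081
// ("expected '('") instead of printing the message.
#if defined(OFFLINE_INSTALLER_BUILD)   // illustrative macro name
#   pragma message("offline installer build will not check for updates!")
#endif

int main() { return 0; }
```
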

* usearch: fork usearch to fix `CreateFile` build error

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* dlhandle: fix incorrect assertion on Windows

SetErrorMode returns the previous value of the error mode flags, not an
indicator of success.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
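
Since the fix above hinges on SetErrorMode returning the previous error-mode flags rather than a success indicator, here is a minimal Windows-only sketch of saving and restoring the mode around a library load; the function name and flag choice are illustrative:

```cpp
// seterrormode_sketch.cpp -- hypothetical illustration of the point above.
#include <windows.h>

HMODULE loadQuietly(const wchar_t *path) {
    // SetErrorMode returns the *previous* error-mode flags, so its return
    // value must not be asserted on as a success/failure indicator.
    UINT oldMode = SetErrorMode(SEM_FAILCRITICALERRORS); // suppress error dialogs
    HMODULE handle = LoadLibraryW(path);                 // may be NULL on failure
    SetErrorMode(oldMode);                               // restore the prior mode
    return handle;
}

int main() {
    return loadQuietly(L"nonexistent.dll") ? 0 : 1;
}
```
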

* llamamodel: fix UB in LLamaModel::embedInternal

It is undefined behavior to increment an STL iterator past the end of
the container. Use offsets to do the math instead.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
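
A minimal sketch of the offset-based chunking the embedInternal fix describes, where the range end is clamped as an index before any iterator is formed; the container and chunk size are illustrative, not the actual embedding code:

```cpp
// offset_chunking_sketch.cpp -- hypothetical illustration of the fix above.
#include <algorithm>
#include <cstddef>
#include <vector>

int main() {
    std::vector<int> tokens(100, 0);   // stand-in for a token buffer
    const std::size_t chunkSize = 32;
    for (std::size_t begin = 0; begin < tokens.size(); begin += chunkSize) {
        // Clamp the end as an index first: advancing an iterator past end()
        // is undefined behavior, but an index can simply be bounded.
        std::size_t end = std::min(begin + chunkSize, tokens.size());
        auto first = tokens.begin() + static_cast<std::ptrdiff_t>(begin);
        auto last  = tokens.begin() + static_cast<std::ptrdiff_t>(end); // never past end()
        // ... embed the range [first, last) ...
        (void)first; (void)last;
    }
    return 0;
}
```
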

* cmake: install embedding model to bundle's Resources dir on macOS

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* ci: fix macOS build by explicitly installing Rosetta

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

---------

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-06-25 17:22:51 -04:00
Jared Van Bortel
beaede03fb
repo: remove bindings that have no maintainer (#2429)
The C#, Java, and Go bindings are now removed from the repo.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-06-11 18:11:25 -04:00
Jared Van Bortel
55d709862f Revert "typescript bindings maintenance (#2363)"
As discussed on Discord, this PR was not ready to be merged. CI fails on
it.

This reverts commit a602f7fde7.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-06-03 17:26:19 -04:00
Andreas Obersteiner
a602f7fde7
typescript bindings maintenance (#2363)
* remove outdated comments

Signed-off-by: limez <limez@protonmail.com>

* simpler build from source

Signed-off-by: limez <limez@protonmail.com>

* update unix build script to create .so runtimes correctly

Signed-off-by: limez <limez@protonmail.com>

* configure ci build type, use RelWithDebInfo for dev build script

Signed-off-by: limez <limez@protonmail.com>

* add clean script

Signed-off-by: limez <limez@protonmail.com>

* fix streamed token decoding / emoji

Signed-off-by: limez <limez@protonmail.com>

* remove deprecated nCtx

Signed-off-by: limez <limez@protonmail.com>

* update typings

Signed-off-by: jacob <jacoobes@sern.dev>

update typings

Signed-off-by: jacob <jacoobes@sern.dev>

* readme,mspell

Signed-off-by: jacob <jacoobes@sern.dev>

* cuda/backend logic changes + name napi methods like their js counterparts

Signed-off-by: limez <limez@protonmail.com>

* convert llmodel example into a test, separate test suite that can run in ci

Signed-off-by: limez <limez@protonmail.com>

* update examples / naming

Signed-off-by: limez <limez@protonmail.com>

* update deps, remove the need for binding.ci.gyp, make node-gyp-build fallback easier to test

Signed-off-by: limez <limez@protonmail.com>

* make sure the assert-backend-sources.js script is published, but not the others

Signed-off-by: limez <limez@protonmail.com>

* build correctly on windows (regression on node-gyp-build)

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* codespell

Signed-off-by: limez <limez@protonmail.com>

* make sure dlhandle.cpp gets linked correctly

Signed-off-by: limez <limez@protonmail.com>

* add include for check_cxx_compiler_flag call during aarch64 builds

Signed-off-by: limez <limez@protonmail.com>

* x86 > arm64 cross compilation of runtimes and bindings

Signed-off-by: limez <limez@protonmail.com>

* default to cpu instead of kompute on arm64

Signed-off-by: limez <limez@protonmail.com>

* formatting, more minimal example

Signed-off-by: limez <limez@protonmail.com>

---------

Signed-off-by: limez <limez@protonmail.com>
Signed-off-by: jacob <jacoobes@sern.dev>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Co-authored-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Co-authored-by: jacob <jacoobes@sern.dev>
2024-06-03 11:12:55 -05:00
Jared Van Bortel
8a70f770a2
ci: fix Python build after CUDA PR (#2373)
Build with -DCMAKE_BUILD_TYPE=Release, and use MSVC on Windows.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-05-29 10:52:45 -04:00
Jared Van Bortel
d2a99d9bc6
support the llama.cpp CUDA backend (#2310)
* rebase onto llama.cpp commit ggerganov/llama.cpp@d46dbc76f
* support for CUDA backend (enabled by default)
* partial support for Occam's Vulkan backend (disabled by default)
* partial support for HIP/ROCm backend (disabled by default)
* sync llama.cpp.cmake with upstream llama.cpp CMakeLists.txt
* changes to GPT4All backend, bindings, and chat UI to handle choice of llama.cpp backend (Kompute or CUDA)
* ship CUDA runtime with installed version
* make device selection in the UI on macOS actually do something
* model whitelist: remove dbrx, mamba, persimmon, plamo; add internlm and starcoder2

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-05-15 15:27:50 -04:00
Jared Van Bortel
6c8a44f6c4
ci: use aws s3 sync to upload docs (#2172)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-03-27 11:03:10 -04:00
Jacob Nguyen
0e9e5237c5
ci: fix build-ts-docs with npm install --ignore-scripts (#2143)
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
2024-03-19 17:28:14 -04:00
Jared Van Bortel
f30151491d Revert "ci: fix failing build-ts-docs workflow (#2142)"
According to jacoobes, --ignore-scripts was removed in yarn v2.

This reverts commit c6bd8577a9.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-03-19 12:28:43 -04:00
Jacob Nguyen
c6bd8577a9
ci: fix failing build-ts-docs workflow (#2142)
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
2024-03-19 12:20:53 -04:00
Jared Van Bortel
72474a2efa
ci: fix chat installer build by updating QtIFW dependency (#2015)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-02-26 11:47:11 -05:00
TareHimself
a153cc5b25
typescript: async generator and token stream (#1897)
Signed-off-by: Tare Ebelo <75279482+TareHimself@users.noreply.github.com>
Signed-off-by: jacob <jacoobes@sern.dev>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: jacob <jacoobes@sern.dev>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2024-02-24 17:50:14 -05:00
Jared Van Bortel
fc7e5f4a09
ci: fix missing Kompute support in python bindings (#1953)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-02-09 21:40:32 -05:00
Jared Van Bortel
5dd7378db4
csharp: fix NuGet package build (#1951)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Signed-off-by: Konstantin Semenenko <mail@ksemenenko.com>
Co-authored-by: Konstantin Semenenko <mail@ksemenenko.com>
2024-02-09 14:58:28 -05:00
Jared Van Bortel
15ce428672
ci: run all workflows on config change (#1829)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-01-17 12:41:52 -05:00
Jared Van Bortel
b96406669d CI: fix Windows Python build 2024-01-12 16:02:56 -05:00
Jacob Nguyen
a1f27072c2
fix/macm1ts (#1746)
* make runtime library backend universal searchable

* corepack enable

* fix

* pass tests

* simpler

* add more jsdoc

* fix tests

* fix up circle ci

* bump version

* remove false positive warning

* add disclaimer

* update readme

* revert

* update ts docs

---------

Co-authored-by: Matthew Nguyen <matthewpnguyen@Matthews-MacBook-Pro-7.local>
2023-12-15 12:44:39 -06:00
Jacob Nguyen
9481762802
Update continue_config.yml, should fix ts docs failing (#1743) 2023-12-11 15:46:02 -05:00
Jacob Nguyen
da95bcfb4b
vulkan support for typescript bindings, gguf support (#1390)
* adding some native methods to cpp wrapper

* gpu seems to work

* typings and add availibleGpus method

* fix spelling

* fix syntax

* more

* normalize methods to conform to py

* remove extra dynamic linker deps when building with vulkan

* bump python version (library linking fix)

* Don't link against libvulkan.

* vulkan python bindings on windows fixes

* Bring the vulkan backend to the GUI.

* When device is Auto (the default), only consider discrete GPUs; otherwise fall back to CPU.

* Show the device we're currently using.

* Fix up the name and formatting.

* init at most one vulkan device, submodule update

fixes issues w/ multiple of the same gpu

* Update the submodule.

* Add version 2.4.15 and bump the version number.

* Fix a bug where we're not properly falling back to CPU.

* Sync to a newer version of llama.cpp with bugfix for vulkan.

* Report the actual device we're using.

* Only show GPU when we're actually using it.

* Bump to new llama with new bugfix.

* Release notes for v2.4.16 and bump the version.

* Fallback to CPU more robustly.

* Release notes for v2.4.17 and bump the version.

* Bump the Python version to python-v1.0.12 to restrict the quants that vulkan recognizes.

* Link against ggml in bin so we can get the available devices without loading a model.

* Send actual and requested device info for those who have opt-in.

* Actually bump the version.

* Release notes for v2.4.18 and bump the version.

* Fix for crashes on systems where vulkan is not installed properly.

* Release notes for v2.4.19 and bump the version.

* fix typings and vulkan build works on win

* Add flatpak manifest

* Remove unnecessary stuffs from manifest

* Update to 2.4.19

* appdata: update software description

* Latest rebase on llama.cpp with gguf support.

* macos build fixes

* llamamodel: metal supports all quantization types now

* gpt4all.py: GGUF

* pyllmodel: print specific error message

* backend: port BERT to GGUF

* backend: port MPT to GGUF

* backend: port Replit to GGUF

* backend: use gguf branch of llama.cpp-mainline

* backend: use llamamodel.cpp for StarCoder

* conversion scripts: cleanup

* convert scripts: load model as late as possible

* convert_mpt_hf_to_gguf.py: better tokenizer decoding

* backend: use llamamodel.cpp for Falcon

* convert scripts: make them directly executable

* fix references to removed model types

* modellist: fix the system prompt

* backend: port GPT-J to GGUF

* gpt-j: update inference to match latest llama.cpp insights

- Use F16 KV cache
- Store transposed V in the cache
- Avoid unnecessary Q copy

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

ggml upstream commit 0265f0813492602fec0e1159fe61de1bf0ccaf78

* chatllm: grammar fix

* convert scripts: use bytes_to_unicode from transformers

* convert scripts: make gptj script executable

* convert scripts: add feed-forward length for better compatibility

This GGUF key is used by all llama.cpp models with upstream support.

* gptj: remove unused variables

* Refactor for subgroups on mat * vec kernel.

* Add q6_k kernels for vulkan.

* python binding: print debug message to stderr

* Fix regenerate button to be deterministic and bump the llama version to latest we have for gguf.

* Bump to the latest fixes for vulkan in llama.

* llamamodel: fix static vector in LLamaModel::endTokens

* Switch to new models2.json for new gguf release and bump our version to
2.5.0.

* Bump to latest llama/gguf branch.

* chat: report reason for fallback to CPU

* chat: make sure to clear fallback reason on success

* more accurate fallback descriptions

* differentiate between init failure and unsupported models

* backend: do not use Vulkan with non-LLaMA models

* Add q8_0 kernels to kompute shaders and bump to latest llama/gguf.

* backend: fix build with Visual Studio generator

Use the $<CONFIG> generator expression instead of CMAKE_BUILD_TYPE. This
is needed because Visual Studio is a multi-configuration generator, so
we do not know what the build type will be until `cmake --build` is
called.

Fixes #1470

* remove old llama.cpp submodules

* Reorder and refresh our models2.json.

* rebase on newer llama.cpp

* python/embed4all: use gguf model, allow passing kwargs/overriding model

* Add starcoder, rift and sbert to our models2.json.

* Push a new version number for llmodel backend now that it is based on gguf.

* fix stray comma in models2.json

Signed-off-by: Aaron Miller <apage43@ninjawhale.com>

* Speculative fix for build on mac.

* chat: clearer CPU fallback messages

* Fix crasher with an empty string for prompt template.

* Update the language here to avoid misunderstanding.

* added EM German Mistral Model

* make codespell happy

* issue template: remove "Related Components" section

* cmake: install the GPT-J plugin (#1487)

* Do not delete saved chats if we fail to serialize properly.

* Restore state from text if necessary.

* Another codespell attempted fix.

* llmodel: do not call magic_match unless build variant is correct (#1488)

* chatllm: do not write uninitialized data to stream (#1486)

* mat*mat for q4_0, q8_0

* do not process prompts on gpu yet

* python: support Path in GPT4All.__init__ (#1462)

* llmodel: print an error if the CPU does not support AVX (#1499)

* python bindings should be quiet by default

* disable llama.cpp logging unless the GPT4ALL_VERBOSE_LLAMACPP envvar is
  nonempty
* make the verbose flag for retrieve_model default to false (but also
  overridable via the gpt4all constructor)

should be able to run a basic test:

```python
import gpt4all
model = gpt4all.GPT4All('/Users/aaron/Downloads/rift-coder-v0-7b-q4_0.gguf')
print(model.generate('def fib(n):'))
```

and see no non-model output when successful

* python: always check status code of HTTP responses (#1502)

* Always save chats to disk, but save them as text by default. This also changes
the UI behavior to always open a 'New Chat' and set it as current instead
of setting a restored chat as current. This improves usability by not requiring
the user to wait if they want to start chatting immediately.

* Update README.md

Signed-off-by: umarmnaq <102142660+umarmnaq@users.noreply.github.com>

* fix embed4all filename

https://discordapp.com/channels/1076964370942267462/1093558720690143283/1161778216462192692

Signed-off-by: Aaron Miller <apage43@ninjawhale.com>

* Improves Java API signatures while maintaining backward compatibility

* python: replace deprecated pkg_resources with importlib (#1505)

* Updated chat wishlist (#1351)

* q6k, q4_1 mat*mat

* update mini-orca 3b to gguf2, license

Signed-off-by: Aaron Miller <apage43@ninjawhale.com>

* convert scripts: fix AutoConfig typo (#1512)

* publish config https://docs.npmjs.com/cli/v9/configuring-npm/package-json#publishconfig (#1375)

merge into my branch

* fix appendBin

* fix gpu not initializing first

* sync up

* progress, still wip on destructor

* some detection work

* untested dispose method

* add js side of dispose

* Update gpt4all-bindings/typescript/index.cc

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update gpt4all-bindings/typescript/index.cc

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update gpt4all-bindings/typescript/index.cc

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update gpt4all-bindings/typescript/src/gpt4all.d.ts

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update gpt4all-bindings/typescript/src/gpt4all.js

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update gpt4all-bindings/typescript/src/util.js

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* fix tests

* fix circleci for nodejs

* bump version

---------

Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
Signed-off-by: umarmnaq <102142660+umarmnaq@users.noreply.github.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: Akarshan Biswas <akarshan.biswas@gmail.com>
Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
Co-authored-by: Jan Philipp Harries <jpdus@users.noreply.github.com>
Co-authored-by: umarmnaq <102142660+umarmnaq@users.noreply.github.com>
Co-authored-by: Alex Soto <asotobu@gmail.com>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
2023-11-01 14:38:58 -05:00
cebtenzzre
017c3a9649
python: prepare version 2.0.0rc1 (#1529) 2023-10-18 20:24:54 -04:00
cebtenzzre
bcbcad98d0
CI: increase minimum macOS version of Python bindings to 10.15 (#1511) 2023-10-18 12:23:00 -04:00
Aaron Miller
7f979c8258 Build offline installers in CircleCI 2023-09-27 11:24:21 -07:00
Aaron Miller
f0735efa7d vulkan python bindings on windows fixes 2023-09-12 14:16:02 -07:00
Adam Treat
a69d23ecc4 Fix for windows circleci 2023-08-31 15:29:54 -04:00
Adam Treat
b9fd0c25b2 Try and fix the rest of circleci for vulkan. 2023-08-31 15:29:54 -04:00
Adam Treat
85e34598f9 more circleci 2023-08-31 15:29:54 -04:00
Adam Treat
9f1cbad4f1 more Circleci 2023-08-31 15:29:54 -04:00
Adam Treat
202805637b More circleci 2023-08-31 15:29:54 -04:00
Adam Treat
2832fad965 More circleci 2023-08-31 15:29:54 -04:00
Adam Treat
6a309e2ac8 More circleci 2023-08-31 15:29:54 -04:00
Adam Treat
94969a4199 More circleci 2023-08-31 15:29:54 -04:00
Adam Treat
1a2a9791bd More circleci 2023-08-31 15:29:54 -04:00
Adam Treat
8d80f7963e More circleci 2023-08-31 15:29:54 -04:00
Adam Treat
1723f82aaa More circleci 2023-08-31 15:29:54 -04:00
Adam Treat
3bdc87ff4a More circleci 2023-08-31 15:29:54 -04:00
Adam Treat
5e5a235639 More circleci 2023-08-31 15:29:54 -04:00
Adam Treat
4521c71b4e More circleci 2023-08-31 15:29:54 -04:00
Adam Treat
2f1c995739 More circleci 2023-08-31 15:29:54 -04:00
Adam Treat
84e08858a8 Fix missing run in circleci 2023-08-31 15:29:54 -04:00
Adam Treat
6fd6369ab3 Fix yaml parsing 2023-08-31 15:29:54 -04:00