gpt4all/.circleci/continue_config.yml
Jacob Nguyen da95bcfb4b
vulkan support for typescript bindings, gguf support (#1390)
* adding some native methods to cpp wrapper

* gpu seems to work

* typings and add availibleGpus method

* fix spelling

* fix syntax

* more

* normalize methods to conform to py

* remove extra dynamic linker deps when building with vulkan

* bump python version (library linking fix)

* Don't link against libvulkan.

* vulkan python bindings on windows fixes

* Bring the vulkan backend to the GUI.

* When device is Auto (the default), we only consider discrete GPUs; otherwise fall back to CPU.

* Show the device we're currently using.

* Fix up the name and formatting.

* init at most one vulkan device, submodule update

fixes issues w/ multiple of the same gpu

* Update the submodule.

* Add version 2.4.15 and bump the version number.

* Fix a bug where we're not properly falling back to CPU.

* Sync to a newer version of llama.cpp with bugfix for vulkan.

* Report the actual device we're using.

* Only show GPU when we're actually using it.

* Bump to new llama with new bugfix.

* Release notes for v2.4.16 and bump the version.

* Fall back to CPU more robustly.

* Release notes for v2.4.17 and bump the version.

* Bump the Python version to python-v1.0.12 to restrict the quants that vulkan recognizes.

* Link against ggml in bin so we can get the available devices without loading a model.

* Send actual and requested device info for those who have opt-in.

* Actually bump the version.

* Release notes for v2.4.18 and bump the version.

* Fix for crashes on systems where vulkan is not installed properly.

* Release notes for v2.4.19 and bump the version.

* fix typings and vulkan build works on win

* Add flatpak manifest

* Remove unnecessary stuff from the manifest

* Update to 2.4.19

* appdata: update software description

* Latest rebase on llama.cpp with gguf support.

* macos build fixes

* llamamodel: metal supports all quantization types now

* gpt4all.py: GGUF

* pyllmodel: print specific error message

* backend: port BERT to GGUF

* backend: port MPT to GGUF

* backend: port Replit to GGUF

* backend: use gguf branch of llama.cpp-mainline

* backend: use llamamodel.cpp for StarCoder

* conversion scripts: cleanup

* convert scripts: load model as late as possible

* convert_mpt_hf_to_gguf.py: better tokenizer decoding

* backend: use llamamodel.cpp for Falcon

* convert scripts: make them directly executable

* fix references to removed model types

* modellist: fix the system prompt

* backend: port GPT-J to GGUF

* gpt-j: update inference to match latest llama.cpp insights

- Use F16 KV cache
- Store transposed V in the cache
- Avoid unnecessary Q copy

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

ggml upstream commit 0265f0813492602fec0e1159fe61de1bf0ccaf78

* chatllm: grammar fix

* convert scripts: use bytes_to_unicode from transformers

* convert scripts: make gptj script executable

* convert scripts: add feed-forward length for better compatibility

This GGUF key is used by all llama.cpp models with upstream support.

* gptj: remove unused variables

* Refactor for subgroups on mat * vec kernel.

* Add q6_k kernels for vulkan.

* python binding: print debug message to stderr

* Fix regenerate button to be deterministic and bump the llama version to latest we have for gguf.

* Bump to the latest fixes for vulkan in llama.

* llamamodel: fix static vector in LLamaModel::endTokens

* Switch to new models2.json for new gguf release and bump our version to
2.5.0.

* Bump to latest llama/gguf branch.

* chat: report reason for fallback to CPU

* chat: make sure to clear fallback reason on success

* more accurate fallback descriptions

* differentiate between init failure and unsupported models

* backend: do not use Vulkan with non-LLaMA models

* Add q8_0 kernels to kompute shaders and bump to latest llama/gguf.

* backend: fix build with Visual Studio generator

Use the $<CONFIG> generator expression instead of CMAKE_BUILD_TYPE. This
is needed because Visual Studio is a multi-configuration generator, so
we do not know what the build type will be until `cmake --build` is
called.

Fixes #1470
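A minimal sketch of the pattern this fix describes (target and directory names here are hypothetical, not the project's actual CMake code). With a multi-configuration generator like Visual Studio, `CMAKE_BUILD_TYPE` is empty at configure time, so the `$<CONFIG>` generator expression, which is resolved at build time, must be used instead:

```cmake
# Hypothetical target for illustration only.
add_library(llmodel SHARED llmodel.cpp)

# Wrong under Visual Studio: CMAKE_BUILD_TYPE is empty for
# multi-config generators, so this path would be malformed.
# set_target_properties(llmodel PROPERTIES
#     RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/${CMAKE_BUILD_TYPE})

# Works for both single- and multi-config generators: $<CONFIG>
# expands to the actual configuration at build time.
set_target_properties(llmodel PROPERTIES
    RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/$<CONFIG>)
```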

* remove old llama.cpp submodules

* Reorder and refresh our models2.json.

* rebase on newer llama.cpp

* python/embed4all: use gguf model, allow passing kwargs/overriding model

* Add starcoder, rift and sbert to our models2.json.

* Push a new version number for llmodel backend now that it is based on gguf.

* fix stray comma in models2.json

Signed-off-by: Aaron Miller <apage43@ninjawhale.com>

* Speculative fix for build on mac.

* chat: clearer CPU fallback messages

* Fix crasher with an empty string for prompt template.

* Update the language here to avoid misunderstanding.

* added EM German Mistral Model

* make codespell happy

* issue template: remove "Related Components" section

* cmake: install the GPT-J plugin (#1487)

* Do not delete saved chats if we fail to serialize properly.

* Restore state from text if necessary.

* Another codespell attempted fix.

* llmodel: do not call magic_match unless build variant is correct (#1488)

* chatllm: do not write uninitialized data to stream (#1486)

* mat*mat for q4_0, q8_0

* do not process prompts on gpu yet

* python: support Path in GPT4All.__init__ (#1462)

* llmodel: print an error if the CPU does not support AVX (#1499)

* python bindings should be quiet by default

* disable llama.cpp logging unless GPT4ALL_VERBOSE_LLAMACPP envvar is
  nonempty
* make verbose flag for retrieve_model default false (but also be
  overridable via gpt4all constructor)

should be able to run a basic test:

```python
import gpt4all
model = gpt4all.GPT4All('/Users/aaron/Downloads/rift-coder-v0-7b-q4_0.gguf')
print(model.generate('def fib(n):'))
```

and see no non-model output when successful
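The "nonempty envvar" gate can be sketched in shell (an assumption about the exact semantics: any nonempty value enables logging, not just `1`):

```shell
# Sketch of the GPT4ALL_VERBOSE_LLAMACPP check described above.
# Assumption: the binding treats any nonempty value as "enabled".
if [ -n "${GPT4ALL_VERBOSE_LLAMACPP}" ]; then
    echo "llama.cpp logging enabled"
else
    echo "llama.cpp logging disabled"
fi
```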

* python: always check status code of HTTP responses (#1502)

* Always save chats to disk, but save them as text by default. This also changes
the UI behavior to always open a 'New Chat' and setting it as current instead
of setting a restored chat as current. This improves usability by not requiring
the user to wait if they want to immediately start chatting.

* Update README.md

Signed-off-by: umarmnaq <102142660+umarmnaq@users.noreply.github.com>

* fix embed4all filename

https://discordapp.com/channels/1076964370942267462/1093558720690143283/1161778216462192692

Signed-off-by: Aaron Miller <apage43@ninjawhale.com>

* Improves Java API signatures maintaining back compatibility

* python: replace deprecated pkg_resources with importlib (#1505)

* Updated chat wishlist (#1351)

* q6k, q4_1 mat*mat

* update mini-orca 3b to gguf2, license

Signed-off-by: Aaron Miller <apage43@ninjawhale.com>

* convert scripts: fix AutoConfig typo (#1512)

* publish config https://docs.npmjs.com/cli/v9/configuring-npm/package-json#publishconfig (#1375)

merge into my branch

* fix appendBin

* fix gpu not initializing first

* sync up

* progress, still wip on destructor

* some detection work

* untested dispose method

* add js side of dispose

* Update gpt4all-bindings/typescript/index.cc

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update gpt4all-bindings/typescript/index.cc

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update gpt4all-bindings/typescript/index.cc

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update gpt4all-bindings/typescript/src/gpt4all.d.ts

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update gpt4all-bindings/typescript/src/gpt4all.js

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update gpt4all-bindings/typescript/src/util.js

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* fix tests

* fix circleci for nodejs

* bump version

---------

Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
Signed-off-by: umarmnaq <102142660+umarmnaq@users.noreply.github.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: Akarshan Biswas <akarshan.biswas@gmail.com>
Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
Co-authored-by: Jan Philipp Harries <jpdus@users.noreply.github.com>
Co-authored-by: umarmnaq <102142660+umarmnaq@users.noreply.github.com>
Co-authored-by: Alex Soto <asotobu@gmail.com>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
2023-11-01 14:38:58 -05:00


version: 2.1
orbs:
  win: circleci/windows@5.0
  python: circleci/python@1.2
  node: circleci/node@5.1
parameters:
  run-default-workflow:
    type: boolean
    default: false
  run-python-workflow:
    type: boolean
    default: false
  run-chat-workflow:
    type: boolean
    default: false
  run-ts-workflow:
    type: boolean
    default: false
  run-csharp-workflow:
    type: boolean
    default: false
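# NOTE (assumption): in CircleCI's dynamic-config pattern, booleans like
# these are typically set to true by a path-filtering setup workflow and
# consumed by `when:` clauses on workflows later in this file, e.g.:
#
#   workflows:
#     python:
#       when: << pipeline.parameters.run-python-workflow >>
#       jobs:
#         - build-py-linux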
jobs:
  default-job:
    docker:
      - image: circleci/python:3.7
    steps:
      - run: echo "CircleCI pipeline triggered"
  build-offline-chat-installer-macos:
    macos:
      xcode: 14.0.0
    steps:
      - checkout
      - run:
          name: Update Submodules
          command: |
            git submodule sync
            git submodule update --init --recursive
      - restore_cache: # this is the new step to restore cache
          keys:
            - macos-qt-cache_v2
      - run:
          name: Installing Qt
          command: |
            if [ ! -d ~/Qt ]; then
              curl -o qt-unified-macOS-x64-4.6.0-online.dmg https://gpt4all.io/ci/qt-unified-macOS-x64-4.6.0-online.dmg
              hdiutil attach qt-unified-macOS-x64-4.6.0-online.dmg
              /Volumes/qt-unified-macOS-x64-4.6.0-online/qt-unified-macOS-x64-4.6.0-online.app/Contents/MacOS/qt-unified-macOS-x64-4.6.0-online --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.46 qt.tools.ninja qt.qt6.651.clang_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
              hdiutil detach /Volumes/qt-unified-macOS-x64-4.6.0-online
            fi
      - save_cache: # this is the new step to save cache
          key: macos-qt-cache_v2
          paths:
            - ~/Qt
      - run:
          name: Build
          command: |
            mkdir build
            cd build
            export PATH=$PATH:$HOME/Qt/Tools/QtInstallerFramework/4.6/bin
            ~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake \
              -DCMAKE_GENERATOR:STRING=Ninja \
              -DBUILD_UNIVERSAL=ON \
              -DMACDEPLOYQT=~/Qt/6.5.1/macos/bin/macdeployqt \
              -DGPT4ALL_OFFLINE_INSTALLER=ON \
              -DCMAKE_BUILD_TYPE=Release \
              -DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake/Qt6 \
              -DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
              -S ../gpt4all-chat \
              -B .
            ~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target all
            ~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target install
            ~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target package
            mkdir upload
            cp gpt4all-installer-* upload
      - store_artifacts:
          path: build/upload
  build-offline-chat-installer-linux:
    machine:
      image: ubuntu-2204:2023.04.2
    steps:
      - checkout
      - run:
          name: Update Submodules
          command: |
            git submodule sync
            git submodule update --init --recursive
      - restore_cache: # this is the new step to restore cache
          keys:
            - linux-qt-cache
      - run:
          name: Setup Linux and Dependencies
          command: |
            wget -qO- https://packages.lunarg.com/lunarg-signing-key-pub.asc | sudo tee /etc/apt/trusted.gpg.d/lunarg.asc
            sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list http://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list
            sudo apt update && sudo apt install -y libfontconfig1 libfreetype6 libx11-6 libx11-xcb1 libxext6 libxfixes3 libxi6 libxrender1 libxcb1 libxcb-cursor0 libxcb-glx0 libxcb-keysyms1 libxcb-image0 libxcb-shm0 libxcb-icccm4 libxcb-sync1 libxcb-xfixes0 libxcb-shape0 libxcb-randr0 libxcb-render-util0 libxcb-util1 libxcb-xinerama0 libxcb-xkb1 libxkbcommon0 libxkbcommon-x11-0 bison build-essential flex gperf python3 gcc g++ libgl1-mesa-dev libwayland-dev vulkan-sdk patchelf
      - run:
          name: Installing Qt
          command: |
            if [ ! -d ~/Qt ]; then
              wget https://gpt4all.io/ci/qt-unified-linux-x64-4.6.0-online.run
              chmod +x qt-unified-linux-x64-4.6.0-online.run
              ./qt-unified-linux-x64-4.6.0-online.run --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.46 qt.tools.ninja qt.qt6.651.gcc_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver qt.qt6.651.qtwaylandcompositor
            fi
      - save_cache: # this is the new step to save cache
          key: linux-qt-cache
          paths:
            - ~/Qt
      - run:
          name: Build linuxdeployqt
          command: |
            git clone https://github.com/nomic-ai/linuxdeployqt
            cd linuxdeployqt && qmake && sudo make install
      - run:
          name: Build
          command: |
            set -eo pipefail
            export CMAKE_PREFIX_PATH=~/Qt/6.5.1/gcc_64/lib/cmake
            export PATH=$PATH:$HOME/Qt/Tools/QtInstallerFramework/4.6/bin
            mkdir build
            cd build
            mkdir upload
            ~/Qt/Tools/CMake/bin/cmake -DGPT4ALL_OFFLINE_INSTALLER=ON -DCMAKE_BUILD_TYPE=Release -S ../gpt4all-chat -B .
            ~/Qt/Tools/CMake/bin/cmake --build . --target all
            ~/Qt/Tools/CMake/bin/cmake --build . --target install
            ~/Qt/Tools/CMake/bin/cmake --build . --target package
            cp gpt4all-installer-* upload
      - store_artifacts:
          path: build/upload
  build-offline-chat-installer-windows:
    machine:
      image: 'windows-server-2019-vs2019:2022.08.1'
    resource_class: windows.large
    shell: powershell.exe -ExecutionPolicy Bypass
    steps:
      - checkout
      - run:
          name: Update Submodules
          command: |
            git submodule sync
            git submodule update --init --recursive
      - restore_cache: # this is the new step to restore cache
          keys:
            - windows-qt-cache
      - run:
          name: Installing Qt
          command: |
            if (-not (Test-Path C:\Qt)) {
              Invoke-WebRequest -Uri https://gpt4all.io/ci/qt-unified-windows-x64-4.6.0-online.exe -OutFile qt-unified-windows-x64-4.6.0-online.exe
              & .\qt-unified-windows-x64-4.6.0-online.exe --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email ${Env:QT_EMAIL} --password ${Env:QT_PASSWORD} install qt.tools.cmake qt.tools.ifw.46 qt.tools.ninja qt.qt6.651.win64_msvc2019_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
            }
      - save_cache: # this is the new step to save cache
          key: windows-qt-cache
          paths:
            - C:\Qt
      - run:
          name: Install VulkanSDK
          command: |
            Invoke-WebRequest -Uri https://sdk.lunarg.com/sdk/download/1.3.261.1/windows/VulkanSDK-1.3.261.1-Installer.exe -OutFile VulkanSDK-1.3.261.1-Installer.exe
            .\VulkanSDK-1.3.261.1-Installer.exe --accept-licenses --default-answer --confirm-command install
      - run:
          name: Build
          command: |
            $Env:PATH = "${Env:PATH};C:\Program Files (x86)\Windows Kits\10\bin\x64"
            $Env:PATH = "${Env:PATH};C:\Program Files (x86)\Windows Kits\10\bin\10.0.22000.0\x64"
            $Env:PATH = "${Env:PATH};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64"
            $Env:PATH = "${Env:PATH};C:\VulkanSDK\1.3.261.1\bin"
            $Env:PATH = "${Env:PATH};C:\Qt\Tools\QtInstallerFramework\4.6\bin"
            $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\ucrt\x64"
            $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\um\x64"
            $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\lib\x64"
            $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\lib\x64"
            $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt"
            $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\um"
            $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared"
            $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\winrt"
            $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\cppwinrt"
            $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\VS\include"
            $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include"
            $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\include"
            mkdir build
            cd build
            & "C:\Qt\Tools\CMake_64\bin\cmake.exe" `
              "-DCMAKE_GENERATOR:STRING=Ninja" `
              "-DCMAKE_BUILD_TYPE=Release" `
              "-DCMAKE_PREFIX_PATH:PATH=C:\Qt\6.5.1\msvc2019_64" `
              "-DCMAKE_MAKE_PROGRAM:FILEPATH=C:\Qt\Tools\Ninja\ninja.exe" `
              "-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON" `
              "-DGPT4ALL_OFFLINE_INSTALLER=ON" `
              "-S ..\gpt4all-chat" `
              "-B ."
            & "C:\Qt\Tools\Ninja\ninja.exe"
            & "C:\Qt\Tools\Ninja\ninja.exe" install
            & "C:\Qt\Tools\Ninja\ninja.exe" package
            mkdir upload
            copy gpt4all-installer-win64.exe upload
      - store_artifacts:
          path: build/upload
  build-gpt4all-chat-linux:
    machine:
      image: ubuntu-2204:2023.04.2
    steps:
      - checkout
      - run:
          name: Update Submodules
          command: |
            git submodule sync
            git submodule update --init --recursive
      - restore_cache: # this is the new step to restore cache
          keys:
            - linux-qt-cache
      - run:
          name: Setup Linux and Dependencies
          command: |
            wget -qO- https://packages.lunarg.com/lunarg-signing-key-pub.asc | sudo tee /etc/apt/trusted.gpg.d/lunarg.asc
            sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list http://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list
            sudo apt update && sudo apt install -y libfontconfig1 libfreetype6 libx11-6 libx11-xcb1 libxext6 libxfixes3 libxi6 libxrender1 libxcb1 libxcb-cursor0 libxcb-glx0 libxcb-keysyms1 libxcb-image0 libxcb-shm0 libxcb-icccm4 libxcb-sync1 libxcb-xfixes0 libxcb-shape0 libxcb-randr0 libxcb-render-util0 libxcb-util1 libxcb-xinerama0 libxcb-xkb1 libxkbcommon0 libxkbcommon-x11-0 bison build-essential flex gperf python3 gcc g++ libgl1-mesa-dev libwayland-dev vulkan-sdk
      - run:
          name: Installing Qt
          command: |
            if [ ! -d ~/Qt ]; then
              wget https://gpt4all.io/ci/qt-unified-linux-x64-4.6.0-online.run
              chmod +x qt-unified-linux-x64-4.6.0-online.run
              ./qt-unified-linux-x64-4.6.0-online.run --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.46 qt.tools.ninja qt.qt6.651.gcc_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver qt.qt6.651.qtwaylandcompositor
            fi
      - save_cache: # this is the new step to save cache
          key: linux-qt-cache
          paths:
            - ~/Qt
      - run:
          name: Build
          command: |
            export CMAKE_PREFIX_PATH=~/Qt/6.5.1/gcc_64/lib/cmake
            mkdir build
            cd build
            ~/Qt/Tools/CMake/bin/cmake -DCMAKE_BUILD_TYPE=Release -S ../gpt4all-chat -B .
            ~/Qt/Tools/CMake/bin/cmake --build . --target all
  build-gpt4all-chat-windows:
    machine:
      image: 'windows-server-2019-vs2019:2022.08.1'
    resource_class: windows.large
    shell: powershell.exe -ExecutionPolicy Bypass
    steps:
      - checkout
      - run:
          name: Update Submodules
          command: |
            git submodule sync
            git submodule update --init --recursive
      - restore_cache: # this is the new step to restore cache
          keys:
            - windows-qt-cache
      - run:
          name: Installing Qt
          command: |
            if (-not (Test-Path C:\Qt)) {
              Invoke-WebRequest -Uri https://gpt4all.io/ci/qt-unified-windows-x64-4.6.0-online.exe -OutFile qt-unified-windows-x64-4.6.0-online.exe
              & .\qt-unified-windows-x64-4.6.0-online.exe --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email ${Env:QT_EMAIL} --password ${Env:QT_PASSWORD} install qt.tools.cmake qt.tools.ifw.46 qt.tools.ninja qt.qt6.651.win64_msvc2019_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
            }
      - save_cache: # this is the new step to save cache
          key: windows-qt-cache
          paths:
            - C:\Qt
      - run:
          name: Install VulkanSDK
          command: |
            Invoke-WebRequest -Uri https://sdk.lunarg.com/sdk/download/1.3.261.1/windows/VulkanSDK-1.3.261.1-Installer.exe -OutFile VulkanSDK-1.3.261.1-Installer.exe
            .\VulkanSDK-1.3.261.1-Installer.exe --accept-licenses --default-answer --confirm-command install
      - run:
          name: Build
          command: |
            $Env:PATH = "${Env:PATH};C:\Program Files (x86)\Windows Kits\10\bin\x64"
            $Env:PATH = "${Env:PATH};C:\Program Files (x86)\Windows Kits\10\bin\10.0.22000.0\x64"
            $Env:PATH = "${Env:PATH};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64"
            $Env:PATH = "${Env:PATH};C:\VulkanSDK\1.3.261.1\bin"
            $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\ucrt\x64"
            $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\um\x64"
            $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\lib\x64"
            $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\lib\x64"
            $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt"
            $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\um"
            $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared"
            $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\winrt"
            $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\cppwinrt"
            $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\VS\include"
            $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include"
            $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\include"
            mkdir build
            cd build
            & "C:\Qt\Tools\CMake_64\bin\cmake.exe" `
              "-DCMAKE_GENERATOR:STRING=Ninja" `
              "-DCMAKE_BUILD_TYPE=Release" `
              "-DCMAKE_PREFIX_PATH:PATH=C:\Qt\6.5.1\msvc2019_64" `
              "-DCMAKE_MAKE_PROGRAM:FILEPATH=C:\Qt\Tools\Ninja\ninja.exe" `
              "-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON" `
              "-S ..\gpt4all-chat" `
              "-B ."
            & "C:\Qt\Tools\Ninja\ninja.exe"
  build-gpt4all-chat-macos:
    macos:
      xcode: 14.0.0
    steps:
      - checkout
      - run:
          name: Update Submodules
          command: |
            git submodule sync
            git submodule update --init --recursive
      - restore_cache: # this is the new step to restore cache
          keys:
            - macos-qt-cache_v2
      - run:
          name: Installing Qt
          command: |
            if [ ! -d ~/Qt ]; then
              curl -o qt-unified-macOS-x64-4.6.0-online.dmg https://gpt4all.io/ci/qt-unified-macOS-x64-4.6.0-online.dmg
              hdiutil attach qt-unified-macOS-x64-4.6.0-online.dmg
              /Volumes/qt-unified-macOS-x64-4.6.0-online/qt-unified-macOS-x64-4.6.0-online.app/Contents/MacOS/qt-unified-macOS-x64-4.6.0-online --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.46 qt.tools.ninja qt.qt6.651.clang_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
              hdiutil detach /Volumes/qt-unified-macOS-x64-4.6.0-online
            fi
      - save_cache: # this is the new step to save cache
          key: macos-qt-cache_v2
          paths:
            - ~/Qt
      - run:
          name: Build
          command: |
            mkdir build
            cd build
            ~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake \
              -DCMAKE_GENERATOR:STRING=Ninja \
              -DBUILD_UNIVERSAL=ON \
              -DCMAKE_BUILD_TYPE=Release \
              -DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake/Qt6 \
              -DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
              -S ../gpt4all-chat \
              -B .
            ~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target all
  build-ts-docs:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - node/install:
          install-yarn: true
          node-version: "18.16"
      - run: node --version
      - node/install-packages:
          pkg-manager: yarn
          app-dir: gpt4all-bindings/typescript
          override-ci-command: yarn install
      - run:
          name: build docs ts yo
          command: |
            cd gpt4all-bindings/typescript
            yarn docs:build
  build-py-docs:
    docker:
      - image: circleci/python:3.8
    steps:
      - checkout
      - run:
          name: Install dependencies
          command: |
            sudo apt-get update
            sudo apt-get -y install python3 python3-pip
            sudo pip3 install awscli --upgrade
            sudo pip3 install mkdocs mkdocs-material mkautodoc 'mkdocstrings[python]'
      - run:
          name: Make Documentation
          command: |
            cd gpt4all-bindings/python/
            mkdocs build
      - run:
          name: Deploy Documentation
          command: |
            cd gpt4all-bindings/python/
            aws s3 cp ./site s3://docs.gpt4all.io/ --recursive | cat
      - run:
          name: Invalidate docs.gpt4all.io cloudfront
          command: aws cloudfront create-invalidation --distribution-id E1STQOW63QL2OH --paths "/*"
  build-py-linux:
    machine:
      image: ubuntu-2204:2023.04.2
    steps:
      - checkout
      - run:
          name: Set Python Version
          command: pyenv global 3.11.2
      - run:
          name: Install dependencies
          command: |
            wget -qO- https://packages.lunarg.com/lunarg-signing-key-pub.asc | sudo tee /etc/apt/trusted.gpg.d/lunarg.asc
            sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list http://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list
            sudo apt-get update
            sudo apt-get install -y cmake build-essential vulkan-sdk
            pip install setuptools wheel cmake
      - run:
          name: Build C library
          command: |
            git submodule init
            git submodule update
            cd gpt4all-backend
            mkdir build
            cd build
            cmake ..
            cmake --build . --parallel
      - run:
          name: Build wheel
          command: |
            cd gpt4all-bindings/python/
            python setup.py bdist_wheel --plat-name=manylinux1_x86_64
      - store_artifacts:
          path: gpt4all-bindings/python/dist
      - persist_to_workspace:
          root: gpt4all-bindings/python/dist
          paths:
            - "*.whl"
  build-py-macos:
    macos:
      xcode: "14.2.0"
    resource_class: macos.m1.large.gen1
    steps:
      - checkout
      - run:
          name: Install dependencies
          command: |
            brew install cmake
            pip install setuptools wheel cmake
      - run:
          name: Build C library
          command: |
            git submodule init
            git submodule update
            cd gpt4all-backend
            mkdir build
            cd build
            cmake .. -DCMAKE_OSX_ARCHITECTURES="x86_64;arm64"
            cmake --build . --parallel
      - run:
          name: Build wheel
          command: |
            cd gpt4all-bindings/python
            python setup.py bdist_wheel --plat-name=macosx_10_15_universal2
      - store_artifacts:
          path: gpt4all-bindings/python/dist
      - persist_to_workspace:
          root: gpt4all-bindings/python/dist
          paths:
            - "*.whl"
  build-py-windows:
    executor:
      name: win/default
    steps:
      - checkout
      - run:
          name: Install MinGW64
          command: choco install -y mingw --force --no-progress
      - run:
          name: Install VulkanSDK
          command: |
            Invoke-WebRequest -Uri https://sdk.lunarg.com/sdk/download/1.3.261.1/windows/VulkanSDK-1.3.261.1-Installer.exe -OutFile VulkanSDK-1.3.261.1-Installer.exe
            .\VulkanSDK-1.3.261.1-Installer.exe --accept-licenses --default-answer --confirm-command install
      - run:
          name: Install dependencies
          command:
            choco install -y cmake --installargs 'ADD_CMAKE_TO_PATH=System'
      - run:
          name: Install Python dependencies
          command: pip install setuptools wheel cmake
      - run:
          name: Build C library
          command: |
            git submodule init
            git submodule update
            cd gpt4all-backend
            mkdir build
            cd build
            $env:Path += ";C:\ProgramData\mingw64\mingw64\bin"
            $env:Path += ";C:\VulkanSDK\1.3.261.1\bin"
            cmake -G "MinGW Makefiles" .. -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON -DKOMPUTE_OPT_USE_BUILT_IN_VULKAN_HEADER=OFF
            cmake --build . --parallel
      - run:
          name: Build wheel
          # TODO: As part of this task, we need to move mingw64 binaries into package.
          # This is terrible and needs a more robust solution eventually.
          command: |
            cd gpt4all-bindings/python
            cd gpt4all
            mkdir llmodel_DO_NOT_MODIFY
            mkdir llmodel_DO_NOT_MODIFY/build/
            cp 'C:\ProgramData\mingw64\mingw64\bin\*dll' 'llmodel_DO_NOT_MODIFY/build/'
            cd ..
            python setup.py bdist_wheel --plat-name=win_amd64
      - store_artifacts:
          path: gpt4all-bindings/python/dist
      - persist_to_workspace:
          root: gpt4all-bindings/python/dist
          paths:
            - "*.whl"
  store-and-upload-wheels:
    docker:
      - image: circleci/python:3.8
    steps:
      - setup_remote_docker
      - attach_workspace:
          at: /tmp/workspace
      - run:
          name: Install dependencies
          command: |
            sudo apt-get update
            sudo apt-get install -y cmake build-essential
            pip install setuptools wheel twine
      - run:
          name: Upload Python package
          command: |
            twine upload /tmp/workspace/*.whl --username __token__ --password $PYPI_CRED
      - store_artifacts:
          path: /tmp/workspace
  build-bindings-backend-linux:
    machine:
      image: ubuntu-2204:2023.04.2
    steps:
      - checkout
      - run:
          name: Update Submodules
          command: |
            git submodule sync
            git submodule update --init --recursive
      - run:
          name: Install dependencies
          command: |
            wget -qO- https://packages.lunarg.com/lunarg-signing-key-pub.asc | sudo tee /etc/apt/trusted.gpg.d/lunarg.asc
            sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list http://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list
            sudo apt-get update
            sudo apt-get install -y cmake build-essential vulkan-sdk
      - run:
          name: Build Libraries
          command: |
            cd gpt4all-backend
            mkdir -p runtimes/build
            cd runtimes/build
            cmake ../..
            cmake --build . --parallel --config Release
            mkdir ../linux-x64
            cp -L *.so ../linux-x64 # otherwise persist_to_workspace seems to mess symlinks
      - persist_to_workspace:
          root: gpt4all-backend
          paths:
            - runtimes/linux-x64/*.so
  build-bindings-backend-macos:
    macos:
      xcode: "14.0.0"
    steps:
      - checkout
      - run:
          name: Update Submodules
          command: |
            git submodule sync
            git submodule update --init --recursive
      - run:
          name: Install dependencies
          command: |
            brew install cmake
      - run:
          name: Build Libraries
          command: |
            cd gpt4all-backend
            mkdir -p runtimes/build
            cd runtimes/build
            cmake ../.. -DCMAKE_OSX_ARCHITECTURES="x86_64;arm64"
            cmake --build . --parallel --config Release
            mkdir ../osx-x64
            cp -L *.dylib ../osx-x64
            cp ../../llama.cpp-mainline/*.metal ../osx-x64
            ls ../osx-x64
      - persist_to_workspace:
          root: gpt4all-backend
          paths:
            - runtimes/osx-x64/*.dylib
            - runtimes/osx-x64/*.metal
  build-bindings-backend-windows:
    executor:
      name: win/default
      size: large
    shell: powershell.exe -ExecutionPolicy Bypass
    steps:
      - checkout
      - run:
          name: Update Submodules
          command: |
            git submodule sync
            git submodule update --init --recursive
      - run:
          name: Install MinGW64
          command: choco install -y mingw --force --no-progress
      - run:
          name: Install VulkanSDK
          command: |
            Invoke-WebRequest -Uri https://sdk.lunarg.com/sdk/download/1.3.261.1/windows/VulkanSDK-1.3.261.1-Installer.exe -OutFile VulkanSDK-1.3.261.1-Installer.exe
            .\VulkanSDK-1.3.261.1-Installer.exe --accept-licenses --default-answer --confirm-command install
      - run:
          name: Install dependencies
          command: |
            choco install -y cmake --installargs 'ADD_CMAKE_TO_PATH=System'
      - run:
          name: Build Libraries
          command: |
            $MinGWBin = "C:\ProgramData\mingw64\mingw64\bin"
            $Env:Path += ";$MinGWBin"
            $Env:Path += ";C:\Program Files\CMake\bin"
            $Env:Path += ";C:\VulkanSDK\1.3.261.1\bin"
            cd gpt4all-backend
            mkdir runtimes/win-x64
            cd runtimes/win-x64
            cmake -G "MinGW Makefiles" -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON ../..
            cmake --build . --parallel --config Release
            cp "$MinGWBin\libgcc*.dll" .
            cp "$MinGWBin\libstdc++*.dll" .
            cp "$MinGWBin\libwinpthread*.dll" .
            cp bin/*.dll .
      - persist_to_workspace:
          root: gpt4all-backend
          paths:
            - runtimes/win-x64/*.dll
  build-bindings-backend-windows-msvc:
    machine:
      image: 'windows-server-2022-gui:2023.03.1'
    resource_class: windows.large
    shell: powershell.exe -ExecutionPolicy Bypass
    steps:
      - checkout
      - run:
          name: Update Submodules
          command: |
            git submodule sync
            git submodule update --init --recursive
      - run:
          name: Install VulkanSDK
          command: |
            Invoke-WebRequest -Uri https://sdk.lunarg.com/sdk/download/1.3.261.1/windows/VulkanSDK-1.3.261.1-Installer.exe -OutFile VulkanSDK-1.3.261.1-Installer.exe
            .\VulkanSDK-1.3.261.1-Installer.exe --accept-licenses --default-answer --confirm-command install
      - run:
          name: Install dependencies
          command: |
            choco install -y cmake --installargs 'ADD_CMAKE_TO_PATH=System'
      - run:
          name: Build Libraries
          command: |
            $Env:Path += ";C:\Program Files\CMake\bin"
            $Env:Path += ";C:\VulkanSDK\1.3.261.1\bin"
            cd gpt4all-backend
            mkdir runtimes/win-x64_msvc
            cd runtimes/win-x64_msvc
            cmake -G "Visual Studio 17 2022" -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON -A X64 ../..
            cmake --build . --parallel --config Release
            cp bin/Release/*.dll .
      - persist_to_workspace:
          root: gpt4all-backend
          paths:
            - runtimes/win-x64_msvc/*.dll
build-csharp-linux:
docker:
- image: mcr.microsoft.com/dotnet/sdk:7.0-jammy # Ubuntu 22.04
steps:
- checkout
- attach_workspace:
at: /tmp/workspace
- run:
name: "Prepare Native Libs"
command: |
cd gpt4all-bindings/csharp
mkdir -p runtimes/linux-x64/native
cp /tmp/workspace/runtimes/linux-x64/*.so runtimes/linux-x64/native/
ls -R runtimes
- restore_cache:
keys:
- gpt4all-csharp-nuget-packages-nix
- run:
name: "Install project dependencies"
command: |
cd gpt4all-bindings/csharp
dotnet restore Gpt4All
- save_cache:
paths:
- ~/.nuget/packages
key: gpt4all-csharp-nuget-packages-nix
- run:
name: Build C# Project
command: |
cd gpt4all-bindings/csharp
dotnet build Gpt4All --configuration Release --nologo
- run:
name: "Run C# Tests"
command: |
cd gpt4all-bindings/csharp
dotnet test Gpt4All.Tests -v n -c Release --filter "SKIP_ON_CI!=True" --logger "trx"
- run:
name: Test results
command: |
cd gpt4all-bindings/csharp/Gpt4All.Tests
dotnet tool install -g trx2junit
export PATH="$PATH:$HOME/.dotnet/tools"
trx2junit TestResults/*.trx
- store_test_results:
path: gpt4all-bindings/csharp/Gpt4All.Tests/TestResults
build-csharp-windows:
executor:
name: win/default
size: large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- restore_cache:
keys:
- gpt4all-csharp-nuget-packages-win
- attach_workspace:
at: C:\Users\circleci\workspace
- run:
name: "Prepare Native Libs"
command: |
cd gpt4all-bindings/csharp
mkdir -p runtimes\win-x64\native
cp C:\Users\circleci\workspace\runtimes\win-x64\*.dll runtimes\win-x64\native\
ls -R runtimes
- run:
name: "Install project dependencies"
command: |
cd gpt4all-bindings/csharp
dotnet.exe restore Gpt4All
- save_cache:
paths:
- C:\Users\circleci\.nuget\packages
key: gpt4all-csharp-nuget-packages-win
- run:
name: Build C# Project
command: |
cd gpt4all-bindings/csharp
dotnet.exe build Gpt4All --configuration Release --nologo
- run:
name: "Run C# Tests"
command: |
cd gpt4all-bindings/csharp
dotnet.exe test Gpt4All.Tests -v n -c Release --filter "SKIP_ON_CI!=True" --logger "trx"
- run:
name: Test results
command: |
cd gpt4all-bindings/csharp/Gpt4All.Tests
dotnet tool install -g trx2junit
$Env:Path += ";$Env:USERPROFILE\.dotnet\tools"
trx2junit TestResults/*.trx
- store_test_results:
path: gpt4all-bindings/csharp/Gpt4All.Tests/TestResults
build-csharp-macos:
macos:
xcode: "14.0.0"
steps:
- checkout
- restore_cache:
keys:
- gpt4all-csharp-nuget-packages-nix
- run:
name: Install dependencies
command: |
brew install --cask dotnet-sdk
- attach_workspace:
at: /tmp/workspace
- run:
name: "Prepare Native Libs"
command: |
cd gpt4all-bindings/csharp
mkdir -p runtimes/osx/native
cp /tmp/workspace/runtimes/osx-x64/*.dylib runtimes/osx/native/
cp /tmp/workspace/runtimes/osx-x64/*.metal runtimes/osx/native/
ls -R runtimes
- run:
name: "Install project dependencies"
command: |
cd gpt4all-bindings/csharp
dotnet restore Gpt4All
- save_cache:
paths:
- ~/.nuget/packages
key: gpt4all-csharp-nuget-packages-nix
- run:
name: Build C# Project
command: |
cd gpt4all-bindings/csharp
dotnet build Gpt4All --configuration Release --nologo
- run:
name: "Run C# Tests"
command: |
cd gpt4all-bindings/csharp
dotnet test Gpt4All.Tests -v n -c Release --filter "SKIP_ON_CI!=True" --logger "trx"
- run:
name: Test results
command: |
cd gpt4all-bindings/csharp/Gpt4All.Tests
dotnet tool install -g trx2junit
export PATH="$PATH:$HOME/.dotnet/tools"
trx2junit TestResults/*.trx
- store_test_results:
path: gpt4all-bindings/csharp/Gpt4All.Tests/TestResults
store-and-upload-nupkgs:
docker:
- image: mcr.microsoft.com/dotnet/sdk:6.0-jammy # Ubuntu 22.04
steps:
- attach_workspace:
at: /tmp/workspace
- checkout
- restore_cache:
keys:
- gpt4all-csharp-nuget-packages-nix
- run:
name: NuGet Pack
command: |
cd gpt4all-bindings/csharp
mkdir -p runtimes/linux-x64/native
cp /tmp/workspace/runtimes/linux-x64/*.so runtimes/linux-x64/native/
mkdir -p runtimes/win-x64/native
cp /tmp/workspace/runtimes/win-x64/*.dll runtimes/win-x64/native/
mkdir -p runtimes/osx/native
cp /tmp/workspace/runtimes/osx-x64/*.dylib runtimes/osx/native/
cp /tmp/workspace/runtimes/osx-x64/*.metal runtimes/osx/native/
dotnet pack ./Gpt4All/Gpt4All.csproj -p:IncludeSymbols=true -p:SymbolPackageFormat=snupkg -c Release
dotnet nuget push ./Gpt4All/bin/Release/Gpt4All.*.nupkg -s $NUGET_URL -k $NUGET_TOKEN --skip-duplicate
- store_artifacts:
path: gpt4all-bindings/csharp/Gpt4All/bin/Release
build-nodejs-linux:
docker:
- image: cimg/base:stable
steps:
- checkout
- attach_workspace:
at: /tmp/gpt4all-backend
- node/install:
install-yarn: true
node-version: "18.16"
- run: node --version
- node/install-packages:
app-dir: gpt4all-bindings/typescript
pkg-manager: yarn
override-ci-command: yarn install
- run:
command: |
cd gpt4all-bindings/typescript
yarn prebuildify -t 18.16.0 --napi
- run:
command: |
mkdir -p gpt4all-backend/prebuilds/linux-x64
mkdir -p gpt4all-backend/runtimes/linux-x64
cp /tmp/gpt4all-backend/runtimes/linux-x64/*-*.so gpt4all-backend/runtimes/linux-x64
cp gpt4all-bindings/typescript/prebuilds/linux-x64/*.node gpt4all-backend/prebuilds/linux-x64
- persist_to_workspace:
root: gpt4all-backend
paths:
- prebuilds/linux-x64/*.node
- runtimes/linux-x64/*-*.so
build-nodejs-macos:
macos:
xcode: "14.0.0"
steps:
- checkout
- attach_workspace:
at: /tmp/gpt4all-backend
- node/install:
install-yarn: true
node-version: "18.16"
- run: node --version
- node/install-packages:
app-dir: gpt4all-bindings/typescript
pkg-manager: yarn
override-ci-command: yarn install
- run:
command: |
cd gpt4all-bindings/typescript
yarn prebuildify -t 18.16.0 --napi
- run:
name: "Persisting all necessary things to workspace"
command: |
mkdir -p gpt4all-backend/prebuilds/darwin-x64
mkdir -p gpt4all-backend/runtimes/darwin-x64
cp /tmp/gpt4all-backend/runtimes/osx-x64/*-*.* gpt4all-backend/runtimes/darwin-x64
cp gpt4all-bindings/typescript/prebuilds/darwin-x64/*.node gpt4all-backend/prebuilds/darwin-x64
- persist_to_workspace:
root: gpt4all-backend
paths:
- prebuilds/darwin-x64/*.node
- runtimes/darwin-x64/*-*.*
build-nodejs-windows:
executor:
name: win/default
size: large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- attach_workspace:
at: /tmp/gpt4all-backend
- run: choco install wget -y
- run:
command: wget https://nodejs.org/dist/v18.16.0/node-v18.16.0-x86.msi -P C:\Users\circleci\Downloads\
shell: cmd.exe
- run: MsiExec.exe /i C:\Users\circleci\Downloads\node-v18.16.0-x86.msi /qn
- run:
command: |
Start-Process powershell -verb runAs -Args "-start GeneralProfile"
nvm install 18.16.0
nvm use 18.16.0
- run: node --version
- run:
command: |
npm install -g yarn
cd gpt4all-bindings/typescript
yarn install
- run:
command: |
cd gpt4all-bindings/typescript
yarn prebuildify -t 18.16.0 --napi
- run:
command: |
mkdir -p gpt4all-backend/prebuilds/win32-x64
mkdir -p gpt4all-backend/runtimes/win32-x64
cp /tmp/gpt4all-backend/runtimes/win-x64_msvc/*-*.dll gpt4all-backend/runtimes/win32-x64
cp gpt4all-bindings/typescript/prebuilds/win32-x64/*.node gpt4all-backend/prebuilds/win32-x64
- persist_to_workspace:
root: gpt4all-backend
paths:
- prebuilds/win32-x64/*.node
- runtimes/win32-x64/*-*.dll
prepare-npm-pkg:
docker:
- image: cimg/base:stable
steps:
- attach_workspace:
at: /tmp/gpt4all-backend
- checkout
- node/install:
install-yarn: true
node-version: "18.16"
- run: node --version
- run:
command: |
cd gpt4all-bindings/typescript
            # Excluding llmodel: the Node.js bindings don't need llmodel.dll
mkdir -p runtimes/win32-x64/native
mkdir -p prebuilds/win32-x64/
cp /tmp/gpt4all-backend/runtimes/win-x64_msvc/*-*.dll runtimes/win32-x64/native/
cp /tmp/gpt4all-backend/prebuilds/win32-x64/*.node prebuilds/win32-x64/
mkdir -p runtimes/linux-x64/native
mkdir -p prebuilds/linux-x64/
cp /tmp/gpt4all-backend/runtimes/linux-x64/*-*.so runtimes/linux-x64/native/
cp /tmp/gpt4all-backend/prebuilds/linux-x64/*.node prebuilds/linux-x64/
mkdir -p runtimes/darwin-x64/native
mkdir -p prebuilds/darwin-x64/
cp /tmp/gpt4all-backend/runtimes/darwin-x64/*-*.* runtimes/darwin-x64/native/
cp /tmp/gpt4all-backend/prebuilds/darwin-x64/*.node prebuilds/darwin-x64/
            # Fall back to a source build for platforms not covered by the prebuilds above
mv -f binding.ci.gyp binding.gyp
mkdir gpt4all-backend
cd ../../gpt4all-backend
mv llmodel.h llmodel.cpp llmodel_c.cpp llmodel_c.h sysinfo.h dlhandle.h ../gpt4all-bindings/typescript/gpt4all-backend/
# Test install
- node/install-packages:
app-dir: gpt4all-bindings/typescript
pkg-manager: yarn
override-ci-command: yarn install
- run:
command: |
cd gpt4all-bindings/typescript
yarn run test
- run:
command: |
cd gpt4all-bindings/typescript
npm set //registry.npmjs.org/:_authToken=$NPM_TOKEN
npm publish
workflows:
version: 2
default:
when: << pipeline.parameters.run-default-workflow >>
jobs:
- default-job
build-chat-offline-installers:
when: << pipeline.parameters.run-chat-workflow >>
jobs:
- hold:
type: approval
- build-offline-chat-installer-macos:
requires:
- hold
- build-offline-chat-installer-windows:
requires:
- hold
- build-offline-chat-installer-linux:
requires:
- hold
build-and-test-gpt4all-chat:
when: << pipeline.parameters.run-chat-workflow >>
jobs:
- hold:
type: approval
- build-gpt4all-chat-linux:
requires:
- hold
- build-gpt4all-chat-windows:
requires:
- hold
- build-gpt4all-chat-macos:
requires:
- hold
deploy-docs:
when: << pipeline.parameters.run-python-workflow >>
jobs:
- build-ts-docs:
filters:
branches:
only:
- main
- build-py-docs:
filters:
branches:
only:
- main
build-py-deploy:
when: << pipeline.parameters.run-python-workflow >>
jobs:
- pypi-hold:
type: approval
- hold:
type: approval
- build-py-linux:
filters:
branches:
only:
requires:
- hold
- build-py-macos:
filters:
branches:
only:
requires:
- hold
- build-py-windows:
filters:
branches:
only:
requires:
- hold
- store-and-upload-wheels:
filters:
branches:
only:
requires:
- pypi-hold
- build-py-windows
- build-py-linux
- build-py-macos
build-bindings:
when:
or:
- << pipeline.parameters.run-python-workflow >>
- << pipeline.parameters.run-csharp-workflow >>
- << pipeline.parameters.run-ts-workflow >>
jobs:
- hold:
type: approval
- nuget-hold:
type: approval
- npm-hold:
type: approval
- build-bindings-backend-linux:
filters:
branches:
only:
requires:
- hold
- build-bindings-backend-macos:
filters:
branches:
only:
requires:
- hold
- build-bindings-backend-windows:
filters:
branches:
only:
requires:
- hold
- build-bindings-backend-windows-msvc:
filters:
branches:
only:
requires:
- hold
      # Node.js jobs
- prepare-npm-pkg:
filters:
branches:
only:
requires:
- npm-hold
- build-nodejs-linux
- build-nodejs-windows
- build-nodejs-macos
- build-nodejs-linux:
filters:
branches:
only:
requires:
- npm-hold
- build-bindings-backend-linux
- build-nodejs-windows:
filters:
branches:
only:
requires:
- npm-hold
- build-bindings-backend-windows-msvc
- build-nodejs-macos:
filters:
branches:
only:
requires:
- npm-hold
- build-bindings-backend-macos
      # C# jobs
- build-csharp-linux:
filters:
branches:
only:
requires:
- nuget-hold
- build-bindings-backend-linux
- build-csharp-windows:
filters:
branches:
only:
requires:
- nuget-hold
- build-bindings-backend-windows
- build-csharp-macos:
filters:
branches:
only:
requires:
- nuget-hold
- build-bindings-backend-macos
- store-and-upload-nupkgs:
filters:
branches:
only:
requires:
- nuget-hold
- build-csharp-windows
- build-csharp-linux
- build-csharp-macos