gpt4all/gpt4all-chat/cmake
Latest commit d2a99d9bc6 by Jared Van Bortel: support the llama.cpp CUDA backend (#2310)
* rebase onto llama.cpp commit ggerganov/llama.cpp@d46dbc76f
* support for CUDA backend (enabled by default)
* partial support for Occam's Vulkan backend (disabled by default)
* partial support for HIP/ROCm backend (disabled by default)
* sync llama.cpp.cmake with upstream llama.cpp CMakeLists.txt
* changes to GPT4All backend, bindings, and chat UI to handle choice of llama.cpp backend (Kompute or CUDA)
* ship CUDA runtime with installed version
* make device selection in the UI on macOS actually do something
* model whitelist: remove dbrx, mamba, persimmon, plamo; add internlm and starcoder2

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-05-15 15:27:50 -04:00
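The commit above describes which llama.cpp backends are built and which are on by default. The sketch below illustrates how such per-backend toggles are commonly wired up in CMake; the option names, target names, and the GGML_USE_CUDA define are assumptions for illustration and may not match gpt4all-backend's actual CMakeLists.txt.

```cmake
# Illustrative sketch only: option and target names below (LLMODEL_*,
# llamamodel-mainline-cuda) are assumptions, not confirmed gpt4all identifiers.
option(LLMODEL_KOMPUTE "Build the Kompute (Vulkan) backend"     ON)
option(LLMODEL_CUDA    "Build the llama.cpp CUDA backend"       ON)
option(LLMODEL_VULKAN  "Build Occam's Vulkan backend (partial)" OFF)
option(LLMODEL_ROCM    "Build the HIP/ROCm backend (partial)"   OFF)

if (LLMODEL_CUDA)
    enable_language(CUDA)
    find_package(CUDAToolkit REQUIRED)

    # Each enabled backend is built as its own shared implementation library,
    # so the chat UI can choose between Kompute and CUDA at runtime.
    add_library(llamamodel-mainline-cuda SHARED llamamodel.cpp)
    target_compile_definitions(llamamodel-mainline-cuda PRIVATE GGML_USE_CUDA)
    target_link_libraries(llamamodel-mainline-cuda PRIVATE CUDA::cudart)
endif()
```

Under this kind of layout, "ship CUDA runtime with installed version" would typically amount to installing the cudart (and related CUDA) shared libraries alongside the CUDA backend library so end users do not need a local CUDA toolkit.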
| File | Last commit | Date |
| --- | --- | --- |
| config.h.in | We no longer have an avx_only repository and better error handling for minimum hardware requirements. (#833) | 2023-06-04 15:28:58 -04:00 |
| deploy-qt-linux.cmake.in | support the llama.cpp CUDA backend (#2310) | 2024-05-15 15:27:50 -04:00 |
| deploy-qt-mac.cmake.in | support the llama.cpp CUDA backend (#2310) | 2024-05-15 15:27:50 -04:00 |
| deploy-qt-windows.cmake.in | support the llama.cpp CUDA backend (#2310) | 2024-05-15 15:27:50 -04:00 |
| installerscript.qs | chat: fix window icon on Windows (#2321) | 2024-05-09 13:42:46 -04:00 |
| sign_dmg.py | make scripts executable (#1555) | 2023-10-24 09:28:21 -04:00 |
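The `.cmake.in` and `.h.in` files listed here are CMake templates: at configure time, `configure_file()` expands them into concrete scripts and headers by substituting placeholder variables. A minimal sketch of that pattern, with paths and variables assumed for illustration rather than taken from gpt4all-chat's actual CMakeLists.txt:

```cmake
# Assumed example of the standard configure_file() pattern; the real
# gpt4all-chat build may use different paths, variables, and options.
configure_file(
    "${CMAKE_CURRENT_SOURCE_DIR}/cmake/config.h.in"
    "${CMAKE_CURRENT_BINARY_DIR}/config.h"
)

# @ONLY limits substitution to @VAR@ placeholders, so ${...} references
# inside the generated deployment script are left for it to resolve itself.
configure_file(
    "${CMAKE_CURRENT_SOURCE_DIR}/cmake/deploy-qt-linux.cmake.in"
    "${CMAKE_CURRENT_BINARY_DIR}/cmake/deploy-qt-linux.cmake"
    @ONLY
)

# The generated script can then run as part of the install step, e.g. to
# bundle Qt libraries (and, per the commit above, the CUDA runtime).
install(SCRIPT "${CMAKE_CURRENT_BINARY_DIR}/cmake/deploy-qt-linux.cmake")
```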