Commit Graph

609 Commits

Author SHA1 Message Date
Jared Van Bortel
10e3f7bbf5
Fix VRAM leak when model loading fails (#1901)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-02-01 15:45:45 -05:00
Adam Treat
e1eac00ee0 Fix the download and settings dialog to take more real estate if available on large monitors.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-02-01 15:43:34 -05:00
Adam Treat
111e152a5d Fix the sizing for model download.
Signed-off-by: Adam Treat <adam@nomic.ai>
2024-02-01 15:39:28 -05:00
Adam Treat
ffed2ff823 Fix for progress bar color on legacy theme.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-02-01 08:29:44 -05:00
Adam Treat
a5275ea9e7 Bump the version and release notes for v2.6.2.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-01-31 23:25:58 -05:00
Adam Treat
cdf0fedae2 Make sure to use the search_query tag for nomic embed.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-01-31 22:44:16 -05:00
Adam Treat
d14b95f4bd Add Nomic Embed model for atlas with localdocs. 2024-01-31 22:22:08 -05:00
Jared Van Bortel
0a40e71652
Maxwell/Pascal GPU support and crash fix (#1895)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-01-31 16:32:32 -05:00
Jared Van Bortel
061d1969f8
expose n_gpu_layers parameter of llama.cpp (#1890)
Also dynamically limit the GPU layers and context length fields to the maximum supported by the model.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-01-31 14:17:44 -05:00
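The commit above exposes llama.cpp's n_gpu_layers and dynamically limits the GPU-layers and context-length fields to what the loaded model supports. A minimal sketch of that clamping idea, with assumed names (ModelInfo, maxGpuLayers, maxContextLength) rather than the project's actual API:

```cpp
// Hypothetical sketch: clamp user-facing fields to the maximum the model reports.
// ModelInfo, maxGpuLayers, and maxContextLength are assumed names for illustration.
#include <algorithm>
#include <cstdio>

struct ModelInfo {
    int maxGpuLayers;      // as reported by the backend for this model
    int maxContextLength;  // e.g. the model's trained context size
};

struct GenerationSettings {
    int nGpuLayers = 100;
    int nCtx = 2048;

    // Re-apply the limits whenever a different model is selected.
    void clampToModel(const ModelInfo &info) {
        nGpuLayers = std::clamp(nGpuLayers, 0, info.maxGpuLayers);
        nCtx       = std::clamp(nCtx, 8, info.maxContextLength);
    }
};

int main() {
    ModelInfo model{33, 4096};       // hypothetical values for a small model
    GenerationSettings s{100, 8192}; // user asked for more than the model supports
    s.clampToModel(model);
    std::printf("layers=%d ctx=%d\n", s.nGpuLayers, s.nCtx); // layers=33 ctx=4096
}
```

In the real application the maximums would come from the backend after the model's metadata has been read.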
Jared Van Bortel
29d2c936d1
chat: don't show "retrieving localdocs" for zero collections (#1874)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-01-29 13:57:42 -05:00
Adam Treat
cfa22ab1c4 Change to a color that exists.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-01-29 13:06:47 -05:00
Adam Treat
3556f63a29 Make the setting labels font a bit bigger and fix hover.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-01-29 12:02:51 -06:00
Adam Treat
34de19ebf6 Add a legacy dark mode.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-01-29 12:02:51 -06:00
Adam Treat
c1fce502f7 Fix checkbox background in dark mode.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-01-29 12:02:51 -06:00
Adam Treat
363f6659e4 Fix the settings font size to be a tad bigger.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-01-29 12:02:51 -06:00
Adam Treat
6abeefb303 Hover for links and increase font size a bit.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-01-29 12:02:51 -06:00
Adam Treat
697a5f5d2a New lightmode and darkmode themes with UI revamp.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-01-29 12:02:51 -06:00
Karthik Nair
0a45dd384e
add fedora command for QT and related packages (#1871)
Signed-off-by: Karthik Nair <realkarthiknair@gmail.com>
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
2024-01-24 18:00:49 -05:00
Adam Treat
27912f6e1a Fix bug with install of online models. 2024-01-22 14:16:09 -05:00
Jared Van Bortel
c7ea283f1f
chatllm: fix deserialization version mismatch (#1859)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-01-22 10:01:31 -05:00
Jared Van Bortel
b98e5f396a
docs: add missing dependencies to Linux build instructions (#1728)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-01-17 11:33:23 -05:00
Adam Treat
e51a504550 Add the new 2.6.1 release notes and bump the version. 2024-01-12 11:10:16 -05:00
Jared Van Bortel
b803d51586 restore network.h #include
The online installers need this.
2024-01-12 09:27:48 -05:00
Jared Van Bortel
7e9786fccf chat: set search path early
This fixes the issues with installed versions of v2.6.0.
2024-01-11 12:04:18 -05:00
Adam Treat
f7aeeca884 Revert the release. 2024-01-10 10:41:33 -05:00
Adam Treat
16a84972f6 Bump to new version and right the release notes. 2024-01-10 10:21:45 -05:00
Jared Van Bortel
4dbe2634aa models2.json: update models list for the next release 2024-01-10 09:18:31 -06:00
Adam Treat
233f0c4201 Bump the version for our next release. 2024-01-05 09:46:03 -05:00
Gerhard Stein
3e99b90c0b Some cleanups 2024-01-03 08:41:40 -06:00
Cal Alaera
528eb1e7ad
Update server.cpp to return valid created timestamps (#1763)
Signed-off-by: Cal Alaera <59891537+CalAlaera@users.noreply.github.com>
2023-12-18 14:06:25 -05:00
Jared Van Bortel
d1c56b8b28
Implement configurable context length (#1749) 2023-12-16 17:58:15 -05:00
Jared Van Bortel
3acbef14b7
fix AVX support by removing direct linking to AVX2 libs (#1750) 2023-12-13 12:11:09 -05:00
Jared Van Bortel
0600f551b3
chatllm: do not attempt to serialize incompatible state (#1742) 2023-12-12 11:45:03 -05:00
Adam Treat
fb3b1ceba2 Do not attempt to do a blocking retrieval if we don't have any collections. 2023-12-04 12:58:40 -05:00
Moritz Tim W
012f399639
fix typo (#1697) 2023-11-30 12:37:52 -05:00
Adam Treat
a328f9ed3f Add a button to the collections dialog. Fix close button. 2023-11-22 09:10:44 -05:00
Adam Treat
e4ff972522 Bump and release v2.5.4 2023-11-21 16:56:52 -05:00
Adam Treat
4862e8b650 Networking retry on download error for models. 2023-11-21 16:30:18 -05:00
Jared Van Bortel
078c3bd85c
models2.json: add Orca 2 models (#1672) 2023-11-21 16:10:49 -05:00
Adam Treat
9e27a118ed Fix system prompt. 2023-11-21 10:42:12 -05:00
Adam Treat
34555c4934 Bump version and release notes for v2.5.3 2023-11-20 10:26:35 -05:00
Adam Treat
9a3dd8815d Fix GUI hang with localdocs by removing file system watcher in modellist. 2023-11-17 13:27:34 -05:00
Adam Treat
c1809a23ba Fix text color on mac. 2023-11-17 11:59:31 -05:00
Adam Treat
59ed2a0bea Use a global constant and remove a debug line. 2023-11-17 11:59:31 -05:00
Adam Treat
eecf351c64 Reduce copied code. 2023-11-17 11:59:31 -05:00
AT
abd4703c79 Update gpt4all-chat/embllm.cpp
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-11-17 11:59:31 -05:00
AT
4b413a60e4 Update gpt4all-chat/embeddings.cpp
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-11-17 11:59:31 -05:00
AT
17b346dfe7 Update gpt4all-chat/embeddings.cpp
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-11-17 11:59:31 -05:00
AT
71e37816cc Update gpt4all-chat/qml/ModelDownloaderDialog.qml
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-11-17 11:59:31 -05:00
Adam Treat
371e2a5cbc LocalDocs version 2 with text embeddings. 2023-11-17 11:59:31 -05:00
Adam Treat
bc88271520 Bump version to v2.5.3 and release notes. 2023-10-30 11:15:12 -04:00
cebtenzzre
5508e43466 build_and_run: clarify which additional Qt libs are needed
Signed-off-by: cebtenzzre <cebtenzzre@gmail.com>
2023-10-30 10:37:32 -04:00
cebtenzzre
79a5522931 fix references to old backend implementations 2023-10-30 10:37:05 -04:00
Adam Treat
f529d55380 Move this logic to QML. 2023-10-30 09:57:21 -04:00
Adam Treat
5c0d077f74 Remove leading whitespace in responses. 2023-10-28 16:53:42 -04:00
Adam Treat
131cfcdeae Don't regenerate the name for deserialized chats. 2023-10-28 16:41:23 -04:00
Adam Treat
dc2e7d6e9b Don't start recalculating context immediately upon switching to a new chat;
instead, wait until the first prompt. This allows users to switch between
chats quickly and to delete chats more easily.

Fixes issue #1545
2023-10-28 16:41:23 -04:00
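A small sketch of the deferred-recalculation idea described in the commit above: switching chats only marks the context stale, and the expensive replay happens on the first prompt. Class and member names are assumptions, not the actual Chat/ChatLLM interfaces.

```cpp
// Illustrative sketch of deferring context recalculation until the first prompt.
// ChatContext and its members are assumed names, not the actual Chat/ChatLLM API.
#include <functional>

class ChatContext {
public:
    // Called when the user switches to this chat: just mark it stale.
    void markStale() { m_needsRecalc = true; }

    // Called on the first prompt: do the expensive work only if it is needed.
    void ensureReady(const std::function<void()> &recalculate) {
        if (m_needsRecalc) {
            recalculate();        // expensive: replay the chat history into the model
            m_needsRecalc = false;
        }
    }

private:
    bool m_needsRecalc = true;    // a freshly restored chat starts out stale
};
```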
Adam Treat
89a59e7f99 Bump version and add release notes for 2.5.1 2023-10-24 13:13:04 -04:00
cebtenzzre
f5dd74bcf0
models2.json: add tokenizer merges to mpt-7b-chat model (#1563) 2023-10-24 12:43:49 -04:00
cebtenzzre
e90263c23f
make scripts executable (#1555) 2023-10-24 09:28:21 -04:00
cebtenzzre
c25dc51935 chat: fix syntax error in main.qml 2023-10-21 21:22:37 -07:00
Victor Tsaran
721d854095
chat: improve accessibility fields (#1532)
Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
2023-10-21 10:38:46 -04:00
Adam Treat
9e99cf937a Add release notes for 2.5.0 and bump the version. 2023-10-19 16:25:55 -04:00
cebtenzzre
245c5ce5ea
update default model URLs (#1538) 2023-10-19 15:25:37 -04:00
cebtenzzre
4338e72a51
MPT: use upstream llama.cpp implementation (#1515) 2023-10-19 15:25:17 -04:00
cebtenzzre
fd3014016b
docs: clarify Vulkan dep in build instructions for bindings (#1525) 2023-10-18 12:09:52 -04:00
cebtenzzre
ac33bafb91
docs: improve build_and_run.md (#1524) 2023-10-18 11:37:28 -04:00
Aaron Miller
10f9b49313 update mini-orca 3b to gguf2, license
Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
2023-10-12 14:57:07 -04:00
niansa/tuxifan
a35f1ab784
Updated chat wishlist (#1351) 2023-10-12 14:01:44 -04:00
Adam Treat
908aec27fe Always save chats to disk, but save them as text by default. This also changes
the UI behavior to always open a 'New Chat' and set it as current instead
of setting a restored chat as current. This improves usability by not requiring
the user to wait if they want to immediately start chatting.
2023-10-12 07:52:11 -04:00
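A hedged sketch of the save-as-text-by-default behavior from the commit above: the chat text is always written, while the large model-state blob is written only when the user opts in. Types, field names, and the wire layout are illustrative, not gpt4all's serialization format.

```cpp
// Sketch: always write the chat text; write the (large) model-state blob only
// when the user opts in. Types and the wire layout are illustrative.
#include <QDataStream>
#include <QByteArray>
#include <QString>
#include <QVector>

struct Message { QString prompt; QString response; };

void serializeChat(QDataStream &out, const QVector<Message> &history,
                   const QByteArray &modelState, bool saveFullState)
{
    out << saveFullState;
    out << qint32(history.size());
    for (const auto &m : history)
        out << m.prompt << m.response;
    // Text-only chats are rebuilt by re-reading the text on first use; the
    // model-state blob is only persisted when the user explicitly asks for it.
    if (saveFullState)
        out << modelState;
}
```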
cebtenzzre
04499d1c7d
chatllm: do not write uninitialized data to stream (#1486) 2023-10-11 11:31:34 -04:00
Adam Treat
f0742c22f4 Restore state from text if necessary. 2023-10-11 09:16:02 -04:00
Adam Treat
35f9cdb70a Do not delete saved chats if we fail to serialize properly. 2023-10-11 09:16:02 -04:00
cebtenzzre
9fb135e020
cmake: install the GPT-J plugin (#1487) 2023-10-10 15:50:03 -04:00
Aaron Miller
3c25d81759 make codespell happy 2023-10-10 12:00:06 -04:00
Jan Philipp Harries
4f0cee9330 added EM German Mistral Model 2023-10-10 11:44:43 -04:00
Adam Treat
56c0d2898d Update the language here to avoid misunderstanding. 2023-10-06 14:38:42 -04:00
Adam Treat
b2cd3bdb3f Fix crasher with an empty string for prompt template. 2023-10-06 12:44:53 -04:00
Cebtenzzre
5fe685427a chat: clearer CPU fallback messages 2023-10-06 11:35:14 -04:00
Aaron Miller
9325075f80 fix stray comma in models2.json
Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
2023-10-05 18:32:23 -04:00
Adam Treat
f028f67c68 Add starcoder, rift and sbert to our models2.json. 2023-10-05 18:16:19 -04:00
Adam Treat
4528f73479 Reorder and refresh our models2.json. 2023-10-05 18:16:19 -04:00
Cebtenzzre
1534df3e9f backend: do not use Vulkan with non-LLaMA models 2023-10-05 18:16:19 -04:00
Cebtenzzre
672cb850f9 differentiate between init failure and unsupported models 2023-10-05 18:16:19 -04:00
Cebtenzzre
a5b93cf095 more accurate fallback descriptions 2023-10-05 18:16:19 -04:00
Cebtenzzre
75deee9adb chat: make sure to clear fallback reason on success 2023-10-05 18:16:19 -04:00
Cebtenzzre
2eb83b9f2a chat: report reason for fallback to CPU 2023-10-05 18:16:19 -04:00
Adam Treat
ea66669cef Switch to new models2.json for new gguf release and bump our version to
2.5.0.
2023-10-05 18:16:19 -04:00
Adam Treat
12f943e966 Fix regenerate button to be deterministic and bump the llama version to the latest we have for gguf. 2023-10-05 18:16:19 -04:00
Cebtenzzre
a49a1dcdf4 chatllm: grammar fix 2023-10-05 18:16:19 -04:00
Cebtenzzre
31b20f093a modellist: fix the system prompt 2023-10-05 18:16:19 -04:00
Cebtenzzre
8f3abb37ca fix references to removed model types 2023-10-05 18:16:19 -04:00
Adam Treat
d90d003a1d Latest rebase on llama.cpp with gguf support. 2023-10-05 18:16:19 -04:00
Akarshan Biswas
5f3d739205 appdata: update software description 2023-10-05 10:12:43 -04:00
Akarshan Biswas
b4cf12e1bd Update to 2.4.19 2023-10-05 10:12:43 -04:00
Akarshan Biswas
21a5709b07 Remove unnecessary stuff from manifest 2023-10-05 10:12:43 -04:00
Akarshan Biswas
4426640f44 Add flatpak manifest 2023-10-05 10:12:43 -04:00
Aaron Miller
6711bddc4c launch browser instead of maintenancetool from offline builds 2023-09-27 11:24:21 -07:00
Aaron Miller
7f979c8258 Build offline installers in CircleCI 2023-09-27 11:24:21 -07:00
Adam Treat
dc80d1e578 Fix up the offline installer. 2023-09-18 16:21:50 -04:00
Adam Treat
f47e698193 Release notes for v2.4.19 and bump the version. 2023-09-16 12:35:08 -04:00
Adam Treat
ecf014f03b Release notes for v2.4.18 and bump the version. 2023-09-16 10:21:50 -04:00
Adam Treat
e6e724d2dc Actually bump the version. 2023-09-16 10:07:20 -04:00
Adam Treat
06a833e652 Send actual and requested device info for those who have opted in. 2023-09-16 09:42:22 -04:00
Adam Treat
045f6e6cdc Link against ggml in bin so we can get the available devices without loading a model. 2023-09-15 14:45:25 -04:00
Adam Treat
655372dbfa Release notes for v2.4.17 and bump the version. 2023-09-14 17:11:04 -04:00
Adam Treat
aa33419c6e Fallback to CPU more robustly. 2023-09-14 16:53:11 -04:00
Adam Treat
79843c269e Release notes for v2.4.16 and bump the version. 2023-09-14 11:24:25 -04:00
Adam Treat
3076e0bf26 Only show GPU when we're actually using it. 2023-09-14 09:59:19 -04:00
Adam Treat
1fa67a585c Report the actual device we're using. 2023-09-14 08:25:37 -04:00
Adam Treat
21a3244645 Fix a bug where we're not properly falling back to CPU. 2023-09-13 19:30:27 -04:00
Adam Treat
0458c9b4e6 Add version 2.4.15 and bump the version number. 2023-09-13 17:55:50 -04:00
Aaron Miller
6f038c136b init at most one vulkan device, submodule update
fixes issues w/ multiple of the same gpu
2023-09-13 12:49:53 -07:00
Adam Treat
86e862df7e Fix up the name and formatting. 2023-09-13 15:48:55 -04:00
Adam Treat
358ff2a477 Show the device we're currently using. 2023-09-13 15:24:33 -04:00
Adam Treat
891ddafc33 When the device is Auto (the default), only consider discrete GPUs; otherwise fall back to CPU. 2023-09-13 11:59:36 -04:00
Adam Treat
8f99dca70f Bring the vulkan backend to the GUI. 2023-09-13 11:26:10 -04:00
Adam Treat
987546c63b Nomic vulkan backend licensed under the Software for Open Models License (SOM), version 1.0. 2023-08-31 15:29:54 -04:00
Adam Treat
d55cbbee32 Update to newer llama.cpp and disable older forks. 2023-08-31 15:29:54 -04:00
Adam Treat
a63093554f Remove older models that rely on quantization formats that will soon no longer be supported. 2023-08-15 13:19:41 -04:00
Adam Treat
2c0ee50dce Add starcoder 7b. 2023-08-15 09:27:55 -04:00
Victor Tsaran
ca8baa294b
Updated README.md with a wishlist idea (#1315)
Signed-off-by: Victor Tsaran <vtsaran@yahoo.com>
2023-08-10 11:27:09 -04:00
Lakshay Kansal
0f2bb506a8
font size changer and updates (#1322) 2023-08-07 13:54:13 -04:00
Akarshan Biswas
c449b71b56
Add LLaMA2 7B model to model.json. (#1296)
* Add LLaMA2 7B model to model.json.

---------

Signed-off-by: Akarshan Biswas <akarshan.biswas@gmail.com>
2023-08-02 16:58:14 +02:00
Lakshay Kansal
cbdcde8b75
scrollbar fixed for main chat and chat drawer (#1301) 2023-07-31 12:18:38 -04:00
Lakshay Kansal
3d2db76070
fixed issue of text color changing for code blocks in light mode (#1299) 2023-07-31 12:18:19 -04:00
Aaron Miller
b9e2553995
remove trailing comma from models json (#1284) 2023-07-27 09:14:33 -07:00
Adam Treat
09a143228c New release notes and bump version. 2023-07-27 11:48:16 -04:00
Lakshay Kansal
fc1af4a234 light mode vs dark mode 2023-07-27 09:31:55 -04:00
Adam Treat
6d03b3e500 Add starcoder support. 2023-07-27 09:15:16 -04:00
Adam Treat
397f3ba2d7 Add a little size to the monospace font. 2023-07-27 09:15:16 -04:00
AMOGUS
4974ae917c Update default TopP to 0.4
TopP 0.1 was found to be somewhat too aggressive, so a more moderate default of 0.4 would be better suited for general use.

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>
2023-07-19 10:36:23 -04:00
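For context on what the TopP setting controls, here is a self-contained sketch of nucleus (top-p) sampling: only the smallest set of tokens whose cumulative probability reaches p is kept before sampling. This is a generic illustration over a toy distribution, not gpt4all's sampler.

```cpp
// Generic sketch of nucleus (top-p) sampling, to illustrate what TopP controls.
// Not gpt4all's sampler; a standalone example over a toy distribution.
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

int sampleTopP(const std::vector<float> &probs, float topP, std::mt19937 &rng)
{
    if (probs.empty())
        return -1;

    // Order token indices by descending probability.
    std::vector<int> idx(probs.size());
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(),
              [&](int a, int b) { return probs[a] > probs[b]; });

    // Keep the smallest prefix whose cumulative probability reaches topP.
    float cum = 0.0f;
    std::size_t keep = 0;
    while (keep < idx.size() && cum < topP)
        cum += probs[idx[keep++]];
    if (keep == 0)
        keep = 1; // always keep at least the most likely token

    // Sample among the kept tokens, weighted by their probabilities.
    std::vector<float> weights;
    for (std::size_t i = 0; i < keep; ++i)
        weights.push_back(probs[idx[i]]);
    std::discrete_distribution<int> dist(weights.begin(), weights.end());
    return idx[dist(rng)];
}
```

A smaller TopP keeps only the few most likely tokens (more conservative output); raising the default from 0.1 to 0.4 widens the candidate set for more varied general-purpose responses.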
Lakshay Kansal
6c8669cad3 highlighting rules for html and php and latex 2023-07-14 11:36:01 -04:00
Adam Treat
0efdbfcffe Bert 2023-07-13 14:21:46 -04:00
Adam Treat
315a1f2aa2 Move it back as internal class. 2023-07-13 14:21:46 -04:00
Adam Treat
ae8eb297ac Add sbert backend. 2023-07-13 14:21:46 -04:00
Adam Treat
1f749d7633 Clean up backend code a bit and hide impl. details. 2023-07-13 14:21:46 -04:00
Adam Treat
a0dae86a95 Add bert to models.json 2023-07-13 13:37:12 -04:00
AT
18ca8901f0
Update README.md
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-07-12 16:30:56 -04:00
Adam Treat
e8b19b8e82 Bump version to 2.4.14 and provide release notes. 2023-07-12 14:58:45 -04:00
Adam Treat
8eb0844277 Check if the trimmed version is empty. 2023-07-12 14:31:43 -04:00
Adam Treat
be395c12cc Make all system prompts empty by default if the model does not include one in its training data. 2023-07-12 14:31:43 -04:00
Aaron Miller
6a8fa27c8d Correctly find models in subdirs of model dir
QDirIterator doesn't seem particularly subdir-aware; its path() returns the
directory being iterated. This was the simplest way I found to get this right.
2023-07-12 14:18:40 -04:00
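A small sketch of recursive model discovery with QDirIterator, consistent with the commit above: fileInfo()/filePath() give the full path of the current entry, whereas path() only returns the directory being iterated. The name filters below are illustrative.

```cpp
// Sketch: enumerate model files in the model directory and all subdirectories.
// QDirIterator::path() returns the directory being iterated, so fileInfo() is
// used to get each entry's full path. The name filters are illustrative.
#include <QDir>
#include <QDirIterator>
#include <QStringList>

static QStringList findModelFiles(const QString &modelDir)
{
    QStringList files;
    QDirIterator it(modelDir, QStringList{"*.gguf", "*.bin"}, QDir::Files,
                    QDirIterator::Subdirectories);
    while (it.hasNext()) {
        it.next();
        files << it.fileInfo().absoluteFilePath();
    }
    return files;
}
```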
Adam Treat
8893db5896 Add wizard model and rename orca to be more specific. 2023-07-12 14:12:46 -04:00
Adam Treat
60627bd41f Prefer 7b models in order of default model load. 2023-07-12 12:50:18 -04:00
Aaron Miller
5df4f1bf8c codespell 2023-07-12 12:49:06 -04:00
Aaron Miller
10ca2c4475 center the spinner 2023-07-12 12:49:06 -04:00
Adam Treat
e9897518d1 Show busy if models.json download taking longer than expected. 2023-07-12 12:49:06 -04:00
Aaron Miller
ad0e7fd01f chatgpt: ensure no extra newline in header 2023-07-12 10:53:25 -04:00
Aaron Miller
f0faa23ad5
cmakelists: always export build commands (#1179)
Friendly for editors with clangd integration that don't also
manage the build themselves.
2023-07-12 10:49:24 -04:00
Adam Treat
0d726b22b8 When we explicitly cancel an operation we shouldn't throw an error. 2023-07-12 10:34:10 -04:00
Adam Treat
13b2d47be5 Provide an error dialog if for any reason we can't access the settings file. 2023-07-12 08:50:21 -04:00
Adam Treat
e9d42fba35 Don't show first start more than once. 2023-07-11 18:54:53 -04:00
Adam Treat
2679dc1521 Give note about gpt-4 and openai key access. 2023-07-11 15:35:10 -04:00
Adam Treat
806905f747 Explicitly set the color in MyTextField. 2023-07-11 15:27:26 -04:00
Adam Treat
9dccc96e70 Immediately signal when the model is in a new loading state. 2023-07-11 15:10:59 -04:00
Adam Treat
833a56fadd Fix the tap handler on these buttons. 2023-07-11 14:58:54 -04:00
Adam Treat
18dbfddcb3 Fix default thread setting. 2023-07-11 13:07:41 -04:00
Adam Treat
34a3b9c857 Don't block on exit when not connected. 2023-07-11 12:37:21 -04:00
Adam Treat
88bbe30952 Provide a guardrail for OOM errors. 2023-07-11 12:09:33 -04:00
Adam Treat
9ef53163dd Explicitly send the opt out because we were artificially lowering them with settings changes. 2023-07-11 10:53:19 -04:00
Adam Treat
4f9e489093 Don't use a local event loop which can lead to recursion and crashes. 2023-07-11 10:08:03 -04:00
Adam Treat
8467e69f24 Check that we're not null. This is necessary because the loop can make us recursive. Need to fix that. 2023-07-10 17:30:08 -04:00
Adam Treat
99cd555743 Provide some guardrails for thread count. 2023-07-10 17:29:51 -04:00
Lakshay Kansal
a190041c6e
json and c# highlighting rules (#1163) 2023-07-10 16:23:32 -04:00
Adam Treat
3e3b05a2a4 Don't process the system prompt when restoring state. 2023-07-10 16:20:19 -04:00
Adam Treat
98dd2ab4bc Provide backup options if models.json does not download synchronously. 2023-07-10 16:14:57 -04:00
Adam Treat
c8d761a004 Add a nicer message. 2023-07-09 15:51:59 -04:00
Adam Treat
e120eb5008 Allow closing the download dialog and display a message to the user if no models are installed. 2023-07-09 15:08:14 -04:00
Adam Treat
fb172a2524 Don't prevent closing the model download dialog. 2023-07-09 14:58:55 -04:00
Adam Treat
15d04a7916 Fix new version dialog ui. 2023-07-09 14:56:54 -04:00
Adam Treat
12083fcdeb When deleting chats we sometimes have to update our modelinfo. 2023-07-09 14:52:08 -04:00
Adam Treat
59f3c093cb Stop generating anything on shutdown. 2023-07-09 14:42:11 -04:00
Adam Treat
e2458454d3 Bump to v2.4.12 and new release notes. 2023-07-09 13:33:07 -04:00
Adam Treat
d9f0245c1b Fix problems with browse of folder in settings dialog. 2023-07-09 13:05:06 -04:00
Adam Treat
58d6f40f50 Fix broken installs. 2023-07-09 11:50:44 -04:00
Adam Treat
85626b3dab Fix model path. 2023-07-09 11:33:58 -04:00
Adam Treat
ee73f1ab1d Shrink the templates. 2023-07-06 17:10:57 -04:00
Akarshan Biswas
c987e56db7 Update CMakeLists.txt - change WaylandClient to WaylandCompositor
https://doc.qt.io/qt-6/qwaylandcompositor.html

Signed-off-by: Akarshan Biswas <akarshan.biswas@gmail.com>
2023-07-06 12:50:05 -04:00
Akarshan Biswas
16bd4a14d3 Add Qt6:WaylandClient only to Linux Build
Signed-off-by: Akarshan Biswas <akarshan.biswas@gmail.com>
2023-07-06 12:50:05 -04:00
Adam Treat
18316cde39 Bump to 2.4.12 and release notes. 2023-07-06 12:25:25 -04:00
Adam Treat
db528ef1b0 Add a close button for dialogs. 2023-07-06 10:53:56 -04:00
Adam Treat
27981c0d21 Fix broken download/remove/install. 2023-07-05 20:12:37 -04:00
Adam Treat
eab92a9d73 Fix typo and add new show references setting to localdocs. 2023-07-05 19:41:23 -04:00
Adam Treat
0638b45b47 Per model prompts / templates. 2023-07-05 16:30:41 -04:00
Adam Treat
1491c9fe49 Fix build on windows. 2023-07-05 15:51:42 -04:00
Adam Treat
6d9cdf228c Huge change that completely revamps the settings dialog and implements
per-model settings as well as the ability to clone a model into a "character."
This also implements system prompts and includes quite a few bugfixes; for
instance, this fixes chatgpt.
2023-07-05 15:51:42 -04:00
Adam Treat
2a6c673c25 Begin redesign of settings dialog. 2023-07-05 15:51:42 -04:00
Adam Treat
dedb0025be Refactor the settings dialog so that it uses a set of components/abstractions
for all of the tabs and stacks
2023-07-05 15:51:42 -04:00
Lakshay Kansal
b3c29e4179
implemented support for bash and go highlighting rules (#1138)
* implemented support for bash and go

* add more commands to bash

* gave precedence to variables over strings in bash
2023-07-05 11:04:13 -04:00
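The highlighter commits in this stretch add per-language regex rules (bash, go, java, html, php, latex, json, c#). A minimal QSyntaxHighlighter rule table, with placeholder patterns and formats rather than the project's actual rules, might look like this:

```cpp
// Minimal sketch of regex-based highlighting rules in Qt.
// The keyword patterns and formats are placeholders, not gpt4all's actual rules.
#include <QFont>
#include <QRegularExpression>
#include <QSyntaxHighlighter>
#include <QTextCharFormat>
#include <QVector>

class CodeHighlighter : public QSyntaxHighlighter {
public:
    explicit CodeHighlighter(QTextDocument *doc) : QSyntaxHighlighter(doc) {
        QTextCharFormat keywordFormat;
        keywordFormat.setFontWeight(QFont::Bold);
        // Illustrative keywords only; real rules would be per-language tables.
        for (const QString &kw : {QStringLiteral("\\bfunc\\b"),
                                  QStringLiteral("\\bif\\b"),
                                  QStringLiteral("\\bfi\\b")})
            m_rules.append({QRegularExpression(kw), keywordFormat});
    }

protected:
    void highlightBlock(const QString &text) override {
        for (const auto &rule : m_rules) {
            auto it = rule.pattern.globalMatch(text);
            while (it.hasNext()) {
                auto match = it.next();
                setFormat(match.capturedStart(), match.capturedLength(), rule.format);
            }
        }
    }

private:
    struct Rule { QRegularExpression pattern; QTextCharFormat format; };
    QVector<Rule> m_rules;
};
```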
matthew-gill
fd4081aed8 Update codeblock font 2023-07-05 09:44:25 -04:00
Lakshay Kansal
70cbff70cc created highlighting rules for java using regex for the gpt4all chat interface 2023-06-29 13:11:37 -03:00
Adam Treat
1cd734efdc Provide an abstraction to break up the settings dialog into manageable pieces. 2023-06-29 09:59:54 -04:00
Adam Treat
7f252b4970 This completes the work of consolidating all settings that can be changed by the user onto the new settings object. 2023-06-29 00:44:48 -03:00
Adam Treat
285aa50b60 Consolidate generation and application settings on the new settings object. 2023-06-28 20:36:43 -03:00
Adam Treat
7f66c28649 Use the new settings for response generation. 2023-06-28 20:11:24 -03:00
Adam Treat
a8baa4da52 The sync for save should be after. 2023-06-28 20:11:24 -03:00
Adam Treat
705b480d72 Start moving toward a single authoritative class for all settings. This
is necessary to get rid of technical debt before we drastically increase
the complexity of settings by adding per-model settings, mirostat, and
other fun things. Right now the settings are divided between QML and C++,
with convenience methods for settings sync and so on scattered across
other singletons. This change consolidates all the logic for settings
into a single class with a single API for both C++ and QML.
2023-06-28 20:11:24 -03:00
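A hedged sketch of the single authoritative settings class described above: one QObject-based singleton that persists through QSettings and exposes properties to both C++ and QML. The class name and property are illustrative, not the project's actual code.

```cpp
// Sketch of a single authoritative settings object shared by C++ and QML.
// Class and property names are illustrative; persistence goes through QSettings.
#include <QObject>
#include <QSettings>

class AppSettings : public QObject {
    Q_OBJECT
    Q_PROPERTY(int threadCount READ threadCount WRITE setThreadCount NOTIFY threadCountChanged)
public:
    static AppSettings *instance() {
        static AppSettings inst;
        return &inst;
    }

    int threadCount() const {
        return m_settings.value("threadCount", 4).toInt();
    }

    void setThreadCount(int count) {
        if (count == threadCount())
            return;
        m_settings.setValue("threadCount", count);
        emit threadCountChanged();
    }

signals:
    void threadCountChanged();

private:
    AppSettings() = default;
    QSettings m_settings;
};
```

Registering the instance with qmlRegisterSingletonInstance (Qt 5.14+) would let QML bind to exactly the same properties the C++ side uses, which is the "single API for both" idea.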
Adam Treat
e70899a26c Make the retrieval/parsing of models.json sync on startup. We were jumping through too many hoops to mitigate the async behavior. 2023-06-28 12:32:22 -03:00
Adam Treat
9560336490 Match on the filename too for server mode. 2023-06-28 09:20:05 -04:00
Adam Treat
58cd346686 Bump release again and new release notes. 2023-06-27 18:01:23 -04:00
Adam Treat
0f8f364d76 Fix mac again for falcon. 2023-06-27 17:20:40 -04:00
Adam Treat
8aae4e52b3 Fix for falcon on mac. 2023-06-27 17:13:13 -04:00
Adam Treat
9375c71aa7 New release notes for 2.4.9 and bump version. 2023-06-27 17:01:49 -04:00
Adam Treat
71449bbc4b Fix this correctly? 2023-06-27 16:01:11 -04:00
Adam Treat
07a5405618 Make it clear this is our finetune. 2023-06-27 15:33:38 -04:00
Adam Treat
189ac82277 Fix server mode. 2023-06-27 15:01:16 -04:00
Adam Treat
b56cc61ca2 Don't allow setting an invalid prompt template. 2023-06-27 14:52:44 -04:00
Adam Treat
0780393d00 Don't use local. 2023-06-27 14:13:42 -04:00
Adam Treat
924efd9e25 Add falcon to our models.json 2023-06-27 13:56:16 -04:00
Adam Treat
d3b8234106 Fix spelling. 2023-06-27 14:23:56 -03:00
Adam Treat
42c0a6673a Don't persist the force metal setting. 2023-06-27 14:23:56 -03:00
Adam Treat
267601d670 Enable the force metal setting. 2023-06-27 14:23:56 -03:00
Aaron Miller
e22dd164d8 add falcon to chatllm::serialize 2023-06-27 14:06:39 -03:00
Aaron Miller
198b5e4832 add Falcon 7B model
Tested with https://huggingface.co/TheBloke/falcon-7b-instruct-GGML/blob/main/falcon7b-instruct.ggmlv3.q4_0.bin
2023-06-27 14:06:39 -03:00
Adam Treat
985d3bbfa4 Add Orca models to list. 2023-06-27 09:38:43 -04:00
Adam Treat
8558fb4297 Fix models.json for spanning multiple lines with string. 2023-06-26 21:35:56 -04:00
Adam Treat
c24ad02a6a Wait just a bit to set the model name so that we can display the proper name instead of filename. 2023-06-26 21:00:09 -04:00
Adam Treat
57fa8644d6 Make spelling check happy. 2023-06-26 17:56:56 -04:00
Adam Treat
d0a3e82ffc Restore feature I accidentally erased in modellist update. 2023-06-26 17:50:45 -04:00
Aaron Miller
b19a3e5b2c add requiredMem method to llmodel impls
Most of these can just shortcut out of the model loading logic. llama is a bit worse to deal with because we submodule it, so I have to at least parse the hparams; then I just use the size on disk as an estimate for the mem size (which seems reasonable since we mmap() the llama files anyway).
2023-06-26 18:27:58 -03:00
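A sketch of the size-on-disk estimate mentioned in the commit above: when the weights are mmap()ed, the file size is a reasonable proxy for the memory required to load the model. The overhead margin and function name are assumptions for illustration.

```cpp
// Sketch: estimate required memory for a model file from its size on disk,
// a reasonable proxy when the weights are mmap()ed. The overhead margin and
// the function name are assumptions, not the actual llmodel API.
#include <cstddef>
#include <filesystem>
#include <string>

std::size_t requiredMemEstimate(const std::string &modelPath)
{
    std::error_code ec;
    const auto fileSize = std::filesystem::file_size(modelPath, ec);
    if (ec)
        return 0; // unknown; the caller can fall back to attempting the load

    // Add a margin for context and scratch buffers on top of the weights.
    return static_cast<std::size_t>(fileSize) + static_cast<std::size_t>(fileSize / 4);
}
```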
Adam Treat
dead954134 Fix save chats setting. 2023-06-26 16:43:37 -04:00
Adam Treat
26c9193227 Sigh. Windows. 2023-06-26 16:34:35 -04:00
Adam Treat
5deec2afe1 Change this back now that it is ready. 2023-06-26 16:21:09 -04:00
Adam Treat
676248fe8f Update the language. 2023-06-26 14:14:49 -04:00
Adam Treat
ef92492d8c Add better warnings and links. 2023-06-26 14:14:49 -04:00
Adam Treat
71c972f8fa Provide a more stark warning for localdocs and add more size to dialogs. 2023-06-26 14:14:49 -04:00
Adam Treat
1b5aa4617f Enable the add button always, but show an error in placeholder text. 2023-06-26 14:14:49 -04:00
Adam Treat
a0f80453e5 Use sysinfo in backend. 2023-06-26 14:14:49 -04:00
Adam Treat
5e520bb775 Fix so that models are searched in subdirectories. 2023-06-26 14:14:49 -04:00
Adam Treat
64e98b8ea9 Fix bug with model loading on initial load. 2023-06-26 14:14:49 -04:00
Adam Treat
3ca9e8692c Don't try and load incomplete files. 2023-06-26 14:14:49 -04:00
Adam Treat
27f25d5878 Get rid of recursive mutex. 2023-06-26 14:14:49 -04:00
Adam Treat
7f01b153b3 Modellist temp 2023-06-26 14:14:46 -04:00
Adam Treat
c1794597a7 Revert "Enable Wayland in build"
This reverts commit d686a583f9.
2023-06-26 14:10:27 -04:00
Akarshan Biswas
d686a583f9 Enable Wayland in build
# Describe your changes
The patch includes support for running natively on a Linux Wayland display server/compositor, which is the successor to the old Xorg.
CMakeLists was missing WaylandClient, so it was added back.

Will fix #1047 .

Signed-off-by: Akarshan Biswas <akarshan.biswas@gmail.com>
2023-06-26 14:58:23 -03:00
AMOGUS
3417a37c54
Change "web server" to "API server" for less confusion (#1039)
* Change "Web server" to "API server"

* Changed "API server" to "OpenAPI server"

* Reversed back to "API server" and updated tooltip
2023-06-23 16:28:52 -04:00
cosmic-snow
a423075403
Allow Cross-Origin Resource Sharing (CORS) (#1008) 2023-06-22 09:19:49 -07:00
Martin Mauch
af28173a25
Parse Org Mode files (#1038) 2023-06-22 09:09:39 -07:00
niansa/tuxifan
01acb8d250 Update download speed less often
so as not to show every little network spike to the user

Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-22 09:29:15 +02:00
Adam Treat
09ae04cee9 This needs to work even when localdocs and codeblocks are detected. 2023-06-20 19:07:02 -04:00
Adam Treat
ce7333029f Make the copy button a little more tolerant. 2023-06-20 18:59:08 -04:00
Adam Treat
508993de75 Exit early when no chats are saved. 2023-06-20 18:30:17 -04:00
Adam Treat
85bc861835 Fix the alignment. 2023-06-20 17:40:02 -04:00
Adam Treat
eebfe642c4 Add an error message to download dialog if models.json can't be retrieved. 2023-06-20 17:31:36 -04:00
Adam Treat
968868415e Move saving chats to a thread and display what we're doing to the user. 2023-06-20 17:18:33 -04:00
Adam Treat
c8a590bc6f Get rid of last blocking operations and make the chat/llm thread safe. 2023-06-20 18:18:10 -03:00
Adam Treat
84ec4311e9 Remove duplicated state tracking for chatgpt. 2023-06-20 18:18:10 -03:00
Adam Treat
7d2ce06029 Start working on more thread safety and model load error handling. 2023-06-20 14:39:22 -03:00
Adam Treat
d5f56d3308 Forgot to add a signal handler. 2023-06-20 14:39:22 -03:00