
Training GPT4All-J

Technical Reports

📗 Technical Report 3: GPT4All Snoozy and Groovy

📗 Technical Report 2: GPT4All-J

📗 Technical Report 1: GPT4All

GPT4All-J Training Data

We have released updated versions of our GPT4All-J model and training data.

  • v1.0: The original model trained on the v1.0 dataset
  • v1.1-breezy: Trained on a filtered dataset from which we removed all instances of the phrase "AI language model"
  • v1.2-jazzy: Trained on a further filtered dataset from which we also removed refusals such as "I'm sorry, I can't answer..." in addition to "AI language model"

Model and dataset versions can be specified by passing a revision argument.

For example, to load the v1.2-jazzy model and dataset, run:

from datasets import load_dataset
from transformers import AutoModelForCausalLM

dataset = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision="v1.2-jazzy")
model = AutoModelForCausalLM.from_pretrained("nomic-ai/gpt4all-j", revision="v1.2-jazzy")
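Once loaded, the model works with the standard transformers generation API. The snippet below is an illustrative sketch, not code from this repo: the prompt text and decoding settings are arbitrary choices, and loading the full model in float32 needs roughly 24 GB of RAM.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Pin the tokenizer and model to the same revision so they stay in sync.
tokenizer = AutoTokenizer.from_pretrained("nomic-ai/gpt4all-j", revision="v1.2-jazzy")
model = AutoModelForCausalLM.from_pretrained("nomic-ai/gpt4all-j", revision="v1.2-jazzy")

inputs = tokenizer("What is a language model?", return_tensors="pt")
# Greedy decoding; adjust max_new_tokens and sampling parameters to taste.
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```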

GPT4All-J Training Instructions

accelerate launch \
  --dynamo_backend=inductor \
  --num_processes=8 \
  --num_machines=1 \
  --machine_rank=0 \
  --deepspeed_multinode_launcher standard \
  --mixed_precision=bf16 \
  --use_deepspeed \
  --deepspeed_config_file=configs/deepspeed/ds_config_gptj.json \
  train.py --config configs/train/finetune_gptj.yaml
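The launch command delegates optimizer and precision setup to DeepSpeed via configs/deepspeed/ds_config_gptj.json, which is the authoritative file in this repo. For orientation only, a minimal bf16 DeepSpeed config of the shape accelerate expects looks roughly like this; the ZeRO stage and "auto" batch settings below are illustrative assumptions, not the repo's actual values:

```json
{
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 2
  }
}
```

With "auto" values, accelerate fills in the batch and accumulation settings from its own launch configuration rather than hard-coding them in the JSON.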