Mirror of https://github.com/nomic-ai/gpt4all.git, synced 2024-09-20 17:49:07 +03:00
Update TRAINING_LOG.md
This commit is contained in: parent 2fec742c36, commit 644d548549
@@ -234,4 +234,4 @@ Taking inspiration from [the Alpaca Repo](https://github.com/tatsu-lab/stanford_
Comparing our LoRa model to the [Alpaca LoRa](https://huggingface.co/tloen/alpaca-lora-7b), our model has lower perplexity. Training for 3 epochs performed best, both on perplexity and on qualitative examples.
-We tried training a full model using the parameters above, but found that during the second epoch the model overfit.
+We tried training a full model using the parameters above, but found that during the second epoch the model diverged and samples generated post training were worse than the first epoch.
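The perplexity comparison above uses the standard definition: perplexity is the exponential of the mean per-token negative log-likelihood. As a quick illustrative sketch (not the repo's actual evaluation code), this can be computed from per-token log-probabilities as follows:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-likelihood per token.

    token_logprobs: list of natural-log probabilities, one per token.
    """
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Example: if the model assigns every token probability 1/4,
# the perplexity is exactly 4 (the model is "choosing" among 4 options).
lp = [math.log(0.25)] * 10
print(round(perplexity(lp), 6))  # 4.0
```

A lower perplexity means the model assigns higher average probability to the held-out tokens, which is the sense in which the GPT4All LoRa is compared against the Alpaca LoRa above.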