# Neural Language Modeling

## Pre-trained models

| Description | Parameters | Dataset | Model and Test set(s) |
|---|---|---|---|
| Adaptive Inputs (Baevski and Auli, 2018) | 1026M | Google Billion Words | download (.tar.bz2) |
| Adaptive Inputs (Baevski and Auli, 2018) | 247M | WikiText-103 | download (.tar.bz2) |
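
A minimal sketch of using a downloaded archive, assuming it is saved as `adaptive_lm_wiki103.tar.bz2` and unpacks to a directory containing `model.pt` and `dict.txt` (the archive, directory, and file names here are assumptions; check the actual contents):

```bash
# Extract the pre-trained model (hypothetical archive/directory names):
$ mkdir -p models/
$ tar xjf adaptive_lm_wiki103.tar.bz2 -C models/

# Evaluate it on binarized WikiText-103 (see "Example usage" below for building data-bin/wikitext-103).
# The data must be binarized with the model's own dict.txt (via fairseq-preprocess --srcdict),
# otherwise the vocabulary indices will not match the checkpoint.
$ fairseq-eval-lm data-bin/wikitext-103 \
  --path 'models/adaptive_lm_wiki103/model.pt' \
  --sample-break-mode complete --max-tokens 3072 --context-window 2560 --softmax-batch 1024
```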

## Example usage

These scripts provide an example of pre-processing data for the language modeling task.

### prepare-wikitext-103.sh

Provides an example of pre-processing for the WikiText-103 language modeling task.

Prepare the data:

```bash
$ cd examples/language_model/
$ bash prepare-wikitext-103.sh
$ cd ../..

# Binarize the dataset:
$ TEXT=examples/language_model/wikitext-103
$ fairseq-preprocess --only-source \
  --trainpref $TEXT/wiki.train.tokens --validpref $TEXT/wiki.valid.tokens --testpref $TEXT/wiki.test.tokens \
  --destdir data-bin/wikitext-103
```
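
Before training, it can be useful to sanity-check the binarized output. A small sketch (the file names below are what `fairseq-preprocess --only-source` is expected to write; adjust if your version differs):

```bash
# Confirm the dictionary and the binarized splits were written:
$ ls data-bin/wikitext-103
# expected: dict.txt plus .bin/.idx pairs for train, valid and test

# The vocabulary size is roughly the number of lines in dict.txt
# (fairseq adds a handful of special symbols internally):
$ wc -l data-bin/wikitext-103/dict.txt
```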

Train a transformer language model with adaptive inputs (Baevski and Auli, 2018: "Adaptive Input Representations for Neural Language Modeling"):

```bash
# If it runs out of memory, try to reduce --max-tokens and --tokens-per-sample
$ mkdir -p checkpoints/transformer_wikitext-103
$ fairseq-train --task language_modeling data-bin/wikitext-103 \
  --save-dir checkpoints/transformer_wikitext-103 --arch transformer_lm_wiki103 \
  --max-update 286000 --max-lr 1.0 --t-mult 2 --lr-period-updates 270000 --lr-scheduler cosine --lr-shrink 0.75 \
  --warmup-updates 16000 --warmup-init-lr 1e-07 --min-lr 1e-09 --optimizer nag --lr 0.0001 --clip-norm 0.1 \
  --criterion adaptive_loss --max-tokens 3072 --update-freq 4 --tokens-per-sample 3072 --seed 1 \
  --sample-break-mode none --skip-invalid-size-inputs-valid-test --ddp-backend=no_c10d
```
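
The effective batch size here is (number of GPUs) × `--max-tokens` × `--update-freq`: `--update-freq 4` accumulates gradients over 4 batches before each parameter update, and `fairseq-train` uses every visible GPU. A sketch of running the same recipe on two GPUs instead of one, halving `--update-freq` to keep the effective batch size roughly constant (the exact split is an assumption, not part of the published recipe):

```bash
# Same training run restricted to GPUs 0 and 1, with --update-freq reduced from 4 to 2:
$ CUDA_VISIBLE_DEVICES=0,1 fairseq-train --task language_modeling data-bin/wikitext-103 \
  --save-dir checkpoints/transformer_wikitext-103 --arch transformer_lm_wiki103 \
  --max-update 286000 --max-lr 1.0 --t-mult 2 --lr-period-updates 270000 --lr-scheduler cosine --lr-shrink 0.75 \
  --warmup-updates 16000 --warmup-init-lr 1e-07 --min-lr 1e-09 --optimizer nag --lr 0.0001 --clip-norm 0.1 \
  --criterion adaptive_loss --max-tokens 3072 --update-freq 2 --tokens-per-sample 3072 --seed 1 \
  --sample-break-mode none --skip-invalid-size-inputs-valid-test --ddp-backend=no_c10d
```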

```bash
# Evaluate:
$ fairseq-eval-lm data-bin/wikitext-103 --path 'checkpoints/transformer_wikitext-103/checkpoint_best.pt' \
  --sample-break-mode complete --max-tokens 3072 --context-window 2560 --softmax-batch 1024
```
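
To score text other than the WikiText-103 test set, binarize it against the training dictionary and point `fairseq-eval-lm` at the new directory. A sketch, where `my_corpus.tokens` is a hypothetical file tokenized the same way as the training data:

```bash
# Binarize a custom test file, reusing the dictionary built during preprocessing:
$ fairseq-preprocess --only-source \
  --srcdict data-bin/wikitext-103/dict.txt \
  --testpref my_corpus.tokens \
  --destdir data-bin/my_corpus

# Compute the trained model's perplexity on the custom data:
$ fairseq-eval-lm data-bin/my_corpus \
  --path 'checkpoints/transformer_wikitext-103/checkpoint_best.pt' \
  --sample-break-mode complete --max-tokens 3072 --context-window 2560 --softmax-batch 1024
```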

Train a convolutional language model (Dauphin et al., 2017: "Language Modeling with Gated Convolutional Networks"):

```bash
# If it runs out of memory, try to reduce --max-tokens and --tokens-per-sample
$ mkdir -p checkpoints/fconv_wikitext-103
$ fairseq-train --task language_modeling data-bin/wikitext-103 \
  --save-dir checkpoints/fconv_wikitext-103 \
  --max-epoch 35 --arch fconv_lm_dauphin_wikitext103 --optimizer nag \
  --lr 1.0 --lr-scheduler reduce_lr_on_plateau --lr-shrink 0.5 \
  --clip-norm 0.1 --dropout 0.2 --weight-decay 5e-06 --criterion adaptive_loss \
  --adaptive-softmax-cutoff 10000,20000,200000 --max-tokens 1024 --tokens-per-sample 1024 \
  --ddp-backend=no_c10d

# Evaluate:
$ fairseq-eval-lm data-bin/wikitext-103 --path 'checkpoints/fconv_wikitext-103/checkpoint_best.pt'
```
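
Averaging the last few checkpoints before evaluation may give a small perplexity improvement. A sketch using fairseq's bundled `scripts/average_checkpoints.py` (run from the repository root; averaging 5 epoch checkpoints is an arbitrary choice, and the script's flags should be checked against your fairseq version):

```bash
# Average the last 5 epoch checkpoints into a single model file:
$ python scripts/average_checkpoints.py \
  --inputs checkpoints/fconv_wikitext-103 \
  --num-epoch-checkpoints 5 \
  --output checkpoints/fconv_wikitext-103/checkpoint_avg.pt

# Evaluate the averaged model:
$ fairseq-eval-lm data-bin/wikitext-103 --path 'checkpoints/fconv_wikitext-103/checkpoint_avg.pt'
```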