Facebook AI Research Sequence-to-Sequence Toolkit written in Python.


Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks.

What's New:

Features:

Fairseq provides reference implementations of various sequence-to-sequence models, including:

Additionally:

  • multi-GPU (distributed) training on one machine or across multiple machines
  • fast generation on both CPU and GPU with multiple search algorithms implemented:
  • large mini-batch training even on a single GPU via delayed updates (see the example command after this list)
  • mixed precision training (trains faster with less GPU memory on NVIDIA tensor cores)
  • extensible: easily register new models, criterions, tasks, optimizers and learning rate schedulers
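
As a rough illustration of the delayed-update and mixed-precision options above, a training invocation might combine --update-freq and --fp16. The data directory, architecture and hyperparameters below are placeholders rather than a recommended recipe:

fairseq-train data-bin/my_dataset \
    --arch transformer --optimizer adam --lr 0.0005 \
    --max-tokens 4096 --update-freq 16 --fp16

Here --update-freq 16 accumulates gradients over 16 mini-batches before each optimizer step, simulating a 16x larger mini-batch on a single GPU, and --fp16 enables mixed precision training.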

We also provide pre-trained models for translation and language modeling with a convenient torch.hub interface:

import torch

# Load an English-to-German translation model (downloads on first use)
en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model')
en2de.translate('Hello world', beam=5)
# 'Hallo Welt'
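
The model names accepted by torch.hub.load can also be listed programmatically. This minimal sketch uses the generic torch.hub.list API (not a fairseq-specific call) and downloads the hub configuration on first use:

import torch

# Print the names of pre-trained models exposed through the fairseq hub configuration
print(torch.hub.list('pytorch/fairseq'))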

See the PyTorch Hub tutorials for translation and RoBERTa for more examples.


Requirements and Installation

  • PyTorch version >= 1.2.0
  • Python version >= 3.6
  • For training new models, you'll also need an NVIDIA GPU and NCCL
  • For faster training install NVIDIA's apex library with the --cuda_ext option (an example install command follows this list)
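
A common way to build apex with its CUDA extensions is sketched below; the exact flags may change over time, so check the apex repository's README for current instructions:

git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./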

To install fairseq:

pip install fairseq

On macOS:

CFLAGS="-stdlib=libc++" pip install fairseq

If you use Docker, make sure to increase the shared memory size, either with --ipc=host or --shm-size as command-line options to nvidia-docker run.
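
For example, assuming an image that already has fairseq installed (the image name here is a placeholder):

nvidia-docker run --ipc=host -it --rm my-fairseq-image bash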

Installing from source

To install fairseq from source and develop locally:

git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable .
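
After the editable install you can check that the package imports correctly; this assumes the installed package exposes fairseq.__version__, which recent releases do:

python -c "import fairseq; print(fairseq.__version__)"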

Getting Started

The full documentation contains instructions for getting started, training new models and extending fairseq with new model types and tasks.

Pre-trained models and examples

We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below, as well as example training and evaluation commands.

  • Translation: convolutional and transformer models are available
  • Language Modeling: convolutional and transformer models are available
  • wav2vec: wav2vec large model is available
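
As a generic sketch of the evaluation commands mentioned above, translation models are typically scored with fairseq-generate on a binarized test set. The data and checkpoint paths below are placeholders; the task-specific READMEs give the exact commands:

fairseq-generate data-bin/my_test_set \
    --path checkpoints/model.pt \
    --beam 5 --remove-bpe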

We also have more detailed READMEs to reproduce results from specific papers:

Join the fairseq community

License

fairseq(-py) is MIT-licensed. The license applies to the pre-trained models as well.

Citation

Please cite as:

@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}