Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Introduction

Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks.

What's New:

  • August 2019: v0.8.0 released
  • July 2019: fairseq relicensed under the MIT license

Features:

Fairseq provides reference implementations of various sequence-to-sequence models, including convolutional, LSTM-based, and Transformer architectures.

Additionally:

  • multi-GPU (distributed) training on one machine or across multiple machines
  • fast generation on both CPU and GPU with multiple search algorithms implemented (beam search, diverse beam search, and sampling)
  • large mini-batch training even on a single GPU via delayed updates
  • mixed precision training (trains faster with less GPU memory on NVIDIA tensor cores)
  • extensible: easily register new models, criterions, tasks, optimizers and learning rate schedulers (see the sketch after this list)
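
As a minimal sketch of this plugin mechanism (the class, helper, and architecture names below are hypothetical, not part of the toolkit), a new model is registered with a decorator and then becomes selectable at training time via --arch:

from fairseq.models import (
    FairseqEncoderDecoderModel,
    register_model,
    register_model_architecture,
)

@register_model('toy_transformer')
class ToyTransformerModel(FairseqEncoderDecoderModel):

    @staticmethod
    def add_args(parser):
        # Expose model-specific hyperparameters as command-line flags.
        parser.add_argument('--toy-embed-dim', type=int, metavar='N',
                            help='embedding dimension')

    @classmethod
    def build_model(cls, args, task):
        # Construct the encoder/decoder from args and the task's
        # dictionaries; the builders here are hypothetical helpers.
        encoder = build_toy_encoder(args, task.source_dictionary)
        decoder = build_toy_decoder(args, task.target_dictionary)
        return cls(encoder, decoder)

# A named architecture pre-fills hyperparameter defaults and is selected
# at training time with --arch toy_transformer_base.
@register_model_architecture('toy_transformer', 'toy_transformer_base')
def toy_transformer_base(args):
    args.toy_embed_dim = getattr(args, 'toy_embed_dim', 512)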

We also provide pre-trained models for several benchmark translation and language modeling datasets.

Requirements and Installation

  • PyTorch version >= 1.1.0
  • Python version >= 3.5
  • For training new models, you'll also need an NVIDIA GPU and NCCL
  • For faster training, install NVIDIA's apex library with the --cuda_ext option (an example install is shown after this list)
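
For example, apex is typically built from source along these lines (a sketch; the exact flags can vary across apex versions):

git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" .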

To install fairseq:

pip install fairseq

On macOS:

CFLAGS="-stdlib=libc++" pip install fairseq

If you use Docker, make sure to increase the shared memory size, either with --ipc=host or --shm-size, passed as command-line options to nvidia-docker run.
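
For example (the image name here is a placeholder):

nvidia-docker run --ipc=host -it --rm my-fairseq-image bash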

Installing from source

To install fairseq from source and develop locally:

git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable .

Getting Started

The full documentation contains instructions for getting started, training new models and extending fairseq with new model types and tasks.
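
As an illustration of the overall workflow (not taken from the documentation; the paths and hyperparameters below are placeholders), a typical run binarizes data, trains, and then generates with the command-line tools:

# Binarize a parallel corpus into data-bin/.
fairseq-preprocess --source-lang de --target-lang en \
    --trainpref data/train --validpref data/valid --testpref data/test \
    --destdir data-bin

# Train a Transformer; --fp16 enables mixed precision and --update-freq
# accumulates gradients over several batches (delayed updates).
fairseq-train data-bin --arch transformer \
    --optimizer adam --lr 0.0005 --max-tokens 4096 \
    --fp16 --update-freq 8

# Translate the test set with beam search.
fairseq-generate data-bin --path checkpoints/checkpoint_best.pt --beam 5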

Pre-trained models and examples

We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below, as well as example training and evaluation commands.
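
Many of these models can also be loaded directly through torch.hub. A minimal sketch, assuming the moses and fastBPE dependencies are installed:

import torch

# Load a pre-trained WMT'19 English-German translation model.
en2de = torch.hub.load('pytorch/fairseq',
                       'transformer.wmt19.en-de.single_model',
                       tokenizer='moses', bpe='fastbpe')
print(en2de.translate('Hello world!'))  # e.g. 'Hallo Welt!'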

We also have more detailed READMEs to reproduce results from specific papers; see the examples/ directory.

Join the fairseq community

License

fairseq(-py) is MIT-licensed. The license applies to the pre-trained models as well.

Citation

Please cite as:

@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}