Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Introduction

Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks.

What's New:

Features:

Fairseq provides reference implementations of various sequence-to-sequence models, including convolutional (ConvS2S), LSTM, and Transformer (self-attention) architectures.

Additionally:

  • multi-GPU (distributed) training on one machine or across multiple machines
  • fast generation on both CPU and GPU with multiple search algorithms implemented (e.g. beam search, diverse beam search, and sampling)
  • large mini-batch training even on a single GPU via delayed updates
  • mixed precision training (trains faster with less GPU memory on NVIDIA tensor cores)
  • extensible: easily register new models, criterions, tasks, optimizers and learning rate schedulers (see the sketch below)

We also provide pre-trained models for several benchmark translation and language modeling datasets.


Requirements and Installation

  • PyTorch version >= 1.2.0
  • Python version >= 3.5
  • For training new models, you'll also need an NVIDIA GPU and NCCL
  • For faster training, install NVIDIA's apex library with the --cuda_ext option

To install fairseq:

pip install fairseq

On macOS:

CFLAGS="-stdlib=libc++" pip install fairseq

If you use Docker, make sure to increase the shared memory size, either with --ipc=host or --shm-size, passed as command-line options to nvidia-docker run.
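
For example, to launch a container with host shared memory (the image name is a placeholder):

nvidia-docker run --ipc=host -it <your-fairseq-image> bash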

Installing from source

To install fairseq from source and develop locally:

git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable .

Getting Started

The full documentation contains instructions for getting started, training new models and extending fairseq with new model types and tasks.
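
A typical workflow is to binarize the data, train, and then generate. The commands below are only a sketch; the language pair, paths and hyperparameters are placeholders rather than recommended settings:

fairseq-preprocess --source-lang de --target-lang en \
    --trainpref data/train --validpref data/valid --testpref data/test \
    --destdir data-bin
fairseq-train data-bin --arch transformer \
    --optimizer adam --lr 0.0005 --max-tokens 4096 \
    --fp16 --update-freq 8  # mixed precision and delayed updates are optional
fairseq-generate data-bin --path checkpoints/checkpoint_best.pt --beam 5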

Pre-trained models and examples

We provide pre-trained models and pre-processed, binarized test sets for several tasks, as well as example training and evaluation commands.
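Models published through torch.hub can also be loaded directly from Python. A minimal sketch follows; the checkpoint name and keyword arguments are assumptions, so check torch.hub.list for the names that are actually published:

import torch

# list the model names fairseq exposes through torch.hub
print(torch.hub.list('pytorch/fairseq'))

# load a released translation model (name and arguments are assumptions)
en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model',
                       tokenizer='moses', bpe='fastbpe')
en2de.eval()
print(en2de.translate('Hello world!'))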

We also have more detailed READMEs to reproduce results from specific papers; see the examples directory.

Join the fairseq community

License

fairseq(-py) is MIT-licensed. The license applies to the pre-trained models as well.

Citation

Please cite as:

@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}