Facebook AI Research Sequence-to-Sequence Toolkit written in Python.





Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks.

We provide reference implementations of various sequence modeling papers; see the per-paper READMEs under examples/ for the full list of implemented papers.


We also provide pre-trained models for translation and language modeling with a convenient `torch.hub` interface:

```python
import torch

# Load a pre-trained English-to-German translation model and translate a sentence
en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model')
en2de.translate('Hello world', beam=5)
# 'Hallo Welt'
```

See the PyTorch Hub tutorials for translation and RoBERTa for more examples.
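Beyond translation, pre-trained RoBERTa models can be loaded through the same interface. A minimal sketch, assuming the published `roberta.large` hub entry and its `fill_mask` helper; the completions shown are illustrative:

```python
import torch

# Load pre-trained RoBERTa via torch.hub (downloads the model on first use)
roberta = torch.hub.load('pytorch/fairseq', 'roberta.large')
roberta.eval()  # disable dropout for deterministic output

# Ask the model to fill in a masked token; returns the top-k completions
roberta.fill_mask('The capital of France is <mask>.', topk=3)
# e.g. [('The capital of France is Paris.', ...), ...]
```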

## Requirements and Installation

- PyTorch version >= 1.5.0
- Python version >= 3.6
- For training new models, you'll also need an NVIDIA GPU and NCCL
- To install fairseq and develop locally:

```bash
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./

# on macOS:
# CFLAGS="-stdlib=libc++" pip install --editable ./

# to install the latest stable release (0.10.x)
# pip install fairseq
```

- For faster training, install NVIDIA's apex library:

```bash
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" \
  --global-option="--deprecated_fused_adam" --global-option="--xentropy" \
  --global-option="--fast_multihead_attn" ./
```

- For large datasets, install PyArrow: `pip install pyarrow`
- If you use Docker, make sure to increase the shared memory size, either with `--ipc=host` or `--shm-size` as command-line options to `nvidia-docker run`.
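After installation, a quick sanity check confirms that fairseq is importable (a minimal sketch; the printed version depends on the release or checkout you installed):

```bash
python -c "import fairseq; print(fairseq.__version__)"
```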

## Getting Started

The [full documentation](https://fairseq.readthedocs.io/) contains instructions for getting started, training new models, and extending fairseq with new model types and tasks.
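For orientation, a typical command-line workflow follows the preprocess/train/generate pattern sketched below. The `fairseq-preprocess`, `fairseq-train`, and `fairseq-generate` entry points ship with fairseq; the data paths and the IWSLT'14 De-En setup are illustrative assumptions (see examples/translation for scripts that prepare this dataset):

```bash
# Binarize a tokenized parallel corpus (paths are hypothetical)
fairseq-preprocess --source-lang de --target-lang en \
    --trainpref data/train --validpref data/valid --testpref data/test \
    --destdir data-bin/iwslt14.de-en

# Train a Transformer model on the binarized data
fairseq-train data-bin/iwslt14.de-en \
    --arch transformer_iwslt_de_en \
    --optimizer adam --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --max-tokens 4096 --save-dir checkpoints/iwslt14.de-en

# Translate the test set with beam search using the best checkpoint
fairseq-generate data-bin/iwslt14.de-en \
    --path checkpoints/iwslt14.de-en/checkpoint_best.pt \
    --beam 5 --remove-bpe
```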

## Pre-trained models and examples

We provide pre-trained models and pre-processed, binarized test sets for several tasks, as well as example training and evaluation commands.
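As one example, a pre-trained language model can be loaded and sampled through the same `torch.hub` interface (a sketch; `transformer_lm.wmt19.en` matches the published WMT'19 English LM, and the sampling parameters are illustrative):

```python
import torch

# Load the English language model from the WMT'19 news translation task
en_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt19.en')
en_lm.eval()  # disable dropout

# Sample a continuation of the prompt with top-k sampling
en_lm.sample('Machine translation is', sampling=True, sampling_topk=10, temperature=0.8)
```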

We also have more detailed READMEs to reproduce results from specific papers; see the corresponding directories under examples/.

## Join the fairseq community

## License

fairseq(-py) is MIT-licensed. The license applies to the pre-trained models as well.

## Citation

Please cite as:

```bibtex
@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}
```