Facebook AI Research Sequence-to-Sequence Toolkit written in Python.





Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks.

We provide reference implementations of various sequence modeling papers:

List of implemented papers

What's New:

Previous updates

Features:

We also provide pre-trained models for translation and language modeling with a convenient torch.hub interface:

import torch

en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model')
en2de.translate('Hello world', beam=5)
# 'Hallo Welt'

See the PyTorch Hub tutorials for translation and RoBERTa for more examples.
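Language models can be loaded through the same torch.hub interface. A minimal sketch, assuming the transformer_lm.wmt19.en model name from the fairseq model zoo (the prompt and sampling arguments are illustrative):

import torch

# load a pre-trained English language model via torch.hub
en_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt19.en',
                       tokenizer='moses', bpe='fastbpe')
en_lm.eval()  # disable dropout for inference

# sample a continuation of the prompt with top-k sampling
en_lm.sample('Machine learning is', sampling=True, sampling_topk=10, temperature=0.8)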

Requirements and Installation

  • PyTorch version >= 1.5.0
  • Python version >= 3.6
  • For training new models, you'll also need an NVIDIA GPU and NCCL
  • To install fairseq and develop locally:
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./

# on macOS:
# CFLAGS="-stdlib=libc++" pip install --editable ./

# to install the latest stable release (0.10.x)
# pip install fairseq
  • For faster training, install NVIDIA's apex library:
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" \
  --global-option="--deprecated_fused_adam" --global-option="--xentropy" \
  --global-option="--fast_multihead_attn" ./
  • For large datasets, install PyArrow: pip install pyarrow
  • If you use Docker, make sure to increase the shared memory size, either with --ipc=host or --shm-size as command-line options to nvidia-docker run, for example as shown below.
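A sketch of launching a container with an enlarged shared-memory segment (the pytorch/pytorch image name is only illustrative; substitute your own image):

nvidia-docker run --ipc=host -it --rm pytorch/pytorch bash
# or, with recent Docker versions:
# docker run --gpus all --shm-size=8g -it --rm pytorch/pytorch bash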

Getting Started

The full documentation contains instructions for getting started, training new models and extending fairseq with new model types and tasks.
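As a quick orientation, the typical command-line workflow is preprocess, train, generate. A minimal sketch, where the data paths, destination directories, and hyperparameters are hypothetical placeholders rather than recommended settings:

# binarize a parallel corpus
fairseq-preprocess --source-lang de --target-lang en \
    --trainpref data/train --validpref data/valid --testpref data/test \
    --destdir data-bin/demo

# train a Transformer model on the binarized data
fairseq-train data-bin/demo \
    --arch transformer_iwslt_de_en --optimizer adam --lr 5e-4 \
    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --max-tokens 4096 --save-dir checkpoints/demo

# translate the test set with beam search
fairseq-generate data-bin/demo \
    --path checkpoints/demo/checkpoint_best.pt --beam 5 --remove-bpe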

Pre-trained models and examples

We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below, as well as example training and evaluation commands.
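For instance, once a pre-trained checkpoint and its matching binarized data directory have been downloaded, translations can be produced interactively from stdin (both paths below are hypothetical):

fairseq-interactive data-bin/wmt19.en-de \
    --path checkpoints/wmt19.en-de.pt \
    --source-lang en --target-lang de --beam 5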

We also have more detailed READMEs to reproduce results from specific papers:

Join the fairseq community

License

fairseq(-py) is MIT-licensed. The license applies to the pre-trained models as well.

Citation

Please cite as:

@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}