Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks.

We provide reference implementations of various sequence modeling papers; see the examples/ directory for the full list of implemented papers.


We also provide pre-trained models for translation and language modeling with a convenient torch.hub interface:

en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model')
en2de.translate('Hello world', beam=5)
# 'Hallo Welt'

See the PyTorch Hub tutorials for translation and RoBERTa for more examples.
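
For example, RoBERTa models are exposed through the same torch.hub interface. A minimal sketch (assuming the roberta.large hub entry and its encode/decode/extract_features helpers, which may vary across fairseq releases):

import torch

# Load pre-trained RoBERTa from PyTorch Hub (downloads weights on first use)
roberta = torch.hub.load('pytorch/fairseq', 'roberta.large')
roberta.eval()  # disable dropout for deterministic outputs

# Round-trip a sentence through the BPE encoder
tokens = roberta.encode('Hello world!')
assert roberta.decode(tokens) == 'Hello world!'

# Extract contextual features (one vector per token)
features = roberta.extract_features(tokens)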

Requirements and Installation

  • PyTorch version >= 1.5.0
  • Python version >= 3.6
  • For training new models, you'll also need an NVIDIA GPU and NCCL
  • To install fairseq and develop locally:
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./

# on macOS:
# CFLAGS="-stdlib=libc++" pip install --editable ./

# to install the latest stable release (0.10.x)
# pip install fairseq
  • For faster training, install NVIDIA's apex library:
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" \
  --global-option="--deprecated_fused_adam" --global-option="--xentropy" \
  --global-option="--fast_multihead_attn" ./
  • For large datasets, install PyArrow: pip install pyarrow
  • If you use Docker, make sure to increase the shared memory size, either with --ipc=host or --shm-size, as command-line options to nvidia-docker run.
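
After installing, a quick sanity check (a minimal sketch; it assumes the package exposes __version__, as recent releases do, and the CUDA line only reports True on a working NVIDIA GPU setup):

import torch
import fairseq

print(fairseq.__version__)        # installed fairseq version
print(torch.cuda.is_available())  # True if an NVIDIA GPU is usable for training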

Getting Started

The full documentation contains instructions for getting started, training new models and extending fairseq with new model types and tasks.
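
As a quick preview of the command-line workflow, a typical translation experiment binarizes data, trains, and generates with the fairseq-* tools (a sketch only: the paths and hyperparameters below are illustrative placeholders, not recommended settings):

# Binarize a parallel corpus (expects data/train.de, data/train.en, etc.)
fairseq-preprocess --source-lang de --target-lang en \
    --trainpref data/train --validpref data/valid --testpref data/test \
    --destdir data-bin/demo

# Train a Transformer translation model on the binarized data
fairseq-train data-bin/demo \
    --arch transformer --optimizer adam --lr 0.0005 \
    --criterion label_smoothed_cross_entropy --max-tokens 4096 \
    --save-dir checkpoints/demo

# Translate the held-out test set with beam search
fairseq-generate data-bin/demo \
    --path checkpoints/demo/checkpoint_best.pt --beam 5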

Pre-trained models and examples

We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below, as well as example training and evaluation commands.

We also have more detailed READMEs to reproduce results from specific papers; see the per-paper READMEs under the examples/ directory.
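
For instance, a downloaded translation checkpoint can be queried interactively from the shell (a sketch; the data-bin and checkpoint paths are placeholders for the dictionary files and weights shipped with a specific pre-trained model):

# Translate sentences typed on stdin using a downloaded checkpoint
fairseq-interactive data-bin/wmt19.en-de \
    --path checkpoints/wmt19.en-de.pt \
    --source-lang en --target-lang de --beam 5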

Join the fairseq community

License

fairseq(-py) is MIT-licensed. The license applies to the pre-trained models as well.

Citation

Please cite as:

@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}