Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Latest commit 279796224f by Pierre Andrews: Preprocess Split (#2738), 2022-01-11
Summary:
This is the equivalent of PR https://github.com/fairinternal/fairseq-py/issues/2697, but on top of main instead of gshard (cherry-picked and merged as a squash):

* reorganize preprocess.py code a bit
* use Binarizer objects in the multiprocess code
* clean up the make_binary
* multiprocess logic
* learn to count
* format and doc string
* add basic test for vocab binarizer
* generalize to one line
* move multiprocess logic into the binarizer (see the sketch below)
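
The pattern is easier to see in miniature. Below is a minimal sketch of multiprocess binarization (illustrative only, not fairseq's actual Binarizer API): each worker tokenizes and counts one shard of the input, and the parent process merges the per-shard counters.

```python
import multiprocessing as mp
from collections import Counter

def binarize_shard(lines):
    # Toy stand-in for a Binarizer worker: tokenize a shard and return
    # its token counts; the real workers also write .bin/.idx data.
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def binarize_parallel(lines, workers=4):
    # Split the input into one shard per worker, count in parallel,
    # then merge the per-shard counters in the parent process.
    shards = [lines[i::workers] for i in range(workers)]
    with mp.Pool(workers) as pool:
        shard_counts = pool.map(binarize_shard, shards)
    total = Counter()
    for c in shard_counts:
        total.update(c)
    return total

if __name__ == "__main__":
    data = ["the cat sat on the mat", "the dog sat"] * 8
    print(binarize_parallel(data, workers=2).most_common(3))
```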

Testing:
```
python -m fairseq_cli.preprocess --only-source --trainpref ~/fixathon/small_vocab_test/train.in --destdir ~/fixathon/small_vocab_test/data-bin.cherry --workers 20
python -m fairseq_cli.preprocess --only-source --trainpref ~/fixathon/small_vocab_test/train.in --destdir ~/fixathon/small_vocab_test/data-bin.main --workers 20
```

```
md5sum ~/fixathon/small_vocab_test/data-bin.cherry/train.bin \
       ~/fixathon/small_vocab_test/data-bin.main/train.bin
# the two checksums should match
```

```
diff ~/fixathon/small_vocab_test/data-bin.main/dict.txt ~/fixathon/small_vocab_test/data-bin.cherry/dict.txt
```
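
The same check can be scripted end to end; a minimal sketch (paths as above; `train.idx` is assumed to sit alongside `train.bin`):

```python
import hashlib
from pathlib import Path

def md5(path):
    # Stream the file in chunks so large .bin files are not read into memory at once.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

main = Path.home() / "fixathon/small_vocab_test/data-bin.main"
cherry = Path.home() / "fixathon/small_vocab_test/data-bin.cherry"
for name in ("train.bin", "train.idx", "dict.txt"):
    status = "identical" if md5(main / name) == md5(cherry / name) else "DIFFERS"
    print(f"{name}: {status}")
```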

Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/2738

Reviewed By: sshleifer, dianaml0

Differential Revision: D32830875

Pulled By: Mortimerp9

fbshipit-source-id: e7463d5cdd96a877691bf39666daa319ebb3dcb8

| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| .circleci | fix flake8 issues (#2570) | 2021-12-09 |
| .github | Add linting with black (#2678) | 2021-11-29 |
| docs | Rename references from master -> main in preparation for branch name change (#2297) | 2021-09-20 |
| examples | Benchmarking OSS (#2852) | 2022-01-11 |
| fairseq | Preprocess Split (#2738) | 2022-01-11 |
| fairseq_cli | Preprocess Split (#2738) | 2022-01-11 |
| scripts | fix flake8 issues (#2570) | 2021-12-09 |
| tests | Preprocess Split (#2738) | 2022-01-11 |
| .gitignore | Reproduce #1781. Add Weights and Biases support | 2020-11-03 |
| .gitmodules | Remove unused hf/transformers submodule (#1435) | 2020-11-16 |
| .isort.cfg | Add pre commit config and flake8 config (#2676) | 2021-11-24 |
| .pre-commit-config.yaml | fix flake8 issues (#2570) | 2021-12-09 |
| CODE_OF_CONDUCT.md | Update CODE_OF_CONDUCT.md (#1759) | 2020-03-04 |
| CONTRIBUTING.md | Add pre commit config and flake8 config (#2676) | 2021-11-24 |
| hubconf.py | Move dep checks before fairseq imports in hubconf.py (fixes #3093) (#3104) | 2021-01-05 |
| LICENSE | Relicense fairseq under MIT license (#786) | 2019-07-30 |
| pyproject.toml | fetch pyproject.toml for building cython codes without pre-installation (#1697) | 2020-02-15 |
| README.md | S2ST oss (#2756) | 2021-12-28 |
| setup.cfg | fix flake8 issues (#2570) | 2021-12-09 |
| setup.py | Add linting with black (#2678) | 2021-11-29 |
| train.py | Apply black+isort (#1357) | 2020-10-18 |





Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks.

We provide reference implementations of various sequence modeling papers:

List of implemented papers

What's New:

Previous updates

Features:

We also provide pre-trained models for translation and language modeling with a convenient torch.hub interface:

```python
import torch

# Load an English-to-German Transformer trained on WMT'19 news data
en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model')
en2de.translate('Hello world', beam=5)
# 'Hallo Welt'
```

See the PyTorch Hub tutorials for translation and RoBERTa for more examples.
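
The same hub interface exposes RoBERTa; a minimal sketch (the model name follows the hub listing, and the calls are the standard fairseq hub methods):

```python
import torch

# Load a pre-trained RoBERTa model from torch.hub
roberta = torch.hub.load('pytorch/fairseq', 'roberta.large')
roberta.eval()  # disable dropout for deterministic features

tokens = roberta.encode('Hello world')       # BPE-encode text to a tensor of token ids
features = roberta.extract_features(tokens)  # last-layer hidden states
print(features.shape)
```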

Requirements and Installation

  • PyTorch version >= 1.5.0
  • Python version >= 3.6
  • For training new models, you'll also need an NVIDIA GPU and NCCL
  • To install fairseq and develop locally:
```bash
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./

# on MacOS:
# CFLAGS="-stdlib=libc++" pip install --editable ./

# to install the latest stable release (0.10.x)
# pip install fairseq
```
  • For faster training install NVIDIA's apex library:
```bash
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" \
  --global-option="--deprecated_fused_adam" --global-option="--xentropy" \
  --global-option="--fast_multihead_attn" ./
```
  • For large datasets install PyArrow: `pip install pyarrow`
  • If you use Docker, make sure to increase the shared memory size, either with `--ipc=host` or `--shm-size` as command-line options to `nvidia-docker run`.

Getting Started

The full documentation contains instructions for getting started, training new models and extending fairseq with new model types and tasks.

Pre-trained models and examples

We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below, as well as example training and evaluation commands.

We also have more detailed READMEs to reproduce results from specific papers:

Join the fairseq community

License

fairseq(-py) is MIT-licensed. The license applies to the pre-trained models as well.

Citation

Please cite as:

```bibtex
@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}
```