Facebook AI Research Sequence-to-Sequence Toolkit written in Python.


Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks.

We provide reference implementations of various sequence modeling papers; see the examples directory for the full list of implemented papers.


We also provide pre-trained models for translation and language modeling with a convenient torch.hub interface:

import torch

en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model')
en2de.translate('Hello world', beam=5)
# 'Hallo Welt'

See the PyTorch Hub tutorials for translation and RoBERTa for more examples.
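For instance, the RoBERTa hub interface can be exercised with a minimal sketch like the one below (the example sentence and topk value are illustrative; the RoBERTa example README documents the full set of options):

import torch

# Load a pre-trained RoBERTa model via torch.hub (downloads the checkpoint on first use)
roberta = torch.hub.load('pytorch/fairseq', 'roberta.large')
roberta.eval()  # disable dropout for deterministic results

# Fill in a masked token and return the top-3 candidate completions
# (the input sentence and topk here are just illustrative)
roberta.fill_mask('The capital of France is <mask>.', topk=3)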

Requirements and Installation

  • PyTorch version >= 1.5.0
  • Python version >= 3.6
  • For training new models, you'll also need an NVIDIA GPU and NCCL
  • To install fairseq and develop locally (a short post-install check is sketched after this list):
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./

# on macOS:
# CFLAGS="-stdlib=libc++" pip install --editable ./

# to install the latest stable release (0.10.x)
# pip install fairseq
  • For faster training, install NVIDIA's apex library:
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" \
  --global-option="--deprecated_fused_adam" --global-option="--xentropy" \
  --global-option="--fast_multihead_attn" ./
  • For large datasets install PyArrow: pip install pyarrow
  • If you use Docker, make sure to increase the shared memory size, either with --ipc=host or --shm-size as command-line options to nvidia-docker run.
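After installing with any of the options above, a quick sanity check along the following lines (an illustrative snippet, not part of the official instructions) confirms that fairseq imports and that a GPU is visible:

import torch
import fairseq

# Print the installed fairseq version and whether CUDA is available for training
print(fairseq.__version__)
print(torch.cuda.is_available())  # expected to be True when an NVIDIA GPU and drivers are set up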

Getting Started

The full documentation contains instructions for getting started, training new models and extending fairseq with new model types and tasks.
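As a rough sketch of the typical command-line workflow (the dataset paths, language pair, architecture, and hyperparameters below are placeholders; the documentation and the translation example give concrete, tested configurations):

# Binarize a parallel corpus for training
# (paths, language codes, and hyperparameters in this sketch are placeholders)
fairseq-preprocess --source-lang de --target-lang en \
    --trainpref data/train --validpref data/valid --testpref data/test \
    --destdir data-bin/example

# Train a Transformer translation model
fairseq-train data-bin/example \
    --arch transformer_iwslt_de_en \
    --optimizer adam --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --max-tokens 4096 --save-dir checkpoints/example

# Translate the test set with beam search using the best checkpoint
fairseq-generate data-bin/example \
    --path checkpoints/example/checkpoint_best.pt --beam 5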

Pre-trained models and examples

We provide pre-trained models and pre-processed, binarized test sets for several tasks, as well as example training and evaluation commands.
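Once downloaded, such a model can typically be loaded through the Python API roughly as follows (the checkpoint path, data directory, and BPE/tokenizer settings are placeholders; each model's README states the exact values to use):

from fairseq.models.transformer import TransformerModel

# Load a translation model from a local checkpoint directory
# (paths and the bpe/tokenizer arguments below are placeholders; see the model's README)
en2de = TransformerModel.from_pretrained(
    '/path/to/checkpoints',
    checkpoint_file='model.pt',
    data_name_or_path='/path/to/data-bin',
    bpe='fastbpe',
    tokenizer='moses',
)
en2de.translate('Hello world', beam=5)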

We also have more detailed READMEs to reproduce results from specific papers; these live alongside the code in the examples directory.

Join the fairseq community

License

fairseq(-py) is MIT-licensed. The license applies to the pre-trained models as well.

Citation

Please cite as:

@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}