Summary:
The current translation_multi_simple_epoch task adds an extra layer of virtual-epoch abstraction so that only part of the data is loaded and training can start earlier. However, for smaller datasets this is not necessary.
This diff makes it skip the virtual-epoch layer if --virtual-epoch-size is not specified.
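For illustration, a minimal sketch of the resulting logic; the wrapper and helper names below are hypothetical, not the actual task code:
```python
class VirtualEpochSlice:
    """Hypothetical stand-in for the virtual-epoch wrapper: exposes only a
    fixed-size slice of the underlying dataset per virtual epoch."""

    def __init__(self, dataset, size):
        self.dataset = dataset
        self.size = size

    def __len__(self):
        return min(self.size, len(self.dataset))


def maybe_wrap_virtual_epoch(dataset, virtual_epoch_size=None):
    # Skip the extra layer entirely when --virtual-epoch-size is not given,
    # which is fine for small corpora that load quickly anyway.
    if virtual_epoch_size is None:
        return dataset
    return VirtualEpochSlice(dataset, virtual_epoch_size)
```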
Reviewed By: pipibjc
Differential Revision: D24962835
fbshipit-source-id: 7de4293a6996ed075a1ed0c1ff2de94c8ae3df14
Summary:
this adds a hydra_train binary that uses hydra configs/command line overrides instead of argparse
use case 1: built in configs + overrides from command line
```
python fairseq_cli/hydra_train.py distributed_training.distributed_world_size=1 dataset.batch_size=2 task.data=/private/home/myleott/data/data-bin/wikitext-103-roberta-bpe-bin/ model=transformer_lm/transformer_lm_gpt task=language_modeling optimization.max_update=5000
```
use case 2: use an external config that is used instead of bundled configs (but dataclass defaults still work)
```
python fairseq_cli/hydra_train.py --config-path ~/fairseq-py-dev/lm --config-name wiki103
```
the config file contains this:
```
# @package _group_
model:
  _name: transformer_lm
distributed_training:
  distributed_world_size: 1
dataset:
  batch_size: 2
task:
  _name: language_modeling
  data: /private/home/myleott/data/data-bin/wikitext-103-roberta-bpe-bin/
  add_bos_token: false
  max_target_positions: 1024
optimization:
  max_update: 50000
  lr: [ 0.25 ]
criterion: cross_entropy
optimizer: adam
lr_scheduler:
  _name: cosine
```
use case 3: use an external config directory that provides additional configs for e.g. models
```
python fairseq_cli/hydra_train.py distributed_training.distributed_world_size=1 dataset.batch_size=2 task.data=/private/home/myleott/data/data-bin/wikitext-103-roberta-bpe-bin/ model=transformer_lm/2_layers task=language_modeling optimization.max_update=5000 --config-dir ~/fairseq-py-dev/lm/hydra
```
where ~/fairseq-py-dev/lm/hydra has the following structure:
```
- model
-- transformer_lm
--- 2_layers.yaml
```
and inside 2_layers.yaml is a copy of transformer_lm_gpt.yaml but with decoder_layers set to 2
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1393
Reviewed By: myleott
Differential Revision: D24722252
Pulled By: alexeib
fbshipit-source-id: 758ea431fa099cd7c0e4daf41eff680df1d3b841
Summary: In the past, we always used a shared dictionary for multilingual experiments. This diff re-enables different dictionaries for source and target languages by changing the assertion criteria, and reverts to using the specific languages to return source_dict and target_dict.
Reviewed By: chtran
Differential Revision: D24637682
fbshipit-source-id: a982e4f1e48395cc5bf10dc03b98fbe970062f8d
Summary:
This PR reverts recent changes that attempted to make `--user-dir` work with non-unique module names. But that new approach introduced other issues (e.g., poor compatibility with multiprocessing and Windows), so let's revert to the previous simpler implementation.
Pull Request resolved: https://github.com/pytorch/fairseq/pull/2815
Reviewed By: alexeib
Differential Revision: D24611571
Pulled By: myleott
fbshipit-source-id: cecfe28395585ca0401f844f10bd0d49d014c4d8
Summary:
Pull Request resolved: https://github.com/facebookresearch/pytext/pull/1510
this is the main pr that switches on hydra functionality in fairseq
we migrate "args" object into omegaconf "DictConfig" at all legacy entry points
in addition this migrates various components from secondary registries (like bpe encoders and tokenizers) to make the migration smoother
i am going through code that references migrated fairseq components and changing it to inherit from "Legacy*" components instead. hopefully tests will catch most of this
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1343
Reviewed By: myleott
Differential Revision: D23973928
Pulled By: alexeib
fbshipit-source-id: dd9554981fff51ea75c1ff343874d1d6e61793c9
Summary:
## What does this PR do?
Implements R3F and R4F coming from Facebook Research: https://arxiv.org/abs/2008.03156
This code was used to generate all the results from the paper excluding probing results.
Pull Request resolved: https://github.com/pytorch/fairseq/pull/2455
Reviewed By: myleott
Differential Revision: D23444863
Pulled By: AkshatSh
fbshipit-source-id: b724a6d6cc9cebfdb4bd219828afbb5679f2259b
Summary:
# Before submitting
- [ ] Was this discussed/approved via a Github issue? (no need for typos, doc improvements)
- [ ] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/master/CONTRIBUTING.md)?
- [ ] Did you make sure to update the docs?
- [ ] Did you write any new necessary tests?
## What does this PR do?
Opensource code for Deep Transformer with Latent Depth (https://arxiv.org/pdf/2009.13102.pdf).
New features and design choices made:
- New feature: allow non-residual block to be weighted by sample z (generated per batch) instead of `x = residual + x`.
- Design choice: move `x = residual + x` in transformer_layer.py into a function which the subclass (with latent depth) can override to `x = residual + z*x` (see the sketch after this list).
- New feature: allow TransformerEncoder or TransformerDecoder to have additional logits parameters which will generate the samples z.
- Design choice: added subclass LatentTransformerEncoder and LatentTransformerDecoder, which has additional attributes for the logits parameters, and instantiate the corresponding LatentTransformerEncoderLayer and LatentTransformerDecoderLayer.
- New feature: allow multilingual_translation task to train with latent depth (results in the paper).
- Design choice:
- added additional arguments in the multilingual_translation task.
- added option for multilingual_transformer to use LatentTransformerEncoder and LatentTransformerDecoder besides standard TransformerEncoder.
- added option in multilingual_translation task's `train_step` to generate the samples z and compute the KL (and sparsity) loss per batch.
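For illustration, a minimal sketch of the residual-connection design choice above; the class names are illustrative, not the actual fairseq classes:
```python
import torch

class TransformerLayerSketch:
    """Base layer: the residual add is isolated in an overridable method."""

    def residual_connection(self, x: torch.Tensor, residual: torch.Tensor) -> torch.Tensor:
        return residual + x


class LatentDepthLayerSketch(TransformerLayerSketch):
    """Latent-depth variant: the non-residual branch is weighted by a
    per-batch sample z drawn from the layer-selection logits."""

    def __init__(self, z: torch.Tensor):
        self.z = z  # sampled once per batch by the latent-depth sampler

    def residual_connection(self, x: torch.Tensor, residual: torch.Tensor) -> torch.Tensor:
        return residual + self.z * x
```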
## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
## Did you have fun?
Make sure you had fun coding 🙃
Pull Request resolved: https://github.com/pytorch/fairseq/pull/2703
Reviewed By: myleott
Differential Revision: D24155059
Pulled By: xianxl
fbshipit-source-id: f3e41639429f9664ec5565839709aa857a643668
Summary:
Imported from https://github.com/fairinternal/fairseq-py/pull/1284. Updated according to PR comments.
Main changes:
* New task: `fairseq.tasks.speech_to_text`
* Multilingual support: multiple train sub-splits, temperature-based sampling, language ID tokens
* New dataset: `fairseq.data.audio.speech_to_text_dataset`
* Added accuracy metrics and BOS prefix removal to label smoothed cross entropy
* New models: Transformer (`fairseq.models.speech_to_text.s2t_transformer`) and BLSTM (`fairseq.models.speech_to_text.berard`)
* Extended scorers:
* Added a base scorer class: `fairseq.scorers.BaseScorer` (the parent class for all scorers except the BLEU scorer in CPP)
* Added an evaluation tokenizer: `fairseq.scorers.eval_tokenizer` which leverages sacreBLEU's built-in tokenizers and allows character-level tokenization as well as punctuation removal (for WER scoring).
* Added chrF scorer: `fairseq.scorers.chrf`
* Online Mel-filter bank speech feature extraction (via CPP-based pyKaldi or Python-based TorchAudio): `fairseq.data.audio.audio_utils`
* Online speech feature transforms: `fairseq.data.audio.feature_transforms.*`
* Fixed the subsampled sequence lengths in VGGTransformer (`examples.speech_recognition.models.vggtransformer`)
* Examples under `examples/speech_to_text`:
* LibriSpeech (ASR): better results than VGGTransformer with smaller Transformer-based models
* MuST-C (ST): comparable to [SOTA results](https://arxiv.org/pdf/2004.10234.pdf) but with fewer tricks
Reviewed By: jmp84
Differential Revision: D24065273
fbshipit-source-id: 5f842ca9c826f92d4af660705611885fe440a9ab
Summary:
now that we are moving to using dataclasses to define fairseq configuration, having aliases for options is no longer practical. this pr removes the "max-sentences" argument while keeping its alias "batch-size", which is more appropriate
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1333
Reviewed By: shruti-bh
Differential Revision: D24121305
Pulled By: alexeib
fbshipit-source-id: 34343cea54c8f2c8b059c38ef9f29b66e76df9fb
Summary:
Fixes https://github.com/pytorch/fairseq/issues/2673.
# Before submitting
- [x] Was this discussed/approved via a Github issue? (no need for typos, doc improvements)
- [x] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/master/CONTRIBUTING.md)?
- [ ] Did you make sure to update the docs?
- [ ] Did you write any new necessary tests?
## What does this PR do?
Fixes https://github.com/pytorch/fairseq/issues/2673 (issue).
## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
## Did you have fun?
Make sure you had fun coding 🙃
Pull Request resolved: https://github.com/pytorch/fairseq/pull/2675
Reviewed By: ngoyal2707
Differential Revision: D24001793
Pulled By: myleott
fbshipit-source-id: 6b4e9270e5f5a31ba1b65ae2ae717019108af913
Summary:
This pull request implements a variant of the Transformer model that uses an attention distribution for pointing to input words. The attention distribution over the input words is interpolated with the normal output distribution over the vocabulary words, as in [See et al. (2017)](https://arxiv.org/abs/1704.04368). This allows the model to generate words that appear in the input, even if they don't appear in the vocabulary, helping especially with small vocabularies.
The mechanism for copying out-of-vocabulary words from the input has been implemented differently to See et al. In their [implementation](https://github.com/abisee/pointer-generator) they convey the word identities through the model in order to be able to produce out-of-vocabulary words. We wanted to minimize changes to the Fairseq code base and took a different approach, which I'll describe below. The entire implementation is contained in one file (plus there's one new test).
Copying out-of-vocabulary words is possible by pre-processing the input and post-processing the output. The user may add special words to the end of the vocabulary that can be used in place of `<unk>` tokens to identify different input positions (e.g. `<unk-0>`, `<unk-1>`, `<unk-2>`, ...). The number of these special words is given to the model with the `--source-position-markers` argument; the model simply maps all of these to the same word embedding as `<unk>`. With simple post-processing, the user may retrieve the word at position N in the original text and use it in place of `<unk-N>`.
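For illustration, a minimal sketch of that post-processing step; the helper below is hypothetical and not part of this PR:
```python
import re

def restore_unknowns(source_line: str, hypo_line: str) -> str:
    """Replace <unk-N> markers in the output with the source word at position N."""
    source_words = source_line.split()

    def replace(match: re.Match) -> str:
        position = int(match.group(1))
        # Fall back to a plain <unk> if the marker points outside the source.
        return source_words[position] if position < len(source_words) else "<unk>"

    return re.sub(r"<unk-(\d+)>", replace, hypo_line)

print(restore_unknowns("the Zyzzyva beetle", "le coléoptère <unk-1>"))
# -> le coléoptère Zyzzyva
```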
I didn't find a good place to document this usage of the model, so let me know if you think I should improve the documentation somewhere.
This feature has not yet been discussed via a GitHub issue, but I'll open a new issue for discussion.
Pull Request resolved: https://github.com/pytorch/fairseq/pull/2529
Reviewed By: ngoyal2707
Differential Revision: D23398430
Pulled By: myleott
fbshipit-source-id: f2f26c8ce8802ae6cf95515637660348ff3fc457
Summary:
Recently some of our runs are getting:
"RuntimeError: Mismatch between actual and expected iterable length. Please report this to the fairseq developers."
f214567466
We never ran into this before because this is a new check by fairseq to be more strict with iterators.
Fix is to:
1. Account for the offset (i.e. load from checkpoint mid epoch) when propagating `take`. This fixes the issue of `next` returning too many things, which is what causes the error.
2. Update the underlying iterator when calling `take` on `BufferedIterator` and the length of the `BufferedIterator`. Although this doesn't cause the error, it is necessary to maintain consistency.
Reviewed By: myleott
Differential Revision: D23443012
fbshipit-source-id: 73c26db8392e5508a61acfda7ca40a24df89fabb
Summary: translation_multi_simple_epoch task only supports shared dictionary across all languages, so add the check in the task setup.
Reviewed By: pipibjc
Differential Revision: D23288388
fbshipit-source-id: 4236a096bcb75429b486ef8a9244e3ef0d5095f0
Summary:
PySpeech integration training tests have recently been stuck at end of epoch.
Digging into it, it looks like this is because the end of epoch check relies on this (https://fburl.com/diffusion/xt09z6n9):
```
def end_of_epoch(self) -> bool:
    """Returns whether the most recent epoch iterator has been exhausted"""
    return not self._cur_epoch_itr.has_next()
```
which is implemented like this in CountingIterator:
```
def has_next(self):
    """Whether the iterator has been exhausted."""
    return self.n < len(self)
```
It seems like D23172408 (110f9f0cc7) modified CountingIterator such that `len(self) > len(iter(self))` when `take()` is used. This mismatch causes `has_next` to return `True` for some PySpeech processes even when all elements in `iter(self)` have been consumed, causing training to get stuck.
My proposed fix is to remove the `self.early_stop` variable and just directly modify `self.total` and `self.iterable`, ensuring `len(self) == len(iter(self))`.
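A simplified sketch of the proposed fix (illustrative, not the exact fairseq code):
```python
import itertools

class CountingIteratorSketch:
    """take() shrinks both the reported length and the underlying iterable,
    keeping len(self) consistent with what iteration can actually yield."""

    def __init__(self, iterable, start=0):
        self.iterable = iterable
        self.n = start  # number of elements consumed so far
        self.total = start + len(iterable)

    def __len__(self):
        return self.total

    def __iter__(self):
        for x in self.iterable:
            self.n += 1
            yield x

    def has_next(self):
        return self.n < len(self)

    def take(self, n):
        self.total = min(self.total, n)
        # Truncate the iterable directly instead of keeping an early_stop flag.
        propagated = max(n - self.n, 0)
        self.iterable = itertools.islice(iter(self.iterable), propagated)

it = CountingIteratorSketch(list(range(10)))
it.take(5)
assert len(it) == len(list(it)) == 5 and not it.has_next()
```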
Reviewed By: myleott
Differential Revision: D23250734
fbshipit-source-id: efb5a38216783bded67f501135b2f68b9246b9dd
Summary:
# Before submitting
- [x] Was this discussed/approved via a Github issue? (no need for typos, doc improvements)
- [x] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/master/CONTRIBUTING.md)?
- [x] Did you make sure to update the docs?
- [x] Did you write any new necessary tests?
## What does this PR do?
This PR implements constrained decoding ([Hokamp & Liu, 2017](https://www.aclweb.org/anthology/P17-1141/); [Post & Vilar, 2018](https://www.aclweb.org/anthology/N18-1119/)) with vectorization for batching ([Hu et al., 2019](https://www.aclweb.org/anthology/N19-1090/)). In addition, it adds *ordered constraints*, where the constraints are generated on the target side in order, with zero or more unconstrained tokens in between. This variant allows for optimizations that increase speed and BLEU scores (when testing with random scraps from the references).
### Usage and quick start
It works with `fairseq-interactive` via a new command-line option: `fairseq-interactive --constraints [ordered,unordered]`, defaulting to `ordered` if nothing is provided. When active, it will split lines from STDIN on `\t`, with separate constraints each separated by a tab. For example (after downloading the [Fairseq WMT19 German--English model](https://github.com/pytorch/fairseq/blob/master/examples/wmt19/README.md)):
```bash
echo -e "Die maschinelle Übersetzung ist schwer zu kontrollieren.\thard\tinfluence" \
| [normalize.py](https://gist.github.com/mjpost/4c54446b7030d7c64b57461d27090650) \
| [tok.py](https://gist.github.com/mjpost/ed7456f6a987c533102fc121678ed302) \
| PYTHONPATH=$HOME/code/fairseq-constraints fairseq-interactive $modeldir \
--bpe fastbpe \
--bpe-codes $modeldir/bpecodes \
--constraints \
--constraints-both \
-s de -t en \
--path $modeldir/model1.pt \
--max-tokens 1000 \
--beam 5
```
Adding the `--constraints-both` option causes it to batch-decode the input sentence both with and without the constraints. When run with the Fairseq WMT19 German--English model, the following results are produced (here run on a CPU, don't be alarmed by the times!)
```text
S-0 Die masch@@ in@@ elle Über@@ setzung ist schwer zu kontrollieren .
W-0 1.844 seconds
C-0 hard
C-0 influence
H-0 -1.5333266258239746 Mach@@ ine trans@@ lation is hard to influence .
D-0 -1.5333266258239746 Machine translation is hard to influence .
P-0 -0.5434 -0.1423 -0.1930 -0.1415 -0.2346 -1.8031 -0.1701 -11.7727 -0.1815 -0.1511
S-0 Die masch@@ in@@ elle Über@@ setzung ist schwer zu kontrollieren .
W-0 1.844 seconds
H-0 -0.3731671869754791 Mach@@ ine trans@@ lation is difficult to control .
D-0 -0.3731671869754791 Machine translation is difficult to control .
P-0 -0.5434 -0.1423 -0.1930 -0.1415 -0.2346 -1.1430 -0.1665 -0.8482 -0.1678 -0.1514
2020-07-31 12:17:55 | INFO | fairseq_cli.interactive | Total time: 12.803 seconds; translation time: 3.688
```
Note the new tags present in the output:
* `C-#` records active constraints (after applying preprocessing) for a sentence
* `W-#` reports the sentence-level translation time (a useful unrelated feature I hope you'll accept)
Some unit tests are written (`fairseq/test_constraints.py`) but not yet integrated. Advice here on where to place this is welcome. I also have not run this through lint; if someone can tell me the command to run, I'd appreciate it.
### Implementation notes
This is largely self-contained, implemented in a new `LexicallyConstrainedBeamSearch` class in `search.py`. It does require a few minimal hooks from `_generate()` in `sequence_generator.py`, to ensure that constraints are updated at each timestep. (Edit: most changes in that file are documentation clarifications, corrections, and updates). Unconstrained sentences that are intermingled with constrained ones will not incur any time penalty, so long as they do not occur in the same batch.
Addresses https://github.com/pytorch/fairseq/issues/1536.
## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
## Did you have fun?
Make sure you had fun coding 🙃
Pull Request resolved: https://github.com/pytorch/fairseq/pull/2402
Reviewed By: alexeib
Differential Revision: D23188945
Pulled By: myleott
fbshipit-source-id: 9f5ed855f7a1dcf535b091c0ccf98b07fb9cbdd6
Summary:
# Before submitting
- [ ] Was this discussed/approved via a Github issue? (no need for typos, doc improvements)
- [x] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/master/CONTRIBUTING.md)?
- [x] Did you make sure to update the docs?
- [x] Did you write any new necessary tests?
## What does this PR do?
Brings the multiply_factor optimization used in memory-efficient fp16 training to mixed precision training. The methods multiply_grads and clip_grad_norm do not touch each gradient individually; instead they update a "multiply factor" that is then factored in when unscaling gradients.
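A rough sketch of the idea; the class below is illustrative, not the fairseq optimizer API:
```python
import torch

class MultiplyFactorSketch:
    """Defer all gradient scaling into one scalar applied at unscale time."""

    def __init__(self, params, loss_scale):
        self.params = list(params)
        self._multiply_factor = 1.0 / loss_scale  # undoes fp16 loss scaling

    def multiply_grads(self, c):
        # O(1): no gradient tensor is touched here.
        self._multiply_factor *= c

    def clip_grad_norm(self, max_norm):
        grads = [p.grad for p in self.params if p.grad is not None]
        grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
        scaled_norm = grad_norm * self._multiply_factor
        if max_norm > 0 and scaled_norm > max_norm:
            self._multiply_factor *= max_norm / scaled_norm.item()
        return scaled_norm

    def unscale_grads(self):
        # The accumulated factor is applied to each gradient exactly once.
        for p in self.params:
            if p.grad is not None:
                p.grad.data.mul_(self._multiply_factor)
        self._multiply_factor = 1.0
```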
## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
## Did you have fun?
Make sure you had fun coding 🙃
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1248
Reviewed By: myleott
Differential Revision: D23201396
Pulled By: andersonic
fbshipit-source-id: 6c6f64542893e0ecac72e132464bb334dcb9874d
Summary:
A first version of XLNMT multilingual project code release: Multilingual Training with multiple bitext
- Minor changes to
- fairseq/checkpoint_utils.py to add a finetuning option instead of using restore_file, which would restore from the original model when being requeued.
Reviewed By: myleott
Differential Revision: D22483494
fbshipit-source-id: 733300fd6a4d185e561c793ea668047c96f616c6
Summary:
A first version of XLNMT multilingual project code release: Multilingual Training with multiple bitext
- A new task to glue all things together: fairseq/tasks/translation_multi_simple_epoch.py
- Minor changes to
- fairseq/data/iterators.py to allow a dynamic batch sampler
- fairseq/checkpoint_utils.py to add a finetuning option instead of using restore_file, which would restore from the original model when being requeued.
Reviewed By: pipibjc
Differential Revision: D22483484
fbshipit-source-id: 283b67e538508f330b0968609b7dae64d26bea05
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/2308
Implemented Monte Carlo dropout. Added README to reproduce the results from our paper
that applies this idea for unsupervised quality estimation of NMT (joint work of Facebook AI and the University of Sheffield):
Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Frédéric Blain, Francisco Guzmán, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, Lucia Specia. Unsupervised Quality Estimation for Neural Machine Translation. Accepted to TACL
Retaining dropout at test time is not possible in the current code base. The statement
```
if not self.retain_dropout:
model.eval()
```
in `SequenceGenerator` does not have any effect, since the model's `training` attribute is already set to False by the method `make_generate_fast_`, which is applied before initializing `SequenceGenerator` in `generate.py`. `make_generate_fast_` throws an exception when trying to set `training` to True after its application. Also, if I am not mistaken, `self.training=True` can have other effects, so setting it to True only for the purpose of retaining dropout at test time might be confusing. I propose an alternative implementation where `retain_dropout` is an attribute of the FairseqModel class.
# Before submitting
- [N] Was this discussed/approved via a Github issue? (no need for typos, doc improvements)
- [Y] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/master/CONTRIBUTING.md)?
- [Y] Did you make sure to update the docs?
- [Y] Did you write any new necessary tests?
## What does this PR do?
New feature.
## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
## Did you have fun?
Make sure you had fun coding 🙃
Pull Request resolved: https://github.com/pytorch/fairseq/pull/2151
Reviewed By: ngoyal2707
Differential Revision: D22048889
Pulled By: myleott
fbshipit-source-id: 0d0d4784a7314fc7a45b76341fd3b8232b3e2cf0
Summary:
In PyTorch 1.5, using torch.full with an integer fill_value and without setting the dtype or out kwarg was deprecated, and it will soon throw a runtime error. In the future, torch.full will infer its dtype from the fill_value, and these calls would produce integer, not float, tensors. This update maintains the current behavior.
Created from Diffusion's 'Open in Editor' feature.
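The shape of the fix, sketched (the actual call sites in the diff differ):
```python
import torch

# Deprecated in PyTorch 1.5: integer fill_value without an explicit dtype.
#   t = torch.full((2, 3), 0)   # dtype will eventually be inferred as int64
# Pinning the dtype preserves the current (float) behavior:
t = torch.full((2, 3), 0, dtype=torch.float)
assert t.dtype == torch.float32
```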
Reviewed By: myleott
Differential Revision: D22161456
fbshipit-source-id: b5d687e4de83dba6e76cae6e61b5106bf5b320db
Summary:
…trogram
# Before submitting
- [ ] Was this discussed/approved via a Github issue? (no need for typos, doc improvements)
- [x] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/master/CONTRIBUTING.md)?
- [ ] Did you make sure to update the docs?
- [x] Did you write any new necessary tests?
## What does this PR do?
Fixes https://github.com/pytorch/fairseq/issues/1863.
## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
## Did you have fun?
Make sure you had fun coding 🙃
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1864
Reviewed By: yqwangustc
Differential Revision: D21663642
Pulled By: myleott
fbshipit-source-id: f411c5c01c7505375bec6d47554e85fb70877e9c
Summary:
A few changes here:
- update GroupedIterator and ShardedIterator to support counting. This will be useful on TPUs, since the TPU dataloading threads may advance faster than we can process them.
- add tests for the above
- in CountingIterator, rename `count` -> `n`. This is needed because `count` is overloaded for iterables (e.g., `list` defines a different `count` method, which is actually a search function).
- in CountingIterator, rename `override_len` -> `total` to be more consistent with other iterators (e.g., tqdm). This functionality was unused previously (it's only needed for TPUs), so the rename is easy.
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1166
Reviewed By: ngoyal2707
Differential Revision: D21373525
Pulled By: myleott
fbshipit-source-id: 102f3d50ed1a5163a7d1216ca5a179564a05dfe4
Summary:
# Before submitting
- [ ] Was this discussed/approved via a Github issue? (no need for typos, doc improvements)
- [x] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/master/CONTRIBUTING.md)?
- [ ] Did you make sure to update the docs?
- [x] Did you write any new necessary tests?
## What does this PR do?
Fixes https://github.com/pytorch/fairseq/issues/2022.
## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
## Did you have fun?
Make sure you had fun coding 🙃
Pull Request resolved: https://github.com/pytorch/fairseq/pull/2090
Reviewed By: cndn
Differential Revision: D21385984
Pulled By: myleott
fbshipit-source-id: 1428e02e625b8625df71a83c05dcf933c3f899df
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/2059
test_ensemble_sequence_generator and test_export_ensemble_model are green on fbcode master, but the PyTorch 1.5 release cut happened before the TorchScript fix, so updating the gate to 1.6.
Remove the quantization test from fairseq since FBGEMM is bound on the OSS side. Will add the test back in fbtranslate, but land this first to fix OSS-side failures.
Reviewed By: myleott
Differential Revision: D21231873
fbshipit-source-id: 8a2ad7dbed118ca8e3f4c351c399a82fd9740445
Summary:
FUNCTIONALITY:
This diff provides two core pieces of functionality
- Adds training with quantization noise from "Training with Quantization Noise for Extreme Model Compression" - controlled by the "quant_noise" and "quant_noise_block_size" parameters. Added in embeddings, attention, FFN for BERT and Transformer LM training
- Adds quantization with product quantization based on code from "And the bit goes down: Revisiting the quantization of neural networks" (Stock et al., 2019). This is applied to a fairseq-trained model to quantize it after training.
TODO:
-> Pierre, look at quantization code
-> int4 and int8 quantization will be added soon.
EVALUATED TEST CASES:
0. Training of LM and BERT models starts from scratch with no errors -> yes
1. Retrain LM from scratch with code, no quantization, reproduces Wikitext-103 LM results -> yes, see /checkpoint/angelafan/qn_open_source_noise
2. Reload previously trained LM from scratch, not trained with quant noise, reproduces Wikitext-103 LM results -> yes
3. Train LM from scratch with code, not trained with quant noise, reproduces Wikitext-103 LM results -> yes, see /checkpoint/angelafan/qn_open_source_baseline
4. Train BERT model from scratch with code, no quantization, training curve looks the same as before -> yes
5. Check wps during training and wps during inference, no large change from before -> yes
6. Check structured dropout isn't being applied at eval time -> yes
7. Works in combination with LayerDrop -> yes
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1896
Reviewed By: myleott
Differential Revision: D20609420
Pulled By: huihuifan
fbshipit-source-id: 94468dd811c4caaaef46a9fab2b8d381f9d2b955
Summary:
# Before submitting
- [ ] Was this discussed/approved via a Github issue? (no need for typos, doc improvements)
- [x] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/master/CONTRIBUTING.md)?
- [ ] Did you make sure to update the docs?
- [x] Did you write any new necessary tests?
## What does this PR do?
Fixes https://github.com/pytorch/fairseq/issues/2027 .
## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
## Did you have fun?
Make sure you had fun coding 🙃
Pull Request resolved: https://github.com/pytorch/fairseq/pull/2028
Reviewed By: ngoyal2707
Differential Revision: D21134466
Pulled By: myleott
fbshipit-source-id: 070d7f971bc8d88ec1ca43d52797e2f0b07fb6af
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/2016
This updates the Fairseq LSTM to a jitable version.
Reviewed By: cndn
Differential Revision: D20937370
fbshipit-source-id: 26f677fcb58bbeaa507d303e9a81060ff78f0502
Summary:
With 1 GPU, BMUF is no longer required; instead, it works just like simple model training.
Also add a unit test for single-GPU BMUF.
Reviewed By: jay-mahadeokar
Differential Revision: D21033060
fbshipit-source-id: 9030187c05d49548222c8d1e2fe9534a6c6c4389
Summary: Moving `_test_save_and_load()` up to the top level for possible reuse across classes.
Reviewed By: cndn
Differential Revision: D20971566
fbshipit-source-id: b9d9c554d03f26cd43eee9f209e1c1367679af72
Summary: The fix in MHA is suggested by driazati, to avoid JIT compilation of the if branch in MHA forward when scripting. Without this, quantization wouldn't work. Details in https://fb.workplace.com/groups/2240361332735959/permalink/626166461295703/
Reviewed By: jhcross
Differential Revision: D20881076
fbshipit-source-id: b50347b45cd7dbdef02ac7b71316ba734019f57e
Summary:
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1127
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1953
Script the `reorder_incremental_states` in the base FairseqModel
Remove the overridden scriptable `reorder_incremental_states` in the TransformerModel
Change the decoder_len, since len(Tuple) is supported in Script
Relanded reverted diff D20797390
Reviewed By: myleott
Differential Revision: D20896200
fbshipit-source-id: cc4ae34f89f16007656cce6ec6f7e01b13899278
Summary:
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1120
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1940
Deprecate the SequenceGenerator in Fairseq with the scripted version.
Passes all integration unit tests.
- Copy ScriptSequenceGenerator to SequenceGenerator:
- Modified the forward_decoder to fix a bug when using adaptive_softmax in `get_prob_normalize` (marked with the inline comment)
- Add support for other EnsembleModels as input arg (marked with the inline comment)
- Add `FBEnsembleModelWithFork` to support fork/join in EnsembleModel
- Add `test_fb_ensemble_model` to test the fork/join feature
- Still have bugs in the fork/join feature when running in the Fairseq interface (like generation and interactive). Needs further investigation P128130029. cc cndn, jhcross
- Modified the SequenceGenerator initialization interface
- Cleaned up the code: deleted unused functions `get_normalized_probs` and `_decode`
Reland reverted diff D20685075
Reviewed By: cndn
Differential Revision: D20895977
fbshipit-source-id: 424ee318e67d5d6ffed3edb92c7fa78485ba34af
Summary:
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1127
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1953
Script the `reorder_incremental_states` in the base FairseqModel
Remove the overridden scriptable `reorder_incremental_states` in the TransformerModel
Change the decoder_len, since len(Tuple) is supported in Script
Reviewed By: myleott
Differential Revision: D20797390
fbshipit-source-id: ab29874973adc5dbd556c591942a0e071c81fc52
Summary:
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1120
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1940
Deprecate the SequenceGenerator in Fairseq with the scripted version.
Passes all integration unit tests.
- Copy ScriptSequenceGenerator to SequenceGenerator:
- Modified the forward_decoder to fix a bug when using adaptive_softmax in `get_prob_normalize` (marked with the inline comment)
- Add support for other EnsembleModels as input arg (marked with the inline comment)
- Add `FBEnsembleModelWithFork` to support fork/join in EnsembleModel
- Add `test_fb_ensemble_model` to test the fork/join feature
- Still have bugs in the fork/join feature when running in the Fairseq interface (like generation and interactive). Needs further investigation P128130029. cc cndn, jhcross
- Modified the SequenceGenerator initialization interface
- Cleaned up the code: deleted unused functions `get_normalized_probs` and `_decode`
Reviewed By: myleott
Differential Revision: D20685075
fbshipit-source-id: 046b76874465a70d8118a97ad670311c6ce1d1c8
Summary:
# Before submitting
- [ ] Was this discussed/approved via a Github issue? (no need for typos, doc improvements)
- [ ] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/master/CONTRIBUTING.md)?
- [ ] Did you make sure to update the docs?
- [ ] Did you write any new necessary tests?
## What does this PR do?
Fixes validation happening twice at the end of an epoch after the refactor. Spotted by freewym
here: b5dad3b7e0 (r38103577)
## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
## Did you have fun?
Make sure you had fun coding 🙃
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1934
Reviewed By: myleott
Differential Revision: D20724205
Pulled By: louismartin
fbshipit-source-id: 8c26c39b9904508780e8542813797c8e1306ca80
Summary:
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1125
Pull Request resolved: https://github.com/pytorch/translate/pull/695
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1927
- Switches the model to the scripted sequence generator recently implemented in fairseq. This involved making the input/output format of this model conform to that of the Fairseq TransformerEncoder/Decoder
- Modified the `EncoderOut` format for the fairseq transformer and added optional fields needed for the copy-ptr decoder
- Switches to using WordEmbedding directly instead of the non-scriptable EmbeddingList for the src/trg embedding layer
- Small assorted syntactic changes to make it jit-scriptable
- Adds a torchscriptify method for this model. Preliminary latency seems similar to the unexported model. Also verified that the outputs match
- Currently the Roberta decoupled model is not scriptable because the base TransformerSentenceEncoder it is based on is not scriptable. We can look at adding that later
Reviewed By: einolghozati
Differential Revision: D20687247
fbshipit-source-id: 8232972bba2f1b2df4100f3c1776b6bad08a71db
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1894
Having a uniform return type for `FairseqEncoder` makes these test models function more similarly to real models.
Reviewed By: myleott, cndn
Differential Revision: D20596971
fbshipit-source-id: a744614c015af9b150f2b0ae8381b1368556f738
Summary:
# Before submitting
- [x] Was this discussed/approved via a Github issue? (no need for typos, doc improvements)
- [x] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/master/CONTRIBUTING.md)?
- [x] Did you make sure to update the docs?
- [x] Did you write any new necessary tests?
## What does this PR do?
Fixes https://github.com/pytorch/fairseq/issues/1830
Adds tests for RoBERTa (masked_lm, classification, single regression, multiple regression)
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1831
Reviewed By: ngoyal2707
Differential Revision: D20446010
Pulled By: myleott
fbshipit-source-id: 9f37bcedf0910d85446245d71bc234bc74c62da5
Summary:
# Before submitting
- [ ] Was this discussed/approved via a Github issue? (no need for typos, doc improvements)
- [x] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/master/CONTRIBUTING.md)?
- [ ] Did you make sure to update the docs?
- [x] Did you write any new necessary tests?
## What does this PR do?
Fixes https://github.com/pytorch/fairseq/issues/1791.
## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
## Did you have fun?
Make sure you had fun coding 🙃
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1792
Reviewed By: jmp84
Differential Revision: D20322704
Pulled By: myleott
fbshipit-source-id: 3cfa1bddda06b966e9dc9bc8ff183009d844b23c
Summary:
[This commit](dd1298e15f) made it so that duplicate entries in a dictionary are ignored. Unfortunately the Camembert model depends on overwriting `<unk>`, `<s>` and `</s>`.
The proposed solution here is to allow the dictionary to have entries like:
```
<unk> 999 #fairseq:overwrite
<s> 999 #fairseq:overwrite
</s> 999 #fairseq:overwrite
, 999
▁de 999
. 999
(...)
```
These will preserve the old overwriting behavior. Thus we can release a new `camembert.v0.tar.gz` with a dictionary like above and it works.
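A simplified sketch of the parsing rule; the helper name is illustrative:
```python
def parse_dictionary_line(line: str):
    """Return (word, count, overwrite) for one dictionary line."""
    line = line.rstrip()
    overwrite = False
    if line.endswith("#fairseq:overwrite"):
        line = line[: -len("#fairseq:overwrite")].rstrip()
        overwrite = True  # entry may replace an existing symbol like <unk>
    word, count = line.rsplit(" ", 1)
    return word, int(count), overwrite

assert parse_dictionary_line("<unk> 999 #fairseq:overwrite") == ("<unk>", 999, True)
assert parse_dictionary_line("▁de 999") == ("▁de", 999, False)
```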
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1073
Reviewed By: kahne
Differential Revision: D20284569
Pulled By: myleott
fbshipit-source-id: bf78fbff13c94bf8a6485cbdda62305ddc30c056
Summary:
# Before submitting
- [x] Was this discussed/approved via a Github issue? (no need for typos, doc improvements)
- [x] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/master/CONTRIBUTING.md)?
- [x] Did you make sure to update the docs?
- [x] Did you write any new necessary tests?
## What does this PR do?
Fixes https://github.com/pytorch/fairseq/issues/1672 in part (part 1: [context](https://github.com/pytorch/fairseq/pull/1714#issuecomment-587507040))
## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
## Did you have fun?
Make sure you had fun coding 🙃
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1729
Differential Revision: D20049353
Pulled By: myleott
fbshipit-source-id: 732077a1cc339c9f7ebe26dae42a7e8d7b5a07b4
Summary:
We are somewhat inconsistent in whether we're using 0-based or 1-based indexing for epochs. This should fix things to be 0-based internally, with logging and checkpoint naming still using 1-based indexing.
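Sketched, the convention is simply:
```python
internal_epoch = 0                                  # first epoch, 0-based internally
display_epoch = internal_epoch + 1                  # logs show "epoch 1"
checkpoint_name = f"checkpoint{display_epoch}.pt"   # saved as checkpoint1.pt
```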
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1053
Reviewed By: spencerp
Differential Revision: D20160715
Pulled By: myleott
fbshipit-source-id: 4ed94f9c371e1bfe29bcfa087fa6756507d6e627
Summary:
sanitized vq-wav2vec implementation. i will also add docs to this. i have a fixed-up checkpoint that this code can load and verified that it produces the same results as what we used in the paper
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1029
Differential Revision: D20129246
Pulled By: alexeib
fbshipit-source-id: f72f455e0c309168e644ab86ec18c768c308da98
Summary:
1. Overwrite the base class function `get_normalized_probs` in the scriptable TransformerModel.
2. Change the unit test setup to match the Transformer decoder output format.
3. Initialize the buffer in the simple sequence generator [WIP]. This is the initial step toward scripting the sequence generator from the simple scriptable version.
4. Refactor the unit test of the simple sequence generator.
5. Change the input format of the simple sequence generator and its unit test.
Reviewed By: myleott
Differential Revision: D20017859
fbshipit-source-id: a3e93b57c22e49840e460469fa2b1c530346886d
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1653
Earlier we had some issues with pickling: type information got lost. Fixed in https://github.com/pytorch/pytorch/pull/32569.
These save_and_load tests are added as protection for the future.
Reviewed By: myleott
Differential Revision: D19435988
fbshipit-source-id: 560ea65ed3493bebcf394327818364b3fcd6fc92
Summary:
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1011
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1620
Make Fairseq transformer scriptable. Discussion points on possible code refactoring:
(1) Original decoder output is a tuple (x, {"attn": attn, "inner_states": inner_states}). TorchScript does not support dictionary with values of different types (attn: Tensor, inner_states: List[Tensor]). Current workaround is to use [attn] for attention field and access via output["attn"][0] in downstream. This is currently used in fairspeq custom transformer code. Another (maybe) cleaner alternative is to use namedtuple for decoder output but involves tons of downstream changes too.
(2) Currently TorchScript doesn't support **kwargs. Some unused arguments might get passed in due to polymorphism. Now the only workaround I can think of is to add possible unused arguments, (e.g. line 666 in transformer.py)
Reviewed By: myleott
Differential Revision: D19234599
fbshipit-source-id: db3dd364ecf3ae14fb7ac8c0928bd0ebe250f19d
Summary:
When training with `--fp16` we usually flatten the grads since it's faster. But flat grads are not semantically equivalent for certain optimizers (e.g., Adafactor, LAMB), thus the user needed to be aware of this and set `--fp16-no-flatten-grads`. Let's raise a RuntimeError in this case instead.
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1010
Differential Revision: D19575773
Pulled By: myleott
fbshipit-source-id: bac99c3026f9870e6127e0fa55f70e8a3e4507dc
Summary:
* Now that we have `FairseqIncrementalState`, we can move `get_incremental_state` and `set_incremental_state` as methods in that class, instead of having the helper functions in `utils.py`. I think this will eventually help with type checking too.
* The incremental ID logic was overly complicated; we can just use `uuid` to generate a unique ID for every instance (see the sketch after this list).
* Add missing `with_incremental_state` to light/dynamic conv modules.
* Add additional unit test: `test_incremental_state_multihead_attention`
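A minimal sketch of the resulting shape, simplified relative to the actual fairseq code:
```python
import uuid
from typing import Any, Dict, Optional

class FairseqIncrementalStateSketch:
    """Each instance gets a uuid-based key prefix; get/set become methods
    on the class instead of helper functions in utils.py."""

    def __init__(self):
        self._incremental_state_id = str(uuid.uuid4())

    def _full_key(self, key: str) -> str:
        return f"{self._incremental_state_id}.{key}"

    def get_incremental_state(
        self, incremental_state: Optional[Dict[str, Any]], key: str
    ) -> Optional[Any]:
        if incremental_state is None:
            return None
        return incremental_state.get(self._full_key(key))

    def set_incremental_state(
        self, incremental_state: Optional[Dict[str, Any]], key: str, value: Any
    ) -> None:
        if incremental_state is not None:
            incremental_state[self._full_key(key)] = value
```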
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1005
Test Plan:
* unit tests
Also confirmed this matches master:
```
$ python generate.py ~/data/data-bin/wmt16_en_de_bpe32k --path /checkpoint/myleott/s3/models/wmt16.en-de.joined-dict.transformer/model.pt --beam 4 --lenpen 0.6 --remove-bpe --quiet
(...)
2020-01-22 09:53:38 | INFO | fairseq_cli.generate | Generate test with beam=4: BLEU4 = 29.28, 60.8/35.1/22.8/15.3 (BP=0.997, ratio=0.997, syslen=62859, reflen=63078)
```
Reviewed By: cndn
Differential Revision: D19517908
Pulled By: myleott
fbshipit-source-id: a406490e342d0d30a9231bf823d3350999bda4c0
Summary:
Currently, the LSTM models in Fairseq master can only be used in an encoder/decoder setting, for example, in `class LSTMModel(FairseqEncoderDecoderModel)`. This PR adds a standalone LSTM decoder language model.
Changes:
- adds support for `LSTMDecoder` in cases where an encoder is not present, for instance, where `encoder_output_units=0`.
- fixes bugs in `LSTMDecoder` that only become apparent when using it in a standalone fashion, for example, not handling `src_lengths` as an optional argument.
- adds `class LSTMLanguageModel(FairseqLanguageModel)` for training LSTM language models.
- tests for the `LSTMLanguageModel`. Changes to the `LSTMDecoder` are handled by existing test cases.
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/934
Reviewed By: myleott
Differential Revision: D18816310
Pulled By: joshim5
fbshipit-source-id: 4773695a7f5d36aa773da8a45db2e02f76c968a9
Summary:
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1007
# Before submitting
- [x] Was this discussed/approved via a Github issue? (no need for typos, doc improvements)
- [x] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/master/CONTRIBUTING.md)?
- [x] Did you make sure to update the docs?
- [ ] Did you write any new necessary tests?
## What does this PR do?
Fixes https://github.com/pytorch/fairseq/issues/1622
## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
## Did you have fun?
Make sure you had fun coding 🙃
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1631
Differential Revision: D19555401
Pulled By: myleott
fbshipit-source-id: c62dfc109e09a7d732a9fc73ac6feef63a8dd341
Summary:
Pull Request resolved: https://github.com/pytorch/translate/pull/683
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1612
Make SinusoidalPositionalEmbedding scriptable. Mostly adding types. The only change that affects lots of downstream code is to have max_positions as a member variable instead of a method.
Reviewed By: myleott
Differential Revision: D18924939
fbshipit-source-id: 2b6486563e9ec5cc34bcf11acdff9054658f4674
Summary:
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1002
Pull Request resolved: https://github.com/pytorch/translate/pull/681
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1524
Make fairseq MultiheadAttention scriptable. Looking for feedback.
1. Add types
2. Move incremental state management logic from util functions to initializers. TorchScript in general doesn't support global dicts. As a result, each module with multihead attention in it assigns itself a fairseq_instance_id in the initializer.
3. There might be opportunities to make assertions and annotations cleaner.
Reviewed By: myleott
Differential Revision: D18772594
fbshipit-source-id: 377aef4bbb7ef51da5b6bac9a87a6f7b03b16fe1
Summary:
* fix: mid-epoch validation metrics were previously polluting training metrics
* fix: mid-epoch metrics were not properly saved/restored in checkpoints
* added tests, both for metrics and for mid-epoch reproducibility
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1634
Differential Revision: D19470714
Pulled By: myleott
fbshipit-source-id: 491fa8d830b653cdd6a86095645aabcac758d214
Summary: This is needed to support other build environments (e.g., Windows)
Reviewed By: ngoyal2707
Differential Revision: D19409984
fbshipit-source-id: e970510781abf92f1b02d0961bc30e1210b524dd
Summary:
This PR implements a new generation strategy that we experimented with in project Pinocchio (https://github.com/fairinternal/Pinocchio), see the paper submission in: https://fburl.com/hduj2me7.
Specifically in this PR:
- added a Diverse Beam Search variant as described in https://arxiv.org/abs/1611.08562
- moved the Search object generation out of `sequence_generation.py`, which allows for limiting the number of kwargs passes around
- made sure the above changes are backward compatible based on grep - P124083926
- added test cases covering these scenarios
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/953
Test Plan:
- `python -m unittest tests.test_binaries -v`- including added test cases, see issues below for some details
- `python -m unittest tests.test_sequence_generator -v` - including added test cases
- tested locally in conjunction with the Pinocchio repo
- grepped for all instantiations of `SequenceGeneration`, made sure they're backward compatible
# Issues
- when I try to run all tests with the `python -m unittest tests.test_binaries -v` command, the execution gets stuck on `test_binaries.TestTranslation.test_generation` - the test otherwise passes without problems when run individually. Is this a known problem?
- discovered T59235948 - assigned to fairseq oncall
Reviewed By: myleott, fabiopetroni
Differential Revision: D19142394
Pulled By: ola13
fbshipit-source-id: d24543424c14a9537e7b6485951d9f841da62b07
Summary:
- Fix github issue [1393](https://github.com/pytorch/fairseq/issues/1393), [1315](https://github.com/pytorch/fairseq/issues/1315).
- Add unit test to cover training, validation and generation for multilingual model to make sure they can run without problem. (didn't test the correctness)
Reviewed By: lematt1991
Differential Revision: D19149575
fbshipit-source-id: 9ec9000d037cc5c3bd8457feb527f2305375a442
Summary: Added unit test for PathManager file io (with or without fvcore).
Reviewed By: theweiho
Differential Revision: D18880067
fbshipit-source-id: 969c2be90415d22041b8276b7a5ff264571561d0
Summary:
This diff mainly first contains the implementation for NAT-CRF model:
- Fast Structured Decoding for Sequence Models (NAT-CRF, Sun et al., 2019)
We implemented a dynamic CRF module and incorporated it into the implementation of the vanilla NAT model in order to reproduce the performance reported in the paper.
We implemented the length beam as well as reranking from a learned autoregressive model in the iterative-refinement-generator.
We also implemented new ensemble code which enables ensembling for all NAT models, not only the Levenshtein Transformer itself. We refactored the code and moved the models into `fairseq/models/nat`.
Finally, we updated the README.md for NAT models.
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/925
Differential Revision: D18738085
Pulled By: MultiPath
fbshipit-source-id: 4e421c5d52d2456fbe99e7863d715c756b1fd49b
Summary:
https://github.com/pytorch/fairseq/pull/1097 added key padding mask history in TransformerDecoderLayer, but during an edge case where only the current or only the previous key_padding_mask exists, the resulting key_padding_mask is the wrong size.
This diff adds empty columns in such a case to ensure key_padding_mask is a usable size.
Reviewed By: myleott
Differential Revision: D18224313
fbshipit-source-id: c9fb7266baf0a2d79a66704e00a5ea8bd2987ff6
Summary:
This unit test guards the bmuf code.
change:
1. distributed_init assumes we are always using a CUDA device, which is not the case if you are using the "gloo" backend on a CPU machine.
Reviewed By: jay-mahadeokar
Differential Revision: D17821391
fbshipit-source-id: 28e1bb39f7a4889b1dc6bd636b7c499e55bfc69a
Summary:
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/877
This PR implements guided alignment training described in "Jointly Learning to Align and Translate with Transformer Models (https://arxiv.org/abs/1909.02074)".
In summary, it allows for training selected heads of the Transformer Model with external alignments computed by Statistical Alignment Toolkits. During inference, attention probabilities from the trained heads can be used to extract reliable alignments. In our work, we did not see any regressions in the translation performance because of guided alignment training.
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1095
Differential Revision: D17170337
Pulled By: myleott
fbshipit-source-id: daa418bef70324d7088dbb30aa2adf9f95774859
Summary:
This PR implements a new attention module which combines cross-attention (encoder-decoder attention) and the decoder self-attention. This work was accepted as an abstract at WeCNLP 2019 (https://www.wecnlp.ai/wecnlp-2019).
Cross+Self-Attention reduces the number of parameters and increases inference speed without any degradation in translation quality.
More details can be found in the attached [abstract](https://github.com/pytorch/fairseq/files/3561282/paper.pdf)
Pull Request resolved: https://github.com/pytorch/fairseq/pull/1097
Differential Revision: D17653168
Pulled By: myleott
fbshipit-source-id: deb834c2c78a229d7418ffbfea20ba3ce252991c
Summary:
Code for our NeurIPS paper [Levenshtein Transformer](https://arxiv.org/abs/1905.11006)
* Added Levenshtein Transformer model, task and criterion class
* Added iterative NAT Transformer, insertion Transformer and CMLM Transformer model class for baselines
* Add an option for prepending BOS to dictionary class and translation task class
Reviewed By: myleott
Differential Revision: D17297372
fbshipit-source-id: 54eca60831ae95dc721c2c34e882e1810ee575c7
Summary:
As discussed with Naman earlier today. Weighted sampling with
replacement can be done on a per-epoch basis using `set_epoch()`
functionality, which generates the samples as a function of random seed
and epoch.
Additionally, `FairseqTask` needs to set the starting epoch for the
dataset at the very beginning of iterator construction.
Not yet implemented is the per-epoch iterator construction, which
is necessary to actually regenerate the batches for each epoch.
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/861
Differential Revision: D17460687
Pulled By: jma127
fbshipit-source-id: 1c2a54f04ac96b3561c100a6fd66a9fccbe3c658
Summary:
Initial code for the speech recognition task.
Right now only one ASR model is added - https://arxiv.org/abs/1904.11660
unit test testing:
python -m unittest discover tests
also ran model training with this code and obtained
5.0 test_clean | 13.4 test_other
on librispeech with pytorch/audio features
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/810
Reviewed By: cpuhrsch
Differential Revision: D16706659
Pulled By: okhonko
fbshipit-source-id: 89a5f9883e50bc0e548234287aa0ea73f7402514
Summary:
The previous BSD+PATENTS license was controversial. We have been
approved to relicense fairseq under the MIT license.
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/786
Differential Revision: D16560654
Pulled By: myleott
fbshipit-source-id: f78b1beb4f2895dd7b9bfc79f5f952a2bfb94034
Summary:
Pull Request resolved: https://github.com/facebookresearch/pytext/pull/804
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/746
Pull Request resolved: https://github.com/pytorch/fairseq/pull/894
Adding an implementation of the sparse transformer to multi-head attention using the fixed attention pattern specified https://arxiv.org/pdf/1904.10509.pdf. The sparse_mask masks out words using -inf; after softmax, -inf becomes 0. Thus, a mask does not need to be re-calculated and re-applied when multiplying attn_weights and values.
Four inputs are added to the config: sparse, is_bidirectional, stride, expressivity. If we are using the sparse transformer, is_bidirectional, stride, and expressivity must be specified (there are defaults). If is_bidirectional is False, it masks values using the fixed attention pattern described in the paper. If is_bidirectional is True, subset one includes all values in the current stride window and a summary from every stride window; all other values are masked. Stride (L in the paper) controls the window size and expressivity (c in the paper) controls the size of the summary.
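A rough sketch of the unidirectional (is_bidirectional=False) pattern as described above; indexing details are illustrative, not the exact implementation:
```python
import torch

def fixed_sparse_mask(seq_len: int, stride: int, expressivity: int) -> torch.Tensor:
    """Additive attention mask: 0 where attention is allowed, -inf elsewhere.
    Each position sees its own (causal) stride window plus the last
    `expressivity` summary positions of every earlier window."""
    mask = torch.full((seq_len, seq_len), float("-inf"))
    for i in range(seq_len):
        window_start = (i // stride) * stride
        mask[i, window_start : i + 1] = 0.0  # local window, causal
        for ws in range(0, window_start, stride):
            summary_start = ws + stride - expressivity
            mask[i, summary_start : ws + stride] = 0.0  # summary positions
    return mask

# After softmax the -inf entries become 0, so the mask never needs to be
# re-calculated or re-applied when multiplying attn_weights and values.
print(fixed_sparse_mask(8, stride=4, expressivity=2))
```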
Reviewed By: borguz
Differential Revision: D16042988
fbshipit-source-id: c59166dc7cfe89187a256e4076000c2458842fd5
Summary:
See #467. Ping myleott to review.
This is a work-related contribution. Ping lark to review.
Pull Request resolved: https://github.com/pytorch/fairseq/pull/794
Differential Revision: D15756816
Pulled By: myleott
fbshipit-source-id: 6dce3ff3a713bf5f60e5782bc260b2ca9d2c0a9b
Summary: We never actually load the model parameters from an XLM model when using transformer_from_pretrained_xlm. Also, change encoder_learned_pos from True -> False
Reviewed By: liezl200
Differential Revision: D15629061
fbshipit-source-id: 759eadc88041eae94505477960de57dd78a99dcb
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/747
In https://github.com/pytorch/fairseq/pull/647, checkpoint averaging
was not implemented correctly when it comes to shared parameters. This diff
has the right implementation and a test case to guard against future changes.
Reviewed By: myleott
Differential Revision: D15402943
fbshipit-source-id: 8004836d5c2571814ea54844650618008a9ee522
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/730
Pull Request resolved: https://github.com/pytorch/translate/pull/528
Add/modify the necessary functions for ConcatDataset to work in PytorchTranslateTask and replace MultiCorpusSampledDataset, which doesn't support mixed batches.
Any ideas on how to implement the collater here for mixed batches? For now I'm just using the collater of the first dataset.
Reviewed By: liezl200
Differential Revision: D15260872
fbshipit-source-id: 14b148c506e9f8ebf4fe60a49f95444d4123d76f
Summary:
Move `load_checkpoint`, `save_checkpoint` and `reload_train` from train.py to checkpoint_utils.py
Move `get_perplexity` from train.py to utils.py.
This will make train.py lighter and allow us to reuse all of this utility functionality when fairseq is used as an external library.
Reviewed By: myleott
Differential Revision: D15289607
fbshipit-source-id: 4b7c95225ac22e402bcda3497811361809110df1
Summary: the old no_bias_kv argument for masked_lm models is not used. Split it into 2 arguments and expose them.
Reviewed By: myleott
Differential Revision: D15266154
fbshipit-source-id: 60b041f8370ca1d8869ed3402fb9a67d1cd8e0e8
Summary:
Following discussion in https://github.com/pytorch/fairseq/issues/574:
- Implemented MMapIndexedDataset and MMapIndexedDatasetBuilder compatible with IndexedDataset/IndexedDatasetBuilder
- Update scripts/read_binarized.py to support new MMapIndexedDataset
- Options '--raw-text' and '--lazy-load' replaced with '--dataset-impl', and the option definition moved from custom task args to the more high-level options.add_dataset_args() (more appropriate)
- Implemented also utils functions in indexed_dataset: make_dataset(), dataset_exists()
Pull Request resolved: https://github.com/pytorch/fairseq/pull/589
Differential Revision: D14597128
Pulled By: myleott
fbshipit-source-id: 4e92d99920cbaa52cfe5a0f1f5d9ae5c92d4268e
Summary:
Co-authored-by: myleott <myleott@fb.com>
Changing `data` to be a `str` with a colon-separated list for loading sharded datasets. This change is useful for loading large datasets that cannot fit into memory. The large dataset can be sharded, and then each shard is loaded in one epoch in a round-robin manner.
For example, if there are `5` shards of data and `10` epochs then the shards will be iterated upon `[0, 1, 2, 3, 4, 0, 1, 2, 3, 4]`.
myleott We need to look into `translation.py` as it currently already expects a list and then concats the datasets.
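A sketch of the round-robin shard selection described above (hypothetical helper):
```python
def shard_for_epoch(data: str, epoch: int) -> str:
    """Pick the shard for a (0-based) epoch from a colon-separated path."""
    paths = data.split(":")
    return paths[epoch % len(paths)]

shards = "s0:s1:s2:s3:s4"
assert [shard_for_epoch(shards, e) for e in range(10)] == [
    "s0", "s1", "s2", "s3", "s4", "s0", "s1", "s2", "s3", "s4",
]
```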
Pull Request resolved: https://github.com/pytorch/fairseq/pull/696
Differential Revision: D15214049
fbshipit-source-id: 03e43a7b69c7aefada2ca668abf1eac1969fe013
Summary:
Pull Request resolved: https://github.com/pytorch/translate/pull/508
The previous version applied the temperature after the softmax. Fix that, and
also generalize so it works with other search approaches.
Pull Request resolved: https://github.com/pytorch/fairseq/pull/694
Differential Revision: D15175160
Pulled By: myleott
fbshipit-source-id: cc87ff0e97a8a1dd37f9983163f58a8641155ab0
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/666
Option to load the XLM weights into only the encoder or the decoder
Reviewed By: pipibjc
Differential Revision: D14881004
fbshipit-source-id: 6d0d598ea9c445ec468f71b8e855712de89a5dac
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/639
Add argument sampling_func in the constructor to enable custom sampling over a list of dataset keys. The default strategy is to sample uniformly as it did previously.
Reviewed By: liezl200
Differential Revision: D14965774
fbshipit-source-id: f3285688a9ae3729c0ba12c22254c1144d0eea9e
Summary: sequence_generator assumes that the model input is a 2D tensor of longs. But it can be something like a 3D tensor of floats, and we should be able to handle this as long as the first dimension is the batch size followed by source lengths.
Reviewed By: myleott
Differential Revision: D14420044
fbshipit-source-id: bf8b1e42ad1873f7b803c1a377b0af21648db015
Summary:
Pull Request resolved: https://github.com/pytorch/fairseq/pull/541
Just a combo of the stacked pair D14057943 & D14176011.
Made this as a separate diff because there seems to be some issue with porting a stacked change into the github repo
Differential Revision: D14251048
fbshipit-source-id: 0a47f534a69d6ab2ebe035fba40fd51748cccfb8
Summary:
The `preprocess.py` script has been refactored in order to:
1. Use the `options` module for command-line argument parsing. This gives `preprocess.py` the ability to load custom modules with the `--user-dir` flag (already implemented for all other binaries)
2. Dictionary loading and building code has moved to the Task implementation. This allows custom Dictionary classes to be used during the data generation step.
Pull Request resolved: https://github.com/pytorch/fairseq/pull/448
Differential Revision: D13674819
Pulled By: myleott
fbshipit-source-id: b40648a98ed6c08284577e5ec25876e018d8c822
Summary:
Changelog:
- `4889802`: can now detokenize sentencepiece output with `--remove-bpe=sentencepiece` (fixes #331). Also added `--sacrebleu` for computing detokenized BLEU.
- `0d76427`: fix assertion error when training language model with dataset containing empty sentences
- minor bug and style fixes
Pull Request resolved: https://github.com/pytorch/fairseq/pull/483
Differential Revision: D13867899
Pulled By: myleott
fbshipit-source-id: 25c940b847fe270262ac8f5ac838407b3977fdda