fix failing convtransformer test (#3107)

Summary:
# Before submitting

- [ ] Was this discussed/approved via a Github issue? (no need for typos, doc improvements)
- [ ] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/main/CONTRIBUTING.md)?
- [ ] Did you make sure to update the docs?
- [ ] Did you write any new necessary tests?

## What does this PR do?
Fixes # (issue).

## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues, there's a high chance it will not be merged.

## Did you have fun?
Make sure you had fun coding!

Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/3107

Reviewed By: cndn

Differential Revision: D34354339

Pulled By: sravyapopuri388

fbshipit-source-id: 50888706123d246c13d2cbb22d0e043740ff6bf5
Commit: 420136acd2 (parent 5b87224417)
Author: spopuri, 2022-02-22 11:17:26 -08:00
Committed by: Facebook GitHub Bot


@@ -21,7 +21,10 @@ class TestConvtransformerSimulTrans(TestFairseqSpeech):
         """Only test model loading since fairseq currently doesn't support inference of simultaneous models"""
         _, _, _, _ = self.download_and_load_checkpoint(
             "checkpoint_best.pt",
-            arg_overrides={"config_yaml": "config_gcmvn_specaug.yaml"},
+            arg_overrides={
+                "config_yaml": "config_gcmvn_specaug.yaml",
+                "load_pretrained_encoder_from": None,
+            },
         )
         return
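
For context, here is a minimal sketch (not part of this change) of how an `arg_overrides` dict like the one above is typically applied when a checkpoint is loaded through fairseq's `checkpoint_utils`; the checkpoint path is a hypothetical placeholder. The added override clears the `load_pretrained_encoder_from` path stored in the checkpoint's saved config, so model loading does not depend on that pretrained-encoder file being available.

```python
# Sketch only: how an arg_overrides dict is applied when loading a fairseq
# checkpoint. The checkpoint path below is a hypothetical placeholder.
from fairseq import checkpoint_utils

models, cfg, task = checkpoint_utils.load_model_ensemble_and_task(
    ["/path/to/checkpoint_best.pt"],  # hypothetical local path
    arg_overrides={
        "config_yaml": "config_gcmvn_specaug.yaml",
        # Clear the pretrained-encoder path baked into the checkpoint's saved
        # config so loading does not fail when that file is absent.
        "load_pretrained_encoder_from": None,
    },
)
```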