mirror of
https://github.com/facebookresearch/fairseq.git
synced 2024-10-26 17:32:57 +03:00
237184e522
Summary:

# Before submitting

- [ ] Was this discussed/approved via a GitHub issue? (no need for typos, doc improvements)
- [x] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/master/CONTRIBUTING.md)?
- [ ] Did you make sure to update the docs?
- [x] Did you write any new necessary tests?

## What does this PR do?

Fixes https://github.com/pytorch/fairseq/issues/3282

Adds support for `torch.cuda.amp`. AMP can be enabled with `--amp`, instead of using `--fp16` for the already present full fp16 support.

## PR review

Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

## Did you have fun?

Make sure you had fun coding!

Pull Request resolved: https://github.com/pytorch/fairseq/pull/3460

Reviewed By: sshleifer, msbaines

Differential Revision: D27932253

Pulled By: myleott

fbshipit-source-id: 21637aefb5e788c59bf4f3c5de6c4a80f7319543
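The `--amp` flag switches training from full fp16 onto `torch.cuda.amp` mixed precision. A minimal sketch of that pattern as standalone PyTorch, not fairseq code: the model, data, and loop here are made up for illustration, and `use_amp` falls back to `False` so the loop also runs on CPU.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"  # AMP autocast only applies on CUDA devices

model = torch.nn.Linear(8, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

x = torch.randn(16, 8, device=device)
y = torch.randn(16, 1, device=device)

for _ in range(3):
    optimizer.zero_grad()
    # Forward pass runs selected ops in fp16 under autocast
    with torch.cuda.amp.autocast(enabled=use_amp):
        loss = torch.nn.functional.mse_loss(model(x), y)
    scaler.scale(loss).backward()  # scale loss to avoid fp16 gradient underflow
    scaler.step(optimizer)         # unscales gradients, then optimizer.step()
    scaler.update()                # adjusts the loss scale for the next step
```

Unlike `--fp16`, which keeps the whole model in half precision, this keeps master weights in fp32 and lets autocast choose per-op precision.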
__init__.py
test_binaries_gpu.py
transformer_quantization_config.yaml