fairseq/tests/gpu
Gagandeep Singh 237184e522 Add torch.cuda.amp support (#3460)
Summary:
# Before submitting

- [ ] Was this discussed/approved via a GitHub issue? (not required for typos or doc improvements)
- [x] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/master/CONTRIBUTING.md)?
- [ ] Did you make sure to update the docs?
- [x] Did you write any new necessary tests?

## What does this PR do?
Fixes https://github.com/pytorch/fairseq/issues/3282
Adds support for `torch.cuda.amp`. AMP can be enabled with `--amp`, as an alternative to the existing full-fp16 support enabled with `--fp16`.
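
For context, the plain-PyTorch pattern behind `torch.cuda.amp` looks roughly like the sketch below. This is illustrative only: the linear model, optimizer, and random data are placeholders, not fairseq's actual trainer code, which wires these calls into its own Trainer and optimizer classes.

```python
# Minimal sketch of the torch.cuda.amp training pattern (placeholder model/data).
import torch

model = torch.nn.Linear(16, 4).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid fp16 gradient underflow

for _ in range(10):
    inputs = torch.randn(8, 16, device="cuda")
    targets = torch.randint(0, 4, (8,), device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():  # ops run in fp16 or fp32 as appropriate
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    scaler.scale(loss).backward()    # backward pass on the scaled loss
    scaler.step(optimizer)           # unscales gradients, then steps the optimizer
    scaler.update()                  # adjusts the scale factor for the next iteration
```

Unlike full fp16, which casts the model weights themselves to half precision, autocast keeps parameters in fp32 and chooses a precision per operation, with `GradScaler` guarding against gradient underflow.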

## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

## Did you have fun?
Make sure you had fun coding 🙃

Pull Request resolved: https://github.com/pytorch/fairseq/pull/3460

Reviewed By: sshleifer, msbaines

Differential Revision: D27932253

Pulled By: myleott

fbshipit-source-id: 21637aefb5e788c59bf4f3c5de6c4a80f7319543
2021-05-26 14:39:10 -07:00
__init__.py remediation of S205607 2020-07-17 17:21:51 -07:00
test_binaries_gpu.py Add torch.cuda.amp support (#3460) 2021-05-26 14:39:10 -07:00
transformer_quantization_config.yaml Split out fairseq GPU tests & add new deeplearning_fairseq_gpu contbuild using remote execution 2020-06-03 18:53:35 -07:00