Clarify mixed precision training support (#766)

Summary:
Change the wording to avoid confusion. Mixed precision training delivers both higher arithmetic throughput and numerical stability, so it is not synonymous with pure half-precision (FP16) training. Also mention tensor cores, since older-generation GPUs without tensor cores do not support true mixed precision training.
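
For context, here is a minimal, illustrative sketch of the distinction: the forward and backward passes run in FP16, while an FP32 master copy of the weights plus loss scaling preserves numerical stability. This is not fairseq's actual implementation; the toy model, the static loss scale of 128, and the training loop are assumptions for illustration only, and the snippet needs a CUDA GPU.

```python
# Illustrative only: FP16 compute with FP32 master weights and loss scaling.
import torch

assert torch.cuda.is_available(), "this sketch relies on a CUDA GPU"

model = torch.nn.Linear(16, 4).cuda().half()   # FP16 copy used for forward/backward
master_params = [p.detach().clone().float() for p in model.parameters()]  # FP32 master weights
optimizer = torch.optim.SGD(master_params, lr=0.1)
loss_scale = 128.0  # static scale for illustration; real implementations typically adjust this dynamically

for _ in range(10):
    x = torch.randn(8, 16, device="cuda", dtype=torch.float16)
    target = torch.randn(8, 4, device="cuda", dtype=torch.float16)

    loss = torch.nn.functional.mse_loss(model(x), target)
    (loss * loss_scale).backward()  # scale the loss so small FP16 gradients don't flush to zero

    # Unscale the gradients in FP32 and step on the master weights.
    for p, m in zip(model.parameters(), master_params):
        m.grad = p.grad.float() / loss_scale
    optimizer.step()
    optimizer.zero_grad()
    model.zero_grad()

    # Copy the updated FP32 master weights back into the FP16 model.
    with torch.no_grad():
        for p, m in zip(model.parameters(), master_params):
            p.copy_(m)
```

In fairseq this behavior is enabled with the `--fp16` training flag.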
Pull Request resolved: https://github.com/pytorch/fairseq/pull/766

Differential Revision: D15559565

Pulled By: myleott

fbshipit-source-id: c71e720772657bb3e8ad330b58bf69e23beb614e
Author: Khoa Ho (2019-05-30 12:02:58 -07:00), committed by Facebook GitHub Bot
Parent: ffc3bb5806
Commit: d5f76d7446


@@ -28,7 +28,7 @@ Fairseq features:
- Diverse Beam Search ([Vijayakumar et al., 2016](https://arxiv.org/abs/1610.02424))
- sampling (unconstrained and top-k)
- large mini-batch training even on a single GPU via delayed updates
-- fast half-precision floating point (FP16) training
+- mixed precision training (trains faster with less GPU memory on [NVIDIA tensor cores](https://developer.nvidia.com/tensor-cores))
- extensible: easily register new models, criterions, tasks, optimizers and learning rate schedulers
We also provide [pre-trained models](#pre-trained-models-and-examples) for several benchmark