Clarify mixed precision training support (#766)
Summary:
Change the wording to avoid confusion. Mixed precision ensures both higher arithmetic throughput and numerical stability; it is not exactly synonymous with pure half-precision/FP16 training. Also mention tensor cores, since older-generation GPUs without tensor cores don't support true mixed precision training.

Pull Request resolved: https://github.com/pytorch/fairseq/pull/766

Differential Revision: D15559565

Pulled By: myleott

fbshipit-source-id: c71e720772657bb3e8ad330b58bf69e23beb614e
This commit is contained in:
parent ffc3bb5806
commit d5f76d7446
@@ -28,7 +28,7 @@ Fairseq features:
 - Diverse Beam Search ([Vijayakumar et al., 2016](https://arxiv.org/abs/1610.02424))
 - sampling (unconstrained and top-k)
 - large mini-batch training even on a single GPU via delayed updates
-- fast half-precision floating point (FP16) training
+- mixed precision training (trains faster with less GPU memory on [NVIDIA tensor cores](https://developer.nvidia.com/tensor-cores))
 - extensible: easily register new models, criterions, tasks, optimizers and learning rate schedulers
 
 We also provide [pre-trained models](#pre-trained-models-and-examples) for several benchmark
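As context for the wording change above, here is a minimal, illustrative sketch of a mixed precision training step in plain PyTorch using torch.cuda.amp (an API that post-dates this commit). It is not fairseq's own FP16/mixed precision code path; the model, optimizer, and train_step function are placeholders chosen only for the example.

```python
# Minimal sketch of a mixed precision training step in plain PyTorch.
# NOTE: torch.cuda.amp requires PyTorch >= 1.6 and a CUDA GPU; this is only
# illustrative and is not fairseq's internal FP16/mixed precision implementation.
import torch
import torch.nn as nn

model = nn.Linear(1024, 1024).cuda()                      # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # placeholder optimizer
scaler = torch.cuda.amp.GradScaler()  # dynamic loss scaling guards against FP16 gradient underflow

def train_step(inputs, targets):
    optimizer.zero_grad()
    # autocast runs selected ops (e.g. matmuls) in FP16, which maps onto tensor
    # cores, while keeping precision-sensitive ops and the weights in FP32.
    with torch.cuda.amp.autocast():
        loss = nn.functional.mse_loss(model(inputs), targets)
    # Scale the loss before backward so small gradients don't underflow in FP16;
    # scaler.step unscales the gradients before the FP32 optimizer update.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()

# Example usage with random placeholder data:
inputs = torch.randn(32, 1024, device="cuda")
targets = torch.randn(32, 1024, device="cuda")
print(train_step(inputs, targets))
```

The point this sketch mirrors from the commit message: FP16 arithmetic on tensor cores provides the throughput and memory savings, while FP32 weights and loss scaling provide the numerical stability, which is what distinguishes mixed precision from pure FP16 training.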