torch.qr has been deprecated for a long time and is being removed by https://github.com/pytorch/pytorch/pull/70989.
This PR makes the example compatible with both old and new PyTorch versions.
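A minimal sketch of one way to stay compatible across versions: prefer `torch.linalg.qr` when it exists and fall back to the deprecated `torch.qr` on older releases. The helper name `qr_compat` is hypothetical, not part of the PR.

```python
import torch

def qr_compat(a):
    # torch.linalg.qr is the replacement for the deprecated torch.qr;
    # fall back for older PyTorch versions that predate torch.linalg.
    if hasattr(torch, "linalg") and hasattr(torch.linalg, "qr"):
        return torch.linalg.qr(a)
    return torch.qr(a)

q, r = qr_compat(torch.eye(3))
```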
Summary:
We want to make this computation branchless because fairseq code may be
exported and traced for deployment, and tracing can break the correctness
of a captured program when control flow depends on input data.
This diff rewrites the code to remove one branch so that the tracer can
proceed while preserving the correct semantics of the model.
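To illustrate the general pattern (not the actual fairseq code): a data-dependent `if` gets frozen to one path by `torch.jit.trace`, whereas a branchless formulation computes the same result on every input. The function names below are hypothetical.

```python
import torch

def clamp_lengths_branchy(lengths, max_len):
    # Data-dependent branch: under tracing, whichever path the example
    # input takes is baked into the captured program.
    if bool((lengths > max_len).any()):
        return torch.minimum(lengths, torch.full_like(lengths, max_len))
    return lengths

def clamp_lengths_branchless(lengths, max_len):
    # Branchless equivalent: identical semantics on all inputs,
    # so the tracer captures the right program unconditionally.
    return torch.minimum(lengths, torch.full_like(lengths, max_len))
```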
Test Plan:
CI
* Add UnitY implementation
* Rename for consistency
* Refactor conformer encoder construction
* Change the order of arguments for rdrop_alpha
* Add compute_loss_with_rdrop
* Move build_multitask_decoder to xm_transformer_unity.py
* Fix generator selection
* Fix check in build_criterion
* Modularize Rdrop
* Minor fix
* Refine class names
* Refactor submodules
* Fix CE
* Fix import
* Fix arguments for datasets
* Add description to AugTransformerDecoderBase
* Fix SpeechToTextDatasetCreator
* Fix metavar in arguments
* Uncomment override_decoder_args
* Fix comment in warning
* Add is_first_pass_decoder flag
* Change Translatotron2SpeechGenerator to MultiDecoderSpeechGenerator
* Move inference code to examples/speech_to_speech/unity
* Fix rdrop default value in aux tasks
* Add language tag mapping option to multitask-config-yaml
* Rename encoder_out2 and encoder_outs2
* Rename UnitYXMTransformerModel to XMTransformerModelUnitY
* Support num_best_checkpoints in average_checkpoints
* Fix has_multitask
* Inherit SequenceGenerator
* Reflect recent updates
* Minor fix in logging
* Fix typo
* Refactor SpeechToSpectrogram2passMultitaskTaskCriterion
* Minor update for multitask
* run all tests
* make torch a build-time dependency
* add 'dev' extra deps to install black, flake, pytest at once
* Build docs in CI
This should also help catch some import bugs, since Sphinx inspects a lot of code
* CI should do a real install, not "--editable"
* check installation succeeded
* add missing __init__.py file
* add check installation
* move check_installation.py to its own script
* fix pytest import mode, force recent numpy, torch
* run black before flake and tests
* torch >= 1.10.0
* use torch 1.10 for GPU tests
* Move TextTargetMultitaskData
* Support MTL for speech-to-text
* Fix for black
* Fix SpeechToTextDatasetCreator
* Support online text preprocessing
* Add keyword to arguments
* add an option to fetch datapoints within a batch in an async manner, which is helpful if the fetching is io bound
Co-authored-by: juntengjia <juntengjia@fb.com>
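A minimal sketch of the async-fetch idea for IO-bound loading: overlap the per-datapoint fetches of one batch with a thread pool. The helper `fetch_batch` and its parameters are assumptions for illustration, not the fairseq API.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_batch(keys, fetch_one, num_workers=8):
    # Fetch all datapoints of a batch concurrently. This helps when
    # fetch_one is IO-bound (disk or remote reads); map() preserves
    # the input order of the batch.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        return list(pool.map(fetch_one, keys))
```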
This is a no-op in eager mode and in ONNX export, but it's better for other
tracers if the shapes are preserved directly instead of being converted to
a tensor.
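A sketch of the shapes-as-shapes pattern being described (the function name is hypothetical): read sizes with `.size(i)` and use them directly, rather than materializing them into a tensor, so a tracer can record them as shape ops on the input.

```python
import torch

def flatten_time(x):
    # Keep sizes as values from .size(); under tracing these stay
    # connected to the input's shape instead of becoming constants
    # baked into the captured graph.
    bsz, tsz = x.size(0), x.size(1)
    return x.reshape(bsz * tsz, -1)
```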
There is some annoying code duplication with
`torch.jit.is_scripting()`, which is unfortunately necessary because
compile-time short circuiting was never implemented correctly in
TorchScript.
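The guard pattern in question looks roughly like this (a hedged sketch, not the actual fairseq code): because TorchScript compiles both branches rather than short-circuiting `is_scripting()` at compile time, the scripted path must spell out logic that eager mode gets in one call.

```python
import torch

def num_elements(x):
    # Both branches are compiled under torch.jit.script, so the
    # scripted branch duplicates in TorchScript-friendly form what
    # the eager branch does with a single call.
    if torch.jit.is_scripting():
        n = 1
        for d in x.size():
            n *= d
        return n
    return x.numel()
```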