This replaces dense alignment storage and training with a sparse representation. Training with guided alignment is now nearly as fast as normal training, regaining about 25% of training speed.
This is the first of two PRs. The next one will introduce a new guided-alignment training scheme with better alignment accuracy.
* Add -DDETERMINISTIC=ON/OFF flag to CMake
* Use -DDETERMINISTIC=on in GitHub/Azure workflows
Co-authored-by: Roman Grundkiewicz <rgrundkiewicz@gmail.com>
* More examples for MLP layers and docs about RNN layers
* Docs about embedding layer and more doxygen code docs
* Add layer and factors docs into index.rst
* Update layer documentation
* Fix typos
Co-authored-by: Roman Grundkiewicz <rgrundkiewicz@gmail.com>
Co-authored-by: Graeme Nail <graemenail.work@gmail.com>
* Add MMAP as an option
* Use io::isBin
* Allow getYamlFromModel from an Item vector
* ScorerWrapper can now load onto a graph from an Item vector
The IEncoderDecoder interface can now trigger graph loads directly from an
Item vector.
* Translator loads model before creating scorers
Scorers are created from an Item vector
* Replace model-config try-catch with check using IsNull
* Prefer empty vs size
* load by items should be pure virtual
* Stepwise forward load to encdec
* nematus can load from items
* amun can load from items
* loadItems in TranslateService
* Remove logging
* Remove by filename scorer functions
* Replace by filename createScorer
* Explicitly provide default value for get model-mmap
* CLI option for model-mmap only for translation and CPU compile
* Ensure model-mmap option is CPU only
* Remove move on temporary object
* Reinstate log messages for model loading in Amun / Nematus
* Add log messages for model loading in scorers
Co-authored-by: Roman Grundkiewicz <rgrundkiewicz@gmail.com>
macOS is peculiar in that its CPU flags are split across two separate fields returned by the sysctl interface. To get around this, we need to test both of them.
Co-authored-by: Roman Grundkiewicz <rgrundkiewicz@gmail.com>
* add initial guidelines for code documentation
* fix math formula not displayed in Sphinx
* remove @name tags which cannot be extracted by exhale and cause function signature errors
* fix markdown ref warning and update markdown parser in sphinx
* more about doxygen: add Doxygen commands and math formulas
* move code doc guide to a new .rst file
* add formula image
* Set myst-parser version appropriate for the requested sphinx version
* Update documentation on how to write Doxygen comments
* Add new section to the documentation index
* Sphinx 2.4.4 requires myst-parser 0.14
* complete code doc guide and small fixes on reStructuredText formats
* More about reStructuredText
* Update badges on the documentation frontpage
Co-authored-by: Roman Grundkiewicz <rgrundkiewicz@gmail.com>
This parallelizes data reading. On very fast GPUs and with small models, training speed can be starved by batch creation being too slow. Use --data-threads 8 or more; the default is currently 1 for backward compatibility.
* Add GCC 11 support
Some C++ Standard Library headers have been changed to no longer include other headers that they do not need to depend on. As such, C++ programs that used standard library components without including the right headers will no longer compile.
The following headers are used less widely in libstdc++ and may need to be included explicitly when compiled with GCC 11:
<limits> (for std::numeric_limits)
<memory> (for std::unique_ptr, std::shared_ptr etc.)
<utility> (for std::pair, std::tuple_size, std::index_sequence etc.)
<thread> (for members of namespace std::this_thread)
Co-authored-by: Roman Grundkiewicz <rgrundkiewicz@gmail.com>
* Attempt to validate task alias
* Validate allowed options for --task alias
* Update comment in aliases.cpp
* Show allowed values for alias
Co-authored-by: Roman Grundkiewicz <rgrundkiewicz@gmail.com>
This PR improves the clipping and pruning behavior for NaNs and Infs during fp16 training, ultimately avoiding the underflow problems we had been facing so far.
Marian does not compile on macOS 11.6, so the build stopped working after Azure Pipelines upgraded its image from macOS-10.15 to macOS-11.6: https://github.com/actions/virtual-environments/issues/4060
This PR explicitly sets macOS 10.15 in the workflow.
- Add `--allow-unauthenticated` to `apt` when installing CUDA on Ubuntu
- Remove `ubuntu-16.04` image from Azure pipelines, which will become unavailable after September 20
This PR adds a checkbox which can be unchecked to skip running compilation checks when triggering them manually. It is useful for generating expected outputs on different CPUs for tests using 8-bit models.
* Add concatenation combining option when embedding using factors
* crossMask not used by default
* Add an option to better clarify the choice of factor predictor options
* Fix bug when choosing the re-embedding option without setting the embedding size
* Avoid unnecessary string copy
* Check in factors documentation
* Fix duplication in merge
* Self-referential repository
* change --factors-predictor to --lemma-dependency. Default behaviour changed.
* factor related options are now stored with the model
* Update doc/factors.md
* Add backward compatibility for the target factors
* Move backward compatibility checks for factors to happen after the model.npz config is loaded
* Add explicit error msg if using concat on target
* Update func comments. Fix spaces
* Add Marian version requirement
* delete experimental code
Co-authored-by: Pedro Coelho <pedrodiascoelho97@gmail.com>
Co-authored-by: Pedro Coelho <pedro.coelho@unbabel.com>
Co-authored-by: Roman Grundkiewicz <rgrundkiewicz@gmail.com>