* unit tests for binary file operations
* adjust changelog
* Set file_ in TemporaryFile for MSVC
Co-authored-by: Roman Grundkiewicz <rgrundkiewicz@gmail.com>
Adds intgemm as a module for Marian. Intgemm is @kpu's 8/16-bit GEMM library with support for architectures from SSE2 to AVX512VNNI
Removes outdated integer code related to the `--optimize` option
Co-authored-by: Kenneth Heafield <github@kheafield.com>
Co-authored-by: Kenneth Heafield <kpu@users.noreply.github.com>
Co-authored-by: Ulrich Germann <ugermann@inf.ed.ac.uk>
Co-authored-by: Marcin Junczys-Dowmunt <marcinjd@microsoft.com>
Co-authored-by: Roman Grundkiewicz <rgrundkiewicz@gmail.com>
* copy changes from commit 4df92f2
* add comments for better understanding
* restore the newline at the end of the file and note this change in CHANGELOG.md
* support for Apple Accelerate
* add a CMake flag to use Apple Accelerate as the BLAS library.
* rename USE_ACCELERATE to USE_APPLE_ACCELERATE
* add comment with more info on Accelerate
* link to the Apple documentation on Accelerate.
* This PR adds training of embedding spaces with better separation, based on https://arxiv.org/abs/2007.01852
* We can now train with in-batch negative examples or a handful of hand-constructed negative examples provided in a TSV file.
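For reference, an in-batch negative objective generally has this shape (a sketch of the idea only, not necessarily the exact loss implemented here; $\phi$ denotes the scoring function between a source embedding $x_i$ and a target embedding $y_j$, and $m$ is an optional additive margin):

$$
\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{\phi(x_i, y_i) - m}}{e^{\phi(x_i, y_i) - m} + \sum_{j \neq i} e^{\phi(x_i, y_j)}}
$$

Each other target $y_j$ in the batch serves as a negative example for $x_i$.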
Adds e.g. `--logical-epoch 1Gt` (or other units) that alters how the epoch counter is displayed. The actual underlying counter, in the form of passes over the data, is not changed. This is essentially a logging change that displays the epoch as a fractional multiple of the chosen unit.
Example for `--logical-epoch 100Mt`:
```
[2020-11-02 04:14:16] Ep. 4.8602 : Up. 16755 : Sen. 1,088,000 : Cost 1.17630422 * 1,993,304 @ 31,985 after 486,015,051 : Time 61.36s : 32483.55 words/s
[2020-11-02 04:15:18] Ep. 4.8803 : Up. 16825 : Sen. 1,162,648 : Cost 1.17474616 * 2,009,996 @ 37,740 after 488,025,047 : Time 61.88s : 32480.17 words/s
[2020-11-02 04:16:19] Ep. 4.9002 : Up. 16893 : Sen. 1,235,200 : Cost 1.17799997 * 1,990,844 @ 26,173 after 490,015,891 : Time 60.47s : 32920.16 words/s
```
Replace `--after-batches N` and `--after-epochs N` with `--after Nu/Ne`, which allows specifying updates, epochs, or target labels with units, e.g.:
* `--after 30Gt` or `--after 50ku` or `--after 10e`
* Multiple criteria can also be combined: `--after 30Gt,50ku,10e` stops training when whichever criterion is hit first
Changes default `cost-type` from `ce-mean` to `ce-sum` and turns `display-label-counts` on by default.
* Fixes reductions into scalars for <= 32 input elements. Only affects reductions where 0 is not the identity
* Update CHANGELOG.md
* Adds space before "?"
* Adds a comment explaining the increased margin for reduction tests. Adds a comment on the axis argument of the reduce functions. Adds more tests for small reduction operators
This PR enables final post-processing of a full transformer stack for correct prenorm behavior.
See issues #715 and #699.
List of changes:
Add final post-processing in the encoder and decoder if requested with `--transformer-postprocess-top`. It can take combinations of `d`, `n`, `a`; using `a` adds a skip connection from the bottom of the stack.
Add `--task transformer-base-prenorm` and `--task transformer-big-prenorm`, which correspond to `--task transformer-base --transformer-preprocess n --transformer-postprocess da --transformer-postprocess-top n`.
* Return exit code 15 (SIGTERM) after SIGTERM.
When Marian receives the signal SIGTERM and exits gracefully (save model & exit),
it should then exit with a non-zero exit code, to signal to any parent process
that it did not exit "naturally".
* Added explanatory comment about exiting marian_train with non-zero status after SIGTERM.
* Bug fix: better handling of SIGTERM for graceful shutdown during training.
Prior to this bug fix, BatchGenerator::fetchBatches, which runs in a separate
thread, would ignore SIGTERM during training. (Training uses a custom signal handler
for SIGTERM, which simply sets a global flag to enable graceful shutdown, i.e.,
saving the models and the current state of training before shutting down.)
The changes in this commit also facilitate custom handling of other signals in the
future by providing a general signal handler for all signals with a signal number
below 32 (setSignalFlag) and a generic function (getSignalFlag(sig)) for checking
such flags.
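A minimal sketch of this mechanism (the names `setSignalFlag` and `getSignalFlag` come from this commit; the bodies below are illustrative, not Marian's exact implementation):

```cpp
#include <atomic>
#include <csignal>

// One bit per signal number below 32; an atomic keeps this safe to set
// from a signal handler and poll from the training loop.
static std::atomic<unsigned long> signalFlags{0};

extern "C" void setSignalFlag(int sig) {
  signalFlags |= 1ul << sig;  // async-signal-safe: only sets a bit
}

bool getSignalFlag(int sig) {
  return (signalFlags.load() >> sig) & 1ul;
}

int main() {
  std::signal(SIGTERM, setSignalFlag);
  // A training loop would poll getSignalFlag(SIGTERM) between updates,
  // save the model and training state, then exit with code 15.
  while(!getSignalFlag(SIGTERM)) {
    /* ... one training update ... */
    break;  // placeholder so this sketch terminates
  }
  return 0;
}
```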
* Add GitHub workflows
* Workflows with CMake compilation on Windows
* Ubuntu workflow with Boost
* Ignore warnings from Boost
* Compile unit tests on Windows
* Disable cpuinfo tools if compiled with Ninja
* Use a separate CMakeSettings.json for CI
* Disable CMake debugs
* Fix unit tests compilation with Ninja Release
* Use FBGEMM in Windows workflow; add comments
* Fix C4706 warning
* Update CHANGELOG
* Run Windows build on pull requests
* Compile SentencePiece statically in Windows workflow
* Add GitHub workflow on macOS
* Address review comments
* Disable C4702 globally, not only in Debug
* Update CHANGELOG and workflows names
* Update VERSION
* Add tuple nodes via views and trickery
* Add `topk` operator, currently unused outside unit tests
* Add `abs` operator, currently unused outside unit tests
* Change return type of `Node::allocate()` to `void`. This used to return the number of allocated elements, but isn't really used anywhere. To avoid future confusion of elements and bytes, removed for now.
* Fix server build with current boost, move simple-websocket-server to submodule
* Change submodule to marian-nmt/Simple-WebSocket-Server
* Update submodule simple-websocket-server
Co-authored-by: Gleb Tv <glebtv@gmail.com>
* Add basic support for TSV inputs
* Fix mini-batch-fit for TSV inputs
* Abort if shuffling data from stdin
* Fix terminating training with data from STDIN
* Allow creating vocabs from TSV files
* Add comments; clean creation of vocabs from TSV files
* Guess --tsv-size based on the model type
* Add shortcut for STDIN inputs
* Rename --tsv-size to --tsv-fields
* Allow only one 'stdin' in --train-sets
* Properly create separate vocabularies from a TSV file
* Clearer logging message
* Add error message for wrong number of valid sets if --tsv is used
* Use --no-shuffle instead of --shuffle in the error message
* Fix continuing training from STDIN
* Update CHANGELOG
* Support both 'stdin' and '-'
* Guess --tsv-fields from dim-vocabs if special:model.yml is available
* Update error messages
* Move variable outside the loop
* Refactor utils::splitTsv; add unit tests (see the sketch after this list)
* Support '-' as stdin; refactor; add comments
* Abort if excessive field(s) in the TSV input
* Add a TODO on passing one vocab with fully-tied embeddings
* Remove the unit test with excessive tab-separated fields
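A hedged sketch of what a TSV field splitter along the lines of `utils::splitTsv` might look like (Marian's actual signature and error handling, e.g. aborting on excessive fields, may differ):

```cpp
#include <string>
#include <vector>

std::vector<std::string> splitTsv(const std::string& line, size_t numFields) {
  std::vector<std::string> fields(numFields);
  if(numFields == 0)
    return fields;
  size_t begin = 0, i = 0;
  for(; i + 1 < numFields; ++i) {
    size_t tab = line.find('\t', begin);
    if(tab == std::string::npos)
      break;                                // fewer tabs than expected fields
    fields[i] = line.substr(begin, tab - begin);
    begin = tab + 1;
  }
  fields[i] = line.substr(begin);           // remainder goes into the last field
  return fields;
}
```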
The previous mechanism for removing empty inputs did not play well with batch purging (removal of finished sentences). Now we reuse the batch-purging mechanism to get rid of empty inputs by forcing EOS for all beam entries of the corresponding source batch entry; the purging then takes care of the rest. We set the probability to log(1) = 0.
Splitting up the header file into a header and *.cu file comes at the price of having to include specializations for combinations of types, as in element.inc and add.inc. No code changes otherwise.
Add CMake options to disable specific compute capabilities.
When run with `make -j16` this compiles in about 6 minutes instead of 7 minutes. Selecting only SM70 during compilation brings down the time to 3 minutes.
* Downgrade NCCL to 2.3.7 as 2.4.2 is buggy (hangs with larger models)
* Actually enable gradient-checkpointing, previous option was inactive
* Clean up training-only options that should not be displayed for decoder and scorer
* Re-enable conversion to FP16 if element types are compatible (belong to the same type class)
* Fix a few typos and make log messages more verbose.
* Add printing of word-level scores
* Add option --no-spm-decode
* Fix precision for word-level scores
* Fix getting the no-spm-decode option
* Update CHANGELOG
* Add comments and refactor
* Print word-level scores next to other scores in an n-best list
* Remove --word-scores from marian-scorer
* Add --no-spm-decode only if compiled with SentencePiece
* Add comments
* Print word scores before model scores in n-best lists
* Update VERSION
Co-authored-by: Marcin Junczys-Dowmunt <Marcin.JunczysDowmunt@microsoft.com>
This implements Sequential Unlikelihood Training from https://arxiv.org/abs/1908.04319
* implementation as an expensive multi-op; a special node is in progress
* fixed the gather operator to work in batched cases
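For reference, the unlikelihood term from the cited paper pushes probability mass away from a set of negative candidate tokens $\mathcal{C}_t$ at each step $t$, alongside the usual likelihood term (roughly):

$$
\mathcal{L}^t = -\log p_\theta(x_t \mid x_{<t}) - \alpha \sum_{c \in \mathcal{C}_t} \log\left(1 - p_\theta(c \mid x_{<t})\right)
$$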
This fixes a number of bugs in our GPU reduce kernels that would manifest mainly for larger matrices and during back-prop. We also drop support for CUDA 8.0 to be able to take advantage of new GPU primitives introduced by NVIDIA in CUDA 9.0.
This PR introduces batch purging in Marian, i.e., whenever a virtual beam becomes inactive (empty), the entire batch entry that corresponds to that beam can be removed from the encoder and decoder neural states. The CPU-side beam search keeps track of the hypotheses as before, but needs to perform mappings between original and shifted batch indices.
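Illustrative only (not Marian's actual beam-search code), such a remapping might look like this: surviving batch entries are shifted left and purged entries are marked invalid.

```cpp
#include <vector>

// Maps original batch indices to shifted ones after removing finished
// entries; -1 marks a purged batch entry.
std::vector<int> purgeMapping(const std::vector<bool>& finished) {
  std::vector<int> oldToNew(finished.size(), -1);
  int next = 0;
  for(size_t i = 0; i < finished.size(); ++i)
    if(!finished[i])
      oldToNew[i] = next++;
  return oldToNew;
}
```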
In FastOpt we do not want to use locking during access, but that makes reference counting not thread-safe. We now use std::unique_ptr to const objects or const references everywhere. This fixes random segfaults with multi-GPU training. @TODO: clean up option merging to make options generally immutable.
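An illustration of the ownership pattern described above (not Marian's actual code): a single owner holds a `std::unique_ptr` to an immutable object, and every other user receives a const reference, so no reference count is ever touched across threads.

```cpp
#include <memory>

struct FastOpt { /* immutable option tree */ };

class Options {
  std::unique_ptr<const FastOpt> fastOptions_;  // sole owner
public:
  explicit Options(std::unique_ptr<const FastOpt> opts)
  : fastOptions_(std::move(opts)) {}

  // Readers get a const reference: nothing mutates and no shared_ptr
  // control block is updated, so concurrent access needs no locking.
  const FastOpt& fast() const { return *fastOptions_; }
};
```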