Training open neural machine translation models

Train Opus-MT models

This package includes scripts for training NMT models using MarianNMT and OPUS data for OPUS-MT. More details are given in the Makefile, but the documentation still needs to be improved. Note also that the make targets assume a specific environment and currently only work well on the CSC HPC cluster in Finland.

Pre-trained models

The subdirectory models contains information about pre-trained models that can be downloaded from this project. They are distributed under a CC-BY 4.0 license.

Prerequisites

Running the scripts does not work out of the box because many settings are adjusted to the local installation on our IT infrastructure at CSC. Here is an incomplete list of prerequisites for running the process. Making the training procedures and settings more transparent and self-contained is on our TODO list, but this will take time ...

  • marian-nmt: The essential NMT toolkit we use in OPUS-MT; make sure you compile a version with GPU and SentencePiece support!
  • Moses scripts: various pre- and post-processing scripts from the Moses SMT toolkit (also bundled here: marian-nmt)
  • OpusTools: library and tools for accessing OPUS data
  • OpusTools-perl: additional tools for accessing OPUS data
  • iso-639: a Python package for ISO 639 language codes
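The exact tool names on your PATH depend on how each package was installed. As a rough sanity check before starting (the helper check_tools is an assumption, not part of this repository; the binary names marian, spm_encode and opus_read are the usual ones for Marian-NMT, SentencePiece and OpusTools but may differ locally), something like this can flag missing prerequisites:

```shell
# Hypothetical helper: report prerequisite commands that are not on PATH.
check_tools() {
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
  done
}

# Binary names are typical defaults and may differ on your system.
check_tools marian spm_encode opus_read pigz terashuf
```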

Optional software:

  • terashuf: efficiently shuffle massive data sets
  • pigz: multithreaded gzip
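Both tools are drop-in replacements for their standard counterparts (shuf and gzip). The sketch below shows the usual pattern on a toy corpus using the standard tools: paste the two sides together so that each sentence pair is shuffled as a unit, then split them apart again. For real OPUS-sized data you would substitute terashuf for shuf and pigz for gzip; the file names here are made up for illustration.

```shell
# Toy parallel corpus (stands in for real training data).
printf 'hello\nworld\ngood\n' | gzip > toy.en.gz
printf 'bonjour\nmonde\nbon\n' | gzip > toy.fr.gz

# Decompress, shuffle line pairs together, and split the sides again.
# For large data sets: pigz -dc instead of gzip -dc, terashuf instead of shuf.
gzip -dc toy.en.gz > toy.en
gzip -dc toy.fr.gz > toy.fr
paste toy.en toy.fr | shuf > toy.both
cut -f1 toy.both > toy-shuf.en
cut -f2 toy.both > toy-shuf.fr
```

Shuffling each side separately would destroy the sentence alignment, which is why the paste/cut round-trip is needed.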

Structure of the training scripts

Essential files for making new models:

  • Makefile: top-level makefile
  • lib/env.mk: system-specific environment (now based on CSC machines)
  • lib/config.mk: essential model configuration
  • lib/data.mk: data pre-processing tasks
  • lib/generic.mk: generic implicit rules that can extend other tasks
  • lib/dist.mk: make packages for distributing models (CSC ObjectStorage based)
  • lib/slurm.mk: submit jobs with SLURM
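Porting the setup to another cluster mainly means adapting lib/env.mk and lib/config.mk. As a purely hypothetical sketch (the variable names for paths and partitions below are illustrative placeholders, not the ones the repository actually uses; only SRCLANG and TRGLANG appear in the commands further down), such files set tool paths and overridable defaults along these lines:

```make
# lib/env.mk-style sketch -- path and partition names are placeholders.
MARIAN_HOME   = /path/to/marian/build
MOSES_SCRIPTS = /path/to/moses-scripts
GPU_PARTITION = gpu

# lib/config.mk-style defaults that individual runs can override
# on the command line, e.g. `make SRCLANG=en TRGLANG=fr train`.
SRCLANG ?= en
TRGLANG ?= fr
```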

There are also make targets for specific models and tasks. Look into lib/models/ to see what has been defined already. Note that this frequently changes! There is, for example:

  • lib/models/multilingual.mk: various multilingual models
  • lib/models/celtic.mk: data and models for Celtic languages
  • lib/models/doclevel.mk: experimental document-level models

To train a model, for example for translating English to French, run:

make SRCLANG=en TRGLANG=fr train

To evaluate the model with the automatically generated test data (by default from the Tatoeba corpus), run:

make SRCLANG=en TRGLANG=fr eval

For multilingual models (more than one language on the source and/or target side), run for example:

make SRCLANG="de en" TRGLANG="fr es pt" train
make SRCLANG="de en" TRGLANG="fr es pt" eval

Note that data pre-processing should run on CPUs and training/testing on GPUs. To speed things up, you can process data sets in parallel using make's jobs flag, for example with 8 threads:

make -j 8 SRCLANG=en TRGLANG=fr data

Upload to Object Storage

This is only for internal use:

swift upload OPUS-MT --changed --skip-identical name-of-file
swift post OPUS-MT --read-acl ".r:*"