Training open neural machine translation models

Train Opus-MT models

This package includes scripts for training NMT models using MarianNMT and OPUS data for OPUS-MT. More details are given in the Makefile, but the documentation still needs to be improved. Note that the make targets require a specific environment and currently only work well on the CSC HPC clusters in Finland.

Pre-trained models

The subdirectory models contains information about pre-trained models that can be downloaded from this project. They are distributed with a CC-BY 4.0 license. More pre-trained models trained with the OPUS-MT training pipeline are available from the Tatoeba translation challenge, also under a CC-BY 4.0 license.

Quickstart

Setting up:

git clone https://github.com/Helsinki-NLP/OPUS-MT-train.git
git submodule update --init --recursive --remote
make install

Look into lib/env.mk and adjust any settings that you need in your environment. For CSC users: adjust lib/env/puhti.mk and lib/env/mahti.mk to match your setup (especially the locations where Marian-NMT and other tools are installed and the CSC project that you are using).
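For example, a local override in lib/env.mk might look like the following sketch. The variable names MARIAN_HOME and WORKHOME are illustrative assumptions, not taken from the actual file; check lib/env.mk in your checkout for the names it really uses:

```make
# Illustrative only -- the variable names below are assumptions.
# Verify against lib/env.mk before editing.
MARIAN_HOME = /path/to/marian/build
WORKHOME    = ${HOME}/opus-mt-work
```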

Training a multilingual NMT model (Finnish and Estonian to Danish, Swedish and English):

make SRCLANGS="fi et" TRGLANGS="da sv en" train
make SRCLANGS="fi et" TRGLANGS="da sv en" eval
make SRCLANGS="fi et" TRGLANGS="da sv en" release
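The three targets above can also be chained in a small script. The sketch below is a dry run: it only prints each make invocation (remove the leading echo inside cmd assembly by executing `$cmd` directly if you want to run the pipeline):

```shell
# Dry-run sketch of the pipeline above: prints each make invocation
# instead of running it. Replace the echo with an actual call
# (e.g. eval "$cmd") to execute the pipeline.
SRCLANGS="fi et"
TRGLANGS="da sv en"
CMDS=""
for target in train eval release; do
  cmd="make SRCLANGS='$SRCLANGS' TRGLANGS='$TRGLANGS' $target"
  echo "$cmd"
  CMDS="$CMDS$cmd
"
done
```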

More information is available in the documentation linked below.

Documentation

Tutorials

References

Please cite the following paper if you use OPUS-MT software or models:

@InProceedings{TiedemannThottingal:EAMT2020,
  author = {J{\"o}rg Tiedemann and Santhosh Thottingal},
  title = {{OPUS-MT} — {B}uilding open translation services for the {W}orld},
  booktitle = {Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT)},
  year = {2020},
  address = {Lisbon, Portugal}
 }

Acknowledgements

None of this would be possible without all the great open-source software this project builds on, including

... and many other tools like terashuf, pigz, jq, Moses SMT, fast_align, sacrebleu ...

We would also like to acknowledge the support of the University of Helsinki, CSC (the IT Center for Science in Finland), the funding through projects in the EU Horizon 2020 framework (FoTran, MeMAD, ELG), and the contributors to OPUS, the open collection of parallel corpora.