Subword Neural Machine Translation

This repository contains preprocessing scripts to segment text into subword units. The primary purpose is to facilitate the reproduction of our experiments on Neural Machine Translation with subword units (see below for reference).

INSTALLATION

Clone or copy this repository and follow the usage instructions below.

For an installable package, see https://github.com/rsennrich/subword-nmt/tree/package

USAGE INSTRUCTIONS

Check the individual files for usage instructions.

To apply byte pair encoding to word segmentation, invoke these commands:

./learn_bpe.py -s {num_operations} < {train_file} > {codes_file}
./apply_bpe.py -c {codes_file} < {test_file}
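
As a concrete, purely illustrative example (file names, number of merge operations, and the input sentence are made up):

./learn_bpe.py -s 10000 < corpus.txt > codes.txt
echo "subword segmentation helps with rare words" | ./apply_bpe.py -c codes.txt

which might produce output along these lines, with "@@ " marking non-final subword units (the exact splits depend on the learned merge operations):

sub@@ word segment@@ ation helps with rare words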

To segment rare words into character n-grams, do the following:

./get_vocab.py < {train_file} > {vocab_file}
./segment-char-ngrams.py --vocab {vocab_file} -n {order} --shortlist {size} < {test_file}
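
For example, with illustrative file names and parameter values (keeping the 50,000 most frequent words intact and splitting all other words into character trigrams):

./get_vocab.py < train.txt > vocab.txt
./segment-char-ngrams.py --vocab vocab.txt -n 3 --shortlist 50000 < test.txt > test.segmented.txt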

The original segmentation can be restored with a simple replacement:

sed -r 's/(@@ )|(@@ ?$)//g'
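
For example, to restore the original tokenization of the translation output before evaluation (file names are illustrative):

sed -r 's/(@@ )|(@@ ?$)//g' < translations.bpe.txt > translations.txt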

BEST PRACTICE ADVICE FOR BYTE PAIR ENCODING IN NMT

We found that for languages that share an alphabet, learning BPE on the concatenation of the (two or more) involved languages increases the consistency of segmentation, and reduces the problem of inserting/deleting characters when copying/transliterating names.

However, this introduces undesirable edge cases in that a word may be segmented in a way that has only been observed in the other language, and is thus unknown at test time. To prevent this, apply_bpe.py accepts a --vocabulary and a --vocabulary-threshold option so that the script will only produce symbols which also appear in the vocabulary (with at least the given frequency).

To use this functionality, we recommend the following recipe (assuming L1 and L2 are the two languages):

Learn byte pair encoding on the concatenation of the training texts, and extract the resulting vocabulary for each language:

cat {train_file}.L1 {train_file}.L2 | ./learn_bpe.py -s {num_operations} -o {codes_file}
./apply_bpe.py -c {codes_file} < {train_file}.L1 | ./get_vocab.py > {vocab_file}.L1
./apply_bpe.py -c {codes_file} < {train_file}.L2 | ./get_vocab.py > {vocab_file}.L2

more conveniently, you can do the same with this command:

./learn_joint_bpe_and_vocab.py --input {train_file}.L1 {train_file}.L2 -s {num_operations} -o {codes_file} --write-vocabulary {vocab_file}.L1 {vocab_file}.L2
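
The vocabulary files are expected to contain one subword per line followed by its frequency in the BPE-segmented training data, for example (entries and counts below are made up for illustration):

the 612304
es 80941
citi@@ 1552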

re-apply byte pair encoding with vocabulary filter:

./apply_bpe.py -c {codes_file} --vocabulary {vocab_file}.L1 --vocabulary-threshold 50 < {train_file}.L1 > {train_file}.BPE.L1
./apply_bpe.py -c {codes_file} --vocabulary {vocab_file}.L2 --vocabulary-threshold 50 < {train_file}.L2 > {train_file}.BPE.L2

as a last step, extract the vocabulary to be used by the neural network. Example with Nematus:

nematus/data/build_dictionary.py {train_file}.BPE.L1 {train_file}.BPE.L2

[you may want to take the union of all vocabularies to support multilingual systems]
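
one possible way to obtain such a shared vocabulary (assuming build_dictionary.py writes one dictionary per input file) is to build the dictionary on the concatenation of the BPE-segmented training files:

cat {train_file}.BPE.L1 {train_file}.BPE.L2 > {train_file}.BPE.joint
nematus/data/build_dictionary.py {train_file}.BPE.joint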

for test/dev data, re-use the same options for consistency:

./apply_bpe.py -c {codes_file} --vocabulary {vocab_file}.L1 --vocabulary-threshold 50 < {test_file}.L1 > {test_file}.BPE.L1
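
and analogously for the second language:

./apply_bpe.py -c {codes_file} --vocabulary {vocab_file}.L2 --vocabulary-threshold 50 < {test_file}.L2 > {test_file}.BPE.L2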

PUBLICATIONS

The segmentation methods are described in:

Rico Sennrich, Barry Haddow and Alexandra Birch (2016): Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). Berlin, Germany.

ACKNOWLEDGMENTS

This project has received funding from Samsung Electronics Polska sp. z o.o. - Samsung R&D Institute Poland, and from the European Union's Horizon 2020 research and innovation programme under grant agreement 645452 (QT21).