github.com/marian-nmt/marian-examples.git

author     Marcin Junczys-Dowmunt <marcinjd@microsoft.com>   2018-11-26 03:30:48 +0300
committer  GitHub <noreply@github.com>                       2018-11-26 03:30:48 +0300
commit     c5808ab2a72107eec67aa2b2c539c72e25978c69
tree       291361261bafe03be68c52eb2b76c4b8a2176c41
parent     0413df35956b3abdfdda25a75369285f52f4945b

Update README.md

-rw-r--r--  training-basics-spm/README.md | 100
1 file changed, 45 insertions(+), 55 deletions(-)
diff --git a/training-basics-spm/README.md b/training-basics-spm/README.md
index f0cc40d..9747a85 100644
--- a/training-basics-spm/README.md
+++ b/training-basics-spm/README.md
@@ -1,8 +1,12 @@
# Example for training with Marian and SentencePiece
+In this example, we modify the Romanian-English example from `examples/training-basics` to use Taku Kudo's
+[SentencePiece](https://github.com/google/sentencepiece) instead of a complicated pre/post-processing pipeline.
+We also replace the evaluation scripts with Matt Post's [SacreBLEU](https://github.com/mjpost/sacreBLEU).
+
## Building Marian with SentencePiece support
-Since version 1.7.0, Marian has support for (SentencePiece)[https://github.com/google/sentencepiece],
+Since version 1.7.0, Marian has built-in support for SentencePiece,
but this needs to be enabled at compile-time. We decided to make the compilation of SentencePiece
optional as SentencePiece has a number of dependencies - especially Google's Protobuf - that
are potentially non-trivial to install.
@@ -13,19 +17,19 @@ install for a couple of Ubuntu versions:
On Ubuntu 14.04 LTS (Trusty Tahr):
```
-% sudo apt-get install libprotobuf8 protobuf-compiler libprotobuf-dev
+sudo apt-get install libprotobuf8 protobuf-compiler libprotobuf-dev
```
On Ubuntu 16.04 LTS (Xenial Xerus):
```
-% sudo apt-get install libprotobuf9v5 protobuf-compiler libprotobuf-dev
+sudo apt-get install libprotobuf9v5 protobuf-compiler libprotobuf-dev
```
On Ubuntu 17.10 (Artful Aardvark) and Later:
```
-% sudo apt-get install libprotobuf10 protobuf-compiler libprotobuf-dev
+sudo apt-get install libprotobuf10 protobuf-compiler libprotobuf-dev
```
For more details see the documentation in the SentencePiece repo:
@@ -37,6 +41,7 @@ With these dependencies met, you can compile Marian as follows:
git clone https://github.com/marian-nmt/marian
cd marian
mkdir build
+cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DUSE_SENTENCEPIECE=ON
make -j 8
```
@@ -47,7 +52,7 @@ To test if `marian` has been compiled with SentencePiece support run
./marian --help |& grep sentencepiece
```
-which should display the following new options
+which should display the following new options:
```
--sentencepiece-alphas VECTOR ... Sampling factors for SentencePieceVocab; i-th factor corresponds to i-th vocabulary
@@ -55,7 +60,7 @@ which should display the following new options
--sentencepiece-max-lines UINT=10000000
```
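The first of these options controls subword regularization, i.e. sampling alternative segmentations of the training data on the fly. As an illustrative (and untested) sketch, one might sample segmentations for both source and target vocabularies by adding something like the line below to the training command; the value 0.2 is only an example, 0 would keep segmentation deterministic, and the exact semantics should be checked against the Marian documentation:

```
--sentencepiece-alphas 0.2 0.2
```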
-##
+## Walkthrough
Files and scripts in this folder have been adapted from the Romanian-English
sample from https://github.com/rsennrich/wmt16-scripts. We also add the
@@ -65,43 +70,46 @@ http://www.aclweb.org/anthology/W16-2323. The resulting system should be
competitive or even slightly better than reported in the Edinburgh WMT2016
paper.
-To execute the complete example type:
+Assuming you have one GPU, to execute the complete example, type:
```
./run-me.sh
```
-which downloads the Romanian-English training files and preprocesses them (tokenization,
-truecasing, segmentation into subwords units).
+which downloads the Romanian-English data and concatenates it into the training files.
+No preprocessing is required, as the Marian command will train a SentencePiece vocabulary directly
+from the raw text.
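For readers familiar with the standalone SentencePiece tools: what Marian does internally here is roughly equivalent to training a joint model with `spm_train` yourself, as in the sketch below. This is not part of the example and is not required; the paths and flag values are illustrative only.

```
# hypothetical standalone equivalent of Marian's internal vocabulary training
spm_train --input=data/corpus.ro,data/corpus.en \
          --model_prefix=model/vocab.roen --vocab_size=32000
```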
-To use with a different GPU than device 0 or more GPUs (here 0 1 2 3) type the command below.
-Training time on 1 NVIDIA GTX 1080 GPU should be roughly 24 hours.
+To use a GPU other than device 0, or multiple GPUs (here 0 1 2 3), use the command below:
```
./run-me.sh 0 1 2 3
```
-Next it executes a training run with `marian`:
+Next, the script executes a training run with `marian`. Note how the training command passes the
+raw training and validation data directly to Marian. A single joint SentencePiece model will be saved to
+`model/vocab.roen.spm`.
```
-../../build/marian \
+$MARIAN/build/marian \
--devices $GPUS \
- --type amun \
+ --type s2s \
--model model/model.npz \
- --train-sets data/corpus.bpe.ro data/corpus.bpe.en \
- --vocabs model/vocab.ro.yml model/vocab.en.yml \
- --dim-vocabs 66000 50000 \
- --mini-batch-fit -w 3000 \
- --layer-normalization --dropout-rnn 0.2 --dropout-src 0.1 --dropout-trg 0.1 \
- --early-stopping 5 \
+ --train-sets data/corpus.ro data/corpus.en \
+ --vocabs model/vocab.roen.spm model/vocab.roen.spm \
+ --sentencepiece-options '--normalization_rule_tsv=data/norm_romanian.tsv' \
+ --dim-vocabs 32000 32000 \
+ --mini-batch-fit -w 5000 \
+ --layer-normalization --tied-embeddings-all \
+ --dropout-rnn 0.2 --dropout-src 0.1 --dropout-trg 0.1 \
+ --early-stopping 5 --max-length 100 \
--valid-freq 10000 --save-freq 10000 --disp-freq 1000 \
- --valid-metrics cross-entropy translation \
- --valid-sets data/newsdev2016.bpe.ro data/newsdev2016.bpe.en \
- --valid-script-path ./scripts/validate.sh \
- --log model/train.log --valid-log model/valid.log \
+ --cost-type ce-mean-words --valid-metrics ce-mean-words bleu-detok \
+ --valid-sets data/newsdev2016.ro data/newsdev2016.en \
+ --log model/train.log --valid-log model/valid.log --tempdir model \
--overwrite --keep-best \
--seed 1111 --exponential-smoothing \
- --normalize=1 --beam-size 12 --quiet-translation
+ --normalize=0.6 --beam-size=6 --quiet-translation
```
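Note the `--sentencepiece-options` flag above: it forwards raw SentencePiece training options, here a custom normalization rule file. As a hedged sketch (the actual `data/norm_romanian.tsv` shipped with the example may differ), such a rule file is tab-separated, mapping source Unicode code points in hex to their replacements, for instance mapping the legacy cedilla forms of Romanian ş/ţ to the comma-below forms ș/ț:

```
15F	219
163	21B
15E	218
162	21A
```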
After training (the training should stop if cross-entropy on the validation set
@@ -109,39 +117,21 @@ stops improving) the model with the highest translation validation score is used
to translate the WMT2016 dev set and test set with `marian-decoder`:
```
-cat data/newsdev2016.bpe.ro \
- | ../../build/marian-decoder -c model/model.npz.best-translation.npz.decoder.yml -d $GPUS \
- -b 12 -n1 --mini-batch 64 --maxi-batch 10 --maxi-batch-sort src -w 2500 \
- | sed 's/\@\@ //g' \
- | ../tools/moses-scripts/scripts/recaser/detruecase.perl \
- | ../tools/moses-scripts/scripts/tokenizer/detokenizer.perl -l en \
- > data/newsdev2016.ro.output
+# translate dev set
+cat data/newsdev2016.ro \
+ | $MARIAN/build/marian-decoder -c model/model.npz.best-bleu-detok.npz.decoder.yml -d $GPUS -b 6 -n0.6 \
+ --mini-batch 64 --maxi-batch 100 --maxi-batch-sort src > data/newsdev2016.ro.output
+
+# translate test set
+cat data/newstest2016.ro \
+ | $MARIAN/build/marian-decoder -c model/model.npz.best-bleu-detok.npz.decoder.yml -d $GPUS -b 6 -n0.6 \
+ --mini-batch 64 --maxi-batch 100 --maxi-batch-sort src > data/newstest2016.ro.output
```
after which BLEU scores for the dev and test set are reported. Results should
be somewhere in the area of:
```
-newsdev2016:
-BLEU = 35.88, 67.4/42.3/28.8/20.2 (BP=1.000, ratio=1.012, hyp_len=51085, ref_len=50483)
-
-newstest2016:
-BLEU = 34.53, 66.0/40.7/27.5/19.2 (BP=1.000, ratio=1.015, hyp_len=49258, ref_len=48531)
-```
-
-## Custom validation script
-
-The validation script `scripts/validate.sh` is a quick example how to write a
-custom validation script. The training pauses until the validation script
-finishes executing. A validation script should not output anything to `stdout`
-apart from the final single score (last line):
-
-```
-#!/bin/bash
-
-cat $1 \
- | sed 's/\@\@ //g' \
- | ../tools/moses-scripts/scripts/recaser/detruecase.perl \
- | ../tools/moses-scripts/scripts/tokenizer/detokenize.perl -l en \
- | ../tools/moses-scripts/scripts/generic/multi-bleu-detok.perl data/newsdev2016.en \
- | sed -r 's/BLEU = ([0-9.]+),.*/\1/'
+# calculate bleu scores on dev and test set
+sacreBLEU/sacrebleu.py -t wmt16/dev -l ro-en < data/newsdev2016.ro.output
+sacreBLEU/sacrebleu.py -t wmt16 -l ro-en < data/newstest2016.ro.output
```
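SacreBLEU is also available on PyPI, so as an optional alternative to calling the checked-out `sacrebleu.py` script, you can install it and use the `sacrebleu` command with the same arguments:

```
pip install sacrebleu
sacrebleu -t wmt16/dev -l ro-en < data/newsdev2016.ro.output
sacrebleu -t wmt16 -l ro-en < data/newstest2016.ro.output
```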