
github.com/marian-nmt/marian-examples.git
author    Marcin Junczys-Dowmunt <marcinjd@microsoft.com>  2018-11-26 10:45:32 +0300
committer Marcin Junczys-Dowmunt <marcinjd@microsoft.com>  2018-11-26 10:45:32 +0300
commit    29346583cfaca11b18ebcf748c293f03e3a82975 (patch)
tree      3e424f418bc36e53f057152d67f7ced2738ffb8a
parent    626b5bde373844b35f806c82d31e12016d30e1b2 (diff)
fix typos
-rw-r--r--  training-basics-sentencepiece/README.md  6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/training-basics-sentencepiece/README.md b/training-basics-sentencepiece/README.md
index b15d996..9b122d7 100644
--- a/training-basics-sentencepiece/README.md
+++ b/training-basics-sentencepiece/README.md
@@ -73,8 +73,8 @@ sample from https://github.com/rsennrich/wmt16-scripts. We also add the
 back-translated data from
 http://data.statmt.org/rsennrich/wmt16_backtranslations/ as desribed in
 http://www.aclweb.org/anthology/W16-2323. In our experiments,
-we get a single model that is a good deal than the ensemble from
- the Edinburgh WMT2016 paper.
+we get a single model that is a good deal better than the ensemble from
+the Edinburgh WMT2016 system submission paper.
 
 Assuming one GPU, to execute the complete example type:
 
@@ -87,7 +87,7 @@ No preprocessing is required as the Marian command will train a SentencePiece vo
 the raw text. Next the translation model will be trained and after convergence, the dev and test
 sets are translated and evaluated with sacreBLEU.
 
-To use with a different GPUs than device 0 or more GPUs (here 0 1 2 3) use the command below:
+To use with a different GPU than device 0 or more GPUs (here 0 1 2 3) use the command below:
 
 ```
 ./run-me.sh 0 1 2 3
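
For readers following the README text quoted in this diff: a minimal usage sketch of the example's launcher, assuming run-me.sh accepts GPU device IDs as positional arguments. Only the four-GPU invocation appears verbatim in the diff above; the other forms are inferred from the changed README sentence and are hypothetical.

```
# Run the complete example on the default device (GPU 0) -- assumed form:
./run-me.sh

# Run on a single GPU other than device 0 (here device 2) -- assumed form:
./run-me.sh 2

# Run on four GPUs (devices 0 1 2 3), as shown in the README:
./run-me.sh 0 1 2 3
```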