github.com/torch/torch.github.io.git
author     nicholas-leonard <nick@nikopia.org>  2016-07-22 22:24:52 +0300
committer  nicholas-leonard <nick@nikopia.org>  2016-07-22 22:24:52 +0300
commit     7992ccb23ef065bb994ca36ad0eb973c15da273f (patch)
tree       eb22a0235af4fc02c6e839c3a640b481d6101f91
parent     d20bca37c2e99dc7fc4f8e906324483ebc050c8b (diff)

    add TLDR to results

 blog/_posts/2016-05-11-nce.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/blog/_posts/2016-05-11-nce.md b/blog/_posts/2016-05-11-nce.md
index 12c94fc..8a7a3ea 100644
--- a/blog/_posts/2016-05-11-nce.md
+++ b/blog/_posts/2016-05-11-nce.md
@@ -15,8 +15,8 @@ picture: https://raw.githubusercontent.com/torch/torch.github.io/master/blog/_po
  * [Building a multi-layer LSTM](#nce.lstm)
  * [Training and evaluation scripts](#nce.script)
  * [Results](#nce.result)
+ * [Future work](#nce.future)
  * [References](#nce.ref)
- * [Future Word](#nce.future)
 
 In this Torch blog post, we use noise contrastive estimation (NCE) [[2]](#nce.ref)
 to train a multi-GPU recurrent neural network language model (RNNLM)
@@ -25,6 +25,8 @@
 The work presented here is the result of many months of on-and-off work.
 The enormity of the dataset caused us to contribute some novel open-source Torch modules, criteria and even a multi-GPU tensor.
 We also provide scripts so that you can train and evaluate your own language models.
 
+If you are only interested in generated samples, perplexity and learning curves, please jump to the [results section](#nce.result).
+
 <a name='nce.char'></a>
 ## Word versus character language models
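
For orientation, the post being patched trains its RNNLM with an NCE output layer built from the `nn.NCEModule` and `nn.NCECriterion` components contributed to the dpnn package. The following is a minimal sketch of that setup, not the post's actual training script; the sizes, variable names, and the uniform noise distribution are illustrative assumptions.

```lua
require 'dpnn'  -- provides nn.NCEModule and nn.NCECriterion

local hiddensize, vocabsize, k = 200, 10000, 25  -- illustrative sizes only
-- Placeholder noise distribution; the real script would use word frequencies.
local unigram = torch.Tensor(vocabsize):fill(1)

-- NCE replaces the full softmax over the vocabulary: for each target word it
-- scores only that word plus k noise samples drawn from the unigram distribution.
local nce = nn.NCEModule(hiddensize, vocabsize, k, unigram)
local crit = nn.NCECriterion()
```

During training the module is fed the hidden state together with the targets (as a `{input, target}` table) so it can sample noise words per example, which is what makes training cost independent of vocabulary size.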