github.com/torch/torch.github.io.git
Diffstat (limited to 'blog/_posts/2016-07-25-nce.md'):

 blog/_posts/2016-07-25-nce.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/blog/_posts/2016-07-25-nce.md b/blog/_posts/2016-07-25-nce.md
index 89862a5..58980e5 100644
--- a/blog/_posts/2016-07-25-nce.md
+++ b/blog/_posts/2016-07-25-nce.md
@@ -4,7 +4,7 @@ title: Language modeling a billion words
 comments: True
 author: nicholas-leonard
 excerpt: Noise contrastive estimation is used to train a multi-GPU recurrent neural network language model on the Google billion words dataset.
-picture: https://raw.githubusercontent.com/torch/torch.github.io/master/blog/_posts/images/rnnlm.png
+picture: https://raw.githubusercontent.com/torch/torch.github.io/master/blog/_posts/images/rnnlm-small.png
 ---
 
 <!---# Language modeling a billion words -->
@@ -18,10 +18,12 @@ picture: https://raw.githubusercontent.com/torch/torch.github.io/master/blog/_posts/images/rnnlm.png
 * [Future work](#nce.future)
 * [References](#nce.ref)
 
+In our last post, we presented a [recurrent model for visual attention](http://torch.ch/blog/2015/09/21/rmva.html)
+which combined reinforcement learning with recurrent neural networks.
 In this Torch blog post, we use noise contrastive estimation (NCE) [[2]](#nce.ref)
 to train a multi-GPU recurrent neural network language model (RNNLM)
 on the Google billion words (GBW) dataset [[7]](#nce.ref).
-The work presented here is the result of many months of on-and-off work.
+The work presented here is the result of many months of on-and-off work at [Element-Research](https://www.discoverelement.com/research).
 The enormity of the dataset caused us to contribute some novel open-source Torch modules, criteria and even a multi-GPU tensor.
 We also provide scripts so that you can train and evaluate your own language models.
 
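
For readers wondering what the NCE output layer mentioned in the added paragraph looks like in code, below is a minimal sketch using the Element-Research `dpnn` package that the blog post builds on. All sizes, the sample count `k`, and the `unigram` tensor are illustrative placeholders, not values taken from the post or the diff.

```lua
-- Minimal sketch of an NCE output layer in Torch.
-- Assumes the Element-Research 'dpnn' package is installed;
-- all numbers below are placeholder assumptions for illustration.
require 'nn'
require 'dpnn' -- provides nn.NCEModule and nn.NCECriterion

local vocabSize  = 10000 -- placeholder vocabulary size
local hiddenSize = 200   -- placeholder RNN hidden size
local k          = 25    -- noise samples drawn per target word

-- Noise distribution over words (uniform here purely for illustration;
-- in practice this would hold word-frequency counts).
local unigram = torch.Tensor(vocabSize):fill(1)

-- NCEModule stands in for a full Linear + SoftMax output layer:
-- during training it scores each target word against only k noise
-- samples instead of normalizing over the entire vocabulary.
local nce  = nn.NCEModule(hiddenSize, vocabSize, k, unigram)
local crit = nn.NCECriterion()
```

This sampling-based output layer is what makes training over a vocabulary the size of GBW's tractable, since the cost per step scales with `k` rather than with the vocabulary size.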