
github.com/torch/nn.git - Unnamed repository; edit this file 'description' to name the repository.
author	Nicholas Léonard <nick@nikopia.org>	2014-07-09 22:16:08 +0400
committer	Nicholas Léonard <nick@nikopia.org>	2014-07-09 22:16:08 +0400
commit	aba9ee56f1678b0c64ec95d566465ab258d852c2 (patch)
tree	7772ab69640404c847636742d3fb196114cf9ba6
parent	02b7e7205db5359f1df63f7bee3439a320e004bf (diff)
Update table.md (ElementTable)
-rw-r--r--	doc/table.md	95
1 file changed, 95 insertions(+), 0 deletions(-)
diff --git a/doc/table.md b/doc/table.md
index 4117117..60b6dea 100644
--- a/doc/table.md
+++ b/doc/table.md
@@ -9,6 +9,7 @@ This allows one to build very rich architectures:
* Table Conversion Modules convert between tables and Tensors:
* [SplitTable](#nn.SplitTable) : splits a Tensor into a table of Tensors;
* [JoinTable](#nn.JoinTable) : joins a table of Tensors into a Tensor;
+ * [ElementTable](#nn.ElementTable) : retrieves one element from a table;
* Pair Modules compute a measure like distance or similarity from a pair (table) of input Tensors :
* [PairwiseDistance](#nn.PairwiseDistance) : outputs the `p`-norm. distance between inputs;
* [DotProduct](#nn.DotProduct) : outputs the dot product (similarity) between inputs;
@@ -375,6 +376,100 @@ for i=1,100 do -- A few steps of training such a network..
end
```
+<a name="nn.ElementTable"/>
+## ElementTable ##
+
+`module` = `ElementTable(index)`
+
+Creates a module that takes a table as input and outputs the element at index `index`.
+This element can be either a table or a [Tensor](https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor).
+
+The gradients of the non-`index` elements are zeroed Tensors of the same size. This holds regardless of the
+depth of the encapsulated Tensors, as the zeroing function used internally is applied recursively.
+
+Example 1:
+```lua
+> input = {torch.randn(2,3), torch.randn(2,1)}
+> =nn.ElementTable(1):forward(input)
+-0.3060 0.1398 0.2707
+ 0.0576 1.5455 0.0610
+[torch.DoubleTensor of dimension 2x3]
+
+> =nn.ElementTable(2):forward(input)
+ 2.3080
+-0.2955
+[torch.DoubleTensor of dimension 2x1]
+
+> =unpack(nn.ElementTable(1):backward(input, torch.randn(2,3)))
+-0.4891 -0.3495 -0.3182
+-2.0999 0.7381 -0.5312
+[torch.DoubleTensor of dimension 2x3]
+
+0
+0
+[torch.DoubleTensor of dimension 2x1]
+
+```
+
+Example 2:
+```lua
+> input = {torch.randn(2,3), {torch.randn(2,1), {torch.randn(2,2)}}}
+
+> =nn.ElementTable(2):forward(input)
+{
+ 1 : DoubleTensor - size: 2x1
+ 2 :
+ {
+ 1 : DoubleTensor - size: 2x2
+ }
+}
+
+> =unpack(nn.ElementTable(2):backward(input, {torch.randn(2,1), {torch.randn(2,2)}}))
+0 0 0
+0 0 0
+[torch.DoubleTensor of dimension 2x3]
+
+{
+ 1 : DoubleTensor - size: 2x1
+ 2 :
+ {
+ 1 : DoubleTensor - size: 2x2
+ }
+}
+
+> gradInput = nn.ElementTable(1):backward(input, torch.randn(2,3))
+
+> =gradInput
+{
+ 1 : DoubleTensor - size: 2x3
+ 2 :
+ {
+ 1 : DoubleTensor - size: 2x1
+ 2 :
+ {
+ 1 : DoubleTensor - size: 2x2
+ }
+ }
+}
+
+> =gradInput[1]
+-0.3400 -0.0404 1.1885
+ 1.2865 0.4107 0.6506
+[torch.DoubleTensor of dimension 2x3]
+
+> =gradInput[2][1]
+0
+0
+[torch.DoubleTensor of dimension 2x1]
+
+> =gradInput[2][2][1]
+0 0
+0 0
+[torch.DoubleTensor of dimension 2x2]
+
+```
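The behavior shown above (select one element on forward; on backward, place the incoming gradient at the selected slot and recursively zero-fill every other slot) can be sketched outside Torch. The following is a hypothetical Python analogue, not the module's actual implementation: nested lists stand in for Lua tables and plain floats stand in for Tensors, and the function names are invented for illustration.

```python
def zero_like(x):
    # Recursively build a zero-filled copy of a nested structure.
    # A real Tensor would instead use something like x:clone():zero().
    if isinstance(x, list):
        return [zero_like(e) for e in x]
    return 0.0 * x  # scalar leaf

def element_table_forward(inp, index):
    # Lua tables are 1-indexed, hence the offset.
    return inp[index - 1]

def element_table_backward(inp, index, grad_output):
    # Every non-selected element gets a zeroed copy of itself,
    # however deeply nested; the selected slot gets grad_output.
    grad_input = [zero_like(e) for e in inp]
    grad_input[index - 1] = grad_output
    return grad_input
```

For example, with `inp = [[1.0, 2.0], [3.0, [4.0]]]`, `element_table_backward(inp, 2, g)` returns `[[0.0, 0.0], g]`, mirroring the zeroed `gradInput` entries printed in the transcripts above.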
<a name="nn.PairwiseDistance"/>
## PairwiseDistance ##