github.com/torch/nngraph.git
author     Alfredo Canziani <alfredo.canziani@gmail.com>  2015-04-21 18:57:51 +0300
committer  Alfredo Canziani <alfredo.canziani@gmail.com>  2015-04-21 18:57:51 +0300
commit     d214ddcdf32a64091e018f2edfe95665adfd3845 (patch)
tree       199e442ad0d793ca146941f6c92b1435e1723153
parent     95a3bbb2adbbe09d70aa8cf6e5318d177438bd17 (diff)
Minor editing to DOC
-rw-r--r--  README.md  59
1 file changed, 31 insertions(+), 28 deletions(-)
diff --git a/README.md b/README.md
index 6e9d417..1709e5b 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,9 @@ You do *not* need graphviz to be able to use this library, but if you have then
[Plug: A more explanatory nngraph tutorial by Nando De Freitas of Oxford](https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/practicals/practical5.pdf)
-The aim of this library is to provide users of nn library with tools to easily create complicated architectures. Any given nn module is going to be bundled into a graph node. The __call operator of an instance of nn.Module is used to create architectures as if one is writing function calls.
+The aim of this library is to provide users of the nn library with tools to easily create complicated architectures.
+Any given nn module is going to be bundled into a graph node.
+The `__call` operator of an instance of `nn.Module` is used to create architectures as if one is writing function calls.
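For instance, a minimal sketch of this call-style construction (the module sizes and the names `x1`, `h`, and `mout` here are illustrative):

```lua
require 'nngraph'

-- calling a module instance with no argument wraps it in a graph node
x1 = nn.Identity()()
-- calling one node on another connects them: the Linear consumes x1's output
h = nn.Linear(20, 10)(x1)
mout = nn.Tanh()(h)
-- gModule bundles everything between the listed inputs and outputs into an nn.Module
mlp = nn.gModule({x1}, {mout})
```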
### One hidden layer network
@@ -24,18 +26,17 @@ mlp = nn.gModule({x1},{mout})
x = torch.rand(20)
dx = torch.rand(1)
mlp:updateOutput(x)
-mlp:updateGradInput(x,dx)
-mlp:accGradParameters(x,dx)
+mlp:updateGradInput(x, dx)
+mlp:accGradParameters(x, dx)
-- draw graph (the forward graph, '.fg')
-graph.dot(mlp.fg,'MLP')
+graph.dot(mlp.fg, 'MLP')
```
<img src="https://raw.github.com/koraykv/torch-nngraph/master/doc/mlp.png" width="300px"/>
Read this diagram from top to bottom, with the first and last nodes being dummy nodes that regroup all inputs and outputs of the graph.
-The 'module' entry describes the function of the node, as applies to 'input', and producing an result of the shape 'gradOutput'; 'mapindex' contains
-pointers to the parent nodes.
+The 'module' entry describes the function of the node, as applied to 'input', producing a result of the shape 'gradOutput'; 'mapindex' contains pointers to the parent nodes.
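The same fields can also be inspected programmatically. A small sketch, assuming the forward graph exposes its node list as a `nodes` table the way the torch `graph` package does:

```lua
-- walk the forward graph and print what each node wraps
for _, node in ipairs(mlp.fg.nodes) do
  local data = node.data
  if data.module then
    print(torch.type(data.module))       -- the 'module' entry from the diagram
  end
  if data.mapindex then
    print('parents: ' .. #data.mapindex) -- 'mapindex' points at the parent nodes
  end
end
```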
### A net with 2 inputs and 2 outputs
@@ -43,21 +44,21 @@ pointers to the parent nodes.
```lua
require 'nngraph'
-x1=nn.Linear(20,20)()
-x2=nn.Linear(10,10)()
-m0=nn.Linear(20,1)(nn.Tanh()(x1))
-m1=nn.Linear(10,1)(nn.Tanh()(x2))
-madd=nn.CAddTable()({m0,m1})
-m2=nn.Sigmoid()(madd)
-m3=nn.Tanh()(madd)
-gmod = nn.gModule({x1,x2},{m2,m3})
+x1 = nn.Linear(20, 20)()
+x2 = nn.Linear(10, 10)()
+m0 = nn.Linear(20, 1)(nn.Tanh()(x1))
+m1 = nn.Linear(10, 1)(nn.Tanh()(x2))
+madd = nn.CAddTable()({m0, m1})
+m2 = nn.Sigmoid()(madd)
+m3 = nn.Tanh()(madd)
+gmod = nn.gModule({x1, x2}, {m2, m3})
x = torch.rand(20)
y = torch.rand(10)
-gmod:updateOutput({x,y})
-gmod:updateGradInput({x,y},{torch.rand(1),torch.rand(1)})
-graph.dot(gmod.fg,'Big MLP')
+gmod:updateOutput({x, y})
+gmod:updateGradInput({x, y}, {torch.rand(1), torch.rand(1)})
+graph.dot(gmod.fg, 'Big MLP')
```
<img src="https://raw.github.com/koraykv/torch-nngraph/master/doc/mlp2.png" width="300px"/>
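Because `gmod` declares two outputs, `updateOutput` returns a table with one tensor per output, in declaration order. A quick usage sketch continuing the snippet above:

```lua
out = gmod:updateOutput({x, y})
print(out[1]) -- result of the Sigmoid branch (m2)
print(out[2]) -- result of the Tanh branch (m3)
```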
@@ -113,32 +114,34 @@ graph.dot(g.bg, 'Backward Graph')
## Annotations
-It is possible to add annotations to your network, such as labeling nodes
-with names or attributes which will show up when you graph the network.
+It is possible to add annotations to your network, such as labeling nodes with names or attributes which will show up when you graph the network.
This can be helpful in large graphs.
For the full list of graph attributes see the
[graphviz documentation](http://www.graphviz.org/doc/info/attrs.html).
-```
+```lua
input = nn.Identity()()
L1 = nn.Tanh()(nn.Linear(10, 20)(input)):annotate{
- name = 'L1', description = 'Level 1 Node',
- graphAttributes = {color = 'red'}}
+ name = 'L1', description = 'Level 1 Node',
+ graphAttributes = {color = 'red'}
+}
L2 = nn.Tanh()(nn.Linear(30, 60)(nn.JoinTable(1)({input, L1}))):annotate{
- name = 'L2', description = 'Level 2 Node',
- graphAttributes = {color = 'blue', fontcolor = 'green'}}
+ name = 'L2', description = 'Level 2 Node',
+ graphAttributes = {color = 'blue', fontcolor = 'green'}
+}
L3 = nn.Tanh()(nn.Linear(80, 160)(nn.JoinTable(1)({L1, L2}))):annotate{
- name = 'L3', descrption = 'Level 3 Node',
- graphAttributes = {color = 'green',
- style='filled', fillcolor = 'yellow'}}
+ name = 'L3', description = 'Level 3 Node',
+ graphAttributes = {color = 'green',
+ style = 'filled', fillcolor = 'yellow'}
+}
g = nn.gModule({input},{L3})
indata = torch.rand(10)
gdata = torch.rand(160)
g:forward(indata)
-g:backward(indata,gdata)
+g:backward(indata, gdata)
graph.dot(g.fg,'Forward Graph', '/tmp/fg')
graph.dot(g.bg,'Backward Graph', '/tmp/bg')
```
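When a file-name prefix is supplied as the third argument, `graph.dot` writes the rendering to disk instead of only displaying it; with the calls above that should produce `/tmp/fg.dot` and `/tmp/fg.svg` (and likewise `/tmp/bg.*` for the backward graph), assuming the usual behaviour of the `graph` package.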