
github.com/torch/nngraph.git
author    Alfredo Canziani <alfredo.canziani@gmail.com>  2015-04-22 23:36:08 +0300
committer Alfredo Canziani <alfredo.canziani@gmail.com>  2015-04-22 23:36:08 +0300
commit    8b9429f6c5c3630ecac62f58dff4897c53a0db36 (patch)
tree      3fd2ea58bdb127bec6e3291a5feb1f90fcb2824a /README.md
parent    4fc8b8022da27fdc4b8a4563fefcc91af8ab625d (diff)

Further revision of documentation

Diffstat (limited to 'README.md')
-rw-r--r--  README.md  81
1 file changed, 48 insertions(+), 33 deletions(-)
diff --git a/README.md b/README.md
index f3d8248..ec281df 100644
--- a/README.md
+++ b/README.md
@@ -4,24 +4,29 @@ This package provides graphical computation for `nn` library in [Torch](https://
## Requirements
-You do *not* need graphviz to be able to use this library, but if you have then you can display the graphs that you have created.
+You do *not* need `graphviz` to use this library, but if you have it you will be able to display the graphs that you create. To install the package, run the appropriate command below:
+
+```bash
+# Mac users
+brew install graphviz
+# Debian/Ubuntu users
+sudo apt-get install graphviz -y
+```
## Usage
[Plug: A more explanatory nngraph tutorial by Nando De Freitas of Oxford](https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/practicals/practical5.pdf)
-The aim of this library is to provide users of nn library with tools to easily create complicated architectures.
-Any given nn module is going to be bundled into a graph node.
-The `__call` operator of an instance of `nn.Module` is used to create architectures as if one is writing function calls.
+The aim of this library is to provide users of the `nn` package with tools to easily create complicated architectures.
+Any given `nn` module is going to be bundled into a *graph node*.
+The `__call` operator of an instance of `nn.Module` is used to create architectures as if one were writing function calls.
-### One hidden layer network
+### MLP with two hidden layers
```lua
-require 'nngraph'
-
-x1 = nn.Linear(20,10)()
-mout = nn.Linear(10,1)(nn.Tanh()(nn.Linear(10,10)(nn.Tanh()(x1))))
-mlp = nn.gModule({x1},{mout})
+h1 = nn.Linear(20, 10)()
+h2 = nn.Linear(10, 1)(nn.Tanh()(nn.Linear(10, 10)(nn.Tanh()(h1))))
+mlp = nn.gModule({h1}, {h2})
x = torch.rand(20)
dx = torch.rand(1)
@@ -35,43 +40,49 @@ graph.dot(mlp.fg, 'MLP')
<img src="https://raw.github.com/koraykv/torch-nngraph/master/doc/mlp.png" width="300px"/>
-Read this diagram from top to bottom, with the first and last nodes being dummy nodes that regroup all inputs and outputs of the graph.
-The 'module' entry describes the function of the node, as applies to 'input', and producing an result of the shape 'gradOutput'; 'mapindex' contains pointers to the parent nodes.
+Read this diagram from top to bottom, with the first and last nodes being *dummy nodes* that regroup all inputs and outputs of the graph.
+The `module` entry describes the function of the node, as applied to `input`, producing a result of the shape `gradOutput`; `mapindex` contains pointers to the parent nodes.
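+
+If you want to inspect these entries programmatically, something along these lines should work; note that `fg.nodes`, `data.module`, and `data.mapindex` are `nngraph` internals rather than a stable API, so treat this as an illustrative sketch:
+
+```lua
+-- walk the forward graph; dummy nodes carry no module
+for i, node in ipairs(mlp.fg.nodes) do
+   local d = node.data
+   print(i, d.module and torch.type(d.module) or 'dummy node')
+end
+```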
-
-### A net with 2 inputs and 2 outputs
+To save the *graph* to file, specify the file name, and both a `dot` and an `svg` file will be saved. For example, you can type:
```lua
-require 'nngraph'
+graph.dot(mlp.fg, 'MLP', 'myMLP')
+```
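+
+This will produce `myMLP.dot` and `myMLP.svg` in the current directory.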
-x1 = nn.Linear(20, 20)()
-x2 = nn.Linear(10, 10)()
-m0 = nn.Linear(20, 1)(nn.Tanh()(x1))
-m1 = nn.Linear(10, 1)(nn.Tanh()(x2))
-madd = nn.CAddTable()({m0, m1})
-m2 = nn.Sigmoid()(madd)
-m3 = nn.Tanh()(madd)
-gmod = nn.gModule({x1, x2}, {m2, m3})
-x = torch.rand(20)
-y = torch.rand(10)
+### A network with 2 inputs and 2 outputs
-gmod:updateOutput({x, y})
-gmod:updateGradInput({x, y}, {torch.rand(1), torch.rand(1)})
+```lua
+h1 = nn.Linear(20, 20)()
+h2 = nn.Linear(10, 10)()
+hh1 = nn.Linear(20, 1)(nn.Tanh()(h1))
+hh2 = nn.Linear(10, 1)(nn.Tanh()(h2))
+madd = nn.CAddTable()({hh1, hh2})
+oA = nn.Sigmoid()(madd)
+oB = nn.Tanh()(madd)
+gmod = nn.gModule({h1, h2}, {oA, oB})
+
+x1 = torch.rand(20)
+x2 = torch.rand(10)
+
+gmod:updateOutput({x1, x2})
+gmod:updateGradInput({x1, x2}, {torch.rand(1), torch.rand(1)})
graph.dot(gmod.fg, 'Big MLP')
```
<img src="https://raw.github.com/koraykv/torch-nngraph/master/doc/mlp2.png" width="300px"/>
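+
+Note that `gmod:forward(input)` is the usual `nn` shorthand for `updateOutput`, and that with two outputs the result comes back as a plain Lua table. A minimal sketch:
+
+```lua
+out = gmod:forward({x1, x2})
+print(out[1], out[2])  -- the Sigmoid and Tanh outputs, respectively
+```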
-### Another net that uses container modules (like `ParallelTable`) that output a table of outputs
+### A network with containers
+
+Another network that uses container modules (like `ParallelTable`) that output a table of outputs.
```lua
m = nn.Sequential()
m:add(nn.SplitTable(1))
m:add(nn.ParallelTable():add(nn.Linear(10, 20)):add(nn.Linear(10, 30)))
input = nn.Identity()()
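+-- nn.Identity() passes its input through unchanged; calling the module with
+-- an empty () turns it into the placeholder input node of the graph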
-input1,input2 = m(input):split(2)
+input1, input2 = m(input):split(2)
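+-- :split(2) registers m's 2-element table output as two separate graph
+-- nodes, so each element can be wired to downstream modules independently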
m3 = nn.JoinTable(1)({input1, input2})
g = nn.gModule({input}, {m3})
@@ -89,7 +100,9 @@ graph.dot(g.bg, 'Backward Graph')
<img src="https://raw.github.com/koraykv/torch-nngraph/master/doc/mlp3_backward.png" width="300px"/>
-### A Multi-layer network where each layer takes output of previous two layers as input
+### More fun with graphs
+
+A multi-layer network where each layer takes the output of the previous two layers as input.
```lua
input = nn.Identity()()
@@ -133,7 +146,7 @@ L2 = nn.Tanh()(nn.Linear(30, 60)(nn.JoinTable(1)({input, L1}))):annotate{
L3 = nn.Tanh()(nn.Linear(80, 160)(nn.JoinTable(1)({L1, L2}))):annotate{
name = 'L3', description = 'Level 3 Node',
graphAttributes = {color = 'green',
- style='filled', fillcolor = 'yellow'}
+ style = 'filled', fillcolor = 'yellow'}
}
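+-- graphAttributes are handed to graphviz as node attributes, so any valid
+-- dot attribute (color, style, fillcolor, fontcolor, ...) works here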
g = nn.gModule({input},{L3})
@@ -143,9 +156,11 @@ gdata = torch.rand(160)
g:forward(indata)
g:backward(indata, gdata)
-graph.dot(g.fg,'Forward Graph', '/tmp/fg')
-graph.dot(g.bg,'Backward Graph', '/tmp/bg')
+graph.dot(g.fg, 'Forward Graph', '/tmp/fg')
+graph.dot(g.bg, 'Backward Graph', '/tmp/bg')
```
+In this case, the graphs are saved in the following 4 files: `/tmp/{fg,bg}.{dot,svg}`.
+
![Annotated forward graph](doc/annotation_fg.png?raw=true)
![Annotated backward graph](doc/annotation_bg.png?raw=true)