====== Torch Tutorial ======
{{anchor:torch.tutorial}}

So you are wondering how to work with Torch?
This is a little tutorial that should help get you started.

By the end of this tutorial, you should have installed Torch on your
machine and have a good understanding of how to manipulate vectors,
matrices and tensors, as well as how to build and train a basic neural
network. For anything else, you should know how to find the relevant
HTML help and read up on it there.

======  What is Torch? ======

Torch7 provides a Matlab-like environment for state-of-the-art machine
learning algorithms. It is easy to use and provides a very efficient
implementation, thanks to an easy and fast scripting language (Lua) and
an underlying C/C++ implementation.  You can read more about Lua
[[http://www.lua.org|here]].

======  Installation ======

Before you can do anything, you need to install Torch7 on your
machine. That is not covered in detail here; it is described in the
[[..:install:index|installation help]].


======  Checking your installation works and requiring packages ======

If you have got this far, hopefully your Torch installation works. A
simple way to make sure it does is to start the ''torch'' executable
from the shell command line and create a Tensor:
<file>
Try the IDE: torch -ide
Type help() for more info
Torch 7.0  Copyright (C) 2001-2011 Idiap, NEC Labs, NYU
Lua 5.1  Copyright (C) 1994-2008 Lua.org, PUC-Rio
torch> 
torch> x = torch.Tensor()
torch> print(x)
[torch.DoubleTensor with no dimension]

</file>

You might have to specify the exact path of the ''torch'' executable
if you installed Torch in a non-standard location. The ''torch''
executable is just a shell wrapper around ''lua'' with some libraries
preloaded. It is equivalent to running

<file>
$ lua -ltorch -llab -lgnuplot -ldok
</file>

The same effect can also be achieved by explicitly loading these libraries.

<file>
$ lua
Lua 5.1.3  Copyright (C) 1994-2008 Lua.org, PUC-Rio
> require 'torch'
> require 'lab'
> require 'gnuplot'
> require 'dok'
</file>

In this example, we checked that Torch was working by creating an empty
[[..:torch:tensor|Tensor]] and printing it on the screen.  The Tensor
is the main tool in Torch, and is used to represent vectors, matrices
and higher-dimensional objects (tensors).

''require "torch"'' only installs the basic parts of torch (including
Tensors).  The list of all the basic Torch objects installed is
described [[..:torch:index|here]].  However, there are several other
//external// Torch packages that you might want to use, for example
the [[..:lab:index|lab]] package.  This package provides Matlab-like
functions for linear algebra.  We will use some of these functions in
this tutorial.  To require this package you simply have to type:
<file> require "lab" </file>

To see the list of all packages distributed with Torch7, click
[[..:index|here]].

====== Getting Help ======

There are two main ways of getting help in Torch7. One way is of
course the HTML-formatted help. Another, often quicker, method is the
inline help built into the ''torch'' interpreter. Help about any
function can be accessed by calling the ''help()'' function.

<file>

torch> help(lab.rand)

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
lab.rand( [res,] m [, n, k, ...])        
 y=lab.rand(n) returns a one-dimensional tensor of size n filled with 
random numbers from a uniform distribution on the interval (0,1).
 y=lab.rand(m,n) returns a mxn tensor of random numbers from a uniform 
distribution on the interval (0,1).
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

</file>

An even more intuitive method is to use tab completion. Whenever any
input is entered at the ''torch'' prompt, you can enter two
consecutive ''TAB'' characters (''double TAB'') to complete the
current name. Moreover, entering ''double TAB'' at an open parenthesis
also causes the help for that particular function to be printed.

<file>

torch> lab.randn( -- enter double TAB after (
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
lab.randn( [res,] m [, n, k, ...])       
 y=lab.randn(n) returns a one-dimensional tensor of size n filled with 
random numbers from a normal distribution with mean zero and variance 
one.
 y=lab.randn(m,n) returns a mxn tensor of random numbers from a normal 
distribution with mean zero and variance one.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

torch> lab.randn(

</file>

======  Torch basics: playing with Tensors ======

OK, now we are ready to actually do something in Torch.  Let's start by
constructing a vector, say a vector with 5 elements, and filling the
i-th element with value i. Here's how:

<file>
> x=torch.Tensor(5)
> for i=1,5 do x[i]=i; end
> print(x)

 1
 2
 3
 4
 5
[torch.DoubleTensor of dimension 5] 

>
</file>

To make a matrix (2-dimensional Tensor), one simply does something like
''x=torch.Tensor(5,5)'' instead:
<file>
x=torch.Tensor(5,5)
for i=1,5 do 
 for j=1,5 do 
   x[i][j]=math.random();
 end
end
</file>

Another way to do the same thing as the code above is provided by the 
[[..:lab:index|lab]] package:

<file>
require "lab"
x=lab.rand(5,5)
</file>

The [[..:lab:index|lab]] package contains a wide variety of commands 
for manipulating Tensors that follow rather closely the equivalent
Matlab commands. For example one can construct Tensors using the commands
[[..:lab:index#lab.ones|ones]], 
[[..:lab:index#lab.zeros|zeros]], 
[[..:lab:index#lab.rand|rand]],
[[..:lab:index#lab.randn|randn]] and
[[..:lab:index#lab.eye|eye]], amongst others.
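
A quick way to get a feel for these constructors is to try a few of
them at the prompt. This is just a small sketch using the functions
linked above, which follow their Matlab namesakes:

<file>
require "lab"
a=lab.ones(2,2)   -- 2x2 matrix filled with ones
b=lab.zeros(3)    -- vector of 3 zeros
c=lab.eye(3)      -- 3x3 identity matrix
print(c)
</file>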

Similarly, row or column-wise operations such as 
[[..:lab:index#lab.sum|sum]] and 
[[..:lab:index#lab.max|max]] are called in the same way:

<file>
> require "lab"
> x1=lab.rand(5,5)
> x2=lab.sum(x1); 
> print(x2) 
 2.3450
 2.7099
 2.5044
 3.6897
 2.4089
[torch.DoubleTensor of dimension 5x1]

>
</file>
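
Tensors built with these commands are indexed with the same bracket
syntax as in the loops above. On a 2-dimensional Tensor, a single index
selects a row (returned as a 1-dimensional Tensor), while two indices
select a single element (returned as a plain Lua number):

<file>
> x=lab.rand(5,5)
> print(x[2])      -- the second row, a Tensor of dimension 5
> print(x[2][3])   -- a single element, a plain number
</file>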


====== Types in Torch7 ======

In Torch7, tensors of different types can be used. By default, all
tensors are created with the ''double'' type: ''torch.Tensor'' is a
convenience alias for ''torch.DoubleTensor''. One can easily switch the
default tensor type to another type, like ''float''.

<file>
> =torch.Tensor()
[torch.DoubleTensor with no dimension]
> torch.setdefaulttensortype('torch.FloatTensor')
> =torch.Tensor()
[torch.FloatTensor with no dimension]
</file>
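
Instead of changing the default, you can also construct a Tensor of a
specific type directly, since each type has its own constructor. If you
changed the default type as above, you probably want to set it back to
''double'' before continuing with the rest of this tutorial:

<file>
> x=torch.FloatTensor(5)    -- single-precision vector of size 5
> y=torch.LongTensor(2,2)   -- 2x2 tensor of long integers
> torch.setdefaulttensortype('torch.DoubleTensor') -- restore the default
</file>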

======  Example: training a neural network ======

We will now show how to train a neural network using the [[..:nn:index|nn]] package
available in Torch.

=====  Torch basics: building a dataset using Lua tables =====

In general, users have the freedom to create any kind of structure
they want for dealing with data.

For example, training a neural network in Torch is achieved simply by
performing a loop over the data and forwarding/backwarding tensors
through the network. The way the dataset is built is thus left to the
user's creativity.


However, if you want to use a convenience class like
[[..:nn:index#nn.StochasticGradient|StochasticGradient]], which basically
runs the training loop for you, you have to follow the dataset
convention of these classes.  (We will discuss manual training of a
network, where one does not use these convenience classes, in a later
section.)

StochasticGradient expects as a ''dataset'' an object which implements
the operator ''dataset[index]'' and implements the method
''dataset:size()''. The ''size()'' method returns the number of
examples and ''dataset[i]'' has to return the i-th example.

An ''example'' has to be an object which implements the operator
''example[field]'', where ''field'' often takes the value ''1'' (for
input features) or ''2'' (for corresponding labels), i.e. an example is
a pair of input and output objects.  The input is usually a Tensor
(exception: if you use a special kind of gradient module, like
[[..:nn:index#nn.TableLayers|table layers]]). The label type depends
on the criterion. For example, the
[[..:nn:index#nn.MSECriterion|MSECriterion]] expects a Tensor, but the
[[..:nn:index#nn.ClassNLLCriterion|ClassNLLCriterion]] expects an
integer (the class).

Such a dataset is easily constructed using Lua tables, but it could be
any object, as long as the required operators/methods are implemented.

Here is an example of making a dataset for an XOR type problem:
<file>
require "lab"
dataset={};
function dataset:size() return 100 end -- 100 examples
for i=1,dataset:size() do 
	local input= lab.randn(2);     --normally distributed example in 2d
	local output= torch.Tensor(1);
	if input[1]*input[2]>0 then    --calculate label for XOR function
		output[1]=-1;
	else
		output[1]=1;
	end
	dataset[i] = {input, output};
end
</file>
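
Note that since only the ''dataset[i]'' operator and the ''size()''
method are required, the dataset does not have to be a plain table of
precomputed examples. As a sketch (a hypothetical variant, not needed
for the rest of this tutorial), the same XOR data can be generated
lazily on each access using a Lua metatable:

<file>
require "lab"
lazyset={}
function lazyset:size() return 100 end
setmetatable(lazyset, {__index = function(self, i)
  -- build a fresh random XOR example whenever lazyset[i] is accessed
  local input=lab.randn(2)
  local output=torch.Tensor(1)
  if input[1]*input[2]>0 then output[1]=-1 else output[1]=1 end
  return {input, output}
end})
</file>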

=====  Torch basics: building a neural network =====

To train a neural network we first need some data.  We can use the XOR data
we just generated in the section before.  Now all that remains is to define
our network architecture, and train it.

To use Neural Networks in Torch you have to require the 
[[..:nn:index|nn]] package. 
A classical feed-forward network is created with the ''Sequential'' object:
<file>
require "nn"
mlp=nn.Sequential();  -- make a multi-layer perceptron
</file>

To build the layers of the network, you simply add the Torch objects 
corresponding to those layers to the //mlp// variable created above.

The two basic objects you might be interested in first are the 
[[..:nn:index#nn.Linear|Linear]] and 
[[..:nn:index#nn.Tanh|Tanh]] layers.
The Linear layer is created with two parameters: the number of input
dimensions, and the number of output dimensions. 
So making a classical feed-forward neural network with one hidden layer with 
//HUs// hidden units is as follows:
<file>
require "nn"
mlp=nn.Sequential();  -- make a multi-layer perceptron
inputs=2; outputs=1; HUs=20;
mlp:add(nn.Linear(inputs,HUs))
mlp:add(nn.Tanh())
mlp:add(nn.Linear(HUs,outputs))
</file>
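
Before training, a quick sanity check is to print the container, which
lists the modules it holds, and to push one random input through the
untrained network to verify the output shape:

<file>
> require "lab"
> print(mlp)            -- lists the Linear, Tanh and Linear modules
> x=lab.randn(2)
> print(mlp:forward(x)) -- a Tensor of dimension 1: the (untrained) output
</file>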


=====  Torch basics: training a neural network =====

Now we're ready to train.
This is done with the following code:
<file>
criterion = nn.MSECriterion()  
trainer = nn.StochasticGradient(mlp, criterion)
trainer.learningRate = 0.01
trainer:train(dataset)
</file>

You should see printed on the screen something like this:
<file>
# StochasticGradient: training
# current error = 0.94550937745458
# current error = 0.83996744568527
# current error = 0.70880093908742
# current error = 0.58663679932706
# current error = 0.49190661630473
[..snip..]
# current error = 0.34533844015756
# current error = 0.344305927029
# current error = 0.34321901952818
# current error = 0.34206793525954
# StochasticGradient: you have reached the maximum number of iterations
</file>

Some other options of the //trainer// that you might be interested in include:
<file>
trainer.maxIteration = 10
trainer.shuffleIndices = false
</file>
See the nn package description of the
[[..:nn:index#nn.StochasticGradient|StochasticGradient]] object
for more details.


=====  Torch basics: testing your neural network =====

To test your network on a single example you can do this:
<file>
x=torch.Tensor(2);   -- create a test example Tensor
x[1]=0.5; x[2]=-0.5; -- set its values
pred=mlp:forward(x)  -- get the prediction of the mlp 
print(pred)          -- print it 
</file>

You should see that your network has learned XOR:
<file>
> x=torch.Tensor(2); x[1]=0.5; x[2]=0.5; print(mlp:forward(x))
-0.5886
[torch.DoubleTensor of dimension 1]

> x=torch.Tensor(2); x[1]=-0.5; x[2]=0.5; print(mlp:forward(x))
 0.9261
[torch.DoubleTensor of dimension 1]

> x=torch.Tensor(2); x[1]=0.5; x[2]=-0.5; print(mlp:forward(x))
 0.7913
[torch.DoubleTensor of dimension 1]

> x=torch.Tensor(2); x[1]=-0.5; x[2]=-0.5; print(mlp:forward(x))
-0.5576
[torch.DoubleTensor of dimension 1]
</file>
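
Four hand-picked points only give a rough impression. For a more
systematic check, you can measure the average squared error over a
batch of freshly generated examples, reusing the data-generation code
from before (a sketch; it assumes ''mlp'' and the ''lab'' package from
the previous sections are still in scope):

<file>
require "lab"
local nTest=100
local err=0
for i=1,nTest do
  local input=lab.randn(2)
  local target=torch.Tensor(1)
  if input[1]*input[2]>0 then target[1]=-1 else target[1]=1 end
  local pred=mlp:forward(input)
  err=err+(pred[1]-target[1])^2   -- accumulate the squared error
end
print(err/nTest)                  -- mean squared error on fresh data
</file>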

=====  Manual Training of a Neural Network =====

Instead of using the [[..:nn:index#nn.StochasticGradient|StochasticGradient]] class
you can directly make the forward and backward calls on the network yourself.
This gives you greater flexibility.
In the following code example we create the same XOR data on the fly
and train each example online.

<file>
require "lab"
criterion = nn.MSECriterion()  
mlp=nn.Sequential();  -- make a multi-layer perceptron
inputs=2; outputs=1; HUs=20;
mlp:add(nn.Linear(inputs,HUs))
mlp:add(nn.Tanh())
mlp:add(nn.Linear(HUs,outputs))

for i = 1,2500 do
  -- random sample
  local input= lab.randn(2);     -- normally distributed example in 2d
  local output= torch.Tensor(1);
  if input[1]*input[2] > 0 then  -- calculate label for XOR function
    output[1] = -1
  else
    output[1] = 1
  end

  -- feed it to the neural network and the criterion
  prediction = mlp:forward(input)
  criterion:forward(prediction, output)

  -- train over this example in 3 steps

  -- (1) zero the accumulation of the gradients
  mlp:zeroGradParameters()

  -- (2) accumulate gradients
  criterion_gradient = criterion:backward(prediction, output)
  mlp:backward(input, criterion_gradient)

  -- (3) update parameters with a 0.01 learning rate
  mlp:updateParameters(0.01)
end
</file>

Super!
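
If you want to keep the trained network around, Torch can serialize any
object to disk. The filename below is of course just an example; see
the torch package documentation for the details of ''torch.save'' and
''torch.load'':

<file>
torch.save('xor.net', mlp)     -- write the trained network to a file
mlp2=torch.load('xor.net')     -- read it back in a later session
</file>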

====== Concluding remarks ======

That's the end of this tutorial, but not the end of what there is to
discover in Torch! To explore more of Torch, take a look at the
[[..:index|Torch package help]], which has been linked to throughout
this tutorial every time we have mentioned one of the basic Torch
object types.  The Torch library reference manual is available
[[..:torch:index|here]], and the list of torch packages installed on
your system can be viewed [[..:index|here]].

Good luck and have fun!