<a name="cunn.dok"/>
# CUDA backend for the Neural Network Package #

This package provides a CUDA implementation for many of the modules in the base neural network package, [nn](https://github.com/torch/nn/blob/master/README.md).
 * [Modules](doc/cunnmodules.md#nn.cunnmodules.dok): additional GPU-related modules not found in the nn package.

## Installing from source
```bash
git clone https://github.com/torch/cunn
cd cunn
luarocks make rocks/cunn-scm-1.rockspec
```
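
As a quick sanity check after installing (assuming a CUDA-capable GPU is visible to the driver), load the package and query the device count:
```lua
require 'cunn'                   -- also loads cutorch and nn
print(cutorch.getDeviceCount())  -- number of CUDA devices visible to torch
```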

## To use

Simply convert your network model to CUDA by calling `:cuda()`:
```lua
local model = nn.Sequential()
model:add(nn.Linear(2,2))
model:add(nn.LogSoftMax())

model:cuda()  -- convert model to CUDA
```

... and similarly for your tensors:
```lua
local input = torch.Tensor(32,2):uniform()
input = input:cuda()
local output = model:forward(input)
```
... or create them directly as `CudaTensor`s:
```lua
local input = torch.CudaTensor(32,2):uniform()
local output = model:forward(input)
```
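
Putting these together, here is a minimal sketch of one training step. Note that the criterion must be converted with `:cuda()` as well; the learning rate of 0.1 is arbitrary:
```lua
require 'cunn'

local model = nn.Sequential()
model:add(nn.Linear(2, 2))
model:add(nn.LogSoftMax())
model:cuda()                                     -- move parameters to the GPU

local criterion = nn.ClassNLLCriterion():cuda()  -- criteria need :cuda() too

local input = torch.CudaTensor(32, 2):uniform()
local target = torch.Tensor(32):random(2):cuda() -- class labels in {1, 2}

-- one forward/backward pass, followed by an SGD update
local output = model:forward(input)
local loss = criterion:forward(output, target)
model:zeroGradParameters()
model:backward(input, criterion:backward(output, target))
model:updateParameters(0.1)
print(loss)
```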

## To run unit tests

```bash
luajit -l cunn -e 'cunn.test()'
```

## GPU Training Concepts

__Performance__

* data should be transferred between main memory and the GPU in batches; otherwise the
transfer time will be dominated by per-transfer latency and launch overheads, rather than by bandwidth
* therefore, train and predict using mini-batches
* allocating GPU memory causes a sync-point, which will noticeably affect performance
  * therefore, try to allocate any `CudaTensor`s once, at the start of the program,
  and then simply copy data back and forth between main memory and the existing
  `CudaTensor`s (see the sketch at the end of this section)
* similarly, try to avoid any operations that implicitly allocate new tensors.  For example, if you write:
```lua
require 'cutorch'

local a = torch.CudaTensor(1000):uniform()
for it = 1, 1000 do
  local b = torch.add(a, 1)  -- allocates a fresh CudaTensor on every iteration
end
```
... this will allocate one thousand new `CudaTensor`s, one per call to `torch.add(a, 1)`.

Instead, use this form:
```lua
require 'cutorch'

local a = torch.CudaTensor(1000):uniform()
local b = torch.CudaTensor(1000)  -- allocated once, before the loop
for it = 1, 1000 do
  b:add(a, 1)  -- writes a + 1 into the existing b; no new allocation
end
```
In this form, `b` is allocated only once, before the loop.  Each `b:add(a, 1)` then performs
the add inside a GPU kernel and stores the result into the existing `b` `CudaTensor`.  This
generally runs noticeably faster, keeps memory usage bounded, and reduces the need for
frequent calls to `collectgarbage(); collectgarbage()`.
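
As promised above, here is a sketch of the allocate-once, copy-many pattern applied to mini-batch training; `nEpochs`, `batches`, and the `input`/`target` fields are hypothetical placeholders for your own data pipeline:
```lua
require 'cunn'

-- allocate the device-side buffers once ...
local gpuInput = torch.CudaTensor()
local gpuTarget = torch.CudaTensor()

for epoch = 1, nEpochs do              -- nEpochs: hypothetical
  for _, batch in ipairs(batches) do   -- batches: hypothetical host-side mini-batches
    -- ... then reuse them: resize is free once shapes stabilize,
    -- and copy() performs the host-to-device transfer
    gpuInput:resize(batch.input:size()):copy(batch.input)
    gpuTarget:resize(batch.target:size()):copy(batch.target)
    -- forward/backward on gpuInput and gpuTarget here
  end
end
```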

__Benchmarking__

* GPU operations run asynchronously: a call typically returns once its kernel has been queued, not once the kernel has completed
* for example, if you do:
```lua
require 'cutorch'
local a = torch.CudaTensor(1000,1000):uniform()
a:add(1)
```
... `a:add(1)` merely schedules the add-1 kernel for launch.  The kernel might not have completed, or
even reached the GPU, by the time `a:add(1)` returns
* therefore, for wall-clock timings, you should call `cutorch.synchronize()` before each time-check
point:
```lua
require 'cutorch'
require 'sys'

local a = torch.CudaTensor(1000, 1000):uniform()
cutorch.synchronize()  -- drain pending work before starting the clock
sys.tic()
a:add(1)
cutorch.synchronize()  -- wait for the kernel to finish before stopping the clock
print(sys.toc())
```
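
The same pattern extends to timing a loop: queue all the iterations, synchronize once at the end, and average. A sketch (the iteration count of 100 is arbitrary):
```lua
require 'cutorch'
require 'sys'

local a = torch.CudaTensor(1000, 1000):uniform()
cutorch.synchronize()   -- start from an idle GPU
sys.tic()
for it = 1, 100 do
  a:add(1)              -- each call only queues a kernel
end
cutorch.synchronize()   -- wait for all queued kernels to complete
print(sys.toc() / 100)  -- average seconds per kernel
```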