author | Adam Lerer <alerer@fb.com> | 2015-04-29 18:04:06 +0300
---|---|---
committer | Adam Lerer <alerer@fb.com> | 2015-05-06 08:37:00 +0300
commit | f1d93d4f5cb2e93bf80cfa507d05f575065c9eae (patch) |
tree | 0d2c80a45c52b1098cbfd561d212eee9bee6f5f6 /init.lua |
parent | ea3f278ad9093c72a54b10c151dce13d9baa3271 (diff) |
Add :cudaOn() and TransferGPU module.
With cutorch auto-mode, multi-GPU models can be run without explicitly
switching the current device. However, cross-device copies must still be
explicit, so a TransferGPU module is needed to perform the copy between
stages.
Here's an example multi-GPU module (added as a test):
```lua
cutorch.setDevice(1)  -- cutorch devices are 1-indexed
net = nn.Sequential()
for i = 1, 3 do
   net:add(nn.Linear(1000, 1000):cudaOn(i))
   net:add(nn.SoftMax():cudaOn(i))
   if i < 3 then
      -- copy activations (and gradients on backward) to the next GPU
      net:add(nn.TransferGPU(i, i + 1))
   end
end
local input = torch.CudaTensorOn(1, 1000)  -- allocated on device 1
local output = net:forward(input)
print(output:getDevice())
local gradOutput = output / 100
local gradInput = net:backward(input, gradOutput)
print(gradInput:getDevice())
```
Diffstat (limited to 'init.lua')
-rw-r--r-- | init.lua | 10 |
1 file changed, 10 insertions, 0 deletions
```diff
--- a/init.lua
+++ b/init.lua
@@ -3,5 +3,15 @@
 require "nn"
 require "libcunn"
 include('test.lua')
+include('utils.lua')
 include('DataParallelTable.lua')
+include('TransferGPU.lua')
+
+function nn.Module:cudaOn(device)
+   return nn.utils.recursiveCudaOn(self, device)
+end
+
+function nn.Criterion:cudaOn(device)
+   return nn.utils.recursiveCudaOn(self, device)
+end
```
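The helper `nn.utils.recursiveCudaOn` is defined in utils.lua, which is outside this diff (limited to init.lua). Here is a minimal sketch of what such a helper might look like, modeled on nn's `recursiveType`-style traversal; the traversal logic and the per-tensor `:cudaOn(device)` call (a device-targeted variant of `:cuda()`) are assumptions, not the actual utils.lua code:

```lua
-- Hypothetical sketch: walk a module's fields and move every tensor to the
-- requested GPU. utils.lua is not shown in this diff, so this is only an
-- illustration of the technique, not the real implementation.
nn.utils = nn.utils or {}

function nn.utils.recursiveCudaOn(module, device)
   for key, value in pairs(module) do
      if torch.isTensor(value) then
         -- assumed tensor method: like :cuda(), but targeting a given device
         module[key] = value:cudaOn(device)
      elseif type(value) == 'table' then
         -- recurse into nested tables (e.g. a container's .modules list)
         nn.utils.recursiveCudaOn(value, device)
      end
   end
   return module
end
```

Because `nn.Module:cudaOn` and `nn.Criterion:cudaOn` both delegate to this one helper, any module or criterion whose state is reachable through table fields gets moved in a single call.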