github.com/torch/cunn.git

author    Adam Lerer <alerer@fb.com>  2015-04-29 18:04:06 +0300
committer Adam Lerer <alerer@fb.com>  2015-05-06 08:37:00 +0300
commit    f1d93d4f5cb2e93bf80cfa507d05f575065c9eae (patch)
tree      0d2c80a45c52b1098cbfd561d212eee9bee6f5f6 /init.lua
parent    ea3f278ad9093c72a54b10c151dce13d9baa3271 (diff)
Add :cudaOn() and TransferGPU module.
With cutorch auto-mode, multi-GPU models can be run without explicitly updating the device. However, copies must be explicit, so a TransferGPU module is necessary to make the copy. Here's an example multi-GPU module (added as a test):

```lua
cutorch.setDevice(0)
net = nn.Sequential()
for i = 1,3 do
   net:add(nn.Linear(1000,1000):cudaOn(i))
   net:add(nn.SoftMax():cudaOn(i))
   net:add(nn.TransferGPU(i, i+1))
end
local input = torch.CudaTensorOn(1, 1000)
local output = net:forward(input)
print(output:getDevice())
local gradOutput = output/100
local gradInput = net:backward(input, gradOutput)
print(gradInput:getDevice())
```
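TransferGPU.lua itself is added by this commit but does not appear in this diff, which is limited to init.lua. As a rough idea of what an explicit cross-GPU copy module involves, here is a hypothetical sketch, assuming only the cutorch getDevice/setDevice API and the nn.TransferGPU(srcDevice, dstDevice) signature used in the example above; the body is an assumption, not the commit's actual code:

```lua
-- Hypothetical sketch of a TransferGPU-style module (not the commit's
-- actual TransferGPU.lua): forward copies activations to the
-- destination GPU, backward copies gradients back to the source GPU.
local TransferGPU, parent = torch.class('nn.TransferGPU', 'nn.Module')

function TransferGPU:__init(inputDevice, outputDevice)
   parent.__init(self)
   self.inputDevice = inputDevice
   self.outputDevice = outputDevice
   local prev = cutorch.getDevice()
   cutorch.setDevice(outputDevice)
   self.output = torch.CudaTensor()     -- allocated on the destination GPU
   cutorch.setDevice(inputDevice)
   self.gradInput = torch.CudaTensor()  -- allocated on the source GPU
   cutorch.setDevice(prev)
end

function TransferGPU:updateOutput(input)
   -- the copy itself is the explicit cross-GPU transfer; auto-mode lets
   -- the operation run on the device self.output lives on
   self.output:resize(input:size()):copy(input)
   return self.output
end

function TransferGPU:updateGradInput(input, gradOutput)
   self.gradInput:resize(gradOutput:size()):copy(gradOutput)
   return self.gradInput
end
```

The copy in updateOutput is the explicit transfer the commit message refers to; with cutorch auto-mode, the surrounding layers can then run on whichever device their weights live on.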
Diffstat (limited to 'init.lua')
-rw-r--r--  init.lua | 10
1 file changed, 10 insertions, 0 deletions
diff --git a/init.lua b/init.lua
index 0145570..0f2e2ab 100644
--- a/init.lua
+++ b/init.lua
@@ -3,5 +3,15 @@ require "nn"
require "libcunn"
include('test.lua')
+include('utils.lua')
include('DataParallelTable.lua')
+include('TransferGPU.lua')
+
+function nn.Module:cudaOn(device)
+ return nn.utils.recursiveCudaOn(self, device)
+end
+
+function nn.Criterion:cudaOn(device)
+ return nn.utils.recursiveCudaOn(self, device)
+end
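Both cudaOn methods delegate to nn.utils.recursiveCudaOn, which comes from the newly included utils.lua and is likewise outside this diff. A minimal sketch of what such a helper might do, assuming it converts every tensor reachable from the module into a CudaTensor allocated on the requested device; beyond the recursiveCudaOn name itself, the body is an assumption:

```lua
-- Hypothetical sketch of nn.utils.recursiveCudaOn (the real version
-- lives in utils.lua, which this commit adds but the diff omits).
nn.utils = nn.utils or {}

function nn.utils.recursiveCudaOn(module, device)
   local prev = cutorch.getDevice()
   cutorch.setDevice(device)          -- new CUDA allocations land here
   local function convert(param)
      if torch.isTensor(param) then
         return param:cuda()          -- copies onto the active device
      elseif type(param) == 'table' then
         -- nn modules are Lua tables, so this also recurses into
         -- containers via their 'modules' field
         for k, v in pairs(param) do
            param[k] = convert(v)
         end
      end
      return param
   end
   convert(module)
   cutorch.setDevice(prev)
   return module
end
```

Switching the device once up front means every :cuda() conversion in the traversal lands on the target GPU, which matches how the example network pins each Linear/SoftMax pair to its own device.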