
github.com/torch/nn.git
author    Guillaume Klein <guillaume.klein@systrangroup.com>  2017-01-03 13:57:20 +0300
committer Guillaume Klein <guillaume.klein@systrangroup.com>  2017-01-03 13:57:20 +0300
commit    1ded236c9e100e1e994c1b6c28c6617d800956df (patch)
tree      d58dbfae38bb6a41fa5bab0eca595672cd80797a /PartialLinear.lua
parent    e37c33d04eef3bcd7588eb85f3be580116b82f86 (diff)
Fix shared function override for specific modules
Diffstat (limited to 'PartialLinear.lua')
-rw-r--r--  PartialLinear.lua | 7
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/PartialLinear.lua b/PartialLinear.lua
index d208f52..6e92cfc 100644
--- a/PartialLinear.lua
+++ b/PartialLinear.lua
@@ -102,9 +102,10 @@ function PartialLinear:updateParameters(learningRate)
self.bias:add(-learningRate, self.gradBias)
end
--- we do not need to accumulate parameters when sharing
-PartialLinear.sharedAccUpdateGradParameters =
- PartialLinear.accUpdateGradParameters
+function PartialLinear:sharedAccUpdateGradParameters(input, gradOutput, lr)
+ -- we do not need to accumulate parameters when sharing:
+ self:defaultAccUpdateGradParameters(input, gradOutput, lr)
+end
function PartialLinear:__tostring__()
return torch.type(self) ..
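
For context, an illustrative sketch of why this change matters (not part of the commit; the `Base` table below is hypothetical, not from torch/nn): the old pattern, `PartialLinear.sharedAccUpdateGradParameters = PartialLinear.accUpdateGradParameters`, copies the function reference at definition time, so a later override of `accUpdateGradParameters` is never seen through the shared alias. Defining `sharedAccUpdateGradParameters` as a real method that dispatches through `self` at call time avoids the stale reference.

```lua
-- Hypothetical minimal class table; not from torch/nn.
local Base = {}
Base.__index = Base

function Base:accUpdateGradParameters()
   return "original"
end

-- Old pattern: snapshots the current function reference.
Base.sharedAccUpdateGradParameters = Base.accUpdateGradParameters

-- A later override (e.g. from a patch or a subclass)...
function Base:accUpdateGradParameters()
   return "overridden"
end

local obj = setmetatable({}, Base)
print(obj:accUpdateGradParameters())        -- "overridden"
print(obj:sharedAccUpdateGradParameters())  -- still "original": stale alias

-- New pattern: dispatch through self at call time, so overrides are honored.
function Base:sharedAccUpdateGradParameters(...)
   return self:accUpdateGradParameters(...)
end
print(obj:sharedAccUpdateGradParameters())  -- "overridden"
```

The committed fix follows the same principle, except that it forwards to `defaultAccUpdateGradParameters` rather than `accUpdateGradParameters`, since for this module the shared variant should skip gradient accumulation.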