author    Guillaume Klein <guillaume.klein@systrangroup.com>  2017-01-03 13:57:20 +0300
committer Guillaume Klein <guillaume.klein@systrangroup.com>  2017-01-03 13:57:20 +0300
commit    1ded236c9e100e1e994c1b6c28c6617d800956df (patch)
tree      d58dbfae38bb6a41fa5bab0eca595672cd80797a /PartialLinear.lua
parent    e37c33d04eef3bcd7588eb85f3be580116b82f86 (diff)
Fix shared function override for specific modules
Diffstat (limited to 'PartialLinear.lua')
-rw-r--r--  PartialLinear.lua  7
1 file changed, 4 insertions, 3 deletions
diff --git a/PartialLinear.lua b/PartialLinear.lua
index d208f52..6e92cfc 100644
--- a/PartialLinear.lua
+++ b/PartialLinear.lua
@@ -102,9 +102,10 @@ function PartialLinear:updateParameters(learningRate)
    self.bias:add(-learningRate, self.gradBias)
 end
 
--- we do not need to accumulate parameters when sharing
-PartialLinear.sharedAccUpdateGradParameters =
-   PartialLinear.accUpdateGradParameters
+function PartialLinear:sharedAccUpdateGradParameters(input, gradOutput, lr)
+   -- we do not need to accumulate parameters when sharing:
+   self:defaultAccUpdateGradParameters(input, gradOutput, lr)
+end
 
 function PartialLinear:__tostring__()
    return torch.type(self) ..
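The change replaces a direct function assignment (`PartialLinear.sharedAccUpdateGradParameters = PartialLinear.accUpdateGradParameters`) with a method that dispatches through `self`. The likely motivation: an assignment captures whichever function is installed on the class at definition time, so a module deriving from `PartialLinear` that overrides `accUpdateGradParameters` would still have the old function called via the shared path. A minimal Lua sketch of this pitfall, using hypothetical names (`Base`, `Derived`, `work`) rather than the actual `nn` classes:

```lua
-- Sketch only, assuming plain metatable-based inheritance as in torch.class.
local Base = {}
Base.__index = Base

function Base:work() return "base work" end

-- Frozen: captures Base.work at assignment time; later overrides are ignored.
Base.sharedFrozen = Base.work

-- Dispatching: resolves self:work() at call time, so overrides take effect.
function Base:sharedDispatch() return self:work() end

local Derived = setmetatable({}, {__index = Base})
Derived.__index = Derived
function Derived:work() return "derived work" end

local d = setmetatable({}, Derived)
print(d:sharedFrozen())   -- "base work"    (override bypassed)
print(d:sharedDispatch()) -- "derived work" (override honored)
```

The patched code follows the second pattern, forwarding to `self:defaultAccUpdateGradParameters(...)` so that modules specializing `PartialLinear` behave consistently when parameters are shared.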