
github.com/torch/nn.git
author    Gregory Chanan <gchanan@fb.com>    2017-05-03 21:03:58 +0300
committer Soumith Chintala <soumith@gmail.com>    2017-05-09 21:55:29 +0300
commit    75fa86b20ac188fd794a7a38cb75921b97ac5fc9 (patch)
tree      b22567b4d98529d74224316c1d98c874ce85f2c7 /lib
parent    3752f2426b55bc32cbd0ef112649d47dc674baa8 (diff)
Add a keepdim parameter for reduction functions over a single dimension.
By default this parameter is False -- a backwards-incompatible change, but one that follows numpy semantics, e.g. numpy.sum (numpy names the parameter "keepdims", since you can pass multiple dims to reduction functions).

The old behavior seems desirable for normalization-type operations, where the tensor is immediately expanded out again, e.g. probs.sum(1).expand_as(probs), which no longer works because the dimension to expand is missing. This can be fixed by simply passing True as the "keepdim" argument to the reduction operation, e.g. probs.sum(1, keepdim=True).expand_as(probs).
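A rough sketch of the numpy semantics the message points to; the probs array and its shape below are illustrative stand-ins, not code from this commit:

    import numpy as np

    probs = np.random.rand(4, 3)   # illustrative shape only

    # Without keepdims the reduced dimension disappears (the new default),
    # so the result can no longer be expanded/broadcast back over that dimension as-is.
    print(probs.sum(axis=1).shape)                 # (4,)

    # keepdims=True retains the reduced dimension with size 1, which is what
    # the old probs.sum(1).expand_as(probs) idiom relied on.
    print(probs.sum(axis=1, keepdims=True).shape)  # (4, 1)

    # Typical normalization pattern: keep the dimension so broadcasting works.
    normalized = probs / probs.sum(axis=1, keepdims=True)
    print(normalized.sum(axis=1))                  # each row now sums to 1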
Diffstat (limited to 'lib')
-rw-r--r--  lib/THNN/generic/SparseLinear.c         2
-rw-r--r--  lib/THNN/generic/TemporalSubSampling.c  4
2 files changed, 3 insertions, 3 deletions
diff --git a/lib/THNN/generic/SparseLinear.c b/lib/THNN/generic/SparseLinear.c
index d9cec8c..1cf7122 100644
--- a/lib/THNN/generic/SparseLinear.c
+++ b/lib/THNN/generic/SparseLinear.c
@@ -234,7 +234,7 @@ void THNN_(SparseLinear_accGradParameters)(
// gradBias += gradOutput
THTensor* buf = THTensor_(new)();
- THTensor_(sum)(buf, gradOutput, 0);
+ THTensor_(sum)(buf, gradOutput, 0, 1);
THTensor_(cadd)(gradBias, gradBias, scale, buf);
THTensor_(free)(buf);
THLongTensor_free(csc);
diff --git a/lib/THNN/generic/TemporalSubSampling.c b/lib/THNN/generic/TemporalSubSampling.c
index 6b788df..68f35e2 100644
--- a/lib/THNN/generic/TemporalSubSampling.c
+++ b/lib/THNN/generic/TemporalSubSampling.c
@@ -70,7 +70,7 @@ void THNN_(TemporalSubSampling_updateOutput)(
{
THTensor_(narrow)(inputWindow, input, 0, k*dW, kW);
THTensor_(select)(outputFrame, output, 0, k);
- THTensor_(sum)(outputFrame, inputWindow, 0);
+ THTensor_(sum)(outputFrame, inputWindow, 0, 1);
THTensor_(cmul)(outputFrame, outputFrame, weight);
THTensor_(cadd)(outputFrame, outputFrame, 1, bias);
}
@@ -143,7 +143,7 @@ void THNN_(TemporalSubSampling_accGradParameters)(
{
THTensor_(narrow)(inputWindow, input, 0, k*dW, kW);
THTensor_(select)(gradOutputFrame, gradOutput, 0, k);
- THTensor_(sum)(buffer, inputWindow, 0);
+ THTensor_(sum)(buffer, inputWindow, 0, 1);
THTensor_(addcmul)(gradWeight, gradWeight, scale, buffer, gradOutputFrame);
THTensor_(cadd)(gradBias, gradBias, scale, gradOutputFrame);
}
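For orientation, the updateOutput loop patched above sums a window of kW input frames, scales the sum elementwise by weight, and adds bias. A minimal numpy sketch of that computation follows; the function name and shapes are assumptions for illustration, not THNN API:

    import numpy as np

    def temporal_subsample_forward(x, weight, bias, kW, dW):
        """x: (nInputFrame, frameSize); weight, bias: (frameSize,)."""
        nOutputFrame = (x.shape[0] - kW) // dW + 1
        rows = []
        for k in range(nOutputFrame):
            # keepdims=True plays the role of the trailing 1 now passed to
            # THTensor_(sum): the summed window keeps its reduced dimension
            # before being scaled elementwise.
            window_sum = x[k * dW : k * dW + kW].sum(axis=0, keepdims=True)  # (1, frameSize)
            rows.append(window_sum * weight + bias)
        return np.concatenate(rows, axis=0)  # (nOutputFrame, frameSize)

    # Example: 10 input frames of size 5, window kW=3, step dW=2 -> 4 output frames.
    out = temporal_subsample_forward(np.random.rand(10, 5), np.random.rand(5),
                                     np.random.rand(5), kW=3, dW=2)
    print(out.shape)  # (4, 5)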