The BatchNormalization modules now all extend nn.BatchNormalization and
use the same THNN/THCUNN implementation.
Add THNN conversion of SpatialBatchNormalization, SpatialFractionalMaxPooling and SpatialSubSampling
Add THNN conversion of SpatialConvolutionLocal, SpatialFullConvolution and SpatialUpSamplingNearest
THNN conversion of SpatialMaxUnpooling
Remove unfold from generic
Add functional conversion of SpatialCrossMapLRN
Plus a fix in init.c
Fix
Fix typo and copy-paste mistake
The old-style running_std is actually E[1/sqrt(var + eps)]. I forgot
to subtract out the eps when converting to running_var.
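The fix above amounts to inverting the stored statistic correctly: since the old running_std approximates E[1/sqrt(var + eps)], recovering the variance means squaring, taking the reciprocal, and then subtracting eps back out. A minimal sketch of that conversion (the function name is illustrative, not the repository's actual converter):

```python
import math

def running_std_to_running_var(running_std, eps):
    """Convert an old-style running_std entry to a running_var entry.

    Old-style BatchNormalization stored running_std ~ E[1/sqrt(var + eps)],
    so 1/running_std**2 gives var + eps; the eps must be subtracted out.
    """
    return 1.0 / (running_std ** 2) - eps

# Round trip: build the old-style statistic from a known variance,
# then recover that variance.
eps = 1e-5
var = 0.25
running_std = 1.0 / math.sqrt(var + eps)
recovered = running_std_to_running_var(running_std, eps)
```

Forgetting the `- eps` term would bias every converted variance upward by eps, which is exactly the copy-paste-style mistake the commit describes.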
This is primarily to support the fast, memory-efficient CUDA
implementation. Some other changes include making weight and bias each
individually optional and averaging the variances instead of the
inverse standard deviation.
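The two behavioral changes described here (individually optional weight/bias, and running statistics tracking the variance rather than the inverse standard deviation) can be sketched in NumPy as follows. This is a simplified illustration under assumed names, not the THNN/THCUNN code:

```python
import numpy as np

def batchnorm_train_step(x, running_mean, running_var,
                         weight=None, bias=None, momentum=0.1, eps=1e-5):
    """One training-mode batch-normalization forward pass (sketch).

    Running statistics average the variance directly, and weight (scale)
    and bias (shift) are each applied only if provided.
    """
    mean = x.mean(axis=0)
    var = x.var(axis=0)                      # biased batch variance
    y = (x - mean) / np.sqrt(var + eps)      # normalized activations
    if weight is not None:                   # scale is individually optional
        y = y * weight
    if bias is not None:                     # shift is individually optional
        y = y + bias
    # Exponential moving averages of the batch statistics.
    running_mean = (1 - momentum) * running_mean + momentum * mean
    running_var = (1 - momentum) * running_var + momentum * var
    return y, running_mean, running_var
```

Averaging variances is the numerically natural choice here: variances combine linearly across batches, whereas averaging inverse standard deviations does not correspond to any simple statistic of the data.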
Not initialising those variables saved 30% of GPU memory when not training.
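The saving comes from allocating the training-only work buffers lazily, so an evaluation-only network never pays for them. A sketch of the pattern (class and field names are illustrative, not the module's actual fields):

```python
class BatchNormSketch:
    """Lazily allocate buffers that only the training pass needs."""

    def __init__(self, num_features):
        self.num_features = num_features
        self.save_mean = None  # allocated on first training forward
        self.save_std = None

    def forward(self, train):
        if train and self.save_mean is None:
            # Buffers are created only when training actually happens,
            # so inference-only use keeps memory for them at zero.
            self.save_mean = [0.0] * self.num_features
            self.save_std = [0.0] * self.num_features
        return self.save_mean is not None
```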
fixing batchnorm tests