
Commit log for github.com/torch/nn.git
2014-02-01  Add extra tests for SparseLinear and fix a bug where scale was not being multiplied into bias updates  (Páidí Creed)
2014-01-28  Merge remote-tracking branch 'upstream/master'  (Páidí Creed)
2013-11-27  Less strict about ANSI libraries; fewer warnings  (Leon Bottou)
2013-11-18  Fixed PairwiseDistance type() function bug. The overloaded type now returns self.  (Jonathan Tompson)
2013-11-18  Fixed a bug in PairwiseDistance where the gradInput table wasn't converted when the Module.type function is called (this bug has always existed and is not due to the recent changes).  (Jonathan Tompson)
2013-11-06  Merge remote-tracking branch 'upstream/master'  (Páidí Creed)
            Conflicts: extra/nn/test/test.lua
2013-10-29  Fix SparseLinear  (Páidí Creed)
            Fixed an issue with SparseLinear and added a test, along with a new class SparseJacobian for testing sparse modules.
2013-10-21  Going back on Clement's suggestion. It was a good idea, but we were needlessly losing performance in the 1D case with the input clone.  (Jonathan Tompson)
2013-10-20  Fixed a bug in PairwiseDistance when the output Lp norm is zero (which results in a divide-by-zero). Rewrote PairwiseDistance following Clement's suggestion to have only one code path. Fixed a small bug in extra/test/test.lua where the input to the non-batch fprop test was zero.  (Jonathan Tompson)
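The divide-by-zero fixed above arises because the gradient of the Lp distance divides by the distance itself, and the distance is zero when the two inputs coincide. A minimal pure-Python sketch of the problem and a clamp-style guard (this is an illustration, not the Lua implementation; the eps value is an arbitrary choice):

```python
import math

def pairwise_dist_grad(x, y, p=2, eps=1e-12):
    """Gradient of d = ||x - y||_p with respect to x.

    The analytic gradient divides by d**(p-1); when x == y the
    distance is zero, so the denominator is clamped to avoid 0/0.
    """
    diff = [a - b for a, b in zip(x, y)]
    d = sum(abs(v) ** p for v in diff) ** (1.0 / p)
    denom = max(d ** (p - 1), eps)        # clamp avoids divide-by-zero
    return [math.copysign(abs(v) ** (p - 1), v) / denom for v in diff]

grad = pairwise_dist_grad([1.0, 2.0], [1.0, 2.0])  # identical inputs: d == 0
print(all(math.isfinite(g) for g in grad))          # True: no NaN/inf
```

Without the clamp, identical inputs would produce 0/0 = NaN in every gradient component, which is exactly the failure mode the commit describes.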
2013-10-19  It seems torch.abs() doesn't have a CUDA implementation, so the previous commit would fail when PairwiseDistance():cuda() was called.  (Jonathan Tompson)
2013-10-19  Fixed the bprop in PairwiseDistance for p-norms other than one. The gradInput has always been wrong, it seems; the sign of the gradient was correct but the magnitude was wrong. Also added a test in extra/nn/test/test.lua for nn.PairwiseDistance that exercises both the batch and non-batch code paths for a few different p-norms.  (Jonathan Tompson)
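The magnitude error described above is easy to make: for p ≠ 1, the per-element gradient of ||v||_p is sign(v_i)·|v_i|^(p-1) / ||v||_p^(p-1), not just sign(v_i). A quick finite-difference sanity check of that formula (a sketch, not the Torch test suite):

```python
import math

def lp_norm(v, p):
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

def lp_grad(v, p):
    # d||v||_p / dv_i = sign(v_i) * |v_i|**(p-1) / ||v||_p**(p-1)
    n = lp_norm(v, p)
    return [math.copysign(abs(x) ** (p - 1), x) / n ** (p - 1) for x in v]

# compare the analytic gradient against a forward finite difference
v, p, h = [1.5, -2.0, 0.5], 3, 1e-6
num = [(lp_norm(v[:i] + [v[i] + h] + v[i + 1:], p) - lp_norm(v, p)) / h
       for i in range(len(v))]
ana = lp_grad(v, p)
print(all(abs(a - b) < 1e-4 for a, b in zip(num, ana)))  # True
```

Dropping the |v_i|^(p-1) / ||v||_p^(p-1) factor leaves the sign right but the magnitude wrong, matching the bug report in the commit.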
2013-10-18  Fixed PairwiseDistance for odd Lp norms  (Jonathan Tompson)
2013-10-17  Init self.output with a single-element tensor, not an empty tensor  (koray kavukcuoglu)
2013-10-17  Correct the stochastic-case check  (koray kavukcuoglu)
2013-10-16  Merge pull request #163 from vladmnih/batch  (Clement Farabet)
            Added minibatch support to MarginRankingCriterion and PairwiseDistance.
2013-10-16  C89  (Ronan Collobert)
2013-10-16  Added strict C89 flags  (Ronan Collobert)
2013-10-15  Add 3D convolution documentation  (koray kavukcuoglu)
2013-10-15  Add 3D max pooling  (koray kavukcuoglu)
2013-10-13  Move index declaration into pragma  (koray kavukcuoglu)
2013-10-13  Move k into thread-private loop  (koray kavukcuoglu)
2013-09-27  Remove WeightedMSECriterion test since it fails with an unknown module  (koray kavukcuoglu)
2013-10-08  pkg/nn: C89  (Ronan Collobert)
2013-09-04  Added support for the partialSum parameter to the SpatialConvolutionCUDA module.  (Volodymyr Mnih)
2013-09-04  Added minibatch support to MarginRankingCriterion and PairwiseDistance.  (Volodymyr Mnih)
2013-08-11  luaopen_xxx functions need LUA_EXTERNC  (Leon Bottou)
2013-06-11  Using a default scale in Module.backward().  (Clement Farabet)
            This should not affect anything, as modules are always used within containers. Mostly important during testing.
2013-06-11  More CUDA testing.  (Clement Farabet)
2013-06-08  Exposed padding parameter (SpatialConvolutionCUDA)  (Clement Farabet)
2013-04-16  CMul should resize the gradInput into the same shape as the input.  (koray kavukcuoglu)
2013-03-26  Fixed a missing </file>.  (Ivo Danihelka)
2013-03-23  Merge pull request #117 from akfidjeland/linear_nan  (koray kavukcuoglu)
            Linear:updateGradInput avoids NaN and inf
2013-03-23  Sped up getParameters() in simple situations.  (Clement Farabet)
2013-03-23  Correct implementation of SpatialMaxPoolingCUDA (Sixin's port)  (Clement Farabet)
2013-03-22  Merge branch 'master' of github.com:andresy/torch  (Clement Farabet)
2013-03-22  Added bias to SpatialConvolutionCUDA.  (Clement Farabet)
2013-03-20  Added max pooling for CUDA.  (Clement Farabet)
2013-03-20  Integrated CUDA conv routines from the cuda-convnet library.  (Clement Farabet)
2013-03-19  CMulTable updateGradInput now uses the proper but slow gradient calculation. The old one is available in the updateGradInput_efficient function.  (koray kavukcuoglu)
2013-03-13  Linear:updateGradInput avoids NaN and inf  (Andreas Fidjeland)
            The resize in Linear:updateGradInput can introduce NaN and inf into the gradients: the resize itself leaves garbage in the gradInput tensor. For normal numbers the subsequent addmm/addmv clears the garbage, but if gradInput contains NaN or inf after the resize, the multiply produces NaN instead of the desired result.
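The failure mode in that commit is a general IEEE-754 property: multiplying garbage by zero does not clear it, because nan * 0.0 is nan and inf * 0.0 is nan. A small Python illustration of the principle (a model of "beta*old + alpha*new" accumulation with beta = 0, not the actual Torch code path):

```python
import math

# A freshly resized buffer may contain whatever bytes were there before.
# If the garbage is a normal number, blending it away with a zero
# coefficient works; if the garbage is nan or inf, the result stays poisoned.
for garbage in (123.456, float("nan"), float("inf")):
    blended = 0.0 * garbage + 1.0   # scale old contents by 0, add new result
    print(garbage, "->", blended)
# 123.456 -> 1.0, but both nan and inf yield nan

print(math.isnan(0.0 * float("nan") + 1.0))  # True
print(math.isnan(0.0 * float("inf") + 1.0))  # True
```

This is why the fix has to explicitly zero (or fully overwrite) the resized tensor rather than rely on arithmetic to wash the old contents out.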
2013-02-23  Merge pull request #115 from fidlej/topic_Copy  (Clement Farabet)
            Fixed nn.Copy with default arguments.
2013-02-23  Fixed nn.Copy with default arguments.  (Ivo Danihelka)
2013-02-23  Corrected getParameters() documentation.  (Ivo Danihelka)
2013-02-20  Merge branch 'nn_fast_reset'  (Clement Farabet)
2013-02-20  Merge branch 'master' into nn_fast_reset  (Clement Farabet)
2013-01-03  New NN classes  (koray kavukcuoglu)
            extra/nn/L1Cost.lua: L1 penalty
            extra/nn/SpatialFullConvolution.lua: full convolution
            extra/nn/SpatialFullConvolutionMap.lua: full convolution with a connection table
            extra/nn/TanhShrink.lua: shrinkage with x - tanh(x)
            extra/nn/WeightedMSECriterion.lua: mean squared error with a weighting mask on the target
            These classes are commonly used for unsupervised training of convolutional auto-encoders.
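Of the new modules, TanhShrink is the simplest to state: its forward pass is f(x) = x - tanh(x), whose derivative is 1 - sech(x)^2 = tanh(x)^2, so small inputs are shrunk toward zero while large inputs pass through almost unchanged. A scalar sketch in Python (an illustration; the actual module is extra/nn/TanhShrink.lua):

```python
import math

def tanhshrink(x):
    """TanhShrink forward: f(x) = x - tanh(x)."""
    return x - math.tanh(x)

def tanhshrink_grad(x):
    """f'(x) = 1 - sech(x)^2 = tanh(x)^2: near-zero gradient around the
    origin, approaching 1 for large |x| -- a soft shrinkage toward zero."""
    return math.tanh(x) ** 2

print(tanhshrink(0.0))             # 0.0: small inputs are shrunk to ~0
print(round(tanhshrink(5.0), 4))   # 4.0001: large inputs lose only ~tanh(5)
```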
2012-12-07  Document the getParameters() function  (Julien)
2012-12-06  Fix Jacobian when using direct updates  (Ronan Collobert)
2012-11-30  Updated flatten() in hessian.lua to be synchronized with the version in Module:getParameters().  (Ivo Danihelka)
2012-11-29  Improved Module:getParameters() speed when using many storages.  (Ivo Danihelka)