github.com/soumith/cudnn.torch.git
path: root/test

Age | Commit message | Author
2016-08-06 | Revert "Refactoring CUDNN Find" (revert-231-algo) | Soumith Chintala
2016-08-04 | Completing cudnnFind refactoring; addressing code review notes | Boris Fomitchev
2016-08-03 | Merge remote-tracking branch 'upstream/master' into algo | Boris Fomitchev
2016-08-03 | Refactoring cudnnFind | Boris Fomitchev
2016-08-02 | Refactoring CUDNN Find | Boris Fomitchev
2016-07-29 | Adjusting test tolerance, disabling double test | Boris Fomitchev
2016-07-29 | Merge pull request #206 from szagoruyko/fp16 | Soumith Chintala
half and double with tests
2016-07-07 | Added bias and weight functions for RNN | SeanNaren
2016-06-23 | deal with fp16 batchnorm | Sergey Zagoruyko
2016-06-23 | half, double with tests | Sergey Zagoruyko
2016-06-11 | Modified tests and params, updated docs for clipped ReLU | SeanNaren
2016-06-11 | Added clipped ReLU | SeanNaren
2016-06-08 | Natalia's fix for a corner case where both the reference impl and cudnn produce NaNs | Boris Fomitchev
2016-05-22 | add avg pooling back compat test | Sergey Zagoruyko
2016-05-17 | fix for V5 GA RNN APIs | Natalia Gimelshein
2016-04-18 | Added tests, modified README and added RNN modules | SeanNaren
2016-04-18 | Add utility functions and correct tensor dimensions. | Anthony Sandrin
Tensor dimensions needed to be reversed since the RNN functions expect stride 1 in the first dimension (see the layout sketch below).
Remove experimental RNN scripts. Make inputSize a parameter to the RNN layer and use strings for enum values.
* add missing cutorch.synchronize() to SpatialConvolution benchmark
* rename bench to benchSpatial
* add benchVolumetric
Set weights in __init(). Remove unnecessary input argument to resetIODescriptors.
- added new convolution modes
- cleaned up benchVolumetric
Add missing synchronization test; reverted to non-verbose and random optimizer.
Return output from RNN:updateOutput and zero the cell output; cuDNN does not set the cell output for RNN_RELU, so the output was garbage.
Check for nil cx/hx when not training. Correct self.RNNDesc to self.rnnDesc.
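The stride remark above is easiest to see in plain torch: a contiguous tensor has stride 1 in its last dimension, so the size/stride lists have to be reversed before they describe a layout with stride 1 first. A minimal sketch (plain torch with made-up sizes, not the actual cudnn.RNN descriptor code):
```lua
require 'torch'

local t = torch.randn(4, 6, 8)        -- contiguous: strides are 48, 8, 1
print(t:stride())                     -- stride 1 sits in the *last* slot

-- Reversing the size/stride lists gives a description whose first entry has
-- stride 1, matching the convention the RNN code expects.
local sizes, strides = {}, {}
for d = t:dim(), 1, -1 do
   table.insert(sizes, t:size(d))
   table.insert(strides, t:stride(d))
end
print(table.concat(sizes, ' '))       -- 8 6 4
print(table.concat(strides, ' '))     -- 1 8 48
```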
2016-04-18 | Initial work for cudnn RNN api integration | Anthony Sandrin
Added cudnnFind auto-tuning. Changed the RNN layer API and improved descriptor/tensor resizing conditions. Implemented updateGradInput and accGradParameters.
2016-04-16 | cudnn.convert for BN | Sergey Zagoruyko
2016-04-14 | fix running_var meaning in BN | Sergey Zagoruyko
2016-04-13 | R5 rebase | Sergey Zagoruyko
2016-03-30 | Merge pull request #152 from gheinrich/fix-temporal-convolution | Soumith Chintala
Fix TemporalConvolution output size
2016-03-25 | Fix TemporalConvolution output size | Greg Heinrich
Issue seen in updateOutput() when the batch size is decreasing (e.g. 256 -> 128):
```
.../torch/install/share/lua/5.1/torch/Tensor.lua:462: Wrong size for view. Input size: 66060288. Output size: 128x256x1008x1
stack traceback:
  [C]: in function 'error'
  .../torch/install/share/lua/5.1/torch/Tensor.lua:462: in function 'view'
  ...orch/install/share/lua/5.1/cudnn/TemporalConvolution.lua:62: in function <...orch/install/share/lua/5.1/cudnn/TemporalConvolution.lua:54>
```
See the new test cudnntest.TemporalConvolution_reduceBatchSize() for a repro.
Calling a tensor's set() method with only a storage argument and no other parameters makes the tensor a 1-D view of the full storage, not of the associated tensor. For example:
```
th> x=torch.Tensor(10)
th> y=x:resize(5)
th> torch.Tensor():set(y:storage()):size()
10
[torch.LongStorage of size 1]
```
The proposed fix is to specify the desired view through the optional parameters of the set() method.
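A standalone sketch of the pitfall and of the proposed fix (plain torch, not the actual TemporalConvolution.lua change):
```lua
require 'torch'

local x = torch.Tensor(10)
local y = x:resize(5)                 -- y views 5 elements of a 10-element storage

-- Pitfall: set() with only a storage argument views the *whole* storage.
local bad = torch.Tensor():set(y:storage())
print(bad:nElement())                 -- 10

-- Fix: pass the desired view (offset and sizes) through set()'s optional parameters.
local good = torch.Tensor():set(y:storage(), 1, y:size())
print(good:nElement())                -- 5
```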
2016-03-24 | fix failing SpatialFullConvolution test | Natalia Gimelshein
2016-03-21 | full conv tests | Sergey Zagoruyko
2016-03-21 | removed double conversion tests | Sergey Zagoruyko
2016-03-17 | fixing tests | soumith
2016-03-16 | Adding support for SpatialFullConvolution. | Christopher D. Twigg
Since SpatialFullConvolution is just the transpose of the regular convolution operator, we can use CUDNN by swapping the forward and backward passes. This can be a substantial speed improvement over cunn.SpatialFullConvolution, which works by explicitly building the full matrix for a GEMM operation.
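A shape-level sketch of that transpose relationship, using the plain nn modules rather than the cudnn bindings (the layer sizes here are only illustrative):
```lua
require 'nn'

-- An ordinary convolution shrinks the spatial extent...
local conv   = nn.SpatialConvolution(3, 16, 5, 5)      -- 3 -> 16 planes, 5x5 kernel
-- ...and the full (transposed) convolution with the same kernel size undoes that
-- shrink, which is why its forward pass can be computed with the ordinary
-- convolution's backward-data pass, and its backward-data pass with the forward one.
local deconv = nn.SpatialFullConvolution(16, 3, 5, 5)  -- 16 -> 3 planes, 5x5 kernel

local x = torch.randn(1, 3, 32, 32)
local y = conv:forward(x)       -- 1x16x28x28
local z = deconv:forward(y)     -- 1x3x32x32: back to the spatial size of x
print(y:size())
print(z:size())
```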
2016-02-27 | Should fix #123 | ngimel
2016-02-25 | Add cudnn.BatchNormalization and cudnn.VolumetricBatchNormalization | Sam Gross
2016-02-17 | add forgotten reset for BN | Sergey Zagoruyko
2016-02-03 | Fixed issues with two tests | ivpopov
- SpatialCrossEntropyCriterion: inconsistent averageSize normalization
- VolumetricMaxPooling_batch: will run out of memory; reducing the maximal batch size
2016-01-26 | fix volumetric convolution test | Sergey Zagoruyko
2016-01-26 | cudnn.convert conflicts manually applied | soumith
2016-01-26 | syncing up to master/R3 9bc3cbac4f054438f5e77824f868cd94e9e22f81 | soumith
2016-01-21 | Delete modelTemp.t7 | ngimel
delete test output
2016-01-21 | cudnn.TemporalConvolution inherits from nn.TemporalConvolution | Natalia Gimelshein
2016-01-05 | use cudnn for temporal convolution | Natalia Gimelshein
2015-11-21 | fixed conflict leftover | Boris Fomitchev
as per Natalia, restored randomization
2015-11-13 | rebase from upstream | Boris Fomitchev
2015-11-13 | Natalia's fixes for BN. Added bntest.lua | Boris Fomitchev
2015-11-06 | integrating changes from master | soumith
2015-10-20 | adding SpatialCrossEntropyCriterion | soumith
2015-09-18 | test max pooling with padding | Sergey Zagoruyko
2015-09-15 | whitespace cleanups, fixing logsoftmax test | soumith
2015-09-15 | functional interface for R3 as well | soumith
2015-08-23 | flag to enable or disable the auto-tuner | soumith
2015-08-07 | added LogSoftMax test | Sergey Zagoruyko
2015-08-05 | Volumetric max and avg pooling | soumith
2015-08-02 | working R3 bindings for non-new modules | Soumith Chintala