Age | Commit message | Author | |
---|---|---|---|
2016-08-06 | Revert "Refactoring CUDNN Find" | Soumith Chintala | |
2016-08-04 | Completing cudnnFind refactoring; addressing code review notes | Boris Fomitchev | |
2016-08-03 | Merge remote-tracking branch 'upstream/master' into algo | Boris Fomitchev | |
2016-08-03 | Refactoring cudnnFind | Boris Fomitchev | |
2016-08-02 | Refactoring CUDNN Find | Boris Fomitchev | |
2016-07-29 | Adjusting test tolerance, disabling double test | Boris Fomitchev | |
2016-07-29 | Merge pull request #206 from szagoruyko/fp16 | Soumith Chintala | |
half and double with tests | |||
2016-07-07 | Added bias and weight functions for RNN | SeanNaren | |
2016-06-23 | deal with fp16 batchnorm | Sergey Zagoruyko | |
2016-06-23 | half, double with tests | Sergey Zagoruyko | |
2016-06-11 | Modified tests and params, updated docs for clipped ReLU | SeanNaren | |
2016-06-11 | Added clipped ReLU | SeanNaren | |
2016-06-08 | Natalia's fix for a corner case where both the reference impl and cudnn produce NaNs | Boris Fomitchev | |
2016-05-22 | add avg pooling back compat test | Sergey Zagoruyko | |
2016-05-17 | fix for V5 GA RNN APIs | Natalia Gimelshein | |
2016-04-18 | Added tests, modified README and added RNN modules | SeanNaren | |
2016-04-18 | Add utility functions and correct tensor dimensions. | Anthony Sandrin | |
Tensor dimensions needed to be reversed since the RNN functions expect stride 1 in the first dimension. Also in this commit:
* Remove experimental RNN scripts
* Make inputSize a parameter to the RNN layer and use strings for enum values
* Add missing cutorch.synchronize() to the SpatialConvolution benchmark
* Rename bench to benchSpatial; add benchVolumetric
* Set weights in __init()
* Remove unnecessary input argument to resetIODescriptors
* Add new convolution modes; clean up benchVolumetric
* Add missing synchronization test
* Revert to the non-verbose, random optimizer
* Return output from RNN:updateOutput and zero the cell output (cuDNN does not set the cell output for RNN_RELU, so it was garbage)
* Check for nil cx/hx when not training
* Correct self.RNNDesc to self.rnnDesc
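The stride note above can be illustrated outside Torch: in a row-major array the *last* dimension has stride 1, so obtaining stride 1 in the *first* dimension means reversing the dimensions. A NumPy sketch of that fact (illustrative only, not the cudnn binding code):

```python
import numpy as np

x = np.zeros((4, 3))                           # row-major layout
strides = [s // x.itemsize for s in x.strides]
assert strides == [3, 1]                       # stride 1 in the LAST dimension

xt = x.T                                       # reversing the dims...
strides_t = [s // xt.itemsize for s in xt.strides]
assert strides_t == [1, 3]                     # ...puts stride 1 FIRST, as the RNN API expects
```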
2016-04-18 | Initial work for cudnn RNN api integration | Anthony Sandrin | |
* Added cudnnFind auto-tuning
* Changed the RNN layer API and improved descriptor/tensor resizing conditions
* Implemented updateGradInput and accGradParameters
2016-04-16 | cudnn.convert for BN | Sergey Zagoruyko | |
2016-04-14 | fix running_var meaning in BN | Sergey Zagoruyko | |
2016-04-13 | R5 rebase | Sergey Zagoruyko | |
2016-03-30 | Merge pull request #152 from gheinrich/fix-temporal-convolution | Soumith Chintala | |
Fix TemporalConvolution output size | |||
2016-03-25 | Fix TemporalConvolution output size | Greg Heinrich | |
Issue seen in updateOutput() when the batch size decreases (e.g. 256 -> 128):

```
.../torch/install/share/lua/5.1/torch/Tensor.lua:462: Wrong size for view. Input size: 66060288. Output size: 128x256x1008x1
stack traceback:
  [C]: in function 'error'
  .../torch/install/share/lua/5.1/torch/Tensor.lua:462: in function 'view'
  ...orch/install/share/lua/5.1/cudnn/TemporalConvolution.lua:62: in function <...orch/install/share/lua/5.1/cudnn/TemporalConvolution.lua:54>
```

See the new test cudnntest.TemporalConvolution_reduceBatchSize() for a repro. Calling a tensor's set() method with only a storage argument (and no extra parameters) makes the tensor a 1-D view of the full storage, not of the associated tensor. For example:

```
th> x = torch.Tensor(10)
th> y = x:resize(5)
th> torch.Tensor():set(y:storage()):size()
 10
[torch.LongStorage of size 1]
```

The proposed fix is to specify the desired view through the optional parameters of the set() method.
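The pitfall above can be mimicked in NumPy: viewing the whole backing buffer, instead of an explicitly sized sub-view, yields twice as many elements as the shrunken batch should hold. A rough analogy (the actual fix is Lua/Torch-specific; the shapes here are illustrative):

```python
import numpy as np

# Backing buffer originally sized for a batch of 256 rows of 4 features.
storage = np.empty(256 * 4)

# Viewing the WHOLE storage (like Tensor():set(y:storage())) sees every
# element, even after the logical tensor shrank to batch 128.
whole = storage.reshape(-1)
assert whole.size == 1024            # 2x what a 128-row view should hold

# The commit's fix, by analogy: take the view with an explicit size.
small = storage[:128 * 4].reshape(128, 4)
assert small.shape == (128, 4)
```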
2016-03-24 | fix failing SpatialFullConvolution test | Natalia Gimelshein | |
2016-03-21 | full conv tests | Sergey Zagoruyko | |
2016-03-21 | removed double conversion tests | Sergey Zagoruyko | |
2016-03-17 | fixing tests | soumith | |
2016-03-16 | Adding support for SpatialFullConvolution. | Christopher D. Twigg | |
Since SpatialFullConvolution is just the transpose of the regular convolution operator, we can use CUDNN by swapping the forward and backward passes. This can be a substantial speed improvement over cunn.SpatialFullConvolution, which works by explicitly building the full matrix for a GEMM operation. | |||
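The transpose relationship this commit relies on can be sketched in one dimension: writing a "valid" correlation as a Toeplitz matrix M, the full (transposed) convolution is multiplication by Mᵀ, which is exactly the backward pass of the forward operator. A minimal NumPy sketch (illustrative; `conv_matrix` is a hypothetical helper, not the cudnn code):

```python
import numpy as np

def conv_matrix(w, n):
    """Toeplitz matrix M such that M @ x is the 'valid' 1-D correlation
    of a length-n signal x with filter w."""
    k = len(w)
    M = np.zeros((n - k + 1, n))
    for i in range(n - k + 1):
        M[i, i:i + k] = w
    return M

w = np.array([1.0, 0.0, -1.0])
M = conv_matrix(w, n=5)              # forward op: length 5 -> length 3

g = np.array([1.0, 2.0, 3.0])        # pretend gradient w.r.t. the output
full = M.T @ g                       # backward pass == full convolution

# Cross-check against NumPy's own full convolution:
assert np.allclose(full, np.convolve(g, w, mode="full"))
```

This is why swapping forward and backward passes gives the full convolution without materializing the GEMM matrix.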
2016-02-27 | Should fix #123 | ngimel | |
2016-02-25 | Add cudnn.BatchNormalization and cudnn.VolumetricBatchNormalization | Sam Gross | |
2016-02-17 | add forgotten reset for BN | Sergey Zagoruyko | |
2016-02-03 | Fixed issues with two tests | ivpopov | |
* SpatialCrossEntropyCriterion: inconsistent averageSize normalization
* VolumetricMaxPooling_batch: will run out of memory; reducing maximal batch size
2016-01-26 | fix volumetric convolution test | Sergey Zagoruyko | |
2016-01-26 | cudnn.convert conflicts manually applied | soumith | |
2016-01-26 | syncing up to master/R3 9bc3cbac4f054438f5e77824f868cd94e9e22f81 | soumith | |
2016-01-21 | Delete modelTemp.t7 | ngimel | |
delete test output | |||
2016-01-21 | cudnn.TemporalConvolution inherits from nn.TemporalConvolution | Natalia Gimelshein | |
2016-01-05 | use cudnn for temporal convolution | Natalia Gimelshein | |
2015-11-21 | fixed conflict leftover | Boris Fomitchev | |
as per Natalia, restored randomization | |||
2015-11-13 | rebase from upstream | Boris Fomitchev | |
2015-11-13 | Natalia's fixes for BN. Added bntest.lua | Boris Fomitchev | |
2015-11-06 | integrating changes from master | soumith | |
2015-10-20 | adding SpatialCrossEntropyCriterion | soumith | |
2015-09-18 | test max pooling with padding | Sergey Zagoruyko | |
2015-09-15 | whitespace cleanups, fixing logsoftmax test | soumith | |
2015-09-15 | functional interface for R3 as well | soumith | |
2015-08-23 | flag to enable/disable the auto-tuner | soumith | |
2015-08-07 | added LogSoftMax test | Sergey Zagoruyko | |
2015-08-05 | Volumetric max and avg pooling | soumith | |
2015-08-02 | working R3 bindings for non-new modules | Soumith Chintala | |