Welcome to mirror list, hosted at ThFree Co, Russian Federation.

github.com/torch/cunn.git
path: root/lib
Age         Commit message  (Author)

2017-09-07  fix static linkage and make THD statically linked [HEAD, master]  (Soumith Chintala)
2017-08-27  Add numerically stable logsigmoid  (Alykhan Tejani)
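For context, the numerically stable formulation usually meant by a commit like this is log σ(x) = min(x, 0) − log(1 + e^−|x|), which avoids overflowing exp for large-magnitude inputs. A minimal Python sketch of the idea (illustrative only, not the actual THCUNN kernel):

```python
import math

def logsigmoid_naive(x):
    # Naive form: log(1 / (1 + exp(-x))); exp(-x) overflows for large negative x.
    return math.log(1.0 / (1.0 + math.exp(-x)))

def logsigmoid_stable(x):
    # Stable form: log sigmoid(x) = min(x, 0) - log1p(exp(-|x|)).
    # exp(-|x|) is always <= 1, so it can never overflow.
    return min(x, 0.0) - math.log1p(math.exp(-abs(x)))

print(logsigmoid_stable(-1000.0))  # -1000.0 (the naive form raises OverflowError here)
print(logsigmoid_stable(0.0))      # -0.693... == log(1/2)
```

The two forms agree wherever the naive one is representable; the stable one simply pulls the dominant term out of the logarithm before exponentiating.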
2017-08-26  Adding implicit padding for 3d average pooling  (Lu Fang)
2017-08-25  Fix typos.  (Zhou Mo)
2017-08-25  Updates for CUDA 9  (Christian Sarofeen)
2017-08-03  remove limitations on output_padding in Conv* routines  (Soumith Chintala)
2017-08-03  add 2d and 3d dilated full Convolution  (Soumith Chintala)
2017-07-26  Merge pull request #477 from wickedfoo/feature_lp_pooling  (Soumith Chintala)
            GPU implementation of L_p feature pooling
2017-07-19  Static linking against libstdc++ in Binary Build mode  (Soumith Chintala)
2017-07-14  add launch_bounds to greedy kernels  (Natalia Gimelshein)
2017-07-13  LP pooling kernels  (Jeff Johnson)
2017-06-23  Fix segfault in SpatialDepthWiseConvolution w/o bias  (Simone Cirillo)
2017-06-22  add asserts to BCECriterion  (cph)
2017-06-15  nn.EmbeddingBag to compute a bag of word embeddings (Embedding + Sum/Mean)  (Soumith Chintala)
2017-06-14  Added GLU (gated linear unit)  (Sam Gross)
            From https://arxiv.org/abs/1612.08083
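The GLU from the cited paper splits its input into two equal halves a and b along a dimension and computes a ⊗ σ(b). A minimal pure-Python sketch of that definition (illustrative, operating on a flat list rather than a tensor):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def glu(x):
    """Gated linear unit: split x into halves (a, b) and return a * sigmoid(b),
    so the second half acts as a learned gate on the first."""
    assert len(x) % 2 == 0, "input must split into two equal halves"
    half = len(x) // 2
    a, b = x[:half], x[half:]
    return [ai * sigmoid(bi) for ai, bi in zip(a, b)]

print(glu([1.0, 2.0, 0.0, 0.0]))  # gates are sigmoid(0) = 0.5 -> [0.5, 1.0]
```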
2017-06-07  Add 3D upsampling (nearest and trilinear) with tests  (Luca Antiga)
2017-06-07  fix GRUFused signature  (Soumith Chintala)
2017-06-07  Merge pull request #470 from qqning/master  (Soumith Chintala)
            Fix the mix-up of height and width on depth-wise convolution
2017-06-07  Remove clone in fused rnn  (Christian Sarofeen)
2017-06-06  Fixing the issue with incorrect normalized values in IndexLinear  (Pavan Yalamanchili)
2017-05-24  Fix the mix-up of height and width on depth-wise convolution  (ningqingqun)
2017-05-16  ClassNLLCriterion ignoreIndex  (Nicholas Leonard)
2017-05-15  Revert "Update to ignore zero targets"  (Soumith Chintala)
2017-05-12  SpatialDepthWiseConvolution.cu added  (stooloveu)
2017-05-12  Merge pull request #458 from jnhwkim/master  (Nicholas Léonard)
            Update to ignore zero targets
2017-05-09  Add a keepdim parameter for reduction functions over a single dimension  (Gregory Chanan)
            By default, this parameter is False -- a backwards-incompatible change, but
            one that follows numpy semantics, e.g. numpy.sum (numpy names the parameter
            "keepdims", since you can pass multiple dims to reduction functions). The old
            behavior is still desirable for normalization-type operations, where the
            tensor is immediately expanded out again, e.g. probs.sum(1).expand_as(probs),
            which no longer works because the dimension to expand is missing. This can be
            fixed by simply passing True as the "keepdim" argument to the reduction
            operation, e.g. probs.sum(1, keepdim=True).expand_as(probs).
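The shape change described above can be seen with NumPy's `keepdims` flag, which the commit cites as the model for the new semantics (a small sketch; `probs` is just illustrative data):

```python
import numpy as np

probs = np.array([[0.2, 0.8],
                  [0.5, 0.5]])

# Default (keepdims=False): the reduced dimension is dropped.
s = probs.sum(axis=1)
print(s.shape)        # (2,)

# keepdims=True: the reduced dimension survives as size 1, so the
# result broadcasts back against the original array -- the
# normalization pattern the commit message describes.
s_kept = probs.sum(axis=1, keepdims=True)
print(s_kept.shape)   # (2, 1)
print(probs / s_kept) # row-normalized probabilities
```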
2017-04-22  fix typo  (Soumith Chintala)
2017-04-22  Indexing fix for fused GRU/LSTM kernels when all tensors are not contiguous  (Christian Sarofeen)
2017-04-22  add contiguous checks  (Soumith Chintala)
2017-04-18  Fused RNN kernel: remove explicit instantiation; it isn't needed  (Christian Sarofeen)
2017-04-18  Remove double precision math from LogSigmoid too  (Adam Paszke)
2017-04-18  Update ops for Sigmoid and Tanh  (Adam Paszke)
2017-04-12  fix THNN headers  (soumith)
2017-04-11  Fused pointwise kernels for GRU/LSTM  (Christian Sarofeen)
2017-04-09  Merge pull request #455 from twitter-forks/indexlinear  (Soumith Chintala)
            Adding IndexLinear
2017-04-07  Support TORCH_NVCC_FLAGS environment variable  (Thomas Riccardi)
            This has been supported in cutorch since August 2016, and is used in the
            pytorch integration (to reduce the binary size).
2017-04-05  Update to ignore zero targets  (Jin-Hwa Kim)
            If the target is zero, the loss and the gradient of the input are set to
            zero. This is useful for variable-length natural language generation models.
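The masking idea above can be sketched in a few lines of pure Python (illustrative only; the actual change lives in the ClassNLLCriterion CUDA kernels, and the function name here is hypothetical):

```python
import math

def nll_loss_ignore_zero(log_probs, targets):
    """Mean negative log-likelihood, skipping examples whose target is 0.

    log_probs: per-example lists of log-probabilities (classes are 1-indexed,
               as in Lua Torch)
    targets:   class indices; 0 means "no target, ignore this example"
    """
    total, count = 0.0, 0
    for lp, t in zip(log_probs, targets):
        if t == 0:
            continue  # zero target: contributes no loss (and hence no gradient)
        total += -lp[t - 1]
        count += 1
    return total / count if count else 0.0

log_probs = [[math.log(0.9), math.log(0.1)],
             [math.log(0.5), math.log(0.5)]]
print(nll_loss_ignore_zero(log_probs, [1, 0]))  # only the first example counts
```

Padding positions in a variable-length batch can then simply carry target 0 instead of requiring per-example sequence masks.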
2017-03-31  Merge pull request #456 from twitter-forks/addmm-fixes  (Soumith Chintala)
            Using temporary variables when performing transpose + addmm
2017-03-30  Using temporary variables when performing transpose + addmm  (Pavan Yalamanchili)
2017-03-25  Improving the performance of IndexLinear:updateOutput  (Pavan Yalamanchili)
            - Removes the separate kernel for updateOutputTrain
2017-03-24  Fix inconsistent in-place and out-of-place results for HardTanh  (ngimel)
            In-place and out-of-place updateGradOutput results differed where
            input = min_val or input = max_val.
2017-03-24  Adding support for flattened inputs for IndexLinear  (Pavan Yalamanchili)
            - Adds the relevant tests
2017-03-24  IndexLinear support for cunn  (Pavan Yalamanchili)
2017-03-22  Merge pull request #453 from apaszke/lookup_renorm  (Soumith Chintala)
            Cast accumulator in LookupTable renorm to accreal
2017-03-22  Added support for multidimensional tensors in PReLU; the channel number is now in the second dimension  (Hardik)
2017-03-22  Cast accumulator in LookupTable renorm to accreal  (Adam Paszke)
2017-03-13  change lookup table sort  (Jeff Johnson)
2017-02-21  Merge pull request #418 from ruotianluo/adaptiveAverage  (Soumith Chintala)
            Add SpatialAdaptiveAveragePooling
2017-02-21  Merge pull request #434 from bottler/master  (Soumith Chintala)
            VolumetricFractionalMaxPooling, like the spatial version
2017-02-21  Merge pull request #442 from twitter-forks/half-fixes  (Soumith Chintala)
            Convert real to accreal in libTHCUNN