github.com/torch/cunn.git - commit log (newest first)

Each entry gives the commit date, the commit message, and the author in parentheses; indented lines are pull-request or commit descriptions.

2017-09-07  fix static linkage and make THD statically linked [HEAD, master]  (Soumith Chintala)
2017-08-27  Add numerically stable logsigmoid  (Alykhan Tejani)
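    Note: the gain here is numerical stability; computing log(sigmoid(x)) directly overflows for large
    negative inputs. A minimal NumPy sketch of the standard stable rewrite (illustrative only, not the
    CUDA kernel added by this commit):

        import numpy as np

        def logsigmoid(x):
            # Direct log(1 / (1 + exp(-x))) overflows exp(-x) for very negative x;
            # the algebraically equivalent form below stays finite for all inputs.
            x = np.asarray(x, dtype=np.float64)
            return np.minimum(x, 0.0) - np.log1p(np.exp(-np.abs(x)))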
2017-08-26  Adding implicit padding for 3d average pooling  (Lu Fang)
2017-08-25  Fix typos.  (Zhou Mo)
2017-08-25  Updates for CUDA 9  (Christian Sarofeen)
2017-08-03  Merge pull request #480 from nicholas-leonard/BN-batchsize1  (Nicholas Léonard)
    BN supports batchsize=1
2017-08-03  BN supports batchsize=1  (Nicholas Leonard)
2017-08-03  remove limitations on output_padding in Conv* routines  (Soumith Chintala)
2017-08-03  add 2d and 3d dilated full Convolution  (Soumith Chintala)
2017-07-26  Merge pull request #477 from wickedfoo/feature_lp_pooling  (Soumith Chintala)
    GPU implementation of L_p feature pooling
2017-07-24  Merge pull request #479 from mikepound/upsampling  (Soumith Chintala)
    Added cunn tests for UpSampling module.
2017-07-24  Added cunn tests for UpSampling module.  (Michael Pound)
2017-07-20  Merge pull request #478 from singam-sanjay/correct_README  (Soumith Chintala)
    Update README
2017-07-20  Update README  (Singapuram Sanjay Srivallabh)
    Clarify control-flow oddities in control-flow terminology.
2017-07-19  Static linking against libstdc++ in Binary Build mode  (Soumith Chintala)
2017-07-14  add launch_bounds to greedy kernels  (Natalia Gimelshein)
2017-07-13  LP pooling kernels  (Jeff Johnson)
2017-07-03  Merge pull request #476 from lospooky/SpatialDepthWiseConvolution-segfault  (Soumith Chintala)
    Fix segfault in SpatialDepthWiseConvolution w/o bias
2017-06-23  Fix segfault in SpatialDepthWiseConvolution w/o bias  (Simone Cirillo)
2017-06-22  add asserts to BCECriterion  (cph)
2017-06-15  nn.EmbeddingBag to compute a bag of word embeddings (Embedding + Sum/Mean)  (Soumith Chintala)
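    Note: per the message, the module fuses an embedding lookup with a per-bag sum or mean. A rough
    NumPy sketch of the intended semantics; the offsets-based signature below is an assumption made
    for illustration:

        import numpy as np

        def embedding_bag(weight, indices, offsets, mode="mean"):
            # weight: (num_embeddings, dim) lookup table; indices: flat list of ids;
            # offsets: start of each bag within `indices`. Each bag reduces to one vector.
            ends = list(offsets[1:]) + [len(indices)]
            bags = []
            for start, end in zip(offsets, ends):
                rows = weight[indices[start:end]]
                bags.append(rows.sum(axis=0) if mode == "sum" else rows.mean(axis=0))
            return np.stack(bags)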
2017-06-14  Added GLU (gated linear unit)  (Sam Gross)
    From https://arxiv.org/abs/1612.08083
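    Note: per the referenced paper, GLU splits its input into two halves a and b along a dimension and
    returns a * sigmoid(b). A small NumPy sketch of that definition (not the CUDA implementation):

        import numpy as np

        def glu(x, axis=-1):
            # Split the input into two equal halves a, b along `axis`
            # (that axis must have even size); b gates a.
            a, b = np.split(x, 2, axis=axis)
            return a * (1.0 / (1.0 + np.exp(-b)))   # a * sigmoid(b)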
2017-06-07  Add 3D upsampling (nearest and trilinear) with tests  (Luca Antiga)
2017-06-07  fix GRUFused signature  (Soumith Chintala)
2017-06-07  Merge pull request #470 from qqning/master  (Soumith Chintala)
    Fix the mix-up of height and width on depth-wise convolution
2017-06-07  Remove clone in fused rnn  (Christian Sarofeen)
2017-06-06  Merge pull request #472 from twitter-forks/indexlinear-fix  (Soumith Chintala)
    Fixing incorrect normalized values in IndexLinear during training
2017-06-06  Fixing the issue with incorrect normalized values in IndexLinear  (Pavan Yalamanchili)
2017-05-24  Fix the mix-up of height and width on depth-wise convolution  (ningqingqun)
2017-05-21  Merge pull request #468 from nicholas-leonard/ClassNLLCriterion  (Soumith Chintala)
    ClassNLLCriterion ignoreIndex
2017-05-16  ClassNLLCriterion ignoreIndex  (Nicholas Leonard)
2017-05-15  Merge pull request #467 from torch/revert-458-master  (Soumith Chintala)
    Revert "Update to ignore zero targets"
2017-05-15  Revert "Update to ignore zero targets"  (Soumith Chintala)
2017-05-12  SpatialDepthWiseConvolution.cu added  (stooloveu)
2017-05-12  Merge pull request #458 from jnhwkim/master  (Nicholas Léonard)
    Update to ignore zero targets
2017-05-09  Add a keepdim parameter for reduction functions over a single dimension.  (Gregory Chanan)
    By default, this parameter is False -- a backwards-incompatible change, but one that follows numpy
    semantics, e.g. numpy.sum (numpy names the parameter "keepdims" since you can pass multiple dims to
    reduction functions). The old behavior seems desirable for normalization-type operations where the
    tensor will immediately be expanded out again, e.g. probs.sum(1).expand_as(probs), which no longer
    works because the dimension to expand is missing. This can be fixed by simply passing True as the
    "keepdim" argument to the reduction operation, e.g. probs.sum(1, keepdim=True).expand_as(probs).
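    Note: a short NumPy illustration of the behavior described above (NumPy spells the argument
    "keepdims", as the commit message points out):

        import numpy as np

        probs = np.random.rand(4, 3)
        s = probs.sum(axis=1)                 # shape (4,): reduced dimension dropped (the new default)
        k = probs.sum(axis=1, keepdims=True)  # shape (4, 1): reduced dimension kept as size 1
        normalized = probs / k                # broadcasts, like probs.sum(1, keepdim=True).expand_as(probs)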
2017-04-22  fix typo  (Soumith Chintala)
2017-04-22  Indexing fix for fused GRU/LSTM kernels when all tensors are not contiguous.  (Christian Sarofeen)
2017-04-22  Merge pull request #465 from torch/cunnchecks  (Soumith Chintala)
    add contiguous checks
2017-04-22  add contiguous checks  (Soumith Chintala)
2017-04-18  Fused RNN kernel: remove explicit instantiation, which isn't needed.  (Christian Sarofeen)
2017-04-18  Merge pull request #463 from apaszke/sig_tanh  (Soumith Chintala)
    Remove double precision math from LogSigmoid too
2017-04-18  Remove double precision math from LogSigmoid too  (Adam Paszke)
2017-04-18  Merge pull request #462 from apaszke/sig_tanh  (Soumith Chintala)
    Update ops for Sigmoid and Tanh
2017-04-18  Update ops for Sigmoid and Tanh  (Adam Paszke)
2017-04-12  fix THNN headers  (soumith)
2017-04-11  Fused pointwise kernels for GRU/LSTM  (Christian Sarofeen)
2017-04-09  Merge pull request #455 from twitter-forks/indexlinear  (Soumith Chintala)
    Adding IndexLinear
2017-04-07  Merge pull request #459 from SYSTRAN/feature/support_TORCH_NVCC_FLAGS  (Soumith Chintala)
    Support TORCH_NVCC_FLAGS environment variable
2017-04-07  Support TORCH_NVCC_FLAGS environment variable  (Thomas Riccardi)
    This has already been supported in cutorch since August 2016, and is used in the pytorch integration (to reduce the binary size).
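    Note: a hypothetical usage sketch; the luarocks command and the specific flag are illustrative, and
    the commit itself only makes the build honor the variable:

        import os
        import subprocess

        # Export extra nvcc flags before building; the build reads TORCH_NVCC_FLAGS
        # and appends its contents to the nvcc command line.
        env = dict(os.environ, TORCH_NVCC_FLAGS="--use_fast_math")
        subprocess.run(["luarocks", "install", "cunn"], env=env, check=True)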