Age | Commit message | Author
---|---|---
2017-09-07 | fix static linkage and make THD statically linked (HEAD, master) | Soumith Chintala
2017-08-27 | Add numerically stable logsigmoid | Alykhan Tejani
2017-08-26 | Adding implicit padding for 3d average pooling | Lu Fang
2017-08-25 | Fix typos. | Zhou Mo
2017-08-25 | Updates for CUDA 9 | Christian Sarofeen
2017-08-03 | Merge pull request #480 from nicholas-leonard/BN-batchsize1: BN supports batchsize=1 | Nicholas Léonard
2017-08-03 | BN supports batchsize=1 | Nicholas Leonard
2017-08-03 | remove limitations on output_padding in Conv* routines | Soumith Chintala
2017-08-03 | add 2d and 3d dilated full Convolution | Soumith Chintala
2017-07-26 | Merge pull request #477 from wickedfoo/feature_lp_pooling: GPU implementation of L_p feature pooling | Soumith Chintala
2017-07-24 | Merge pull request #479 from mikepound/upsampling: Added cunn tests for UpSampling module. | Soumith Chintala
2017-07-24 | Added cunn tests for UpSampling module. | Michael Pound
2017-07-20 | Merge pull request #478 from singam-sanjay/correct_README: Update README | Soumith Chintala
2017-07-20 | Update README: Clarify control-flow oddities in control-flow terminology. | Singapuram Sanjay Srivallabh
2017-07-19 | Static linking against libstdc++ in Binary Build mode | Soumith Chintala
2017-07-14 | add launch_bounds to greedy kernels | Natalia Gimelshein
2017-07-13 | LP pooling kernels | Jeff Johnson
2017-07-03 | Merge pull request #476 from lospooky/SpatialDepthWiseConvolution-segfault: Fix segfault in SpatialDepthWiseConvolution w/o bias | Soumith Chintala
2017-06-23 | Fix segfault in SpatialDepthWiseConvolution w/o bias | Simone Cirillo
2017-06-22 | add asserts to BCECriterion | cph
2017-06-15 | nn.EmbeddingBag to compute a bag of word embeddings (Embedding + Sum/Mean) | Soumith Chintala
2017-06-14 | Added GLU (gated linear unit), from https://arxiv.org/abs/1612.08083 | Sam Gross
2017-06-07 | Add 3D upsampling (nearest and trilinear) with tests | Luca Antiga
2017-06-07 | fix GRUFused signature | Soumith Chintala
2017-06-07 | Merge pull request #470 from qqning/master: Fix the mix-up of height and width on depth-wise convolution | Soumith Chintala
2017-06-07 | Remove clone in fused rnn | Christian Sarofeen
2017-06-06 | Merge pull request #472 from twitter-forks/indexlinear-fix: Fixing incorrect normalized values in IndexLinear during training | Soumith Chintala
2017-06-06 | Fixing the issue with incorrect normalized values in IndexLinear | Pavan Yalamanchili
2017-05-24 | Fix the mix-up of height and width on depth-wise convolution | ningqingqun
2017-05-21 | Merge pull request #468 from nicholas-leonard/ClassNLLCriterion: ClassNLLCriterion ignoreIndex | Soumith Chintala
2017-05-16 | ClassNLLCriterion ignoreIndex | Nicholas Leonard
2017-05-15 | Merge pull request #467 from torch/revert-458-master: Revert "Update to ignore zero targets" | Soumith Chintala
2017-05-15 | Revert "Update to ignore zero targets" | Soumith Chintala
2017-05-12 | SpatialDepthWiseConvolution.cu added | stooloveu
2017-05-12 | Merge pull request #458 from jnhwkim/master: Update to ignore zero targets | Nicholas Léonard
2017-05-09 | Add a keepdim parameter for reduction functions over a single dimension. By default, this parameter is False, a backwards-incompatible change, but one that follows numpy semantics, e.g. numpy.sum (numpy names the parameter "keepdims" since you can pass multiple dims to reduction functions). The old behavior is desired for normalization-type operations where the tensor is immediately expanded out again, e.g. `probs.sum(1).expand_as(probs)`, which no longer works because the dimension to expand is missing. This can be fixed by simply passing True as the "keepdim" argument to the reduction operation, e.g. `probs.sum(1, keepdim=True).expand_as(probs)`. | Gregory Chanan
2017-04-22 | fix typo | Soumith Chintala
2017-04-22 | Indexing fix for fused GRU/LSTM kernels when all tensors are not contiguous. | Christian Sarofeen
2017-04-22 | Merge pull request #465 from torch/cunnchecks: add contiguous checks | Soumith Chintala
2017-04-22 | add contiguous checks | Soumith Chintala
2017-04-18 | Fused RNN kernel: remove explicit instantiation, isn't needed. | Christian Sarofeen
2017-04-18 | Merge pull request #463 from apaszke/sig_tanh: Remove double precision math from LogSigmoid too | Soumith Chintala
2017-04-18 | Remove double precision math from LogSigmoid too | Adam Paszke
2017-04-18 | Merge pull request #462 from apaszke/sig_tanh: Update ops for Sigmoid and Tanh | Soumith Chintala
2017-04-18 | Update ops for Sigmoid and Tanh | Adam Paszke
2017-04-12 | fix THNN headers | soumith
2017-04-11 | Fused pointwise kernels for GRU/LSTM | Christian Sarofeen
2017-04-09 | Merge pull request #455 from twitter-forks/indexlinear: Adding Indexlinear | Soumith Chintala
2017-04-07 | Merge pull request #459 from SYSTRAN/feature/support_TORCH_NVCC_FLAGS: Support TORCH_NVCC_FLAGS environment variable | Soumith Chintala
2017-04-07 | Support TORCH_NVCC_FLAGS environment variable. This has been supported in cutorch since August 2016, and is used in the pytorch integration (to reduce the binary size). | Thomas Riccardi
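The "Add numerically stable logsigmoid" entry (2017-08-27) refers to the standard trick of rewriting log σ(x) so the exponential never overflows. A minimal pure-Python sketch of that technique (the scalar function here is illustrative, not the actual cunn CUDA kernel):

```python
import math

def logsigmoid(x: float) -> float:
    # Naive form -log(1 + exp(-x)) overflows exp() for large negative x.
    # Stable rewrite: log sigmoid(x) = min(0, x) - log1p(exp(-|x|)),
    # so the argument to exp() is always <= 0 and log1p preserves precision.
    return min(0.0, x) - math.log1p(math.exp(-abs(x)))

print(logsigmoid(0.0))      # -log(2) ~ -0.6931
print(logsigmoid(-1000.0))  # ~ -1000.0, no overflow
```

For large positive x the result decays smoothly toward 0, and for large negative x it approaches x, matching the asymptotes of log σ(x).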
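The "Added GLU (gated linear unit)" entry (2017-06-14) implements the gating mechanism from the linked paper: the input is split in half and one half gates the other through a sigmoid. A pure-Python sketch of the formula on a flat list (not the library's tensor implementation):

```python
import math

def glu(x):
    # Gated linear unit (https://arxiv.org/abs/1612.08083):
    # split the input in half, GLU([a; b]) = a * sigmoid(b).
    half = len(x) // 2
    a, b = x[:half], x[half:]
    return [ai / (1.0 + math.exp(-bi)) for ai, bi in zip(a, b)]

print(glu([1.0, 2.0, 0.0, 0.0]))  # gates are sigmoid(0) = 0.5 -> [0.5, 1.0]
```

Note the output has half the size of the input, which is why GLU layers typically double the channel count beforehand.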
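The keepdim entry (2017-05-09) changes what a single-dimension reduction returns: by default the reduced dimension is dropped, while `keepdim=True` keeps it as size 1 so the result can be broadcast back against the original tensor. A dependency-free sketch of the two behaviors on a 2-D list of lists (`sum_dim1` is a made-up helper standing in for the tensor reductions):

```python
def sum_dim1(matrix, keepdim=False):
    # Reduce over dimension 1 of a 2-D list-of-lists.
    # keepdim=False drops the dimension -> a flat list (the new default).
    # keepdim=True keeps it as size 1  -> still 2-D (the old behavior),
    # which is the shape needed to expand back over the original rows.
    sums = [sum(row) for row in matrix]
    return [[s] for s in sums] if keepdim else sums

probs = [[0.2, 0.3, 0.5], [0.25, 0.25, 0.5]]
print(sum_dim1(probs))                # [1.0, 1.0]   -- one dim gone
print(sum_dim1(probs, keepdim=True))  # [[1.0], [1.0]] -- dim kept as size 1
```

This mirrors why `probs.sum(1).expand_as(probs)` breaks under the new default but `probs.sum(1, keepdim=True).expand_as(probs)` still works: expansion needs the reduced dimension to survive as size 1.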