github.com/torch/cutorch.git
Branches: 1.0, 1.0-0, distfix, dotfix, master, p100fix, revert-589-random-refactor, revert-610-lazy, revert-639-patch-1, revert-641-cfuncs, thcstateheader
Commit log for path: init.c
Age         Commit message (Author)
2017-03-08  Add CUDA caching allocator accessor (Guillaume Klein)
2017-02-07  Static build support + Query CUDA driver, runtime versions (#695) (Pavan Yalamanchili)
2016-12-29  Add THHalfTensor support to cutorch (#655) (gchanan)
2016-12-22  Enable the CUDA caching allocators by default (Sam Gross)
2016-12-02  Add caching allocator for pinned (host) memory (Sam Gross)
2016-12-01  Adds a CUDA "sleep" kernel (Sam Gross)
2016-11-26  Lazily initialize CUDA devices (Sam Gross)
2016-11-24  Revert "Lazily initialize CUDA devices" [revert-610-lazy] (Soumith Chintala)
2016-11-24  Merge pull request #610 from colesbury/lazy (Soumith Chintala)
2016-11-24  Implemented cudaMemGetInfo for caching allocator (#600) (Boris Fomitchev)
2016-11-23  Lazily initialize CUDA devices (Sam Gross)
2016-11-05  THC UVA Allocator (Nicolas Vasilache)
2016-10-18  correct input types to lua_pushboolean (soumith)
2016-10-17  guards for half (Soumith Chintala)
2016-10-15  Add stream API that is not based on indices (Sam Gross)
2016-10-14  Fix caching allocator when used from multiple Lua threads (Sam Gross)
2016-10-14  adding hasHalfInstructions and hasFastHalfInstructions exposed to lua (soumith)
2016-09-30  Make some basic THC operations thread-safe (Sam Gross)
2016-09-25  Add CUDA caching allocator (Sam Gross)
2016-07-29  Merge pull request #456 from torch/more-cutorch-template-types (Soumith Chintala)
2016-07-29  reduce and BLAS work (Jeff Johnson)
2016-06-29  added field driverVersion to cutorch (Lukas Cavigelli)
2016-06-11  add half cwrap type and enable math for CudaHalfTensor (soumith)
2016-06-11  template work (Jeff Johnson)
2016-03-28  Merge pull request #355 from apaszke/fp16 (Soumith Chintala)
2016-03-14  kernel p2p access and non-blocking streams (Jeff Johnson)
2016-03-13  Add FP16 support (CudaHalfStorage, CudaHalfTensor) (Adam Paszke)
2016-02-26  properly shutdown and free cutorch on exit (soumith)
2015-12-29  synchronizeAll (Jeff Johnson)
2015-12-26  Add generic CudaTensor types to cutorch (Adam Lerer)
2015-11-13  cutorch copy/event changes (Jeff Johnson)
2015-08-21  Merge pull request #222 from torch/streamfixes (Soumith Chintala)
2015-08-21  streams patch from nvidia (Soumith Chintala)
2015-08-19  cutorch gc (Adam Lerer)
2015-06-24  Add MAGMA implementations of Torch LAPACK functions (Sam Gross)
2015-06-24  Stream support for BLAS Handles. (soumith)
2015-05-23  Fixing call to cudaMemGetInfo to use the correct device. (ztaylor)
2015-05-19  Add CudaHostAllocator (Dominik Grewe)
2015-05-13  Revert "Auto device: API changes, bug fixes, README.md" (Adam Lerer)
2015-04-29  Lua 5.2 compatibility (Sam Gross)
2015-04-29  Auto device mode, plus allocation helper functions. (Adam Lerer)
2015-04-09  adding optional device id to getMemoryUsage (soumith)
2015-04-09  depreceating deviceReset (soumith)
2015-04-07  adding cutorch streams (soumith)
2015-04-01  revamps TensorMath to remove sync points at many places, adds maskedSelect an... (soumith)
2015-03-27  Recreate cuBLAS handle on deviceReset. (Dominik Grewe)
2015-01-14  Pass a state to every THC function. (Dominik Grewe)
2014-11-19  Reset RNG state after device reset. (Dominik Grewe)
2014-11-12  fixed two implicit declaration bugs (found with -Werror) (soumith)
2014-11-12  adding getDevice for tensor, manualSeedAll and seedAll (soumith)