
github.com/torch/nn.git
2016-06-06  adding getParametersByDevice and a scatter-gather pattern for parameters. Some other minor changes  [getParamsByDevice]  (soumith)
2016-06-05  fix memory leak in SparseLinear (#844)  (Soumith Chintala)
2016-06-05  extended documentation of / added a test case for Narrow (#843)  (Simon Niklaus)
            extended documentation and test of Narrow for negative indices
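The two Narrow changes referenced here (#843 above and #822 below) concern negative arguments. A minimal Lua sketch of the behavior, assuming a negative length counts from the end of the dimension (so -1 means "through the last element"); that reading of the change is an assumption from the commit messages:

```
require 'nn'

local x = torch.range(1, 5)  -- {1, 2, 3, 4, 5}

-- Narrow(dimension, offset, length) takes `length` elements along
-- `dimension` starting at `offset`. With a negative length, the end
-- of the slice is counted from the end of the dimension instead.
local tail = nn.Narrow(1, 2, -1):forward(x)  -- elements 2..5
print(tail)
```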
2016-06-04  Merge pull request #811 from ebetica/sparselinear_fix  (Soumith Chintala)
            Fixing sparse linear race condition
2016-06-04  Merge pull request #822 from CodeRect/patch-1  (Soumith Chintala)
            added support for negative lengths to Narrow
2016-06-03  Merge pull request #842 from szagoruyko/clearstate-return  (Soumith Chintala)
            Check that clearState returns itself
2016-06-03  check that clearState returns itself  (Sergey Zagoruyko)
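The property this commit checks can be exercised directly; a minimal sketch (the module choice is arbitrary):

```
require 'nn'

-- clearState() drops cached intermediate tensors (output, gradInput, ...)
-- to shrink a model before serialization, and should return the module
-- itself so calls can be chained.
local m = nn.Linear(2, 3)
m:forward(torch.randn(2))
local ret = m:clearState()
assert(ret == m, 'clearState should return the module itself')
```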
2016-06-03  Merge pull request #840 from curious-attempt-bunny/patch-1  (Soumith Chintala)
            Correct typo
2016-06-03  Correct typo  (curious-attempt-bunny)
2016-06-01  Merge pull request #839 from elikosan/master  (Soumith Chintala)
            Changes to compile on Windows
2016-05-28  Merge pull request #836 from curious-attempt-bunny/master  (Soumith Chintala)
            Fix optim example to use the correct tensor type for the label
2016-05-28  Merge pull request #1 from curious-attempt-bunny/training-doc-fix  (curious-attempt-bunny)
            Fix optim example to use the correct tensor type for the label
2016-05-28  Fix optim example to use the correct tensor type for the label  (curious-attempt-bunny)
            Prior to this fix I was getting:

```
/Users/home/torch/install/bin/luajit: /Users/home/torch/install/share/lua/5.1/nn/THNN.lua:109: bad argument #3 to 'v' (cannot convert 'struct THByteTensor *' to 'struct THDoubleTensor *')
stack traceback:
    [C]: in function 'v'
    /Users/home/torch/install/share/lua/5.1/nn/THNN.lua:109: in function 'MSECriterion_updateOutput'
    /Users/home/torch/install/share/lua/5.1/nn/MSECriterion.lua:14: in function 'forward'
    sum_using_bit_represenation.lua:42: in function 'opfunc'
    /Users/home/torch/install/share/lua/5.1/optim/sgd.lua:44: in function 'sgd'
    sum_using_bit_represenation.lua:48: in main chunk
    [C]: in function 'dofile'
    ...home/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
    [C]: at 0x010088d1f0
```

            The composed snippets that were causing this error for me were:

```
require 'nn'

local model = nn.Sequential();  -- make a multi-layer perceptron
local inputs = 2; local outputs = 1; local HUs = 20; -- parameters
model:add(nn.Linear(inputs, HUs))
model:add(nn.Tanh())
model:add(nn.Linear(HUs, outputs))

local criterion = nn.MSECriterion()

local batchSize = 128
local batchInputs = torch.Tensor(batchSize, inputs)
local batchLabels = torch.ByteTensor(batchSize)

for i=1,batchSize do
  local input = torch.randn(2)   -- normally distributed example in 2d
  local label = 1
  if input[1]*input[2]>0 then    -- calculate label for XOR function
    label = -1;
  end
  batchInputs[i]:copy(input)
  batchLabels[i] = label
end

local params, gradParams = model:getParameters()
local optimState = {learningRate=0.01}

require 'optim'

for epoch=1,50 do
  -- local function we give to optim
  -- it takes current weights as input, and outputs the loss
  -- and the gradient of the loss with respect to the weights
  -- gradParams is calculated implicitly by calling 'backward',
  -- because the model's weight and bias gradient tensors
  -- are simply views onto gradParams
  local function feval(params)
    gradParams:zero()
    local outputs = model:forward(batchInputs)
    local loss = criterion:forward(outputs, batchLabels)
    local dloss_doutput = criterion:backward(outputs, batchLabels)
    model:backward(batchInputs, dloss_doutput)
    return loss, gradParams
  end
  optim.sgd(feval, params, optimState)
end

x = torch.Tensor(2)
x[1] =  0.5; x[2] =  0.5; print(model:forward(x))
x[1] =  0.5; x[2] = -0.5; print(model:forward(x))
x[1] = -0.5; x[2] =  0.5; print(model:forward(x))
x[1] = -0.5; x[2] = -0.5; print(model:forward(x))
```
2016-05-27  lib prefix for libTHNN.dll is missing on Windows  (Eric Cosatto)
2016-05-27  Visual Studio doesn't allow in-loop declaration in the 'omp parallel for' construct  (Eric Cosatto)
2016-05-27  Visual Studio doesn't allow in-loop declaration in the 'omp parallel for' construct  (Eric Cosatto)
2016-05-25  Merge pull request #834 from torch/lsmfix  (Soumith Chintala)
            logsoftmax non-contiguous gradOutput fix
2016-05-25  logsoftmax non-contiguous gradOutput fix  (Soumith Chintala)
2016-05-20  Merge pull request #829 from davidsaxton/nn_test  (Soumith Chintala)
            Fix flaky SpatialReflectionPadding test.
2016-05-20  Fix flaky SpatialReflectionPadding test.  (David Saxton)
            The test fails if, e.g., sizeX = 6 and padL = padR = -3.
2016-05-17  Merge pull request #826 from hughperkins/example-tweaks  (Soumith Chintala)
            small tweaks to training example
2016-05-17  move manual training to the front  (Hugh Perkins)
2016-05-17  additional tweaks to training example  (Hugh Perkins)
2016-05-17  Merge pull request #824 from hughperkins/switch-example-to-optim  (Soumith Chintala)
            replace the StochasticGradient example with optim example
2016-05-16  replace the StochasticGradient example with optim example  (Hugh Perkins)
2016-05-16  added support for negative lengths to Narrow  (Simon N)
2016-05-15  Merge pull request #821 from szagoruyko/thnn-assert  (Soumith Chintala)
            add assert to improve error handling
2016-05-15  add assert to improve error handling  (Sergey Zagoruyko)
2016-05-14  Fixing sparse linear race condition  (Zeming Lin)
2016-05-12  Merge pull request #819 from asrata/master  (Soumith Chintala)
            Add gradInput nil check for SplitTable
2016-05-12  Add gradInput nil check  (asrata)
2016-05-11  Merge pull request #816 from iamalbert/fix-clearstate  (Soumith Chintala)
            prevent SelectTable and NarrowTable from clearing their input
2016-05-11  copy clear state from Identity to {NarrowTable,SelectTable}  (Wen Li Zhuang)
2016-05-10  Merge pull request #808 from fbesse/master  (Soumith Chintala)
            Adding an optional extra input to SpatialFull- and VolumetricFull- convolution.
2016-05-09  Merge pull request #810 from yangky11/master  (Soumith Chintala)
            add docs for nn.Log, nn.AddConstant & nn.MulConstant
2016-05-07  add nn.Inc & nn.Scale  (Kaiyu Yang)
            fix docs; add nn.Inc & nn.Scale; add docs for nn.Log, nn.AddConstant and nn.MulConstant; add refs of nn.AddConstant and nn.MulConstant in simple.md; edit docs for nn.Log, nn.AddConstant and nn.MulConstant; edit docs
2016-05-07  Merge pull request #809 from juesato/transpose-docs  (Soumith Chintala)
            Add documentation for nn.Transpose
2016-05-07  Add documentation for nn.Transpose  (Jonathan Uesato)
2016-05-06  Adding an optional extra input to SpatialFullConvolution and VolumetricFullConvolution, which can be used to dynamically set the output size, as an alternative to using the adj terms, which have the downside of being fixed at construction time.  (Frederic Besse)
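If I read the change correctly, the extra input is passed as a second tensor in a table, and the module sizes its output to match that tensor's spatial dimensions instead of relying on the adjW/adjH terms; a hypothetical sketch (the table-input calling convention and the name `sizeRef` are assumptions drawn from the commit message):

```
require 'nn'

-- SpatialFullConvolution(nInputPlane, nOutputPlane, kW, kH, dW, dH, padW, padH)
local conv = nn.SpatialFullConvolution(3, 8, 4, 4, 2, 2, 1, 1)
local input = torch.randn(3, 16, 16)

-- a reference tensor whose spatial size the output should match
local sizeRef = torch.randn(1, 32, 32)
local out = conv:forward({input, sizeRef})
print(#out)  -- expected to be 8 x 32 x 32
```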
2016-05-04  Merge pull request #806 from albanD/SelectTable_consistency  (Soumith Chintala)
            Select table consistency and small test change
2016-05-04  removed unused code in test.lua and limit thread number  (albanD)
2016-05-04  make SelectTable forward consistent with backward  (albanD)
2016-05-02  Merge pull request #805 from fmassa/softmax_noapprox  (Soumith Chintala)
            Remove THExpMinusApprox from SoftMax
2016-05-02  Remove THExpMinusApprox from SoftMax  (fsuzanomassa)
            Should address #804
2016-05-01  Merge pull request #802 from akhilketkar/patch-1  (Soumith Chintala)
            MultiMarginCriterion description has a typo
2016-05-01  MultiMarginCriterion description has a typo  (Akhil Ketkar)

            ```lua
            loss(x, y) = sum_i(max(0, (margin - x[y] + x[i]))^p) / x:size(1)
            ```
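The corrected formula can be checked with a few lines of plain Lua; the sketch below fixes p = 1 and margin = 1, and assumes the sum skips i == y (the term that would otherwise always contribute the margin):

```
-- loss(x, y) = sum_i(max(0, (margin - x[y] + x[i]))^p) / x:size(1)
-- plain-Lua version over a table of scores, with p = 1 and margin = 1
local function multiMarginLoss(x, y)
  local margin, sum = 1, 0
  for i = 1, #x do
    if i ~= y then
      sum = sum + math.max(0, margin - x[y] + x[i])
    end
  end
  return sum / #x
end

-- the correct class (y = 2) scores highest, so the loss is small
print(multiMarginLoss({0.1, 0.8, 0.1}, 2))
```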
2016-04-30  Merge pull request #800 from JoostvDoorn/negativeSelect  (Soumith Chintala)
            nn.Select support for negative indices
2016-04-30  nn.Select accepts negative indices  (Joost van Doorn)
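A minimal sketch of the new Select behavior (tensor values chosen arbitrarily):

```
require 'nn'

-- Select(dimension, index) picks one slice along `dimension`;
-- with this change a negative index counts from the end,
-- so index -1 selects the last slice.
local x = torch.range(1, 6):resize(2, 3)  -- {{1,2,3},{4,5,6}}
local lastRow = nn.Select(1, -1):forward(x)
print(lastRow)  -- the second (last) row
```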
2016-04-29  Merge pull request #799 from fmassa/c11_intel  (Soumith Chintala)
            Fix CMakeLists for Intel compilers
2016-04-29  Fix CMakeLists for Intel compilers  (Francisco Massa)