Age | Commit message | Author |
|
Some other minor changes
|
|
|
|
extended documentation and test of Narrow for negative indices
|
|
Fixing sparse linear race condition
|
|
added support for negative lengths to Narrow
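For reference, a minimal sketch of the new behaviour, assuming the `nn.Narrow(dimension, offset, length)` constructor and a Torch install that includes this change:

```lua
require 'nn'

-- With a negative length, Narrow runs from the offset up to the
-- (-length)-th element from the end of that dimension.
local x = torch.range(1, 5)            -- {1, 2, 3, 4, 5}
print(nn.Narrow(1, 2, -1):forward(x))  -- elements 2 through 5
-- A negative offset counts from the end of the dimension:
print(nn.Narrow(1, -2, 2):forward(x))  -- elements 4 and 5
```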
|
|
Check that clearState returns itself
|
|
|
|
Correct typo
|
|
|
|
Changes to compile on Windows
|
|
Fix optim example to use the correct tensor type for the label
|
|
Fix optim example to use the correct tensor type for the label
|
|
Prior to this fix I was getting:
```
/Users/home/torch/install/bin/luajit: /Users/home/torch/install/share/lua/5.1/nn/THNN.lua:109: bad argument #3 to 'v' (cannot convert 'struct THByteTensor *' to 'struct THDoubleTensor *')
stack traceback:
[C]: in function 'v'
/Users/home/torch/install/share/lua/5.1/nn/THNN.lua:109: in function 'MSECriterion_updateOutput'
/Users/home/torch/install/share/lua/5.1/nn/MSECriterion.lua:14: in function 'forward'
sum_using_bit_represenation.lua:42: in function 'opfunc'
/Users/home/torch/install/share/lua/5.1/optim/sgd.lua:44: in function 'sgd'
sum_using_bit_represenation.lua:48: in main chunk
[C]: in function 'dofile'
...home/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x010088d1f0
```
The combined snippet that triggered this error for me was:
```lua
require 'nn'

local model = nn.Sequential()  -- make a multi-layer perceptron
local inputs = 2; local outputs = 1; local HUs = 20  -- parameters
model:add(nn.Linear(inputs, HUs))
model:add(nn.Tanh())
model:add(nn.Linear(HUs, outputs))
local criterion = nn.MSECriterion()

local batchSize = 128
local batchInputs = torch.Tensor(batchSize, inputs)
local batchLabels = torch.ByteTensor(batchSize)  -- note: wrong tensor type for MSECriterion
for i = 1, batchSize do
  local input = torch.randn(2)    -- normally distributed example in 2d
  local label = 1
  if input[1] * input[2] > 0 then -- calculate label for XOR function
    label = -1
  end
  batchInputs[i]:copy(input)
  batchLabels[i] = label
end

local params, gradParams = model:getParameters()
local optimState = {learningRate = 0.01}
require 'optim'

for epoch = 1, 50 do
  -- local function we give to optim
  -- it takes current weights as input, and outputs the loss
  -- and the gradient of the loss with respect to the weights
  -- gradParams is calculated implicitly by calling 'backward',
  -- because the model's weight and bias gradient tensors
  -- are simply views onto gradParams
  local function feval(params)
    gradParams:zero()
    local outputs = model:forward(batchInputs)
    local loss = criterion:forward(outputs, batchLabels)
    local dloss_doutput = criterion:backward(outputs, batchLabels)
    model:backward(batchInputs, dloss_doutput)
    return loss, gradParams
  end
  optim.sgd(feval, params, optimState)
end

x = torch.Tensor(2)
x[1] =  0.5; x[2] =  0.5; print(model:forward(x))
x[1] =  0.5; x[2] = -0.5; print(model:forward(x))
x[1] = -0.5; x[2] =  0.5; print(model:forward(x))
x[1] = -0.5; x[2] = -0.5; print(model:forward(x))
```
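The fix amounts to giving the labels the same tensor type as the model's output; a minimal sketch of the corrected line:

```lua
-- MSECriterion expects the target to have the same type as the module
-- output (DoubleTensor by default), and a ByteTensor could not hold
-- the -1 labels in any case:
local batchLabels = torch.DoubleTensor(batchSize)
```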
|
|
|
|
construct
|
|
construct
|
|
logsoftmax non-contiguous gradOutput fix
|
|
|
|
Fix flaky SpatialReflectionPadding test.
|
|
If, for example, sizeX = 6 and padL = padR = -3, the test fails.
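A sketch of that failing configuration, assuming the `nn.SpatialReflectionPadding(padL, padR, padT, padB)` constructor, where negative padding crops the input:

```lua
require 'nn'

-- width 6, cropped by 3 on each side: the output width would be
-- 6 - 3 - 3 = 0, which is why the randomized test occasionally
-- generated this case and failed
local input = torch.randn(1, 4, 6)  -- plane x height x width
local pad = nn.SpatialReflectionPadding(-3, -3, 0, 0)
-- pad:forward(input) -- errors: output width is not positive
```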
|
|
small tweaks to training example
|
|
|
|
|
|
replace the StochasticGradient example with optim example
|
|
|
|
|
|
add assert to improve error handling
|
|
|
|
|
|
Add gradInput nil check for SplitTable
|
|
|
|
prevent SelectTable and NarrowTable from clearing input
|
|
|
|
Adding an optional extra input to SpatialFull- and VolumetricFull- convolution.
|
|
add docs for nn.Log, nn.AddConstant & nn.MulConstant
|
|
fix docs
add nn.Inc & nn.Scale
fix
add docs for nn.Log, nn.AddConstant and nn.MulConstant
fix
add docs for nn.Log
add refs of nn.AddConstant and nn.MulConstant in simple.md
edit docs for nn.Log, nn.AddConstant and nn.MulConstant
edit docs
edit docs
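As a quick reference for the modules being documented, a small sketch of their element-wise behaviour:

```lua
require 'nn'

local x = torch.Tensor{1, 2, 3}
print(nn.AddConstant(2):forward(x))  -- adds 2 to every element: {3, 4, 5}
print(nn.MulConstant(3):forward(x))  -- multiplies every element by 3: {3, 6, 9}
print(nn.Log():forward(x))           -- element-wise natural logarithm
```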
|
|
Add documentation for nn.Transpose
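`nn.Transpose` takes a list of dimension pairs to swap; a minimal sketch:

```lua
require 'nn'

-- swaps dimensions 1 and 2 of the input tensor
local t = nn.Transpose({1, 2})
local x = torch.randn(2, 5)
print(t:forward(x):size())  -- 5x2
```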
|
|
|
|
The optional extra input to VolumetricFullConvolution can be used to dynamically set the output size, as an alternative to using the adj terms, which have the downside of being fixed at construction time.
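If I read the commit right, the extra input is a reference tensor whose size determines the output size; a sketch under that assumption (the table-input form and the example sizes here are assumptions, not taken from the commit):

```lua
require 'nn'

local conv = nn.VolumetricFullConvolution(16, 8, 3, 3, 3, 2, 2, 2)
local input = torch.randn(16, 10, 10, 10)
-- a tensor whose size is used only to define the desired output size
local sizeRef = torch.randn(8, 21, 21, 21)
-- passing {input, sizeRef}, instead of fixing adjT/adjW/adjH at
-- construction, makes the output match sizeRef's spatial dimensions
local output = conv:forward({input, sizeRef})
```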
|
|
Select table consistency and small test change
|
|
|
|
|
|
Remove THExpMinusApprox from SoftMax
|
|
Should address #804
|
|
MultiMarginCriterion description has a typo
|
|
```lua
loss(x, y) = sum_i(max(0, (margin - x[y] + x[i]))^p) / x:size(1)
```
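Spelled out in plain Lua (a sketch with `p = 1` and `margin = 1`, the defaults; note that Torch's implementation skips the `i == y` term, which would otherwise add a constant):

```lua
-- multi-class margin loss for one sample:
-- loss(x, y) = sum_i(max(0, (margin - x[y] + x[i]))^p) / x:size(1), i ~= y
local function multiMargin(x, y, p, margin)
  p = p or 1; margin = margin or 1
  local loss = 0
  for i = 1, #x do
    if i ~= y then
      loss = loss + math.max(0, margin - x[y] + x[i]) ^ p
    end
  end
  return loss / #x
end

-- only classes scoring within the margin of x[y] contribute
print(multiMargin({0.1, 0.8, 0.4}, 2))
```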
|
|
nn.Select support for negative indices
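A minimal sketch of the new behaviour, assuming the `nn.Select(dimension, index)` constructor:

```lua
require 'nn'

local x = torch.Tensor{{1, 2}, {3, 4}, {5, 6}}
-- a negative index counts from the end of the dimension:
print(nn.Select(1, -1):forward(x))  -- last row: {5, 6}
print(nn.Select(2, -1):forward(x))  -- last column: {2, 4, 6}
```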
|
|
|
|
Fix CMakeLists for Intel compilers
|
|
|