
github.com/torch/optim.git
Age | Commit message | Author
2017-11-28 | Merge pull request #165 from ProGamerGov/patch-1 (HEAD, master) | Soumith Chintala
Fixed the link to the Adam research paper
2017-11-28 | Fixed the link to the Adam research paper | ProGamerGov
Fixed the link to the "Adam: A Method for Stochastic Optimization" research paper. This link no longer works: http://arxiv.org/pdf/1412.6980.pdf. I, and many others in machine learning, find it better to link to the paper's arXiv page itself rather than directly to the PDF file: it is not easy to get to the paper's arXiv page from the PDF, but it is easy to get to the PDF from the arXiv page.
2017-02-08 | Merge pull request #150 from Amir-Arsalan/patch-1 | Soumith Chintala
Update algos.md
2017-02-08 | Update algos.md | Amir Arsalan Soltani
2016-10-16 | Merge pull request #142 from ibmua/patch-1 | Soumith Chintala
Fixed misspelling
2016-10-16 | Update algos.md | Menshykov
2016-10-09 | Merge pull request #138 from DmitryUlyanov/master | Soumith Chintala
Fix polyinterp to let lbfgs with lswolfe work on GPU
2016-10-09 | clean up comments | Dmitry Ulyanov
2016-10-09 | fix polyinterp, so lswolfe can be used with CUDA | Dmitry Ulyanov
2016-09-30 | Merge pull request #137 from Atcold/patch-1 | Soumith Chintala
Update intro.md
2016-09-30 | Fix formatting and add Cuda training info | Alfredo Canziani
2016-09-30 | Update intro.md | Alfredo Canziani
Refactored text for consistency with the rest of the doc. The goal of training a nn is to perform well on the validation set, not the training set. Removed `local` from the snippets so they are runnable in the interpreter.
2016-09-29 | Merge pull request #136 from wydwww/master | Soumith Chintala
Fix typos
2016-09-29 | Fix typos | Yiding Wang
2016-09-27 | Merge pull request #135 from Atcold/local-doc | Soumith Chintala
Enable local doc for inline help
2016-09-27 | Enable local doc for inline help | Alfredo Canziani
2016-09-20 | Merge pull request #134 from hughperkins/migrate-example-from-nn | Soumith Chintala
move optim doc from nn
2016-09-20 | move optim doc from nn | Hugh Perkins
2016-09-15 | Merge pull request #132 from codeAC29/master | Soumith Chintala
Prevent displaying of plots and documentation for it
2016-09-15 | Added documentation for display and logscale | Abhishek Chaurasia
2016-09-15 | Added option to set/reset displaying of plot | Abhishek Chaurasia
2016-09-13 | Merge pull request #131 from korymath/patch-1 | Soumith Chintala
Spelling mistake.
2016-09-13 | Spelling mistake. | Kory
2016-09-06 | make initialMean configurable | Soumith Chintala
2016-09-06 | Merge pull request #130 from iassael/master | Soumith Chintala
Reverted to zero mean squared values init
2016-09-06 | reverted to zero mean squared values init | Yannis Assael
2016-08-25 | Merge pull request #127 from gcinbis/patch-2 | Soumith Chintala
Copy C1 value, in case it is a Tensor reference
2016-08-25 | Keep objective values, in case they are references | R. Gokberk Cinbis
When opfunc() simply returns the output state variable of an nn model (i.e. when opfunc() simply returns my_net:forward()'s output), the second opfunc() call within the for loop updates not only C2 but also C1. In this case, dC_est is wrongly 0. Avoid this behaviour by blindly copying the C1 contents when it is a Tensor/CudaTensor. The overhead should be bearable, as C1 is a scalar.
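For readers unfamiliar with the issue, here is a hedged sketch of a central-difference gradient check in the spirit of optim.checkgrad; the function name `checkgrad_sketch`, its signature and the `asNumber` helper are illustrative, not the library's exact code.

```lua
require 'torch'

-- Illustrative gradient check (not the exact optim.checkgrad code).
-- opfunc(x) is expected to return the objective (a number or a 1-element
-- Tensor) and the analytical gradient; x is a 1D parameter vector.
local function checkgrad_sketch(opfunc, x, eps)
   eps = eps or 1e-7
   local function asNumber(v)
      -- Copying the scalar out of a returned Tensor decouples it from
      -- opfunc's reused output buffer; the commit above clones the Tensor
      -- instead, which has the same effect.
      if torch.isTensor(v) then return v:sum() end
      return v
   end
   local _, dC = opfunc(x)
   dC = dC:clone()
   local dC_est = x:clone():zero()
   for i = 1, x:nElement() do
      local orig = x[i]
      x[i] = orig + eps
      -- Without the copy inside asNumber, a Tensor returned by opfunc could
      -- be overwritten by the next call and (C1 - C2) would collapse to zero.
      local C1 = asNumber(opfunc(x))
      x[i] = orig - eps
      local C2 = asNumber(opfunc(x))
      x[i] = orig                      -- restore the exact original value
      dC_est[i] = (C1 - C2) / (2 * eps)
   end
   local diff = torch.norm(dC - dC_est) / torch.norm(dC + dC_est)
   return diff, dC, dC_est
end
```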
2016-08-25 | Merge pull request #126 from gcinbis/patch-1 | Soumith Chintala
Reduce numerical errors.
2016-08-25 | Reduce numerical errors. | R. Gokberk Cinbis
`x[i] + eps - 2*eps` may not result in exactly the same `x[i]`, which may increase the approximation error in the gradient estimate.
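A minimal, hedged demonstration of the round-off issue (the value 0.1 and the eps below are arbitrary):

```lua
require 'torch'

-- Adding eps and later subtracting it again need not reproduce the original
-- value exactly in floating point; saving and re-assigning the value does.
local x = torch.DoubleTensor{0.1}      -- arbitrary parameter value
local eps = 1e-7
local orig = x[1]

x[1] = x[1] + eps                      -- perturb up
x[1] = x[1] - 2 * eps                  -- perturb down
x[1] = x[1] + eps                      -- arithmetic "undo"
print(x[1] == orig)                    -- may print false

x[1] = orig                            -- exact restore, as in the patch
print(x[1] == orig)                    -- true
```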
2016-08-08 | Merge pull request #124 from Atcold/patch-1 | Soumith Chintala
One-line Logger initialisation
2016-08-08 | One-line Logger initialisation | Alfredo Canziani
A `Logger` can be created and set up in one line: `log = optim.Logger('foo'):setNames{'a', 'b'}:style{'-', '-'}`
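A hedged usage sketch of this chained initialisation; the file name 'loss.log' and the logged values are placeholders, not part of the commit.

```lua
require 'optim'

-- Create and configure a Logger in one statement, then log a few values.
-- 'loss.log' and the numbers are made up for the example.
local logger = optim.Logger('loss.log'):setNames{'train loss', 'test loss'}:style{'-', '-'}

for epoch = 1, 3 do
   local trainLoss, testLoss = 1 / epoch, 1.2 / epoch   -- dummy values
   logger:add{trainLoss, testLoss}
end
-- logger:plot()   -- would draw both curves with gnuplot, if available
```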
2016-08-07 | Merge pull request #123 from torch/de | Soumith Chintala
Add Differential Evolution
2016-08-07 | Add Differential Evolution | Li Zhijian
2016-07-30 | fixing to be tensor type agnostic | Soumith Chintala
2016-07-21 | Merge pull request #122 from Cadene/master | Soumith Chintala
Add LearningRateDecay to Adam
2016-07-21 | Add Adam learningRateDecay to doc | Cadene
2016-07-21 | Add learningRateDecay to Adam | Cadene
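As a hedged sketch only: a learningRateDecay option of this kind typically shrinks the effective step size with the iteration counter, roughly as below; the exact formula and its placement inside optim.adam may differ.

```lua
-- Hypothetical illustration of a per-step learning-rate decay; optim.adam's
-- actual code may apply it differently.
local config = { learningRate = 1e-3, learningRateDecay = 1e-4 }
local state  = { t = 0 }   -- iteration counter, as kept in the optimiser state

local function effectiveLearningRate(config, state)
   local lr  = config.learningRate or 1e-3
   local lrd = config.learningRateDecay or 0
   return lr / (1 + state.t * lrd)   -- decays towards 0 as t grows
end

for step = 1, 3 do
   state.t = state.t + 1
   print(('step %d: lr = %.6f'):format(step, effectiveLearningRate(config, state)))
end
```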
2016-06-30 | Merge pull request #121 from Atcold/doc-fix | Soumith Chintala
Documentation and code refactoring
2016-06-30 | Add optim.Logger() documentation | Alfredo Canziani
2016-06-30 | Fix bad alignment, trailing spaces and tabs | Alfredo Canziani
2016-06-30 | Fix state/config improper documentation | Alfredo Canziani
2016-06-27 | Refactoring documentation | Alfredo Canziani
2016-06-15 | Merge pull request #119 from chenb67/master | Soumith Chintala
add weight decay support to adamax
2016-06-15 | add weight decay support to adamax | Chen Buskilla
2016-06-10 | Merge pull request #118 from gcheron/adam-wdec | Soumith Chintala
add weight decay support to adam
2016-06-10 | add weight decay support to adam | gcheron
2016-06-09 | Merge pull request #117 from andreaskoepf/rmsprop_warmup | Soumith Chintala
Init rmsprop mean square state 'm' with 1 instead of 0
2016-06-09 | Init rmsprop mean square state 'm' with 1 instead of 0 | Andreas Köpf
With alpha near 1 (e.g. the default value of 0.99), the gradient was effectively scaled up by a division by a number smaller than 1 during the first few iterations. With the original implementation, the learning rate had to be set to a much smaller value when using rmsprop than with plain-vanilla sgd in order not to diverge.
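A hedged numerical sketch of why the zero initialisation inflates the early steps; it follows the usual rmsprop accumulator update, and the constants are made up.

```lua
require 'torch'

-- Running mean of squared gradients, as rmsprop keeps it:
--   m <- alpha*m + (1-alpha)*g^2,   step ~ lr*g / (sqrt(m) + eps)
-- With m initialised to 0 and alpha = 0.99, the first denominator is
-- sqrt(1-alpha)*|g| = 0.1*|g|, so the first step is ~lr/sqrt(1-alpha) = 10*lr
-- regardless of the gradient's scale; starting m at 1 avoids that blow-up.
local alpha, eps, lr = 0.99, 1e-8, 1e-2
local g = torch.Tensor{0.5}               -- made-up gradient

for _, m0 in ipairs({0, 1}) do
   local m = torch.Tensor{m0}
   m:mul(alpha):addcmul(1 - alpha, g, g)  -- one accumulator update
   local step = lr * g[1] / (math.sqrt(m[1]) + eps)
   print(('m0 = %d  ->  first step magnitude = %.4f'):format(m0, step))
end
```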
2016-06-03 | Merge pull request #115 from torch/revert-113-sgd-lrs-fix | Soumith Chintala
Revert "Fix bug with sgd individual learning rates"