
github.com/torch/optim.git
author    Andreas Fidjeland <andreas@fidjeland.io>  2013-12-03 16:36:46 +0400
committer Andreas Fidjeland <andreas@fidjeland.io>  2013-12-03 16:36:46 +0400
commit    6bf15e408a423b13abe81f4566bb457331581134 (patch)
tree      1afe4d53404e305ff03d1cbbb9e0a325864b7fd5 /lbfgs.lua
parent    89fe0a012f2547dbf70058c372029cfc95efd446 (diff)
Turned documentation into markdown format
...for compatibility with dokx. The dotfile is not strictly necessary but is useful in excluding test code etc. from generated docs.
Diffstat (limited to 'lbfgs.lua')
-rw-r--r--  lbfgs.lua | 70
1 file changed, 35 insertions, 35 deletions
diff --git a/lbfgs.lua b/lbfgs.lua
index e8395e0..bea9db3 100644
--- a/lbfgs.lua
+++ b/lbfgs.lua
@@ -1,38 +1,38 @@
-----------------------------------------------------------------------
--- An implementation of L-BFGS, heavily inspired by minFunc (Mark Schmidt)
---
--- This implementation of L-BFGS relies on a user-provided line
--- search function (state.lineSearch). If this function is not
--- provided, then a simple learningRate is used to produce fixed
--- size steps. Fixed size steps are much less costly than line
--- searches, and can be useful for stochastic problems.
---
--- The learning rate is used even when a line search is provided.
--- This is also useful for large-scale stochastic problems, where
--- opfunc is a noisy approximation of f(x). In that case, the learning
--- rate allows a reduction of confidence in the step size.
---
--- ARGS:
--- opfunc : a function that takes a single input (X), the point of
--- evaluation, and returns f(X) and df/dX
--- x : the initial point
--- state : a table describing the state of the optimizer; after each
--- call the state is modified
--- state.maxIter : Maximum number of iterations allowed
--- state.maxEval : Maximum number of function evaluations
--- state.tolFun : Termination tolerance on the first-order optimality
--- state.tolX : Termination tol on progress in terms of func/param changes
--- state.lineSearch : A line search function
--- state.learningRate : If no line search provided, then a fixed step size is used
---
--- RETURN:
--- x* : the new x vector, at the optimal point
--- f : a table of all function values:
--- f[1] is the value of the function before any optimization
--- f[#f] is the final fully optimized value, at x*
---
--- (Clement Farabet, 2012)
---
+--[[ An implementation of L-BFGS, heavily inspired by minFunc (Mark Schmidt)
+
+This implementation of L-BFGS relies on a user-provided line
+search function (state.lineSearch). If this function is not
+provided, then a simple learningRate is used to produce fixed
+size steps. Fixed size steps are much less costly than line
+searches, and can be useful for stochastic problems.
+
+The learning rate is used even when a line search is provided.
+This is also useful for large-scale stochastic problems, where
+opfunc is a noisy approximation of f(x). In that case, the learning
+rate allows a reduction of confidence in the step size.
+
+ARGS:
+
+- `opfunc` : a function that takes a single input (X), the point of
+ evaluation, and returns f(X) and df/dX
+- `x` : the initial point
+- `state` : a table describing the state of the optimizer; after each
+ call the state is modified
+ - `state.maxIter` : Maximum number of iterations allowed
+ - `state.maxEval` : Maximum number of function evaluations
+ - `state.tolFun` : Termination tolerance on the first-order optimality
+ - `state.tolX` : Termination tol on progress in terms of func/param changes
+ - `state.lineSearch` : A line search function
+ - `state.learningRate` : If no line search provided, then a fixed step size is used
+
+RETURN:
+- `x*` : the new `x` vector, at the optimal point
+- `f` : a table of all function values:
+ `f[1]` is the value of the function before any optimization
+ `f[#f]` is the final fully optimized value, at `x*`
+
+(Clement Farabet, 2012)
+]]
function optim.lbfgs(opfunc, x, config, state)
-- get/update state
local config = config or {}
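The docstring above specifies the `opfunc` contract (return `f(X)` and `df/dX`) and notes that, absent a `state.lineSearch`, a fixed `learningRate` step is used. As a minimal sketch of that contract, here is a plain-Lua loop on a scalar quadratic; it is a hypothetical simplification (a real call to `optim.lbfgs` would pass a torch Tensor and the torch runtime), but the `opfunc` signature, the `state` fields, and the table of function values mirror the ARGS/RETURN description:

```lua
-- opfunc takes a point X and returns f(X) and df/dX,
-- as required by the docstring. Here f(x) = (x - 3)^2,
-- so the minimizer is x* = 3. (Scalar x stands in for a Tensor.)
local function opfunc(x)
  local f = (x - 3) * (x - 3)
  local dfdx = 2 * (x - 3)
  return f, dfdx
end

-- Hypothetical state table using the documented fields.
local state = { maxIter = 100, learningRate = 0.1 }

local x = 0      -- the initial point
local fs = {}    -- table of all function values; fs[1] is pre-optimization

for i = 1, state.maxIter do
  local f, dfdx = opfunc(x)
  fs[#fs + 1] = f
  -- no lineSearch provided: take a fixed-size step, as documented
  x = x - state.learningRate * dfdx
end

print(string.format("x* = %.4f, f[1] = %.1f, f[#f] = %.8f", x, fs[1], fs[#fs]))
```

Each step multiplies the error `(x - 3)` by `1 - 0.1 * 2 = 0.8`, so the iterate converges geometrically to 3; `fs[1]` is 9 and `fs[#fs]` is near zero, matching the documented shape of the returned `f` table.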