| author | GaetanMarceauCaron <gaetan.marceau-caron@inria.fr> | 2016-04-15 17:27:55 +0300 |
|---|---|---|
| committer | GaetanMarceauCaron <gaetan.marceau-caron@inria.fr> | 2016-04-15 17:27:55 +0300 |
| commit | 3eb226834d822191027b914c366757fa81c8fcbe (patch) | |
| tree | 6bc161f37a21c939304c9ae6e4bcccb3ee78d14f | |
| parent | d0befcaecd0bccd38863d7dc29dff919478e5d00 (diff) | |
small modif
-rw-r--r-- | README.md | 3 |
1 file changed, 1 insertion, 2 deletions
@@ -235,8 +235,7 @@
 As always, the step-size must be chosen accordingly. Two additional arguments are also possible:
 * gamma (default=0.01): determines the update rate of the metric in a minibatch setting, i.e., (1-gamma) * oldMetric + gamma * newMetric. Smaller minibatches require a smaller gamma. A default value depending on the size of the minibatches is `gamma = 1. - torch.pow(1.-1./nTraining,miniBatchSize)` where `nTraining` is the number of training examples of the dataset and `miniBatchSize` is the number of training examples per minibatch.
 * qdFlag (default=true): Whether to use the quasi-diagonal reduction (true) or only the diagonal (false). The former should be better.
-
-Replacing Linear by QDRiemmaNNLinear is a straightforward implementation of the outer product gradient descent.
+This module is a straightforward implementation of the outer product gradient descent.
 
 ## Requirements
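For illustration, here is the README's default-gamma formula and the moving-average metric update it controls, translated from Torch to plain Python. This is a sketch, not code from the repository; the function names `default_gamma` and `update_metric` are invented here, while the formulas come verbatim from the bullet points above.

```python
def default_gamma(nTraining, miniBatchSize):
    # Python equivalent of the README's Torch expression:
    # gamma = 1. - torch.pow(1. - 1. / nTraining, miniBatchSize)
    return 1.0 - (1.0 - 1.0 / nTraining) ** miniBatchSize

def update_metric(oldMetric, newMetric, gamma=0.01):
    # Exponential moving average of the metric, as described above:
    # (1 - gamma) * oldMetric + gamma * newMetric
    return (1.0 - gamma) * oldMetric + gamma * newMetric

# Smaller minibatches yield a smaller default gamma, e.g. with 60000
# training examples (a hypothetical dataset size):
print(default_gamma(nTraining=60000, miniBatchSize=1))    # ~1.67e-05
print(default_gamma(nTraining=60000, miniBatchSize=500))  # ~0.0083
```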