| author | nicholas-leonard <nick@nikopia.org> | 2015-08-11 05:15:15 +0300 |
|---|---|---|
| committer | nicholas-leonard <nick@nikopia.org> | 2015-08-11 05:15:15 +0300 |
| commit | 0e05ac975476fff3ecf75894595d60ba04b5e0d6 (patch) | |
| tree | e57984c7fc8fd4a2a77d6553ef0b4003486050f3 | |
| parent | 9650d23e77032ebbd65fc60e50571498eb7263d6 (diff) | |
fix lists
| Mode | File | Lines changed |
|---|---|---|
| -rw-r--r-- | doc/containers.md | 10 |
| -rwxr-xr-x | doc/convolution.md | 40 |
| -rwxr-xr-x | doc/criterion.md | 38 |
| -rwxr-xr-x | doc/simple.md | 68 |
| -rwxr-xr-x | doc/table.md | 42 |

5 files changed, 99 insertions, 99 deletions
diff --git a/doc/containers.md b/doc/containers.md
index 8d02ab9..9a83607 100644
--- a/doc/containers.md
+++ b/doc/containers.md
@@ -2,11 +2,11 @@
 # Containers #
 Complex neural networks are easily built using container classes:
- * [Container](#nn.Container) : abstract class inherited by containers ;
- * [Sequential](#nn.Sequential) : plugs layers in a feed-forward fully connected manner ;
- * [Parallel](#nn.Parallel) : applies its `ith` child module to the `ith` slice of the input Tensor ;
- * [Concat](#nn.Concat) : concatenates in one layer several modules along dimension `dim` ;
- * [DepthConcat](#nn.DepthConcat) : like Concat, but adds zero-padding when non-`dim` sizes don't match;
+ * [Container](#nn.Container) : abstract class inherited by containers ;
+ * [Sequential](#nn.Sequential) : plugs layers in a feed-forward fully connected manner ;
+ * [Parallel](#nn.Parallel) : applies its `ith` child module to the `ith` slice of the input Tensor ;
+ * [Concat](#nn.Concat) : concatenates in one layer several modules along dimension `dim` ;
+ * [DepthConcat](#nn.DepthConcat) : like Concat, but adds zero-padding when non-`dim` sizes don't match;
 See also the [Table Containers](#nn.TableContainers) for manipulating tables of [Tensors](https://github.com/torch/torch7/blob/master/doc/tensor.md).
diff --git a/doc/convolution.md b/doc/convolution.md
index 4f716c6..54b8da9 100755
--- a/doc/convolution.md
+++ b/doc/convolution.md
@@ -3,28 +3,28 @@
 A convolution is an integral that expresses the amount of overlap of one function `g` as it is shifted over another function `f`. It therefore "blends" one function with another. The neural network package supports convolution, pooling, subsampling and other relevant facilities. These are divided base on the dimensionality of the input and output [Tensors](https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor):
- * [Temporal Modules](#nn.TemporalModules) apply to sequences with a one-dimensional relationship
+ * [Temporal Modules](#nn.TemporalModules) apply to sequences with a one-dimensional relationship
 (e.g. sequences of words, phonemes and letters. Strings of some kind).
- * [TemporalConvolution](#nn.TemporalConvolution) : a 1D convolution over an input sequence ;
- * [TemporalSubSampling](#nn.TemporalSubSampling) : a 1D sub-sampling over an input sequence ;
- * [TemporalMaxPooling](#nn.TemporalMaxPooling) : a 1D max-pooling operation over an input sequence ;
- * [LookupTable](#nn.LookupTable) : a convolution of width `1`, commonly used for word embeddings ;
- * [Spatial Modules](#nn.SpatialModules) apply to inputs with two-dimensional relationships (e.g. images):
- * [SpatialConvolution](#nn.SpatialConvolution) : a 2D convolution over an input image ;
- * [SpatialSubSampling](#nn.SpatialSubSampling) : a 2D sub-sampling over an input image ;
- * [SpatialMaxPooling](#nn.SpatialMaxPooling) : a 2D max-pooling operation over an input image ;
- * [SpatialAveragePooling](#nn.SpatialAveragePooling) : a 2D average-pooling operation over an input image ;
- * [SpatialAdaptiveMaxPooling](#nn.SpatialAdaptiveMaxPooling) : a 2D max-pooling operation which adapts its parameters dynamically such that the output is of fixed size ;
- * [SpatialLPPooling](#nn.SpatialLPPooling) : computes the `p` norm in a convolutional manner on a set of input images ;
- * [SpatialConvolutionMap](#nn.SpatialConvolutionMap) : a 2D convolution that uses a generic connection table ;
- * [SpatialZeroPadding](#nn.SpatialZeroPadding) : padds a feature map with specified number of zeros ;
- * [SpatialSubtractiveNormalization](#nn.SpatialSubtractiveNormalization) : a spatial subtraction operation on a series of 2D inputs using
- * [SpatialBatchNormalization](#nn.SpatialBatchNormalization): mean/std normalization over the mini-batch inputs and pixels, with an optional affine transform that follows
+ * [TemporalConvolution](#nn.TemporalConvolution) : a 1D convolution over an input sequence ;
+ * [TemporalSubSampling](#nn.TemporalSubSampling) : a 1D sub-sampling over an input sequence ;
+ * [TemporalMaxPooling](#nn.TemporalMaxPooling) : a 1D max-pooling operation over an input sequence ;
+ * [LookupTable](#nn.LookupTable) : a convolution of width `1`, commonly used for word embeddings ;
+ * [Spatial Modules](#nn.SpatialModules) apply to inputs with two-dimensional relationships (e.g. images):
+ * [SpatialConvolution](#nn.SpatialConvolution) : a 2D convolution over an input image ;
+ * [SpatialSubSampling](#nn.SpatialSubSampling) : a 2D sub-sampling over an input image ;
+ * [SpatialMaxPooling](#nn.SpatialMaxPooling) : a 2D max-pooling operation over an input image ;
+ * [SpatialAveragePooling](#nn.SpatialAveragePooling) : a 2D average-pooling operation over an input image ;
+ * [SpatialAdaptiveMaxPooling](#nn.SpatialAdaptiveMaxPooling) : a 2D max-pooling operation which adapts its parameters dynamically such that the output is of fixed size ;
+ * [SpatialLPPooling](#nn.SpatialLPPooling) : computes the `p` norm in a convolutional manner on a set of input images ;
+ * [SpatialConvolutionMap](#nn.SpatialConvolutionMap) : a 2D convolution that uses a generic connection table ;
+ * [SpatialZeroPadding](#nn.SpatialZeroPadding) : padds a feature map with specified number of zeros ;
+ * [SpatialSubtractiveNormalization](#nn.SpatialSubtractiveNormalization) : a spatial subtraction operation on a series of 2D inputs using
+ * [SpatialBatchNormalization](#nn.SpatialBatchNormalization): mean/std normalization over the mini-batch inputs and pixels, with an optional affine transform that follows
 a kernel for computing the weighted average in a neighborhood ;
- * [Volumetric Modules](#nn.VolumetricModules) apply to inputs with three-dimensional relationships (e.g. videos) :
- * [VolumetricConvolution](#nn.VolumetricConvolution) : a 3D convolution over an input video (a sequence of images) ;
- * [VolumetricMaxPooling](#nn.VolumetricMaxPooling) : a 3D max-pooling operation over an input video.
- * [VolumetricAveragePooling](#nn.VolumetricAveragePooling) : a 3D average-pooling operation over an input video.
+ * [Volumetric Modules](#nn.VolumetricModules) apply to inputs with three-dimensional relationships (e.g. videos) :
+ * [VolumetricConvolution](#nn.VolumetricConvolution) : a 3D convolution over an input video (a sequence of images) ;
+ * [VolumetricMaxPooling](#nn.VolumetricMaxPooling) : a 3D max-pooling operation over an input video.
+ * [VolumetricAveragePooling](#nn.VolumetricAveragePooling) : a 3D average-pooling operation over an input video.
 <a name="nn.TemporalModules"></a>
 ## Temporal Modules ##
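Since this commit only re-indents the lists, the module semantics above are unchanged. For orientation, here is a minimal usage sketch (not part of the commit; the module choices and sizes are arbitrary) of how a container from doc/containers.md composes the spatial modules from doc/convolution.md:

```lua
require 'nn'

-- Sequential chains modules end to end; SpatialConvolution and
-- SpatialMaxPooling are two of the spatial modules listed above.
local net = nn.Sequential()
net:add(nn.SpatialConvolution(3, 16, 5, 5)) -- 3 input planes -> 16 output planes, 5x5 kernel
net:add(nn.SpatialMaxPooling(2, 2, 2, 2))   -- 2x2 max-pooling with stride 2

local output = net:forward(torch.rand(3, 32, 32)) -- one 3x32x32 image
print(output:size())                              -- 16x14x14 feature maps
```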
diff --git a/doc/criterion.md b/doc/criterion.md
index 4f89338..2928938 100755
--- a/doc/criterion.md
+++ b/doc/criterion.md
@@ -4,25 +4,25 @@
 [`Criterions`](#nn.Criterion) are helpful to train a neural network. Given an input and a target, they compute a gradient according to a given loss function.
- * Classification criterions:
- * [`BCECriterion`](#nn.BCECriterion): binary cross-entropy (two-class version of [`ClassNLLCriterion`](#nn.ClassNLLCriterion));
- * [`ClassNLLCriterion`](#nn.ClassNLLCriterion): negative log-likelihood for [`LogSoftMax`](transfer.md#nn.LogSoftMax) (multi-class);
- * [`CrossEntropyCriterion`](#nn.CrossEntropyCriterion): combines [`LogSoftMax`](transfer.md#nn.LogSoftMax) and [`ClassNLLCriterion`](#nn.ClassNLLCriterion);
- * [`MarginCriterion`](#nn.MarginCriterion): two class margin-based loss;
- * [`MultiMarginCriterion`](#nn.MultiMarginCriterion): multi-class margin-based loss;
- * [`MultiLabelMarginCriterion`](#nn.MultiLabelMarginCriterion): multi-class multi-classification margin-based loss;
- * Regression criterions:
- * [`AbsCriterion`](#nn.AbsCriterion): measures the mean absolute value of the element-wise difference between input;
- * [`MSECriterion`](#nn.MSECriterion): mean square error (a classic);
- * [`DistKLDivCriterion`](#nn.DistKLDivCriterion): Kullback–Leibler divergence (for fitting continuous probability distributions);
- * Embedding criterions (measuring whether two inputs are similar or dissimilar):
- * [`HingeEmbeddingCriterion`](#nn.HingeEmbeddingCriterion): takes a distance as input;
- * [`L1HingeEmbeddingCriterion`](#nn.L1HingeEmbeddingCriterion): L1 distance between two inputs;
- * [`CosineEmbeddingCriterion`](#nn.CosineEmbeddingCriterion): cosine distance between two inputs;
- * Miscelaneus criterions:
- * [`MultiCriterion`](#nn.MultiCriterion) : a weighted sum of other criterions each applied to the same input and target;
- * [`ParallelCriterion`](#nn.ParallelCriterion) : a weighted sum of other criterions each applied to a different input and target;
- * [`MarginRankingCriterion`](#nn.MarginRankingCriterion): ranks two inputs;
+ * Classification criterions:
+ * [`BCECriterion`](#nn.BCECriterion): binary cross-entropy (two-class version of [`ClassNLLCriterion`](#nn.ClassNLLCriterion));
+ * [`ClassNLLCriterion`](#nn.ClassNLLCriterion): negative log-likelihood for [`LogSoftMax`](transfer.md#nn.LogSoftMax) (multi-class);
+ * [`CrossEntropyCriterion`](#nn.CrossEntropyCriterion): combines [`LogSoftMax`](transfer.md#nn.LogSoftMax) and [`ClassNLLCriterion`](#nn.ClassNLLCriterion);
+ * [`MarginCriterion`](#nn.MarginCriterion): two class margin-based loss;
+ * [`MultiMarginCriterion`](#nn.MultiMarginCriterion): multi-class margin-based loss;
+ * [`MultiLabelMarginCriterion`](#nn.MultiLabelMarginCriterion): multi-class multi-classification margin-based loss;
+ * Regression criterions:
+ * [`AbsCriterion`](#nn.AbsCriterion): measures the mean absolute value of the element-wise difference between input;
+ * [`MSECriterion`](#nn.MSECriterion): mean square error (a classic);
+ * [`DistKLDivCriterion`](#nn.DistKLDivCriterion): Kullback–Leibler divergence (for fitting continuous probability distributions);
+ * Embedding criterions (measuring whether two inputs are similar or dissimilar):
+ * [`HingeEmbeddingCriterion`](#nn.HingeEmbeddingCriterion): takes a distance as input;
+ * [`L1HingeEmbeddingCriterion`](#nn.L1HingeEmbeddingCriterion): L1 distance between two inputs;
+ * [`CosineEmbeddingCriterion`](#nn.CosineEmbeddingCriterion): cosine distance between two inputs;
+ * Miscelaneus criterions:
+ * [`MultiCriterion`](#nn.MultiCriterion) : a weighted sum of other criterions each applied to the same input and target;
+ * [`ParallelCriterion`](#nn.ParallelCriterion) : a weighted sum of other criterions each applied to a different input and target;
+ * [`MarginRankingCriterion`](#nn.MarginRankingCriterion): ranks two inputs;
 <a name="nn.Criterion"></a>
 ## Criterion ##
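Every criterion in the list above follows the same `forward`/`backward` contract. A minimal sketch of that workflow (illustrative only; the 10-to-3 sizes are arbitrary) with `ClassNLLCriterion`, which expects the log-probabilities produced by `LogSoftMax`:

```lua
require 'nn'

local model = nn.Sequential()
model:add(nn.Linear(10, 3))
model:add(nn.LogSoftMax())
local criterion = nn.ClassNLLCriterion()

local input, target = torch.rand(10), 2                      -- target is a class index
local loss = criterion:forward(model:forward(input), target) -- scalar loss
local gradOutput = criterion:backward(model.output, target)  -- gradient w.r.t. model output
model:backward(input, gradOutput)                            -- backpropagate through the model
```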
diff --git a/doc/simple.md b/doc/simple.md
index bc4881b..ebb2d2f 100755
--- a/doc/simple.md
+++ b/doc/simple.md
@@ -2,40 +2,40 @@
 # Simple layers #
 Simple Modules are used for various tasks like adapting Tensor methods and providing affine transformations :
- * Parameterized Modules :
- * [Linear](#nn.Linear) : a linear transformation ;
- * [SparseLinear](#nn.SparseLinear) : a linear transformation with sparse inputs ;
- * [Add](#nn.Add) : adds a bias term to the incoming data ;
- * [Mul](#nn.Mul) : multiply a single scalar factor to the incoming data ;
- * [CMul](#nn.CMul) : a component-wise multiplication to the incoming data ;
- * [CDiv](#nn.CDiv) : a component-wise division to the incoming data ;
- * [Euclidean](#nn.Euclidean) : the euclidean distance of the input to `k` mean centers ;
- * [WeightedEuclidean](#nn.WeightedEuclidean) : similar to [Euclidean](#nn.Euclidean), but additionally learns a diagonal covariance matrix ;
- * Modules that adapt basic Tensor methods :
- * [Copy](#nn.Copy) : a [copy](https://github.com/torch/torch7/blob/master/doc/tensor.md#torch.Tensor.copy) of the input with [type](https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor-or-string-typetype) casting ;
- * [Narrow](#nn.Narrow) : a [narrow](https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor-narrowdim-index-size) operation over a given dimension ;
- * [Replicate](#nn.Replicate) : [repeats](https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor-repeattensorresult-sizes) input `n` times along its first dimension ;
- * [Reshape](#nn.Reshape) : a [reshape](https://github.com/torch/torch7/blob/master/doc/maths.md#res-torchreshaperes-x-m-n) of the inputs ;
- * [View](#nn.View) : a [view](https://github.com/torch/torch7/blob/master/doc/tensor.md#result-viewresult-tensor-sizes) of the inputs ;
- * [Select](#nn.Select) : a [select](https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor-selectdim-index) over a given dimension ;
- * Modules that adapt mathematical Tensor methods :
- * [Max](#nn.Max) : a [max](https://github.com/torch/torch7/blob/master/doc/maths.md#torch.max) operation over a given dimension ;
- * [Min](#nn.Min) : a [min](https://github.com/torch/torch7/blob/master/doc/maths.md#torchminresval-resind-x) operation over a given dimension ;
- * [Mean](#nn.Mean) : a [mean](https://github.com/torch/torch7/blob/master/doc/maths.md#res-torchmeanres-x-dim) operation over a given dimension ;
- * [Sum](#nn.Sum) : a [sum](https://github.com/torch/torch7/blob/master/doc/maths.md#res-torchsumres-x) operation over a given dimension ;
- * [Exp](#nn.Exp) : an element-wise [exp](https://github.com/torch/torch7/blob/master/doc/maths.md#res-torchexpres-x) operation ;
- * [Abs](#nn.Abs) : an element-wise [abs](https://github.com/torch/torch7/blob/master/doc/maths.md#res-torchabsres-x) operation ;
- * [Power](#nn.Power) : an element-wise [pow](https://github.com/torch/torch7/blob/master/doc/maths.md#res-torchpowres-x) operation ;
- * [Square](#nn.Square) : an element-wise square operation ;
- * [Sqrt](#nn.Sqrt) : an element-wise [sqrt](https://github.com/torch/torch7/blob/master/doc/maths.md#res-torchsqrtres-x) operation ;
- * [MM](#nn.MM) : matrix-matrix multiplication (also supports batches of matrices) ;
- * Miscellaneous Modules :
- * [BatchNormalization](#nn.BatchNormalization) - mean/std normalization over the mini-batch inputs (with an optional affine transform) ;
- * [Identity](#nn.Identity) : forward input as-is to output (useful with [ParallelTable](table.md#nn.ParallelTable));
- * [Dropout](#nn.Dropout) : masks parts of the `input` using binary samples from a [bernoulli](http://en.wikipedia.org/wiki/Bernoulli_distribution) distribution ;
- * [SpatialDropout](#nn.SpatialDropout) : Same as Dropout but for spatial inputs where adjacent pixels are strongly correlated ;
- * [Padding](#nn.Padding) : adds padding to a dimension ;
- * [L1Penalty](#nn.L1Penalty) : adds an L1 penalty to an input (for sparsity);
+ * Parameterized Modules :
+ * [Linear](#nn.Linear) : a linear transformation ;
+ * [SparseLinear](#nn.SparseLinear) : a linear transformation with sparse inputs ;
+ * [Add](#nn.Add) : adds a bias term to the incoming data ;
+ * [Mul](#nn.Mul) : multiply a single scalar factor to the incoming data ;
+ * [CMul](#nn.CMul) : a component-wise multiplication to the incoming data ;
+ * [CDiv](#nn.CDiv) : a component-wise division to the incoming data ;
+ * [Euclidean](#nn.Euclidean) : the euclidean distance of the input to `k` mean centers ;
+ * [WeightedEuclidean](#nn.WeightedEuclidean) : similar to [Euclidean](#nn.Euclidean), but additionally learns a diagonal covariance matrix ;
+ * Modules that adapt basic Tensor methods :
+ * [Copy](#nn.Copy) : a [copy](https://github.com/torch/torch7/blob/master/doc/tensor.md#torch.Tensor.copy) of the input with [type](https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor-or-string-typetype) casting ;
+ * [Narrow](#nn.Narrow) : a [narrow](https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor-narrowdim-index-size) operation over a given dimension ;
+ * [Replicate](#nn.Replicate) : [repeats](https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor-repeattensorresult-sizes) input `n` times along its first dimension ;
+ * [Reshape](#nn.Reshape) : a [reshape](https://github.com/torch/torch7/blob/master/doc/maths.md#res-torchreshaperes-x-m-n) of the inputs ;
+ * [View](#nn.View) : a [view](https://github.com/torch/torch7/blob/master/doc/tensor.md#result-viewresult-tensor-sizes) of the inputs ;
+ * [Select](#nn.Select) : a [select](https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor-selectdim-index) over a given dimension ;
+ * Modules that adapt mathematical Tensor methods :
+ * [Max](#nn.Max) : a [max](https://github.com/torch/torch7/blob/master/doc/maths.md#torch.max) operation over a given dimension ;
+ * [Min](#nn.Min) : a [min](https://github.com/torch/torch7/blob/master/doc/maths.md#torchminresval-resind-x) operation over a given dimension ;
+ * [Mean](#nn.Mean) : a [mean](https://github.com/torch/torch7/blob/master/doc/maths.md#res-torchmeanres-x-dim) operation over a given dimension ;
+ * [Sum](#nn.Sum) : a [sum](https://github.com/torch/torch7/blob/master/doc/maths.md#res-torchsumres-x) operation over a given dimension ;
+ * [Exp](#nn.Exp) : an element-wise [exp](https://github.com/torch/torch7/blob/master/doc/maths.md#res-torchexpres-x) operation ;
+ * [Abs](#nn.Abs) : an element-wise [abs](https://github.com/torch/torch7/blob/master/doc/maths.md#res-torchabsres-x) operation ;
+ * [Power](#nn.Power) : an element-wise [pow](https://github.com/torch/torch7/blob/master/doc/maths.md#res-torchpowres-x) operation ;
+ * [Square](#nn.Square) : an element-wise square operation ;
+ * [Sqrt](#nn.Sqrt) : an element-wise [sqrt](https://github.com/torch/torch7/blob/master/doc/maths.md#res-torchsqrtres-x) operation ;
+ * [MM](#nn.MM) : matrix-matrix multiplication (also supports batches of matrices) ;
+ * Miscellaneous Modules :
+ * [BatchNormalization](#nn.BatchNormalization) - mean/std normalization over the mini-batch inputs (with an optional affine transform) ;
+ * [Identity](#nn.Identity) : forward input as-is to output (useful with [ParallelTable](table.md#nn.ParallelTable));
+ * [Dropout](#nn.Dropout) : masks parts of the `input` using binary samples from a [bernoulli](http://en.wikipedia.org/wiki/Bernoulli_distribution) distribution ;
+ * [SpatialDropout](#nn.SpatialDropout) : Same as Dropout but for spatial inputs where adjacent pixels are strongly correlated ;
+ * [Padding](#nn.Padding) : adds padding to a dimension ;
+ * [L1Penalty](#nn.L1Penalty) : adds an L1 penalty to an input (for sparsity);
 <a name="nn.Linear"></a>
 ## Linear ##
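Likewise for the simple layers: a minimal sketch (illustrative only, arbitrary sizes) combining a Tensor-adapting module, a parameterized module and a miscellaneous one from the list above:

```lua
require 'nn'

local mlp = nn.Sequential()
mlp:add(nn.View(16))      -- reshape a 4x4 input into a 16-element vector
mlp:add(nn.Linear(16, 8)) -- affine transformation y = Wx + b
mlp:add(nn.Dropout(0.5))  -- Bernoulli masking; mlp:evaluate() disables it

print(mlp:forward(torch.rand(4, 4)):size()) -- 8
```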
diff --git a/doc/table.md b/doc/table.md
index 221e4c3..61d1085 100755
--- a/doc/table.md
+++ b/doc/table.md
@@ -4,27 +4,27 @@
 This set of modules allows the manipulation of `table`s through the layers of a neural network.
 This allows one to build very rich architectures:
- * `table` Container Modules encapsulate sub-Modules:
- * [`ConcatTable`](#nn.ConcatTable): applies each member module to the same input [`Tensor`](https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor) and outputs a `table`;
- * [`ParallelTable`](#nn.ParallelTable): applies the `i`-th member module to the `i`-th input and outputs a `table`;
- * Table Conversion Modules convert between `table`s and `Tensor`s or `table`s:
- * [`SplitTable`](#nn.SplitTable): splits a `Tensor` into a `table` of `Tensor`s;
- * [`JoinTable`](#nn.JoinTable): joins a `table` of `Tensor`s into a `Tensor`;
- * [`MixtureTable`](#nn.MixtureTable): mixture of experts weighted by a gater;
- * [`SelectTable`](#nn.SelectTable): select one element from a `table`;
- * [`NarrowTable`](#nn.NarrowTable): select a slice of elements from a `table`;
- * [`FlattenTable`](#nn.FlattenTable): flattens a nested `table` hierarchy;
- * Pair Modules compute a measure like distance or similarity from a pair (`table`) of input `Tensor`s:
- * [`PairwiseDistance`](#nn.PairwiseDistance): outputs the `p`-norm. distance between inputs;
- * [`DotProduct`](#nn.DotProduct): outputs the dot product (similarity) between inputs;
- * [`CosineDistance`](#nn.CosineDistance): outputs the cosine distance between inputs;
- * CMath Modules perform element-wise operations on a `table` of `Tensor`s:
- * [`CAddTable`](#nn.CAddTable): addition of input `Tensor`s;
- * [`CSubTable`](#nn.CSubTable): substraction of input `Tensor`s;
- * [`CMulTable`](#nn.CMulTable): multiplication of input `Tensor`s;
- * [`CDivTable`](#nn.CDivTable): division of input `Tensor`s;
- * `Table` of Criteria:
- * [`CriterionTable`](#nn.CriterionTable): wraps a [Criterion](criterion.md#nn.Criterion) so that it can accept a `table` of inputs.
+ * `table` Container Modules encapsulate sub-Modules:
+ * [`ConcatTable`](#nn.ConcatTable): applies each member module to the same input [`Tensor`](https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor) and outputs a `table`;
+ * [`ParallelTable`](#nn.ParallelTable): applies the `i`-th member module to the `i`-th input and outputs a `table`;
+ * Table Conversion Modules convert between `table`s and `Tensor`s or `table`s:
+ * [`SplitTable`](#nn.SplitTable): splits a `Tensor` into a `table` of `Tensor`s;
+ * [`JoinTable`](#nn.JoinTable): joins a `table` of `Tensor`s into a `Tensor`;
+ * [`MixtureTable`](#nn.MixtureTable): mixture of experts weighted by a gater;
+ * [`SelectTable`](#nn.SelectTable): select one element from a `table`;
+ * [`NarrowTable`](#nn.NarrowTable): select a slice of elements from a `table`;
+ * [`FlattenTable`](#nn.FlattenTable): flattens a nested `table` hierarchy;
+ * Pair Modules compute a measure like distance or similarity from a pair (`table`) of input `Tensor`s:
+ * [`PairwiseDistance`](#nn.PairwiseDistance): outputs the `p`-norm. distance between inputs;
+ * [`DotProduct`](#nn.DotProduct): outputs the dot product (similarity) between inputs;
+ * [`CosineDistance`](#nn.CosineDistance): outputs the cosine distance between inputs;
+ * CMath Modules perform element-wise operations on a `table` of `Tensor`s:
+ * [`CAddTable`](#nn.CAddTable): addition of input `Tensor`s;
+ * [`CSubTable`](#nn.CSubTable): substraction of input `Tensor`s;
+ * [`CMulTable`](#nn.CMulTable): multiplication of input `Tensor`s;
+ * [`CDivTable`](#nn.CDivTable): division of input `Tensor`s;
+ * `Table` of Criteria:
+ * [`CriterionTable`](#nn.CriterionTable): wraps a [Criterion](criterion.md#nn.Criterion) so that it can accept a `table` of inputs.
 `table`-based modules work by supporting `forward()` and `backward()` methods that can accept `table`s as inputs. It turns out that the usual [`Sequential`](containers.md#nn.Sequential) module can do this, so all that is needed is other child modules that take advantage of such `table`s.
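Finally, a minimal sketch of the table-based modules above (again illustrative, with arbitrary sizes): `ConcatTable` fans a `Tensor` out into a `table`, and `CAddTable` reduces that `table` back to a `Tensor`, all inside a plain `Sequential`, as the closing paragraph of the diff notes:

```lua
require 'nn'

local branches = nn.ConcatTable()
branches:add(nn.Linear(5, 2))
branches:add(nn.Linear(5, 2))

local net = nn.Sequential()
net:add(branches)        -- Tensor -> {Tensor, Tensor}
net:add(nn.CAddTable())  -- {Tensor, Tensor} -> Tensor

print(net:forward(torch.rand(5))) -- a 2-element Tensor
```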