github.com/torch/dok.git
author    Ronan Collobert <ronan@collobert.com>  2014-02-13 21:18:52 +0400
committer Ronan Collobert <ronan@collobert.com>  2014-02-13 21:18:52 +0400
commit    45593d5800dabd8cc3e1b6eb7c6f7b392736353a (patch)
tree      03866edc390315acc18be1c98a0491774bae5ad6
parent    fc99ce3e19d631955266cad6544dcd12bbba4b3e (diff)
dok -> md
-rw-r--r--  docinstall/README.md (renamed from dokinstall/index.dok)               453
-rw-r--r--  docinstall/blas.md (renamed from dokinstall/blas.dok)                  121
-rw-r--r--  docinstall/installqtdebian.md (renamed from dokinstall/installqtdebian.dok)    17
-rw-r--r--  docinstall/installqtwindows.md (renamed from dokinstall/installqtwindows.dok)  13
-rw-r--r--  doclua/README.md                                                         84
-rw-r--r--  doctutorial/README.md (renamed from doktutorial/index.dok)              259
-rw-r--r--  doklua/index.dok                                                         83
7 files changed, 518 insertions(+), 512 deletions(-)
diff --git a/dokinstall/index.dok b/docinstall/README.md
index e4b4ac2..77a9ad4 100644
--- a/dokinstall/index.dok
+++ b/docinstall/README.md
@@ -1,170 +1,170 @@
-====== Torch Installation Manual ======
-{{anchor:install.dok}}
+<a name="install.dok"/>
+# Torch Installation Manual #
Currently, Torch7 can be installed only from the
sources. Binary releases will be distributed soon.
-====== Installing from sources ======
-{{anchor:install.sources}}
+<a name="install.sources"/>
+# Installing from sources #
-''Torch7'' is mainly made out of ''ANSI C'' and ''Lua'', which makes
+`Torch7` is mainly made out of `ANSI C` and `Lua`, which makes
it easy to compile everywhere. The graphical interface is based on QT
-and requires a ''C++'' compiler.
+and requires a `C++` compiler.
The installation process is easily portable to most platforms,
-thanks to [[http://www.cmake.org|CMake]], a tool which replace the
-aging ''configure/automake'' tools. CMake allows us to detect and
+thanks to [CMake](http://www.cmake.org), a tool which replaces the
+aging `configure/automake` tools. CMake allows us to detect and
configure Torch properly.
Here you will find step-by-step instructions for each system we support.
-You are also strongly encouraged to read the [[#CMakeHints|CMake hints]]
+You are also strongly encouraged to read the [CMake hints](#CMakeHints)
section for more details on CMake (and before reporting a problem).
If you are a programmer, you might want to produce your own
-[[#DevPackages|development package]].
+[development package](#DevPackages).
-===== Linux =====
-{{anchor:install.linux}}
+<a name="install.linux"/>
+## Linux ##
-==== A. Requirements ====
+### A. Requirements ###
Torch compilation requires a number of standard packages described below:
- * **Mandatory:**
- * A ''C/C++'' compiler. [[http://clang.llvm.org|CLang]] is great. The [[http://gcc.gnu.org|GNU compiler]] or Intel compiler work fine.
- * [[http://www.cmake.org|CMake]] version 2.6 or later is required.
- * [[http://gnuplot.info|Gnuplot]], version ''4.4'' or later is recommended for best experience.
-
- * **Recommended:**
- * [[http://tiswww.case.edu/php/chet/readline/rltop.html|GNU Readline]]
- * [[http://git-scm.com/|Git]] to keep up-to-date sources
- * [[http://trolltech.com/products|QT 4.4]] or newer development libraries
- * BLAS. [[https://github.com/xianyi/OpenBLAS|OpenBLAS]] is recommended for that purpose on Intel computers.
- * LAPACK. [[https://github.com/xianyi/OpenBLAS|OpenBLAS]] is recommended for that purpose on Intel computers.
+ * __Mandatory:__
+ * A `C/C++` compiler. [CLang](http://clang.llvm.org) is great. The [GNU compiler](http://gcc.gnu.org) or Intel compiler work fine.
+ * [CMake](http://www.cmake.org) version 2.6 or later is required.
+ * [Gnuplot](http://gnuplot.info), version `4.4` or later is recommended for best experience.
+
+ * __Recommended:__
+ * [GNU Readline](http://tiswww.case.edu/php/chet/readline/rltop.html)
+ * [Git](http://git-scm.com/) to keep up-to-date sources
+ * [QT 4.4](http://trolltech.com/products) or newer development libraries
+ * BLAS. [OpenBLAS](https://github.com/xianyi/OpenBLAS) is recommended for that purpose on Intel computers.
+ * LAPACK. [OpenBLAS](https://github.com/xianyi/OpenBLAS) is recommended for that purpose on Intel computers.
The installation of most of these packages should be rather
-straightforward. For ''Ubuntu 10.04 LTS'' system we use the
-''apt-get'' magic:
+straightforward. For an `Ubuntu 10.04 LTS` system we use the
+`apt-get` magic:
For GCC:
-<file>
+```
sudo apt-get install gcc g++
-</file>
+```
If you prefer to use CLang:
-<file>
+```
sudo apt-get install clang
-</file>
+```
CMake reads CC and CXX variables. If you do not want to use the default compiler, just do
-<file>
+```
export CC=clang
export CXX=clang++
-</file>
+```
To install the additional packages, do:
-<file>
+```
sudo apt-get install cmake
sudo apt-get install libreadline5-dev
sudo apt-get install git-core
sudo apt-get install gnuplot
-</file>
+```
Please adapt according to your distribution.
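For example, on an RPM-based distribution such as Fedora, the equivalent would be something like this (the package names are an assumption, check your distribution):
```
sudo yum install gcc gcc-c++ cmake readline-devel git gnuplot
```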
Note: the readline library is helpful for better command-line interaction,
but it is not required. It is only used when QT is installed.
-We require ''QT 4.4'' for handling graphics (//beware// not installing QT 4.3
+We require `QT 4.4` for handling graphics (_beware_: do not install QT 4.3
or older). If it is not found at compile time, Torch will still compile but
-no graphics will be available. On ''Ubuntu 10.04 LTS'' distribution you can
+no graphics will be available. On `Ubuntu 10.04 LTS` distribution you can
install it with
-<file>
+```
sudo apt-get install libqt4-core libqt4-gui libqt4-dev
-</file>
+```
An excellent BLAS/LAPACK implementation is also recommended for speed. See
-our [[blas|BLAS recommendations]].
+our [BLAS recommendations](blas).
-==== B. Getting Torch sources ====
-{{anchor:install.sources}}
+<a name="install.sources"/>
+### B. Getting Torch sources ###
-Torch7 is being developed on [[http://github.com|github]].
+Torch7 is being developed on [github](http://github.com).
-<file>
+```
git clone git://github.com/andresy/torch.git
-</file>
+```
-==== C. Configuring Torch ====
-{{anchor:install.config}}
+<a name="install.config"/>
+### C. Configuring Torch ###
-We use ''CMake'' for configuring ''Torch''. We //highly// recommend to create
+We use `CMake` for configuring `Torch`. We _highly_ recommend creating
a dedicated build directory first. This eases cleaning up built objects,
-but also allow you to build Torch with //various configurations//
+but also allows you to build Torch with _various configurations_
(e.g. Release and Debug in two different build directories).
-<file>
+```
cd torch
mkdir build
cd build
cmake ..
-</file>
+```
-The ''..'' given to ''cmake'' indicates the directory where the
-sources are. We chose here to have a ''build'' directory inside
-''torch'', but it could be anywhere else. In that latter case, go
+The `..` given to `cmake` indicates the directory where the
+sources are. We chose here to have a `build` directory inside
+`torch`, but it could be anywhere else. In that latter case, go
to your build directory instead and then do:
-<file>
+```
cmake /path/to/torch/sources
-</file>
+```
CMake detects external libraries or tools necessary for Torch, and
produces Makefiles such that Torch is then easily compilable on your
platform. If you prefer the GUI version of CMake, you can replace
-''cmake'' by ''ccmake'' in the above command lines. In particular, it
-is //strongly encouraged// to use ''ccmake'' for finer configuration
+`cmake` by `ccmake` in the above command lines. In particular, it
+is _strongly encouraged_ to use `ccmake` for finer configuration
of Torch.
The most common Torch configuration step you might want to perform is
changing the installation path. By default, Torch will be installed in
-''/usr/local''. You will need super-user rights to perform that. If
+`/usr/local`. You will need super-user rights to perform that. If
you are not root on your computer, you can instead specify an
-install directory to ''CMake'' on the above ''cmake'' command:
+install directory to `CMake` on the above `cmake` command:
-<file>
+```
cmake .. -DCMAKE_INSTALL_PREFIX=/my/install/path
-</file>
+```
-Equivalently you can set the variable ''CMAKE_INSTALL_PREFIX'' if you
-use ''ccmake'' GUI. Please, see [[http://www.cmake.org|CMake
-documentation]] or //at least// [[#CMakeHints|some of our CMake
+Equivalently you can set the variable `CMAKE_INSTALL_PREFIX` if you
+use the `ccmake` GUI. Please see the [CMake documentation](http://www.cmake.org)
+or _at least_ [some of our CMake hints](#CMakeHints)
for more details on configuration.
-==== D. Compiling and installing ====
-{{anchor:install.compile}}
+<a name="install.compile"/>
+### D. Compiling and installing ###
If the configuration was successful, Makefiles should have appeared in
your build directory. You can then compile and install Torch with:
-<file>
+```
make install
-</file>
+```
-This last command might possibly be prefixed by ''sudo'' if you are
-installing Torch in ''/usr/local''.
+This last command might need to be prefixed by `sudo` if you are
+installing Torch in `/usr/local`.
-==== E. Running Torch ====
-{{anchor:install.run}}
+<a name="install.run"/>
+### E. Running Torch ###
-Now Torch should be installed in ''/usr/local'' or in
-''/my/install/path'' if you chose to use the ''CMAKE_INSTALL_PREFIX''
-when configuring with CMake. Lua executables (''torch-lua'',
-''torch-qlua'' and ''torch'') are found in the ''bin'' sub-directory of
+Now Torch should be installed in `/usr/local` or in
+`/my/install/path` if you chose to use the `CMAKE_INSTALL_PREFIX`
+when configuring with CMake. Lua executables (`torch-lua`,
+`torch-qlua` and `torch`) are found in the `bin` sub-directory of
these installation directories.
-<file>
+```
/usr/local/bin/torch-lua
Lua 5.1.4 Copyright (C) 1994-2008 Lua.org, PUC-Rio
> require 'torch'
@@ -178,18 +178,18 @@ Lua 5.1.4 Copyright (C) 1994-2008 Lua.org, PUC-Rio
[torch.Tensor of dimension 5]
>
-</file>
+```
-For convenience, you might want to add to your ''PATH'' the path to
-lua binaries. The executable ''torch-lua'' is a simple Lua interpreter
-(as provided on [[http://www.lua.org|Lua website]]), while ''torch-qlua''
+For convenience, you might want to add the path to the Lua
+binaries to your `PATH`. The executable `torch-lua` is a simple Lua interpreter
+(as provided on the [Lua website](http://www.lua.org)), while `torch-qlua`
has enhanced interactivity (like completion) and is able to handle
graphics and QT widgets.
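For example, assuming the default `/usr/local` install prefix:
```
export PATH=/usr/local/bin:$PATH
```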
-For best experience we suggest using the ''torch'' executable, which
+For the best experience, we suggest using the `torch` executable, which
preloads the most commonly used libraries into the global namespace.
-<file>
+```
/usr/local/bin/torch
Try the IDE: torch -ide
Type help() for more info
@@ -210,11 +210,11 @@ torch> =torch.randn(10,10)
torch>
-</file>
+```
-You can get more help about ''torch'':
+You can get more help about `torch`:
-<file>
+```
/usr/local/bin/torch -h
Torch7 Shell
@@ -232,24 +232,24 @@ Qt options:
-nographics|-ng disable all the graphical capabilities [false]
-ide enable IDE (graphical console) [false]
-onethread run lua in the main thread (might be safer) [false]
-</file>
+```
-===== MacOS X =====
+## MacOS X ##
-==== A. Requirements ====
+### A. Requirements ###
Torch compilation requires a number of standard packages described below:
- * **Mandatory:**
- * A ''C/C++'' compiler. [[http://clang.llvm.org|CLang]] is great. The [[http://gcc.gnu.org|GNU compiler]] or Intel compiler work fine.
- * [[http://www.cmake.org|CMake]] version 2.6 or later is required.
- * [[http://gnuplot.info|Gnuplot]], version ''4.4'' or later is recommended for best experience.
-
- * **Recommended:**
- * [[http://tiswww.case.edu/php/chet/readline/rltop.html|GNU Readline]]
- * [[http://git-scm.com/|Git]] to keep up-to-date sources
- * [[http://trolltech.com/products|QT 4.4]] or newer development libraries
- * BLAS. [[https://github.com/xianyi/OpenBLAS|OpenBLAS]] is recommended for that purpose on Intel computers.
- * LAPACK. [[https://github.com/xianyi/OpenBLAS|OpenBLAS]] is recommended for that purpose on Intel computers.
+ * __Mandatory:__
+ * A `C/C++` compiler. [CLang](http://clang.llvm.org) is great. The [GNU compiler](http://gcc.gnu.org) or Intel compiler work fine.
+ * [CMake](http://www.cmake.org) version 2.6 or later is required.
+ * [Gnuplot](http://gnuplot.info), version `4.4` or later is recommended for best experience.
+
+ * __Recommended:__
+ * [GNU Readline](http://tiswww.case.edu/php/chet/readline/rltop.html)
+ * [Git](http://git-scm.com/) to keep up-to-date sources
+ * [QT 4.4](http://trolltech.com/products) or newer development libraries
+ * BLAS. [OpenBLAS](https://github.com/xianyi/OpenBLAS) is recommended for that purpose on Intel computers.
+ * LAPACK. [OpenBLAS](https://github.com/xianyi/OpenBLAS) is recommended for that purpose on Intel computers.
Installation of gcc should be done by installing the
[[http://developer.apple.com/tools/xcode|the Apple developer
@@ -257,200 +257,200 @@ tools]]. These tools should also be available on your MacOS X
installation DVD.
CMake can be retrieved from
-[[http://www.cmake.org/HTML/Download.html|CMake website]] (you can
-take the **DMG** installer). However, we found it was as simple to use
-[[http://mxcl.github.com/homebrew/|Homebrew]], or [[http://www.macports.org/|MacPorts]]
+[CMake website](http://www.cmake.org/HTML/Download.html) (you can
+take the __DMG__ installer). However, we found it was as simple to use
+[Homebrew](http://mxcl.github.com/homebrew/), or [MacPorts](http://www.macports.org/)
which are necessary anyway for git and the Readline library. We recommend avoiding
-[[http://finkproject.org/|Fink]], which tends to be always
+[Fink](http://finkproject.org/), which tends to always be
outdated. Assuming you installed Homebrew, just do:
-<file>
+```
brew install readline
brew install cmake
brew install git
brew install gnuplot
-</file>
+```
For installing QT, one can use Homebrew, but it might take too long to
compile. Instead, you can
-[[http://trolltech.com/downloads/opensource/appdev/mac-os-cpp|download]]
-the binary **DMG** file available on [[http://trolltech.com|Trolltech
+[download](http://trolltech.com/downloads/opensource/appdev/mac-os-cpp)
+the binary __DMG__ file available on the [Trolltech website](http://trolltech.com)
and install it.
An excellent BLAS/LAPACK implementation is also recommended for speed. See
-our [[blas|BLAS recommendations]].
+our [BLAS recommendations](blas).
Last but not least, GCC >= 4.6 is *required* to enable OpenMP on MacOS X. This
is a bit crazy, but compiling against OpenMP with previous versions of GCC
will give you random segfaults and trap errors (a known issue on the web).
We strongly recommend installing GCC 4.6, to fully benefit from Torch's
fast numeric routines. A very simple way of doing so is to install the
-[[http://gcc.gnu.org/wiki/GFortranBinaries|GFortran]] libraries, which are
+[GFortran](http://gcc.gnu.org/wiki/GFortranBinaries) libraries, which are
packaged as a simple dmg, ready to install. That'll automatically install gcc
and g++. Once this is done, set your CC and CXX before building Torch:
-<file>
+```
export CC=/usr/local/gfortran/bin/gcc
export CXX=/usr/local/gfortran/bin/g++
-</file>
+```
-==== B. Getting Torch sources ====
+### B. Getting Torch sources ###
-Same as [[#install.sources|getting sources]] for linux.
+Same as [getting sources](#install.sources) for linux.
-==== C. Configuring Torch ====
+### C. Configuring Torch ###
-Same as [[#install.config|configuring]] for linux.
+Same as [configuring](#install.config) for linux.
-==== D. Compiling and Installing ====
+### D. Compiling and Installing ###
-Same as [[#install.compile|compiling]] for linux.
+Same as [compiling](#install.compile) for linux.
-==== E. Running Torch ====
+### E. Running Torch ###
-Same as [[#install.run|runnning]] for linux.
+Same as [running](#install.run) for linux.
-===== FreeBSD =====
-{{anchor:install.freebsd}}
+<a name="install.freebsd"/>
+## FreeBSD ##
-==== A. Requirements ====
+### A. Requirements ###
Torch compilation requires a number of standard packages described below:
- * **Mandatory:**
- * A ''C/C++'' compiler. [[http://clang.llvm.org|CLang]] is great. The [[http://gcc.gnu.org|GNU compiler]] or Intel compiler work fine.
- * [[http://www.cmake.org|CMake]] version 2.6 or later is required.
- * [[http://gnuplot.info|Gnuplot]], version ''4.4'' or later is recommended for best experience.
-
- * **Recommended:**
- * [[http://tiswww.case.edu/php/chet/readline/rltop.html|GNU Readline]]
- * [[http://git-scm.com/|Git]] to keep up-to-date sources
- * [[http://trolltech.com/products|QT 4.4]] or newer development libraries
- * BLAS. [[https://github.com/xianyi/OpenBLAS|OpenBLAS]] is recommended for that purpose on Intel computers.
- * LAPACK. [[https://github.com/xianyi/OpenBLAS|OpenBLAS]] is recommended for that purpose on Intel computers.
+ * __Mandatory:__
+ * A `C/C++` compiler. [CLang](http://clang.llvm.org) is great. The [GNU compiler](http://gcc.gnu.org) or Intel compiler work fine.
+ * [CMake](http://www.cmake.org) version 2.6 or later is required.
+ * [Gnuplot](http://gnuplot.info), version `4.4` or later is recommended for best experience.
+
+ * __Recommended:__
+ * [GNU Readline](http://tiswww.case.edu/php/chet/readline/rltop.html)
+ * [Git](http://git-scm.com/) to keep up-to-date sources
+ * [QT 4.4](http://trolltech.com/products) or newer development libraries
+ * BLAS. [OpenBLAS](https://github.com/xianyi/OpenBLAS) is recommended for that purpose on Intel computers.
+ * LAPACK. [OpenBLAS](https://github.com/xianyi/OpenBLAS) is recommended for that purpose on Intel computers.
GCC and CLang come with the FreeBSD install. However, only GCC 4.2 is installed by default (for licensing reasons).
We prefer to use CLang. If you want to stick with GCC, we recommend installing GCC 4.4 or GCC 4.6 instead of using
GCC 4.2 (poor performance on recent CPUs).
-<file>
+```
pkg_add -r gcc46
-</file>
+```
CMake reads CC and CXX variables. If you do not want to use the default compiler, just do
-<file>
+```
export CC=clang
export CXX=clang++
-</file>
+```
Additional packages can be easily installed with:
-<file>
+```
pkg_add -r readline
pkg_add -r cmake
pkg_add -r git
pkg_add -r gnuplot
-</file>
+```
-Note: on FreeBSD 9.0, it seems ''pdflib'' (a dependency of gnuplot) is not available as binary. Please,
+Note: on FreeBSD 9.0, it seems `pdflib` (a dependency of gnuplot) is not available as a binary. Please
install gnuplot from the ports tree instead:
-<file>
+```
cd /usr/ports/math/gnuplot
make install clean
-</file>
+```
-For installing QT, use also ''pkg_add -r qt4'', followed by ''pkg_add -r qt4-XXX'', where
-XXX is one of the components (or tools) listed on [[http://www.freebsd.org/doc/en/books/porters-handbook/using-qt.html|Qt FreeBSD page]].
+To install QT, use `pkg_add -r qt4`, followed by `pkg_add -r qt4-XXX`, where
+XXX is one of the components (or tools) listed on [Qt FreeBSD page](http://www.freebsd.org/doc/en/books/porters-handbook/using-qt.html).
Be sure to install all components and tools listed there.
An excellent BLAS/LAPACK implementation is also recommended for speed. See
-our [[blas|BLAS recommendations]].
+our [BLAS recommendations](blas).
-==== B. Getting Torch sources ====
+### B. Getting Torch sources ###
-Same as [[#install.sources|getting sources]] for linux.
+Same as [getting sources](#install.sources) for linux.
-==== C. Configuring Torch ====
+### C. Configuring Torch ###
-Same as [[#install.config|configuring]] for linux. Note that dynamic RPATH (related to ''$ORIGIN'') do not work properly
-on my FreeBSD 9. You can deactivate this with the ''WITH_DYNAMIC_RPATH'' option.
-<file>
+Same as [configuring](#install.config) for linux. Note that dynamic RPATH (related to `$ORIGIN`) does not work properly
+on FreeBSD 9. You can deactivate it with the `WITH_DYNAMIC_RPATH` option:
+```
cmake .. -DCMAKE_INSTALL_PREFIX=/my/install/path -DWITH_DYNAMIC_RPATH=OFF
-</file>
+```
-==== D. Compiling and Installing ====
+### D. Compiling and Installing ###
-Same as [[#install.compile|compiling]] for linux.
+Same as [compiling](#install.compile) for linux.
-==== E. Running Torch ====
+### E. Running Torch ###
-Same as [[#install.run|runnning]] for linux.
+Same as [running](#install.run) for linux.
-===== Cygwin =====
+## Cygwin ##
-//We do not recommend// Cygwin installation. Cygwin is pretty slow, and we
+_We do not recommend_ Cygwin installation. Cygwin is pretty slow, and we
could not manage to make QT 4.4 work under Cygwin. Instead, prefer a
-[[#Windows|native windows]] installation.
+[native Windows](#Windows) installation.
-===== Windows =====
-{{anchor:Windows}}
+<a name="Windows"/>
+## Windows ##
-//** Torch7 is not yet Windows compatible, coming soon **//
+__Torch7 is not yet Windows compatible, coming soon__
-===== CMake hints =====
-{{anchor:CMakeHints}}
+<a name="CMakeHints"/>
+## CMake hints ##
-CMake is well documented on [[http://www.cmake.org|http://www.cmake.org]].
+CMake is well documented on [http://www.cmake.org](http://www.cmake.org).
-====CMake and CLang====
+### CMake and CLang ###
-If you like to use [[http://clang.llvm.org|CLang]] for compiling Torch7, assuming a proper
+If you would like to use [CLang](http://clang.llvm.org) for compiling Torch7, assuming a proper
CLang installation, you only have to do
-<file>
+```
export CC=clang
export CXX=clang++
-</file>
+```
before calling the cmake command line.
-====CMake GUI====
+### CMake GUI ###
Under Windows, CMake comes with a GUI by default. Under Unix systems it is
-quite handy to use the //text GUI// available through ''ccmake''.
-''ccmake'' works in the same way than ''cmake'': go in your build directory and
-<file>
+quite handy to use the _text GUI_ available through `ccmake`.
+`ccmake` works in the same way as `cmake`: go into your build directory and
+```
ccmake /path/to/torch/source
-</file>
+```
-Windows and Unix GUI works in the same way: you ''configure'', //possibly several times//,
-until CMake has detected everything and proposes to ''generate'' the configuration.
+The Windows and Unix GUIs work in the same way: you `configure`, _possibly several times_,
+until CMake has detected everything and proposes to `generate` the configuration.
After each configuration step, you can modify CMake variables to suit your needs.
-====CMake variables====
+### CMake variables ###
-CMake is highly configurable thanks to //variables// you can set when
+CMake is highly configurable thanks to _variables_ you can set when
executing it. It is really easy to change these variables with the CMake GUI. If you want
to stick with the command line you can also change a variable by doing:
-<file>
+```
cmake /path/to/torch/source -DMY_VARIABLE=MY_VALUE
-</file>
-where ''MY_VARIABLE'' is the name of the variable you want to set and
-''MY_VALUE'' is its corresponding value.
+```
+where `MY_VARIABLE` is the name of the variable you want to set and
+`MY_VALUE` is its corresponding value.
-===Interesting standard CMake variables===
+#### Interesting standard CMake variables ####
- * ''CMAKE_INSTALL_PREFIX'': directory where Torch is going to be installed
- * ''CMAKE_BUILD_TYPE'': ''Release'' for optimized compilation, ''Debug'' for debug compilation.
- * ''CMAKE_C_FLAGS'': add here the flags you want to pass to the C compiler (like ''-Wall'' for e.g.)
+ * `CMAKE_INSTALL_PREFIX`: directory where Torch is going to be installed
+ * `CMAKE_BUILD_TYPE`: `Release` for optimized compilation, `Debug` for debug compilation.
+ * `CMAKE_C_FLAGS`: add here the flags you want to pass to the C compiler (e.g. `-Wall`)
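As an illustration, a configuration combining these standard variables might look like the following (the values are just examples):
```
cmake .. -DCMAKE_INSTALL_PREFIX=/my/install/path \
         -DCMAKE_BUILD_TYPE=Release \
         -DCMAKE_C_FLAGS="-Wall"
```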
-=== Notable Torch7 CMake variables ===
+#### Notable Torch7 CMake variables ####
- * ''WITH_BLAS'': specify which BLAS you want to use (if you have several on your computers). Can be mkl/open/goto/acml/atlas/accelerate/veclib/generic.
- * ''WITH_LUA_JIT'': say to CMake to compile Torch7 against LuaJIT instead of Lua. (default is OFF)
- * ''WITH_QTLUA'': compile QtLua if Qt is found (default is ON)
- * ''WITH_QTLUA_IDE'': compile QtLua IDE if Qt is found (default is ON)
- * ''WITH_RPATH'': use RPATH such that you do not need to add Torch7 install library path in LD_LIBRARY_PATH. (default is ON)
- * ''WITH_DYNAMIC_RPATH'': if used together with WITH_RPATH, will make library paths relative to the Torch7 executable. If you move the install directory, things will still work. This flag does not work on FreeBSD. (default is ON).
+ * `WITH_BLAS`: specify which BLAS you want to use (if you have several on your computer). Can be mkl/open/goto/acml/atlas/accelerate/veclib/generic.
+ * `WITH_LUA_JIT`: tell CMake to compile Torch7 against LuaJIT instead of Lua (default is OFF).
+ * `WITH_QTLUA`: compile QtLua if Qt is found (default is ON)
+ * `WITH_QTLUA_IDE`: compile the QtLua IDE if Qt is found (default is ON)
+ * `WITH_RPATH`: use RPATH so that you do not need to add the Torch7 install library path to LD_LIBRARY_PATH (default is ON).
+ * `WITH_DYNAMIC_RPATH`: if used together with WITH_RPATH, makes library paths relative to the Torch7 executable. If you move the install directory, things will still work. This flag does not work on FreeBSD (default is ON).
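For instance, a build against LuaJIT and OpenBLAS, without the QtLua IDE, could be configured with:
```
cmake .. -DWITH_LUA_JIT=ON -DWITH_BLAS=open -DWITH_QTLUA_IDE=OFF
```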
-====CMake caches everything====
+### CMake caches everything ###
As soon as CMake has performed a test to detect an external library, it saves
the result of this test in a cache and will not run the test again.
@@ -459,31 +459,31 @@ If you forgot to install a library (like QT or Readline), and install it
after having performed a CMake configuration, it will not be used by Torch
when compiling.
-//In doubt//, if you changed, updated, added some libraries that should be used by Torch, you should
-//erase your build directory and perform CMake configuration again//.
+_When in doubt_, if you changed, updated, or added libraries that should be used by Torch, you should
+_erase your build directory and perform the CMake configuration again_.
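A minimal way of doing so, assuming the dedicated `build` directory suggested above:
```
cd torch
rm -rf build
mkdir build
cd build
cmake ..
```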
-===== Development Torch packages =====
-{{anchor:DevPackages}}
+<a name="DevPackages"/>
+## Development Torch packages ##
-If you want to develop your own package, you can put it in the ''dev''
-sub-directory. Packages in ''dev'' are all compiled in the same way that the
-ones in ''packages'' sub-directory. We prefer to have this directory to make a
+If you want to develop your own package, you can put it in the `dev`
+sub-directory. Packages in `dev` are all compiled in the same way as the
+ones in the `packages` sub-directory. We prefer to have this directory to make a
clear difference between official packages and development packages.
-Alternatively, you can use [[#PackageManager|Torch package manager]]
+Alternatively, you can use [Torch package manager](#PackageManager)
to build and distribute your packages.
-===== The Torch Package Management System =====
-{{anchor:PackageManager}}
+<a name="PackageManager"/>
+## The Torch Package Management System ##
Torch7 has a built-in package management system that makes it very easy
for anyone to get extra (experimental) packages, and to create and distribute
their own.
-Calling ''torch-pkg'' without arguments will give you some help:
+Calling `torch-pkg` without arguments will give you some help:
-<file>
+```
/usr/local/bin/torch-pkg
Torch7 Package Manager
@@ -512,12 +512,12 @@ Options:
-l|-local local install
-n|-nodeps do not install dependencies (when installing)
-d|-dry dry run
-</file>
+```
It's fairly self-explanatory. You can easily get a list of the available
packages:
-<file>
+```
/usr/local/bin/torch-pkg list
--> retrieving package lists from servers
@@ -535,49 +535,49 @@ packages:
hosted at: https://github.com/koraykv/optim
...
-</file>
+```
To install a new package, simply do:
-<file>
+```
/usr/local/bin/torch-pkg install pkgname
-</file>
+```
The sources of the packages are downloaded and kept in a hidden
directory in your home:
-<file>
+```
torch-pkg install image
ls ~/.torch/torch-pkg/image/
-</file>
+```
If you just want to get the sources of a package, without
installing it, you can get it like this:
-<file>
+```
/usr/local/bin/torch-pkg download pkgname
-</file>
+```
And then build it and install it:
-<file>
+```
cd pkgname
/usr/local/bin/torch-pkg build
/usr/local/bin/torch-pkg deploy
-</file>
+```
If you need to distribute your own packages, you just have
to create a package file, which contains one entry per package,
and then make it available online. Users can then easily add
that file to their repository by doing:
-<file>
+```
/usr/local/bin/torch-pkg add http://url/to/config
-</file>
+```
The config typically looks like:
-<file>
+```
pkg = pkg or {}
pkg.image = {
@@ -600,13 +600,14 @@ pkg.parallel = {
dependencies = {'sys'},
commit = 'newpack'
}
-</file>
+```
-====== Installing from binaries ======
-{{anchor:install.binary}}
+<a name="install.binary"/>
+# Installing from binaries #
-** This section is not applicable now as we have not produced binaries yet. **
+__This section is not applicable now as we have not produced binaries yet.__
+
+__Please [install from sources](#install.sources).__
-** Please [[#install.sources|install from sources]]. **
diff --git a/dokinstall/blas.dok b/docinstall/blas.md
index da34a77..2d34df4 100644
--- a/dokinstall/blas.dok
+++ b/docinstall/blas.md
@@ -1,169 +1,170 @@
-====== BLAS and LAPACK ======
+# BLAS and LAPACK #
There are multiple BLAS and LAPACK libraries out there. Most Linux
distributions come with pre-compiled BLAS or ATLAS libraries.
-**We strongly discourage you to use those libraries**. According to our experience,
+__We strongly discourage you from using those libraries__. According to our experience,
these libraries are slow. Things have been improved with recent ATLAS
development versions, but they still have a hard time catching up with Intel MKL
or GotoBLAS/OpenBLAS implementations.
We found that on Intel platforms,
-[[http://www.tacc.utexas.edu/tacc-projects/gotoblas2|GotoBLAS]]/[[https://github.com/xianyi/OpenBLAS|OpenBLAS]]
-or [[www.intel.com/software/products/mkl|Intel MKL]] implementations were
+[GotoBLAS](http://www.tacc.utexas.edu/tacc-projects/gotoblas2)/[OpenBLAS](https://github.com/xianyi/OpenBLAS)
+or [Intel MKL](http://www.intel.com/software/products/mkl) implementations were
the fastest. The advantage of GotoBLAS and OpenBLAS is that they are
distributed under a BSD-like license. The choice is yours.
-===== Installing OpenBLAS =====
+## Installing OpenBLAS ##
-[[http://www.tacc.utexas.edu/tacc-projects/gotoblas2|GotoBLAS]] has been
+[GotoBLAS](http://www.tacc.utexas.edu/tacc-projects/gotoblas2) has been
extremely well hand-optimized by Kazushige Goto. The project has been
released under a BSD-like license. Unfortunately, it is not maintained
anymore (at this time), but several forks have been released later. Our preference
-goes to [[https://github.com/xianyi/OpenBLAS|OpenBLAS]].
+goes to [OpenBLAS](https://github.com/xianyi/OpenBLAS).
We provide below simple instructions to install OpenBLAS.
First get the latest OpenBLAS stable code:
-<file>
+```
git clone git://github.com/xianyi/OpenBLAS.git
-</file>
+```
-You will need a Fortran compiler. On most Linux distributions, ''gfortran'' is available.
+You will need a Fortran compiler. On most Linux distributions, `gfortran` is available.
For example, on Debian:
-<file>
+```
apt-get install gfortran
-</file>
+```
If you prefer, you can also install GCC 4.6, which also supports the Fortran language.
On FreeBSD, gfortran is not available, so please use GCC 4.6.
-<file>
+```
pkg_add -r gcc46
-</file>
+```
On MacOS X, you should install one gfortran package provided on
-[[http://gcc.gnu.org/wiki/GFortranBinaries|this GCC webpage]].
+[this GCC webpage](http://gcc.gnu.org/wiki/GFortranBinaries).
You can now go into the OpenBLAS directory, and just do:
-<file>
+```
make NO_AFFINITY=1 USE_OPENMP=1
-</file>
+```
OpenBLAS uses processor affinity to go faster. However, in general, on a
computer shared between several users, this causes processes to fight for
-the same CPU. We thus disable it here with the ''NO_AFFINITY'' flag. We
-also use the ''USE_OPENMP'' flag, such that OpenBLAS uses OpenMP and not
+the same CPU. We thus disable it here with the `NO_AFFINITY` flag. We
+also use the `USE_OPENMP` flag, such that OpenBLAS uses OpenMP and not
pthreads. This is important to avoid some confusion in the number of
threads, as Torch7 uses OpenMP. Read OpenBLAS manual for more details.
-You can use ''CC'' and ''FC'' variables to control the C and Fortran compilers.
+You can use `CC` and `FC` variables to control the C and Fortran compilers.
On FreeBSD, use `gmake` instead of `make`. You also have to specify the correct MD5 sum program.
You will probably want to use the following command line:
-<file>
+```
gmake NO_AFFINITY=1 USE_OPENMP=1 CC=gcc46 FC=gcc46 MD5SUM='md5 -q'
-</file>
+```
On MacOS X, you will also have to specify the correct MD5SUM program:
-<file>
+```
make NO_AFFINITY=1 USE_OPENMP=1 MD5SUM='md5 -q'
-</file>
+```
Be sure to specify MD5SUM correctly, otherwise OpenBLAS might not compile LAPACK properly.
At the end of the compilation, you might want to do a
-<file>
+```
make PREFIX=/your_installation_path/ install
-</file>
+```
to install OpenBLAS at a specific location. You might also want to keep it where you compiled it.
-Note that on MacOS X, the generated **dynamic** (''.dylib'') library does not contain LAPACK. Simply remove
-the dylib (keeping the archive ''.a'') such that LAPACK is correctly detected.
+Note that on MacOS X, the generated __dynamic__ (`.dylib`) library does not contain LAPACK. Simply remove
+the dylib (keeping the archive `.a`) such that LAPACK is correctly detected.
-==== CMake detection ====
+### CMake detection ###
Make sure that CMake can find your OpenBLAS library. This can be done with
-<file>
+```
export CMAKE_LIBRARY_PATH=/your_installation_path/lib
-</file>
-before starting cmake command line. On some platforms, the ''gfortran''
+```
+before running the cmake command line. On some platforms, the `gfortran`
library might also not be found. In this case, add the path to the
-''gfortran'' library into ''CMAKE_LIBRARY_PATH''.
+`gfortran` library into `CMAKE_LIBRARY_PATH`.
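For example (the gfortran path below is an assumption, adapt it to wherever libgfortran actually lives on your system):
```
export CMAKE_LIBRARY_PATH=/your_installation_path/lib:/usr/lib/gcc/x86_64-linux-gnu
```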
-===== Installing Intel MKL =====
+## Installing Intel MKL ##
-[[www.intel.com/software/products/mkl|Intel MKL]] is a closed-source
-library //sold// by Intel. Follow Intel instructions to unpack MKL. Then make
-sure the libraries relevant for your system (e.g. ''em64t'' if you are on a
-64 bits distribution) are available in your ''LD_LIBRARY_PATH''. Both BLAS
+[Intel MKL](http://www.intel.com/software/products/mkl) is a closed-source
+library _sold_ by Intel. Follow Intel instructions to unpack MKL. Then make
+sure the libraries relevant for your system (e.g. `em64t` if you are on a
+64-bit distribution) are available in your `LD_LIBRARY_PATH`. Both BLAS
and LAPACK interfaces are readily included in MKL.
-==== CMake detection ====
+### CMake detection ###
Make sure that CMake can find your libraries. This can be done with something like
-<file>
+```
export CMAKE_INCLUDE_PATH=/path/to/mkl/include
export CMAKE_LIBRARY_PATH=/path/to/mkl/lib/intel64:/path/to/mkl/compiler/lib/intel64
export LD_LIBRARY_PATH=$CMAKE_LIBRARY_PATH:$LD_LIBRARY_PATH
-</file>
+```
before running the cmake command line.
-Of course, you have to adapt ''/path/to/mkl'' and ''/path/to/mkl/compiler'' to your installation setup. In the above
-case, we also chose the ''intel64'' libraries, which might not be what you need.
+Of course, you have to adapt `/path/to/mkl` and `/path/to/mkl/compiler` to your installation setup. In the above
+case, we also chose the `intel64` libraries, which might not be what you need.
A common mistake is to forget the path to the Intel compiler libraries. CMake
will not be able to detect threaded libraries in that case.
-===== CMake and BLAS/LAPACK =====
+## CMake and BLAS/LAPACK ##
As mentioned above, you should make sure CMake can find your
libraries. Carefully watch for libraries found (or not found) in the output
generated by cmake.
For example, if you see something like:
-<file>
+```
-- Checking for [openblas - gfortran]
-- Library openblas: /Users/ronan/open/lib/libopenblas.dylib
-- Library gfortran: BLAS_gfortran_LIBRARY-NOTFOUND
-</file>
+```
It means CMake found the OpenBLAS library, but could not make it work
properly because it did not find the required gfortran library. Make sure
that CMake can find all the required libraries through CMAKE_LIBRARY_PATH.
If your libraries are present in LD_LIBRARY_PATH, it should be fine too.
The locations to search for are generally as follows.
-<file>
+```
/usr/lib/gcc/x86_64-linux-gnu/
/usr/lib/gcc/x86_64-redhat-linux/4.4.4/
-</file>
+```
These are a bit cryptic, but look around and find the path that contains libgfortran.so. Then use
-<file>
+```
export CMAKE_LIBRARY_PATH=...
-</file>
+```
before calling cmake to build Torch; this makes sure that OpenBLAS will be found.
Note that CMake will try to detect various BLAS/LAPACK libraries. If you have several libraries
installed on your computer (say Intel MKL and OpenBLAS), or if you want to avoid all these checks,
you might want to select the one you want to use with:
-<file>
+```
cd torch7/build
cmake .. -DWITH_BLAS=open
-</file>
-Valid options for WITH_BLAS are ''mkl'' (Intel MKL), ''open'' (OpenBLAS),
-''goto'' (GotoBlas2), ''acml'' (AMD ACML), ''atlas'' (ATLAS),
-''accelerate'' (Accelerate framework on MacOS X), ''vecLib'' (vecLib
-framework on MacOS X) or ''generic''.
+```
+Valid options for WITH_BLAS are `mkl` (Intel MKL), `open` (OpenBLAS),
+`goto` (GotoBlas2), `acml` (AMD ACML), `atlas` (ATLAS),
+`accelerate` (Accelerate framework on MacOS X), `vecLib` (vecLib
+framework on MacOS X) or `generic`.
-Note again that the best choices are probably ''open'' or ''mkl''. For
+Note again that the best choices are probably `open` or `mkl`. For
consistency reasons, CMake will try to find the corresponding LAPACK
package (and does not allow mixing up different BLAS/LAPACK versions).
-===== GotoBLAS/OpenBLAS and MKL threads =====
+## GotoBLAS/OpenBLAS and MKL threads ##
GotoBLAS/OpenBLAS and MKL are multi-threaded libraries.
With MKL, the number of threads can be controlled by
-<file>
+```
export OMP_NUM_THREADS=N
-</file>
+```
where N is an integer.
Beware that running small problems on a large number of threads reduces
performance! Multi-threading should be enabled only for large-scale
computations.
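For example, to run a script on 4 threads (`myscript.lua` is a placeholder for your own script):
```
export OMP_NUM_THREADS=4
/usr/local/bin/torch myscript.lua
```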
+
diff --git a/dokinstall/installqtdebian.dok b/docinstall/installqtdebian.md
index a0af403..ea4e6c5 100644
--- a/dokinstall/installqtdebian.dok
+++ b/docinstall/installqtdebian.md
@@ -1,11 +1,11 @@
-====== Appendix: Installing QT 4.4 ======
+# Appendix: Installing QT 4.4 #
The version 4.4 of QT might not be available on old distributions. Ubuntu
provides QT 4.4 backports for Hardy but not for Gutsy. Debian testing and
unstable contain QT 4.4, but Debian stable does not. If your distribution
does not provide QT 4.4, you will have to compile it yourself. This is easily
done on Debian and Ubuntu with the following:
-<file>
+```
# You will need these packages
sudo apt-get install wget fakeroot dpkg-dev
mkdir qt
@@ -16,16 +16,17 @@ wget http://ml.nec-labs.com/download/qt/qt4-x11_4.4.0-4.diff.gz
dpkg-source -x qt4-x11_4.4.0-4.dsc
cd qt4-x11-4.4.0
dpkg-buildpackage -rfakeroot
-</file>
-The command ''dpkg-buildpackage'' might complain for some unmet
-dependencies. Install them with ''apt-get install'', and then execute the
+```
+The command `dpkg-buildpackage` might complain about some unmet
+dependencies. Install them with `apt-get install`, and then execute the
command again. The compilation takes around two hours on a recent computer.
You can then install all the packages it created with:
-<file>
+```
cd ..
sudo dpkg -i *.deb
-</file>
+```
For distributions that are not Debian-based, please refer to their documentation for compiling
packages. You might also be able to find QT 4.4 binary packages on the web.
-We provide ourselves binary packages for [[http://ml.nec-labs.com/download/qt/binaries/|some architecture]].
+We ourselves provide binary packages for [some architectures](http://ml.nec-labs.com/download/qt/binaries/).
+
diff --git a/dokinstall/installqtwindows.dok b/docinstall/installqtwindows.md
index 083a285..7217ba5 100644
--- a/dokinstall/installqtwindows.dok
+++ b/docinstall/installqtwindows.md
@@ -1,19 +1,20 @@
-====== Appendix: Install QT 4.4 under Windows ======
+# Appendix: Install QT 4.4 under Windows #
-Download [[http://trolltech.com/downloads/opensource/appdev/windows-cpp|QT 4.4 sources for Windows]]
+Download [QT 4.4 sources for Windows](http://trolltech.com/downloads/opensource/appdev/windows-cpp)
on Trolltech website.
-Unzip then and move the directory to ''C:\Qt''.
+Unzip the archive and move the directory to `C:\Qt`.
-Set the system ''PATH'' such that it contains ''C:\Qt\bin''. It can be done by
+Set the system `PATH` such that it contains `C:\Qt\bin`. It can be done by
opening the Control Panel, then System, then going to Advanced and then
Environment Variables. Make sure you do that before compilation. The Torch
configuration procedure also needs it for finding QT.
Assuming you have Microsoft Visual Studio, you can then do:
-<file>
+```
cd c:\Qt
configure -release
nmake
-</file>
+```
Given the size of QT, allow a few hours for compilation.
+
diff --git a/doclua/README.md b/doclua/README.md
new file mode 100644
index 0000000..a2b7b96
--- /dev/null
+++ b/doclua/README.md
@@ -0,0 +1,84 @@
+# The Lua Language #
+
+`Lua` is a __powerful__, __fast__, __light-weight__, embeddable _scripting language_.
+`Lua` combines simple procedural syntax with powerful data description
+constructs based on associative arrays and extensible semantics. `Lua` is
+dynamically typed, runs by interpreting bytecode for a register-based
+virtual machine, and has automatic memory management with incremental
+garbage collection, making it ideal for configuration, scripting, and rapid
+prototyping. 'Lua' means 'moon' in Portuguese and is pronounced __LOO-ah__.
+
+Please visit [http://www.lua.org](http://www.lua.org) for more
+information, or have a look at the [Lua Reference Manual](LuaManual).
+
+## Why choose Lua? ##
+
+### Lua is a proven and robust language ###
+
+Lua has been used in
+[many industrial applications](http://www.lua.org/uses.html) (e.g.,
+[Adobe's Photoshop Lightroom](http://since1968.com/article/190/mark-hamburg-interview-adobe-photoshop-lightroom-part-2-of-2)),
+with an emphasis on embedded systems and games. Lua
+is currently the leading scripting language in games. Lua has a solid
+[reference manual](LuaManual) and there are
+[several books about it](http://www.lua.org/docs.html#books). Several
+[versions](http://www.lua.org/versions.html) of Lua have been released
+and used in real applications since its creation in 1993.
+
+### Lua is fast ###
+
+Lua has a deserved reputation for performance. To
+claim to be "as fast as Lua" is an aspiration of other scripting
+languages. Several benchmarks show Lua as the fastest language in the realm
+of interpreted scripting languages. Lua is fast not only in fine-tuned
+benchmark programs, but in real life too. A substantial fraction of large
+applications have been written in Lua.
+
+### Lua is portable ###
+
+Lua is [distributed](http://www.lua.org/download.html) in a small
+package that builds out-of-the-box in all platforms that have an `ANSI/ISO C`
+compiler. Lua runs on all flavors of `Unix` and `Windows`, and also on mobile
+devices (such as handheld computers and cell phones that use `BREW`, `Symbian`,
+`Pocket PC`, etc.) and embedded microprocessors (such as `ARM` and `Rabbit`) for
+applications like `Lego MindStorms`.
+
+### Lua is embeddable ###
+
+Lua is a fast language engine with small footprint that you can embed into
+your application. Lua has a simple and well documented `API` that allows
+strong integration with code written in other languages. It is easy to
+extend Lua with libraries written in other languages. It is also easy to
+extend programs written in other languages with Lua. Lua has been used to
+extend programs written not only in `C` and `C++`, but also in `Java`, `C#`,
+`Smalltalk`, `Fortran`, `Ada`, and even in other scripting languages,
+such as
+`Perl` and `Ruby`.
+
+### Lua is simple and powerful ###
+
+A fundamental concept in the design of Lua is to provide _meta-mechanisms_
+for implementing features, instead of providing a host of features directly
+in the language. For example, although Lua is not a pure object-oriented
+language, it does provide meta-mechanisms for implementing classes and
+inheritance. Lua's meta-mechanisms bring an economy of concepts and keep
+the language small, while allowing the semantics to be extended in
+unconventional ways.
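As a small illustration of such a meta-mechanism, here is a minimal sketch of a "class" built with plain tables and metatables (standard Lua, nothing Torch-specific is assumed):
```lua
-- a minimal class built from a table used as a metatable
local Point = {}
Point.__index = Point

function Point.new(x, y)
  -- instances are plain tables whose metatable is Point
  return setmetatable({x = x, y = y}, Point)
end

function Point:norm()
  return math.sqrt(self.x * self.x + self.y * self.y)
end

local p = Point.new(3, 4)
print(p:norm())  -- prints 5
```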
+
+### Lua is free ###
+
+Lua is free software, distributed under a
+[liberal license](http://www.lua.org/license.html) (the well-known `MIT`
+license). It can be used for both academic and commercial purposes at
+absolutely no cost. Just [download](http://www.lua.org/download.html) it and use it.
+
+### Where does Lua come from? ###
+
+Lua is designed and implemented by a team at
+[PUC-Rio](http://www.puc-rio.br/), the Pontifical Catholic University of
+Rio de Janeiro in Brazil. Lua was born and raised at
+[Tecgraf](http://www.tecgraf.puc-rio.br/), the Computer Graphics
+Technology Group of PUC-Rio, and is now housed at
+[Lablua](http://www.lua.inf.puc-rio.br/). Both Tecgraf and Lablua are
+laboratories of the [Department of Computer Science](http://www.inf.puc-rio.br/).
+
diff --git a/doktutorial/index.dok b/doctutorial/README.md
index eefc3c3..ce0a144 100644
--- a/doktutorial/index.dok
+++ b/doctutorial/README.md
@@ -1,5 +1,5 @@
-====== Torch Tutorial ======
-{{anchor:torch.tutorial}}
+<a name="torch.tutorial"/>
+# Torch Tutorial #
So you are wondering how to work with Torch?
This is a little tutorial that should help get you started.
@@ -10,27 +10,27 @@ vectors, matrices and tensors and how to build and train a basic
neural network. For anything else, you should know how to access the
html help and read about how to do it.
-===== What is Torch? =====
+## What is Torch? ##
Torch7 provides a Matlab-like environment for state-of-the-art machine
learning algorithms. It is easy to use and provides a very efficient
implementation, thanks to an easy and fast scripting language (Lua) and
an underlying C/C++ implementation. You can read more about Lua
-[[http://www.lua.org|here]].
+[here](http://www.lua.org).
-===== Installation =====
+## Installation ##
First before you can do anything, you need to install Torch7 on your
machine. That is not described in detail here, but is instead
-described in the [[..:install:index|installation help]].
+described in the [installation help](..:install:index).
-===== Checking your installation works and requiring packages =====
+## Checking your installation works and requiring packages ##
If you have got this far, hopefully your Torch installation works. A simple
way to make sure it does is to start Lua from the shell command line,
and then try to start Torch:
-<file lua>
+```lua
$ torch
Try the IDE: torch -ide
Type help() for more info
@@ -41,29 +41,29 @@ t7> x = torch.Tensor()
t7> print(x)
[torch.DoubleTensor with no dimension]
-</file>
+```
-You might have to specify the exact path of the ''torch'' executable
+You might have to specify the exact path of the `torch` executable
if you installed Torch in a non-standard path.
In this example, we checked Torch was working by creating an empty
-[[..:torch:tensor|Tensor]] and printing it on the screen. The Tensor
+[Tensor](..:torch:tensor) and printing it on the screen. The Tensor
is the main tool in Torch, and is used to represent vectors, matrices
or higher-dimensional objects (tensors).
-''torch'' only preloads the basic parts of torch (including
+`torch` only preloads the basic parts of torch (including
Tensors). To see the list of all packages distributed with Torch7,
-click [[..:index|here]].
+click [here](..:index).
-===== Getting Help =====
+## Getting Help ##
There are two main ways of getting help in Torch7. One way is of course
the HTML-formatted help. However, another and easier method is to use
-inline help in torch interpreter. The ''torch'' executable also
+inline help in the torch interpreter. The `torch` executable also
integrates this capability. Help about any function can be accessed by
-calling the ''help()'' function.
+calling the `help()` function.
-<file lua>
+```lua
t7> help(torch.rand)
@@ -75,15 +75,15 @@ random numbers from a uniform distribution on the interval (0,1).
distribution on the interval (0,1).
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-</file>
+```
An even more intuitive method is to use tab completion. Whenever any
-input is entered at the ''torch'' prompt, one can eneter two
-consecutive ''TAB'' characters (''double TAB'') to get the syntax
-completion. Moreover entering ''double TAB'' at an open paranthesis
+input is entered at the `torch` prompt, one can enter two
+consecutive `TAB` characters (`double TAB`) to get syntax
+completion. Moreover, entering `double TAB` at an open parenthesis
also causes the help for that particular function to be printed.
-<file lua>
+```lua
t7> torch.randn( -- enter double TAB after (
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
@@ -98,21 +98,21 @@ distribution with mean zero and variance one.
/ \
t7> torch.randn(
-</file>
+```
-===== Lua Basics =====
+## Lua Basics ##
-Torch is entirely built around [[http://www.lua.org/|Lua]], so the first thing you have to
-know is get some basic knowledge about the language. The [[http://www.lua.org/docs.html|online book]]
+Torch is entirely built around [Lua](http://www.lua.org/), so the first thing you have
+to do is get some basic knowledge of the language. The [online book](http://www.lua.org/docs.html)
is great for that.
Here I'll just summarize a couple of very basic things to get you started.
-==== Variables ====
+### Variables ###
Creating variables is straightforward; Lua is a dynamically typed language. Printing variables from the prompt is a bit misleading: you have to add the = sign before the variable:
-<file lua>
+```lua
t7> a = 10
t7> print(a)
10
@@ -121,9 +121,9 @@ t7> = a
t7> b = a + 1
t7> = b
11
-</file>
+```
-==== Lua's universal data structure: the table ====
+### Lua's universal data structure: the table ###
The best thing about Lua is its consistency and compactness. The whole language relies on a single data structure, the table, which will allow you to construct the most complex programs, with style!
@@ -135,7 +135,7 @@ The best thing about Lua is its consistency, and compactness. The whole language
You already know enough about tables, let's hack around:
-<file lua>
+```lua
t7> t = {}
t7> =t
{}
@@ -152,28 +152,28 @@ t7> = {1,2,3,'mixed types',true}
[4] = string : "mixed types"
[5] = true}
t7> t = {4,3,2,1}
-</file>
+```
In the example above, we've shown how to use a table as a linear array. Lua is one-based, like Matlab, so if we try to get the length of this last array we created, it'll be equal to the number of elements we've put in:
-<file lua>
+```lua
t7> =#t
4
-</file>
+```
Ok, let's see about hash-tables now:
-<file lua>
+```lua
t7> h = {firstname='Paul', lastname='Eluard', age='117'}
t7> =h
{[firstname] = string : "Paul"
[lastname] = string : "Eluard"
[age] = string : "117"}
-</file>
+```
So now mixing arrays and hash-tables is easy:
-<file lua>
+```lua
t7> h = {firstname='Paul', lastname='Eluard', age='117', 1, 2, 3}
t7> =h
{[1] = 1
@@ -183,14 +183,14 @@ t7> =h
[lastname] = string : "Eluard"
[age] = string : "117"}
t7>
-</file>
+```
Easy, right?
So we've seen a couple of basic types already: strings, numbers, tables, booleans (true/false). There's one last type in Lua: the function. Functions
are first-class citizens in Lua, which means that they can be treated as regular variables. This is great, because it's the reason why we can construct very powerful data structures (such as objects) with tables:
-<file lua>
+```lua
t7> h = {firstname='Paul', lastname='Eluard', age='117',
. > print=function(self)
. > print(self.firstname .. ' ' .. self.lastname
@@ -203,13 +203,13 @@ t7> =h
[print] = function: 0x7f885d00c430
[lastname] = string : "Eluard"
[age] = string : "117"}
-</file>
+```
In the example above, we're basically storing a function at the key (hash) print. It's fairly straightforward; note that the function takes one argument, named self, which is assumed to be the object itself. The function simply concatenates the fields of the table self, and prints the whole string.
One important note: accessing fields of a table is either done using square brackets [], or the . operator. The square brackets are more general: they allow the use of arbitrary strings. In the following, we now try to access the elements of h, that we just created:
-<file lua>
+```lua
t7> h. + TAB
h.age h.firstname h.lastname h.print(
@@ -221,17 +221,17 @@ Paul Eluard (age: 117)
t7> h:print()
Paul Eluard (age: 117)
-</file>
+```
On the first line we type h. and then use TAB to complete and automatically explore the symbols present in h. We then print h.print, and confirm that it is indeed a function.
At the next line, we call the function h.print, and pass h as the argument (which becomes self in the body of the function). This is fairly natural, but a bit heavy for manipulating objects. Lua provides a simple shortcut, :, the colon, which passes the parent table as the first argument: h:print() is strictly equivalent to h.print(h).
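In short, the two calls below do exactly the same thing:
```lua
h.print(h)  -- explicit: pass the table itself as the first argument
h:print()   -- sugar: the colon passes h as self automatically
```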
-==== Functions ====
+### Functions ###
A few more things about functions: functions in Lua are proper closures, so in combination with tables, you can use them to build complex and very flexible programs. An example of a closure is given here:
-<file lua>
+```lua
myfuncs = {}
for i = 1,4 do
local calls = 0
@@ -251,17 +251,17 @@ t7> myfuncs[4]()
2
t7> myfuncs[1]()
3
-</file>
+```
You can use such closures to create objects on the fly, that is, tables which combine functions and data to act upon. Thanks to closures, data can live in arbitrary locations (not necessarily the object's table), and simply be bound at runtime to the function's scope.
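Here is a minimal sketch of such an on-the-fly object: a counter whose state lives only in the closure, not in the table:
```lua
function newcounter()
   local count = 0  -- private state, captured by the closures below
   return {
      increment = function() count = count + 1; return count end,
      reset     = function() count = 0 end
   }
end

c = newcounter()
print(c.increment())  -- 1
print(c.increment())  -- 2
c.reset()
print(c.increment())  -- 1
```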
-===== Torch Basics: Playing with Tensors =====
+## Torch Basics: Playing with Tensors ##
OK, now we are ready to actually do something in Torch. Let's start by
constructing a vector, say a vector with 5 elements, and filling the
i-th element with value i. Here's how:
-<file lua>
+```lua
t7> x=torch.Tensor(5)
t7> for i=1,5 do x[i]=i; end
t7> print(x)
@@ -274,13 +274,13 @@ t7> print(x)
[torch.DoubleTensor of dimension 5]
t7>
-</file>
+```
However, making use of Lua's powerful closures, and of functions being
first-class citizens of the language, the same code can be written
in a much nicer way:
-<file lua>
+```lua
t7> x=torch.Tensor(5)
t7> i=0;x:apply(function() i=i+1;return i; end)
t7> =x
@@ -301,40 +301,40 @@ t7> =x
[torch.DoubleTensor of dimension 5]
t7>
-</file>
+```
To make a matrix (2-dimensional Tensor), one simply does something
-like ''x=torch.Tensor(5,5)'' instead:
+like `x=torch.Tensor(5,5)` instead:
-<file lua>
+```lua
x=torch.Tensor(5,5)
for i=1,5 do
for j=1,5 do
x[i][j]=math.random();
end
end
-</file>
+```
Another way to do the same thing as the code above is provided by torch:
-<file lua>
+```lua
x=torch.rand(5,5)
-</file>
+```
-The [[..:torch:maths|torch]] package contains a wide variety of commands
+The [torch](..:torch:maths) package contains a wide variety of commands
for manipulating Tensors that rather closely follow the equivalent
Matlab commands. For example, one can construct Tensors using the commands
-[[..:torch:maths#torch.ones|ones]],
-[[..:torch:maths#torch.zeros|zeros]],
-[[..:torch:maths#torch.rand|rand]],
-[[..:torch:maths#torch.randn|randn]] and
-[[..:torch:maths#torch.eye|eye]], amongst others.
+[ones](..:torch:maths#torch.ones),
+[zeros](..:torch:maths#torch.zeros),
+[rand](..:torch:maths#torch.rand),
+[randn](..:torch:maths#torch.randn) and
+[eye](..:torch:maths#torch.eye), amongst others.
Similarly, row or column-wise operations such as
-[[..:torch:maths#torch.sum|sum]] and
-[[..:torch:maths#torch.max|max]] are called in the same way:
+[sum](..:torch:maths#torch.sum) and
+[max](..:torch:maths#torch.max) are called in the same way:
-<file lua>
+```lua
t7> x1=torch.rand(5,5)
t7> x2=torch.sum(x1,2);
t7> print(x2)
@@ -346,15 +346,15 @@ t7> print(x2)
[torch.DoubleTensor of dimension 5x1]
t7>
-</file>
+```
Naturally, many BLAS operations like matrix-matrix and matrix-vector products
are implemented. We suggest installing the ATLAS or MKL libraries, since
Torch7 can optionally take advantage of these very efficient and multi-threaded
libraries if they are found on your system. Check out
-[[..:torch:maths|Mathematical operations using tensors.]] for details.
+[Mathematical operations using tensors](..:torch:maths) for details.
-<file lua>
+```lua
t7> a=torch.ones(5,5)
t7> b=torch.ones(5,2)
@@ -382,30 +382,30 @@ t7> =torch.mm(a,b)
5 5
[torch.DoubleTensor of dimension 5x2]
-</file>
+```
-===== Types in Torch7 =====
+## Types in Torch7 ##
In Torch7, different types of tensors can be used. By default, all
-tensors are created using ''double'' type. ''torch.Tensor'' is a
-convenience call to ''torch.DoubleTensor''. One can easily switch the
-default tensor type to other types, like ''float''.
+tensors are created using the `double` type. `torch.Tensor` is a
+convenience call to `torch.DoubleTensor`. One can easily switch the
+default tensor type to other types, like `float`.
-<file lua>
+```lua
t7> =torch.Tensor()
[torch.DoubleTensor with no dimension]
t7> torch.setdefaulttensortype('torch.FloatTensor')
t7> =torch.Tensor()
[torch.FloatTensor with no dimension]
-</file>
+```
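Each tensor type also has its own constructor, so one can create a tensor of a specific type directly, without changing the default (a brief sketch):

```lua
f = torch.FloatTensor(5)    -- a float vector with 5 elements
d = torch.DoubleTensor(2,2) -- a 2x2 double matrix, whatever the default type is
```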
-===== Saving code to files, running files =====
+## Saving code to files, running files ##
Before we go any further, let's just review one basic thing: saving code to files, and executing them.
As Torch relies on Lua, it's best to give all your files a .lua extension. Let's generate a Lua file that contains some Lua code, and then execute it:
-<file lua>
+```lua
$ echo "print('Hello World\!')" > helloworld.lua
...
@@ -417,25 +417,25 @@ $ torch
...
t7> dofile 'helloworld.lua'
Hello World!
-</file>
+```
That's it: you can run programs either from your shell or from the Torch prompt. You can also run a program from the shell and get an interactive prompt whenever an error occurs or when the program terminates (good for debugging):
-<file lua>
+```lua
$ torch -i helloworld.lua
...
Hello World!
t7>
-</file>
+```
We're good with all the basic things: you now know how to run code, from files or from the prompt, and how to write basic Lua (which is almost all there is to Lua!).
-===== Example: training a neural network =====
+## Example: training a neural network ##
-We will show now how to train a neural network using the [[..:nn:index|nn]] package
+We will now show how to train a neural network using the [nn](..:nn:index) package
available in Torch.
-==== Torch basics: building a dataset using Lua tables ====
+### Torch basics: building a dataset using Lua tables ###
In general, the user has the freedom to create any kind of structure they
want for dealing with data.
@@ -447,33 +447,33 @@ user's creativity.
However, if you want to use some convenience classes, like
-[[..:nn:index#nn.StochasticGradient|StochasticGradient]], which basically
+[StochasticGradient](..:nn:index#nn.StochasticGradient), which basically
does the training loop for you, one has to follow the dataset
convention of these classes. (We will discuss manual training of a
network, where one does not use these convenience classes, in a later
section.)
-StochasticGradient expects as a ''dataset'' an object which implements
-the operator ''dataset[index]'' and implements the method
-''dataset:size()''. The ''size()'' methods returns the number of
-examples and ''dataset[i]'' has to return the i-th example.
+StochasticGradient expects as a `dataset` an object which implements
+the operator `dataset[index]` and implements the method
+`dataset:size()`. The `size()` method returns the number of
+examples and `dataset[i]` has to return the i-th example.
-An ''example'' has to be an object which implements the operator
-''example[field]'', where ''field'' often takes the value ''1'' (for
-input features) or ''2'' (for corresponding labels), i.e an example is
+An `example` has to be an object which implements the operator
+`example[field]`, where `field` often takes the value `1` (for
+input features) or `2` (for corresponding labels), i.e. an example is
a pair of input and output objects. The input is usually a Tensor
(exception: if you use special kind of gradient modules, like
-[[..:nn:index#nn.TableLayers|table layers]]). The label type depends
+[table layers](..:nn:index#nn.TableLayers)). The label type depends
on the criterion. For example, the
-[[..:nn:index#nn.MSECriterion|MSECriterion]] expects a Tensor, but the
-[[..:nn:index#nn.ClassNLLCriterion|ClassNLLCriterion]] expects an
+[MSECriterion](..:nn:index#nn.MSECriterion) expects a Tensor, but the
+[ClassNLLCriterion](..:nn:index#nn.ClassNLLCriterion) expects an
integer (the class).
Such a dataset is easily constructed by using Lua tables, but it could be any object
as long as the required operators/methods are implemented.
Here is an example of making a dataset for an XOR type problem:
-<file lua>
+```lua
dataset={};
function dataset:size() return 100 end -- 100 examples
for i=1,dataset:size() do
@@ -486,55 +486,55 @@ for i=1,dataset:size() do
end
dataset[i] = {input, output};
end
-</file>
+```
-==== Torch basics: building a neural network ====
+### Torch basics: building a neural network ###
To train a neural network we first need some data. We can use the XOR data
we just generated in the previous section. Now all that remains is to define
our network architecture, and train it.
To use Neural Networks in Torch you have to require the
-[[..:nn:index|nn]] package.
-A classical feed-forward network is created with the ''Sequential'' object:
-<file lua>
+[nn](..:nn:index) package.
+A classical feed-forward network is created with the `Sequential` object:
+```lua
require "nn"
mlp=nn.Sequential(); -- make a multi-layer perceptron
-</file>
+```
To build the layers of the network, you simply add the Torch objects
-corresponding to those layers to the //mlp// variable created above.
+corresponding to those layers to the _mlp_ variable created above.
The two basic objects you might be interested in first are the
-[[..:nn:index#nn.Linear|Linear]] and
-[[..:nn:index#nn.Tanh|Tanh]] layers.
+[Linear](..:nn:index#nn.Linear) and
+[Tanh](..:nn:index#nn.Tanh) layers.
The Linear layer is created with two parameters: the number of input
dimensions, and the number of output dimensions.
So making a classical feed-forward neural network with one hidden layer of
-//HUs// hidden units is as follows:
-<file lua>
+_HUs_ hidden units is as follows:
+```lua
require "nn"
mlp=nn.Sequential(); -- make a multi-layer perceptron
inputs=2; outputs=1; HUs=20;
mlp:add(nn.Linear(inputs,HUs))
mlp:add(nn.Tanh())
mlp:add(nn.Linear(HUs,outputs))
-</file>
+```
-==== Torch basics: training a neural network ====
+### Torch basics: training a neural network ###
Now we're ready to train.
This is done with the following code:
-<file lua>
+```lua
criterion = nn.MSECriterion()                   -- mean squared error criterion
trainer = nn.StochasticGradient(mlp, criterion) -- train with stochastic gradient descent
trainer.learningRate = 0.01                     -- size of each gradient step
trainer:train(dataset)                          -- loop over the dataset and train
-</file>
+```
You should see something like this printed on the screen:
-<file lua>
+```lua
# StochasticGradient: training
# current error = 0.94550937745458
# current error = 0.83996744568527
@@ -547,30 +547,30 @@ You should see printed on the screen something like this:
# current error = 0.34321901952818
# current error = 0.34206793525954
# StochasticGradient: you have reached the maximum number of iterations
-</file>
+```
-Some other options of the //trainer// you might be interested in are for example:
-<file lua>
+Some other options of the _trainer_ that you might be interested in are, for example:
+```lua
trainer.maxIteration = 10
trainer.shuffleIndices = false
-</file>
+```
See the nn package description of the
-[[..:nn:index#nn.StochasticGradient|StochasticGradient]] object
+[StochasticGradient](..:nn:index#nn.StochasticGradient) object
for more details.
-==== Torch basics: testing your neural network ====
+### Torch basics: testing your neural network ###
To test your network on a single example you can do this:
-<file lua>
+```lua
x=torch.Tensor(2); -- create a test example Tensor
x[1]=0.5; x[2]=-0.5; -- set its values
pred=mlp:forward(x) -- get the prediction of the mlp
print(pred) -- print it
-</file>
+```
You should see that your network has learned XOR:
-<file lua>
+```lua
t7> x=torch.Tensor(2); x[1]=0.5; x[2]=0.5; print(mlp:forward(x))
-0.5886
[torch.DoubleTensor of dimension 1]
@@ -586,17 +586,17 @@ t7> x=torch.Tensor(2); x[1]=0.5; x[2]=-0.5; print(mlp:forward(x))
t7> x=torch.Tensor(2); x[1]=-0.5; x[2]=-0.5; print(mlp:forward(x))
-0.5576
[torch.DoubleTensor of dimension 1]
-</file>
+```
-==== Manual Training of a Neural Network ====
+### Manual Training of a Neural Network ###
-Instead of using the [[..:nn:index#nn.StochasticGradient|StochasticGradient]] class
+Instead of using the [StochasticGradient](..:nn:index#nn.StochasticGradient) class
you can directly make the forward and backward calls on the network yourself.
This gives you greater flexibility.
In the following code example we create the same XOR data on the fly
and train each example online.
-<file lua>
+```lua
criterion = nn.MSECriterion()
mlp=nn.Sequential(); -- make a multi-layer perceptron
inputs=2; outputs=1; HUs=20;
@@ -630,23 +630,24 @@ for i = 1,2500 do
-- (3) update parameters with a 0.01 learning rate
mlp:updateParameters(0.01)
end
-</file>
+```
Super!
-===== Concluding remarks / going further =====
+## Concluding remarks / going further ##
That's the end of this tutorial, but not the end of what is left
to discover in Torch! To explore more of Torch, you should take a look
-at the [[..:index|Torch package help]] which has been linked to
+at the [Torch package help](..:index) which has been linked to
throughout this tutorial every time we have mentioned one of the basic
Torch object types. The Torch library reference manual is available
-[[..:index|here]] and the external torch packages installed on your
-system can be viewed [[..:torch:index|here]].
+[here](..:index) and the external torch packages installed on your
+system can be viewed [here](..:torch:index).
We've also compiled a couple of demonstration and tutorial scripts
that show how to train more complex models, build GUI-based
demos, and so on. All of these can be found in
-[[http://github.com/andresy/torch-demos|this repo]].
+[this repo](http://github.com/andresy/torch-demos).
Good luck and have fun!
+
diff --git a/doklua/index.dok b/doklua/index.dok
deleted file mode 100644
index 8728c6b..0000000
--- a/doklua/index.dok
+++ /dev/null
@@ -1,83 +0,0 @@
-====== The Lua Language ======
-
-''Lua'' is a **powerful**, **fast**, **light-weight**, embeddable //scripting language//.
-''Lua'' combines simple procedural syntax with powerful data description
-constructs based on associative arrays and extensible semantics. ''Lua'' is
-dynamically typed, runs by interpreting bytecode for a register-based
-virtual machine, and has automatic memory management with incremental
-garbage collection, making it ideal for configuration, scripting, and rapid
-prototyping. 'Lua' means 'moon' in Portuguese and is pronounced **LOO-ah**.
-
-Please visit [[http://www.lua.org|http://www.lua.org]] for more
-information, or have a look on the [[LuaManual|Lua Reference Manual]].
-
-===== Why choose Lua? =====
-
-==== Lua is a proven and robust language ====
-
- Lua has been used in
-[[http://www.lua.org/uses.html|many industrial applications]] (e.g.,
-[[http://since1968.com/article/190/mark-hamburg-interview-adobe-photoshop-lightroom-part-2-of-2|Adobe's Photoshop Lightroom]]),
-with an emphasis on embedded systems and games. Lua
-is currently the leading scripting language in games. Lua has a solid
-[[LuaManual|reference manual]] and there are
-[[http://www.lua.org/docs.html#books|several books about it]]. Several
-[[http://www.lua.org/versions.html|versions]] of Lua have been released
-and used in real applications since its creation in 1993.
-
-==== Lua is fast ====
-
-Lua has a deserved reputation for performance. To
-claim to be "as fast as Lua" is an aspiration of other scripting
-languages. Several benchmarks show Lua as the fastest language in the realm
-of interpreted scripting languages. Lua is fast not only in fine-tuned
-benchmark programs, but in real life too. A substantial fraction of large
-applications have been written in Lua.
-
-==== Lua is portable ====
-
-Lua is [[http://www.lua.org/download.html|distributed]] in a small
-package that builds out-of-the-box in all platforms that have an ''ANSI/ISO C''
-compiler. Lua runs on all flavors of ''Unix'' and ''Windows'', and also on mobile
-devices (such as handheld computers and cell phones that use ''BREW'', ''Symbian'',
-''Pocket PC'', etc.) and embedded microprocessors (such as ''ARM'' and ''Rabbit'') for
-applications like ''Lego MindStorms''.
-
-==== Lua is embeddable ====
-
-Lua is a fast language engine with small footprint that you can embed into
-your application. Lua has a simple and well documented ''API'' that allows
-strong integration with code written in other languages. It is easy to
-extend Lua with libraries written in other languages. It is also easy to
-extend programs written in other languages with Lua. Lua has been used to
-extend programs written not only in ''C'' and ''C++'', but also in ''Java'', ''C#'',
-''Smalltalk'', ''Fortran'', ''Ada'', and even in other scripting languages,
-such as
-''Perl'' and ''Ruby''.
-
-==== Lua is simple and powerful ====
-
-A fundamental concept in the design of Lua is to provide //meta-mechanisms//
-for implementing features, instead of providing a host of features directly
-in the language. For example, although Lua is not a pure object-oriented
-language, it does provide meta-mechanisms for implementing classes and
-inheritance. Lua's meta-mechanisms bring an economy of concepts and keep
-the language small, while allowing the semantics to be extended in
-unconventional ways.
-
-==== Lua is free ====
-
-Lua is free software, distributed under a
-[[http://www.lua.org/license.html|liberal license]] (the well-known ''MIT''
-license). It can be used for both academic and commercial purposes at
-absolutely no cost. Just [[http://www.lua.org/download.html|download]] it and use it.
-
-==== Where does Lua come from? ====
-
-Lua is designed and implemented by a team at
-[[http://www.puc-rio.br/|PUC-Rio]], the Pontifical Catholic University of
-Rio de Janeiro in Brazil. Lua was born and raised at
-[[http://www.tecgraf.puc-rio.br/|Tecgraf]], the Computer Graphics
-Technology Group of PUC-Rio, and is now housed at
-[[http://www.lua.inf.puc-rio.br/|Lablua]]. Both Tecgraf and Lablua are
-laboratories of the [[http://www.inf.puc-rio.br/|Department of Computer Science]].