diff options
author     Yuhao Zhang <zyh@stanford.edu>    2020-03-14 08:49:24 +0300
committer  Yuhao Zhang <zyh@stanford.edu>    2020-03-14 08:49:35 +0300
commit     67f4647f42a0231f40610ebb38f199821af748fb (patch)
tree       4e2ab5fee8bc7497898f3bec46c60ae88d2bb13a /README.md
parent     d875ec0096ea8f8e40bb3724910eaa347b55c640 (diff)
Add conda installation instructions
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  20
1 file changed, 16 insertions(+), 4 deletions(-)
@@ -40,20 +40,32 @@ If you use the CoreNLP software through Stanza, please cite the CoreNLP software
 
 To ask questions, report issues or request features, please use the [GitHub Issue Tracker](https://github.com/stanfordnlp/stanza/issues).
 
-## Setup
+## Installation
+
+### pip
 
 Stanza supports Python 3.6 or later. We strongly recommend that you install Stanza from PyPI. If you already have [pip](https://pip.pypa.io/en/stable/installing/), the Python package manager, installed on your system, simply run:
 ```bash
 pip install stanza
 ```
-this should also help resolve all of the dependencies of Stanza, for instance [PyTorch](https://pytorch.org/) 1.0.0 or above.
+This should also help resolve all of the dependencies of Stanza, for instance [PyTorch](https://pytorch.org/) 1.2.0 or above.
 
 If you currently have a previous version of `stanza` installed, use:
 ```bash
 pip install stanza -U
 ```
 
-Alternatively, you can also install from source of this git repository, which will give you more flexibility in developing on top of Stanza and training your own models. For this option, run
+### Anaconda
+
+To install Stanza via Anaconda, use the following conda command:
+
+```bash
+conda install -c stanfordnlp stanza
+```
+
+### From Source
+
+Alternatively, you can also install from source of this git repository, which will give you more flexibility in developing on top of Stanza. For this option, run
 ```bash
 git clone https://github.com/stanfordnlp/stanza.git
 cd stanza
@@ -121,7 +133,7 @@ To maximize speed performance, it is essential to run the pipeline on batches of
 
 ## Training your own neural pipelines
 
-All neural modules in this library can be trained with your own data. The tokenizer, the multi-word token (MWT) expander, the POS/morphological features tagger, the lemmatizer and the dependency parser require [CoNLL-U](https://universaldependencies.org/format.html) formatted data, while the NER model requires the BIOES format. Currently, we do not support model training via the `Pipeline` interface. Therefore, to train your own models, you need to clone this git repository and set up from source.
+All neural modules in this library can be trained with your own data. The tokenizer, the multi-word token (MWT) expander, the POS/morphological features tagger, the lemmatizer and the dependency parser require [CoNLL-U](https://universaldependencies.org/format.html) formatted data, while the NER model requires the BIOES format. Currently, we do not support model training via the `Pipeline` interface. Therefore, to train your own models, you need to clone this git repository and run training from the source.
 
 For detailed step-by-step guidance on how to train and evaluate your own models, please visit our [training documentation](https://stanfordnlp.github.io/stanza/training.html).
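Whichever installation route from the diff above is used (pip, conda, or from source), a quick sanity check is to confirm that `stanza` and its PyTorch dependency are actually visible to the interpreter and at the expected versions. A minimal stdlib-only sketch (the `installed_version` helper is ours for illustration, not part of Stanza's API; it reports "not installed" rather than raising when a package is absent):

```python
from importlib import metadata

def installed_version(package: str) -> str:
    """Return the installed version of `package`, or 'not installed'."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return "not installed"

# Check the two packages the installation instructions care about.
for pkg in ("stanza", "torch"):
    print(f"{pkg}: {installed_version(pkg)}")
```

Requires Python 3.8+ for `importlib.metadata`; on the 3.6/3.7 floor mentioned in the README, the `importlib-metadata` backport provides the same interface.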