github.com/stanfordnlp/stanza.git
path: root/demo
author    Christopher Manning <manning@cs.stanford.edu>  2019-07-30 23:20:44 +0300
committer Christopher Manning <manning@cs.stanford.edu>  2019-07-30 23:20:44 +0300
commit    e2303f3f902e402d25d7012c6ca59da16150900b (patch)
tree      ce0042c6104a10bcbb65277eae84e793d1583937 /demo
parent    bec06832cd89fb0ea6f98cc2ba8665c04a611e92 (diff)
Edit text cells a little
Diffstat (limited to 'demo')
-rw-r--r--  demo/StanfordNLP_Beginners_Guide.ipynb | 10
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/demo/StanfordNLP_Beginners_Guide.ipynb b/demo/StanfordNLP_Beginners_Guide.ipynb
index 83aeccf6..560d33de 100644
--- a/demo/StanfordNLP_Beginners_Guide.ipynb
+++ b/demo/StanfordNLP_Beginners_Guide.ipynb
@@ -27,7 +27,7 @@
"![Latest Version](https://img.shields.io/pypi/v/stanfordnlp.svg?colorB=bc4545)\n",
"![Python Versions](https://img.shields.io/pypi/pyversions/stanfordnlp.svg?colorB=bc4545)\n",
"\n",
- "StanfordNLP is a Python NLP toolkit that supports 50+ human languages. It is built with highly accurate neural network components that enable efficient training and evaluation with your own annotated data, and offers pretrained models on 70+ treebanks. Additionally, StanfordNLP provides a stable, officially maintained Python interface to Java Stanford CoreNLP Toolkit.\n",
+ "StanfordNLP is a Python NLP toolkit for core sentence analysis tasks like tokenization, lemmatization, part-of-speech tagging, and dependency parsing. It is built with highly accurate neural network components that enable efficient training and evaluation with your own annotated data, and it comes with pretrained models built on over 70+ [UD](https://universaldependencies.org/) treebanks that support 50+ human languages. Additionally, StanfordNLP provides a stable, officially maintained Python interface to the Java [Stanford CoreNLP](https://stanfordnlp.github.io/CoreNLP/) toolkit.\n",
"\n",
"In this tutorial, we will demonstrate how to set up StanfordNLP and annotate text with its native neural network NLP models. For the use of the Python CoreNLP interface, please see other tutorials."
]
@@ -41,7 +41,7 @@
"source": [
"## 1. Installing StanfordNLP\n",
"\n",
- "Note that StanfordNLP only supports Python 3. Installing and importing StanfordNLP are as simple as running the following commands:"
+ "StanfordNLP only supports Python 3.6 and above and uses [the PyTorch library](https://pytorch.org/). Installing and importing StanfordNLP are as simple as running the following commands:"
]
},
{
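
The install-and-import code cell itself falls outside this hunk; as a minimal sketch, the commands the text above refers to look roughly like this (package name and import per the StanfordNLP documentation):

```python
# In the notebook, installation is typically done in a shell cell:
#   !pip install stanfordnlp
# Importing the package is then a single line:
import stanfordnlp
```
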
@@ -82,7 +82,7 @@
"source": [
"## 2. Downloading Models\n",
"\n",
- "You can download models with the `stanfordnlp.download` command. The language can be specified with either a full language name (e.g., \"english\"), or a short code (e.g., \"en\"). \n",
+ "You can download models with the `stanfordnlp.download` command. The language can be specified with either a full language name (e.g., \"english\"), or a short ISO 639 code (e.g., \"en\"). \n",
"\n",
"By default, models will be saved to your `~/stanfordnlp_resources` directory. If you want to specify your own path to save the model files, you can pass a `resource_dir=your_path` argument.\n"
]
@@ -214,9 +214,9 @@
"\n",
"Annotations can be accessed from the returned `Document` object. \n",
"\n",
- "A `Document` contains a list of `Sentence`s, and a `Sentence` contains a list of `Token`s and `Word`s. For the most part `Token`s and `Word`s overlap, but some tokens can be divided into mutiple words, for instance the French token `aux` is divided into the words `à` and `les`, while in English a word and a token are equivalent. Note that dependency parses are derived over `Word`s.\n",
+ "A `Document` contains a list of `Sentence`s, and a `Sentence` contains a list of `Token`s and `Word`s. For the most part `Token`s and `Word`s overlap, but some tokens can be divided into mutiple words, for instance the French token `aux` is divided into the words `à` and `les`. Note that dependency parses are derived over `Word`s.\n",
"\n",
- "The following example iterate over all English sentences and words, and print the word information one by one:"
+ "The following example iterates over all English sentences and words, and prints the word information one by one:"
]
},
{
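
A minimal sketch of the iteration this cell describes, assuming the English models from step 2 are installed and using the attribute names of the stanfordnlp 0.2.x API (`upos`, `dependency_relation`):

```python
import stanfordnlp

# Build a default English pipeline; assumes stanfordnlp.download('en') has run.
nlp = stanfordnlp.Pipeline(lang='en')
doc = nlp("Barack Obama was born in Hawaii.")

# A Document holds Sentences; each Sentence holds Words (and Tokens).
# Dependency parses are defined over Words, so we print word-level fields.
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.lemma, word.upos, word.dependency_relation)
```
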