Corpora¶
Data downloaders¶
The NLTK corpus and module downloader. This module defines several interfaces for downloading corpora, models, and other data packages for use with NLTK.
Downloading Packages¶
If called with no arguments, download()
will display an interactive
interface which can be used to download and install new packages.
If Tkinter is available, then a graphical interface will be shown,
otherwise a simple text interface will be provided.
Individual packages can be downloaded by calling the download()
function with a single argument, giving the package identifier for the
package that should be downloaded:
>>> download('treebank')
[nltk_data] Downloading package 'treebank'...
[nltk_data] Unzipping corpora/treebank.zip.
NLTK also provides a number of “package collections”, consisting of
a group of related packages. To download all packages in a
collection, simply call download()
with the collection’s
identifier:
>>> download('all-corpora')
[nltk_data] Downloading package 'abc'...
[nltk_data] Unzipping corpora/abc.zip.
[nltk_data] Downloading package 'alpino'...
[nltk_data] Unzipping corpora/alpino.zip.
...
[nltk_data] Downloading package 'words'...
[nltk_data] Unzipping corpora/words.zip.
Download Directory¶
By default, packages are installed in a system-wide directory
(if Python has sufficient access to write to it), or otherwise in the
current user’s home directory. However, the download_dir
argument may be
used to specify a different installation target, if desired.
See Downloader.default_download_dir()
for a more detailed
description of how the default download directory is chosen.
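The selection order can be sketched roughly as follows. This is a simplified, hypothetical sketch (the helper name sketch_default_download_dir is invented for illustration), not the actual implementation of Downloader.default_download_dir():

```python
import os
import sys

def sketch_default_download_dir(search_path=()):
    """Rough sketch of how a default download directory could be chosen.

    Hypothetical helper; the real logic lives in
    Downloader.default_download_dir().
    """
    # An explicit NLTK_DATA environment variable takes precedence.
    env = os.environ.get("NLTK_DATA")
    if env:
        return env.split(os.pathsep)[0]
    # Otherwise, prefer the first writable directory on the data search
    # path (this is where a system-wide install would land).
    for path in search_path:
        if os.path.exists(path) and os.access(path, os.W_OK):
            return path
    # Fall back to a per-user directory under the home folder.
    home = os.path.expanduser("~")
    if sys.platform.startswith("win"):
        return os.path.join(os.environ.get("APPDATA", home), "nltk_data")
    return os.path.join(home, "nltk_data")
```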
NLTK Download Server¶
Before downloading any packages, the corpus and module downloader
contacts the NLTK download server, to retrieve an index file
describing the available packages. By default, this index file is
loaded from https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml.
If necessary, it is possible to create a new Downloader
object,
specifying a different URL for the package index file.
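For example (assuming NLTK is installed; the mirror URL below is a made-up placeholder, not a real index):

```python
from nltk.downloader import Downloader

# Hypothetical mirror hosting its own index.xml; replace with a real URL.
MIRROR_INDEX = "https://example.org/nltk_data/index.xml"

# This downloader consults MIRROR_INDEX instead of the default NLTK
# server, and installs packages under the given download_dir.
d = Downloader(server_index_url=MIRROR_INDEX, download_dir="/tmp/nltk_data")

# d.download('treebank')  # would fetch 'treebank' from the mirror
```

The index is fetched lazily, so constructing the Downloader itself does not contact the server.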
Usage:
python nltk/downloader.py [-d DATADIR] [-q] [-f] [-k] PACKAGE_IDS
or:
python -m nltk.downloader [-d DATADIR] [-q] [-f] [-k] PACKAGE_IDS
Corpus readers¶
Reader for corpora that consist of plaintext documents.
Reader for simple part-of-speech tagged corpora. Paragraphs are assumed to be split using blank lines. Sentences and words can be tokenized using the default tokenizers, or by custom tokenizers specified as parameters to the constructor.
Reader for corpora that consist of parenthesis-delineated parse trees, like those found in the "combined" section of the Penn Treebank.
A reader for plaintext corpora whose documents are divided into categories based on their file identifiers.
A reader for part-of-speech tagged corpora whose documents are divided into categories based on their file identifiers.
A reader for parsed corpora whose documents are divided into categories based on their file identifiers.
A corpus reader for CoNLL-style files.
A ConllCorpusReader whose data file contains three columns: words, pos, and chunk.
Corpus reader for corpora whose documents are xml files.
List of words, one per line.
Reader for corpora in the format: sentence_id verb noun1 preposition noun2 attachment.
Reader for chunked (and optionally tagged) corpora.
Reader for the Sinica treebank.
List of words, one per line.
Reader for the TIMIT corpus (or any other corpus with the same file layout and use of file formats).
Corpus reader for the York-Toronto-Helsinki Parsed Corpus of Old English Prose (YCOE), a 1.5 million word syntactically-annotated corpus of Old English prose texts.
A corpus reader for the MAC_MORPHO corpus.
Reader for the Alpino Dutch Treebank.
Corpus reader for corpora in RTE challenges.
Reader for Europarl corpora that consist of plaintext documents.
Corpus reader for the propbank corpus, which augments the Penn Treebank with information about the predicate argument structure of every verb instance.
An NLTK interface to the VerbNet verb lexicon.
Corpus reader for the XML version of the British National Corpus.
A corpus reader used to access wordnet or its variants.
A corpus reader for the WordNet information content corpus.
Corpus reader for the nombank corpus, which augments the Penn Treebank with information about the predicate argument structure of every noun instance.
Corpus reader designed to work with the corpus created by IPI PAN.
Corpus reader for the XML version of the CHILDES corpus.
Reader for corpora of word-aligned sentences.
A corpus reader for tagged sentences that are included in the TIMIT corpus.
Wrapper for the LISP-formatted thesauruses distributed by Dekang Lin.
Corpus reader for the SemCor Corpus.
A corpus reader for the Framenet Corpus.
Reader for corpora that consist of Tweets represented as a list of line-delimited JSON.
A corpus reader used to access An Crubadan language n-gram files.
Reader for corpora following the TEI-p5 xml scheme, such as MULTEXT-East.
Reader for the Customer Review Data dataset by Hu and Liu (2004).
Reader for the Liu and Hu opinion lexicon.
Reader for the Pros and Cons sentence dataset.
A reader for corpora in which each row represents a single instance, mainly a sentence.
Reader for the Comparative Sentence Dataset by Jindal and Liu (2006).
A class to read the nonbreaking prefixes text files from the Moses Machine Translation toolkit.
A class used to read lists of characters from the Perl Unicode Properties (see https://perldoc.perl.org/perluniprops.html).
A class used to read the list of word pairs from the subset of lexical pairs of The Paraphrase Database (PPDB) XXXL used in the Monolingual Word Alignment (MWA) algorithm described in Sultan et al. (2014a, 2014b, 2015).
A class to read the PanLex Swadesh list.
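Most readers share a common construction pattern: point them at a root directory and a regular expression matching the file identifiers they should read. A minimal sketch with the plaintext reader (assuming NLTK is installed; the throwaway directory and file name are invented for illustration):

```python
import os
import tempfile

from nltk.corpus.reader import PlaintextCorpusReader

# Build a tiny throwaway corpus: one plaintext document.
root = tempfile.mkdtemp()
with open(os.path.join(root, "doc1.txt"), "w") as f:
    f.write("Hello world. This is a tiny corpus.")

# The reader takes a root directory and a regexp matching its file ids.
corpus = PlaintextCorpusReader(root, r".*\.txt")

print(corpus.fileids())          # ['doc1.txt']
print(corpus.words("doc1.txt"))  # a token view: ['Hello', 'world', ...]
```

The same root/fileids pattern applies to the tagged, chunked, and categorized readers listed above, with extra constructor parameters for tokenizers or category mappings where relevant.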
Texts¶
A bidirectional index between words and their 'contexts' in a text.
An index that can be used to look up the offset locations at which a given word occurs in a document.
A class that makes it easier to use regular expressions to search over tokenized strings.
A wrapper around a sequence of simple (string) tokens, which is intended to support initial exploration of texts (via the interactive console).
A collection of texts, which can be loaded with a list of texts, or with a corpus consisting of one or more texts, and which supports counting, concordancing, collocation discovery, etc.
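For instance, a Text can be built directly from any token sequence (assuming NLTK is installed; the sample sentence is made up):

```python
from nltk.text import Text

# Wrap a plain token list for interactive exploration.
tokens = "the quick brown fox jumps over the lazy dog".split()
t = Text(tokens)

print(t.count("the"))             # 2
print(t.vocab().most_common(3))   # frequency distribution over tokens
```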
Visualizations¶
Generate a lexical dispersion plot. |