nltk.tokenize.TreebankWordDetokenizer

class nltk.tokenize.TreebankWordDetokenizer[source]

Bases: TokenizerI

The Treebank detokenizer applies the reverse of the regex operations used by the Treebank tokenizer.

Note:

  • There are additional assumptions made when undoing the padding of the [;@#$%&] punctuation symbols that are not presupposed by the TreebankWordTokenizer (see the example after the first doctest below).

  • There are additional regexes added to reverse the parentheses tokenization, such as r'([\]\)\}\>])\s([:;,.])', which removes the extra right padding added to a closing parenthesis preceding [:;,.].

  • It is not possible to restore the original whitespace, because there is no explicit record of where '\n', '\t' or other whitespace was removed during the text.split() operation.

>>> from nltk.tokenize.treebank import TreebankWordTokenizer, TreebankWordDetokenizer
>>> s = '''Good muffins cost $3.88\nin New York.  Please buy me\ntwo of them.\nThanks.'''
>>> d = TreebankWordDetokenizer()
>>> t = TreebankWordTokenizer()
>>> toks = t.tokenize(s)
>>> d.detokenize(toks)
'Good muffins cost $3.88 in New York. Please buy me two of them. Thanks.'
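
The padding of symbols such as [;@#$%&] and the spacing between closing brackets and punctuation, noted above, are undone in the same pass. The examples below are illustrative: the expected outputs follow from the PUNCTUATION and PARENS_BRACKETS patterns listed further down and may vary slightly between NLTK versions:

>>> d.detokenize(['Prices', 'rose', '5', '%', ';', 'gas', 'cost', '$', '3.88', '.'])
'Prices rose 5%; gas cost $3.88.'
>>> d.detokenize(['(', 'almost', ')', ',', 'he', 'said', '.'])
'(almost), he said.'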

The MXPOST parentheses substitution can be undone using the convert_parentheses parameter:

>>> s = '''Good muffins cost $3.88\nin New (York).  Please (buy) me\ntwo of them.\n(Thanks).'''
>>> expected_tokens = ['Good', 'muffins', 'cost', '$', '3.88', 'in',
... 'New', '-LRB-', 'York', '-RRB-', '.', 'Please', '-LRB-', 'buy',
... '-RRB-', 'me', 'two', 'of', 'them.', '-LRB-', 'Thanks', '-RRB-', '.']
>>> expected_tokens == t.tokenize(s, convert_parentheses=True)
True
>>> expected_detoken = 'Good muffins cost $3.88 in New (York). Please (buy) me two of them. (Thanks).'
>>> expected_detoken == d.detokenize(t.tokenize(s, convert_parentheses=True), convert_parentheses=True)
True

During tokenization it is safe to add extra spaces, but during detokenization simply undoing the padding is not enough:

  • During tokenization, [!?] is padded on both sides; when detokenizing, only the left padding needs to be removed. Thus (re.compile(r'\s([?!])'), r'\g<1>').

  • During tokenization, [:,] are padded on both sides, but when detokenizing only the left padding is removed, and the right padding after a comma/colon is kept when the following string is a non-digit. Thus (re.compile(r'\s([:,])\s([^\d])'), r'\1 \2').

>>> from nltk.tokenize.treebank import TreebankWordDetokenizer
>>> toks = ['hello', ',', 'i', 'ca', "n't", 'feel', 'my', 'feet', '!', 'Help', '!', '!']
>>> twd = TreebankWordDetokenizer()
>>> twd.detokenize(toks)
"hello, i can't feel my feet! Help!!"
>>> toks = ['hello', ',', 'i', "can't", 'feel', ';', 'my', 'feet', '!',
... 'Help', '!', '!', 'He', 'said', ':', 'Help', ',', 'help', '?', '!']
>>> twd.detokenize(toks)
"hello, i can't feel; my feet! Help!! He said: Help, help?!"
CONTRACTIONS2 = [re.compile('(?i)\\b(can)\\s(not)\\b', re.IGNORECASE), re.compile("(?i)\\b(d)\\s('ye)\\b", re.IGNORECASE), re.compile('(?i)\\b(gim)\\s(me)\\b', re.IGNORECASE), re.compile('(?i)\\b(gon)\\s(na)\\b', re.IGNORECASE), re.compile('(?i)\\b(got)\\s(ta)\\b', re.IGNORECASE), re.compile('(?i)\\b(lem)\\s(me)\\b', re.IGNORECASE), re.compile("(?i)\\b(more)\\s('n)\\b", re.IGNORECASE), re.compile('(?i)\\b(wan)\\s(na)(?=\\s)', re.IGNORECASE)]
CONTRACTIONS3 = [re.compile("(?i) ('t)\\s(is)\\b", re.IGNORECASE), re.compile("(?i) ('t)\\s(was)\\b", re.IGNORECASE)]
ENDING_QUOTES = [(re.compile("([^' ])\\s('ll|'LL|'re|'RE|'ve|'VE|n't|N'T) "), '\\1\\2 '), (re.compile("([^' ])\\s('[sS]|'[mM]|'[dD]|') "), '\\1\\2 '), (re.compile("(\\S)\\s(\\'\\')"), '\\1\\2'), (re.compile("(\\'\\')\\s([.,:)\\]>};%])"), '\\1\\2'), (re.compile("''"), '"')]
DOUBLE_DASHES = (re.compile(' -- '), '--')
CONVERT_PARENTHESES = [(re.compile('-LRB-'), '('), (re.compile('-RRB-'), ')'), (re.compile('-LSB-'), '['), (re.compile('-RSB-'), ']'), (re.compile('-LCB-'), '{'), (re.compile('-RCB-'), '}')]
PARENS_BRACKETS = [(re.compile('([\\[\\(\\{\\<])\\s'), '\\g<1>'), (re.compile('\\s([\\]\\)\\}\\>])'), '\\g<1>'), (re.compile('([\\]\\)\\}\\>])\\s([:;,.])'), '\\1\\2')]
PUNCTUATION = [(re.compile("([^'])\\s'\\s"), "\\1' "), (re.compile('\\s([?!])'), '\\g<1>'), (re.compile('([^\\.])\\s(\\.)([\\]\\)}>"\\\']*)\\s*$'), '\\1\\2\\3'), (re.compile('([#$])\\s'), '\\g<1>'), (re.compile('\\s([;%])'), '\\g<1>'), (re.compile('\\s\\.\\.\\.\\s'), '...'), (re.compile('\\s([:,])'), '\\1')]
STARTING_QUOTES = [(re.compile('([ (\\[{<])\\s``'), '\\1``'), (re.compile('(``)\\s'), '\\1'), (re.compile('``'), '"')]
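
Each of these attributes is a list of (compiled pattern, replacement) pairs (or a single pair, in the case of DOUBLE_DASHES). Conceptually, detokenization joins the tokens with spaces and applies these substitutions in turn. The sketch below is a simplified illustration of that idea using only the bracket and punctuation rules; it is not the exact implementation, which also reverses contractions, quotes and double dashes in a specific order:

>>> from nltk.tokenize.treebank import TreebankWordDetokenizer
>>> twd = TreebankWordDetokenizer()
>>> text = " ".join(['(', 'wait', ')', ';', 'ok'])
>>> for regexp, substitution in twd.PARENS_BRACKETS:
...     text = regexp.sub(substitution, text)
>>> for regexp, substitution in twd.PUNCTUATION:
...     text = regexp.sub(substitution, text)
>>> text
'(wait); ok'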
tokenize(tokens: List[str], convert_parentheses: bool = False) → str[source]

Treebank detokenizer, created by undoing the regexes from TreebankWordTokenizer.tokenize().

Parameters
  • tokens (List[str]) – A list of strings, i.e. tokenized text.

  • convert_parentheses (bool, optional) – if True, replace PTB symbols with parentheses, e.g. -LRB- to (. Defaults to False.

Returns

The detokenized string.

Return type

str
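
For this class, tokenize() performs the detokenization itself; detokenize() below simply duck-types it. A minimal illustration (the expected output is inferred from the patterns above, so treat it as a sketch rather than guaranteed output):

>>> from nltk.tokenize.treebank import TreebankWordDetokenizer
>>> TreebankWordDetokenizer().tokenize(['A', 'small', 'example', '.'])
'A small example.'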

detokenize(tokens: List[str], convert_parentheses: bool = False) → str[source]

Duck-typing the abstract tokenize(): detokenizes the given tokens.

Parameters
  • tokens (List[str]) –

  • convert_parentheses (bool) –

Return type

str

span_tokenize(s: str) → Iterator[Tuple[int, int]][source]

Identify the tokens using integer offsets (start_i, end_i), where s[start_i:end_i] is the corresponding token.

Return type

Iterator[Tuple[int, int]]

Parameters

s (str) –

span_tokenize_sents(strings: List[str]) → Iterator[List[Tuple[int, int]]][source]

Apply self.span_tokenize() to each element of strings. I.e.:

return [self.span_tokenize(s) for s in strings]

Yields

List[Tuple[int, int]]

Parameters

strings (List[str]) –

Return type

Iterator[List[Tuple[int, int]]]

tokenize_sents(strings: List[str]) → List[List[str]][source]

Apply self.tokenize() to each element of strings. I.e.:

return [self.tokenize(s) for s in strings]

Return type

List[List[str]]

Parameters

strings (List[str]) –
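
Because tokenize() on this class returns a detokenized string, passing a list of token lists to tokenize_sents() effectively detokenizes each of them, yielding a list of strings; the annotated return type above reflects the generic TokenizerI interface. An illustrative sketch, with the expected output inferred from the patterns documented above:

>>> from nltk.tokenize.treebank import TreebankWordDetokenizer
>>> TreebankWordDetokenizer().tokenize_sents([['Hello', ',', 'world', '!'], ['Bye', '.']])
['Hello, world!', 'Bye.']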