tokenize() in nltk.TweetTokenizer splits long numbers

tokenize() in nltk.TweetTokenizer splits long numbers into pieces, as if it were truncating them to 32-bit integers. This only happens with some specific numbers, and I see no reason why:
>>> from nltk.tokenize import TweetTokenizer
>>> tw = TweetTokenizer()
>>> tw.tokenize('the 23135851162 of 3151942776...')
[u'the', u'2313585116', u'2', u'of', u'3151942776', u'...']
The input 23135851162 has been split into [u'2313585116', u'2'].
Interestingly, it seems to split all numbers into chunks of 10 digits:
>>> tw.tokenize('the 231358511621231245 of 3151942776...')
[u'the', u'2313585116', u'2123124', u'5', u'of', u'3151942776', u'...']
>>> tw.tokenize('the 231123123358511621231245 of 3151942776...')
[u'the', u'2311231233', u'5851162123', u'1245', u'of', u'3151942776', u'...']
The length of the number affects the tokenization:
>>> s = 'the 1234567890 of'
>>> tw.tokenize(s)
[u'the', u'12345678', u'90', u'of']
>>> s = 'the 123456789 of'
>>> tw.tokenize(s)
[u'the', u'12345678', u'9', u'of']
>>> s = 'the 12345678 of'
>>> tw.tokenize(s)
[u'the', u'12345678', u'of']
>>> s = 'the 1234567 of'
>>> tw.tokenize(s)
[u'the', u'1234567', u'of']
>>> s = 'the 123456 of'
>>> tw.tokenize(s)
[u'the', u'123456', u'of']
>>> s = 'the 12345 of'
>>> tw.tokenize(s)
[u'the', u'12345', u'of']
>>> s = 'the 1234 of'
>>> tw.tokenize(s)
[u'the', u'1234', u'of']
>>> s = 'the 123 of'
>>> tw.tokenize(s)
[u'the', u'123', u'of']
>>> s = 'the 12 of'
>>> tw.tokenize(s)
[u'the', u'12', u'of']
>>> s = 'the 1 of'
>>> tw.tokenize(s)
[u'the', u'1', u'of']
If adjacent digits plus spaces span more than 10 characters:
>>> s = 'the 123 456 78901234 of'
>>> tw.tokenize(s)
[u'the', u'123 456 7890', u'1234', u'of']
TL;DR

This seems to be a bug/feature of TweetTokenizer(), and we're not sure what motivates it. Read on to find out where this bug/feature happens...

The long version
Looking at the tokenize() function in TweetTokenizer, the tokenizer does some preprocessing before actually tokenizing:

- First, it replaces HTML entities in the text with the corresponding Unicode characters via the _replace_html_entities() function.
- Optionally, it removes username handles with remove_handles().
- Optionally, it normalizes word lengthening via the reduce_lengthening function.
- Then it shortens problematic sequences of characters with the HANG_RE regex.
- Finally, the actual tokenization happens through the WORD_RE regex.
- After WORD_RE, it optionally lowercases the tokens, while preserving the case of emoticons, before returning the tokenized output.
In code:
def tokenize(self, text):
    """
    :param text: str
    :rtype: list(str)
    :return: a tokenized list of strings; concatenating this list returns\
    the original string if `preserve_case=False`
    """
    # Fix HTML character entities:
    text = _replace_html_entities(text)
    # Remove username handles
    if self.strip_handles:
        text = remove_handles(text)
    # Normalize word lengthening
    if self.reduce_len:
        text = reduce_lengthening(text)
    # Shorten problematic sequences of characters
    safe_text = HANG_RE.sub(r'\1\1\1', text)
    # Tokenize:
    words = WORD_RE.findall(safe_text)
    # Possibly alter the case, but avoid changing emoticons like :D into :d:
    if not self.preserve_case:
        words = list(map((lambda x : x if EMOTICON_RE.search(x) else
                          x.lower()), words))
    return words
By default, handle stripping and length reduction are disabled unless turned on by the user:
class TweetTokenizer:
    r"""
    Tokenizer for tweets.

    >>> from nltk.tokenize import TweetTokenizer
    >>> tknzr = TweetTokenizer()
    >>> s0 = "This is a cooool #dummysmiley: :-) :-P <3 and some arrows < > -> <--"
    >>> tknzr.tokenize(s0)
    ['This', 'is', 'a', 'cooool', '#dummysmiley', ':', ':-)', ':-P', '<3', 'and', 'some', 'arrows', '<', '>', '->', '<--']

    Examples using the `strip_handles` and `reduce_len` parameters:

    >>> tknzr = TweetTokenizer(strip_handles=True, reduce_len=True)
    >>> s1 = '@remy: This is waaaaayyyy too much for you!!!!!!'
    >>> tknzr.tokenize(s1)
    [':', 'This', 'is', 'waaayyy', 'too', 'much', 'for', 'you', '!', '!', '!']
    """

    def __init__(self, preserve_case=True, reduce_len=False, strip_handles=False):
        self.preserve_case = preserve_case
        self.reduce_len = reduce_len
        self.strip_handles = strip_handles
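For good measure, toggling all the options on shouldn't change the number splitting; a quick illustrative check (the output shown is what we'd expect):

>>> tknzr = TweetTokenizer(preserve_case=False, reduce_len=True, strip_handles=True)
>>> tknzr.tokenize('the 23135851162 of')
[u'the', u'2313585116', u'2', u'of']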
Let's step through the preprocessing functions and the regexes:
>>> from nltk.tokenize.casual import _replace_html_entities
>>> s = 'the 231358523423423421162 of 3151942776...'
>>> _replace_html_entities(s)
u'the 231358523423423421162 of 3151942776...'
Verified: _replace_html_entities() is not the culprit.
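For reference, here is what _replace_html_entities() does when a string actually contains an HTML entity (an invented example):

>>> _replace_html_entities('Barnes &amp; Noble')
u'Barnes & Noble'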
By default, remove_handles() and reduce_lengthening() are skipped, but for sanity's sake, let's check them anyway:
>>> from nltk.tokenize.casual import _replace_html_entities
>>> s = 'the 231358523423423421162 of 3151942776...'
>>> _replace_html_entities(s)
u'the 231358523423423421162 of 3151942776...'
>>> from nltk.tokenize.casual import remove_handles, reduce_lengthening
>>> remove_handles(_replace_html_entities(s))
u'the 231358523423423421162 of 3151942776...'
>>> reduce_lengthening(remove_handles(_replace_html_entities(s)))
u'the 231358523423423421162 of 3151942776...'
Also checked that neither of the optional functions misbehaves:
>>> import re
>>> s = 'the 231358523423423421162 of 3151942776...'
>>> HANG_RE = re.compile(r'([^a-zA-Z0-9])\1{3,}')
>>> HANG_RE.sub(r'\1\1\1', s)
'the 231358523423423421162 of 3151942776...'
Clear! HANG_RE is also cleared of blame.
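For contrast, here is what HANG_RE is actually for: shortening long runs of a repeated non-alphanumeric character (an invented example):

>>> HANG_RE.sub(r'\1\1\1', 'wow!!!!!!!!')
'wow!!!'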
>>> import re
>>> from nltk.tokenize.casual import REGEXPS
>>> WORD_RE = re.compile(r"""(%s)""" % "|".join(REGEXPS), re.VERBOSE | re.I | re.UNICODE)
>>> WORD_RE.findall(s)
['the', '2313585234', '2342342116', '2', 'of', '3151942776', '...']
Aha! That's where the splits appear!
Now let's look deeper into WORD_RE: it is compiled from a tuple of regexes, REGEXPS. The first one is a massive URL-pattern regex from https://gist.github.com/winzig/8894715. Let's run through them one by one:
>>> from nltk.tokenize.casual import REGEXPS
>>> patt = re.compile(r"""(%s)""" % "|".join(REGEXPS), re.VERBOSE | re.I | re.UNICODE)
>>> s = 'the 231358523423423421162 of 3151942776...'
>>> patt.findall(s)
['the', '2313585234', '2342342116', '2', 'of', '3151942776', '...']
>>> patt = re.compile(r"""(%s)""" % "|".join(REGEXPS[:1]), re.VERBOSE | re.I | re.UNICODE)
>>> patt.findall(s)
[]
>>> patt = re.compile(r"""(%s)""" % "|".join(REGEXPS[:2]), re.VERBOSE | re.I | re.UNICODE)
>>> patt.findall(s)
['2313585234', '2342342116', '3151942776']
>>> patt = re.compile(r"""(%s)""" % "|".join(REGEXPS[1:2]), re.VERBOSE | re.I | re.UNICODE)
>>> patt.findall(s)
['2313585234', '2342342116', '3151942776']
Ah, ha! It seems like the 2nd regex from REGEXPS is causing the problem!
If we take a look at https://github.com/alvations/nltk/blob/develop/nltk/tokenize/casual.py#L122 :
# The components of the tokenizer:
REGEXPS = (
    URLS,
    # Phone numbers:
    r"""
    (?:
      (?:            # (international)
        \+?[01]
        [\-\s.]*
      )?
      (?:            # (area code)
        [\(]?
        \d{3}
        [\-\s.\)]*
      )?
      \d{3}          # exchange
      [\-\s.]*
      \d{4}          # base
    )"""
    ,
    # ASCII Emoticons
    EMOTICONS
    ,
    # HTML tags:
    r"""<[^>\s]+>"""
    ,
    # ASCII Arrows
    r"""[\-]+>|<[\-]+"""
    ,
    # Twitter username:
    r"""(?:@[\w_]+)"""
    ,
    # Twitter hashtags:
    r"""(?:\#+[\w_]+[\w\'_\-]*[\w_]+)"""
    ,
    # email addresses
    r"""[\w.+-]+@[\w-]+\.(?:[\w-]\.?)+[\w-]"""
    ,
    # Remaining word types:
    r"""
    (?:[^\W\d_](?:[^\W\d_]|['\-_])+[^\W\d_]) # Words with apostrophes or dashes.
    |
    (?:[+\-]?\d+[,/.:-]\d+[+\-]?)  # Numbers, including fractions, decimals.
    |
    (?:[\w_]+)                     # Words without apostrophes or dashes.
    |
    (?:\.(?:\s*\.){1,})            # Ellipsis dots.
    |
    (?:\S)                         # Everything else that isn't whitespace.
    """
    )
The second regex in REGEXPS tries to parse numbers as phone numbers:
# Phone numbers:
r"""
(?:
  (?:            # (international)
    \+?[01]
    [\-\s.]*
  )?
  (?:            # (area code)
    [\(]?
    \d{3}
    [\-\s.\)]*
  )?
  \d{3}          # exchange
  [\-\s.]*
  \d{4}          # base
)"""
The pattern tries to recognize:

- optionally, a leading digit as the international code
- the next 3 digits as the area code
- an optional dash/separator
- then 3 more digits, which are the exchange code (telecom jargon)
- another optional dash/separator
- finally, the 4-digit base phone number
See https://regex101.com/r/BQpnsg/1 for details. This is why the tokenizer tries to split adjacent digits into 10-digit blocks!
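A quick sanity check of the phone-number pattern in isolation (the phone numbers below are invented for illustration):

>>> import re
>>> from nltk.tokenize.casual import REGEXPS
>>> PHONE_RE = re.compile(r"""(%s)""" % REGEXPS[1], re.VERBOSE | re.I | re.UNICODE)
>>> PHONE_RE.findall('+1 (555) 123-4567')
['+1 (555) 123-4567']
>>> PHONE_RE.findall('555-123-4567')
['555-123-4567']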
But note the quirks: since the phone-number regex is hardcoded, it can catch real phone numbers in the \d{3}-\d{3}-\d{4} or \d{10} patterns, but if the dashes fall elsewhere, it won't work:
>>> from nltk.tokenize.casual import REGEXPS
>>> patt = re.compile(r"""(%s)""" % "|".join(REGEXPS[1:2]), re.VERBOSE | re.I | re.UNICODE)
>>> s = '231-358-523423423421162'
>>> patt.findall(s)
['231-358-5234', '2342342116']
>>> s = '2313-58-523423423421162'
>>> patt.findall(s)
['5234234234']
Can we fix this?
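One possible workaround (a sketch only, not an official fix): in this version of NLTK, WORD_RE is a module-level compiled regex in nltk.tokenize.casual, so we can rebuild it without the phone-number pattern (REGEXPS[1]) before tokenizing. This assumes you don't need phone-number tokens at all:

import re
import nltk.tokenize.casual as casual
from nltk.tokenize import TweetTokenizer

# Rebuild the tokenizing regex with the phone-number pattern dropped;
# everything else in REGEXPS stays the same.
no_phone = casual.REGEXPS[:1] + casual.REGEXPS[2:]
casual.WORD_RE = re.compile(r"""(%s)""" % "|".join(no_phone),
                            re.VERBOSE | re.I | re.UNICODE)

tw = TweetTokenizer()
print(tw.tokenize('the 23135851162 of 3151942776...'))
# Expected: [u'the', u'23135851162', u'of', u'3151942776', u'...']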
There is a part of the TweetTokenizer regex that recognizes phone numbers in every imaginable format (search for "# Phone numbers:" in the source: http://www.nltk.org/_modules/nltk/tokenize/casual.html#TweetTokenizer). Some 10-digit chunks of these numbers look like 10-digit phone numbers, which is why they become separate tokens.