What do I need to download to make nltk.tokenize.word_tokenize work?

Question · Votes: 0 · Answers: 6

I am going to use nltk.tokenize.word_tokenize on a cluster where my account has a very limited space quota. At home I downloaded all of the nltk resources with nltk.download(), but as I found out, that takes ~2.5 GB.

That seems like overkill to me. Could you suggest the minimal (or close to minimal) set of dependencies for nltk.tokenize.word_tokenize? So far I have seen nltk.download('punkt'), but I am not sure whether it is sufficient or how large it is. What exactly should I run to make it work?

python nltk
6 Answers

47 votes

You are right. You need the Punkt Tokenizer Models. They are 13 MB, and

nltk.download('punkt')

should do the trick.
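To keep the footprint small on the quota-limited cluster, you can download just that one package into your home directory. A minimal sketch (the target path is an assumption; any directory on NLTK's search path works):

```python
import os
import nltk

# Download only the Punkt models (~13 MB) instead of the ~2.5 GB
# that a full nltk.download() pulls in. ~/nltk_data is already on
# NLTK's default search path, so no further configuration is needed.
target = os.path.expanduser("~/nltk_data")
nltk.download('punkt', download_dir=target)

# For a directory outside the default search path, register it first:
# nltk.data.path.append(target)
```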


15 votes

In short:

nltk.download('punkt')

would suffice.

If you are only going to use NLTK for tokenization, there is no need to download all the models and corpora available in NLTK.

Actually, if you are just using word_tokenize(), you will not really need any of the resources from nltk.download(). If we look at the code, the default word_tokenize(), which is basically the TreebankWordTokenizer, shouldn't use any additional resources:

alvas@ubi:~$ ls nltk_data/
chunkers  corpora  grammars  help  models  stemmers  taggers  tokenizers
alvas@ubi:~$ mv nltk_data/ tmp_move_nltk_data/
alvas@ubi:~$ python
Python 2.7.11+ (default, Apr 17 2016, 14:00:29) 
[GCC 5.3.1 20160413] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from nltk import word_tokenize
>>> from nltk.tokenize import TreebankWordTokenizer
>>> tokenizer = TreebankWordTokenizer()
>>> tokenizer.tokenize('This is a sentence.')
['This', 'is', 'a', 'sentence', '.']

However:

alvas@ubi:~$ ls nltk_data/
chunkers  corpora  grammars  help  models  stemmers  taggers  tokenizers
alvas@ubi:~$ mv nltk_data/ tmp_move_nltk_data
alvas@ubi:~$ python
Python 2.7.11+ (default, Apr 17 2016, 14:00:29) 
[GCC 5.3.1 20160413] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from nltk import sent_tokenize
>>> sent_tokenize('This is a sentence. This is another.')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/__init__.py", line 90, in sent_tokenize
    tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
  File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 801, in load
    opened_resource = _open(resource_url)
  File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 919, in _open
    return find(path_, path + ['']).open()
  File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 641, in find
    raise LookupError(resource_not_found)
LookupError: 
**********************************************************************
  Resource u'tokenizers/punkt/english.pickle' not found.  Please
  use the NLTK Downloader to obtain the resource:  >>>
  nltk.download()
  Searched in:
    - '/home/alvas/nltk_data'
    - '/usr/share/nltk_data'
    - '/usr/local/share/nltk_data'
    - '/usr/lib/nltk_data'
    - '/usr/local/lib/nltk_data'
    - u''
**********************************************************************

>>> from nltk import word_tokenize
>>> word_tokenize('This is a sentence.')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/__init__.py", line 106, in word_tokenize
    return [token for sent in sent_tokenize(text, language)
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/__init__.py", line 90, in sent_tokenize
    tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
  File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 801, in load
    opened_resource = _open(resource_url)
  File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 919, in _open
    return find(path_, path + ['']).open()
  File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 641, in find
    raise LookupError(resource_not_found)
LookupError: 
**********************************************************************
  Resource u'tokenizers/punkt/english.pickle' not found.  Please
  use the NLTK Downloader to obtain the resource:  >>>
  nltk.download()
  Searched in:
    - '/home/alvas/nltk_data'
    - '/usr/share/nltk_data'
    - '/usr/local/share/nltk_data'
    - '/usr/lib/nltk_data'
    - '/usr/local/lib/nltk_data'
    - u''
**********************************************************************

But if we look at https://github.com/nltk/nltk/blob/develop/nltk/tokenize/__init__.py#L93, that does not seem to hold. It looks like word_tokenize implicitly calls sent_tokenize(), which requires the punkt model.

I am not sure whether this is a bug or a feature, but it seems the old idiom might be obsolete given the current code:

>>> from nltk import sent_tokenize, word_tokenize
>>> sentences = 'This is a foo bar sentence. This is another sentence.'
>>> tokenized_sents = [word_tokenize(sent) for sent in sent_tokenize(sentences)]
>>> tokenized_sents
[['This', 'is', 'a', 'foo', 'bar', 'sentence', '.'], ['This', 'is', 'another', 'sentence', '.']]

It could simply be:

>>> word_tokenize(sentences)
['This', 'is', 'a', 'foo', 'bar', 'sentence', '.', 'This', 'is', 'another', 'sentence', '.']

But we see that word_tokenize() flattens the list of lists of strings into a single list of strings.


Alternatively, you can try a new tokenizer that was added to NLTK, toktok.py, based on https://github.com/jonsafari/tok-tok, which requires no pre-trained models.
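For illustration, a minimal sketch using the Toktok tokenizer that ships with NLTK; it is rule-based, so no nltk.download() call is needed:

```python
from nltk.tokenize import ToktokTokenizer

# Toktok is purely rule-based: it works straight from the installed
# nltk package, with no pre-trained model on disk.
toktok = ToktokTokenizer()
print(toktok.tokenize('This is a sentence.'))
```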


1 vote

If you bundle lots of NLTK pickles into a Lambda function, the inline code editor will no longer be able to edit it. Use a Lambda layer instead: upload just the NLTK data as a layer and point NLTK at it in your code, like this:

nltk.data.path.append("/opt/tmp_nltk")
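A sketch of how that might look in a Lambda handler (the /opt/tmp_nltk layer path and the event shape are assumptions):

```python
import nltk

# Lambda layers are unpacked under /opt; point NLTK at the layer's data
# before any tokenizer tries to load the punkt pickle.
nltk.data.path.append("/opt/tmp_nltk")

from nltk.tokenize import word_tokenize

def handler(event, context):
    # the 'text' key is an assumed event shape, for illustration only
    return word_tokenize(event.get("text", ""))
```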

0 votes
import nltk
nltk.download('punkt')

from nltk.tokenize import sent_tokenize, word_tokenize

EXAMPLE_TEXT = "Hello Mr. Smith, how are you doing today?"

print(sent_tokenize(EXAMPLE_TEXT))

0 votes

nltk.download('punkt') is enough for tokenization.


0 votes

Try downloading both 'punkt' and 'punkt_tab' (newer NLTK releases load punkt_tab instead of punkt):

import nltk
nltk.download('punkt_tab')
nltk.download('punkt')