Run the following commands:
import nltk
nltk.download()  # When I run this command to download the corpora, it times out and the window described on the official site never appears, so the download obviously failed. What can I do?
In [12]: nltk.download()
Traceback (most recent call last):
File "<ipython-input-12-...>", line 1, in <module>
nltk.download()
File "D:\ProgramData\Anaconda3\lib\site-packages\nltk\downloader.py", line 765, in download
self._interactive_download()
File "D:\ProgramData\Anaconda3\lib\site-packages\nltk\downloader.py", line 1115, in _interactive_download
DownloaderGUI(self).mainloop()
File "D:\ProgramData\Anaconda3\lib\site-packages\nltk\downloader.py", line 1412, in __init__
self._fill_table()
File "D:\ProgramData\Anaconda3\lib\site-packages\nltk\downloader.py", line 1744, in _fill_table
items = self._ds.collections()
File "D:\ProgramData\Anaconda3\lib\site-packages\nltk\downloader.py", line 593, in collections
self._update_index()
File "D:\ProgramData\Anaconda3\lib\site-packages\nltk\downloader.py", line 954, in _update_index
ElementTree.parse(urlopen(self._url)).getroot()
File "D:\ProgramData\Anaconda3\lib\urllib\request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "D:\ProgramData\Anaconda3\lib\urllib\request.py", line 525, in open
response = self._open(req, data)
File "D:\ProgramData\Anaconda3\lib\urllib\request.py", line 542, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
File "D:\ProgramData\Anaconda3\lib\urllib\request.py", line 502, in _call_chain
result = func(*args)
File "D:\ProgramData\Anaconda3\lib\urllib\request.py", line 1393, in https_open
return self.do_open(http.client.HTTPSConnection, req,
File "D:\ProgramData\Anaconda3\lib\urllib\request.py", line 1354, in do_open
r = h.getresponse()
File "D:\ProgramData\Anaconda3\lib\http\client.py", line 1347, in getresponse
response.begin()
File "D:\ProgramData\Anaconda3\lib\http\client.py", line 307, in begin
version, status, reason = self._read_status()
File "D:\ProgramData\Anaconda3\lib\http\client.py", line 268, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "D:\ProgramData\Anaconda3\lib\socket.py", line 669, in readinto
return self._sock.recv_into(b)
File "D:\ProgramData\Anaconda3\lib\ssl.py", line 1241, in recv_into
return self.read(nbytes, buffer)
File "D:\ProgramData\Anaconda3\lib\ssl.py", line 1099, in read
return self._sslobj.read(len, buffer)
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Suggested fix: bypass DNS resolution by binding the host locally in the hosts file.
Open a DNS lookup site such as http://tool.chinaz.com/dns in your browser, enter the domain that serves https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml (that is, raw.githubusercontent.com), and click the check button. The results appear in the list below, as shown in the figure.
Supposedly the smaller the TTL, the faster the download, but only the few Taiwan nodes seemed to return usable IPs; for some reason the Hunan node with the smaller TTL returned no IP at all. I will refresh and try again later.
I picked the first result under Taiwan Chunghwa Telecom; the addresses it resolved to are:
185.199.108.133 [United States, GitHub + Fastly node]
185.199.109.133 [United States, GitHub + Fastly node]
185.199.111.133 [United States, GitHub + Fastly node]
185.199.110.133 [United States, GitHub + Fastly node]
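Before editing the hosts file, it can also help to check which address your own machine currently resolves the domain to and compare it with the results from the lookup site. A minimal sketch using only the standard library:

import socket

# Ask the local resolver which IPs raw.githubusercontent.com currently maps to.
hostname, aliases, addresses = socket.gethostbyname_ex("raw.githubusercontent.com")
print(hostname, addresses)

If this call itself hangs or returns nothing useful, that is another sign the hosts-file binding below is worth trying.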
"C:\Windows\System32\drivers\etc\lmhosts.sam"用記事本打開這個文件
在該文本文件中的末尾加入如下兩行命令
#解決https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml下載慢的問題
185.199.108.133 raw.githubusercontent.com
3.然后保存并關(guān)閉這個文件
4.然后再在python里面執(zhí)行nltk.download()命令進(jìn)行數(shù)據(jù)下載,看能否下載成功
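The sketch simply fetches the index file that nltk.download() itself requests (running ipconfig /flushdns in a command prompt beforehand does not hurt):

from urllib.request import urlopen

# Fetch the package index directly; if this returns within the timeout,
# the binding of raw.githubusercontent.com in the hosts file is working.
url = "https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml"
with urlopen(url, timeout=15) as resp:
    print(resp.status, len(resp.read()), "bytes")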
然后終于可以彈出官網(wǎng)所說的那個窗口了。注意那個窗口不會直接彈出到你面前,需要自己將鼠標(biāo)劃到如圖所示的地方,然后點擊才能查看。
彈出一個窗口
下載已經(jīng)進(jìn)入下載了,如圖所示
In the end I had to close the NLTK Downloader window, which terminated the download.
The Python console then showed the following:
In [2]: import nltk
...: #
...: nltk.download()
showing info https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml
showing info https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml
Exception in Tkinter callback
Traceback (most recent call last):
File "D:\ProgramData\Anaconda3\lib\tkinter\__init__.py", line 1883, in __call__
return self.func(*args)
File "D:\ProgramData\Anaconda3\lib\tkinter\__init__.py", line 804, in callit
func(*args)
File "D:\ProgramData\Anaconda3\lib\site-packages\nltk\downloader.py", line 2154, in _monitor_message_queue
self._select(msg.package.id)
AttributeError: 'str' object has no attribute 'id'
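If the Tkinter GUI keeps crashing like this, you can bypass it entirely: nltk.download() also accepts a package or collection id and then downloads without opening any window, and the same is available from the command line. A short sketch:

import nltk

# Non-interactive download of a single package, or of the "popular" collection
# that bundles the most commonly used corpora and models; no GUI involved.
nltk.download("nps_chat")
nltk.download("popular")

# Equivalent from a terminal:
#   python -m nltk.downloader popular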
If you stop the download early, the call may still return True, as shown below:
import nltk
nltk.download()
showing info https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml
showing info https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml
Out[2]: True
But a return value of True by no means tells you the download succeeded; keep that firmly in mind.
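Since the return value is not a reliable signal, a more trustworthy check is nltk.data.find, which raises LookupError when a resource is not actually on disk; a minimal sketch:

import nltk

# Check whether the resources are really installed somewhere on nltk.data.path.
for resource in ["corpora/nps_chat", "corpora/wordnet"]:
    try:
        print(resource, "->", nltk.data.find(resource))
    except LookupError:
        print(resource, "is NOT installed")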
Another kind of error can also show up, like the one in the screenshot below.
As you can see, getting hold of the nltk_data folder this way is painful, although we did seem to download part of the data files successfully. Frustrating.
So let's try the statements below. It looks as if downloading the files ourselves from some other site and putting them under C:\Users\Administrator\AppData\Roaming\nltk_data should be enough.
nltk.download("nps_chat")
[nltk_data] Downloading package nps_chat to
[nltk_data] C:\Users\Administrator\AppData\Roaming\nltk_data...
[nltk_data] Unzipping corpora\nps_chat.zip.
Out[3]: True
nltk.download("nps_chat")
[nltk_data] Downloading package nps_chat to
[nltk_data] C:\Users\Administrator\AppData\Roaming\nltk_data...
[nltk_data] Package nps_chat is already up-to-date!
Out[4]: True
Under that folder you can see a corpora subfolder; double-click into it and you can see that nps_chat has been unzipped successfully.
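To double-check that the unzipped corpus is actually usable and not just present on disk, you can load something from it; a minimal sketch:

from nltk.corpus import nps_chat

# If the corpus is installed correctly this prints the first chat post as a
# list of tokens; otherwise it raises the familiar LookupError.
print(nps_chat.posts()[0])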
Then run the following command to try another package:
In [7]: nltk.download("alpino")
[nltk_data] Downloading package alpino to
[nltk_data] C:\Users\Administrator\AppData\Roaming\nltk_data...
[nltk_data] Unzipping corpora\alpino.zip.
Out[7]: True
It reports that the alpino corpus has been unzipped; here is a screenshot of the folder.
Running nltk.download("alpino") again simply reports that alpino is already up to date:
nltk.download("alpino")
[nltk_data] Downloading package alpino to
[nltk_data] C:\Users\Administrator\AppData\Roaming\nltk_data...
[nltk_data] Package alpino is already up-to-date!
Out[8]: True
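As a side note, nltk.download() can also be pointed at a directory other than the Roaming default through its download_dir argument; the chosen directory then has to be on nltk.data.path so that the corpus loaders can find it. A small sketch, using D:\nltk_data purely as an example path:

import nltk

# Download a single package into a custom directory (example path).
custom_dir = r"D:\nltk_data"
nltk.download("alpino", download_dir=custom_dir)

# Make sure the directory is on NLTK's search path (on this machine it already
# is by default, as the search list in the error below shows).
if custom_dir not in nltk.data.path:
    nltk.data.path.append(custom_dir)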
In other words, if we download these corpus files from somewhere else and save them under one of these locations, NLTK should be able to use them.
The directories where the data can be placed are the ones NLTK searches by default; they are listed whenever a resource lookup fails. For example, calling wnl.lemmatize('countries') without the wordnet data produces the following error, which shows the full search list:
wnl.lemmatize('countries')
Traceback (most recent call last):
File "D:\ProgramData\Anaconda3\lib\site-packages\nltk\corpus\util.py", line 83, in __load
root = nltk.data.find("{}/{}".format(self.subdir, zip_name))
File "D:\ProgramData\Anaconda3\lib\site-packages\nltk\data.py", line 585, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource wordnet not found.
Please use the NLTK Downloader to obtain the resource:
import nltk
nltk.download('wordnet')
For more information see: https://www.nltk.org/data.html
Attempted to load corpora/wordnet.zip/wordnet/
Searched in:
- 'C:\\Users\\Administrator/nltk_data'
- 'D:\\ProgramData\\Anaconda3\\nltk_data'
- 'D:\\ProgramData\\Anaconda3\\share\\nltk_data'
- 'D:\\ProgramData\\Anaconda3\\lib\\nltk_data'
- 'C:\\Users\\Administrator\\AppData\\Roaming\\nltk_data'
- 'C:\\nltk_data'
- 'D:\\nltk_data'
- 'E:\\nltk_data'
**********************************************************************
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<ipython-input-...>", line 1, in <module>
wnl.lemmatize('countries')
File "D:\ProgramData\Anaconda3\lib\site-packages\nltk\stem\wordnet.py", line 38, in lemmatize
lemmas = wordnet._morphy(word, pos)
File "D:\ProgramData\Anaconda3\lib\site-packages\nltk\corpus\util.py", line 120, in __getattr__
self.__load()
File "D:\ProgramData\Anaconda3\lib\site-packages\nltk\corpus\util.py", line 85, in __load
raise e
File "D:\ProgramData\Anaconda3\lib\site-packages\nltk\corpus\util.py", line 80, in __load
root = nltk.data.find("{}/{}".format(self.subdir, self.__name))
File "D:\ProgramData\Anaconda3\lib\site-packages\nltk\data.py", line 585, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource wordnet not found.
Please use the NLTK Downloader to obtain the resource:
import nltk
nltk.download('wordnet')
For more information see: https://www.nltk.org/data.html
Attempted to load corpora/wordnet
Searched in:
- 'C:\\Users\\Administrator/nltk_data'
- 'D:\\ProgramData\\Anaconda3\\nltk_data'
- 'D:\\ProgramData\\Anaconda3\\share\\nltk_data'
- 'D:\\ProgramData\\Anaconda3\\lib\\nltk_data'
- 'C:\\Users\\Administrator\\AppData\\Roaming\\nltk_data'
- 'C:\\nltk_data'
- 'D:\\nltk_data'
- 'E:\\nltk_data'
**********************************************************************
A look inside those folders confirms it: the wordnet data was never downloaded, so naturally it cannot be found in any of these paths. Only nps_chat and alpino were downloaded successfully.
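You can also list the search directories programmatically instead of reading them off the error message; a minimal sketch:

import nltk

# The directories NLTK searches for data, in order; this matches the
# "Searched in:" list shown in the LookupError above.
for p in nltk.data.path:
    print(p)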
So where can these files be downloaded? Open the link below and fetch the resources manually:
https://github.com/nltk/nltk_data
Unfortunately the download fails yet again because of the network.
https://github.com/nltk/nltk_data/archive/refs/heads/gh-pages.zip — paste this link straight into the browser address bar and, surprisingly, the download starts immediately.
Quite remarkable.
You can watch how much has been downloaded, although for some reason the total file size is never shown.
When the download finishes we have a compressed file like the one in the figure below. I have also uploaded it to Baidu Netdisk for anyone who needs it:
Link: https://pan.baidu.com/s/1NIEiOWxxViTj4bzAR5MwJw
Extraction code: trk6
Extract the archive as shown below.
Then open the packages folder inside it, shown below; these are the files we need.
Copy or move the contents of that folder into C:\Users\Administrator\AppData\Roaming\nltk_data.
Before pasting, a little preparation is needed:
If the nltk_data folder does not exist yet, create it.
If the nltk_data folder already exists but has something in it, delete everything inside it first.
Then paste the files. When the paste finishes, the folder looks like the figure below.
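If you prefer to script the copy instead of doing it in Explorer, a small sketch is below; the source path is only an example and should be changed to wherever you extracted the archive:

import os
import shutil

# Example source path: the packages folder inside the extracted archive.
src = r"D:\downloads\nltk_data-gh-pages\packages"
# Destination: the Roaming nltk_data folder that NLTK searches by default.
dst = os.path.join(os.environ["APPDATA"], "nltk_data")

os.makedirs(dst, exist_ok=True)
for name in os.listdir(src):
    s = os.path.join(src, name)
    d = os.path.join(dst, name)
    if os.path.isdir(s):
        shutil.copytree(s, d, dirs_exist_ok=True)  # dirs_exist_ok needs Python 3.8+
    else:
        shutil.copy2(s, d)
print("copied to", dst)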
Now we can run the code:
import nltk
# nltk.download()  # not needed if the related resources are already on your machine
from nltk.stem import SnowballStemmer
from nltk.stem import WordNetLemmatizer
wnl = WordNetLemmatizer()
wnl.lemmatize('countries')
The call now runs correctly: lemmatizing 'countries' yields 'country'.
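For comparison, the SnowballStemmer imported above works by rule-based suffix stripping and needs no corpus, while the WordNet lemmatizer relies on the wordnet data we just installed; a small sketch of the difference, assuming the data is in place:

from nltk.stem import SnowballStemmer, WordNetLemmatizer

stemmer = SnowballStemmer("english")
wnl = WordNetLemmatizer()

# The stemmer chops suffixes mechanically; the lemmatizer looks the word up in WordNet.
print(stemmer.stem("countries"))    # expected: countri
print(wnl.lemmatize("countries"))   # expected: country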





