In NLTK there is an nltk.download() function that downloads the datasets that ship with the NLP suite.
In sklearn, the docs describe loading datasets (http://scikit-learn.org/stable/datasets/) and fetching data from http://mldata.org/, but for the remaining datasets the instructions say to download them from the source.
Where should I save the data downloaded from the source? Are there any other steps after saving the data to the right directory and before I can call it from my Python code?
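From the docs it looks like the sklearn fetchers cache whatever they download under a "data home" directory, which can be inspected with get_data_home() and overridden either with the data_home argument of the fetch_* functions or the SCIKIT_LEARN_DATA environment variable. A minimal sketch (the printed path below is only illustrative):

>>> from sklearn.datasets import get_data_home
>>> get_data_home()   # where fetchers cache downloads; defaults to ~/scikit_learn_data
'/home/user/scikit_learn_data'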
Is there an example of how to download, say, the 20newsgroups dataset?
I have sklearn installed and tried the following, but I get an IOError, most likely because I have not downloaded the dataset from the source.
>>> from sklearn.datasets import fetch_20newsgroups
>>> fetch_20newsgroups(subset='train')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/sklearn/datasets/twenty_newsgroups.py", line 207, in fetch_20newsgroups
    cache_path=cache_path)
  File "/usr/local/lib/python2.7/dist-packages/sklearn/datasets/twenty_newsgroups.py", line 89, in download_20newsgroups
    tarfile.open(archive_path, "r:gz").extractall(path=target_dir)
  File "/usr/lib/python2.7/tarfile.py", line 1678, in open
    return func(name, filemode, fileobj, **kwargs)
  File "/usr/lib/python2.7/tarfile.py", line 1727, in gzopen
    **kwargs)
  File "/usr/lib/python2.7/tarfile.py", line 1705, in taropen
    return cls(name, mode, fileobj, **kwargs)
  File "/usr/lib/python2.7/tarfile.py", line 1574, in __init__
    self.firstmember = self.next()
  File "/usr/lib/python2.7/tarfile.py", line 2334, in next
    raise ReadError("empty file")
tarfile.ReadError: empty file
A network connectivity problem has probably corrupted the source archive on your disk. Delete the 20newsgroups-related files and folders from the scikit_learn_data folder in your home directory and try again:
$ cd ~/scikit_learn_data
$ rm -rf 20news_home
$ rm 20news-bydate.pkz
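After clearing the cache, re-running the fetch should download the archive again. A quick sanity check (download_if_missing=True is the default, shown only for clarity; the counts are those of the standard by-date split):

>>> from sklearn.datasets import fetch_20newsgroups
>>> newsgroups_train = fetch_20newsgroups(subset='train', download_if_missing=True)
>>> len(newsgroups_train.target_names)   # the 20 newsgroup categories
20
>>> len(newsgroups_train.data)           # documents in the train subset
11314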