AttributeError: 'numpy.ndarray' object has no attribute 'toarray'

Lea*_*ner 5 python numpy machine-learning scikit-learn

I am extracting features from a text corpus using the tf-idf vectorizer and truncated SVD from scikit-learn. However, the algorithm I want to try out requires dense matrices, and the vectorizer returns sparse matrices, so I need to convert those matrices to dense arrays. But whenever I try to convert them, I get an error telling me that my numpy array object has no attribute 'toarray'. What am I doing wrong?

The function:

def feature_extraction(train,train_test,test_set):
    vectorizer = TfidfVectorizer(min_df = 3,strip_accents = "unicode",analyzer = "word",token_pattern = r'\w{1,}',ngram_range = (1,2))        

    print("fitting Vectorizer")
    vectorizer.fit(train)

    print("transforming text")
    train = vectorizer.transform(train)
    train_test = vectorizer.transform(train_test)
    test_set = vectorizer.transform(test_set)

    print("Dimensionality reduction")
    svd = TruncatedSVD(n_components = 100)
    svd.fit(train)
    train = svd.transform(train)
    train_test = svd.transform(train_test)
    test_set = svd.transform(test_set)

    print("convert to dense array")
    train = train.toarray()
    test_set = test_set.toarray()
    train_test = train_test.toarray()

    print(train.shape)
    return train,train_test,test_set
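One defensive way to write the conversion step, regardless of whether a given stage returns a sparse matrix or a plain ndarray, is to check with scipy.sparse.issparse before calling toarray(). This is a minimal sketch, not part of the original function, and densify is a hypothetical helper name:

import scipy.sparse as sp

def densify(X):
    # Convert to a dense ndarray only if X is actually a scipy sparse matrix;
    # plain numpy arrays (e.g. the output of TruncatedSVD.transform) pass through unchanged.
    return X.toarray() if sp.issparse(X) else X

With a helper like this, train = densify(train) works whether or not the SVD step ran.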

Traceback:

Traceback (most recent call last):
  File "C:\Users\Anonymous\workspace\final_submission\src\linearSVM.py", line 24, in <module>
    x_train,x_test,test_set = feature_extraction(x_train,x_test,test_set)
  File "C:\Users\Anonymous\workspace\final_submission\src\Preprocessing.py", line 57, in feature_extraction
    train = train.toarray()
AttributeError: 'numpy.ndarray' object has no attribute 'toarray'

Update: Willy pointed out that my assumption that the matrices are sparse might be wrong. So I tried feeding the data into the algorithm with the dimensionality reduction step included, and it actually works without any conversion. However, when I leave out the dimensionality reduction (which gives me around 53k features), I get the following error:

Traceback (most recent call last):
  File "C:\Users\Anonymous\workspace\final_submission\src\linearSVM.py", line 28, in <module>
    result = bayesian_ridge(x_train,x_test,y_train,y_test,test_set)
  File "C:\Users\Anonymous\workspace\final_submission\src\Algorithms.py", line 84, in bayesian_ridge
    algo = algo.fit(x_train,y_train[:,i])
  File "C:\Python27\lib\site-packages\sklearn\linear_model\bayes.py", line 136, in fit
    dtype=np.float)
  File "C:\Python27\lib\site-packages\sklearn\utils\validation.py", line 220, in check_arrays
    raise TypeError('A sparse matrix was passed, but dense '
TypeError: A sparse matrix was passed, but dense data is required. Use X.toarray() to convert to a dense numpy array.

Can someone explain this?
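The traceback itself points at what is going on: BayesianRidge needs a dense array, and without the TruncatedSVD step the vectorizer output is still a scipy sparse matrix, so the toarray() conversion belongs on that output. Below is a minimal sketch of the no-SVD path, using toy documents and default vectorizer settings as stand-ins for the question's data; note that densifying roughly 53k real TF-IDF features this way can use a great deal of memory:

import scipy.sparse as sp
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical stand-ins for the question's train / train_test text lists.
train = ["first toy document", "second toy document", "third toy document"]
train_test = ["held out toy document"]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
vectorizer.fit(train)

x_train = vectorizer.transform(train)   # scipy sparse matrix
print(sp.issparse(x_train))             # True

# Without dimensionality reduction, this is where the dense conversion is needed.
x_train = x_train.toarray()
x_test = vectorizer.transform(train_test).toarray()
print(type(x_train))                    # <class 'numpy.ndarray'>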

Update 2

As requested, I'll post all of the code involved. Since it is scattered over different files, I'll post it step by step. For clarity's sake, I'll leave all the module imports out.

This is how I preprocess my data:

def regexp(data):
    for row in range(len(data)):
        data[row] = re.sub(r'[\W_]+'," ",data[row])
        return data

def clean_the_text(data):
    alist = []
    data = nltk.word_tokenize(data)
    for j in data:
        j = j.lower()
        alist.append(j.rstrip('\n'))
    alist = " ".join(alist)
    return alist
def loop_data(data):
    for i in range(len(data)):
        data[i] = clean_the_text(data[i])
    return data  


if __name__ == "__main__":
    print("loading train")
    train_text = porter_stemmer(loop_data(regexp(list(np.array(p.read_csv(os.path.join(dir,"train.csv")))[:,1]))))
    print("loading test_set")
    test_set = porter_stemmer(loop_data(regexp(list(np.array(p.read_csv(os.path.join(dir,"test.csv")))[:,1]))))

After splitting the training set into x_train and x_test for cross-validation, I transform the data with the feature_extraction function above:

x_train,x_test,test_set = feature_extraction(x_train,x_test,test_set)

Finally, I feed them into my algorithm:

def bayesian_ridge(x_train,x_test,y_train,y_test,test_set):
    algo = linear_model.BayesianRidge()
    algo = algo.fit(x_train,y_train)
    pred = algo.predict(x_test)
    error = pred - y_test
    result.append(algo.predict(test_set))
    print("Bayes_error: ",cross_val(error))
    return result

Fre*_*Foo 2

TruncatedSVD.transform returns an array, not a sparse matrix. In fact, in the current version of scikit-learn, only the vectorizers return sparse matrices.
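A minimal sketch illustrating this, with toy documents and default parameters rather than the question's setup:

import scipy.sparse as sp
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["one toy document", "another toy document", "yet another toy document"]

tfidf = TfidfVectorizer().fit_transform(docs)
print(sp.issparse(tfidf))       # True: the vectorizer output is sparse, so .toarray() works here

reduced = TruncatedSVD(n_components=2).fit_transform(tfidf)
print(sp.issparse(reduced))     # False: already a plain numpy.ndarray, so .toarray() fails

So in the feature_extraction function above, the data is already dense after svd.transform, and the three toarray() calls can simply be dropped (or guarded with a sparsity check as sketched earlier).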