LogisticRegression: Unknown label type: 'continuous' using sklearn in python

har*_*on4 45 python numpy scikit-learn

I have the following code to test some of the most popular ML algorithms from the sklearn python library:

import numpy as np
from sklearn                        import metrics, svm
from sklearn.linear_model           import LinearRegression
from sklearn.linear_model           import LogisticRegression
from sklearn.tree                   import DecisionTreeClassifier
from sklearn.neighbors              import KNeighborsClassifier
from sklearn.discriminant_analysis  import LinearDiscriminantAnalysis
from sklearn.naive_bayes            import GaussianNB
from sklearn.svm                    import SVC

trainingData    = np.array([ [2.3, 4.3, 2.5],  [1.3, 5.2, 5.2],  [3.3, 2.9, 0.8],  [3.1, 4.3, 4.0]  ])
trainingScores  = np.array( [3.4, 7.5, 4.5, 1.6] )
predictionData  = np.array([ [2.5, 2.4, 2.7],  [2.7, 3.2, 1.2] ])

clf = LinearRegression()
clf.fit(trainingData, trainingScores)
print("LinearRegression")
print(clf.predict(predictionData))

clf = svm.SVR()
clf.fit(trainingData, trainingScores)
print("SVR")
print(clf.predict(predictionData))

clf = LogisticRegression()
clf.fit(trainingData, trainingScores)
print("LogisticRegression")
print(clf.predict(predictionData))

clf = DecisionTreeClassifier()
clf.fit(trainingData, trainingScores)
print("DecisionTreeClassifier")
print(clf.predict(predictionData))

clf = KNeighborsClassifier()
clf.fit(trainingData, trainingScores)
print("KNeighborsClassifier")
print(clf.predict(predictionData))

clf = LinearDiscriminantAnalysis()
clf.fit(trainingData, trainingScores)
print("LinearDiscriminantAnalysis")
print(clf.predict(predictionData))

clf = GaussianNB()
clf.fit(trainingData, trainingScores)
print("GaussianNB")
print(clf.predict(predictionData))

clf = SVC()
clf.fit(trainingData, trainingScores)
print("SVC")
print(clf.predict(predictionData))

The first two work fine, but I get the following error on the LogisticRegression call:

root@ubupc1:/home/ouhma# python stack.py 
LinearRegression
[ 15.72023529   6.46666667]
SVR
[ 3.95570063  4.23426243]
Traceback (most recent call last):
  File "stack.py", line 28, in <module>
    clf.fit(trainingData, trainingScores)
  File "/usr/local/lib/python2.7/dist-packages/sklearn/linear_model/logistic.py", line 1174, in fit
    check_classification_targets(y)
  File "/usr/local/lib/python2.7/dist-packages/sklearn/utils/multiclass.py", line 172, in check_classification_targets
    raise ValueError("Unknown label type: %r" % y_type)
ValueError: Unknown label type: 'continuous'

The input data is the same as in the previous calls, so what is going on here?

And by the way, why is there such a huge difference between the first predictions of the LinearRegression() and SVR() algorithms (15.72 vs 3.95)?

Max*_*ers 56

You are passing floats to a classifier, which expects categorical values as the target vector. If you convert the values to int they will be accepted as input (although it is questionable whether that is the right way to do it).

It would be better to convert your training scores using scikit's LabelEncoder.

The same applies to your DecisionTree and KNeighbors classifiers.

from sklearn import preprocessing
from sklearn import utils

lab_enc = preprocessing.LabelEncoder()
encoded = lab_enc.fit_transform(trainingScores)
print(encoded)
>>> [1 3 2 0]

print(utils.multiclass.type_of_target(trainingScores))
>>> continuous

print(utils.multiclass.type_of_target(trainingScores.astype('int')))
>>> multiclass

print(utils.multiclass.type_of_target(encoded))
>>> multiclass
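Putting the answer together, a minimal sketch (reusing the arrays from the question) of encoding the continuous scores before fitting the classifier:

```python
import numpy as np
from sklearn import preprocessing
from sklearn.linear_model import LogisticRegression

trainingData   = np.array([[2.3, 4.3, 2.5], [1.3, 5.2, 5.2], [3.3, 2.9, 0.8], [3.1, 4.3, 4.0]])
trainingScores = np.array([3.4, 7.5, 4.5, 1.6])
predictionData = np.array([[2.5, 2.4, 2.7], [2.7, 3.2, 1.2]])

# Encode each distinct score value as an integer class label
lab_enc = preprocessing.LabelEncoder()
encoded = lab_enc.fit_transform(trainingScores)

clf = LogisticRegression()
clf.fit(trainingData, encoded)        # no more "Unknown label type" error
pred = clf.predict(predictionData)    # predicted class indices

# Map the predicted indices back to the original score values
print(lab_enc.inverse_transform(pred))
```

Note that this treats each distinct score as its own class, which only makes sense if the scores really are category labels rather than measurements.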

  • But in this example the input data has floats while using the LogisticRegression function: http://machinelearningmastery.com/compare-machine-learning-algorithms-python-scikit-learn/ ... and it works fine. Why? (3 upvotes)
  • Thank you! So I have to convert `2.3` to `23` and so on, right? Is there an elegant way to do that conversion using numpy or pandas? (2 upvotes)
  • The input can be floats, but the output needs to be categorical, i.e. int. In that example, column 8 is only 0 or 1. Often you have categorical labels, e.g. ['red','big','sick'], and you need to convert them to numerical values. Try http://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features or http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html (2 upvotes)
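As the last comment notes, string class labels can also be mapped to integers; a minimal sketch using the example labels from that comment:

```python
from sklearn.preprocessing import LabelEncoder

labels = ['red', 'big', 'sick', 'red', 'big']

enc = LabelEncoder()
y = enc.fit_transform(labels)       # classes are assigned in sorted order

print(enc.classes_)                 # ['big' 'red' 'sick']
print(y)                            # [1 0 2 1 0]
print(enc.inverse_transform(y))     # back to the original strings
```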

Tho*_* G. 31

LogisticRegression is not a regression algorithm but a classification algorithm.

The Y variable must be a categorical class (for example `0` or `1`), not a `continuous` variable; that would be a regression problem.
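As a quick check of this point, a sketch (with made-up `0`/`1` labels, and the feature matrix from the question) showing that LogisticRegression fits fine once the target is categorical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.multiclass import type_of_target

X = np.array([[2.3, 4.3, 2.5], [1.3, 5.2, 5.2], [3.3, 2.9, 0.8], [3.1, 4.3, 4.0]])
y_continuous = np.array([3.4, 7.5, 4.5, 1.6])  # this target raises ValueError
y_classes    = np.array([0, 1, 1, 0])          # this target is accepted

print(type_of_target(y_continuous))   # 'continuous'
print(type_of_target(y_classes))      # 'binary'

clf = LogisticRegression().fit(X, y_classes)
print(clf.predict(X))                 # class predictions, not continuous scores
```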

  • This should be the accepted answer. LogisticRegression is indeed a classifier, hence the error. (3 upvotes)
  • I hope this isn't spam, but I've landed here several times and the error message is not very intuitive. (2 upvotes)

Sam*_*rry 19

I ran into the same problem when trying to feed floats into a classifier. I wanted to keep the floats, rather than integers, for accuracy. Try using regression algorithms instead. For example:

import numpy as np
from sklearn import linear_model
from sklearn import svm

regressors = [
    svm.SVR(),
    linear_model.SGDRegressor(),
    linear_model.BayesianRidge(),
    linear_model.LassoLars(),
    linear_model.ARDRegression(),
    linear_model.PassiveAggressiveRegressor(),
    linear_model.TheilSenRegressor(),
    linear_model.LinearRegression()]

trainingData    = np.array([ [2.3, 4.3, 2.5],  [1.3, 5.2, 5.2],  [3.3, 2.9, 0.8],  [3.1, 4.3, 4.0]  ])
trainingScores  = np.array( [3.4, 7.5, 4.5, 1.6] )
predictionData  = np.array([ [2.5, 2.4, 2.7],  [2.7, 3.2, 1.2] ])

for reg in regressors:
    print(reg)
    reg.fit(trainingData, trainingScores)
    print(reg.predict(predictionData), '\n')
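If you go this route, each regressor also exposes a `score` method (R² on the given data), which gives a rough way to compare them; a sketch, keeping in mind that scoring on the 4 training points only measures fit, not generalization:

```python
import numpy as np
from sklearn import linear_model, svm

trainingData   = np.array([[2.3, 4.3, 2.5], [1.3, 5.2, 5.2], [3.3, 2.9, 0.8], [3.1, 4.3, 4.0]])
trainingScores = np.array([3.4, 7.5, 4.5, 1.6])

for reg in [linear_model.LinearRegression(), svm.SVR()]:
    reg.fit(trainingData, trainingScores)
    # R^2 of 1.0 means a perfect fit on the training data
    print(type(reg).__name__, reg.score(trainingData, trainingScores))
```

With 4 samples and 3 features, LinearRegression can fit the training data essentially perfectly, while SVR's epsilon-insensitive loss leaves some residual; that difference in model flexibility is also part of why their predictions diverge so much in the question.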