Question by DN1*_*DN1 (score 3), tagged: python, machine-learning, logistic-regression
I am a complete beginner at machine learning and Python, and I have been tasked with coding logistic regression from scratch to understand what happens under the hood. So far I have coded the hypothesis function, the cost function, and gradient descent, and then assembled them into logistic regression. However, when I print the accuracy I get a low value (0.69) that does not change with more iterations or a different learning rate. My question is: is there something wrong with my accuracy code below? Any pointers in the right direction would be appreciated.
import math
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

data = pd.read_csv("data.csv")   # "data.csv" is a placeholder; the post does not show how the data was loaded
min_max_scaler = MinMaxScaler()

X = data[['radius_mean', 'texture_mean', 'perimeter_mean',
          'area_mean', 'smoothness_mean', 'compactness_mean', 'concavity_mean',
          'concave points_mean', 'symmetry_mean', 'fractal_dimension_mean',
          'radius_se', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se',
          'compactness_se', 'concavity_se', 'concave points_se', 'symmetry_se',
          'fractal_dimension_se', 'radius_worst', 'texture_worst',
          'perimeter_worst', 'area_worst', 'smoothness_worst',
          'compactness_worst', 'concavity_worst', 'concave points_worst',
          'symmetry_worst', 'fractal_dimension_worst']]
X = np.array(X)
X = min_max_scaler.fit_transform(X)
Y = data["diagnosis"].map({'M':1,'B':0})
Y = np.array(Y)
X_train,X_test,Y_train,Y_test = train_test_split(X,Y,test_size=0.25)
X = data["diagnosis"].map(lambda x: float(x))   # note: this line overwrites the feature matrix X with the labels
def Sigmoid(z):
    # numerically stable sigmoid: never calls math.exp on a large positive argument
    if z < 0:
        return 1 - 1/(1 + math.exp(z))
    else:
        return 1/(1 + math.exp(-z))

def Hypothesis(theta, x):
    # dot product of theta and one sample, passed through the sigmoid
    z = 0
    for i in range(len(theta)):
        z += x[i]*theta[i]
    return Sigmoid(z)
def Cost_Function(X,Y,theta,m):
    # cross-entropy cost, guarding log against zero arguments
    sumOfErrors = 0
    for i in range(m):
        xi = X[i]
        hi = Hypothesis(theta,xi)
        if Y[i] == 1:
            error = Y[i] * math.log(hi if hi > 0 else 1)
        elif Y[i] == 0:
            error = (1-Y[i]) * math.log(1-hi if 1-hi > 0 else 1)
        sumOfErrors += error
    constant = -1/m
    J = constant * sumOfErrors
    #print ('cost is: ', J )
    return J
def Cost_Function_Derivative(X,Y,theta,j,m,alpha):
    sumErrors = 0
    for i in range(m):
        xi = X[i]
        xij = xi[j]
        hi = Hypothesis(theta,X[i])
        error = (hi - Y[i])*xij
        sumErrors += error
    m = len(Y)
    constant = float(alpha)/float(m)
    J = constant * sumErrors
    return J
def Gradient_Descent(X,Y,theta,m,alpha):
    # one simultaneous update of every theta[j]; alpha is already
    # folded into Cost_Function_Derivative
    new_theta = []
    for j in range(len(theta)):
        CFDerivative = Cost_Function_Derivative(X,Y,theta,j,m,alpha)
        new_theta_value = theta[j] - CFDerivative
        new_theta.append(new_theta_value)
    return new_theta
def Accuracy(theta):
    correct = 0
    length = len(X_test, Hypothesis(X,theta))
    for i in range(length):
        prediction = round(Hypothesis(X[i],theta))
        answer = Y[i]
        if prediction == answer.all():
            correct += 1
    my_accuracy = (correct / length)*100
    print ('LR Accuracy %: ', my_accuracy)
def Logistic_Regression(X,Y,alpha,theta,num_iters):
    theta = np.zeros(X.shape[1])   # note: this overwrites the initial_theta argument
    m = len(Y)
    for x in range(num_iters):
        new_theta = Gradient_Descent(X,Y,theta,m,alpha)
        theta = new_theta
        if x % 100 == 0:
            Cost_Function(X,Y,theta,m)
            print ('theta: ', theta)
            print ('cost: ', Cost_Function(X,Y,theta,m))
    Accuracy(theta)

initial_theta = [0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]
alpha = 0.0001
iterations = 1000
Logistic_Regression(X,Y,alpha,initial_theta,iterations)
This uses data from the Wisconsin breast cancer dataset (https://www.kaggle.com/uciml/breast-cancer-wisconsin-data), where I weight all 30 features. Changing the features to ones known to be correlated also does not change my accuracy.
Answer by val*_*e55 (score 10)
Python gives us the scikit-learn library, which makes our job easier; this worked for me:
from sklearn.metrics import accuracy_score

# 'log' is a previously fitted classifier (e.g. sklearn's LogisticRegression)
y_pred = log.predict(x_test)
score = accuracy_score(y_test, y_pred)
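The snippet above assumes log is an already-fitted model. For context, a self-contained version might look like the following sketch; the file name, column handling, and train/test split are assumptions based on the question's dataset, not part of the original answer:

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# hypothetical setup using the same Wisconsin breast cancer CSV as the question
data = pd.read_csv("data.csv")
# drop the id/label columns and the empty trailing column some versions of this CSV contain
x = data.drop(columns=["id", "diagnosis"]).dropna(axis=1, how="all")
y = data["diagnosis"].map({'M': 1, 'B': 0})
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25)

log = LogisticRegression(max_iter=10000)   # 'log' matches the name used above
log.fit(x_train, y_train)
y_pred = log.predict(x_test)
print(accuracy_score(y_test, y_pred))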
Accuracy is one of the most intuitive performance measures: it is simply the ratio of correctly predicted observations to all observations. Higher accuracy means a better-performing model.
Accuracy = (TP + TN) / (TP + FP + FN + TN)

TP = True positives
TN = True negatives
FP = False positives
FN = False negatives
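As a quick check, here is a minimal sketch (not from the original answer) of how those four counts produce the accuracy, using scikit-learn's confusion_matrix on made-up binary labels:

import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# hypothetical labels and predictions
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

# for binary labels {0, 1}, confusion_matrix returns [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + fp + fn + tn)
assert accuracy == accuracy_score(y_true, y_pred)
print(accuracy)   # 0.75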
Accuracy is only an appropriate measure when your false positives and false negatives have similar costs. When they don't, a better metric is the F1 score, given by
F1-score = 2*(Recall*Precision)/(Recall+Precision), where
Precision = TP/(TP+FP)
Recall = TP/(TP+FN)
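And a matching sketch (again illustrative, reusing the toy labels from above) that computes precision, recall, and F1 by hand and cross-checks them against scikit-learn's own functions:

import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score

# the same hypothetical labels as in the accuracy sketch
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

precision = tp / (tp + fp)   # 3 / 4
recall = tp / (tp + fn)      # 3 / 4
f1 = 2 * (recall * precision) / (recall + precision)

assert precision == precision_score(y_true, y_pred)
assert recall == recall_score(y_true, y_pred)
assert f1 == f1_score(y_true, y_pred)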
Read more here:
https://en.wikipedia.org/wiki/Precision_and_recall
The beauty of machine learning with Python is that important modules like scikit-learn are open source, so you can always look at the actual code. Use the link below to browse the scikit-learn metrics source, which will show you how scikit-learn computes the accuracy score when you do:
from sklearn.metrics import accuracy_score
accuracy_score(y_true, y_pred)
https://github.com/scikit-learn/scikit-learn/tree/master/sklearn/metrics
I'm not sure how you arrived at a value of 0.0001 for alpha, but I think it is too low. Running your code with the cancer data shows that the cost is going down with each iteration; it is just doing so very slowly.
When I raised it to 0.5, the cost still went down, but at a more reasonable rate. After 1000 iterations it reported:
cost: 0.23668000993020666
After fixing the Accuracy function, I got 92% on the test portion of the data.
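The fixed loop version is not shown in the answer; a minimal sketch of what that fix likely looks like (evaluating on the held-out X_test/Y_test instead of the overwritten X, and passing the arguments to Hypothesis in the right order) would be:

def Accuracy(theta):
    # score against the held-out test split, not the training data
    correct = 0
    length = len(X_test)
    for i in range(length):
        prediction = round(Hypothesis(theta, X_test[i]))
        if prediction == Y_test[i]:
            correct += 1
    my_accuracy = (correct / length) * 100
    print('LR Accuracy %: ', my_accuracy)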
You already have Numpy installed, as X = np.array(X) shows. You should really consider using it for these operations; it will be orders of magnitude faster for this kind of work. Here is a vectorized version that gives results immediately instead of making you wait:
import math
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

df = pd.read_csv("cancerdata.csv")
X = df.values[:,2:-1].astype('float64')
X = (X - np.mean(X, axis=0)) / np.std(X, axis=0)
X = MinMaxScaler().fit_transform(X)
## Add a bias column to the data (after scaling, so the scaler
## does not flatten the constant column of ones to zero)
X = np.hstack([np.ones((X.shape[0], 1)), X])
Y = df["diagnosis"].map({'M':1,'B':0})
Y = np.array(Y)
X_train,X_test,Y_train,Y_test = train_test_split(X,Y,test_size=0.25)
def Sigmoid(z):
    return 1/(1 + np.exp(-z))

def Hypothesis(theta, x):
    return Sigmoid(x @ theta)

def Cost_Function(X,Y,theta,m):
    hi = Hypothesis(theta, X)
    _y = Y.reshape(-1, 1)
    J = 1/float(m) * np.sum(-_y * np.log(hi) - (1-_y) * np.log(1-hi))
    return J

def Cost_Function_Derivative(X,Y,theta,m,alpha):
    hi = Hypothesis(theta,X)
    _y = Y.reshape(-1, 1)
    J = alpha/float(m) * X.T @ (hi - _y)
    return J

def Gradient_Descent(X,Y,theta,m,alpha):
    new_theta = theta - Cost_Function_Derivative(X,Y,theta,m,alpha)
    return new_theta

def Accuracy(theta):
    length = len(X_test)
    prediction = (Hypothesis(theta, X_test) > 0.5)
    _y = Y_test.reshape(-1, 1)
    correct = prediction == _y
    my_accuracy = (np.sum(correct) / length)*100
    print ('LR Accuracy %: ', my_accuracy)
def Logistic_Regression(X,Y,alpha,theta,num_iters):
    m = len(Y)
    for x in range(num_iters):
        new_theta = Gradient_Descent(X,Y,theta,m,alpha)
        theta = new_theta
        if x % 100 == 0:
            #print ('theta: ', theta)
            print ('cost: ', Cost_Function(X,Y,theta,m))
    Accuracy(theta)

ep = .012
initial_theta = np.random.rand(X_train.shape[1],1) * 2 * ep - ep   # small random weights in [-ep, ep]
alpha = 0.5
iterations = 2000
Logistic_Regression(X_train,Y_train,alpha,initial_theta,iterations)
I think I may have a different version of scikit, because I had to change the MinMaxScaler line to make it work. The result is that I can do 10K iterations in the blink of an eye, and applying the model to the test set gives about 97% accuracy.
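To see the learning-rate effect described earlier, a quick sweep using the vectorized functions just defined could look like this (illustrative, with arbitrary alpha values):

# compare final cost after 1000 iterations for several learning rates,
# re-initializing theta each time so the runs are comparable
m = len(Y_train)
for alpha in [0.0001, 0.01, 0.5]:
    theta = np.zeros((X_train.shape[1], 1))
    for _ in range(1000):
        theta = Gradient_Descent(X_train, Y_train, theta, m, alpha)
    print(alpha, Cost_Function(X_train, Y_train, theta, m))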