I need to run linear regression on a sparse matrix. I was getting poor results, so I decided to test it on a non-sparse dataset stored in a sparse representation. The data is taken from https://www.analyticsvidhya.com/blog/2021/05/multiple-linear-regression-using-python-and-scikit-learn/.
I generated max-normalized values for some of the columns. The CSV file is here: https://drive.google.com/file/d/17wHv1Cc3RKgshprIKTcWUSxZOWlG68__/view?usp=sharing
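By max normalization I mean dividing each numeric column by its maximum so the values fall in [0, 1]. The preprocessing was roughly along these lines (which columns get normalized here is illustrative, not necessarily the exact set used for the file above):

import pandas as pd

# Sketch: max-normalize the numeric columns of the 50_Startups dataset
df = pd.read_csv("50_Startups.csv")
for col in ["R&D Spend", "Administration", "Marketing Spend", "Profit"]:
    df[col] = df[col] / df[col].max()
df.to_csv("maxnorm_50_Startups.csv", index=False)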
Running ordinary linear regression on it works well. Sample code:
df = pd.read_csv("maxnorm_50_Startups.csv")
y = pd.DataFrame()
y = df['Profit']
x = pd.DataFrame()
x = df.drop('Profit', axis=1)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
LR = LinearRegression()
LR.fit(x_train, y_train)
y_prediction = LR.predict(x_test)
score=r2_score(y_test, y_prediction)
print('r2 score is', score)
Sample result:
r2 score is 0.9683831928840445
I want to repeat this with a sparse matrix. I converted the CSV into a sparse (relational) representation: https://drive.google.com/file/d/1CFWbBbtiSqTSlepGuYXsxa00MSHOj-Vx/view?usp=sharing
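The relational file stores one (row id, feature, value) triple per line. A sketch of how such a long format can be produced from the normalized CSV, assuming the column names 'x', 'feature' and 'value' that the code below expects (handling of the categorical State column is an assumption; it is simply dropped here):

import pandas as pd

# Sketch: melt the wide table into (row id, feature, value) triples
df = pd.read_csv("maxnorm_50_Startups.csv")
df = df.drop(columns=["State"], errors="ignore")  # assumption: non-numeric column not included
long_df = (
    df.reset_index()
      .melt(id_vars="index", var_name="feature", value_name="value")
      .rename(columns={"index": "x"})
)
long_df.to_csv("maxnorm_50_Startups_relational.csv", index=False)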
Here is my code for running linear regression on it:
df = pd.read_csv("maxnorm_50_Startups_relational.csv")
df['x'] = pd.to_numeric(df['x'], errors='raise')
m = len(df.x.unique())
for i in range(0, m): # randomize the 'x' values to randomize train test split
n = random.randint(0, m)
df.loc[df['x'] == n, 'x'] = m
df.loc[df['x'] == i, 'x'] = n
df.loc[df['x'] == m, 'x'] = i
y = pd.DataFrame()
y = df[df['feature'] == 'Profit']
x = pd.DataFrame()
x = df[df['feature'] != 'Profit']
y = y.drop('feature', axis=1)
x['feat'] = pd.factorize(x['feature'])[0] # sparse matrix code below can't work with strings
x_train = pd.DataFrame()
x_train = x[x['x'] <= 39]
x_test = pd.DataFrame()
x_test = x[x['x'] >= 40]
y_train = pd.DataFrame()
y_train = y[y['x'] <= 39]
y_test = pd.DataFrame()
y_test = y[y['x'] >= 40]
x_test['x'] = x_test['x'] - 40 # sparse matrix assumes that if something is numbered 50
y_test['x'] = y_test['x'] - 40 # there must be 50 records. there are 10. so renumber to 10
x_train_sparse = scipy.sparse.coo_matrix((x_train.value, (x_train.x, x_train.feat)))
# print(x_train_sparse.todense())
x_test_sparse = scipy.sparse.coo_matrix((x_test.value, (x_test.x, x_test.feat)))
LR = LinearRegression()
LR.fit(x_train_sparse, y_train)
y_prediction = LR.predict(x_test_sparse)
score = r2_score(y_test, y_prediction)
print('r2 score is', score)
Running this, I get a negative R2 score, for example:
r2 score is -10.794519939249602
This means the linear regression isn't working. I don't know where I'm going wrong. I tried implementing the linear regression equations myself instead of using the library function, but I still get a negative r2 score. What is my mistake?
LinearRegression performs poorly on sparse data.
There are other linear algorithms, such as Ridge, Lasso, BayesianRidge, and ElasticNet, that perform the same on dense and sparse data. These algorithms are similar to linear regression, but their loss functions include an additional penalty term.
There are also non-linear algorithms, such as RandomForestRegressor, GradientBoostingRegressor, ExtraTreesRegressor and XGBoostRegressor, that likewise perform the same on sparse and dense matrices.
I suggest you use these algorithms instead of plain linear regression.
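For example, Ridge accepts scipy sparse input directly, and the same pattern works for Lasso and ElasticNet. A minimal sketch, assuming the x_train_sparse/x_test_sparse matrices from the question and that the targets are re-aligned with the sparse row numbering by sorting on the 'x' column:

from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

# Align the targets with the sparse row order (assumption: sort by the 'x' row id)
y_train_vec = y_train.sort_values("x")["value"]
y_test_vec = y_test.sort_values("x")["value"]

ridge = Ridge(alpha=1.0)  # penalty strength; tune as needed
ridge.fit(x_train_sparse, y_train_vec)
y_prediction = ridge.predict(x_test_sparse)
print("r2 score is", r2_score(y_test_vec, y_prediction))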