Why does my XGBoost model consistently achieve 100% accuracy?

Lis*_*a O · 5 · tags: statistics, r, machine-learning, cross-validation, xgboost

I'm working with the Airbnb data available here on Kaggle, predicting which country users will book their first trip in, using an XGBoost model in R. With nearly 600 features, running the algorithm over 50 rounds of 5-fold cross-validation, I get 100% accuracy every time. After fitting the model to the training data and predicting on the held-out test set, I also get 100% accuracy. These results cannot be real. There must be something wrong with my code, but so far I haven't been able to figure out what. I've listed part of my code below. It is based on this article. Following the article (using the article's data + copying the code), I get results similar to the article's. However, when I apply the same code to the Airbnb data, I consistently get 100% accuracy. I have no idea what's going on. Am I using the xgboost package incorrectly? Thanks for your help and time.

# set up the data
library(xgboost)
library(caret)
library(dplyr)

# train is the data frame of features with the target variable to predict
full_variables <- data.matrix(train[, -1]) # country_destination removed
full_label <- as.numeric(train$country_destination) - 1
classes <- length(unique(full_label)) # number of destination classes

# training data
train_index <- caret::createDataPartition(y = train$country_destination, p = 0.70, list = FALSE)
train_data <- full_variables[train_index, ]
train_label <- full_label[train_index[, 1]]
train_matrix <- xgb.DMatrix(data = train_data, label = train_label)

# test data
test_data <- full_variables[-train_index, ]
test_label <- full_label[-train_index[, 1]]
test_matrix <- xgb.DMatrix(data = test_data, label = test_label)

# 5-fold CV
params <- list("objective" = "multi:softprob",
               "num_class" = classes,
               eta = 0.3,
               max_depth = 6)
cv_model <- xgb.cv(params = params,
                   data = train_matrix,
                   nrounds = 50,
                   nfold = 5,
                   early_stop_round = 1, # note: xgb.cv's argument is early_stopping_rounds; this one is likely ignored
                   verbose = F,
                   maximize = T,
                   prediction = T)

# out-of-fold predictions
out_of_fold_p <- data.frame(cv_model$pred) %>%
  mutate(max_prob = max.col(., ties.method = "last"),
         label = train_label + 1)
head(out_of_fold_p)

# confusion matrix
confusionMatrix(factor(out_of_fold_p$label),
                factor(out_of_fold_p$max_prob),
                mode = "everything")

A sample of the data I'm using for this can be obtained by running the following code:

library(RCurl)
x <- getURL("https://raw.githubusercontent.com/loshita/Senior_project/master/train.csv")
y <- read.csv(text = x)

mis*_*use · 6

If you are using the train_users_2.csv.zip available on Kaggle, then the problem is that you are not removing country_destination from the training dataset, because it is at position 16, not 1:

which(colnames(train) == "country_destination")
#output
16

Column 1 is id, which is unique for every observation and should also be removed:

length(unique(train[, 1])) == nrow(train)
#output
TRUE
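To see why leaving the target among the predictors produces perfect scores, here is a minimal sketch with made-up data (the toy data frame and its leak column are hypothetical, not from the Airbnb set): a leaked copy of the label lets the trees simply read the answer back.

library(xgboost)

set.seed(1)
toy <- data.frame(x = rnorm(200),
                  y = factor(sample(letters[1:3], 200, replace = TRUE)))
toy$leak <- as.numeric(toy$y) - 1 # a leaked copy of the target

m <- xgboost(data = data.matrix(toy[, c("x", "leak")]),
             label = as.numeric(toy$y) - 1,
             params = list(objective = "multi:softprob", num_class = 3),
             nrounds = 5, verbose = 0)

p <- matrix(predict(m, data.matrix(toy[, c("x", "leak")])),
            ncol = 3, byrow = TRUE)
mean(max.col(p) == as.numeric(toy$y)) # 1: "perfect" accuracy purely from the leak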

When I run your code with the following modification:

full_variables <- data.matrix(train[,-c(1, 16)]) 

library(xgboost)

params <- list("objective" = "multi:softprob",
               "num_class" = length(unique(train_label)),
               eta = 0.3, 
               max_depth = 6)
cv_model <- xgb.cv(params = params,
                   data = train_matrix,
                   nrounds = 50,
                   nfold = 5,
                   early_stop_round = 1,
                   verbose = T,
                   maximize = T,
                   prediction = T)

With the setup above I get a test error of 0.12 during cross-validation.
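For reference, assuming a reasonably recent xgboost version, the per-round CV metric (mlogloss by default for multi:softprob) is stored in the evaluation log:

tail(cv_model$evaluation_log) # mean train/test mlogloss per round, with SDs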

library(dplyr)

out_of_fold_p <- data.frame(cv_model$pred) %>%
  mutate(max_prob = max.col(., ties.method = "last"),
         label = train_label + 1)

head(out_of_fold_p[,13:14], 20)
#output
   max_prob label
1         8     8
2        12    12
3        12    10
4        12    12
5        12    12
6        12    12
7        12    12
8        12    12
9         8     8
10       12     5
11       12     2
12        2    12
13       12    12
14       12    12
15       12    12
16        8     8
17        8     8
18       12     5
19        8     8
20       12    12
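The overall out-of-fold error rate can be read off this table directly, and should agree with the error reported above:

mean(out_of_fold_p$max_prob != out_of_fold_p$label) # out-of-fold misclassification rate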

In short: you did not remove y from x.

EDIT: after downloading the actual train set and playing around, I can say the accuracy really is 100% with 5-fold CV. What's more, it can be achieved with just 22 features (possibly even fewer).

model <- xgboost(params = params,
                 data = train_matrix,
                 nrounds = 50,
                 verbose = T,
                 maximize = T)

This model also achieves 100% accuracy on the test set:

pred <- predict(model, test_matrix)
pred <- matrix(pred, ncol = length(unique(train_label)), byrow = TRUE)
out_of_fold_p <- data.frame(pred) %>%
  mutate(max_prob = max.col(., ties.method = "last"),
         label = test_label + 1)

sum(out_of_fold_p$max_prob != out_of_fold_p$label) # 0 errors

Now let's check which features are discriminative:

xgb.plot.importance(importance_matrix = xgb.importance(colnames(train_matrix), model))

[Figure: xgb.plot.importance output showing the importance of the top features]
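If you prefer text output, the same information is available as a table; xgb.importance reports Gain, Cover, and Frequency per feature:

imp <- xgb.importance(colnames(train_matrix), model)
head(imp, 22) # the features the model actually used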

Now, if you run xgb.cv using only these features:

train_matrix <- xgb.DMatrix(
  data = train_data[, which(colnames(train_data) %in%
                            xgboost::xgb.importance(colnames(train_matrix), model)$Feature)],
  label = train_label)

set.seed(1)
cv_model <- xgb.cv(params = params,
                   data = train_matrix,
                   nrounds = 50,
                   nfold = 5,
                   early_stop_round = 1,
                   verbose = T,
                   maximize = T,
                   prediction = T)

you still get 100% accuracy on the test folds.

Part of the reason is the very large class imbalance:

table(train_label)
train_label
  0   1   2   3   4   5   6   7   8   9  10  11 
  3  10  12  13  36  16  19 856   7  73   3 451 
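Note that the imbalance alone cannot explain the result: always predicting the majority class (label 7, with 856 of the 1499 rows) would only give about 57% accuracy.

max(table(train_label)) / length(train_label) # ~0.57 majority-class baseline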

The other part is the fact that the minority classes are easily distinguished by a single dummy variable:

library(tidyverse) # dplyr, tidyr, ggplot2

gg <- data.frame(
  train_data[, which(colnames(train_data) %in%
                     xgb.importance(colnames(train_matrix), model)$Feature)],
  label = as.factor(train_label))

gg %>%
  as_tibble() %>%
  select(1:9, 11, 12, 15:21, 23) %>%
  gather(key, value, 1:18) %>%
  ggplot() +
  geom_bar(aes(x = label)) +
  facet_grid(key ~ value) +
  theme(strip.text.y = element_text(angle = 90))

[Figure: bar plots of label counts, faceted by feature (rows) and dummy value 0/1 (columns)]

Based on the distribution of 0/1 values across the 22 most important features, it looks like any tree model would be able to achieve very good accuracy, if not 100%.
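As a quick check of that claim, you can cross-tabulate the label against any one of these dummies, for example the first feature column of gg (whichever feature that happens to be):

table(gg$label, gg[[1]]) # a dummy that is 1 for exactly one class is
                         # separated by a single depth-1 split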

One might think that classes 0 and 10 would be problematic for 5-fold CV, since all of their observations could land in a single fold, in which case the model would never see them during training. That would be possible if the CV folds were drawn by simple random sampling; it does not happen with xgb.cv:

lapply(cv_model$folds, function(x){
  table(train_label[x])})
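The output lists the class counts in each fold's held-out indices. Since cv_model$folds holds the held-out row indices, you can also confirm that every class is present in each fold's training portion:

sapply(cv_model$folds, function(x) length(unique(train_label[-x]))) # should show all 12 classes for each fold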

  • But you never use the held-out test data `test_data` for any predictions, or am I missing something obvious? `pred` contains predictions based on the training data, and the error is the 5-fold CV error based on the training data. (2 upvotes)