Caffe: network accuracy stays constant at 1 during training! Accuracy problem

shi*_*tel 1 machine-learning training-data neural-network deep-learning caffe

I am training a network with 2 classes of data... but the accuracy is constant at 1 after the first iteration!

The input data are grayscale images. When the HDF5Data was created, images from both classes were selected randomly.

Why does this happen? What is wrong, and where is the mistake?

network.prototxt:

name: "brainMRI"
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include: {
    phase: TRAIN
  }
  hdf5_data_param {
    source: "/home/shivangpatel/caffe/brainMRI1/train_file_location.txt"
    batch_size: 10
  }
}
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include: {
    phase: TEST
  }
  hdf5_data_param {
    source: "/home/shivangpatel/caffe/brainMRI1/test_file_location.txt"
    batch_size: 10
  }
}

layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 2
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "softmax"
  type: "Softmax"
  bottom: "ip2"
  top: "smip2"
}

layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}

layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "smip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}

Output:

I0217 17:41:07.912580  2913 net.cpp:270] This network produces output loss
I0217 17:41:07.912607  2913 net.cpp:283] Network initialization done.
I0217 17:41:07.912739  2913 solver.cpp:60] Solver scaffolding done.
I0217 17:41:07.912789  2913 caffe.cpp:212] Starting Optimization
I0217 17:41:07.912813  2913 solver.cpp:288] Solving brainMRI
I0217 17:41:07.912832  2913 solver.cpp:289] Learning Rate Policy: inv
I0217 17:41:07.920737  2913 solver.cpp:341] Iteration 0, Testing net (#0)
I0217 17:41:08.235076  2913 solver.cpp:409]     Test net output #0: accuracy = 0.98
I0217 17:41:08.235194  2913 solver.cpp:409]     Test net output #1: loss = 0.0560832 (* 1 = 0.0560832 loss)
I0217 17:41:35.831647  2913 solver.cpp:341] Iteration 100, Testing net (#0)
I0217 17:41:36.140849  2913 solver.cpp:409]     Test net output #0: accuracy = 1
I0217 17:41:36.140949  2913 solver.cpp:409]     Test net output #1: loss = 0.00757247 (* 1 = 0.00757247 loss)
I0217 17:42:05.465395  2913 solver.cpp:341] Iteration 200, Testing net (#0)
I0217 17:42:05.775877  2913 solver.cpp:409]     Test net output #0: accuracy = 1
I0217 17:42:05.776000  2913 solver.cpp:409]     Test net output #1: loss = 0.0144996 (* 1 = 0.0144996 loss)
.............
.............

Sha*_*hai 7

To summarize some of the information from the comments:
- You run tests at an interval of test_interval: 100 iterations.
- Each test pass covers test_iter: 5 * batch_size: 10 = 50 samples.
- Your train and test sets appear not to be shuffled: all the negative samples (label = 0) are grouped together before all the positive samples.
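Those settings can be checked with a quick computation (a sketch; the test_iter: 5 value is taken from the comments, not from the prototxt shown above):

```python
# Values from the solver/prototxt discussed above.
test_iter = 5      # batches per test phase (from the comments)
batch_size = 10    # TEST-phase batch_size in the prototxt

# Total samples evaluated in each test phase.
samples_per_test = test_iter * batch_size
print(samples_per_test)  # 50 -- only the first 50 test samples are ever evaluated
```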


Consider your SGD solver: during training it is fed batches of batch_size: 10. Your training set has 14,746 negative samples (that is, 1,474 batches) before any positive sample. So for the first 1,474 iterations your solver only "sees" negative examples and no positive ones.
What do you expect this solver to learn?
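The arithmetic behind those numbers (assuming the 14,746 negative-sample count from the comments):

```python
# Hypothetical counts taken from the discussion above.
n_negatives_before_first_positive = 14746
batch_size = 10

# Number of training iterations that contain only negative samples.
pure_negative_batches = n_negatives_before_first_positive // batch_size
print(pure_negative_batches)  # 1474
```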

The problem

Your solver sees only negative examples, so it learns that no matter what the input is, it should output "0". Your test set is ordered the same way, so by testing only 50 samples at each test_interval you only test the negative examples in the test set, which yields a perfect accuracy of 1.
But, as you noted, your network actually learned nothing.

I suppose you have already guessed what the solution should be by now. You need to shuffle your training set, and test your network on your entire test set.
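The shuffle can be sketched as one joint permutation of the data and label arrays before the HDF5 file is written (a minimal sketch using a synthetic in-memory dataset; for your real data, read and write the arrays with h5py as hinted in the trailing comment):

```python
import numpy as np

# Synthetic stand-in for an ordered dataset: all negatives first, then positives.
n_neg, n_pos = 6, 4
data = np.concatenate([np.zeros((n_neg, 1, 4, 4)),   # "negative" images
                       np.ones((n_pos, 1, 4, 4))])   # "positive" images
label = np.concatenate([np.zeros(n_neg), np.ones(n_pos)])

# One permutation applied to BOTH arrays keeps images and labels aligned.
rng = np.random.default_rng(0)
perm = rng.permutation(len(label))
data, label = data[perm], label[perm]

# The labels are now interleaved instead of 0,0,...,0,1,1,...,1.
print(label)

# For the real training set, write the shuffled arrays back with h5py, e.g.:
#   with h5py.File("train_shuffled.h5", "w") as f:
#       f.create_dataset("data", data=data.astype(np.float32))
#       f.create_dataset("label", data=label.astype(np.float32))
```

Shuffling once when building the HDF5 file is enough here, since Caffe's HDF5Data layer reads the samples in the order they are stored.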