Computing the convolutional layer in a CNN implementation

sna*_*ken 14 matlab neural-network

I am trying to train a convolutional neural network using a sparse autoencoder in order to compute the filters for the convolutional layer. I am using the UFLDL code to build the patches and to train the CNN. My code is below:

% =========================================================================
imageDim = 30;         % image dimension
imageChannels = 3;     % number of channels (rgb, so 3)

patchDim = 10;          % patch dimension
numPatches = 100000;    % number of patches

visibleSize = patchDim * patchDim * imageChannels;  % number of input units 
outputSize = visibleSize;   % number of output units
hiddenSize = 400;           % number of hidden units 

epsilon = 0.1;         % epsilon for ZCA whitening

poolDim = 10;          % dimension of pooling region

optTheta =  zeros(2*hiddenSize*visibleSize+hiddenSize+visibleSize, 1);
ZCAWhite =  zeros(visibleSize, visibleSize);
meanPatch = zeros(visibleSize, 1);

load patches_16_1
% =========================================================================

% Display and check to see that the features look good
W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
b = optTheta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);

displayColorNetwork(W*ZCAWhite);

stepSize = 100; 
assert(mod(hiddenSize, stepSize) == 0, 'stepSize should divide hiddenSize');

load train.mat % loads numTrainImages, trainImages, trainLabels
load train.mat  % loads numTestImages,  testImages,  testLabels
% size 30x30x3x8862

numTestImages = 8862;
numTrainImages = 8862;

pooledFeaturesTrain = zeros(hiddenSize, numTrainImages, ...
    floor((imageDim - patchDim + 1) / poolDim), ...
    floor((imageDim - patchDim + 1) / poolDim));
pooledFeaturesTest = zeros(hiddenSize, numTestImages, ...
    floor((imageDim - patchDim + 1) / poolDim), ...
    floor((imageDim - patchDim + 1) / poolDim));

tic();

testImages = trainImages;

for convPart = 1:(hiddenSize / stepSize)

  featureStart = (convPart - 1) * stepSize + 1;
  featureEnd = convPart * stepSize;

  fprintf('Step %d: features %d to %d\n', convPart, featureStart, featureEnd);
  Wt = W(featureStart:featureEnd, :);
  bt = b(featureStart:featureEnd);

  fprintf('Convolving and pooling train images\n');
  convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
      trainImages, Wt, bt, ZCAWhite, meanPatch);
  pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
  pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;
  toc();
  clear convolvedFeaturesThis pooledFeaturesThis;

  fprintf('Convolving and pooling test images\n');
  convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
      testImages, Wt, bt, ZCAWhite, meanPatch);
  pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
  pooledFeaturesTest(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;
  toc();

  clear convolvedFeaturesThis pooledFeaturesThis;

end

I am having trouble with the convolution and pooling step. At the line pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis; I get a "subscripted assignment dimension mismatch" error. The patches themselves are computed correctly; they look like this:

[image: the computed patches]

I am trying to understand what exactly the convPart variable is doing and what pooledFeaturesThis is. Secondly, I have noticed that my problem is the mismatch in the line pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;, where I get the message that the dimensions do not match. The size of pooledFeaturesThis is 100x3x2x2, while pooledFeaturesTrain is 400x8862x2x2. What exactly does pooledFeaturesTrain represent? Is the result 2x2 for every filter? cnnConvolve can be found here:
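For reference, here is a minimal diagnostic sketch (my own addition, using the variable names above) that prints both sides of the failing assignment before it runs; it makes clear that it is the second dimension, the number of images, that differs (3 versus 8862):

% Diagnostic sketch (not from the original code): compare both sides of the
% assignment that fails. The second dimension (number of images) is the one
% that differs: pooledFeaturesThis has 3 images, the target slice has 8862.
targetSlice = pooledFeaturesTrain(featureStart:featureEnd, :, :, :);
fprintf('pooledFeaturesThis: %s\n', mat2str(size(pooledFeaturesThis)));
fprintf('target slice:       %s\n', mat2str(size(targetSlice)));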

EDIT: I have changed my code a little and it works now. However, I am somewhat worried that I do not fully understand the code.

ABC*_*ABC 1

OK, so in this line you are setting the pooling region:

poolDim = 10;          % dimension of pooling region

This means that for every kernel in every layer you take the image and pool an area of 10x10 pixels. From your code it looks like you are applying a mean function, i.e. it takes a patch, computes its mean, and outputs that to the next layer... in other words, it takes the image from, say, 100x100 down to 10x10. In your network you repeat convolution + pooling on that output until you get down to a 2x2 image (which, by the way, is not usually good practice in my experience).
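To make the idea concrete, here is a rough sketch of mean pooling over non-overlapping poolDim x poolDim regions for a single feature map. With the question's imageDim = 30 and patchDim = 10, the convolved map is 21x21, and floor(21 / 10) = 2 gives the 2x2 pooled output seen in the sizes above. This is only an illustration of the idea, not the actual UFLDL cnnPool implementation:

% Mean pooling sketch for one feature map (illustrative only).
convolved = rand(21, 21);                          % dummy 21x21 map: imageDim - patchDim + 1
poolDim   = 10;
numPools  = floor(size(convolved, 1) / poolDim);   % floor(21/10) = 2 -> 2x2 output
pooled    = zeros(numPools, numPools);
for i = 1:numPools
    for j = 1:numPools
        rows = (i-1)*poolDim + (1:poolDim);        % rows of this pooling region
        cols = (j-1)*poolDim + (1:poolDim);        % columns of this pooling region
        pooled(i, j) = mean(mean(convolved(rows, cols)));   % average the region
    end
end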

400x8862x2x2

Anyway, back to your code. Notice that at the start of training you do the following initialization:

pooledFeaturesTrain = zeros(hiddenSize, numTrainImages, ...
    floor((imageDim - patchDim + 1) / poolDim), ...
    floor((imageDim - patchDim + 1) / poolDim));

So your error is quite simple, and the message is correct: the matrix that holds the convolution + pooling output is not the size of the matrix you initialized.

The question now is how to fix it. I suppose the lazy man's way would be to remove the initialization altogether. That will slow your code down considerably, and it is not guaranteed to work if you have more than one layer.
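For what it's worth, that lazy variant might look like the sketch below (the rand(...) call is just a stand-in for the cnnPool output, not part of the original code); MATLAB grows the array on each assignment, so no size mismatch can occur, but the repeated reallocation is slow:

% Lazy-fix sketch: no pre-allocation, let the array grow with each assignment.
clear pooledFeaturesTrain
hiddenSize = 400; stepSize = 100;                  % values from the post
for convPart = 1:(hiddenSize / stepSize)
    featureStart = (convPart - 1) * stepSize + 1;
    featureEnd   = convPart * stepSize;
    pooledFeaturesThis = rand(stepSize, 3, 2, 2);  % stand-in for the cnnPool output
    pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;
end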

Instead, I suggest you make pooledFeaturesTrain a cell array of 3-dimensional arrays, one per layer. So instead of this

pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis; 

you would do something more like this:

pooledFeaturesTrain{n}(:, :, :) = pooledFeaturesThis; 

where n is the current layer.
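A runnable sketch of that layout is below; numLayers and the rand(...) sizes are illustrative stand-ins, not values from the original code:

% One cell per layer, each holding that layer's pooled output, so layers
% with different output sizes can coexist without a fixed-size 4-D array.
numLayers = 2;                                     % illustrative
pooledFeaturesTrain = cell(numLayers, 1);
for n = 1:numLayers
    pooledFeaturesThis = rand(100, 3, 2, 2);       % stand-in for this layer's cnnPool output
    pooledFeaturesTrain{n} = pooledFeaturesThis;   % no pre-allocated size to match
end
size(pooledFeaturesTrain{1})                       % inspect layer 1's output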

CNN networks are not as easy as they are made out to be, and even when they don't crash outright, getting them to train well is a feat in itself. I highly recommend reading up on CNN theory - it will make the coding and debugging much easier.

Good luck! :)