CUDA memory limit

Imp*_*ian 0 memory memory-management cuda

If I try to send to my CUDA device a struct which is bigger than the available memory, will CUDA give me any kind of warning or error?

I'm asking because my GPU has 1024 MB (1073414144 bytes) of total global memory, but I don't know how I should handle an eventual problem.

This is my code:

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
#include <cuda_runtime.h>

#define VECSIZE 2250000   // WIDTH * HEIGHT
#define WIDTH 1500
#define HEIGHT 1500



// Matrices are stored in row-major order:
// M(row, col) = *(M.elements + row * M.width + col)
struct Matrix
{

    int width;
    int height;
    int* elements;

};


int main()
{
    Matrix M;
    M.width = WIDTH;
    M.height = HEIGHT;
    M.elements = (int *) calloc(VECSIZE, sizeof(int));

int row, col;   


// define Matrix M
// Matrix generator:
for (int i = 0; i < M.height; i++)
    for (int j = 0; j < M.width; j++)
    {
        row = i;
        col = j;

        if (i == j)
            // INT_MAX (from <limits.h>) as "infinity": INFINITY is a float macro and overflows an int
            M.elements[row * M.width + col] = INT_MAX;
        else
        {
            M.elements[row * M.width + col] = (rand() % 2); // because 'rand() % 1' just does not seem to work at all
            if (M.elements[row * M.width + col] == 0)       // can't have zero weight
                M.elements[row * M.width + col] = INT_MAX;
            else if (M.elements[row * M.width + col] == 2)  // unreachable: rand() % 2 is never 2
                M.elements[row * M.width + col] = 1;
        }
    }





// Declare & send device Matrix to Device.
Matrix d_M;
d_M.width = M.width;
d_M.height = M.height;
size_t size = M.width * M.height * sizeof(int);
cudaMalloc(&d_M.elements, size);
cudaMemcpy(d_M.elements, M.elements, size, cudaMemcpyHostToDevice);

// no host malloc needed here; the earlier malloc'd pointer was leaked when cudaMalloc overwrote it
int *d_k;
cudaMalloc((void**) &d_k, sizeof(int));



// plain host variables suffice; the earlier malloc'd one-element buffers were leaked
int *d_width;
cudaMalloc((void**) &d_width, sizeof(int));
unsigned int width = M.width;
cudaMemcpy(d_width, &width, sizeof(int), cudaMemcpyHostToDevice);

int *d_height;
cudaMalloc((void**) &d_height, sizeof(int));
unsigned int height = M.height;
cudaMemcpy(d_height, &height, sizeof(int), cudaMemcpyHostToDevice);
/* et cetera ... */

fli*_*art 6

While you may not currently be sending enough data to the GPU to max out its memory, when you do, your cudaMalloc will return the error code cudaErrorMemoryAllocation, which according to the CUDA API documentation signals that the memory allocation failed. I noticed that in your example code you are not checking the return values of your CUDA calls. These return codes need to be checked to make sure the program is running correctly. The CUDA API does not throw exceptions: you must check the return codes. See this article for information on checking for errors and getting meaningful messages about those errors.
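A minimal sketch of that pattern (the `gpuErrchk` macro name is illustrative, not part of the CUDA API; the matrix size is taken from the question):

```
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Wrap every CUDA runtime call; print a readable message and abort on failure.
#define gpuErrchk(call)                                                   \
    do {                                                                  \
        cudaError_t err = (call);                                         \
        if (err != cudaSuccess) {                                         \
            fprintf(stderr, "CUDA error: %s at %s:%d\n",                  \
                    cudaGetErrorString(err), __FILE__, __LINE__);         \
            exit(EXIT_FAILURE);                                           \
        }                                                                 \
    } while (0)

int main(void)
{
    // Optionally query how much global memory is actually free first.
    size_t free_bytes, total_bytes;
    gpuErrchk(cudaMemGetInfo(&free_bytes, &total_bytes));

    size_t size = 1500u * 1500u * sizeof(int);  // matrix footprint from the question
    int *d_elements = NULL;

    if (size > free_bytes) {
        fprintf(stderr, "not enough free device memory\n");
        return EXIT_FAILURE;
    }

    // An oversized request here would make cudaMalloc return
    // cudaErrorMemoryAllocation, which gpuErrchk turns into a visible error.
    gpuErrchk(cudaMalloc((void**)&d_elements, size));
    gpuErrchk(cudaFree(d_elements));
    return 0;
}
```

Checking against cudaMemGetInfo before allocating lets you fail with your own message instead of waiting for cudaMalloc to report the failure.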