OpenMP code gives wrong answers once I start using 12 threads

smi*_*dha 2 c++ multithreading openmp

I have a piece of OpenMP code here that integrates the function 4.0/(1+x^2) over the interval [0,1]. The analytical answer is pi = 3.14159...

The method of integration is just a plain Riemann-sum approximation. The code gives me the correct answer when I use anywhere from 1 up to 11 OpenMP threads.
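(For reference, the exact value follows from the arctangent antiderivative, and the left-endpoint Riemann sum over N steps approximates it:

\int_0^1 \frac{4}{1+x^2}\,dx = \Big[\,4\arctan x\,\Big]_0^1 = 4\cdot\frac{\pi}{4} = \pi,
\qquad \text{approximated by} \quad \sum_{i=0}^{N-1} \frac{4}{1+x_i^2}\,\Delta x,
\quad x_i = i\,\Delta x,\ \Delta x = \tfrac{1}{N}.)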

However, as soon as I start using 12 or more OpenMP threads, it begins to give increasingly wrong answers. Why does this happen? First, the C++ code. I am using gcc in an Ubuntu 10.10 environment, and the code is compiled with g++ -fopenmp integration_OpenMP.cpp.

// f(x) = 4/(1+x^2) 
// Domain of integration: [0,1] 
// Integral over the domain = pi =(approx) 3.14159 

#include <iostream>
#include <omp.h>
#include <vector>
#include <algorithm>
#include <functional>
#include <numeric>


int main (void)
{
  //Information common to serial and parallel computation.
  int    num_steps = 2e8;
  double dx        = 1.0/num_steps;


  //Serial computation: method of integration is just a plain Riemann sum
  double start = omp_get_wtime();

  double serial_sum = 0;
  double x          = 0;
  for (int i = 0; i < num_steps; ++i)
  {
      serial_sum += 4.0*dx/(1.0+x*x);
      x += dx;
  }

  double end = omp_get_wtime();
  std::cout << "Time taken for the serial computation: " << end-start  << " seconds";
  std::cout << "\t\tPi serial: "                         << serial_sum << std::endl;





  //OpenMP computation. Method of integration, just a plain Riemann sum
  std::cout << "How many OpenMP threads do you need for parallel computation? ";
  int t; //number of OpenMP threads
  std::cin >> t;

  start = omp_get_wtime();
  double parallel_sum = 0; //will be modified atomically
  #pragma omp parallel num_threads(t)
  {
      int threadIdx = omp_get_thread_num();
      int begin = threadIdx * num_steps/t; //integer index of left endpoint of subinterval
      int end   = begin + num_steps/t;     //integer index of right endpoint of subinterval
      double dx_local = dx;
      double temp = 0;
      double x    = begin*dx;

      for (int i = begin; i < end; ++i)
      {
          temp += 4.0*dx_local/(1.0+x*x);
          x    += dx_local;
      }
      #pragma omp atomic
      parallel_sum += temp;
  }
  end = omp_get_wtime();
  std::cout << "Time taken for the parallel computation: " << end-start    << " seconds";
  std::cout << "\tPi parallel: "                           << parallel_sum << std::endl;

  return 0;
}

Here is the output for various numbers of threads, starting with 11 threads.

OpenMP: ./a.out
Time taken for the serial computation: 1.27744 seconds      Pi serial: 3.14159
How many OpenMP threads do you need for parallel computation? 11
Time taken for the parallel computation: 0.366467 seconds   Pi parallel: 3.14159

OpenMP: ./a.out
Time taken for the serial computation: 1.28167 seconds      Pi serial: 3.14159
How many OpenMP threads do you need for parallel computation? 12
Time taken for the parallel computation: 0.351284 seconds   Pi parallel: 3.16496

OpenMP: ./a.out
Time taken for the serial computation: 1.28178 seconds      Pi serial: 3.14159
How many OpenMP threads do you need for parallel computation? 13
Time taken for the parallel computation: 0.434283 seconds   Pi parallel: 3.21112


OpenMP: ./a.out
Time taken for the serial computation: 1.2765 seconds       Pi serial: 3.14159
How many OpenMP threads do you need for parallel computation? 14
Time taken for the parallel computation: 0.375078 seconds   Pi parallel: 3.27163

Tud*_*dor 5

Why not just use a parallel for with static partitioning?

double parallel_sum = 0;
#pragma omp parallel shared(dx) num_threads(t)
{
   // Each thread starts x at the left endpoint of its contiguous chunk.
   double x = omp_get_thread_num() * 1.0 / t;

   // schedule(static) hands each thread one contiguous block of
   // iterations, which is what the initialization of x above assumes.
   #pragma omp for schedule(static) reduction(+ : parallel_sum)
   for (int i = 0; i < num_steps; ++i)
   {
       parallel_sum += 4.0*dx/(1.0+x*x);
       x += dx;
   }
}

Then you don't have to manage all the partitioning yourself or gather the results atomically.

To initialize x correctly, note that x = begin * dx = (threadIdx * num_steps/t) * (1.0/num_steps) = threadIdx * 1.0 / t.
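If you would rather not depend on the schedule at all, here is a minimal sketch (not part of the snippet above) that recomputes x from the loop index on every iteration; it is correct under any schedule, at the cost of one extra multiplication per iteration:

#pragma omp parallel for reduction(+ : parallel_sum) num_threads(t)
for (int i = 0; i < num_steps; ++i)
{
    double x = i * dx;                 // left endpoint of the i-th subinterval
    parallel_sum += 4.0*dx/(1.0+x*x);
}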

Edit: just tested this final version on my machine and it seems to work fine.
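As for why the original code breaks at exactly 12 threads: threadIdx * num_steps/t groups as (threadIdx * num_steps) / t, and with num_steps = 2e8 the intermediate product first exceeds INT_MAX (2,147,483,647) when threadIdx reaches 11, i.e. on the twelfth thread (11 * 200,000,000 = 2,200,000,000). The signed overflow makes begin and end garbage for the highest-numbered threads. A minimal sketch illustrating the boundary; the fix would be to do that arithmetic in a wider type, e.g. (long long)threadIdx * num_steps / t:

#include <climits>
#include <iostream>

int main()
{
    const long long num_steps = 200000000;      // 2e8, as in the question
    for (long long threadIdx : {10LL, 11LL})    // the 11th and 12th threads
        std::cout << "threadIdx " << threadIdx
                  << ": threadIdx * num_steps = " << threadIdx * num_steps
                  << (threadIdx * num_steps > INT_MAX
                          ? " -> overflows a 32-bit int\n"
                          : " -> still fits in a 32-bit int\n");
}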