Tags: performance, multithreading, openmp, affinity, numa
I am running the following loop using 8 OpenMP threads:
float* data;
int n;
#pragma omp parallel for schedule(dynamic, 1) default(none) shared(data, n)
for ( int i = 0; i < n; ++i )
{
    // ... do something with data[i] ...
}
Because of NUMA, I would like to run the first half of the loop (i = 0, ..., n/2-1) with threads 0, 1, 2, 3 and the second half (i = n/2, ..., n-1) with threads 4, 5, 6, 7.
Essentially, I want to run two loops in parallel, each using a separate group of OpenMP threads.
How can I achieve this with OpenMP?
Thanks
PS: Ideally, if the threads from one group finish their half of the loop while the other half is still unfinished, I would like the threads from the finished group to join the other group and help process the remaining half.
I was thinking of something like the code below, but I wonder whether I can do this with OpenMP alone, without the extra bookkeeping:
float* data;
int n;
int i0 = 0;
int i1 = n / 2;
#pragma omp parallel for schedule(dynamic, 1) default(none) shared(data, n, i0, i1)
for ( int i = 0; i < n; ++i )
{
    int nt = omp_get_thread_num();
    int j;
    #pragma omp critical
    {
        if ( nt < 4 ) {
            if ( i0 < n / 2 ) j = i0++; // First 4 threads process first half
            else              j = i1++; // of loop unless first half is finished
        }
        else {
            if ( i1 < n ) j = i1++; // Second 4 threads process second half
            else          j = i0++; // of loop unless second half is finished
        }
    }
    // ... do something with data[j] ...
}
The best way to do this is with nested parallelism: parallelize first across NUMA nodes, then within each node. That way you can still use the dynamic scheduling infrastructure while partitioning the data between thread groups:
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    const int ngroups = 2;
    const int npergroup = 4;
    const int ndata = 16;

    omp_set_nested(1);
    #pragma omp parallel for num_threads(ngroups)
    for (int i=0; i<ngroups; i++) {
        int start = (ndata*i+(ngroups-1))/ngroups;     /* ceiling-divide so each group */
        int end   = (ndata*(i+1)+(ngroups-1))/ngroups; /* gets a contiguous block      */

        #pragma omp parallel for num_threads(npergroup) shared(i, start, end) schedule(dynamic,1)
        for (int j=start; j<end; j++) {
            printf("Thread %d from group %d working on data %d\n", omp_get_thread_num(), i, j);
        }
    }
    return 0;
}
Running this gives:
$ gcc -fopenmp -o nested nested.c -Wall -O -std=c99
$ ./nested | sort -n -k 9
Thread 0 from group 0 working on data 0
Thread 3 from group 0 working on data 1
Thread 1 from group 0 working on data 2
Thread 2 from group 0 working on data 3
Thread 1 from group 0 working on data 4
Thread 3 from group 0 working on data 5
Thread 3 from group 0 working on data 6
Thread 0 from group 0 working on data 7
Thread 0 from group 1 working on data 8
Thread 3 from group 1 working on data 9
Thread 2 from group 1 working on data 10
Thread 1 from group 1 working on data 11
Thread 0 from group 1 working on data 12
Thread 0 from group 1 working on data 13
Thread 2 from group 1 working on data 14
Thread 0 from group 1 working on data 15
Note, however, that nested parallelism may change the thread placement compared with a single-level scheme, so you may have to experiment a bit more with KMP_AFFINITY or other mechanisms to get the bindings right.