Map-pattern performance in a multithreaded program lower than expected (4x speedup instead of 8x)

Exa*_*awa 5 c++ performance multithreading tbb hyperthreading

I am just getting started with multithreaded programming, so please bear with me if the following is obvious. I am adding multithreading to an image-processing program, and the speedup is not quite what I expected.

I am currently getting a 4x speedup on a 4-physical-core CPU with hyperthreading, so I would like to know whether this is the speedup one should expect. The only explanation I can think of is that it might make sense if the two hyperthreads of a single physical core have to share some sort of memory bus.

Bear in mind that all the memory is allocated in RAM (the supposed image lives on the heap, and I understand that my OS's virtual memory manager decides whether to page it in or out), so as far as multithreading goes I am not entirely sure whether this would count as an I/O-bound program. My machine has 16 GB of RAM, in case that matters for deciding whether paging/swapping could be an issue.

I wrote a test program that exercises the serial case and two parallel cases, one using QThreadPool and one using tbb::parallel_for.

As you can see, the current program does no real work other than setting a supposed image from black to white; this is deliberate, so that I know the baseline before any real operation is applied to the image.

I am attaching the program in the hope that someone can tell me whether chasing roughly an 8x speedup for this kind of processing algorithm is a lost cause. Note that I am not interested in other kinds of optimization such as SIMD, since what I really care about is not just making it faster, but making it faster using pure multithreading, without getting into SSE or processor-cache-level optimization.

#include <iostream>
#include <sys/time.h>

#include <algorithm>   // std::find, std::min
#include <cstdint>     // uint32_t, uint64_t
#include <cstdlib>     // atoi
#include <cstring>     // memset
#include <vector>
#include <QThreadPool>
#include <QRunnable>
#include "/usr/local/include/tbb/tbb.h"

#define LOG(x) (std::cout << x << std::endl)

struct col4
{
    unsigned char r, g, b, a;
};

class QTileTask : public QRunnable
{
public:
    void run()
    {
        for(uint32_t y = m_yStart; y < m_yEnd; y++)
        {
            int rowStart = y * m_width;
            for(uint32_t x = m_xStart; x < m_xEnd; x++)
            {
                int index = rowStart + x;
                m_pData[index].r = 255;
                m_pData[index].g = 255;
                m_pData[index].b = 255;
                m_pData[index].a = 255;
            }
        }
    }

    col4*          m_pData;
    uint32_t       m_xStart;
    uint32_t       m_yStart;
    uint32_t       m_xEnd;
    uint32_t       m_yEnd;
    uint32_t       m_width;
};

struct TBBTileTask
{
    void operator()()
    {
        for(uint32_t y = m_yStart; y < m_yEnd; y++)
        {
            int rowStart = y * m_width;
            for(uint32_t x = m_xStart; x < m_xEnd; x++)
            {
                int index = rowStart + x;
                m_pData[index].r = 255;
                m_pData[index].g = 255;
                m_pData[index].b = 255;
                m_pData[index].a = 255;
            }
        }
    }

    col4*          m_pData;
    uint32_t       m_xStart;
    uint32_t       m_yStart;
    uint32_t       m_xEnd;
    uint32_t       m_yEnd;
    uint32_t       m_width;
};

struct TBBCaller
{
    TBBCaller(std::vector<TBBTileTask>& t)
        : m_tasks(t)
    {}

    TBBCaller(TBBCaller& e, tbb::split)
        : m_tasks(e.m_tasks)
    {}

    void operator()(const tbb::blocked_range<size_t>& r) const
    {
        for (size_t i=r.begin();i!=r.end();++i)
            m_tasks[i]();
    }

    std::vector<TBBTileTask>& m_tasks;
};

inline double getcurrenttime( void )
{
    timeval t;
    gettimeofday(&t, NULL);
    return static_cast<double>(t.tv_sec)+(static_cast<double>(t.tv_usec) / 1000000.0);
}

char* getCmdOption(char ** begin, char ** end, const std::string & option)
{
    char ** itr = std::find(begin, end, option);
    if (itr != end && ++itr != end)
    {
        return *itr;
    }
    return 0;
}

bool cmdOptionExists(char** begin, char** end, const std::string& option)
{
    return std::find(begin, end, option) != end;
}

void baselineSerial(col4* pData, int resolution)
{
    double t = getcurrenttime();
    for(int y = 0; y < resolution; y++)
    {
        int rowStart = y * resolution;
        for(int x = 0; x < resolution; x++)
        {
            int index = rowStart + x;
            pData[index].r = 255;
            pData[index].g = 255;
            pData[index].b = 255;
            pData[index].a = 255;
        }
    }
    LOG((getcurrenttime() - t) * 1000 << " ms. (Serial)");
}

void baselineParallelQt(col4* pData, int resolution, uint32_t tileSize)
{
    double t = getcurrenttime();

    QThreadPool pool;
    for(int y = 0; y < resolution; y+=tileSize)
    {
        for(int x = 0; x < resolution; x+=tileSize)
        {
            uint32_t xEnd = std::min<uint32_t>(x+tileSize, resolution);
            uint32_t yEnd = std::min<uint32_t>(y+tileSize, resolution);

            QTileTask* task = new QTileTask;
            task->m_pData = pData;
            task->m_xStart = x;
            task->m_yStart = y;
            task->m_xEnd = xEnd;
            task->m_yEnd = yEnd;
            task->m_width = resolution;
            pool.start(task);   // pool takes ownership (QRunnable autoDelete)
        }
    }
    pool.waitForDone();
    LOG((getcurrenttime() - t) * 1000 << " ms. (QThreadPool)");
}

void baselineParallelTBB(col4* pData, int resolution, uint32_t tileSize)
{
    double t = getcurrenttime();

    std::vector<TBBTileTask> tasks;
    for(int y = 0; y < resolution; y+=tileSize)
    {
        for(int x = 0; x < resolution; x+=tileSize)
        {
            uint32_t xEnd = std::min<uint32_t>(x+tileSize, resolution);
            uint32_t yEnd = std::min<uint32_t>(y+tileSize, resolution);

            TBBTileTask task;
            task.m_pData = pData;
            task.m_xStart = x;
            task.m_yStart = y;
            task.m_xEnd = xEnd;
            task.m_yEnd = yEnd;
            task.m_width = resolution;
            tasks.push_back(task);
        }
    }

    TBBCaller caller(tasks);
    tbb::task_scheduler_init init;
    tbb::parallel_for(tbb::blocked_range<size_t>(0, tasks.size()), caller);

    LOG((getcurrenttime() - t) * 1000 << " ms. (TBB)");
}

int main(int argc, char** argv)
{
    int resolution = 1;
    uint32_t tileSize = 64;

    char * pResText = getCmdOption(argv, argv + argc, "-r");
    if (pResText)
    {
        resolution = atoi(pResText);
    }

    char * pTileSizeChr = getCmdOption(argv, argv + argc, "-b");
    if (pTileSizeChr)
    {
        tileSize = atoi(pTileSizeChr);
    }

    if(resolution > 16)
        resolution = 16;

    resolution = resolution << 10;

    uint32_t tileCount = (resolution + tileSize - 1) / tileSize;
    tileCount *= tileCount;

    LOG("Resolution: " << resolution << " Tile Size: "<< tileSize);
    LOG("Tile Count: " << tileCount);

    uint64_t pixelCount = resolution*resolution;
    col4* pData = new col4[pixelCount];

    memset(pData, 0, sizeof(col4)*pixelCount);
    baselineSerial(pData, resolution);

    memset(pData, 0, sizeof(col4)*pixelCount);
    baselineParallelQt(pData, resolution, tileSize);

    memset(pData, 0, sizeof(col4)*pixelCount);
    baselineParallelTBB(pData, resolution, tileSize);

    delete[] pData;

    return 0;
}

lve*_*lla 6

Yes, a 4x speedup is about what should be expected. Hyperthreading is a form of time-sharing implemented in hardware, so you cannot expect to benefit from it if one thread already exhausts all the superscalar pipelines available on the core, which is your case: the other hyperthread simply has to wait.
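One way to check this is to cap the number of worker threads explicitly and compare timings at 4 and 8 threads. Below is a minimal sketch (not part of the original program) that assumes the same TBB install path as in the question; parallel_fill is a hypothetical stand-in for the TBBCaller/parallel_for pair above:

#include "/usr/local/include/tbb/tbb.h"
#include <cstddef>
#include <cstring>

// Fill a byte buffer in parallel with an explicit TBB worker count, so that
// runs with e.g. 4 and 8 threads can be timed and compared directly.
void parallel_fill(unsigned char* data, std::size_t bytes, int nThreads)
{
    tbb::task_scheduler_init init(nThreads);  // caps the number of TBB workers
    tbb::parallel_for(tbb::blocked_range<std::size_t>(0, bytes),
        [=](const tbb::blocked_range<std::size_t>& r)
        {
            std::memset(data + r.begin(), 255, r.end() - r.begin());
        });
}

If the 8-thread run is no faster than the 4-thread run, the extra hyperthreads are adding nothing for this workload.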

You can expect an even lower speedup if the memory-bus bandwidth gets saturated by a number of threads smaller than the total number of available cores. That usually happens when you have too many cores, as in this question:

Why doesn't this code scale linearly?
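As a rough way to see where scaling stops on a given machine, one could sweep the worker count with the hypothetical parallel_fill helper sketched above, reusing getcurrenttime and LOG from the question's listing:

// Sweep the thread count; if the time stops dropping well before 8 threads,
// the memory bus rather than the CPU is the limiting factor for this fill.
for (int n = 1; n <= 8; ++n)
{
    double t = getcurrenttime();
    parallel_fill(reinterpret_cast<unsigned char*>(pData),
                  sizeof(col4) * pixelCount, n);
    LOG("threads=" << n << ": " << (getcurrenttime() - t) * 1000 << " ms.");
}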