Green image when converting BGRA to YUV444 with a DirectX11 pixel shader

Tags: yuv, hlsl, directx-11, pixel-shader, rgba

  I am new to HLSL. I am trying to convert the color space of an image captured with the DXGI Desktop Duplication API from BGRA to YUV444, using a texture as the render target.
  I have set up a pixel shader to perform the required conversion. I then read 4:2:0 subsampled YUV back from the render-target texture and encode it to H.264 with ffmpeg, and I can see the image.
  The problem is: it comes out green.
  The shader's input color is a floating-point value, but the coefficient matrix for the RGB-to-YUV conversion assumes integer (0-255) color values.
  If I clamp and convert the input color to integers, I lose precision.
  Any suggestions or pointers are welcome. Let me know if any other information would help.
  I suspect the pixel shader I wrote, since this is my first time using one. Here is the pixel shader:

float3 rgb_to_yuv(float3 RGB)
{
    float y = dot(RGB, float3(0.29900f, -0.16874f, 0.50000f));
    float u = dot(RGB, float3(0.58700f, -0.33126f, -0.41869f));
    float v = dot(RGB, float3(0.11400f, 0.50000f, -0.08131f));
    return float3(y, u, v);
}
float4 PS(PS_INPUT input) : SV_Target
{
    float4 rgba, yuva;
    rgba = tx.Sample(samLinear, input.Tex);
    float3 ctr = float3(0, 0, .5f);
    return float4(rgb_to_yuv(rgba.rgb) + ctr, rgba.a);
}

  The render target is mapped to a CPU-readable texture, and the YUV444 data is copied into three BYTE arrays and handed to the ffmpeg libx264 encoder, which writes the encoded packets to a video file.
  Here I take one U (Cb) and one V (Cr) sample, plus four Y values, per 2x2 block of pixels.
  This is how I retrieve the yuv420 data from the texture:

for (size_t h = 0, uvH = 0; h < desc.Height; ++h)
{
    for (size_t w = 0, uvW = 0; w < desc.Width; ++w)
    {
        dist = resource1.RowPitch * h + w * 4;
        distance = resource.RowPitch * h + w * 4;
        distance2 = inframe->linesize[0] * h + w;
        data = sptr[distance + 2];          // R channel of the BGRA texel holds Y
        pY[distance2] = data;
        if (w % 2 == 0 && h % 2 == 0)       // one chroma sample per 2x2 block
        {
            data1 = sptr[distance + 1];     // G channel holds U
            distance2 = inframe->linesize[1] * uvH + uvW++;
            pU[distance2] = data1;
            data1 = sptr[distance];         // B channel holds V
            pV[distance2] = data1;
        }
    }
    if (h % 2)
        uvH++;
}

EDIT1: Adding the blend state desc:

D3D11_BLEND_DESC BlendStateDesc;
    BlendStateDesc.AlphaToCoverageEnable = FALSE;
    BlendStateDesc.IndependentBlendEnable = FALSE;
    BlendStateDesc.RenderTarget[0].BlendEnable = TRUE;
    BlendStateDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
    BlendStateDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
    BlendStateDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
    BlendStateDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
    BlendStateDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
    BlendStateDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
    BlendStateDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
    hr = m_Device->CreateBlendState(&BlendStateDesc, &m_BlendState);
FLOAT blendFactor[4] = {0.f, 0.f, 0.f, 0.f};
    m_DeviceContext->OMSetBlendState(nullptr, blendFactor, 0xffffffff);
    m_DeviceContext->OMSetRenderTargets(1, &m_RTV, nullptr);
    m_DeviceContext->VSSetShader(m_VertexShader, nullptr, 0);
    m_DeviceContext->PSSetShader(m_PixelShader, nullptr, 0);
    m_DeviceContext->PSSetShaderResources(0, 1, &ShaderResource);
    m_DeviceContext->PSSetSamplers(0, 1, &m_SamplerLinear);
    m_DeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

EDIT2: For the same RGB input of 48 45 45, the YUV values computed on the CPU are 45 200 170, while the values after the pixel shader, which uses floating-point math, are 86 141 104. Why the difference?

Answer (score 1):

It looks like your matrix is transposed.

According to www.martinreddy.net/gfx/faqs/colorconv.faq [6.4] ITU.BT-601 Y'CbCr:

Y'= 0.299*R' + 0.587*G' + 0.114*B'
Cb=-0.169*R' - 0.331*G' + 0.500*B'
Cr= 0.500*R' - 0.419*G' - 0.081*B'

You have misunderstood how numpy.dot behaves in the source you copied the coefficients from.

Also, it looks like @harold is right: you should offset both U and V.