How to understand the head pose estimation angles in Python with OpenCV?

Chr*_*ice · python · opencv · euler-angles · pose-estimation

I am working through the Python and OpenCV head pose estimation tutorial found here:

https://www.learnopencv.com/head-pose-estimation-using-opencv-and-dlib/

I am able to project the 3D points onto the 2D image accurately. However, I cannot understand the practical meaning of the Euler angles (yaw, pitch, roll) that I compute with cv2.decomposeProjectionMatrix.

I need to know whether these values correspond to (yaw, pitch, roll), (roll, pitch, yaw), or some other order. I also need to understand the orientation of the axes that are used, so that I know where the rotations in degrees are measured from.

Output image: https://www.learnopencv.com/wp-content/uploads/2016/09/head-pose-example-1024x576.jpg

Output angles: [[-179.30011146], [ 53.77756583], [-176.6277211 ]]

Here is my code:

# --- Imports ---
import cv2
import numpy as np
# --- Main ---
if __name__ == "__main__":

    # Read Image
    im = cv2.imread("headPose.jpg")
    size = im.shape

    # 2D image points. If you change the image, you need to change this vector
    image_points = np.array([
                                (359, 391),     # Nose tip
                                (399, 561),     # Chin
                                (337, 297),     # Left eye left corner
                                (513, 301),     # Right eye right corner
                                (345, 465),     # Left Mouth corner
                                (453, 469)      # Right mouth corner
                            ], dtype="double")

    # 3D model points.
    model_points = np.array([
                                (0.0, 0.0, 0.0),             # Nose tip
                                (0.0, -330.0, -65.0),        # Chin
                                (-225.0, 170.0, -135.0),     # Left eye left corner
                                (225.0, 170.0, -135.0),      # Right eye right corner
                                (-150.0, -150.0, -125.0),    # Left Mouth corner
                                (150.0, -150.0, -125.0)      # Right mouth corner
                            ])

    # Camera internals
    focal_length = size[1]
    center = (size[1]/2, size[0]/2)
    camera_matrix = np.array(
                             [[focal_length, 0, center[0]],
                             [0, focal_length, center[1]],
                             [0, 0, 1]], dtype = "double"
                             )

    # Lens distortion - assumed to be zero
    dist_coeffs = np.zeros((4,1)) 

    # Solve the Perspective-n-Point problem for the rotation and translation vectors
    (_, rvec, tvec) = cv2.solvePnP(model_points, image_points, camera_matrix, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)

    # Calculate Euler angles
    rmat = cv2.Rodrigues(rvec)[0] # rotation matrix
    pmat = np.hstack((rmat, tvec)) # projection matrix
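    # Per the OpenCV docs, decomposeProjectionMatrix returns (cameraMatrix,
    # rotMatrix, transVect, rotMatrX, rotMatrY, rotMatrZ, eulerAngles), so
    # the [-1] below picks out the Euler angles, which are given in degrees.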
    eulers = cv2.decomposeProjectionMatrix(pmat)[-1]
    print(eulers)

    # Projecting a 3D point
    ## features
    for p in image_points:
        cv2.circle(im, (int(p[0]), int(p[1])), 3, (0,0,255), -1)

    ## project a 3D point 1000 units in front of the nose tip (along the model z-axis)
    proj = np.array([(0., 0., 1000.)])
    (poi1, jacobian1) = cv2.projectPoints(model_points[0]+proj, rvec, tvec, camera_matrix, dist_coeffs)

    ## 2D space    
    p1 = ( int(image_points[0][0]), int(image_points[0][1]))
    c1 =  ( int(poi1[0][0][0]), int(poi1[0][0][1]))

    cv2.line(im, p1, c1, (255,0,0), 2)

    # Display image
    cv2.imshow("Output", im)
    cv2.waitKey(0)

Test image: https://www.learnopencv.com/wp-content/uploads/2016/09/headPose.jpg

Thanks!

Answer from zte*_*ffi:

cv2.decomposeProjectionMatrix is a way of converting a rotation matrix to Euler angles (roll, pitch, yaw). The orientation depends on model_points, and the model seems to face in the camera's direction upside-down (so a person looking straight at the camera should have a yaw and roll of about 180 degrees).
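As a side note (my own addition, not from the article): two of the angles in the question's output sit near ±180°, which matches the upside-down orientation described above. A minimal sketch, assuming you just want such a component re-centred so that a frontal face reads close to 0°:

def recenter(angle_deg):
    # Shift by 180 degrees and wrap back into [-180, 180), so a value that
    # is reported as roughly +/-180 for a frontal face comes out near 0.
    return (angle_deg % 360.0) - 180.0

# The first and third angles from the question's output are near +/-180:
print(recenter(-179.30011146))   # ~ 0.70
print(recenter(-176.6277211))    # ~ 3.37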

Here is the code from the article (with x, y, z renamed to roll, pitch, yaw). The angles are in radians.

import math
import numpy as np

# Calculates rotation matrix to euler angles.
# The result is the same as MATLAB except the order
# of the euler angles (roll and yaw are swapped).
def rotationMatrixToEulerAngles(R):
    
    sy = math.sqrt(R[0,0] * R[0,0] +  R[1,0] * R[1,0])
    
    singular = sy < 1e-6

    if not singular:
        roll = math.atan2(R[2,1], R[2,2])
        pitch = math.atan2(-R[2,0], sy)
        yaw = math.atan2(R[1,0], R[0,0])
    else:
        roll = math.atan2(-R[1,2], R[1,1])
        pitch = math.atan2(-R[2,0], sy)
        yaw = 0

    return np.array([roll, pitch, yaw])
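For example, a usage sketch (this assumes the rmat variable from the question's script, i.e. the 3x3 matrix returned by cv2.Rodrigues(rvec), and that numpy is imported as np):

# Hypothetical usage, continuing from the function above and reusing the
# rotation matrix computed in the question (rmat = cv2.Rodrigues(rvec)[0]):
roll, pitch, yaw = np.degrees(rotationMatrixToEulerAngles(rmat))
print("roll=%.1f, pitch=%.1f, yaw=%.1f (degrees)" % (roll, pitch, yaw))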