Clarification on direction3d Values and Their Relation to extrinsicMatrix and Euler Angle Conventions

#14
by ry-yoshi-yoshi - opened

Hello,

I have a question regarding the calibration.attributes.direction3d values in relation to the camera extrinsicMatrix.

I extract the rotation matrix R from the extrinsicMatrix and compute Euler angles using

from scipy.spatial.transform import Rotation

R = extrinsic[:, :3]  # 3x3 rotation part of the 3x4 extrinsic matrix
roll, pitch, yaw = Rotation.from_matrix(R).as_euler("xyz", degrees=True)

The pitch and yaw match the direction3d values. However, the roll angle does not differ from the direction3d roll by a simple fixed offset. Instead, it roughly satisfies one of these relations, depending on the sign of roll_in_direction3d:

  • roll_computed ≈ 180° - roll_in_direction3d (if roll_in_direction3d > 0)
    or
  • roll_computed ≈ -180° - roll_in_direction3d (if roll_in_direction3d < 0)

calibration.attributes.direction3d -> roll:77.27 pitch:37.45 yaw:7.82
Rotation.from_matrix(R) -> roll:102.73 pitch:37.45 yaw:7.82
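(For these values, 180° - 77.27° = 102.73°, consistent with the first relation above, while pitch and yaw agree.)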

On the other hand, when I compute the camera position from the extrinsic matrix by

import numpy as np

T = extrinsic[:, 3].reshape(3, 1)
camera_position = np.array(-R.T @ T).flatten()  # camera center in world coordinates

the result matches the provided calibration["coordinates"] exactly.

Could you kindly clarify the following points?

  • Is direction3d supposed to be derived from the current extrinsicMatrix?
  • What exact Euler angle convention (axis order and rotation direction) is used for the direction3d values?

Thank you very much for your assistance.

Dear Ryuto Yoshida,

Thank you for your detailed analysis. Based on your information, we validated it using Camera_0000 under Warehouse_000 (the same camera setup you used) and obtained results consistent with direction3d. Please find our responses to your questions below:

  1. Is direction3d supposed to be derived from the current extrinsicMatrix?

Yes, direction3d was derived from the current extrinsicMatrix. However, since the camera's initial up direction in NVIDIA Omniverse differs from the more common convention, the following steps should be performed before calculating the Euler angles:

R = extrinsic[:, :3]

1) Negate the second row of R (corresponding to the Y-axis).

2) Negate the third row of R (corresponding to the Z-axis).

3) Transpose R to obtain the final rotation matrix.

roll, pitch, yaw = Rotation.from_matrix(R).as_euler("XYZ", degrees=True)

  2. What exact Euler angle convention (axis order and rotation direction) is used for the direction3d values?

As shown in the code above, the Euler angle convention used for direction3d is the intrinsic (rotating-axis) convention with an X-Y-Z rotation order (uppercase "XYZ" in SciPy denotes intrinsic rotations).
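For reference, here is a minimal Python sketch of the steps above (a sketch only, not an official snippet; it assumes extrinsic holds the 3x4 extrinsicMatrix from the calibration file):

import numpy as np
from scipy.spatial.transform import Rotation

def extrinsic_to_direction3d(extrinsic):
    # 3x3 rotation part of the 3x4 extrinsic matrix
    R = np.asarray(extrinsic, dtype=float)[:, :3].copy()
    R[1, :] *= -1   # 1) negate the second row (Y-axis)
    R[2, :] *= -1   # 2) negate the third row (Z-axis)
    R = R.T         # 3) transpose to obtain the final rotation matrix
    # uppercase "XYZ" = intrinsic (rotating-axis) X-Y-Z rotation order
    roll, pitch, yaw = Rotation.from_matrix(R).as_euler("XYZ", degrees=True)
    return roll, pitch, yaw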

P.S.
I also tried to reconstruct the rotation matrix R directly from the direction3d values using the following code:

Rotation.from_euler(order, np.array([attributes.direction3d]), degrees=True).as_matrix()

However, the resulting matrix does not match the rotation matrix extracted from extrinsicMatrix[:, :3], even when I try all standard axis orders like "xyz", "zyx", etc. This further suggests that direction3d may not be a direct Euler decomposition of the extrinsic rotation matrix, or that it may be using a different convention or interpretation.

I would appreciate any clarification regarding how exactly direction3d relates to the extrinsicMatrix.

Dear Yuxing Wang,

Thank you very much for the clarification. I happened to send my follow-up message at the same time. I’ll try applying the suggested transformations and confirm the results.

zhengthomastang changed discussion status to closed

Dear Yuxing Wang,
After applying the transformation you mentioned to the rotation matrix, I tried to convert the 2D bounding box visible coordinates to 3D global coordinates using the depth data, but the resulting 3D global coordinates are way off the actual position of the object.
Can you please reconfirm whether the changes you mentioned are correct?

Hi,

I was able to apply the transformations using the negation logic mentioned above. After applying the transformation to each camera's point cloud, the point cloud has to be transformed by the 2D camera position. However, there still seem to be errors in correctly registering the point clouds after these transformations. The following are my steps:

  1. Project the depth map to a point cloud using the intrinsic matrix
  2. Apply the rotation (R from the extrinsic matrix) using the negation logic mentioned above
  3. Convert the point cloud to meters and divide by the scale
  4. Apply the translation (T from the extrinsic matrix) after converting units to meters
  5. Apply the translation to global coordinates (from sensors.translationToGlobalCoordinates) given in the calibration file

Could you please help verify these transforms?

Dear Ryuto Yoshida,

Sorry for the late reply. For point cloud registration, the negation logic for the extrinsic matrix mentioned above is not applicable; the raw extrinsic matrix should be used as-is.

We have verified that the point cloud registration works across cameras using the given calibration information (intrinsic matrix, extrinsic matrix, and depth map), as shown in the attached image. The following is pseudocode for your reference:

  1. Load the intrinsic and extrinsic matrices from the sensor.
    - intrinsic_matrix ← sensor["intrinsicMatrix"]
    - extrinsic_matrix ← sensor["extrinsicMatrix"]

  2. Extend the extrinsic matrix to 4×4 homogeneous form.
    - extended_extrinsic ← identity matrix (4×4)
    - Replace first 3 rows of extended_extrinsic with extrinsic_matrix

  3. Extract camera intrinsics:
    - fx, fy ← focal lengths from intrinsic_matrix
    - cx, cy ← principal point from intrinsic_matrix

  4. Get image dimensions:
    - height ← sensor attribute for image height
    - width ← sensor attribute for image width

  5. Load the depth map and convert depth units to meters.
    - depth_image ← load depth_path using appropriate image reader
    - depth ← depth_image / 1000

  6. Generate pixel coordinate grids:
    - x_coords, y_coords ← meshgrid for image dimensions

  7. (Core) Project 2D pixel coordinates into 3D camera space:
    - x_camera ← (x_coords - cx) × depth / fx
    - y_camera ← (cy - (height - y_coords)) × depth / fy
    - z_camera ← -depth

  8. Flatten all 3D coordinate arrays and stack into N×3 format:
    - pts ← [x_camera, y_camera, -z_camera] as rows

  9. Convert to homogeneous coordinates:
    - pts ← concatenate a column of 1s to make pts of shape N×4

  10. Transform points from camera space to world space:
    - world_pts ← pts × inverse(extended_extrinsic).transpose

  11. Return only the XYZ coordinates from world_pts.

[Attached image: image.png, showing the point cloud registration across cameras]
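For reference, a minimal Python sketch of the pseudocode above (a sketch only, not the dataset's own tooling; it assumes sensor is one camera entry from the calibration file, depth_path points to a depth image stored in millimeters, and the function name is illustrative):

import numpy as np
import cv2

def depth_to_world_points(sensor, depth_path):
    # 1. Intrinsic (3x3) and extrinsic (3x4) matrices from the sensor entry
    intrinsic = np.asarray(sensor["intrinsicMatrix"], dtype=float)
    extrinsic = np.asarray(sensor["extrinsicMatrix"], dtype=float)

    # 2. Extend the extrinsic matrix to 4x4 homogeneous form
    extended_extrinsic = np.eye(4)
    extended_extrinsic[:3, :] = extrinsic

    # 3. Camera intrinsics
    fx, fy = intrinsic[0, 0], intrinsic[1, 1]
    cx, cy = intrinsic[0, 2], intrinsic[1, 2]

    # 4.-5. Load the depth map and convert millimeters to meters
    #       (image dimensions are taken from the depth map itself here)
    depth_image = cv2.imread(depth_path, cv2.IMREAD_UNCHANGED)
    depth = depth_image.astype(float) / 1000.0
    height, width = depth.shape[:2]

    # 6. Pixel coordinate grids
    x_coords, y_coords = np.meshgrid(np.arange(width), np.arange(height))

    # 7. Project 2D pixel coordinates into 3D camera space
    x_camera = (x_coords - cx) * depth / fx
    y_camera = (cy - (height - y_coords)) * depth / fy
    z_camera = -depth

    # 8.-9. Flatten and stack into N x 4 homogeneous points (note the sign flip on z)
    pts = np.stack([x_camera.ravel(), y_camera.ravel(), -z_camera.ravel(),
                    np.ones(depth.size)], axis=1)

    # 10.-11. Transform from camera space to world space and return XYZ
    world_pts = pts @ np.linalg.inv(extended_extrinsic).T
    return world_pts[:, :3]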
