Hi, I’m trying to augment the color stream with 3D object poses computed from the depth stream, but there is a constant translation offset of a few pixels between the rendered model and the real object (the rotation looks correct). I’m using the depth frame’s visibleCameraPoseInDepthCoordinateFrame() method and the color frame’s intrinsics to perform the coordinate-frame change and the 2D projection. Am I missing something? Is it possible that the camera’s extrinsics and intrinsics are outdated and need recalibration? On that note, I haven’t seen any documentation mentioning a Structure Core calibration process: does it sometimes need one?
Thanks for your help,