Structure Core color stream augmentation issue

Hi, I’m trying to augment the color stream with object 3D poses computed from the depth stream, but there seems to be a constant translation shift of a few pixels between the displayed model and the real object (rotation looks fine). I’m using the depth frame’s visibleCameraPoseInDepthCoordinateFrame() method and the color frame’s intrinsics to perform the coordinate frame change and the 2D projection. Am I missing something? Is it possible that the camera’s extrinsics and intrinsics are outdated and need recalibration? Speaking of which, I haven’t seen any documentation mentioning a Structure Core calibration process; does it ever need one?
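
To be more specific, here is roughly what my projection pipeline looks like, stripped down to plain structs. The types and function names below are just placeholders (not the SDK API), and I’m assuming the pose returned by visibleCameraPoseInDepthCoordinateFrame() maps the color camera frame into the depth frame, hence the inversion:

```cpp
#include <array>

// Placeholder for the 4x4 pose returned by
// depthFrame.visibleCameraPoseInDepthCoordinateFrame() (row-major rigid transform).
struct Pose4x4 { std::array<std::array<double, 4>, 4> m; };

// Placeholder for the color frame intrinsics (fx, fy, cx, cy in pixels).
struct Intrinsics { double fx, fy, cx, cy; };

struct Vec3  { double x, y, z; };
struct Pixel { double u, v; };

// Move a 3D point expressed in the depth coordinate frame into the color
// ("visible") camera frame. Assuming the pose maps color -> depth, the
// inverse of the rigid transform [R | t] is applied: R^T * (p - t).
Vec3 depthToColorFrame(const Pose4x4& colorPoseInDepth, const Vec3& p)
{
    const auto& m = colorPoseInDepth.m;
    const double x = p.x - m[0][3];
    const double y = p.y - m[1][3];
    const double z = p.z - m[2][3];
    return { m[0][0]*x + m[1][0]*y + m[2][0]*z,
             m[0][1]*x + m[1][1]*y + m[2][1]*z,
             m[0][2]*x + m[1][2]*y + m[2][2]*z };
}

// Plain pinhole projection into the color image (no distortion model).
Pixel projectToColor(const Intrinsics& k, const Vec3& p)
{
    return { k.fx * p.x / p.z + k.cx,
             k.fy * p.y / p.z + k.cy };
}
```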

Thanks for your help,

Regards,

Albert

Hi amurienne:

Thanks for your post! Every sensor goes through a comprehensive calibration process before it is shipped. That’s why we don’t specifically mention it in our README; it’s a standard part of the manufacturing process. I recently tested the mono depth registration and it looked good. We have a depth registration implementation in SCtoSensorMsgConverter.h under the ROS folder; you could take a look and compare it with your implementation.
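
To give you something concrete to compare against, depth registration essentially boils down to: unproject each depth pixel with the depth intrinsics, transform it with the depth-to-color extrinsics, and reproject it with the color intrinsics. Here is a rough, self-contained sketch of that idea; it is not the code from that header, and the struct and function names are only illustrative:

```cpp
#include <cmath>
#include <vector>

struct Cam   { double fx, fy, cx, cy; };          // per-camera intrinsics (pixels)
struct Rigid { double r[3][3]; double t[3]; };    // depth -> color extrinsics

// Reproject every valid depth pixel into the color image and keep the nearest
// hit per color pixel (depth in meters, values <= 0 or NaN are skipped).
std::vector<float> registerDepthToColor(const std::vector<float>& depth,
                                        int dw, int dh,
                                        const Cam& kd, const Cam& kc,
                                        const Rigid& d2c,
                                        int cw, int ch)
{
    std::vector<float> out(static_cast<size_t>(cw) * ch, 0.0f);
    for (int v = 0; v < dh; ++v) {
        for (int u = 0; u < dw; ++u) {
            const float z = depth[static_cast<size_t>(v) * dw + u];
            if (!(z > 0.0f)) continue;  // also skips NaN

            // Unproject the depth pixel with the depth intrinsics.
            const double X = (u - kd.cx) / kd.fx * z;
            const double Y = (v - kd.cy) / kd.fy * z;

            // Transform into the color camera frame.
            const double xc = d2c.r[0][0]*X + d2c.r[0][1]*Y + d2c.r[0][2]*z + d2c.t[0];
            const double yc = d2c.r[1][0]*X + d2c.r[1][1]*Y + d2c.r[1][2]*z + d2c.t[1];
            const double zc = d2c.r[2][0]*X + d2c.r[2][1]*Y + d2c.r[2][2]*z + d2c.t[2];
            if (zc <= 0.0) continue;

            // Reproject with the color intrinsics and keep the closest sample.
            const int uc = static_cast<int>(std::lround(kc.fx * xc / zc + kc.cx));
            const int vc = static_cast<int>(std::lround(kc.fy * yc / zc + kc.cy));
            if (uc < 0 || uc >= cw || vc < 0 || vc >= ch) continue;

            float& dst = out[static_cast<size_t>(vc) * cw + uc];
            if (dst == 0.0f || zc < dst) dst = static_cast<float>(zc);
        }
    }
    return out;
}
```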

Also, some users have experienced poor depth quality because they mounted the Structure Core on a frame that caused some deformation, so you should also check whether your depth quality is good enough. If it isn’t, you can follow the “recalibrate depth” instructions on our forum: https://support.structure.io/article/442-how-to-recalibrate-depth-with-coreplayground. Let me know how it goes!

Best.

Hi @allenicc, unfortunately depth recalibration didn’t solve my issue. For now, after reviewing my code, I still haven’t found the origin of the error. Nevertheless, I’ve come up with a workaround that gives pretty satisfying results, although it isn’t “clean” at all: if I add constant offsets of a few pixels to the color camera’s optical center intrinsics (+6 on cx, -1 on cy), I get pretty good registration (see the snippet below). Any idea?
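
For reference, the workaround is literally just nudging the principal point before projecting (reusing the placeholder Intrinsics struct from my earlier sketch; the offsets are empirical values for my particular unit):

```cpp
// Empirical workaround: shift the color camera's principal point before
// projecting. The +6 / -1 pixel offsets were found by trial and error on
// my unit and are not meant as general values.
Intrinsics applyWorkaroundOffsets(Intrinsics k)
{
    k.cx += 6.0;   // shift optical center 6 px along +u
    k.cy -= 1.0;   // shift optical center 1 px along -v
    return k;
}
```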

Regards