We are building an iOS app (using a Swift conversion of the demo scanning app as a base) in which we capture point clouds of a small object and run our own calculations on the depth frames. At the tap of a button we write depth frames to file. Our plan is then to use the camera pose for each frame to merge every captured depth-frame point cloud into a single point cloud (roughly the merge step sketched below). To keep the number of points we process to a minimum, we are trying to use one of the SDK methods to get a mask telling us which points in the depth frame lie inside the cube the user initially placed.
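For context, this is roughly the merge step we have in mind. Everything here (the Intrinsics struct, the buffer layout, the pose convention) is our own placeholder code, not SDK API; in practice we would feed it the intrinsics and camera pose the SDK reports for each frame:

```swift
import simd

/// Pinhole intrinsics for the depth camera (placeholder struct; in the
/// real app these values come from the SDK).
struct Intrinsics {
    var fx: Float, fy: Float, cx: Float, cy: Float
}

/// Unproject one depth frame and transform it into world space using the
/// camera pose recorded for that frame, appending the result to a running
/// cloud. `depth` is row-major metres; NaN or 0 marks invalid pixels.
func appendFrame(depth: [Float],
                 width: Int,
                 height: Int,
                 intrinsics k: Intrinsics,
                 cameraToWorld pose: simd_float4x4,
                 into cloud: inout [SIMD3<Float>]) {
    for v in 0..<height {
        for u in 0..<width {
            let z = depth[v * width + u]
            guard z.isFinite, z > 0 else { continue }
            // Back-project the pixel into the camera frame.
            let x = (Float(u) - k.cx) * z / k.fx
            let y = (Float(v) - k.cy) * z / k.fy
            // Move it into the shared world frame via this frame's pose.
            let world = pose * SIMD4<Float>(x, y, z, 1)
            cloud.append(SIMD3<Float>(world.x, world.y, world.z))
        }
    }
}
```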
However, when we fetch the mask for the current depth frame using cameraPoseInitializer.detectInnerPixels (where cameraPoseInitializer is our instance of STCameraPoseInitializer), it does not give us the mask we expect.
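For reference, this is roughly our call site. The only identifiers taken from our real code are cameraPoseInitializer and detectInnerPixels; the parameter labels and buffer handling are illustrative guesses, since we could not find the method documented:

```swift
// Illustrative call-site sketch; detectInnerPixels' real parameter labels
// may differ, as we could not find the method in the documentation.
var innerMask = [UInt8](repeating: 0, count: depthFrameWidth * depthFrameHeight)

innerMask.withUnsafeMutableBufferPointer { buffer in
    // Ask the pose initializer which depth pixels fall inside the placed cube.
    cameraPoseInitializer.detectInnerPixels(depthFrame: depthFrame,
                                            mask: buffer.baseAddress!)
}

// We then save innerMask alongside the depth frame; 255 marks "inside the cube".
```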
The mask it returns seems to assume that the cube is always in the same position relative to the Structure Sensor. That is to say, it works fine for my first depth frame, when the cube is straight ahead and 50 cm away, but if I then move sideways until the cube is on the right side of my preview screen, the mask acts as if the cube were still straight ahead, and gives values of 255 for whatever happens to be in that displaced cube region instead of where the cube currently is!
Some images to illustrate my issue:
Colour 1: https://ibb.co/eGNaa5
Colour 2: https://ibb.co/dyRe2k
Photo 1: https://ibb.co/iHFp2k
Photo 2: https://ibb.co/duxf8Q
Mask 1: https://ibb.co/bTb7oQ
Mask 2: https://ibb.co/bxfdv5
I'm clearly using this 'detectInnerPixels' method wrong (perhaps calling it on the wrong object, though I wouldn't know what else to call it on; I cannot find any details about it in the documentation).
Any advice to remedy this situation would be appreciated, whether it utilises the aforementioned method or not! (The manual cube test sketched below is the kind of fallback we have in mind.)
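In case it helps frame an answer, this is the manual fallback we are considering: skip detectInnerPixels entirely and test each camera-space point against the cube volume ourselves, using the tracker's pose for the current frame and the cube's world pose captured at placement. All the names here (worldFromCamera, cubeFromWorld, cubeSize) are our own placeholders, and we are assuming the cube's local origin sits at its centre:

```swift
import simd

/// Build the inner-pixel mask ourselves: transform each camera-space point
/// into the cube's local frame and do an axis-aligned bounds test.
/// `worldFromCamera` is the tracker pose for this frame; `cubeFromWorld`
/// is the inverse of the cube's world pose captured at placement.
func cubeMask(points cameraPoints: [SIMD3<Float>],
              worldFromCamera: simd_float4x4,
              cubeFromWorld: simd_float4x4,
              cubeSize: SIMD3<Float>) -> [UInt8] {
    // Compose one transform taking camera-space points into cube space.
    let cubeFromCamera = cubeFromWorld * worldFromCamera
    let halfSize = cubeSize / 2
    return cameraPoints.map { p -> UInt8 in
        let local = cubeFromCamera * SIMD4<Float>(p, 1)
        let q = SIMD3<Float>(local.x, local.y, local.z)
        // Assumes the cube's local origin is at its centre; 255 marks
        // "inside", matching what we expect from detectInnerPixels.
        return all(abs(q) .<= halfSize) ? 255 : 0
    }
}
```

Does that direction make sense, or is there a supported way to make detectInnerPixels respect the current camera pose?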