Object tracking in Structure Core

I’m developing a Windows Unity application and need to use my Structure Core to track two known objects (attached to shoes). The camera is stationary; the feet move.

What is the easiest way to do this? I thought it worked out of the box with Structure Core, but I don’t see any docs for it anywhere. Should I be using OpenNI2 or some other library? I haven’t done point cloud work for a few years, so I’m not sure what the current standard is.

Kind regards



I saw the question below that says you don’t offer 6DoF tracking from the Perception Engine on Structure Core yet. When you do, will it include the ability to track objects from a stationary camera?


Hi Steve,

I don’t think you should be using OpenNI2; use the Structure SDK for the Core instead.

If you want to track visible objects and then show them in 3D, you could work from the video feed: train an ML model to detect the objects you want to track, run it on each frame, and then recover the 3D position from the 2D detection, which gives you an accurate 3D point in space. I used Firebase two weeks ago to do face recognition through Android’s camera, and this is basically the same idea: once you have the centre of the detected object, look up that pixel in the point cloud. It takes a bit of time to set up, but you can probably get it working in a day or two.
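To make the 2D-to-3D step concrete, here is a minimal sketch of back-projecting a detected pixel into camera space with the standard pinhole model. The intrinsics (fx, fy, cx, cy) and the example pixel/depth values below are made-up placeholders; in practice you would read the real intrinsics and the depth value at that pixel from the Structure SDK’s camera info and depth frame.

```python
def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth depth_m (metres) into
    camera-space coordinates using the pinhole camera model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    z = depth_m
    return (x, y, z)

# Placeholder intrinsics -- substitute the values reported by your sensor.
fx, fy, cx, cy = 550.0, 550.0, 320.0, 240.0

# Suppose the detector reports the shoe's bounding-box centre at pixel
# (412, 305) and the depth frame reads 1.25 m there:
point = pixel_to_3d(412, 305, 1.25, fx, fy, cx, cy)
print(point)
```

Doing this for each shoe per frame gives you two 3D points you can feed straight into your Unity scene (after converting between the camera's and Unity's coordinate conventions).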

Also, thinking outside the box: you could probably have done this with a Vive and a couple of tracker pucks (one attached to each shoe), and that would work in a Unity application.