Unproject point with a custom depth value?


#1

Hello,

I am getting into the fancy new Core depth scanner and SDK. I’ve been able to capture a point cloud from a single depth frame and then do some processing on it using the Point Cloud Library (PCL), with the end goal of generating a model/mesh of that scene (I’m close but surface reconstruction is still elusive).

I’m scanning a stationary object (a human leg) from a Core at a fixed location (on a tripod angled down at the leg). My goal is to create as accurate a scan of the leg as possible, model it in 3D modelling software to create AFOs (Ankle Foot Orthotics - basically leg braces), and then 3D print them.

I’ve set up a similar proof of concept before with the Sensor using OpenNI. It was close, but the quality was just a tad shy of what we needed. I’m hoping the Core’s improved quality will cross the line and make this a viable option.

I’ve seen that the depth data is not always consistent over multiple frames. If I process, say, 20 consecutive frames from the same scan (repeatable using an OCC recording), the depth at the same x, y point could vary by a few millimetres from frame to frame. Since my goal is accuracy, I am planning to average out the depth over about 1 second’s worth of scan data to get what I hope is the most accurate depth value for that x, y.
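To make it concrete, the per-pixel averaging I have in mind looks roughly like the sketch below. I’m treating the depth frame as a plain row-major buffer of depths in millimetres - the DepthFrameView wrapper and depthAt accessor are just stand-ins since I haven’t checked how the SDK actually exposes the raw values - and I skip readings that come back as zero or NaN so they don’t drag the mean down:

#include <cmath>
#include <cstddef>
#include <vector>

// Stand-in for however the SDK exposes a depth frame; not the real API.
struct DepthFrameView {
    int width = 0;
    int height = 0;
    const float* depthMm = nullptr; // row-major depth in millimetres; 0 or NaN = no reading

    float depthAt(int x, int y) const { return depthMm[y * width + x]; }
};

// Average the depth at each pixel over a set of frames, skipping invalid readings.
std::vector<float> averageDepth(const std::vector<DepthFrameView>& frames)
{
    const int width = frames.front().width;
    const int height = frames.front().height;

    std::vector<float> sum(static_cast<std::size_t>(width) * height, 0.0f);
    std::vector<int> count(sum.size(), 0);

    for (const auto& frame : frames) {
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                const float d = frame.depthAt(x, y);
                if (std::isfinite(d) && d > 0.0f) { // skip pixels with no reading
                    sum[y * width + x] += d;
                    ++count[y * width + x];
                }
            }
        }
    }

    // Divide each pixel by the number of valid samples it actually had
    for (std::size_t i = 0; i < sum.size(); ++i)
        sum[i] = count[i] > 0 ? sum[i] / count[i] : 0.0f;

    return sum;
}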

I was able to do this before with the Sensor and OpenNI using this method:

CoordinateConverter.convertDepthToWorld(data.videoStream, x, y, depth);

However the only method I can see in the Structure SDK that gives me something similar is:

depthFrame.unprojectPoint(x, y)

This uses the depth value at that x, y in the depth frame, and doesn’t let me specify my averaged-out depth.

Is there any method I can use to do this? Alternatively would it be possible to get the code or basic algorithm that is used under this method so I can replicate it in my code?
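For reference, my understanding is that the maths underneath is the standard pinhole back-projection, roughly as sketched below - though I don’t know whether unprojectPoint also applies a distortion correction on top of this, and the Intrinsics struct is just a placeholder for wherever the SDK exposes fx/fy/cx/cy:

#include <array>

// Placeholder for the depth camera intrinsics; field names are assumptions.
struct Intrinsics {
    float fx, fy; // focal lengths in pixels
    float cx, cy; // principal point in pixels
};

// Pinhole back-projection: pixel (x, y) plus a chosen depth z (e.g. my averaged
// value) gives a 3D point in the camera frame.
std::array<float, 3> unprojectWithDepth(const Intrinsics& k, float x, float y, float z)
{
    return { (x - k.cx) * z / k.fx,
             (y - k.cy) * z / k.fy,
             z };
}

If that’s all unprojectPoint does, then I could replicate it myself once I know where to read the intrinsics from.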

I considered instantiating my own DepthFrame instance and populating it with my averaged depth, but that looked scary given internals I don’t know enough about. If that is an option, though, some pseudo code for it would work too.

I’m also open to being told that the averaging out of depths is a bad idea and there are better ways to get the most accurate scans.

Cheers,
Dan


#2

The code below should work for building an averaged set of unprojected points over a set period of time (it averages the unprojected 3D points rather than the raw depth values).

std::vector<STDepthFrame> frames; // Assume you added the last 30s of depth frames to this, though you could add them in any increment you’d like.

const int width = frames[0].width();
const int height = frames[0].height();

std::vector<Vector3f> averagedPoints;
// You should probably ensure every frame you add to `frames` is a valid frame using STDepthFrame::isValid()
// and that the width and height are the same for all frames
averagedPoints.resize(height * width, Vector3f(0, 0, 0));

// Sum the unprojected points across all frames
for (const auto& frame : frames)
{
    for (int row = 0; row < frame.height(); ++row) {
        for (int col = 0; col < frame.width(); ++col) {
            const auto p = averagedPoints.at(row * width + col);
            const auto q = frame.unprojectPoint(col, row);
            // Depending on the number of frames you should probably do a smarter mean operation than this
            averagedPoints.at(row * width + col) = Vector3f(p.x() + q.x(), p.y() + q.y(), p.z() + q.z());
        }
    }
}

// Divide by the number of frames to get the average
const auto numberOfFrames = frames.size();
for (size_t i = 0; i < averagedPoints.size(); ++i) {
    const auto p = averagedPoints.at(i);
    averagedPoints.at(i) = Vector3f(p.x() / numberOfFrames, p.y() / numberOfFrames, p.z() / numberOfFrames);
}