How to get more precise depth and color images


#1

Hello everyone,

I’m quite new to using the Structure Sensor, which I’m working with on a university project. Little by little I’m starting to understand where to look and how it works. My main task right now is to grab a depth image along with a color image and send them over Wi-Fi.

Considering the depthFrame, I have two approaches for now. The first is to create a UIImage using the

_depthAsRgbaVisualizer.rgbaBuffer

and set the color space to grayscale (we do not want RGB colors on the depth image).
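Concretely, that first approach looks roughly like this (a sketch; I’m assuming rgbaBuffer holds width * height * 4 bytes of RGBA8 data and that the visualizer renders gray, so all channels are equal):

// Keep one channel per pixel and build an 8-bit grayscale CGImage from it
// (assumes rgbaBuffer is width * height * 4 bytes of RGBA8 with R == G == B).
const uint8_t *rgba = _depthAsRgbaVisualizer.rgbaBuffer;
size_t count = (size_t)width * height;
uint8_t *gray = malloc(count);
for (size_t i = 0; i < count; i++) {
    gray[i] = rgba[4 * i]; // take the red channel
}
NSData *grayData = [NSData dataWithBytesNoCopy:gray length:count freeWhenDone:YES];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)grayData);
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGImageRef cgImage = CGImageCreate(width, height, 8, 8, width, graySpace,
                                   kCGImageAlphaNone, provider, NULL, NO,
                                   kCGRenderingIntentDefault);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGColorSpaceRelease(graySpace);
CGDataProviderRelease(provider);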

[attached depth images: 174_depth, 262_depth]

but as you can see, the range of the depth information is not that large (probably no more than ~1.20 m, something like that). So I was wondering how

convertDepthFrameToRgba

works to create this depth picture: does it use the shift buffer, or even the depthInMillimeters buffer? Since I wasn’t sure, I gave it a try with

depthFrame.shiftData

since depth values are stored as uint16_t, I thought the data might contain more precise information. However, the rendering is really not good: you will probably only see a black square, though it is not completely black; there is a very dark gray that even I can hardly see on my full-size image.

[attached depth image: 55_1_1_0depth]

I wondered whether this comes from my code or from the fact that values only go from 0 to 2047 inside the shiftData buffer: rendered as 16-bit grayscale, 2047 out of 65535 is only about 3% of full brightness, so the whole image ends up nearly black. Here is my code:

glPixelStorei(GL_PACK_ALIGNMENT, 2);
glReadPixels(0, 0, width, height, GL_RED_EXT, GL_UNSIGNED_SHORT, depthFrame.shiftData);
int bitsPerComponent = 16, bitsPerPixel = 16, bytesPerRow = width * 2;

// Each shift value is a uint16_t (2 bytes); note that sizeof(GL_UNSIGNED_SHORT)
// would be wrong here, since GL_UNSIGNED_SHORT is an enum constant and that
// expression is sizeof(int) == 4, not the pixel size.
NSData *data = [NSData dataWithBytes:depthFrame.shiftData
                              length:width * height * sizeof(uint16_t)];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
CGImageRef iref = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel,
                                bytesPerRow, colorspace,
                                kCGImageAlphaNone | kCGBitmapByteOrder16Little,
                                provider, NULL, NO, kCGRenderingIntentDefault);
UIImage *myImage = [UIImage imageWithCGImage:iref];

// Release the Core Graphics objects to avoid leaks.
CGImageRelease(iref);
CGColorSpaceRelease(colorspace);
CGDataProviderRelease(provider);

I was not completely sure about the length of the NSData, but I don’t think it has a huge impact.
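If the near-black image really is just the 0–2047 range, one thing I might try (a sketch; assuming shiftData holds width * height uint16_t values) is to stretch the shift range to the full 16-bit range before building the image:

// Hypothetical rescaling step: stretch shift values (0..2047) to the full
// uint16_t range (0..65535) so the 16-bit grayscale image becomes visible.
const uint16_t *shift = depthFrame.shiftData;
size_t count = (size_t)width * height;
uint16_t *scaled = malloc(count * sizeof(uint16_t));
for (size_t i = 0; i < count; i++) {
    scaled[i] = shift[i] << 5; // 65535 / 2047 ≈ 32, so << 5 is a cheap approximation
}
NSData *scaledData = [NSData dataWithBytesNoCopy:scaled
                                          length:count * sizeof(uint16_t)
                                    freeWhenDone:YES];
// ...then feed scaledData to the same CGDataProvider/CGImageCreate code as above.

The values are no longer raw shifts after this, but at least the gradient should be visible.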

Given the first pictures, it would help me to get some opinions on whether, and how, I could get more precision.

My second issue is the camera picture: it seems like whatever picture is saved is in grayscale (the thumbnail is, and so is the JPEG created when scanning), so I’m not sure how I could get it with RGB colors. Should I try to work on it via the sampleBuffer of the colorFrame, or on the chroma texture?
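For instance, I was thinking of something along these lines for the colorFrame (a sketch; I’m assuming colorFrame.sampleBuffer is a CMSampleBufferRef whose pixel format Core Image can read, such as BGRA or bi-planar YCbCr):

// Let Core Image handle the YCbCr-to-RGB conversion.
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(colorFrame.sampleBuffer);
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *ciContext = [CIContext contextWithOptions:nil]; // reuse one context in real code
CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:ciImage.extent];
UIImage *rgbImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);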


#2

I think you will find what you need if you look through the Viewer sample app. It includes a function that does the same thing as convertDepthFrameToRgba with the STDepthToRgbaStrategyRedToBlueGradient strategy. Look for the function

- (void)convertShiftToRGBA:(const uint16_t*)shiftValues depthValuesCount:(size_t)depthValuesCount
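The general idea is just to normalize each shift value and map it onto a gradient. A rough sketch of that kind of mapping (my own illustration, not the sample’s exact code):

// Normalize each 0..2047 shift value, then map it onto a
// red (near) to blue (far) gradient.
void convertShiftToRGBASketch(const uint16_t *shiftValues,
                              uint8_t *rgbaBuffer,
                              size_t count)
{
    for (size_t i = 0; i < count; i++) {
        float v = shiftValues[i] / 2047.0f;                     // [0, 1]
        rgbaBuffer[4 * i + 0] = (uint8_t)((1.0f - v) * 255.0f); // red, near
        rgbaBuffer[4 * i + 1] = 0;
        rgbaBuffer[4 * i + 2] = (uint8_t)(v * 255.0f);          // blue, far
        rgbaBuffer[4 * i + 3] = 255;                            // opaque
    }
}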


#3

Thank you for pointing out where to look; I finally got what I was looking for. I think that during one of my attempts I was not that far off, but I messed up the data provider size (the documentation says depthFrame.shiftData holds ‘width * height’ 16-bit shift values), so when I was initializing my CGDataProviderRef I used that same size. But maybe, since I set

bytesPerRow = width*2;

I should also have used

CGDataProviderCreateWithData(NULL, depthFrame.shiftData, width * height * 2, NULL);

instead of

CGDataProviderCreateWithData(NULL, depthFrame.shiftData, width * height, NULL);

but I’m not completely sure why. I suspect it is because CGDataProviderCreateWithData takes its size in bytes, and each 16-bit shift value occupies 2 bytes. Well, in any case, the result is much nicer.
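Spelled out with the byte math (the size parameter of CGDataProviderCreateWithData counts bytes, not values):

// The provider size is in bytes: width * height shift values,
// each sizeof(uint16_t) == 2 bytes, which also equals bytesPerRow * height.
size_t sizeInBytes = width * height * sizeof(uint16_t); // == width * height * 2
CGDataProviderRef provider =
    CGDataProviderCreateWithData(NULL, depthFrame.shiftData, sizeInBytes, NULL);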

[attached depth image: 41_depth]