I’m quite new to using the Structure Sensor, which I’m working with on a university project. Little by little I’m starting to understand where to look and how it works. My main task right now is to grab a depth image along with a color image and send them over Wi-Fi.
Considering the depthFrame, I have two solutions for now. The first is to create a UIImage by using the
and set the colorSpace to gray (we do not wish to have RGB colors on the depth image),
but as you can see, the range of the depth information is not that large (probably no more than ~1.20 m, something like that). So I was wondering how
works to create this depth picture: is it using the shiftData or even the depthInMillimeters buffer? Since I wasn’t sure, I gave it a try with
since depth values are stored as uint16_t, I thought the data might contain more precise information. However, the rendering is really not good: you will probably only see a black square, though it is not completely black; there is a very dark gray that I can hardly see even on my full-size image.
I wondered whether this comes from my code, or from the fact that values only range from 0 to 2047 inside the shiftData buffer.
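For what it’s worth, if the shift values only go up to 2047, that alone would explain the near-black render: 2047 out of the full 16-bit range of 65535 is only about 3% brightness. A minimal sketch in plain C of stretching the 11-bit shift range to the full 16-bit gray range before building the image (the `normalizeShift` helper is hypothetical, not part of the SDK, and it assumes the buffer layout from the glReadPixels call below; copy the buffer first if you still need the raw shift values):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical helper: stretch 11-bit shift values (0..2047) to the
 * full 16-bit range (0..65535) so the gray image uses full contrast.
 * Operates in place on the buffer handed to CGImageCreate. */
static void normalizeShift(uint16_t *shift, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        uint32_t v = shift[i];
        if (v > 2047) v = 2047;                    /* clamp, just in case */
        shift[i] = (uint16_t)((v * 65535u) / 2047u);
    }
}
```

Calling something like `normalizeShift(depthFrame.shiftData, width * height)` right after the glReadPixels call should make the structure in the depth image visible, at the cost of the values no longer being raw shifts.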
glReadPixels(0, 0, width, height, GL_RED_EXT, GL_UNSIGNED_SHORT, depthFrame.shiftData);

int bitsPerComponent = 16, bitsPerPixel = 16, bytesPerRow = width * 2;

// Note: sizeof(GL_UNSIGNED_SHORT) measures the enum constant (an int, 4 bytes),
// not the pixel type; sizeof(uint16_t) gives the correct buffer length.
NSData *data = [NSData dataWithBytes:depthFrame.shiftData
                              length:width * height * sizeof(uint16_t)];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
CGImageRef iref = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                colorspace, kCGImageAlphaNone | kCGBitmapByteOrder16Little,
                                provider, NULL, NO, kCGRenderingIntentDefault);
UIImage *myImage = [UIImage imageWithCGImage:iref];

// Core Foundation objects are not managed by ARC and must be released explicitly.
CGImageRelease(iref);
CGColorSpaceRelease(colorspace);
CGDataProviderRelease(provider);
I was not completely sure about the length of the NSData (sizeof(GL_UNSIGNED_SHORT) would be the size of an int constant, not of a uint16_t), but I don’t think it has a huge impact on the rendering.
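As a side note, that length actually is off by a factor of two: sizeof applied to GL_UNSIGNED_SHORT measures the constant itself (an int), not the 16-bit pixel type, so `width * height * sizeof(uint16_t)` would be the intended length. A tiny C check of that claim (the 0x1403 value is GL_UNSIGNED_SHORT’s definition from the OpenGL ES headers):

```c
#include <stdint.h>

/* GL_UNSIGNED_SHORT is defined as the constant 0x1403 in the GL headers;
 * sizeof applied to it measures the constant's type (int), not a pixel. */
#define GL_UNSIGNED_SHORT 0x1403

_Static_assert(sizeof(GL_UNSIGNED_SHORT) == sizeof(int),
               "the enum constant is int-sized, typically 4 bytes");
_Static_assert(sizeof(uint16_t) == 2,
               "the actual pixel type is 2 bytes");
```

The oversized NSData does not corrupt the image (CGImageCreate only reads what bytesPerRow * height requires), but it doubles the payload if the same buffer is sent over Wi-Fi.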
Considering the first pictures, it would help me to have some opinions on whether, and how, I could get more precision.
My second issue is the camera picture: it seems that whatever picture is saved is grayscale (the thumbnail is, and so is the JPEG created when scanning), so I’m not sure how I could get it with RGB colors. Should I try to work on the sampleBuffer of the colorFrame, or on the chroma texture?
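If the color frames come in as bi-planar YCbCr (which would match the separate luma and chroma textures), one option is to convert the two planes to RGB yourself. Below is a minimal sketch in plain C of the standard full-range BT.601 conversion for a single pixel; the `ycbcr_to_rgb` name is mine, and the assumption that the chroma plane holds interleaved Cb/Cr pairs (NV12-style) is mine as well, so check it against the actual pixel format of the sampleBuffer:

```c
#include <stdint.h>

/* Clamp an intermediate value to the 0..255 byte range. */
static uint8_t clamp_u8(int v) { return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v)); }

/* Hypothetical helper: convert one full-range BT.601 YCbCr sample to RGB.
 * y is the luma byte; cb and cr are the chroma bytes, centered at 128. */
static void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                         uint8_t *r, uint8_t *g, uint8_t *b)
{
    int c = y;
    int d = cb - 128;
    int e = cr - 128;
    /* Fixed-point BT.601 full-range coefficients, scaled by 256. */
    *r = clamp_u8(c + (359 * e) / 256);
    *g = clamp_u8(c - (88 * d + 183 * e) / 256);
    *b = clamp_u8(c + (454 * d) / 256);
}
```

Running this per pixel (sampling the half-resolution chroma plane once per 2x2 luma block) gives an RGB buffer that could then go through the same CGImageCreate path as above, with an RGB color space instead of the gray one.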