Save depth image along with color image?


#1

Hello everyone.

I am currently working on a project involving 3D object recognition from RGB-D data obtained from the Structure sensor. For this, I need to create a database of such images, and consequently be able to save pairs of RGB and depth images streamed from the sensor.

The Viewer sample app creates a UIImage from both the color and the colored depth data in order to display them, as follows:

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

CGBitmapInfo bitmapInfo;
bitmapInfo = (CGBitmapInfo)kCGImageAlphaNoneSkipLast;
bitmapInfo |= kCGBitmapByteOrder32Big;


NSData *data = [NSData dataWithBytes:_coloredDepthBuffer length:cols * rows * 4];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data); // toll-free bridging under ARC

CGImageRef imageRef = CGImageCreate(cols,                       //width
                                    rows,                        //height
                                    8,                           //bits per component
                                    8 * 4,                       //bits per pixel
                                    cols * 4,                    //bytes per row
                                    colorSpace,                  //Quartz color space
                                    bitmapInfo,                  //Bitmap info (alpha channel?, order, etc)
                                    provider,                    //Source of data for bitmap
                                    NULL,                        //decode
                                    false,                       //pixel interpolation
                                    kCGRenderingIntentDefault);  //rendering intent

// Assign CGImage to UIImage
_currentColoredDepthImage = [UIImage imageWithCGImage:imageRef];
_depthImageView.image = _currentColoredDepthImage;

Here the UIImage is assigned to an image view for display, but since it is a UIImage object we can also save it to the iPad (by converting it to PNG data, for instance). This works for color images, but in my case I also want to save the depth image as a floating-point PNG. Is there a more direct way to save pairs of depth and RGB images to disk? If not, which parameters should I use to create a floating-point depth image that can be saved later?

Regards,
Aurélien Ducournau


#2

Look in the following function. This is where your application receives a synchronized frame:

// This synchronized API will only be called when two frames match. Typically, timestamps are within 1ms of each other.
// Two important things have to happen for this method to be called:
// Tell the SDK we want framesync with options @{kSTFrameSyncConfigKey : @(STFrameSyncDepthAndRgb)} in [STSensorController startStreamingWithOptions:error:]
// Give the SDK color frames as they come in:     [_ocSensorController frameSyncNewColorBuffer:sampleBuffer];
- (void)sensorDidOutputSynchronizedDepthFrame:(STDepthFrame *)depthFrame
                                andColorFrame:(STColorFrame *)colorFrame

The property depthInMillimeters in STDepthFrame should give you what you need as a contiguous float buffer that you can write to disk or whatever (using the width and height members to determine that chunk of memory’s size).

For example:

NSData *depthFrameData = [NSData dataWithBytes:depthFrame.depthInMillimeters
                                        length:depthFrame.width * depthFrame.height * 4];
[depthFrameData writeToFile:depthFramePath atomically:YES];

Writing the color frame to disk as a nice jpg or png is slightly more involved, but not bad. If you just want the 640x480 frames used in the Viewer project, it is pretty straightforward to turn the colorFrame.sampleBuffer into a UIImage (as referenced in your post) and then save the UIImage as a png or jpg. For example:

//write UIImage to disk as a jpeg
[UIImageJPEGRepresentation(image, 0.9) writeToFile:jpgPath atomically:YES];

If you want to save a high resolution image like the ones used in the scanning applications in the .5+ SDK this process is more involved as the sample buffer uses a different pixel format. You may also want to consider not using kSTHoleFilterConfigKey as this may distort (or could help) your results.

Good luck!


#3

@contact4 Were you able to correctly save the depth map image with the values in mm?

@abryden I did as you mentioned, but I am not able to save the depth map correctly locally:

NSData *depthFrameData = [NSData dataWithBytes:depthFrame.depthInMillimeters
                                        length:depthFrame.width * depthFrame.height * 4];
[depthFrameData writeToFile:depthFramePath atomically:YES];
UIImage *depthImage = [[UIImage alloc] initWithData:depthFrameData];
NSLog(@"depth image size height %f", depthImage.size.height);
NSLog(@"depth image size width %f", depthImage.size.width);
NSData *depthImageData = UIImagePNGRepresentation(depthImage);

But it does not work, and I have nothing in my depthImageData…

Thanks in advance


#4

In my original post the only thing that happens with the depth data is directly writing the depth frame to disk. This will produce a file with a bunch of floats representing depth samples and no other information.

The section talking about writing the frame to disk as a jpg refers to the color frame. When you call initWithData with the depth floats, UIImage can't know what to do with this data because it contains no image-format signature or marker segment.

What are you trying to do? You might be best served by using the float buffer directly or using the methods Occipital demonstrates to visualize the depth frame. If you really need a 32 bit single channel float image of the depth frame you will need to investigate the best way to achieve this. You will probably need to pack the floats into multiple channels and then save the image.
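As a sketch of the packing idea (in plain C, since the bit manipulation itself does not depend on UIKit; the function names are hypothetical): reinterpret each 32-bit depth float as four 8-bit values and spread them across the R, G, B, and A channels of an ordinary RGBA image. Saved with a lossless codec such as PNG, the bits survive intact and the exact floats can be recovered on load; a lossy codec like JPEG would destroy them.

```c
#include <stdint.h>
#include <string.h>

/* Pack width*height depth floats into an RGBA byte buffer (4 bytes per
   pixel) by splitting each float's bit pattern across the channels. */
static void pack_depth_to_rgba(const float *depth, uint8_t *rgba, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        uint32_t bits;
        memcpy(&bits, &depth[i], sizeof bits);   /* type-pun via memcpy, not a pointer cast */
        rgba[4 * i + 0] = (uint8_t)(bits >> 0);  /* R */
        rgba[4 * i + 1] = (uint8_t)(bits >> 8);  /* G */
        rgba[4 * i + 2] = (uint8_t)(bits >> 16); /* B */
        rgba[4 * i + 3] = (uint8_t)(bits >> 24); /* A */
    }
}

/* Inverse: reassemble the original floats from the four channels. */
static void unpack_rgba_to_depth(const uint8_t *rgba, float *depth, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        uint32_t bits = (uint32_t)rgba[4 * i + 0]
                      | (uint32_t)rgba[4 * i + 1] << 8
                      | (uint32_t)rgba[4 * i + 2] << 16
                      | (uint32_t)rgba[4 * i + 3] << 24;
        memcpy(&depth[i], &bits, sizeof bits);
    }
}
```

The resulting RGBA buffer can then be wrapped in a CGImage (much like the colored-depth snippet in the first post) and saved with UIImagePNGRepresentation; the channel values will look like noise when viewed, but the round trip is exact.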


#5

@abryden Thank you for your answer.

Actually, I would like to save the RGB color image along with the depth frame image, in order to later apply computer vision algorithms to them.
I would like to have a 32-bit float depth image so I can then process it.

“You will probably need to pack the floats into multiple channels and then save the image.”

Do you have any clues as to how I can achieve that? Or is there another way to save the raw depth image?

Thank you !


#6

The line:
[depthFrameData writeToFile:depthFramePath atomically:YES];

will do that to a file at depthFramePath. You will need to either know the height and width of the depth image when you process it, or save a metadata file (text or JSON is best) alongside the raw frame file.

One approach is to grab the depth frame, the color frame, and any metadata of your choosing as separate buffers, and store them in a zip file: a package of correlated data described by the meta file. If you wanted to use these in OpenCV, you could either unzip this package onto the file system and then load the color and raw depth buffers into OpenCV Mats (using the metadata file to define the dimensions of the depth image and anything else of interest), or load them directly out of the zip file.
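As a sketch of the consumer side (plain C, with a hypothetical file name): given a raw file of width*height contiguous 32-bit floats, as written by the writeToFile: snippet above, plus dimensions taken from a sidecar metadata file, the depth image can be reconstructed like this:

```c
#include <stdio.h>
#include <stdlib.h>

/* Load a raw depth frame stored as width*height contiguous 32-bit floats.
   The file carries no header, so the dimensions must come from a sidecar
   metadata file or be known in advance.  Returns NULL on any failure;
   the caller frees the buffer. */
static float *load_raw_depth(const char *path, int width, int height)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    size_t count = (size_t)width * (size_t)height;
    float *depth = malloc(count * sizeof *depth);
    size_t got = depth ? fread(depth, sizeof *depth, count, f) : 0;
    fclose(f);
    if (got != count) { free(depth); return NULL; }
    return depth; /* depth[y * width + x] is the value in millimeters at (x, y) */
}
```

In OpenCV the same buffer maps directly onto a single-channel float matrix, e.g. cv::Mat(height, width, CV_32FC1, buffer).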


#7

Hello, I have encountered the same need as you. I also want to save RGB color images and depth images and apply them to computer vision, but I am not familiar with iOS; I have only built and run programs through Xcode. I would like to know whether you have solved this problem, and if so, I hope you can help me solve it too. Thank you.