DepthFrame and Color Image out of sync


#1

Hello,

When using “sensorDidOutputSynchronizedDepthFrame(_ depthFrame: STDepthFrame!, colorFrame: STColorFrame!)”, the captured STDepthFrame’s depths are not synchronized with the color image (depth pixels do not align with the color pixels).

The image below shows the STDepthFrame, shaded purple, superimposed onto the color image; the mismatch between the STDepthFrame and the STColorFrame is clearly visible. The shaded overlay was produced by exporting the STDepthFrame to a CSV and loading it into an Excel file, with the depths in one sheet and the object coordinates in another. I ran one VBA macro to colour the depth cells in shades of purple, and another to colour the object cells lime. They are not at the same pixels: the object's depth pixels are visible in the purple shading, but they do not match the object's colour pixels. How can we make sure both match? The scan was taken on a flat floor.

I output the STDepthFrame in a CSV file using the following Swift code:
func logDepthFrame(_ depthFrame: STDepthFrame) {

    // Copy the raw depth buffer (row-major, in millimeters) into a Swift array.
    let width = Int(depthFrame.width)
    let height = Int(depthFrame.height)
    let depthArray = Array(UnsafeBufferPointer(start: depthFrame.depthInMillimeters,
                                               count: width * height))

    var csvFile = ""
    for row in 0..<height {
        var csvLine = ""
        for col in 0..<width {
            // Pixel (col, row) lives at flat index row * width + col.
            csvLine += "\(depthArray[row * width + col]),"
        }
        csvFile += csvLine + "\n"
    }

    let depthFrameFileName = "\(FileLogger.sharedInstance.getDailyFileName())_DepthFrame"
    FileLogger.sharedInstance.write(thisMessage: csvFile, toFileFullName: "\(depthFrameFileName).csv")

}
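As a side note, the CSV layout can be built without manual pixel bookkeeping. Here is a minimal, self-contained sketch in plain Swift (no Structure SDK types; the `depths`/`width`/`height` parameters stand in for the STDepthFrame fields):

```swift
// Build a CSV string from a row-major depth buffer.
// `depths` stands in for the Float buffer exposed by
// STDepthFrame.depthInMillimeters; pixel (col, row) lives
// at flat index row * width + col.
func depthCSV(depths: [Float], width: Int, height: Int) -> String {
    var rows: [String] = []
    for row in 0..<height {
        let start = row * width
        let line = depths[start..<(start + width)]
            .map { String($0) }
            .joined(separator: ",")
        rows.append(line)
    }
    return rows.joined(separator: "\n") + "\n"
}
```

Joining with `separator:` also avoids the trailing comma at the end of each CSV row, which some spreadsheet importers treat as an extra empty column.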

I convert the STColorFrame to a UIImage with the following:
func imageFromSampleBuffer(_ sampleBuffer: CMSampleBuffer) -> UIImage? {
    guard let cvPixels = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let coreImage = CIImage(cvPixelBuffer: cvPixels)
    let context = CIContext()
    let rect = CGRect(x: 0, y: 0,
                      width: CVPixelBufferGetWidth(cvPixels),
                      height: CVPixelBufferGetHeight(cvPixels))
    guard let cgImage = context.createCGImage(coreImage, from: rect) else { return nil }
    return UIImage(cgImage: cgImage)
}

And finally I save to the Photos app the UIImage with the following:
func save_new(color: UIImage!) {
    let size = CGSize(width: depthMap.size.width, height: depthMap.size.height)

    // The overlay only lines up if both images have the same dimensions.
    guard color.size.width == depthMap.size.width,
          color.size.height == depthMap.size.height else {
        fatalError("color size \(color.size) != depthMap size \(depthMap.size)")
    }

    UIGraphicsBeginImageContext(size)
    let context = UIGraphicsGetCurrentContext()
    // Set interpolation before drawing, otherwise it has no effect.
    context?.interpolationQuality = .none
    color.draw(in: CGRect(origin: .zero, size: size))
    let colorImageFromCurrentImageContext = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    if let image = colorImageFromCurrentImageContext,
       let imageData = UIImagePNGRepresentation(image),
       let png = UIImage(data: imageData) {
        UIImageWriteToSavedPhotosAlbum(png, nil, nil, nil)
    }
}

#2

No replies on this yet. I have looked into your example 3D Scanner project with the three views and noticed the method “registeredToColorFrame”. I am assuming I have to call this method to “return a version of the depth frame aligned to the color camera viewpoint”, as the documentation suggests. Right?


#3

Yes, you will need to register the depth frame to the color frame, similar to what is done on line 310 of the ViewController+SLAM.mm file in the Scanner sample application.

depthFrameForCubeInitialization = [depthFrame registeredToColorFrame:colorFrame];

Please also make sure that your Structure Sensor has been calibrated to the iOS device you are using with our Calibrator App.
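In Swift, the same registration call inside the synchronized-frame callback would look roughly like this. This is a sketch, not tested against the SDK: the imported Swift name of registeredToColorFrame: (written here as registered(toColorFrame:)) can vary with the SDK and Swift version, so check the generated interface in Xcode.

```swift
// Sketch: align depth to the color camera viewpoint before logging,
// so depth pixel (x, y) corresponds to color pixel (x, y).
func sensorDidOutputSynchronizedDepthFrame(_ depthFrame: STDepthFrame!,
                                           colorFrame: STColorFrame!) {
    // Re-projects each depth sample into the color camera's frame
    // using the calibration between the two cameras.
    if let registeredDepth = depthFrame.registered(toColorFrame: colorFrame) {
        logDepthFrame(registeredDepth)
    }
}
```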