Scanner + OpenCV


#1

Hello,

I’m trying to use OpenCV with the Scanner app, but without success, mainly because everything is built on OpenGL and I’m having difficulty figuring out where I can process the frames and display them on the screen.

So far, here is what I have done:

  • Included OpenCV framework
  • Created methods to convert UIImage to cv::Mat and back (a minimal sketch is below)
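
For reference, OpenCV’s iOS framework already provides such helpers (UIImageToMat / MatToUIImage, declared in <opencv2/imgcodecs/ios.h> in recent versions); here is a minimal sketch of the kind of wrappers I mean:

#import <opencv2/imgcodecs/ios.h>

// Thin wrappers around OpenCV's own iOS conversion helpers.
static cv::Mat matFromUIImage(UIImage *image)
{
    cv::Mat mat;
    UIImageToMat(image, mat);   // produces a 4-channel (CV_8UC4) Mat
    return mat;
}

static UIImage *uiImageFromMat(const cv::Mat &mat)
{
    return MatToUIImage(mat);
}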

I noticed that the frames are handed off to OpenGL here (I slightly modified the function):

- (void)sensorDidOutputSynchronizedDepthFrame:(STDepthFrame*)depthFrame
                                andColorFrame:(STColorFrame*)colorFrame
{
    if (_slamState.initialized)
    {
        [self processSimple:depthFrame colorFrameOrNil:colorFrame];
        [self renderSceneForDepthFrameSimple:depthFrame colorFrameOrNil:colorFrame];
    }
}

With those two methods:

- (void)processSimple:(STDepthFrame *)depthFrame
      colorFrameOrNil:(STColorFrame *)colorFrame {
    // Upload the new color image for next rendering.
    if (_useColorCamera && colorFrame != nil)
    {
        [self uploadGLColorTexture: colorFrame];
    }
    else if(!_useColorCamera)
    {
        [self uploadGLColorTextureFromDepth:depthFrame];
    }
    
    // Update the projection matrices since we updated the frames.
    {
        _display.depthCameraGLProjectionMatrix = [depthFrame glProjectionMatrix];
        if (colorFrame)
            _display.colorCameraGLProjectionMatrix = [colorFrame glProjectionMatrix];
    }
    
}

- (void)renderSceneForDepthFrameSimple:(STDepthFrame*)depthFrame colorFrameOrNil:(STColorFrame*)colorFrame
{
    // Activate our view framebuffer.
    [(EAGLView *)self.view setFramebuffer];

    glClearColor(0.0, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glClear(GL_DEPTH_BUFFER_BIT);

    glViewport(_display.viewport[0], _display.viewport[1], _display.viewport[2], _display.viewport[3]);

    // Check for OpenGL errors.
    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        NSLog(@"glError = %x", err);

    // Display the rendered framebuffer.
    [(EAGLView *)self.view presentFramebuffer];
}

First, I wanted to process the frames directly in the method below, using the function that I call at step (4), but it crashed:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // (1) Convert CMSampleBufferRef to CVImageBufferRef
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // (2) Lock pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

    // (3) Construct VideoFrame struct
    uint8_t *baseAddress = (uint8_t*)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    size_t stride = CVPixelBufferGetBytesPerRow(imageBuffer);
    VideoFrame frame = {width, height, stride, baseAddress};

    // (4) Dispatch VideoFrame
    [self frameReady:frame];
    //[self.delegate frameReady:frame];

    // Pass color buffers directly to the driver, which will then produce synchronized depth/color pairs.
    [_sensorController frameSyncNewColorBuffer:sampleBuffer];

    // (5) Unlock pixel buffer (with the same flags used when locking)
    CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
}

In that function, called at step (4), I basically just wanted to test some simple processing, such as converting to black and white or detecting edges with OpenCV.
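
Something along these lines was the idea (only a sketch: VideoFrame is my own struct, and this assumes a single interleaved BGRA plane, whereas the replies below show the bi-planar YUV case the app actually delivers):

// Hypothetical processing entry point: wrap the raw buffer in a cv::Mat and run OpenCV on it.
- (void)frameReady:(VideoFrame)frame
{
    cv::Mat bgra((int)frame.height, (int)frame.width, CV_8UC4,
                 frame.baseAddress, frame.stride);

    cv::Mat gray;
    cv::cvtColor(bgra, gray, cv::COLOR_BGRA2GRAY);

    // ... Canny, thresholding, etc. on `gray` ...
}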

Could you please help by telling me where I can inject image processing with OpenCV? Thanks.


#2

Has anyone been able to use OpenCV correctly with the device?


#3

I only get the grayscale image, like this:

CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(colorFrame.sampleBuffer);
OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);

NSAssert(format == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, @"Only YUV is supported");

// The first plane / channel (at index 0) is the grayscale (luma) plane.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *baseaddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

size_t width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
size_t height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
size_t stride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

// Wrap the luma plane without copying, honoring the row stride.
cv::Mat mat((int)height, (int)width, CV_8UC1, baseaddress, stride);

// Clone it so the pixels remain valid after the buffer is unlocked.
cvTemp = mat.clone();

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

Then you can do anything with cvTemp, like edge detection, etc.
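
For example (just a sketch; the thresholds are arbitrary):

cv::Mat edges;
cv::GaussianBlur(cvTemp, edges, cv::Size(5, 5), 1.5);
cv::Canny(edges, edges, 50, 150);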


#4

That’s perfect! Thank you. I was struggling to make it work.

Do you know why?

Also, do you know how to convert from cv::Mat back to colorFrame.sampleBuffer, so that I can visualize what happens?

Thanks

EDIT: I tried the following, but it did not work; I got a green screen. Could you please help me?

- (void) uploadGLColorTextureFromMat:(cv::Mat)image{
    if(image.empty()){
         NSLog(@"image empty.");
    }else{

        // Clear the previous color texture.
        if (_display.lumaTexture)
        {
            CFRelease (_display.lumaTexture);
            _display.lumaTexture = NULL;
        }
          NSLog(@"Clear previous color texture");
        // Clear the previous color texture
        if (_display.chromaTexture)
        {
            CFRelease (_display.chromaTexture);
            _display.chromaTexture = NULL;
        }

        // Displaying a very wide image is overkill; downsample it to save bandwidth.
        while (image.cols > 2560)
            cv::resize(image, image, cv::Size(image.cols/2, image.rows/2));

        NSLog(@"Image size after downsampling: (%d,%d)", image.rows, image.cols);
   
        // Allow the texture cache to do internal cleanup.
        CVOpenGLESTextureCacheFlush(_display.videoTextureCache, 0);
        
        int height = image.rows;
        int width = image.cols;

        // set pixel buffer attributes so we get an iosurface
        NSDictionary *pixelBufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                               [NSDictionary dictionary], kCVPixelBufferIOSurfacePropertiesKey,
                                               nil];
        // create planar pixel buffer
        CVPixelBufferRef pixelBuffer = nil;
        CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                            (__bridge CFDictionaryRef)pixelBufferAttributes, &pixelBuffer);

        // lock pixel buffer
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);
        
        // get image details
        size_t widthB = CVPixelBufferGetWidth(pixelBuffer);
        size_t heightB = CVPixelBufferGetHeight(pixelBuffer);
        
        OSType pixelFormat = CVPixelBufferGetPixelFormatType (pixelBuffer);
        NSAssert(pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, @"YCbCr is expected!");
        
        // Activate the default texture unit.
        glActiveTexture (GL_TEXTURE0);
        CVReturn err;
        // Create a new Y (luma) texture from the video texture cache.
        err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _display.videoTextureCache,
                                                           pixelBuffer,
                                                           NULL,
                                                           GL_TEXTURE_2D,
                                                           GL_RED_EXT,
                                                           (int)widthB,
                                                           (int)heightB,
                                                           GL_RED_EXT,
                                                           GL_UNSIGNED_BYTE,
                                                           0,
                                                           &_display.lumaTexture);
        
        if (err) {
            NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
            CVPixelBufferRelease(pixelBuffer);
            return;
        }
       

        // Set good rendering properties for the new texture.
        glBindTexture(CVOpenGLESTextureGetTarget(_display.lumaTexture), CVOpenGLESTextureGetName(_display.lumaTexture));
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        
        // Activate the second texture unit.
        glActiveTexture (GL_TEXTURE1);
        // Create a new CbCr (chroma) texture from the video texture cache.
        err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                           _display.videoTextureCache,
                                                           pixelBuffer,
                                                           NULL,
                                                           GL_TEXTURE_2D,
                                                           GL_RG_EXT,
                                                           (int)widthB/2,
                                                           (int)heightB/2,
                                                           GL_RG_EXT,
                                                           GL_UNSIGNED_BYTE,
                                                           1,
                                                           &_display.chromaTexture);
        
        if (err)
        {
            NSLog(@"Error with CVOpenGLESTextureCacheCreateTextureFromImage: %d", err);
            return;
        }
        
        // Set rendering properties for the new texture.
        glBindTexture(CVOpenGLESTextureGetTarget(_display.chromaTexture), CVOpenGLESTextureGetName(_display.chromaTexture));
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
     
    }
}

#5

Sorry, I don’t know how to convert a cv::Mat to an OpenGL texture properly.
But you can use this code to convert it to a UIImage and show it as a subview.
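(MatToUIImage here is OpenCV’s own iOS helper; in recent OpenCV versions it is declared in <opencv2/imgcodecs/ios.h>.)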

    UIImageView *edgeImage = [[UIImageView alloc] initWithImage:MatToUIImage(cvTemp)];
    edgeImage.frame = CGRectMake(0, 0, cvTemp.cols, cvTemp.rows);
    [self.view addSubview:edgeImage];

Take care with the size: the subview will cover the background OpenGL drawing.
I use this just for debugging; I don’t know how to combine it with OpenGL.
If anyone has a solution, I’d really like to learn too.


#6

Thanks for this snippet of code! But yes, I would like to connect it with OpenGL, because I want some interaction between the results of the algorithms and the OpenGL drawing. For example, detect something and, once it is detected, start the scanner.

Thanks in advance! Any help would be appreciated :slight_smile:


#7

I tried what you said, but here is the result: after 5-10 seconds it crashed and I got this message:
Scanner[861:92693] Received memory warning.

Does it work for you? Maybe I got the green screen because of this. It seems there is a problem…


#8

It works fine for me; I don’t know why you’re getting this…


#9

Finally it worked. The problem came from the addSubview call. What I did instead was to open ViewController_iPad.xib and add an image view manually.

Then, in the code, I did the following:
self.imageViewFrame.image = imageDisplay;
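
For completeness, this is roughly how imageDisplay gets produced and assigned (just a sketch; imageViewFrame is the outlet I added in the xib, and the dispatch_async is a precaution in case the frame callback is not running on the main thread):

UIImage *imageDisplay = MatToUIImage(cvTemp);
dispatch_async(dispatch_get_main_queue(), ^{
    self.imageViewFrame.image = imageDisplay; // UIKit should only be touched on the main thread
});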

Now I’m still struggling with how to draw the result of the processing in OpenGL. If anyone could help, that would be great!


#10

I was finally able to write a function that displays an OpenCV Mat with OpenGL:

Check this post.


#11

Congratulations! It’s great.