I have another problem, this time concerning the use of the depth image data.
In the Scanner app (just like everywhere else), a depthFrame is created with every new frame.
Right now I am trying to use this frame with OpenGL ES in the following two approaches:
- creating a buffer with the depth data and sending it to my shaders
- creating a depth texture (non-RGB, depth data only) and using it instead of the buffer (see the sketch below)
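To illustrate what I mean by the second approach, here is a minimal sketch of how I imagine uploading the depth data as a single-channel float texture. This assumes OpenGL ES 2.0 with the OES_texture_float extension, and depthPointer, width and height are placeholders for the values taken from the depth frame (not my exact code):

```c
#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>

// Sketch: upload per-pixel depth (one float per pixel) as a luminance/float texture.
// Requires OES_texture_float. The frame size is non-power-of-two, so wrapping must
// be CLAMP_TO_EDGE and filtering NEAREST (unless OES_texture_float_linear is available).
GLuint createDepthTexture(const float *depthPointer, GLsizei width, GLsizei height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // In the fragment shader the depth value would then be read from the .r channel.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
                 GL_LUMINANCE, GL_FLOAT, depthPointer);

    return tex;
}
```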
For this I looked at the Viewer example, where the colored depth texture is created and rendered using Core Graphics, but since I am using OpenGL ES (and don't need the colored texture), that doesn't seem to be the right approach.
So I am mostly stuck on how to process the depth data to make it usable in the shaders.
Also, the colored depth texture from the SDK's renderDepth doesn't work for me either, whether I use the texture in my own rendering or call renderDepth directly.
Something just goes terribly wrong and I don't know why or where.
On top of that (still concerning the textures created by the SDK), I am stuck on creating the RGB image from the luma and chroma textures.
Using the SDK's built-in render function, the image is colored correctly, but with the same conversion (YCbCr to RGB) in my own fragment shader (same calculations, same matrices, same input) the output just comes out blue.
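For reference, here is a minimal sketch of the kind of conversion I mean, with placeholder names. It assumes a standard video-range BT.601 matrix, the luma plane uploaded as GL_LUMINANCE on texture unit 0 and the interleaved CbCr plane as GL_LUMINANCE_ALPHA on texture unit 1 (so this is not necessarily exactly what the SDK does internally):

```c
// Fragment shader sketch: luma is read from .r, interleaved CbCr from .ra.
static const char *kYCbCrToRGBFragmentShader =
    "varying highp vec2 v_texCoord;                                      \n"
    "uniform sampler2D s_luma;   /* texture unit 0 */                    \n"
    "uniform sampler2D s_chroma; /* texture unit 1 */                    \n"
    "void main()                                                         \n"
    "{                                                                   \n"
    "    mediump vec3 yuv;                                               \n"
    "    yuv.x  = texture2D(s_luma,   v_texCoord).r  - (16.0 / 255.0);   \n"
    "    yuv.yz = texture2D(s_chroma, v_texCoord).ra - vec2(0.5, 0.5);   \n"
    "    /* Video-range BT.601 matrix (column-major in GLSL). */         \n"
    "    mediump mat3 m = mat3(1.164,  1.164, 1.164,                     \n"
    "                          0.0,   -0.392, 2.017,                     \n"
    "                          1.596, -0.813, 0.0);                      \n"
    "    gl_FragColor = vec4(m * yuv, 1.0);                              \n"
    "}                                                                   \n";

// After linking the program, the sampler uniforms must point at the right units:
//   glUseProgram(program);
//   glUniform1i(glGetUniformLocation(program, "s_luma"),   0);
//   glUniform1i(glGetUniformLocation(program, "s_chroma"), 1);
```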
Is there anything special that needs to be done for this to work?
I hope someone can help or has a hint about what could be wrong, since I just don't know how to proceed anymore.
Many thanks in advance.