SDK issues/questions


Hi all. I’ve enjoyed getting familiar with the Core and getting off the ground now that the SDK has been released :tada: Here are a few questions or issues I’ve run into:


DepthFrame::colorCameraPoseInDepthCoordinateFrame() is still returning a matrix of NaNs. Is there something that needs to be done before this matrix gets populated? Also, am I correct in understanding that this is the intended way to get the extrinsics between the color and depth cameras?

Serial numbers

Is there a way to get the serial number for a connected sensor on Windows? I understand that enumerateConnectedSensors() is Linux-only. I’ve tried looking at the return value from CaptureSession::settings() or CaptureSession::sensorSerialNumber() after starting the stream, thinking that perhaps it gets filled in by the SDK, but the sensorSerial is empty.

Crash without setting a delegate

The app crashes if you call startStreaming() without setting a delegate. I was attempting to use the callback functions and calling the appropriate setters (setOutputCallback() and setEventCallback()) instead of using a delegate, but I can’t seem to get around this crash. If I call setDelegate() first, the crash goes away. Curiously, I tried calling setDelegate(nullptr) before calling the callback setters, as these are supposed to clear the delegate, but the app still crashes at that point.


Serial numbers
Following up on part of my post here… looks like the way to get the serial is by calling CaptureSession::sensorSerialNumber() rather than checking the CaptureSessionSettings::sensorSerial property like I was doing before. It looks like you must call startStreaming() and wait for a Ready event before this call will return a non-empty string.

Related, if you plug in a device, the Ready event will fire but the serial string will be empty at that point. It’s not until you call startStreaming() that you’ll get another Ready event with the serial correctly populated.

It’d be great if there was a programmatic way to get the serial before starting the stream. My use case is to present a UI with a list of connected sensors and serial numbers, and allow a user to select the desired one. I understand that you can specify the serial of the desired camera in the CaptureSessionSettings, but it seems like there’s no way to know the serial until after you’ve started streaming, so it’s a bit of a chicken & egg situation.


Hi Matt!

colorCameraPoseInDepthCoordinateFrame returning NaN: This is a known issue and will be resolved in the next release

enumerateConnectedSensors on Windows: We’ve made some USB backend changes to allow this and it will be updated in the next release

Crashing if capture session delegate is not used: This is also a known issue, and we’re working on a fix


Adding another one to the list: I’m seeing crashes when trying to change CaptureSessionSettings and restart the stream.

I’m adding a UI to control various camera parameters. Some options, like gain and exposure, can easily be changed while the camera is streaming using the setters on the CaptureSession. Others, like the despeckling filter, resolution, framerate, etc., must be changed via CaptureSessionSettings. When I call startMonitoring(updatedSettings) while the camera is already streaming, I get a crash.

Instead, I tried stopping and restarting the stream, which doesn’t help, i.e. doing these steps sequentially:

  • modify values of CaptureSessionSettings object
  • call stopStreaming()
  • call startMonitoring() with updated CaptureSessionSettings
  • call startStreaming()

As an experiment, I added UI buttons to stop & start the stream, and using these before & after I make any changes seems to work. To clarify, doing these steps instead works:

  • click Stop button which calls stopStreaming()
  • use UI to make changes to CaptureSessionSettings object
  • click Start button which calls startMonitoring() and startStreaming()

So, I added some sleep()s to the code after stopStreaming() and startMonitoring(), and this seems to help. I haven’t figured out the minimum time needed, but using 500ms for each works. It seems like it just needs a bit of time to breathe.
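In case it helps anyone hitting the same thing: until the SDK exposes a post-stop Ready event, a generic poll-with-timeout helper can replace fixed 500ms sleeps. This is not part of the SDK; the readiness predicate is whatever signal your app can observe (e.g. a flag flipped from your event callback), so treat it as a sketch:

```cpp
#include <cassert>
#include <chrono>
#include <functional>
#include <thread>

// Poll `ready` every `interval` until it returns true or `timeout` expires.
// Returns true if the predicate became true in time, false otherwise.
// Hypothetical helper, not an SDK call.
inline bool waitUntil(const std::function<bool()>& ready,
                      std::chrono::milliseconds timeout,
                      std::chrono::milliseconds interval = std::chrono::milliseconds(10)) {
    const auto deadline = std::chrono::steady_clock::now() + timeout;
    while (std::chrono::steady_clock::now() < deadline) {
        if (ready()) return true;
        std::this_thread::sleep_for(interval);
    }
    return ready();  // one last check at the deadline
}
```

The idea is to call something like `waitUntil([&]{ return sessionIsIdle; }, std::chrono::milliseconds(500))` after stopStreaming() and again after startMonitoring(), so you only wait as long as actually needed instead of a worst-case fixed sleep.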

Is there a recommended way to change these parameters that doesn’t require manually sleeping? I don’t see any events that get raised after calling stopStreaming(), such as a sensor Ready event signalling it’s ready to stream again. Hopefully there’s a seamless way to do this programmatically with a minimal delay, if any.


Did you find out how to get the extrinsics between the visible camera & the depth map?


Yes, there is a function, I think called visibleCameraPoseInDepthCoordinateFrame(), which works correctly in the latest SDK release


Thanks! I’ve been trying to use it, but the results are bad. Hopefully it’s a stupid error on my part and not a problem with the calibration. Do you have code to transform the depth map to the visible camera?



I made “decent” progress in the ROS wrapper with RGB depth registration. I would recommend this as a starting point. If you find any mistakes I made, a pull request would be greatly appreciated!


Thanks! That helped me a lot; I was not converting from millimeters to micrometers.
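For anyone else landing here: the per-pixel registration math itself is small. Below is a minimal, self-contained sketch (not SDK code; the struct and parameter names are illustrative). It back-projects a depth pixel with the depth camera’s pinhole intrinsics, applies a 4x4 rigid transform, and projects with the color camera’s intrinsics. Two common pitfalls, both relevant to this thread: depending on the SDK’s convention, the pose returned by visibleCameraPoseInDepthCoordinateFrame() may need to be inverted to map depth-frame points into the color frame, and all distances (raw depth and the pose’s translation) must be converted to one unit first, e.g. millimeters vs. micrometers.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Illustrative pinhole intrinsics; values are placeholders, not SDK output.
struct Intrinsics { double fx, fy, cx, cy; };

// Back-project depth pixel (u, v) to 3D in the depth camera's frame, apply a
// row-major 4x4 rigid transform into the color camera's frame, then project
// with the color intrinsics. All distances must be in the same unit (meters here).
inline std::array<double, 2> depthPixelToColorPixel(
        double u, double v, double depthMeters,
        const Intrinsics& depthK, const Intrinsics& colorK,
        const std::array<double, 16>& depthToColor) {
    // Back-project to a 3D point in the depth camera's frame.
    const double x = (u - depthK.cx) / depthK.fx * depthMeters;
    const double y = (v - depthK.cy) / depthK.fy * depthMeters;
    const double z = depthMeters;
    // Rotate and translate into the color camera's frame.
    const auto& m = depthToColor;
    const double xc = m[0] * x + m[1] * y + m[2]  * z + m[3];
    const double yc = m[4] * x + m[5] * y + m[6]  * z + m[7];
    const double zc = m[8] * x + m[9] * y + m[10] * z + m[11];
    // Project with the color camera's intrinsics.
    return { colorK.fx * xc / zc + colorK.cx,
             colorK.fy * yc / zc + colorK.cy };
}
```

With an identity transform and matching intrinsics, a pixel should map to itself; a pure translation shifts the projection by `fx * tx / z` pixels, which is a quick sanity check on whether the pose (or its inverse) is being applied in the right direction.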