For my application, I'm trying to calculate the number of dropped frames so I can notify the user that their machine may be underpowered if some percentage of frames are dropped per session.
I’ve been looking at the
PartiallySynchronizedSample callback to see whether it provides the notification I need, but it doesn’t appear to be the whole story. When setting a target framerate of 30 for both depth & visible (with the grayscale model), I see roughly 25–30 samples coming in per second (I’ve just been logging timestamps from the depth frames and bucketing them to the nearest second).
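For context, my per-second counting is roughly the following (plain Python with hypothetical timestamps in seconds, just to illustrate the bucketing; this is not SDK code):

```python
from collections import Counter

def samples_per_second(timestamps_s):
    """Bucket sample timestamps (in seconds) by whole second and
    count how many samples landed in each bucket."""
    return Counter(int(t) for t in timestamps_s)

# Illustration: a full 30-frame second followed by a second with only 25 frames.
ts = [0.0 + i / 30 for i in range(30)] + [1.0 + i / 30 for i in range(25)]
counts = samples_per_second(ts)  # counts[0] == 30, counts[1] == 25
```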
When I calculate the time delta between consecutive frames and log a warning whenever a delta exceeds some threshold (at 30 fps the theoretical delta is ~33 ms, so I log anything greater than 40 ms, i.e. 33 ms plus a 20% wiggle-room allowance), I see a consistent number of “missing” frames relative to the target. For example, I’ll see 25 frames logged and 5 warnings about a missing frame, or 27 + 3, and so on. Interestingly, the
PartiallySynchronizedSample callback isn’t invoked in these cases. Instead, I generally only see it fire once when I start streaming, or if I “pause” the app by clicking in the Windows output console, which temporarily blocks the app. After clicking again to resume, I see it fire once, rather than once for each frame that was missed/dropped.
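The gap check I described can be sketched like this (again plain Python with hypothetical timestamps in seconds; the SDK callback itself is not involved):

```python
def count_suspect_gaps(timestamps_s, target_fps=30, slack=0.20):
    """Count inter-frame gaps exceeding the theoretical frame period
    plus a slack allowance (e.g. ~33 ms + 20% = 40 ms at 30 fps)."""
    threshold = (1.0 / target_fps) * (1.0 + slack)
    gaps = 0
    for prev, cur in zip(timestamps_s, timestamps_s[1:]):
        if cur - prev > threshold:
            gaps += 1
    return gaps

# One frame missing between 2/30 s and 4/30 s -> one gap of ~66 ms is flagged.
flagged = count_suspect_gaps([0, 1 / 30, 2 / 30, 4 / 30, 5 / 30])
```

Note this counts *gaps*, not frames: a single long stall reads as one gap even if several frames were lost in it, which is one reason I’d prefer a real per-frame drop notification from the library.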
Can you elaborate on the expectations around a target framerate, and on how to effectively count dropped frames? On a well-powered machine using a supported framerate (e.g. 30), is it realistic to expect the library to deliver 30 samples per second? In addition, is there a way to be notified each time the frame count falls short, without manually comparing the timestamps of sequential frames?