Announcing Structure SDK (iOS) 0.9: STCaptureSession


The First Major Release of 2019: Structure SDK (iOS) 0.9

Our first major release of 2019 brings a completely new way of interacting with the Structure Sensor: STCaptureSession!

The STCaptureSession API introduces several major improvements to the SDK, including streamlined sensor setup and configuration, wide-vision lens (WVL) support, options to toggle WVL detection on the color camera feed, and brand-new support for our OCC (.occ) file format for recording and replaying data (even without a sensor attached).

Along with this update, we have also fully updated our sample applications to Structure SDK 0.9 and optimized them for iOS 12, both within the Structure SDK release and on the App Store.

Important Note for Upgrading your apps:

When upgrading your application from a pre-0.9 SDK to 0.9, please update your mesh rendering code in the following way:

Mesh indices no longer use GL_UNSIGNED_SHORT; these should be changed to GL_UNSIGNED_INT.


Download Structure SDK (iOS) 0.9

Full Release Notes


The STCaptureSession API manages many of the more complex tasks involved in setting up the sensor and configuring data streams within an application. Across our sample apps, we’ve seen a reduction of roughly 1,000 lines of code compared to the old STSensorController API configuration. We hope this will make developing applications a bit easier, especially for those picking up the Structure Sensor for the first time. The picture below, for example, shows what STCaptureSession can do in just 267 lines of code where the old SDK took 655.

Sensor setup and configuration

  • Structure Sensor and the iOS Color Camera (along with IMU sensor) can now be configured in the same place, through the STCaptureSession.

  • STCaptureSession provides an easier means of enabling higher resolution color camera frames for better mesh texturing.

  • STCaptureSession now internally handles the correct image intrinsics for both Structure Sensor and the iOS color camera. With the addition of WVL support (see below), you can be sure that the correct image intrinsics will be chosen for the correct lens type.

  • STCaptureSession allows for easily modifying the exposure, ISO, and focus of the iOS color camera via the properties dictionary. This allows developers to dynamically set these color camera properties for their environment, before or during scanning.
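As a rough sketch of what configuration in one place might look like: the `properties` dictionary is described above, but the key names below are illustrative placeholders, not verified SDK constants — consult the 0.9 headers for the actual `kSTCaptureSessionProperty…` keys, and `newCaptureSession` for the exact factory method.

```objc
#import <Structure/Structure.h>

// Hedged sketch: Structure Sensor, iOS color camera, and IMU are
// configured together through a single STCaptureSession instance.
STCaptureSession *captureSession = [STCaptureSession newCaptureSession];

// Color camera exposure, ISO, and focus go through the properties
// dictionary, before or during scanning. Keys below are PLACEHOLDERS.
captureSession.properties = @{
    @"ExposureKey" : @(0.015), // placeholder key: target exposure (seconds)
    @"ISOKey"      : @(200),   // placeholder key: fixed ISO
    @"FocusKey"    : @(0.75),  // placeholder key: locked lens position
};
```

The design point is that sensor setup, which previously required coordinating STSensorController and AVFoundation separately, is collapsed into one object.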

Wide Vision Lens Support

  • By using the STCaptureSession, you can now utilize the same WVL modules used by Canvas. Using a wide-vision lens mounted over the iOS color camera can enhance tracking reliability when scanning large areas. WVL support can be enabled by setting the lens property to STLensWideVision in STCaptureSession.

  • STCaptureSession also supports automatic lens detection, to identify whether a WVL is mounted on a Structure Sensor bracket. This capability can be turned on and off by setting the lensDetection property to STLensDetectorOn or STLensDetectorOff, respectively.

  • If the lensDetection property is set to STLensDetectorWarnOnMismatch, the lens detector won’t set the lens type directly; instead, it informs the application of which lens it estimates is attached through the delegate callback captureSession:onLensDetectorOutput:. This allows for more fine-grained control over the choice of lens in challenging environments.
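The lens options above can be sketched as follows. Only the identifiers named in these notes (lens, lensDetection, STLensWideVision, STLensDetectorWarnOnMismatch, and the callback selector) come from the release; the callback’s parameter type and the surrounding delegate scaffolding are assumptions — check the SDK headers.

```objc
// Opt in to the wide-vision lens, and ask the detector to warn
// rather than switch lenses itself.
captureSession.lens = STLensWideVision;
captureSession.lensDetection = STLensDetectorWarnOnMismatch;

// Delegate callback (parameter type assumed): the detector reports its
// estimate, and the app decides whether to change captureSession.lens.
- (void)captureSession:(STCaptureSession *)captureSession
  onLensDetectorOutput:(STDetectedLensStatus)detectedLensStatus
{
    // e.g. surface a warning if the detected lens disagrees with
    // the lens type the app has configured.
}
```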

OCC Recording and Replay

  • We are introducing a new file type for recording and replaying sensor data: OCC files. STCaptureSession can be configured to start and stop OCC recording for any sensor events (camera frames, accelerometer events, gyroscope events, etc.) coming in via STCaptureSession. The incoming data is written to a single file of the user’s choosing and contains all of the data streamed up until the point recording was stopped. Color frame data is H.264-compressed by default, so you don’t need to worry about OCC files eating up your storage too quickly.

  • STCaptureSession can also be configured to replay OCC files. When replaying an OCC file, a sensor does not need to be connected to your iOS device. When streaming starts, sensor events are played back in the exact order and with the exact same timing as when the OCC was recorded. Some configuration options may do nothing during OCC playback (e.g. frame data is returned at the resolution it was recorded, and cannot be modified as it can be when streaming directly from the sensor). This feature should allow developers to easily collect raw data from their applications and test against old data, streaming as if a sensor were directly connected.
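A record-then-replay workflow might look like the sketch below. The release notes do not name the recording methods, so everything here other than STCaptureSession itself (the occWriter accessor, startWriting:/stopWriting, and newCaptureSessionFromOccFile:) is an assumption to verify against the 0.9 headers.

```objc
// Recording (assumed API): all incoming sensor events -- depth frames,
// color frames, IMU samples -- go into a single .occ file.
[captureSession.occWriter startWriting:@"capture.occ"];
// … stream and scan as usual …
[captureSession.occWriter stopWriting];

// Replay (assumed API): no sensor attached; events play back with the
// original ordering and timing, so app logic runs as if live.
STCaptureSession *replaySession =
    [STCaptureSession newCaptureSessionFromOccFile:@"capture.occ"];
```

The practical payoff is regression testing: a bug report can ship with the OCC file that reproduces it, and the app can be run against that exact data stream with no hardware on hand.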

Meshing Improvements

  • Made improvements to allow meshing with smaller voxel sizes. Reducing the voxel size when scanning close-range objects will improve close-range detail.

  • Modified our mesh representation: STMesh objects no longer break the underlying mesh data into smaller sub-meshes. No API change has occurred here, so old code should still work.

Updated Applications

In addition to the introduction of the STCaptureSession, our sample apps have been updated to accommodate some of the new features.

Room Capture

  • Now optimized for newer (post-2017) devices, with especially improved texture-to-mesh alignment, including when scanning without having performed calibration.

  • Adds a button in the bottom-right corner that enables or disables wide-vision lens scanning.

  • Writes OCC files for every data capture, saved to the app documents directory. These OCC files can be extracted from the device using iTunes.

  • Supports higher color resolutions for mesh texturing on newer (post-2017) devices.

  • Fixed a bug where the hole-filled mesh would not render correctly when rendering in X-Ray mode.


Scanner

  • Scanner is now optimized for newer (post-2017) devices. Additionally, the Scanner app has been optimized to work better on iPhone-family devices.

  • Writes OCC files for every data capture, saved to the app documents directory. These OCC files can be extracted from the device using iTunes.

  • Fixed a race condition in the Scanner app where the calibration overlay would pop up even if the user had already run through Calibrator.

  • Supports higher color resolutions for mesh texturing on newer (post-2017) devices.

  • Fixed a bug where some UI elements would no longer be user-interactive after the Structure Sensor is disconnected and reconnected.

  • Fixed a bug where toggling high-resolution color in Scanner would crash the app on certain (post-2017) devices.

  • Fixed a bug where keyframes would sometimes never be added in Scanner, causing a message “Please hold the device still while we collect a keyframe” to indefinitely overlay the scan.

  • Split the sample app into two separate apps: Scanner and Scanner-low level. The low-level variant utilizes the STSensorController API, for longtime users of the SDK or for users who want low-level access. “Scanner,” as it is now, utilizes the STCaptureSession API.


Viewer

  • Viewer is now optimized for newer (post-2017) devices.

  • Split the sample app into two separate apps: Viewer and Viewer-low level. The low-level variant utilizes the STSensorController API, providing a basic example of how to stream data using our older, lower level API. “Viewer,” as it is now, provides a basic example for streaming from the STCaptureSession API.



@anthony.monaco, will apps now need to include 3RD-PARTY-LICENSE.html in an About page, or something similar?


You can view the app submission process and rules on the following page:


At a glance I don’t see anything specifically called out in the app submission documentation about the licensed bits that afaik are new for this version of the SDK. @n6xej suggested one approach might be the inclusion of such licenses in the App bundle. What do you think?