Structure Core vs Intel RealSense D400s?



I was curious about the quality of the point cloud from the Structure Core compared to the Intel RealSense. I understand that the D415 is aimed at more accurate point clouds while the D430/D435 is geared more toward motion/moving objects. Quality-wise, are the Structure Core point clouds less “wavy”? Would someone be able to share an ASCII/xyz file of a flat wall? I have a D415 and I can post one as well… I am just trying to figure out if it's better, and if so how much better, before I purchase it.

Thank you so much!


Has no one tried to compare the data on this?


I’ve been playing with the SDK using an OCC someone (who got a camera quicker than me) recorded for me.

This is a point cloud from a scan at about 50cm from a plaster cast of a (small child’s) leg:

And a screenshot of it (in Cloud Compare):

But my recording is only in low resolution (640x480, and not the best setup), so it's hard to say. I am expecting my scanner in early Feb, when I can start doing some real usage. Happy to do some comparison scans for you then if you are willing to wait.

I also have a D435 but I didn't get too far into making it work. I did a bit of a scan with it, but it didn't seem that much better than the data I was getting from the original Sensor (don't hold me to that though; I didn't deep-dive on the options).

I'm on the hunt for the best option for cost-effective, portable, easy-to-use body scanning (well, leg scanning, anyway) with enough accuracy to use for orthotics. I'm hoping the Core's quality will get me there, but I will only know when I have one to play with. Happy to compare notes with your RealSense though.


Ah, so this is from the Structure Sensor using the iPad and not OpenNI? (The OpenNI SDK lacks some extra calibration found in the iOS SDK.) The point cloud data looks noisy, which is fine for measuring wood furniture dimensions… but it does not seem to do well with the contour of the leg and foot.

I was looking into hardware for a similar application and started exploring with a Tango device, the Lenovo Phab 2 Pro to be specific (unfortunately that is discontinued and Google dropped 3D reconstruction on their devices). Then I purchased a Structure Sensor, then the D410, and the D415 so far. The problem with the D430/D435 is that it is geared more toward capture with motion, whereas the D410/D415 are geared toward accuracy (on a side note, the Intel sensors are supposed to work best outdoors, away from glare from the sun). A main reason I was also considering the Structure Core is that it has an IMU, so you can probably obtain the pose of the sensor much like the Google Tango devices.

I am going to try to get some scans of similar objects that can be scanned somewhere else… like a wall. The Intel sensors do produce a lot of wavy noise; in the scan data I have, I found it to be more than on the Tango devices. (The Intel sensors have a lot of settings to play with, and it has been hard to get them right; the low resolution with hole filling does an OK job of making it smoother. From what I read though, for best accuracy you need the max resolution, and I am still working out the quirks.)
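One way to put a number on the “wavy noise” when we exchange wall scans: fit a plane to the points and report the RMS residual. A rough sketch in plain Python (my assumption here is that the wall roughly faces the camera, so the residual is measured along z rather than along the true plane normal):

```python
import math

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to (x, y, z) points.

    Solves the 3x3 normal equations directly; assumes the wall is
    roughly facing the camera so z can be treated as a function of (x, y).
    """
    sxx = sxy = syy = sx = sy = sxz = syz = sz = 0.0
    n = len(points)
    for x, y, z in points:
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y
        sxz += x * z; syz += y * z; sz += z
    # Normal equations: A * [a, b, c]^T = rhs
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        rhs[col], rhs[p] = rhs[p], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coef[r] = (rhs[r] - sum(A[r][c] * coef[c] for c in range(r + 1, 3))) / A[r][r]
    return coef  # [a, b, c]

def rms_residual(points, coef):
    """RMS distance (along z) from each point to the fitted plane."""
    a, b, c = coef
    return math.sqrt(sum((z - (a * x + b * y + c)) ** 2 for x, y, z in points) / len(points))
```

Running this on two wall scans taken at the same distance would give a single comparable number: lower RMS means a flatter, less wavy reconstruction.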

I will follow up on this; it might take a bit… thank you so much Zonski! I would be interested in collaborating on this.


Great, I’ll let you know when I have more on my end, and when my scanner arrives.

Just to clarify, the scan in my previous post (both the data and the screenshot) is from the Core (not the Sensor), using the new Structure SDK for Windows (which is conceptually similar to OpenNI). But the Core was set to record at 640x480 and I think not really set up best for the use case.

The code is super crude (I'm more a Node and Java dev these days; my C++ is pretty rusty as I haven't used it for 10 years), but if you are interested it is available here:

In theory you could run your own tests if you can get this code running; we just need people with Cores to record OCC files for us. If anyone out there already has a Core and wants to record some things and share, please do. Otherwise, as soon as I have mine, I'll be recording a few things.


Hi, thank you for sharing. I wonder how you used the OCC file to generate these point clouds.


Hi Spooky,

The OCC file is captured using the demo code included in the SDK. The code in the link I shared above then plays back the OCC file and processes it into a point cloud (using the Core SDK).

Happy to explain in more detail if there are specific areas you have questions about.

@alessandro.negri my scanner has arrived and I am keen to do some recordings, but the timing is not great: my wife is about to give birth any day. So I most likely will be looking at this in a few weeks.


Hi, I cannot find the OCC data file. The link doesn't seem to exist anymore. Could you double-check that? Thank you!


No worries, I have been following along and have been busy as well with work and kids. However, I did do a scan with two devices: the Tango phone (Lenovo Phab 2 Pro) and the D415 camera. I scanned a toy ball (because it is a sphere, and those are great objects to test with and easy to replicate). I also have access to a 3D Digital E-scan that I will use once I get around to borrowing it. I will do as you did and post the point cloud, OBJ files, and screenshots for reference.

I really hope the structure core gives better data than the structure sensor, tango, and D415.

Also, I created my own tool for using the Intel D400 series to capture and triangulate (it does it really fast), but I am in feature-creep mode trying to clean it up to be useful. It is only for Windows though. I also had functionality for the Structure Sensor with OpenNI2 (but I gifted the sensor to my father).

Hope you have a healthy kid and the wife does well with labor!


I took the 3D scans with the D415 Intel sensor; here are images of the triangulated point clouds from my software, rendered in Blender:

Here is a screenshot of my modified Tango sample application running on the Lenovo Phab 2 Pro:

I will put the files up for download on Google Drive later today when I get a chance, and add the 3D Digital E-scan screenshot along with the point cloud.


Hi, can you show the original image or the visible-light image? I just want to compare and see the accuracy.


This is the original image, I had to blur the setting since my kid photo-bombed the scan…

Here are the scans from the Tango Lenovo device, the Intel D415, and the 3DD E-scan 1MP. (Also note that I did the E-scan on a different day and that the ball's air intake was centered, so this changes the true comparison a bit… however, right now I just want to know the scan quality of the Structure Core.)

green ball scans


I did another scan of a small basketball with all three devices; the download link is under the image (I fitted a sphere to the ball).
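For anyone who wants to reproduce the sphere fit on the shared data, a common linear least-squares formulation looks like this (a sketch in plain Python; this is not necessarily what my software does internally):

```python
import math

def fit_sphere(points):
    """Linear least-squares sphere fit.

    Uses the identity x^2 + y^2 + z^2 = 2*cx*x + 2*cy*y + 2*cz*z + d,
    where d = r^2 - cx^2 - cy^2 - cz^2, which is linear in the four
    unknowns (2*cx, 2*cy, 2*cz, d). Returns ((cx, cy, cz), r).
    """
    # Build the normal equations A^T A u = A^T b
    AtA = [[0.0] * 4 for _ in range(4)]
    Atb = [0.0] * 4
    for x, y, z in points:
        row = (x, y, z, 1.0)
        b = x * x + y * y + z * z
        for i in range(4):
            for j in range(4):
                AtA[i][j] += row[i] * row[j]
            Atb[i] += row[i] * b
    # Gaussian elimination with partial pivoting
    for col in range(4):
        p = max(range(col, 4), key=lambda r: abs(AtA[r][col]))
        AtA[col], AtA[p] = AtA[p], AtA[col]
        Atb[col], Atb[p] = Atb[p], Atb[col]
        for r in range(col + 1, 4):
            f = AtA[r][col] / AtA[col][col]
            for c in range(col, 4):
                AtA[r][c] -= f * AtA[col][c]
            Atb[r] -= f * Atb[col]
    u = [0.0] * 4
    for r in (3, 2, 1, 0):
        u[r] = (Atb[r] - sum(AtA[r][c] * u[c] for c in range(r + 1, 4))) / AtA[r][r]
    cx, cy, cz = u[0] / 2, u[1] / 2, u[2] / 2
    radius = math.sqrt(u[3] + cx * cx + cy * cy + cz * cz)
    return (cx, cy, cz), radius
```

Comparing each device's per-point distance to the fitted sphere (point-to-center distance minus the fitted radius) would give a device-independent noise figure for these ball scans.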


Tango & D415

1MP e-scan

Right now my software only triangulates the D415 point clouds, but I am working on a feature to triangulate ASCII txt/xyz files.


Thank you for the images. Could you show me a capture of the frame from the D415? I mean the window of the depth frame. It seems like a very good 3D camera.


Spooky, did you look at all the files? I have images within the zip files…


Sorry. I just found it. Thank you for sharing!!


Spooky, do you have a Structure Core or a Structure Sensor? It would be nice if you could also scan a basketball and post the OBJ or ASCII format of the point clouds to compare.

Thank you!


Hi everyone,
I tried to get zonski's code running (I installed Visual Studio 2015, the latest version of CMake, and the PCL all-in-one installer for Windows MSVC 2010 (64-bit)). When I follow the instructions (all on Windows 10) I get a CMake configure error:
CMake Error at C:/Program Files/PCL 1.6.0/cmake/PCLConfig.cmake:39 (message):
common is required but boost was not found
Call Stack (most recent call first):
C:/Program Files/PCL 1.6.0/cmake/PCLConfig.cmake:354 (pcl_report_not_found)
C:/Program Files/PCL 1.6.0/cmake/PCLConfig.cmake:500 (find_external_library)
CMakeLists.txt:27 (find_package)
Any idea what I am doing wrong / how to solve this?


I think you have the wrong all-in-one installer for PCL; you need to find one that is built for MSVC 2015 (64-bit). Also make sure your compile setting is 64-bit on the project. You also need to set your paths for Boost, Eigen, Qhull, VTK, … and include the libraries in your project (the correct ones; there are debug and release versions).
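For reference, once the matching installer is in place, a minimal CMakeLists.txt along these lines should let PCLConfig.cmake locate Boost and friends for you (project and file names here are just placeholders, and 1.8 stands in for whichever PCL version the MSVC 2015 installer provides):

```cmake
cmake_minimum_required(VERSION 3.5)
project(core_test)

# PCL's config script pulls in Boost, Eigen, FLANN, VTK, Qhull itself
find_package(PCL 1.8 REQUIRED COMPONENTS common io)

add_executable(core_test main.cpp)
target_include_directories(core_test PRIVATE ${PCL_INCLUDE_DIRS})
target_compile_definitions(core_test PRIVATE ${PCL_DEFINITIONS})
target_link_libraries(core_test ${PCL_LIBRARIES})
```

Then generate with a 64-bit generator, e.g. `cmake -G "Visual Studio 14 2015 Win64" ..`, so the 64-bit PCL libraries are found.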


I have both an Intel RealSense D435 and a Structure Core sensor. Here's a little review.

Advantages of the Structure Core:

  • SDK installation is easy. I develop on Linux and after running a script, the SDK is built and you can launch the CorePlayground app.
  • better depth accuracy/quality than the RealSense D435; see the following images.

This is the pointcloud acquired by the D435:

The same scene acquired with the Structure Core:

Much less of a “wave” effect on flat surfaces with the Structure Core, and the depth transitions are cleaner.

Two more screenshots, with the D435 first:

While the depth quality of the D435 is inferior to that of the Core sensor, the RealSense had a better depth fill rate in my experiments. Also, it looks like the Structure Core thresholds the max depth distance, and it is possible to see farther with the D435.
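By fill rate I mean the fraction of pixels that carry a valid depth value, which is easy to measure on any exported frame; a minimal sketch (assuming, as in most depth formats, that 0 marks an invalid pixel):

```python
def depth_fill_rate(depth, invalid=0):
    """Fraction of pixels that carry a depth value.

    `depth` is any iterable of per-pixel depth values (e.g. a flattened
    16-bit depth frame); pixels equal to `invalid` count as holes.
    """
    values = list(depth)
    if not values:
        return 0.0
    valid = sum(1 for v in values if v != invalid)
    return valid / len(values)
```

Running this on a frame from each sensor, on the same scene, gives a single percentage to compare.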

Disadvantages of the Structure Core:

  • I am disappointed by the build quality

First, you have to screw the tripod mount to the back of the sensor with this system:


I am not able to screw it in fully, and I don't want to force it and damage it. As a result, there is a little play and the tripod mount is not rigidly attached to the sensor.

  • second, the case is completely closed. There is no vent and the sensor heats up a lot. The plastic also seems a little cheap in my opinion:

Compared to the D435:

  • third, I chose the Structure Core with the color camera. The quality of the color camera is really, really bad (the Core first, then the D435):


I'm not sure, but it looks like there is no auto-exposure for the Core color camera.

Some thoughts about the Structure Core SDK:

  • I would have appreciated the possibility of using CMake (e.g. a find_package module for the Structure SDK) in my C++ test program.
  • for now, it seems the way to acquire data is to use a “delegate” function. I would like to do “polling” or “wait for frames” instead of using a callback to grab the images.
  • retrieving the IR frames is done via one buffer with the left and right IR images embedded. It would be great if there were a way to “deinterlace” them on the device or in the API.

To conclude:

  • better depth quality than the D435
  • easy installation, API ok
  • cheap build quality, awful color camera quality
  • choose the D435 if you don’t need the best depth accuracy
  • for now, the librealsense SDK is more mature, since the Structure Core has just been released. But that also means the RealSense community is way bigger, and it will be easier to find code for the RealSense.