Structure Core vs Intel RealSense D400s?


#23

Thank you for sharing @cv_hobbyist! I did manage to lower the noise on the D435 (also try scanning outdoors, as the Intel cameras are supposed to do better outside). The viewer tool provided by Intel does have settings that can get you a better point cloud, although it seems the Core still beats it. I would be interested to know how well spatial tracking works on the Core, since it is a big selling point (I think the D435i added it, but from the marketing and comments I have read, it is not integrated as well).
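
On the settings point, if you would rather apply them from code than click through the Viewer, something along these lines should work with the pyrealsense2 wrapper (just a sketch; preset names and indices can differ between firmware/SDK versions):

```python
import pyrealsense2 as rs  # Intel's Python wrapper for librealsense

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
profile = pipeline.start(config)

depth_sensor = profile.get_device().first_depth_sensor()

# Walk the available visual presets and pick "High Accuracy".
# Preset names and their indices can differ between firmware versions.
preset_range = depth_sensor.get_option_range(rs.option.visual_preset)
for i in range(int(preset_range.max) + 1):
    name = depth_sensor.get_option_value_description(rs.option.visual_preset, i)
    if name == "High Accuracy":
        depth_sensor.set_option(rs.option.visual_preset, i)
        break

pipeline.stop()
```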


#24

Thanks for sharing this!

Yes, I am also a little disappointed by the color camera quality on the Structure Core. I suppose it is useful enough for calibrating against another, higher-quality RGB camera, but yeah, it is pretty blurry and dim.

The depth quality difference there looks pretty significant though, which is important.

I know the spatial tracking on the Structure Core is a selling point, but it is not functional right now since there is no software for it yet. It is supposed to be part of the Perception Engine.

Right now I am using a RealSense T265 for tracking. It would be nice to not need a secondary device like that.


#25

I just found this for the D400s; if you scroll down to the comments, a user named guillermohor was able to get the noise out of the sensor (you lose distance, but he was able to get a lot of accuracy). I am going to give it a try on my sensor when I get a chance and post results.
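
I have not tried it yet, but if his settings can be exported as a JSON preset from the RealSense Viewer, they should be loadable in code through librealsense's advanced-mode API. Rough pyrealsense2 sketch (the filename is only a placeholder):

```python
import pyrealsense2 as rs

# Path to a JSON preset exported from the RealSense Viewer (placeholder name)
PRESET_PATH = "d435_low_noise.json"

ctx = rs.context()
dev = ctx.query_devices()[0]                 # first connected RealSense device
advnc_mode = rs.rs400_advanced_mode(dev)

# JSON presets only load while advanced mode is enabled
if not advnc_mode.is_enabled():
    advnc_mode.toggle_advanced_mode(True)
    # the device reboots here; real code should wait and re-query it

with open(PRESET_PATH) as f:
    advnc_mode.load_json(f.read())
```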


#26

One thing they mentioned in that thread is that the RealSense gets its best results on a high-powered desktop with CUDA.

My use case requires a relatively cheap and portable setup: a tablet, smartphone, Raspberry Pi, etc. The Structure Core works well on my Surface Go tablet.


#27

For the RealSense, CUDA should only improve post-processing performance (e.g. frame alignment or color conversion), not the depth quality.

Indeed, both the RealSense and the Structure Core use an ASIC (for Intel I believe it should be a derivative of this; for the Structure Core it is this one) to compute the depth onboard. This has the advantage of giving constant performance (e.g. 640x480 depth at 60 fps) regardless of the host CPU. However, they cannot improve the depth reconstruction algorithm once the product enters production. This is different from a sensor like the ZED camera, which computes depth on the host, but then you need a CUDA-compatible host computer.
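
To make the split concrete: the host only receives depth frames that the camera's ASIC has already computed, and the optional alignment/filtering runs on the host, which is the part a CUDA build of librealsense can accelerate. A minimal pyrealsense2 sketch of that host-side stage (stream settings are just examples):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

# Everything below runs on the host. The depth frames already arrive
# computed by the camera's ASIC; a CUDA build of librealsense mainly
# accelerates steps like alignment/color conversion, not the depth itself.
align = rs.align(rs.stream.color)   # align depth to the color stream
spatial = rs.spatial_filter()       # edge-preserving spatial smoothing
temporal = rs.temporal_filter()     # temporal averaging to reduce flicker

try:
    for _ in range(100):
        frames = pipeline.wait_for_frames()
        frames = align.process(frames)
        depth = frames.get_depth_frame()
        depth = spatial.process(depth)
        depth = temporal.process(depth)
        # 'depth' is now a filtered frame, e.g. for building a point cloud
finally:
    pipeline.stop()
```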

For Intel, the part about not being able to improve the algorithm is not exactly true: they can update the firmware to fix some bugs, since they designed the VPU themselves.

For Intel, there are these documents to help improve the depth quality by tuning some parameters: