I’ve been trying to create an Android app similar to the Room Scanner app, but I’m stuck. The problem I’m facing is that I don’t know how to build up a full point cloud while turning/rotating the Structure Sensor.
I tried to use the orientation sensors in my Android device and apply that data to the Structure Sensor so that it knows where it is in the world frame of reference. I then wanted to apply that to the point cloud. For example: if the Structure Sensor moved 10 degrees to the right, I drew all new points in the point cloud 10 units to the left (so x + 10). I had some mixed results with this, but the sensors in my Android device drift too much, so I’m unable to get accurate data, which renders this solution useless.
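One thing that may be going wrong: shifting x by a number of degrees mixes units. The usual approach is to treat the orientation reading as a rotation and multiply each incoming point by that rotation, so every frame lands in a fixed world frame before being appended to the cloud. A minimal sketch in plain C++ (yaw-only; `yawRadians` stands in for whatever your Android rotation sensor reports — the function names are mine, not from any SDK):

```cpp
#include <cmath>
#include <vector>

struct Point3 { float x, y, z; };

// Rotate a point from the sensor frame into the world frame, assuming
// the sensor has turned by `yawRadians` about the vertical (y) axis.
// A wall that stays put in the world then stays put in the cloud.
Point3 sensorToWorld(const Point3& p, float yawRadians) {
    float c = std::cos(yawRadians);
    float s = std::sin(yawRadians);
    // Standard rotation about the y axis.
    return { c * p.x + s * p.z,
             p.y,
             -s * p.x + c * p.z };
}

// Transform a whole depth frame before appending it to the accumulated cloud.
std::vector<Point3> frameToWorld(const std::vector<Point3>& frame,
                                 float yawRadians) {
    std::vector<Point3> out;
    out.reserve(frame.size());
    for (const Point3& p : frame)
        out.push_back(sensorToWorld(p, yawRadians));
    return out;
}
```

This still suffers from the gyroscope/compass drift you describe — it only fixes the geometry of how the orientation is applied, not the quality of the orientation itself.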
I also looked at the source code of the Room Scanner app, but I’m not sure what’s happening there or how the data is applied to the point cloud. Any chance somebody can help me or give me a clue?
Some more information about my project:
- I use the Android NDK and I write most of the code in C++.
- I use VES and VTK to create the point cloud.
I’m trying a different approach now: estimating the translation and rotation between consecutive frames from the image/depth data itself. Is this the way to go?
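For anyone exploring this route: estimating the relative pose from consecutive frames is essentially visual odometry / ICP, and the core step is solving for the rigid transform between two sets of matched points. As an illustration of that math (not the Room Scanner implementation), here is a closed-form planar version: given point correspondences between frame A and frame B — assumed already found, e.g. by feature matching or nearest neighbour — it recovers the yaw and ground-plane translation that best map A onto B. All names here are hypothetical:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, z; };            // ground-plane coordinates
struct Pose { double yaw, tx, tz; };   // rotation about y, then translation

// Closed-form least-squares rigid alignment in 2D (planar Kabsch/Procrustes):
// finds yaw and t such that R(yaw) * a[i] + t approximates b[i].
Pose alignFrames(const std::vector<Pt>& a, const std::vector<Pt>& b) {
    size_t n = a.size();
    Pt ca{0, 0}, cb{0, 0};             // centroids of each set
    for (size_t i = 0; i < n; ++i) {
        ca.x += a[i].x; ca.z += a[i].z;
        cb.x += b[i].x; cb.z += b[i].z;
    }
    ca.x /= n; ca.z /= n; cb.x /= n; cb.z /= n;

    // Accumulate dot and cross products of the centred correspondences;
    // their ratio gives the optimal rotation angle.
    double sDot = 0, sCross = 0;
    for (size_t i = 0; i < n; ++i) {
        double ax = a[i].x - ca.x, az = a[i].z - ca.z;
        double bx = b[i].x - cb.x, bz = b[i].z - cb.z;
        sDot   += ax * bx + az * bz;
        sCross += ax * bz - az * bx;
    }
    double yaw = std::atan2(sCross, sDot);

    // Translation moves the rotated centroid of A onto the centroid of B.
    double c = std::cos(yaw), s = std::sin(yaw);
    double tx = cb.x - (c * ca.x - s * ca.z);
    double tz = cb.z - (s * ca.x + c * ca.z);
    return { yaw, tx, tz };
}
```

The full 3D version replaces the atan2 step with an SVD of a 3x3 covariance matrix (the Kabsch algorithm), which is what ICP iterates; libraries like PCL or Eigen handle that part if you don’t want to write it yourself. Chaining these frame-to-frame poses accumulates error too, so real scanning pipelines add loop closure or global refinement on top.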