Hey, I'm reading through the documentation and trying to get an understanding of the layout and the sample scanner application. I'm reading about the SLAM library as well, but I'm just looking for a bit of guidance. The scanner app is conveniently set up for what I'm trying to experiment with. Once the mesh is finished and presented in the viewer, is there any way to access the point cloud object/array to analyze the size or dimensions of the object I'm scanning? I'm currently having trouble understanding at which point in the code I could pull the point array out, filter through the points, and run some algorithms to get a 3D measurement of my object. I'm fairly familiar with 3D sensors, but my interaction with them has been much more raw: I get point data straight from the camera and do my own pre- and post-processing to filter the clouds and use the data for different applications. I'm just now getting familiar with the Structure Sensor, so any guidance on a task like this would be very helpful. This sensor paired with an iPad could be very valuable to my company's product line.
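For what it's worth, once you do get your hands on the vertex array, a quick first measurement is an axis-aligned bounding box over the points. This is only a generic sketch of that processing step, the `Point3` type and the way you'd actually pull vertices out of the SDK's mesh object are my assumptions, not the SDK's API:

```cpp
#include <array>
#include <limits>
#include <vector>

// A single vertex in meters; a stand-in for whatever vertex type the SDK exposes.
struct Point3 { float x, y, z; };

// Computes the axis-aligned bounding box of a point set and returns its
// extents {dx, dy, dz} -- a rough first estimate of the object's size.
std::array<float, 3> boundingBoxDimensions(const std::vector<Point3>& pts)
{
    float lo[3], hi[3];
    for (int i = 0; i < 3; ++i) {
        lo[i] = std::numeric_limits<float>::max();
        hi[i] = std::numeric_limits<float>::lowest();
    }
    for (const Point3& p : pts) {
        const float c[3] = { p.x, p.y, p.z };
        for (int i = 0; i < 3; ++i) {
            if (c[i] < lo[i]) lo[i] = c[i];
            if (c[i] > hi[i]) hi[i] = c[i];
        }
    }
    return { hi[0] - lo[0], hi[1] - lo[1], hi[2] - lo[2] };
}
```

An axis-aligned box over-estimates objects that sit at an angle to the camera axes; for a tighter measurement you'd follow up with something like an oriented bounding box or PCA on the points.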
I have figured out how to grab the individual depth frames as they come in, especially once I start the scan process. I could generate some data from those coordinates, but they arrive frame by frame, so I was thinking that upon hitting "Done" and transitioning to the meshViewer, I would want to access the object and its data there instead. Still looking at this and hoping to gain more understanding. Any guidance would be much appreciated!
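In case it helps while you're experimenting with the per-frame data in the meantime, the usual recipe for turning a raw depth frame into a camera-space point cloud is a pinhole back-projection. Again just a generic sketch; the function name, the row-major float layout, and the intrinsics parameters are my assumptions; you'd substitute the real calibration values your device reports:

```cpp
#include <vector>

// A single point in meters, in the camera's coordinate frame.
struct Point3 { float x, y, z; };

// Back-projects a row-major depth frame (meters) into camera-space 3D points
// using a pinhole camera model. fx, fy are focal lengths in pixels and
// cx, cy the principal point -- placeholders for your device's calibration.
std::vector<Point3> depthToPoints(const std::vector<float>& depth,
                                  int width, int height,
                                  float fx, float fy, float cx, float cy)
{
    std::vector<Point3> pts;
    pts.reserve(depth.size());
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            float z = depth[v * width + u];
            if (z <= 0.0f) continue;           // skip invalid/missing samples
            pts.push_back({ (u - cx) * z / fx, // X right
                            (v - cy) * z / fy, // Y down
                            z });              // Z forward
        }
    }
    return pts;
}
```

The per-frame clouds this produces are each in their own camera pose, which is why measuring off the fused mesh after "Done" is usually the cleaner route.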