Building a New iPad App


#1

Hello All,
Greetings from Berlin,

I want to start iOS app development using the Structure SDK. What would be the best approach: should I only modify the SDK sample apps, or should I write an app from scratch? If I write the app from scratch, should I create it as a Single View app or an AR-based app?

Which language is more suitable for development, Swift or Objective-C?

Any suggestions would be appreciated.

Thanks in advance.


#2

You might want to take a look at Chris Worley’s (@n6xej) excellent Swift port of the Scanner sample app here: https://github.com/n6xej/RRStructureScannerSwift4


#3

Thanks @jim_selikoff,
I have created the app and am working on it.


#4

Hi again @jim_selikoff,

I created a new app in Swift that shows a login page on startup. After login I want to capture the room, like the Room Capture app does. I have gone through Chris Worley’s Scanner sample code. The problem is that when I tap the login button it should go to the next page, called Room Capture, but it loads too slowly.
Do you have any idea what could be causing this?

Login page camera button action code snippet:

@IBAction func CameraButton(_ sender: Any) {
    // Trigger the segue to the capture screen.
    performSegue(withIdentifier: "segueCaptureViewController", sender: self)
}

override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    if segue.identifier == "segueCaptureViewController" {
        captureViewController = segue.destination as? CaptureViewController
    }
}

PS: It does call the CaptureViewController, but on that view the camera opens too late. I debugged it but didn’t make any progress. Sometimes it only opens after I calibrate the device, and even then it responds too slowly.

Thanks & Regards,
Puneet


#5

In ViewController I placed something like the following at the end of viewDidAppear, and set the Bool performSensorInit before performing the segue back to the ViewController:

    if performSensorInit {
        thingToDo()
        performSensorInit = false
    }

In my case I’m transitioning from a SceneKit view back to scanning, so thingToDo looks like:

func thingToDo() {
    // previously performed in func meshViewWillDismiss()
    
    // If we are running colorize work, we should cancel it.
    if _naiveColorizeTask != nil {
        _naiveColorizeTask!.cancel()
        _naiveColorizeTask = nil
    }
    
    if _enhancedColorizeTask != nil {
        _enhancedColorizeTask!.cancel()
        _enhancedColorizeTask = nil
    }
    
    // previously performed in func meshViewDidDismiss()
    
    _appStatus.statusMessageDisabled = false
    updateAppStatusMessage()
    
    let _ = connectToStructureSensorAndStartStreaming()
    resetSLAM()
}
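
Roughly, the receiving end could look like this (just a sketch based on the snippets above; performSensorInit is only the Bool flag described here, not an SDK symbol):

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        // ... the Scanner sample's existing viewDidAppear work goes here ...

        // Only re-run the sensor setup when the presenting view asked for it.
        if performSensorInit {
            thingToDo()
            performSensorInit = false
        }
    }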

Maybe that will help!


#6

Thanks Jim 🙂 for the quick reply. I will have a look at that and come back to you.


#7

Hello again Jim,

One quick question: what exactly is performSensorInit? It isn’t defined in the SDK, so do I need to create it myself?

I understand your suggestion, but I’m still stuck on performSensorInit.


#8

It’s just a Bool variable set by whatever view you’re segueing from. It’s used to conditionally execute the sensor initialization code (thingToDo) that I added to the ViewController.
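
In the receiving ViewController it can be as simple as a stored property (a sketch; the name just mirrors the usage above and the default value is an assumption):

    var performSensorInit = false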

For example:

    override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
        if segue.identifier == "presentSegueToScanView" {
            let vc = segue.destination as! ViewController
            vc.performSensorInit = true
        }
    }

-jim


#9

Thanks Jim,
It’s working as I expected.
Now, after login, I am calling the Scanner app. Which methods do I need to call for the Room Capture app?


#10

Hi Jim,

It would be very helpful if you could guide me a little more on modifying the Room Capture app. As you already know, I created a Swift app based on the Scanner sample app and now want to add the Room Capture app functionality.

I tried to change the method below:

    - (void)renderSceneWithDepthFrame:(STDepthFrame*)depthFrame colorFrame:(STColorFrame*)colorFrame
    {
        // Activate our view framebuffer.
        [(EAGLView *)self.view setFramebuffer];

        glClearColor(0.0, 0.0, 0.0, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);
        glClear(GL_DEPTH_BUFFER_BIT);

        glViewport(_display.viewport[0], _display.viewport[1], _display.viewport[2], _display.viewport[3]);

        switch (_slamState.roomCaptureState)
        {
            case RoomCaptureStatePoseInitialization:
            {
                // Render the background image from the color camera.
                [self renderColorImage];

                // Render the feedback overlay to tell us if we are inside the scanning volume.
                [self renderScanningVolumeFeedbackOverlayWithDepthFrame:depthFrame colorFrame:colorFrame];

                break;
            }

            case RoomCaptureStateScanning:
            {
                // Render the background image from the color camera.
                [self renderColorImage];

                GLKMatrix4 depthCameraPose = [_slamState.tracker lastFrameCameraPose];
                GLKMatrix4 cameraGLProjection = _display.colorCameraGLProjectionMatrix;

                // In case we are not using registered depth.
                GLKMatrix4 colorCameraPoseInDepthCoordinateSpace;
                [depthFrame colorCameraPoseInDepthCoordinateFrame:colorCameraPoseInDepthCoordinateSpace.m];

                // colorCameraPoseInWorld
                GLKMatrix4 cameraViewpoint = GLKMatrix4Multiply(depthCameraPose, colorCameraPoseInDepthCoordinateSpace);

                // Render the current mesh reconstruction using the last estimated camera pose.
                [_slamState.scene renderMeshFromViewpoint:cameraViewpoint
                                       cameraGLProjection:cameraGLProjection
                                                    alpha:1.0
                                 highlightOutOfRangeDepth:false
                                                wireframe:true];
                break;
            }

            // MeshViewerController handles this.
            case RoomCaptureStateViewing:
            default: {}
        }

        // Check for OpenGL errors.
        GLenum err = glGetError();
        if (err != GL_NO_ERROR)
        {
            NSLog(@"glError: %d", err);
        }

        // Display the rendered framebuffer.
        [(EAGLView *)self.view presentFramebuffer];
    }

But it is still using the Scanner cube rendering.


#11

I’m a little confused, that’s Objective-C code. 🙂

What are the overall project goals?

Best regards,

Jim Selikoff


#12

Sorry for the misunderstanding,
Yes, that was Objective-C code.
I need to create an app like the RoomCapture sample app, in Swift. I have already modified the Scanner app and now want to add the Room Capture methods so I can swap the cube rendering for the room capture rendering.

So my question is: is this the right way, or do I need to start from the RoomCapture app from the beginning? Most of the methods and functions are the same in both apps. Please suggest which would be the better way.

Thanks.


#13

I agree that it makes sense to start with the Swift port of the Scanner sample as you are doing, and modify according to the differences between the two scanning modes.

What kinds of spaces (room dimensions) are you hoping to capture? Does the Room Capture sample from the App Store produce the kind of data you’re looking for? The reason I ask is that the bounding cube for the Scanner sample can be enlarged and used for capturing rooms. It’s not the same experience as standing in the center of the room (model), but you can set a large bounding cube and then “walk into” the space as you are scanning to deal with occlusions etc.
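
If you try the enlarged-cube route, a rough sketch of what that might look like in the Swift Scanner port is below (the volumeSizeInMeters property name is an assumption; check the options struct in your copy of the sample):

    // Sketch only: enlarge the Scanner sample's scanning volume to roughly room size.
    // `_options.volumeSizeInMeters` is assumed from the sample's options struct; the
    // exact name may differ between SDK/sample versions.
    func configureRoomSizedVolume() {
        // About 6 m wide x 3 m tall x 6 m deep instead of the default tabletop cube.
        _options.volumeSizeInMeters = GLKVector3Make(6.0, 3.0, 6.0)

        // Re-run the SLAM setup so the mapper picks up the new volume bounds.
        resetSLAM()
    }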

Best regards,

Jim


#14

Yes, I am looking for the same thing as the Room Capture sample app. I want to capture on the X, Y and Z planes. Basically, I want to capture a whole apartment, save it as video locally on the device, and later upload it to the cloud.
Great! If the approach of “the bounding cube for the Scanner sample can be enlarged and used for capturing rooms” works, even though “it’s not the same experience as standing in the center of the room (model)”, I will work on it and see whether the resulting output works for us or not.

Best Regards,
Puneet


#15

Hi @jim_selikoff ,

I tried it, but it will not work, because the cube only lets me scan one particular object, whereas I am looking to scan the whole room/building with depth and save it as .mov or another video format.
It should look like the Room Capture sample app. Could you please guide me on this?

Best Regards,
Puneet


#16

Hi Puneet,

How would depth be translated to video? Are you thinking of creating a 3D walkthrough after creating the model, or something else?

Best regards,

Jim


#17

Yes, exactly. After the 3D walkthrough, I want to save it as video.


#18

So, is the end goal to develop software to automate the process somehow? That’s a big project! If the goal is simply to capture a model and then to use other software tools to do the walkthrough, then I recommend you look at Occipital’s Canvas offering: https://canvas.io


#19

Thanks. I have seen the Canvas app. Can I integrate the Canvas app with my app too?
Also, I currently only have the option to send an e-mail after capturing the scene. Is there any method to save it locally on the device?


#20

The Canvas application is not included as part of the sample applications within the Structure SDK (iOS).

For more information about saving the STMesh, take a look at the STMesh class information located in the reference material (StructureSDK > Reference > index.html). In there, you will find the writeToFile:options:error: function, which is what you can use to save the STMesh.
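
As a rough illustration, saving the captured mesh into the app’s Documents directory from Swift might look something like this (a sketch; verify the exact option key and format constant names against the STMesh reference in your SDK version):

    // Sketch: save an STMesh locally instead of attaching it to an e-mail.
    // The option key/format names follow the Scanner sample; confirm them in the
    // SDK headers for your version.
    func saveMeshLocally(_ mesh: STMesh) {
        let documentsURL = FileManager.default.urls(for: .documentDirectory,
                                                    in: .userDomainMask)[0]
        let filePath = documentsURL.appendingPathComponent("Model.zip").path

        let options: [AnyHashable: Any] = [
            kSTMeshWriteOptionFileFormatKey: STMeshWriteOptionFileFormat.objFileZip.rawValue
        ]

        do {
            try mesh.write(toFile: filePath, options: options)
            print("Mesh saved to \(filePath)")
        } catch {
            print("Failed to save mesh: \(error)")
        }
    }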

Could you be a little more specific about the app you are building? Are you looking at developing an application to create 3D virtual tours and 3D video?