
A teaser on EIVA’s implementation of VSLAM to support real-time camera-based scanning

2 October 2018

During EIVA Days Denmark 2018, we presented a major expansion to the functionality of camera-based scanning.

There are many photogrammetry packages available, and EIVA’s customers have used them for several years, but existing solutions all share one big disadvantage: processing time. Even a few minutes of video can require hours of processing, a problem that becomes especially obvious with large subsea inspections.

It has therefore been a key goal for EIVA to develop and implement a real-time processing capability (1:1 processing, as we call it), so that one minute of video takes less than one minute to process.

Among other improvements, we employ state-of-the-art Visual Simultaneous Localisation And Mapping (VSLAM) to achieve the ambitious goal of real-time photogrammetry.

Key processing steps of EIVA’s implementation of VSLAM are detailed below:

EIVA VSLAM workflow

VSLAM feature tracking

EIVA’s VSLAM software algorithms track features from one image to the next, enabling the calculation of camera and feature positions in a 3D environment.
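To illustrate the idea of tracking a feature between consecutive frames, here is a minimal numpy sketch using sum-of-squared-differences (SSD) template matching over a local search window. This is purely illustrative and not EIVA’s implementation; production VSLAM systems use far more robust feature detectors and matchers.

```python
import numpy as np

def track_feature(frame_a, frame_b, pos, patch=5, search=10):
    """Track one feature from frame_a to frame_b by minimising the
    sum-of-squared-differences (SSD) over a local search window."""
    r, c = pos
    tmpl = frame_a[r - patch:r + patch + 1, c - patch:c + patch + 1]
    best, best_pos = np.inf, pos
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            cand = frame_b[rr - patch:rr + patch + 1,
                           cc - patch:cc + patch + 1]
            if cand.shape != tmpl.shape:
                continue  # candidate window falls off the image edge
            ssd = np.sum((cand - tmpl) ** 2)
            if ssd < best:
                best, best_pos = ssd, (rr, cc)
    return best_pos

# Synthetic frames: a bright blob that shifts by (3, 4) pixels.
a = np.zeros((64, 64)); a[30:34, 30:34] = 1.0
b = np.zeros((64, 64)); b[33:37, 34:38] = 1.0
print(track_feature(a, b, (31, 31)))  # (34, 35): the tracked position
```

Repeating this for many features across many frames yields the image-to-image correspondences from which camera motion and feature geometry can be estimated.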

This VSLAM functionality will be made available as a real-time tracking module for autonomous use. The goal is to use it on ROVs and AUVs for tracking subsea structures and automatic scanning of structures, in addition to a fleet of other applications.

For tethered ROV operations, VSLAM data processing is handled in real-time by a topside computer.

Although VSLAM itself has a number of inherent challenges – drift and a lack of scale/orientation among them – these have been addressed in EIVA’s implementation of the technology.
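The scale issue, for example, arises because a monocular camera recovers geometry only up to an unknown global scale factor. A single external distance measurement is enough to resolve it, as this simplified numpy sketch shows (the sensor names and values are hypothetical, chosen only for illustration; EIVA’s actual correction method is not described here):

```python
import numpy as np

# A monocular VSLAM trajectory, correct only up to an unknown scale.
traj_vslam = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.2], [1.5, 0.5]])

# Suppose an external sensor (hypothetically, a DVL or USBL fix)
# reports that the vehicle actually travelled 3.0 m between the
# first and last pose. Compare with the VSLAM estimate of that leg:
measured_dist = 3.0
vslam_dist = np.linalg.norm(traj_vslam[-1] - traj_vslam[0])

# One ratio recovers metric scale for the whole trajectory and map.
scale = measured_dist / vslam_dist
traj_metric = traj_vslam * scale
print(round(scale, 3))
```

The same scale factor applies to every mapped feature, so a single reliable external measurement converts the entire relative reconstruction into metric units.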

Construction of a dense point cloud

Using the 3D positions of the camera and tracked features, a dense point cloud is constructed from image pixels. These can be generated without any post-processing (when using a calibrated camera), or with only a small delay if an automatic positioning and rectification step is required (uncalibrated cameras).
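The core geometric step can be sketched with the standard pinhole camera model: once a depth estimate exists for a pixel, the pixel is lifted into 3D by scaling its normalised viewing ray. The intrinsics below are toy values for illustration, not parameters of any EIVA system:

```python
import numpy as np

def backproject(depth, K):
    """Lift every pixel with a depth estimate into a 3D point using
    the pinhole model: X = depth * K^-1 * [u, v, 1]^T."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T       # pixel -> normalised ray
    return rays * depth.reshape(-1, 1)    # scale each ray by its depth

# Toy intrinsics: 100 px focal length, principal point near centre.
K = np.array([[100.0,   0.0, 2.0],
              [  0.0, 100.0, 2.0],
              [  0.0,   0.0, 1.0]])
cloud = backproject(np.full((4, 4), 2.0), K)  # flat surface 2 m away
print(cloud.shape)  # (16, 3): one 3D point per pixel
```

Applying this to every frame, with depths implied by the tracked geometry, accumulates the dense point cloud.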

Construction of a mesh model

A 3D mesh model featuring colour textures can be generated from VSLAM-derived point clouds to support detailed 3D scanning of volumes and objects.
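As a simplified illustration of how scattered points become a surface, the sketch below triangulates a regular grid of points into mesh faces. Real meshing of unordered subsea point clouds uses more sophisticated surface reconstruction; this only conveys the point-cloud-to-triangles idea:

```python
import numpy as np

def grid_mesh(h, w):
    """Triangulate an h x w grid of points: each grid cell becomes
    two triangles, indexed into the flattened point array."""
    tris = []
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c
            tris.append([i, i + 1, i + w])          # upper-left triangle
            tris.append([i + 1, i + w + 1, i + w])  # lower-right triangle
    return np.array(tris)

faces = grid_mesh(3, 3)
print(len(faces))  # a 3x3 grid has 4 cells -> 8 triangles
```

Each vertex of the mesh can then carry a colour sampled from the source imagery, which is what produces the textured model.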

Both the dense point cloud and the mesh can be used as end deliverables, and can be combined with other sensor data such as sonar/LiDAR scans inside EIVA’s NaviModel Pro package – our leading software for survey data model visualisation, analysis and manipulation.

Did we mention it’s fast?

Delivering real-time results and avoiding heavy post-processing, a camera-based scanning tool this powerful has never come with so little delay.

Stay in touch with us about VSLAM

If you are interested in trying the functionality or providing data sets for the testing of it, please feel free to contact us through the EIVA contact form.
