A teaser on EIVA’s implementation of VSLAM to support real-time camera-based scanning

2 October 2018

During EIVA Days Denmark 2018, we presented a major expansion of the functionality of camera-based scanning.

There are many photogrammetry packages available, and EIVA’s customers have used them for several years, but existing solutions all share one big disadvantage: processing time. Even just a few minutes of video can require hours of processing – a problem that becomes especially obvious with large subsea inspections.

It has therefore been a key goal for EIVA to develop and implement a real-time processing capability – or 1:1 processing, as we call it – so that one minute of video takes less than one minute to process.

Among other improvements, we employ state-of-the-art Visual Simultaneous Localisation and Mapping (VSLAM) to achieve the ambitious goal of real-time photogrammetry.

Key processing steps of EIVA’s implementation of VSLAM are detailed below:

EIVA VSLAM workflow

VSLAM feature tracking

EIVA’s VSLAM software algorithms track features from one image to the next, thereby enabling the calculation of both the camera position and the positions of the tracked features in a 3D environment.
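
As a simplified illustration of this tracking step (and not EIVA’s actual implementation), the Python sketch below uses the open-source OpenCV library to detect and match ORB features between two consecutive frames and to recover the relative camera motion from an essential matrix. The camera intrinsic matrix K and the two frames are assumed inputs, and the recovered translation is only known up to scale – one of the inherent VSLAM challenges mentioned below.

```python
# Sketch of monocular feature tracking and relative pose estimation.
# Illustrative only; prev_frame, curr_frame (BGR images) and the camera
# intrinsic matrix K are assumed inputs.
import cv2
import numpy as np

def track_relative_pose(prev_frame, curr_frame, K):
    # Detect and describe features in both frames
    orb = cv2.ORB_create(nfeatures=2000)
    gray1 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    kp1, des1 = orb.detectAndCompute(gray1, None)
    kp2, des2 = orb.detectAndCompute(gray2, None)

    # Match descriptors between the frames (cross-check filters weak matches)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Estimate the essential matrix with RANSAC, then recover the relative
    # rotation R and translation t (monocular, so t is only up to scale)
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t, pts1, pts2, mask
```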

This VSLAM functionality is available as a real-time tracking module for autonomous use – for example, on AUVs for tracking and automatically scanning subsea structures, among other purposes.

For tethered ROV operations, VSLAM data processing is handled in real time by a topside computer.

Although VSLAM itself has a number of inherent challenges – among them drift and a lack of absolute scale and orientation – these have been addressed in EIVA’s implementation of the technology.

Construction of a dense point cloud

Using the 3D positions of the camera and the tracked features, a dense point cloud is constructed from image pixels. This can be generated without any post-processing (when using a calibrated camera), or with only a small delay should an automatic positioning and rectification step be required (uncalibrated cameras).
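
As a rough sketch of this step (again, not EIVA’s code), the snippet below back-projects a per-pixel depth map into a world-frame point cloud using the camera intrinsics K and the VSLAM-estimated camera pose (R, t). Estimating the depth map itself, for example by multi-view matching of the rectified images, is omitted, and all inputs are assumed.

```python
# Sketch: turn image pixels into a world-frame dense point cloud, given a
# per-pixel depth map and the camera pose estimated by VSLAM. The depth
# estimation itself is omitted; depth, K, R and t are assumed inputs.
import numpy as np

def backproject_to_world(depth, K, R, t):
    """depth: (H, W) depths in metres; K: 3x3 intrinsics;
    R, t: camera-to-world rotation (3x3) and translation (3,)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pixels = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    rays = np.linalg.inv(K) @ pixels          # normalised viewing rays per pixel
    points_cam = rays * depth.reshape(1, -1)  # scale each ray by its depth
    points_world = (R @ points_cam).T + t     # transform into the world frame

    valid = depth.reshape(-1) > 0             # drop pixels with no depth estimate
    return points_world[valid]                # N x 3 point cloud
```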

Construction of a mesh model

A 3D mesh model featuring colour textures can be generated from VSLAM-derived point clouds to support detailed 3D scanning of volumes and objects.
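
For illustration only, the sketch below shows one common way to turn such a point cloud into a coloured triangle mesh, here using the open-source Open3D library’s Poisson surface reconstruction. The point positions and per-point colours are assumed to come from the previous step; this is not a description of EIVA’s own meshing pipeline.

```python
# Sketch: build a coloured triangle mesh from a dense point cloud using
# Open3D's Poisson surface reconstruction. Illustrative only; 'points'
# (N x 3) and 'colors' (N x 3, values in [0, 1]) are assumed inputs.
import numpy as np
import open3d as o3d

def mesh_from_point_cloud(points, colors, output_path="scan_mesh.ply"):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    pcd.colors = o3d.utility.Vector3dVector(colors)

    # Poisson reconstruction needs normals; in practice these would also be
    # oriented consistently, e.g. towards the known camera positions
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)
    o3d.io.write_triangle_mesh(output_path, mesh)
    return mesh
```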

Both the dense point cloud and the mesh can be used as end deliverables, and they can be combined with other sensor data, such as sonar and LiDAR scans, inside EIVA’s NaviModel Pro package – our leading software for survey data model visualisation, analysis and manipulation.

Significant benefits

EIVA’s implementation of VSLAM technology is unique in several ways:

Single-camera compatible

EIVA’s implementation of VSLAM requires only a single camera to perform all of the operations outlined above. Other photogrammetry solutions are based on large, bulky stereo camera setups that reduce the available vehicle payload and are therefore limited to large platforms or tied to a specific supplier.

Of course, more cameras can be used to gain more coverage, but this is not a requirement.

Not tied to a specific camera

EIVA’s VSLAM solution can interface with the feed from any camera already installed on your vehicle, and can use either still images or video.

Hardware-independent

VSLAM doesn’t require any special hardware – just a powerful PC. This means you can use it on any mobile vehicle, including very small micro- and mini-ROVs. We’re confident that the ability to turn any small ROV into a powerful scanning tool will open up entirely new possibilities for operators and the market.

Did we mention it’s fast?

It delivers real-time results and avoids heavy post-processing – a camera-based scanning tool this powerful has never come with so little delay.

Stay in touch with us about VSLAM

VSLAM functionality is still being finalised, and we expect an official release at the end of 2018.

If you are interested in trying the functionality or providing data sets for testing it, please feel free to contact us through the EIVA contact form.
