Camera-equipped robotic vehicles now routinely collect tens to hundreds of thousands of images during their operations. In particular, autonomous underwater vehicles (AUVs) can efficiently generate 3D visual reconstructions of multi-hectare underwater scenes. This talk will introduce recent developments under the UK Natural Environment Research Council's BioCam project, designed to enable better use of imagery, both within expedition-relevant time frames to enable better-targeted observations, and post-expedition to build data archives from which the information within images can be efficiently extracted and collated. BioCam is developing a high-altitude (5–8 m) seafloor 3D colour mapping system, together with data-processing pipelines to correct colour information, localise images, and generate seamless 3D visual reconstructions with fully characterised dimensional uncertainty. One of the methods developed in this project is the georeferenced autoencoder, an unsupervised feature learner that is applied to automatically cluster and query images based on their content.
A key advantage of this method is that it can learn features that occur on spatial scales larger than the footprint of a single image. This is particularly important for underwater imaging, where strong light attenuation limits the area visible in a single image frame. The approach has been applied in the Southern Hydrate Ridge gas hydrate field off Oregon, in hydrothermal vent fields in the Okinawa Trough, at manganese deposits in the Northwest Pacific, and on cold-water coral reefs off Scotland.
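The core idea of making image features location-aware can be illustrated with a minimal sketch. This is not the BioCam implementation: the actual georeferenced autoencoder learns its latent features with a neural network, whereas the hypothetical helpers below simply append scaled georeferenced (x, y) coordinates to precomputed per-image feature vectors before clustering, so that spatially adjacent images can share a cluster even when no single frame captures the larger-scale pattern.

```python
import numpy as np

def farthest_point_init(X, k):
    """Deterministic seeding: start at the first point, then repeatedly
    take the point farthest from all centres chosen so far."""
    centres = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centres], axis=0)
        centres.append(X[np.argmax(d)])
    return np.array(centres)

def kmeans(X, k, iters=50):
    """Minimal Lloyd's k-means; returns an integer cluster label per row."""
    centres = farthest_point_init(X, k)
    for _ in range(iters):
        # Assign each point to its nearest centre, then recompute centres.
        labels = np.linalg.norm(X[:, None] - centres[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels

def georeferenced_features(latent, coords, alpha=1.0):
    """Concatenate per-image feature vectors with standardised (x, y)
    positions; alpha weights how strongly location influences clustering."""
    coords = (coords - coords.mean(axis=0)) / coords.std(axis=0)
    return np.hstack([latent, alpha * coords])
```

With a large alpha, images from the same patch of seafloor are grouped together even when their individual appearance varies; with alpha = 0 the clustering falls back to appearance only, losing the larger-than-footprint spatial context.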
During the #AdaptiveRobotics expedition off Oregon, the method was applied to build rapid understanding of a multi-hectare region mapped during the expedition. The information was used to plan subsequent deployments during the same expedition, making more detailed observations in the areas of greatest scientific interest. The talk will also introduce new initiatives for low-cost seabed imaging being developed under the UK Engineering and Physical Science Research Council's DriftCam project, which is developing imaging floats with multi-week endurance, designed to make visual observations while drifting passively over the seafloor on near-bottom currents. It will also introduce the platforms used by our group for online archiving and sharing of imagery via Squidle+ (http://soi.squidle.org), and the development of automatic segmentation algorithms to identify and outline different bottom-dwelling species from large volumes of imagery. Details of these activities can be found at https://ocean.soton.ac.uk.