AI 4 Corals: Developing our Autonomous Reef-Monitoring Catamaran

Development of automated and AI-based analytical tools for coral propagation projects

We are proud to announce the launch of our AI 4 Corals fund-raising programme to help achieve our ambitious goals.
We are putting robots and artificial intelligence to work, to help save the coral reefs … and we need YOUR HELP!

Collecting consistent data on our large-scale coral propagation projects is a challenge that the Reefscapers have been tackling for over 10 years. Currently this is achieved through the diligence of our coral biologists, who manually photo-monitor each coral frame. We have amassed over 200,000 photographs in 15 years of operation. Unfortunately, this amount of data is difficult to analyse manually, and a wealth of information lies unexploited.

Computer Vision (CV), used together with Artificial Intelligence (AI) to extract information from large sets of photographic data, emerges as a possible replacement for the traditional bio-statistical framework, which proves mostly inconclusive in uncontrolled environments. It is therefore paramount to develop such tools for the future of coral propagation and to enable the scaling up of such operations, which, if climate forecasts are to be trusted, may prove necessary in the future.

Reefscapers is embarking on an exciting project to explore these avenues, using a two-pronged approach. An autonomous catamaran is being developed to make data collection more robust, as slight inconsistencies can result in painstaking corrections. The solar-powered catamaran will use both a GPS autopilot, to move around the island, and CV techniques for close-range positioning. Once in position, the catamaran captures photographs and video with a camera that can be lowered to the appropriate depth. Retrieved data will be uploaded when in range and could potentially provide a live feed. In addition, the catamaran will constantly log parameters such as temperature, current and wave height.

Feature-matching for stereoscopic vision

Prototype design and construction

Good data collection is essential, but it is ineffective without the necessary data analysis. AI algorithms need to be applied to all aspects of coral reef propagation, from species identification to growth-rate calculations and correlation with environmental parameters. Many questions in coral research remain open, and some can only be answered using large-scale data analysis. For example, how important are various environmental factors to coral growth? How can we improve coral propagation methods? What makes individual colonies more resistant to bleaching events?

Thanks to our monitoring programme we have collected a huge amount of data: 47,000 monitoring datasets (4 pictures each) from around 6,200 coral frames, containing more than 300,000 individual coral fragments. This vast dataset warrants large-scale analysis to determine survival and growth rates, ultimately improving coral restoration techniques worldwide.

State-of-the-art artificial intelligence methods, including convolutional neural networks (CNNs), object classification and image segmentation, will be applied to extract valuable information from the collected raw data. Where is the coral fragment located in the picture? What is its size? What is next to it?

Planned Developments

Following on from the first prototype, which will validate the overall framework for the process, fine-tuning and additional development will be carried out on a second prototype, for which funding is required. The retractable camera will be replaced by an autonomous rover, tethered to the catamaran, that will be able to navigate to the different colonies/fragments and carry out basic maintenance tasks (e.g. cleaning the base of corals with water jets). More sensors will be installed to retrieve fundamental environmental parameters.

AI development will focus on a database of coral pictures used for automatic species identification. The program will provide species-specific growth-rate curves, updated in near real time, together with summary information on mortality occurrences, to guide the coral biologists in their observations.

A. millepora polyps and Coral Bleaching 2016

The proposed project development will take place over a one-year period, beginning April 2019. An overall cost of US$100,000 is forecast. In addition to the expenditure on the technological and scientific components, the project costs include staff salaries, flights and lodging for two programmers. Programmers with the necessary coral reef and coral propagation training will be sourced and will join the existing team in the Maldives.

DECEMBER 2018

Deep learning framework for coral growth data

We have tens of thousands of coral frame photographs in our database, with the potential to analyse the visible coral growth. To do this, we will need to automatically identify coral fragments (and hopefully their taxonomic family) and to calculate size/growth between consecutive monitoring dates. Conventional image processing software is unable to do this, so we will need to develop and use deep learning algorithms (artificial neural networks).

Development of our autonomous reef-monitoring catamaran to take our coral frame photographs

We have started to design and build a robotic camera system on a floating catamaran that, we hope, will be able to photograph all our coral frames automatically in 2019! This will enable us to scale up our operations and gather more data on coral growth.

For it to be autonomous (work independently and automatically), the system needs to know its position and be able to visually recognise frames. We are going to implement ‘Computer Vision’ algorithms that will enable the autopilot system to analyse the immediate environment, in real-time.

JANUARY 2019

Construction of the catamaran is well under way, and we have now received the internal electronic components from overseas. We have also been working on our Artificial Intelligence system, developing software for identifying the coral fragments attached to our frames.

We use a pre-trained ‘Faster R-CNN’ model, with extra training on 170 manually annotated frame photographs to identify different objects (Pocillopora fragments, Acropora fragments, dead coral fragments). With the current model, it takes about 10 seconds to process one coral frame photograph, with a promising success rate of 40-45% (mean Average Precision).
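To illustrate the approach, here is a minimal PyTorch/torchvision sketch of fine-tuning a pre-trained Faster R-CNN for these three object classes. The backbone choice, hyper-parameters and data loading are illustrative assumptions, not our production code.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 4  # background + Pocillopora, Acropora, dead coral

# start from a detector pre-trained on a generic dataset, then swap the head
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# fine-tune on the annotated frame photographs (data loading omitted)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
# for images, targets in coral_loader:              # hypothetical DataLoader
#     loss = sum(model(images, targets).values())   # sum the loss terms
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```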

Once the performance has increased and the results are more reliable, we will be able to calculate survivability for all frames and link it to recorded parameters.

FEBRUARY 2019

We have received delivery of our catamaran! We subsequently installed the solar panels and completed the electrical connections, and we are currently testing the battery before installing the remaining systems.

We are also currently training a more complex model for the identification of coral fragments in our frame monitoring pictures. After 14 days of training, the performance is slowly improving. In the meantime, we have configured a database and various data processing tools to handle the coral fragment pictures that have been detected by this algorithm. They will then be processed through a segmentation model, which will identify the fragment more precisely.
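For the segmentation step, a library model such as torchvision’s Mask R-CNN could be used; the sketch below is a hedged illustration, since the exact segmentation model is not specified above.

```python
import torch
import torchvision

# load a Mask R-CNN pre-trained on COCO (the model choice is an assumption)
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

crop = torch.rand(3, 256, 256)  # stand-in for a detected fragment crop
with torch.no_grad():
    out = model([crop])[0]      # dict with boxes, labels, scores, masks
masks = out["masks"][out["scores"] > 0.5]  # keep only confident masks
```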

MARCH 2019

The new software model we have trained for detection of the coral fragments is much more successful than the old one (mean Average Precision up from 45% to 70%). This means we now have an automated method of detecting and identifying coral species of any age, without human intervention.

The new model is a Faster R-CNN (Region-based Convolutional Neural Network) with a ResNet-101 backbone, pre-trained on the iNaturalist species dataset.
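For reference, a Faster R-CNN with a ResNet-101 backbone can be assembled in torchvision roughly as follows; the iNaturalist pre-training itself is not reproduced here, and the class count is an assumption.

```python
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# ResNet-101 with a feature pyramid network on top
# (the pretrained/weights argument name varies with torchvision version)
backbone = resnet_fpn_backbone("resnet101", pretrained=True)
model = FasterRCNN(backbone, num_classes=4)  # background + 3 coral classes
```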

Work on the physical catamaran continues as we assemble and solder the electronic components.

APRIL 2019

Our new crowd-funding campaign AI 4 Corals is now launched on Indiegogo!

All the essential parts have been set up on the catamaran (propellers, battery, charge controller, Pixhawk autopilot), and using a computer we are now able to remotely control the catamaran’s movements. We plan to upgrade the charge controller to provide more power, install the cameras, and then add a Raspberry Pi (an onboard computer to control trajectory).
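A companion computer typically talks to a Pixhawk over MAVLink; the sketch below shows one hedged way to do this with pymavlink. The serial port, baud rate and channel mapping are assumptions, not our actual wiring.

```python
from pymavlink import mavutil

# connect to the Pixhawk over serial (port and baud rate are illustrative)
link = mavutil.mavlink_connection("/dev/ttyACM0", baud=57600)
link.wait_heartbeat()  # block until the autopilot announces itself

# arm the vehicle, then drive the thrusters by overriding RC channels
link.arducopter_arm()
link.mav.rc_channels_override_send(
    link.target_system, link.target_component,
    1600,  # channel 1: assumed throttle, slightly forward of neutral (1500)
    1500,  # channel 2: assumed steering, centred
    0, 0, 0, 0, 0, 0,  # channels 3-8 left at their current values
)
```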

MAY 2019

Following installation of a new charge controller, the catamaran embarked on its longest runs to date, testing manoeuvrability and reliability. The next step is the introduction of the Raspberry Pi module.

– The catamaran’s onboard micro-computer has been installed, allowing us to remotely control trajectory.
– An additional charge controller has been added to eliminate some early issues with power consumption.
– Adjustments were made to the Pixhawk autopilot in conjunction with the installation of the micro-computer.
– Initial trials of autonomous trajectory control have started.

SEPTEMBER 2019

To make our electronic connections sturdier and more resilient to rusting, we have replaced some wiring and glued many of the components to an acrylic plate. We added small fans to prevent overheating, and the stereo camera is assembled and ready to be attached to the catamaran.

We continue to improve coral detection and identification rates; corrections are fed back into the model as training pictures, giving faster and more accurate results. We are also working on a new Python process to detect the frame structure, which will position the fragments relative to the frame and to each other.

We have submitted an abstract on our AI project for the International Coral Reef Symposium in Bremen (ICRS-2020), with a view to presenting our work and raising awareness.

OCTOBER 2019

We have ordered some replacement propeller parts, prepared a designated parking area on land, and will be working on a custom-designed trailer for transport to the ocean.

Our deep-learning model has now been trained on almost 400 coral frame photographs, comprising more than 6,000 individual pieces of coral, and is relatively accurate at detecting and identifying corals. Next, we are applying successive image-processing algorithms to detect the metal bars and then extract the shape of the frame, for precise positioning of the corals on the structure.

NOVEMBER 2019

Our catamaran is currently on stand-by as we are having quality problems with the propellers. We are in the process of replacing them and building a custom trolley to move it from the garage to the beach.

We are making good progress with the artificial intelligence. The aim is to identify which coral species are growing on the frames, purely from automatic analysis of the monitoring pictures. This way we can identify frames that have lost their tags and, most importantly, analyse coral survival and growth. Several parts need to come together for this to work:

1. Coral detection: This is done by a deep learning framework, a convolutional neural network (CNN) that learns what coral fragments look like from manually annotated pictures. We have had good results with this, and we continue to provide the network with new training pictures to widen the range of situations it handles successfully. To date, we have analysed a total of 605 pictures with 9,442 annotations. The mean Average Precision is an encouraging 0.74-0.77, but this is highly dependent on the similarity between test pictures and training pictures.

2. Frame detection: To know the position of coral fragments on the frame, we need an approximation of the 3D position of the frame relative to the camera. We can detect the metal bars by merging the straight-line segments in the picture, giving the position of a few bars (we don’t know which ones) plus some false positives (rubble or other frames in the background). We then use a probabilistic method that tries thousands of different camera angles and checks how well the resulting projections match the bars we have detected (a minimal sketch follows this list). In the end, the algorithm proposes the most likely camera position found. Once we know the camera position, we can calculate the location of the whole frame on the picture. For now, we have an approximate 60% success rate.

3. Combined information: Once we know the positions of the frame and the corals in the picture, we can link them together. For each detected coral fragment, we know which bar it is on and its position along the bar. We can then merge the data from the 4 monitoring pictures to build a global description of the frame: which coral fragments are present, with their size, living state, taxonomic family and bar position.
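Here is a minimal sketch of how such a probabilistic pose search could look, assuming a simple pinhole camera and a toy scoring rule; every name and parameter below is illustrative rather than our actual implementation.

```python
import numpy as np

FOCAL = 800.0                      # assumed focal length (pixels)
CENTRE = np.array([960.0, 540.0])  # assumed principal point

def rotation(yaw, pitch, roll):
    """Rotation matrix from Euler angles (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return rz @ ry @ rx

def project(points, pose):
    """Project 3D points (N, 3) to pixels for a camera pose (R, t)."""
    r, t = pose
    cam = points @ r.T + t
    return FOCAL * cam[:, :2] / cam[:, 2:3] + CENTRE

def score(pose, bars3d, endpoints2d, tol=15.0):
    """Count detected segment endpoints (M, 2) lying near a projected bar."""
    hits = 0
    for p0, p1 in bars3d:  # each bar is a pair of 3D points
        a, b = project(np.array([p0, p1], dtype=float), pose)
        for q in endpoints2d:
            ab, aq = b - a, q - a
            u = np.clip(np.dot(aq, ab) / np.dot(ab, ab), 0.0, 1.0)
            if np.linalg.norm(aq - u * ab) < tol:  # point-to-segment distance
                hits += 1
    return hits

def search(bars3d, endpoints2d, n_trials=5000, seed=0):
    """Try thousands of random camera poses; keep the best-scoring one."""
    rng = np.random.default_rng(seed)
    best, best_pose = -1, None
    for _ in range(n_trials):
        r = rotation(*rng.uniform(-np.pi / 4, np.pi / 4, size=3))
        t = rng.uniform([-1.0, -1.0, 1.0], [1.0, 1.0, 4.0])  # 1-4 m away
        s = score((r, t), bars3d, endpoints2d)
        if s > best:
            best, best_pose = s, (r, t)
    return best_pose, best
```

In practice the search would be constrained by priors (monitoring photos are taken from roughly known angles), which is why a few thousand random trials can be enough.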

DECEMBER 2019

In order to diversify the training photographs, we have been providing the AI software model with new monitoring sets from different environmental conditions. The current version of the artificial neural network is now trained on 605 monitoring pictures, a total of 9,442 individual fragments. To obtain these results, we had to train the model for more than 3 million seconds… which equates to approximately 35 days!

We have developed a method to detect the frame structure from the monitoring pictures:

  1. Convert the image to monochrome and increase the contrast to emphasise the shapes.
  2. Apply a ‘ridge detector’ to analyse the shapes and convert them into a network of simple curves.
  3. Apply a ‘line segment detector’ to check the curves for straight lines.
  4. Group these line segments into clusters to represent the frame’s bars; merge into longer/fewer segments.
  5. Apply a ‘probabilistic method’ that tries random camera angles and checks if the position of the frame for that angle matches the bars that we have detected. This gives us a best fit.

This method works well on 60% of our photographs, but it is less successful in visually complex environments, for instance when the ground is covered with rubble. So we are developing a second method based on deep learning (similar to fragment detection) and have started training an artificial neural network to recognise frame bars. This will replace steps 2 to 4 above, with the final step still used to determine the camera position.
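To make steps 1-4 concrete, here is a minimal OpenCV sketch. Note the assumed substitutions: Canny edge detection plus a probabilistic Hough transform stand in for the ridge and line-segment detectors, and the clustering is a simple angle/offset grouping. All thresholds are illustrative.

```python
import cv2
import numpy as np

def detect_bar_segments(path):
    """Steps 1-3: monochrome, contrast boost, straight-line extraction."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    gray = cv2.createCLAHE(clipLimit=3.0).apply(gray)  # increase contrast
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                               minLineLength=80, maxLineGap=10)
    return [] if segments is None else segments[:, 0]  # rows of (x1, y1, x2, y2)

def group_segments(segments, angle_tol=np.deg2rad(5), offset_tol=12):
    """Step 4: cluster near-collinear segments; each cluster approximates a bar."""
    clusters = []
    for x1, y1, x2, y2 in segments:
        theta = np.arctan2(y2 - y1, x2 - x1) % np.pi      # line direction
        offset = x1 * np.sin(theta) - y1 * np.cos(theta)  # signed line offset
        for c in clusters:
            if (abs(theta - c["theta"]) < angle_tol
                    and abs(offset - c["offset"]) < offset_tol):
                c["members"].append((x1, y1, x2, y2))
                break
        else:
            clusters.append({"theta": theta, "offset": offset,
                             "members": [(x1, y1, x2, y2)]})
    return clusters
```

Step 5, the probabilistic camera fit, is the search sketched in the November update above.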

JANUARY 2020

We have installed stronger replacement propellers for the catamaran and started to conduct sea trials. We are also making good progress with our new AI software model, which can accurately detect the position of coral frames in 85% of monitoring pictures.

FEBRUARY 2020

We have developed a new way to visualise our results, by displaying each frame bar in a different colour, with the attached fragments in the same colour. This enables us to assign each fragment to a position on a specific bar, to compare successive monitoring pictures of a specific frame (and also to match unknown frames by analysing their corals).
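A hedged sketch of this kind of colour-coded overlay, with an illustrative data layout rather than our real format:

```python
import matplotlib.pyplot as plt

# each bar is a 2D line segment in image coordinates; fragments carry a bar id
bars = {0: ((50, 400), (900, 380)), 1: ((60, 250), (910, 230))}
fragments = [(150, 390, 0), (420, 385, 0), (300, 240, 1)]

fig, ax = plt.subplots()
for bar_id, (p0, p1) in bars.items():
    colour = plt.cm.tab10(bar_id)  # one fixed colour per bar
    ax.plot([p0[0], p1[0]], [p0[1], p1[1]], color=colour, lw=3,
            label=f"bar {bar_id}")
for x, y, bar_id in fragments:
    ax.scatter(x, y, color=plt.cm.tab10(bar_id), s=80, edgecolors="black")
ax.invert_yaxis()  # match image coordinates (origin at top-left)
ax.legend()
plt.show()
```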

However, the underlying detection and positioning models are currently not very reliable, with a cumulative error rate of 10-20%. We are continuing with additional training to improve them through trial-and-error feedback. The next steps will improve the overall accuracy, so we can start using these models in real-world conditions. We will then be able to calculate survival rates and match frames that have lost their tags.

We received the new battery for our Catamaran and have resumed sea trials. We reached a new milestone during February… the cameras are now set up and fully functioning! We are, however, currently investigating a power-supply problem that is causing unexpected shutdowns, loss of remote connectivity and propeller unresponsiveness.

MARCH 2020

Frame Recognition – Through prolonged tuning of the parameters, we have greatly improved the results of the frame recognition algorithm. We can now successfully detect the frame structure’s position in 99.8% of monitoring photos (up from 85-90%). Of the 516 test images, only a single photograph was not processed successfully.

Coral Detection – We have improved the performance of our deep-learning algorithm for detecting the coral fragments. By adding data augmentation (random rotations and flipping of the training pictures), we are now able to detect corals more accurately. Further training still needs to be done for some specific cases, such as newly transplanted fragments (which are small and difficult to detect).
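Augmentation of this kind can be expressed compactly. The sketch below uses torchvision’s transforms.v2 API, which transforms bounding boxes together with the image; the exact pipeline is an assumption, not our own code.

```python
from torchvision.transforms import v2

# random flips and rotations, applied jointly to (image, target) pairs
augment = v2.Compose([
    v2.RandomHorizontalFlip(p=0.5),
    v2.RandomVerticalFlip(p=0.5),
    v2.RandomRotation(degrees=180),
])
# during training, e.g.: image, target = augment(image, target)
```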

Frame analysis – It has taken a lot of work to develop and improve the algorithm that matches the detected coral fragments to the bars. There are many significant algorithmic challenges, including merging data from the four monitoring pictures. The algorithm is already much more accurate, but improvements are essential to be able to track the growth of our corals at the colony level.

Catamaran – We have installed an additional battery for the propellers, and we are currently investigating an issue with the onboard cameras. We have noticed that nearby boats can sometimes generate radio interference, so we are researching whether a 4G modem would be more reliable than the radio antenna.

PROJECT MILESTONES

2019 February: First automatic detection of coral fragments
2019 April: First sea trials with the catamaran
2019 November: First automatic detection of coral frames
2020 January: First automatic 3D reconstitution of whole coral frames (structure + coral fragments)
2020 February: First underwater pictures taken by the catamaran
2020 March: 99.8% success on coral frame detection

APRIL 2020

After more than a year of work on artificial intelligence, we have reached an important milestone… our program successfully analysed several complete frames! The software automatically identified the taxonomic genus of the coral fragments and recorded their sizes, all without any human intervention.

This progress opens the door to ‘big data’ analyses by running the program on the whole database (20-30 days of continuous data processing). To do this, we need to make the deep-learning models reliable in a wider variety of situations through additional training. We have already run one additional training batch for fragment detection and two for frame segmentation (our two deep-learning models).

Currently, we are using:

  • Frame segmentation – 193 manually annotated photographs (now very reliable).
  • Fragment detection – 12,286 manual annotations from 759 photographs (more training required, for specific situations).

We have also updated the code:

  • Parallelised operations to use all CPU cores (see the sketch below).
  • Added comments to the Python files.
  • Added a command-line interface for simple execution.
  • Full documentation for all the steps.
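As a sketch of that CPU parallelisation, the Python standard library is enough to farm the per-photograph analysis out to every core; analyse_photo and the directory layout below are hypothetical stand-ins for the real pipeline.

```python
from glob import glob
from multiprocessing import Pool, cpu_count

def analyse_photo(path):
    """Run the detection/segmentation pipeline on one picture (stand-in)."""
    ...

if __name__ == "__main__":
    photos = glob("monitoring_pictures/*.jpg")  # assumed directory layout
    with Pool(processes=cpu_count()) as pool:   # one worker per CPU core
        results = pool.map(analyse_photo, photos)
```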

AI analysis of coral colony growth (cm) on frame #LG3324, from 2017 to 2019

AI recognition of the coral frame

AI analysis of the coral frame

AI identification of coral colonies