
Coral! At The Disco: Using fluorescence (and computer science) to label reef data

Beijbom, O., Treibitz, T., Kline, D. I., Eyal, G., Khen, A., Neal, B., Loya, Y., Mitchell, B.G., & Kriegman, D. (2016). Improving Automated Annotation of Benthic Survey Images Using Wide-band Fluorescence. Scientific Reports, 6, 23166. doi:10.1038/srep23166

An interdisciplinary, international team of coral ecologists, computer scientists, and engineers has developed a new system that automatically labels images of coral by type with near-human accuracy. The group, led by Dr. Oscar Beijbom of the University of California San Diego, set out to build a cost-effective, fast imaging and analysis workflow to make studying coral reefs more efficient. The speed-up in data processing opens the possibility of more extensive reef surveys and enhanced ecosystem monitoring.

Coral reefs, the colorful communities of invertebrates that grow in the tropics, are critical to the global ocean ecosystem — by some estimates, they provide a home for at least 25% of all the species in the ocean. Many human populations also depend on them for food and income. But coral reefs are fragile and, like much of the ocean, are changing rapidly due to environmental stress. In addition to direct human impacts like over-fishing, reefs must contend with ocean acidification and bleaching events connected to climate change.

Scientists, policy makers, and ecosystem managers are all very interested in keeping tabs on coral reefs to better understand how they are being impacted. But doing assessments is no trivial task; while reefs cover only about 0.1% of the ocean bottom, they still collectively occupy almost 300,000 square kilometers. Most studies currently sample coral reefs with digital photography surveys conducted by SCUBA divers or autonomous vehicles.

Figure 1: Image of coral taken in Eilat, Israel. Can you differentiate between all the coral, rock, and sand? Adapted from Beijbom et al., 2016.

Taking these images, however, only solves part of the problem. Once the data is collected, researchers must then extract the relevant scientific information by painstakingly labeling the pictures. Figure 1 is an example of one such image. Imagine going through thousands of those images and picking out what is coral, rock, or sand. The process is difficult, time-consuming, and prone to errors. This bottleneck between data collection and scientific output slows the ability of researchers to draw conclusions about what is happening on a reef.

To alleviate the bottleneck, Dr. Beijbom’s team experimented with both an innovative imaging procedure and a new type of automated computer classifier. First, to highlight the corals, the group captured both reflectance and fluorescence images of coral using a modified camera. Reflectance photography is what the cameras on our cell phones do – they record light bouncing off an object. Fluorescence is the process by which an object sheds excess energy after it has absorbed light. Zooxanthellae, the tiny photosynthetic organisms that live within coral tissue, fluoresce just outside the visible light spectrum when they are saturated with light. In other words, capturing the fluorescence signal is like photographing corals at a rave. Averaging the standard and fluorescence images makes the living corals pop out (Figure 2).
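To get a feel for what that fusion step involves, here is a minimal Python sketch that blends a reflectance photo with a fluorescence photo of the same patch of reef. The file names and the simple 50/50 pixel-wise average are illustrative assumptions, not the authors’ exact pipeline; in practice the two photos also have to be registered (aligned) before they are combined.

```python
# Minimal sketch of blending a reflectance image with a fluorescence image.
# File names and the 50/50 average are illustrative assumptions; a real
# pipeline also needs to align the two photos before combining them.
import numpy as np
from PIL import Image

# Load both photos of the same patch of reef (hypothetical file names).
reflectance = np.asarray(Image.open("reflectance.jpg"), dtype=np.float32)
fluorescence = np.asarray(Image.open("fluorescence.jpg"), dtype=np.float32)

# Average the two images pixel by pixel so the bright fluorescent patches
# from living coral show through on top of the ordinary photo.
blended = 0.5 * reflectance + 0.5 * fluorescence

Image.fromarray(blended.astype(np.uint8)).save("blended.jpg")
```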

Figure 2: An example of using fluorescence to pick out coral. Notice how the center image has bright green patches. That is the color of light being emitted by the symbiotic algae, zooxanthellae. The corals become much easier to spot when the reflectance, or standard, image is merged with the fluorescence image. Adapted from Beijbom et al., 2016.

Both types of pictures were then used to train and test an automatic computer classifier called a Convolutional Neural Network (CNN). CNNs are a kind of machine learning model loosely inspired by how neurons in the brain respond to visual input. The models are trained by exposing them to many hand-labeled images. Computer vision researchers have applied this technology to a range of image classification problems with fantastic results.
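As a rough illustration of what such a classifier looks like, below is a toy convolutional network written in PyTorch that maps a small image patch to one of a few labels (for example coral, rock, or sand). The layer sizes, the 64x64 patch size, and the class count are assumptions made for the sketch; the network used in the paper is considerably larger and was trained on thousands of expert-labeled points.

```python
# Toy convolutional neural network for labeling small image patches.
# Layer sizes, the 64x64 patch size, and the three classes (coral, rock,
# sand) are assumptions for illustration, not the network from the paper.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB patch in
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# A batch of four random 64x64 RGB patches stands in for real survey data.
patches = torch.randn(4, 3, 64, 64)
scores = PatchClassifier()(patches)   # one score per class, per patch
predicted = scores.argmax(dim=1)      # index of the most likely label
print(predicted)
```

In the study, the fluorescence imagery gives the network extra input information beyond an ordinary color photo, which is part of why the combined images performed best.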

Figure 3: Deploying the camera system. First, the frame is placed over the target; then the regular camera is placed in the frame, followed by the modified fluorescence camera. Adapted from Beijbom et al., 2016.

The coral images Dr. Beijbom used for testing were taken during a series of night dives on the reefs off Eilat, Israel. Divers placed a specially designed frame over the coral and took pictures with two cameras: first, a regular consumer camera in an underwater housing, and then a modified camera designed to capture coral fluorescence (Figure 3). After the data was collected, several regional coral experts labeled the images. These “ground-truth” pictures were used to train the computer and test its ability to recognize corals.

Dr. Beijbom’s group tested many combinations of learning methods and color information. The punch line is that CNNs using both regular and fluorescence images beat out all other methods by a significant margin. The best network they trained was able to accurately classify 90.5% of the regions of interest. Furthermore, the learning is fast: the program takes about 5 hours to train and can classify a new image in less than a second.

While this represents a significant step forward, much work remains to be done. Indeed, Dr. Beijbom suggests that developing a new camera that could capture both a regular and fluorescence image at the same time could yield further improvements. But the fact remains: this procedure does a really good job of recognizing corals.

Fully eliminating the bottleneck between image collection and image labeling will enable scientists to design more data-intensive experiments. The net result will hopefully be a better understanding of how we impact both coral reefs and the ocean as a whole.

For those who are interested, all of the labeled coral images taken in Eilat for these experiments are available at doi:10.5061/dryad.t4362. Some of the automated labeling code is also publicly accessible at the UCSD Computer Vision group’s website.
