Artificial intelligence can now recognize a bird just by looking at a photo


Artificial intelligence has proven useful in many different areas, and now birdwatching is getting the A.I. treatment. A new A.I. tool can classify more than 200 species of birds from a single photo.

The technology comes from a research team at Duke University, which used more than 11,000 photos spanning 200 bird species to train a machine to tell them apart. The tool was shown birds ranging from ducks to hummingbirds and was able to pick out the particular patterns that match a specific species of bird.

“From there, it spits out a series of heat maps that essentially say: ‘This isn’t just any warbler. It’s a hooded warbler, and here are the characteristics — like its yellow belly and masked head — that give it away,’” wrote Robin Smith, senior science writer in Duke’s communications division, in a blog post about the new technology.
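
The general idea behind such heat maps can be sketched in a few lines. In a prototype-based network of the kind Rudin's lab works on, each learned "prototype" vector stands for a characteristic part (say, a yellow belly), and the map shows how strongly each patch of the image resembles it. The function below is a minimal, hypothetical sketch of that similarity step using NumPy; the shapes, names, and scoring formula are illustrative assumptions, not Duke's actual implementation.

```python
import numpy as np

def prototype_heatmap(feature_map, prototype):
    """Similarity of every spatial location in a conv feature map
    to one learned prototype vector (higher = more similar).

    feature_map: (H, W, C) array of patch embeddings (hypothetical shapes)
    prototype:   (C,) vector standing for a part like 'yellow belly'
    """
    # Squared L2 distance from each patch embedding to the prototype...
    d2 = ((feature_map - prototype) ** 2).sum(axis=-1)
    # ...converted to a bounded similarity score (an assumed log-ratio form)
    return np.log((d2 + 1.0) / (d2 + 1e-4))

# Toy example: a 7x7 feature map with 4 channels
rng = np.random.default_rng(0)
fmap = rng.normal(size=(7, 7, 4))
proto = fmap[2, 3]            # pretend the patch at (2, 3) is the prototype
heat = prototype_heatmap(fmap, proto)
print(np.unravel_index(heat.argmax(), heat.shape))  # hottest cell: (2, 3)
```

Upsampled to the input resolution and overlaid on the photo, a grid of scores like this is what reads as "here is the masked head that gave it away."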

The researchers included Duke computer science Ph.D. student Chaofan Chen, Duke undergraduate Oscar Li, other members of the Prediction Analysis Lab, and Duke professor Cynthia Rudin. The team found that the machine learning model correctly identified bird species 84% of the time.

Basically, the technology is similar to facial-recognition software, which recognizes faces on social media sites to suggest tags or to identify people in surveillance videos. Unlike controversial facial-recognition software, however, the technology from Duke is designed to be transparent about how the machine learns identifying features.

“[Rudin] and her lab are creating deep learning models that explain the rationale behind their predictions, making it clear exactly why and how they came up with their solutions. When such a model makes an error, its in-built transparency makes it possible to see why,” the blog post states.

The hope is to take this technology to the next level so it can be used to classify areas of concern in medical images, such as detecting a lump in a mammogram.

“It’s case-based reasoning,” Rudin said. “We’re hoping we can better explain to patients or physicians why their image was categorized by the network as either benign or malignant.”
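
Case-based reasoning means labeling a new image by comparing it with known past cases, so the prediction comes with a built-in justification. The toy sketch below illustrates only that reasoning pattern, in the simplest possible form (nearest stored case on hypothetical feature vectors); the network Rudin describes is far more sophisticated, and all data here is made up.

```python
import numpy as np

def classify_by_case(query, cases, labels):
    """Case-based reasoning in miniature: label a query by its most
    similar stored case, and return that case so the decision can be
    explained ('called malignant because it looks like case N').

    query:  (C,) feature vector for the new image
    cases:  (N, C) feature vectors of known, labeled images
    labels: length-N list of labels for those cases
    """
    dists = np.linalg.norm(cases - query, axis=1)  # distance to each case
    i = int(dists.argmin())                        # the closest case
    return labels[i], i  # the prediction plus the case that justifies it

# Toy example with hypothetical 3-D features
cases = np.array([[0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0]])
labels = ["benign", "malignant", "malignant"]
pred, case_id = classify_by_case(np.array([0.8, 0.0, 0.1]), cases, labels)
print(pred, case_id)  # malignant 2
```

The point of the structure is the second return value: instead of an opaque score, the system can show a physician the specific prior case its decision was based on.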

We reached out to Duke University to find out what other ways the new tool might be used, but we haven’t heard back.
