Amazon’s Facial Recognition Software Has a Dangerous Race Problem


In a report published Thursday, the American Civil Liberties Union found that Amazon’s facial recognition software mistakenly matched 28 U.S. Congresspeople to photos from a mugshot database. The software—which is already in use by some police departments—was disproportionately inaccurate in identifying people of color.


In the test, the ACLU used Amazon’s Rekognition software to compare photos of the 535 members of the House and Senate against a database of 25,000 mugshots, yielding an overall false-match rate of about 5%. But while only 20% of the members of Congress are non-white, about 40% of the falsely ID’d legislators were men and women of color.

The potential outcomes of such misidentifications in life-or-death police encounters are terrifying to consider. And yet Oregon’s Washington County Sheriff’s office has built a database of 300,000 mugshots to use with the platform, and has armed its deputies with a facial recognition mobile app. Orlando police have partnered with Amazon to experiment with real-time applications of the service, with the goal of tethering the platform to public security cameras, Minority Report-style.

The 28 misidentified members of Congress.



Amazon Deep Learning and AI General Manager Dr. Matt Wood responded to the ACLU’s report in a blog post. Though the ACLU used Amazon’s default settings, including all mugshot matches that met or exceeded an 80% confidence threshold, Wood countered that for law enforcement purposes, the company recommends the use of a 99% confidence threshold.
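For readers curious what the threshold dispute amounts to in practice: Rekognition returns candidate matches with a similarity score for each, and it is the caller who decides the cutoff. A minimal sketch, using invented names and scores rather than actual Rekognition output, of how raising that cutoff from the 80% default to the recommended 99% changes the results:

```python
# Hypothetical candidate matches (mugshot ID, similarity score).
# These values are invented for illustration, not real Rekognition output.
candidates = [
    ("mugshot_0412", 99.2),
    ("mugshot_1187", 91.5),
    ("mugshot_2054", 83.0),
    ("mugshot_3301", 79.4),
]

def matches(candidates, threshold):
    """Keep only candidates whose score meets or exceeds the threshold."""
    return [name for name, score in candidates if score >= threshold]

print(matches(candidates, 80))  # default setting: three "matches" survive
print(matches(candidates, 99))  # recommended law-enforcement setting: only one
```

The point of contention is precisely this: the same underlying scores produce very different numbers of "hits" depending on where the operator sets the bar, and nothing in the software forces a police department to use the stricter setting.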

As Gizmodo’s Sidney Fussell pointed out, Wood’s post inadvertently struck at the heart of one of the primary arguments against the use of facial recognition. “In addition to setting the confidence threshold far too low, the Rekognition results can be significantly skewed by using a facial database that is not appropriately representative that is itself skewed,” Wood wrote. “In this case, ACLU used a facial database of mugshots that may have had a material impact on the accuracy of Rekognition findings.”


Wood is absolutely right—the problem is that every collection of mugshots in the U.S. is skewed. Poverty, the over-policing of non-white communities, and out-and-out law enforcement racism have all created a criminal justice system that finds people of color arrested at a rate disproportionate to our share of the population. If you’re a black person in America, you might be more likely to find a photo of someone who happens to look a hell of a lot like you in a mugshot database.

Naturally, the ACLU’s experiment has attracted Congress’s attention, and wrongly identified legislators are speaking out. Massachusetts Senator Ed Markey joined with two Congressmen in addressing a letter to Jeff Bezos, while California Rep. Jimmy Gomez helmed a letter signed by a bipartisan array of 25 representatives, inviting Bezos to a meeting.


The letter’s media release included a comment from another Congressman mistakenly matched to a mugshot—civil rights icon Rep. John Lewis, one of the original 13 Freedom Riders and an organizer of the March on Washington. “The results of the ACLU’s test of Amazon’s Rekognition software are deeply troubling,” Lewis’ comments read. “As a society, we need technology to help resolve human problems, not to add to the mountain of injustices presently facing people of color in this country.”

Race-based inaccuracies in facial recognition tech were well-known even before the recent ACLU report. In February, The New York Times reported that across three different facial recognition platforms, the gender of light-skinned men was determined with only 1% inaccuracy, while the gender of dark-skinned women was determined with around 35% inaccuracy. In March, Wired wrote of a tech company that found that its software struggled to tell Asian people apart.

These are technological representations of all-too-human phenomena. The “other race effect,” a tendency to be less able to visually distinguish between people of races different from our own, is well documented. And this isn’t the first time it’s been found that we can breathe our racism into our tech: In 2015, Google came under fire when it was found that its photo recognition software labeled some images of black people as photos of gorillas, echoing racist tropes found everywhere from 19th century scientific racism to the Twitter fingers of Roseanne Barr.

Wood correctly noted that Amazon’s technology was operating from a “skewed” pool of mugshots. But so is all tech—programmed by skewed minds, collecting data points from a skewed world. Just as the democratic promise of social media has curdled into the prospect that it may hasten democracy’s demise, we’re finding that technology can’t save us from our worst selves. Instead, we’re coding our worst selves into it.


Lifestyle – Esquire
