For the last few years, police forces around China have invested heavily to build the world's largest video surveillance and facial recognition[1] system, incorporating more than 170 million cameras so far. In a December test of the dragnet in Guiyang, a city of 4.3 million people in southwest China, a BBC reporter was flagged[2] for arrest within seven minutes of police adding his headshot to a facial recognition database. And in the southeast city of Nanchang, Chinese police say that last month they arrested a suspect wanted for "economic crimes" after a facial recognition system spotted him at a pop concert[3] amidst 60,000 other attendees.

These types of stories, combined with reports that computer vision recognizes some types of images more accurately than humans[4], make it seem like the Panopticon has officially arrived. In the US alone, 117 million Americans[5], or roughly one in two US adults, have their picture in a law enforcement facial recognition database.

But the technology's accuracy and reliability are, at this point, much more modest than advertised, and those imperfections make law enforcement's use of it potentially sinister in a different way. Facial recognition systems are prone to both false positives, in which a program incorrectly identifies Lisa as Ann, and false negatives, in which a person goes unidentified even though they're in the database.
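To make those two error types concrete, here is a minimal sketch of a watchlist match decision; the names, threshold, and similarity scores below are invented for illustration and do not come from any real deployment.

    # Illustrative only: toy names and similarity scores, not any real system's output.
    WATCHLIST = {"Ann"}       # people actually enrolled in the database
    THRESHOLD = 0.80          # similarity above this counts as a "match"

    # (person in front of the camera, database identity they scored against, score)
    observations = [
        ("Lisa", "Ann", 0.91),   # false positive: Lisa gets flagged as Ann
        ("Ann",  "Ann", 0.62),   # false negative: Ann is enrolled but scores too low
        ("Ann",  "Ann", 0.95),   # true positive: correct identification
    ]

    for actual, candidate, score in observations:
        flagged = score >= THRESHOLD
        if flagged and actual != candidate:
            outcome = "false positive"
        elif not flagged and actual in WATCHLIST:
            outcome = "false negative"
        elif flagged:
            outcome = "true positive"
        else:
            outcome = "true negative"
        print(f"{actual} scored {score:.2f} against {candidate}: {outcome}")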

Bad Reads

For an extreme example of what can go wrong, take the data recently released under a Freedom of Information request[6] and then posted[7] by South Wales Police. It shows that at the Champions League final in Cardiff last year, South Wales police logged 173 true face matches and wrongly identified a whopping 2,297 people as suspicious, a 92 percent false positive rate.
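For readers who want the arithmetic spelled out, a minimal sketch follows; it assumes the rate is simply false alerts divided by all alerts the system raised, and the exact headline percentage depends on how that denominator is defined and rounded.

    # Counts reported for the Champions League final deployment in Cardiff.
    true_matches = 173       # alerts that turned out to be genuine
    false_matches = 2297     # people wrongly flagged as suspicious
    total_alerts = true_matches + false_matches   # 2,470 alerts overall

    # Share of all alerts that were wrong (about 0.93 with these counts).
    false_positive_rate = false_matches / total_alerts
    print(f"false positive rate: {false_positive_rate:.1%}")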

"From

Read more from our friends at Wired.com