Problematic Uses of AI in Interpreting Police Body-Cam Videos


In their article “The Trouble With Trusting AI to Interpret Police Body-Cam Video,” Dan Greene and Genevieve Patterson detail how Axon is using AI to interpret police body-cam video. When putting artificial intelligence to work in real-world applications, it is important to consider how the tool will be trained. Not just any data can be fed into the AI: an automated image-classification system can only learn from the data it is given. If that data is incorrect or biased, the resulting system will be incorrect and biased in the same ways. If the dataset is incomplete, further problems arise, because the system will face situations it has never learned. All of this is compounded by the fact that no AI system is error-free: it cannot be 100% accurate in every instance.

The authors note that even if police know about these possible errors, “they might suffer from ‘automation bias,’ a tendency for people to accept a computer’s judgments over their own because of the perceived objectivity of machines.” Unfortunately, machines cannot be objective: they are created and maintained by humans, who carry unconscious biases. In academia and research, AI experts deal with these problems by making their work open to the public, so it can be scrutinized by other experts and improved upon. Axon and other companies have no obligation to do that. These kinds of closed AI systems can further “rapidly degenerate,” because mistakes that are never corrected get fed back in as training data, amplifying biases and producing unreliable results.
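To see why a closed loop like this can degenerate, here is a minimal, self-contained sketch. Everything in it is invented for illustration (the two groups, the flag rates, and the rule that flagged events get re-ingested more often); it reflects nothing about Axon’s actual pipeline. A toy flagging model is repeatedly retrained on its own uncorrected predictions, and a small initial labeling disparity feeds on itself:

```python
import random

random.seed(0)

def train(labeled_events):
    """'Train' by memorizing each group's observed flag rate."""
    rates = {}
    for group in ("A", "B"):
        flags = [flag for g, flag in labeled_events if g == group]
        rates[group] = sum(flags) / len(flags)
    return rates

def predict(rates, group):
    """Flag an event with probability equal to the learned rate."""
    return random.random() < rates[group]

# Initial human-labeled data: both groups behave identically, but
# annotators flag group B slightly more often (15% vs. 10%).
data = [("A", random.random() < 0.10) for _ in range(5000)]
data += [("B", random.random() < 0.15) for _ in range(5000)]

for generation in range(5):
    model = train(data)
    print(f"gen {generation}: flag rate A={model['A']:.2f}, B={model['B']:.2f}")
    # Retrain on the model's own uncorrected output. Flagged events
    # are kept twice (hypothetically, flagged footage gets reviewed
    # and re-ingested more), so the model's skew feeds its own next
    # training set -- no human ever corrects a mistake.
    data = []
    for _ in range(10000):
        group = random.choice("AB")
        flagged = predict(model, group)
        data.append((group, flagged))
        if flagged:
            data.append((group, flagged))
```

Run it and both flag rates climb generation after generation, with group B consistently ahead: the system drifts away from reality without anyone changing the code, which is exactly the kind of degeneration open scrutiny is meant to catch.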

Essentially, the body cameras police wear function as a black box: the camera provides input, and certain events may be flagged in ways that can prompt police action. Because the technology is proprietary, it raises serious issues of accountability and transparency: there is no way to determine whether the AI is making correct suggestions to police, or whether officers are acting on the guidance of a malfunctioning system.

There are other implications of AI-powered body cameras as well. Axon is actively developing facial recognition for its cameras. In the not-so-distant future, these body-cams could identify people in public and record the time and location where they were spotted, giving the government access to a database of where and when individuals of interest can be found. And when the facial recognition makes a mistake, it could lead police to arrest the wrong individual. If the technology remains proprietary and the police do not publish detailed records of how it shapes their decisions, the public would face a surveillance system that lies outside its control. Worse, even the government would not be in control of this system: the technology is Axon’s. Axon’s technology could lead to the arrest of innocent individuals, or worse. Axon could also increase its profits by selling information about people from this database to advertisers. And if the database were ever leaked or hacked, people’s location histories would be exposed. Unlike a password or a credit card number, this information cannot be changed; a location history encodes a person’s behaviors and habits.
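To make concrete why a leaked sighting database is worse than a leaked password list, here is a small, entirely hypothetical sketch; the person ID, timestamps, and locations are invented. A handful of timestamped face-match “sightings” is already enough to reconstruct someone’s daily routine:

```python
from collections import Counter
from datetime import datetime

# Hypothetical sighting records of the kind a face-matching body-cam
# network could accumulate: (person_id, timestamp, location).
sightings = [
    ("person_42", datetime(2019, 3, 4, 8, 15), "5th & Main"),
    ("person_42", datetime(2019, 3, 5, 8, 20), "5th & Main"),
    ("person_42", datetime(2019, 3, 6, 8, 10), "5th & Main"),
    ("person_42", datetime(2019, 3, 4, 18, 5), "Oak St Gym"),
    ("person_42", datetime(2019, 3, 6, 18, 12), "Oak St Gym"),
]

def daily_routine(records, person_id):
    """Infer where a person can usually be found at each hour of day."""
    by_hour = {}
    for pid, ts, place in records:
        if pid == person_id:
            by_hour.setdefault(ts.hour, Counter())[place] += 1
    # The most frequent place per hour is a standing prediction of
    # where to find this person -- habit data that cannot be "reset"
    # the way a stolen password can.
    return {hour: counter.most_common(1)[0][0]
            for hour, counter in by_hour.items()}

print(daily_routine(sightings, "person_42"))
# {8: '5th & Main', 18: 'Oak St Gym'}
```

A password leak is fixed by rotating credentials; a routine like the one this toy query recovers cannot be rotated, which is why the stakes of a breach are so much higher.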