Police are using AI to write crime reports. What could go wrong?

Despite the documented risks, some US police departments are testing out artificial intelligence (AI) chatbots that craft crime reports as a time-saving solution. What could go wrong? 

According to the Associated Press (AP), Oklahoma City police officers have begun using AI chatbots to write “first drafts” of crime and incident reports from body camera audio. Police Sgt. Matt Gilmore used an AI chatbot called Draft One to help write an incident report after an unsuccessful suspect search was captured on his body camera, which recorded “every word and [police dog] bark.” The audio was fed into the AI tool, which was able to “churn out a report in eight seconds.”

Draft One is built on OpenAI’s GPT-4 model, the same model that powers ChatGPT, and uses it to analyze and summarize audio from body cameras. Axon, which develops technology and weapons for the military, law enforcement, and civilians, launched the product earlier this year, billing it in its launch announcement as an “immediate force multiplier” and a timesaver for departments. 

ChatGPT has been known to hallucinate — but Axon representatives say they’ve accounted for this. Noah Spitzer-Williams, a senior product manager at Axon, told the AP that Draft One lowers ChatGPT’s “creativity dial” so that it “doesn’t embellish or hallucinate in the same ways that you would find if you were just using ChatGPT on its own.” 
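The “creativity dial” Spitzer-Williams describes most likely maps to the model’s temperature setting, which controls how much randomness the model adds to its output. As a rough illustration of the general pattern the AP describes (transcribe body camera audio, then summarize it at a low temperature), here is a minimal sketch using OpenAI’s public APIs. It is not Axon’s implementation; the file name, model choices, prompt, and temperature value are all assumptions for illustration.

```python
# Hypothetical sketch, NOT Axon's pipeline: transcribe body camera
# audio, then draft a report with the "creativity dial" turned down.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Step 1: speech-to-text on the body camera audio (Whisper API).
with open("bodycam_audio.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Step 2: summarize the transcript into a first-draft report.
# A temperature near 0 makes the output more deterministic and
# less prone to embellishment.
draft = client.chat.completions.create(
    model="gpt-4",
    temperature=0.2,
    messages=[
        {
            "role": "system",
            "content": (
                "Summarize the transcript as a factual incident report. "
                "Do not add details that are not in the transcript."
            ),
        },
        {"role": "user", "content": transcript.text},
    ],
)

print(draft.choices[0].message.content)  # first draft for officer review
```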

Based on advice from prosecutors, Oklahoma City’s police department is using Draft One solely for “minor” incidents — no felonies or violent crimes. The reports it creates don’t lead to arrests. But other police departments, including in Fort Collins, Colorado, and Lafayette, Indiana, have already introduced the technology as a primary aid in writing reports for all cases. One police chief told the AP that “it’s been incredibly popular.” 

However, some experts have concerns. Legal scholar Andrew Ferguson told the AP that he is “concerned that automation and the ease of the technology would cause police officers to be sort of less careful with their writing.” 

His hesitancy about deploying AI to streamline police workflows speaks to broader problems with relying on AI systems to automate work processes. There are numerous examples of AI-powered tools worsening systemic discrimination. For example, research shows that when employers automate hiring “without active measures to mitigate them,” they risk “biases arising in predictive hiring tools by default.”

In a release, Axon says Draft One “includes a range of critical safeguards, requiring every report to be reviewed and approved by a human officer, ensuring accuracy and accountability of the information before reports are submitted.” Of course, human review still leaves room for human error and bias, both long-documented problems in policing. 

What’s more, linguistics researchers have found that large language models (LLMs) such as GPT-4 “embody covert racism” and can’t be trained to counter raciolinguistic stereotypes against marginalized language varieties like African American English (AAE). Essentially, LLMs can perpetuate dialect prejudice when they process speech or text in varieties like AAE.  

Logic(s) Magazine editor Edward Ongweso Jr. and IT professor Jathan Sadowski also criticized automated crime reports on the podcast This Machine Kills, noting that the racial biases baked into Western-centric training data, and into body cameras themselves, can harm marginalized people.

When asked how Axon offsets these concerns, Director of Strategic Communications Victoria Keough reiterated the importance of human review. In an email to ZDNET, she noted that “police narrative reports continue to be the responsibility of officers” and that “Axon rigorously tests our AI-enabled products and adheres to a set of guiding principles to ensure we innovate responsibly.” 

The company conducted two internal studies, using 382 sample reports, specifically to test for racial bias. The studies evaluated three dimensions, Consistency, Completeness, and Word Choice Severity, to detect any racial biases and to check whether the chatbot produces wording or narratives that diverge from the “source transcript.” They found no “statistically significant difference” between Draft One reports and the transcripts. 
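Axon hasn’t published the statistics behind that claim, but “no statistically significant difference” usually comes out of a paired comparison like the one sketched below, where the same incidents are scored once for the AI draft and once for the source transcript. The scores, the 1-5 scale, and the choice of test here are illustrative assumptions, not Axon’s published methodology.

```python
# Hypothetical sketch of a paired significance test; the scores and
# the choice of test are assumptions, not Axon's published method.
from scipy import stats

# One pair of reviewer scores (e.g., 1-5) per incident: one for the
# Draft One report, one for the source transcript.
draft_scores = [4, 5, 3, 4, 4, 5, 3, 4]
transcript_scores = [4, 4, 3, 4, 5, 5, 3, 4]

t_stat, p_value = stats.ttest_rel(draft_scores, transcript_scores)

# By convention, p > 0.05 is read as "no statistically significant
# difference" between the paired samples.
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```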

While Draft One only interprets audio, Axon also tested using computer vision to summarize video footage. However, Axon CEO Rick Smith stated that “given all the sensitivities around policing, around race and other identities of people involved, that’s an area where I think we’re going to have to do some real work before we would introduce it.” 

Axon, whose stated goal is to cut gun-related deaths between police and civilians by 50%, also makes body cameras, which are intended to improve policing through objective evidence. However, according to the Washington Post’s police shootings database, police have killed more people each year since 2020, despite widespread body camera adoption across US police departments.  

It remains to be seen whether more departments will adopt tools like Draft One, and how such tools will impact public safety. 
