The Future Is Here: AI-Generated Police Report Claims Officer Turned into Frog

Police officer holding a frog (Mindy Schauer/Digital First Media/Orange County Register/Getty)

In an embarrassing incident highlighting the pitfalls of relying on artificial intelligence in law enforcement, the police department in Heber City, Utah, was forced to explain why a police report generated by AI claimed a responding officer had transformed into a frog.

Futurism reports that the Heber City Police Department in Utah recently began testing an AI-powered software tool called Draft One, developed by the police technology company Axon, to automatically generate police reports from body camera footage. The goal was to reduce the amount of paperwork for officers. However, the results have been far from satisfactory, with errors slipping into reports in multiple cases.

In one particularly glaring instance, the AI software picked up on audio from the Disney movie The Princess and the Frog playing in the background of the body camera footage. This led the tool to bizarrely claim that the responding officer was himself transforming into a frog. “That’s when we learned the importance of correcting these AI-generated reports,” stated police sergeant Rick Keel.

Even a simple mock traffic stop meant to demonstrate the tool’s capabilities turned into a headache, with the resulting report requiring numerous corrections. Despite these drawbacks, Keel claimed the software is saving him “six to eight hours weekly.”

The use of AI in law enforcement has drawn criticism from experts who warn of potential biases and accountability issues. Andrew Ferguson, a law professor at American University, expressed concern that the ease of the technology could cause officers to be less careful with their writing.

Critics also argue that AI-generated reports could introduce deniability and make officers less accountable for mistakes. An investigation by the Electronic Frontier Foundation found that Draft One “seems deliberately designed to avoid audits that could provide any accountability to the public,” concluding that it is often impossible to distinguish which parts of a report were generated by AI and which were written by an officer.

Read more at Futurism here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
