A federal judge has condemned the practice of US Immigration and Customs Enforcement (ICE) agents using artificial intelligence (AI) tools, such as ChatGPT, to draft official use-of-force reports. She warned that the practice “undermines the agents’ credibility and may explain the inaccuracy of these reports.” The condemnation came in a two-sentence footnote by US District Judge Sara Ellis, tucked into a 223-page opinion issued last week on law enforcement responses to immigration protests in the Chicago area.
What the judge said about factual discrepancies in reports
The judge highlighted specific evidence found in body camera footage, noting that at least one agent was observed asking ChatGPT to compile a narrative for a report after providing the program with only “a brief sentence of description and several images.”
She pointed to factual discrepancies between the official narratives generated by the AI and what the body camera footage actually showed, as per a report in Fortune. Experts warn that failing to draw on an officer’s actual experience raises serious concerns about accuracy and privacy in high-stakes legal documentation, and they are calling the incident a severe breach of protocol, especially for reports that justify law enforcement actions.

“What this guy did is the worst of all worlds. Giving it a single sentence and a few pictures — if that’s true, if that’s what happened here — that goes against every bit of advice we have out there. It’s a nightmare scenario,” Ian Adams, an assistant criminology professor at the University of South Carolina and a member of the Council on Criminal Justice’s AI task force, was quoted as saying. “We need the specific articulated events of that event and the specific thoughts of that specific officer… That is the worst case scenario, other than explicitly telling it to make up facts,” he added.

Meanwhile, Katie Kinsey, tech policy counsel at the Policing Project, noted that the use of public AI tools also creates critical privacy risks. She pointed out that the agent may have unwittingly violated policy by uploading images to a public version of ChatGPT, potentially placing them in the public domain.
