Artificial intelligence startup Perplexity is courting law enforcement with a new program that gives public safety agencies one free year of its Enterprise Pro tier for up to 200 users. The company pitches the offer as a way to help officers make faster, better decisions and automate routine work like drafting reports, summarizing body-camera transcripts, and analyzing news. But researchers warn that even “mundane” AI use in policing can quietly shape cases and outcomes — and that the guardrails aren’t keeping pace.
Why a Free AI Deal for Police Raises Red Flags
Perplexity’s proposal sounds pragmatic: let AI tidy paperwork, sift documents, and assemble clean summaries. The trouble, experts say, is that large language models can sound precise while being subtly wrong. A stitched-together description of a crime scene, an off-by-one timestamp, or a misplaced inference in an investigative memo can ripple into charging decisions and plea negotiations.

Katie Kinsey, chief of staff and AI policy counsel at the Policing Project at NYU School of Law, notes that administrative tasks are not neutral. Narrative framing baked into reports and summaries influences prosecutorial choices and judicial perceptions. That’s a high-stakes arena for tools known to hallucinate or amplify bias.
There’s precedent for small AI errors creating outsized harm. Courts have sanctioned attorneys for filing briefs filled with fabricated citations generated by chatbots, illustrating how confident prose can mask falsehoods. A recent assessment by the European Broadcasting Union and the BBC found that leading chatbots, including Perplexity, frequently produced answers with at least one significant accuracy or sourcing issue when asked about current events.
In criminal justice, the risk calculus differs from consumer search. An embellished detail in a police report is not merely a typo; it can shape probable cause, detention decisions, or a jury’s interpretation of facts.
The Accountability Gap: Who Oversees Police AI
Andrew Ferguson, a professor at George Washington University Law School, argues that officers using AI bear responsibility for its outputs, just as they do for any investigative tool. But Kinsey counters that without hard law setting standards, responsibility falls through the cracks between vendors, agencies, and individual users.

Policymakers are only beginning to fill that gap. The White House’s recent government-wide AI guidance requires federal agencies to assess and mitigate risks for “safety-impacting” AI, but local police departments are not uniformly covered. The National Institute of Standards and Technology’s AI Risk Management Framework offers a blueprint for testing, documentation, and monitoring, yet adoption is voluntary. In Europe, the EU AI Act classifies many law enforcement AI systems as high-risk, triggering mandatory testing, transparency, and oversight — a template some US jurisdictions are watching but have not widely implemented.
Without clear procurement rules and discovery obligations, crucial questions remain unanswered:
- Who validates model updates?
- How are prompts and outputs logged for chain of custody?
- When must AI involvement be disclosed to courts and defense?
Guardrails Agencies Can Deploy Now to Reduce Risk
- Keep humans decisive. Experts suggest no AI-generated text should serve as the sole basis for reasonable suspicion, arrest, charging, or warrants. Require independent corroboration for any material fact surfaced by a model, and codify that rule in policy.
- Embed traceability. Every AI interaction tied to a case file should capture the model name and version, prompt, timestamp, data sources cited, confidence notes, and the human reviewer’s sign-off. Treat these records as discoverable material to meet Brady and Giglio obligations (a rough sketch of such a record follows this list).
- Procure with proof. Contracts should mandate pre-deployment red-teaming, bias and error-rate evaluations on relevant datasets, and third-party audits aligned to the NIST framework or ISO/IEC 42001. Vendors must disclose update schedules and provide a safe rollback path if accuracy regresses.
- Protect sensitive data. Use zero-retention settings, encryption, and access controls. Train officers on “prompt hygiene” to prevent leaking personal or case details into systems that learn from inputs.
- Open the books. Publish public-facing impact assessments, incident reports on AI-related errors, and annual usage statistics. Establish community oversight bodies with access to audit summaries.
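To make the traceability point concrete, here is a minimal sketch of what an audit record might look like if a department built its own logging layer around a vendor’s tool. The `AIInteractionRecord` class, its field names, and the example values are illustrative assumptions, not a vendor feature or a legal standard.

```python
# Illustrative only: a minimal audit-record sketch for logging AI interactions
# tied to a case file. Field names are hypothetical, not an established standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class AIInteractionRecord:
    case_id: str                    # case file the interaction is attached to
    model_name: str                 # vendor model identifier
    model_version: str              # exact version, so updates can be re-validated
    prompt: str                     # full prompt as entered by the officer
    output: str                     # full model output, unedited
    cited_sources: list[str] = field(default_factory=list)  # documents/URLs the model cited
    confidence_notes: str = ""      # reviewer's notes on reliability or caveats
    reviewer: str = ""              # human who signed off on using the output
    reviewed: bool = False          # sign-off flag; unreviewed output stays out of reports
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialize for an append-only, discoverable audit log."""
        return json.dumps(asdict(self), ensure_ascii=False)


# Example: record a summarization request before the output touches any report.
record = AIInteractionRecord(
    case_id="2025-00123",
    model_name="example-llm",
    model_version="2025-06-01",
    prompt="Summarize the attached body-camera transcript.",
    output="(model output here)",
    cited_sources=["bodycam_transcript_017.txt"],
    reviewer="Ofc. Example",
    reviewed=True,
)
print(record.to_json())
```

The design choice that matters is less the schema than the habit: every prompt and output is captured at the moment of use, stored append-only, and producible to courts and defense counsel on request.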
A Familiar Pattern in Police Tech Adoption
Law enforcement has long been an early adopter of emerging analytics, from predictive policing tools in the 2000s to today’s facial recognition. That history is checkered. NIST’s face recognition evaluations have documented demographic differentials in false positives across many algorithms, and multiple wrongful arrests linked to faulty face matches — including the widely cited case of Robert Williams in Detroit — have fueled calls for stronger safeguards and narrow use policies.
The lesson: when agencies adopt sophisticated tools without rigorous validation and transparency, the public pays for the learning curve. AI chatbots, tuned for fluency more than factual fidelity, risk repeating that cycle in a new guise.
Bottom Line: Who Polices the Police AI in Practice
Perplexity’s free-for-a-year offer will be attractive to resource-constrained departments. But the core question remains: who polices the police AI? Until enforceable standards, auditability, and disclosure are baked into procurement and practice, seemingly harmless AI assistance can quietly tilt the scales of justice. The technology may be new, but the oversight playbook is not — it just needs to be applied before, not after, the damage is done.
