A high school in Baltimore County is under fire after an AI-based system allegedly flagged a student’s bag of Doritos as a possible firearm, leading to police being called and the student being handcuffed and searched. The episode, which has put Kenwood High School and gun-detection software made by Omnilert at the center of questions about false positives, oversight, and the real-world conditions that complicate algorithmic surveillance in schools, is playing out on Baltimore-area television stations.
How A Snack Morphed Into A Security Alert
District officials said in a statement to families and local media that the AI system produced an alert, which school security reviewed and canceled. But a breakdown in communication led the report to be passed to the school resource officer and local law enforcement, who detained the student before the misunderstanding was cleared up. The student had been holding a bag of chips in a way the system interpreted as threatening, an assessment that unraveled once people examined the footage.

Omnilert acknowledged the incident and expressed concern for the student and the community, but said the detection and verification steps functioned as designed: the AI flagged a potential weapon, human reviewers overruled it, yet the action taken downstream did not reflect that call. That tension among detection, verification, and action is at the heart of the debate over AI in school safety.
Behind AI Gun Detection And Its Limits
AI gun detection in schools typically relies on computer vision models that scan live video feeds for shapes and textures that resemble firearms. Vendors like Omnilert and ZeroEyes emphasize keeping a “human in the loop” to confirm alerts before they trigger lockdowns or police calls. Others, like Evolv, pair sensors with AI to screen people at entrances to schools and other venues, aiming to keep crowds moving without traditional metal detectors.
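To make that pattern concrete, here is a minimal sketch of the detect-then-verify flow the vendors describe. It is an illustration only, not Omnilert’s or ZeroEyes’ actual pipeline; the Detection structure, the 0.80 confidence threshold, and the human_review callback are assumptions invented for the example.

```python
# Illustrative sketch of a detect-then-verify loop (not any vendor's real pipeline).
# The Detection type, threshold, and review callback are assumptions for this example.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.80  # illustrative; real systems tune this per camera and site

@dataclass
class Detection:
    label: str         # e.g. "handgun", "rifle"
    confidence: float  # model score between 0 and 1
    camera_id: str

def handle_frame(detections: list[Detection], human_review) -> str:
    """Route high-confidence detections to a human reviewer before any escalation."""
    candidates = [d for d in detections if d.confidence >= CONFIDENCE_THRESHOLD]
    if not candidates:
        return "no_action"
    for det in candidates:
        verdict = human_review(det)       # security staff confirm or cancel
        if verdict == "confirmed":
            return "notify_police"        # escalation only after human confirmation
    # Reviewers canceled every alert: record that decision so downstream
    # responders (e.g. a school resource officer) see the same call.
    return "canceled_log_and_close"

if __name__ == "__main__":
    # Example: a chip bag scores just above threshold, but a reviewer cancels it.
    frame = [Detection(label="handgun", confidence=0.83, camera_id="hall_2")]
    print(handle_frame(frame, human_review=lambda det: "canceled"))  # canceled_log_and_close
```

The design point worth noticing is that a canceled alert has to reach downstream responders through the same channel as a confirmed one; by the district’s own account, that handoff is where the Kenwood incident broke down.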
These systems are trained on large image corpora, synthetic data, and staged scenario footage. But real hallways are messy: backpacks, musical instruments, laptops, and even hands holding mundane objects can confound models. Lighting, motion blur, camera angles, and occlusion all add noise to the images, making it more likely that harmless objects are classified as threats.
Unlike face recognition, AI weapon detection has no established federal benchmark for assessing performance in the varied settings of schools. Vendor claims, typically expressed as a false-positive rate and sensitivity (the true-positive rate), can be tough to verify beyond the controlled confines of a demo. Independent testing by investigative publications has shown error rates higher than some marketing implies, and federal regulators have questioned how some vendors present their accuracy in promotional materials.
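For readers unfamiliar with those terms, the short sketch below shows how the two headline metrics are computed from a labeled test set. The counts are invented for illustration and do not describe any vendor’s product.

```python
# How the two metrics cited in marketing are usually defined.
# The counts are invented for illustration; they are not measurements of any product.

def sensitivity(true_positives: int, false_negatives: int) -> float:
    """True-positive rate: share of real weapons the system actually flags."""
    return true_positives / (true_positives + false_negatives)

def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """Share of weapon-free events that nonetheless trigger an alert."""
    return false_positives / (false_positives + true_negatives)

if __name__ == "__main__":
    # Hypothetical test set: 40 staged weapon appearances, 10,000 ordinary events.
    print(f"sensitivity: {sensitivity(38, 2):.2%}")                      # 95.00%
    print(f"false-positive rate: {false_positive_rate(12, 9988):.3%}")   # 0.120%
```

Controlled demos can make both numbers look strong precisely because the test set is curated; the hallway numbers that matter are rarely published.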
False Positives Bear Real Consequences for Students and Schools
Even when there is no weapon, a cascade of AI-triggered alarms can be traumatic. Students describe feeling frightened and humiliated after being detained. When security staff respond to camera-generated alerts, they must make split-second decisions that can carry liability. Unless districts communicate quickly and clearly, parents are left out of the loop. Civil liberties organizations, including the ACLU, caution that these kinds of tools can amplify bias and subject some students to disproportionate contact with law enforcement.

Security technology has become a favored response for schools under public pressure to keep students safe, often purchased with the help of federal and state grants. But a review by the U.S. Government Accountability Office found little evidence that many school security technologies, including surveillance and analytics, reduce harm under realistic conditions. The gap between lab performance and crowded hallways shows up in accounts from districts where umbrellas, Chromebooks, or band equipment have triggered alerts at stadium gates and school entrances.
This latest episode illustrates a fundamental truth about AI: low-probability errors become inevitable once an algorithmic decision-making process operates at scale. In high-stakes environments like school safety, even a minuscule false-positive rate means that thousands of students passing in front of cameras every day will produce regular disruptions. Procurement pitches, and the protocols that determine what happens after a model pings, often gloss over that math.
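The math is easy to sketch. Assuming, purely for illustration, a 0.1 percent false-positive rate per reviewed camera event, a 2,000-student school, and a handful of camera passes per student per day, false alerts become a routine occurrence:

```python
# Back-of-the-envelope arithmetic for false alerts at scale.
# All inputs are assumptions chosen for illustration, not measurements.

students = 2000
camera_events_per_student_per_day = 5   # hallway passes, entrances, cafeteria, etc.
false_positive_rate = 0.001             # 0.1% per reviewed event, a "minuscule" rate
school_days_per_year = 180

daily_events = students * camera_events_per_student_per_day          # 10,000
expected_false_alerts_per_day = daily_events * false_positive_rate   # 10.0
expected_false_alerts_per_year = expected_false_alerts_per_day * school_days_per_year

print(f"{expected_false_alerts_per_day:.0f} expected false alerts per day")
print(f"{expected_false_alerts_per_year:.0f} expected false alerts per school year")
```

Even cutting that assumed error rate by a factor of ten still leaves roughly one false alert per day, which is why the protocol for what happens after an alert matters at least as much as the model.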
Policy, Protocols, and Transparency in School Safety
Experts recommend a few safeguards:
- Make decision ownership explicit: if human reviewers decide to cancel an alert, that decision should immediately be visible in police-facing workflows.
- Run scenario-driven drills that incorporate false alarms as well as active threat simulations to stress-test communications and de-escalation.
- Release anonymized metrics (alert volume, false positives, response times) to build community trust and continually tune the systems; a minimal reporting sketch follows this list.
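To illustrate how lightweight that reporting could be, the sketch below aggregates a hypothetical alert log into the three figures named above; the log format, field names, and outcomes are assumptions for the example, not any district’s standard.

```python
# Minimal sketch of an anonymized transparency report built from an alert log.
# The log format and field names are assumptions for illustration only.

from statistics import median

# Each record: (outcome, seconds from alert to resolution). No student data needed.
alert_log = [
    ("canceled_by_reviewer", 95),
    ("confirmed_drill", 240),
    ("canceled_by_reviewer", 60),
    ("escalated_in_error", 1800),   # e.g. the chip-bag scenario
]

total_alerts = len(alert_log)
false_positives = sum(1 for outcome, _ in alert_log
                      if outcome in ("canceled_by_reviewer", "escalated_in_error"))
median_response_seconds = median(seconds for _, seconds in alert_log)

print(f"alerts: {total_alerts}")
print(f"false positives: {false_positives} ({false_positives / total_alerts:.0%})")
print(f"median time to resolution: {median_response_seconds / 60:.1f} minutes")
```

Nothing in it requires identifying individual students, which keeps the transparency goal from colliding with privacy obligations.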
Districts should also require vendor transparency about the scope of training data, the model’s limitations, and known edge cases. Independent audits and third-party testing in the specific environment where a system will be deployed can expose failure modes before a rollout. Finally, students and families need clear guidance on how the technology is used, how to challenge actions taken in response to AI alerts, and what recourse exists when mistakes are made.
A Teachable Moment For AI In The Classroom
The Doritos misfire might sound like a ridiculous anecdote, but it points to a larger design problem: when real students intersect with an automated surveillance system, there is very little room for error. The promise of faster detection and quicker response is appealing, particularly for administrators trying to meet stringent safety expectations. But that promise only holds if institutions pair AI with disciplined guardrails, radical transparency, and a willingness to pull the technology back when it fails.
For Kenwood High School and other districts adopting AI security, what happens next can matter as much as the initial leap: measure performance in the open, fix the communication gaps, and make the human judgment layer faster and more authoritative than the algorithm. Otherwise, tools designed to keep students safe risk turning snacks into threats, and trust into collateral damage.