If we are to believe the purveyors of school surveillance systems, K-12 schools will soon operate in a manner akin to some agglomeration of Minority Report, Person of Interest, and Robocop. “Military-grade” systems would slurp up student data, pick up on the mere hint of harmful ideation, and dispatch officers before would-be perpetrators could carry out their vile acts. In the unlikely event that someone managed to evade the predictive systems, they would inevitably be stopped by next-generation weapon-detection systems and biometric sensors that interpret a person’s gait or tone of voice, warning authorities of impending danger. The final layer might be the most technologically advanced—some form of drone, or maybe even a robot dog, able to disarm, distract, or disable the dangerous individual before any real damage is done. If we invest in these systems, the line of thought goes, our children will finally be safe.
Not only is this not our present, it will never be our future—no matter how expansive and intricate surveillance systems become.
In the past several years, a host of companies have sprouted up, all promising a variety of technological interventions that will curtail or even eliminate the risk of school shootings. The proposed “solutions” range from tools that use machine learning and human monitoring to predict violent behavior, to artificial intelligence paired with cameras that determine the intent of individuals via their body language, to microphones that identify potential for violence based on tone of voice. Many of them use the specter of dead children to hawk their technology. Surveillance company AnyVision, for instance, uses images of the Parkland and Sandy Hook shootings in presentations pitching its facial- and firearm-recognition technology. Immediately after the Uvalde shooting last month, the company Axon announced plans for a Taser-equipped drone as a means of dealing with school shooters. (The company later put the plan on pause after members of its ethics board resigned.) The list goes on, and each company would have us believe that it alone holds the solution to this problem.
The failure here is not only in the systems themselves (Uvalde, for one, seemed to have at least one of these “security measures” in place), but in the way people conceive of them. As with policing itself, every failure of a surveillance or security system typically results in calls for more extensive surveillance. If a danger is not predicted and prevented, companies often cite the need for more data to address the gaps in their systems—and governments and schools often buy into it. In New York, despite the many failures of surveillance mechanisms to prevent the recent subway shooting (or even to capture the shooter), the city’s mayor has decided to double down on the need for even more surveillance technology. Meanwhile, the city’s schools are reportedly ignoring the moratorium on facial-recognition technology. The New York Times reports that US schools spent $3.1 billion on security products and services in 2021 alone. And Congress’s recent gun legislation includes another $300 million for increasing school security.
But at their root, what many of these predictive systems promise is a measure of certainty in situations about which there can be none. Tech companies consistently pitch the notion of complete data, and therefore perfect systems, as something that is just over the next ridge—an environment where we are so completely surveilled that any and all antisocial behavior can be predicted and thus violence can be prevented. But a comprehensive data set of ongoing human behavior is like the horizon: It can be conceptualized but never actually reached.
Currently, companies engage in a variety of bizarre techniques to train these systems: Some stage mock attacks; others use action movies like John Wick, hardly good indicators of real life. At some point, macabre as it sounds, it’s conceivable that these companies would train their systems on data from real-world shootings. Yet, even if footage from real incidents did become available (and in the large quantities these systems require), the models would still fail to accurately predict the next tragedy based on previous ones. Uvalde was different from Parkland, which was different from Sandy Hook, which was different from Columbine.
Technologies that offer predictions about intent or motivation are making a statistical bet on the probability of a given future based on what will always be incomplete and contextless data, no matter its source. The basic assumption when using a machine-learning model is that there is a pattern to be identified; in this case, that there’s some “normal” behavior that shooters exhibit at the scene of the crime. But finding such a pattern is unlikely. This is especially true given the near-continual shifts in the lexicon and practices of teens. Arguably more than many other segments of the population, young people shift the way they speak, dress, write, and present themselves—often explicitly to evade the watchful eye of adults. Developing a consistently accurate model of that behavior is nearly impossible.
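To make that limitation concrete, consider a deliberately simplified sketch, with invented data and no resemblance to any vendor’s actual product, of the kind of pattern matching such systems depend on: a bag-of-words scorer that can only recognize wording it has already been shown.

```python
# A deliberately crude, hypothetical sketch (not any vendor's actual system)
# of a bag-of-words "threat scorer" trained on a handful of past phrases.
# It can only recognize wording it has already seen, so it fails as soon as
# the vocabulary shifts.
from collections import Counter

# Tiny, invented training set standing in for "labeled past incidents."
flagged_phrases = ["going to shoot up the school", "bringing a gun tomorrow"]
benign_phrases = ["shoot me the notes", "that test killed me", "gun it to practice"]

def bag_of_words(phrases):
    counts = Counter()
    for phrase in phrases:
        counts.update(phrase.lower().split())
    return counts

flagged_words = bag_of_words(flagged_phrases)
benign_words = bag_of_words(benign_phrases)

def threat_score(message):
    """Crude score: flagged-word hits minus benign-word hits."""
    words = message.lower().split()
    return sum(flagged_words[w] for w in words) - sum(benign_words[w] for w in words)

# Memorized phrasing scores positive, but slang the model has never seen
# is invisible to it, and ordinary teen hyperbole scores as "safe."
print(threat_score("going to shoot up the school"))  # 3: matches memorized phrasing
print(threat_score("finna slide thru w da blicky"))  # 0: unseen vocabulary, no signal
print(threat_score("that quiz killed me fr"))        # -4: benign word overlap
```

Real systems are far more elaborate than this, but they inherit the same constraint: they can only extrapolate from the examples they were trained on, and the vocabulary of the people they watch keeps moving.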