Community

Philadelphia’s AI Experiment: The Risk of Tech-Based Policing

Temple University’s Department of Public Safety (TUDPS) has adopted ZeroEyes, an AI-powered gun detection system. It’s a landmark move—Temple is the first university in Pennsylvania to implement such technology.

ZeroEyes claims to spot firearms in surveillance footage in real time, sending alerts to security personnel before shots are fired. To some, this is a groundbreaking leap forward in campus safety. To others, it’s a Pandora’s box of surveillance and algorithmic overreach.

AI’s role in criminal justice is expanding, and Philadelphia’s embrace of it reflects a national trend. Proponents of ZeroEyes argue that such systems offer a vital edge in preventing mass shootings and other forms of gun violence. It’s easy to see the appeal: quicker response times, potentially saved lives, and a sense of security for students and faculty.

But AI’s track record in criminal justice is far from spotless, raising critical questions about fairness, accuracy, and accountability. If Philadelphia doesn’t approach this with caution, the consequences could be severe.

AI technologies like ZeroEyes are designed to do what human security personnel cannot: monitor vast networks of surveillance cameras 24/7, identifying threats with precision and speed. The system relies on machine learning, trained on thousands of firearm images, to detect potential danger. In theory, this is AI at its best—augmenting human capabilities in a way that saves lives.
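To make that mechanism concrete, here is a minimal sketch of how such a frame-by-frame detection loop might work. The detector, threshold, and alert function are hypothetical stand-ins invented for illustration, not ZeroEyes’ proprietary system:

```python
# A minimal, hypothetical sketch of a real-time gun-detection loop.
# The "detector" below is a stand-in; ZeroEyes' model is proprietary.
import random
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g., "firearm"
    confidence: float  # model score in [0, 1]

def detect_objects(frame) -> list[Detection]:
    """Stand-in for a model trained on thousands of firearm images."""
    return [Detection("firearm", random.random())]  # simulated output

def alert_security(camera_id: str, det: Detection) -> None:
    print(f"ALERT {camera_id}: {det.label} ({det.confidence:.2f})")

ALERT_THRESHOLD = 0.90  # lower catches more, but multiplies false alarms

def monitor(camera_id: str, frames) -> None:
    for frame in frames:
        for det in detect_objects(frame):
            if det.label == "firearm" and det.confidence >= ALERT_THRESHOLD:
                alert_security(camera_id, det)

monitor("north-campus-03", frames=range(10))  # ten dummy "frames"
```

The threshold is the whole game: set it low and false alarms multiply; set it high and real threats slip through unflagged.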

In Philadelphia, where gun violence is a persistent crisis, such tools seem almost necessary. In recent years the city has recorded more than 500 homicides annually, most involving firearms, and Temple’s North Philadelphia campus is not immune to these trends. Implementing ZeroEyes could help address concerns about safety and serve as a deterrent. Proponents argue that such technologies are indispensable in environments where the stakes are as high as human lives.

However, the criminal justice system’s history with AI offers plenty of cautionary tales. Consider PredPol (short for Predictive Policing), an algorithmic tool once touted as the future of crime prevention. PredPol analyzed historical crime data to predict where crimes might occur. The results? It disproportionately targeted low-income neighborhoods and communities of color.

By relying on flawed and biased historical data, PredPol essentially automated discrimination. There’s little to suggest that tools like ZeroEyes are immune to similar pitfalls. What happens when the system misidentifies a benign object as a firearm, triggering an unnecessary—and potentially violent—law enforcement response? What about the privacy implications of ubiquitous surveillance? AI systems are only as good as the data they’re trained on, and when that data reflects societal biases, the algorithms amplify them.
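How that amplification happens is worth spelling out. The toy simulation below uses invented numbers, not PredPol’s actual algorithm, to show the feedback loop researchers have described: patrols follow past records, patrols generate new records, and an initial disparity compounds even when underlying crime rates are identical:

```python
# Toy model of the feedback loop critics found in predictive policing.
# Both neighborhoods generate the SAME true incidents per year; only
# the historical records differ. All numbers are invented.

records = {"A": 120, "B": 80}        # biased historical data
TRUE_INCIDENTS = {"A": 50, "B": 50}  # identical underlying crime

for year in range(1, 6):
    # The algorithm sends patrols where past records are highest...
    target = max(records, key=records.get)
    # ...and only patrolled areas generate new records, so the
    # initial disparity compounds year after year.
    records[target] += TRUE_INCIDENTS[target]
    print(f"year {year}: recorded A={records['A']}, B={records['B']}")

# Ends at A=370 vs. B=80: a 4.6x gap built from equal true rates.
```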

One of the most chilling examples of AI’s failure in criminal justice occurred in Detroit in 2020. Robert Williams, a Black man, was wrongfully arrested after facial recognition software incorrectly identified him as a suspect in a theft case. The software’s error disrupted his life and highlighted a troubling reality: AI’s margin for error, however small, can have life-altering consequences for individuals, particularly in marginalized communities.

ZeroEyes may not use facial recognition, but the principle is the same. If the system makes an error, who bears the cost? Misidentifications could lead to unnecessary police confrontations, or even harm to innocent people. These risks underscore the need for rigorous testing, oversight, and transparency before deploying such technologies widely.
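Part of the reason error rates deserve such scrutiny is base-rate arithmetic: firearms appear in a vanishingly small fraction of camera frames, so even a detector that is rarely wrong can produce mostly false alarms. A rough illustration, again with invented numbers:

```python
# Why even a highly accurate detector can yield mostly false alarms:
# a base-rate illustration. Every number here is invented.

frames_per_day = 30 * 86_400   # 30 cameras, one frame per second
true_gun_frames = 10           # frames/day actually showing a gun
false_positive_rate = 0.0001   # 1 error per 10,000 benign frames
true_positive_rate = 0.95      # share of real guns the model catches

benign_frames = frames_per_day - true_gun_frames
false_alarms = benign_frames * false_positive_rate  # ~259 per day
real_alerts = true_gun_frames * true_positive_rate  # ~9.5 per day

precision = real_alerts / (real_alerts + false_alarms)
print(f"false alarms per day: {false_alarms:.0f}")
print(f"alerts that are real: {precision:.1%}")  # roughly 3.5%
```

This is one reason automated detection systems are typically paired with human review, and why published, independently verified error rates matter.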

To be clear, the risks of AI in criminal justice do not negate its potential in other areas. In medicine, for instance, AI is revolutionizing diagnostics, helping identify diseases like cancer and Alzheimer’s earlier than ever before.

AI-assisted research has also been a force in addressing sickle cell disease, which disproportionately affects the Black community, and contributed to the gene-therapy breakthroughs credited with curing the first Black child of the disease. It’s also transforming fields like education and climate science.

AI’s promise in these domains offers a counterpoint to its challenges in criminal justice. The issue isn’t that AI is inherently bad; it’s that its application in high-stakes, deeply human contexts requires careful thought and design. In medicine, AI operates in highly controlled environments with robust checks and balances. In policing, the stakes are just as high, but the safeguards are often weaker.

The debate over ZeroEyes in Philadelphia exemplifies the broader tension in AI adoption: How do we balance innovation with accountability? On one hand, tools like ZeroEyes could help prevent tragedies. On the other, they risk perpetuating the inequities and errors that already plague the criminal justice system.

Policymakers, technologists, and the public must grapple with these questions. Transparency is key. How does ZeroEyes’ algorithm work? What are its error rates? Who is responsible when mistakes occur? Without clear answers, the public’s trust in these systems will erode, undermining their effectiveness. Philadelphia has an opportunity to set an example for how to implement AI responsibly.

That means prioritizing transparency, conducting independent audits, and involving community stakeholders in decision-making. It also means acknowledging that some problems—like gun violence—require deeper, systemic solutions that no algorithm can provide.

AI will inevitably become a larger part of our daily lives; acting as if it won’t is like insisting on a horse and buggy while everyone else drives a car. However, its deployment in criminal justice demands a higher standard of scrutiny. Philadelphia’s embrace of ZeroEyes could pave the way for safer campuses and communities, or it could deepen the very problems it seeks to solve.

The difference lies in how the technology is used and understood. For now, the lesson is clear: AI is not a one-size-fits-all solution. It’s a tool, and like any tool, its impact depends on how it’s wielded. Philadelphia must proceed with caution, ensuring that the promise of AI doesn’t come at the expense of safety and justice.