I’ve been a science fiction fan for as long as I can remember. I grew up on stories that challenged us to imagine the future. Today, that has built in me a deeply held belief that we, as a company and as a society, must approach what we’re building with humility, thoughtfulness, and accountability.
The world imagined by Mercy, a science fiction film released last month, is filled with public safety innovation, from drone-as-first-responder technology that gives officers clear situational awareness without putting them in harm's way, to quadcopters that let first responders soar above the cityscape and respond more quickly. These next-generation tools, however, are overshadowed by the film's central conflict: an AI court empowered to determine guilt and assign punishment, believing itself perfectly impartial and fair. The premise is unsettling not because such a system could exist, but because it forces us to consider whether we would ever choose to let it exist.
We are living through an intelligence explosion as AI systems increasingly outperform humans at processing information and identifying patterns. That is fact, not fiction. At Axon, we are a part of this moment through our work in applying AI to help public servants make smarter, faster decisions when it matters most. It is a great responsibility, and we must get it right.
The future we are building at Axon is one where AI helps people see more clearly, act more quickly, and make better decisions under pressure. That means designing and deploying AI that supports human judgment and strengthens trust between institutions and the communities they serve.
My biggest takeaway from Mercy is that systems fail not because they become evil, but because people stop probing them. The danger is not intelligence itself; it is the moment humans stop reviewing how systems behave and start treating “the system decided” as a final answer instead of an assistive tool.
As the story of Mercy unfolds, AI shifts from an unquestioned authority to a tool that reveals the truth. That evolution reflects an important principle: AI’s greatest strength lies in augmenting human inquiry, not replacing it. In the real world, AI can help surface insights in large datasets, make sense of chaotic scenes, and enable officers to get the necessary information to make decisions faster, all in support of human judgment.
However, all of this only works if we build systems intentionally from the ground up.
AI should never be put in a position to decide guilt, innocence, punishment, or mercy, because moral responsibility cannot be outsourced.
In fiction, it makes for gripping drama to imagine an all‑seeing AI court acting as judge, jury, and executioner. But in the real world, that model is both unacceptable and incompatible with the basic principles of due process, presumption of innocence, and democratic oversight that must govern public safety technology. At Axon, that line is non‑negotiable.
That is why our Responsible Innovation framework plays such a central role in how we build new technology. We build AI to support human decision‑making, never to automate responsibility. For example, while our Draft One technology has dramatically streamlined the process of writing police reports, using generative AI and body‑worn camera audio to produce high‑quality draft narratives in seconds, officers must still review, edit, and approve every report.
Responsible Innovation is a filter we apply before any product ever reaches the field. We develop our products around ethical use cases where the consensus is clear on the benefits of using AI in public safety, like streamlining report writing with Draft One and allowing officers to communicate across language barriers directly from their body-worn camera in critical moments with Real-Time Translation.
When AI is applied in public-safety contexts, it must be designed with proper parameters from the start, including:
Keeping humans firmly in the loop for critical decisions.
Designing safeguards that ensure ethical outcomes.
Listening to practitioners and communities at every step.
As I told our incredible team at Axon’s Company Kickoff 2026, it is a rare privilege that we get to help lead society through one of the most important transitions in human history. We’re not doing this just because someone else will if we don’t. We’re doing it because if we build these systems, we can build them the right way, in service of communities with a shared mission to protect life.
Stories like Mercy endure because they challenge us to examine how we design technology, how we govern it, and how we preserve the values that matter most as our future grows more technological. The choice Mercy presents, between outsourcing responsibility and enhancing human capability, is not fictional at all.
At Axon, we choose responsibility. We choose accountability. And we choose to build AI that strengthens, rather than replaces, human judgment.
Read more about our Responsible Innovation framework.