AI, Trust, and the Future of Justice


AI is reshaping how we perceive truth, trust, and justice, raising critical ethical questions, especially in forensic image and video analysis. While AI can be a powerful tool, over-reliance on it risks distorting reality rather than clarifying it. This article explores how Amped Software plans to address the challenges and opportunities that AI raises, both today and tomorrow.


One of the most valuable principles in life is: “Focus on what you can control, and don’t worry about what you can’t.” Simple in theory, difficult in practice.

For years, I believed that, despite challenges, society was progressing toward a smarter, more peaceful, and better future. Even during the pandemic, one of the greatest crises of our time, humanity united against a common threat. Nations collaborated, new medical breakthroughs emerged, and frontline workers became global heroes. It felt like a defining moment: a glimpse of what we could achieve together.

But that optimism didn’t last.

As the crisis faded, deeper fractures in society became more apparent. Disinformation surged, social divisions widened, and democratic institutions faced unprecedented pressure. And now, artificial intelligence has entered the picture: not just as a tool, but as something that could fundamentally reshape how we perceive truth, trust, and justice.

AI: An Elevator or a Crutch?

AI has already changed the way we create, research, and interact. I use it to generate images, make “my own” music, brainstorm ideas, and explore new possibilities. Heck, I even used it to edit and improve this draft. But there’s a critical difference between using AI and relying on it.

We’ve seen technology erode certain human abilities before. Sedentary jobs weakened our muscles. Social media shortened our attention spans. However, we can still exercise and take back control of our time with a bit of good judgment and willpower.

But what happens when AI starts assisting with reasoning itself? What if we trust AI’s judgment more than our own? If AI becomes the gatekeeper of truth, who ensures its fairness? What if it decides who is guilty and who is innocent? What if AI doesn’t become an elevator for our brain, but a crutch without which we can’t think independently?

These questions aren’t theoretical. In the field of forensic video analysis, where our company operates, the implications are huge. Multimedia evidence is one of the most powerful tools for solving crimes and supporting justice, yet it is also one of the most difficult to analyze and among the most vulnerable to manipulation. In an age where AI is present in most smartphone camera pipelines, where social platforms already include generative AI features, and where even video codecs and image formats are starting to be based on AI, we cannot take for granted even what we believe to be real.

New and upcoming regulations, such as the AI Act in Europe, are trying to mitigate some of these issues, but a good part of the responsibility lies with each one of us.

Your Intelligence, Our Tools

At our company, we believe AI should empower investigators and analysts, not replace them. AI should help them speed up investigations and solve more crimes in less time, but not at the expense of accuracy. We are working to ensure that forensic video analysis remains transparent, reliable, and resistant to manipulation, especially in an era of deepfakes and AI everywhere.

Technology should help uncover the truth, not fabricate it. As AI becomes more sophisticated, we must set clear ethical boundaries, develop robust verification methods, and ensure that human expertise remains at the center of justice.

We need to take a pragmatic approach to AI in the forensic field. While there are situations where AI can be used effectively, there are also cases where even seemingly outstanding performance does not justify the risks, at least with current technologies and in the current legal context. AI-enhanced faces, for example, can be impressive, but with today’s available tools they often create what Sam Altman, speaking about ChatGPT, once described as a “misleading impression of greatness”. They instill confidence in results that should not be trusted, as several studies have demonstrated.

For the same reason, in our DeepPlate tool, we have deliberately chosen not to generate images as output. Instead, we provide only possible character combinations as textual output. Once a high-quality image is presented, regardless of how it was generated, it tends to receive more trust than it deserves. By avoiding visual outputs, we ensure a more responsible and reliable use of AI in forensic applications.
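To make the idea concrete, here is a minimal illustrative sketch of that design choice. It is not DeepPlate’s actual implementation; the function name, data layout, and probability values are all hypothetical. It shows how a recognition model’s per-character scores could be reported as ranked textual candidates, with their uncertainty visible, instead of being rendered as a single convincing image.

```python
from itertools import product

# Hypothetical per-character probability distributions from a plate-recognition
# model: one dict per character position, mapping candidate characters to scores.
# The values below are purely illustrative.
char_probs = [
    {"A": 0.70, "4": 0.20, "R": 0.10},
    {"B": 0.60, "8": 0.40},
    {"1": 0.90, "I": 0.10},
]

def top_text_candidates(char_probs, limit=5):
    """Return the most likely character combinations as plain text,
    rather than rendering an 'enhanced' image that could be over-trusted."""
    candidates = []
    for combo in product(*(d.items() for d in char_probs)):
        text = "".join(ch for ch, _ in combo)
        score = 1.0
        for _, p in combo:
            score *= p  # joint score under a naive independence assumption
        candidates.append((text, score))
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates[:limit]

for text, score in top_text_candidates(char_probs):
    print(f"{text}  (joint score: {score:.2f})")
```

Reporting ranked text candidates with explicit scores keeps the uncertainty in front of the analyst; a single crisp output image would hide it.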

AI is not inherently good or bad; it is a tool. How we integrate it into our systems will determine whether it strengthens or weakens our society.

A Call to Action: No Shortcuts

We need safeguards to prevent AI-driven disinformation.
Forensic standards must protect the integrity of digital evidence.
We need a world where AI serves humanity, not the other way around.

Would you trust an oracle that gets things right most of the time but that you don’t fully understand, perhaps to sentence a person to life in prison? In forensic science, justice, and democracy, “most of the time” is not enough.

Returning to the principle outlined at the beginning of this article and to the broader issues affecting our society, I have decided to focus my worries and my energy on the things I can influence and where I think I can make a difference. Specifically, I want to address the impact that AI will have on multimedia evidence and justice. After all, our tools are used by law enforcement and government agencies worldwide, and I regularly engage with stakeholders to discuss security and justice-related challenges. This is an area where my voice matters, where I can make a meaningful contribution, and, most importantly, where I have a responsibility to do so.

At Amped Software, we are committed to ensuring that AI enhances truth, not distorts it. Let’s build a future where we control AI, not the other way around.
