Finding the Flaw: A Journey Toward Ethical AI


Giuseppe Primiero is Professor of Logic at the Department of Philosophy, University of Milan, and co-founder and Chief Research Officer of MIRAI. His focus in MIRAI is the development of formal methods for bias assessment, risk evaluation, and correction of AI models.
https://www.linkedin.com/in/gprimiero/

Davide Posillipo is a Lead Data Scientist and co-founder of MIRAI, with experience in data science, data engineering, and machine learning projects. His focus in MIRAI is on developing appropriate strategies for handling data and AI models with respect to their risk analysis.
https://www.linkedin.com/in/davide-posillipo/

MIRAI – Milano Responsible AI
https://mirai.systems/
https://www.linkedin.com/company/milanoresponsibleai/


In a world increasingly shaped by artificial intelligence, trust is no longer a luxury; it’s a necessity. But how can we trust systems that operate in ways we can’t observe or fully understand? How do we ensure fairness when inaccessible or deeply opaque algorithms make decisions that profoundly impact our lives: decisions about loans, hiring, healthcare, or even justice?

We were excited to be invited to the Ethical Data Initiative Town Hall 2025: Shaping the Future of Data Ethics, held at the TUM Think Tank in Munich on 31 March 2025, to present MIRAI, a spinoff of the Department of Philosophy at the University of Milan created to help build fairer, more transparent AI systems.
Back in 2020, a diverse group of logicians, philosophers, mathematicians, computer scientists, and data scientists came together around a bold and timely research question: What if we could debug AI systems not just for technical errors, but for ethical flaws? Checking algorithms for errors has been the task of logic for decades. But two things have changed: first, in many real-world applications today, AI systems behave like black boxes; second, they make high-stakes decisions with no clear explanation, and when those decisions are unfair, the failure is not only technical but moral. That’s why fairness cannot be an afterthought. It must be treated as a proxy for trustworthiness, and trust is the very foundation of any responsible and socially acceptable AI system.

This shared motivation gave rise to BRIO (Bias, Risk, and Opacity in AI), a four-year project funded by the Italian Ministry of University and Research. In its initial phase, BRIO focused on laying solid theoretical groundwork through philosophical reflection and formal logic. From there, the team moved toward the development of a practical tool designed to implement a central principle: Even if you cannot look inside a black-box AI system, you can still evaluate its ethics from the outside.
How? By observing how the system behaves across different population groups and comparing those outcomes to a well-defined normative benchmark—your chosen legal, ethical, or institutional standard.

To do this effectively, three key steps are required (a minimal code sketch of such a check follows the list):

Clarity of purpose – What exactly are you monitoring the system for?

Defined benchmarks – What does a desirable or acceptable outcome look like?

Robust metrics – How will you measure divergence, and what is the threshold of acceptability?

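To make these steps concrete, here is a minimal sketch in Python of what such an outside-in check can look like. It is an illustration only, not BRIO’s actual interface: the group labels, the observed decisions, and the 0.8 threshold (the well-known "four-fifths rule") are hypothetical placeholders for whatever normative benchmark an organization adopts.

```python
# Minimal illustration (not BRIO's actual interface): a post-hoc fairness check
# on a black-box model, using only its observed inputs and outputs.
# Group labels, decisions, and the 0.8 threshold below are hypothetical.

from collections import defaultdict


def positive_rate_by_group(records):
    """records: iterable of (group_label, decision) pairs, with decision in {0, 1}."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positive decisions, total]
    for group, decision in records:
        counts[group][0] += decision
        counts[group][1] += 1
    return {group: pos / total for group, (pos, total) in counts.items()}


def disparate_impact_check(records, threshold=0.8):
    """Compare each group's positive-outcome rate to the best-off group's rate.

    A ratio below `threshold` flags a potential fairness issue; here the widely
    used "four-fifths rule" stands in for the chosen normative benchmark.
    """
    rates = positive_rate_by_group(records)
    reference = max(rates.values())
    ratios = {group: rate / reference for group, rate in rates.items()}
    violations = {group: r for group, r in ratios.items() if r < threshold}
    return ratios, violations


if __name__ == "__main__":
    # Observed (group, decision) pairs from the opaque system, e.g. loan approvals.
    observations = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    ratios, violations = disparate_impact_check(observations)
    print("positive-rate ratios vs. best-off group:", ratios)
    print("groups below the acceptability threshold:", violations)
```

In practice the divergence metric and the acceptability threshold would be chosen to match the applicable legal, ethical, or institutional standard, and the same comparison can be repeated at every stage of the pipeline, from training data to live predictions.
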
In its third year, the BRIO project released its tool as an open-source project, allowing researchers, developers, and regulators to apply post-hoc evaluations of AI fairness in practice. But that wasn’t the end of the journey; it was the beginning of a new one. In May 2024, MIRAI – Milano Responsible AI (https://mirai.systems/) was founded to carry the project forward. Its purpose: to support public and private organizations in navigating the complex, fast-moving landscape of AI regulation by offering concrete tools that make fairness measurable and trust achievable.

The timing couldn’t be more critical. The EU AI Act—the most ambitious regulatory framework for AI to date—is reshaping the field. Key milestones include:
• 2025: Prohibition of unacceptable-risk AI systems and new obligations for general-purpose systems.
• 2026–2027: Full compliance required for high-risk applications in sectors such as finance, law, education, and healthcare.

Organizations will now have to demonstrate that their AI systems are safe, fair, and transparent, or face serious legal and financial consequences.
To meet this challenge, MIRAI is further developing BRIO, enhancing its ability to identify hidden biases and systemic risks in algorithmic decision-making. BRIO already produces transparent reports to guide corrections where needed. Beyond this, MIRAI is building a full-lifecycle AI governance platform that empowers organizations to:
• Monitor fairness from data pre-processing through to model predictions.
• Manage risk and ensure regulatory compliance with clarity and ease.
• Maintain human oversight over automated decisions.

And in 2026, MIRAI will expand its focus to emerging frontiers such as generative AI and data governance.

But MIRAI’s mission goes beyond technology. It’s about building an approach, a framework of values and well-reasoned decisions, that supports the development of AI that serves people rather than replacing or misleading them.
Because at the end of the day, ethical AI isn’t just about avoiding risk—it’s about doing what’s right.
