By Laura Gorrieri
About the author: Laura Gorrieri is a PhD candidate in Philosophy at the University of Turin. Her research focuses on the ethics of artificial intelligence, with particular attention to the social impacts of Large Language Models and the linguistic functioning of AI systems.
What should we expect from data-driven technologies? Is it reasonable to ask for explanations, or should we accept that data—and the systems built on it—simply don’t answer back? Until recently, making sense of artificial intelligence (AI) outputs wasn’t particularly difficult: not because we were better at it, but because the systems themselves were relatively simple. Now, highly complex AI systems are embedded in a range of high-stakes decision-making processes: filtering CVs (Deshpande et al., 2020), reading medical scans (Al-Antari, 2023), deciding who gets a loan (Purificato et al., 2023). And this is where things get tricky!
Imagine a neural network—complex, opaque, trained on massive datasets—automatically rejecting your loan application. You might be tempted to ask for an explanation; under EU law, it is your right (Casey et al., 2019). What comes back might be mathematically correct, but as a human being trying to understand why something is happening to you, it may not be satisfying. This is precisely the space in which Explainable AI (XAI) tries to intervene. In the emerging literature on XAI, two concepts are cardinal: interpretability and explainability. The distinction is subtle but important. Interpretability asks how the system arrived at its decision—for example, which features in the input data mattered most. Explainability, by contrast, asks why the system made that decision—a question closer to what we typically ask of other humans when something significant is at stake.
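To make the contrast concrete, here is a minimal sketch of the kind of answer interpretability methods give to the "how" question. It trains a toy loan-approval classifier on synthetic data and reports which input features mattered most, using Python, scikit-learn, and permutation importance. The feature names, the data, and the approval rule are all hypothetical, invented purely for illustration.

```python
# A toy interpretability check: which features of a (synthetic) loan dataset
# does a simple model lean on most? All names and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical applicant features
income = rng.normal(50_000, 15_000, n)         # yearly income
debt_ratio = rng.uniform(0.0, 1.0, n)          # debt-to-income ratio
years_employed = rng.integers(0, 30, n)        # years in employment

X = np.column_stack([income, debt_ratio, years_employed])
# Hypothetical approval rule, used only to generate labels for this sketch
y = ((income > 45_000) & (debt_ratio < 0.5)).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# Permutation importance: how much does accuracy drop when each feature is
# shuffled? A crude answer to "how did the model decide?"
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt_ratio", "years_employed"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```

An output like this says something about how the decision was reached in a statistical sense; it says nothing about why a rejection would be justified, which is exactly the gap at issue in what follows.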
For big-data-driven technologies, answering the “how” is already hard. The “why” is even more elusive. Why-questions are slippery: they depend on the context, the audience, the stakes. A good explanation for a software engineer may sound like gibberish to someone unfamiliar with machine learning. And depending on what’s at stake, our standards for what counts as “good” shift dramatically. Let’s say your new smartwatch doesn’t track the full distance of your run. You might want a short technical justification, but no one’s losing sleep over it. Now imagine a virtual doctor dismissing your symptoms with a vague reference to “stress.” Here, the need for a clear and satisfying explanation is urgent.
So, what does XAI propose? Not much agreement, unfortunately. We need to start by recognizing that artificial neural networks are not humans: they don’t “think” in any familiar sense. As Edsger Dijkstra (1984) eloquently put it, asking whether an AI system thinks is like asking whether a submarine swims. Of course, this does not invalidate technical understanding, which remains necessary for debugging and model development. But maybe we should stop short of expecting explanations that make human sense. If we want to avoid becoming trapped in the labyrinth of vaguely defined concepts and overreaching expectations that populate XAI discourse, perhaps the philosophy of science can offer a way out. Much like Ariadne’s thread in the Labyrinth, the work done by philosophers of science might help us navigate the mess.
Philosophy of science has long grappled with the limits of models, the ways knowledge is situated in particular practices, and the responsibility embedded in scientific work. That toolkit might be more helpful here than stitching a layer of explanation onto the AI “black box.” After all, AI models share several epistemic features with scientific ones (Fleisher, 2022): notably, they abstract, idealize, and simplify in order to represent complex phenomena. Reframing the question in this way shifts the burden from the system to the practices. Instead of expecting the model to explain itself, we might ask whether the modelling practices around it are epistemically sound. Are the data chosen for training the system representative? Who gets to decide what data is selected, and why? These are questions not for the algorithm, but for the humans building, deploying, and shaping it.
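As a small illustration of what such a question about practices might look like in code, here is a hypothetical sketch that compares the composition of a training set with a reference population and reports where groups are over- or under-represented. The attribute, the group labels, and the reference shares are all invented for the example.

```python
# A minimal sketch of one "epistemic soundness" check: is the training data
# roughly representative of the population the model will serve?
# Group labels and reference shares below are hypothetical.
from collections import Counter

def representation_gaps(training_labels, reference_shares):
    """Compare each group's share in the training data with a reference
    population share and report the gap in percentage points."""
    counts = Counter(training_labels)
    total = sum(counts.values())
    gaps = {}
    for group, expected_share in reference_shares.items():
        observed_share = counts.get(group, 0) / total
        gaps[group] = (observed_share - expected_share) * 100
    return gaps

# Hypothetical age brackets of loan applicants in a training set
training_labels = ["18-34"] * 700 + ["35-54"] * 250 + ["55+"] * 50
# Hypothetical reference shares, e.g. drawn from census data
reference_shares = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

for group, gap in representation_gaps(training_labels, reference_shares).items():
    print(f"{group}: {gap:+.1f} percentage points vs reference")
```

A check like this does not make a model explainable; it simply makes one modelling choice, namely which data the system learns from, open to scrutiny.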
Otherwise, if we treat explainability as an add-on, a layer that comes only after technical development, the risk is that developers might feel as though ethical concerns belong to someone else, typically downstream users or compliance officers (Leonelli, 2016). Instead, ethical reasoning should be baked into data science from the start (Leonelli et al., 2017), not as a checklist or compliance form, but as part of what it means to develop a good model. In this sense, good epistemic practices are ethical practices. And philosophy of science can help articulate what “good” looks like—not in the abstract, but in the messy, applied contexts where these systems now operate.
References:
Al-Antari, M. A. (2023). Artificial intelligence for medical diagnostics—Existing and future AI technology! Diagnostics, 13(4), 688.
Casey, B., Farhangi, A., & Vogl, R. (2019). Rethinking explainable machines: The GDPR’s right to explanation debate and the rise of algorithmic audits in enterprise. Berkeley Technology Law Journal, 34(1), 143.
Deshpande, K. V., Pan, S., & Foulds, J. R. (2020, July). Mitigating demographic bias in AI-based resume filtering. In Adjunct publication of the 28th ACM conference on user modeling, adaptation and personalization (pp. 268–275).
Dijkstra, E. W. (1984). The threats to computing science. https://hdl.handle.net/2152/129821 (Accessed 4 June 2025).
Fleisher, W. (2022). Understanding, Idealization, and Explainable AI. Episteme, 19(4), 534–560.
Leonelli, S. (2009). The impure nature of biological knowledge and the practice of understanding. Philosophical Perspectives on Scientific Understanding, 189–209.
Leonelli, S., Rappert, B., & Davies, G. (2017). Data Shadows: Knowledge, Openness, and Absence. Science, Technology, & Human Values, 42(2), 191–202.
Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. Proceedings of the Conference on Fairness, Accountability, and Transparency, 279–288.
Purificato, E., Lorenzo, F., Fallucchi, F., & De Luca, E. W. (2023). The use of responsible artificial intelligence techniques in the context of loan approval processes. International Journal of Human–Computer Interaction, 39(7), 1543–1562.
