Publications

2024

  • The field of explainable AI: machines to explain machines?
    • Berkouk Nicolas
    • Pialat Romain
    • Arfaoui Mehdi
    • François Pierre
    • Barry Laurence
    , 2024. Deep learning techniques have undergone massive development since 2012, while a mathematical understanding of their learning processes still remains out of reach. The growing success of systems based on such techniques in critical fields (medicine, military, public services) urges policy makers as well as economic actors to interpret these systems’ operations and provide means for accountability of their outcomes. In 2016, DARPA’s Explainable AI program was followed by a sudden surge of scientific publications on “Explainable AI” (xAI). With a majority of publications coming from computer science, this literature generally frames xAI as a technical problem rather than an epistemological and political one. Exploring the tension between market strategies, institutional demand for explanation, and the lack of a mathematical resolution, our presentation proposes a critical typology of xAI techniques. We first systematically categorized 12,000+ papers in the xAI research field, then proceeded to a content analysis of a representatively diversified sample. As a first result, we show that xAI methods are considerably diverse. We summarize this diversity in a three-dimensional typology: a technical dimension (what kind of calculation is used?), an empirical dimension (what is being looked at?), and an ontological dimension (what makes the explanation right?). The heterogeneity of these techniques not only illustrates disciplinary specificities, but also points to the opportunistic methodologies developed by AI practitioners in response to this tension. Future work aims to identify the social conditions that generate the diversity of these techniques, and to help regulators navigate them.
  • Heterogeneity without controversy: the field of xAI as the encounter between market strategies and institutional demands for deep learning accountability
    • Berkouk Nicolas
    • Pialat Romain
    • Arfaoui Mehdi
    • François Pierre
    • Barry Laurence
    , 2024. The growing success of deep-learning-based systems in critical fields (medicine, military, public services) gives rise to serious concerns about the interpretability and accountability of their outcomes. The research production on “Explainable AI” (xAI) should therefore raise considerable scientific controversy and social debate. In contrast, this communication emphasizes the near-absence of controversy emerging from the development of the xAI literature. Even though DARPA’s “Explainable AI program” in 2016 was followed by a sudden surge of scientific publications on xAI, those generally framed xAI as a technical problem rather than an epistemological and political one. Exploring this paradox between an abundant literature on xAI and an absence of controversy, we intend to open the black box of self-appointed AI explainers. Our presentation thus urges a renewal of STS methodologies to establish a critical typology of xAI techniques. Our methodology was twofold: we first systematically categorized 12,000+ papers in the xAI research field, then proceeded to an analysis of the mathematical content of a representatively diversified sample. As a first result, we show that xAI methods are considerably diverse. We summarize this diversity in a three-dimensional typology: a technical dimension (what kind of calculation is used?), an empirical dimension (what is being looked at?), and an ontological dimension (what makes the explanation right?). The heterogeneity of these techniques not only illustrates disciplinary specificities, but also shows that the xAI research field progresses rather autonomously and opportunistically, with the primary objectives of fueling market strategies and answering the institutional demand for explanation.
  • L'assurance dans la couverture et la prévention des catastrophes [Insurance in the coverage and prevention of disasters]
    • François Pierre
    • Barry Laurence
    , 2024.