AI (6)
Re-Engineering the Concept of Understanding for AI
With Pierre Beckmann.
Argues that the concept of understanding needs to be re-engineered for artificial cognition in a way that is empirically informed by mechanistic interpretability research and theoretically informed by a grasp of the functions of the concept.
AI, conceptual engineering, mechanistic interpretability, understanding, conceptual change, functions
PDF coming soon
The Invented Inventor: Adapting Intellectual Property to Generative AI
Under review
As AI increasingly drives discovery, the concept of the inventor comes under severe strain. Recent judicial decisions, such as the Swiss Federal Administrative Court’s 2025 DABUS ruling, expose a deepening tension: courts demand intellectual creation by a natural person even as human contributions to AI-assisted discovery become increasingly nominal. This paper approaches that tension from the standpoint of political philosophy rather than jurisprudence: the strain AI places on the concept of inventorship is too fundamental to be resolved by interpretative methods that take existing conceptual architectures for granted. Inspired by Hume’s genealogy of property, the paper reconstructs the historical “need matrices” that forged the concept of inventorship, tracing its evolution from Venetian guild economics through Romantic genius ideology to corporate R&D. This genealogy reveals the concept to be an overburdened bundle serving four social functions: incentivising innovation, disseminating knowledge, legitimating monopolies, and resolving priority disputes. It also clarifies the mismatch between the concept and the emerging realities of AI-driven discovery. To resolve this mismatch, the paper argues, we must disaggregate the concept of inventorship and develop specialised conceptual resources for each of these functions. If we invented the notion of the inventor to perform certain functions, we can reinvent it to perform them better.
intellectual property rights, patents, inventor, genealogy, AI, conceptual adaptation
PDF coming soon
Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains
Philosophy & Technology 38 (34): 1–27. 2025. doi:10.1007/s13347-025-00864-x
Argues that the asystematicity of normative domains, stemming from the plurality, incompatibility, and incommensurability of values, challenges AI’s ability to comprehensively model these domains and underscores the indispensable role of human agency in practical deliberation.
AI, asystematicity, LLM, philosophy of technology, normativity, systematicity
Download PDF
Dropping Anchor in Rough Seas: Co-Reasoning with Personalized AI Advisors and the Liberalism of Fear
Philosophy & Technology 38 (170): 1–7. 2025. Invited commentary. doi:10.1007/s13347-025-01006-z
A political critique of personalized AI advisors through the lens of the liberalism of fear. Highlights the asymmetries of power involved and argues that personalization risks stabilizing domination by translating structural injustices into individualized aspirational challenges. The paper then proposes three political constraints on personalized AI: the priority of non-domination, the public contestability of operative norms, and the recognition of non-personalizable civic burdens.
AI, AI ethics, deliberation, liberalism, liberalism of fear, non-domination
Download PDF
Explainability through Systematicity: The Hard Systematicity Challenge for Artificial Intelligence
Minds and Machines 35 (35): 1–39. 2025. doi:10.1007/s11023-025-09738-9
Offers a framework for thinking about “the systematicity of thought” that distinguishes four senses of the phrase, defuses the alleged tension between systematicity and connectionism that Fodor and Pylyshyn influentially diagnosed, and identifies a “hard” form of the systematicity challenge that continues to defy connectionist models.
AI, explainable AI, philosophy of AI, rationality, systematicity, conceptual change
Download PDF
On the Fundamental Limitations of AI Moral Advisors
Philosophy & Technology 38 (71): 1–4. 2025. Invited commentary. doi:10.1007/s13347-025-00896-3
Argues that while the asystematicity of truth militates against the personalization of AI moral advisors, it also imposes limitations on generalist AI moral advisors.
AI, AI ethics, deliberation, asystematicity, LLM, normativity
Download PDF