Discussions of artificial moral agency often maintain that current AI systems
cannot be morally responsible because they lack capacities associated with
responsible agency, such as autonomy, normative self-governance, and
meaningful control. Although rarely framed in terms of the Principle of
Alternative Possibilities (PAP), this view typically assumes that responsibility
requires access to genuine alternatives. I argue that we lack the epistemic
resources to determine whether any agent, human or artificial, possesses such
alternatives. On an epistemic interpretation of Frankfurt-style cases, the central
issue is not whether PAP is false, but whether we can know whether an agent
could have done otherwise. This uncertainty extends even to paradigmatic
human agents. If we treat humans as responsible despite this opacity,
consistency prevents us from excluding AI systems from responsibility on the
same basis. I call this
epistemic parity. Responsibility practices should be guided not by unverifiable
metaphysical assumptions, but by what can be known.