As AI systems proliferate, questions about their emergent capacities focus on intelligence, sentience, and control. But the issue of agency, understood as the capacity for action with consequences, raises design questions of its own. Agency takes many forms, including mechanical, incidental, probabilistic, and intentional, but it is largely assessed on the basis of behaviors. The challenge of designing agency can be approached by asking what must be programmed into a system to give it the capacity for action, yet the distinction between the appearance of agency (simulacral) and actual agency (intentional) is difficult to test. This paper discusses connections between agency and debates in physics about determinism and probability as they bear on the question of human capacities for intentional action, and it concludes with a discussion of the difficulty of conceptualizing agency without falling into Romantic models of disruptive behavior. No easy answers emerge regarding the problem of designing intentional agency in a way that can be either tested or constrained.