Robotic warfare has now become a real prospect. One issue that has generated heated debate is the development of ‘Killer Robots’: weapons that, once programmed, can find and engage a target without supervision by a human operator. From a conceptual perspective, the debate on Killer Robots has been rather confused, not least because it is unclear how the central features of these weapons should be defined. Offering a precise account of the relevant conceptual issues, the article contends that Killer Robots are best understood as executors of targeting decisions made by their human programmers. From a normative perspective, however, the execution of targeting decisions by Killer Robots should worry us. The article argues that what is morally bad about Killer Robots is that they replace human agency in warfare with artificial agency, a development that should be resisted. Finally, the article contends that the issue of agency points to a wider problem in just war theory, namely the role of moral rights in our normative reasoning about armed conflict.