There is increasing speculation within military and policy circles that the future of armed conflict is likely to include extensive deployment of robots designed to identify targets and destroy them without the direct oversight of a human operator. My aim in this paper is twofold. First, I will argue that the ethical case for allowing autonomous targeting, at least in specific restricted domains, is stronger than critics have typically acknowledged. Second, I will attempt to defend the intuition that, even so, there is something ethically problematic about such targeting. I argue that an account of the nonconsequentialist foundations of the principle of distinction suggests that the use of autonomous weapon systems (AWS) is unethical by virtue of failing to show appropriate respect for the humanity of our enemies. However, the success of the strongest form of this argument depends upon understanding the robot itself as doing the killing. To the extent that we believe that, on the contrary, AWS are only a means used by combatants to kill, the idea that the use of AWS fails to respect the humanity of our enemy will turn upon an account of what is required by respect, which is essentially conventional. Thus, while the theoretical foundations of the idea that AWS are weapons that are "evil in themselves" are weaker than critics have sometimes maintained, they are nonetheless sufficient to demand a prohibition of the development and deployment of such weapons.