In response to the widespread use of automated decision-making technology, some have proposed a right to explanation. In this article, I draw on insights from philosophical work on explanation to present a series of challenges to this idea, showing that the normative motivations behind access to such explanations demand something difficult, if not impossible, to extract from automated systems. I then consider an alternative, outcomes-focused approach to the normative evaluation of automated decision-making and recommend it as a way to pursue the goods originally associated with explainability.