Existing approaches to ‘algorithmic accountability’, such as transparency, provide an important baseline but are insufficient to address the (potential) harm to human rights caused by the use of algorithms in decision-making. To address this impact effectively, we argue that a framework is needed that sets out a shared understanding and means of assessing harm; is capable of dealing with multiple actors and different forms of responsibility; and applies across the full algorithmic life cycle, from conception to deployment. Although generally overlooked in debates on algorithmic accountability, international human rights law, we suggest, already provides this framework. We apply it to illustrate the effect it has on the choice to employ algorithms in decision-making in the first place and on the safeguards required. While our analysis indicates that in some circumstances the use of algorithms may be restricted, we argue that these findings are not ‘anti-innovation’ but rather appropriate checks and balances to ensure that algorithms contribute to society while safeguarding against risks.