Transparency is widely acknowledged as a core value in the governance of artificial intelligence (AI) technologies. However, scholarship on AI technologies and their regulation often casts this need for transparency in terms of requirements to explain algorithmic outputs and/or decisions produced with the involvement of opaque, black-box AI systems. Our article argues that this discourse has re-interpreted and reshaped transparency in fundamental ways, moving it away from its original meaning. The target of transparency – in most cases, the provider of AI software – determines and shapes what is made visible to the outside world, and there is no external check on the validity and accuracy of such mediated accounts and explanations, opening transparency up to manipulation. Through a theoretically informed and critical analysis of the transparency provisions in the European Union’s AI Act proposal, the article shows that the substitution of transparency with mediated explanations faces important technical constraints, creates opportunities and incentives for both providers and public-sector users of AI systems to adopt opaque practices, and reinforces secrecy requirements that gag accountability in practice. An approach to transparency as disclosure thus becomes necessary, even if not sufficient in and of itself, to ensure the accountable development and use of AI technologies in the European Union. Transparency needs to be reclaimed as a core concept, accountability tailored and reinforced, and the necessity for secrecy re-examined and cordoned off.