One driver of the push for open data as a form of corruption control is the belief that making government operations more transparent makes it possible to hold public officials accountable for how public resources are spent. These large datasets would then be open to public scrutiny and analysis, resulting in lower levels of corruption. Although data quality has been widely studied and many advances have been made, this work has not been extensively applied to open data, and some aspects of data quality have received more attention than others. One key aspect, accuracy, appears to have been overlooked. This gap motivated our inquiry: how is accurate open data produced, and how might breakdowns in this process introduce opportunities for corruption? We study a government agency within the Brazilian Federal Government to understand in what ways accuracy is compromised. Adopting a distributed cognition (DCog) theoretical framework, we found that the production of open data is not a neutral activity; rather, it is a distributed process performed by individuals and artifacts. This distributed cognitive process creates opportunities for data to be concealed and misrepresented. We generated two models mapping data production that, in combination, provide insight into how cognitive processes are distributed; how data flow and are transformed, stored, and processed; and where opportunities for data inaccuracies and misrepresentation arise. The results have the potential to aid policymakers in improving data accuracy.