Big Tech companies such as Meta, the owner of Facebook, are increasingly accused of enabling human rights violations. The proliferation of toxic speech on their digital platforms has been in the background of recent episodes of mass atrocity, most saliently in Myanmar and Ethiopia. The involvement of Big Tech companies in mass atrocity raises multiple normative and conceptual challenges. One is to properly conceptualize Meta’s responsibility for the circulation of toxic speech. On one view, endorsed by the corporation itself, Meta can be absolved of any significant share of responsibility for these atrocities because toxic speech is the speech of some (rogue) users, hosted but neither created nor endorsed by the company; at most, Meta is responsible for failing to anticipate and swiftly remove that speech. I will argue that this view is misleading, as it overlooks the underlying forces that shape toxic speech. Meta’s business model relies on what one might call the algorithmic capture of attention, which it achieves by manipulating its users and by creating an environment in which the manipulative practices of some users thrive at the expense of others. This fact alone makes the company a co-creator of toxic speech rather than a mere conduit for the toxic speech of others. As a result, Meta bears significant causal responsibility and sufficient moral responsibility for the dissemination of toxic speech: responsibility that justifies its inclusion in transitional justice processes and grounds its moral obligation to act in ways that advance them.