In this paper, we provide a systematic review of existing artificial intelligence (AI) regulations in Europe, the United States, and Canada. We build on a qualitative analysis of 129 AI regulations (both enacted and unenacted) to identify patterns in regulatory strategies and in AI transparency requirements. Based on this sample, we suggest that there are three main regulatory strategies for AI: AI-focused overhauls of existing regulation, the introduction of novel AI regulation, and the omnibus approach. We argue that although these types emerge as distinct strategies, their boundaries are porous because the AI regulation landscape is rapidly evolving. We find that, across our sample, AI transparency is effectively treated as a central mechanism for meaningful mitigation of potential AI harms. We therefore focus on AI transparency mandates in our analysis and identify six AI transparency patterns: human in the loop, assessments, audits, disclosures, inventories, and red teaming. We contend that this qualitative analysis of AI regulations and AI transparency patterns provides a much-needed bridge between the policy discourse on AI, which is all too often bound up in highly detailed legal discussions, and applied sociotechnical research on AI fairness, accountability, and transparency.