The development of a policy framework for the sustainable and ethical use of artificial intelligence (AI) techniques has gradually become one of the top policy priorities in developed countries as well as in the international context, including the G7 and G20 and work within the Organisation for Economic Co-operation and Development (OECD), the World Economic Forum, and the International Telecommunication Union. Interestingly, this mounting debate has taken place with very little attention to the definition of what AI is, its phenomenology in the real world, or its expected evolution. Politicians evoke the imminent takeover of smart autonomous robots; entrepreneurs announce the end of mankind, or the achievement of immortality through brain upload; and academics fight over the prospects of Artificial General Intelligence, which appears inevitable to some and preposterous to others. Amid this turmoil, governments have developed the belief that, as both Vladimir Putin and Xi Jinping recently put it, the country that leads in AI will, as a consequence, come to dominate the world. As AI climbs the ranking of government priorities, a digital arms race has also emerged, in particular between the United States and China. This race has far-reaching consequences when it comes to earmarking funds for research, innovation, and investment in AI technologies: gradually, AI becomes an end rather than a means, and military and domestic security applications are given priority over civilian use cases, which may contribute more extensively to social and environmental sustainability. As of today, one could argue that the top priority in US AI policy is countering the rise of China, and vice versa.1