This opinion piece has been cleared by the DHS Office of Public Affairs.
Artificial intelligence is not merely a technological revolution; it is the central arena of 21st-century strategic competition. The race to develop and deploy advanced AI will shape the global balance of power, economic prosperity, and the future of governance itself. As the United States and its allies contend with the People’s Republic of China’s rapid advancements and state-driven AI ambitions, the regulatory frameworks governing this technology become instruments of geostrategy. Now, as the European Union prepares to finalize its critical AI Code of Practice for General-Purpose AI (GPAI) models, expected around May 2025, a crucial juncture emerges. The current trajectory risks embedding a critical geopolitical blind spot into the heart of AI governance, potentially ceding strategic ground to Beijing.
The landmark EU AI Act sets a global precedent, but its effectiveness hinges on implementation details, particularly those being hammered out in the AI Code of Practice. Worryingly, the current draft includes provisions leaning towards exemptions or significantly lighter scrutiny for open-weight AI models not initially deemed to pose “systemic risks,” which could inadvertently favor models originating from China. While many cutting-edge Western models are either closed-source or powerful enough to trigger systemic risk thresholds, China has strategically invested in and promoted a plethora of open-weight models. This isn’t merely a technical preference; it appears calculated to navigate anticipated regulatory landscapes like the EU’s, potentially allowing PRC-linked models to proliferate with fewer compliance burdens than their Western counterparts. The result? A fragmented transatlantic approach and a regulatory environment that could ironically stifle Western innovation while facilitating the spread of models tied to a strategic competitor.
This highlights a critical misunderstanding of the contemporary AI landscape: treating open source solely as a benign force for collaboration ignores its potent role as a vector for geopolitical influence. Beijing understands this well; by promoting the widespread adoption of its capable and accessible open-weight AI models, especially across the developing world, China aims to embed its platforms as the foundation for global AI applications. This aligns with its broader “Digital Silk Road” strategy and enables China to cultivate technological dependencies, boost its soft power, and normalize AI systems developed under its distinct, state-centric approach. Furthermore, when the global open-source community, including developers in the US and Europe, builds upon, fine-tunes, and validates these PRC base models, it risks effectively subsidizing Beijing’s AI research and development and accelerating its progress in the strategic AI race. A regulatory framework that fails to recognize and address this strategic dimension treats a potential Trojan horse as a gift.
Proponents of the current approach might point to the AI Act’s built-in safeguards as sufficient to prevent this outcome. Indeed, Article 51 allows models below the compute threshold to still be deemed capable of systemic risk, based on a technical evaluation of “high impact capabilities” or via a Commission decision considering the broader criteria outlined in Annex XIII. The Act textually provides avenues for comprehensive assessment, but the crucial question lies in implementation and political will. In part, this is simply a bandwidth problem: assessing complex, emergent risks such as sophisticated tool use, potential misuse in generating advanced disinformation, or alignment with authoritarian surveillance goals across the far more numerous open-weight GPAI models presents immense practical challenges. The EU will need adequate resources, expertise, and a geopolitical mandate both to detail sufficiently robust methods for technical evaluations and to apply the discretionary criteria to models originating in the PRC, where state influence is opaque and intentions require deep scrutiny. The danger is a de facto loophole, in which comprehensive assessment exists on paper but falters in practice against the scale and speed of AI deployment by strategic rivals and the sheer quantity of open-weight models entering the market.
This is not merely an economic or governance challenge; it is a pressing national security imperative. We must move beyond generic discussions of AI safety to confront the specific threats posed by models potentially weaponized by state actors. The Volt Typhoon campaign, in which Chinese state actors prepositioned code within US critical infrastructure, provides a concerning analogy. Could widely adopted open-weight AI models originating from PRC entities contain similarly hidden functionalities designed for activation during a crisis? The risks span offensive cyber operations, manipulation of critical infrastructure, accelerated development of novel threats, and the industrial-scale generation of tailored disinformation to destabilize societies. A “country-blind” risk assessment framework for AI ignores the fundamental difference between models developed within transparent, democratic systems and those originating from strategic competitors controlled or influenced by authoritarian states known for specific adversarial actions. This risk has already been made evident in other sectors, from concerns about untrusted vendors in maritime critical infrastructure to the efforts to review, rip, and replace telecom equipment. Geopolitically aware risk management, incorporating scrutiny based on provenance, is essential.
Failure to address this challenge head-on risks fracturing the like-minded approach to AI governance precisely when unity is most needed. Divergent regulatory pathways between the US and EU, in which one jurisdiction applies stricter rules than the other to comparable models based on origin or design, create seams that Beijing can try to exploit. This undermines efforts to establish global AI norms grounded in shared democratic values and allows China’s vision of state-controlled, surveillance-enabling AI to gain ground globally. A fragmented West strengthens China’s hand, allowing it to push its technology and standards unopposed in key international forums and markets. It also empowers China’s efforts to shape global digital infrastructure adoption and, in turn, to facilitate the cyber and electromagnetic space warfare concepts, which do not distinguish between wartime and peacetime, laid out in China’s Science of Military Strategy. To counter this, like-minded partners should strengthen, promote, and make accessible proprietary AI models developed by the US and its allies, ensuring that value-aligned technologies, secure and clearly subject to the rule of law, underpin critical applications rather than opaque alternatives.
The finalization of the EU’s GPAI Code of Practice in the coming weeks is not a technical footnote; it is a geostrategic decision point. The window to ensure its robustness and geopolitical awareness is closing. The EU must ensure the Code mandates rigorous, practical assessment methodologies that explicitly account for origin-based risks and the potential for strategic manipulation via open-source vectors. Concurrently, the US and EU must urgently deepen their coordination to present a united front in managing AI risks emanating from strategic competitors. The future of AI, and arguably the future global order, demands a clear-eyed strategy that acknowledges the complex realities of technological competition—including the weaponization of openness itself. The West must adapt its playbook, or risk being outmaneuvered.
PacNet commentaries and responses represent the views of the respective authors. Alternative viewpoints are always welcomed and encouraged.
James Paisley ([email protected]), a member of the Pacific Forum’s Young Leaders Program, is a cyber policy and geopolitical risk professional. A graduate of Columbia University, he has previously worked on international cyber policy at the Department of Homeland Security and in information technology management and private-sector intelligence. This article was a personal endeavor and does not represent the views of the US Government.