Session 3: Building Interoperability and Security in Data Flow and Artificial Intelligence
Key Findings
On June 22, 2023 (US), with support from the US Embassy Tokyo, and in partnership with the Research Center for Advanced Science and Technology Open Laboratory for Emergence Strategies (ROLES) at the University of Tokyo, the Pacific Forum hosted the third session of the US-Japan Cyber Forum, “Building Interoperability and Security in Data Flow and Artificial Intelligence.” Over 27 participants from government, the private sector, academia, and non-governmental organizations joined the webinar.
This discussion featured expert remarks from Dr. Hiroki Habuka, Research Professor at Kyoto University, Ms. Ngor Luong, Research Analyst at the Center for Security and Emerging Technology (CSET), and Mr. Mark Manantan, Director of Cybersecurity and Critical Technologies at Pacific Forum. The expert remarks and accompanying panel Q&A were moderated by Dr. Ryo Hinata-Yamaguchi, Project Assistant Professor at the Research Center for Advanced Science and Technology (RCAST) at the University of Tokyo.
Emerging regulatory approaches to Artificial Intelligence (AI)
With the release of large language models like ChatGPT, AI is generating global attention across policy, industry, and research communities. Because of its wide-reaching implications, many governments are evaluating its potential risks, prompting intense debate over its regulation. At the G7 Summit, AI governance figured prominently on the agenda. The Hiroshima process has sought to balance AI regulation with innovation, and to manage both the benefits and the vulnerabilities inherent to AI-enabled tools and platforms. Building on Japan’s stewardship of the Hiroshima process, G7 leaders have recognized the value of building interoperability among like-minded countries by establishing institutional partnerships based on shared technical standards and normative agreements, and by strengthening collaborative frameworks like the ‘Data Free Flow with Trust’ (DFFT).
A major focus of the growing AI governance debate is the unintended consequences of AI’s dual-use applications. The US is wary of AI’s capabilities in the context of autonomous weapons systems and cyberwarfare, while Japan strikes a more optimistic tone. Beyond military applications, Japan views AI through the prism of its socio-economic concerns, such as an aging population and declining human capital. A doomsday perspective on AI should therefore be tempered with the aim of maximizing its positive impact on society. Because AI advances are unfolding rapidly in a complex multi-stakeholder environment, governing the technology will require agility, flexibility, and a willingness among policymakers to iterate.
US-Japan alignment and synergies in AI development and innovation
The robust US-Japan alliance provides the foundation for effective cooperation in setting AI guardrails while encouraging innovation. Both nations are at the forefront of academic research and commercial innovation in AI, collaborating frequently through joint papers, projects, ventures, and cross-investment. While Japan is a global leader in AI simulation and human-machine interaction, the US excels in machine learning and data science. These comparative advantages can help accelerate AI development in a complementary manner.
Although the two liberal democracies share a mutual vision for maintaining a ‘free and open Indo-Pacific,’ there is some variation in how they incorporate AI-enabled technologies (robots, for instance) because of distinct philosophies rooted in their socio-cultural values. Nevertheless, their complementarity on technology policy issues overrides any fundamental differences. The two allies agree on critical issues such as cross-border data flow, through their support of the DFFT, and the importance of transparency and open communication in AI development. Areas of practical and deepening US-Japan AI cooperation abound, such as alignment of institutional stakeholders, agreement on “privacy-preserving machine learning,” greater capital coordination, and coordinated outbound investment screening frameworks.
Beyond the bilateral alliance, Tokyo and Washington have also exhibited a growing appetite for synchronizing AI policy and capacity across multilateral groupings like the G20, minilateral configurations like the Quad, and with technologically advanced countries like Singapore and South Korea.
Global competition and considerations for AI standards and safety
While the US and Japan are maximizing various avenues to lead constructive regional initiatives toward AI regulation, key challenges stemming from the ongoing AI arms race with strategic competitors like China and Russia persist. The Biden administration’s decision to curb China’s access to highly sensitive semiconductors through technology controls (export controls and foreign investment screening) was a direct response to the People’s Liberation Army’s advances in military applications of AI. The escalating tempo of technological competition, fueled in large part by economic security concerns, has ramifications for international collaboration in science, technology, and innovation: it undermines trust among the research communities in China, Japan, the US, and elsewhere. The resulting lack of transparency may degrade AI standards and security, leading to the proliferation of divergent AI systems that introduce new types of risks and potential harms.
Japan’s convening power in the Indo-Pacific could prevent the further erosion of diplomatic and communication channels among science and technology communities. Some segments of the Japanese research community still collaborate with Chinese experts and institutions. While this may prove increasingly difficult under intensifying US-China competition, Tokyo is uniquely positioned to facilitate AI trust-building exchanges and to potentially coordinate a more harmonious regulatory dialogue and approach to AI standards and safety between the US and China.
As the US and China vie for technological leadership, the emerging digital economy in the Global South may serve as the arena for competition, especially over the rules and norms that underpin AI innovation and development. Despite its unique challenges, Southeast Asia’s vibrant and emerging tech industry can play a key role in shaping the future of global AI governance. To this end, China has positioned itself as a viable partner through its ‘Digital Silk Road’ initiative to support Southeast Asia’s digital transformation strategies. Through Chinese state-owned enterprises and private tech firms, Beijing is facilitating knowledge transfer to help establish new standards and norms that are incompatible with the US and Western model of open, multistakeholder tech governance.
Rather than reacting reflexively to Chinese initiatives, Japan and the US must adopt an agile mindset that demonstrates a thoughtful, long-term commitment to the region’s economic development. Whether through the Indo-Pacific Economic Framework, the Comprehensive and Progressive Agreement for Trans-Pacific Partnership, or the Regional Comprehensive Economic Partnership, the US and Japan must temper their growing emphasis on economic security with digital economic policies and strategies that align with the region’s growth priorities, while promoting risk-based and human-centric approaches to AI innovation and regulation.
This document was prepared by Brandt Mahbuni and Mark Manantan. For more information, please contact Mark Manantan ([email protected]), Director of Cybersecurity and Critical Technologies at Pacific Forum. These preliminary findings provide a general summary of the discussion. This is not a consensus document.