The United States & Japan: Allied Against Disinformation #1 — Definitions & Defenses
20 April 2023
The US, Japan, and their partners in the Indo-Pacific face threats not only from conventional armies but also from virtual ones that promote disinformation online, sowing doubt, distrust, and a lack of confidence in the alliance system. Join Pacific Forum and the Economic Security Program at RCAST, the University of Tokyo, on April 20 at 2 pm (Hawaii) / April 21 at 9 am (Tokyo) for a discussion of what disinformation is, how it affects modern society, and the tools we have to fight it.
Speaker:
Christopher Paul, RAND Corporation
Moderator:
Crystal Pryor, Pacific Forum
Key Findings
Defining & Distinguishing Disinformation
Social media and modern internet culture have made spreading falsehoods, whether intentional (disinformation) or unintentional (misinformation), easier than ever before. The process can be understood as megaphone-shaped, with three distinct stages: production, redistribution, and consumption. Disinformation begins with production, in which disingenuous actors, such as government agents in Russia, China, or North Korea, advance an agenda by manipulating information to fit a persuasive narrative. They may do so through wholly or partially fabricated media, selective use of facts, deliberate obfuscation, appeals to authority, or rhetorical fallacies such as false equivalence and strawmanning. Redistribution is the further proliferation of this disinformation by "bot" accounts and compromised actors who share the material. Finally, consumption is the end stage, in which a falsehood reaches its target audience, often influencing opinions and shifting sentiment on an issue.
Frameworks for Countering Disinformation
Humans are demonstrably poor at distinguishing truth from falsehood, so it is imperative to equip people in free and open internet societies like the US and Japan with frameworks for dismantling pernicious social media campaigns where possible. Governments, platforms, and civil society each have a responsibility to combat disinformation, but a balance among all three must be maintained to protect the rights of civil society and preserve a free media industry. While governments may consider regulating the production and distribution of disinformation on social media, they are likely too slow and clumsy to do so effectively in an ever-evolving environment. The threat of regulation may be a more productive force than regulation itself, by incentivizing social media platforms to self-regulate for their users' benefit. Actions platforms may take against disinformation include revising and enforcing terms of service, fact-checking efforts, warning labels, algorithm reworks, and investment in moderation. Civil society, for its part, can combat disinformation by practicing responsible social media habits, reporting known bad actors, and promoting credible voices.
Artificial Intelligence as a Double-Edged Sword
Japan’s 2022 National Security Strategy included both the intent and the structure for stronger disinformation countermeasures, and the government recently announced a new secretariat dedicated to this end. Emerging AI capabilities have supercharged the ability of those on the disinformation “offensive” to produce at volume. While the defense is stuck playing catch-up, AI may also prove a powerful tool for platforms: rapid automated responses, such as consumer warnings or flagging mechanisms, may help moderation systems stop the spread of bad information early. Generative Pre-trained Transformer (GPT)-type AI tools have incredible potential but still have shortfalls. First, these tools can themselves produce false information, and they assert their falsehoods with undue confidence. Second is the black box problem: though an AI tool may propose a solution, it cannot fully explain how or why it concluded that the solution was appropriate or best. While AI will almost certainly magnify both the good and the bad efforts in combating falsehoods online, it can make a given claim only so much more persuasive. Disinformation can challenge a weakly held belief or exacerbate an already strongly held one, but it is unlikely to dissuade someone from a strongly held belief.
This document was prepared by Brandt Mabuni. For more information, please contact Rob York ([email protected]), Director for Regional Affairs at Pacific Forum. These preliminary findings provide a general summary of the discussion. This is not a consensus document. The views expressed are those of the speaker and do not necessarily reflect the views of all participants. The speaker has approved this summation of their presentation.
Christopher Paul is a senior social scientist at the RAND Corporation and a professor at the Pardee RAND Graduate School. Paul provides research support related to operations in the information environment, information warfare, the information joint/warfighting function, counterpropaganda, cyber operations, and related policy to a range of Department of Defense and U.S. Government offices, organizations, and commands.
Crystal Pryor is Director of Research at the Pacific Forum. Crystal works on nonproliferation in Asia while developing research agendas on technology policy and Women, Peace, and Security. She has researched U.S.-Japan outer space security cooperation, strategic trade control implementation in advanced countries, and Japan’s defense industry and arms exports. Crystal received her doctorate in political science from the University of Washington, master’s degrees in political science from the University of Washington and the University of Tokyo, and a bachelor’s degree in international relations with honors from Brown University.