A Machine Learning Approach Could Help Counter Disinformation
2020-06-25

Disinformation has become a central feature of the COVID-19 crisis. According to a recent poll (PDF), false or misleading information about the pandemic reaches close to half of all online news consumers in the U.K. Because such malign information and high-tech “deepfake” imagery can spread so quickly online, they pose a risk to democratic societies worldwide by increasing public mistrust in governments and public authorities, a phenomenon referred to as “truth decay.” New research, however, highlights ways to detect and dispel disinformation online.

There are several factors that may account for the rapid spread of disinformation during the COVID-19 pandemic. Given the global nature of the pandemic, more groups are using disinformation to further their agendas. Advances in machine learning also contribute to the problem, as disinformation campaigns powered by artificial intelligence extend the reach of malign information online and on social media platforms.

Research from Carnegie Mellon University suggests that social media “bots” may account for 45 to 60 percent of all reviewed Twitter activity related to COVID-19, in contrast to the 10 to 20 percent of Twitter activity for other events such as U.S. elections and natural disasters. These bots can automatically generate messages, advocate ideas, follow other users, and use fake accounts to gain followers themselves.
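
To make these behaviors concrete, the sketch below shows one hypothetical way an analyst might score accounts for bot-like activity. The Account fields, thresholds, and weights are all illustrative assumptions, not the method used in the Carnegie Mellon research.

```python
# A minimal, hypothetical sketch of heuristic bot scoring. The features,
# thresholds, and weights are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float       # average posting rate
    follower_ratio: float       # followers divided by accounts followed
    account_age_days: int
    duplicate_text_rate: float  # share of near-identical tweets

def bot_score(acct: Account) -> float:
    """Return a 0-1 score; higher values suggest automated behavior."""
    score = 0.0
    if acct.tweets_per_day > 50:        # unusually high posting volume
        score += 0.3
    if acct.follower_ratio < 0.1:       # follows many, followed by few
        score += 0.2
    if acct.account_age_days < 30:      # recently created account
        score += 0.2
    if acct.duplicate_text_rate > 0.5:  # repeats near-identical messages
        score += 0.3
    return min(score, 1.0)

# Example: a young, high-volume account that repeats itself scores 1.0.
print(bot_score(Account(120, 0.05, 12, 0.7)))
```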

The university's research identified (PDF) more than 100 inaccurate COVID-19 theories, including misleading reporting on prevention, cures, and emergency measures implemented by state and local authorities. In this context, disinformation can have harmful effects on individuals, communities, society, and democratic governance. False or misleading claims about the coronavirus may encourage people to take greater risks, endangering their own health and that of others, for example by consuming harmful substances or disregarding social distancing guidelines (PDF).

Disinformation may also be used to target vulnerable populations including migrants and refugees, heightening the risk of xenophobic violence and hate crimes.


Public and private sector groups, as well as civil society organizations, have already introduced various countermeasures to tackle online disinformation. These include initiatives to moderate content and the use of social media algorithms to identify disinformation. Online media-literacy programs designed to enhance the ability of online users to recognize false or misleading information can also help strengthen public resilience to disinformation.

The Facebook-owned company WhatsApp has now also imposed new limits on message forwarding to tackle the spread of false information over its messaging channels.

The findings of a new RAND Europe study could now help strengthen these efforts further. Commissioned by the U.K. Defence Science and Technology Laboratory, or DSTL, the study shows how machine-learning models can be used to detect malicious actors online, such as Russian-sponsored trolls.

The Kremlin's disinformation tactics have continued apace during COVID-19 and include coordinated narratives with China claiming that the coronavirus was caused by migrants or originated as a biological weapon developed in a U.S. military lab.

Disinformation has also included false claims regarding Russian “humanitarian aid” to countries including the United States and Italy. These efforts all act to undermine the resilience, recovery, and crisis responses of national governments.

In the study for DSTL, researchers drew on Twitter data from the 2016 U.S. presidential election, and they used a computer model to distinguish between the narratives of Russian “trolls” and authentic political supporters.

The model was able to successfully identify the trolls by detecting the manipulative “us versus them” language used to target Democratic and Republican partisans.
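
As a rough illustration of this general approach, and not the study's actual model, a supervised text classifier can be trained to separate troll-style “us versus them” phrasing from authentic partisan speech. The tiny labeled dataset below is invented for demonstration purposes:

```python
# A minimal sketch of troll-language classification: TF-IDF features over
# words and bigrams feed a logistic regression. The training examples and
# labels are made up for illustration; the RAND study's actual features
# and model are described in the report itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled tweets: 1 = troll, 0 = authentic supporter.
tweets = [
    "they want to destroy us, real patriots must fight back",
    "them and their lies will never silence us",
    "excited to volunteer for the campaign this weekend",
    "great turnout at the rally today, thanks everyone",
]
labels = [1, 1, 0, 0]

# Word and bigram frequencies pick up repeated "us versus them" phrasing.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(),
)
model.fit(tweets, labels)

# Score a new message: probability that it reads like troll-style language.
print(model.predict_proba(["it is us against them now"])[0][1])
```

In practice, such a model would be trained on large volumes of labeled platform data, with richer linguistic features than simple word and bigram frequencies.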

The analysis explains how specific language tactics can be used to identify trolls in real time, while also highlighting whom these manipulation tactics target. Trolls stoke discord online by using repeated linguistic patterns to spotlight emotive issues for each side.

To raise awareness and build resilience to these tactics, government bodies could make these patterns visible to members of targeted groups so that they can recognize social media manipulation techniques.

The model's community detection, text analysis, machine learning, and visualization components, built by examining how trolls targeted online debates around the 2016 U.S. presidential election, could be reconfigured in the future to create a robust, general-purpose social media monitoring tool. Such a tool could help focus public sector efforts to counter online disinformation in relation to COVID-19, among other issues of public importance.
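
For instance, the community-detection component of such a monitoring tool might cluster a retweet graph to reveal which groups a campaign engages. The sketch below is a minimal illustration under that assumption; the edge list is invented, and a real tool would build the graph from platform data and combine the clusters with the text classifier shown earlier:

```python
# A hedged sketch of community detection on a retweet graph, so analysts
# can see which clusters of accounts a campaign targets. The edges here
# are invented for illustration.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical retweet edges: (retweeter, original_author).
edges = [
    ("a", "b"), ("b", "c"), ("a", "c"),  # one tight cluster of accounts
    ("x", "y"), ("y", "z"), ("x", "z"),  # a second tight cluster
    ("c", "x"),                          # a bridge between the two
]
G = nx.Graph(edges)

# Greedy modularity maximization splits the graph into dense communities.
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(community)}")
```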

Understanding how online actors can target countries' vulnerabilities could serve as a first step toward building wider resilience to disinformation. Further developing such approaches to defend against these manipulation tactics could be instrumental in fighting disinformation at scale, a problem that has become central to the COVID-19 crisis.


Kate Cox is a senior analyst and Linda Slapakova is an analyst in the defense, security, and infrastructure group at RAND Europe. William Marcellino is a senior behavioral scientist at the RAND Corporation.

This commentary originally appeared on C4ISRNET on June 25, 2020. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.
