Technical Meeting on Public Communication in Emergencies: Tackling Misinformation and Retaining Public Trust in Disruptive Information Environments
Misinformation and disinformation are increasingly frequent disruptive factors in the public communication environment during disasters, emergencies, and crises.
Generative artificial intelligence (GAI) is now widely available, creating novel challenges and opportunities for emergency responders. Artificial intelligence (AI) has the potential both to diminish the disruption caused by dis- and misinformation and to worsen its harms. While GAI’s potential to be misused to produce and distribute disinformation on a mass scale remains under study, its capabilities continue to advance: GAI can currently generate multilingual content easily, affordably, and swiftly in text, audio, image, or video formats. AI-generated text, audio, images, and video are increasingly indistinguishable from human-made content. Experimental research demonstrates that GAI can use publicly available personal information to tailor persuasive arguments in direct dialogue with humans, persuading human interlocutors more effectively than human counterparts do.
In controlled studies — which cannot fully replicate the authoritative, dynamic, and contextual nature of real-world communication by emergency response organizations, regulatory authorities, operators, and other official sources of information — GAI can produce social media disinformation that is more convincing than human-produced disinformation on the same subject.
Enhanced preparedness is needed: persuasive and deceptive AI-generated content is already being used maliciously to manipulate the public, highlighting GAI’s potential to disrupt emergency response.
Artificial content is now used to defraud and extort consumers and businesses; deceive investors; spread harmful comments; confuse the public about the authenticity of political leaders’ photographs, audio recordings, and video statements; and engender false, divisive narratives about the cause of emergencies and the equitability of the response. More broadly, research indicates that human-made — and, in future, AI-enabled — disruptive disinformation can act as a hybrid threat, harming “collective decision-making” processes, which in turn reduces emergency response resilience.
Radiological and nuclear emergencies are perceived by the public as “high risk” events, regardless of the actual hazards. The public’s misperception of radiological and nuclear emergency risks is exacerbated by extensive, publicly accessible misinformation on the internet and social media. The public’s overestimation of radiological or nuclear risks is compounded by feared health consequences, as well as by “societal risks” such as evacuation from contaminated areas. Deployed during the response to a severe emergency, AI-generated deceptive content could trigger significant socio-economic disturbance, potentially creating or exacerbating a transnational challenge.
To help mitigate these harms, multi-sectoral collaboration could be coordinated to monitor, analyse, and counteract AI-generated mis/disinformation, given its disruptive consequences. For instance, research and other activities could improve the detection of deceptive or harmful content concerning radiological and nuclear emergencies; help develop capabilities for evidence-based attribution of disinformation to models and/or perpetrators; and further the deployment of standards and practices that enhance the ability to verify the provenance and legitimacy of emergency preparedness and response information. Cross-sectoral collaboration could support the exchange of knowledge on the risks that AI-generated mis/disinformation poses to emergency preparedness and response measures, while strengthening the public’s resilience against manipulation via disinformation.
In an emergency, the public must be able to rely on credible, actionable, authoritative information provided by emergency response organizations, regulatory organizations, operators, local authorities, and the mass media. In a disruptive communication environment, it may be more difficult to find authoritative information that helps the public avoid or reduce safety risks. Mis/disinformation could encourage both inappropriate public responses and increased sharing of false narratives, which in turn increases public anxiety and confusion, undermining trust in authoritative information and public protective instructions, thereby potentially further increasing public safety hazards. There is also a risk that widely shared AI-generated content could deceive emergency responders into misallocating response resources that are more critically needed elsewhere.
Objectives
The purpose of the Technical Meeting is to share good practices and experience and to gather expert advice, operational knowledge, and research results in order to develop more resilient and effective emergency public communication preparedness and response measures that mitigate the harms caused by human-made and AI-produced disruptive disinformation during both routine operation and emergencies, including support for emergency response organizations, regulatory authorities, and operators in identifying dis/misinformation.
Target Audience
The target audience includes emergency response managers, planners, and public communicators dealing with nuclear and radiological emergencies, as well as emergencies and disasters triggered by other causes, drawn from emergency response organizations, nuclear regulatory agencies, critical infrastructure operators, nuclear facility operators, social media platforms, digital network operators, generative artificial intelligence model providers, and the mass media.
Participation and Registration
All persons wishing to participate in the event must be designated by an IAEA Member State or be members of organizations that have been invited to attend.
In order to be designated by an IAEA Member State or invited organization, participants are requested to submit their application via the InTouch+ platform (https://intouchplus.iaea.org) to the competent national authority (Ministry of Foreign Affairs, Permanent Mission to the IAEA or National Atomic Energy Authority) or organization for onward transmission to the IAEA by 15 March 2025, following the registration procedure in InTouch+.
Papers and Presentations
The IAEA encourages participants to give presentations on the work of their respective institutions that falls under the topics of the Technical Meeting.
Participants who wish to give presentations are requested to submit an abstract of their work. The abstract will be reviewed as part of the selection process for presentations. The abstract should be in A4 page format, should extend to no more than 2 pages (including figures and tables) and should not exceed 600 words. It should be sent electronically to the Scientific Secretary of the event (see contact details in the attached information sheet), not later than 1 March 2025. Authors will be notified of the acceptance of their proposed presentations by 1 April 2025.
In addition to the registration already submitted through the InTouch+ platform, participants have to submit the abstract, together with the Form for Submission of a Paper (Form B), to the competent national authority (e.g. Ministry of Foreign Affairs, Permanent Mission to the IAEA or National Atomic Energy Authority) or organization for onward transmission to the IAEA not later than 1 March 2025.