Artificial intelligence, misinformation and emergency communication

Turning risk into opportunity

The IAEA’s simulator trains countries to use social media effectively during nuclear or radiological emergencies, including how to counter misinformation. (Photo: IAEA)

From translation bots to deepfake detectors, artificial intelligence (AI) tools are transforming how authorities warn, inform and reassure people about emergencies. But these technologies can be risky if they fall into the wrong hands, or if they are deployed before facts are verified. In a world saturated with synthetic content, perception can overshadow reality. This is especially critical in emergency communications — where speed matters, but so does trust.

In June 2025, the IAEA convened a group of leading experts on public communication in nuclear and radiological emergencies to examine how AI is changing the rules of engagement. The goal was to help countries adapt through evidence-based guidance, new research and practical capacity building.

“As AI reshapes the information landscape, we want to support countries with effective guidance, connect them with leading experts, and help them navigate this fast-moving and constantly evolving field,” said Nayana Jayarajan, Outreach Officer at the IAEA’s Incident and Emergency Centre.

Preventing the misuse of AI in crises

Today, deepfakes can create more panic than an actual emergency alert. During Hurricane Helene in 2024, social media in the United States of America was flooded with AI-generated images, including one of a distraught young girl clutching a puppy in a rescue boat. Though entirely synthetic, the image spread faster than official updates. Such fabricated visuals can divert scarce resources, erode public trust in government efforts and reinforce manipulative narratives.

“The first and most pressing challenge posed by generative AI,” said Kalina Bontcheva, Professor of Computer Science at the University of Sheffield, “is improving the models’ safeguards to prevent their misuse in producing persuasive, polarizing disinformation at scale — either for free or at very low cost.”

“Since the intermediary is cut out in modern communication, public access to information has changed dramatically,” said Achim Neuhauser, Head of the President’s Office at Germany’s Federal Office for Radiation Protection.

AI-powered misinformation doesn’t just distort narratives — it challenges legitimacy.

Making AI more effective in crisis communications

“Crises alter the relational dynamics between organizations and the public,” said Alice Cheng, Associate Professor at North Carolina State University. “Trust and satisfaction may temporarily evaporate, and publics may re-evaluate these relationships based on perceived legitimacy and responsiveness.”

Cheng surveyed over 660 people in the United Kingdom to understand what drives trust in AI during emergencies and how that trust affects behaviour. Participants were presented with a realistic scenario involving a major company using AI tools — such as predictive alerts and evacuation guidance — during a disaster. They then answered questions about the company’s ethics, the AI’s abilities, social influences and their own levels of trust and intentions. The results revealed that people’s trust in AI was shaped by how ethical they perceived the company to be, how capable they thought the AI was, and what they believed others expected them to think.

Trust in AI significantly boosted trust in the company, which in turn made people more likely to endorse the company by word of mouth and more willing to support it during emergencies. These endorsements directly increased people’s intentions to help. Overall, the study found that ethics, AI competence and social norms work together to build trust that spreads by word of mouth and strengthens people’s willingness to cooperate in a crisis.

In this shifting terrain, communicators are looking for help — and AI, when used well, offers real support. “These technologies can enhance crisis communication in meaningful ways,” said Sophie Boutaud de la Combe, Director of the IAEA Office of Public Information and Communication. “For instance, real-time digital threat mapping and sentiment analysis allow organizations to track how the public is emotionally responding to crisis messages across platforms. This feedback enables more responsive and empathetic communication strategies, helping authorities fine-tune their tone and timing in high-stakes situations.”
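
Conceptually, the sentiment analysis described above comes down to scoring each incoming post and aggregating the scores per time window, so that a sudden negative shift in public mood stands out. Below is a minimal Python sketch of that pattern; everything in it (the word lists, the sample posts, the alert condition) is a hypothetical stand-in for illustration, not an IAEA tool or a production sentiment model.

    # Illustrative sketch only: a toy lexicon-based sentiment tracker for
    # social media posts during an emergency. The word lists, sample posts
    # and alert condition are invented for this example; a real system
    # would use a trained sentiment model and a live platform feed.
    import re
    from collections import defaultdict
    from statistics import mean

    POSITIVE = {"safe", "calm", "helpful", "clear", "reassuring"}
    NEGATIVE = {"panic", "fear", "rumor", "fake", "confusing", "danger"}

    def polarity(post: str) -> int:
        """Crude score: +1 per positive word, -1 per negative word."""
        words = re.findall(r"[a-z]+", post.lower())
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    # (hour, text) pairs standing in for a monitored feed of public posts
    posts = [
        (9, "Official update is clear and reassuring"),
        (9, "Feeling safe after the alert, staff were helpful"),
        (10, "So much panic and rumor online right now"),
        (10, "That viral photo looks fake and is causing fear"),
        (11, "Evacuation guidance was confusing, danger unclear"),
    ]

    # Aggregate polarity per hour and flag windows that turn negative,
    # a signal that messaging tone or timing may need adjusting.
    scores_by_hour = defaultdict(list)
    for hour, text in posts:
        scores_by_hour[hour].append(polarity(text))

    for hour in sorted(scores_by_hour):
        avg = mean(scores_by_hour[hour])
        flag = "  <- negative shift detected" if avg < 0 else ""
        print(f"{hour:02d}:00  average sentiment {avg:+.2f}{flag}")

In practice the toy lexicon would be replaced by a trained model and the hard-coded posts by a live stream of platform data, but the score-aggregate-alert loop stays the same.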

Human oversight remains indispensable

According to Neuhauser, “individuals who are traumatized by an event need real people to interact with them — whether over the phone or face-to-face.” He said that research on perceptions of AI and chatbots shows that people accept this technology when it uses human conversational habits.

Cheng’s research underscores the need for strong ethical and technical safeguards when integrating AI into emergency communication systems. While these tools can boost effectiveness, privacy concerns weigh heavily on public trust. Accessible explanations of how the technology works, transparency about how data is used and clear opt-in mechanisms are all essential to maintain public confidence and engagement.

Cheng sees the communicator’s role evolving. “In an increasingly AI-assisted communication environment, human judgement becomes not less relevant but more strategic. The role of the communicator is shifting from message creation to oversight, curation and trust mediation,” she said.

Boutaud de la Combe emphasized the importance of considering disinformation in national emergency risk assessments and said that emergency response organizations should institutionalize rapid public communication protocols, train designated spokespersons and pre-position multilingual public information assets.

When developing policies on AI use, the focus needs to be on the balance between automation and accountability. “AI won’t replace human judgement in emergencies, but it will reshape how we detect, respond to, and simulate crises,” said Dubai-based crisis communication consultant Philippe Borremans. “In a disinformation-rich environment, it’s both amplifier and filter. The challenge is to ensure the algorithms serve clarity, not confusion, and that the human voice remains the anchor.”

And that voice — ethical, responsive, grounded — may be the most strategic asset of all.