At the 2025 CAP Implementation Workshop, Ahmed Lyahou from Intersec explored how Artificial Intelligence is reshaping the way authorities use public warning systems. His central message was that crisis management is shifting from a reactive model to one that is increasingly proactive and adaptive, supported by a growing volume of data from the field.
Why traditional alerting needs to evolve
Crises today are more frequent and more complex. Climate change, geopolitical tensions, and rapid urbanisation create situations that can escalate quickly. At the same time, authorities receive more information than ever before. Meteorological feeds, satellite imagery, sensors, video surveillance, and social media all generate valuable signals, but correlating them in real time is extremely difficult without advanced tools.
There is another challenge: public trust can be damaged if authorities send alerts that prove unnecessary or poorly targeted. For an early warning system to remain credible, operators must be confident in both the risk and the impact before alerting the population.
AI helps address these challenges by analysing large data sets, detecting patterns, and transforming raw information into clear, actionable recommendations for crisis managers.
“AI does not replace human judgment. It acts as a force multiplier that helps operators work faster and make better decisions, but humans remain responsible for validating and sending alerts. This is essential for both accountability and public trust,” said Ahmed Lyahou.
How AI strengthens CAP-enabled early warning systems
Ahmed presented three main ways AI enhances public warning systems:
- by making reaction times faster
AI improves reaction time by assisting crisis teams and keeping operators trained. Many users of early warning systems activate them very rarely, which means they may not remember exactly how to use them when it matters. AI can generate simulations, quizzes, and personalised refreshers to ensure operators remain ready. When an incident occurs, such as a gas leak or a flash flood, AI can analyse incoming data, propose a target zone, suggest the most appropriate dissemination channel, and draft an alert message. Intersec already uses a conversational agent that allows operators to describe the situation in natural language and receive a complete alert proposal in return.
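To make the workflow concrete, here is a minimal sketch of what such an "alert proposal" might look like as a data structure, with a human-approval gate. All names and fields are hypothetical illustrations, not Intersec's actual API.

```python
from dataclasses import dataclass

# Hypothetical sketch of an AI-generated alert proposal.
# Field names are illustrative, not Intersec's actual data model.
@dataclass
class AlertProposal:
    hazard: str                              # e.g. "gas leak", "flash flood"
    target_zone: list[tuple[float, float]]   # polygon of (lat, lon) points
    channels: list[str]                      # e.g. ["cell-broadcast", "sms"]
    draft_message: str                       # operator-editable alert text
    validated: bool = False                  # stays False until a human approves

def approve(proposal: AlertProposal) -> AlertProposal:
    # Human-in-the-loop gate: nothing is disseminated unless validated.
    proposal.validated = True
    return proposal

proposal = AlertProposal(
    hazard="flash flood",
    target_zone=[(48.85, 2.35), (48.86, 2.36), (48.84, 2.37)],
    channels=["cell-broadcast"],
    draft_message="Flash flood expected. Move to higher ground immediately.",
)
print(approve(proposal).validated)  # True
```

The key design point, consistent with the talk, is that the AI only fills in the proposal; the `validated` flag can only be set by a human action.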
- by helping alerts evolve during a crisis
AI also helps warning systems adapt to changing conditions. Situations can evolve rapidly. A fire may shift direction, roads in an evacuation plan may become blocked, or traffic patterns can change unexpectedly. By monitoring sensor data, meteorological updates, and open data sources, AI can recommend modifications to an ongoing alert. It can adjust the zone, refine the evacuation plan, modify the duration, or update the instructions. This reduces the workload on crisis teams and keeps the public informed with greater accuracy.
- by preparing the path toward more proactive alerting
The third dimension is proactivity. With the ability to analyse forecasts and historical patterns, AI can support crisis teams even before a crisis unfolds. If rising river levels indicate a potential flood several days ahead, for example, AI can draft an informational SMS to help people prepare, while still leaving the final decision and validation entirely to human operators.
Crucially, this proactive support is not delivered as a black box. Before making a recommendation, the AI agent explains in clear terms why it suggests a specific action, showing the indicators, trends, and data sources that informed its reasoning. This transparency allows operators to understand, trust, and challenge the system when necessary. Over time, as the AI learns from previous events and operator feedback, it can refine its proposals without ever replacing human oversight, ensuring that public alerting becomes more anticipatory while remaining fully accountable.
AI is already widely used in daily life, and its role in crisis management continues to grow. By integrating AI into CAP-enabled early warning systems, Intersec supports authorities in building more resilient, anticipatory, and accurate alerting strategies that better protect populations.
You will find the recording of Ahmed’s presentation below:
Q&A session
Q: Among the challenges mentioned regarding AI, how do you envision integrating alerting systems with television station automation, which may be responsible for nationwide content playout?
Ahmed Lyahou: We already integrate with television broadcasters. TV is one of the channels supported in our alerting ecosystem, and we have existing connectors that allow alerts to be pushed automatically. That said, human validation always remains mandatory before any alert is broadcast.
AI can automate every step up to that point. It can prepare the alert, select channels, and suggest parameters, but a human operator reviews and validates the final version before dissemination.
Q: Following up on validation, there is concern about relying too heavily on AI. When the system proposes an alert, can the reviewer access the underlying reasoning, for example: why the AI thinks flooding is likely?
AL: Yes. We store historical and real-time data such as meteorological inputs, sensor readings, and other collected datasets. When the system suggests that an event like flooding is likely, that suggestion is tied to the underlying data.
Operators can access dashboards showing what was collected, current values, and which inputs led to the decision. The rationale is transparent.
Q: Is that explanation immediately accessible to the person validating the alert?
AL: Absolutely. Dashboards can be configured to show the relevant information, the collected data, and the reasoning behind the proposed decision. All of this is available to the validator.
Q: Regarding the CAP-enabled infrastructure, is this product strictly for official alerting authorities, or could any private entity (say, a private security firm) purchase it and act as an alerting authority?
AL: Today, we work exclusively with governments and official authorities, often through mobile operators mandated by regulation. Policies vary by country, but our deployments serve public authorities only.
Q: AI appears to ask an authority whether to send an alert. If the authority says yes, does the AI then write the alert?
AL: We use two layers of intelligence.
First, our machine learning and deep learning models analyse incoming data (sensors, meteorological feeds, etc.) to determine whether an alert should be recommended.
Then we use LLM-based, agentic AI, combined with retrieval-augmented generation using our documentation and APIs, to help construct the alert. The operator can interact with the system conversationally: choose channels, define the message in one language, request translations, and more.
Visually, the system presents an alert composed in a human-friendly form, but underneath it is building a complete CAP message.
Q: So to confirm: once permission is given, the AI writes the alert as a CAP message?
AL: Correct. It generates all components step by step (zones, message content, parameters) and then assembles the full CAP message.
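For readers unfamiliar with the format, a CAP message is an XML document defined by the OASIS CAP 1.2 standard. The sketch below assembles a minimal valid-looking alert with Python's standard library; the element names and namespace follow the public CAP 1.2 specification, while the values and the function itself are purely illustrative.

```python
import xml.etree.ElementTree as ET

# OASIS Common Alerting Protocol 1.2 namespace (from the public spec).
CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"

def build_cap_alert(identifier, sender, sent, event, headline, area_desc, polygon):
    """Assemble a minimal CAP 1.2 <alert>. Values are illustrative; a real
    system would populate every element from validated operator inputs."""
    ET.register_namespace("", CAP_NS)
    alert = ET.Element(f"{{{CAP_NS}}}alert")
    for tag, text in [("identifier", identifier), ("sender", sender),
                      ("sent", sent), ("status", "Actual"),
                      ("msgType", "Alert"), ("scope", "Public")]:
        ET.SubElement(alert, f"{{{CAP_NS}}}{tag}").text = text
    info = ET.SubElement(alert, f"{{{CAP_NS}}}info")
    for tag, text in [("category", "Met"), ("event", event),
                      ("urgency", "Immediate"), ("severity", "Severe"),
                      ("certainty", "Likely"), ("headline", headline)]:
        ET.SubElement(info, f"{{{CAP_NS}}}{tag}").text = text
    area = ET.SubElement(info, f"{{{CAP_NS}}}area")
    ET.SubElement(area, f"{{{CAP_NS}}}areaDesc").text = area_desc
    ET.SubElement(area, f"{{{CAP_NS}}}polygon").text = polygon
    return ET.tostring(alert, encoding="unicode")

cap_xml = build_cap_alert(
    identifier="example-2025-0001",
    sender="alerts@example.gov",
    sent="2025-06-01T12:00:00+00:00",
    event="Flash Flood",
    headline="Flash flood warning for the river valley",
    area_desc="River valley district",
    polygon="48.85,2.35 48.86,2.36 48.84,2.37 48.85,2.35",
)
print("Flash Flood" in cap_xml)  # True
```

Channel-specific versions (SMS, cell broadcast, TV crawl) would then be derived from fields such as `headline` and `instruction`, while the full CAP document is kept for auditing, as described later in the Q&A.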
Q: My next question is about legal responsibility. If the AI makes a mistake in the alert, who is legally accountable?
AL: Our system defines multiple levels of validation. The person who creates the alert is not the one sending it. The final decision is made by one or two human validators, depending on the country’s procedures.
Accountability lies with the human validators. They approve the dissemination.
Q: But doesn’t that mean the AI could generate text the human didn’t expect?
AL: No. The AI never sends alerts. It only prepares them. The creator reviews the full message and submits it. The validator reviews it again and decides whether to disseminate. Nothing is sent without explicit human approval.
Q: So, human validation is required for every alert generated with AI?
AL: Yes, absolutely.
Q: Some systems can use predefined templates to send alerts automatically. With your AI system, will confirmation always be required?
AL: We do offer templates and wizards to accelerate alert creation. Templates allow authorities to reuse predefined messages; wizards walk them through the steps. These tools speed up the process, but still require manual approval.
AI improves speed and consistency, but does not replace validation.
Q: A question on alert templates: can the system incorporate community-specific templates developed with local groups, for example, fishing communities requiring particular wording?
AL: Yes, the system can ingest and use community-specific template information.
Q: Second, you mentioned humans choosing the alerting channels. Can the AI recommend channels based on the type and severity of the hazard?
AL: Yes. That’s part of its role. It can suggest the most suitable channels depending on the nature and criticality of the event.
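As a rough illustration of this idea, channel recommendation can be thought of as a mapping from event criticality to dissemination channels. The rules and channel names below are invented for the sketch and do not represent Intersec's actual logic.

```python
# Illustrative rule-of-thumb mapping from severity to channels.
# Both the rules and the channel names are hypothetical.
def recommend_channels(severity: str) -> list[str]:
    ranking = {
        "Extreme":  ["cell-broadcast", "sirens", "tv", "radio", "sms"],
        "Severe":   ["cell-broadcast", "sms", "tv"],
        "Moderate": ["sms", "mobile-app"],
        "Minor":    ["mobile-app", "social-media"],
    }
    # Fall back to a low-intrusiveness channel for unknown severities.
    return ranking.get(severity, ["mobile-app"])

print(recommend_channels("Severe"))  # ['cell-broadcast', 'sms', 'tv']
```

In practice an AI system would weigh more inputs than severity alone (hazard type, affected area, population density, channel availability), but the principle of ranking channels by event criticality is the same.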
Q: Third, regarding CAP: if the system generates a full CAP alert and truncated versions for various channels, does it also publish the alert to CAP feeds that others might consume?
AL: Our system always generates a complete CAP message. Publishing it to external CAP feeds depends on the authority’s requirements and network constraints.
For example, in some countries like France, the alerting system is deployed in an isolated network with no external connections, so external publishing requires explicit approval and security review.
Where customers request RSS or CAP feed publication, we implement it; where they do not, we cannot push it automatically.
Q: In Europe, the European Standard Agency now requires that any alert posted on social media include a link to the underlying CAP alert. Is this something your system supports?
AL: We already store the CAP message for auditing, so making it available for posting is straightforward when the customer requests it.
If European authorities require that social media alerts include a CAP link, we can support that as long as the customer authorizes the publication flow.
Q: One more note: for your LLMs, you could train them using the IFRC PAPE (Public Awareness and Public Education) key messages. These are field-tested across dozens of countries and languages. They ensure that people understand and act correctly. Wouldn’t this be an ideal training source?
AL: Yes, absolutely. These field-tested instructions are valuable and would be a strong foundation to extend into additional languages as needed.