19 March 2026

The True Cost of Cyber Downtime: A UK Board-Level Briefing

Written by Sean Tilley, Senior Sales Director EMEA at 11:11 Systems

 

Cyber downtime carries measurable financial consequences, and those consequences are becoming clearer with each major incident. Research from 11:11 Systems shows that 78% of European organisations report losses of up to $500,000 per hour following a cyber-related outage, while 6% face costs exceeding £1 million per hour. When recovery extends beyond containment, the disruption begins to register in revenue performance, contractual exposure, and customer stability rather than remaining confined to the technology function.



For UK leadership teams, the issue centres on continuity of income, fulfilment of obligations, and the strength of customer relationships under strain.

 

Recovery delays compound risk

Half of organisations surveyed require between one and two weeks to fully recover from a cyber incident. Over that period, cost exposure builds in ways that are rarely reflected in early estimates.

 

Revenue stalls, particularly where digital platforms underpin billing and subscriptions, while service commitments are breached, supply chains experience secondary disruption, and internal teams divert time and budget away from planned initiatives towards remediation and communications.
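
To put rough numbers on that exposure, a minimal cost model is instructive. The hourly figure below is an assumed illustration within the range reported above, not a statistic from the 11:11 Systems research:

  # Illustrative downtime-cost model; the £100k/hour input is an
  # assumed mid-range figure, not survey data.
  HOURS_PER_DAY = 24  # assume a continuous outage

  def cumulative_loss(hourly_loss_gbp: float, recovery_days: int) -> float:
      """Direct revenue exposure across the recovery window."""
      return hourly_loss_gbp * HOURS_PER_DAY * recovery_days

  for days in (7, 14):  # the one-to-two-week window reported above
      print(f"{days} days: £{cumulative_loss(100_000, days):,.0f}")
  # 7 days:  £16,800,000
  # 14 days: £33,600,000

Even this simple model excludes the deferred costs described below (insurance adjustments, ongoing forensics, customer attrition), which arrive after systems are restored.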

 

Extended recovery places additional pressure on customer relationships, especially in sectors where availability is assumed as standard. Regulatory scrutiny increases in parallel, particularly under UK GDPR and sector-specific resilience requirements, where organisations must demonstrate that appropriate safeguards were established before the incident occurred.

 

A significant proportion of the cost emerges over time rather than immediately. Insurance premiums adjust at renewal, forensic specialists and legal advisers remain engaged, customer notification programmes continue long after systems are restored, and remediation work extends into future quarters. By the time the full impact is visible, the loss total often exceeds initial projections.

 

According to the Cyber Monitoring Centre, recent UK attacks across retail, healthcare and critical infrastructure have collectively cost businesses more than £1.9 billion. At an individual level, even a contained incident can translate into multi-million-pound losses once revenue interruption, remediation spend and longer-term customer attrition are properly accounted for.

 

Recovery time remains the decisive variable: the longer disruption persists, the greater the commercial strain and the sharper the regulatory attention.

 

For boards, cyber downtime is no longer a technical failure but a test of governance. In the immediate aftermath of an incident, external scrutiny rarely focuses on how the attack occurred. Instead, attention turns to whether leadership understood its exposure, validated recovery assumptions and exercised informed oversight before disruption struck. Where recovery falters, questions follow around board assurance, investment prioritisation and whether resilience was treated as a compliance exercise rather than a core commercial safeguard worthy of sustained board attention. In that context, prolonged downtime can quickly become a proxy for broader leadership risk.

 

The preparedness gap

Despite recent high-profile incidents, many organisations still overestimate their ability to recover.

Backup environments may exist without having been stress-tested under realistic conditions; recovery objectives are documented but rarely validated; crisis governance structures that appear clear on paper can lose coherence under pressure; and visibility across cloud platforms, SaaS providers, and outsourced partners frequently remains incomplete.

 

Modern enterprises operate across layered digital ecosystems that depend on managed services, third-party infrastructure, and interconnected suppliers, each introducing dependencies that may sit outside direct oversight. Without a consolidated view of these relationships, recovery planning remains fragmented and assumptions around restoration timelines tend to be optimistic rather than proven. When those assumptions fail, cost exposure accelerates quickly.

 

Resilience as a strategic advantage

The organisations that recover fastest are rarely those with the most technology, but those with the clearest decision rights. During major incidents, value is lost less through system failure than through delayed executive judgement: uncertainty over who authorises restoration priorities, how customer communications are sequenced, and which commercial trade-offs are acceptable under pressure. Boards that rehearse these decisions in advance shorten recovery by eliminating hesitation at the moment it matters most. In competitive markets, that decisiveness increasingly separates resilient businesses from those that merely survive disruption.

 

Containing the cost of downtime requires disciplined preparation rather than reactive response.

 

Scenario-based recovery testing that includes executive leadership brings clarity to decision-making authority, communication sequencing and operational prioritisation, while tabletop exercises expose governance gaps before they are tested in live conditions.

 

Disaster Recovery as a Service can materially reduce restoration timelines where isolated environments and immutable backups are properly implemented. Equal attention should be given to external dependencies, with clear understanding of partner capabilities, escalation paths, and recovery commitments established in advance of disruption.
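
As one concrete illustration of the immutability point, object storage with a compliance-mode retention lock prevents backups from being altered or deleted even with administrative credentials. A minimal sketch using AWS S3 Object Lock via boto3 (bucket name, region and retention period are illustrative assumptions, not a recommended configuration):

  import boto3

  s3 = boto3.client("s3")

  # Object Lock must be enabled at bucket creation; it cannot be
  # retrofitted onto an existing bucket.
  s3.create_bucket(
      Bucket="dr-backups-immutable",  # hypothetical bucket name
      CreateBucketConfiguration={"LocationConstraint": "eu-west-2"},
      ObjectLockEnabledForBucket=True,
  )

  # COMPLIANCE mode means no identity, including root, can shorten the
  # retention period, so stolen admin credentials cannot purge backups.
  s3.put_object_lock_configuration(
      Bucket="dr-backups-immutable",
      ObjectLockConfiguration={
          "ObjectLockEnabled": "Enabled",
          "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
      },
  )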

 

Effective resilience planning therefore extends across internal systems, cloud providers, and supply chain partners, ensuring that recovery capability is aligned rather than siloed.

 

Preparation does not prevent incidents, but it materially reduces their financial and operational impact.

 

What this means for boards

The commercial exposure created by cyber downtime is now quantifiable and, in many cases, escalating. The central question for boards is how effectively the organisation can absorb disruption without sustained damage to revenue, customer trust or regulatory standing.

 

Organisations that embed recovery capability into broader business planning place themselves in a stronger position to manage that exposure with discipline, control and credibility.

16 March 2026

When insider risk is a wellbeing issue, not just a disciplinary one

Written by Katie Barnett, Director of Cyber Security at Toro Solutions

Insider risk is still often framed around intent, with the focus placed on malicious employees, disgruntled contractors, or deliberate misuse of access for personal gain.
Those cases exist and they matter, but they are rarely where risk first begins, and they do not reflect how most insider-related incidents actually develop.

In reality, many cases take shape slowly and quietly. They are shaped by pressure, fatigue, disengagement, coercion, manipulation or personal strain rather than hostility. The behaviour that later causes harm is often preceded by long periods of stress, isolation, outside influence or unresolved workplace issues. By the time someone is formally labelled an insider threat, the opportunity for early, proportionate support has usually passed, and the organisation is left with far fewer options.

This is why treating insider risk purely as a disciplinary or compliance issue consistently falls short. In many situations, the underlying issue is one of wellbeing first, with security consequences following later, whether the organisation recognises that link or not.

The scale of the problem

Insiders are a significant and consistent factor in security incidents. Accenture[1] has reported that a substantial share of security incidents involve insiders, many linked not to sophisticated intent but to frustration, opportunism, or poor judgement under pressure.

Research from the Ponemon Institute[2] also shows that many employees who leave an organisation take some form of sensitive data with them, often without seeing it as wrongdoing. These findings do not mean that most people are inherently risky. They show how easily people can justify their actions when they feel unsupported, unheard, or under strain.

Despite this, insider risk is still often pushed aside or handled in isolation. In many organisations it moves between HR, security, and legal teams without a shared understanding of what is really driving behaviour. When this happens, patterns are missed and early warning signs become normalised, until a more serious incident finally brings the issue to senior attention.

How insider risk really develops

Insider risk rarely begins with a clear breach of policy. More often we find that it develops incrementally through small changes in behaviour that are easy to explain away, particularly in high-pressure or highly trusted roles.

Someone may start working excessive hours to manage workload, gradually bypassing controls that feel obstructive rather than protective. They may disengage from colleagues, become defensive when challenged, or withdraw from routine interaction. None of this suggests malicious intent in isolation, but it often marks the point at which judgement can begin to erode.

In roles with wide access and limited oversight, these issues can go unnoticed for a long time. As people grow more comfortable with the systems, informal shortcuts start to feel normal, and risk builds in the background. By the time leadership becomes aware, it’s often because something has already gone wrong.

In some cases, the influence is external. Individuals may be targeted by criminals, competitors or organised groups who exploit personal vulnerabilities, financial stress or emotional pressure. This does not always look like blackmail or explicit threats. It can begin with flattery, requests for small favours, or appeals to sympathy, and gradually escalate into access, information sharing or rule-bending that feels difficult to refuse.

Coercion does not always come from outside. In some environments it can arise internally through power imbalances, unrealistic expectations, or pressure from senior colleagues that makes it hard to say no without fear of consequences.

Connection without closeness

Modern ways of working have added a new layer of complexity. We are more digitally connected than ever, yet many people now experience their work in relative isolation. Messages replace face-to-face conversations, context gets lost, and informal check-ins happen far less often.

Judgement does not exist in a vacuum. Stress, fatigue, and emotional strain shape how people interpret information and how carefully they make decisions. When pressure rises and support feels distant, people are more likely to misread situations, take shortcuts, or justify behaviour they would normally question.

This is not just a wellbeing issue. It is a resilience issue. Emotional strain narrows perspective and makes people more open to influence, whether that influence comes from outside the organisation or from their own internal reasoning.

Why the wider environment matters

These dynamics are being intensified by wider economic uncertainty. Prolonged cost-of-living pressures, geopolitical instability, and sustained disruption across global markets are all putting strain on individuals’ finances.

Financial pressure affects how people behave. It makes it harder to focus, increases anxiety, and can reduce how seriously people think about consequences. Some may even feel they have little left to lose. This does not mean they intend to do harm, but it does raise risk, especially for those who have access to sensitive systems, information, or assets.

From a security point of view, money stress increases risk. When organisations treat financial wellbeing as separate from security, they overlook an important part of the problem.

Financial strain also increases susceptibility to manipulation. People under pressure are more likely to respond to offers of help, opportunities to “fix” problems quickly, or requests that promise relief from stress. From a security perspective, this creates conditions where coercion becomes easier and more effective, even when individuals have no intention of causing harm.

Why controls alone are not enough

When insider risk is identified, organisations often respond in a technical way: tightening access, increasing monitoring, and reinforcing policies. These actions are important, but they rarely address the underlying conditions that allowed the risk to develop in the first place.

Controls alone do not reduce burnout. Monitoring does not ease financial pressure, and policy reminders do not restore sound judgement. In some situations, a poorly timed escalation can actually increase feelings of mistrust or isolation, which pushes risk further underground instead of resolving it.

Both research and practical experience show that behavioural warning signs often appear before any technical breach occurs, including changes in performance, disengagement, conflict with management, and financial difficulty, and when organisations wait until behaviour crosses a formal threshold, their options become limited and the consequences are usually far more severe.

What “support as prevention” looks like in practice

Support does not mean ignoring misconduct or lowering standards, but instead means expanding the prevention toolkit so organisations can step in earlier, when the impact is lower and when individuals still have realistic options.

In practice, this often includes:
  • Clear, normalised escalation routes, so staff can raise concerns without automatically triggering a disciplinary process.
  • Line managers trained to notice and act on changes in behaviour, workload strain, or disengagement, and to involve the right functions early.
  • Shared ownership between HR, security, and operational leadership, so people risk does not fall between organisational boundaries.
  • Proportionate, temporary risk management, such as short-term access adjustments or additional oversight while a personal issue is being addressed.

This approach reflects the direction set out in UK protective security guidance, which emphasises treating insider events as connected, strengthening leadership understanding, and addressing the reasons insider risk is often deprioritised or avoided.

Culture determines whether people speak up

In many insider cases, colleagues notice warning signs but decide not to raise them because they worry about getting someone into trouble, triggering an investigation, or being seen as overreacting.

Where people believe that raising concerns will lead to fair and supportive action, reporting becomes more likely, but where they expect blame or punishment, staying silent feels safer.

This is not a training failure. It is a cultural one.

A quieter form of prevention

The most effective insider risk programmes are often the least visible because they are built into everyday management practice, supported by leadership, and grounded in trust. They recognise that people are both the greatest asset and the most complex part of any security system.

In a world that is increasingly connected but emotionally fragmented, emotional and financial pressures are no longer side issues. They are part of the risk landscape.

For organisations that are serious about resilience, insider risk must be understood not only through controls and compliance, but also through culture, support, and leadership judgement. This shift does not weaken security; it strengthens it.

13 March 2026

Building Trust in AI SOC Analyst Solutions: A UK and EU CISO Perspective

By Brett Candon, VP International at Dropzone AI

Trust has always been critical in security operations, but in the UK and Europe it carries significant regulatory weight. GDPR, NIS2 and related data‑protection frameworks shape far more than legal risk: they directly influence architectural decisions, supplier selection, and how security data can be accessed, processed and reviewed. That becomes more pronounced as autonomous AI systems move from proof‑of‑concept to daily SOC tooling.


The appeal is undeniable. Faster investigations, more consistent outcomes, and the ability to scale Tier‑1 response are all compelling. However, without clear answers on data flows, access and accountability, AI introduces risk as easily as it removes it. And speed alone does not build trust.

Against this backdrop, AI‑native approaches to SOC operations are gaining traction, grounded in the idea that autonomy, transparency, and repeatability must be foundational design principles rather than retrofitted controls. These systems are positioned to investigate alerts end‑to‑end using agent‑based reasoning, producing structured, auditable outputs in minutes. If implemented with the right governance, this operating model has the potential to meet the elevated trust and accountability expectations that characterise UK and EU security environments.

Data Sensitivity Changes the Trust Model

SOC data often contains personal data, whether in endpoint identifiers, usernames, IP mappings, or embedded message content, and that requires a closer look at where the investigative work happens and who performs it. This is particularly true for UK and European organisations that must adhere to GDPR. If a platform relies on offshore human review behind the scenes, organisations may be exposing sensitive operational context to jurisdictions with different privacy standards.

As a result, interest in autonomous SOC analysis extends beyond speed and efficiency. It reflects a desire to reduce opaque manual processes and replace them with systems that can complete investigations independently, while still producing outputs that are auditable and jurisdictionally compliant. For UK and EU organisations, autonomy only builds trust when it removes uncertainty rather than creating new blind spots. Customers need to be in control of what the AI is investigating, have visibility of what it is doing and have control over the output.

Explainability and Accuracy Are Key Trust Factors

For CISOs, explainability forms the next pillar of trust. An alert closed in seconds means little if the underlying reasoning behind the decision cannot be reviewed. Boards, auditors and regulators increasingly expect security leaders to justify decisions with evidence. Investigation reports need to show what data was examined, which hypotheses were tested, and how conclusions were reached. AI systems that show this reasoning are far better suited to audit review, incident analysis, and regulatory inquiry than those that operate as black boxes. 
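
What that evidence trail might look like can be sketched as a structured record; the field names here are illustrative, not any vendor's actual schema:

  from dataclasses import dataclass, field

  @dataclass
  class InvestigationStep:
      hypothesis: str         # what the system set out to test
      evidence_queried: list  # data sources it examined
      finding: str            # what the evidence showed

  @dataclass
  class InvestigationReport:
      alert_id: str
      verdict: str            # e.g. "benign" or "escalate"
      steps: list = field(default_factory=list)  # ordered reasoning trail

  report = InvestigationReport(
      alert_id="ALERT-1234",
      verdict="benign",
      steps=[InvestigationStep(
          hypothesis="Sign-in from a new country indicates account takeover",
          evidence_queried=["IdP sign-in logs", "VPN gateway logs"],
          finding="Source IP matches the user's usual corporate VPN exit node",
      )],
  )

A record of this shape gives auditors the chain of what was examined, what was tested and what was concluded in reviewable form.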

As European AI regulatory frameworks move from legislative text to supervisory enforcement, CISOs should expect closer scrutiny of how AI‑assisted decisions are documented, monitored, and justified after the fact.

Accuracy is another key pillar of trust. European buyers are sceptical of headline claims that cannot be verified. False‑positive and false‑negative rates only matter if they hold up under real-world conditions. This has increased interest in evaluation models that allow security teams to test AI‑driven investigation capabilities against their own data, rather than relying solely on vendor‑curated demonstrations. In environments shaped by due diligence and evidence, the ability to validate claims independently is itself a signal of trust.
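
In practice, that validation can be as simple as replaying a set of historical, human-labelled alerts through the system and scoring its verdicts. A minimal sketch, assuming each alert carries an analyst-confirmed ground truth:

  def error_rates(results):
      """results: iterable of (ai_verdict, ground_truth) pairs,
      each value either "malicious" or "benign"."""
      fp = sum(1 for ai, truth in results if ai == "malicious" and truth == "benign")
      fn = sum(1 for ai, truth in results if ai == "benign" and truth == "malicious")
      benign = sum(1 for _, truth in results if truth == "benign")
      malicious = sum(1 for _, truth in results if truth == "malicious")
      return fp / max(benign, 1), fn / max(malicious, 1)

  # Replay last quarter's triaged alerts (labels from human analysts).
  fp_rate, fn_rate = error_rates([
      ("malicious", "malicious"),
      ("malicious", "benign"),      # false positive
      ("benign", "benign"),
      ("benign", "malicious"),      # false negative: the costly case
  ])
  print(f"FP rate: {fp_rate:.0%}, FN rate: {fn_rate:.0%}")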

From Alert Volume to Analyst Impact

Strategically, the shift toward autonomous SOC operations goes beyond incremental optimisation. It reflects a broader move away from manpower‑bound, alert‑driven models toward operating frameworks that allow AI to absorb routine investigative workload and free experienced analysts to focus on high‑impact decisions. 

Advances in large language models and agent‑based reasoning have made this shift technically possible, while market pressure and workforce constraints have made it necessary. Importantly, industry research increasingly positions this transition as augmentation rather than replacement, a distinction that resonates strongly in European environments, where transformation must be balanced with workforce responsibility.

None of this removes buyer accountability. UK and EU CISOs still need to apply the same rigour they would to any high‑sensitivity platform, with questions tailored to AI's specific risks. This starts with end-to-end data-flow transparency: where data is processed, what categories are ingested, and how artefacts are stored or discarded.

It also includes understanding whether investigative workflows involve human access outside approved jurisdictions. It requires assessing explainability through real investigation outputs, including evidence citations and decision traceability.

Finally, it demands validation of accuracy and consistency under realistic conditions. Public metrics may provide context, but operational value is determined locally.

What Trust Looks Like Going Forward

Trust builds over time. Market maturity, breadth of deployment, and exposure to real-world scrutiny all contribute to confidence in any emerging operating model. In conservative buying environments, these signals provide evidence that systems have been tested across varied conditions and constraints. Staged rollouts, reference checks, and contractual clarity remain best practice, particularly when incident response decisions may later be examined by regulators or courts.

Looking ahead, the question for UK and EU CISOs is no longer whether AI will play a role in the SOC – it already does – but how to deploy it without compromising sovereignty, privacy, or auditability. The path forward lies in autonomy that supports security teams by reducing opaque processes, investigations that make their reasoning visible, and performance claims that can be tested rather than taken on trust. 

In a region where trust is both a security principle and a legal requirement, AI systems that are transparent in operation, verifiable in design, and accountable in outcome will earn their place at the centre of modern SOCs.

08 March 2026

AI Is Moving Faster Than Security Controls

AI is entering organisations faster than the security controls designed to govern it.

Artificial intelligence is rapidly becoming embedded across organisations.

AI assistants are now writing code, summarising documents, analysing data, and supporting operational decisions.

What began as experimentation is quickly becoming operational dependency.

For security teams, the challenge is not simply adopting AI. The real challenge is understanding how AI changes the way cybersecurity controls need to be validated.

In many organisations, AI tools are already interacting with corporate data, internal systems, and operational workflows.

Yet when security leaders ask a simple question:

“How do we know these AI systems are operating within our control boundaries?”

…the answer is often less clear than expected.


Why AI Security Controls Are Different

Traditional software behaves in predictable ways. Security teams can audit code, validate configuration, monitor logs, and confirm whether controls are operating as intended.

AI systems behave differently.

Modern AI models generate probabilistic outputs rather than deterministic ones. The same prompt may produce different responses, models can evolve through updates, and outputs may influence decisions that were never explicitly coded into the system.

This creates a shift in how security controls need to be assessed.

Controls designed for traditional systems do not always translate neatly into AI-driven environments.
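
The practical upshot is that AI-facing controls need statistical validation, many sampled runs, where traditional controls need only one pass. A toy sketch of the difference (both the policy check and the stand-in model are purely illustrative):

  import random

  def traditional_control(config: dict) -> bool:
      """Deterministic: the same input always gives the same answer,
      so a single test proves the control is working."""
      return config.get("tls_min_version") == "1.2"

  def ai_assistant_output(prompt: str) -> str:
      """Stand-in for a probabilistic model: same prompt, varying output."""
      return random.choice(["uses parameterised SQL", "concatenates SQL strings"])

  assert traditional_control({"tls_min_version": "1.2"})  # one check suffices

  # Validating the AI path means sampling, not a single pass.
  trials = 1000
  violations = sum(
      "concatenates" in ai_assistant_output("write a DB query")
      for _ in range(trials)
  )
  print(f"Policy violation rate over {trials} runs: {violations / trials:.1%}")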

Examples are already appearing in practice:

  • AI coding assistants generating insecure or non-compliant code
  • Employees uploading confidential documents into AI tools
  • AI platforms accessing internal data through integrations
  • AI agents interacting with APIs or automation platforms beyond their intended scope

In many cases, organisations technically have policies that cover these scenarios.

The real challenge is proving those policies are actually effective in practice.


The Growing Problem of Shadow AI

Just as “Shadow IT” emerged when employees adopted unsanctioned cloud services, many organisations are now experiencing Shadow AI.

Employees are increasingly using AI tools independently to improve productivity. These tools often bypass procurement processes, security reviews, and governance frameworks.

Common examples include:

  • Uploading documents into AI summarisation tools
  • Using AI assistants to analyse internal reports or spreadsheets
  • Generating code snippets with public AI models
  • Connecting AI plug-ins to automate existing workflows

From a security perspective, this creates several unknowns.

Organisations may not know:

  • Which AI tools are being used
  • What data is being shared with them
  • Whether prompts or outputs are stored externally
  • How AI-generated outputs influence operational decisions

The result is a widening gap between policy intent and operational reality.
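
Some of these unknowns can be narrowed with data most organisations already collect. A minimal sketch that flags outbound traffic to known AI services in web-proxy logs (the domain list, file name and log format are assumptions to adapt locally):

  import csv
  from collections import Counter

  # Illustrative, deliberately incomplete list; maintain your own.
  AI_SERVICE_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}

  usage = Counter()
  with open("proxy_log.csv", newline="") as f:  # assumed columns: user, destination_host
      for row in csv.DictReader(f):
          if row["destination_host"] in AI_SERVICE_DOMAINS:
              usage[(row["user"], row["destination_host"])] += 1

  for (user, host), hits in usage.most_common():
      print(f"{user} -> {host}: {hits} requests")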


AI Governance Without Visibility

Many organisations have already responded to AI risk by introducing policies, governance groups, or internal guidance.

These are important foundations.

But policy alone does not create assurance.

The real question is whether organisations can demonstrate that controls around AI usage are actually working.

That means being able to answer questions such as:

  • Do we know where AI tools are being used across the organisation?
  • Can we detect when sensitive data is submitted to external AI services?
  • Are AI-generated outputs influencing critical processes without validation?
  • Do we monitor AI integrations and access permissions?

Without measurable answers, AI governance risks becoming another form of dashboard compliance.

Controls may appear compliant on paper but lack operational validation.
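
Moving from paper policy to operational validation can start small. For the second question above, detecting sensitive data bound for external AI services, a coarse pattern check at the egress or proxy layer is one starting point (the patterns below are illustrative; production DLP needs far broader coverage):

  import re

  # Coarse illustrative patterns: UK National Insurance number, card-like digits.
  SENSITIVE_PATTERNS = {
      "ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
      "card_number": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
  }

  def scan_prompt(text: str) -> list:
      """Return the names of sensitive patterns found in an outbound prompt."""
      return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

  hits = scan_prompt("Summarise this HR case: employee AB123456C, card 4111 1111 1111 1111")
  if hits:
      print("Blocked: prompt contains", ", ".join(hits))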


Moving Toward Practical AI Security Assurance

Organisations that are managing AI adoption successfully are beginning to treat AI risk in the same way they treat other critical security controls.

The focus shifts from policy statements to evidence, monitoring, and validation.

Practical steps increasingly include:

  • Maintaining an inventory of approved AI systems
  • Monitoring integrations and API activity
  • Detecting data flows to external AI platforms
  • Ensuring human oversight for critical AI outputs
  • Continuously reviewing permissions and access scope
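
As a concrete sketch of the first of these steps, an inventory can record each approved AI system with its owner, data scope and last review date (the fields are illustrative):

  from dataclasses import dataclass
  from datetime import date

  @dataclass
  class AISystemRecord:
      name: str
      owner: str                # accountable business owner
      data_classification: str  # highest class of data it may touch
      integrations: list        # systems it can read from or act on
      last_reviewed: date

  inventory = [
      AISystemRecord("code-assistant", "Engineering", "internal",
                     ["source-repos"], date(2026, 1, 15)),
      AISystemRecord("doc-summariser", "Operations", "confidential",
                     ["sharepoint"], date(2025, 9, 2)),
  ]

  # Surface entries whose permissions have not been reviewed recently.
  STALE_DAYS = 90
  overdue = [r for r in inventory if (date.today() - r.last_reviewed).days > STALE_DAYS]
  for r in overdue:
      print(f"Review overdue: {r.name} (owner: {r.owner})")

The value of a record like this is the evidence it produces: it shows when each control around an AI system was last validated, not merely that a policy exists.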

These measures do not remove risk entirely.

But they shift the conversation from:

“Do we have an AI policy?” to the far more important question:

“Can we prove our AI controls are working?”


The Next Cybersecurity Challenge

Every major technology shift has forced organisations to rethink how security controls are validated.

Cloud computing did. DevOps did. SaaS platforms did. AI is now doing the same.

The organisations that manage this transition successfully will not necessarily be those that deploy AI the fastest.

They will be the ones that understand how to measure and validate the controls surrounding it.

Because in cybersecurity, the most important question is rarely whether a control exists.

The real question is whether it works.

03 March 2026

NCSC Warns UK Organisations to Prepare for Potential Iran-Linked Cyber Activity

Geopolitical conflict rarely stays confined to physical battlefields. Increasingly, it spills into the digital domain. The latest escalation of tensions in the Middle East has prompted the UK’s National Cyber Security Centre (NCSC) to issue a warning to organisations to review their cyber security posture and prepare for possible cyber activity linked to Iran.


While the NCSC has stressed that there is currently no confirmed significant increase in direct cyber threats to the UK, it has warned that the situation is fast-moving and organisations should remain alert.

Rising Tensions and Cyber Spillover
The warning follows a sharp escalation in the regional conflict involving Iran, the United States and Israel. Military developments have been accompanied by cyber activity targeting digital infrastructure and online services in the region, highlighting how modern conflicts now run across both physical and digital fronts.

In response, the NCSC has advised UK organisations to review their cyber defences and ensure they are prepared for possible disruption. The agency noted that while the direct cyber threat level to the UK has not significantly changed, there is “almost certainly a heightened risk of indirect cyber threat” for organisations with operations, assets or supply chains in the Middle East.

This includes potential activity from Iranian state actors as well as Iran-aligned hacktivist groups.

Iran’s Established Cyber Capabilities
Iran has long viewed cyber operations as a strategic tool that allows it to project influence asymmetrically against more technologically advanced adversaries. Over the past decade, Iranian cyber groups have targeted sectors such as energy, finance, transportation and government networks.

Previous campaigns linked to Iranian actors have included destructive malware operations, espionage campaigns and disruptive attacks against critical infrastructure. For example, the widely documented Operation Cleaver campaign targeted energy and transportation organisations globally.

Although Iranian cyber capabilities are generally considered less sophisticated than those of Russia or China, they have demonstrated a willingness to conduct disruptive and politically motivated attacks.

What the NCSC Is Advising Organisations to Do
The NCSC’s guidance is not calling for panic, but it does emphasise the importance of cyber resilience during periods of geopolitical instability.

Organisations are advised to:
  • Review their external attack surface and internet-exposed services
  • Increase monitoring for suspicious activity
  • Prepare for common threat tactics such as phishing and distributed denial-of-service (DDoS) attacks
  • Ensure patching and vulnerability management processes are up to date
  • Review incident response plans and escalation procedures
The NCSC has also encouraged organisations to sign up to its Early Warning service, which provides alerts about potential security issues affecting UK networks.
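
The first item on that list, reviewing internet-exposed services, is straightforward to spot-check against a known asset list. A minimal sketch (hostnames and ports are placeholders; only test infrastructure you own or are authorised to scan):

  import socket

  # Placeholder assets: replace with your externally routable hosts.
  ASSETS = {"vpn.example.com": [443], "legacy-app.example.com": [22, 3389]}

  def is_open(host: str, port: int, timeout: float = 3.0) -> bool:
      try:
          with socket.create_connection((host, port), timeout=timeout):
              return True
      except OSError:
          return False

  for host, ports in ASSETS.items():
      for port in ports:
          state = "OPEN" if is_open(host, port) else "closed"
          print(f"{host}:{port} {state}")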

The Risk of Opportunistic Cyber Activity
One important point highlighted in the advisory is that not all cyber activity during geopolitical crises comes directly from state actors.
Periods of international tension often attract:
  • politically motivated hacktivists
  • cybercriminal groups seeking to exploit confusion
  • proxy actors aligned with nation-state interests
These groups may launch attacks intended to disrupt services, deface websites or leak stolen data for political impact.

A Reminder for Boards and Security Teams
Events like this are a reminder that cyber risk does not exist in isolation from geopolitical developments. Organisations operating globally, particularly those with supply chains or business interests in politically sensitive regions, must assume that digital infrastructure could become collateral damage during international conflicts.

For security teams, the key takeaway is not that a wave of attacks is imminent, but that situational awareness and operational readiness matter.

Cyber resilience is most effective when organisations treat security posture reviews as routine practice rather than emergency reactions.

Sources:
• National Cyber Security Centre alert: https://www.ncsc.gov.uk/news/ncsc-advises-uk-organisations-take-action-following-conflict-in-the-middle-east