Provable Cyber Resilience - Cybersecurity Expert

11 May 2026

AI Agents Are Creating a New Cybersecurity Blind Spot

The cybersecurity industry has spent years focusing on visibility. Dashboards expanded. Detection tooling improved. Telemetry volumes exploded. Yet one of the biggest emerging risks in 2026 is not hidden malware or an unknown zero-day. It is the rapid deployment of AI agents that organisations barely understand, cannot fully inventory, and often cannot meaningfully govern.

AI agents are moving beyond chat interfaces and simple copilots. They are increasingly capable of reasoning, planning, accessing systems, invoking tools, retrieving information, and taking autonomous actions with limited human involvement. That changes the security conversation entirely.

This is not simply another software category. It is the emergence of autonomous digital workers operating across identity systems, APIs, SaaS platforms, cloud environments, and business processes.

And most organisations are deploying them faster than they can secure them.

Research and industry reporting throughout 2026 show growing concern across both government and enterprise sectors about agentic AI security risks. Security leaders increasingly view autonomous AI systems as one of the most significant new attack surfaces facing organisations.

The concern is justified.

AI agents introduce a combination of risks that traditional governance and security models were never designed to handle.

AI Agents Change the Nature of Identity Risk

Most cybersecurity programmes were built around managing human identities and traditional service accounts. AI agents disrupt that model because they behave more like autonomous actors than passive software components.

Many organisations are now deploying AI agents with:

  • access to internal documentation
  • integration into SaaS platforms
  • permissions to execute workflows
  • API access to sensitive systems
  • delegated authority to make operational decisions

The problem is not simply access. It is scale and autonomy.

Industry forecasts suggest AI agent identities may soon dramatically outnumber human identities inside enterprise environments.

That creates several immediate challenges:

  • identity sprawl
  • excessive permissions
  • unmanaged API tokens
  • poor lifecycle governance
  • invisible machine-to-machine trust relationships
  • difficulty attributing actions and accountability

In many environments, organisations already struggle to maintain accurate inventories of privileged accounts or SaaS integrations. AI agents accelerate that problem significantly.

The result is a growing gap between operational reality and governance visibility.

AI Agents Create a New Attack Surface

The security industry often focuses heavily on model risks such as prompt injection or data poisoning. Those are important, but they are only part of the picture.

The bigger issue is that AI agents operate across interconnected runtime environments.

Modern agents may:

  • consume external data
  • invoke plugins and APIs
  • interact with cloud services
  • maintain persistent memory
  • chain multiple actions together
  • collaborate with other agents
  • execute operational workflows automatically

That creates an entirely new form of runtime attack surface.

Recent research highlights risks that map directly onto those capabilities: indirect prompt injection through retrieved content, abuse of tool and plugin integrations, poisoning of persistent agent memory, and unintended action chains that propagate across connected agents and systems.

The important point is this:

Many of these attacks do not exploit traditional software vulnerabilities. They exploit trust, autonomy, orchestration, and context.

That makes detection and governance significantly harder.

Why Existing Security Controls Are Struggling

One of the most dangerous assumptions organisations can make is believing existing security tooling automatically extends to AI agents.

In many cases it does not.

Traditional controls were largely designed for:

  • deterministic systems
  • predictable workflows
  • static permissions
  • human-driven actions
  • relatively stable software behaviour

AI agents are fundamentally different.

They are probabilistic, adaptive, and capable of unexpected behaviour as context and conditions change.

This creates several assurance problems:

  • inventories quickly become outdated
  • permissions drift continuously
  • actions may not be fully explainable
  • logging lacks meaningful context
  • governance ownership becomes unclear
  • accountability boundaries blur

The challenge is not merely technical. It is operational.

Security teams increasingly face environments where AI functionality appears inside:

  • SaaS products
  • collaboration platforms
  • development tooling
  • cloud management interfaces
  • workflow automation systems
  • productivity platforms

Often these capabilities are enabled by default or adopted informally by business teams before governance frameworks exist.

This is rapidly becoming one of the largest forms of Shadow IT the industry has seen.

The Real Risk Is Governance Lag

The most significant AI security risk in many organisations is not the AI itself.

It is governance lag.

Technology deployment is moving faster than:

  • control validation
  • identity governance
  • operational assurance
  • policy adaptation
  • board understanding
  • security architecture redesign

This creates a dangerous illusion of control.

Dashboards may still appear green while autonomous systems quietly accumulate:

  • privileges
  • integrations
  • external dependencies
  • sensitive data access
  • operational authority

Without strong governance, organisations risk repeating familiar mistakes:

  • deploying first
  • governing later
  • discovering exposure during incidents

The difference now is speed.

AI systems compress timelines dramatically.

What Security Leaders Should Do Next

The organisations responding most effectively are not trying to ban AI agents entirely. They are focusing on visibility, containment, and evidence-driven governance.

Several priorities are emerging:

1. Build an AI Asset Inventory

Most organisations cannot currently answer:

  • which AI agents exist
  • what systems they access
  • what permissions they hold
  • what data they process
  • who owns them

That must change quickly.

AI agents should be treated as managed operational assets with clear ownership and lifecycle governance.
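
To make that concrete, here is a minimal sketch of what a managed inventory record might look like, in Python. The fields are illustrative rather than any standard schema, and the staleness check simply assumes a 90-day review cycle.

```python
# A minimal sketch of an AI agent inventory record, assuming a simple
# internal registry. Field names are illustrative, not a standard schema.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class AgentRecord:
    agent_id: str                        # unique identity, ideally tied to the IdP
    owner: str                           # named human accountable for the agent
    purpose: str                         # business function the agent serves
    systems_accessed: list[str] = field(default_factory=list)
    permissions: list[str] = field(default_factory=list)
    credential_expiry: date | None = None
    last_reviewed: date | None = None    # last governance review

def stale_records(inventory: list[AgentRecord], review_days: int = 90) -> list[AgentRecord]:
    """Flag agents whose governance review is missing or overdue."""
    cutoff = date.today() - timedelta(days=review_days)
    return [a for a in inventory if a.last_reviewed is None or a.last_reviewed < cutoff]
```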

2. Apply Least Privilege Aggressively

Many AI deployments currently operate with excessive permissions for convenience.

That is unsustainable.

AI agents should operate with:

  • constrained access scopes
  • segmented permissions
  • time-limited credentials
  • monitored API activity
  • restricted tool invocation

The principle of least privilege matters even more in autonomous environments.
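
As one hedged illustration, the sketch below mints a short-lived, narrowly scoped credential for an agent using the PyJWT library. The scopes, lifetime and signing-key handling are assumptions for the example; a real deployment would pull these from policy and a managed secret store.

```python
# A sketch of minting a short-lived, narrowly scoped agent credential using
# PyJWT (pip install pyjwt). Scopes, lifetime and key handling are
# illustrative assumptions, not a recommended production pattern.
from datetime import datetime, timedelta, timezone

import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-managed-secret"  # assumption: fetched from a secret store

def mint_agent_token(agent_id: str, scopes: list[str], ttl_minutes: int = 15) -> str:
    now = datetime.now(timezone.utc)
    claims = {
        "sub": agent_id,                              # the agent identity
        "scope": " ".join(scopes),                    # constrained access scope
        "iat": now,
        "exp": now + timedelta(minutes=ttl_minutes),  # time-limited credential
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

# Usage: a ticket-triage agent gets read-only scopes for 15 minutes.
token = mint_agent_token("agent-ticket-triage", ["tickets:read", "kb:read"])
```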

3. Treat AI Runtime Behaviour as an Assurance Problem

The industry increasingly needs continuous validation rather than static approval models.

Security teams should focus on:

  • runtime monitoring
  • behavioural drift detection
  • evidence freshness
  • control verification
  • anomalous workflow analysis

This aligns closely with broader Continuous Control Monitoring (CCM) approaches already emerging across cybersecurity assurance programmes.
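
A simple sketch of what behavioural drift detection can look like in practice: compare how an agent's tool usage is distributed in a recent window against an established baseline, and flag tools whose share of activity has shifted. The tool names and the 10% threshold are illustrative.

```python
# A sketch of behavioural drift detection, assuming you can export counts of
# tool/API calls per agent per period. Tool names and the 10% threshold are
# illustrative.
from collections import Counter

def shares(counts: Counter) -> dict[str, float]:
    total = sum(counts.values()) or 1
    return {tool: n / total for tool, n in counts.items()}

def drifted_tools(baseline: Counter, recent: Counter, threshold: float = 0.10) -> dict[str, float]:
    """Return tools whose share of the agent's activity shifted beyond the threshold."""
    base, cur = shares(baseline), shares(recent)
    return {t: round(cur.get(t, 0.0) - base.get(t, 0.0), 3)
            for t in set(base) | set(cur)
            if abs(cur.get(t, 0.0) - base.get(t, 0.0)) > threshold}

baseline = Counter({"search_docs": 800, "read_ticket": 150, "send_email": 50})
recent   = Counter({"search_docs": 300, "read_ticket": 100, "send_email": 400})
print(drifted_tools(baseline, recent))  # send_email jumped from 5% to 50% of activity
```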

4. Update Governance Frameworks

Most governance structures were not designed for autonomous operational actors.

Boards, risk committees, and security leadership teams need clearer accountability models around:

  • AI deployment ownership
  • operational risk tolerance
  • human override mechanisms
  • auditability
  • resilience testing
  • third-party AI exposure

The governance gap is becoming as important as the technical gap.

Final Thought

AI agents are not simply another cybersecurity trend. They represent a structural change in how digital systems operate.

The organisations that succeed will not necessarily be those deploying AI fastest.

They will be the organisations that can answer:

  • what their AI systems are doing
  • what authority they possess
  • how they are governed
  • how they are monitored
  • whether their controls still work under real operational conditions

That is ultimately the real challenge of AI security in 2026.

Not visibility alone.

But provable assurance.

07 May 2026

Mythos AI: What Security Leaders Should Do Next

The recent discussion around Anthropic’s Claude Mythos Preview and Project Glasswing has caught the attention of the cybersecurity industry for good reason.

Mythos is not just another AI announcement. It is being positioned as a frontier model with advanced cybersecurity capability, particularly around finding and exploiting software vulnerabilities. Anthropic has stated that Project Glasswing is intended to give selected defenders early access to this capability to help secure critical software, rather than releasing the model broadly.

Cisco has also published guidance following its work with Mythos, explaining that it is changing its near-term threat modelling of AI-enabled attackers and issuing defensive recommendations for customers. That is the important point.

Whether Mythos itself remains tightly controlled or not, the direction of travel is clear. AI-enabled vulnerability discovery and exploitation capability is improving quickly. Security teams need to prepare for a world where attackers can find, chain and act on weaknesses faster than many organisations can currently respond.

Why Mythos Matters

The concern is not that every attacker suddenly has access to Mythos today.

The concern is that Mythos shows what is becoming possible.

If AI can accelerate vulnerability discovery, exploit development and attack path analysis, then the defensive timeline changes. Security teams cannot rely on slow review cycles, stale evidence or manual-only response models when the speed of threat discovery is increasing.

This does not mean the fundamentals no longer matter.

It means they matter more.

Cisco’s guidance focuses heavily on strengthening fundamentals such as phishing-resistant MFA, Zero Trust, least privilege for AI agents, disciplined patch management and full asset visibility. It also highlights removing end-of-life systems, automating detection and containment, embedding active defences and using AI defensively for threat hunting, validation and testing.

That is where the practical response needs to start.

The Risk Is Speed

Many organisations still manage cyber risk through processes designed for a slower environment.

  • Monthly reporting.
  • Quarterly reviews.
  • Annual testing.
  • Periodic evidence collection.
  • Manual triage.
  • Long remediation cycles.

Those activities still have a place, but they are not enough on their own.

AI-enabled attackers will not wait for the next governance cycle. They will look for exposed systems, weak identity controls, unpatched vulnerabilities, misconfigured cloud services and overlooked legacy platforms.

The key question becomes:

Can we identify and reduce exposure quickly enough?

That is a very different question from simply asking whether a control exists.

What Security Leaders Should Focus On

The response to Mythos should not be panic, hype or rushing to buy more AI tooling.

It should be disciplined improvement in the areas that matter most.

1. Strengthen Security Fundamentals

Start with the controls that reduce the most likely paths of attack:

  • Phishing-resistant MFA.
  • Least privilege.
  • Complete asset visibility.
  • Disciplined patch management.
  • Removal of end-of-life systems.
  • Secure configuration.
  • Segmentation.
  • Logging and monitoring.
  • Tested incident response.

These are not new ideas. The challenge is proving they are actually working across the environment.
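
One way to start closing that gap is to reconcile raw exports rather than trust dashboards. The sketch below, assuming CSV exports from an identity provider (file and column names are hypothetical), lists active users with no MFA enrolment on record.

```python
# A sketch of turning "MFA is deployed" into checkable evidence, assuming CSV
# exports from your identity provider. File names and columns are hypothetical.
import csv

def mfa_gaps(users_csv: str, mfa_csv: str) -> list[str]:
    """List active users who have no MFA enrolment on record."""
    with open(mfa_csv, newline="") as f:
        enrolled = {row["user_id"] for row in csv.DictReader(f)}
    with open(users_csv, newline="") as f:
        return [row["user_id"] for row in csv.DictReader(f)
                if row["status"] == "active" and row["user_id"] not in enrolled]

gaps = mfa_gaps("idp_users.csv", "mfa_enrolments.csv")
print(f"{len(gaps)} active users without MFA")  # evidence, not assumption
```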

2. Reduce Structural Risk

End-of-life platforms, unsupported systems and brittle legacy dependencies become more dangerous when attackers can find and chain weaknesses faster.

This is not just a technology hygiene issue.

It is a resilience issue.

Organisations should be clear on where structural risk exists, who owns it, what compensating controls are in place and by when the risk will be reduced.

3. Automate Where Speed Matters

Manual response will always have a role, especially where decisions affect operations. But manual-only models will struggle against AI-driven attack velocity.

Security teams should look at where automation can safely support:

  • Detection.
  • Enrichment.
  • Prioritisation.
  • Containment.
  • Evidence collection.
  • Control validation.

The aim is not blind automation.

The aim is controlled speed.
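
A minimal sketch of what that balance can look like: containment actions with a small blast radius run automatically, while anything touching a critical asset waits for an explicit human decision. The action names and asset tiering are illustrative, not taken from any particular product.

```python
# A sketch of "controlled speed": low-blast-radius containment runs
# automatically, while anything touching a critical asset waits for a human
# decision. Action names and asset tiers are illustrative.
AUTO_APPROVED = {"isolate_workstation", "revoke_session", "block_hash"}

def contain(alert: dict, approve) -> str:
    action = alert["recommended_action"]
    if action in AUTO_APPROVED and alert["asset_tier"] != "critical":
        return f"executed:{action}"                 # automate where speed matters
    if approve(alert):                              # human-in-the-loop gate
        return f"executed-with-approval:{action}"
    return "escalated"                              # no silent failure: hand to an analyst

# Usage: a session revocation on a standard asset runs immediately.
print(contain({"recommended_action": "revoke_session", "asset_tier": "standard"},
              approve=lambda a: False))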

4. Apply Least Privilege to AI Agents

One important point in the Cisco guidance is that least privilege must also apply to AI agents.

That is a point worth taking seriously.

AI agents may interact with systems, APIs, data, workflows and security tooling. If they are not properly governed, they can become powerful operational pathways.

Security teams should be asking:

  • What can the agent access?
  • What actions can it take?
  • Who approved that access?
  • How is activity logged?
  • How is behaviour reviewed?
  • How is access removed when no longer needed?

AI agents should not sit outside normal identity, access and change control disciplines.

5. Improve Control Assurance

This is where Mythos becomes especially relevant.

It is not enough to say controls exist.

Security leaders need confidence that key controls are operating effectively and that the evidence behind them is current.

For example, if patch compliance is reported as high, are internet-facing assets included? Are exceptions approved? Are unsupported systems visible? Does asset inventory match the patching data?

If MFA is reported as complete, are privileged users covered? Are break-glass accounts monitored? Are service accounts excluded? Are temporary bypasses reviewed?

If endpoint protection is deployed, are agents active, current and reporting from all in-scope assets?

This is the practical value of control assurance. It challenges assumptions before attackers do.
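
As a worked illustration of the patching questions above, the sketch below reconciles an asset inventory against patch-tool coverage and highlights internet-facing systems that never appear in the patch data. The asset names are hypothetical.

```python
# A worked sketch of the reconciliation questions above, using hypothetical
# asset names. It asks: which in-scope assets never appear in the patch data,
# and which of those are internet-facing?
def patch_blind_spots(inventory: set[str], patched: set[str],
                      internet_facing: set[str]) -> dict[str, set[str]]:
    missing = inventory - patched
    return {
        "not_in_patch_data": missing,
        "internet_facing_and_missing": missing & internet_facing,  # fix these first
    }

print(patch_blind_spots(
    inventory={"web-01", "web-02", "db-01", "legacy-07"},
    patched={"web-01", "db-01"},
    internet_facing={"web-01", "web-02"},
))  # legacy-07 and web-02 never show up in patch reporting; web-02 is exposed
```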

What Boards Should Ask

The Mythos discussion should also sharpen board-level cyber questions.

Instead of only asking:

Are we secure?

Boards should increasingly ask:

  • How quickly can we identify exposure?
  • How fresh is our control evidence?
  • Which critical systems still rely on unsupported technology?
  • Where are we dependent on manual response?
  • Are AI agents governed through least privilege?
  • Can we prove key controls are operating effectively?

These are practical questions. They move the conversation away from confidence statements and towards evidence.

Using AI Defensively

AI should not only be seen as an attacker advantage.

Defenders should also use AI where it improves speed, analysis and prioritisation. That might include threat hunting, vulnerability analysis, configuration review, testing, simulation and control validation.

But AI-generated outputs still need challenge.

AI can support assurance, but it should not replace evidence.

Final Thoughts

Mythos matters because it signals where cybersecurity is heading.

AI-enabled capability is likely to make vulnerability discovery, exploit chaining and attack planning faster. That increases pressure on organisations still relying on slow remediation, incomplete visibility and periodic assurance.

The answer is not fear.

The answer is preparation.

Strengthen the fundamentals. Reduce structural risk. Improve visibility. Automate carefully. Govern AI agents. Validate controls with current evidence.

At Cybersecurity Expert UK, I am continuing to explore these themes around practical cyber resilience, assurance and measurable control effectiveness.

I have also been developing AI Labs tools to help security leaders think through exposure, control assurance and operational resilience in a more practical way, including:

  • Threat Exposure Analysis.
  • Control Assurance Validation.
  • Operational Resilience Mapping.
  • Cyber Control Failure Simulation.

You can explore the AI Labs tools here:

AI Labs – Provable Cyber Resilience Tools

The core message is simple.

In an AI-accelerated threat environment, assumptions will not be enough.

Security leaders need evidence they can trust.

30 April 2026

Adaptive Security Leadership in an Expanding Threat Surface

Last week I joined fellow security leaders at CISO Inspire Summit North for a panel discussion on The Expanding Threat Surface: Adaptive Security Leadership for 2026 and Beyond.

It was a timely discussion, because the challenge facing security leaders today is not simply more threats. It is more connections, more dependencies, and more complexity. Suppliers, SaaS, identities, automation and distributed ways of working have all expanded the attack surface in ways that traditional control models often struggle to keep pace with.

One theme I returned to during the discussion was that many cyber risks are not new. They are often familiar control failures appearing at greater scale and speed.

That matters, because it shifts the focus from chasing every emerging technology risk to strengthening fundamentals.

Security fundamentals still matter most
Identity, ownership, visibility and resilience remain foundational.

As organisations scale, risk often hides where ownership is unclear: where no one truly owns a critical service, a supplier dependency, or a privileged access path.

Adaptive security leadership is not simply about adding more controls. It is about making sure the right controls are owned, evidenced, validated and able to hold under pressure.

Visibility alone is not assurance
Another discussion point was the danger of equating visibility with confidence.

Dashboards can inform. They do not, on their own, assure.

Confidence should come not just from seeing controls, but from evidence they work in practice.

That distinction matters even more as regulatory expectations increase and boards ask harder questions about resilience, not merely compliance.


Complexity is becoming a risk in itself
One point raised during the panel was that we may sometimes over-engineer controls while under-investing in fundamentals.

Complexity can create blind spots.

Adaptive leadership often means simplifying security, making the secure path the default, and reducing friction rather than adding layers that become difficult to sustain.

In many cases resilience improves not through more complexity, but through clearer ownership, stronger validation and simpler control design.

Zero Trust is a direction, not a destination
We also touched on Zero Trust, which is often discussed as an architectural ambition.

I tend to see it more practically.

Strong identity, least privilege, continuous validation and measurable progress matter far more than treating Zero Trust as a finished state.

It is less a destination than a discipline.

One practical takeaway
If there was one practical action I would emphasise, it would be this:
Make ownership explicit for critical services, then test one real failure end-to-end.

That often reveals more about operational resilience than many reporting packs ever will. Turning assumptions into proven resilience remains one of the most important shifts organisations can make.

Final reflection
A strong message from the session was that adaptive security leadership today is increasingly about judgement, accountability and evidence.

Not just technology.

Not just compliance.

But proving controls hold when conditions are less than perfect.

That is where confidence is built.

Thanks again to the organisers, moderator and fellow panellists for a thoughtful discussion.

26 April 2026

AI Agents, Security Culture and a Conversation at Abbey Road Studios

I recently joined a panel at the iconic Abbey Road Studios to discuss a provocative theme: Your AI agent doesn’t care about your security culture. 


It captures an important truth. AI will often scale the quality of the environment it is given, whether that environment is built on strong governance and effective controls, or weak assumptions and poor oversight.

One of the themes explored was accountability. As organisations move from experimenting with AI to operationalising it, the challenge is not only what AI can do, but who governs it, how outcomes are verified, and how control effectiveness keeps pace.

My own takeaway was simple: AI does not compensate for weak controls. It can amplify them.

A fitting discussion in an iconic setting.

25 March 2026

What the UK Cyber Security & Resilience Bill Means for Security Practitioners

The UK Cyber Security and Resilience Bill is working its way through Parliament, with Royal Assent expected later in 2026, and if you haven't started paying serious attention yet, now is the time. Introduced to the House of Commons in November 2025, the Bill represents the most significant overhaul of UK cyber regulation since the NIS Regulations in 2018, and its implications for security practitioners are immediate and practical.

What's Actually Changing
At its core, the Bill expands the existing Network and Information Systems regulatory framework. It brings more organisations into scope, imposes stricter incident notification requirements, and hands regulators substantially more enforcement power. Secondary legislation and statutory Codes of Practice will follow, but the primary architecture of what you'll be working within is already taking shape.

One of the most significant shifts for practitioners working in or alongside managed services is the creation of a new regulated entity category: the Relevant Managed Service Provider (RMSP). For the first time, MSPs providing services to in-scope sectors face direct regulatory obligations. If your organisation is an MSP, or relies heavily on one, your compliance exposure has materially changed.

⚠ Key Point - Incident Reporting Timelines
 The Bill introduces two-stage incident reporting: an initial notification within 24 hours and a full report within 72 hours, with copies sent to the NCSC. Your detection, triage, and escalation workflows need to meet these timelines under real pressure, not just on paper.
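
A small sketch of what that means operationally: given the time an incident is confirmed, the two deadlines can be computed and tested against real triage and escalation timings. The example assumes the clock starts at confirmation; the final legislation and guidance will define the precise trigger.

```python
# A sketch of computing the two-stage deadlines from the moment an incident is
# confirmed. Assumption: the clock starts at confirmation; the final
# legislation and guidance will define the precise trigger.
from datetime import datetime, timedelta, timezone

def reporting_deadlines(confirmed_at: datetime) -> dict[str, datetime]:
    return {
        "initial_notification_due": confirmed_at + timedelta(hours=24),
        "full_report_due": confirmed_at + timedelta(hours=72),
    }

incident = datetime(2026, 6, 12, 22, 30, tzinfo=timezone.utc)  # late-evening confirmation
for stage, due in reporting_deadlines(incident).items():
    print(stage, "->", due.isoformat())  # does your on-call rota cover this window?
```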

Penalties That Command Attention
The financial exposure for non-compliance is substantial and should feature prominently in any board-level conversation about investment in cyber controls.

Maximum Penalty Structure
  • Standard maximum penalty - £10m or 2% of global turnover
  • Higher maximum (serious breaches) - £17m or 4% of worldwide turnover
  • Continuing contraventions (daily) - Up to £100,000 per day
  • Extended ceiling (exceptional cases) - Up to 10% of worldwide turnover
These are not hypothetical. Regulators will also gain cost recovery powers, able to levy periodic fees to fund their oversight activities. Expect more active enforcement, not passive monitoring.
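
To see how the percentages bite, here is a worked sketch assuming the familiar "whichever is greater" construction used in comparable UK and EU regimes; the final secondary legislation will confirm the exact mechanics.

```python
# A worked sketch of the penalty arithmetic, assuming the familiar
# "whichever is greater" construction used in comparable regimes (the final
# secondary legislation will confirm the exact mechanics).
def max_penalties(global_turnover_gbp: float) -> dict[str, float]:
    return {
        "standard": max(10_000_000, 0.02 * global_turnover_gbp),   # £10m or 2%
        "higher":   max(17_000_000, 0.04 * global_turnover_gbp),   # £17m or 4%
    }

# For a firm with £1.2bn global turnover, 2% (£24m) exceeds the £10m floor.
print(max_penalties(1_200_000_000))  # {'standard': 24000000.0, 'higher': 48000000.0}
```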

UK vs NIS2: Don't Assume Alignment
If your organisation already operates under the EU's NIS2 framework, a critical warning: the UK Bill and NIS2 share objectives but diverge in material ways. Reporting thresholds differ, customer notification requirements differ, and the sectors in scope are structured differently. A NIS2-aligned incident response playbook will not automatically satisfy UK obligations.

Practitioners managing cross-border environments will need jurisdiction-specific runbooks. A single process attempting to satisfy both simultaneously risks failing both under pressure.

Supply Chain Risk Is Now Statutory
The Bill introduces the concept of designated "critical suppliers": organisations whose compromise could cause major disruption to the economy or wider society, even if they are not themselves regulated entities. These suppliers will receive formal written notice and will have the right to make representations or appeal.

Secondary legislation will likely impose specific supply chain security obligations on regulated entities, potentially including contractual requirements, security assessments, and continuity planning mandates. The era of passing a questionnaire and considering supply chain risk managed is ending.

🔗 Supply Chain Reality Check
Without consolidated visibility across cloud platforms, SaaS providers, and outsourced partners, your compliance posture is built on assumptions, not evidence. The Bill will expose that gap when regulators come calling.

What Practitioners Should Do Now
The Bill has passed its Report Stage in the Commons and is heading to the House of Lords. Royal Assent is expected later in 2026. Waiting for the final text before acting is not a defensible position.
  • Determine whether your organisation or key MSPs fall into newly in-scope categories, including data centres with Rated IT Load above 1 MW
  • Review incident detection and escalation workflows against the 24-hour initial notification requirement
  • Map divergence between your current NIS/NIS2 compliance posture and what the UK Bill will require
  • Audit your supplier assurance programme, move beyond annual questionnaires towards continuous oversight
  • Engage legal, compliance, and operational teams together; this cannot be owned by security alone
  • Monitor the Bill's progress and watch for secondary legislation, which will contain the operational detail
The regulatory environment for UK cyber security is shifting substantially. The organisations best placed when the Bill receives Royal Assent will be those treating this as a live operational project, not a future compliance task.

Track the Bill's progress via the UK Parliament Bills tracker and the House of Commons Library briefing.

19 March 2026

The True Cost of Cyber Downtime: A UK Board-Level Briefing

Written by Sean Tilley, Senior Sales Director EMEA at 11:11 Systems

Cyber downtime carries measurable financial consequences, and those consequences are becoming clearer with each major incident. Research from 11:11 Systems shows that 78% of European organisations report losses of up to $500,000 per hour following a cyber-related outage, while 6% face costs exceeding £1 million per hour. When recovery extends beyond containment, the disruption begins to register in revenue performance, contractual exposure, and customer stability rather than remaining confined to the technology function.

For UK leadership teams, the issue centres on continuity of income, fulfilment of obligations, and the strength of customer relationships under strain.

Recovery delays compound risk

Half of organisations surveyed require between one and two weeks to fully recover from a cyber incident. Over that period, cost exposure builds in ways that are rarely reflected in early estimates.

Revenue stalls, particularly where digital platforms underpin billing and subscriptions, while service commitments are breached, supply chains experience secondary disruption, and internal teams divert time and budget away from planned initiatives towards remediation and communications.

Extended recovery places additional pressure on customer relationships, especially in sectors where availability is assumed as standard. Regulatory scrutiny increases in parallel, particularly under UK GDPR and sector-specific resilience requirements, where organisations must demonstrate that appropriate safeguards were established before the incident occurred.

A significant proportion of the cost emerges over time rather than immediately. Insurance premiums adjust at renewal, forensic specialists and legal advisers remain engaged, customer notification programmes continue long after systems are restored, and remediation work extends into future quarters. By the time the full impact is visible, the loss total often exceeds initial projections.

According to the Cyber Monitoring Centre, recent UK attacks across retail, healthcare and critical infrastructure have collectively cost businesses more than £1.9 billion. At an individual level, even a contained incident can translate into multi-million-pound losses once revenue interruption, remediation spend and longer-term customer attrition are properly accounted for.

Recovery time remains the decisive variable, steadily increasing commercial strain and regulatory attention the longer disruption persists.

For boards, cyber downtime is no longer a technical failure but a test of governance. In the immediate aftermath of an incident, external scrutiny rarely focuses on how the attack occurred. Instead, attention turns to whether leadership understood its exposure, validated recovery assumptions and exercised informed oversight before disruption struck. Where recovery falters, questions follow around board assurance, investment prioritisation and whether resilience was treated as a compliance exercise rather than a core commercial safeguard worthy of sustained board attention. In that context, prolonged downtime can quickly become a proxy for broader leadership risk.

The preparedness gap

Despite recent high-profile incidents, many organisations still overestimate their ability to recover.

Backup environments may exist without having been stress-tested under realistic conditions; recovery objectives are documented but rarely validated; crisis governance structures that appear clear on paper can lose coherence under pressure; and visibility across cloud platforms, SaaS providers, and outsourced partners frequently remains incomplete.

Modern enterprises operate across layered digital ecosystems that depend on managed services, third-party infrastructure, and interconnected suppliers, each introducing dependencies that may sit outside direct oversight. Without a consolidated view of these relationships, recovery planning remains fragmented and assumptions around restoration timelines tend to be optimistic rather than proven. When those assumptions fail, cost exposure accelerates quickly.

Resilience as a strategic advantage

The organisations that recover fastest are rarely those with the most technology, but those with the clearest decision rights. During major incidents, value is lost less through system failure than through delayed executive judgement: uncertainty over who authorises restoration priorities, how customer communications are sequenced, and which commercial trade-offs are acceptable under pressure. Boards that rehearse these decisions in advance shorten recovery by eliminating hesitation at the moment it matters most. In competitive markets, that decisiveness increasingly separates resilient businesses from those that merely survive disruption.

Containing the cost of downtime requires disciplined preparation rather than reactive response.

Scenario-based recovery testing that includes executive leadership brings clarity to decision-making authority, communication sequencing and operational prioritisation, while tabletop exercises expose governance gaps before they are tested in live conditions.

Disaster Recovery as a Service can materially reduce restoration timelines where isolated environments and immutable backups are properly implemented. Equal attention should be given to external dependencies, with clear understanding of partner capabilities, escalation paths, and recovery commitments established in advance of disruption.

Effective resilience planning therefore extends across internal systems, cloud providers, and supply chain partners, ensuring that recovery capability is aligned rather than siloed.

Preparation does not prevent incidents, but it materially reduces their financial and operational impact.

What This Means for Boards

The commercial exposure created by cyber downtime is now quantifiable and, in many cases, escalating. The central question for boards is how effectively the organisation can absorb disruption without sustained damage to revenue, customer trust or regulatory standing.

Organisations that embed recovery capability into broader business planning place themselves in a stronger position to manage that exposure with discipline, control and credibility.

16 March 2026

When insider risk is a wellbeing issue, not just a disciplinary one

Written by Katie Barnett, Director of Cyber Security at Toro Solutions

Insider risk is still often framed around intent, with the focus placed on malicious employees, disgruntled contractors, or deliberate misuse of access for personal gain.
Those cases exist and they matter, but they are rarely where risk first begins, and they do not reflect how most insider-related incidents actually develop.

In reality, many cases take shape slowly and quietly. They are shaped by pressure, fatigue, disengagement, coercion, manipulation or personal strain rather than hostility. The behaviour that later causes harm is often preceded by long periods of stress, isolation, outside influence or unresolved workplace issues. By the time someone is formally labelled an insider threat, the opportunity for early, proportionate support has usually passed, and the organisation is left with far fewer options.

This is why treating insider risk purely as a disciplinary or compliance issue consistently falls short. In many situations, the underlying issue is one of wellbeing first, with security consequences following later, whether the organisation recognises that link or not.

The scale of the problem

Insiders are a significant and consistent factor in security incidents. Accenture[1] has reported that a significant proportion of security incidents involve insiders, many of which are linked not to sophisticated intent, but to frustration, opportunism, or poor judgement under pressure.

Research from the Ponemon Institute[2] also shows that many employees who leave an organisation take some form of sensitive data with them, often without seeing it as wrongdoing. These findings do not mean that most people are inherently risky. They show how easily people can justify their actions when they feel unsupported, unheard, or under strain.

Despite this, insider risk is still often pushed aside or handled in isolation. In many organisations it moves between HR, security, and legal teams without a shared understanding of what is really driving behaviour. When this happens, patterns are missed and early warning signs become normal, until a more serious incident finally brings the issue to senior attention.

How insider risk really develops

Insider risk rarely begins with a clear breach of policy. More often we find that it develops incrementally through small changes in behaviour that are easy to explain away, particularly in high-pressure or highly trusted roles.

Someone may start working excessive hours to manage workload, gradually bypassing controls that feel obstructive rather than protective. They may disengage from colleagues, become defensive when challenged, or withdraw from routine interaction. None of this suggests malicious intent in isolation, but it often marks the point at which judgement can begin to erode.

In roles with wide access and limited oversight, these issues can go unnoticed for a long time. As people grow more comfortable with the systems, informal shortcuts start to feel normal, and risk builds in the background. By the time leadership becomes aware, it’s often because something has already gone wrong.

In some cases, the influence is external. Individuals may be targeted by criminals, competitors or organised groups who exploit personal vulnerabilities, financial stress or emotional pressure. This does not always look like blackmail or explicit threats. It can begin with flattery, requests for small favours, or appeals to sympathy, and gradually escalate into access, information sharing or rule-bending that feels difficult to refuse.

Coercion does not always come from outside. In some environments it can arise internally through power imbalances, unrealistic expectations, or pressure from senior colleagues that makes it hard to say no without fear of consequences.

Connection without closeness

Modern ways of working have added a new layer of complexity. We are more digitally connected than ever, yet many people now experience their work in relative isolation. Messages replace face to face conversations, context gets lost, and informal check-ins happen far less often.

Judgement does not exist in a vacuum. Stress, fatigue, and emotional strain shape how people interpret information and how carefully they make decisions. When pressure rises and support feels distant, people are more likely to misread situations, take shortcuts, or justify behaviour they would normally question.

This is not just a wellbeing issue. It is a resilience issue. Emotional strain narrows perspective and makes people more open to influence, whether that influence comes from outside the organisation or from their own internal reasoning.

Why the wider environment matters

These dynamics are being intensified by wider economic uncertainty. Prolonged cost-of-living pressures, geopolitical instability, and sustained disruption across global markets are all putting strain on individuals’ finances.

Financial pressure affects how people behave. It makes it harder to focus, increases anxiety, and can reduce how seriously people think about consequences. Some may even feel they have little left to lose. This does not mean they intend to do harm, but it does raise risk, especially for those who have access to sensitive systems, information, or assets.

From a security point of view, financial stress increases risk. When organisations treat financial wellbeing as separate from security, they overlook an important part of the problem.

Financial strain also increases susceptibility to manipulation. People under pressure are more likely to respond to offers of help, opportunities to “fix” problems quickly, or requests that promise relief from stress. From a security perspective, this creates conditions where coercion becomes easier and more effective, even when individuals have no intention of causing harm.

Why controls alone are not enough

When insider risk is identified, organisations often respond in a technical way by tightening access, increasing monitoring, and reinforcing policies, but while these actions are important, they rarely address the underlying conditions that allowed the risk to develop in the first place.

Controls alone do not reduce burnout. Monitoring does not ease financial pressure, and policy reminders do not restore sound judgement. In some situations, a poorly timed escalation can actually increase feelings of mistrust or isolation, which pushes risk further underground instead of resolving it.

Both research and practical experience show that behavioural warning signs often appear before any technical breach occurs, including changes in performance, disengagement, conflict with management, and financial difficulty, and when organisations wait until behaviour crosses a formal threshold, their options become limited and the consequences are usually far more severe.

What “support as prevention” looks like in practice

Support does not mean ignoring misconduct or lowering standards. It means expanding the prevention toolkit so organisations can step in earlier, when the impact is lower and when individuals still have realistic options.

In practice, this often includes:
  • Clear, normalised escalation routes, so staff can raise concerns without automatically triggering a disciplinary process.
  • Line managers trained to notice and act on changes in behaviour, workload strain, or disengagement, and to involve the right functions early.
  • Shared ownership between HR, security, and operational leadership, so people risk does not fall between organisational boundaries.
  • Proportionate, temporary risk management, such as short-term access adjustments or additional oversight while a personal issue is being addressed.
This approach reflects the direction set out in UK protective security guidance, which emphasises treating insider events as connected, strengthening leadership understanding, and addressing the reasons insider risk is often deprioritised or avoided.

Culture determines whether people speak up

In many insider cases, colleagues notice warning signs but decide not to raise them because they worry about getting someone into trouble, triggering an investigation, or being seen as overreacting.

Where people believe that raising concerns will lead to fair and supportive action, reporting becomes more likely, but where they expect blame or punishment, staying silent feels safer.

This is not a training failure. It is a cultural one.

A quieter form of prevention

The most effective insider risk programmes are often the least visible, because they are built into everyday management practice, supported by leadership, and grounded in trust. They recognise that people are both the greatest asset and the most complex part of any security system.

In a world that is increasingly connected but emotionally fragmented, emotional and financial pressures are no longer side issues. They are part of the risk landscape.

For organisations that are serious about resilience, insider risk must be understood not only through controls and compliance, but also through culture, support, and leadership judgement, and this shift does not weaken security. It strengthens it.