08 March 2026

AI Is Moving Faster Than Security Controls

AI is entering organisations faster than the security controls designed to govern it.

AI assistants are now writing code, summarising documents, analysing data, and supporting operational decisions. What began as experimentation is quickly becoming operational dependency.

For security teams, the challenge is not simply adopting AI. The real challenge is understanding how AI changes the way cybersecurity controls need to be validated.

In many organisations, AI tools are already interacting with corporate data, internal systems, and operational workflows.

Yet when security leaders ask a simple question:

“How do we know these AI systems are operating within our control boundaries?”

…the answer is often less clear than expected.


Why AI Security Controls Are Different

Traditional software behaves in predictable ways. Security teams can audit code, validate configuration, monitor logs, and confirm whether controls are operating as intended.

AI systems behave differently.

Modern AI models generate probabilistic outputs rather than deterministic ones. The same prompt may produce different responses, models can evolve through updates, and outputs may influence decisions that were never explicitly coded into the system.
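
To make this concrete, here is a toy sketch (not any particular model’s implementation) of why identical prompts can produce different responses: generation typically samples from a temperature-scaled probability distribution over candidate outputs rather than always choosing the single most likely one.

    # Toy illustration only: real models sample over vocabulary logits,
    # but the principle is the same.
    import math
    import random

    def sample_next(logits, temperature=0.8):
        """Sample one option from temperature-scaled softmax probabilities."""
        scaled = [x / temperature for x in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        probs = [e / sum(exps) for e in exps]
        return random.choices(range(len(logits)), weights=probs, k=1)[0]

    options = ["deploy", "review", "block"]
    logits = [2.0, 1.6, 0.5]  # hypothetical model scores for one fixed prompt

    # The same "prompt" (same logits) yields different outputs across runs.
    for _ in range(5):
        print(options[sample_next(logits)])

Run the script twice and it prints different sequences. Auditing such a system means sampling its behaviour over time, not checking a single output once.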

This creates a shift in how security controls need to be assessed.

Controls designed for traditional systems do not always translate neatly into AI-driven environments.

Examples are already appearing in practice:

  • AI coding assistants generating insecure or non-compliant code
  • Employees uploading confidential documents into AI tools
  • AI platforms accessing internal data through integrations
  • AI agents interacting with APIs or automation platforms beyond their intended scope

In many cases, organisations technically have policies that cover these scenarios.

The real challenge is proving those policies are actually effective in practice.


The Growing Problem of Shadow AI

Just as “Shadow IT” emerged when employees adopted unsanctioned cloud services, many organisations are now experiencing Shadow AI.

Employees are increasingly using AI tools independently to improve productivity. These tools often bypass procurement processes, security reviews, and governance frameworks.

Common examples include:

  • Uploading documents into AI summarisation tools
  • Using AI assistants to analyse internal reports or spreadsheets
  • Generating code snippets with public AI models
  • Connecting AI plug-ins to automate existing workflows

From a security perspective, this creates several unknowns.

Organisations may not know:

  • Which AI tools are being used
  • What data is being shared with them
  • Whether prompts or outputs are stored externally
  • How AI-generated outputs influence operational decisions

The result is a widening gap between policy intent and operational reality.


AI Governance Without Visibility

Many organisations have already responded to AI risk by introducing policies, governance groups, or internal guidance.

These are important foundations.

But policy alone does not create assurance.

The real question is whether organisations can demonstrate that controls around AI usage are actually working.

That means being able to answer questions such as:

  • Do we know where AI tools are being used across the organisation?
  • Can we detect when sensitive data is submitted to external AI services? (see the sketch below)
  • Are AI-generated outputs influencing critical processes without validation?
  • Do we monitor AI integrations and access permissions?
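
Answering the second question above does not have to wait for specialist tooling. A minimal sketch, assuming a web-proxy log export with hypothetical user, dest_host and bytes_out columns, that flags large uploads to known external AI services:

    # Minimal sketch: flag large outbound transfers to known AI services.
    # The CSV layout (user,dest_host,bytes_out) and the threshold are
    # illustrative assumptions; adapt both to your proxy's export format.
    import csv

    AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}
    UPLOAD_THRESHOLD = 1_000_000  # ~1 MB outbound suggests a document upload

    def flag_ai_uploads(path="proxy.csv"):
        findings = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                host = row["dest_host"].lower()
                if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                    if int(row["bytes_out"]) >= UPLOAD_THRESHOLD:
                        findings.append((row["user"], host, int(row["bytes_out"])))
        return findings

    for user, host, size in flag_ai_uploads():
        print(f"{user} sent {size:,} bytes to {host}")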

Without measurable answers, AI governance risks becoming another form of tick-box compliance.

Controls may appear compliant on paper but lack operational validation.


Moving Toward Practical AI Security Assurance

Organisations that are managing AI adoption successfully are beginning to treat AI risk in the same way they treat other critical security controls.

The focus shifts from policy statements to evidence, monitoring, and validation.

Practical steps increasingly include:

  • Maintaining an inventory of approved AI systems (see the sketch after this list)
  • Monitoring integrations and API activity
  • Detecting data flows to external AI platforms
  • Ensuring human oversight for critical AI outputs
  • Continuously reviewing permissions and access scope
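
As a sketch of the first step, reconciling an approved inventory against what network telemetry actually shows turns shadow AI from an assumption into a measurable list. The service names and telemetry source here are placeholders:

    # Sketch: reconcile observed AI service usage against an approved inventory.
    # APPROVED_AI_SERVICES would come from your AI asset register; "observed"
    # from DNS or proxy telemetry. All hostnames here are illustrative.
    APPROVED_AI_SERVICES = {
        "copilot.example-vendor.com",  # hypothetical approved coding assistant
        "api.approved-llm.example",    # hypothetical approved API integration
    }

    def shadow_ai_report(observed_hosts):
        """Return observed AI hosts missing from the approved inventory."""
        return sorted(h for h in observed_hosts if h not in APPROVED_AI_SERVICES)

    observed = {"copilot.example-vendor.com", "chat.openai.com", "claude.ai"}
    for host in shadow_ai_report(observed):
        print("Unapproved AI service in use:", host)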

These measures do not remove risk entirely.

But they shift the conversation from:

“Do we have an AI policy?” to the far more important question:

“Can we prove our AI controls are working?”


The Next Cybersecurity Challenge

Every major technology shift has forced organisations to rethink how security controls are validated.

Cloud computing did. DevOps did. SaaS platforms did. AI is now doing the same.

The organisations that manage this transition successfully will not necessarily be those that deploy AI the fastest.

They will be the ones that understand how to measure and validate the controls surrounding it.

Because in cybersecurity, the most important question is rarely whether a control exists.

The real question is whether it works.

03 March 2026

NCSC Warns UK Organisations to Prepare for Potential Iran-Linked Cyber Activity

Geopolitical conflict rarely stays confined to physical battlefields. Increasingly, it spills into the digital domain. The latest escalation of tensions in the Middle East has prompted the UK’s National Cyber Security Centre (NCSC) to issue a warning to organisations to review their cyber security posture and prepare for possible cyber activity linked to Iran.


While the NCSC has stressed that there is currently no confirmed significant increase in direct cyber threats to the UK, it has warned that the situation is fast-moving and organisations should remain alert.

Rising Tensions and Cyber Spillover
The warning follows a sharp escalation in the regional conflict involving Iran, the United States and Israel. Military developments have been accompanied by cyber activity targeting digital infrastructure and online services in the region, highlighting how modern conflicts now run across both physical and digital fronts.

In response, the NCSC has advised UK organisations to review their cyber defences and ensure they are prepared for possible disruption. The agency noted that while the direct cyber threat level to the UK has not significantly changed, there is “almost certainly a heightened risk of indirect cyber threat” for organisations with operations, assets or supply chains in the Middle East.

This includes potential activity from Iranian state actors as well as Iran-aligned hacktivist groups.

Iran’s Established Cyber Capabilities
Iran has long viewed cyber operations as a strategic tool that allows it to project influence asymmetrically against more technologically advanced adversaries. Over the past decade, Iranian cyber groups have targeted sectors such as energy, finance, transportation and government networks.

Previous campaigns linked to Iranian actors have included destructive malware operations, espionage campaigns and disruptive attacks against critical infrastructure. For example, the widely documented Operation Cleaver campaign targeted energy and transportation organisations globally.

Although Iranian cyber capabilities are generally considered less sophisticated than those of Russia or China, they have demonstrated a willingness to conduct disruptive and politically motivated attacks.

What the NCSC Is Advising Organisations to Do
The NCSC’s guidance is not calling for panic, but it does emphasise the importance of cyber resilience during periods of geopolitical instability.

Organisations are advised to:
  • Review their external attack surface and internet-exposed services (see the sketch below)
  • Increase monitoring for suspicious activity
  • Prepare for common threat tactics such as phishing and distributed denial-of-service (DDoS) attacks
  • Ensure patching and vulnerability management processes are up to date
  • Review incident response plans and escalation procedures
The NCSC has also encouraged organisations to sign up to its Early Warning service, which provides alerts about potential security issues affecting UK networks.
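
As one way to make the first recommendation routine rather than reactive, here is a minimal sketch that checks which common service ports answer on hosts you own. The host list is a placeholder, and you should only ever scan infrastructure you are authorised to test:

    # Minimal sketch: confirm which common ports are reachable on your own,
    # authorised, internet-facing hosts. Hostnames below are placeholders.
    import socket

    HOSTS = ["vpn.example.com", "mail.example.com"]  # your external estate
    PORTS = [22, 80, 443, 3389]                      # services worth questioning

    def exposed_services(hosts=HOSTS, ports=PORTS, timeout=2.0):
        findings = []
        for host in hosts:
            for port in ports:
                try:
                    with socket.create_connection((host, port), timeout=timeout):
                        findings.append((host, port))
                except OSError:
                    pass  # closed, filtered, or unresolvable
        return findings

    for host, port in exposed_services():
        print(f"{host}:{port} is reachable from the internet")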

The Risk of Opportunistic Cyber Activity
One important point highlighted in the advisory is that not all cyber activity during geopolitical crises comes directly from state actors.
Periods of international tension often attract:
  • Politically motivated hacktivists
  • Cybercriminal groups seeking to exploit confusion
  • Proxy actors aligned with nation-state interests
These groups may launch attacks intended to disrupt services, deface websites or leak stolen data for political impact.

A Reminder for Boards and Security Teams
Events like this are a reminder that cyber risk does not exist in isolation from geopolitical developments. Organisations operating globally, particularly those with supply chains or business interests in politically sensitive regions, must assume that digital infrastructure could become collateral damage during international conflicts.

For security teams, the key takeaway is not that a wave of attacks is imminent, but that situational awareness and operational readiness matter.

Cyber resilience is most effective when organisations treat security posture reviews as routine practice rather than emergency reactions.

Sources:
• National Cyber Security Centre alert: https://www.ncsc.gov.uk/news/ncsc-advises-uk-organisations-take-action-following-conflict-in-the-middle-east

20 February 2026

AI in the SOC: Why Complete Autonomy Is the Wrong Goal

Dan Petrillo, VP of Product at BlueVoyant 

 

As artificial intelligence (AI) becomes more deeply embedded in security operations, a divide has emerged in how its role is defined. Some argue the security operations centre (SOC) should be fully autonomous, with AI replacing human analysts. Others believe that augmentation is the right path, using AI to support and extend existing teams. 

 

Augmentation better reflects how SOCs operate in practice. It helps analysts triage alerts and investigate incidents faster, and brings better context into their work, while still ensuring humans are accountable for decisions.

 

Complete autonomy assumes a level of reliable, end-to-end decision-making that can operate without continuous human oversight. That’s a high bar. In real SOC environments, the technology, data quality, and operational constraints rarely support that assumption. Detection pipelines are noisy, context is fragmented across tools, and threat signals often require human judgment to interpret correctly. Even the most advanced automation struggles with edge cases, ambiguous alerts, and the dynamic nature of attacker behaviour. 

 

Why an Autonomous SOC Falls Short 

Why can AI not fully replace SOC analysts? In short, the case for full autonomy rests on oversimplifying what security operations actually involve. Investigation is only one part of a functioning SOC. Organisations also depend on experienced practitioners to interpret ambiguous signals, manage escalation, and communicate risk to senior leadership. When incidents become business issues, that same expertise is required to apply judgement, coordinate stakeholders, and produce reporting that stands up to scrutiny.

 

When something goes wrong, such as a logging failure, a broken parser following a third-party firewall update, or months of missing telemetry, automated systems cannot resolve the issue alone. Human expertise is needed to understand context, reconstruct events, and guide remediation. 

 

Governance is another constraint. The cost of false negatives remains unacceptably high, and security leaders are unlikely to deploy solutions that act without clear oversight. Even where AI can execute parts of a workflow, organisations still require process controls, quality checks, and human validation for complex or unfamiliar scenarios. A fully autonomous model cannot reliably make the right judgement call in every situation, particularly when decisions carry real business impact. 

 

Accuracy risks also remain. AI systems can make mistakes, draw incorrect conclusions, or miss important signals if left unchecked. Human oversight therefore remains essential to spot errors early and prevent them from turning into operational problems. 

 

Ultimately, fully autonomous SOC models ask organisations to trade human judgement and accountability for AI that is still maturing. That trade-off is impractical in an environment where consequences are measured in real-world disruption. 

 

Why AI in the SOC Is Still Essential 

However, none of the above suggests that AI does not have a place in the SOC. When implemented with purpose, it delivers measurable improvements in the areas where teams are under the most pressure.

 

AI can take on repetitive, high-volume tasks such as alert triage and enrichment, allowing analysts to focus on more complex investigations, decision-making, and response. Deployed effectively, AI in the SOC is essential to reclaiming human time from low-value activity, enabling teams to apply expertise where it has the greatest operational payoff.

 

Some of the most significant benefits of integrating AI agents into human-led SOC teams include: 

  • Workload reduction: AI can handle repetitive, high-volume tasks such as alert triage, dynamic enrichment, and report generation, reducing analyst fatigue and operational backlog. 
  • Process consistency: AI helps standardise workflows across varying skill levels, smoothing differences in tool syntax and operating procedures so teams perform more consistently. 
  • Improved alert quality: By incorporating external threat intelligence, control telemetry, and asset context, AI can reduce false positives and support more accurate prioritisation.
  • Faster decision-making: Attack timelines, path mapping, and context-rich summaries enable analysts to assess scope, impact, and containment options more quickly. 
  • Knowledge retention: AI working alongside human analysts captures operational insights over time, mitigating the impact of staff churn and preserving institutional knowledge. It can also identify patterns that may be missed by individuals and recommend rules or remediations accordingly. 
  • Always on: AI doesn’t need breaks, get tired, fall ill, take holidays, or turn up late. It becomes a consistently reliable coworker for stretched teams working under pressure. 

 

Where Augmentation Delivers the Most Value 

AI delivers the greatest value when applied to SOC activities that are slow, manual, or prone to inconsistency, while keeping humans accountable for decisions and execution. 

 

Augmentation should be introduced first in areas where AI can speed up analysis, surface insight, and support judgement, without removing human oversight. Below are a few areas where you might consider using AI to augment your team:

  • Alert triage: False-positive reduction, dynamic enrichment, and contextual prioritisation using threat intelligence, asset criticality, and exposure data (see the sketch after this list).
  • Augmented investigations: Natural language querying, attack path and timeline visualisation, and suggested queries that speed root-cause analysis. 
  • Incident and case summarisation: Automated executive- and GRC-ready reporting that consolidates findings with clear, decision-ready context. 
  • Hypothesis generation: Continuous pattern and behaviour analysis to surface new detections, investigative approaches, and remediation opportunities for human approval. 
  • Operational oversight: AI that learns expected procedures and flags process deviations, bottlenecks, or underperformance for leadership attention. 
  • Response recommendations: Context-aware guidance and playbook generation, with optional integration-driven execution remaining under human control. 
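
To ground the first of these, here is a hedged sketch of contextual prioritisation: combining detection severity with asset criticality and a threat-intelligence match into a single score. The field names, weights, and data sources are illustrative assumptions, not any vendor’s actual model:

    # Sketch: contextual alert prioritisation. Fields and weights are
    # illustrative; real deployments pull from live intel feeds and a CMDB.
    KNOWN_BAD_IPS = {"203.0.113.7"}  # example entry from a threat intel feed
    ASSET_CRITICALITY = {"payroll-db": 3, "dev-laptop-14": 1}  # from a CMDB

    def priority(alert):
        """Combine severity, asset criticality, and intel hits into one score."""
        score = alert["severity"]  # e.g. 1 (low) to 5 (critical)
        score += ASSET_CRITICALITY.get(alert["asset"], 1)
        if alert["src_ip"] in KNOWN_BAD_IPS:
            score += 3  # strong corroborating signal
        return score

    alerts = [
        {"asset": "dev-laptop-14", "src_ip": "198.51.100.2", "severity": 4},
        {"asset": "payroll-db",    "src_ip": "203.0.113.7",  "severity": 2},
    ]

    # Highest priority first; a human analyst still makes the final call.
    for alert in sorted(alerts, key=priority, reverse=True):
        print(priority(alert), alert["asset"], alert["src_ip"])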

 

What This Means for Security Teams 

Security teams manage millions of investigations every year, even after automating many routine cases. Automation can streamline that work, but full autonomy remains unrealistic. The most critical stages of an investigation still rely on human judgement, context and accountability.

 

AI will continue to enhance the speed, scale and consistency of security operations, but the SOC of the future will remain human-led, with AI augmenting, not replacing, analysts. Organisations that adopt AI in targeted, outcome-driven ways will scale more effectively, reduce risk and preserve institutional knowledge. As threats evolve, AI-augmented SOC teams will not only keep pace but stay ahead of adversaries.

15 February 2026

It’s 2026. Why are the basics still being missed?

Written by Katie Barnett, Director of Cyber Security, and Gavin Wilson, Director of Physical Security and Risk, at Toro Solutions

After spending years working with organisations on security, one thing becomes hard to ignore. When something serious happens, the root causes are, sadly, rarely surprising, and there is often a sense of inevitability to them: access that was never quite tidied up, controls that were written down but not really enforced, multi-factor authentication that was recommended but not mandatory, or decisions that made sense in the moment and were never revisited.

Last year’s headlines about the Louvre brought this into focus. The Louvre Museum, the world’s most visited cultural landmark, faced heavy criticism after investigators revealed that its internal video surveillance system was protected by the password “Louvre.” This came after a daylight heist in which thieves stole French Crown Jewels valued at over $100 million. The striking thing was not how bold the theft was, but how familiar the weakness behind it felt.

It would be comforting to see that as a one-off mistake, but it rarely is. The Louvre was simply visible. Similar assumptions exist inside many organisations, often sitting quietly in the background while attention is pulled towards more immediate concerns. In most cases, people are not unaware of the issues; the issues are just not the ones that shout the loudest.

As you will know, there is no shortage of discussion about how the threat landscape is changing; it changes every day. AI, geopolitical tension, supply chain exposure and the blending of physical and cyber risks are all moving fast, and all feature heavily in conversations with leadership. Yet while the big conversations are happening, it is not unusual to walk into environments where access is loosely understood, vulnerabilities have been accepted by default, and physical security relies on a shared sense of trust rather than consistent control.

Access and identity management is a good example of how this plays out. Access is granted to keep work moving, which is usually the right decision at the time; what happens less reliably is the follow-up. Projects end, people change roles, suppliers move on, and amid increasingly demanding workloads access is forgotten and remains in place because removing it is never a priority. Over time, confidence creeps in where certainty should exist, and that only becomes obvious when something goes wrong.

This is also where passwords and multi-factor authentication continue to cause problems, despite years of attention. It’s been drilled into everyone that passwords alone are weak, reused and easily compromised. Multi-factor authentication (MFA) is now heavily recommended across organisations, yet it is still common to find critical systems without MFA enabled, with MFA applied inconsistently, or disabled because it caused friction. Exceptions become normal and service accounts are excluded because they always have been. None of these decisions feel dramatic on their own, but together they leave credential compromise as one of the easiest ways in.
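
Closing that gap starts with being able to see it. A minimal sketch, assuming an identity export in CSV form with hypothetical account, type and mfa_enabled columns, that lists every account where MFA is absent, including the service accounts so often excluded:

    # Sketch: list accounts without MFA from an identity export. The CSV layout
    # (account,type,mfa_enabled) is an assumption; adapt to your IdP's export.
    import csv

    def accounts_without_mfa(path="identities.csv"):
        gaps = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if row["mfa_enabled"].strip().lower() != "true":
                    gaps.append((row["account"], row["type"]))
        return gaps

    for account, account_type in accounts_without_mfa():
        # Service accounts "excluded because they always have been" show up too.
        print(f"No MFA: {account} ({account_type})")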

The Louvre example resonates precisely because it reduces this to something uncomfortably simple. A globally recognised institution, with significant resources, still relying on a password that offered little real protection for a critical system. This is not a technology problem; it's just what happens when basic controls are never quite treated as urgent enough to demand sustained attention.

Vulnerability management tends to follow a similar path. Patching is rarely ignored outright; instead it is delayed, deferred and worked around, often for understandable reasons. Each decision feels small, but the cumulative effect is not. When an incident eventually occurs, it is often described as sophisticated or unavoidable, even when the weakness involved had been known about for some time and could often have been resolved easily.

Physical security is another area where everyday behaviour quietly undermines formal controls. We have all seen people wearing work badges in public places or holding secure doors open because it feels impolite not to. These moments are easy to dismiss, but they say a lot about how security is experienced day to day. In environments where physical access can open the door to cyber compromise, those behaviours carry more weight than many organisations realise.

Third-party risk is similar. Businesses rely on suppliers to function, and that reliance grows each year. Initial checks are usually done with good intent, but ongoing scrutiny is harder to sustain. Access persists, assumptions build, and visibility fades. When incidents occur through these routes, the surprise often comes from how little the organisation really knew about its own exposure.

Response and recovery are where many of these gaps finally surface. Plans exist, backups are in place, and there is confidence that people will respond sensibly under pressure. In reality, uncertainty plays a bigger role than expected. Decisions take longer and responsibilities are less clear. Recovery takes more effort than anticipated, and the damage often comes as much from the delay that uncertainty causes as from the original incident.

The reason the basics continue to be missed is not a lack of knowledge or capability. It is that foundational security work rarely feels urgent, and it competes constantly with an ever-changing risk landscape and with slick tools and initiatives that promise growth, efficiency or innovation. The basics do not generate visible wins when they work, and they rarely fail in isolation. As a result, risk accumulates quietly, normalised by the absence of immediate consequence.

The organisations that make genuine progress take a different approach. They accept that security fundamentals require ongoing attention, not periodic clean-up. Access is treated as something that changes continuously; physical security is reinforced through everyday behaviour, not just policy; and response and recovery are practised because disruption is assumed, not because it is feared.

As 2026 progresses, the question is no longer whether threats will continue to evolve. They will. The more challenging question is whether organisations are prepared to be disciplined about the things they already know matter. Until the basics are given the same weight as innovation and growth, we will continue to see familiar failures surface in very public ways, followed by the same uncomfortable question of how something so simple was missed again.