20 February 2026

AI in the SOC: Why Complete Autonomy Is the Wrong Goal

Dan Petrillo, VP of Product at BlueVoyant 

 

As artificial intelligence (AI) becomes more deeply embedded in security operations, a divide has emerged in how its role is defined. Some argue the security operations centre (SOC) should be fully autonomous, with AI replacing human analysts. Others believe that augmentation is the right path, using AI to support and extend existing teams. 

 

Augmentation more closely reflects how SOCs operate in practice. It helps analysts triage alerts, investigate incidents faster, and bring better context into their work, while still ensuring humans are accountable for decisions.

 

Complete autonomy assumes a level of reliable, end-to-end decision-making that can operate without continuous human oversight. That’s a high bar. In real SOC environments, the technology, data quality, and operational constraints rarely support that assumption. Detection pipelines are noisy, context is fragmented across tools, and threat signals often require human judgment to interpret correctly. Even the most advanced automation struggles with edge cases, ambiguous alerts, and the dynamic nature of attacker behaviour. 

 

Why an Autonomous SOC Falls Short 

Why can AI not fully replace SOC analysts? In short, the case for full autonomy oversimplifies what security operations involve. Investigation is only one part of a functioning SOC. Organisations also depend on experienced practitioners to interpret ambiguous signals, manage escalation, and communicate risk to senior leadership. When incidents become business issues, that same expertise is required to apply judgement, coordinate stakeholders, and produce reporting that stands up to scrutiny.

 

When something goes wrong, such as a logging failure, a broken parser following a third-party firewall update, or months of missing telemetry, automated systems cannot resolve the issue alone. Human expertise is needed to understand context, reconstruct events, and guide remediation. 

 

Governance is another constraint. The cost of false negatives remains unacceptably high, and security leaders are unlikely to deploy solutions that act without clear oversight. Even where AI can execute parts of a workflow, organisations still require process controls, quality checks, and human validation for complex or unfamiliar scenarios. A fully autonomous model cannot reliably make the right judgement call in every situation, particularly when decisions carry real business impact. 

 

Accuracy risks also remain. AI systems can make mistakes, draw incorrect conclusions, or miss important signals if left unchecked. Human oversight therefore remains essential to spot errors early and prevent them from turning into operational problems. 

 

Ultimately, fully autonomous SOC models ask organisations to trade human judgement and accountability for AI that is still maturing. That trade-off is impractical in an environment where consequences are measured in real-world disruption. 

 

Why AI in the SOC Is Still Essential 

However, none of the above suggests that AI does not have a place in the SOC. When implemented with purpose, it delivers measurable improvements in the areas where teams are under the most pressure.

 

AI can take on repetitive, high-volume tasks such as alert triage and enrichment, allowing analysts to focus on more complex investigations, decision-making, and response. Deployed effectively, AI in the SOC is essential to reclaiming human time from low-value activity, enabling teams to apply expertise where it has the greatest operational payoff.

 

Some of the most significant benefits of integrating AI agents into human-led SOC teams include: 

  • Workload reduction: AI can handle repetitive, high-volume tasks such as alert triage, dynamic enrichment, and report generation, reducing analyst fatigue and operational backlog. 
  • Process consistency: AI helps standardise workflows across varying skill levels, smoothing differences in tool syntax and operating procedures so teams perform more consistently. 
  • Improved alert quality: By incorporating external threat intelligence, control telemetry, and asset context, AI can reduce false positives and support more accurate prioritisation. 
  • Faster decision-making: Attack timelines, path mapping, and context-rich summaries enable analysts to assess scope, impact, and containment options more quickly. 
  • Knowledge retention: AI working alongside human analysts captures operational insights over time, mitigating the impact of staff churn and preserving institutional knowledge. It can also identify patterns that may be missed by individuals and recommend rules or remediations accordingly. 
  • Always on: AI doesn’t need breaks, get tired, fall ill, take holidays, or turn up late. It becomes a consistently reliable coworker for stretched teams working under pressure. 

 

Where Augmentation Delivers the Most Value 

AI delivers the greatest value when applied to SOC activities that are slow, manual, or prone to inconsistency, while keeping humans accountable for decisions and execution. 

 

Augmentation should be introduced first in areas where AI can speed up analysis, surface insight, and support judgement, without removing human oversight. Below are a few areas where you might consider using AI to augment your team:

  • Alert triage: False-positive reduction, dynamic enrichment, and contextual prioritisation using threat intelligence, asset criticality, and exposure data. 
  • Augmented investigations: Natural language querying, attack path and timeline visualisation, and suggested queries that speed root-cause analysis. 
  • Incident and case summarisation: Automated executive- and GRC-ready reporting that consolidates findings with clear, decision-ready context. 
  • Hypothesis generation: Continuous pattern and behaviour analysis to surface new detections, investigative approaches, and remediation opportunities for human approval. 
  • Operational oversight: AI that learns expected procedures and flags process deviations, bottlenecks, or underperformance for leadership attention. 
  • Response recommendations: Context-aware guidance and playbook generation, with optional integration-driven execution remaining under human control. 
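To make the alert-triage pattern above concrete, here is a minimal sketch of context-driven prioritisation: each alert is scored against asset criticality and a threat-intelligence feed before a human reviews it. All names, data shapes, and score weights are illustrative assumptions, not a real product API.

```python
# Hypothetical sketch of AI-assisted alert triage: enrich each alert with
# asset criticality and threat-intel context, then rank for human review.
# Asset names, IPs, and weights are illustrative assumptions.

ASSET_CRITICALITY = {"payroll-db": 9, "dev-laptop-14": 3}   # assumed inventory
KNOWN_BAD_IPS = {"203.0.113.7"}                             # assumed intel feed

def triage_score(alert: dict) -> int:
    """Combine base severity with enrichment context into a single score."""
    score = alert["severity"]                       # base severity, 1-10
    score += ASSET_CRITICALITY.get(alert["asset"], 1)
    if alert["src_ip"] in KNOWN_BAD_IPS:
        score += 5                                  # corroborated by intel
    return score

def prioritise(alerts: list[dict]) -> list[dict]:
    """Rank alerts highest-risk first; humans still decide the response."""
    return sorted(alerts, key=triage_score, reverse=True)

alerts = [
    {"id": 1, "severity": 4, "asset": "dev-laptop-14", "src_ip": "198.51.100.2"},
    {"id": 2, "severity": 4, "asset": "payroll-db", "src_ip": "203.0.113.7"},
]
ranked = prioritise(alerts)  # alert 2 rises to the top despite equal severity
```

The point of the sketch is the division of labour: the machine does the enrichment and ranking, while the decision to contain or escalate stays with the analyst.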

 

What This Means for Security Teams 

Security teams manage millions of investigations every year, even after automating many routine cases. Automation can streamline those tasks, but full autonomy remains unrealistic. The most critical stages of an investigation still rely on human judgement, context and accountability. 

 

AI will continue to enhance the speed, scale and consistency of security operations, but the SOC of the future will remain human-led, with AI augmenting, not replacing, analysts. Organisations that adopt AI in targeted, outcome-driven ways will scale more effectively, reduce risk and preserve institutional knowledge. As threats evolve, AI-augmented SOC teams will not only keep pace but stay ahead of adversaries.

15 February 2026

It’s 2026. Why are the basics still being missed?

Written by Katie Barnett, Director of Cyber Security, and Gavin Wilson, Director of Physical Security and Risk, at Toro Solutions

After spending years working with organisations on security, one thing becomes hard to ignore. When something serious happens, the root causes are sadly rarely surprising and there is often a sense of inevitability to them. Access that was never quite tidied up, controls that were written down but not really enforced, multi-factor authentication that was recommended but not mandatory, or decisions that made sense in the moment and were never revisited.

Last year’s headlines about the Louvre brought this into focus. The Louvre Museum, the world’s most visited cultural landmark, faced heavy criticism after investigators revealed that its internal video surveillance system was protected by the password “Louvre.” This came after a daylight heist in which thieves stole French Crown Jewels valued at over $100 million. The striking thing was not how bold the theft was, but how familiar the weakness behind it felt.

It would be comforting to see that as a one-off mistake, but it rarely is. The Louvre was simply visible. Similar assumptions exist inside many organisations, often sitting quietly in the background while attention is pulled towards more immediate concerns. In most cases, people are not unaware of the issues; they are just not the ones that shout the loudest.

As you will know, there is no shortage of discussion about how the threat landscape is changing; it changes every day. AI, geopolitical tension, supply chain exposure and the blending of physical and cyber risks are all moving fast and often feature heavily in conversations with leadership. Yet while those big conversations are happening, it is not unusual to walk into environments where access is loosely understood, vulnerabilities have been accepted by default, and physical security relies on a shared sense of trust rather than consistent control.

Access and identity management is a good example of how this plays out. Access is granted to keep work moving, which is usually the right decision at the time, but what happens less reliably is the follow-up. Projects end, people change roles, suppliers move on, and amid increasingly demanding workloads, access is forgotten and remains in place because removing it is never a priority. Over time, confidence creeps in where certainty should exist, and that only becomes obvious when something goes wrong.
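That missing follow-up can be made routine with a simple recurring review job. The sketch below flags access grants whose certification has lapsed or whose project has ended; the data shapes, field names and 90-day window are illustrative assumptions, not a real directory API.

```python
from datetime import date, timedelta

# Illustrative stale-access review: flag grants that have not been
# re-certified within a review window, or that belong to ended projects.
# Field names and the window length are assumptions for the sketch.

REVIEW_WINDOW = timedelta(days=90)

def stale_grants(grants: list[dict], today: date) -> list[dict]:
    """Return grants that need human review rather than silent renewal."""
    return [
        g for g in grants
        if g["project_ended"] or today - g["last_certified"] > REVIEW_WINDOW
    ]

grants = [
    {"user": "contractor-a", "system": "crm",
     "last_certified": date(2025, 3, 1), "project_ended": True},
    {"user": "analyst-b", "system": "siem",
     "last_certified": date(2026, 1, 20), "project_ended": False},
]
flagged = stale_grants(grants, today=date(2026, 2, 15))
```

The review itself stays with a person; the job only guarantees that forgotten access surfaces on a schedule instead of when something goes wrong.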

This is also where passwords and multi-factor authentication continue to cause problems, despite years of attention. It’s been drilled into everyone that passwords alone are weak, reused and easily compromised. Multi-factor authentication (MFA) is now heavily recommended across organisations, yet it is still common to find critical systems without MFA enabled, with MFA applied inconsistently, or disabled because it caused friction. Exceptions become normal and service accounts are excluded because they always have been. None of these decisions feel dramatic on their own, but together they leave credential compromise as one of the easiest ways in.
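One way to keep those MFA exceptions from becoming invisible is a recurring coverage check over an exported account list. The sketch below is a minimal illustration under assumed field names; it surfaces gaps, including service accounts, rather than enforcing anything.

```python
# Hypothetical MFA coverage audit over an exported account list.
# Field names are assumptions; the goal is to make exceptions visible.

def mfa_gaps(accounts: list[dict]) -> list[str]:
    """List accounts on critical systems without MFA, service accounts included."""
    return [
        a["name"]
        for a in accounts
        if a["critical"] and not a["mfa_enabled"]
    ]

accounts = [
    {"name": "admin-jane", "critical": True, "mfa_enabled": True},
    {"name": "svc-backup", "critical": True, "mfa_enabled": False},  # "always been" excluded
    {"name": "intern-tom", "critical": False, "mfa_enabled": False},
]
gaps = mfa_gaps(accounts)  # the service-account exception is now on a report
```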

The Louvre example resonates precisely because it reduces this to something uncomfortably simple. A globally recognised institution, with significant resources, still relying on a password that offered little real protection for a critical system. This is not a technology problem; it's just what happens when basic controls are never quite treated as urgent enough to demand sustained attention.

Vulnerability management tends to follow a similar path. Patching is rarely ignored outright; instead it is delayed, deferred and worked around, often for understandable reasons. Each decision feels small, but the cumulative effect is not. When an incident eventually occurs, it is often described as sophisticated or unavoidable, even when the weakness involved had been known for some time and could often have been resolved easily. 

Physical security is another area where everyday behaviour quietly undermines formal controls. We have all seen people wearing work badges in public places or holding secure doors open because it feels impolite not to. These moments are easy to dismiss, but they say a lot about how security is experienced day to day. In environments where physical access can be the door opener for cyber compromise, those behaviours carry more weight than many organisations realise.

Third-party risk is similar. Businesses rely on suppliers to function, and that reliance grows each year. Initial checks are usually done with good intent, but ongoing scrutiny is harder to sustain. Access persists, assumptions build, and visibility fades. When incidents occur through these routes, the surprise often comes from how little the organisation really knew about its own exposure.

Response and recovery are where many of these gaps finally surface. Plans exist, backups are in place, and there is confidence that people will respond sensibly under pressure. In reality, uncertainty plays a bigger role than expected. Decisions take longer and responsibilities are less clear. Recovery takes more effort than anticipated and the damage often comes as much from this uncertainty which causes delay as from the original incident.

The reason the basics continue to be missed is not a lack of knowledge or capability. It is that foundational security work rarely feels urgent, and it competes constantly with an ever-changing risk landscape and slick tools and initiatives that promise growth, efficiency or innovation. The basics do not generate visible wins when they work, and they rarely fail in isolation. As a result, risk accumulates quietly, normalised by the absence of immediate consequence.

The organisations that make genuine progress take a different approach. They accept that security fundamentals require ongoing attention, not periodic clean-up. Access is treated as something that changes continuously, physical security is reinforced through everyday behaviour rather than just policy, and response and recovery are practised because disruption is assumed, not because it is feared.

As 2026 progresses, the question is no longer whether threats will continue to evolve. They will. The more challenging question is whether organisations are prepared to be disciplined about the things they already know matter. Until the basics are given the same weight as innovation and growth, we will continue to see familiar failures surface in very public ways, followed by the same uncomfortable question of how something so simple was missed again.