20 February 2026

AI in the SOC: Why Complete Autonomy Is the Wrong Goal

Dan Petrillo, VP of Product at BlueVoyant 

 

As artificial intelligence (AI) becomes more deeply embedded in security operations, a divide has emerged in how its role is defined. Some argue the security operations centre (SOC) should be fully autonomous, with AI replacing human analysts. Others believe that augmentation is the right path, using AI to support and extend existing teams. 

 

Augmentation more closely reflects how SOCs operate in practice. It helps analysts triage alerts, investigate incidents faster, and bring richer context into their work, while still ensuring humans are accountable for decisions. 

 

Complete autonomy assumes a level of reliable, end-to-end decision-making that can operate without continuous human oversight. That’s a high bar. In real SOC environments, the technology, data quality, and operational constraints rarely support that assumption. Detection pipelines are noisy, context is fragmented across tools, and threat signals often require human judgment to interpret correctly. Even the most advanced automation struggles with edge cases, ambiguous alerts, and the dynamic nature of attacker behaviour. 

 

Why an Autonomous SOC Falls Short 

Why can't AI fully replace SOC analysts? In short, the autonomous model oversimplifies what security operations involve. Investigation is only one part of a functioning SOC. Organisations also depend on experienced practitioners to interpret ambiguous signals, manage escalation, and communicate risk to senior leadership. When incidents become business issues, that same expertise is required to apply judgement, coordinate stakeholders, and produce reporting that stands up to scrutiny. 

 

When something goes wrong, such as a logging failure, a broken parser following a third-party firewall update, or months of missing telemetry, automated systems cannot resolve the issue alone. Human expertise is needed to understand context, reconstruct events, and guide remediation. 

 

Governance is another constraint. The cost of false negatives remains unacceptably high, and security leaders are unlikely to deploy solutions that act without clear oversight. Even where AI can execute parts of a workflow, organisations still require process controls, quality checks, and human validation for complex or unfamiliar scenarios. A fully autonomous model cannot reliably make the right judgement call in every situation, particularly when decisions carry real business impact. 

 

Accuracy risks also remain. AI systems can make mistakes, draw incorrect conclusions, or miss important signals if left unchecked. Human oversight therefore remains essential to spot errors early and prevent them from turning into operational problems. 

 

Ultimately, fully autonomous SOC models ask organisations to trade human judgement and accountability for AI that is still maturing. That trade-off is impractical in an environment where consequences are measured in real-world disruption. 

 

Why AI in the SOC Is Still Essential 

However, none of the above suggests that AI does not have a place in the SOC. When implemented with purpose, it delivers measurable improvements in the areas where teams are under the most pressure. 

 

AI can take on repetitive, high-volume tasks such as alert triage and enrichment, allowing analysts to focus on more complex investigations, decision-making, and response. Deployed effectively, AI in the SOC is essential to reclaiming human time from low-value activity, enabling teams to apply expertise where it has the greatest operational payoff. 

 

Some of the most significant benefits of integrating AI agents into human-led SOC teams include: 

  • Workload reduction: AI can handle repetitive, high-volume tasks such as alert triage, dynamic enrichment, and report generation, reducing analyst fatigue and operational backlog. 
  • Process consistency: AI helps standardise workflows across varying skill levels, smoothing differences in tool syntax and operating procedures so teams perform more consistently. 
  • Improved alert quality: By incorporating external threat intelligence, control telemetry, and asset context, AI can reduce false positives and support more accurate prioritisation. 
  • Faster decision-making: Attack timelines, path mapping, and context-rich summaries enable analysts to assess scope, impact, and containment options more quickly. 
  • Knowledge retention: AI working alongside human analysts captures operational insights over time, mitigating the impact of staff churn and preserving institutional knowledge. It can also identify patterns that may be missed by individuals and recommend rules or remediations accordingly. 
  • Always on: AI doesn’t need breaks, get tired, fall ill, take holidays, or turn up late. It becomes a consistently reliable coworker for stretched teams working under pressure. 
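To make the triage-and-enrichment pattern above concrete, here is a minimal sketch of how an AI-assisted pipeline might attach asset criticality and threat intelligence to a raw alert before an analyst sees it. All names, data sources and scoring weights here are illustrative assumptions, not any product's actual API.

```python
# Hypothetical sketch: enrich raw alerts with context, then rank them
# so analysts see the highest-priority work first. Weights, asset
# criticality values and the threat-intel set are all illustrative.

from dataclasses import dataclass, field

ASSET_CRITICALITY = {"payroll-db": 0.9, "dev-laptop-42": 0.3}  # assumed CMDB data
KNOWN_BAD_IPS = {"203.0.113.7"}  # TEST-NET address used as a stand-in

@dataclass
class Alert:
    source_ip: str
    asset: str
    raw_severity: float            # 0.0 - 1.0 from the detection tool
    context: dict = field(default_factory=dict)

def enrich(alert: Alert) -> Alert:
    """Attach asset and threat-intel context so analysts see it up front."""
    alert.context["asset_criticality"] = ASSET_CRITICALITY.get(alert.asset, 0.5)
    alert.context["known_bad_ip"] = alert.source_ip in KNOWN_BAD_IPS
    return alert

def priority(alert: Alert) -> float:
    """Blend tool severity with enriched context into a single triage score."""
    score = 0.5 * alert.raw_severity + 0.4 * alert.context["asset_criticality"]
    if alert.context["known_bad_ip"]:
        score += 0.1
    return round(min(score, 1.0), 2)

alerts = [
    Alert("203.0.113.7", "payroll-db", 0.6),
    Alert("198.51.100.9", "dev-laptop-42", 0.6),
]
ranked = sorted((enrich(a) for a in alerts), key=priority, reverse=True)
for a in ranked:
    print(a.asset, priority(a))
```

Note that both alerts carry the same tool severity; only the enrichment (critical asset, known-bad source) separates them, which is exactly the contextual prioritisation the bullet list describes.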

 

Where Augmentation Delivers the Most Value 

AI delivers the greatest value when applied to SOC activities that are slow, manual, or prone to inconsistency, while keeping humans accountable for decisions and execution. 

 

Augmentation should be introduced first in areas where AI can speed up analysis, surface insight, and support judgement, without removing human oversight. Below are a few areas where you might consider using AI to augment your team:

  • Alert triage: False-positive reduction, dynamic enrichment, and contextual prioritisation using threat intelligence, asset criticality, and exposure data. 
  • Augmented investigations: Natural language querying, attack path and timeline visualisation, and suggested queries that speed root-cause analysis. 
  • Incident and case summarisation: Automated executive- and GRC-ready reporting that consolidates findings with clear, decision-ready context. 
  • Hypothesis generation: Continuous pattern and behaviour analysis to surface new detections, investigative approaches, and remediation opportunities for human approval. 
  • Operational oversight: AI that learns expected procedures and flags process deviations, bottlenecks, or underperformance for leadership attention. 
  • Response recommendations: Context-aware guidance and playbook generation, with optional integration-driven execution remaining under human control. 

 

What This Means for Security Teams 

Security teams manage millions of investigations every year, even after automating many routine cases. Automation can streamline that work, but full autonomy remains unrealistic. The most critical stages of an investigation still rely on human judgement, context and accountability. 

 

AI will continue to enhance the speed, scale and consistency of security operations, but the SOC of the future will remain human-led, with AI augmenting, not replacing, analysts. Organisations that adopt AI in targeted, outcome-driven ways will scale more effectively, reduce risk and preserve institutional knowledge. As threats evolve, AI-augmented SOC teams will not only keep pace but stay ahead of adversaries.

15 February 2026

It’s 2026. Why are the basics still being missed?

Written by Katie Barnett, Director of Cyber Security, and Gavin Wilson, Director of Physical Security and Risk, at Toro Solutions

After spending years working with organisations on security, one thing becomes hard to ignore. When something serious happens, the root causes are sadly rarely surprising, and there is often a sense of inevitability to them. Access that was never quite tidied up, controls that were written down but not really enforced, multi-factor authentication that was recommended but not mandatory, or decisions that made sense in the moment and were never revisited.

Last year’s headlines about the Louvre brought this into focus. The Louvre Museum, the world’s most visited cultural landmark, faced heavy criticism after investigators revealed that its internal video surveillance system was protected by the password “Louvre.” This came after a daylight heist in which thieves stole French Crown Jewels valued at over $100 million. The striking thing was not how bold the theft was, but how familiar the weakness behind it felt.

It would be comforting to see that as a one-off mistake, but it rarely is. The Louvre was simply visible. Similar assumptions exist inside many organisations, often sitting quietly in the background while attention is pulled towards more immediate concerns. In most cases, people are not unaware of the issues; they are just not the ones that shout the loudest.

As you will know, there is no shortage of discussion about how the threat landscape is changing; it's changing every day. AI, geopolitical tension, supply chain exposure and the blending of physical and cyber risks are all moving fast and often feature heavily in conversations with leadership. Yet while the big conversations are happening, it is not unusual to walk into environments where access is loosely understood, vulnerabilities have been accepted by default, and physical security relies on a shared sense of trust rather than consistent control.

Access and identity management is a good example of how this plays out. Access is granted to keep work moving, which is usually the right decision at the time, but what happens less reliably is the follow-up. Projects end, people change roles, suppliers move on, and amid increasingly demanding workloads access is forgotten and remains in place because removing it is never a priority. Over time, confidence creeps in where certainty should exist, and that only becomes obvious when something goes wrong.
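The missing follow-up can be as unglamorous as a periodic stale-access review: flag any grant that has gone unused beyond a threshold so it is actively re-justified rather than left in place. The grant records and the 90-day threshold below are illustrative assumptions.

```python
# Hypothetical stale-access review: list grants whose last use is
# older than a review threshold. Records and threshold are illustrative.

from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)

grants = [
    {"user": "contractor.a", "system": "billing", "last_used": date(2025, 6, 1)},
    {"user": "analyst.b", "system": "siem", "last_used": date(2026, 2, 10)},
]

def stale_grants(grants, today):
    """Return grants whose last use is older than the review threshold."""
    return [g for g in grants if today - g["last_used"] > STALE_AFTER]

for g in stale_grants(grants, today=date(2026, 2, 20)):
    print(f"review: {g['user']} on {g['system']} (last used {g['last_used']})")
```

Run on a schedule, a check like this turns "we assume old access gets removed" into a short, concrete list someone must act on.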

This is also where passwords and multi-factor authentication continue to cause problems, despite years of attention. It’s been drilled into everyone that passwords alone are weak, reused and easily compromised. Multi-factor authentication (MFA) is now heavily recommended across organisations, yet it is still common to find critical systems without MFA enabled, with MFA applied inconsistently, or disabled because it caused friction. Exceptions become normal and service accounts are excluded because they always have been. None of these decisions feel dramatic on their own, but together they leave credential compromise as one of the easiest ways in.

The Louvre example resonates precisely because it reduces this to something uncomfortably simple. A globally recognised institution, with significant resources, still relying on a password that offered little real protection for a critical system. This is not a technology problem; it's just what happens when basic controls are never quite treated as urgent enough to demand sustained attention.

Vulnerability management tends to follow a similar path. Patching is rarely ignored outright; instead it is delayed, deferred and worked around, often for understandable reasons. Each decision feels small, but the cumulative effect is not. When an incident eventually occurs, it is often described as sophisticated or unavoidable, even when the weakness involved had been known about for some time and could often have been easily resolved. 

Physical security is another area where everyday behaviour quietly undermines formal controls. We have all seen people wearing work badges in public places or holding secure doors open because it feels impolite not to. These moments are easy to dismiss, but they say a lot about how security is experienced day to day. In environments where physical access can be the door opener for cyber compromise, those behaviours carry more weight than many organisations realise.

Third-party risk is similar. Businesses rely on suppliers to function, and that reliance grows each year. Initial checks are usually done with good intent, but ongoing scrutiny is harder to sustain. Access persists, assumptions build, and visibility fades. When incidents occur through these routes, the surprise often comes from how little the organisation really knew about its own exposure.

Response and recovery are where many of these gaps finally surface. Plans exist, backups are in place, and there is confidence that people will respond sensibly under pressure. In reality, uncertainty plays a bigger role than expected. Decisions take longer and responsibilities are less clear. Recovery takes more effort than anticipated, and the damage often comes as much from the delay this uncertainty causes as from the original incident.

The reason the basics continue to be missed is not a lack of knowledge or capability. It is that foundational security work rarely feels urgent, and it competes constantly with an ever-changing risk landscape and slick tools and initiatives that promise growth, efficiency or innovation. The basics do not generate visible wins when they work, and they rarely fail in isolation. As a result, risk accumulates quietly, normalised by the absence of immediate consequence.

The organisations that make genuine progress take a different approach. They accept that security fundamentals require ongoing attention, not periodic clean-up. Access is treated as something that changes continuously; physical security is reinforced through everyday behaviour, not just policy; and response and recovery are practised because disruption is assumed, not feared.

As 2026 progresses, the question is no longer whether threats will continue to evolve. They will. The more challenging question is whether organisations are prepared to be disciplined about the things they already know matter. Until the basics are given the same weight as innovation and growth, we will continue to see familiar failures surface in very public ways, followed by the same uncomfortable question of how something so simple was missed again.

31 March 2025

UK Cybersecurity Weekly News Roundup - 31 March 2025

Welcome to this week's edition of our cybersecurity news roundup, bringing you the latest developments and insights from the UK and beyond.

UK Warned of Inadequate Readiness Against State-Backed Cyberattacks

Cybersecurity experts have sounded the alarm over the UK's growing vulnerability to state-sponsored cyber threats. A recent report by the National Cyber Security Centre (NCSC) shows a 16% increase in severe cyber incidents affecting national infrastructure in 2024. A worrying 64% of public sector IT leaders said they are unsure about best practices, with legacy systems worsening the risk. As digital transformation accelerates, public infrastructure like energy and healthcare face increasing exposure to ransomware and espionage. Read more

NCSC Publishes Roadmap for Post-Quantum Cryptography Migration

The NCSC has published official guidance on migrating to post-quantum cryptography (PQC) to protect against future quantum computing threats. The document urges critical infrastructure operators to begin preparations now, with system discovery and risk assessments expected by 2028. Full migration should be completed by 2035. The roadmap highlights the need for cryptographic agility and risk-based planning in anticipation of quantum threats. Read more

UK Government to Update Software Vendor Security Code of Practice

Following a public consultation, the UK government will publish a revised voluntary code of practice for software vendors later this year. The updated framework will include clearer technical requirements and a new attestation mechanism for vendors to demonstrate compliance. The initiative aims to raise the standard of cybersecurity in commercial software used by UK businesses and public services. Read more

Google Patches Actively Exploited Chrome Zero-Day (CVE-2025-2783)

Google has released an emergency update for Chrome to patch CVE-2025-2783, a high-severity zero-day vulnerability that was being actively exploited in the wild. The flaw allowed attackers to bypass sandbox protections. All users are urged to update their browsers immediately. This marks the second major Chrome zero-day reported in 2025. Read more

UK Considers Ransomware Payment Ban for Public Sector

A proposal to ban ransomware payments by UK public sector and critical infrastructure organizations is under review. While the policy aims to discourage threat actors, experts warn that it may increase the pressure on under-prepared organizations and push attacks toward entities with no ability to recover quickly.

24 March 2025

UK Cybersecurity Weekly News Roundup - 23 March 2025

Welcome to this week's edition of our cybersecurity news roundup, bringing you the latest developments and insights from the UK and beyond.

NHS Scotland Confirms Cyberattack Disruption

On 20 March 2025, NHS Scotland reported a major cyber incident that caused network outages across multiple health boards. The cyberattack disrupted clinical systems and led to delayed patient care, with staff reverting to paper-based processes. The incident has been linked to a suspected ransomware group, although official attribution is still pending. Investigations are ongoing with support from the National Cyber Security Centre (NCSC).

Further coverage from The Register confirmed that some systems were taken offline to prevent further spread, while emergency care remained operational. The affected regions included NHS Dumfries and Galloway, which issued a statement urging patients to only attend if absolutely necessary. (Read more on The Register)

NCSC Weekly Threat Report – 22 March 2025

The NCSC's latest threat report highlights ongoing exploitation of known vulnerabilities in Progress Telerik UI by state-aligned threat actors. The report urges UK organisations to patch vulnerable systems immediately, as attackers continue to target unpatched web servers.

Additionally, the NCSC notes an increase in malicious QR code campaigns—so-called "quishing"—where attackers embed phishing URLs into QR codes used in emails, posters, or even receipts. Organisations are advised to educate staff and implement QR code scanning policies.

Cyber Threats on the Rise as UK Eyes General Election

As the UK gears up for a general election later this year, the NCSC has raised concerns over potential interference campaigns and disinformation efforts by hostile states. Security services are reportedly on high alert, coordinating with political parties to bolster cyber resilience. While no major incidents have been reported yet, the threat landscape is being closely monitored.

Quick Bytes

  • New phishing campaign mimics HMRC emails demanding urgent tax repayment. Be vigilant and double-check all official correspondence.
  • UK universities warned of increased targeting by espionage-motivated groups, particularly in the fields of AI and quantum computing.
  • ICO fines a London-based telemarketing firm £130,000 for unlawful data use and non-compliance with GDPR.

That’s all for this week! Stay tuned for more updates, and follow best practices to keep your systems secure.

16 March 2025

UK Cybersecurity Weekly News Roundup - 16 March 2025

Welcome to this week's edition of our cybersecurity news roundup, bringing you the latest developments and insights from the UK and beyond.

UK Government's Stance on Encryption Raises Global Concerns

The UK government has ordered Apple to provide backdoor access to iCloud users' encrypted backups under the Investigatory Powers Act of 2016. This secret order applies not just to UK users but potentially to Apple users worldwide. In response, Apple has removed its Advanced Data Protection feature in the UK, expressing disappointment. This move has significant implications, raising concerns about global user privacy and security. Experts argue that creating backdoors compromises overall security, potentially allowing malicious entities to gain access. Apple's compliance or resistance will set a precedent for other governments seeking similar access. Read more

Sellafield Nuclear Site Improves Physical Security Amid Cybersecurity Concerns

Sellafield, the world's largest plutonium store, has been taken out of special measures for physical security by the UK's nuclear industry regulator, the Office for Nuclear Regulation (ONR). This decision follows significant improvements in guarding arrangements, allowing routine inspections instead of enhanced regulatory oversight. However, concerns regarding its cybersecurity remain. Last year, Sellafield was fined almost £400,000 for cybersecurity failings, allegedly involving hacking groups linked to Russia and China. While there was no conclusive evidence of a successful cyber-attack, cybersecurity remains a critical concern. Read more

UK Businesses Face Significant Financial Impact from Cyberattacks

In the past five years, cyberattacks have cost British businesses approximately £44 billion ($55.08 billion) in lost revenue, with 52% of private sector companies experiencing at least one attack during that period, according to insurance broker Howden. On average, these attacks cost companies 1.9% of their annual revenue. Larger companies, with over £100 million in annual revenue, are more likely to be targeted. Despite the significant risk, only 61% of businesses employ anti-virus software, and only 55% use network firewalls, due to cost and lack of internal IT resources. Read more

Global Sanctions Target Russian Cybercrime Network

The United States, United Kingdom, and Australia have jointly sanctioned Zservers, a Russian bulletproof web-hosting service provider, and two Russian operators linked to it for supporting the LockBit ransomware syndicate. The U.S. Treasury Department's Office of Foreign Assets Control, along with its U.K. and Australian counterparts, targeted Zservers for facilitating LockBit attacks by providing specialized servers resistant to law enforcement actions. Lock