07 May 2026

Mythos AI: What Security Leaders Should Do Next

The recent discussion around Anthropic’s Claude Mythos Preview and Project Glasswing has caught the attention of the cybersecurity industry for good reason.

Mythos is not just another AI announcement. It is being positioned as a frontier model with advanced cybersecurity capability, particularly around finding and exploiting software vulnerabilities. Anthropic has stated that Project Glasswing is intended to give selected defenders early access to this capability to help secure critical software, rather than releasing the model broadly.

Cisco has also published guidance following its work with Mythos, explaining that it is changing its near-term threat modelling of AI-enabled attackers and issuing defensive recommendations for customers. That is the important point: a major vendor is adjusting its defensive assumptions now, rather than waiting to see how access to the model plays out.

Whether Mythos itself remains tightly controlled or not, the direction of travel is clear. AI-enabled vulnerability discovery and exploitation capability is improving quickly. Security teams need to prepare for a world where attackers can find, chain and act on weaknesses faster than many organisations can currently respond.

Why Mythos Matters

The concern is not that every attacker suddenly has access to Mythos today.

The concern is that Mythos shows what is becoming possible.

If AI can accelerate vulnerability discovery, exploit development and attack path analysis, then the defensive timeline changes. Security teams cannot rely on slow review cycles, stale evidence or manual-only response models when the speed of threat discovery is increasing.

This does not mean the fundamentals no longer matter.

It means they matter more.

Cisco’s guidance focuses heavily on strengthening fundamentals such as phishing-resistant MFA, Zero Trust, least privilege for AI agents, disciplined patch management and full asset visibility. It also highlights removing end-of-life systems, automating detection and containment, embedding active defences and using AI defensively for threat hunting, validation and testing.

That is where the practical response needs to start.

The Risk Is Speed

Many organisations still manage cyber risk through processes designed for a slower environment.

  • Monthly reporting.
  • Quarterly reviews.
  • Annual testing.
  • Periodic evidence collection.
  • Manual triage.
  • Long remediation cycles.

Those activities still have a place, but they are not enough on their own.

AI-enabled attackers will not wait for the next governance cycle. They will look for exposed systems, weak identity controls, unpatched vulnerabilities, misconfigured cloud services and overlooked legacy platforms.

The key question becomes:

Can we identify and reduce exposure quickly enough?

That is a very different question from simply asking whether a control exists.

What Security Leaders Should Focus On

The response to Mythos should not be panic, hype or rushing to buy more AI tooling.

It should be disciplined improvement in the areas that matter most.

1. Strengthen Security Fundamentals

Start with the controls that reduce the most likely paths of attack:

  • Phishing-resistant MFA.
  • Least privilege.
  • Complete asset visibility.
  • Disciplined patch management.
  • Removal of end-of-life systems.
  • Secure configuration.
  • Segmentation.
  • Logging and monitoring.
  • Tested incident response.

These are not new ideas. The challenge is proving they are actually working across the environment.

2. Reduce Structural Risk

End-of-life platforms, unsupported systems and brittle legacy dependencies become more dangerous when attackers can find and chain weaknesses faster.

This is not just a technology hygiene issue.

It is a resilience issue.

Organisations should be clear on where structural risk exists, who owns it, what compensating controls are in place and by when the risk will be reduced.

3. Automate Where Speed Matters

Manual response will always have a role, especially where decisions affect operations. But manual-only models will struggle against AI-driven attack velocity.

Security teams should look at where automation can safely support:

  • Detection.
  • Enrichment.
  • Prioritisation.
  • Containment.
  • Evidence collection.
  • Control validation.

The aim is not blind automation.

The aim is controlled speed.
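As a rough illustration of "controlled speed", the triage step above can be sketched as a simple policy: auto-contain only when confidence is high and the asset is not business-critical, and route everything ambiguous to a human. This is a minimal sketch, not a reference implementation; the alert fields, thresholds and lane names are all assumptions, and a real pipeline would pull them from your SIEM and asset register.

```python
from dataclasses import dataclass

# Hypothetical alert record; the field names are illustrative,
# not taken from any specific SIEM or SOAR product.
@dataclass
class Alert:
    asset: str
    severity: str          # "low" | "medium" | "high"
    confidence: float      # detection confidence, 0.0 to 1.0
    asset_critical: bool   # is this a business-critical system?

def triage(alert: Alert) -> str:
    """Decide the response lane for an alert.

    Returns one of:
      "auto-contain"  - high-confidence hit on a non-critical asset
      "human-review"  - critical asset or ambiguous signal
      "log-only"      - low severity and low confidence
    """
    if alert.severity == "high" and alert.confidence >= 0.9:
        # Never auto-contain a business-critical system without a human decision.
        return "human-review" if alert.asset_critical else "auto-contain"
    if alert.severity == "low" and alert.confidence < 0.5:
        return "log-only"
    return "human-review"
```

The design choice that matters is the explicit human-review default: anything the policy does not positively recognise falls back to an analyst, which is what separates controlled speed from blind automation.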

4. Apply Least Privilege to AI Agents

One important point in the Cisco guidance is that least privilege must also apply to AI agents.

That is a point worth taking seriously.

AI agents may interact with systems, APIs, data, workflows and security tooling. If they are not properly governed, they become powerful operational pathways for anyone who can compromise or manipulate them.

Security teams should be asking:

  • What can the agent access?
  • What actions can it take?
  • Who approved that access?
  • How is activity logged?
  • How is behaviour reviewed?
  • How is access removed when no longer needed?

AI agents should not sit outside normal identity, access and change control disciplines.
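Those questions can be answered mechanically if agent access is expressed as an explicit, owned, time-bound allowlist with default deny and full audit logging. The sketch below assumes that shape; the policy fields and action names are hypothetical, not drawn from any particular agent framework.

```python
# Illustrative allowlist policy for one AI agent. All names here
# ("triage-assistant", "create_ticket", etc.) are hypothetical examples.
AGENT_POLICY = {
    "agent": "triage-assistant",
    "owner": "soc-team",                    # who approved the access
    "expires": "2026-08-01",                # access is time-bound, not permanent
    "allowed_actions": {"read_alert", "read_asset_inventory", "create_ticket"},
}

def authorise(policy: dict, action: str, audit_log: list) -> bool:
    """Allow an action only if it appears on the agent's explicit allowlist.

    Every decision, allowed or denied, is appended to the audit log so
    the agent's behaviour can be reviewed later.
    """
    allowed = action in policy["allowed_actions"]
    audit_log.append({"agent": policy["agent"], "action": action, "allowed": allowed})
    return allowed
```

Note that anything not on the list is denied by default, the log answers "what did it do and was that approved", and the expiry field forces access to be re-justified rather than accumulating — the same identity and change-control disciplines applied to human accounts.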

5. Improve Control Assurance

This is where Mythos becomes especially relevant.

It is not enough to say controls exist.

Security leaders need confidence that key controls are operating effectively and that the evidence behind them is current.

For example, if patch compliance is reported as high, are internet-facing assets included? Are exceptions approved? Are unsupported systems visible? Does asset inventory match the patching data?

If MFA is reported as complete, are privileged users covered? Are break-glass accounts monitored? Are service accounts excluded? Are temporary bypasses reviewed?

If endpoint protection is deployed, are agents active, current and reporting from all in-scope assets?

This is the practical value of control assurance. It challenges assumptions before attackers do.
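The patch-compliance example above is essentially a reconciliation exercise, and even a crude one surfaces the gaps. The sketch below assumes two flat sets of asset names; real inventories and patch-tool exports are messier, so treat this as the shape of the check rather than a finished tool.

```python
# Hedged sketch: reconcile the asset inventory against what the patch
# tool actually reports on. The asset names are made-up examples.

def coverage_gaps(inventory: set, reported: set) -> dict:
    """Compare owned assets with patch-reporting coverage.

    Assets the patch tool has never seen are silent gaps in any
    "patch compliance" figure; assets the tool reports but the
    inventory lacks mean the inventory itself is incomplete.
    """
    return {
        "unreported": sorted(inventory - reported),  # owned, invisible to patching
        "unknown": sorted(reported - inventory),     # patched, missing from inventory
    }

inventory = {"web-01", "web-02", "db-01", "legacy-09"}
patch_reported = {"web-01", "web-02", "db-01"}
gaps = coverage_gaps(inventory, patch_reported)
# gaps["unreported"] == ["legacy-09"]: compliance may read 100%
# while legacy-09 sits outside the patching data entirely.
```

The same two-way comparison applies to the MFA and endpoint-protection questions: the assurance value comes from checking the evidence sources against each other, not from reading any single dashboard.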

What Boards Should Ask

The Mythos discussion should also sharpen board-level cyber questions.

Instead of only asking:

Are we secure?

Boards should increasingly ask:

  • How quickly can we identify exposure?
  • How fresh is our control evidence?
  • Which critical systems still rely on unsupported technology?
  • Where are we dependent on manual response?
  • Are AI agents governed through least privilege?
  • Can we prove key controls are operating effectively?

These are practical questions. They move the conversation away from confidence statements and towards evidence.

Using AI Defensively

AI should not only be seen as an attacker advantage.

Defenders should also use AI where it improves speed, analysis and prioritisation. That might include threat hunting, vulnerability analysis, configuration review, testing, simulation and control validation.

But AI-generated outputs still need challenge.

AI can support assurance, but it should not replace evidence.

Final Thoughts

Mythos matters because it signals where cybersecurity is heading.

AI-enabled capability is likely to make vulnerability discovery, exploit chaining and attack planning faster. That increases pressure on organisations still relying on slow remediation, incomplete visibility and periodic assurance.

The answer is not fear.

The answer is preparation.

Strengthen the fundamentals. Reduce structural risk. Improve visibility. Automate carefully. Govern AI agents. Validate controls with current evidence.

At Cybersecurity Expert UK, I am continuing to explore these themes around practical cyber resilience, assurance and measurable control effectiveness.

I have also been developing AI Labs tools to help security leaders think through exposure, control assurance and operational resilience in a more practical way, including:

  • Threat Exposure Analysis.
  • Control Assurance Validation.
  • Operational Resilience Mapping.
  • Cyber Control Failure Simulation.

You can explore the AI Labs tools here:

AI Labs – Provable Cyber Resilience Tools

The core message is simple.

In an AI-accelerated threat environment, assumptions will not be enough.

Security leaders need evidence they can trust.
