Cyber Risk · 6 min read

The New Baseline: Why AI Changed What 'Secure Enough' Means

By Dritan Saliovski

Anthropic's Project Glasswing demonstrated that autonomous AI systems can discover zero-day vulnerabilities at a scale and speed that decades of human and automated testing could not match. That single development did not just introduce a new tool. It redefined the minimum standard for what constitutes an adequate cybersecurity posture.

Key Takeaways

  • AI-augmented attack tools can identify vulnerabilities faster than most organizations can patch them
  • Security assessments designed for human-speed threats are no longer sufficient baselines
  • Organizations running penetration tests without AI-assisted attack simulation are benchmarking against yesterday's threat landscape
  • PE deal teams conducting cyber due diligence without AI-augmented external intelligence are absorbing risk they cannot quantify
  • 27 years: age of the oldest flaw found by AI in OpenBSD (Anthropic Project Glasswing, April 2026)
  • Hours: time for AI to generate working exploit chains (industry threat intelligence, 2026)
  • 27 seconds: average breakout time in AI-enabled attacks (CrowdStrike Global Threat Report, 2026)

The Baseline Has Moved

For most of the past two decades, the cybersecurity baseline was defined by frameworks: ISO 27001, NIST CSF, SOC 2. These remain valuable, but they were designed to address threats that move at human speed. The assumption was that vulnerability discovery, exploitation, and lateral movement follow a timeline measured in days or weeks.

AI-assisted attack tooling compresses that timeline to hours. Mythos and similar models can scan codebases, identify logic flaws, and generate working exploit chains without human intervention. The Linux Foundation has begun mapping how AI agents interact with open-source dependencies across software supply chains, recognizing that the discovery surface has expanded beyond what manual review can cover.

This creates an asymmetry that framework compliance alone cannot resolve. An organization can be fully certified against ISO 27001 and still be exposed to attack vectors that no human tester would have found within a standard engagement window.

What This Means for Security Assessments

Traditional penetration testing engagements typically scope a defined set of assets, allocate a fixed number of consultant days, and produce a findings report based on what a skilled human can discover within that window. That model was effective when attackers operated under similar constraints.

The constraint has been removed on the attacker side. AI-assisted reconnaissance tools can enumerate exposed infrastructure, identify misconfigured cloud storage, map third-party dependencies, and correlate credential exposures from prior breaches, all within minutes of targeting an organization.
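To make the correlation step concrete, here is a minimal triage sketch in Python. The `Exposure` record, the finding names, and the severity weights are all illustrative assumptions for this article, not the schema or scoring of any real reconnaissance tool:

```python
from dataclasses import dataclass, field

# Hypothetical exposure record; field names are illustrative only.
@dataclass
class Exposure:
    asset: str
    findings: list = field(default_factory=list)  # e.g. "open_admin_port"

# Illustrative severity weights for common external findings.
WEIGHTS = {
    "public_bucket": 8,
    "open_admin_port": 6,
    "leaked_credential": 9,
    "stale_tls_cert": 3,
}

def triage(exposures):
    """Score each asset by summing finding weights, highest risk first."""
    scored = []
    for e in exposures:
        score = sum(WEIGHTS.get(f, 1) for f in e.findings)
        # A leaked credential plus an exposed admin service compounds the risk,
        # so the sketch adds a correlation bonus for that combination.
        if "leaked_credential" in e.findings and "open_admin_port" in e.findings:
            score += 5
        scored.append((score, e.asset))
    return sorted(scored, reverse=True)

inventory = [
    Exposure("vpn.example.com", ["open_admin_port", "leaked_credential"]),
    Exposure("assets.example.com", ["public_bucket"]),
    Exposure("www.example.com", ["stale_tls_cert"]),
]
for score, asset in triage(inventory):
    print(score, asset)
```

The point of the correlation bonus is the asymmetry described above: individually modest findings, combined at machine speed across thousands of assets, surface compound exposures a time-boxed human engagement would miss.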

Security assessments that do not account for AI-enabled threat capability are testing against a threat model that no longer reflects reality. This does not mean traditional assessments are worthless. It means they are incomplete.

The following table illustrates the gap between traditional and AI-calibrated assessment approaches:

| Dimension | Traditional Assessment | AI-Calibrated Assessment |
| --- | --- | --- |
| Vulnerability discovery | Manual + automated scanning against known CVE databases | AI-assisted attack simulation including zero-day identification |
| Supply chain analysis | Vendor questionnaires, SLA review | AI-speed dependency mapping, component-level risk scoring |
| Threat model assumption | Human-speed attacker with bounded time | AI-speed attacker with near-unlimited reconnaissance capacity |
| Patch cycle benchmark | 30 to 90 day remediation windows | Hours-to-days discovery-to-exploit timelines |
| Scope | Defined asset list, fixed consultant days | Dynamic, continuous, expanding to full attack surface |

Three areas require immediate recalibration. First, penetration testing scope should include AI-assisted attack simulation as a standard component, not an optional add-on. Second, supply chain risk models need to account for the speed at which AI agents can map dependency chains and identify exploitable components. Third, vulnerability management programs should be benchmarked against AI-speed discovery timelines, not human-speed patch cycles. Set against agentic attackers and 27-second breakout times, the recalibration imperative becomes even more urgent.
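The SLA benchmarking point reduces to simple arithmetic. A sketch, using the article's hours-scale exploit timeline against a 30-day patch SLA (the 14-day human weaponization assumption below is illustrative):

```python
def exposure_window_days(discovery_to_exploit_hours: float,
                         patch_sla_days: float) -> float:
    """Days an exploitable flaw stays open after an attacker can weaponize it.

    Simplifying assumption: the attacker's discovery clock and the defender's
    patch clock start at the same disclosure moment.
    """
    exploit_ready_days = discovery_to_exploit_hours / 24
    return max(0.0, patch_sla_days - exploit_ready_days)

# Human-speed assumption: attacker needs roughly two weeks to weaponize.
legacy_gap = exposure_window_days(14 * 24, patch_sla_days=30)  # 16.0 days
# AI-speed assumption from this article: working exploit chains in hours.
ai_gap = exposure_window_days(6, patch_sla_days=30)            # 29.75 days

print(f"legacy-model exposure: {legacy_gap} days")
print(f"AI-speed exposure:     {ai_gap} days")
```

Under the same 30-day SLA, the exploitable window nearly doubles, which is why the benchmark, not just the patch process, has to change.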

Implications for PE Deal Teams

Cybersecurity due diligence in M&A transactions faces the same baseline shift. An external assessment that relies solely on passive scanning and questionnaire-based review was already limited. In a landscape where AI tools can generate a comprehensive external risk profile of a target company in hours, deal teams that do not incorporate AI-augmented intelligence into their process are operating with an incomplete picture. For the complete due diligence methodology, see our practitioner's framework for cybersecurity due diligence.

The financial exposure is direct. Vulnerabilities that an AI-assisted attacker could find in minutes will eventually be found. The question is whether that discovery happens during diligence, when it informs valuation and risk allocation, or post-close, when remediation costs fall entirely on the acquirer. For protecting deal value, the pre-close window is the only point of leverage.

What To Do Now

Organizations should evaluate their current security assessment methodology against the following questions:

  • Does your penetration testing scope include AI-assisted attack simulation?
  • Does your external threat intelligence program account for AI-speed reconnaissance?
  • Does your supply chain risk model reflect the speed at which dependencies can be mapped and exploited?
  • Does your vulnerability management SLA align with AI-speed discovery timelines?
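These diagnostics can be run as a trivial self-assessment. The question keys and the pass/fail framing below are illustrative, not a formal maturity model:

```python
# The four diagnostic questions from this article, keyed for scoring.
QUESTIONS = {
    "ai_attack_simulation": "Does pen-test scope include AI-assisted attack simulation?",
    "ai_speed_intel": "Does threat intelligence cover AI-speed reconnaissance?",
    "supply_chain_speed": "Does the supply chain model reflect AI-speed dependency mapping?",
    "sla_alignment": "Does the vulnerability SLA align with AI-speed discovery timelines?",
}

def assess(answers: dict) -> list:
    """Return every question answered 'no'; each one is a baseline gap."""
    return [q for key, q in QUESTIONS.items() if not answers.get(key, False)]

gaps = assess({"ai_attack_simulation": True, "sla_alignment": False})
for gap in gaps:
    print("GAP:", gap)
```

Any unanswered key defaults to a gap, which mirrors the article's stance: an unexamined assumption counts against the baseline.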

If the answer to any of these is no, the baseline your security program is built on may already be outdated.

The AI-Augmented Assessment Framework covers threat model recalibration guidance, an AI-assisted testing integration checklist, and a PE due diligence overlay for incorporating AI-speed risk into deal evaluation.

Work With Us

Recalibrate Your Security Assessment Baseline

Innovaiden works with leadership teams deploying AI agents across their organizations, from initial setup and training to security framework alignment and governance readiness. Reach out to discuss how we can help your team.

Get in Touch

Frequently Asked Questions

Why is framework compliance no longer sufficient as a security baseline?

Frameworks like ISO 27001, NIST CSF, and SOC 2 were designed for threats that move at human speed, assuming vulnerability discovery and exploitation take days or weeks. AI-assisted attack tooling compresses that timeline to hours. An organization can be fully certified and still be exposed to attack vectors that no human tester would find within a standard engagement window.

What did Project Glasswing demonstrate about AI vulnerability discovery?

Anthropic's Project Glasswing demonstrated that autonomous AI systems can discover zero-day vulnerabilities at a scale and speed that decades of human and automated testing could not match. The model found flaws that survived 27 years of review, including vulnerabilities in OpenBSD and FFmpeg that conventional tools missed entirely.

What three areas of security assessment need immediate recalibration?

First, penetration testing scope should include AI-assisted attack simulation as a standard component. Second, supply chain risk models need to account for the speed at which AI agents can map dependency chains and identify exploitable components. Third, vulnerability management programs should be benchmarked against AI-speed discovery timelines, not human-speed patch cycles.

How does this baseline shift affect PE deal teams conducting cyber due diligence?

Deal teams that do not incorporate AI-augmented intelligence into their process are operating with an incomplete picture. Vulnerabilities that an AI-assisted attacker could find in minutes will eventually be found. The question is whether discovery happens during diligence, when it informs valuation and risk allocation, or post-close, when remediation costs fall entirely on the acquirer.

What questions should organizations ask about their current security posture?

Four diagnostic questions: Does your penetration testing scope include AI-assisted attack simulation? Does your external threat intelligence program account for AI-speed reconnaissance? Does your supply chain risk model reflect the speed at which dependencies can be mapped and exploited? Does your vulnerability management SLA align with AI-speed discovery timelines?