Anthropic tightens AI access as cyberattack risk looms for crypto


Rommie Analytics


Anthropic has moved Claude Mythos Preview into a limited testing phase with a select group of enterprise partners after the model surfaced thousands of critical vulnerabilities across operating systems, web browsers and other software. The disclosure highlights both the immense potential of AI-powered security tools and the new, accompanying risks as capabilities proliferate in the wild.

The company described Mythos Preview as a general-purpose model that, during its internal evaluation, identified high-severity weaknesses across major platforms. Anthropic cautioned that such capabilities could spread rapidly if not managed responsibly, noting that adversaries may deploy these tools before safeguards are in place.

“Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely.”

Security researchers have long warned that AI can accelerate cyberattacks by automating discovery and exploitation. In a landscape where AI-driven threats are increasingly common, Anthropic pointed to alarming trends: AllAboutAI reports that AI-powered cyberattacks rose 72% year over year and that 87% of global organizations experienced AI-enabled attacks in 2025. Against that backdrop, Anthropic emphasized the need for defensive AI tools to outrun bad actors.

To shore up defenses, Anthropic announced Project Glasswing on the same day. The initiative unites more than 40 companies, including Amazon Web Services, Apple, Cisco, Google, JPMorgan, the Linux Foundation, Microsoft and Nvidia, with the goal of using Claude Mythos Preview’s capabilities to find bugs, share data with partners and patch critical vulnerabilities before criminals exploit them.

Key takeaways

- Claude Mythos Preview has identified thousands of critical vulnerabilities across operating systems, browsers and cryptography libraries, underscoring a broad surface area for potential exploitation.
- The majority of these flaws remain unpatched: Anthropic notes that about 99% of the vulnerabilities it found have not yet been fixed.
- Project Glasswing mobilizes a cross-industry coalition to operationalize AI-driven defense, aiming to accelerate bug discovery, disclosure and remediation across the software stack.
- The vulnerabilities span decades, hinting at long-standing fragility in widely used software and persistent risk to critical infrastructure and crypto ecosystems.

AI-driven vulnerability discovery and decades‑old weaknesses

Anthropic’s early findings reveal a troubling reality: flaws that have lingered for years or even decades can still pose meaningful threats today. Among the examples cited were historically significant, since-patched bugs: a 27-year-old vulnerability in OpenBSD that resurfaced in testing, a 16-year-old flaw in the FFmpeg library, and a 17-year-old remote code execution vulnerability in the FreeBSD operating system. The disclosures extended to multiple vulnerabilities in the Linux kernel, illustrating that even well-maintained open-source projects are not immune to latent risks.

Beyond operating systems, Mythos Preview flagged weaknesses in the cryptography landscape—areas that are foundational to secure communications and transactions. The model reportedly identified flaws in widely used libraries and protocols, including TLS, AES-GCM and SSH. Web applications emerged as a particularly fertile ground for vulnerability discovery, with a spectrum of issues ranging from cross-site scripting to SQL injection and cross-site request forgery, the latter often leveraged in phishing-style campaigns.
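Anthropic has not published the specific web-application flaws it found, but the classes named above are well understood. As a minimal sketch of the SQL injection pattern and its standard fix, using Python's built-in sqlite3 module (the table, data and payload here are hypothetical, for illustration only):

```python
import sqlite3

# A toy in-memory database standing in for a real application backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, balance REAL)")
conn.execute("INSERT INTO users VALUES ('alice', 10.0), ('bob', 99.0)")

def lookup_unsafe(name):
    # Vulnerable: user input is spliced directly into the SQL string,
    # so a crafted value can rewrite the query itself.
    return conn.execute(
        f"SELECT balance FROM users WHERE name = '{name}'"
    ).fetchall()

def lookup_safe(name):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT balance FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
# The injected predicate makes the unsafe query match every row.
assert len(lookup_unsafe(payload)) == 2
# The parameterized version finds no user literally named the payload.
assert lookup_safe(payload) == []
```

The same principle, keeping untrusted input out of the code path that interprets it, underlies defenses against XSS (output encoding) and CSRF (unguessable per-session tokens) as well.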

Anthropic stressed that many of these issues are subtle, context-specific or deeply embedded in complex code paths, making them hard to surface through traditional auditing alone. The implication for developers and operators is clear: even mature software stacks can hide critical flaws that AI could help uncover much faster than conventional methods.

The company also highlighted a stark statistic accompanying the findings: the majority of these vulnerabilities had not yet been patched, creating a window of exposure that could be exploited by opportunistic attackers if not addressed promptly.

Glasswing: a coalition for proactive defense

Project Glasswing is pitched as a proactive defense program rather than a retrospective analysis initiative. By pooling resources and expertise from participants across cloud providers, hardware developers, financial institutions and open-source ecosystems, Glasswing seeks to turn AI-driven vulnerability discovery into a learning loop that accelerates patch creation and deployment. The collaboration aims to share insights about emerging threats, coordinate disclosure with vendors and suppliers, and push for rapid remediation before exploitation becomes widespread.

Key participants span industry giants and pivotal security ecosystems: Amazon Web Services, Apple, Cisco, Google, JPMorgan, the Linux Foundation, Microsoft and Nvidia, among others. The initiative reflects a growing trend in which large technology coalitions coordinate to harden software supply chains and reduce the window between vulnerability discovery and patching, an objective that is especially relevant to blockchain and crypto infrastructure, where security incidents can trigger cascading failures across networks and ecosystems.

What this shift means for crypto and cybersecurity ecosystems

For investors and builders in the crypto space, the Mythos Preview findings and Glasswing’s collaborative model offer a more nuanced view of risk and resilience. On the one hand, AI-assisted vulnerability discovery could markedly improve the security posture of crypto platforms, wallets, node software and smart-contract ecosystems by uncovering weaknesses that would have taken humans far longer to detect. On the other hand, early access to such powerful tools poses governance and safety questions: who controls the disclosure of findings, how quickly patches are issued, and how risk is priced for users in real-time markets?

From a market perspective, the activity around AI-enabled security tools could influence demand for security primitives, auditing suites and formal verification services within crypto infrastructure. It also underscores the importance of strong supply-chain security, given that a single zero-day in a widely used library or OS could ripple across decentralized networks, exchanges and custodial services.

Analysts note that the transition period for defense‑driven AI is likely to be fraught. In the long run, advocates expect defense capabilities to dominate, yielding a more secure software ecosystem, but the interim phase will be characterized by widespread misconfigurations, patch delays and evolving threat tactics as attackers adapt to new defensive technologies. Anthropic’s framing suggests that the shift toward AI-assisted defense will not be instantaneous; it will require sustained collaboration, standardized disclosures and rapid patch cycles to reduce the window of exploitation.

Beyond the immediate technical implications, industry observers are watching how policy and governance frameworks adapt to these capabilities. The balance between sharing threat intelligence and protecting sensitive vulnerability data will shape how quickly organizations can benefit from AI-driven defense, including in crypto-focused environments where liability, transparency and user trust are paramount.

As coverage in security circles notes, similar narratives have emerged around AI-enabled code security and the broader debate over how to regulate and deploy AI safely. The media and market response to these discussions has included volatility in cybersecurity equities, underscoring that investors are weighing the reliability of AI-driven defense against the risk of enabling more capable attackers.

In the near term, readers should watch how Glasswing translates the model’s findings into tangible patches and how quickly participating firms can operationalize the shared intelligence. The outcome will likely influence security budgets, developer workflows and incident-response readiness across both traditional tech and crypto-native ecosystems.

What remains uncertain is how quickly the industry can close the patch gap for the vast array of uncovered vulnerabilities and whether AI-assisted defenses can stay ahead of increasingly sophisticated exploitation techniques. The coming months will be telling for developers, operators and policymakers about the feasibility and effectiveness of large-scale, AI-enabled defense programs in reducing systemic risk.

For now, Anthropic’s disclosures reinforce a critical takeaway: as AI capabilities grow, so does the imperative to pair powerful discovery tools with disciplined, collaborative defense—especially in sectors where security is inseparable from trust and continuity.

This article was originally published as Anthropic tightens AI access as cyberattack risk looms for crypto on Crypto Breaking News.
