Anthropic has announced the general availability of its latest AI model, Claude Opus 4.7, positioning it as a direct upgrade over Opus 4.6 with significant gains in advanced software engineering and multimodal capabilities. The release comes as the company iterates toward more powerful systems while cautiously testing safety mechanisms ahead of a broader deployment of its more advanced Claude Mythos Preview.
Anthropic says Opus 4.7 performs better on complex, long-running coding tasks, allowing users to delegate work that previously required close oversight. The model shows improved instruction-following, with a more literal interpretation of prompts, which may require developers to adjust existing workflows. It also introduces stronger self-verification behavior, meaning it attempts to validate its outputs before returning results.
A key upgrade is in multimodal performance. Opus 4.7 can process images up to 2,576 pixels on the long edge, more than triple the resolution of earlier Claude models. This enables use cases such as analyzing dense screenshots, extracting data from diagrams, and supporting pixel-precise design workflows. Internally, Anthropic reports improved performance in domains such as finance, legal reasoning, and document analysis, including stronger results on third-party benchmarks measuring economically valuable knowledge work.
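Given the stated ceiling of 2,576 pixels on the long edge, a client might downscale oversized images before submission to stay within bounds. A minimal sketch; the limit value comes from this announcement, but the preprocessing step itself is an assumption, not a documented requirement:

```python
# Compute dimensions that fit an image within a long-edge limit while
# preserving aspect ratio. LONG_EDGE_LIMIT reflects the 2,576-pixel
# figure reported for Opus 4.7.
LONG_EDGE_LIMIT = 2576

def fit_to_long_edge(width: int, height: int,
                     limit: int = LONG_EDGE_LIMIT) -> tuple[int, int]:
    """Return (new_width, new_height) scaled so max(dims) <= limit."""
    long_edge = max(width, height)
    if long_edge <= limit:
        return width, height  # already within bounds, no resize needed
    scale = limit / long_edge
    return round(width * scale), round(height * scale)

print(fit_to_long_edge(5152, 2000))  # (2576, 1000) — halved to fit
print(fit_to_long_edge(1920, 1080))  # (1920, 1080) — unchanged
```

An image library such as Pillow could then perform the actual resize with the returned dimensions.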
The model is now available across Anthropic’s ecosystem, including its API and integrations with platforms such as Amazon Bedrock, Google Vertex AI, and Microsoft Foundry. Pricing remains unchanged from Opus 4.6 at $5 per million input tokens and $25 per million output tokens.
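At those rates, per-request cost is straightforward to estimate. A quick sketch using the published figures of $5 and $25 per million input and output tokens:

```python
# Estimate per-request cost at Opus 4.7's published rates:
# $5 per million input tokens, $25 per million output tokens.
INPUT_RATE = 5.00 / 1_000_000
OUTPUT_RATE = 25.00 / 1_000_000

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of a single API call."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 10k-token prompt producing a 2k-token reply:
print(f"${request_cost(10_000, 2_000):.2f}")  # $0.10
```

The 5:1 output-to-input price ratio means long generations, not long prompts, dominate cost in most workloads.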
Anthropic is also introducing new controls for developers, including an “xhigh” effort level that balances reasoning depth and latency, as well as task budgeting tools to manage token usage in longer workflows. Additional features in its coding environment include an automated code review tool and expanded autonomous execution modes.
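Anthropic's Messages API accepts request parameters such as `model` and `max_tokens`, but the announcement does not specify how the new effort and budgeting controls are exposed, so the `effort` field and tier names below are hypothetical placeholders rather than documented parameters:

```python
# Sketch of a request payload combining a hypothetical "effort" control
# with a hard output-token budget. Only "model", "max_tokens", and
# "messages" mirror the real Messages API shape; "effort" and the model
# identifier string are assumptions for illustration.
def build_request(prompt: str, effort: str = "xhigh",
                  token_budget: int = 8192) -> dict:
    if effort not in {"low", "medium", "high", "xhigh"}:  # assumed tiers
        raise ValueError(f"unknown effort level: {effort}")
    return {
        "model": "claude-opus-4-7",  # hypothetical model identifier
        "max_tokens": token_budget,  # caps output spend per call
        "effort": effort,            # assumed parameter, see lead-in
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Review this diff for regressions.")
print(payload["effort"], payload["max_tokens"])  # xhigh 8192
```

The point of the sketch is the pattern, not the field names: budgeting caps worst-case spend on long workflows, while the effort knob trades reasoning depth against latency per call.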
On safety, Opus 4.7 is the first model released under Anthropic’s new cybersecurity framework introduced with Project Glasswing. The system includes safeguards that detect and block high-risk cyber-related queries. For vetted professionals, the company has launched a Cyber Verification Program to allow legitimate security research and testing.
Why This Matters
The release reflects a broader shift in enterprise AI toward reliability and autonomy. Improvements in coding and long-task execution make models like Opus 4.7 more viable for real-world development workflows, reducing the need for constant human supervision.
Enhanced vision capabilities also expand AI’s role in design, analytics, and operations, where interpreting complex visuals is critical. At the same time, the introduction of cybersecurity safeguards highlights growing concerns about misuse as models become more capable.
For businesses, the combination of higher performance and unchanged pricing could accelerate adoption, particularly in software development, finance, and knowledge work automation.
Context
Anthropic has been steadily iterating on its Claude model family, competing with offerings from companies like OpenAI and Google. The company’s strategy emphasizes safety alongside capability, often limiting access to its most advanced systems while testing controls on intermediate models.
The mention of Claude Mythos Preview suggests Anthropic is preparing for a next generation of more powerful AI systems, but is proceeding cautiously due to potential risks, particularly in cybersecurity.
The addition of finer-grained control over compute effort and token usage also reflects an industry-wide trend toward giving developers more control over cost-performance tradeoffs, as AI systems are increasingly deployed in production environments.