Is Anthropic's new product powerful enough to put AI agent infrastructure teams out of work?
Original Title: "Anthropic Today Released a New Product That May Cause a Wave of AI Agent Infrastructure Teams to Lose Their Jobs"
Original Author: Bayu, AI Engineer
This product is called Claude Managed Agents. In a nutshell: you tell Anthropic what kind of AI agent you want, and it runs the agent in the cloud for you, infrastructure included, with usage-based pricing. Sentry used it to ship end-to-end automated bug fixing in a few weeks, and Rakuten deployed a specialized agent in a week. Previously, work like this required an entire engineering team for months.

Meanwhile, Anthropic's annual recurring revenue has just surpassed $30 billion, triple its level last December, with most of the growth coming from enterprise customers. Wall Street has started to get nervous: the WSJ reports that investors are growing cautious on the stock prices of traditional SaaS companies, fearing that products like Anthropic's could make some traditional software services obsolete.
What exactly is this product? How does it differ from the Claude Code you are already using? How was it achieved technically?
What Is It? How Does It Differ from Claude Code?
If you have used Claude Code, you know how AI agents work: you give them a task, and they autonomously plan steps, use tools, write code, modify files, and complete the task step by step.
Claude Code runs on your own computer and is a command-line tool for personal developer use. It stops running when you shut down your computer.
Managed Agents run on Anthropic's cloud and are an API service for enterprise use. They can run continuously 24/7, retain progress even if disconnected, and your product can directly embed AI agent capabilities.
This is how Notion operates: users assign tasks to Claude agents within Notion, the agents work in the background, complete the tasks, and return the results, all without users having to leave Notion.

Several Typical Use Cases:
· Event-triggered: the system discovers a bug, automatically assigns an agent to fix it and open a pull request, with no human in the loop.
· Scheduled: a GitHub activity summary or team work brief is generated automatically every morning.
· Fire-and-forget: hand a task to an agent in Slack; it completes the task and returns a document, slide deck, or app.
· Long-running: a deep-research or code-refactoring task runs for several hours.
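The event-triggered pattern above can be sketched in a few lines. Note that `create_agent_task` below is a hypothetical stand-in for whatever SDK call actually dispatches a cloud task; the real Managed Agents API may look different.

```python
# Illustrative sketch of the event-triggered use case.
# create_agent_task is a HYPOTHETICAL stand-in for the real SDK call.

def create_agent_task(prompt: str, tools: list[str]) -> dict:
    """Pretend dispatcher: in production this would call the cloud API."""
    return {"status": "queued", "prompt": prompt, "tools": tools}

def on_bug_detected(bug_report: dict) -> dict:
    """When monitoring finds a bug, hand it to an agent with repo access."""
    prompt = (
        f"Fix the bug in {bug_report['file']}: {bug_report['summary']}. "
        "Open a pull request with the patch."
    )
    return create_agent_task(prompt, tools=["git", "code_execution"])

task = on_bug_detected({"file": "auth.py", "summary": "token refresh loop"})
print(task["status"])  # queued
```

The point is the shape of the flow, not the API surface: a monitoring event becomes a prompt plus a tool allowlist, and everything after that happens in the managed runtime.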
What's the Difference Between Cloud-Hosted Agents and Self-Built Ones?
You could self-host, but it is costly and slow.
An agent that can go to production requires much more than "calling an API": a sandbox environment (an isolated, secure space where the AI can run code and modify files without touching real external systems, like giving the AI a dedicated virtual machine), credential management, state recovery, permission control, end-to-end tracing, and more.
Many enterprise customers used to need an entire engineering team dedicated to these tasks. Now, it's plug and play, freeing up engineers to focus on the core of the product.
However, the pain points solved by Managed Agents go beyond just saving labor.
Matt Dongslee (@dongxi_nlp) has a succinct summary:

There's a specific example in the Anthropic Engineering Blog:
When Claude Sonnet 4.5 nears its context-window limit, it "panics" and hastily wraps up the task. They added a context-reset step to the scheduling framework to work around this. With Claude Opus 4.5, however, the issue disappeared, and the old patch became a liability.
If you build your own scheduling framework, you have to patch it with every model upgrade. Delegate it to Anthropic, and they optimize it for you; after all, they are optimizing the very thing they sell you.

Who's Using It? How?
Notion lets users offload tasks such as writing code, creating slide decks, and organizing spreadsheets to Claude directly within the workspace, running dozens of tasks in parallel with the whole team collaborating on the same output. Notion product manager Eric Liu said users can delegate open-ended, complex tasks without ever leaving Notion.

Sentry built a fully automated pipeline from bug discovery to code-fix submission: once their AI debugging tool Seer identifies a root cause, Claude writes the patch and opens a pull request directly. Engineering director Indragie Karunaratne said they launched in a few weeks and avoided the ongoing maintenance cost of self-built infrastructure.
Atlassian integrated it into Jira, enabling developers to directly assign tasks to Claude AI.
Asana created AI Teammates, adding AI collaborators in project management who can take on tasks and deliverables.
General Legal (a legal tech company) has the most interesting approach: their AI can create ad-hoc tools to search data based on the user's query. Previously, every possible query had to be anticipated and a retrieval tool built in advance; now the AI generates tools on demand. The CTO said development time dropped 10x.
Rakuten deployed specialized AI agents in its engineering, product, sales, marketing, and finance departments, each going live within a week; they receive tasks via Slack and Teams and deliver tangible outputs such as spreadsheets, slide decks, and apps.
Technical Principle: Decoupling the Brain from the Hands
The Anthropic engineering team wrote a tech blog post titled Scaling Managed Agents: Decoupling the brain from the hands, discussing the architectural evolution behind Managed Agents.


Initially, they shoved everything into one container: AI's inference loop, code execution environment, and session log, all together. The benefit was simplicity, but the downside was that all eggs were in one basket—if the container went down, the entire session was lost, and individual parts could not be replaced separately.
Later, they made a key split:
· The "Brain" is Claude and its scheduling framework, responsible for thinking and decision-making.
· The "Hands" are the sandbox and its tools, responsible for executing concrete operations.
· The "Memory" is an independent session log, recording everything that happens.
The three are independent, and if one goes down, it does not affect the other two.
This split brought several practical benefits:
Speed
Not every task needs to start the full sandbox environment. Now, the sandbox is only launched on-demand when AI truly needs to run code. The median first response latency decreased by about 60%, and in extreme cases, it dropped by over 90%.
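The on-demand launch can be illustrated with a plain lazy-initialization pattern. This is a sketch of the idea, not Anthropic's actual implementation:

```python
# Lazy sandbox start: the (expensive) sandbox is only created when the
# agent actually needs to execute code, not at session start.
# Illustrative pattern only, not Anthropic's implementation.

class Session:
    def __init__(self):
        self._sandbox = None          # not started yet
        self.sandbox_starts = 0       # counter, for illustration

    def _ensure_sandbox(self):
        if self._sandbox is None:     # first code-execution request
            self.sandbox_starts += 1
            self._sandbox = object()  # stand-in for a real container/VM
        return self._sandbox

    def reply(self, text: str) -> str:
        # Pure-text turns never pay the sandbox startup cost.
        return f"echo: {text}"

    def run_code(self, code: str) -> str:
        self._ensure_sandbox()
        return f"ran: {code}"

s = Session()
s.reply("hello")            # fast path, no sandbox
s.run_code("print(1)")      # sandbox starts here
s.run_code("print(2)")      # reuses the running sandbox
print(s.sandbox_starts)     # 1
```

A session that never executes code never starts a sandbox at all, which is where the first-response latency win comes from.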
Security
Code generated by the AI runs in the sandbox, while credentials for external systems are stored in a secure vault outside it, with hard isolation between the two. To access a Git repository, for example, the system clones the code during initialization; the AI uses git push/pull normally but never sees the token itself. Services like Slack and Jira are accessed via the MCP protocol: requests go through a proxy layer, the proxy retrieves credentials from the vault to call the service, and the AI never handles a credential at any point.
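The credential-isolation idea can be sketched as follows. The `VAULT` and `proxy_call` names are illustrative inventions, not the real system; the point is that agent-side code only ever talks to the proxy interface.

```python
# Sketch of credential isolation: the agent asks a proxy to call a
# service; the proxy fetches the token from a vault and attaches it.
# VAULT and proxy_call are illustrative names, not the real system.

VAULT = {"slack": "xoxb-SECRET"}          # lives outside the sandbox

def proxy_call(service: str, request: dict) -> dict:
    token = VAULT[service]                # only the proxy touches the token
    # In production this would be an authenticated HTTP call to the service.
    return {"service": service, "ok": True, "authed": token.startswith("xoxb")}

def agent_post_message(text: str) -> dict:
    # Agent-side code never sees a credential, only the proxy interface.
    return proxy_call("slack", {"text": text})

result = agent_post_message("build finished")
print(result["ok"], "token" in result)    # True False
```

Even if the sandboxed code is fully compromised, there is no token inside it to exfiltrate.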
Flexibility
The brain doesn't care what the hands are. There's a memorable line in the engineering blog: the scheduling framework doesn't know whether the sandbox is a container, a mobile phone, or a Pokémon emulator. It only needs to honor the "input a name, get a string out" interface.
This also means multiple brains can share the same hands, and one brain can hand them over to another, laying the groundwork for multi-agent collaboration.
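That "input a name, get a string out" contract can be modeled as a tiny interface; any hand that honors it is interchangeable from the brain's point of view. A minimal sketch (class and method names are mine, not Anthropic's):

```python
# The brain depends only on this narrow contract: give the hand a tool
# name plus input, get a string back. Names here are illustrative.
from typing import Protocol

class Hand(Protocol):
    def run(self, tool: str, arg: str) -> str: ...

class ContainerHand:
    def run(self, tool: str, arg: str) -> str:
        return f"[container] {tool}({arg})"

class EmulatorHand:                  # could just as well be a phone
    def run(self, tool: str, arg: str) -> str:
        return f"[emulator] {tool}({arg})"

def brain_step(hand: Hand) -> str:
    # The brain neither knows nor cares which hand it is driving.
    return hand.run("ls", "/tmp")

print(brain_step(ContainerHand()))   # [container] ls(/tmp)
print(brain_step(EmulatorHand()))    # [emulator] ls(/tmp)
```

Because `brain_step` is written against the protocol rather than a concrete sandbox, swapping the execution environment never touches the decision-making code.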
Limitations
Managed Agents are not all-powerful. There are several points to note:
Some features are still in the research preview stage. Abilities such as multi-agent collaboration, advanced memory tools, and self-assessment iteration (allowing the agent to judge its own task completion quality and iteratively improve) are not fully open yet and require application for access.
Platform Lock-in. Opting for Managed Agents means your agent infrastructure is tied to the Anthropic ecosystem. If you plan to switch models or platforms in the future, migration costs should not be overlooked.
Context management remains a challenge. While session logs are stored independently, deciding which information to retain or discard during long tasks still involves irreversible decisions. This is an ongoing challenge, and their current approach separates context storage from context management: storage ensures preservation, while management policies adjust with model evolution.
Cost predictability. $0.08 per session hour sounds reasonable, but for complex tasks that keep an agent running for hours, token consumption plus runtime costs can add up. Enterprises need to budget accordingly.
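As a rough back-of-envelope, using the runtime price quoted in this article ($0.08 per active session hour, idle time free) plus token rates that are assumed placeholders here (check Anthropic's current price list before relying on them):

```python
# Back-of-envelope session cost. The $0.08/hour runtime price comes from
# this article; the token prices below are ASSUMED placeholders.

RUNTIME_PER_HOUR = 0.08
INPUT_PER_MTOK = 3.00      # assumed $/million input tokens
OUTPUT_PER_MTOK = 15.00    # assumed $/million output tokens

def session_cost(active_hours, in_tokens, out_tokens):
    return (
        active_hours * RUNTIME_PER_HOUR
        + in_tokens / 1e6 * INPUT_PER_MTOK
        + out_tokens / 1e6 * OUTPUT_PER_MTOK
    )

# A 4-hour refactoring run with 2M input / 0.5M output tokens:
print(round(session_cost(4, 2_000_000, 500_000), 2))  # 13.82
```

Under these assumptions the runtime charge ($0.32) is dwarfed by token spend ($13.50), which is why long agent runs need token budgeting, not just wall-clock budgeting.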
Even with Managed Agents, most enterprises still have a long way to go before they can "fully rely on AI agents for work."
While the infrastructure barrier has been lowered, Managed Agents cannot assist with defining good tasks, designing workflows, or establishing trust to allow AI to access core business data.
The "AWS Moment" of AI Agent Infrastructure
Managed Agents seem to be following the path AWS took in its early days: first providing computing power, then encapsulating the runtime environment.
Ten years ago, enterprises debated whether to "move to the cloud"; now, the debate is whether to "self-host Agent infrastructure or go with managed services." Historical experience tells us that most enterprises eventually choose managed services because infrastructure is never a core competency. OpenAI has also launched its own Agent platform, Frontier, and the competition in this space is just beginning.
From a technological perspective, the "separation of brain and hand" architectural approach is worth noting. It allows each part of the system to evolve independently: upgrade the model, change the brain; need a new tool, add a hand; alter the storage solution, replace the memory layer.
A good analogy from the engineering blog: an operating system's read() call doesn't care whether it's talking to a 1970s disk or a modern SSD; the abstraction layer stays stable while the underlying implementation swaps out.
From a usage perspective, if you are an enterprise developer looking to embed AI agent capability in your product, Managed Agents might save you several months of infrastructure work.
SDKs are available in six languages (Python, TypeScript, Java, Go, Ruby, PHP). If you already use Claude Code, update to the latest version and type /claude-api managed-agents-onboarding to get started.
If you are a casual AI enthusiast, the most immediate impact you might feel is: in the SaaS products you use, more and more AI agents will be working in the background to assist you, with these agents likely running on Managed Agents.
Pricing Reference: Token costs are based on the Anthropic API standard pricing, with a runtime cost of $0.08 per session hour (idle time is not billed) and $10 per thousand web searches.
Do you think that the infrastructure for AI agents will eventually be dominated by a few major players, similar to how cloud computing is today?