Anthropic Adds Telegram and Discord Access to Claude Code

Anthropic introduced Claude Code Channels, enabling developers to interact with AI agents via Telegram and Discord. The feature allows remote workflows and real-time event handling.

By Daniel Mercer. Edited by Maria Konash.
Anthropic brings Claude Code to Telegram and Discord for real-time AI agent control. Image: Anthropic

Anthropic has introduced a new feature called Claude Code Channels, allowing developers to interact with its AI coding agent through messaging platforms such as Telegram and Discord. The update expands how users can manage AI-driven workflows beyond the terminal.

The feature, currently in research preview, enables users to send messages, alerts, and automated events directly into an active Claude Code session. This allows the AI agent to respond to inputs even when the user is not actively working in a development environment.

Claude Code Channels function as a bridge between external platforms and the local AI session. Messages sent through supported platforms are delivered into the session, where the AI can process requests, execute tasks, and send responses back through the same channel.

Extending AI Beyond the Terminal

The integration reflects a broader shift toward persistent AI agents that operate continuously and respond to real-time inputs. Developers can use channels to forward notifications such as CI results, monitoring alerts, or chat messages, enabling the AI to take action autonomously.

Telegram and Discord are the first supported platforms, available as plugins that can be installed within Claude Code. Once configured, users can pair their accounts with the AI agent and restrict access through allowlists, ensuring only authorized senders can interact with the system.

The system supports two-way communication. While incoming messages appear in the developer’s terminal, the AI’s responses are delivered directly through the external platform, creating a seamless chat-like experience.

However, the feature requires an active session to function. To enable continuous operation, developers must run Claude Code in a persistent environment, such as a background process.
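One generic way to satisfy the persistent-session requirement is to launch the CLI detached from the current terminal, so it survives after the developer walks away. The sketch below shows that general pattern in Python; it is not an Anthropic-documented procedure, and `sleep 5` stands in for the actual agent command.

```python
import pathlib
import subprocess

def start_persistent(cmd: list[str], log_path: str) -> subprocess.Popen:
    """Launch a long-lived process in its own session so it keeps
    running after the launching terminal goes away, logging its
    output to a file for later inspection."""
    log = open(log_path, "ab")
    proc = subprocess.Popen(
        cmd,
        stdout=log,
        stderr=subprocess.STDOUT,
        start_new_session=True,  # detach from the controlling terminal
    )
    # Record the PID so the background session can be stopped later.
    pathlib.Path(log_path + ".pid").write_text(str(proc.pid))
    return proc

# Placeholder command; substitute the real agent CLI invocation.
proc = start_persistent(["sleep", "5"], "/tmp/agent.log")
```

Tools like `tmux` or `nohup` achieve the same effect from the shell; the point is only that something must keep the session alive for channels to deliver into it.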

Toward Always-On AI Agents

The introduction of channels aligns with the growing trend of AI agents acting as continuous collaborators rather than on-demand tools. By integrating messaging platforms, Anthropic is positioning Claude Code as part of a broader ecosystem where AI can monitor, respond, and act across workflows in real time.

The feature also highlights increasing interest in event-driven AI systems. Instead of waiting for user input, these systems can react to external triggers, making them suitable for tasks such as DevOps automation, system monitoring, and collaborative development.

Security controls are a key component of the release. Each channel maintains a sender allowlist, and enterprise users must explicitly enable the feature through administrative settings. This reflects the need to balance automation with controlled access, particularly in team environments.

Anthropic noted that the feature is still evolving, with potential changes to functionality and protocol as feedback is incorporated. For now, channel support is limited to approved plugins, though developers can experiment with custom integrations under restricted conditions.

Nvidia CEO Proposes AI Tokens as Engineer Compensation

Nvidia CEO Jensen Huang proposed paying engineers with AI tokens to boost productivity through AI agents. The idea reflects a shift toward AI-driven workflows in tech hiring.

By Samantha Reed. Edited by Maria Konash.
Nvidia eyes AI tokens in engineer pay, signaling a shift to agent-driven productivity. Image: Google DeepMind / Unsplash

Nvidia CEO Jensen Huang has proposed a new compensation model for engineers that includes AI “tokens” as part of their pay, reflecting a broader shift toward AI-driven productivity in the workplace.

Speaking at Nvidia’s annual GPU Technology Conference, Huang suggested that engineers could receive token budgets alongside their base salaries. These tokens, which represent units of compute used to run AI models and agents, would allow employees to deploy AI systems to automate tasks and enhance output.

Huang said engineers could earn several hundred thousand dollars in base pay, with an additional allocation of tokens valued at a significant portion of that salary. The tokens would effectively function as a productivity resource, enabling workers to scale their output by leveraging AI tools.

AI Agents Reshape Workflows

The proposal is tied to Huang’s vision of a future workplace where engineers oversee large networks of AI agents capable of executing complex, multi-step tasks. In this model, human workers act as supervisors, directing digital systems that handle coding, analysis, and other functions.

Huang has previously described a future in which Nvidia’s workforce includes far more AI agents than human employees. These systems would rely on software infrastructure, increasing demand for computing resources and development tools.

The concept aligns with a growing trend in the technology sector, where companies are integrating AI agents into everyday workflows. These systems can perform tasks such as writing code, analyzing data, and generating reports with minimal human input.

Industry observers note that this shift is changing how software is developed. Instead of writing code line by line, engineers increasingly describe desired outcomes in natural language, with AI systems generating and executing the underlying logic.

Labor Market Impact and Talent Shift

The rise of AI agents has intensified debate about the future of work. Some analysts warn that automation could displace a significant share of white-collar roles, particularly those involving repetitive or entry-level tasks.

Estimates suggest AI could automate up to a quarter of work hours in the United States, with potential productivity gains of around 15%. At the same time, companies face a “talent paradox,” where demand for AI-skilled workers is rising even as automation reduces the need for certain roles.

Entry-level positions are seen as particularly vulnerable, as AI systems increasingly handle foundational tasks that once served as training grounds for new employees. This could widen skill gaps and complicate workforce development.

Despite these concerns, economists point out that technological shifts historically create new categories of jobs, even as they eliminate others. Emerging roles related to AI management, oversight, and integration are expected to grow.


Convicted Nikola Founder Trevor Milton Seeks $1B for AI Jet Comeback

Nikola founder Trevor Milton is raising $1 billion to build AI-powered business jets after acquiring SyberJet. The effort follows his pardon and comes with significant technical and financial risks.

By Samantha Reed. Edited by Maria Konash.
Trevor Milton seeks $1B for AI jets via SyberJet, betting on a risky comeback. Image: Lisa Vanthournout / Unsplash

Trevor Milton, the founder of bankrupt electric truck startup Nikola, is attempting a return to the technology sector with a new venture focused on artificial intelligence-powered aircraft.

Milton is seeking to raise $1 billion to fund the development of advanced business jets equipped with AI-driven flight systems. The effort follows his acquisition of SyberJet Aircraft, a struggling business jet manufacturer, as part of a broader plan to revive the company and reposition it around next-generation aviation technology.

The move comes after a turbulent period in Milton’s career. Nikola, once valued at $34 billion, collapsed after its products failed to reach commercial viability. Milton was convicted of fraud in 2022 and sentenced to four years in prison in 2023, but was pardoned by President Donald Trump before serving his sentence.

AI Aviation Ambitions

Milton’s new venture aims to build what he describes as a high-performance business jet with extended range and speed, supported by a newly designed avionics system centered on artificial intelligence. The system would be developed from the ground up, with the goal of enabling more autonomous flight capabilities.

According to reports, the company is exploring the concept of an “AI-first” aircraft, where software plays a central role in navigation, decision-making, and operational efficiency. Such capabilities could have applications beyond commercial aviation, including potential use in defense contracts.

To support the effort, Milton has recruited dozens of former Nikola engineers, leveraging existing technical talent familiar with complex systems development. The company is also engaging with potential investors in the United States and the Middle East, including outreach to Saudi-backed funding sources.

High Risk, Uncertain Outcome

Despite the ambition of the project, Milton has acknowledged the challenges involved. He reportedly described aircraft development as significantly more complex than his previous work in electric vehicles, highlighting the technical, regulatory, and financial barriers facing the venture.

Developing a new aircraft platform requires extensive certification, safety validation, and capital investment, often spanning many years before reaching commercial deployment. Integrating advanced AI systems into aviation further increases complexity, particularly given strict regulatory standards governing autonomous or semi-autonomous flight technologies.

Milton has indicated that the probability of success is low, underscoring the speculative nature of the initiative. The project also faces reputational challenges, given his prior conviction and the collapse of Nikola.

Still, the effort reflects a broader trend of applying artificial intelligence to transportation systems, including aviation. Companies and governments are increasingly exploring AI-driven automation to improve efficiency, reduce pilot workload, and enable new operational models.


OpenAI to Acquire Astral to Expand Codex

OpenAI plans to acquire developer tools startup Astral to strengthen its Codex platform. The move aims to integrate AI deeper into the full software development lifecycle.

By Daniel Mercer. Edited by Maria Konash.
OpenAI to acquire Astral, boosting Codex with Python tools for full-lifecycle development. Image: Mariia Shalabaieva / Unsplash

OpenAI announced plans to acquire Astral, a startup known for its widely used open-source Python developer tools, in a move aimed at expanding the capabilities of its Codex platform.

The acquisition, which remains subject to regulatory approval, will bring Astral’s tooling and engineering team into OpenAI’s ecosystem. The companies said Astral’s products will continue to be supported as open-source projects after the deal closes.

Astral has built a suite of tools that are widely adopted across the Python ecosystem. These include uv for dependency and environment management, Ruff for code linting and formatting, and ty for enforcing type safety. Together, these tools are used by millions of developers to streamline workflows, improve code quality, and reduce errors.

OpenAI said integrating these tools into Codex will allow its AI systems to interact more directly with real-world development environments. The goal is to move beyond code generation and toward AI systems that can participate across the entire software lifecycle.

Expanding AI in Developer Workflows

Codex, OpenAI’s AI system for programming tasks, has seen rapid growth in recent months. The company reported a threefold increase in users and a fivefold rise in usage since the start of the year, with more than two million weekly active users.

The platform is being positioned as a broader development agent rather than a standalone coding assistant. OpenAI aims to enable Codex to plan code changes, modify existing codebases, run development tools, test outputs, and maintain software over time.

Astral’s tools are seen as a key component in that vision because they are already embedded in developer workflows. By integrating them, Codex could gain the ability to operate directly within established development pipelines rather than functioning as a separate interface.

This approach reflects a wider trend in AI development, where systems are increasingly designed to act as collaborators that can execute tasks across multiple stages of production.

Strengthening the Python Ecosystem

Python remains one of the most widely used programming languages, particularly in artificial intelligence, data science, and backend systems. OpenAI said the acquisition will reinforce its commitment to supporting the language and its developer community.

Astral’s tools play a central role in maintaining code quality and project consistency in Python environments. Their integration with AI systems could enable more automated and reliable development processes, especially in large or complex codebases.

The companies indicated that future updates will focus on deeper integration between Codex and Astral’s tooling. This could allow AI agents to manage dependencies, enforce coding standards, and validate outputs as part of a continuous workflow.
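A continuous validation step of this kind can be sketched generically. The snippet below shells out to whichever Astral checkers happen to be installed; the subcommands (`ruff check`, `ty check`) follow the tools' own CLIs, but this is an illustrative sketch of the workflow, not OpenAI's actual integration.

```python
import shutil
import subprocess

# Astral CLIs an automated workflow might run over generated code.
CHECKS = {
    "ruff": ["ruff", "check"],  # lint / style
    "ty": ["ty", "check"],      # static type checking
}

def validate(path: str) -> dict[str, int]:
    """Run each installed checker against `path` and collect exit
    codes (0 means the check passed). Tools missing from PATH are
    skipped so the workflow degrades gracefully."""
    results = {}
    for name, cmd in CHECKS.items():
        if shutil.which(name) is None:
            continue  # checker not installed; skip rather than fail
        proc = subprocess.run(cmd + [path], capture_output=True, text=True)
        results[name] = proc.returncode
    return results
```

An agent could gate its own output on `validate()` returning all zeros before committing a change, which is the kind of closed loop the integration points toward.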

For now, OpenAI and Astral will continue operating independently until the transaction is finalized. After closing, Astral’s team is expected to join OpenAI’s Codex group, where it will contribute to building more advanced AI-driven development systems.

The acquisition highlights OpenAI’s broader strategy to embed AI more deeply into professional tools, positioning its models not just as assistants, but as active participants in software creation and maintenance.

Pentagon Faces Resistance Over Anthropic AI Ban

Pentagon staff and contractors are resisting orders to phase out Anthropic’s AI tools, citing performance concerns and operational disruption. The ban highlights tensions between policy decisions and AI adoption.

By Samantha Reed. Edited by Maria Konash.
Pentagon users push back on Anthropic AI ban, citing disruption and better performance. Image: Joel Rivera-Camacho / Unsplash

The U.S. Department of Defense is facing internal resistance as it moves to phase out artificial intelligence tools developed by Anthropic, following a decision to classify the company as a supply-chain risk.

Defense Secretary Pete Hegseth issued the designation on March 3 after a dispute with Anthropic over usage guardrails for its AI systems. The order bars the Pentagon and its contractors from using Anthropic’s technology, including its widely adopted Claude model, with a six-month transition period.

However, military personnel, IT staff, and contractors say the directive is proving difficult to implement. Many users have grown reliant on Anthropic’s tools and view them as more effective than competing systems. Some are delaying compliance, while others expect the ban may eventually be reversed.

Operational Dependence on AI Tools

Anthropic’s AI systems have become embedded in military workflows, supporting tasks ranging from data analysis to operational planning. Claude was the first AI model approved for use on classified Pentagon networks, and adoption expanded rapidly following a $200 million defense contract awarded in 2025.

Users say the tools significantly improved efficiency, particularly in handling large datasets and automating repetitive processes. In some cases, developers relied on Anthropic’s Claude Code system to generate software and build automated workflows.

With the phase-out underway, some of these processes are reverting to manual methods. One official said tasks previously handled by AI, such as querying large datasets, are now being performed using traditional tools like spreadsheets, resulting in slower workflows and reduced productivity.

Replacing Anthropic’s systems is also technically complex. Contractors note that recertifying alternative AI models for use on classified networks could take between 12 and 18 months. This process includes rigorous security and compliance checks, making a rapid transition unlikely.

Cost, Complexity, and Strategic Uncertainty

The removal of Anthropic’s tools is expected to carry both financial and operational costs. Systems built around Claude may require partial redesign, particularly in cases where workflows and prompts were tailored to its architecture.

For example, software platforms used for intelligence analysis and targeting operations rely on AI-driven workflows that would need to be rebuilt using alternative models. Contractors say this process could delay projects and reduce efficiency in the short term.

At the same time, Pentagon officials and contractors are weighing whether to fully transition to other providers, such as OpenAI, Google, or xAI, or to adopt a more gradual approach. Some agencies are reportedly slowing their phase-out efforts in anticipation of a potential resolution between the government and Anthropic.

The situation highlights a broader challenge in AI adoption within government systems: balancing security concerns with operational effectiveness. As AI tools become more deeply integrated into critical workflows, replacing them can create significant disruption.

The Pentagon’s experience underscores how quickly AI technologies have moved from experimental tools to essential infrastructure. It also illustrates the growing tension between policy decisions and the practical realities of deploying advanced AI systems at scale.

Meta Shuts Down Horizon Worlds VR Platform

Meta will shut down the VR version of Horizon Worlds and transition the platform to mobile-only. The move reflects a broader shift away from metaverse investments toward AI.

By Samantha Reed. Edited by Maria Konash.
Meta shuts down Horizon Worlds on VR, pivots to mobile and doubles down on AI.

Meta is shutting down the virtual reality version of Horizon Worlds, marking a significant shift in its metaverse strategy as the company reallocates resources toward artificial intelligence.

The company said the Horizon Worlds app will be removed from the Quest VR store by the end of March and fully discontinued on VR devices by June 15. After that, the platform will continue to operate only as a mobile application.

Meta described the move as a strategic separation of platforms, allowing each to “grow with greater focus.” The decision effectively ends Horizon Worlds’ role as a flagship VR social experience, a position it held since Meta’s high-profile pivot to the metaverse in 2021.

Retreat From VR-Centric Metaverse Vision

Horizon Worlds was launched in late 2021 as a social platform where users could interact through avatars in virtual environments. It was designed to showcase the potential of immersive digital spaces powered by Meta’s Quest headsets.

However, adoption remained limited. The platform struggled to gain traction beyond a relatively small user base, with monthly active users reportedly staying in the hundreds of thousands. Broader consumer skepticism toward virtual reality, along with hardware limitations, slowed mainstream uptake.

Meta attempted to expand accessibility by launching a mobile version of Horizon Worlds in 2023. The app allowed users without VR headsets to participate in virtual environments, similar to platforms like Roblox. Despite this, engagement levels did not reach expectations.

The shutdown follows a series of cost-cutting measures within Reality Labs, Meta’s division responsible for virtual reality and metaverse development. The company recently laid off more than 1,000 employees in the unit, including teams working on first-party VR content.

Reality Labs has been a major financial burden for Meta, reporting multi-billion-dollar operating losses each quarter. In its most recent earnings report, the division posted a loss exceeding $6 billion for a single quarter.

Shift Toward AI and Platform Realignment

The decision to scale back Horizon Worlds underscores Meta’s broader pivot toward artificial intelligence, which has become a central focus across the technology sector.

While Meta is not abandoning VR entirely, it is restructuring its approach. The company said it will continue investing in the VR developer ecosystem while repositioning Horizon Worlds as a mobile-first experience.

Executives indicated that separating VR and mobile platforms will allow for more targeted development strategies. The mobile version is expected to serve as a more accessible entry point for users, potentially reaching a wider audience without requiring specialized hardware.

Meta’s evolving strategy reflects changing priorities within the company and across the industry. As generative AI gains momentum and attracts investment, companies are reassessing earlier bets on immersive virtual environments.

The shutdown of Horizon Worlds on VR highlights the challenges of building large-scale consumer platforms in emerging technologies. It also signals a shift in how Meta plans to balance its long-term ambitions between virtual reality and artificial intelligence.
