Salesforce Expands Slackbot With AI-Powered Enterprise Capabilities

Salesforce is expanding Slackbot into an AI-powered enterprise teammate, integrating workflows, apps, and data into a single conversational interface. The update introduces new capabilities aimed at improving productivity and coordination across teams.

By Daniel Mercer. Edited by Maria Konash.
Salesforce upgrades Slackbot with AI, unifying workflows, apps, and CRM in one interface. Image: Salesforce

Salesforce is positioning Slack as a central interface for enterprise AI with a major expansion of Slackbot, transforming it from a personal assistant into a collaborative, organization-wide AI teammate. The update introduces more than 30 new capabilities designed to connect data, applications, and workflows into a unified conversational experience.

The move reflects a broader shift in enterprise AI adoption. While many organizations have deployed multiple AI tools across departments, Salesforce argues that fragmentation limits their effectiveness. Slackbot aims to address this by acting as a shared intelligence layer that connects systems and delivers actionable insights directly within team workflows.

Slackbot operates inside Slack’s existing environment, leveraging access to conversations, files, and organizational context. It inherits existing permissions and governance settings, allowing it to interact across enterprise systems without requiring additional configuration. This design reduces friction in adoption while maintaining compliance controls.

One of the key additions is meeting intelligence. Slackbot can now transcribe meetings, summarize discussions, and extract action items. It can also trigger follow-up actions in connected systems such as customer relationship management (CRM) tools, reducing the need for manual updates after meetings.

Integration Across Enterprise Systems

A central feature of the update is Slackbot’s ability to orchestrate workflows across multiple enterprise tools. Through a new Model Context Protocol (MCP) client, Slackbot can route tasks to various AI agents and applications, including systems used for sales, customer service, and IT operations. Employees can issue requests in natural language without needing to know which system executes the task.
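To make that routing concrete: the Model Context Protocol is an open standard with a published TypeScript SDK, in which a client connects to a server, lists the tools it advertises, and invokes one by name. The sketch below is illustrative only — the server command and tool name are hypothetical, and Salesforce has not published Slackbot’s internals — but the connect/listTools/callTool calls reflect the protocol’s public client API.

```typescript
// Illustrative sketch of MCP-style task routing. The server command and
// tool name are hypothetical; the Client/transport calls follow the
// public @modelcontextprotocol/sdk TypeScript API.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function routeRequest(userText: string) {
  // Connect to a hypothetical CRM-facing MCP server.
  const transport = new StdioClientTransport({ command: "crm-mcp-server" });
  const client = new Client(
    { name: "slackbot-router", version: "0.1.0" },
    { capabilities: {} },
  );
  await client.connect(transport);

  // Discover what the server can do: each tool carries a name,
  // a description, and a JSON schema for its arguments.
  const { tools } = await client.listTools();

  // In production an LLM would pick the tool and fill its arguments from
  // the natural-language request; here one is hard-selected for brevity.
  const tool = tools.find((t) => t.name === "update_opportunity"); // hypothetical tool
  if (!tool) throw new Error("no matching tool on this server");

  // The employee never needs to know which backend executes this.
  return client.callTool({ name: tool.name, arguments: { request: userText } });
}
```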

Salesforce is also introducing reusable AI “skills,” which allow teams to standardize recurring workflows. These skills define inputs, steps, and outputs for specific tasks, enabling consistent execution across teams. Slackbot can automatically recognize when a task matches a predefined skill and apply it without user intervention.
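Salesforce has not published the skill format, but the description — declared inputs, explicit steps, a defined output — maps naturally onto a typed definition. A hypothetical TypeScript sketch of what such a skill could look like, with every name invented for illustration:

```typescript
// Hypothetical shape for a reusable "skill": typed inputs, an ordered
// list of steps, and a declared output, so every team runs the same flow.
interface SkillStep {
  action: string;              // e.g. "summarize_thread", "create_crm_task"
  params?: Record<string, unknown>;
}

interface Skill<I, O> {
  name: string;
  description: string;         // lets the assistant match requests to this skill
  inputs: (raw: unknown) => I; // validate/parse what the user provided
  steps: SkillStep[];
  output: (results: unknown[]) => O;
}

// Example: a deal-handoff skill every sales team executes identically.
const dealHandoff: Skill<{ dealId: string }, { summaryUrl: string }> = {
  name: "deal-handoff",
  description: "Summarize a deal thread and open a handoff task in CRM",
  inputs: (raw) => ({ dealId: String((raw as { dealId: string }).dealId) }),
  steps: [
    { action: "summarize_thread" },
    { action: "create_crm_task", params: { type: "handoff" } },
  ],
  output: (results) => ({ summaryUrl: String(results[0]) }),
};
```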

For smaller businesses, Salesforce has embedded CRM capabilities directly into Slackbot. The system can automatically capture customer interactions from conversations, update records, and track deals without requiring a separate CRM interface. For larger enterprises, Slackbot serves as a conversational layer over Salesforce’s Customer 360 platform, enabling users to update opportunities, manage cases, and trigger workflows without leaving Slack.

Slackbot also extends to desktop-level interactions, allowing users to act on content across applications while maintaining context from Slack and connected systems. This reduces the need to switch between tools and manually transfer information.

Salesforce reports strong early adoption, with Slackbot becoming one of the fastest-growing features in its product history. Internal data suggests that employees using the tool can save significant time on routine tasks, reflecting growing demand for AI systems that integrate directly into existing workflows.

The expansion underscores Salesforce’s broader strategy to position Slack as the operating system for work, where human collaboration and AI-driven automation converge in a single interface.

AI & Machine Learning, Enterprise Tech, News

Anthropic Partners with Australia on AI Safety and Research

Anthropic has signed an agreement with the Australian government to collaborate on AI safety and research. The deal includes funding for scientific institutions and expanded use of Claude in healthcare and education.

By Samantha Reed. Edited by Maria Konash.
Anthropic partners with Australia on AI safety, research, healthcare, and education, expanding Claude adoption. Image: Anthropic

Anthropic has signed a Memorandum of Understanding with the Australian government to collaborate on artificial intelligence safety, marking a strategic expansion of its international partnerships. The agreement aligns with Australia’s National AI Plan and formalizes cooperation between Anthropic and the country’s AI Safety Institute.

Under the arrangement, Anthropic will share insights on emerging AI model capabilities and associated risks, while participating in joint safety and security evaluations. The company will also collaborate with academic institutions to advance research on responsible AI development. Similar partnerships are already in place with safety institutes in the United States, United Kingdom, and Japan.

A key component of the agreement involves sharing Anthropic’s Economic Index data with the Australian government. This dataset is designed to track how AI tools are being adopted across industries and assess their economic impact. Initial focus areas include sectors critical to Australia’s economy, such as natural resources, agriculture, healthcare, and financial services.

The collaboration also includes plans to support workforce development through AI education and training initiatives. According to Anthropic, Australian users are already applying its Claude model across a wide range of professional and technical tasks, particularly in high-skill domains.

In parallel, the company is exploring potential investments in data center infrastructure and energy capacity in Australia, reflecting growing demand for compute resources tied to AI deployment.

Investment in Science and Education

Anthropic is extending its AI for Science program to Australia with an investment of A$3 million in API credits for research institutions. The funding will support projects focused on healthcare, genomics, and computer science education.

Participating institutions include the Australian National University, Murdoch Children’s Research Institute, the Garvan Institute of Medical Research, and Curtin University. These organizations will use Anthropic’s Claude model to accelerate research in areas such as rare disease diagnosis, precision medicine, and genetic analysis.

At the Australian National University, researchers are applying Claude to analyze genetic sequencing data, while also integrating the model into computing curricula. The Garvan Institute is using AI to study genetic variation and identify potential treatments, including efforts to automate complex diagnostic processes for rare childhood conditions.

Murdoch Children’s Research Institute is focusing on stem cell research and therapeutic discovery, while Curtin University is expanding the use of AI across multiple academic disciplines, including health sciences, engineering, and law.

Anthropic also announced a new startup support initiative targeting deep tech companies in Australia. Eligible startups working in areas such as drug discovery, climate modeling, and materials science will receive up to $50,000 in API credits, along with technical resources.

The partnership signals Anthropic’s broader push into the Asia-Pacific region, with plans to establish a local presence in Sydney. It also reflects increasing collaboration between AI developers and governments as countries seek to balance innovation with safety and economic impact. Alongside these efforts, Anthropic has launched a $100 million Claude Partner Network to support consultancies and AI firms deploying its technology, and it is exploring an initial public offering as early as October amid intensifying competition with peers such as OpenAI.

AI & Machine Learning, News

OpenAI Raises $122B to Supercharge Global AI Infrastructure

OpenAI has closed a $122 billion funding round at an $852 billion valuation to scale its AI infrastructure and products. The company aims to accelerate enterprise adoption and global deployment of intelligent systems.

By Maria Konash.
OpenAI raises $122B at $852B valuation, accelerating infrastructure and its AI superapp strategy. Image: OpenAI

OpenAI has raised $122 billion in committed capital, marking one of the largest private funding rounds in technology history. The deal values the company at $852 billion post-money, underscoring strong investor confidence in the long-term role of artificial intelligence as core infrastructure for the global economy.

The round was co-led by SoftBank and Andreessen Horowitz, with participation from major institutional investors and strategic partners including Amazon, Nvidia, and Microsoft. OpenAI also expanded access to individual investors, raising more than $3 billion through bank distribution channels. In parallel, the company increased its revolving credit facility to approximately $4.7 billion, providing additional financial flexibility.

The capital will primarily support compute expansion, which OpenAI describes as its central strategic advantage. The company has built a diversified infrastructure strategy spanning multiple cloud providers, chip platforms, and data center partnerships. Nvidia GPUs remain foundational to its training and inference systems, while additional collaborations include AMD, Broadcom, and cloud providers such as Oracle, Google Cloud, and AWS.

This infrastructure investment reflects rising demand for large-scale AI systems. OpenAI stated that its APIs now process more than 15 billion tokens per minute, highlighting the scale at which its models are being deployed across applications and industries.

Product Momentum and Enterprise Expansion

OpenAI’s rapid growth is closely tied to the adoption of ChatGPT and its broader product ecosystem. The platform now serves over 900 million weekly active users, with more than 50 million paying subscribers. The company reports that it is generating approximately $2 billion in monthly revenue, driven by both consumer subscriptions and enterprise usage.

Enterprise adoption has become a key growth driver, accounting for more than 40% of total revenue. OpenAI expects enterprise revenue to reach parity with its consumer business by 2026, as organizations increasingly integrate AI into workflows and operations.

Recent product updates include the release of GPT-5.4, which introduces improvements in reasoning, workflow execution, and multimodal capabilities. OpenAI has also expanded Codex, its AI-powered coding agent, which now serves over 2 million weekly users and is experiencing rapid growth.

The company is positioning itself as a unified AI platform, combining consumer applications, developer tools, and enterprise solutions. Its strategy centers on building a “superapp” that integrates chat, coding, search, and agent-based automation into a single interface. This approach aims to simplify user experience while increasing engagement and cross-platform adoption.

Despite strong revenue growth, OpenAI remains unprofitable and continues to invest heavily in infrastructure and research. The scale of its latest funding round reflects both the high costs associated with AI development and the expectation that advanced models will drive productivity gains across industries.

As competition intensifies, OpenAI’s ability to translate its infrastructure advantage into sustainable revenue and operational efficiency will be critical in justifying its valuation and maintaining its leadership position in the AI sector.

Nvidia Invests $2B in Marvell to Expand AI Infrastructure

Nvidia has invested $2 billion in Marvell as part of a partnership to expand AI infrastructure, including custom chips, networking, and silicon photonics.

By Olivia Grant. Edited by Maria Konash.
Nvidia invests $2B in Marvell to expand AI infrastructure with custom chips and silicon photonics. Image: Nvidia

Nvidia has announced a strategic partnership with Marvell Technology, backed by a $2 billion investment, to expand its AI infrastructure ecosystem and accelerate development of next-generation computing systems.

The collaboration connects Marvell’s hardware capabilities to Nvidia’s NVLink Fusion platform, a rack-scale architecture designed to support custom AI infrastructure. The partnership reflects growing demand for scalable systems capable of handling increasingly complex AI workloads.

The move comes as companies across the industry race to build “AI factories,” large-scale computing environments optimized for training and deploying advanced models.

Expanding the NVLink Ecosystem

Under the agreement, Marvell will develop custom XPUs and networking technologies compatible with Nvidia’s NVLink Fusion platform. Nvidia will provide core components including its Vera CPU, NVLink interconnect, Spectrum-X switches, and networking solutions such as ConnectX NICs and BlueField DPUs.

The integration allows customers to design semi-custom AI systems while remaining fully compatible with Nvidia’s broader ecosystem. This approach supports heterogeneous computing environments, where different types of processors and accelerators work together within a unified architecture.

By enabling tighter integration between custom silicon and Nvidia’s infrastructure, the companies aim to provide greater flexibility for enterprises building specialized AI systems.

Focus on Networking and AI at Scale

The partnership also includes joint development of advanced networking technologies, particularly in silicon photonics and optical interconnects. These components are critical for improving data transfer speeds and reducing latency in large-scale AI deployments.

In addition, Nvidia and Marvell plan to collaborate on AI-RAN technology, which applies AI to telecommunications networks, including 5G and future 6G systems. The goal is to transform telecom infrastructure into distributed AI computing platforms.

As AI workloads increasingly rely on distributed systems, high-speed connectivity and efficient data movement are becoming central to performance and cost efficiency.

Strategic Bet on AI Infrastructure

Nvidia’s investment underscores the importance of partnerships in scaling AI infrastructure beyond standalone chips. As demand for inference and model deployment grows, companies are focusing on building integrated systems that combine compute, networking, and storage.

The collaboration with Marvell positions Nvidia to strengthen its role not only as a GPU provider but as a broader infrastructure platform. It also highlights the increasing role of custom silicon and specialized hardware in meeting the needs of enterprise AI applications.

AI & Machine Learning, Cloud & Infrastructure, News, Startups & Investment

Nebius Plans Massive 310MW AI Data Center in Finland

Nebius will build a 310MW AI data center in Finland, expanding Europe’s growing push to develop large-scale compute infrastructure.

By Olivia Grant. Edited by Maria Konash.
Nebius plans 310MW AI data center in Finland, expanding Europe’s AI infrastructure amid rising competition. Image: Joakim Honkasalo / Unsplash

Nebius has announced plans to build a large-scale AI data center in Finland, as Europe accelerates efforts to expand computing infrastructure needed for artificial intelligence.

The facility will be located in Lappeenranta and is expected to reach a capacity of up to 310 megawatts, making it one of the largest AI data centers in the region. Nebius said the site is scheduled to begin initial operations in 2027.

The project forms part of the company’s broader strategy to scale global AI infrastructure and meet rising demand for compute resources.

Europe Accelerates AI Infrastructure Buildout

The announcement comes amid a wave of data center investments across Europe, as governments and companies seek to reduce reliance on external infrastructure and support domestic AI development.

French startup Mistral recently secured $830 million in debt financing to operate a data center near Paris, while additional projects have been announced in Sweden and other parts of the region. Meanwhile, companies including Nvidia and institutional investors have backed large-scale AI campuses, highlighting the growing importance of compute capacity.

Nebius, headquartered in the Netherlands and listed in the United States, has positioned itself as a key provider of AI infrastructure in Europe. The company said it is targeting more than 3 gigawatts of contracted power globally by the end of the year, with over 750 megawatts already secured across Europe, the Middle East, and Africa.

Energy and Scaling Challenges

Despite strong momentum, Europe faces structural challenges in building AI infrastructure. Energy costs remain higher than in the United States, and developers often encounter delays related to grid access and permitting.

These constraints have pushed companies to carefully select locations that offer reliable energy supply and supportive regulatory environments. Finland, with its access to renewable energy and established data center ecosystem, has become an attractive destination for such projects.

Nebius is also expanding beyond Europe, with plans for a gigawatt-scale AI data center in Missouri, reflecting a dual strategy of regional expansion and global diversification.

AI & Machine Learning, Cloud & Infrastructure, News

Anthropic Accidentally Exposes Claude Code Source Again

Anthropic has again exposed the source code of its Claude Code tool due to a packaging error, raising concerns over software release practices.

By Daniel Mercer. Edited by Maria Konash.
Anthropic leak exposes Claude Code source via npm error, raising security concerns. Image: Sasun Bughdaryan / Unsplash

Anthropic has inadvertently exposed the full source code of its Claude Code tool for the second time in a year, following a packaging error that left sensitive development files publicly accessible.

The issue was discovered on March 31 by security researcher Chaofan Shou, who found that the latest version of Claude Code included a source map file in its npm package. This file allowed reconstruction of the tool’s underlying TypeScript codebase, effectively revealing the entire internal implementation.

The exposure was not the result of a cyberattack but a configuration oversight, highlighting potential gaps in software release processes at a time when AI tools are increasingly used in enterprise environments.

Source Map Error Reveals Full Codebase

The leak stemmed from a file known as a source map, typically used during development to map compiled code back to its original human-readable form. While useful for debugging, such files are usually removed before public release.

In this case, the source map enabled access to approximately 1,900 internal source files, including components related to API design, telemetry systems, encryption mechanisms, and inter-process communication.
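Reconstruction from such a file requires no special tooling: a source map is plain JSON whose sources array lists the original file paths and whose sourcesContent array, when present, embeds each file’s full text. A minimal sketch of the recovery, with the .map file name hypothetical:

```typescript
// Rebuild original sources from a published source map. A .map file is
// JSON: `sources` holds the original file paths and `sourcesContent`
// (when present) embeds each file's complete text.
import { readFileSync, writeFileSync, mkdirSync } from "node:fs";
import { dirname, join } from "node:path";

const map = JSON.parse(readFileSync("cli.js.map", "utf8")); // hypothetical file name

map.sources.forEach((src: string, i: number) => {
  const content = map.sourcesContent?.[i];
  if (content == null) return; // map shipped without embedded sources
  const out = join("recovered", src.replace(/^(\.\.\/)+/, ""));
  mkdirSync(dirname(out), { recursive: true });
  writeFileSync(out, content);
});
```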

Because the file was included in a public npm package, the code was easily accessible and quickly archived in external repositories. Within hours, copies of the codebase had spread across developer platforms.

Importantly, the leak did not include model weights or user data, and there is no indication that customer information was compromised.

Repeat Incident Raises Concerns

This is the second time Anthropic has faced a similar issue. An earlier version of Claude Code was exposed in 2025 under comparable circumstances, prompting the company to remove the affected files.

The recurrence of the same type of error has raised questions about internal controls and release validation processes, particularly for tools aimed at professional developers.
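The conventional safeguard is to keep .map files out of the published tarball and to verify that mechanically before every release. A minimal guard one could wire into a prepublish script — a sketch, not Anthropic’s actual process — relying on npm pack --dry-run --json, which reports exactly which files npm would publish:

```typescript
// Prepublish guard: fail the release if any source map would ship in
// the npm tarball. `npm pack --dry-run --json` lists the files npm
// would include without actually publishing anything.
import { execSync } from "node:child_process";

const report = JSON.parse(
  execSync("npm pack --dry-run --json", { encoding: "utf8" }),
);
const maps = report[0].files
  .map((f: { path: string }) => f.path)
  .filter((p: string) => p.endsWith(".map"));

if (maps.length > 0) {
  console.error("Source maps would be published:", maps);
  process.exit(1);
}
```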

While the exposure does not pose immediate risks to users, it reveals detailed insights into the system’s architecture and internal logic. This level of transparency could make it easier for external parties to analyze, replicate, or potentially exploit aspects of the tool.

Growing Scrutiny on AI Tooling

The incident comes as AI development platforms are becoming central to software engineering workflows, increasing the importance of reliability and security in their deployment.

Anthropic has not issued a public statement on the latest leak. However, the situation is likely to draw attention from both developers and enterprise customers who rely on such tools for critical operations.

The episode also unfolds alongside broader developments at the company, including a separate data leak that revealed details of its upcoming Claude Mythos model, described internally as a major leap in AI capabilities. Together, these incidents highlight the operational risks facing fast-moving AI firms as they scale both their technology and product ecosystems.