Behind the AI Boom, Europe’s Banks Are Planning Deep Workforce Cuts

European banks could eliminate more than 200,000 jobs by 2030 as artificial intelligence reshapes operations and accelerates branch closures. Back-office, risk, and compliance roles are expected to be most affected.

By Maria Konash
AI reshapes Europe’s banking workforce as the industry prepares for massive job cuts.

Europe’s banking sector is preparing for a significant wave of job losses as lenders increasingly deploy artificial intelligence to improve efficiency and reduce costs. According to a Morgan Stanley analysis reported by the Financial Times, more than 200,000 banking jobs could be eliminated across Europe by 2030, representing about 10% of the workforce at 35 major banks.

The analysis points to AI-driven automation and continued branch closures as the primary forces behind the expected reductions. Banks are targeting efficiency gains of up to 30%, as algorithms take over routine and data-heavy tasks that were traditionally handled by human staff.

The impact is expected to be most pronounced in back-office functions, including operations, risk management, and compliance. These roles involve large volumes of repetitive processes and regulatory documentation, areas where AI systems are already demonstrating faster processing and lower error rates than manual workflows.

Physical branch networks are also shrinking as customers shift to digital channels, further reducing the need for frontline staff. Many European banks have accelerated branch consolidation plans in recent years, and AI adoption is reinforcing that trend.

Global Pattern of Automation-Led Cuts

The move by European lenders mirrors broader global trends. In the United States, large financial institutions and technology companies are increasingly tying workforce reductions to AI initiatives. Goldman Sachs warned employees in October of job cuts and a hiring freeze through the end of 2025 as part of its “OneGS 3.0” program, which applies AI across client onboarding, internal operations, and regulatory reporting.

Some European banks have already announced concrete plans. Dutch lender ABN Amro said it intends to cut roughly 20% of its workforce by 2028 as it modernizes operations. Société Générale’s chief executive has signaled that no part of the organization is off limits as the bank restructures around digital tools.

Not all industry leaders are fully aligned on the pace of change. A senior JPMorgan Chase executive cautioned that excessive reliance on automation could undermine training for junior bankers. The concern is that if early-career staff no longer perform foundational tasks, they may lack the skills needed for more complex roles later on.

Labor Market and Economic Implications

The banking sector’s plans come amid a wider debate about AI’s impact on employment. In the U.S., artificial intelligence has already been cited as a driver of more than 55,000 job cuts in 2025, according to Challenger, Gray & Christmas, contributing to elevated layoff levels across multiple industries.

Academic research reinforces those concerns. A recent study from the Massachusetts Institute of Technology and Oak Ridge National Laboratory found that AI could already perform tasks equivalent to 11.7% of U.S. jobs, putting as much as $1.2 trillion in wages at risk across finance, healthcare, and professional services.

For European banks, the challenge will be managing the transition while maintaining regulatory compliance and institutional knowledge. While AI promises meaningful cost savings and productivity gains, the scale of projected job losses underscores the social and economic consequences of rapid automation.

As lenders push ahead with AI adoption, regulators, unions, and policymakers are likely to scrutinize how banks balance efficiency with workforce stability in an industry that remains central to Europe’s economy.

Anthropic Launches $100M Claude Partner Network

Anthropic has launched the Claude Partner Network with a $100 million investment to support consultancies and AI firms helping enterprises deploy Claude. The program includes certifications, technical support, and joint market initiatives.

By Samantha Reed Edited by Maria Konash
Anthropic launches a $100M Claude Partner Network to support partners with training, certification, and technical resources. Image: Anthropic

Anthropic has introduced the Claude Partner Network, a new initiative designed to support organizations helping enterprises deploy its Claude AI models. The company said it will commit an initial $100 million to the program in 2026, funding partner training, technical assistance, and joint market development.

The program targets consulting firms, professional services companies, and AI specialists that guide enterprises through the adoption of AI systems. These partners typically help businesses identify high-value use cases for AI and implement production-ready applications within complex corporate environments.

Anthropic said the initiative reflects growing enterprise demand for structured support in adopting generative AI tools.

“Anthropic is the most committed AI company in the world to the partner ecosystem—and we’re putting $100 million behind that this year to prove it,” said Steve Corfield, Anthropic’s head of global business development and partnerships. “Our partners are instrumental in getting enterprises from proof of concept to production with Claude.”

Claude is currently available across all three major cloud providers: Amazon Web Services, Google Cloud, and Microsoft Azure, allowing partners to deploy the model across multiple enterprise environments.

Training, Certification, and Implementation Support

The Claude Partner Network will provide a range of resources designed to help organizations build services around Anthropic’s AI platform.

Partners will receive access to technical training through Anthropic Academy, along with dedicated engineering support for enterprise deployments. The company plans to expand its partner-facing team significantly, providing applied AI engineers and technical architects to assist with complex customer implementations.

Anthropic is also introducing its first technical certification, called Claude Certified Architect, Foundations. The certification is designed for solution architects developing production applications using Claude models.

Additional certifications for developers, architects, and sales professionals are expected to launch later this year. Partners joining the network will receive priority access to these programs.

In addition, Anthropic has released a code modernization starter kit intended to help enterprises migrate legacy software systems using Claude’s agentic coding capabilities. The company said modernizing older codebases and reducing technical debt is one of the most common enterprise AI use cases.

Growing Demand for Enterprise AI Deployment

Anthropic said the partner network will help enterprises move from experimental AI deployments to large-scale production systems. Many organizations have begun testing AI models but require technical and operational guidance to integrate them into existing workflows.

Members of the program will gain access to Anthropic’s partner portal, which includes sales playbooks, training materials, and co-marketing resources. Qualified partners will also appear in a public services directory where enterprises can find firms experienced in implementing Claude-based solutions.

The company said the Claude Partner Network is open to any organization involved in deploying Claude AI systems, and membership will be free of charge. Applications for the program opened today.

The partner program also comes as Anthropic expands its broader research and policy efforts around advanced AI. The company recently launched the Anthropic Institute, an initiative that brings together engineers, economists, and social scientists to study the societal, economic, and governance implications of increasingly powerful AI systems.


Amazon and Cerebras Combine AI Chips for New AWS Inference Platform

Amazon and Cerebras have partnered to combine their AI chips in a new AWS service designed to accelerate inference for chatbots, coding tools, and other generative AI applications.

By Olivia Grant Edited by Maria Konash
Amazon and Cerebras partner to launch an AWS AI inference service combining Trainium3 and Cerebras chips. Image: BoliviaInteligente / Unsplash

Amazon and Cerebras Systems have announced a partnership to integrate their computing chips into a new artificial intelligence inference service hosted on Amazon Web Services (AWS). The service aims to accelerate AI workloads such as chatbots, coding assistants, and other generative AI applications.

Under the agreement, Cerebras processors will be installed inside AWS data centers and connected with Amazon’s Trainium3 custom AI chips through proprietary networking technology. The companies did not disclose the financial terms of the partnership.

Cerebras, valued at roughly $23.1 billion, has developed a unique AI chip architecture designed to compete with Nvidia’s processors. Unlike many AI accelerators, Cerebras chips do not rely on high-bandwidth memory, a costly component used in Nvidia’s flagship GPUs.

Cerebras Chief Executive Andrew Feldman said the partnership could expand access to the company’s technology through AWS’s large customer base.

“Every customer large or small is on AWS, from individual developers to the largest banks in the world,” Feldman said. “This will make it easy as a click to get on Cerebras.”

Divide-and-Conquer Inference Architecture

The joint service will focus on inference, the stage where trained AI models generate responses to user prompts. Instead of running the entire inference workload on a single chip architecture, Amazon and Cerebras plan to split the process between two specialized systems.

Amazon’s Trainium3 chips will handle the “prefill” stage, which processes a user’s prompt into the internal state the model needs before it can respond. Cerebras processors will manage the “decode” stage, in which the model generates output tokens one at a time to produce its response.

According to Feldman, this “divide and conquer” approach is intended to improve speed and efficiency for AI inference workloads.

The strategy reflects a broader industry trend toward specialized AI infrastructure where different processors handle distinct parts of AI pipelines to improve performance and cost efficiency.
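The two-stage split described above can be sketched in a few lines of Python. This is a conceptual illustration only: the function names and the toy “model” are invented for this sketch and do not reflect the actual AWS or Cerebras APIs.

```python
# Conceptual sketch of disaggregated "prefill/decode" inference.
# All names and the toy model here are illustrative, not real APIs.

def prefill(prompt: str) -> list[int]:
    """Prefill stage: process the user's prompt into the context
    state the model needs before generating output (here, simple
    word-level token IDs)."""
    vocab: dict[str, int] = {}
    return [vocab.setdefault(word, len(vocab)) for word in prompt.split()]

def decode(context: list[int], max_new_tokens: int = 3) -> list[int]:
    """Decode stage: generate output tokens one at a time, each step
    conditioned on the full context so far (toy rule: the next token
    ID is the sum of all IDs so far, modulo 100)."""
    state = list(context)
    generated = []
    for _ in range(max_new_tokens):
        next_token = sum(state) % 100
        state.append(next_token)
        generated.append(next_token)
    return generated

def run_inference(prompt: str) -> list[int]:
    # In the described service the two stages would run on different
    # hardware (Trainium3 for prefill, Cerebras for decode); here
    # they simply run in sequence on one machine.
    return decode(prefill(prompt))
```

Separating the stages this way lets each run on hardware suited to it: prefill is compute-heavy and parallel across the whole prompt, while decode is sequential and latency-sensitive.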

Competition in AI Infrastructure

The partnership comes amid intensifying competition in AI hardware and infrastructure. Nvidia currently dominates the AI accelerator market, but several companies are attempting to develop alternatives to its GPU-based architecture.

Analysts expect Nvidia to unveil a similar multi-chip strategy soon. Reports indicate the company may combine its GPUs with chips from startup Groq, which Nvidia acquired in a deal valued at $17 billion late last year.

Amazon said its service, expected to launch in the second half of the year, could offer better price-performance compared with GPU-based solutions.

The company added that its Trainium chip roadmap, including the upcoming Trainium4 processor, is designed to provide an alternative to third-party AI hardware by optimizing cost and efficiency for AWS customers.


Nvidia Releases Nemotron 3 Super Agentic AI Model

Nvidia introduced Nemotron 3 Super, a 120B-parameter open model built for multi-agent AI systems with a 1M-token context window and improved reasoning efficiency.

By Daniel Mercer Edited by Maria Konash
Nvidia launches Nemotron 3 Super, a 120B-parameter open model built for multi-agent systems. Image: Nvidia

Nvidia has introduced Nemotron 3 Super, a new open AI model designed to support large-scale agentic AI systems. The model contains 120 billion parameters, with 12 billion active during inference, and is optimized for complex reasoning tasks across multi-agent workflows.

Nemotron 3 Super is designed to address performance and cost challenges that arise when organizations deploy multiple AI agents working together on complex tasks. These systems often require large volumes of context and repeated reasoning across multiple steps, increasing both computing costs and latency.

The model includes a context window of up to one million tokens, enabling AI agents to retain full workflow states during extended operations. This helps prevent “goal drift,” a problem where agents lose alignment with the original objective as conversations grow longer and more complex.

Nvidia said the model is capable of handling multi-step reasoning tasks with high accuracy and is intended for applications such as software development agents, enterprise automation systems, and scientific research tools.

Architecture and Performance Improvements

Nemotron 3 Super uses a hybrid mixture-of-experts architecture that activates only a subset of its parameters during inference. While the model contains 120 billion parameters in total, only 12 billion are active at any given time, improving efficiency while maintaining performance.

The system combines transformer layers with Mamba layers, which provide improved memory and compute efficiency. Nvidia said the design delivers up to five times higher throughput and double the accuracy compared with earlier Nemotron Super models.

The model also incorporates a latent mixture-of-experts technique that lets four specialized expert components contribute to each generated token at the computational cost of activating only one.

Another optimization is multi-token prediction, which enables the system to predict several tokens at once. This technique can accelerate inference by up to three times compared with standard token-by-token generation.

When deployed on Nvidia’s Blackwell platform, the model runs in NVFP4 precision, which reduces memory requirements and increases inference performance by up to four times compared with FP8 precision on Hopper GPUs.
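The sparse-activation idea behind the 120B-total / 12B-active design can be illustrated with a toy mixture-of-experts forward pass. This is a schematic sketch, not Nvidia’s implementation: the experts, gate scores, and sizes are invented, and a real gate is a learned network rather than a fixed list.

```python
# Toy mixture-of-experts routing: only the top-k scoring experts run
# for a given token, so most parameters stay idle at any moment.
# Experts, gate scores, and counts here are invented for illustration.

def moe_forward(x, experts, gate_scores, top_k=1):
    """Run only the top_k highest-scoring experts on input x and
    average their outputs; all other experts stay inactive."""
    ranked = sorted(range(len(experts)), key=lambda i: gate_scores[i], reverse=True)
    chosen = ranked[:top_k]
    return sum(experts[i](x) for i in chosen) / top_k, chosen

# Ten small "experts"; in a real model each would be a large
# feed-forward block, and the gate scores would be computed per token.
experts = [lambda x, k=k: x * k for k in range(10)]
gate_scores = [0.10, 0.90, 0.30, 0.20, 0.05, 0.00, 0.40, 0.15, 0.25, 0.35]

output, active = moe_forward(2.0, experts, gate_scores, top_k=1)
# Only expert 1 (score 0.90) runs: 2.0 * 1 == 2.0
```

The compute saving comes directly from the routing step: only the chosen experts execute, which is how a model can hold 120 billion parameters while spending the inference cost of roughly 12 billion.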

Open Model and Enterprise Deployment

Nvidia is releasing Nemotron 3 Super with open weights under a permissive license, allowing developers to customize and deploy the model across workstations, data centers, or cloud platforms.

The company has also published training methodologies and datasets used to build the model, including more than 10 trillion tokens of synthetic and curated training data. Developers can further adapt the model using Nvidia’s NeMo platform for fine-tuning and reinforcement learning.

Several companies have already integrated Nemotron 3 Super into their systems. AI-native platforms such as Perplexity are using the model for search and agent orchestration, while developer tools including CodeRabbit, Factory, and Greptile are incorporating it into software engineering agents.

Enterprise software providers including Amdocs, Palantir, Cadence, Dassault Systèmes, and Siemens are also deploying the model to automate workflows in industries such as telecommunications, cybersecurity, and semiconductor design.

The model is available through multiple distribution channels including Nvidia’s build platform, Hugging Face, OpenRouter, and Perplexity. Cloud providers including Google Cloud, Oracle Cloud Infrastructure, and Nvidia cloud partners such as CoreWeave and Together AI are offering deployment support, with availability planned on Amazon Web Services and Microsoft Azure.

Nemotron 3 Super is packaged as an Nvidia NIM microservice, allowing organizations to deploy the model across on-premises systems and cloud environments as they scale multi-agent AI applications.

Microsoft Launches Copilot Health AI Assistant

Microsoft has introduced Copilot Health, a secure AI-powered assistant designed to analyze medical records, wearable data, and health history to deliver personalized health insights.

By Samantha Reed Edited by Maria Konash

Microsoft has announced Copilot Health, a new artificial intelligence-powered service designed to help users better understand and manage their health data. The system creates a dedicated environment within Copilot where personal medical information, wearable device metrics, and health history can be analyzed to generate personalized insights.

The platform aims to help individuals interpret complex health information and prepare for discussions with healthcare providers. Microsoft said the tool is not intended to replace clinicians but to help users arrive at medical appointments with better context and questions about their health.

Copilot Health combines multiple sources of health data into a single profile. The system can analyze information such as activity levels, sleep patterns, vital signs, and other metrics collected from wearable devices. It also integrates electronic health records, medication lists, and clinical summaries from healthcare providers.

Microsoft said the platform can process health data from more than 50 wearable devices and health platforms, including Apple Health, Fitbit, and Oura, as well as medical records from over 50,000 healthcare organizations through HealthEx. It can also incorporate laboratory test results from services such as Function Health.

AI Insights From Health Data

The system uses AI models to detect patterns and generate insights from aggregated health data. For example, it may identify relationships between lifestyle metrics such as sleep quality and activity levels or highlight trends in medical test results over time.

Microsoft said the goal is to help users move beyond generic online health searches by providing guidance grounded in their individual health history.

Copilot Health also includes tools to help users locate medical providers. The platform connects to real-time healthcare directories in the United States, allowing users to search for doctors based on specialty, location, language, and insurance coverage.

The company said responses generated by Copilot Health are supported by information from medical organizations across more than 50 countries. The system includes citations and references from verified health sources and expert-written content from institutions such as Harvard Health.

Focus on Privacy and Clinical Oversight

Microsoft emphasized that Copilot Health is designed with strict privacy protections. Conversations and health data stored within the platform are isolated from the broader Copilot environment and protected with encryption and additional access controls.

The company said personal health information in Copilot Health will not be used for training AI models, and users can disconnect data sources or delete their information at any time.

Development of the system involved Microsoft’s internal clinical team as well as an external panel of more than 230 physicians from 24 countries who provided medical guidance and safety feedback. The platform also follows Microsoft’s responsible AI framework and has achieved ISO/IEC 42001 certification, an international standard for AI management systems.

Copilot Health will initially launch in the United States in English for adults aged 18 and older. Microsoft said access will begin through a waitlist as the company gradually expands the service and gathers feedback from early users.

The launch reflects a broader push to apply AI to healthcare navigation and patient support. Amazon recently introduced Health AI, an agentic assistant available through its website and app that helps users interpret medical records, manage prescriptions, and connect with doctors, with Prime members receiving limited free virtual consultations. Meanwhile, Anthropic has expanded its Claude platform with healthcare and life sciences tools including HIPAA-compliant connectors for clinical and research workflows, and OpenAI launched ChatGPT Health, a secure AI experience designed to integrate personal health data and assist users with wellness insights, lab results, and appointment preparation.


Netflix Buys Ben Affleck’s AI Startup for Up to $600 Million

Netflix has acquired InterPositive, an AI startup focused on post-production editing tools co-founded by Ben Affleck. The deal could reach $600 million as streaming platforms expand AI use in filmmaking.

By Samantha Reed Edited by Maria Konash
Netflix acquires Ben Affleck’s AI startup for up to $600M to develop post-production editing tools for filmmakers. Image: Jakob Owens / Unsplash

Netflix has acquired InterPositive, an artificial intelligence company focused on post-production tools for filmmakers. The startup was co-founded by actor and filmmaker Ben Affleck and develops software designed to assist editors and production teams in refining footage during the editing process.

According to Bloomberg, the transaction could be worth up to $600 million, potentially making it one of the largest acquisitions in Netflix’s history. Netflix has not publicly confirmed the full financial terms of the deal.

Sources familiar with the agreement told Bloomberg that the upfront cash payment may be lower, with additional payouts tied to performance targets. If those targets are met, the total value of the acquisition could approach the reported figure.

The largest acquisition previously completed by Netflix was the purchase of the Roald Dahl Story Company for about $700 million.

AI Tools Designed for Film Post-Production

InterPositive develops artificial intelligence tools aimed at helping filmmakers work more efficiently during the editing phase of production. The company’s software focuses on improving workflows rather than generating entirely new content.

Its technology can assist with tasks such as identifying continuity issues, enhancing scenes, and streamlining the process of reviewing and organizing large volumes of footage. These capabilities are intended to reduce time spent on manual editing work while maintaining creative control for filmmakers.

The company’s tools do not use copyrighted footage without permission or generate new scenes from scratch, practices that have been points of concern for many professionals in the entertainment industry as generative AI becomes more widely adopted.

Netflix has not yet announced how InterPositive’s technology will be integrated into its production pipeline. However, the acquisition aligns with the company’s ongoing efforts to incorporate artificial intelligence into film and television production workflows.

Streaming Platforms Accelerate AI Adoption

Netflix has already experimented with AI-assisted visual effects in its original content. One example includes the use of generative AI technology to create a building-collapse sequence in the Argentine series The Eternaut.

Other entertainment companies are also expanding AI initiatives. Amazon has been developing internal AI teams focused on film and television production, while Disney recently signed an agreement with OpenAI to explore the use of artificial intelligence in media workflows.

The growing role of AI in filmmaking has also raised concerns among industry professionals. Workers in the film and television sectors have warned that AI tools could affect employment in editing, visual effects, and other creative roles.

There are also ongoing debates about how AI models should compensate creators when training on copyrighted material. Industry unions and advocacy groups have called for clearer guidelines to ensure that creative professionals are fairly credited and compensated as AI technologies become more integrated into production pipelines.