LinkedIn Adds Verified AI Skill Certificates to Profiles

LinkedIn is rolling out verified AI skill certifications that let users showcase real-world proficiency with popular AI tools, based on ongoing product usage rather than tests or self-declared skills.

By Samantha Reed, edited by Maria Konash
LinkedIn debuts AI skill certifications tied to real-world tool usage. Photo: Zulfugar Karimov / Unsplash

LinkedIn has introduced a new feature that allows users to display verified AI skill certifications on their profiles, signaling a shift away from self-reported skills and short-form tests toward proof based on real-world usage. The update is part of a broader effort by the Microsoft-owned professional networking platform to make profiles more reflective of applied, in-demand capabilities.

The company said the certifications will be issued through partnerships with AI-first software platforms, starting with Lovable, Relay.app, and Replit. Qualified users can link their accounts on those services to LinkedIn, where certificates reflecting their level of proficiency will appear automatically. Additional partners, including Gamma, GitHub, Zapier, and Descript, are expected to join the program in the coming months.

Unlike traditional certifications that rely on exams or one-time assessments, LinkedIn’s model is based on continuous evaluation. Partner platforms assess how users work within their products, analyzing usage patterns, outcomes, and overall sophistication over time. Once a user meets a platform’s internal threshold for proficiency, the verified skill badge is added to their LinkedIn profile.
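The continuous-evaluation model described above can be sketched as a running score checked against a threshold. This is an illustrative assumption of how such a system might work, not LinkedIn's or any partner's actual API; all names, scoring rules, and the 0.75 threshold are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of continuous evaluation: a partner platform scores
# usage over time and issues a badge once its internal threshold is met.
# All names and numbers here are illustrative assumptions.

@dataclass
class UsageSignal:
    pattern_score: float   # sophistication of usage patterns (0-1)
    outcome_score: float   # quality of produced outcomes (0-1)

@dataclass
class PartnerEvaluator:
    threshold: float = 0.75
    history: list = field(default_factory=list)

    def record(self, signal: UsageSignal) -> None:
        # Each observation contributes equally to the running record.
        self.history.append((signal.pattern_score + signal.outcome_score) / 2)

    def proficiency(self) -> float:
        # Continuous evaluation: average over the whole usage history,
        # rather than a one-time exam score.
        return sum(self.history) / len(self.history) if self.history else 0.0

    def badge_earned(self) -> bool:
        return self.proficiency() >= self.threshold

evaluator = PartnerEvaluator()
for s in [UsageSignal(0.6, 0.7), UsageSignal(0.8, 0.9), UsageSignal(0.9, 0.8)]:
    evaluator.record(s)
print(evaluator.badge_earned())
```

The key design point, per the article, is that the evaluation runs continuously: improving (or lapsing) usage would move the score over time rather than being frozen at a single test date.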

Pat Whelan, head of career products at LinkedIn, said the goal is to provide hiring managers with a more reliable signal of capability. The certifications are also designed to feed into LinkedIn’s own hiring and recruiting tools, including AI-driven candidate matching.

Proof Through Usage, Not Tests

LinkedIn said the exact criteria for proficiency will vary by partner and has not disclosed benchmarks or minimum usage requirements. The company said this flexibility allows product makers, rather than LinkedIn, to define what meaningful expertise looks like for their tools. Experience gained through side projects or independent work will count toward certification, not just usage in a formal job setting.

Hari Srinivasan, LinkedIn’s vice president of product, described verified skills as an extension of the platform’s broader trust initiatives. LinkedIn’s identity verification system has been adopted by more than 100 million users, and the company views verified AI skills as an additional layer of credibility for both job seekers and employers.

The move reflects changing hiring expectations as AI tools become embedded across roles beyond software engineering. Employers are increasingly seeking candidates who can demonstrate practical experience with modern tools rather than familiarity in name only.

Rising Demand for AI Skills

The rollout comes amid strong growth in demand for AI-related skills across industries. An edX report published last year found that job postings requiring AI capabilities doubled over a 12-month period. Data from Indeed’s Hiring Lab showed that by the end of 2025, more than four percent of U.S. job listings referenced AI skills, with growing demand in fields such as finance, marketing, and operations.

By anchoring certifications to hands-on tools such as Replit and GitHub, LinkedIn is promoting a more applied definition of AI literacy. The approach may help employers cut through inflated skill claims, but it also raises questions about transparency, consistency, and how disputes over automated assessments will be handled as the program scales.

For now, LinkedIn is betting that verified proof of work will carry more weight than endorsements or buzzwords, as AI tools become a core requirement in an increasingly competitive job market.

The feature also fits into a broader expansion of AI across the platform, including the recent launch of a natural language–based people search tool that lets users find relevant professionals by describing who they are looking for rather than relying on filters or job titles. Together, these updates underscore LinkedIn’s effort to make profiles and connections more dynamic, skill-driven, and useful in an increasingly AI-shaped job market.


Microsoft Adds Multi-Model AI Workflows to Copilot

Microsoft has introduced multi-model capabilities in Copilot, allowing GPT and Claude to collaborate on responses to improve accuracy and reliability.

By Daniel Mercer, edited by Maria Konash
Microsoft upgrades Copilot with GPT and Claude workflows, boosting accuracy and reducing hallucinations. Image: Microsoft

Microsoft has introduced new multi-model capabilities to its Copilot assistant, enabling users to leverage multiple artificial intelligence systems within a single workflow as competition intensifies in the enterprise AI market.

The update allows Copilot’s Researcher agent to combine outputs from OpenAI’s GPT models and Anthropic’s Claude, marking a shift from relying on a single model to a collaborative AI approach. The company said the feature is designed to improve accuracy, reduce errors, and enhance overall productivity.

The move reflects a broader industry trend toward integrating multiple AI systems to balance strengths and mitigate weaknesses, particularly as businesses increasingly depend on AI for critical workflows.

AI Models Collaborate on Responses

At the core of the update is a new feature called “Critique.” In this workflow, GPT generates an initial response, which is then reviewed by Claude for quality and accuracy before being delivered to the user.

Microsoft said this layered approach helps address hallucinations, a key challenge in generative AI in which models produce incorrect or misleading information. By introducing a second model as a reviewer, the system aims to provide more reliable outputs.

The company plans to expand this capability further by making the process bi-directional, allowing GPT to also review Claude-generated responses. This would create a feedback loop between models, potentially improving performance over time.
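The Critique workflow described above amounts to a two-stage pipeline: one model drafts, a second reviews before delivery. The sketch below uses stand-in stub functions, not Microsoft's actual Copilot API; the function names and behavior are assumptions for illustration.

```python
from typing import Callable

# Hypothetical sketch of a "Critique"-style workflow: a generator model
# drafts a response, then a reviewer model checks it against the prompt
# before the result reaches the user. The stubs below are illustrative
# placeholders, not real model calls.

def critique_workflow(prompt: str,
                      generator: Callable[[str], str],
                      reviewer: Callable[[str, str], str]) -> str:
    draft = generator(prompt)          # stage 1: generate
    return reviewer(prompt, draft)     # stage 2: review, then deliver

def gpt_stub(prompt: str) -> str:
    return f"draft answer to: {prompt}"

def claude_stub(prompt: str, draft: str) -> str:
    # A real reviewer would flag unsupported claims; the stub just
    # marks the draft as reviewed.
    return f"[reviewed] {draft}"

result = critique_workflow("Summarize Q3 revenue", gpt_stub, claude_stub)
print(result)
```

Making the process bi-directional, as the article notes Microsoft plans, would simply mean calling the same pipeline with the generator and reviewer roles swapped.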

Toward Multi-Model AI Systems

Microsoft is also launching a feature called “Model Council,” which allows users to compare outputs from different AI models side by side. This gives users greater visibility into how different systems interpret the same query and enables more informed decision-making.

The updates are part of Microsoft’s broader effort to evolve Copilot into a more advanced agentic system capable of handling complex, multi-step tasks. The company has been expanding access to Copilot Cowork, an AI agent designed to assist with collaborative workflows across enterprise environments.

The introduction of multi-model functionality highlights a shift in strategy, where AI tools are no longer tied to a single provider or architecture. Instead, platforms are increasingly designed to orchestrate multiple models to deliver better results.

Microsoft faces growing competition from other AI providers, including Google’s Gemini and Anthropic’s enterprise-focused tools. By enabling collaboration between leading models, the company is positioning Copilot as a flexible platform that can integrate capabilities from across the AI ecosystem.

The latest updates underscore the importance of reliability and interoperability in enterprise AI adoption, as organizations seek systems that can deliver consistent and trustworthy results at scale. The expansion also aligns with Microsoft’s broader push into applied AI, including the launch of Copilot Health, a secure assistant designed to analyze medical records, wearable data, and health history to deliver personalized health insights.

OpenAI Launches Codex Plugins With Slack and Notion Integrations

OpenAI has launched plugin support for Codex, enabling integrations with tools like Slack, Notion, and Gmail as it builds an ecosystem for AI-driven workflows.

By Daniel Mercer, edited by Maria Konash
OpenAI launches Codex plugins with Slack, Notion, Figma, and Gmail, expanding into a full workflow ecosystem. Image: Joshua Reddekopp / Unsplash

OpenAI has introduced plugin support for Codex, expanding its development tool into a broader platform for AI-driven workflows with integrations across popular workplace applications.

The new feature allows users to connect Codex with services including Slack, Notion, Figma, Gmail, and Google Drive. Through these integrations, Codex can access external data, automate tasks, and execute workflows that extend beyond traditional code generation.

The launch also marks the beginning of a plugin marketplace strategy, where reusable AI workflows can be distributed and adopted across teams with minimal setup.

Building an AI Workflow Ecosystem

Plugins in Codex are designed as bundled units that combine predefined workflows, integrations with external applications, and support for Model Context Protocol servers. This structure allows developers and teams to create reusable configurations tailored to specific tasks.

For example, Codex can be used to summarize Slack channels, manage documents in Google Drive, or generate and modify designs through Figma integrations. These capabilities position the tool as more than a coding assistant, enabling it to function as a general-purpose productivity layer across enterprise environments.

Previously, similar workflows required manual configuration and technical expertise. With the introduction of plugins, users can install and deploy these capabilities through a centralized directory, lowering the barrier to adoption.
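Based on the description above, a plugin bundle combines workflows, app integrations, and Model Context Protocol servers into one installable unit. The sketch below imagines what such a bundle might contain; the schema, field names, and endpoint are assumptions, not OpenAI's actual plugin format.

```python
# Illustrative sketch of a Codex plugin bundle per the article's description:
# predefined workflows + external app integrations + MCP server support.
# The schema and all values here are hypothetical assumptions.

slack_digest_plugin = {
    "name": "slack-daily-digest",
    "integrations": ["slack", "gmail"],          # external apps it connects
    "mcp_servers": ["https://example.com/mcp"],  # hypothetical MCP endpoint
    "workflows": [
        {
            "trigger": "daily",
            "steps": [
                "read unread messages from #engineering",
                "summarize key decisions",
                "email the summary to the team lead",
            ],
        }
    ],
}

# The point of bundling: a team installs this once from a directory,
# instead of wiring up each integration and workflow step manually.
print(slack_digest_plugin["name"])
```

The bundle structure is what lowers the adoption barrier the article mentions: the configuration work is done once by the plugin author rather than repeated by every user.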

The approach aligns with a broader shift in the AI sector toward agent-based systems that can execute multi-step tasks across different tools and services.

Competing in the AI Platform Race

The expansion of Codex into a plugin-enabled platform reflects increasing competition among AI providers to build extensible ecosystems. Rivals have already emphasized integrations and modular architectures, particularly for enterprise use cases.

By launching a plugin marketplace, OpenAI is aiming to create a network effect around Codex, where third-party developers can contribute tools and workflows that enhance the platform’s capabilities. This model mirrors strategies seen in cloud software and developer platforms, where ecosystems play a key role in driving adoption.

The inclusion of widely used services such as Slack, Notion, and Gmail highlights a focus on real-world productivity use cases. It also signals a move toward embedding AI more deeply into everyday workflows, rather than limiting it to isolated development tasks.

As organizations increasingly adopt AI agents to automate complex processes, tools like Codex are evolving to serve as coordination layers across software environments. The addition of plugins positions OpenAI to capture a larger share of this emerging market for AI-powered work platforms.


DDR5 Prices Fall as Google TurboQuant Reshapes AI Memory Demand

DDR5 memory prices are showing early signs of decline after Google’s TurboQuant algorithm reduced AI memory requirements, easing pressure on global DRAM supply.

By Olivia Grant, edited by Maria Konash
DDR5 prices dip as TurboQuant cuts AI memory demand, easing post-OpenAI surge. Image: Liam Briese / Unsplash

DDR5 memory prices are beginning to show signs of easing after a prolonged surge driven by artificial intelligence demand, with analysts pointing to a recent breakthrough in AI efficiency as a key turning point.

The shift follows the introduction of TurboQuant, a compression algorithm unveiled by Google that significantly reduces the memory requirements of large AI models. By lowering demand for high-bandwidth memory and DRAM, the development is starting to rebalance a market that had been under intense pressure from AI infrastructure expansion.

The price movement marks a rare reversal after a sharp increase in 2025, when expectations around AI-driven demand pushed memory costs to record levels.

AI Demand Fueled Price Surge

Last year, the market reacted strongly to reports that OpenAI had signed preliminary agreements with major memory manufacturers Samsung and SK Hynix for up to 40% of global DRAM output. Although the agreements were non-binding letters of intent rather than firm purchase commitments, they were widely interpreted as indicative of massive future demand.

That perception drove DDR5 prices up by as much as 171%, with high-capacity memory kits becoming significantly more expensive. The surge also reflected broader investment in AI data centers, where memory is a critical component for training and running large-scale models.

However, some large infrastructure projects later faced delays or cancellations amid uncertainty over actual demand, contributing to growing volatility in the memory market.

TurboQuant Shifts Market Dynamics

The release of Google’s TurboQuant algorithm has introduced a new variable into the equation. The technology reduces key-value cache memory requirements by up to six times while maintaining performance, potentially lowering the amount of DRAM needed for AI workloads.

This improvement could have a direct impact on data center design, enabling operators to run large models with fewer memory modules. As a result, some supply may shift back toward consumer markets, including gaming and personal computing.

Early signs of this shift are emerging. In the United States, certain DDR5 modules, including Corsair Vengeance kits, have seen modest price declines at major retailers. Similar trends have been reported in parts of Europe, suggesting a broader stabilization in pricing.

Limited Relief for Consumers

Despite these developments, the overall memory market remains constrained. Most DRAM supply continues to be prioritized for enterprise customers, particularly hyperscalers building AI infrastructure.

Industry trackers indicate that while prices are leveling off, widespread declines have yet to materialize across all products. Analysts caution that improvements in efficiency could paradoxically drive further AI adoption, sustaining long-term demand for memory.

The broader impact of TurboQuant may depend on how quickly it is adopted and whether it leads to a net reduction in hardware requirements or enables even larger and more complex AI systems.

For now, the easing of DDR5 prices reflects an early adjustment in a market that has been heavily influenced by AI-driven expectations. It also highlights how advances in software efficiency can have immediate ripple effects across hardware supply chains.


Anthropic Leak Reveals New Claude Mythos Model

A data leak at Anthropic exposed details of its upcoming Claude Mythos model, described as a major leap in AI capabilities, along with internal documents.

By Samantha Reed, edited by Maria Konash
Anthropic leak reveals Claude Mythos, an advanced reasoning and cybersecurity AI model. Image: Anthropic

Anthropic has confirmed details of a forthcoming AI model after a security lapse exposed internal documents, revealing what the company describes as a significant advancement in its Claude family of systems.

The leak, caused by a configuration error in Anthropic’s content management system, made nearly 3,000 unpublished assets publicly accessible. The exposed data included draft blog posts, images, and internal PDFs. Security researchers identified the issue and alerted the company, which then restricted access.

Anthropic said the incident resulted from “human error” and described the materials as early drafts intended for future publication.

New Model Tier Above Opus

Among the leaked documents was information about a new model referred to as Claude Mythos, internally codenamed “Capybara.” The model is expected to introduce a new tier above Anthropic’s current lineup, which includes Opus, Sonnet, and Haiku.

According to the draft materials, the new system is designed to be more capable than the existing Opus models, particularly in areas such as coding, academic reasoning, and cybersecurity. Anthropic confirmed it is developing a next-generation general-purpose model and described it as a “step change” in capability.

The addition of a higher-tier model suggests Anthropic is continuing to scale its systems in response to growing competition in advanced AI, particularly in enterprise and technical domains.

Cybersecurity Concerns and Controlled Release

The leaked documents highlighted cybersecurity as a key area of focus for the new model. Anthropic reportedly considers its capabilities in this domain to be significantly ahead of existing systems, raising concerns about potential misuse.

To address these risks, the company plans to limit early access to organizations focused on cybersecurity defense. This approach is intended to allow institutions to strengthen protections before broader deployment.

Anthropic has previously taken steps to mitigate misuse of its models, including blocking attempts to use its tools for cybercrime. The enhanced capabilities described in the leaked materials indicate a growing emphasis on both offensive and defensive implications of AI systems.

Broader Implications and Internal Exposure

In addition to model details, the leak revealed plans for internal events, including an invite-only gathering for European business leaders. The exposure of such materials underscores the risks associated with managing sensitive information in rapidly evolving AI organizations.

The incident comes at a time when Anthropic is expanding its influence in the AI sector, with increased enterprise adoption and ongoing infrastructure investments. It also aligns with broader strategic developments, as the company is reportedly targeting an IPO as early as October while intensifying its enterprise push and scaling infrastructure to compete more directly with OpenAI.

While Anthropic has moved quickly to secure the exposed data, the leak provides an early look at its next-generation model strategy. It also illustrates how operational vulnerabilities can expose critical information in an industry where technological advances are closely watched.

SoftBank Secures $40B Loan to Expand OpenAI Investment

SoftBank has secured a $40 billion bridge loan to deepen its investment in OpenAI and accelerate its broader AI strategy.

By Samantha Reed, edited by Maria Konash
SoftBank secures $40B loan to double down on OpenAI and AI infrastructure. Image: insung yoon / Unsplash

SoftBank Group has secured a $40 billion bridge loan to fund its growing investments in artificial intelligence, including a deeper commitment to OpenAI, as competition intensifies across the sector.

The Japanese investment firm said the unsecured loan will be used to support its AI strategy and general corporate purposes. The financing, which matures in March 2027, was arranged by a group of major lenders including JPMorgan Chase, Goldman Sachs, Mizuho Bank, Sumitomo Mitsui Banking Corporation, and MUFG Bank.

The move marks one of SoftBank’s largest financing efforts in recent years and highlights founder Masayoshi Son’s renewed focus on AI following a period of volatility in the company’s Vision Fund performance.

Expanding Partnership With OpenAI

SoftBank has been steadily increasing its exposure to OpenAI, the developer of ChatGPT, as generative AI adoption accelerates globally. The company previously committed $30 billion to OpenAI through its Vision Fund 2, positioning itself among the largest investors in the space.

The new financing is expected to further strengthen that relationship, as SoftBank seeks to capitalize on the rapid growth of AI-driven applications and infrastructure. OpenAI, backed by Microsoft, has emerged as a central player in the industry, attracting significant enterprise demand and investor interest.

SoftBank and OpenAI have also collaborated on large-scale initiatives, including the Stargate Project, which aims to invest up to $500 billion in AI infrastructure in the United States over four years. The project reflects the increasing importance of computing capacity and data centers in supporting advanced AI systems.

Strategic Shift Toward AI Infrastructure

The loan underscores SoftBank’s broader strategy to position itself at the center of the AI ecosystem, spanning both software and infrastructure investments. The company has signaled plans to deploy substantial capital into AI-related projects, including a previously announced $100 billion investment in U.S. technology and infrastructure.

This approach aligns with a wider industry trend, where companies are investing heavily in data centers, chips, and cloud platforms to support the growing computational demands of AI models.

SoftBank’s renewed focus on AI comes after years of mixed performance from its Vision Fund, which saw both significant gains and losses across technology investments. By concentrating on AI, the firm is betting on a sector widely viewed as a key driver of future economic growth.

The scale of the financing also reflects the capital-intensive nature of AI development. As companies race to build more powerful systems, access to funding and infrastructure is becoming a critical competitive factor.
