Chinese Chipmakers See Record Revenue on AI Demand

Chinese chipmakers report record revenue driven by AI demand, memory shortages, and U.S. export curbs boosting domestic semiconductor growth.

By Olivia Grant. Edited by Maria Konash.
Chinese chip firms post record revenue as AI demand and export curbs drive local adoption. Image: Igor Omilaev / Unsplash

Chinese semiconductor companies are reporting record revenues as demand for artificial intelligence infrastructure accelerates and U.S. export restrictions reshape global supply chains. The combined effect has boosted domestic chip production and strengthened Beijing’s push for technological self-sufficiency.

Semiconductor Manufacturing International Corp. (SMIC), China’s largest chipmaker, reported a 16% year-over-year revenue increase to $9.3 billion in 2025, with projections exceeding $11 billion in 2026. Hua Hong also posted record quarterly revenue, reflecting strong demand across multiple chip segments.

The growth is being driven in part by domestic technology firms investing heavily in AI infrastructure. With limited access to advanced U.S. chips due to export controls, Chinese companies are increasingly turning to local suppliers to meet computing needs.

U.S. restrictions, particularly on high-performance GPUs and advanced semiconductor equipment, have accelerated China’s efforts to develop its own chip ecosystem. Analysts describe the restrictions as a catalyst that has intensified demand for domestically produced components across industries including AI, electric vehicles, and data centers.

Companies such as Moore Threads are benefiting from this shift, with the firm projecting more than 200% annual revenue growth as it works to position itself as a local alternative to global GPU leaders.

Memory Shortages and Technology Gaps Persist

In addition to logic chips, Chinese memory manufacturers are seeing significant gains. ChangXin Memory Technologies (CXMT) reported a sharp rise in revenue, driven by global shortages and rising demand for memory used in AI systems and consumer electronics.

High-bandwidth memory, a critical component for AI workloads, remains dominated by global players such as Samsung, SK Hynix, and Micron. However, export restrictions have created opportunities for domestic firms like CXMT to supply the Chinese market, even with older-generation technologies.

Despite strong revenue growth, Chinese semiconductor firms continue to lag behind global leaders in advanced manufacturing capabilities. Companies such as SMIC and Hua Hong are unable to produce cutting-edge chips at scale due to limited access to advanced lithography equipment from suppliers like ASML.

Efforts to build a fully domestic semiconductor supply chain are ongoing but face significant technical and financial challenges. China is attempting to replicate large portions of the global chip ecosystem, a process expected to take years.

While current growth is supported by import substitution and strong domestic demand, analysts warn of potential overcapacity in mature-node chips. Sustained progress will depend on whether Chinese firms can advance into higher-value segments, including next-generation memory and advanced logic chips, which are critical for long-term competitiveness in AI infrastructure.

AI & Machine Learning, Cloud & Infrastructure, News

OpenAI’s TBPN Deal Raises Questions Amid AI Expansion

OpenAI’s acquisition of tech media platform TBPN highlights an unconventional M&A strategy as the company prepares for a potential IPO and faces rising competition in AI.

By Samantha Reed. Edited by Maria Konash.
OpenAI acquires TBPN amid IPO push, expanding beyond core AI as competition intensifies. Image: Christian Wiediger / Unsplash

OpenAI’s acquisition of Technology Business Programming Network (TBPN), a live tech media platform, is drawing attention as the company expands beyond its core artificial intelligence products. The move comes more than 10 months after OpenAI’s $6.4 billion acquisition of Jony Ive’s device startup, underscoring an increasingly diverse and difficult-to-define M&A strategy.

The TBPN deal, whose financial terms were not disclosed, adds a media asset to OpenAI’s growing portfolio at a time when the company is under pressure to justify its valuation and spending. With billions invested in infrastructure and ongoing operating losses, OpenAI is balancing rapid expansion with increasing investor scrutiny ahead of a potential IPO.

TBPN, founded in 2024, has gained traction within the tech ecosystem through its daily live programming and high-profile guests. While relatively small in scale, the platform has built influence among founders, investors, and developers, making it a strategic channel for industry engagement.

OpenAI leadership has framed the acquisition as part of a broader effort to shape conversations around AI. The company has emphasized the importance of creating a space for constructive dialogue about the societal and economic impact of the technology. TBPN will operate with editorial independence, though it will be integrated into OpenAI’s strategy organization.

Analysts note that the acquisition may serve as a communications and positioning tool rather than a direct revenue driver. As competition intensifies, maintaining visibility and narrative control is becoming increasingly important for AI companies.

M&A Activity Intensifies Amid Competitive Pressure

The TBPN acquisition comes as OpenAI faces growing competition from companies such as Google, Anthropic, and Elon Musk’s xAI, which was acquired by SpaceX in February. At the same time, rivals are advancing toward public markets, increasing pressure on OpenAI to demonstrate sustainable growth and strategic clarity.

OpenAI has made several acquisitions and hires across sectors in recent months, including software, cybersecurity, and healthcare startups. The company has also brought in experienced leadership to guide corporate development, signaling continued interest in strategic deals.

Despite this activity, questions remain about how these acquisitions fit into a cohesive long-term strategy. Industry analysts suggest that OpenAI may be experimenting with different approaches to expand its ecosystem, from hardware and developer tools to media and community platforms.

The company’s ability to absorb and integrate these assets will be closely watched, particularly as it prepares for a potential IPO. Media-related acquisitions, while potentially valuable for influence and reach, have historically carried higher execution risk compared to core technology investments.

Still, OpenAI’s recent $122 billion funding round provides significant financial flexibility, allowing it to pursue smaller, high-visibility bets like TBPN. As the AI market evolves rapidly, the company appears to be testing multiple pathways to maintain relevance and differentiation.

The outcome of this strategy will likely depend on whether these investments translate into stronger user engagement, clearer positioning, and sustained competitive advantage in an increasingly crowded AI landscape.

AI & Machine Learning, News, Startups & Investment

Trump Administration Appeals Ruling Blocking Anthropic Pentagon Ban

The Trump administration has appealed a court decision blocking the Pentagon’s designation of Anthropic as a supply chain risk. The case centers on AI safety disagreements and government contracting restrictions.

By Samantha Reed. Edited by Maria Konash.
U.S. court blocks Pentagon ban on Anthropic, spotlighting tensions over AI safety and contracts. Image: Wesley Tingey / Unsplash

The Trump administration has filed an appeal against a federal court ruling that temporarily blocked the Pentagon from designating Anthropic as a supply chain risk. The move escalates a legal dispute that underscores growing tensions between AI developers and government agencies over safety standards and operational control.

The appeal follows a decision by U.S. District Judge Rita Lin, who sided with Anthropic and halted both the supply chain risk designation and a broader directive requiring federal agencies to sever ties with the company. The directive, if enforced, would prevent Anthropic from securing government contracts and restrict companies working with the military from partnering with the firm.

Judge Lin delayed the implementation of her ruling by one week, allowing the administration time to seek relief through the appeals process. The government’s response was widely anticipated given the implications of the decision for federal procurement and national security policy.

Anthropic initiated legal action after the Pentagon labeled the company a supply chain risk, reportedly following disagreements over AI safety conditions. The company has maintained that its technology should not be used in fully autonomous lethal weapons or for large-scale domestic surveillance.

In its complaint, Anthropic argued that the Pentagon’s actions were retaliatory and violated its constitutional rights. The company claimed it was penalized for expressing a “protected viewpoint” on the ethical use of artificial intelligence.

Broader Implications for AI Regulation

Judge Lin indicated that Anthropic is likely to succeed in its claims, citing concerns that due process requirements were not properly followed by the Department of Defense. The ruling has drawn attention across the technology sector, where companies are increasingly navigating complex relationships with government agencies.

Anthropic has previously partnered with the Pentagon, reflecting a trend in which AI firms collaborate with government agencies while attempting to maintain internal safeguards on how their technologies are deployed.

The outcome of the appeal could set a precedent for how far governments can go in restricting private AI companies based on policy disagreements. It may also influence how future contracts between AI developers and defense agencies are structured, particularly regarding usage limitations and compliance requirements.

AI & Machine Learning, News, Regulation & Policy

OpenAI Rolls Out Pay-As-You-Go Codex Pricing for Developers

OpenAI has introduced pay-as-you-go Codex pricing for ChatGPT Business and Enterprise users. The update aims to simplify adoption and expand usage across development teams.

By Samantha Reed. Edited by Maria Konash.
OpenAI introduces pay-as-you-go Codex pricing and cuts ChatGPT Business costs to boost adoption. Image: OpenAI

OpenAI is introducing a new pricing structure for Codex, its AI coding agent, aimed at making adoption more flexible for enterprise teams. Starting today, organizations using ChatGPT Business and Enterprise can add Codex-only seats with pay-as-you-go pricing, removing the need for fixed per-seat fees.

The update allows teams to access Codex without upfront commitments, enabling smaller groups to run targeted pilots before scaling usage across the organization. Instead of subscription-based pricing, usage is billed based on token consumption, offering more transparency into how activity translates into cost.

Codex-only seats also remove rate limits, allowing unrestricted usage based on demand. This model is designed to give teams greater control over budgeting, particularly for engineering workflows that may vary in intensity over time.
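As a rough illustration of how consumption-based billing differs from fixed per-seat licensing, the sketch below compares the two models. The per-token rate and usage figures are hypothetical placeholders, since the article does not disclose Codex's actual per-token prices.

```python
# Hypothetical comparison of per-seat vs. pay-as-you-go billing.
# RATE_PER_MILLION_TOKENS is an assumed placeholder for illustration,
# NOT OpenAI's actual Codex rate, which the article does not specify.
RATE_PER_MILLION_TOKENS = 4.00  # USD, assumed

def usage_cost(tokens_consumed: int) -> float:
    """Cost under pay-as-you-go billing: pay only for tokens used."""
    return tokens_consumed / 1_000_000 * RATE_PER_MILLION_TOKENS

def seat_cost(seats: int, per_seat_fee: float = 25.0) -> float:
    """Cost under fixed per-seat licensing: pay regardless of usage."""
    return seats * per_seat_fee

# A small pilot team with light usage pays far less under
# consumption billing than under fixed seats at the assumed rate.
pilot_bill = usage_cost(2_000_000)  # 2M tokens -> $8.00 (assumed rate)
fixed_bill = seat_cost(5)           # 5 seats   -> $125.00
```

The crossover point depends entirely on actual usage intensity, which is why usage-based billing favors the targeted pilots described above.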

At the same time, OpenAI is adjusting its broader pricing strategy. ChatGPT Business pricing has been reduced from $25 to $20 per seat on annual plans, making standard access more affordable for organizations that want bundled features, including limited Codex usage.

To further encourage adoption, OpenAI is offering promotional credits for new Codex users. Eligible ChatGPT Business workspaces can receive $100 in credits per new Codex-only user, up to $500 per team, as part of a limited-time incentive.
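The stated terms of the incentive, $100 per new Codex-only user capped at $500 per team, reduce to a one-line calculation:

```python
def promo_credits(new_codex_users: int) -> int:
    """Promotional credits per the stated terms: $100 per new
    Codex-only user, capped at $500 per team."""
    return min(new_codex_users * 100, 500)

# Three new users earn $300; beyond five users the cap applies.
assert promo_credits(3) == 300
assert promo_credits(10) == 500
```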

Rapid Growth in Developer Usage

The pricing changes come as Codex adoption accelerates across enterprise environments. OpenAI reports that more than 2 million developers now use Codex weekly, while business users of ChatGPT exceed 9 million overall. Within the Business and Enterprise tiers, Codex usage has grown sixfold since the beginning of the year.

Companies including Notion, Ramp, Braintrust, and Wasmer are using Codex to streamline software development workflows. Common use cases include automating repetitive coding tasks, generating production-ready code, and improving collaboration across engineering teams.

The company is also expanding Codex’s integration capabilities. New features such as plugins and automations allow teams to connect Codex with internal tools and external systems, enabling more complex and coordinated workflows. Dedicated applications for macOS and Windows further support adoption by embedding the tool directly into developer environments.

The shift toward usage-based pricing reflects a broader trend in enterprise AI, where companies are moving away from fixed licensing models toward consumption-based billing. This approach aligns costs more closely with value delivered, particularly for tools that are used intermittently or scale dynamically across teams.

AI & Machine Learning, News

OpenAI Acquires Tech Media Firm TBPN to Shape AI Narrative

OpenAI has acquired tech media company TBPN to strengthen its communication strategy around AI. The deal aims to scale industry dialogue while preserving TBPN’s editorial independence.

By Samantha Reed. Edited by Maria Konash.
OpenAI acquires TBPN to expand its AI communications strategy while preserving editorial independence. Image: OpenAI

OpenAI has acquired Technology Business Programming Network (TBPN), a fast-growing tech media platform, as part of a broader effort to reshape how it communicates about artificial intelligence. The company said the acquisition is aimed at strengthening engagement with developers, businesses, and the wider public as AI adoption accelerates globally.

TBPN has built a strong following through its daily live programming focused on technology, startups, and AI developments. The platform is known for convening industry voices and providing real-time commentary on major announcements across the tech ecosystem.

OpenAI plans to integrate TBPN into its Strategy organization, where the team will report to Chris Lehane. The move reflects a shift toward more direct and continuous communication channels, rather than relying solely on traditional corporate messaging.

According to OpenAI leadership, the scale and pace of AI development require new approaches to public engagement. The company is positioning TBPN as a platform to facilitate ongoing discussions about the impact of AI, particularly among builders and users of the technology.

A key condition of the acquisition is maintaining TBPN’s editorial independence. The platform will continue to manage its programming, guest selection, and content decisions independently, a structure OpenAI said is essential to preserving credibility and trust within the tech community.

Strategic Role in the AI Ecosystem

Beyond content, OpenAI expects TBPN to contribute to its broader communications and marketing efforts. The media team’s experience in digital engagement and audience development is expected to support how OpenAI presents its products and research to a global audience.

TBPN’s founders and leadership team, including Jordi Hays, John Coogan, and Dylan Abruscato, will join OpenAI as part of the transition. The company noted that TBPN’s ability to capture industry sentiment and translate complex developments into accessible discussions aligns with its long-term goals.

The acquisition also reflects increasing competition among AI companies not only in technology development but in shaping public understanding of the field. As AI systems become more embedded in daily life, communication strategies are emerging as a key differentiator alongside model performance and infrastructure.

TBPN stated that its move from independent commentary to direct involvement in AI distribution and communication represents an opportunity to have a more tangible impact on how the technology is understood globally.

AI & Machine Learning, News, Startups & Investment

Microsoft Launches MAI Models for Speech, Voice, and Image AI

Microsoft has introduced MAI-Transcribe-1, MAI-Voice-1, and MAI-Image-2, expanding its AI model lineup with faster performance and competitive pricing for developers. The models are now available through Microsoft Foundry.

By Samantha Reed. Edited by Maria Konash.
Microsoft launches MAI models for voice, transcription, and images with faster speeds and lower costs. Image: Microsoft

Microsoft has unveiled a new suite of AI models under its MAI branding, including MAI-Transcribe-1, MAI-Voice-1, and MAI-Image-2, aimed at strengthening its position in multimodal AI. The models are now available through Microsoft Foundry and the MAI Playground, targeting developers building applications across speech, voice, and visual content.

MAI-Transcribe-1 focuses on speech-to-text capabilities, delivering state-of-the-art accuracy across 25 widely used languages. According to benchmark results, the model achieves a lower average word error rate compared to several competing systems, indicating improved transcription quality. It is also designed for real-world conditions, handling noisy or complex audio environments.

Performance is a key differentiator. Microsoft states that MAI-Transcribe-1 processes batch transcription tasks up to 2.5 times faster than its existing Azure-based offerings. The model is priced starting at $0.36 per hour, positioning it competitively among cloud providers offering similar services.

MAI-Voice-1, the company’s latest voice generation model, emphasizes realism and expressive output. It supports natural speech synthesis with emotional nuance and can maintain speaker identity across longer audio segments. Developers can also create custom voices using short audio samples, expanding use cases in voice assistants, media production, and enterprise applications.

Focus on Speed, Cost, and Enterprise Adoption

MAI-Image-2 completes the model trio, targeting image generation with improved speed and quality. Microsoft reports that the model delivers at least twice the generation speed of earlier versions while maintaining visual fidelity. It is designed for professional use cases such as marketing, design, and content creation, with a focus on realistic lighting, accurate textures, and legible in-image text.

Pricing reflects Microsoft’s broader strategy to compete on cost efficiency. MAI-Image-2 is offered at $5 per million tokens for text input and $33 per million tokens for image output, while MAI-Voice-1 starts at $22 per million characters. The company is positioning the MAI family as offering strong price-to-performance across modalities.
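Using only the per-unit prices quoted above, a back-of-the-envelope estimator shows how a mixed workload across the three models would be billed. The workload sizes in the example are illustrative; only the rates come from the announced pricing.

```python
# Rates as quoted in the announcement (USD).
TRANSCRIBE_PER_HOUR = 0.36   # MAI-Transcribe-1, per audio hour
IMAGE_TEXT_IN_PER_M = 5.0    # MAI-Image-2, per 1M text input tokens
IMAGE_OUT_PER_M = 33.0       # MAI-Image-2, per 1M image output tokens
VOICE_PER_M_CHARS = 22.0     # MAI-Voice-1, per 1M characters

def mai_cost(audio_hours=0.0, text_in_tokens=0,
             image_out_tokens=0, voice_chars=0):
    """Estimate a bill from the quoted list prices.
    Workload sizes are caller-supplied and illustrative."""
    return (audio_hours * TRANSCRIBE_PER_HOUR
            + text_in_tokens / 1e6 * IMAGE_TEXT_IN_PER_M
            + image_out_tokens / 1e6 * IMAGE_OUT_PER_M
            + voice_chars / 1e6 * VOICE_PER_M_CHARS)

# 100 hours of transcription, 1M image output tokens, and 2M voice
# characters: 100 * 0.36 + 33.0 + 2 * 22.0 = 113.0
total = mai_cost(audio_hours=100,
                 image_out_tokens=1_000_000,
                 voice_chars=2_000_000)
```

Note that image generation and voice synthesis bill in different units (tokens vs. characters), so workloads must be converted to the right unit before comparing costs across modalities.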

Enterprise adoption is already underway. WPP, a global marketing and communications group, is among early partners using MAI-Image-2 for large-scale creative production. Microsoft plans to integrate these models across its own ecosystem, including Copilot products and enterprise tools.

The company said the MAI models were developed with built-in safety measures and tested through internal evaluation processes, reflecting ongoing efforts to align performance improvements with responsible AI deployment.

The company is also expanding Copilot with multi-model AI workflows, enabling systems like GPT and Claude to collaborate on responses to improve accuracy and reliability, further reinforcing its strategy to integrate diverse AI capabilities into a unified platform.