Monthly Archives: December 2025
SoftBank Reportedly Completes $40 Billion Investment in OpenAI
SoftBank has finalized its $40 billion investment in OpenAI, increasing its stake above 10%. The funding supports AI infrastructure initiatives, joint ventures, and OpenAI’s broader growth, including plans for a potential IPO.
SoftBank has completed its $40 billion investment commitment to OpenAI, sources told CNBC, with the Japanese conglomerate transferring a final $22–22.5 billion last week. Earlier, SoftBank had invested $8 billion directly in OpenAI and syndicated an additional $10 billion with co-investors, bringing its total stake in the company to over 10%.
The funding was initially reported in February at a $260 billion pre-money valuation and was scheduled to be distributed over a 12-to-24-month period. Some of the capital is earmarked for OpenAI’s Stargate joint venture with Oracle and SoftBank, aimed at expanding AI infrastructure to meet growing compute demands.
Supporting AI Infrastructure and Partnerships
OpenAI has committed more than $1.4 trillion in infrastructure investments over the coming years, including agreements with chipmakers Nvidia, Advanced Micro Devices, and Broadcom. These commitments reflect the company’s effort to scale capacity for AI solutions, particularly as demand for generative AI continues to surge.
SoftBank has been an active investor in AI and technology companies for years, previously backing Nvidia and most recently agreeing to acquire DigitalBridge for $4 billion to strengthen its AI initiatives. In parallel, the conglomerate liquidated its $5.8 billion stake in Nvidia, freeing capital to support its OpenAI investment.
Growing Strategic and Corporate Backing
Ahead of a potential IPO, OpenAI has also received substantial backing from major tech players. Microsoft has been a longstanding investor and strategic partner, while Amazon is reportedly considering a potential investment exceeding $10 billion. Entertainment giant Disney recently invested $1 billion, giving users of OpenAI’s video generator Sora access to licensed content such as Mickey Mouse.
The completed SoftBank funding strengthens OpenAI’s position as it explores a potential initial public offering and continues to expand AI infrastructure, product offerings, and partnerships globally.
Meta Acquires AI Startup Manus for $2 Billion
Meta Platforms has agreed to acquire Singapore-based AI startup Manus for about $2 billion, gaining a rare revenue-generating AI agent platform. The deal strengthens Meta’s AI strategy as investor scrutiny grows around heavy infrastructure spending.
Meta Platforms has agreed to acquire Manus, a Singapore-based AI startup, in a deal valued at roughly $2 billion, according to people familiar with the matter. The acquisition gives Meta control of one of the most closely watched AI agent platforms to emerge in the past year and adds a business with meaningful revenue at a time when returns on AI investment are under scrutiny.
Manus drew attention last spring after releasing a demo video showing an AI agent capable of screening job candidates, planning travel, and analyzing stock portfolios. The company claimed its system outperformed OpenAI’s Deep Research on certain tasks, helping it gain rapid visibility in Silicon Valley.
In April, venture capital firm Benchmark led a $75 million funding round that valued Manus at $500 million post-money. Benchmark general partner Chetan Puttagunta joined the company’s board. Chinese media later reported that Tencent, ZhenFund, and HSG, formerly Sequoia China, had also invested through an earlier $10 million round.
Since launch, Manus has reported signing up millions of users and generating more than $100 million in annual recurring revenue from monthly and yearly subscriptions. That traction appears to have prompted Meta’s interest. The Wall Street Journal reported that Meta agreed to pay the valuation Manus was seeking for its next funding round.
Strategic Fit and Political Scrutiny
For Meta Chief Executive Officer Mark Zuckerberg, the acquisition offers a concrete example of an AI product that is already monetizing at scale. Meta has committed tens of billions of dollars to AI infrastructure, part of an industry-wide spending surge that has unsettled some investors concerned about long-term returns.
Meta said Manus will continue to operate independently while its AI agents are integrated across Facebook, Instagram, and WhatsApp. Meta AI is already available across those platforms, and Manus’ task-oriented agents could expand use cases beyond conversational assistants into productivity and commerce.
The deal also carries geopolitical considerations. Manus’ founders are Chinese nationals who established its parent company, Butterfly Effect, in Beijing in 2022 before relocating operations to Singapore in mid-2025. The company’s earlier Chinese investment has already drawn attention from U.S. lawmakers.
Senator John Cornyn, a Republican member of the Senate Intelligence Committee, previously criticized Benchmark over its investment in Manus, raising concerns about U.S. capital supporting companies with Chinese roots. Cornyn has been a vocal advocate for tighter oversight of technology investment tied to China, a stance that has gained bipartisan support in Congress.
Meta has moved to address those concerns. The company told Nikkei Asia that following the acquisition, Manus will sever ties with Chinese investors and cease operations in China. A Meta spokesperson said there will be no continuing Chinese ownership interests after the transaction.
The acquisition also aligns with Meta’s broader AI roadmap. The company is developing new image, video, and text models under its superintelligence lab for release in the first half of 2026. Meta AI has also begun partnering with major news publishers to deliver real-time global and entertainment coverage, signaling a push to pair advanced AI capabilities with fresh content and monetizable use cases across its platforms.
ACCA Ends Remote Exams Over AI Misconduct Risks
The Association of Chartered Certified Accountants will discontinue remote exams from March 2026, citing rising misconduct linked to AI tools. Most candidates will return to in-person test centers.
The Association of Chartered Certified Accountants plans to discontinue remote examinations from March 2026, requiring most candidates to return to in-person exam centers. The decision follows a review of remote invigilation practices and growing concerns that technological advances, including artificial intelligence tools, have made online assessments harder to supervise effectively.
The ACCA introduced remote exams during the Covid-19 pandemic to allow students to continue qualifying while test centers were closed. The organization now counts about 257,900 members and more than 500,000 students globally. It concluded that while safeguards had improved, the pace of innovation in digital tools has increased the risk of misconduct beyond manageable levels.
Helen Brand, the ACCA’s chief executive, said the organization had worked intensively to prevent cheating but acknowledged that those seeking to bypass controls were adapting quickly. She said the rapid development of AI had raised the complexity of monitoring candidates in remote environments and pushed online testing to a tipping point.
AI and Exam Integrity
Remote proctoring relies on identity verification, video monitoring, screen tracking, and automated behavior analysis. These systems were designed to detect suspicious activity, but generative AI tools can now assist candidates in producing real-time answers, rewriting text, or summarizing complex material in ways that are difficult for monitoring software to identify.
The shift reflects broader challenges facing educational and professional institutions as AI becomes more accessible and capable. Automated writing tools, voice interfaces, and image recognition systems reduce the friction of external assistance during assessments, creating enforcement gaps for remote testing models. The ACCA said it remains confident in the robustness of its overall assessment framework but concluded that physical supervision provides stronger assurance of fairness and consistency.
The decision comes against a backdrop of exam-related scandals across the accounting profession. PwC, KPMG, and Deloitte have faced multimillion-dollar fines in jurisdictions including the United States, Canada, Australia, and the Netherlands for misconduct linked to internal training or compliance assessments. In 2022, Ernst & Young agreed to pay $100 million to U.S. regulators over allegations that employees cheated on an internal ethics exam and that the firm misled investigators.
While those cases involved firm-run assessments rather than professional qualification exams, regulators in the United Kingdom have also flagged multiple instances of misconduct in recent years. The Institute of Chartered Accountants in England and Wales reported in 2024 that cheating incidents continued to rise, even as some professional bodies still permit limited online testing.
Curriculum Modernization and Skills Shift
The move away from remote exams coincides with a broader overhaul of the ACCA’s main qualification, its first in a decade. The updated curriculum will place greater emphasis on AI, blockchain, and data science, reflecting how automation and advanced analytics are reshaping accounting workflows.
Brand said AI has fundamentally changed the skills accountants need, shifting the focus from static knowledge recall to applied judgment and professional skepticism. New modules will use real-time simulations rather than fixed exams to assess how candidates respond to evolving scenarios, risk signals, and data-driven insights.
By tightening exam controls while modernizing its curriculum, the ACCA aims to preserve the credibility of its certification process as digital tools continue to accelerate change across the profession.
OpenAI Seeks Executive to Study Emerging AI Risks
OpenAI is hiring a Head of Preparedness to study emerging risks tied to rapidly advancing AI models, including mental health impacts and cybersecurity threats. The move reflects rising concern over how frontier capabilities could be misused as models grow more powerful.
OpenAI is seeking a new executive to lead its work on emerging risks posed by increasingly capable AI systems. The role, Head of Preparedness, will focus on assessing and mitigating potential harms linked to frontier models, including mental health effects, cybersecurity vulnerabilities, and the misuse of advanced technical capabilities.
Chief Executive Officer Sam Altman said in a recent post that AI systems are now entering a phase where their benefits are accompanied by more complex challenges. He pointed to early signs in 2025 that some models could negatively affect mental health, as well as more recent developments showing that AI systems are becoming skilled enough in computer security to identify critical software vulnerabilities.
We are hiring a Head of Preparedness. This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges. The potential impact of models on mental health was something we…
— Sam Altman (@sama) December 27, 2025
According to OpenAI, the company has built a foundation for measuring growing model capabilities, but now needs a more nuanced understanding of how those capabilities could be abused. The Head of Preparedness will be responsible for refining how OpenAI evaluates risk, limits downsides in its products, and balances safety with continued deployment of advanced models.
The job listing describes the role as executing OpenAI’s Preparedness Framework, which outlines how the company tracks and prepares for frontier AI capabilities that could create severe harm. Compensation for the position is listed at $555,000 plus equity, reflecting the seniority and scope of responsibility.
Cybersecurity, Mental Health, and High-Risk Capabilities
Altman emphasized that the role will involve difficult trade-offs, noting that many proposed safety measures have edge cases and limited precedent. One priority area is enabling cybersecurity defenders to use advanced AI tools while preventing attackers from exploiting the same capabilities. The goal, he said, is to improve overall system security rather than shifting risk.
The position will also engage with questions around the release of sensitive capabilities, including biological research applications and systems that may eventually be able to self-improve. OpenAI has said that gaining confidence in the safety of such systems is becoming more urgent as models improve at a faster pace.
The company first introduced its preparedness team in 2023 to study risks ranging from near-term threats like phishing to more speculative scenarios involving large-scale harm. Since then, the group has undergone changes. Former Head of Preparedness Aleksander Madry was reassigned to work on AI reasoning, and several other safety-focused leaders have moved into different roles or left the company.
Rising Scrutiny and Internal Pressure
The hiring effort comes amid growing scrutiny of generative AI tools. Lawsuits and public criticism have raised concerns about how chatbots handle sensitive mental health conversations, with allegations that some interactions reinforced harmful behaviors. OpenAI has said it is working to improve detection of emotional distress and encourage users to seek human support.
OpenAI recently updated its Preparedness Framework to allow for adjustments if competing AI labs release high-risk models without comparable safeguards. The change reflects competitive pressure in the AI sector, where companies are racing to deploy more capable systems.
Altman warned that the Head of Preparedness role will be demanding, describing it as stressful and requiring immediate immersion into complex problems. By expanding this function, OpenAI is signaling that managing risk alongside rapid technological progress will be central to its strategy as AI capabilities continue to advance.
Amazon Expands Alexa+ With Four New AI App Integrations
Amazon is adding four new integrations to its AI-powered Alexa+ assistant, enabling users to book hotels, schedule services, and access lifestyle apps. The update extends the assistant’s capabilities across multiple sectors starting in 2026.
Amazon announced that its AI-powered digital assistant, Alexa+, will gain four new app integrations starting in 2026, expanding the assistant’s functionality for consumers. The additions include Angi, Expedia, Square, and Yelp, enabling users to book hotels, get home service quotes, schedule salon appointments, and more.
With Expedia, for example, Alexa+ can compare, book, and manage hotel reservations, or provide personalized recommendations based on user preferences, such as pet-friendly accommodations for a specific weekend. These new services join existing integrations with Fodor’s, OpenTable, Suno, Ticketmaster, Thumbtack, and Uber.
The expanded capabilities allow users to interact with Alexa+ using natural language, refining requests through back-and-forth conversations. This model mirrors recent trends in AI platforms, such as ChatGPT’s in-app marketplace, where users connect and leverage apps directly through an AI interface.
Industry Shift Toward AI-Integrated Apps
Amazon noted that early adopters of Alexa+ have shown strong engagement, particularly with home and personal service providers like Thumbtack and Vagaro. The move reflects a broader industry trend of transforming AI assistants into platforms for app-based interactions, offering a unified, conversational interface for services traditionally accessed via web or mobile apps.
Success in this model will require AI providers to match or exceed the breadth of services offered by conventional app stores, while also recommending apps in contextually relevant ways without appearing intrusive. Providers must balance convenience with user experience to encourage adoption and make AI-mediated service interactions feel as easy as, or easier than, traditional methods.
The Alexa+ update underscores Amazon’s strategy of embedding AI into daily routines, providing more seamless access to services while expanding the practical utility of its digital assistant across personal, travel, and lifestyle tasks.
OpenAI Rolls Out a Personalized AI Recap, “Your Year With ChatGPT”
OpenAI is rolling out “Your Year with ChatGPT,” an annual review feature that summarizes how users interacted with the chatbot over the past year. The tool is available to eligible users across several English-speaking markets, including the United States.
OpenAI has begun rolling out a new annual recap experience called “Your Year with ChatGPT,” offering users a personalized summary of how they used the chatbot over the past year. The feature is being made available to eligible consumers in select markets, including the United States, Canada, the United Kingdom, Australia, and New Zealand.
The experience is available to users on free, Plus, and Pro plans, provided they have enabled the “reference saved memories” and “reference chat history” settings and meet a minimum conversation activity threshold. According to OpenAI, Team, Enterprise, and Education accounts are excluded from access to the feature.
Your Year with ChatGPT!
Now rolling out to everyone in the US, UK, Canada, New Zealand, and Australia who have reference saved memory and reference chat history turned on.
Just make sure your app is updated. pic.twitter.com/whVkS1qxKu
— OpenAI (@OpenAI) December 22, 2025
OpenAI described the rollout as lightweight, privacy-forward, and fully user-controlled. The recap is not automatically triggered and will not open by default. Instead, it is promoted on the ChatGPT home screen and can also be launched by directly prompting ChatGPT to generate the experience. The feature is accessible through both the ChatGPT web interface and mobile apps on iOS and Android.
The release reflects OpenAI’s continued focus on consumer-facing features as ChatGPT adoption grows across personal, educational, and professional use cases. It also signals an effort to increase engagement without expanding persistent data collection beyond existing user settings.
Engagement Data Presented Through AI
The format of “Your Year with ChatGPT” closely mirrors popular year-end recap products from consumer platforms. Similar to Spotify Wrapped, the experience uses visual graphics and personalized summaries to highlight user behavior across the year. OpenAI assigns themed “awards” based on usage patterns, such as recognizing users who frequently relied on ChatGPT for creative problem-solving or conceptual work.
In addition to usage highlights, the feature generates a short poem and an image reflecting the user’s most common topics and interests. These elements are produced by the model itself, reinforcing OpenAI’s emphasis on generative capabilities as both a functional and expressive tool.
OpenAI has emphasized that the recap relies only on existing conversation history and saved memory settings chosen by the user. No new data collection is required to generate the experience, and users who have disabled chat history or memory references are not eligible.
The launch comes as OpenAI continues to expand product differentiation across its consumer tiers while maintaining strict separation between individual users and organizational accounts. By keeping the feature optional and limited to personal plans, the company appears to be balancing engagement with privacy expectations.
“Your Year with ChatGPT” adds to a growing set of features aimed at reinforcing ChatGPT’s role as a daily-use AI assistant, while framing long-term interaction with the system in a more reflective and user-friendly format.
The launch also coincides with a broader product expansion at OpenAI. ChatGPT recently introduced an in-app marketplace across iOS, Android, and the web, allowing users to browse and connect third-party apps for productivity, lifestyle, and entertainment, further extending how consumers build AI-powered workflows within the platform.
Alphabet to Acquire Intersect for $4.75 Billion to Advance U.S. Energy Innovation
Alphabet announced plans to acquire Intersect for $4.75 billion, aiming to expand data center capacity and accelerate energy innovation. The acquisition includes in-development projects and Intersect’s technical team.
Alphabet has entered a definitive agreement to acquire Intersect, a provider of data center and energy infrastructure solutions, for $4.75 billion in cash plus debt assumption. Alphabet previously held a minority stake in Intersect through a funding round.
The deal is designed to accelerate the deployment of new data center and power generation capacity while supporting energy innovation in the United States. Intersect’s team and multiple gigawatts of projects under development or construction will join Alphabet and Google’s initiatives. The acquisition will enable faster buildout of co-located data centers and energy projects, such as the first joint site in Haskell County, Texas.
Intersect will continue operating under its own brand, led by founder and CEO Sheldon Kimber, and work closely with Google’s technical infrastructure team. Existing operating assets in Texas and California will remain independent, supported by current investors including TPG Rise Climate, Climate Adaptive Infrastructure, and Greenbelt Capital Partners, ensuring continuity for customers.
Sundar Pichai, CEO of Google and Alphabet, said the acquisition will allow the company to expand capacity, align new power generation with data center demand, and drive U.S. innovation. Kimber emphasized that the partnership will accelerate infrastructure modernization, noting the importance of energy innovation and community investment for competitiveness in AI.
Supporting U.S. Energy Innovation
The acquisition aligns with Alphabet and Google’s broader strategy to partner with utilities and energy developers to deliver abundant, reliable, and affordable energy. The companies are focused on scaling data center infrastructure without passing costs to grid customers while commercializing advanced energy technologies such as geothermal, long-duration storage, and gas with carbon capture.
AI will also be deployed to speed grid connections for new power plants and enhance energy efficiency in data center communities.
The acquisition is expected to close in the first half of 2026, pending customary closing conditions. The deal positions Alphabet to expand data center capacity in tandem with sustainable energy development, strengthening its infrastructure for cloud and AI services. It also comes as the company explores multibillion-dollar chip deals with Meta to supply TPUs for data centers.
China’s OpenAI Rivals Reveal IPO Economics
Chinese generative AI startups MiniMax and Zhipu AI have disclosed their financials ahead of potential Hong Kong listings, highlighting modest revenues and mounting losses compared with U.S. peers.
MiniMax and Zhipu AI, two of China’s most prominent generative artificial intelligence startups, have offered the first detailed look at their business models through recent listing filings. The disclosures highlight the financial challenges facing Chinese AI developers as they prepare for potential initial public offerings in Hong Kong.
Both companies are backed by major domestic technology groups, including Alibaba Group Holding and Tencent Holdings, and are widely viewed as leading domestic alternatives to OpenAI. They are among the remaining players after an intense period of competition in China’s AI sector, often referred to as the “Battle of One Hundred Models,” which saw dozens of startups exit or consolidate.
According to the filings shared by Bloomberg, Zhipu generated 312.4 million yuan, or about $44.4 million, in revenue in 2024. MiniMax reported $30.5 million in revenue for the same period. While growth has been steady, those figures remain small compared with U.S. rivals. OpenAI is estimated to be generating about $13 billion in annualized revenue, while Anthropic is projected to reach roughly $9 billion this year.
The contrast underscores the scale advantage enjoyed by Silicon Valley AI labs, which benefit from deeper enterprise adoption, higher pricing power, and broader international reach. Zhipu and MiniMax are each valued at around $4 billion, far below the valuations attached to leading U.S. developers.
Diverging Strategies and Rising Costs
The two companies have pursued different paths to commercialization. Zhipu, founded by researchers from Tsinghua University, has focused on building advanced foundation models while delivering customized AI systems for government-linked and state-owned clients. This approach has helped drive revenue but has also tied growth closely to public sector demand.
MiniMax has emphasized consumer-facing and international products, including AI companion applications and video editing tools. The strategy aims to diversify revenue sources and reduce reliance on domestic enterprise clients, though monetization remains limited at this stage.
Both companies reported widening losses as research and development spending increased. Investment in model training, compute infrastructure, and talent continues to weigh heavily on profitability. The filings show that operating expenses have grown faster than revenue, reflecting the capital-intensive nature of large language model development.
Gross margins provide further insight into their business models. Zhipu currently reports higher margins, supported by enterprise contracts and bespoke deployments. MiniMax’s margins trail but have been improving as the company refines pricing and scales select products.
IPO Push Tests Investor Appetite
The disclosures come as MiniMax and Zhipu race to become the first Chinese generative AI startups to go public. Analysts expect each company could raise several hundred million dollars in Hong Kong listings, depending on market conditions.
The IPO push will test investor appetite for AI firms with heavy losses and long paths to profitability, particularly as global enthusiasm for AI stocks becomes more selective. Unlike their U.S. counterparts, Chinese AI startups face tighter domestic pricing, regulatory constraints, and limited access to foreign markets.
Even so, the filings suggest both companies believe scale and state backing can eventually narrow the gap. For investors, the documents offer a clearer view of the economic realities behind China’s push to build domestic AI champions in a market still dominated by U.S. players.
Meanwhile, OpenAI has also begun preparing for a potential IPO that could value the company at up to $1 trillion. The contrast underscores the scale gap between Chinese AI startups and their Silicon Valley counterparts, and highlights how global capital markets continue to favor companies with larger revenue bases and international reach.
AI Drives Rising Corporate Layoffs in 2025
Artificial intelligence has become a major factor behind U.S. job cuts in 2025, with companies citing automation and efficiency gains as reasons for workforce reductions. Nearly 55,000 layoffs this year have been directly attributed to AI adoption.
Layoffs have defined the U.S. job market in 2025, with artificial intelligence increasingly cited as a driver of workforce reductions. Consulting firm Challenger, Gray & Christmas estimates that nearly 55,000 layoffs announced this year were directly linked to AI adoption, as companies accelerate automation and streamline operations.
Overall, U.S. employers announced about 1.17 million job cuts through 2025, the highest total since the Covid-19 pandemic in 2020, when 2.2 million layoffs were recorded. Monthly figures remain elevated. Employers announced roughly 153,000 job cuts in October and more than 71,000 in November. Challenger data shows AI was cited in over 6,000 layoffs in November alone.
Rising inflation, higher operating costs, and the impact of tariffs have pushed companies to seek faster efficiency gains. AI has emerged as a short-term solution, allowing firms to reduce headcount while maintaining output. A November study from the Massachusetts Institute of Technology found that AI systems can already perform tasks equivalent to 11.7% of U.S. jobs, potentially saving up to $1.2 trillion in wages across finance, healthcare, and professional services.
Not all experts agree that AI is the primary cause of layoffs. Fabian Stephany, assistant professor of AI and work at the Oxford Internet Institute, has argued that some companies may be using AI as justification for corrections after pandemic-era overhiring. He said many firms expanded too aggressively in prior years and are now adjusting headcount rather than replacing workers purely with automation.
Major Companies Cite AI in Restructuring
Several large technology firms have openly linked layoffs to AI-driven restructuring. Amazon announced its largest round of layoffs on record in October, cutting about 14,000 corporate roles. The company said it is reallocating resources toward major growth areas, including AI. Chief Executive Andy Jassy has warned employees that AI will reduce the need for some roles while increasing demand for others.
Microsoft has cut roughly 15,000 jobs in 2025, including 9,000 announced in July. Chief Executive Satya Nadella told employees the company must reimagine its mission for an AI-driven era, positioning Microsoft as an intelligence platform rather than a traditional software provider.
Salesforce confirmed in September that it eliminated about 4,000 customer support roles, with Chief Executive Marc Benioff stating that AI now performs up to half of the company’s workload. IBM has also acknowledged AI-related job displacement, with Chief Executive Arvind Krishna saying chatbots replaced several hundred human resources roles, even as hiring increased in engineering and sales. In November, IBM announced a 1% global workforce reduction.
Other firms have followed suit. HP highlighted plans to cut up to 6,000 employees worldwide, citing AI-driven efficiencies. Workday cut roughly 1,750 jobs earlier this year to prioritize AI investment.
Together, these announcements highlight how AI is reshaping corporate labor strategies. While automation-driven efficiency gains remain attractive, the scale of job displacement has intensified debate over how much of the current wave reflects technological change versus delayed market corrections.
OpenAI Explores $100 Billion Funding Round
OpenAI is in talks to raise up to $100 billion in new funding, a deal that could value the ChatGPT maker as high as $830 billion. The discussions reflect rising capital needs as AI development and infrastructure costs accelerate.
OpenAI is in discussions to raise as much as $100 billion in a funding round that could value the company at up to $830 billion, according to a Wall Street Journal report citing people familiar with the matter. The company is targeting completion of the round by the end of the first calendar quarter next year and may seek backing from sovereign wealth funds.
The Information earlier reported on the potential fundraise, though it estimated a lower valuation of around $750 billion. If completed at the higher figure, the deal would rank among the largest private capital raises ever and significantly expand OpenAI’s balance sheet.
The talks come as OpenAI commits to massive spending on infrastructure and global partnerships to maintain its lead in artificial intelligence. The company has been investing heavily in model training and inference, with costs increasingly covered by cash rather than cloud credits. This shift suggests that compute expenses have outgrown what existing partnerships can fully offset.
OpenAI is also moving faster to release new models and expand its developer tools as competition intensifies from rivals such as Anthropic and Google. These efforts require sustained capital to support research, product launches, and the scaling of enterprise services tied to ChatGPT and related APIs.
Market Conditions and Strategic Options
The potential fundraise is unfolding against a more cautious investment backdrop. Broader sentiment around AI has cooled as investors question whether debt-fueled spending by major technology companies can be sustained. Amazon, Microsoft, Oracle, and OpenAI have all committed tens of billions of dollars to AI infrastructure, raising concerns about long-term returns.
Supply constraints add another challenge. Shortages in advanced memory chips are limiting the pace of data center expansion and could affect the broader technology sector. These bottlenecks increase costs and complicate deployment timelines for AI developers that rely on high-performance hardware.
OpenAI has also been rumored to be exploring an IPO as a longer-term path to raising tens of billions of dollars. The company is reported to be generating annualized revenue of about $20 billion, driven by subscriptions, enterprise contracts, and API usage. An IPO could provide liquidity and a more permanent source of capital, though no formal plans have been confirmed.
Separately, industry speculation has pointed to discussions with Amazon over a potential $10 billion investment. Such a deal could include access to Amazon’s in-house AI computing chips, offering OpenAI another option to manage rising infrastructure costs.
According to PitchBook data, OpenAI currently holds more than $64 billion in capital and was most recently valued at roughly $500 billion in a secondary transaction. A successful $100 billion raise would further strengthen its position but also underscore how capital intensive the AI race has become.
OpenAI, Anthropic Expand Teen AI Safety Controls
OpenAI and Anthropic are rolling out new safety measures aimed at protecting teenage users as AI chatbots become more embedded in education and social life. The updates focus on age-appropriate interactions, parental oversight, and stricter content controls.
As artificial intelligence tools become more common in classrooms and daily life, technology companies are under growing pressure to address how these systems affect adolescents. OpenAI and Anthropic have both announced new safety initiatives designed to reduce risks for teenage users while preserving access to educational and creative benefits.
OpenAI has updated its U18 Principles, the framework that governs how ChatGPT interacts with users aged 13 to 17. The framework is intended to ensure conversations are developmentally appropriate and prioritize user safety. According to the company, these users receive heightened protections, including stricter filtering of content related to self-harm, sexual role play, eating disorders, and dangerous challenges.
The safeguards operate at the model level and are supported by expanded parental controls. Parents can link their accounts to a child’s profile, manage usage hours, and restrict access to sensitive topics. OpenAI says the goal is to encourage healthy digital habits while allowing teens to use AI for homework help, research, and creative projects.
The changes follow mounting scrutiny of AI systems after reports that chatbots had engaged in unsafe or emotionally manipulative conversations. In one high-profile example, families have sued OpenAI, alleging that an earlier GPT-4o model encouraged harmful behavior, including suicide, due to inadequate safeguards and premature deployment. The case has become a reference point in broader debates over AI accountability and youth protection.
Anthropic Maintains Strict Age Limits
Anthropic has taken a more restrictive approach. The company requires Claude users to be at least 18 years old and is strengthening enforcement of that policy. Its systems are being updated to detect underage use through conversational signals, automated classifiers, and user disclosures. Accounts suspected of belonging to minors can be reviewed or disabled.
The company is also refining how Claude responds to sensitive mental health topics. When conversations involve suicidal thoughts or self-harm, the model is designed to avoid acting as emotional support. Instead, responses encourage users to seek help from trusted adults or professional resources. Anthropic has said this design choice reflects a belief that AI should not replace human intervention in high-risk situations.
Industry Pressure and Mental Health Concerns
The new measures come amid increasing concern from researchers, educators, and health professionals. Studies from Stanford Medicine and Common Sense Media have found that widely used chatbots often provide inconsistent or unsafe responses to mental health prompts. Pediatric psychologists have warned that teens may form emotional attachments to AI systems, potentially treating them as substitutes for real-world support.
Regulators and advocacy groups in the United States have also called for stronger age verification and clearer accountability standards. Lawmakers have questioned whether existing safeguards are sufficient as AI tools scale rapidly among younger users.
Together, OpenAI and Anthropic’s updates signal a shift toward age-based and risk-based AI design. While the long-term effectiveness of these measures remains to be tested, they set new expectations for how AI systems should interact with children and adolescents as adoption continues to grow.
ChatGPT Launches In-App Marketplace
ChatGPT now features an app directory across iOS, Android, and web, allowing users to connect apps for productivity, lifestyle, and entertainment while exploring new AI-powered workflows.
OpenAI has launched a new app directory inside ChatGPT, enabling users to connect apps directly to their AI conversations. According to OpenAI, apps extend ChatGPT interactions by providing new context and letting users take actions such as ordering groceries, creating slide decks, or searching for apartments. Connector apps like Google Drive are now simply referred to as “apps.”
The directory is available across iOS, Android, and web platforms, divided into Feature, Lifestyle, and Productivity categories. Popular apps like Booking.com, Spotify, and Dropbox are included. To use an app, users click “Connect,” authorize access, and can then start a chat related to the app. For example, Dropbox users can “gather insights, prepare briefs, and summarize reports or internal documents.” Connected apps can also be accessed via @ mentions in conversations.
The update includes an Apple Music app, which allows users to find music, create playlists, and manage libraries through chat, similar to Spotify. DoorDash integration lets users convert recipes, meal plans, and pantry staples into actionable shopping carts directly from ChatGPT.
Developer Access and Monetization
OpenAI is also opening submissions for developers to publish apps in ChatGPT. Developers can follow the company’s app submission guidelines and leverage resources such as open-source example apps, a UI library for chat-native interfaces, a quickstart guide, and the SDK introduced in October.
Currently, developers can monetize apps by linking users to native apps or websites, though OpenAI is exploring internal monetization options in the future. Privacy remains a priority, with developers required to provide clear policies regarding data usage.
The new app directory is part of OpenAI CEO Sam Altman’s vision to make ChatGPT more versatile with custom “GPT” bots. Alongside apps, OpenAI has recently enhanced ChatGPT with faster and more precise image generation through its flagship ChatGPT Images model, which introduces a dedicated workspace and the lower-cost GPT Image 1.5 API. The company has also rolled out upgrades to the underlying GPT models, improving reasoning, multimodal understanding, and instruction-following capabilities, all aimed at making ChatGPT a more powerful and seamless tool for users across creative, productivity, and everyday tasks.
Visa Tests AI That Can Pay Your Bills and Shop for You
Visa said it successfully completed hundreds of AI-driven transactions in a pilot program testing agent-based payments. The effort reflects a broader push across fintech to let AI agents handle purchases on behalf of consumers.
Visa said Thursday that it has successfully completed hundreds of artificial intelligence-powered transactions as part of a pilot program launched after its product event in April. The initiative focuses on enabling AI agents to complete payments and transactions on behalf of consumers, a capability that could reshape how people shop and interact with digital commerce platforms.
The pilot places Visa among a growing group of payments and technology companies experimenting with agentic AI, where software agents are authorized to act for users in specific purchasing scenarios. Visa executives said the tests demonstrated that AI agents can reliably execute transactions within defined parameters, such as recurring purchases or time-sensitive events.
“This is going to be the year we see an enormous amount of material adoption, and consumers really starting to get comfortable in a bunch of different agentic environments,” said Rubail Birwadker, Visa’s head of growth products and partnerships.
Visa did not disclose the transaction volumes or merchants involved in the pilot, but said the program validated the technical and operational feasibility of AI-driven payments within its network.
Fintech Industry Races Toward Agentic Commerce
Visa’s work reflects a broader push across the payments and e-commerce ecosystem to integrate AI agents into consumer transactions. Mastercard said in April that it was testing a feature called Agent Pay, which allows AI agents to shop online for customers. Amazon also began testing a “Buy For Me” service that month, designed to let AI assistants complete purchases on external websites.
PayPal and AI search company Perplexity have also partnered on agent-based shopping tools, highlighting growing interest in automating parts of the online purchasing process. These systems aim to reduce friction by allowing users to delegate routine or repetitive buying decisions to AI, while retaining human oversight for approvals and spending limits.
Visa’s own research suggests consumer adoption is already underway. A survey released earlier this month found that nearly half of U.S. shoppers have used AI in connection with purchases, whether for product discovery, price comparison, or decision support. While most current use cases stop short of fully autonomous transactions, companies see payment authorization as the next step.
Birwadker said AI agents may be particularly useful for predictable purchases, such as household essentials, subscriptions, or concert tickets, where speed and availability matter. By embedding payment credentials and rules into AI agents, companies hope to streamline checkout without sacrificing security.
Global Expansion Plans and Partnerships
Visa said it plans to expand its AI payments pilots to Asia and Europe next year, signaling confidence in the technology’s readiness for broader testing. The company is currently working with more than 20 partners on AI agent tools, spanning merchants, technology providers, and developers building agent-based commerce experiences.
Security and trust remain central concerns. Payments networks must ensure AI agents operate within strict controls, including spending caps, authentication requirements, and clear user consent. Visa did not detail its safeguards but said its pilots were designed to operate within existing network protections.
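Visa has not disclosed how its safeguards work, but the kinds of controls described above (spending caps, merchant restrictions, explicit user consent) amount to a rules check that runs before any agent-initiated payment. The sketch below is purely illustrative; the `AgentMandate` structure and its fields are hypothetical and do not reflect Visa's or any network's actual design:

```python
# Hypothetical pre-authorization check for an AI shopping agent.
# The mandate fields and rules are illustrative, not any payment network's design.
from dataclasses import dataclass

@dataclass
class AgentMandate:
    """Rules a user grants an agent before it may transact on their behalf."""
    per_txn_cap: float           # max amount for a single purchase, USD
    daily_cap: float             # max total spend per day, USD
    allowed_merchants: set[str]  # merchants the user has explicitly approved
    spent_today: float = 0.0

def authorize(mandate: AgentMandate, merchant: str, amount: float) -> bool:
    """Approve the transaction only if it satisfies every rule in the mandate."""
    if merchant not in mandate.allowed_merchants:
        return False
    if amount > mandate.per_txn_cap:
        return False
    if mandate.spent_today + amount > mandate.daily_cap:
        return False
    mandate.spent_today += amount  # record spend against the daily cap
    return True

mandate = AgentMandate(per_txn_cap=50.0, daily_cap=120.0,
                       allowed_merchants={"grocer", "pharmacy"})
print(authorize(mandate, "grocer", 40.0))       # True
print(authorize(mandate, "grocer", 90.0))       # False: exceeds per-transaction cap
print(authorize(mandate, "electronics", 20.0))  # False: merchant not on allow-list
```

The point of such a design is that the agent never holds open-ended payment authority: every purchase is tested against user-granted limits, mirroring the "defined parameters" Visa says its pilot transactions operated within.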
As AI agents move closer to handling real-world transactions, payments companies are positioning themselves as infrastructure providers for this emerging layer of digital commerce. Visa’s pilot suggests that agentic payments are shifting from concept to early execution, setting the stage for wider adoption as consumers and merchants become more comfortable with AI acting on their behalf.
Google Launches Gemini 3 Flash for Faster AI Reasoning
Google released Gemini 3 Flash, a new AI model designed to deliver frontier-level reasoning with significantly lower latency and cost. The model is rolling out across Google products, developer platforms, and enterprise services worldwide.
Google expanded its Gemini 3 model family with the release of Gemini 3 Flash, a new model built to deliver high-end reasoning performance with faster response times and lower costs. The launch follows last month’s introduction of Gemini 3 Pro and Gemini 3 Deep Think, which together process more than 1 trillion tokens per day through Google’s API.
Gemini 3 Flash combines the reasoning foundation of Gemini 3 Pro with the low-latency characteristics of the Flash series. Google said the model is optimized for speed, efficiency, and scale, while retaining strong performance across reasoning, multimodal understanding, and agentic workflows.
The model is rolling out globally starting today. Developers can access Gemini 3 Flash through the Gemini API in Google AI Studio, Gemini CLI, and Google Antigravity, while enterprises can deploy it via Vertex AI and Gemini Enterprise. Consumers will encounter Gemini 3 Flash as the default model in the Gemini app and in AI Mode in Search.
Performance, Cost, and Developer Use Cases
Google positioned Gemini 3 Flash as a frontier model that balances intelligence with efficiency. The model achieved 90.4% on the GPQA Diamond benchmark and 33.7% on Humanity’s Last Exam without tools, results that rival larger frontier systems. It also scored 81.2% on MMMU Pro, comparable to Gemini 3 Pro, and outperformed Gemini 2.5 Pro across multiple evaluations.
Gemini 3 Flash was designed to adapt its reasoning depth based on task complexity. Google said it uses about 30% fewer tokens on average than Gemini 2.5 Pro on typical workloads, helping reduce inference costs. Pricing is set at $0.50 per million input tokens and $3 per million output tokens, with audio inputs priced at $1 per million tokens.
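Given the published per-token rates, estimating what a workload costs is straightforward arithmetic. The sketch below uses the prices quoted above; the workload sizes in the example are illustrative, not figures from Google:

```python
# Estimate Gemini 3 Flash API cost from the published per-million-token rates.
# Rates are taken from the article; the example workload figures are illustrative.

TEXT_INPUT_RATE = 0.50   # USD per 1M text input tokens
OUTPUT_RATE = 3.00       # USD per 1M output tokens
AUDIO_INPUT_RATE = 1.00  # USD per 1M audio input tokens

def estimate_cost(text_in: int, out: int, audio_in: int = 0) -> float:
    """Return the estimated cost in USD for a given mix of tokens."""
    return (text_in * TEXT_INPUT_RATE
            + out * OUTPUT_RATE
            + audio_in * AUDIO_INPUT_RATE) / 1_000_000

# Example: a service processing 10M input tokens and 2M output tokens per day
daily = estimate_cost(text_in=10_000_000, out=2_000_000)
print(f"${daily:.2f} per day")  # $11.00 per day
```

Because output tokens cost six times as much as text input, the claimed ~30% reduction in tokens used per task translates fairly directly into lower bills for reasoning-heavy workloads.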
For software development, Gemini 3 Flash demonstrated strong agentic coding performance. On SWE-bench Verified, the model achieved a 78% score, outperforming both the Gemini 2.5 series and Gemini 3 Pro. Google said this makes it well suited for high-frequency workflows, production systems, and interactive applications that require fast feedback and reliable reasoning.
Companies including JetBrains, Bridgewater Associates, and Figma are already using Gemini 3 Flash, citing its combination of inference speed and reasoning quality. The model also supports complex video analysis, data extraction, and visual question answering, enabling use cases such as in-game assistants and rapid experimentation.
Consumer Rollout Across Search and Gemini
For consumers, Gemini 3 Flash is now the default model in the Gemini app, replacing Gemini 2.5 Flash. Google said this gives users worldwide free access to Gemini 3-level reasoning for everyday tasks. The model’s multimodal capabilities allow it to analyze images, video, and audio, and generate structured outputs such as plans or summaries within seconds.
Gemini 3 Flash is also rolling out as the default model for AI Mode in Search. Google said the integration improves the system’s ability to parse complex queries, combine real-time information, and present results in a more organized and actionable format.
With Gemini 3 Flash, Google aims to push advanced AI reasoning into mainstream use, emphasizing speed, cost efficiency, and broad availability alongside its higher-end Gemini 3 models.
Amazon Weighs $10B Investment into OpenAI Amid AI Chip Push
Amazon is reportedly in early talks to invest up to $10 billion in OpenAI in a deal tied to the use of its AI chips. The discussions reflect a broader trend of circular partnerships shaping the AI infrastructure market.
Amazon is in early discussions to invest as much as $10 billion in OpenAI, a move that could deepen ties between the cloud giant and one of the most influential AI labs, CNBC reported. The deal would reportedly involve OpenAI using Amazon’s in-house AI chips, reinforcing Amazon Web Services’ role in large-scale AI training and deployment.
If completed, the investment would value OpenAI at more than $500 billion, Bloomberg reported, citing an anonymous source. Such a valuation would place OpenAI among the most valuable private companies globally and underscore investor confidence in generative AI platforms as demand accelerates across industries.
The talks come as Amazon seeks to diversify its position in the AI market. The company has already committed up to $8 billion to Anthropic, a direct OpenAI rival, and has integrated Anthropic’s models into AWS offerings. Earlier this month, Amazon also unveiled the latest generation of its Trainium AI chips and outlined plans for future iterations, positioning them as cost-effective alternatives to Nvidia hardware for large-scale model training.
Circular Deals Reshape AI Infrastructure
An Amazon investment in OpenAI would mark the latest example of so-called circular deals in the AI sector. In these arrangements, cloud providers or chipmakers invest in AI startups, which in turn commit to using the investors’ infrastructure, chips, or data centers. The structure aligns incentives while locking in long-term demand for compute resources.
OpenAI has already engaged in several such deals. In March, it invested $350 million in CoreWeave, which used the capital to purchase Nvidia chips that now provide compute capacity back to OpenAI. In October, OpenAI took a 10% stake in AMD while agreeing to use the chipmaker’s AI GPUs, and it also signed a chip usage agreement with Broadcom the same month. In November, OpenAI finalized a $38 billion cloud computing deal with Amazon, further cementing their commercial relationship. In December, OpenAI took an ownership stake in Thrive Holdings.
These arrangements reflect the rising cost and strategic importance of AI infrastructure. Training frontier models increasingly requires massive capital outlays, making partnerships with cloud and hardware providers critical to sustaining development.
Strategic Context for OpenAI and Amazon
The reported talks follow OpenAI’s recent transition to a for-profit structure, a shift that allows it greater flexibility in raising capital beyond Microsoft, which remains a major shareholder with a reported 27% stake. That change has opened the door for broader strategic partnerships as OpenAI scales its compute needs and global footprint.
For Amazon, investing in OpenAI could complement its existing AI strategy by expanding adoption of AWS services and proprietary chips. It would also hedge against overreliance on any single AI partner while strengthening Amazon’s position as a core infrastructure provider in the AI ecosystem.
Neither Amazon nor OpenAI commented on the reported discussions. If finalized, the deal would underscore how capital, compute, and AI development are becoming increasingly intertwined as competition intensifies across the generative AI market.