Google Launches Gemini 3.1 Flash-Lite for High-Volume AI Workloads

Google introduced Gemini 3.1 Flash-Lite, a faster and lower-cost model designed for high-volume developer workloads. The model emphasizes speed, efficiency, and scalable reasoning for real-time applications.

By Daniel Mercer, Edited by Maria Konash
Google launches Gemini 3.1 Flash-Lite, a faster, lower-cost AI model built for developers and large-scale workloads. Photo: Google

Following Gemini Flash, Google has introduced Gemini 3.1 Flash-Lite, a new addition to its Gemini 3 series designed for high-volume developer workloads. The model prioritizes speed and cost efficiency while maintaining performance levels comparable to larger AI systems.

Gemini 3.1 Flash-Lite is rolling out in preview for developers through the Gemini API in Google AI Studio and for enterprise customers through Vertex AI. The launch reflects Google’s effort to expand its AI platform offerings with models optimized for real-time applications and large-scale deployments.

The model is priced at $0.25 per million input tokens and $1.50 per million output tokens, positioning it among the lowest-cost options in its category. Google said the system delivers stronger performance relative to earlier models while reducing latency for production workloads.
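At those rates, estimating the cost of a workload is simple arithmetic. The sketch below is a back-of-the-envelope calculator using the listed preview prices (which may change after preview); the function name and example volumes are illustrative.

```python
# Back-of-the-envelope cost estimate using the listed preview rates.
# Prices are USD per one million tokens and may change after preview.
INPUT_PRICE_PER_M = 0.25   # $ per 1M input tokens
OUTPUT_PRICE_PER_M = 1.50  # $ per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a given token volume."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: 10M input tokens and 2M output tokens
print(estimate_cost(10_000_000, 2_000_000))  # 5.5
```

A workload of 10 million input and 2 million output tokens would therefore cost about $5.50, which illustrates why per-token pricing at this level targets high-volume use cases.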

According to benchmark results from Artificial Analysis, Gemini 3.1 Flash-Lite achieves a 2.5 times faster time to first token compared with Gemini 2.5 Flash and increases output generation speed by roughly 45 percent. The improvements are designed to support applications that require quick responses, including chat interfaces, automated workflows, and real-time analytics.

Performance and Developer Features

Google said the model also performs competitively across reasoning and multimodal benchmarks. Gemini 3.1 Flash-Lite achieved an Elo score of 1432 on the Arena.ai leaderboard and scored 86.9 percent on the GPQA Diamond benchmark and 76.8 percent on MMMU Pro, tests that measure advanced reasoning and multimodal understanding.

Those results place the model ahead of some earlier Gemini releases, including Gemini 2.5 Flash, while keeping operating costs relatively low. The system is designed to compete with lightweight models offered by other AI providers that prioritize speed and efficiency for developer applications.

Beyond raw performance, Gemini 3.1 Flash-Lite includes adjustable “thinking levels” available in AI Studio and Vertex AI. The feature allows developers to control how much reasoning the model applies to a task, helping balance response speed and computational cost depending on the workload.
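A minimal sketch of how such a per-request control might look for a developer building against a generateContent-style endpoint. The field name `thinkingLevel`, the accepted values, and the model identifier below are illustrative assumptions based on the article, not a confirmed API surface.

```python
# Sketch: assemble a generateContent-style request body with an
# adjustable "thinking level". Field names, accepted values, and the
# model id are assumptions for illustration, not a documented API.
def build_request(prompt: str, thinking_level: str = "low") -> dict:
    """Build a request dict; lower levels trade reasoning depth
    for lower latency and cost."""
    if thinking_level not in {"low", "medium", "high"}:
        raise ValueError(f"unknown thinking level: {thinking_level}")
    return {
        "model": "gemini-3.1-flash-lite",  # assumed model id
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": {
            "thinkingLevel": thinking_level,
        },
    }

# A latency-sensitive chat turn might use the default "low" level,
# while a dashboard-generation task might request "high".
req = build_request("Summarize this support ticket.", thinking_level="high")
print(req["generationConfig"]["thinkingLevel"])  # high
```

The design point is that the knob lives in the request, so the same deployed model can serve both cheap high-frequency calls and occasional deeper-reasoning tasks without switching models.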

The model is intended for a range of high-frequency tasks such as large-scale translation, content moderation, and structured data generation. Google also said the model can handle more complex assignments including building user interface layouts, generating dashboards, and running simulations when deeper reasoning is required.

Early testers, including companies such as Latitude, Cartwheel, and Whering, have begun using the model in preview environments. According to developer feedback shared by Google, the system can process complex inputs and follow detailed instructions while maintaining the efficiency typically associated with smaller AI models.

AI & Machine Learning, Enterprise Tech, News

OpenAI Developing GitHub Rival for Code Hosting

OpenAI is reportedly developing a new code hosting platform that could compete with Microsoft-owned GitHub. The project follows service disruptions affecting GitHub and reflects OpenAI’s expanding developer platform strategy.

By Daniel Mercer, Edited by Maria Konash
OpenAI is reportedly developing a GitHub rival as it expands its developer tools and AI infrastructure beyond ChatGPT. Photo: Rubaitul Azad / Unsplash

OpenAI is developing a new code hosting platform that could compete directly with GitHub, according to a report by The Information citing a person familiar with the project. The initiative is currently in early development and may take several months before a working product is completed.

The proposed platform would allow developers to host and manage software repositories, a service currently dominated by Microsoft-owned GitHub. Engineers at OpenAI reportedly began exploring the project after experiencing repeated service disruptions that temporarily made GitHub unavailable in recent months.

The new system could eventually be offered to OpenAI’s existing developer and enterprise customer base, according to the report. Such a move would extend the company’s reach beyond AI models and APIs into core software development infrastructure.

Competition With a Major Partner

If the platform is commercialized, it would represent a notable competitive step by OpenAI against Microsoft, one of its largest investors and strategic partners. Microsoft currently owns GitHub and integrates OpenAI models across several of its software products, including developer tools and enterprise platforms.

The move would highlight the evolving relationship between AI model providers and traditional software platforms. As AI systems increasingly generate, analyze, and modify code, companies are beginning to build integrated ecosystems that combine model access with developer infrastructure.

OpenAI has already expanded its presence in the developer ecosystem through tools such as its API platform and coding-focused AI assistants. A dedicated repository platform could allow the company to more closely integrate code generation, version control, and AI-assisted development workflows.

The reported project comes amid continued growth and investment in OpenAI. The company’s latest funding round reportedly valued it at about $873 billion after raising roughly $110 billion from investors including SoftBank and major technology firms.

The expansion also reflects intensifying competition across the AI industry as companies seek to control the platforms developers use to build applications. OpenAI has recently deepened its involvement with government technology infrastructure, including agreements to deploy its models within U.S. defense networks and ongoing discussions about deploying AI systems across NATO’s unclassified networks.

AI & Machine Learning, News

Anthropic Nears $20B Revenue Even as US Flags Supply Chain Risks

Anthropic’s annualized revenue has surged to nearly $20 billion even as the U.S. government classifies the company as a supply chain risk. The AI firm plans to challenge the designation while demand for its Claude models continues to grow.

By Daniel Mercer, Edited by Maria Konash
Anthropic’s revenue nears a $20B run rate as Claude adoption surges, despite a U.S. supply-chain risk designation. Photo: Anthropic

Anthropic is experiencing rapid revenue expansion even as it faces a political and regulatory dispute with the U.S. government. The artificial intelligence company has increased its annualized revenue run rate to more than $19 billion, more than doubling from roughly $9 billion at the end of 2025.

The growth marks a sharp rise from around $14 billion reported only weeks earlier. The surge has been driven by strong adoption of Anthropic’s AI models and developer tools, particularly the programming-focused product Claude Code.

Anthropic, which was recently valued at about $380 billion, has also gained traction among individual users. Its Claude mobile app recently climbed to the top of Apple’s U.S. App Store rankings as debate intensified over AI partnerships with the Pentagon. The ranking shift came as some users reacted to rival OpenAI’s defense agreement by uninstalling ChatGPT and switching platforms.

The company has also expanded its product portfolio with tools such as Claude Cowork, which integrates AI into collaborative software workflows. The launch of these products has disrupted segments of the software-as-a-service market, contributing to volatility among some SaaS company stocks.

Pentagon Conflict and Supply Chain Classification

Despite strong commercial momentum, Anthropic is facing growing pressure from the U.S. Department of Defense. Defense Secretary Pete Hegseth recently designated the company as a supply chain risk, a classification typically reserved for firms linked to geopolitical adversaries.

The designation followed months of negotiations between Anthropic and the Pentagon over how its AI systems could be used by military and intelligence agencies.

Anthropic insisted on maintaining safeguards preventing two specific applications: mass domestic surveillance of U.S. citizens and the use of AI systems in fully autonomous weapons. The company has argued that current AI models are not reliable enough to safely operate without human oversight and that large-scale surveillance of Americans would violate fundamental rights.

U.S. defense officials have previously said the military has no intention of deploying AI for mass surveillance or autonomous weapons but have argued that lawful uses of AI should remain unrestricted.

Legal Challenge and Industry Implications

Anthropic has described the supply chain risk designation as legally unsound and signaled it will challenge the move in court if necessary. The company argues the authority cited by the Defense Department applies only to contracts directly involving the Pentagon and should not restrict broader commercial use of its technology.

Under the company’s interpretation, the designation would not affect private customers or contractors using Claude outside Department of Defense agreements. Anthropic also argues the designation should not restrict how defense contractors deploy the model in non-Pentagon projects.

Industry observers say the dispute highlights tensions between AI safety policies and national security priorities as governments accelerate adoption of generative AI tools.

Despite the political standoff, Anthropic’s commercial business continues to expand rapidly. Strong enterprise demand for coding and productivity tools built on Claude has driven revenue growth while consumer adoption has surged, reflected in the app’s recent rise to the No. 1 position on the U.S. App Store amid backlash over competing defense AI partnerships. Meanwhile, OpenAI is considering a deal to deploy its AI models across NATO’s unclassified networks following its Pentagon agreement, underscoring how defense alliances are rapidly integrating generative AI technologies even as debates over safeguards and governance continue.

AI & Machine Learning, News

OpenAI Considers NATO AI Deal After Pentagon Agreement

OpenAI is considering a deal to deploy its AI models across NATO’s unclassified networks days after securing a Pentagon agreement. The discussions highlight expanding military adoption of generative AI technologies.

By Maria Konash
OpenAI considers deploying AI models on NATO’s unclassified networks. Photo: Marek Studzinski / Unsplash

OpenAI is considering an agreement to deploy its artificial intelligence technology across the North Atlantic Treaty Organization’s unclassified networks, according to a person familiar with the discussions. The potential contract would extend the company’s presence in defense and security infrastructure days after it announced a deal with the U.S. Department of Defense.

The Wall Street Journal first reported the talks. During an internal company meeting, OpenAI Chief Executive Sam Altman initially suggested the company was exploring deployment across NATO’s classified networks. A company spokesperson later clarified that Altman misspoke and that the opportunity currently being considered involves NATO’s unclassified systems.

NATO, a 32-member military alliance spanning North America and Europe, did not immediately respond to requests for comment. The alliance has increasingly explored digital and artificial intelligence capabilities for cybersecurity, logistics coordination, and information analysis.

OpenAI, whose investors include Microsoft and Amazon, announced late last week that its models would be deployed within the Pentagon’s classified network infrastructure. The agreement followed a directive from U.S. President Donald Trump ordering federal agencies to phase out the use of AI products developed by rival Anthropic.

Defense AI Debate Intensifies

Anthropic’s removal from federal systems followed a standoff with the Pentagon over safeguards governing military use of its AI models. Chief Executive Dario Amodei had emphasized the company’s opposition to allowing its systems to support mass domestic surveillance or fully autonomous weapons.

U.S. defense officials have said they do not intend to deploy artificial intelligence for mass surveillance of Americans or weapons systems operating without human oversight. However, they have argued that the military must retain flexibility to use AI for any lawful operational purpose.

After announcing its Pentagon agreement, OpenAI issued an updated statement clarifying that its systems would not be intentionally used for domestic surveillance of U.S. persons or nationals. The company added that the Department of Defense affirmed the AI services would not be used by intelligence agencies such as the National Security Agency.

Altman acknowledged that the Pentagon partnership has carried reputational risks for the company. Speaking at a company meeting on Tuesday, he described the decision as complex but necessary.

“I think this was an example of a complex, but right decision with extremely difficult brand consequences and very negative PR for us in the short term,” Altman said, according to the Wall Street Journal.

If a NATO agreement proceeds, it would mark another step in the rapid integration of generative AI tools into defense and security operations across Western governments.

At the same time, the controversy around military AI partnerships is increasingly influencing consumer behavior. Anthropic’s Claude recently rose to No. 1 on the U.S. App Store as backlash grew over OpenAI’s Pentagon deal, with market data showing surging downloads for Claude alongside a sharp spike in ChatGPT uninstallations.

Major US Agencies Shift From Anthropic to OpenAI Over Security Concerns

Multiple U.S. federal agencies, including State, Treasury, and HHS, have ceased using Anthropic’s Claude following a White House directive. Agencies are transitioning to alternatives such as OpenAI amid national security and ethical concerns.

By Samantha Reed, Edited by Maria Konash
U.S. agencies halt Anthropic AI, including Claude, after Trump directive and Pentagon concerns, moving to OpenAI. Photo: Miguel M. / Unsplash

Following a White House directive, the U.S. Departments of State, Treasury, and Health and Human Services have moved to cease using Anthropic’s AI products, including its Claude chatbot platform. The Pentagon had already begun transitioning to alternative providers such as OpenAI.

Treasury Secretary Scott Bessent confirmed on X that the department was terminating all use of Anthropic technology, while HHS notified employees to adopt platforms such as OpenAI’s ChatGPT and Google’s Gemini. The State Department similarly announced it would switch its in-house chatbot, StateChat, to OpenAI’s GPT-4.1. A State Department spokesperson emphasized that these steps align with President Donald Trump’s directive to cancel Anthropic contracts and bring programs into full compliance.

William Pulte, director of the Federal Housing Finance Agency, said his bureau and affiliated agencies, including Fannie Mae and Freddie Mac, were also ending all use of Anthropic products.

National Security and Industry Implications

President Trump labeled Anthropic a supply-chain risk, a designation typically reserved for foreign suppliers deemed a potential threat. The move follows a standoff between Anthropic and the Pentagon over AI deployment safeguards. Sources indicate the dispute centered on preventing the U.S. military and intelligence agencies from using Anthropic’s AI for autonomous weapons targeting or domestic surveillance.

OpenAI, backed by Microsoft and Amazon, quickly moved to fill the gap. The company announced a deal to deploy AI models in the Defense Department’s classified networks. CEO Sam Altman later posted on X that OpenAI would amend the agreement to clarify that its technology would not be used to deliberately track or surveil U.S. persons or nationals, including through the acquisition of commercial data.

Transition Challenges and Broader Impact

The rapid agency transitions underscore the operational complexity of replacing AI tools deeply integrated into federal workflows. Claude’s prior use in sensitive military and intelligence tasks highlights the difficulty of enforcing swift cutoffs. Analysts note that the shifts also reflect broader tensions over how AI safety, ethics, and governance intersect with national security priorities.

Meanwhile, Anthropic’s Claude has seen a surge in consumer adoption, rising to No. 1 on the U.S. App Store as public backlash grew against OpenAI’s Pentagon deal, highlighting a growing divergence between federal and retail users.

AI & Machine Learning, News, Regulation & Policy

Anthropic’s Claude Tops US App Store Amid OpenAI Pentagon Backlash

Anthropic’s Claude rose to No. 1 on the U.S. App Store as backlash grew over OpenAI’s Pentagon deal. Market data shows surging downloads for Claude and a sharp spike in ChatGPT uninstallations.

By Daniel Mercer, Edited by Maria Konash
Claude tops the U.S. App Store amid backlash over OpenAI’s Pentagon deal. Photo: Aerps.com / Unsplash

Anthropic’s Claude mobile app climbed to the top position in the U.S. Apple App Store over the weekend, overtaking OpenAI’s ChatGPT amid controversy surrounding OpenAI’s agreement with the Department of Defense, which has been rebranded under the Trump administration as the Department of War.

Data from market intelligence firm Sensor Tower indicates that U.S. uninstalls of ChatGPT surged 295% day over day on Saturday, February 28. That compares with a typical 9% daily uninstall rate over the past 30 days. At the same time, downloads of Claude in the U.S. rose 37% on Friday and 51% on Saturday after Anthropic announced it would not move forward with a defense partnership.

Anthropic cited concerns that its AI systems could be used for domestic surveillance or fully autonomous weapons, applications it has said exceed safe deployment boundaries. A portion of consumers appeared to support that stance.

ChatGPT’s U.S. download growth reversed sharply after news of its defense agreement became public. Downloads fell 13 percent day over day on Saturday and declined an additional 5 percent on Sunday. The day before the announcement, downloads had risen 14 percent.

Claude’s ranking improvement was rapid. The app climbed more than 20 positions within roughly a week, reaching No. 1 in the U.S. by March 2. Two months earlier, it had ranked outside the top 40.

User Sentiment and Global Momentum

Sensor Tower reported that one-star reviews for ChatGPT increased 775 percent on Saturday, followed by another 100 percent rise on Sunday. Five-star reviews fell by 50 percent during the same period. The ratings shift reflected growing user reaction to OpenAI’s national security partnership.

Other analytics firms reported similar trends. Appfigures said Claude’s total daily U.S. downloads surpassed ChatGPT’s for the first time on Saturday. Its estimates showed Claude downloads increasing 88 percent day over day, a steeper jump than Sensor Tower’s figures.

Claude also reached the No. 1 free iPhone app position in six additional countries: Belgium, Canada, Germany, Luxembourg, Norway, and Switzerland.

Similarweb reported that Claude’s U.S. downloads over the past week were approximately 20 times higher than in January, though it cautioned that not all growth may be tied directly to political developments.

Celebrity Endorsement and Social Momentum

On X, pop star Katy Perry shared a screenshot of her subscription to Claude Pro, circling the $214.99 annual plan and captioning the post “done” — in what many users interpreted as a show of support for Anthropic’s stance against certain military use cases. Her post quickly gained traction online, reinforcing broader public engagement with Claude amid the dispute. 

Consumer discussion on social media showed a wave of users canceling ChatGPT subscriptions and migrating to Claude, supported by community-shared screenshots of cancellation confirmations. These signals coincided with posts on platforms such as Reddit suggesting that concerns over AI ethics and national security were influencing individual choices.

The episode underscores how AI policy decisions can rapidly influence consumer behavior. While OpenAI has strengthened its standing in Washington through classified deployments, Anthropic appears to be gaining traction among retail users concerned about military applications of artificial intelligence.