Monthly Archives: November 2025
Baidu Emerges as China’s AI-Chip Leader Amid Nvidia Ban
Baidu is scaling up its in-house AI-chip business via its Kunlunxin unit, unveiling a five-year roadmap with M100 and M300 chips and aiming to fill the gap left by restricted Nvidia GPU access in China.
China’s Baidu is rapidly becoming a major player in the country’s AI chip sector, emerging as a challenger to other domestic firms — including Huawei — as both try to fill the void left by export restrictions on foreign GPUs.
Once best known as the leading search engine in China, Baidu has reoriented its strategy toward AI, autonomous driving and cloud infrastructure. Through its majority-owned subsidiary Kunlunxin, the company designs its own AI chips and sells them to third-party data centers, while also offering computing capacity via its cloud services.
Five-Year Roadmap: M100, M300, and Supernodes
At its recent flagship event, Baidu unveiled a roadmap calling for the release of the M100 chip, optimized for inference, in early 2026, and the more powerful M300 chip, targeted at training ultra-large multimodal AI models, in 2027.
Alongside individual chips, Baidu is building out large-scale infrastructure: supernodes such as “Tianchi 256” (256-chip configurations) and a planned “Tianchi 512” upgrade. Executives say these clusters will deliver major performance gains and enable deployment of trillion-parameter models at scale.
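To see why multi-chip supernodes matter for trillion-parameter models, a back-of-envelope memory calculation helps. The precision and sharding assumptions below are illustrative, not from Baidu's announcement: at 16-bit precision, the weights of a one-trillion-parameter model alone occupy about 2 TB, far beyond any single accelerator's memory.

```python
# Back-of-envelope memory arithmetic for a trillion-parameter model.
# Assumptions (illustrative, not from Baidu's announcement):
#   - 1e12 parameters stored at 16-bit (2-byte) precision
#   - weights sharded evenly across a 256-chip supernode

params = 1_000_000_000_000      # 1 trillion parameters
bytes_per_param = 2             # FP16/BF16
chips = 256                     # e.g. a "Tianchi 256"-style configuration

total_gb = params * bytes_per_param / 1e9   # model weights in GB
per_chip_gb = total_gb / chips              # share of weights per chip

print(f"Total weights: {total_gb:.0f} GB")      # 2000 GB (2 TB)
print(f"Per chip:      {per_chip_gb:.1f} GB")   # ~7.8 GB just for weights
```

Even before counting activations, optimizer state, or KV caches, the weights alone exceed single-chip memory, which is why vendors pitch cluster-scale configurations rather than individual accelerators.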
This vertical integration — chips, infrastructure, cloud services, and AI models — reflects Baidu’s ambition to provide a “full stack” AI offering. It already uses a mix of Kunlunxin chips and third-party GPUs to power its own ERNIE AI models.
Domestic Demand Surging Amid Supply Constraints
With U.S.-based GPU makers facing export restrictions — and Chinese firms hesitant to import even lower-end GPUs — demand for domestic AI chips is surging. Analysts at Deutsche Bank and JPMorgan have expressed optimism about Kunlunxin’s prospects, forecasting sharp growth in chip sales by 2026.
At the same time, shortages of AI-grade chips have hit other major Chinese tech companies. For example, Alibaba and Tencent have publicly cited limited chip supply as a constraint on data-center and AI expansion.
Baidu’s chip push — combining in-house hardware design, supernode infrastructure, and cloud offerings — may allow it to fill that supply gap and emerge as a strategic domestic supplier of AI computing power. As one industry analyst put it: success for Kunlunxin could make Baidu a “strategic supplier to the rest of China’s AI industry.”
Meanwhile, Google is reportedly in talks with Meta Platforms to supply its custom AI chips (TPUs) for Meta’s data centers, potentially challenging the market dominance of Nvidia. The potential Meta–Google deal signals a broader industry shift as hyperscalers seek alternatives to Nvidia GPUs and strive for greater control over AI infrastructure.
Federal vs. State Power: The Fight Over Who Regulates AI in the U.S.
For the first time, Washington is close to deciding how artificial intelligence should be regulated — but the fiercest battle isn’t over safety standards. It’s over whether states should retain the authority to pass their own AI laws.
Washington is nearing its first real decision on how to regulate artificial intelligence, and the central fight isn't about what the rules should be, but about who gets to make them.
In the absence of a federal standard focused on consumer safety, states have advanced dozens of bills aimed at mitigating AI risks. California’s SB-53 and Texas’s Responsible AI Governance Act are among the highest-profile efforts to curb harmful or deceptive uses of AI.
Major tech companies — and many AI startups — argue these state laws create an unworkable patchwork that threatens innovation. “It’s going to slow us in the race against China,” said Josh Vlasto, co-founder of the pro-AI PAC Leading the Future.
Push for Federal Preemption
Tech giants and several appointees within the White House are now advocating for a national standard — or none at all. New efforts are emerging to prevent states from regulating AI independently.
House lawmakers are reportedly exploring ways to use the National Defense Authorization Act (NDAA) to block state AI laws altogether. Meanwhile, a leaked draft of a White House executive order backs federal preemption of state laws, proposing an “AI Litigation Task Force” to challenge state laws in court and empowering federal agencies to override them.
A sweeping ban on state AI regulation, however, is unpopular in Congress. Lawmakers across the aisle argue that without a federal standard in place, blocking states would leave consumers exposed to harms while giving tech companies free rein.
To address this, Rep. Ted Lieu (D-CA) and the bipartisan House AI Task Force are preparing a federal package spanning fraud prevention, healthcare, transparency, child safety, and catastrophic-risk mitigation. But such a megabill is expected to take months — if not years — to pass.
Industry Influence and a Uniform Framework
The leaked White House EO would give David Sacks, Trump’s AI and Crypto Czar, co-lead authority over crafting a national legal framework. Sacks, a venture capitalist and longtime advocate for blocking state AI regulation, favors minimal federal oversight and industry self-regulation to “maximize growth.”
Several pro-AI super PACs have emerged to support that agenda. Leading the Future, funded by Andreessen Horowitz, OpenAI president Greg Brockman, Perplexity, and Palantir co-founder Joe Lonsdale, has raised more than $100 million. This week it launched a $10 million campaign urging Congress to pass a federal law that overrides state AI measures.
“When you’re trying to drive innovation in the tech sector, you can’t have all these laws popping up from people who don’t necessarily have the technical expertise,” Vlasto said.
States Move Faster — and Often First
As of November 2025, 38 states have enacted over 100 AI-related laws, primarily targeting deepfakes, disclosure standards, and government use of AI. A recent study found that 69 percent of those laws impose no obligations on AI developers, underscoring how limited and uneven the patchwork remains.
Congress, by contrast, has been slow. Hundreds of AI bills have been proposed; almost none have passed. Of the 67 bills Rep. Lieu has introduced to the House Science Committee since 2015, only one became law.
More than 200 lawmakers signed an open letter opposing AI preemption in the NDAA, and nearly 40 state attorneys general sent a similar warning, arguing that states must serve as “laboratories of democracy” for emerging technology.
Experts like Bruce Schneier and Nathan E. Sanders argue the patchwork concern is exaggerated. AI firms already comply with stricter rules in the EU, and most industries operate under differing state regulations without crisis. The true motive, they say, is avoiding accountability.
What a Federal AI Standard Might Look Like
Lieu’s forthcoming 200-plus-page bill includes measures on fraud penalties, deepfake protections, whistleblower safeguards, academic compute access, and testing requirements for large AI models.
Unlike the Senate’s more aggressive Hawley–Blumenthal proposal, Lieu’s bill would not require government-run AI model evaluations before deployment. Instead, AI labs would test and publish results themselves, as most already do voluntarily.
Lieu acknowledges this approach is less strict — but more realistic.
“My goal is to get something into law this term,” he said. “I’m not writing a bill that I’d have if I were king. I’m writing a bill that could pass a Republican-controlled House, Senate, and White House.”
Amazon Workers Sound Alarm on Rapid AI Rollout, Highlighting Risks to Democracy and Earth
Amazon employees demand ethical AI practices, warning that the company’s fast-paced AI rollout risks worker safety, democratic oversight, and environmental sustainability.
Over 1,000 Amazon employees have signed an open letter criticizing the company’s rapid AI rollout, warning it could inflict “staggering damage to democracy, to our jobs, and to the earth.” The letter, coordinated by the internal advocacy group Amazon Employees for Climate Justice, also drew support from over 2,400 individuals at other tech firms, including Google and Apple.
Employees backing the letter include engineers, senior product leaders, marketing managers, and warehouse staff. Many cite the company’s “race” to deploy AI as a driving force behind worker exploitation and environmental harm. One senior engineering manager explained that AI has become “almost like a drug” for the company, used to justify layoffs and fund data centers for untested AI products.
AI Expansion and Environmental Concerns
Amazon, along with other tech giants, is investing billions in data centers to train generative AI systems powering internal tools and consumer products such as the shopping chatbot Rufus. CEO Andy Jassy projected that Rufus could increase Amazon’s sales by $10 billion annually.
However, AI data centers consume significant energy, often sourced from carbon-emitting utilities. The open letter calls for Amazon to abandon fossil fuels in its AI operations, prevent its AI technologies from supporting surveillance or mass deportation, and stop requiring employees to use AI in their work.
Worker Activism and Ethical AI
Employees argue that the company should form ethical AI working groups with rank-and-file input, allowing workers to have a voice in AI implementation and automation decisions. The letter comes amid widespread job cuts – Amazon announced approximately 14,000 layoffs last month – and increasing pressure on staff to use AI to double productivity.
The advocacy group emphasizes that their concern is not opposition to AI itself, but the pace and methods of deployment. Members aim to address near-term risks, such as environmental impact and worker exploitation, rather than long-term hypothetical scenarios about superintelligent AI.
Context and Global Relevance
This activism follows a broader global trend of tech workers pushing for ethical AI and climate accountability. The group’s strategy aligns with prior campaigns by scientists and workers urging responsible AI deployment. Organizers timed their letter ahead of Black Friday to highlight the societal costs of AI powering Amazon’s retail operations.
Amazon spokesperson Brad Glasser reiterated the company’s commitment to net-zero carbon emissions by 2040, but did not address employee concerns regarding AI use or working conditions. Employees remain cautious, noting that recent company presentations focused on efficiency improvements rather than scaling energy use or mitigating environmental effects.
Microsoft Launches AI Skills Initiative in Poland for One Million People
Microsoft has launched a major AI training initiative in Poland, aiming to provide one million people with AI competencies by the end of 2025. Free courses are available in Polish via the AI Skills Navigator hub.
Microsoft announced a new AI training program in Poland designed to equip one million people with artificial intelligence skills by the end of 2025. The initiative builds on Microsoft’s ongoing investment in the Polish Digital Valley, which has already trained over 430,000 IT specialists, business representatives, partners, and students, and launched the Azure Poland Central cloud region.
Free AI Training via AI Skills Navigator
The program provides free AI training courses through the Microsoft AI Skills Navigator learning hub, available in Polish. Courses range from basic AI literacy and everyday tools such as Word, Excel, Teams, and GitHub Copilot to advanced applications built with Azure OpenAI. The platform aggregates over 200 courses from Microsoft Learn, LinkedIn Learning, GitHub, and other partners, and regularly updates content to meet user needs.
Łukasz Foks, Director of AI National Skills at Microsoft, said:
“We are starting with a catalog called AI Skills Navigator and will continue adding new materials and localizing content. The platform allows learners to measure their current skills and identify new areas for growth.”
Accessible for All Skill Levels
Training is tailored for a broad audience, from beginners to developers and business leaders. AI-driven guidance helps users select learning paths suited to their goals. The initiative also collaborates with universities, NGOs, tech communities, and other partners to reach a wide audience across Poland.
Responding to Growing AI Demand
Microsoft’s program responds to urgent workforce needs. An IDC 2024 AI opportunity study shows that 77% of organizations in Poland are already using or planning to use AI within 12 months.
According to the Microsoft Work Trend Index, 53% of Polish business leaders would not hire candidates without AI skills, and 55% prefer less experienced candidates with AI capabilities over more experienced ones lacking these qualifications.
Currently, 61% of employees in Poland use AI in their work, compared to 75% globally. Microsoft’s initiative aims to expand these capabilities and support the growth of businesses, public sector organizations, and society at large.
In a similar move, OpenAI introduced ChatGPT for Teachers, a free AI tool for K-12 educators that supports lesson planning, secure collaboration, and district-level administration. Both initiatives highlight the global trend of leveraging AI to enhance learning, skill-building, and workforce readiness.
OpenAI Co-Founder: To Move Forward, AI Must Go Back to Fundamental Science
OpenAI co-founder Ilya Sutskever says the era of scaling compute and data as the main driver of AI progress is ending, arguing that meaningful advances now depend on true scientific breakthroughs and solving AI’s poor generalization.
OpenAI co-founder Ilya Sutskever believes the AI industry is reaching a point where simply scaling computation is no longer enough, and that true progress now requires returning to deep scientific research.
Speaking on the Dwarkesh Podcast, Sutskever explained that in recent years, the AI industry has operated on a straightforward principle: to make a model smarter, give it more compute and more data. For a while, that approach worked — companies bought ever-larger numbers of GPUs, built massive data centers, and saw steady improvements. For business, this was an attractive strategy: low-risk, predictable, and clearly executable.
But, according to Sutskever, that formula has now been exhausted. Data is finite, and companies already possess enormous compute resources. He doubts that further scaling alone will transform the field:
“Do people believe that increasing scale by 100× will change everything? There will be differences, yes. But that everything will be completely transformed just by scaling — I don’t think so.”
He argues that the industry is entering a new phase — a return to genuine scientific inquiry — except this time, researchers have access to unprecedented computational power. Compute remains essential, he said, especially when everyone is operating within the same conceptual paradigm. However, what becomes decisive now is how that compute is used. And that is fundamentally a research problem.
The Core Scientific Challenge: Poor Generalization
Sutskever highlighted that today’s models still struggle with generalization, especially compared to humans. AI systems need large amounts of data and many examples to learn tasks that humans can grasp after seeing just one or two.
“Models generalize much worse than humans. This is absolutely obvious. And it’s a fundamental problem,” he said.
Because of this limitation, Sutskever believes that major progress in AI will no longer come from scaling alone, but from breakthroughs that improve the underlying science of learning and generalization.
This perspective comes as forecasts suggest ChatGPT could have as many as 220 million paying users by 2030, positioning it among the world’s largest subscription services.
OpenAI and Perplexity Launch AI Shopping Assistants, But Startups Say Niche Still Wins
OpenAI and Perplexity introduced AI shopping tools this week, helping users find products through chat-based queries. Experts say startups with niche datasets may still outperform general-purpose chatbots.
With holiday shopping approaching, OpenAI, Perplexity, and Google have all unveiled AI shopping features integrated into their existing chatbots. The tools allow users to research products with natural language queries, such as finding a gaming laptop under $1,000 or requesting fashion recommendations based on uploaded photos.
OpenAI’s ChatGPT suggests products that match user criteria, while Perplexity emphasizes how its chatbot memory can tailor recommendations based on previous interactions, including location and occupation. Both companies aim to simplify product discovery and enhance user experience through AI.
Specialized Startups Maintain an Edge
Despite these new features, experts suggest niche AI shopping startups may still provide superior experiences. Zach Hudson, CEO of the interior design shopping tool Onton, which recently raised $7.5 million to make AI shopping smarter, emphasized that specialized datasets give vertical startups an advantage. Onton, for example, catalogs hundreds of thousands of interior design products to train its AI models on higher-quality data.
Julie Bornstein, CEO of Daydream, highlighted fashion as a sector requiring nuanced understanding. “Finding a dress you love is not the same as finding a television,” she said. Vertical models, tuned to real consumer behavior, are likely to outperform general-purpose AI tools in domains like fashion, home goods, and travel.
E-Commerce Partnerships and Monetization
OpenAI and Perplexity also benefit from existing user bases and partnerships with major platforms. OpenAI integrates with Shopify, while Perplexity has deals with PayPal, enabling users to complete purchases directly in the chatbot interface. This approach mirrors strategies from tech giants like Google and Amazon, where monetization can come from product advertising and e-commerce facilitation.
However, reliance on general search indexes may limit effectiveness. Hudson noted that AI models dependent on external search results can only perform as well as the data those indexes provide, underscoring the value of proprietary, domain-specific datasets.
Looking Ahead
As AI shopping grows – Adobe projects a 520% increase in AI-assisted online shopping this holiday season – competition is likely to intensify. Startups focusing on vertical markets, investing in high-quality datasets, and leveraging domain expertise are positioned to maintain relevance even as large AI platforms expand their capabilities.
The race to merge conversational AI with e-commerce reflects a broader trend of AI adoption across consumer applications, but experts agree that specialization and data quality will remain decisive factors for success.
Coinbase Ventures Highlights AI-Driven Crypto Innovation for 2026
Coinbase Ventures identifies AI and crypto as the next frontier for onchain innovation in 2026. AI-driven smart contracts, robotics data, and privacy-preserving tools are set to transform decentralized finance and trading.
Coinbase Ventures today outlined its outlook for 2026, highlighting artificial intelligence as a key driver of innovation in crypto and decentralized finance (DeFi). The firm anticipates AI-powered tools will accelerate the development of smart contracts, trading infrastructure, and data-driven robotics systems, expanding the onchain ecosystem.
AI-Powered Smart Contracts and Security
Coinbase Ventures expects 2026 to mark a pivotal moment for AI in smart contract development. AI agents will help non-technical founders launch onchain businesses rapidly, handling code generation, security audits, and continuous monitoring. This capability is expected to unlock new applications and enhance risk management across DeFi platforms.
Robotics Data Collection and Training AI
AI and robotics are increasingly intertwined. High-quality datasets from physical interactions—grip, manipulation, and deformable materials—are scarce but critical for advancing embodied AI. Coinbase Ventures sees decentralized, incentivized models for data collection as a promising approach, accelerating the development of AI-driven robotics integrated with blockchain networks.
AI-Enhanced Trading and Market Infrastructure
Prediction markets and decentralized exchanges (DEXs) are benefiting from AI integration. Aggregators and trading terminals powered by AI can consolidate liquidity, provide advanced analytics, and facilitate efficient execution across venues. AI tools are expected to improve capital efficiency, enable real-time hedging, and reduce exposure to market risks for traders and liquidity providers.
Privacy-Preserving AI and Onchain Adoption
Coinbase Ventures highlights the importance of privacy-preserving AI for broader adoption. Technologies such as zero-knowledge proofs (ZKPs) and fully homomorphic encryption (FHE) allow users to maintain confidentiality while interacting with public blockchains. This is crucial for institutions, professional traders, and mainstream users seeking secure, transparent, and private participation.
Coinbase Ventures emphasizes that 2026 presents unprecedented opportunities at the intersection of AI and crypto. From autonomous smart contracts and AI-enhanced trading terminals to robotics-based data collection and privacy-first protocols, the firm anticipates a new era of scalable, intelligent, and secure blockchain applications.
By 2030, ChatGPT Could Become One of the World’s Largest Subscription Services
Analysts say ChatGPT’s explosive growth trajectory puts it on track to become one of the world’s largest subscription services by 2030, driven by unprecedented demand for AI assistants, enterprise automation, and AI-driven commerce.
OpenAI forecasts that as many as 220 million of ChatGPT’s weekly users will be paying subscribers by 2030. The estimate – roughly 8.5% of an expected 2.6 billion weekly active users – would make ChatGPT one of the largest subscription platforms globally.
As of July 2025, around 35 million users (about 5% of weekly active users) already pay for ChatGPT “Plus” or “Pro” plans, priced at $20 and $200 per month, respectively.
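A quick back-of-envelope check (illustrative only; the figures are the ones quoted above) confirms the percentages are internally consistent:

```python
# Sanity-check the subscription figures quoted above.

# 2030 forecast: 220M paying subscribers out of ~2.6B weekly active users.
paid_2030 = 220_000_000
wau_2030 = 2_600_000_000
share_2030 = paid_2030 / wau_2030
print(f"2030 paid share: {share_2030:.1%}")   # 8.5%

# Mid-2025: ~35M payers described as about 5% of weekly active users,
# implying roughly 700M weekly active users at the time.
paid_2025 = 35_000_000
implied_wau_2025 = paid_2025 / 0.05
print(f"Implied 2025 WAU: {implied_wau_2025 / 1e6:.0f}M")   # 700M
```

In other words, the forecast assumes both the user base and the paid-conversion rate will grow substantially over the next five years.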
The company anticipates its annualized revenue may reach approximately $20 billion by year-end. However, rising losses, driven by heavy investment in research, development, and infrastructure, remain a concern.
OpenAI is also looking to diversify its revenue streams: up to 20% of future revenue is expected to come from new offerings such as shopping- and advertising-driven features. The company recently launched a personal shopping assistant within ChatGPT, potentially paving the way for monetization through ads or sales commissions.
Broader Context: Infrastructure and Data-Center Expansion
The growth ambitions for paid ChatGPT subscriptions come alongside a broader push by OpenAI to expand its AI infrastructure — investments that may require large-scale data centers and reliable compute capacity.
- SoftBank plans to invest up to $3 billion to convert a factory in Ohio into a facility producing equipment for future OpenAI data centers.
- Foxconn, known for assembling consumer electronics and AI servers, has entered a partnership with OpenAI to co-design and manufacture key hardware for U.S.-based data centers.
These moves, often grouped under the broader Stargate Project, suggest OpenAI’s long-term strategy is not only to grow its user base but also to build a robust infrastructure backbone capable of supporting heavy AI workloads at scale.
Google and Accel Launch Joint AI Investment Program for Indian Founders
Google and Accel will co-invest up to $2M in early-stage Indian AI startups through Accel’s Atoms program, offering capital, compute credits, and deep technical mentorship.
Google and Accel have announced a first-of-its-kind partnership to identify and fund India’s earliest-stage AI startups, marking the inaugural external collaboration for the Google AI Futures Fund. Through Accel’s Atoms program, the firms will jointly invest up to $2 million per startup, with each contributing up to $1 million. The 2026 cohort will focus on founders in India and the Indian diaspora building AI-native products from day one.
“The thought process is building AI products for billions of Indians, as well as supporting AI products built in India for global markets,” Accel partner Prayank Swaroop told TechCrunch. India’s vast internet user base, strong engineering talent, and growing cloud infrastructure make it an increasingly attractive AI market, even as frontier model development remains concentrated in the U.S. and China.
Momentum is beginning to shift. OpenAI and Anthropic have both announced offices in India, and global investors are increasing early-stage commitments. Accel says it will consider startups across creativity, productivity, entertainment, SaaS, developer tools, and even foundational models. The firms will also prioritize areas where LLMs are expected to advance over the next 12–24 months.
Founders selected for the program will receive up to $350,000 in compute credits across Google Cloud, Gemini, and DeepMind, early access to experimental models and APIs, mentorship from Accel partners and Google technical leaders, and immersion sessions in London and the Bay Area, including Google I/O. Support will also come from Google Labs and DeepMind research teams, plus marketing channels and the Atoms founder network.
Jonathan Silber, co-founder of the Google AI Futures Fund, emphasized India’s strategic importance: “This is the Futures Fund’s first such collaboration anywhere in the world, and we chose India for a reason.” The partnership follows Google’s recently unveiled $15 billion plan for a 1-gigawatt data center and AI hub in the country, as well as previous multibillion-dollar digital investments in Airtel, Jio, and Flipkart.
Google will appear directly on cap tables for startups funded through the program, though the company says there are no requirements to use Gemini or other Google products exclusively. “Sometimes Google’s technology is the best,” Silber said. “Other times, you’ll see Anthropic or OpenAI.”
Accel’s Atoms platform, launched in 2021, has already backed more than 40 companies that have raised over $300 million in follow-on funding. The new partnership with Google arrives shortly after Accel and Prosus launched Atoms X to support Indian founders building for massive domestic scale.
Silber stressed that the initiative is not structured as a sales funnel for Google Cloud or as a path to future acquisitions: “Our objective is simply to see the next wave of innovation in the AI space coming out of India.”
Meta and Google Discuss Multi-Billion Dollar Chip Deal
Meta is in talks to spend billions on Google’s TPUs for its data centers, a shift that would position Alphabet as a stronger competitor to Nvidia. The discussions include Meta potentially renting Google Cloud chips as early as next year.
Meta Platforms is in advanced discussions to spend billions of dollars on Google’s tensor processing units (TPUs) for use in its data centers beginning in 2027. The talks, reported by The Information, would place Alphabet in more direct competition with Nvidia, which currently dominates the market for AI processors.
The conversations also include the option for Meta to begin renting Google Cloud chips as early as next year. The initiative reflects Google’s broader effort to encourage outside companies to adopt its TPUs rather than rely solely on Nvidia’s graphics processors, which have become increasingly expensive and difficult to source. At present, Google deploys TPUs only in its own facilities, but a partnership with Meta would mark a strategic shift and open the door for wider commercial use.
Google Eyes Larger Share of AI Compute Market
Expanding TPU access to customers’ data centers could significantly enlarge Google’s share of the fast-growing AI chip market. Some Google Cloud executives believe the strategy could help capture as much as 10 percent of Nvidia’s annual revenue, according to the report. That would represent a multibillion-dollar opportunity as enterprises increase spending on custom silicon to support AI workloads.
Alphabet shares rose more than 4% in premarket trading on Tuesday, putting the company on track to reach a $4 trillion valuation if gains continue. Broadcom, which partners with Google on AI chip development, gained 2%. Nvidia shares declined more than 3%.
Securing Meta as a customer would be a high-profile win for Google, particularly because Meta is one of Nvidia’s largest buyers and plans to spend up to $72 billion on AI infrastructure this year. Alphabet, Meta and Nvidia did not immediately comment, and the report has not been independently verified.
Demand for customized AI chips such as TPUs has accelerated as companies seek alternatives to Nvidia’s tightly constrained supply and premium pricing. Anthropic expanded its agreement with Google last month, securing access to as many as one million TPUs valued in the tens of billions. Google’s cloud business has also drawn new investment, including from Berkshire Hathaway, and has gained traction with its Gemini 3 model as enterprises expand their AI deployments.
Despite recent momentum, competing with Nvidia will require Google to overcome the entrenched ecosystem around CUDA, Nvidia’s proprietary software platform. More than four million developers rely on CUDA to build AI and high-performance applications, creating a significant barrier for any challenger.
Amazon Commits Up to $50 Billion to Expand AI and Supercomputing for U.S. Government
Amazon is investing up to $50 billion to expand AI and supercomputing capacity for U.S. government agencies through AWS’s secure cloud regions, adding nearly 1.3 gigawatts of advanced computational infrastructure.
Amazon announced today that it will invest up to $50 billion to dramatically expand AI and supercomputing infrastructure for U.S. government customers using Amazon Web Services (AWS). The multi-year initiative, set to break ground in 2026, will add nearly 1.3 gigawatts of advanced computing capacity across the AWS Top Secret, AWS Secret, and AWS GovCloud (US) regions.
Massive AI Infrastructure Expansion
The investment includes new data centers featuring next-generation compute and networking hardware. Federal agencies will gain extended access to AWS’s full AI ecosystem, including:
- Amazon SageMaker for model development and customization
- Amazon Bedrock for foundation model and agent deployment
- Amazon Nova, Anthropic Claude, and leading open-weights models
- AWS Trainium chips and NVIDIA AI infrastructure
These capabilities aim to support mission-critical applications requiring secure, scalable, U.S.-based cloud environments.
Accelerating Government Missions
AWS says the expanded infrastructure will enable federal agencies to compress workloads that once took weeks into hours by integrating AI with modeling and simulation workflows. Potential impacts include:
- Real-time analysis of decades of global security data
- Automated threat detection and response planning from satellite and sensor inputs
- Unified views of fragmented supply chain, infrastructure, and environmental data
- Faster scientific discovery through AI-assisted experimentation and high-fidelity simulations
Amazon says this shift represents a transition from traditional HPC to AI-accelerated discovery, allowing researchers to interact with complex computational systems through natural language and expert AI agents.
Strategic National Priorities
The initiative aligns with the Administration’s AI Action Plan and broader efforts to strengthen U.S. technological leadership in areas like national security, autonomous systems, energy innovation, cybersecurity, and healthcare research.
“AWS’s purpose-built government AI and cloud infrastructure will fundamentally transform how federal agencies leverage supercomputing,” said AWS CEO Matt Garman, adding that the investment “removes the technology barriers that have held government back and positions America to lead in the AI era.”
AWS emphasized that its long-running experience with secure, classified cloud environments will allow agencies to focus on mission outcomes rather than maintaining complex on-premises systems.
Ex-MrBeast Strategist Builds AI Tool to Boost Short-Form Video Content and Analytics
Palo, founded by Jay Neo and former Palantir engineer Shivam Kumar, provides AI-powered analytics and ideation tools to help short-form video creators optimize content and grow audiences.
Short videos are driving massive engagement across platforms like Instagram, TikTok, YouTube, and Facebook. With billions of daily views, creators face pressure to produce content quickly while remaining relevant. Palo, a startup founded by former MrBeast content lead Jay Neo, ex-Palantir engineer Shivam Kumar, and creator Harry Jones, aims to address this challenge with AI-powered tools for video creation and analysis.
How Palo Works
Palo’s platform has three core components: an AI ideation and planning tool, analytics, and a community feature. Creators integrate their social accounts into the platform, allowing Palo to analyze their video performance and provide actionable insights. Metrics tracked include audience sentiment, engagement hooks, originality, and trending topics.
Kumar, Palo’s CTO, explained that the platform uses multiple AI models to generate structured insights and build a “persona” for each creator. This persona reflects the creator’s style and preferences, helping the AI suggest content ideas, scripts, and storyboards tailored to their strengths.
The AI planner functions like a conversational chatbot. Creators can ask general questions, request scripts based on formulas, or generate visual storyboards for low-dialogue content. The community feature currently allows creators to message one another, with plans to expand social and collaborative functionality.
Testing and Launch
During its test phase, Palo worked with 40 creators, collectively reaching over 1 million users across platforms. The platform is now open to creators with at least 100,000 followers, with pricing starting at $250 per month and higher tiers for heavier usage.
The startup has raised $3.8 million in funding from Peak XV (formerly Sequoia India), with participation from NFX and individual investors. Peak XV managing director Rajan Anandan cited the team’s experience with top creative teams and technical expertise as reasons for investing.
Addressing Burnout and AI Tension
Palo’s approach emphasizes enhancing creators’ intuition rather than enforcing formulaic content creation. Neo likens it to a comedian learning from audience reactions: the AI provides insights to help creators iterate faster without replacing their creative instincts. Josh Constine, former TechCrunch editor and Palo investor, said the tool helps creators avoid burnout from constantly consuming content to track trends and algorithms.
The AI-Creator Balance
Palo’s launch comes amid growing debates about AI’s role in content creation. Platforms like TikTok, Meta, and Google are integrating AI tools, but some creators, including MrBeast, have warned of AI’s potential to homogenize content. Palo aims to use AI as a supportive tool, nudging creators toward potential success while preserving originality and creativity.
By combining analytics, AI-assisted planning, and a nascent creator community, Palo positions itself as a platform designed to help creators thrive in a competitive, fast-moving short-form video landscape.
Maria Konash is the Editor in Chief of AIstify, where she leads the platform’s editorial vision at the intersection of artificial intelligence, technology, and society. She oversees AIstify’s coverage of emerging AI trends, industry developments, and practical use cases, ensuring that complex topics are explained with clarity, accuracy, and real-world relevance. With a strong focus on editorial quality and structure, Maria shapes content that serves both professionals and curious readers - from in-depth AI briefs and explainers to glossary entries and thought leadership pieces. Her work emphasizes clear language, strong context, and editorial integrity, helping readers understand not just what is happening in AI, but why it matters. As Editor in Chief, she also sets content standards across the platform, guiding contributors and maintaining a consistent, accessible tone that defines AIstify’s voice in the fast-evolving AI landscape.
Salesforce CEO Switches AI Tools, Moving from ChatGPT to Gemini 3
Salesforce CEO Marc Benioff said he is abandoning ChatGPT in favor of Google’s new Gemini 3 AI, praising its speed, reasoning, and multimodal capabilities. The endorsement highlights growing competition in the AI sector.
Salesforce CEO Marc Benioff announced he is replacing OpenAI’s ChatGPT with Google’s newest AI model, Gemini 3, describing it as an “insane” leap forward in reasoning, speed, and multimodal capabilities. Benioff’s remarks, posted on X, emphasized Gemini 3’s proficiency in text, images, video, and code.
Holy shit. I’ve used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I’m not going back. The leap is insane — reasoning, speed, images, video… everything is sharper and faster. It feels like the world just changed, again. ❤️ 🤖 https://t.co/HruXhc16Mq
— Marc Benioff (@Benioff) November 23, 2025
Executive Reactions
Benioff’s public endorsement went viral, attracting over one million views within hours. Other tech leaders have shared early praise for Gemini 3. OpenAI CEO Sam Altman congratulated Google on the model’s launch. Former Tesla AI director Andrej Karpathy described Gemini 3 as a “tier 1 LLM” with strong daily usability. Stripe CEO Patrick Collison highlighted Gemini 3’s ability to generate an interactive web page summarizing breakthroughs in genetics.
Gemini 3, developed by Google DeepMind, is positioned as the company’s most advanced model to date for agentic workflows and “vibe coding.” The system integrates tightly with the Google ecosystem, offering a seamless experience across text, code, images, and video.
Significance for the AI Race
Benioff’s switch is notable given Salesforce’s existing partnerships with OpenAI and Anthropic, reflecting the speed at which enterprise AI preferences are evolving. Gemini 3’s launch intensifies competition with OpenAI’s GPT-5 and Anthropic’s latest Claude models, each of which continues to push boundaries in reasoning, multimodal output, and advanced tool use.
The endorsement underscores a broader trend in the AI industry: top executives are actively evaluating and migrating to models that provide faster, more capable, and more versatile AI tools, signaling rapid shifts in the corporate adoption of AI technologies.
Elon Musk-Backed DOGE Agency Disbanded Early by Trump Administration
The Department of Government Efficiency (DOGE), launched to cut bureaucracy and federal spending, has been disbanded eight months early. Remaining functions are now handled by the Office of Personnel Management.
The Department of Government Efficiency (DOGE), an initiative intended to streamline federal operations and reduce bureaucracy, has been disbanded with eight months remaining in its mandate, Reuters reports. The Office of Personnel Management (OPM) now oversees many of DOGE’s functions, according to OPM Director Scott Kupor.
DOGE’s Short-Lived Tenure
DOGE was created in January to aggressively shrink federal agencies, cut budgets, and redirect government priorities. It drew widespread attention early in Trump’s second term, with SpaceX and Tesla CEO Elon Musk leading initial efforts and using high-profile stunts to publicize the unit’s mission. Musk described DOGE as a “chainsaw for bureaucracy” during a public event.
Despite claims of cutting tens of billions of dollars, DOGE did not release detailed accounting of its work, making independent verification impossible. While the administration has said the initiative contributed to waste reduction, external analysts have questioned its measurable impact.
Staff Transition to New Roles
Many DOGE employees have moved to other positions within the administration. Notably, Joe Gebbia, co-founder of Airbnb, now leads the National Design Studio, tasked with improving the visual presentation of government websites. Projects include recruitment platforms for law enforcement and drug pricing initiatives. Other former DOGE staff hold key technology and oversight roles across agencies, including Health and Human Services, the State Department, and the Office of Naval Research.
End of Key Policies
Alongside the unit’s closure, DOGE’s hallmark hiring freeze has concluded. Previously, federal agencies were restricted in new hires, with DOGE approval required for most exceptions. Kupor confirmed that the government is no longer enforcing specific reduction targets.
Although DOGE is officially disbanded, the Trump administration continues to pursue regulatory streamlining, leveraging AI tools to review and recommend cuts to federal regulations. Former DOGE representatives remain involved in these AI-driven initiatives, reflecting the unit’s lasting influence on federal operations.
Broader Context
The early closure of DOGE contrasts with the administration’s initial fanfare, which included public endorsements from Trump, cabinet secretaries, and Musk. The unit’s short life illustrates the challenges of implementing sweeping efficiency reforms at the federal level, particularly when accountability and reporting standards are limited.
Former DOGE staff and leadership now continue their work under new structures, including the National Design Studio and AI regulatory review teams, signaling that while DOGE as an entity has ended, its objectives persist in other forms.
Amazon Cuts Engineering Roles as AI Investment Reshapes Workforce
Amazon’s latest layoffs hit engineering and product roles hardest as the company restructures and redirects resources toward artificial intelligence.
Amazon’s sweeping job cuts, totaling more than 14,000 roles announced last month, have hit engineering positions harder than any other category. WARN filings across New York, California, New Jersey and Washington show that close to 40 percent of more than 4,700 reported cuts in those states were engineering roles. The disclosures cover only part of the total reductions due to differences in reporting requirements, but they highlight where the company is tightening most aggressively.
The layoffs come during a broader contraction across the tech industry. More than 113,000 jobs have been eliminated at 231 companies so far this year, according to Layoffs.fyi, extending the trend that began in 2022 as businesses recalibrated after the pandemic. For Amazon, the reductions are part of CEO Andy Jassy’s long-running effort to make the company operate with the speed and discipline of a startup by removing layers of bureaucracy and scaling down teams.
Engineering and Product Roles Take the Biggest Hit
The filings show that software engineering cuts affected a range of seniority levels, with SDE II employees accounting for a large share of the reductions. Engineering-heavy teams across devices, cloud services, retail, and advertising were included. More than 500 product and program managers were also eliminated, representing over 10 percent of recorded cuts.
Amazon executives have emphasized that the changes are organizational rather than technology-driven. In her memo to staff, human resources chief Beth Galetti said the company must become leaner to innovate more quickly, especially as new AI tools accelerate development cycles. She called this “the most transformative technology” since the early internet.
At the same time, Amazon is investing heavily in artificial intelligence. Jassy has said that AI will reduce corporate headcount over time by improving efficiency. That mirrors a larger trend in the software industry, where coding assistants and automated development platforms from OpenAI, Cursor and Cognition are reshaping engineering workflows. Amazon recently launched its own tool, Kiro.
Gaming, Advertising and AI Shopping Teams Affected
The layoffs extended into creative and consumer-facing divisions. In California, Amazon’s gaming studios in Irvine and San Diego saw reductions across design, production and art teams. The company is pausing much of its work on large-scale game development, including MMO projects.
Amazon also downsized its visual search and shopping teams, which built AI tools such as Amazon Lens and Lens Live for real-time product discovery. WARN filings indicate cuts across software engineers, applied scientists and quality assurance roles in Palo Alto.
The online advertising organization, one of Amazon’s most profitable units, reduced more than 140 sales and marketing positions in New York. The move comes as major platforms, including Amazon and OpenAI, explore how new AI-driven ad formats will evolve. In our earlier coverage of ChatGPT’s emerging ads functionality, we noted how OpenAI is testing monetization tools alongside new user features. Amazon’s downsizing in its ads division suggests a parallel reassessment of priorities as AI-driven automation changes how campaigns are sold and managed.
Amazon is expected to pursue more reductions in early 2026, continuing a shift toward a leaner structure as the company doubles down on artificial intelligence and long-term profitability.