Bots Have Overtaken Humans Online, New Report Finds

AI-driven bots now generate more internet traffic than humans, according to a new report, highlighting the growing impact of automated systems online.

By Samantha Reed, edited by Maria Konash
AI bots surpass human internet traffic, led by ChatGPT and Claude. Image: Philipp Katzenberger / Unsplash

Automated systems powered by artificial intelligence are now generating more internet traffic than humans, according to a new report from cybersecurity firm Human Security.

The company’s “State of AI Traffic” report found that automated traffic grew nearly eight times faster than human activity in 2025, marking a significant shift in how the internet is used. Automated traffic includes any activity generated by software systems, such as bots and AI agents, rather than human users.

Human Security said AI-driven traffic increased by 187% over the course of 2025, fueled largely by the widespread adoption of large language models and conversational AI tools.

Rise of AI-Driven Internet Activity

The growth in automated traffic is closely linked to the rapid expansion of AI services. Platforms such as ChatGPT, Claude, and Gemini are increasingly used to perform tasks that previously required human interaction, from answering questions to generating content and automating workflows.

As these systems scale, they generate large volumes of requests, both from direct user interactions and from automated processes acting on behalf of users. This has contributed to a shift where machine-generated activity is becoming the dominant form of traffic online.

Human Security’s data is based on interactions processed through its Human Defense Platform, which the company said analyzed over one quadrillion events. While the dataset is extensive, experts note that measuring total internet traffic remains challenging because no centralized dataset exists.

Measurement Challenges and Implications

Researchers caution that estimates of bot activity can vary depending on methodology. Techniques such as analyzing user-agent strings can provide insights but may produce inconsistent results depending on data sources and sampling.
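A toy version of user-agent-based classification can be sketched in Python. The token list and the labels below are illustrative only, not any vendor's actual ruleset, which is exactly why such estimates diverge between studies:

```python
import re

# Illustrative substrings commonly seen in automated clients;
# real detection systems combine many more signals than this.
BOT_TOKENS = re.compile(
    r"bot|crawler|spider|curl|python-requests|gptbot|claudebot",
    re.IGNORECASE,
)

def classify_user_agent(ua: str) -> str:
    """Heuristic only: label a request 'automated' if its user-agent
    matches a known bot token (or is missing), else 'likely-human'.
    Agents can spoof or omit user-agents, so results vary by method."""
    if not ua or BOT_TOKENS.search(ua):
        return "automated"
    return "likely-human"

traffic = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0",
    "GPTBot/1.0 (+https://openai.com/gptbot)",
    "python-requests/2.31.0",
    "",  # missing user-agent: treated as automated
]
labels = [classify_user_agent(ua) for ua in traffic]
print(labels)  # ['likely-human', 'automated', 'automated', 'automated']
```

Because the token list, sampling window, and handling of ambiguous agents all differ between measurement systems, two such classifiers can disagree substantially on the same traffic.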

Despite these limitations, the trend toward increased automation is widely recognized. AI systems are not only generating content but also interacting with digital services, performing searches, and executing tasks autonomously.

The shift has implications for cybersecurity, digital advertising, and online platforms, which must distinguish between human and automated activity. It also raises questions about how the internet’s infrastructure and services will adapt to a landscape where machines are primary participants.

The findings highlight a broader transformation driven by AI adoption. As automated systems become more capable and widespread, they are reshaping the fundamental dynamics of online interaction, moving the internet away from its original human-centric model toward a more machine-driven ecosystem.

AI & Machine Learning, News

ByteDance Brings Prompt-Based Video Creation to CapCut with Seedance 2.0

ByteDance is rolling out its Seedance 2.0 AI video model in CapCut, enabling prompt-based video creation as competition in generative video intensifies.

By Samantha Reed, edited by Maria Konash
ByteDance launches Seedance 2.0 in CapCut, bringing prompt-based AI video creation amid growing competition. Image: ByteDance

ByteDance has begun rolling out its new generative AI model, Dreamina Seedance 2.0, within its video editing platform CapCut, expanding its push into AI-powered content creation.

The model allows users to generate and edit videos using text prompts, images, or reference clips. It can also synchronize audio and video elements, enabling creators to produce short-form content with minimal manual input.

The rollout will initially be limited to select markets, including Brazil, Indonesia, Malaysia, Mexico, the Philippines, Thailand, and Vietnam. ByteDance said additional regions will be added over time, though availability remains restricted as the company addresses concerns related to intellectual property.

AI Video Creation Expands in CapCut

Seedance 2.0 is designed to support a range of creative workflows. Users can generate videos from simple text descriptions or refine existing footage with AI-assisted editing tools. The model is capable of producing realistic textures, motion, and lighting, addressing challenges that have historically limited AI-generated video quality.

The system supports clips of up to 15 seconds across multiple aspect ratios and is integrated into CapCut’s editing features, including AI Video and Video Studio tools. It will also be available through ByteDance’s Dreamina platform and its marketing tool Pippit.

ByteDance said the model can be used for various content types, including tutorials, product demonstrations, and action-based videos. It also enables creators to prototype ideas before filming, reducing production time and cost.

Safety Measures and Industry Context

The launch comes amid heightened scrutiny of generative video technologies. ByteDance has introduced safeguards to limit misuse, including restrictions on generating content featuring real faces and controls to prevent unauthorized use of copyrighted material.

Content generated by the model will include invisible watermarks to help identify AI-produced media and support enforcement actions if necessary.

The phased rollout reflects ongoing efforts to address legal and regulatory concerns, particularly from the entertainment industry, which has raised issues about copyright infringement and unauthorized use of intellectual property.

ByteDance’s move comes as competition in the AI video space evolves. While some companies are scaling back investments due to high costs and legal risks, others continue to advance the technology and integrate it into consumer platforms.

By embedding Seedance 2.0 into CapCut, ByteDance is leveraging its large user base to accelerate adoption of AI video tools. The strategy highlights a broader trend of integrating generative AI directly into existing creative applications, making advanced capabilities more accessible to everyday users.

As the rollout expands, the company said it will continue working with industry experts and creative communities to refine the model and address emerging challenges.


Mistral Launches Voxtral: An Open-Source Voice AI for Real-Time Use

Mistral has released Voxtral TTS, an open-source speech model designed for real-time voice agents and enterprise use cases, intensifying competition in voice AI.

By Daniel Mercer, edited by Maria Konash

French artificial intelligence company Mistral has introduced Voxtral TTS, a new open-source text-to-speech model aimed at powering voice assistants and enterprise applications such as customer support and sales automation.

The release marks Mistral’s expansion into the voice AI segment, placing it in direct competition with providers including ElevenLabs, Deepgram, and OpenAI. The company said the model is designed to deliver high-quality speech generation while remaining lightweight enough to run on edge devices.

Voxtral TTS supports nine languages: English, French, German, Spanish, Dutch, Portuguese, Italian, Hindi, and Arabic. The model is built on Mistral’s Ministral 3B architecture and is optimized for real-time performance.

Real-Time Voice and Customization

A key feature of Voxtral TTS is its ability to generate custom voices using short audio samples. According to Mistral, the system can adapt to a speaker’s voice with less than five seconds of input, capturing nuances such as accents, intonation, and speech patterns.

The model is also capable of switching between languages while preserving the same voice characteristics, making it suitable for applications such as dubbing and real-time translation.

Performance metrics indicate low latency. The model can begin generating audio within 90 milliseconds for a standard input and can produce speech faster than real time, enabling interactive use cases such as conversational agents.
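“Faster than real time” means the real-time factor (generation time divided by the duration of the audio produced) is below 1. A quick illustration with hypothetical numbers, not Mistral’s published benchmarks:

```python
def real_time_factor(generation_seconds: float, audio_seconds: float) -> float:
    """RTF < 1 means audio is synthesized faster than it plays back,
    which is the requirement for interactive, streaming voice agents."""
    return generation_seconds / audio_seconds

# Hypothetical run: 10 seconds of speech synthesized in 2.5 seconds.
rtf = real_time_factor(2.5, 10.0)
print(rtf)        # 0.25
print(rtf < 1.0)  # True: fast enough for conversational use
```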

Mistral said the model is designed to sound natural rather than synthetic, addressing a common limitation in earlier text-to-speech systems.

Expanding Enterprise AI Offerings

The launch of Voxtral TTS follows Mistral’s earlier release of transcription models, signaling a broader strategy to build a comprehensive suite of voice and multimodal AI tools.

The company aims to provide end-to-end systems capable of handling multiple input types, including text, audio, and images, and generating outputs across these modalities. This aligns with the growing demand for AI agents that can operate across communication channels in real time.

Mistral’s open-source approach is a central part of its positioning. By allowing enterprises to customize and deploy models on their own infrastructure, the company aims to differentiate itself from proprietary solutions that may limit flexibility.

As businesses increasingly adopt voice interfaces for customer engagement and automation, competition in the speech AI market is intensifying. Mistral’s entry with a lightweight, customizable model reflects a broader trend toward accessible and scalable AI tools designed for real-world deployment.


TurboQuant: Google’s New Way to Run AI Faster with Less Memory

Google has introduced TurboQuant, a new compression algorithm that reduces memory usage in AI systems while maintaining accuracy, improving performance in large models and search.

By Daniel Mercer, edited by Maria Konash
Google unveils TurboQuant, cutting memory use and boosting LLM speed without accuracy loss. Image: Google

Google has introduced TurboQuant, a new set of quantization algorithms designed to significantly improve the efficiency of artificial intelligence systems by reducing memory usage without sacrificing performance.

The announcement highlights a growing focus on optimizing the infrastructure behind large language models and vector search engines. These systems rely on high-dimensional vectors to represent complex data such as language, images, and user intent. While powerful, these vectors consume large amounts of memory, particularly in the key-value (KV) cache used during inference.

TurboQuant addresses this bottleneck by compressing vector data more effectively than traditional methods. Existing approaches often introduce additional memory overhead through stored quantization parameters. Google’s method minimizes this overhead, enabling higher compression rates with minimal loss in accuracy.

New Algorithms for Scalable AI

TurboQuant builds on two supporting techniques: PolarQuant and Quantized Johnson-Lindenstrauss (QJL).

PolarQuant restructures vector data into polar coordinates, simplifying its geometry and enabling more efficient compression. This reduces computational overhead and eliminates the need for certain normalization steps.
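The polar-coordinate idea can be pictured on pairs of values: each pair becomes a magnitude and an angle, and angles always fall in a fixed range, so they quantize uniformly without per-vector normalization. A toy sketch of that general idea, not the published algorithm:

```python
import math

def to_polar(pair):
    """Convert an (x, y) pair to (magnitude, angle)."""
    x, y = pair
    return math.hypot(x, y), math.atan2(y, x)

def quantize_angle(theta: float, bits: int = 4) -> int:
    """Angles always lie in [-pi, pi], so they map to one of
    2**bits levels with a fixed step and no per-vector scale."""
    levels = 2 ** bits
    step = 2 * math.pi / levels
    return round((theta + math.pi) / step) % levels

r, theta = to_polar((1.0, 1.0))   # magnitude sqrt(2), angle pi/4
code = quantize_angle(theta)       # integer code in 0..15
print(r, theta, code)
```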

QJL applies a mathematical transformation that reduces data to a single-bit representation while preserving the relationships between vectors. It acts as a lightweight correction layer, improving accuracy after compression.
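The underlying one-bit technique can be illustrated with a sign-of-random-projection sketch: project a vector with a random Gaussian matrix and keep only the signs, and the fraction of matching sign bits between two sketches approximately encodes the angle between the original vectors. This is a generic illustration of the idea, not Google’s exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)

dim, m = 64, 4096                  # original dimension, sketch length
proj = rng.normal(size=(m, dim))   # random Gaussian projection matrix

def one_bit_sketch(x):
    """Keep only the signs of the random projection: 1 bit per output."""
    return np.sign(proj @ x)

x = rng.normal(size=dim)
y = x + 0.1 * rng.normal(size=dim)   # a nearby vector

sx, sy = one_bit_sketch(x), one_bit_sketch(y)

# In expectation, the fraction of agreeing sign bits is 1 - angle(x, y) / pi,
# so the angle (hence cosine similarity) survives the 1-bit compression.
agreement = np.mean(sx == sy)
angle_est = np.pi * (1.0 - agreement)
cos_true = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
print(round(float(cos_true), 2), round(float(np.cos(angle_est)), 2))
```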

Together, these methods allow TurboQuant to compress KV cache data to as little as three bits per value. The approach does not require retraining or fine-tuning models, making it easier to deploy across existing systems.
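Plain uniform quantization gives a feel for what “three bits per value” means: each number maps to one of eight levels spanning its vector’s range, and moving from 16-bit floats to 3-bit codes alone accounts for a severalfold memory reduction. TurboQuant’s actual scheme is more sophisticated; this is a minimal sketch:

```python
import numpy as np

def quantize_3bit(v: np.ndarray):
    """Map each value to one of 2**3 = 8 levels spanning the vector's range."""
    lo, hi = float(v.min()), float(v.max())
    scale = (hi - lo) / 7                               # 8 levels -> 7 steps
    codes = np.round((v - lo) / scale).astype(np.uint8)  # codes in 0..7
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Reconstruct approximate values from the 3-bit codes."""
    return lo + codes * scale

rng = np.random.default_rng(1)
v = rng.normal(size=256).astype(np.float32)
codes, lo, scale = quantize_3bit(v)
v_hat = dequantize(codes, lo, scale)

# Reconstruction error is bounded by half a quantization step.
print(bool(np.abs(v - v_hat).max() <= scale / 2 + 1e-4))

# Memory: 16 bits per value down to 3 bits per value is roughly a
# 5.3x reduction, in line with the reported "up to sixfold" figure.
print(16 / 3)
```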

Performance Gains and Use Cases

In benchmark testing across tasks such as question answering, summarization, and code generation, TurboQuant maintained performance while significantly reducing memory usage.

Google reported up to a sixfold reduction in KV cache size and speed improvements of up to eight times in attention computation on modern GPU hardware. The method also demonstrated strong performance in vector search, achieving high recall accuracy compared to existing techniques.

These improvements are particularly relevant for large-scale AI deployments, where memory constraints and compute costs are major limiting factors.

Implications for AI Infrastructure

The release underscores the importance of foundational optimizations as AI systems scale. Efficient compression techniques can lower hardware requirements, reduce energy consumption, and improve response times.

TurboQuant is expected to play a role in applications such as semantic search, recommendation systems, and real-time AI services. It may also support large-scale platforms that rely on fast vector retrieval.

Google plans to present the research at ICLR 2026, with related work including PolarQuant and QJL scheduled for academic conferences. The methods are supported by theoretical analysis, suggesting they approach optimal efficiency limits.

As AI adoption accelerates, innovations in core infrastructure such as compression are becoming increasingly critical to sustaining performance and scalability across systems.

Zuckerberg, Huang, and Ellison Join Trump’s AI Advisory Council

President Donald Trump has appointed top tech executives, including leaders from Meta, Nvidia, and Oracle, to a council shaping U.S. AI policy and strategy.

By Maria Konash
Trump taps Zuckerberg, Huang, and Ellison for AI council, deepening Big Tech ties. Image: David Everett Strickler / Unsplash

U.S. President Donald Trump has appointed a group of leading technology executives to a key advisory council that will help shape national policy on artificial intelligence and related technologies.

The appointments include Meta CEO Mark Zuckerberg, Nvidia CEO Jensen Huang, and Oracle Executive Chairman Larry Ellison, alongside other prominent figures such as Google co-founder Sergey Brin and AMD CEO Lisa Su. The group forms part of the President’s Council of Advisors on Science and Technology, known as PCAST.

The council is expected to play a central role in advising the administration on AI development, regulation, and global competitiveness. The White House said the panel could expand to as many as 24 members in the coming months.

Aligning Policy With Industry

The formation of the council reflects a broader effort by the administration to align closely with major technology companies as the United States seeks to maintain leadership in artificial intelligence.

Trump has positioned AI as a strategic priority during his second term, framing it as a critical area of competition with China. Shortly after taking office, he directed federal agencies to develop an AI Action Plan aimed at reducing regulatory barriers and accelerating innovation in the private sector.

The inclusion of senior executives from leading AI and semiconductor companies highlights the government’s reliance on industry expertise to guide policy decisions. Nvidia, for example, plays a central role in supplying the hardware that underpins AI systems, while companies like Meta and Google are advancing large-scale AI models and applications.

The council will be co-chaired by White House AI and crypto adviser David Sacks and technology policy official Michael Kratsios. Additional members from both industry and research sectors are expected to be announced.

Strategic Focus on AI Investment

The appointments come amid a surge in investment in AI infrastructure and development across the United States. Technology companies have committed significant capital to data centers, chips, and software systems as demand for AI capabilities grows.

The council’s work is likely to influence how the U.S. balances innovation with regulatory oversight, particularly in areas such as national security, economic competitiveness, and technological standards.

In addition to technology leaders, the council includes representatives from emerging sectors such as fusion energy, signaling a broader focus on advanced technologies that could shape future economic growth.

The move underscores the increasing importance of collaboration between government and industry in shaping the direction of AI development. As competition intensifies globally, policymakers are seeking to leverage private-sector expertise to maintain a technological edge.

The creation of the advisory council marks a step toward more coordinated national strategy in artificial intelligence, with input from some of the most influential figures in the technology sector.

AI & Machine Learning, News, Regulation & Policy

Sora Shutdown: OpenAI Walks Away from $1B Disney Partnership

OpenAI will discontinue its Sora video app, ending a major partnership with Disney and signaling a retreat from generative video amid legal and cost pressures.

By Samantha Reed, edited by Maria Konash
OpenAI shuts down Sora, ending Disney deal, and pivots away from generative video. Image: PAN XIAOZHEN / Unsplash

OpenAI has announced it will discontinue Sora, its generative AI video application, ending one of the company’s most high-profile consumer products and halting a major partnership with Disney.

The company confirmed the decision in a statement, thanking users and creators while promising to share further details on timelines for shutting down the app and its API. OpenAI did not provide a reason for the move.

Sora, launched publicly in late 2024 with a standalone app released in 2025, enabled users to generate realistic videos from text prompts. A second-generation version introduced improved physics and audio capabilities, attracting widespread attention but also scrutiny from the entertainment industry.

Partnership Collapse and Legal Pressures

The shutdown effectively ends OpenAI’s agreement with Disney, which had planned to integrate licensed characters from franchises such as Marvel, Pixar, and Star Wars into Sora-generated content. The deal also included a potential $1 billion investment by Disney in OpenAI.

Disney confirmed that it will no longer proceed with the partnership, stating it respects OpenAI’s decision to exit the video generation space and shift priorities. The agreement had aimed to create “fan-inspired” videos and distribute curated content through Disney+.

Sora’s development had already drawn concern from media companies and industry groups. Critics pointed to the model’s opt-out system for copyrighted material, which required rights holders to actively request exclusion from training data.

Organizations representing content creators, including Japanese and U.S. studios, raised objections and issued legal challenges against AI companies over alleged unauthorized use of intellectual property. These concerns extended beyond OpenAI to other platforms offering generative video tools.

Strategic Shift Away From Video AI

The closure of Sora reflects a broader strategic shift within OpenAI as it prioritizes core AI capabilities such as text, coding, and reasoning systems. These areas are seen as more scalable and commercially viable, particularly in enterprise markets.

Generative video remains one of the most resource-intensive applications in AI, requiring significant computational power to produce high-quality outputs. Industry estimates have suggested that operating such systems can carry substantial ongoing costs, adding pressure to justify long-term investment.

At the same time, competition in the AI sector is intensifying. Rival companies have focused on specialized domains, particularly text-based models, where demand and monetization are more established.

The shutdown also means that ChatGPT will no longer support video generation from text prompts, further signaling OpenAI’s retreat from this category.

Despite OpenAI’s exit, generative video remains active across other platforms, though it continues to face legal scrutiny from major studios. Companies including Google, Meta, and ByteDance have all encountered challenges related to copyright enforcement and content ownership.
