Adobe Updates Firefly with Prompt-Based Video Editing

Adobe’s Firefly app adds a new video editor supporting precise prompt-based edits, along with integration of third-party models including Black Forest Labs’ FLUX.2 and Topaz Astra for enhanced video and image generation.

By Daniel Mercer. Edited by Maria Konash.
Adobe launches prompt-based video editing in Firefly. Photo: Adobe

Adobe updated its AI-powered video-generation app, Firefly, introducing a new video editor that allows users to make precise edits via text prompts. Previously, Firefly required recreating an entire clip for any change. The new editor supports prompt-based adjustments to video elements, colors, and camera angles, alongside a timeline view for frame-by-frame modifications, sound adjustments, and other fine-grained changes.

The video editor, first previewed in a private beta in October, is now available to all users. Leveraging Runway’s Aleph model, users can issue specific instructions such as “Change the sky to overcast” or “Zoom in slightly on the main subject.” Adobe’s Firefly Video model also enables camera motion replication using a start frame and reference video.

Expanded Third-Party Model Integration

Adobe is expanding Firefly’s capabilities with new third-party models. Black Forest Labs’ FLUX.2 image generation model is now available across Firefly platforms, with Adobe Express access starting in January. Topaz Labs’ Astra model allows users to upscale videos to 1080p or 4K. Collaborative boards have also been introduced to facilitate shared creative workflows.

The updates aim to increase engagement as competitors release new image and video generation models. Adobe announced that subscribers of Firefly Pro, Firefly Premium, and credit-based plans will have unlimited access to all image models and the Firefly Video Model until January 15.

Firefly’s Continued Evolution

This year, Adobe has significantly expanded the Firefly ecosystem. In February, it launched subscription plans for image and video generation, followed by a new web app, mobile apps, and support for additional third-party models. The latest updates further position Firefly as a versatile platform for AI-driven video creation and editing, catering to professional and casual creators alike.

Recent initiatives, such as integrating Photoshop, Adobe Express, and Acrobat directly into ChatGPT, highlight Adobe’s broader strategy to make its AI tools accessible across multiple platforms and workflows.

AI & Machine Learning, Consumer Tech, News

Xiaomi Releases MiMo V2.5 Open Models for OpenClaw

Xiaomi has launched two open-source AI models optimized for agent-based tasks with high efficiency and low cost. The release targets growing demand for scalable enterprise AI.

By Daniel Mercer. Edited by Maria Konash.
Xiaomi MiMo V2.5 delivers efficient open-source AI for agents with low costs and million-token context. Image: Xiaomi

Xiaomi has released two new open-source large language models, MiMo-V2.5 and MiMo-V2.5-Pro, designed for agent-based AI systems such as OpenClaw. The models are distributed under the permissive MIT license, allowing developers and enterprises to use, modify, and deploy them commercially with few restrictions beyond attribution. MiMo-V2.5 features 310 billion parameters with 15 billion active during inference, while the Pro version scales to 1.02 trillion parameters with 42 billion active. Both models support context windows of up to one million tokens, targeting long-running and complex tasks.

The release focuses on efficiency in agent workflows, where AI systems perform multi-step operations such as coding, automation, and task orchestration. According to Xiaomi’s benchmarks, MiMo-V2.5-Pro achieved a 63.8 percent success rate on ClawEval while using around 70,000 tokens per task cycle. This represents significantly lower token consumption compared with competing models from Anthropic, Google, and OpenAI. Lower token usage translates directly into reduced operating costs, a key factor as AI pricing shifts toward usage-based billing.

Pricing for the models reflects this positioning. The base MiMo-V2.5 starts at approximately $0.40 per million input tokens and $2.00 per million output tokens, while the Pro version is priced at $1.00 and $3.00 respectively for standard context sizes. Xiaomi also offers extended context support up to one million tokens without imposing significant pricing multipliers, contrasting with industry trends where longer context windows often incur higher costs. The company has additionally introduced subscription-based token plans and temporary incentives such as free cache usage to encourage adoption.
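At those rates, the efficiency claim can be sanity-checked with simple arithmetic. The sketch below estimates the cost of one roughly 70,000-token task cycle at the Pro tier's listed prices ($1.00 per million input tokens, $3.00 per million output tokens). The input/output split is a hypothetical assumption for illustration, since the benchmark figures do not break it down.

```python
# Rough per-task cost estimate for MiMo-V2.5-Pro at its listed standard-context
# prices. The 20% output fraction below is a hypothetical assumption.

INPUT_PRICE = 1.00 / 1_000_000   # USD per input token (Pro tier)
OUTPUT_PRICE = 3.00 / 1_000_000  # USD per output token (Pro tier)

def task_cost(total_tokens: int, output_fraction: float) -> float:
    """Estimate the cost of one agent task cycle in USD."""
    output_tokens = total_tokens * output_fraction
    input_tokens = total_tokens - output_tokens
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# Assuming 20% of the ~70,000 tokens per task cycle are model output:
print(f"${task_cost(70_000, 0.2):.4f} per task cycle")  # → $0.0980 per task cycle
```

Even doubling the assumed output share keeps the per-cycle cost in the cents range, which is the economics the article describes.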

Efficiency as a Differentiator

The MiMo models highlight a shift toward optimizing cost-performance in AI systems, particularly for agentic use cases. By using a mixture-of-experts architecture, the models activate only a subset of parameters during each task, reducing computational overhead while maintaining capability. This approach is increasingly important as enterprises deploy AI agents that operate continuously and consume large volumes of tokens.
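The sparse-activation idea can be shown with a toy mixture-of-experts layer. This is a generic sketch of top-k expert routing, not Xiaomi's actual architecture; the expert count, dimensions, and gating scheme are illustrative only.

```python
import math
import random

# Toy mixture-of-experts (MoE) routing: a gate scores every expert, but only
# the top-k experts actually run, so per-token compute tracks *active*
# parameters rather than total parameters. Illustrative, not MiMo's design.

random.seed(0)
NUM_EXPERTS, TOP_K, DIM = 8, 2, 4

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

experts = [rand_matrix(DIM, DIM) for _ in range(NUM_EXPERTS)]
gate = rand_matrix(DIM, NUM_EXPERTS)

def matvec(x, m):
    """Row vector x times matrix m (m stored as a list of rows)."""
    return [sum(x[i] * m[i][j] for i in range(len(x))) for j in range(len(m[0]))]

def moe_forward(x):
    scores = matvec(x, gate)                                  # one score per expert
    top = sorted(range(NUM_EXPERTS), key=scores.__getitem__)[-TOP_K:]
    weights = [math.exp(scores[i]) for i in top]
    total = sum(weights)
    weights = [w / total for w in weights]                    # softmax over chosen experts
    out = [0.0] * DIM
    # Only TOP_K of the NUM_EXPERTS expert networks are evaluated here;
    # the remaining experts (and their parameters) are skipped entirely.
    for w, i in zip(weights, top):
        for j, v in enumerate(matvec(x, experts[i])):
            out[j] += w * v
    return out

y = moe_forward([random.gauss(0, 1) for _ in range(DIM)])
print(len(y), "output dims; ran", TOP_K, "of", NUM_EXPERTS, "experts")
```

At MiMo-V2.5's scale the same principle means roughly 15 of 310 billion parameters (about 5 percent) are active per inference step.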

For developers, the combination of open licensing and lower costs provides an alternative to proprietary models with usage fees and restrictions. Organizations can run the models locally or in private cloud environments, offering greater control over data and expenses. This flexibility is particularly relevant for applications involving long-running processes or sensitive information.

Open Models Gain Ground

Xiaomi’s release reflects broader momentum behind open-source AI as competition intensifies. The gap between open and closed models has narrowed, with open systems increasingly matching proprietary offerings in performance while offering more flexibility. The MIT license further positions MiMo as infrastructure that can be integrated into a wide range of applications without legal or commercial barriers.

The move also aligns with changes in AI economics, as providers shift from subscription models to metered usage. In this environment, efficient models that reduce token consumption can offer a significant advantage. Xiaomi’s strategy suggests that cost and openness may become as important as raw performance in determining which AI platforms gain adoption in enterprise and developer ecosystems.

AI & Machine Learning, News

Meta Eyes Space Solar Energy to Keep Data Centers Running Overnight

Meta has signed a deal with Overview Energy to beam solar power from space to Earth. The project aims to supply constant energy for AI data centers, even at night.

By Olivia Grant. Edited by Maria Konash.
Meta signs space-based solar deal to power AI data centers via continuous infrared transmission. Image: Meta

Meta has signed an agreement with startup Overview Energy to secure up to 1 gigawatt of solar power generated in space and transmitted to Earth. The deal is part of Meta’s broader effort to meet rising energy demands from artificial intelligence infrastructure. The approach involves satellites collecting solar energy, converting it into infrared light, and beaming it to ground-based solar farms. Unlike traditional solar systems, this method could provide power continuously, including at night.

Overview Energy’s system is designed to integrate with existing solar infrastructure, avoiding the need for entirely new power grids. The company plans to deploy a fleet of satellites that transmit energy to large-scale solar farms, which then convert the infrared light into electricity. According to the company, the beam is designed to be safe for human exposure and avoids the regulatory challenges associated with high-power lasers or microwave transmission. Meta has not disclosed the financial terms of the agreement but confirmed it has reserved capacity under the arrangement.

The project remains in early stages, with key milestones ahead. Overview has already demonstrated energy transmission from an aircraft and plans its first satellite test in January 2028. Full-scale deployment could begin around 2030, with a long-term goal of operating up to 1,000 satellites in geostationary orbit. Each satellite is expected to deliver power for more than a decade, supporting continuous energy supply across regions as the Earth rotates.

Powering AI Infrastructure

The agreement highlights the growing energy demands of AI systems and data centers. Meta’s operations consumed more than 18,000 gigawatt-hours of electricity in 2024, and demand is expected to rise as AI workloads expand. Traditional solar power requires storage or backup generation to operate overnight, adding cost and complexity. By enabling round-the-clock solar generation, space-based energy could improve efficiency and reduce reliance on fossil fuels.

For technology companies, securing stable and scalable energy sources has become a strategic priority. Large AI models require constant compute availability, making intermittent energy sources less practical without significant storage investment. If successful, Overview’s approach could reshape how renewable energy supports data-intensive industries.

Emerging Energy Technologies

Space-based solar power has long been explored but has faced technical and economic challenges. Advances in satellite design, energy transmission, and cost reduction are now bringing the concept closer to practical deployment. Overview’s strategy focuses on using lower-intensity infrared beams and existing solar farms to simplify implementation.

The Meta partnership signals increasing interest from major technology firms in unconventional energy solutions. As competition in AI intensifies, companies are investing not only in computing infrastructure but also in the energy systems required to sustain it. The success of projects like this will depend on scaling the technology, meeting regulatory requirements, and proving long-term reliability in real-world conditions.

AI & Machine Learning, Cloud & Infrastructure, News

Google Signs Pentagon Deal for Classified AI Use

Google's Pentagon AI deal enables classified use of its models for government operations, highlighting growing military adoption and ongoing debate over AI safeguards.

By Samantha Reed. Edited by Maria Konash.
Google joins OpenAI and xAI in supplying AI models for classified U.S. military use with fewer restrictions. Image: Wesley Tingey / Unsplash

Google has signed an agreement with the U.S. Department of Defense allowing its artificial intelligence models to be used for classified government work, according to reports. The deal permits the Pentagon to deploy Google’s AI for “any lawful government purpose,” placing the company alongside OpenAI and xAI as key suppliers of AI capabilities for sensitive operations. The agreement reflects the U.S. government’s push to integrate advanced AI into defense systems, including areas such as mission planning and analysis.

The contract builds on a broader effort by the Pentagon to secure partnerships with leading AI developers. In 2025, the Department of Defense signed agreements worth up to $200 million each with companies including Anthropic, OpenAI, and Google. These deals aim to bring advanced AI tools into classified environments, where strict controls typically limit external technologies. Google’s agreement reportedly includes provisions allowing the government to request adjustments to safety settings and filters applied to its AI systems.

While the contract states that AI should not be used for domestic mass surveillance or fully autonomous weapons without human oversight, it also clarifies that Google does not have the authority to veto lawful government decisions. This distinction highlights the balance between corporate AI principles and government operational control. The deal was reported shortly after internal criticism from Google employees, with hundreds signing a letter urging leadership to avoid such military partnerships.

Ethical Tradeoffs

The agreement underscores a growing tension between commercial AI development and ethical commitments. Technology companies have previously outlined principles limiting the use of AI in surveillance and autonomous weapons. However, government contracts often require broader flexibility, particularly in classified contexts. Google’s reported terms suggest a willingness to support defense applications while maintaining general statements on oversight.

For policymakers and the public, the issue centers on how AI systems are governed once deployed in military settings. Even with stated safeguards, enforcement depends on operational practices rather than contractual language alone. The lack of veto power for the company raises questions about accountability and control over how the technology is ultimately used.

Competitive Positioning

The deal places Google firmly within a competitive group of AI providers supplying the U.S. military. Companies such as OpenAI and xAI have also secured similar agreements, reflecting the strategic importance of AI in national defense. At the same time, Anthropic’s position has been more fluid after earlier restrictions limited its role in defense-related work.

Recent comments from Donald Trump suggest that dynamic may be changing. Trump said it is “possible” that Anthropic could reach a new agreement with the Pentagon following what he described as “very good talks,” signaling a potential reversal after months of conflict. Earlier disputes centered on Anthropic’s insistence on limits around autonomous weapons and domestic surveillance, which led to restrictions on its technology. Any renewed deal would likely include safeguards while restoring its access to defense contracts.

AI & Machine Learning, News

OpenAI Eyes Smartphone Where AI Agents Replace Apps

OpenAI is reportedly exploring a smartphone built around AI agents instead of traditional apps. The device could reshape how users interact with software and services.

By Samantha Reed. Edited by Maria Konash.
OpenAI eyes AI-first smartphone with agents replacing apps, blending on-device and cloud models. Image: Levart_Photographer / Unsplash

OpenAI is reportedly exploring the development of a smartphone designed around AI agents rather than traditional apps, according to industry analyst Ming-Chi Kuo. The project could involve partners such as MediaTek, Qualcomm, and Luxshare. The concept centers on replacing app-based interactions with AI systems capable of understanding user context and executing tasks autonomously. If developed, the device would mark a significant expansion of OpenAI’s ambitions beyond software into consumer hardware.

The proposed smartphone would rely on AI agents to manage functions typically handled by apps, such as messaging, scheduling, and search. This approach addresses limitations imposed by existing mobile ecosystems controlled by Apple and Google, which regulate app access and system permissions. By building its own hardware and software stack, OpenAI could integrate AI more deeply into the device, enabling continuous context awareness and more seamless task execution. The system is expected to combine on-device models for speed and privacy with cloud-based models for more complex processing.

The timeline for the project remains early. According to Kuo, key specifications and supplier decisions could be finalized by late 2026 or early 2027, with mass production potentially beginning in 2028. The effort follows broader reports that OpenAI is preparing to launch its first hardware product as early as the second half of 2026, possibly starting with smaller devices such as AI-enabled earbuds.

Rethinking the App Model

The concept reflects a growing view within the technology industry that traditional apps may become less central as AI systems improve. Instead of navigating multiple interfaces, users could rely on a single intelligent agent capable of coordinating tasks across services. This shift could simplify user experiences while reducing dependence on app stores and platform gatekeepers.

For developers and businesses, such a model would represent a major change in how software is distributed and monetized. Services may need to integrate directly with AI agents rather than compete for visibility in app marketplaces. The transition could also affect how user data is accessed and managed, as AI systems require continuous context to function effectively.

Hardware Race in AI

OpenAI’s reported plans come as competition intensifies to define the next generation of AI-native devices. Companies across the industry are exploring hardware that integrates AI more deeply into everyday use. For OpenAI, building its own device could provide greater control over user experience and data, while expanding its reach beyond existing platforms.

The strategy also aligns with broader trends toward vertical integration, where companies design both hardware and software to optimize performance. By combining custom chips, AI models, and cloud infrastructure, OpenAI could create a tightly integrated system tailored for AI-driven interactions. While still speculative, the project signals how AI companies are increasingly looking beyond applications to reshape the underlying devices themselves.

AI & Machine Learning, Consumer Tech, News

New Malware Campaign Targets Developers via Fake AI Setup Guides

Attackers are using fake install guides for popular developer tools to trick users into running malicious commands. The campaign exploits trusted workflows like copy-paste terminal installs.

By Marcus Lee. Edited by Maria Konash.
InstallFix attack uses fake install guides and malvertising to spread infostealer malware via copy-paste commands. Image: Growtika / Unsplash

Security researchers from Push Security have identified a new attack technique called InstallFix, where attackers distribute fake installation guides for developer tools through malicious search ads. The campaign involves cloning legitimate websites and replacing install commands with malicious ones that deliver malware. In recent cases, attackers targeted tools like Claude Code from Anthropic. Victims searching for installation instructions are directed to near-identical copies of official pages, often via sponsored results on search engines.

The attack exploits a common developer practice: copying and running one-line install commands that pipe a script fetched with curl directly into a shell (the “curl | bash” pattern). While widely used by tools like Homebrew and other package managers, this method relies entirely on trusting the source domain, since the script executes before anyone can inspect it. In the InstallFix campaign, attackers modify the command so it downloads a malicious script instead of the legitimate installer. Once executed, the malware is installed without obvious warning, as the process appears identical to a normal setup.
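A common mitigation is to separate download from execution. The sketch below is a hypothetical hardening step, not anything Push Security prescribes: it fetches an install script and refuses it unless its SHA-256 matches a checksum the vendor publishes through a separate channel. The URL and hash are placeholders.

```python
import hashlib
import urllib.request

# Safer alternative to piping curl output straight into a shell: fetch the
# installer, verify its SHA-256 against a checksum the vendor publishes
# out-of-band, and only then run it. URL and hash below are placeholders.

INSTALLER_URL = "https://example.com/install.sh"  # hypothetical installer URL
EXPECTED_SHA256 = "0" * 64                        # hypothetical published hash

def checksum_ok(data: bytes, expected_hex: str) -> bool:
    """Return True if data hashes to the expected SHA-256 digest."""
    return hashlib.sha256(data).hexdigest() == expected_hex

def fetch_verified(url: str, expected_hex: str) -> bytes:
    """Download an install script; raise if the checksum does not match."""
    with urllib.request.urlopen(url) as resp:
        script = resp.read()
    if not checksum_ok(script, expected_hex):
        raise ValueError("checksum mismatch: refusing to run installer")
    return script  # only now safe to write to disk and execute
```

Checksum verification only helps if the expected hash comes from a channel the attacker does not control, which is exactly what a cloned install page defeats.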

Researchers found that the malicious payload in several cases matched the behavior of Amatera, a relatively new infostealer malware. This type of software is designed to extract sensitive data, including saved passwords, browser cookies, and system information. The attack chain typically involves multiple stages, using system processes to retrieve and execute additional code from remote servers. By leveraging legitimate infrastructure and obfuscation techniques, the malware can evade traditional security tools.

Deceptive Delivery Model

Unlike traditional phishing attacks, InstallFix does not rely on emails or fake alerts to lure victims. Instead, it targets users who are actively searching for legitimate software. Malicious pages are promoted through paid search ads, placing them above official results and increasing the likelihood of clicks. Because users initiate the interaction themselves, the attack bypasses many standard security filters.

The cloned pages are often indistinguishable from the original, with identical layouts, branding, and documentation. In some cases, users are even redirected to the real website after running the malicious command, reducing suspicion. This approach makes the attack particularly effective against both developers and less technical users adopting AI tools.

Expanding Attack Surface

The rise of AI tools and developer-friendly automation has expanded the potential victim pool for such attacks. As more users interact with command-line tools, including those without deep technical experience, risky practices like blindly executing install scripts become more common. Attackers are adapting by targeting popular and fast-growing tools, especially in the AI ecosystem.

The technique is part of a broader trend combining social engineering with infrastructure abuse. Attackers increasingly rely on legitimate hosting platforms and ad networks to distribute malicious content at scale. Security experts warn that defending against these threats requires changes in both user behavior and platform design, including better verification of install sources and stricter controls on ad distribution.

AI & Machine Learning, Cybersecurity & Privacy, News