Google Expands Search Live With Voice and Camera AI Globally

Google has expanded Search Live globally, enabling users to interact with AI using voice and camera through its Gemini-powered search experience.

By Daniel Mercer | Edited by Maria Konash
Google launches Search Live with Gemini, enabling real-time voice and camera search worldwide. Image: Google

Google has expanded its Search Live feature globally, bringing real-time, conversational search capabilities to users in more than 200 countries and territories.

The feature, powered by the company’s Gemini 3.1 Flash Live model, allows users to interact with search using voice and camera input. The rollout extends access to all regions where Google’s AI Mode is available, marking a significant step in the company’s effort to transform search into a more interactive experience.

Search Live enables users to ask questions verbally and receive spoken responses, with the ability to continue conversations through follow-up queries. The system is designed to support natural, back-and-forth interaction rather than traditional keyword-based searches.

Multimodal Search Experience

A key component of the update is multimodal interaction. Users can activate their device’s camera to provide visual context, allowing the system to analyze real-world objects and deliver relevant guidance.

For example, a user can point their camera at a physical object or task, such as assembling furniture, and receive step-by-step suggestions along with links to additional resources. The feature integrates with Google Lens, enabling seamless transitions between visual search and conversational AI.

The underlying Gemini 3.1 Flash Live model is designed to support real-time responses with low latency. It is also inherently multilingual, allowing users to interact with Search Live in their preferred language without switching settings.

Shift Toward Conversational Search

The global rollout reflects a broader shift in how users interact with search engines. Instead of typing queries and browsing results, users can now engage in continuous conversations that combine text, voice, and visual inputs.

Google said the feature is intended for scenarios where typing is impractical or inefficient, such as when users need immediate assistance or are interacting with physical environments.

The expansion also highlights increasing competition in AI-powered search, as companies race to integrate conversational interfaces into everyday tools. By embedding these capabilities directly into its core search product, Google is aiming to maintain its position in a rapidly evolving landscape.

Search Live is available through the Google app on Android and iOS devices. Users can activate the feature by selecting the Live option within the search interface.

The update comes alongside broader efforts to deepen personalization in Google’s AI ecosystem. The company is expanding Personal Intelligence across Search and Gemini to all U.S. users, enabling more tailored responses by connecting data from services such as Gmail and Photos, further reinforcing its shift toward context-aware, AI-driven experiences.


Anthropic Leak Reveals New Claude Mythos Model

A data leak at Anthropic exposed details of its upcoming Claude Mythos model, described as a major leap in AI capabilities, along with internal documents.

By Samantha Reed | Edited by Maria Konash
Anthropic leak reveals Claude Mythos, an advanced reasoning and cybersecurity AI model. Image: Anthropic

Anthropic has confirmed details of a forthcoming AI model after a security lapse exposed internal documents, revealing what the company describes as a significant advancement in its Claude family of systems.

The leak, caused by a configuration error in Anthropic’s content management system, made nearly 3,000 unpublished assets publicly accessible. The exposed data included draft blog posts, images, and internal PDFs. Security researchers identified the issue and alerted the company, which then restricted access.

Anthropic said the incident resulted from “human error” and described the materials as early drafts intended for future publication.

New Model Tier Above Opus

Among the leaked documents was information about a new model referred to as Claude Mythos, internally codenamed “Capybara.” The model is expected to introduce a new tier above Anthropic’s current lineup, which includes Opus, Sonnet, and Haiku.

According to the draft materials, the new system is designed to be more capable than the existing Opus models, particularly in areas such as coding, academic reasoning, and cybersecurity. Anthropic confirmed it is developing a next-generation general-purpose model and described it as a “step change” in capability.

The addition of a higher-tier model suggests Anthropic is continuing to scale its systems in response to growing competition in advanced AI, particularly in enterprise and technical domains.

Cybersecurity Concerns and Controlled Release

The leaked documents highlighted cybersecurity as a key area of focus for the new model. Anthropic reportedly considers its capabilities in this domain to be significantly ahead of existing systems, raising concerns about potential misuse.

To address these risks, the company plans to limit early access to organizations focused on cybersecurity defense. This approach is intended to allow institutions to strengthen protections before broader deployment.

Anthropic has previously taken steps to mitigate misuse of its models, including blocking attempts to use its tools for cybercrime. The enhanced capabilities described in the leaked materials point to a growing focus on both the offensive and defensive implications of AI systems.

Broader Implications and Internal Exposure

In addition to model details, the leak revealed plans for internal events, including an invite-only gathering for European business leaders. The exposure of such materials underscores the risks associated with managing sensitive information in rapidly evolving AI organizations.

The incident comes at a time when Anthropic is expanding its influence in the AI sector, with increased enterprise adoption and ongoing infrastructure investments. It also aligns with broader strategic developments, as the company is reportedly targeting an IPO as early as October while intensifying its enterprise push and scaling infrastructure to compete more directly with OpenAI.

While Anthropic has moved quickly to secure the exposed data, the leak provides an early look at its next-generation model strategy. It also illustrates how operational vulnerabilities can expose critical information in an industry where technological advances are closely watched.

SoftBank Secures $40B Loan to Expand OpenAI Investment

SoftBank has secured a $40 billion bridge loan to deepen its investment in OpenAI and accelerate its broader AI strategy.

By Samantha Reed | Edited by Maria Konash
SoftBank secures $40B loan to double down on OpenAI and AI infrastructure. Image: insung yoon / Unsplash

SoftBank Group has secured a $40 billion bridge loan to fund its growing investments in artificial intelligence, including a deeper commitment to OpenAI, as competition intensifies across the sector.

The Japanese investment firm said the unsecured loan will be used to support its AI strategy and general corporate purposes. The financing, which matures in March 2027, was arranged by a group of major lenders including JPMorgan Chase, Goldman Sachs, Mizuho Bank, Sumitomo Mitsui Banking Corporation, and MUFG Bank.

The move marks one of SoftBank’s largest financing efforts in recent years and highlights founder Masayoshi Son’s renewed focus on AI following a period of volatility in the company’s Vision Fund performance.

Expanding Partnership With OpenAI

SoftBank has been steadily increasing its exposure to OpenAI, the developer of ChatGPT, as generative AI adoption accelerates globally. The company previously committed $30 billion to OpenAI through its Vision Fund 2, positioning itself among the largest investors in the space.

The new financing is expected to further strengthen that relationship, as SoftBank seeks to capitalize on the rapid growth of AI-driven applications and infrastructure. OpenAI, backed by Microsoft, has emerged as a central player in the industry, attracting significant enterprise demand and investor interest.

SoftBank and OpenAI have also collaborated on large-scale initiatives, including the Stargate Project, which aims to invest up to $500 billion in AI infrastructure in the United States over four years. The project reflects the increasing importance of computing capacity and data centers in supporting advanced AI systems.

Strategic Shift Toward AI Infrastructure

The loan underscores SoftBank’s broader strategy to position itself at the center of the AI ecosystem, spanning both software and infrastructure investments. The company has signaled plans to deploy substantial capital into AI-related projects, including a previously announced $100 billion investment in U.S. technology and infrastructure.

This approach aligns with a wider industry trend, where companies are investing heavily in data centers, chips, and cloud platforms to support the growing computational demands of AI models.

SoftBank’s renewed focus on AI comes after years of mixed performance from its Vision Fund, which saw both significant gains and losses across technology investments. By concentrating on AI, the firm is betting on a sector widely viewed as a key driver of future economic growth.

The scale of the financing also reflects the capital-intensive nature of AI development. As companies race to build more powerful systems, access to funding and infrastructure is becoming a critical competitive factor.


Anthropic Wins Court Ruling Against Pentagon Ban

A U.S. federal judge has blocked the Pentagon’s attempt to ban Anthropic’s AI tools, allowing continued use of its systems during an ongoing legal dispute.

By Samantha Reed | Edited by Maria Konash
Anthropic wins court ruling to block Pentagon AI ban, keeping Claude in use. Image: Tingey Injury Law Firm / Unsplash

Anthropic has secured an early legal victory in its dispute with the U.S. government, after a federal judge blocked efforts by the Pentagon to halt the use of its artificial intelligence tools.

Judge Rita Lin ruled that directives issued by President Donald Trump and Defense Secretary Pete Hegseth, which sought to immediately suspend the use of Anthropic’s systems across government agencies, could not be enforced while the case proceeds.

In her decision, the judge wrote that the government’s actions appeared aimed at “crippling” the company and suppressing public debate over how its technology was being used by the military. She described the move as potentially constituting “classic First Amendment retaliation.”

Continued Use of AI in Government

The ruling allows Anthropic’s products, including its Claude AI models, to remain in use across federal agencies and by contractors working with the Department of Defense. The decision avoids an immediate disruption to systems that had become embedded in government workflows.

Anthropic had filed the lawsuit earlier this month after being designated a “supply chain risk” by the Pentagon, a classification that would have barred its technology from government use. The designation followed public criticism of the company by senior officials.

The case highlights the growing reliance of government agencies on AI tools for tasks such as data analysis, operational planning, and software development. Removing such systems would have required a complex and potentially lengthy transition to alternative providers.

Legal and Strategic Implications

The court’s decision underscores the legal complexities surrounding government intervention in the rapidly evolving AI sector. It also raises questions about how national security concerns intersect with constitutional protections and commercial competition.

Anthropic said it was pleased with the ruling but emphasized its intention to continue working with government partners to ensure safe and reliable AI deployment.

The dispute reflects broader tensions between policymakers and technology companies over the use of advanced AI systems in sensitive environments. As AI becomes more integrated into defense and national security operations, regulatory and legal frameworks are still evolving.

The outcome of the case could have wider implications for how the U.S. government evaluates and restricts technology providers, particularly in areas involving emerging technologies and strategic competition.

For now, Anthropic’s tools will remain operational within government systems, maintaining continuity for users while the legal process moves forward.


Anthropic Targets October IPO Amid Intensifying AI Competition

Anthropic is exploring an IPO as early as October, as both it and OpenAI accelerate enterprise strategies and infrastructure investments ahead of potential public listings.

By Samantha Reed | Edited by Maria Konash
Anthropic eyes IPO amid AI race, signaling strong enterprise growth and investor demand. Image: Anthropic

Anthropic is exploring a potential initial public offering as early as October, signaling a new phase in the artificial intelligence industry as leading companies prepare to tap public markets.

According to a Bloomberg report, the company has begun early discussions with major investment banks, including Goldman Sachs, JPMorgan Chase, and Morgan Stanley, about leading roles in the listing. While plans remain preliminary, the IPO could value Anthropic at more than $60 billion.

Anthropic was last valued at $380 billion following a $30 billion funding round completed in February 2026. The round included participation from global investors such as GIC, Coatue Management, MGX, D.E. Shaw, Dragoneer, Founders Fund, and ICONIQ.

Enterprise Focus and Infrastructure Expansion

Anthropic has positioned itself as a leading provider of enterprise-focused AI systems, with its Claude models widely adopted across businesses and government applications. Strategic partnerships with major technology companies have supported its rapid growth.

The company counts Google and Amazon among its largest investors, alongside Microsoft and Nvidia, which joined in earlier funding rounds. These relationships provide access to advanced computing infrastructure and cloud platforms critical for scaling AI models.

Anthropic has also committed to investing $50 billion in building its own data center infrastructure across the United States, including projects in Texas and New York. This move reflects a broader industry trend toward vertical integration, as companies seek greater control over computing resources.

Earlier this year, Anthropic faced regulatory challenges when the Pentagon flagged the company as a potential supply-chain risk. The issue was resolved after a court order blocked the proposed restrictions, allowing Anthropic to continue operating within government contracts.

Parallel IPO Plans at OpenAI

Anthropic’s IPO considerations come as rival OpenAI is also preparing for a potential public listing. The company has been increasing its focus on enterprise applications, which are seen as a key driver of long-term revenue.

OpenAI is refining its investment strategy, targeting approximately $600 billion in compute spending by 2030. The company has also projected revenue exceeding $280 billion by the end of the decade, reflecting expectations of continued growth in AI adoption.

Both companies are competing to secure enterprise customers and scale their infrastructure, with strategies that include partnerships, custom deployments, and large-scale capital investment.

The potential public listings of Anthropic and OpenAI highlight the maturation of the AI sector. As companies transition from private funding to public markets, investor scrutiny is expected to increase, particularly around profitability, cost management, and long-term growth.

ByteDance Brings Prompt-Based Video Creation to CapCut with Seedance 2.0

ByteDance is rolling out its Seedance 2.0 AI video model in CapCut, enabling prompt-based video creation as competition in generative video intensifies.

By Samantha Reed | Edited by Maria Konash
ByteDance launches Seedance 2.0 in CapCut, bringing prompt-based AI video creation amid growing competition. Image: ByteDance

ByteDance has begun rolling out its new generative AI model, Dreamina Seedance 2.0, within its video editing platform CapCut, expanding its push into AI-powered content creation.

The model allows users to generate and edit videos using text prompts, images, or reference clips. It can also synchronize audio and video elements, enabling creators to produce short-form content with minimal manual input.

The rollout will initially be limited to select markets, including Brazil, Indonesia, Malaysia, Mexico, the Philippines, Thailand, and Vietnam. ByteDance said additional regions will be added over time, though availability remains restricted as the company addresses concerns related to intellectual property.

AI Video Creation Expands in CapCut

Seedance 2.0 is designed to support a range of creative workflows. Users can generate videos from simple text descriptions or refine existing footage with AI-assisted editing tools. The model is capable of producing realistic textures, motion, and lighting, addressing challenges that have historically limited AI-generated video quality.

The system supports clips of up to 15 seconds across multiple aspect ratios and is integrated into CapCut’s editing features, including AI Video and Video Studio tools. It will also be available through ByteDance’s Dreamina platform and its marketing tool Pippit.

ByteDance said the model can be used for various content types, including tutorials, product demonstrations, and action-based videos. It also enables creators to prototype ideas before filming, reducing production time and cost.

Safety Measures and Industry Context

The launch comes amid heightened scrutiny of generative video technologies. ByteDance has introduced safeguards to limit misuse, including restrictions on generating content featuring real faces and controls to prevent unauthorized use of copyrighted material.

Content generated by the model will include invisible watermarks to help identify AI-produced media and support enforcement actions if necessary.
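ByteDance has not published the details of its watermarking scheme, but the general idea behind invisible watermarking can be illustrated with a deliberately simplified sketch: embed a machine-readable identifier in the least significant bits of pixel values, where it is imperceptible to viewers but recoverable by a detector. The payload string and function names below are hypothetical; production systems use far more robust, compression-resistant techniques than this toy LSB approach.

```python
import numpy as np

# Hypothetical identifier payload; real schemes embed cryptographic signals.
WATERMARK = "AI-GEN"

def embed(frame: np.ndarray, payload: str = WATERMARK) -> np.ndarray:
    """Hide the payload's bits in the least significant bits of the frame."""
    bits = np.unpackbits(np.frombuffer(payload.encode(), dtype=np.uint8))
    flat = frame.flatten().copy()
    # Clear each target pixel's lowest bit, then set it to the payload bit.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(frame.shape)

def extract(frame: np.ndarray, n_chars: int = len(WATERMARK)) -> str:
    """Read the payload back out of the least significant bits."""
    bits = frame.flatten()[: n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode()

frame = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
marked = embed(frame)
assert extract(marked) == "AI-GEN"
# Each pixel value changes by at most 1, so the mark is invisible:
assert np.max(np.abs(marked.astype(int) - frame.astype(int))) <= 1
```

A detector scanning uploaded media could call `extract` and check for the known payload, which is the enforcement use case the article describes, though robust detection in practice must survive re-encoding, cropping, and other transformations that this sketch does not.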

The phased rollout reflects ongoing efforts to address legal and regulatory concerns, particularly from the entertainment industry, which has raised issues about copyright infringement and unauthorized use of intellectual property.

ByteDance’s move comes as competition in the AI video space evolves. While some companies are scaling back investments due to high costs and legal risks, others continue to advance the technology and integrate it into consumer platforms.

By embedding Seedance 2.0 into CapCut, ByteDance is leveraging its large user base to accelerate adoption of AI video tools. The strategy highlights a broader trend of integrating generative AI directly into existing creative applications, making advanced capabilities more accessible to everyday users.

As the rollout expands, the company said it will continue working with industry experts and creative communities to refine the model and address emerging challenges.
