OpenAI Launches Sora 2 with Physics-Accurate and Audio-Synced Video

OpenAI has introduced Sora 2, a next-generation video AI model with synchronized audio, improved physics, and tighter control, alongside a companion social app.

By Samantha Reed
OpenAI has begun rolling out Sora 2, a new video generator that emphasizes realism, physics accuracy, and synchronized sound, alongside a social video app. Photo: Sanket Mishra / Pexels

OpenAI has unveiled Sora 2, a major advance in AI video generation that emphasises realism, controllability, and synchronised audio.

The model introduces features that address long-standing weaknesses of generative video systems — from distorted physics to mismatched sound — making Sora 2 the company’s most sophisticated video and audio generator to date.

Alongside the model, OpenAI is launching a companion social app that allows users to generate and share short AI-produced clips in a TikTok-style feed.

The Sora 2 model represents a clear step forward over its predecessor. While the original Sora impressed with short text-to-video outputs, it often struggled with consistent motion, scene stability, and sound alignment.

Sora 2 directly targets these issues. It integrates synchronised dialogue and sound effects, maintains consistent world states across sequences, and better reflects the physical laws that govern motion. For example, generated clips of athletes, vehicles, or natural environments now move in ways that align more closely with reality.

Features, Realism & Controls

At the core of Sora 2 is an emphasis on controllability. Users have greater influence over how characters move, how environments evolve, and how sound matches visuals. This makes the model not just a novelty tool but a potential asset for content creators, educators, and entertainment pipelines.

The system’s physics-aware design allows more accurate simulation of cause and effect: a ball bounces with proper trajectory, shadows fall consistently with lighting, and water responds realistically to interaction. Combined with synchronised dialogue and ambient sound effects, the outputs feel more polished and production-ready than prior iterations.

The companion app lets users generate up to 10-second videos in a swipe-based interface similar to TikTok. All content is AI-generated; uploads from device libraries are not supported at launch. A “Cameo” option enables verified users to lend their likeness to videos, creating new opportunities for personalisation — though it also raises questions around consent and moderation.

Access to Sora 2 is currently by invitation on iOS in the U.S. and Canada. Android support is planned but not yet confirmed. While usage is free within limits, paying ChatGPT Pro subscribers receive access to a higher-quality tier branded as Sora 2 Pro.

Implications & What Comes Next

By emphasising realism and control, Sora 2 places OpenAI in direct competition with not only TikTok and YouTube Shorts but also with emerging AI video platforms aiming to mainstream generative media. The ability to create believable, sound-synchronised content at scale could disrupt video production, influencer marketing, and entertainment workflows.

At the same time, challenges loom. Copyright enforcement, misuse of likenesses, and moderation of synthetic media remain unsettled. OpenAI’s safeguards block certain outputs, but ensuring safe use at global scale will test the company’s systems. Expansion beyond North America will bring further regulatory scrutiny.

Looking ahead, the roadmap points toward longer video clips, richer editing controls, and broader device support. If Sora 2 continues to improve on physical accuracy and controllability, it could set a new benchmark for what AI-generated video can achieve — edging closer to production-quality media created entirely from prompts.

Opera Launches Neon AI Browser to Join Agentic Web Race

Opera today introduced Neon, an AI-powered browser that can run tasks and navigate pages autonomously — aiming to shift web browsing from passive to “agentic.”

By Samantha Reed

Opera has launched Neon, a new AI-powered browser designed to go beyond simple page rendering by executing tasks, running code, and acting autonomously on behalf of users. Rather than just delivering search results, Neon aims to become a proactive “agentic” productivity hub.

The release marks Opera’s boldest move yet in the rapidly evolving browser landscape. For decades, browsers have served as static gateways to the internet, displaying information and requiring the user to take every action manually.

Neon flips that model by introducing automation and decision-making directly into the browser, giving users a tool that can fill forms, analyze multiple sources, and even complete multi-step online tasks without constant supervision.

Opera argues that this shift could redefine how people interact with the web – from everyday errands to advanced technical workflows – while keeping privacy and local control at the forefront.

What Is Neon and Why Opera Is Betting Big

Neon can fill out forms, compare data across websites, and even draft or run code within pages, according to the company’s press release. Its standout “Neon Do” feature allows the browser to navigate web pages on the user’s behalf — all locally, without routing data to external cloud services.

Additional features include Tasks, which let Neon create self-contained workspaces for analysing multiple sources, and Cards, which are reusable prompt templates for automating repetitive workflows. Opera emphasises that all actions occur on the device, giving users explicit control over when the AI acts or pauses.

Opera is positioning Neon as a subscription product targeting power users. Early access begins immediately, with broader rollout expected in the coming months. The company claims over 300 million active users globally and highlights its long history in browser development, dating back to 1995.

Privacy is a central pillar of Neon’s pitch: by executing operations on-device and avoiding cloud routing, Opera argues it can better meet regulatory demands and user expectations in sensitive regions such as Europe.

Implications, Challenges, and What Comes Next

Neon intensifies the already heated competition to reimagine the web browser not merely as a display engine but as a smart assistant. Earlier entrants such as Comet by Perplexity AI and Dia by The Browser Company have already pushed that boundary. Meanwhile, OpenAI is expected to release a Chromium-based AI browser integrating its “Operator” agent, further deepening the race.

However, challenges loom. For one, performance and resource use may bottleneck AI operations on local devices. Ensuring responsiveness while maintaining privacy is a delicate engineering tradeoff. Also, convincing users to pay a subscription for a browser – a space historically dominated by free offerings – could be a steep uphill battle.

From a regulatory perspective, Neon’s privacy claim might help Opera gain favour in markets with stringent data protection rules. But if any cloud fallback or data leak arises, the backlash could be severe.

Looking ahead, Opera will need to prove that Neon’s AI capabilities add compelling productivity value for users to switch from incumbents. If successful, Neon could push the browser paradigm toward a future where agents, not just pages, dominate.


OpenAI’s Sora Update Will Include Copyright Works Unless Rights Holders Opt Out

OpenAI’s next version of its Sora video generator will default to including copyrighted content unless owners explicitly opt out – drawing criticism from media creators.

By Samantha Reed
OpenAI has started informing talent agencies and film studios about the opt-out process for Sora 2, which is expected to launch in the coming days, according to the report. Photo: Sanket Mishra / Pexels

OpenAI is preparing an updated version of Sora, its text-to-video generator, that will allow inclusion of copyrighted content by default – unless copyright holders explicitly opt out, the Wall Street Journal reported on Monday, citing people familiar with the matter. This shift marks a move away from seeking prior permission, placing the onus on studios, creators, and rights holders to act.

The company has begun notifying talent agencies and film studios about the opt-out process ahead of the product launch. Under the new policy, movie studios and other intellectual property owners must file specific requests if they do not want their copyrighted works included. A blanket, wide-ranging opt-out across an entire catalogue won’t be accepted; rights holders must identify specific violations.

How the Policy Works

OpenAI’s new approach means copyrighted materials are treated as “in” unless actively blocked. Rights holders must provide detailed information to prevent their works from being used, which adds monitoring and administrative burdens for creators. Unlike a universal exclusion, this selective opt-out model forces case-by-case action.

The company will continue to restrict the generation of recognisable public figures, separating likeness rights from copyright. OpenAI also notes it is applying similar safeguards to those rolled out with its image generation tool earlier in 2025. By extending the same framework, OpenAI seeks consistency across its generative AI product line.

OpenAI is also launching Sora 2, an app offering vertical 10-second video generation and optional identity verification for people who want to use their own likeness. At launch, users will not be able to upload existing media from devices, limiting inputs to text-based prompts.

Reactions, Risks, and Future Outlook

The opt-out model has drawn pushback from creative industries. Rights holders argue that prior consent and compensation would be more appropriate than requiring constant monitoring. Critics see the strategy as part of a broader pattern in AI – prioritizing rapid deployment over negotiated rights.

Legal experts point to risks in the training process. Independent reviews suggest that Sora can reproduce logos, watermarks, and characters, which indicates that copyrighted content may have been included in its datasets. If disputed outputs appear, lawsuits could follow, especially in markets with strict intellectual property laws.

Subscription models and user growth remain uncertain. Whether consumers will embrace Sora 2 as a creative tool depends not only on its technical capabilities but also on how OpenAI manages copyright conflicts. The company’s handling of opt-outs, compensation, and transparency will likely shape whether Sora becomes a standard tool for creators or a focal point for legal battles.

OpenAI Launches ChatGPT Pulse to Rival Social Media Feeds

OpenAI introduces ChatGPT Pulse, a proactive feature delivering daily updates and suggestions directly in the app, positioning it as a potential challenger to social media feeds.

By Samantha Reed
OpenAI unveils Pulse inside ChatGPT: personalized daily cards hint at a future where AI challenges social media feeds. Photo: OpenAI / X

OpenAI has rolled out ChatGPT Pulse, a new feature for Pro users on mobile that proactively surfaces curated updates and suggestions every day. Pulse offers personalized cards of information based on your chats, connected apps and feedback, turning ChatGPT from a reactive tool into a daily AI companion.

Each morning, users see a stream of visual cards summarizing things they might care about — follow-ups to past topics, reminders, or recommendations tied to upcoming events. Cards can be expanded for more context, saved for later, or dismissed. Over time, Pulse learns from your preferences to refine what appears.

The feature can connect with Gmail and Google Calendar if users opt in, enabling richer context. For instance, it might draft an agenda for an upcoming meeting, propose travel suggestions before a trip, or remind you of deadlines – all without the user having to ask.

Why Pulse Could Disrupt the Way We Scroll

Pulse moves ChatGPT closer to acting like a feed instead of a static assistant. Instead of opening a browser or social app to scroll for updates, users can open ChatGPT to see a curated flow of actionable information.

This proactive approach reflects a larger trend in AI where models anticipate rather than only respond. As previously covered, Google, Microsoft and other tech giants have poured billions into AI infrastructure to make these kinds of features possible. Pulse is OpenAI’s answer to that trend, and its first real step into making AI an everyday feed rather than an occasional tool.

What’s Next for ChatGPT Pulse

Pulse is still in preview and will evolve. OpenAI says user control is central: people can curate their feed by marking what’s helpful, deleting past cards and limiting which apps Pulse can access. Safety systems are built in to keep content policy-compliant.

Rollout will expand beyond Pro users on mobile to Plus users and then desktop. More integrations, languages and potentially “agentic” actions — where ChatGPT can complete certain tasks automatically with approval — are on the roadmap.

If successful, Pulse could blur the line between AI assistants and social media feeds. Instead of scrolling endlessly through posts, users might start their day with an AI that knows their priorities and helps them act on them.

Databricks, OpenAI Partner in $100M Push for AI Agents

OpenAI and Databricks have forged a multiyear $100 million partnership to embed powerful AI agents into enterprise data platforms, enabling organizations to build agents using their own data.

By Samantha Reed
OpenAI and Databricks have begun preparing to integrate AI agents into enterprise platforms, targeting seamless deployment for 20,000+ customers under the new $100 million pact. Photo: Databricks / Facebook

OpenAI and Databricks announced a multiyear $100 million partnership aimed at empowering enterprises to build and deploy AI agents directly against their own data. Through this agreement, OpenAI’s models – including GPT-5 – will be accessible natively within Databricks’ Data Intelligence Platform and its Agent Bricks product, removing the need for data movement or managing external infrastructure.

The deal is structured to deliver services and revenue over time, with both companies expecting usage and adoption to exceed the initial $100 million commitment. Databricks, which serves over 20,000 enterprise customers, will now offer integrated AI agent capabilities as a core component of its stack.

Technical Integration & Governance

Under the agreement, OpenAI’s models will run where data already resides – within Databricks’ unified platform – making it simpler for businesses to spin up AI agents with minimal setup.

The integration includes support via SQL, APIs, and model serving. Databricks’ existing governance frameworks, such as Unity Catalog, will help manage access, compliance, and security as AI agents scale in production.
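To make the SQL/API integration concrete, here is a minimal sketch of what calling a model behind a Databricks model-serving endpoint can look like over REST. The workspace URL, endpoint name, and token below are placeholders, not details from the announcement; the sketch only builds the request and assumes the standard chat-style JSON payload that serving endpoints accept at the /serving-endpoints/&lt;name&gt;/invocations route.

```python
import json
import urllib.request

# Placeholder values: substitute a real workspace URL, endpoint name,
# and personal access token. None of these come from the announcement.
WORKSPACE_URL = "https://example.cloud.databricks.com"
ENDPOINT_NAME = "openai-gpt-agent"
API_TOKEN = "dapi-REDACTED"


def build_invocation(prompt: str, max_tokens: int = 256) -> urllib.request.Request:
    """Build (but do not send) a POST for a Databricks model-serving endpoint.

    Serving endpoints expose a REST route of the form
    /serving-endpoints/<name>/invocations that accepts a chat-style
    JSON payload.
    """
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        url=f"{WORKSPACE_URL}/serving-endpoints/{ENDPOINT_NAME}/invocations",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Construct a sample request; sending it would require real credentials.
req = build_invocation("Summarize last quarter's sales by region.")
```

In practice, the same endpoint is also reachable from Databricks SQL (for example via its built-in AI query functions), which is what lets agents run where the governed data already lives.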

The partnership also involves joint R&D efforts to refine how agents operate reliably on enterprise workloads. Databricks brings scalability and data infrastructure; OpenAI contributes frontier-model research, pushing for agent-level intelligence closely tied to private data.

Business Impact & Risks

The move accelerates a trend: AI capabilities will increasingly be embedded deep inside enterprise stacks, not held at the edge. For Databricks, it strengthens its position against rival cloud providers and AI platforms. For OpenAI, it diversifies its enterprise reach beyond partnerships like Microsoft Azure.

Still, risk remains. Ensuring agent accuracy, consistency, and safety on real-world enterprise data is nontrivial. Any missteps—errors, hallucinations, privacy leaks—could harm customer trust. The financial model also carries exposure: if adoption lags, the $100 million baseline may underdeliver.

Success depends on execution: if the technical integration works smoothly and the agents consistently deliver value, this deal could reshape how businesses adopt AI agents.

Google Rolls Out Gemini in Chrome, Adds Agentic Browsing Tools

Google is integrating Gemini AI into Chrome for U.S. users, bringing features like multi-tab summarization, agentic task automation, and deeper sync with Google apps.

By Samantha Reed
Google product leaders demonstrate Gemini-powered Chrome features in U.S. release: summarization, agentic tasks, app links and smarter browsing. Photo: Brett Jordan / Pexels

Google has begun rolling out Gemini AI inside Chrome for U.S. desktop users with English set as their language. The update includes tools that help users make sense of complex web content, compare tabs, and use apps like YouTube, Maps, and Calendar without leaving the page.

Gemini will also gain agentic capabilities in the coming months, allowing Chrome to perform tedious tasks on behalf of users — things like placing orders or scheduling appointments — with safeguards so users approve final actions.

The address bar (Omnibox) is being upgraded too: an AI Mode feature will let users ask context-aware questions and follow up on results without switching tabs or losing track. Chrome will also begin using the Gemini Nano model for enhanced protection against scams and fake alerts.

Why This Matters and What Google’s Aiming For

By embedding Gemini inside Chrome, Google is moving Chrome from a passive tool into a more proactive assistant platform. Users will no longer just navigate and search — they’ll get help summarizing, recalling past sites, and completing multi-step tasks more efficiently.

This shift is part of Google’s broader push into making AI central across its products. It also comes shortly after Google avoided a forced breakup in a recent antitrust case, which adds pressure on the company to show innovation while handling regulatory scrutiny. As previously covered, Microsoft, Google and other tech giants have already committed £31B ($40B) to UK AI infrastructure, underlining how much is riding on AI right now.

What’s Ahead & What to Watch

Agentic tasks will roll out gradually, with users retaining control over the final steps of any task. Privacy and data handling will likely remain under close observation, especially how much context Gemini can use and when.

Chrome’s mobile versions for Android and iOS are expected to follow the desktop release. Google will also expand language support beyond U.S. English.

For users, these changes promise a more efficient web experience — less tab overload, fewer manual searches, and more automation. For competitors in AI browsers, Gemini in Chrome sets a high bar in terms of integration and utility.

Perplexity Raises $200 Million, Hitting $20 Billion Valuation in AI Search Race

AI search startup Perplexity has reportedly raised $200 million at a $20 billion valuation, just two months after a $100 million round – signaling its rapid rise as a major rival to Google.

By Maria Konash
Perplexity CEO Aravind Srinivas has rapidly turned the three-year-old AI search startup into a $20 billion company, positioning it as one of Google’s most credible challengers. Photo: Perplexity / X

Perplexity, the AI-powered search startup redefining how users interact with information, has reportedly raised $200 million in new funding at a $20 billion valuation, according to The Information.

The deal marks another milestone for one of the fastest-growing companies in artificial intelligence and underscores investor confidence in conversational search as the next major computing frontier.

The funding comes just two months after a $100 million round that valued the company at $18 billion, based on a July Bloomberg report. In total, Perplexity has raised about $1.5 billion since its founding three years ago, according to PitchBook data.

It remains unclear who led the latest round, though the July financing was reportedly an extension of a $500 million round led by Accel earlier this year at a $14 billion valuation. The company has yet to comment publicly on the new capital infusion.

According to a source close to the firm, Perplexity’s annual recurring revenue (ARR) is now approaching $200 million, up from more than $150 million reported last month. The company’s accelerating revenue trajectory has fueled speculation that it could soon surpass the early growth rates of other AI leaders like OpenAI and Anthropic.

Perplexity’s appeal lies in its distinct approach to search: rather than delivering traditional links, it provides conversational, cited answers drawn from the web — allowing users to query information much like chatting with an expert. The company markets itself as a transparent and efficient alternative to Google, one that surfaces factual responses instead of pages of ads or SEO-optimized content.

The latest funding arrives as Perplexity continues to capitalize on growing user demand for AI-assisted discovery. In recent months, the company has launched product integrations with PayPal and Oracle, expanded its enterprise API offerings, and doubled down on its push to make AI search accessible through mobile and web platforms.

In a headline-grabbing move in August, Perplexity offered to buy Google’s Chrome browser for $34.5 billion, following the U.S. Department of Justice’s antitrust proposal that Google divest part of its search business.

That plan never materialized – a federal judge later ruled that Google could retain its search operations – but the offer reflected Perplexity’s bold strategy and willingness to challenge the tech establishment.

Founded by Aravind Srinivas, a former OpenAI researcher, Perplexity has built its reputation on speed, reliability, and clarity of information. The platform has quickly grown in adoption among professionals, researchers, and businesses looking for more direct, conversational ways to access verified knowledge.

With $200 million in new capital and a valuation that now rivals established AI players, Perplexity is poised to expand its infrastructure, build new models, and deepen partnerships across cloud and enterprise ecosystems.

As the AI search race intensifies, the company’s trajectory suggests that the future of search may no longer belong solely to Google — but to whoever best blends intelligence, transparency, and trust.