Study Finds Women Receive Less Recognition for Using AI at Work

New research shows women are adopting AI tools at work more slowly than men, facing lower support and recognition. The gap could have long-term implications for career growth and workplace equity.

By Samantha Reed Edited by Maria Konash
Study finds women lag in workplace AI adoption, raising concerns over long-term inequality. Image: Christina @ wocintechchat.com / Unsplash

Artificial intelligence is reshaping how work gets done, but its adoption is not evenly distributed across the workforce. New research from Lean In highlights a growing gender gap in how employees use AI tools, raising concerns about long-term impacts on career advancement and workplace equity.

The findings show that men are more likely than women to integrate AI into their daily workflows. Around 33% of men report using AI tools daily or constantly at work, compared to 27% of women. Men are also slightly more likely to have used AI at all, with 78% reporting some usage versus 73% of women.

Beyond usage, attitudes toward AI differ significantly. Men are more likely to express positive and energized views about AI adoption, while women report higher levels of caution and concern. Women are 20% more likely to feel threatened by AI and significantly more likely to worry about how their use of such tools might be perceived by colleagues.

These perceptions may influence adoption behavior. Women are more likely to question the accuracy of AI outputs and to express ethical concerns about its use. While these considerations reflect a more cautious and critical approach, they may also slow adoption in environments where rapid experimentation is rewarded.

Structural Barriers and Career Implications

The research also points to structural differences in how AI adoption is supported and recognized within organizations. Among employees who use AI at work, men are more likely to receive positive feedback for doing so. Approximately 23% of men report being praised for AI use, compared to 18% of women.

Managerial support also differs. Men are more likely to be encouraged to use AI tools, with 37% reporting such support versus 30% of women. This gap in encouragement can influence both skill development and confidence, potentially reinforcing disparities over time.

Concerns about job security further shape adoption patterns. Women are nearly twice as likely as men to believe that AI-driven layoffs will disproportionately affect female employees. This perception may contribute to hesitancy in adopting tools that are seen as both beneficial and potentially disruptive.

The implications extend beyond short-term productivity gains. As AI becomes more embedded in workflows, familiarity with these tools is increasingly tied to performance, efficiency, and career progression. Lower adoption rates today could translate into reduced access to opportunities in the future.

The findings suggest that organizations may need to take a more proactive role in ensuring equitable access to AI tools, training, and support. Without intervention, early differences in adoption could widen into more significant gaps in skills, recognition, and advancement across the workforce.


SpaceX Moves Toward IPO With Confidential Filing

SpaceX has reportedly filed confidentially for an IPO, joining OpenAI and Anthropic in a growing pipeline of major tech listings. The moves signal a potential surge in AI-driven public offerings in 2026.

By Samantha Reed Edited by Maria Konash
SpaceX moves toward IPO as OpenAI and Anthropic line up, signaling a 2026 tech listing surge. Image: Niranjan _ Photographs / Unsplash

SpaceX has taken a significant step toward going public, reportedly submitting a confidential filing for a U.S. initial public offering. The move positions Elon Musk’s rocket and satellite company at the forefront of a new wave of high-profile listings expected to reshape public markets in 2026.

If completed, the offering could become the largest IPO in history. Analysts estimate that a raise exceeding $25.6 billion would surpass Saudi Aramco’s record-setting 2019 debut. The potential listing comes as SpaceX continues to expand its role beyond aerospace, particularly following its acquisition of Musk’s artificial intelligence startup xAI earlier this year.

The integration of xAI signals a broader strategy to combine space infrastructure with AI capabilities, including data processing and satellite-enabled services. This convergence reflects growing investor interest in companies that operate across both physical and digital infrastructure layers.

Wall Street expects 2026 to mark a strong recovery in IPO activity after several subdued years. Goldman Sachs has projected that U.S. IPO proceeds could reach as much as $160 billion, driven by pent-up demand and a backlog of large private companies preparing to go public.

However, market conditions remain uncertain. Geopolitical tensions and ongoing volatility in global equities could influence timing and valuations for major listings.

AI Companies Prepare for Public Market Debuts

Alongside SpaceX, leading artificial intelligence firms are also laying the groundwork for potential IPOs. OpenAI is reportedly exploring a public listing that could value the company at up to $1 trillion, reflecting its rapid growth and central role in the generative AI market. While the company has previously indicated that an IPO is not imminent, preparations suggest a longer-term path toward public markets.

Anthropic, the developer of the Claude AI models, is also preparing for a potential listing. The company has engaged legal advisors as part of early-stage IPO planning, with reports pointing to a filing as soon as October and a possible debut in 2026.

These developments highlight a broader shift in the technology sector, where AI companies are transitioning from research-focused organizations to large-scale commercial platforms. With increasing revenue, enterprise adoption, and infrastructure investments, firms like OpenAI and Anthropic are positioning themselves to meet public market expectations.

The convergence of AI and capital markets reflects rising demand for exposure to next-generation technologies. As companies scale their models and expand into enterprise applications, IPOs are emerging as a key mechanism to fund continued growth.

If market conditions stabilize, the simultaneous entry of SpaceX, OpenAI, and Anthropic could define the next phase of the tech IPO cycle, with AI-driven businesses at its core.


Google Expands AI Video Lineup with Low-Cost Veo 3.1 Lite

Google has introduced Veo 3.1 Lite, a lower-cost video generation model aimed at high-volume use cases. The release expands its AI video lineup with more accessible pricing and flexible capabilities.

By Samantha Reed Edited by Maria Konash
Google unveils Veo 3.1 Lite, a low-cost AI video model for scalable content creation. Image: Google

Shortly after OpenAI shut down its Sora app, Google launched Veo 3.1 Lite, its most cost-efficient AI video generation model to date, targeting developers building high-volume video applications. The release expands the Veo 3.1 family with a pricing-focused option designed to lower the barrier to entry for generative video tools.

Veo 3.1 Lite is priced at less than half the cost of Veo 3.1 Fast while maintaining comparable generation speed. The model supports both text-to-video and image-to-video workflows, enabling developers to generate short-form video clips with flexible inputs.

The model offers output in 720p and 1080p resolutions, with support for both landscape (16:9) and portrait (9:16) formats. Video duration can be set to 4, 6, or 8 seconds, allowing developers to adjust costs based on use case requirements.

Pricing highlights the model’s positioning. Veo 3.1 Lite costs approximately $0.05 per second for 720p output and $0.08 per second for 1080p. By comparison, Veo 3.1 Fast is priced at $0.15 per second, with a scheduled reduction to $0.10 for 720p and $0.12 for 1080p starting April 7. The standard Veo 3.1 model remains significantly more expensive at $0.40 per second for HD and up to $0.60 for 4K output.

This creates a cost gap of up to eight times between the Lite and full versions, positioning Lite as a practical option for experimentation, prototyping, and user-generated content scenarios.
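The pricing arithmetic above can be sketched as a small cost calculator. The figures below are the per-second prices reported in this article (actual Google pricing may differ or change); the function name and structure are illustrative only.

```python
# Illustrative cost comparison for the Veo 3.1 tiers, using the per-second
# prices reported in the article (actual pricing may change over time).
PRICE_PER_SECOND = {
    ("lite", "720p"): 0.05,
    ("lite", "1080p"): 0.08,
    ("fast", "720p"): 0.15,      # before the scheduled April 7 reduction
    ("standard", "hd"): 0.40,
    ("standard", "4k"): 0.60,
}

def clip_cost(tier: str, resolution: str, seconds: int) -> float:
    """Cost in USD for one generated clip of the given length."""
    if seconds not in (4, 6, 8):  # durations the article says are supported
        raise ValueError("duration must be 4, 6, or 8 seconds")
    return round(PRICE_PER_SECOND[(tier, resolution)] * seconds, 2)

# An 8-second 720p clip: $0.40 on Lite versus $3.20 on the standard model,
# the eight-fold gap described above.
print(clip_cost("lite", "720p", 8))    # 0.4
print(clip_cost("standard", "hd", 8))  # 3.2
```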

Expanding Access to AI Video Tools

The launch reflects Google’s broader push to make generative video more accessible to developers and businesses. By offering a lower-cost model alongside premium options, the company is enabling a wider range of applications, from social media content generation to product prototyping and automated marketing assets.

Veo 3.1 Lite is available through Google’s Gemini API and AI Studio, allowing developers to integrate video generation into existing workflows and applications. The company emphasized that the model is optimized for efficiency while maintaining professional-grade capabilities suitable for production pipelines.

The introduction of Lite comes as competition in AI video intensifies, with multiple companies investing heavily in model development and infrastructure. Pricing and scalability are emerging as key differentiators, particularly for developers seeking to deploy video generation at scale.

While Lite is not positioned as a replacement for higher-end models, it offers a balance between cost and functionality that may appeal to a broader developer base. Its release signals a shift toward more modular AI offerings, where pricing tiers align closely with specific use cases and performance requirements.


Anthropic Partners with Australia on AI Safety and Research

Anthropic has signed an agreement with the Australian government to collaborate on AI safety and research. The deal includes funding for scientific institutions and expanded use of Claude in healthcare and education.

By Samantha Reed Edited by Maria Konash
Anthropic partners with Australia on AI safety, research, healthcare, and education, expanding Claude adoption. Image: Anthropic

Anthropic has signed a Memorandum of Understanding with the Australian government to collaborate on artificial intelligence safety, marking a strategic expansion of its international partnerships. The agreement aligns with Australia’s National AI Plan and formalizes cooperation between Anthropic and the country’s AI Safety Institute.

Under the arrangement, Anthropic will share insights on emerging AI model capabilities and associated risks, while participating in joint safety and security evaluations. The company will also collaborate with academic institutions to advance research on responsible AI development. Similar partnerships are already in place with safety institutes in the United States, United Kingdom, and Japan.

A key component of the agreement involves sharing Anthropic’s Economic Index data with the Australian government. This dataset is designed to track how AI tools are being adopted across industries and assess their economic impact. Initial focus areas include sectors critical to Australia’s economy, such as natural resources, agriculture, healthcare, and financial services.

The collaboration also includes plans to support workforce development through AI education and training initiatives. According to Anthropic, Australian users are already applying its Claude model across a wide range of professional and technical tasks, particularly in high-skill domains.

In parallel, the company is exploring potential investments in data center infrastructure and energy capacity in Australia, reflecting growing demand for compute resources tied to AI deployment.

Investment in Science and Education

Anthropic is extending its AI for Science program to Australia with an investment of AUD 3 million in API credits for research institutions. The funding will support projects focused on healthcare, genomics, and computer science education.

Participating institutions include the Australian National University, Murdoch Children’s Research Institute, the Garvan Institute of Medical Research, and Curtin University. These organizations will use Anthropic’s Claude model to accelerate research in areas such as rare disease diagnosis, precision medicine, and genetic analysis.

At the Australian National University, researchers are applying Claude to analyze genetic sequencing data, while also integrating the model into computing curricula. The Garvan Institute is using AI to study genetic variation and identify potential treatments, including efforts to automate complex diagnostic processes for rare childhood conditions.

Murdoch Children’s Research Institute is focusing on stem cell research and therapeutic discovery, while Curtin University is expanding the use of AI across multiple academic disciplines, including health sciences, engineering, and law.

Anthropic also announced a new startup support initiative targeting deep tech companies in Australia. Eligible startups working in areas such as drug discovery, climate modeling, and materials science will receive up to $50,000 in API credits, along with technical resources.

The partnership signals Anthropic’s broader push into the Asia-Pacific region, with plans to establish a local presence in Sydney. It also reflects increasing collaboration between AI developers and governments as countries seek to balance innovation with safety and economic impact. Alongside these efforts, Anthropic has launched a $100 million Claude Partner Network to support consultancies and AI firms deploying its technology, while also exploring an initial public offering as early as October amid intensifying competition with peers such as OpenAI.


Salesforce Expands Slackbot With AI-Powered Enterprise Capabilities

Salesforce is expanding Slackbot into an AI-powered enterprise teammate, integrating workflows, apps, and data into a single conversational interface. The update introduces new capabilities aimed at improving productivity and coordination across teams.

By Daniel Mercer Edited by Maria Konash
Salesforce upgrades Slackbot with AI, unifying workflows, apps, and CRM in one interface. Image: Salesforce

Salesforce is positioning Slack as a central interface for enterprise AI with a major expansion of Slackbot, transforming it from a personal assistant into a collaborative, organization-wide AI teammate. The update introduces more than 30 new capabilities designed to connect data, applications, and workflows into a unified conversational experience.

The move reflects a broader shift in enterprise AI adoption. While many organizations have deployed multiple AI tools across departments, Salesforce argues that fragmentation limits their effectiveness. Slackbot aims to address this by acting as a shared intelligence layer that connects systems and delivers actionable insights directly within team workflows.

Slackbot operates inside Slack’s existing environment, leveraging access to conversations, files, and organizational context. It inherits existing permissions and governance settings, allowing it to interact across enterprise systems without requiring additional configuration. This design reduces friction in adoption while maintaining compliance controls.

One of the key additions is meeting intelligence. Slackbot can now transcribe meetings, summarize discussions, and extract action items. It can also trigger follow-up actions in connected systems such as customer relationship management tools, reducing the need for manual updates after meetings.

Integration Across Enterprise Systems

A central feature of the update is Slackbot’s ability to orchestrate workflows across multiple enterprise tools. Through a new Model Context Protocol (MCP) client, Slackbot can route tasks to various AI agents and applications, including systems used for sales, customer service, and IT operations. Employees can issue requests in natural language without needing to know which system executes the task.

Salesforce is also introducing reusable AI “skills,” which allow teams to standardize recurring workflows. These skills define inputs, steps, and outputs for specific tasks, enabling consistent execution across teams. Slackbot can automatically recognize when a task matches a predefined skill and apply it without user intervention.
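The “skills” concept described above can be pictured as a named workflow with declared inputs, ordered steps, and outputs, plus a way to match incoming requests. The sketch below is a hypothetical illustration of that idea, not Salesforce’s actual API; all names and fields are invented for clarity.

```python
from dataclasses import dataclass

# Hypothetical sketch of a reusable "skill": a standardized workflow with
# declared inputs, ordered steps, and outputs. Schema is illustrative only;
# it does not reflect Salesforce's real implementation.
@dataclass
class Skill:
    name: str
    triggers: list   # phrases suggesting a request matches this skill
    inputs: list     # values the skill needs before it can run
    steps: list      # ordered actions routed to connected systems
    outputs: list    # artifacts the skill produces

    def matches(self, request: str) -> bool:
        """Naive check: does the request mention one of the trigger phrases?"""
        text = request.lower()
        return any(trigger in text for trigger in self.triggers)

# Example skill: turning a meeting into a CRM update (hypothetical).
deal_update = Skill(
    name="post-meeting deal update",
    triggers=["update the deal", "log the meeting"],
    inputs=["meeting transcript", "deal id"],
    steps=["summarize transcript", "extract action items", "update CRM record"],
    outputs=["summary message", "updated opportunity"],
)

print(deal_update.matches("Please update the deal after today's call"))  # True
```

A real system would match requests with a language model rather than substring checks, but the shape is the same: recognize that a task fits a predefined skill, then execute its steps consistently.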

For smaller businesses, Salesforce has embedded customer relationship management capabilities directly into Slackbot. The system can automatically capture customer interactions from conversations, update records, and track deals without requiring a separate CRM interface. For larger enterprises, Slackbot serves as a conversational layer over Salesforce’s Customer 360 platform, enabling users to update opportunities, manage cases, and trigger workflows without leaving Slack.

Slackbot also extends to desktop-level interactions, allowing users to act on content across applications while maintaining context from Slack and connected systems. This reduces the need to switch between tools and manually transfer information.

Salesforce reports strong early adoption, with Slackbot becoming one of the fastest-growing features in its product history. Internal data suggests that employees using the tool can save significant time on routine tasks, reflecting growing demand for AI systems that integrate directly into existing workflows.

The expansion underscores Salesforce’s broader strategy to position Slack as the operating system for work, where human collaboration and AI-driven automation converge in a single interface.


OpenAI Raises $122B to Supercharge Global AI Infrastructure

OpenAI has closed a $122 billion funding round at an $852 billion valuation to scale its AI infrastructure and products. The company aims to accelerate enterprise adoption and global deployment of intelligent systems.

By Maria Konash
OpenAI raises $122B at $852B valuation, accelerating infrastructure and its AI superapp strategy. Image: OpenAI

OpenAI has raised $122 billion in committed capital, marking one of the largest private funding rounds in technology history. The deal values the company at $852 billion post-money, underscoring strong investor confidence in the long-term role of artificial intelligence as core infrastructure for the global economy.

The round was co-led by SoftBank and Andreessen Horowitz, with participation from major institutional investors and strategic partners including Amazon, Nvidia, and Microsoft. OpenAI also expanded access to individual investors, raising more than $3 billion through bank distribution channels. In parallel, the company increased its revolving credit facility to approximately $4.7 billion, providing additional financial flexibility.

The capital will primarily support compute expansion, which OpenAI describes as its central strategic advantage. The company has built a diversified infrastructure strategy spanning multiple cloud providers, chip platforms, and data center partnerships. Nvidia GPUs remain foundational to its training and inference systems, while additional collaborations include AMD, Broadcom, and cloud providers such as Oracle, Google Cloud, and AWS.

This infrastructure investment reflects rising demand for large-scale AI systems. OpenAI stated that its APIs now process more than 15 billion tokens per minute, highlighting the scale at which its models are being deployed across applications and industries.

Product Momentum and Enterprise Expansion

OpenAI’s rapid growth is closely tied to the adoption of ChatGPT and its broader product ecosystem. The platform now serves over 900 million weekly active users, with more than 50 million paying subscribers. The company reports that it is generating approximately $2 billion in monthly revenue, driven by both consumer subscriptions and enterprise usage.

Enterprise adoption has become a key revenue driver, accounting for more than 40% of total income. OpenAI expects enterprise revenue to reach parity with its consumer business by 2026, as organizations increasingly integrate AI into workflows and operations.

Recent product updates include the release of GPT-5.4, which introduces improvements in reasoning, workflow execution, and multimodal capabilities. OpenAI has also expanded Codex, its AI-powered coding agent, which now serves over 2 million weekly users and is experiencing rapid growth.

The company is positioning itself as a unified AI platform, combining consumer applications, developer tools, and enterprise solutions. Its strategy centers on building a “superapp” that integrates chat, coding, search, and agent-based automation into a single interface. This approach aims to simplify user experience while increasing engagement and cross-platform adoption.

Despite strong revenue growth, OpenAI remains unprofitable and continues to invest heavily in infrastructure and research. The scale of its latest funding round reflects both the high costs associated with AI development and the expectation that advanced models will drive productivity gains across industries.

As competition intensifies, OpenAI’s ability to translate its infrastructure advantage into sustainable revenue and operational efficiency will be critical in justifying its valuation and maintaining its leadership position in the AI sector.
