Meta is beginning a multi-year rollout of advanced AI systems to handle content moderation tasks, including detecting scams and removing illegal material. The company said the transition will reduce reliance on third-party contractors while improving speed and accuracy in enforcement.
The new systems are designed to manage repetitive and high-volume tasks, such as reviewing graphic content and identifying evolving threats like illicit sales and fraud. Meta said AI will help reduce errors while enabling faster responses to harmful content.
Despite the shift, the company emphasized that human oversight will remain critical. Human experts will continue to train and monitor the AI systems, and people will still handle complex decisions involving appeals, law enforcement requests, and sensitive cases.
The move reflects Meta’s broader strategy to integrate AI across its operations as it competes with companies such as OpenAI, Anthropic, and Google in deploying advanced AI systems. It also comes amid ongoing scrutiny over platform safety and content governance.
In parallel, Meta has introduced an AI-powered support assistant for Facebook and Instagram users to help resolve account-related issues, further expanding its use of AI across core services.