Meta, the parent company of Facebook and Instagram, is reportedly moving to automate much of its internal risk review process. According to documents obtained by NPR, the company plans to automate up to 90 percent of the risk assessments previously carried out by human evaluators, part of a broader effort to streamline internal operations and address potential harms more efficiently.

Risk assessments at major technology platforms like Meta are crucial for identifying and preventing issues related to misinformation, hate speech, data privacy, and other forms of online harm. Traditionally, these assessments involve teams of specialists who analyze new features, updates, or policies to predict and mitigate possible negative outcomes. But as the volume of content and the complexity of online interactions have grown, manual reviews have become increasingly labor-intensive and costly.

Meta’s new approach reportedly leans heavily on artificial intelligence and automated tools to identify potential risks. These automated systems can process vast amounts of data far more quickly than human teams, flag potential issues, and provide recommendations for mitigation. According to internal projections, automating this process could not only improve efficiency but also free up resources for Meta’s policy and technical teams.

However, the proposed shift has raised concerns both inside and outside the company. Critics argue that automating risk assessments may create blind spots, missing nuanced issues that human experts are better equipped to catch. AI systems can struggle with cultural context, regional legal differences, and subtleties of language that matter in understanding potential risks, especially as they relate to marginalized communities.

Advocates for automation argue that AI-powered risk assessments could reduce bottlenecks and allow Meta to respond more quickly to emerging challenges. They point to recent improvements in machine learning and natural language processing as evidence that automated systems are becoming more reliable and accurate.

Meta states that human intervention will still play a role, particularly in cases where automated systems flag complex or ambiguous risks. Ideally, automation would handle the bulk of straightforward cases while escalating higher-risk or more controversial scenarios to human experts for further review.
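To make that hybrid workflow concrete, here is a minimal sketch of what threshold-based escalation might look like. Everything in it is illustrative: the thresholds, the risk categories, and the stand-in risk scores are assumptions for the sake of the example, not details from Meta's actual system.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    AUTO_APPROVE = "auto_approve"   # low risk: cleared without human review
    HUMAN_REVIEW = "human_review"   # ambiguous or high risk: escalated


@dataclass
class Assessment:
    feature_id: str
    risk_score: float   # model-estimated probability of harm, in [0, 1]
    decision: Decision


# Hypothetical escalation threshold -- not a value from Meta's system.
ESCALATION_THRESHOLD = 0.2


def triage(feature_id: str, risk_score: float) -> Assessment:
    """Route a launch review based on an automated risk score.

    Low-scoring (straightforward) cases are approved automatically;
    anything at or above the threshold is escalated to human experts,
    mirroring the hybrid workflow described above.
    """
    if risk_score < ESCALATION_THRESHOLD:
        decision = Decision.AUTO_APPROVE
    else:
        decision = Decision.HUMAN_REVIEW
    return Assessment(feature_id, risk_score, decision)


if __name__ == "__main__":
    # In a real pipeline the scores would come from a trained classifier;
    # here they are hard-coded for demonstration.
    for fid, score in [("feed-ranking-update", 0.05), ("new-messaging-feature", 0.6)]:
        print(triage(fid, score))
```

Even in this toy version, the escalation threshold is itself a policy decision: set it too permissively and nuanced harms get cleared automatically, which is precisely the blind spot critics worry about.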

This development comes at a time when tech companies are under increasing pressure from regulators and the public to improve their handling of online harm. Whether Meta’s plan to automate risk assessments will lead to safer platforms or introduce new vulnerabilities remains to be seen. What is clear, however, is that the way large social media companies manage risk is poised for a significant transformation.