Meta Uses AI Model Llama to Detect Harmful Content
Meta has deployed new technology to improve the safety of its social platforms, Facebook and Instagram. The company now uses its AI model Llama to identify content that violates rules regarding hate speech, violence, and harassment.
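For illustration, here is a minimal sketch of how an LLM-based moderation classifier generally works, not Meta's actual pipeline: the prompt wording, label set, and stubbed model call are all assumptions made for the example.

```python
# Hypothetical sketch of LLM-based content moderation. This is NOT
# Meta's system; the model call is stubbed out. In practice it would
# be a request to a hosted Llama instance or another instruction-tuned LLM.

MODERATION_PROMPT = """Classify the following post into exactly one label:
SAFE, HATE_SPEECH, VIOLENCE, or HARASSMENT.
Post: {post}
Label:"""

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (e.g. an HTTP request to an
    inference server). Returns a canned answer for illustration."""
    return "SAFE"

def moderate(post: str) -> str:
    """Ask the model for a label and normalize its answer."""
    raw = call_llm(MODERATION_PROMPT.format(post=post))
    parts = raw.strip().split()
    label = parts[0].upper() if parts else ""
    allowed = {"SAFE", "HATE_SPEECH", "VIOLENCE", "HARASSMENT"}
    # Fall back to human review when the model's answer is unusable.
    return label if label in allowed else "NEEDS_HUMAN_REVIEW"

if __name__ == "__main__":
    print(moderate("Have a great day, everyone!"))  # -> SAFE
```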
Increased efficiency: The new system understands context better than traditional algorithms do. Meta's Global Head of Safety, Antigone Davis, noted that the technology significantly reduces the workload on human moderators.
Official position: Commenting on the initiative, Meta's President of Global Affairs, Nick Clegg, said: "Our goal is to create an environment where artificial intelligence acts faster and more accurately against harmful posts than has ever been possible before."
Criticism and challenges: Despite the progress, digital rights advocates warn that the automated system may mistakenly delete legitimate posts. Meta acknowledges that the system is still in its learning phase and says humans will continue to make the final decisions in complex cases.
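The human-in-the-loop arrangement Meta describes is a common pattern: automated decisions become final only when the model is confident, and ambiguous cases are queued for a moderator. The sketch below assumes a confidence threshold and scoring function invented for this example; Meta has not published such parameters.

```python
# Hypothetical illustration of human-in-the-loop moderation: low-confidence
# decisions are escalated to human reviewers. Threshold and scores are
# assumed values, not Meta's published parameters.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff for automatic action

@dataclass
class Decision:
    label: str
    confidence: float
    final: bool  # False means a human moderator makes the call

def score_post(post: str) -> tuple[str, float]:
    """Stand-in for a model call returning (label, confidence)."""
    return ("SAFE", 0.72)

def route(post: str) -> Decision:
    label, confidence = score_post(post)
    if confidence >= CONFIDENCE_THRESHOLD:
        # Confident enough to act automatically.
        return Decision(label, confidence, final=True)
    # Ambiguous case: send to the human review queue.
    return Decision(label, confidence, final=False)

if __name__ == "__main__":
    print(route("borderline example text"))
    # -> Decision(label='SAFE', confidence=0.72, final=False)
```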