New Delhi: Meta, the parent company of Facebook and Instagram, is under fire for approving politically charged advertisements that incited violence and spread disinformation during India's recent election. The revelation comes from a report exclusively shared with The Guardian, highlighting the company's failure to prevent harmful content despite its public commitments.

Violent and misleading ads target Muslims

The report details how Meta approved ads containing inflammatory language against Muslims, including calls for violence such as "let's burn this vermin" and "Hindu blood is spilling, these invaders must be burned". Other ads included false accusations against opposition leaders and used AI-manipulated images to amplify their messages.

Tests conducted by accountability groups

The ads were submitted by India Civil Watch International (ICWI) and Ekō, a corporate accountability organization, to test Meta's ad monitoring mechanisms. The groups aimed to see whether Meta could detect and block inflammatory political content during the six-week election period, which concluded on June 1.

Failure to detect AI manipulations

According to the report, Meta's systems failed to identify the AI-generated nature of the images in the ads. Despite Meta's pledge to prevent the spread of AI-manipulated content during the election, 14 of the 22 submitted ads were approved. The approved ads, which the researchers immediately withdrew, violated Meta's policies on hate speech, misinformation, and violence.

Criticism from opposition, activists

Nara Lokesh, National General Secretary of the opposition Telugu Desam Party (TDP), criticized the ruling party and Meta for allowing such ads. He claimed that these actions demonstrated a bias in favor of the Bharatiya Janata Party (BJP) and highlighted the dangers of unchecked hate speech on social media platforms.

Meta's response and continued issues

A Meta spokesperson defended the company's processes, stating that advertisers are required to go through an authorization process and comply with applicable laws. Meta reiterated its commitment to removing content that violates community standards, including AI-generated content flagged by independent fact-checkers.

Ongoing concerns about hate speech

This is not the first time Meta has been accused of failing to control hate speech on its platforms in India. Previous reports have linked Facebook activity to real-life violence, including riots and lynchings. Despite efforts to expand fact-checking networks and improve oversight, the recent findings cast doubt on Meta's ability to manage content during critical elections.

Calls for stricter oversight

Maen Hammad of Ekō accused Meta of profiting from hate speech. "Supremacists, racists, and autocrats know they can use hyper-targeted ads to spread vile hate speech, and Meta will gladly take their money, no questions asked," he said. Hammad emphasized the need for Meta to develop more robust mechanisms to detect and prevent the spread of harmful content globally.

As India prepares for future elections, the report underscores the urgent need for social media platforms like Meta to enhance their monitoring systems. Ensuring fair and safe elections is crucial, and companies must take significant steps to prevent the spread of disinformation and hate speech.