Meta Uncovers Deceptive AI-Generated Content in Influence Campaigns Targeting U.S. and Canada


Meta announced on Wednesday that it had detected “likely AI-generated” content used deceptively on its Facebook and Instagram platforms. This content included comments praising Israel’s handling of the Gaza war, posted below updates from global news outlets and U.S. lawmakers.

In its quarterly security report, Meta revealed that the deceptive accounts impersonated Jewish students, African Americans, and other concerned citizens, primarily targeting audiences in the United States and Canada. Meta attributed this campaign to Tel Aviv-based political marketing firm STOIC.

STOIC has not yet responded to requests for comment on these allegations.

While Meta has identified basic AI-generated profile photos in influence operations since 2019, this report marks the first time the company has disclosed the use of text-based generative AI content since the technology emerged in late 2022. Researchers are increasingly worried that generative AI, capable of producing human-like text, images, and audio quickly and inexpensively, could enhance disinformation campaigns and influence elections.

During a press call, Meta’s security executives said they had promptly dismantled the Israeli campaign and that advanced AI technologies had not hindered their ability to disrupt such influence networks, which are coordinated attempts to propagate specific messages.

Executives also mentioned that they had not encountered AI-generated images of politicians that were realistic enough to be mistaken for genuine photos.

“There are several examples across these networks of how they use likely generative AI tooling to create content. Perhaps it gives them the ability to do that quicker or to do that with more volume. But it hasn’t really impacted our ability to detect them,” said Mike Dvilyanski, Meta’s head of threat investigations.

The report also detailed six covert influence operations that Meta disrupted in the first quarter of the year. Besides the STOIC network, Meta dismantled an Iran-based network focusing on the Israel-Hamas conflict, though no use of generative AI was found in that campaign.

Tech giants like Meta are grappling with the potential misuse of new AI technologies, especially during elections. Researchers have noted instances where image generators from companies like OpenAI and Microsoft produced photos containing voting-related disinformation, despite policies against such content. These companies have emphasized digital labeling systems to mark AI-generated content at its creation, though these tools do not work on text, and researchers question their effectiveness.

Source: Reuters
