Meta, the social media giant, has reported progress in its efforts to thwart coordinated disinformation campaigns fueled by generative AI, amid growing concerns about the technology's potential for misuse.
In its latest report on "coordinated inauthentic behavior," Meta highlighted the effectiveness of its existing defenses, even as fears rise that generative AI could be used to deceive voters in upcoming elections, particularly in the United States.
"What we've seen so far is that our industry's existing defenses, including our focus on behavior rather than content in countering adversarial threats, already apply and appear to be effective," said David Agranovich, Meta's threat disruption policy director, during a press briefing on Wednesday.
Despite these assurances, Agranovich acknowledged the evolving nature of AI-driven disinformation tactics, emphasizing that "we're not seeing generative AI being used in terribly sophisticated ways, but we know that these networks are going to keep evolving their tactics as this technology changes."
Meta's platforms, especially Facebook, have long been scrutinized for their role in spreading election disinformation. Notably, Russian operatives exploited Facebook and other US-based social media to sow political discord during the 2016 US presidential election. The European Union is also investigating Meta's Facebook and Instagram for alleged failures to combat disinformation ahead of the June EU elections.
Experts now worry about an unprecedented surge of AI-generated disinformation, facilitated by tools like ChatGPT and DALL-E, which can produce content swiftly and on demand. Meta acknowledged that "threat actors" have used AI to create fake photos, videos, and texts, but noted the absence of realistic imagery depicting politicians.
The report described instances of AI-generated profile pictures being used for fake accounts across Meta's apps. One notable case involved a Chinese network that used AI to create posters for a fictitious pro-Sikh activist movement called Operation K. In another, an Israel-based network posted AI-generated comments on the Facebook pages of media organizations and public figures; real users quickly identified the comments as propaganda.
Meta attributed this campaign to a Tel Aviv-based political marketing firm. Mike Dvilyanski, Meta's head of threat investigations, described the current landscape as "an exciting space to watch," noting that "so far, we haven't seen a disruptive use of generative AI tooling by adversaries."
The report also detailed ongoing efforts by a Russia-linked group called "Doppelganger" to undermine support for Ukraine. While these attempts have persisted, Meta reported success in neutralizing them, stating that Doppelganger's tactics remain "crude and largely ineffective in building authentic audiences on social media."
Furthermore, Meta removed clusters of inauthentic Facebook and Instagram accounts originating in China, targeting the Sikh community in countries including Australia, Canada, India, and Pakistan.
As generative AI technology continues to advance, Meta remains vigilant in adapting its strategies to counter emerging threats, aiming to maintain the integrity of its platforms in the face of evolving disinformation tactics.
More: https://techxplore.com/news/2024-05-meta-generative-ai-deception-held.html
