Meta and OpenAI say they disrupted influence operations linked to Israeli company

OpenAI said in a report that the Israeli firm operated a network that posted anti-Hamas and pro-Israel content across the web.

OpenAI and Meta both disrupted covert influence operations linked to the same for-profit organization in Israel, the companies disclosed in separate transparency reports this week.

The two companies announced within a day of each other that STOIC, a political marketing and business intelligence firm based in Tel Aviv, had been using their products to covertly manipulate political conversations online.

OpenAI, the generative artificial intelligence company behind ChatGPT, revealed in a report Thursday that it had banned a network of accounts operated by STOIC, which it described as a “for-hire Israeli threat actor” posting anti-Hamas and pro-Israel content along with other political material.

Meanwhile, in its quarterly adversarial threat report released Wednesday, Meta confirmed that it had also removed 510 Facebook accounts, 11 pages and one group, as well as 32 Instagram accounts, linked to the same operation. Meta said it had banned STOIC from its platforms and issued a cease-and-desist letter “demanding that they immediately stop activity that violates Meta’s policies.”

OpenAI and Meta did not immediately respond to requests for comment. An email sent to the contact address listed on STOIC’s website bounced, and messages sent to a person listed there as the company’s chief technology officer went unanswered.

On its website, STOIC touts its generative AI content creation system as helping users “automatically create targeted content and organically distribute it quickly to the relevant platforms.”

The companies cracked down on the apparently AI-driven operation amid growing concern over how increasingly sophisticated generative AI tools could be used to propagate misinformation ahead of this year’s U.S. presidential election. Twenty tech companies — including Meta, Microsoft and Google — signed a pledge this year to try to prevent AI from interfering in elections.

OpenAI’s report said STOIC used OpenAI’s models to generate and edit web articles, as well as social media comments later posted across Facebook, Instagram, X, YouTube and Telegram.

The network also faked engagement, according to the report: After the accounts posted their comments about the conflict in Gaza, other accounts in the operation would reply with text that was likewise generated using OpenAI’s models. Meta added that the accounts on its platforms appeared to have bought likes and followers from Vietnam.

OpenAI said the accounts used its models to write fictitious social media bios and that many of their profile pictures appeared to have been AI-generated as well, with some accounts using the same photo for supposedly different people. Meta said the accounts “posed as locals in the countries they targeted, including as Jewish students, African Americans and ‘concerned’ citizens.”

OpenAI also linked the operation to several websites — namely uc4canada.com, the-good-samaritan.com, ufnews.io and nonagenda.com — which its report said posed as activist groups focused on Gaza and broader Jewish-Muslim relations.

STOIC’s operation largely targeted audiences in the U.S. and Canada, posting in English and Hebrew about the Israel-Hamas war, the companies said. The network’s accounts praised Israel’s military actions, criticized the United Nations Relief and Works Agency for Palestine Refugees in the Near East and accused pro-Palestinian protesters on American college campuses of promoting antisemitism and terrorism, among other focal points.

New accounts continued to appear even as the platforms identified and disabled older ones, the companies said. But their reports noted that the operation attracted little, if any, authentic engagement from accounts outside its own network. Meta said that when the operation’s accounts commented on the Facebook pages of media organizations or political and public figures, authentic users often responded critically and called the comments out as propaganda.

“We found and removed this network early in its audience building efforts, before they were able to gain engagement among authentic communities,” Meta wrote in its report.

Both OpenAI and Meta also reported disrupting covert influence operations based in Russia and China, with networks in both countries using similar tactics and appearing to incorporate AI tools in their propaganda.

Analyzing common trends across the operations it identified, OpenAI wrote in a blog post that even when using AI, the human actors behind the networks were “just as prone to human error” as people have always been, such as by posting an AI model’s refusal message rather than an actual output.

“AI can change the toolkit that human operators use, but it does not change the operators themselves,” the company wrote, adding, “While it is important to be aware of the changing tools that threat actors use, we should not lose sight of the human limitations that can affect their operations and decision making.”

This story first appeared on NBCNews.com.

Copyright NBC News