OpenAI disrupts actors using AI to influence elections
Microsoft-backed (NASDAQ:MSFT) OpenAI has identified and disrupted more than 20 instances of individuals or networks using its artificial intelligence models to influence elections, spread misinformation or sow societal divisions this year.
“Their activity ranged from debugging malware, to writing articles for websites, to generating content that was posted by fake personas on social media accounts,” OpenAI said in a 54-page report released today. “Activities ranged in complexity from simple requests for content generation, to complex, multi-stage efforts to analyze and reply to social media posts. They even included a hoax about the use of AI.”
For example, AI was used to generate content on social media sites concerning elections across the globe, including in the U.S., Rwanda, India and the E.U.
Of particular concern is the use of AI by governments to spread misinformation. In August, OpenAI said it disrupted “a covert Iranian influence operation that generated social media comments and long-form articles about the U.S. election, alongside topics including the conflict in Gaza, Western policies towards Israel, politics in Venezuela, and Scottish independence.”
The use of AI allows one person to do what previously would have required a large group.
“A pre-AI operation on this scale would likely have required a large team of trolls, with all the costs and leak risks associated with such an endeavor,” the report finds. “However, this operation’s reliance on AI also made it unusually vulnerable to our disruption.”
OpenAI also said it identified and banned several accounts linked to China that were conducting spear phishing attacks on OpenAI employees and governments around the world. Spear phishing is the sending of emails that appear to come from a known or trusted source in order to collect confidential information.
A cyber threat actor known as STORM-0817 used OpenAI models to mass-generate profiles for social media accounts, while another network, dubbed “Bet Bot,” used AI-generated profile photos for phony accounts on X.
However, OpenAI stressed that just as bad actors can use AI to do harm, the technology can also be used to crack down on suspicious activity.
“We have continued to build new AI-powered tools that allow us to detect and dissect potentially harmful activity,” according to the report. “While the investigative process still requires intensive human judgment and expertise throughout the cycle, these tools have allowed us to compress some analytical steps from days to minutes.”