
OpenAI says cyber actors are increasingly using its platform to try to interfere in elections

Jaap Arriens | NurPhoto via Getty Images

OpenAI is increasingly becoming the platform of choice for cyber actors seeking to influence democratic elections around the world.

In a 54-page report published on Wednesday, the creator of ChatGPT said it has disrupted “more than 20 operations and deceptive networks from around the world that attempted to use our models.” The threats ranged from AI-generated website articles to social media posts by fake accounts.

The company said its update on “influence and cyber operations” is intended to provide a “snapshot” of what it is seeing and to identify “an initial set of trends that, we believe, can inform debate on how AI fits into the broader threat landscape.”

The OpenAI report comes less than a month before the US presidential election. Beyond the US, it is a major election year worldwide, with contests affecting more than four billion people in more than 40 countries. The rise of AI-generated content has heightened concerns about election-related misinformation, with the number of deepfakes created increasing 900% year over year, according to data from Clarity, a machine learning firm.

Election misinformation is nothing new. It has been a major problem since the 2016 US presidential campaign, when Russian actors found cheap and easy ways to spread false content across social media. In 2020, social media was flooded with misinformation about Covid vaccines and election fraud.

Lawmakers' concern today is focused on the rise of generative AI, which took off in late 2022 with the launch of ChatGPT and is now being adopted by companies of all sizes.

OpenAI wrote in its report that election-related uses of AI “ranged in complexity from simple requests for content generation, to complex, multi-stage efforts to analyze and reply to social media posts.” The social media content related mostly to elections in the US and Rwanda, and to a lesser extent elections in India and the EU, OpenAI said.

In late August, an Iranian operation used OpenAI products to generate “long-form articles” and social media comments about the US election, among other topics, but the company said the majority of the identified posts received few or no likes, shares or comments. In July, the company banned ChatGPT accounts in Rwanda that were posting election-related comments on X. And in May, an Israeli company used ChatGPT to generate social media comments about elections in India. OpenAI wrote that it was able to address that case within 24 hours.

In June, OpenAI addressed a covert operation that used its products to generate comments about European Parliament elections in France, as well as politics in the US, Germany, Italy and Poland. The company said that while most of the social media posts it identified received few likes or shares, some real people did reply to the AI-generated posts.

None of the election-related activities were able to attract “viral engagement” or build “sustained audiences” using ChatGPT and other OpenAI tools, the company wrote.



