In a first, OpenAI, the company behind the popular chatbot ChatGPT, has disrupted a covert influence campaign originating in Israel that sought to manipulate India's ongoing elections.
This campaign utilized OpenAI's AI models to generate substantial amounts of biased content that was then disseminated across various online platforms, including Telegram, X, Instagram, and others.
The primary targets of this campaign were audiences in Canada, the United States, and Israel, with English and Hebrew serving as the primary languages of communication.
In early May, however, the campaign expanded its scope to audiences in India, using English-language content to criticize the ruling BJP and promote the opposition Congress party.
OpenAI's swift response led to the disruption of the campaign's activities focused on India's elections within 24 hours of their commencement.
OpenAI's investigation found that although the operators used AI-generated content to boost their productivity, the campaign did not achieve any significant increase in audience engagement or reach.
Assessing the campaign's impact on the Breakout Scale, a framework for rating the reach of influence operations, OpenAI classified it as a Category 2 operation: posting activity on multiple platforms, but without substantial amplification by authentic audiences or external actors.
This report by OpenAI serves as a significant step in understanding how malicious actors are leveraging artificial intelligence to manipulate public opinion and influence political outcomes.
By exposing these deceptive tactics, OpenAI aims to strengthen the integrity of online discourse and protect democratic processes from undue interference.
As the technology continues to advance, vigilance and collaboration among tech companies, governments, and civil society organizations will be crucial to safeguarding public discourse and ensuring that AI is used responsibly.
Source: In a first, OpenAI removes influence ops ‘targeting’ Indian elections, BJP