Deepfakes in Elections
- The emergence of deepfakes in the electoral process raises serious concerns. Unlike traditional forms of misinformation, deepfakes undermine our ability to distinguish reality from fabrication, so we can no longer rely solely on technological solutions or interventions to verify information; the real challenge lies in our diminished trust in our own analysis.
- We were accustomed to encountering altered information, and we relied on trustworthy media organisations and alternative sources to validate it. Deepfakes now cast doubt on our judgement itself, challenging the faith we once had in our capacity to identify the truth.
Deepfakes: What Are They?
- Deepfakes are synthetic media generated with artificial intelligence (AI) that alter or fabricate visual and audio content in order to trick or mislead people.
Origin of the Term:
- The word “deepfake” was coined in 2017 by a Reddit user going by the handle “Deepfakes.”
- The user created and distributed pornographic videos using Google’s open-source deep learning technology.
How They Are Created:
- Deepfakes are created using a method called generative adversarial networks (GANs), which pits two competing neural networks against each other: a generator and a discriminator.
- The generator’s goal is to create fake photos or videos that closely mimic real-world content.
- The discriminator’s job is to distinguish between real and fabricated content (a minimal training sketch follows this list).
- Data Synthesis: Creating a deepfake requires a large amount of data, frequently scraped without permission from the internet or social media, including images or videos of both the source and the target individual.
- Deepfakes are part of Deep Synthesis, an umbrella term for technologies such as deep learning and augmented reality that generate text, images, audio, and video, including entire virtual scenes.
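To make the generator-versus-discriminator idea concrete, below is a minimal, hypothetical GAN training sketch in Python (PyTorch). It learns a toy 2-D distribution that stands in for real images; the network sizes, learning rates, and data are invented for illustration, and this is not a working deepfake generator.

```python
# A minimal GAN training sketch (illustrative only, not a deepfake pipeline).
# The "real" data is a toy 2-D Gaussian standing in for real images.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to fake samples that should resemble real data.
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                              nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for a batch of genuine images: points clustered around (2, 2).
    return torch.randn(n, 2) * 0.5 + 2.0

for step in range(2000):
    # Train the discriminator to separate real from generated samples.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print("Generated samples (should cluster near (2, 2)):")
print(generator(torch.randn(4, 8)).detach())
```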
What Benefits Do Deepfakes Offer in Elections?
Targeting and segmentation:
- Deep learning algorithms now let political parties and politicians evaluate large amounts of voter data, including voting history, social media activity, and demographics.
- Natural language processing (NLP) algorithms let campaigns analyse and interpret large volumes of textual data, such as posts on social media, news articles, and public forums, and target voters accordingly (a minimal sketch follows this list).
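As an illustration of this kind of NLP-driven segmentation, here is a minimal, hypothetical sketch using scikit-learn. The example posts and issue labels are invented; a real campaign system would work at far larger scale.

```python
# A toy sketch: classify voter posts by the issue they express, so messaging
# can be targeted. Posts and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "fuel prices are too high, my commute is unaffordable",
    "petrol costs keep rising every month",
    "we need more jobs for young graduates",
    "unemployment in my town is getting worse",
    "local schools need better teachers and funding",
    "classrooms are overcrowded and underfunded",
]
issues = ["economy", "economy", "jobs", "jobs", "education", "education"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(posts, issues)

# Route a new post to the issue bucket a campaign would target it under.
print(model.predict(["tuition fees and school quality worry me"]))
```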
Monitoring and adjusting in real-time:
- Parties can use deep learning-powered predictive analytics, often delivered through cloud-based AI services, to anticipate election results by examining a variety of signals, including social media sentiment, economic indicators, and polling data.
- AI algorithms continuously monitor a range of data sources, such as opinion polls, news coverage, and social media, to gauge public sentiment and spot emerging trends (a simple monitoring sketch follows this list).
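One simple way to picture such monitoring is a rolling average over daily sentiment scores. The sketch below uses invented numbers and pandas; it is a toy illustration of trend detection, not a real forecasting system.

```python
# Rolling-average trend monitoring over daily sentiment scores in [-1, 1].
# The scores below are invented; real systems would derive them from polls,
# news coverage, and social media feeds.
import pandas as pd

daily_sentiment = pd.Series(
    [0.10, 0.12, 0.08, 0.05, -0.02, -0.10, -0.15, -0.12, -0.20, -0.25],
    index=pd.date_range("2024-04-01", periods=10, freq="D"),
)

rolling = daily_sentiment.rolling(window=3).mean()

# Flag days where the 3-day average turns clearly negative.
alerts = rolling[rolling < -0.1]
print(alerts)
```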
Improved methods of communication:
- Artificial intelligence (AI) chatbots and virtual assistants with deepfake capabilities interact with voters on social media sites, answering questions, sharing details about candidates and policies, and even promoting voting.
Integrity and Security:
- Artificial intelligence (AI)-powered deepfake detection techniques are essential for identifying and stopping electoral fraud, which includes disinformation campaigns, voter suppression, and tampering with electronic voting equipment.
- AI systems help maintain election integrity by analysing data patterns and flagging anomalies.
Control & Supervision:
- Governments and electoral authorities use AI and deep learning technologies to track and regulate political advertising, spot violations of campaign funding laws, and ensure that election rules are followed.
- AI-powered systems facilitate election accountability and transparency.
- To ensure full transparency and rule out any possibility of manipulation, the Bihar State Election Commission, for instance, partnered with the artificial intelligence company Staqu in 2021 to deploy video analytics with optical character recognition (OCR) on CCTV footage from counting booths during the panchayat elections (a minimal OCR sketch follows below).
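The kind of OCR-based video analytics mentioned above can be illustrated with a minimal sketch. This is not Staqu’s actual system: it simply samples frames from a video file and runs open-source OCR on each one. The video filename and sampling interval are hypothetical, and it assumes OpenCV and pytesseract (with the Tesseract engine) are installed.

```python
# Minimal OCR-on-video sketch: sample frames and extract any visible text.
import cv2
import pytesseract

def extract_text_from_video(path, every_n_frames=30):
    """Yield (frame_index, recognised_text) for sampled frames."""
    capture = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            # OCR generally works better on grayscale frames.
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            text = pytesseract.image_to_string(gray).strip()
            if text:
                yield index, text
        index += 1
    capture.release()

# "counting_booth_cctv.mp4" is a hypothetical filename used for illustration.
for frame_no, text in extract_text_from_video("counting_booth_cctv.mp4"):
    print(frame_no, text)
```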
What are the Different Issues with Deepfakes in Elections?
Manipulation of Electoral Behaviour:
- Confusion and manipulation result from the production of deepfake content and the deluge of highly customised propaganda directed towards voters.
- AI can be used to create deepfake videos of opponents, damaging their reputation and skewing voter perceptions, giving rise to the idea of a “Deep Fake Election.”
- The phrase “Deep Fake Election” describes the use of artificial intelligence (AI) to create convincingly fake audio, video, and other content, endangering election integrity and eroding public confidence.
Disseminating false information:
- Through the dissemination of false information, deepfake models, in particular Generative Artificial Intelligence (AI), can subvert democratic processes.
- There are instances, such as in the 2024 Lok Sabha election, where a cloned voice of Mahatma Gandhi was used to promote a political party.
- Another instance is a deepfake video of a Member of Parliament (MP) from the ruling party that went viral on WhatsApp across the country, featuring him criticising the political opposition and urging people to support the ruling party.
- Social media companies are compounding the risk by scaling back their election-integrity and fact-checking efforts.
Errors and Untrustworthiness:
- The trustworthiness of the AI models behind deepfakes, including generative AI systems, is called into question due to their susceptibility to errors and inconsistencies.
- Instances of Google AI models misrepresenting people have drawn attention to the potential risks of unregulated AI.
- As AI models become more widely used, their inconsistencies pose inherent threats to society.
Ethical Issues:
- Deepfake election manipulation raises ethical concerns about fairness, privacy, and transparency.
- AI systems that reinforce biases present in their training data may unfairly treat or discriminate against certain voter groups.
- Opaque AI decision-making can erode public trust in election outcomes.
- Unequal access to AI resources can harm election fairness by advantaging better-resourced parties.
Regulatory Obstacles:
- Election-related deepfakes are difficult to control because of the speed at which technology is developing and the global reach of internet platforms.
- While AI-driven electoral operations are becoming more and more sophisticated, governments and election authorities find it difficult to keep up with the changes.
- Although they address issues of fake news and digital media ethics, current laws like the Information Technology Act, 2000 and the Indian Penal Code, 1860 do not specifically punish providers of AI and deepfake technology.
What Deepfake-related Government Initiatives Exist?
- IT Act, 2000 and IT Rules, 2021: Under these provisions, social media platforms must act promptly to remove deepfake images or videos; failure to do so can attract a penalty of up to three years’ imprisonment or a fine of Rs 1 lakh.
- Section 66D of the IT Act: Under Section 66D of the IT Act, 2000, a person may be imprisoned for up to three years and fined up to one lakh rupees if they impersonate someone while using a communication device or computer resource.
- Rule 3(1)(b)(vii) of the IT Rules, 2021 requires social media intermediaries to ensure that users do not host any content that impersonates another person.
- Rule 3(2)(b) mandates that such content be taken down within 24 hours of receiving a complaint.
- Fact Check Unit under the Press Information Bureau (PIB): Established in November 2019, with the declared goal of serving as a deterrent to those who produce and distribute false information and fake news.
- It also gives citizens a simple way to report information about the Government of India that appears suspicious or dubious.
How Can the Abuse of Deepfakes in Elections Be Combated?
Regulations:
- Enact stringent legislation that addresses the production, distribution, and application of deepfake content for electoral meddling.
- Example: passing new laws that criminalise the production and distribution of malicious deepfake content during election season, or amending the Information Technology Act, 2000 and the Indian Penal Code, 1860.
Election Commission Guidelines:
- Guidelines issued by the Election Commission of India are one potential remedy for deepfake- and AI-fuelled misinformation in the context of the 2024 Lok Sabha elections.
- Regulations requiring transparency in the use of AI algorithms for political ends must be put in place.
- This entails making the funding sources of political ads public and requiring platforms to disclose how their algorithms select the content users see.
Solutions Based on Technology:
- Develop sophisticated AI tools and algorithms to detect and verify deepfake content in real time.
- For instance, the DeepTrust Alliance, a partnership of tech firms and academic institutions, created DeepTrust Analyzer, a programme that uses machine learning to detect deepfake images and videos.
- Research institutes and Indian tech companies could collaborate to create deepfake detection algorithms tailored to Indian languages and cultural contexts (a minimal detection sketch follows this list).
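As an illustration of what a frame-level detection model might look like, here is a minimal, hypothetical sketch of a small convolutional classifier. The random tensors stand in for a labelled dataset of real and fake face crops; this is not any specific organisation’s detection algorithm, and the architecture and hyperparameters are invented.

```python
# A toy frame-level deepfake detector: a small CNN trained to label face
# crops as real (0) or fake (1). Random tensors are placeholder data only.
import torch
import torch.nn as nn

torch.manual_seed(0)

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # assumes 64x64 input face crops
)

optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder batch: 8 "face crops" with labels (1 = fake, 0 = real).
frames = torch.randn(8, 3, 64, 64)
labels = torch.tensor([[1.], [0.], [1.], [0.], [1.], [0.], [1.], [0.]])

for epoch in range(5):
    logits = detector(frames)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

# At inference time, sigmoid(logit) > 0.5 would flag a frame as likely fake.
print(torch.sigmoid(detector(frames[:2])))
```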
Campaigns for Education and Awareness:
- Start public awareness campaigns to inform voters about deepfake technology’s existence and possible effects on elections.
- For instance, the Indian government might collaborate with media outlets and well-known public figures to produce public service announcements (PSAs) that raise awareness of deepfakes and urge voters to scrutinise what they see and hear before casting their ballots.
More Accurate Fact-Checking:
- It’s imperative to set up a Rapid Response Team to deal with the spread of false information during elections, including deepfakes and fake news.
- Fake videos and misinformation will inevitably surface; the key is to act quickly and contain them before they escalate and spread widely.
Collaborative Efforts:
- Encourage cooperation between civil society organisations, digital corporations, and governments to create coordinated responses to deepfake dangers.
- One example is the Deepfake Detection Challenge, organised by Facebook, Microsoft, and several universities, which calls on researchers to develop methods for identifying and countering deepfake videos.
Learning from Global Practices:
- China’s Regulatory Strategy: China places a strong emphasis on getting consent and confirming identities when using deepfake technologies. Companies that use these technologies are required to get consent from people who are portrayed in them and to authenticate the identity of users. In addition, steps are taken to make it easier for people who have been harmed by deepfake content to take legal action.
- Canada’s Preventive Approach: Canada is focusing on preventing the harms of deepfakes by planning future legislation and launching extensive public awareness campaigns to inform the public about the dangers of deepfake technology.
Encouraging Ethical AI:
- Promote the development of AI technology with ethical principles front and centre, prioritising goals such as reducing bias, protecting privacy, and ensuring transparency.
- Establish institutional guidelines and procedures outlining the responsible use of AI in the political sphere.