The Prayas ePathshala


13 August 2024 – The Indian Express


Issues associated with Artificial Intelligence

  • Political parties and the electorate cannot afford to overlook the Artificial Intelligence (AI) factor in light of the announcement of the seven-phase general election in India, scheduled to take place from April 19 to June 1, 2024. Besides India, the US, the UK, Mexico, and the Philippines, up to 50 more nations are expected to hold elections this year.
  • There is no denying AI’s potential. Many people, including Sam Altman of OpenAI in the US, consider it the most significant technological advancement in history. Proponents of AI also believe the technology will significantly elevate the standard of living of millions of people. For now, however, it remains uncertain whether powerful AI poses “existential risks” and will erode human values, or whether such fears are overstated.

What Effects Will AI Have on Elections Around the World?

  • Large language models cast a shadow over elections worldwide, and interested parties know that even a single, reasonably successful use of an AI-generated disinformation tool can have an outsized impact on campaign narratives and election outcomes.

The Emergence of Artificial General Intelligence:

  • Rapid technical advances in artificial intelligence (AI), particularly in its more recent incarnations such as generative AI, come with drawbacks. While it may be too soon to fully assess the potential effects of Artificial General Intelligence (AGI), that is, AI systems that match human capabilities, all of this points to an additional, unavoidable aspect of electoral dynamics.
  • Judging by the quick development of AI models, the world may be at a turning point in the history of human advancement. The rate at which new abilities are emerging suggests that generative AI will soon evolve into AGI, capable of replicating human capacities.

What Is the Difference Between AI and AGI?

  • AGI is a subset of AI, and one way to think of it is as a more general and capable form of AI:
  • Conventional AI is typically trained on data to carry out a range of tasks restricted to a specific environment or purpose. Many types of AI rely on algorithms or pre-programmed rules to direct their behaviour and teach them how to function in specific environments.
  • Artificial general intelligence, by contrast, possesses the ability to reason and to adapt to novel surroundings and diverse data types. Rather than relying on preset rules, AGI adopts a human-like learning and problem-solving strategy, and this versatility would allow it to perform a far wider range of activities across sectors and industries.

Artificial Intelligence as a Pioneer in Changing Election Behaviour:

  • People across the world are increasingly familiar with the application of AI models such as ChatGPT, Gemini, and Copilot in a variety of industries. But 2024 will show how the newer AI models can significantly influence voting patterns and results.
  • It would be a mistake to underestimate AI’s possible impact on the electoral scene. What does not happen in 2024 may very well happen in the next election cycle, in India as well as globally.
  • In support of “Deep Fake Elections”:
  • The use of AI may significantly add to the electorate’s confusion. Many people are already calling the global elections of 2024 the “Deep Fake Elections”, given the volume of AI-generated deep fakes in circulation.
  • Whether or not that label is fully deserved, the deep-fake condition seems inevitable, because newer and more powerful propaganda tools intended to undermine democratic processes appear with every election.
  • Misinformation and disinformation rank among the top 10 risks in the World Economic Forum’s (WEF) Global Risks Perception Survey. This is because large-scale AI models with user-friendly interfaces have enabled a surge in misleading information and “synthetic” content, such as sophisticated voice cloning and fake websites.
  • Given that AI models are far more persuasive than the bots and automated social media accounts that are currently the standard tools for spreading misinformation, AI could flood voters with highly personalised propaganda on a scale that might make the Cambridge Analytica scandal seem insignificant.
  • Social media giants such as Facebook and Twitter have drastically reduced their fact-checking and election integrity staff, which increases the risks.

Unavoidable Errors in These Models:

  • The extensive media coverage of a recent wave of Google-related errors is a helpful reminder that artificial intelligence (AI) and machine learning (ML) systems are not reliable in all situations. Google’s AI models have caused public outrage across the globe, especially in India, for inaccurately or malevolently portraying people and personalities. These episodes capture the perils of “runaway” AI.
  • Many AI models are plagued by inconsistencies and unreliability, which present inherent risks to society; the threat inevitably grows as their capability and use expand geometrically.
  • As countries depend more and more on AI solutions to solve their problems, it is important to recognise what many AI experts call AI’s “hallucinations.”
  • In particular, researchers note that generative models occasionally fabricate information when confronted with new problems. Because their outputs are probabilistic, such fabrications cannot be taken at face value. At this early stage of development, over-reliance on AI systems may therefore present difficulties.
  • AI also poses existential risks that cannot be disregarded; these go well beyond the harm caused by bias in development and design.
  • There are legitimate worries that AI systems are inherently vulnerable to adversarial manipulation, and no adequate methods have yet been found to minimise these vulnerabilities.
  • The primary forms of hostile capability, which eclipse other inherent vulnerabilities, are:
  • “Poisoning”, in which corrupted training data impairs an AI model’s capacity to generate pertinent predictions;
  • “Backdooring”, which causes the model to produce unreliable or harmful outcomes when triggered; and
  • “Evasion”, in which crafted hostile or destructive inputs are misclassified by the model, diminishing its capacity to carry out its designated function.
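The “evasion” idea above can be made concrete with a toy sketch. The model, weights, and inputs below are entirely hypothetical, not drawn from the article: a fixed linear classifier is fooled by nudging each feature slightly against the sign of its weight (the intuition behind gradient-based evasion attacks), so a small change to the input flips the decision.

```python
# Toy sketch of an "evasion" attack on a hypothetical linear classifier:
# a small, targeted perturbation flips the model's decision even though
# the input barely changes.

def classify(w, b, x):
    """Linear classifier: returns 1 if w.x + b > 0, else 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def evade(w, x, eps):
    """Nudge each feature against the sign of its weight,
    pushing the score toward (and past) the decision boundary."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w, b = [2.0, -1.0], 0.0        # assumed (made-up) model weights
x = [1.0, 1.0]                 # original input: score = 1.0 -> class 1
x_adv = evade(w, x, eps=0.6)   # slightly perturbed input

print(classify(w, b, x))       # 1
print(classify(w, b, x_adv))   # 0  (decision flipped by a small nudge)
```

Real evasion attacks on deep models follow the same logic, but compute the perturbation direction from the model’s gradients rather than from hand-picked weights.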

Inadequate Regulation:

  • India has a problem with the regulation of AI. The Indian government has alternated between a hands-off stance and a more cautious one focused on minimising user harm, which has created an environment conducive to misuse.
  • The case against AI regulation rests on a pro-innovation position, which stresses the need to encourage and adjust to the swift progress of AI technologies rather than limiting their expansion and societal integration through regulatory action.

Inadequate Monitoring by the Biggest AI Platforms:

  • The best-known generative AI firms for visual tools forbid users from producing “misleading” images. Nonetheless, researchers from the British nonprofit Centre for Countering Digital Hate (CCDH) were able to create misleading election-related images in over 40% of their attempts using four of the most popular AI platforms: Microsoft’s Image Creator, OpenAI’s ChatGPT Plus, Midjourney, and Stability.ai’s DreamStudio.
  • A public database shows that Midjourney users have produced fictitious images of Trump and Russian President Vladimir Putin playing golf, and of Joe Biden handing Israeli Prime Minister Benjamin Netanyahu wads of cash.

What Actions Are Needed to Address AI’s Effect on Elections?

  • An Agreement in Technology to Counteract Deceptive AI Use in 2024 Elections:
  • In February, at the Munich Security Conference, 22 businesses, including technology giants Amazon, Google, Microsoft, and Meta as well as AI developers and social media platforms, signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, promising to address threats to democracy in this election year. The accord is a voluntary set of guidelines and commitments in pursuit of seven main objectives:
  • Prevention: Investigating, funding, and/or implementing appropriate safety measures to reduce the possibility that purposefully false artificial intelligence election content may be produced.
  • Provenance: When appropriate and technically possible, attach provenance signals to content to identify its origin.
  • Detection: Using techniques like analysing provenance signals across platforms, one might try to find verified content or misleading AI election content.
  • Responsive Protection: Responding quickly and appropriately to situations where misleading artificial intelligence election content is produced and shared.
  • Assessment: Engaging in joint endeavours to assess and gain knowledge from the encounters and consequences of handling False AI Election Content.
  • Public Awareness: Taking part in joint initiatives to inform the public about the best practices for media literacy, specifically with regard to Deceptive AI Election information, and how individuals can safeguard themselves against being tricked or misled by this kind of information.
  • Resilience: Supporting the development and dissemination of defensive tools and resources, such as contextual features, AI-based solutions (including open-source tools when appropriate), and AI literacy and other public programmes, in order to safeguard public discourse, uphold the integrity of the democratic process, and fortify society against the use of manipulated AI election content.
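To make the “provenance signal” objective above more tangible, here is a minimal, illustrative sketch, not the accord’s actual mechanism: a publisher binds a keyed signature to a media file’s bytes so that platforms can later verify its origin and detect tampering. The key name and function names are invented for illustration; real provenance systems such as C2PA manifests are far richer (they carry signed edit histories, not just a single tag).

```python
# Illustrative provenance signal: an HMAC tag binds content bytes to a
# publisher's secret key; any edit to the bytes breaks verification.
import hashlib
import hmac

PUBLISHER_KEY = b"hypothetical-secret-key"  # held by the content creator

def attach_provenance(content: bytes) -> dict:
    """Return the content plus a provenance tag tied to the publisher."""
    tag = hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()
    return {"content": content, "provenance": tag}

def verify_provenance(record: dict) -> bool:
    """Recompute the tag over the stored bytes and compare securely."""
    expected = hmac.new(PUBLISHER_KEY, record["content"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["provenance"])

record = attach_provenance(b"campaign video bytes ...")
print(verify_provenance(record))                  # True
record["content"] = b"deepfaked video bytes ..."  # tampering
print(verify_provenance(record))                  # False
```

The design point the accord gestures at is exactly this asymmetry: attaching the signal is cheap for the legitimate publisher, while a forger without the key cannot produce a tag that verifies.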

Creating and Using Technology to Reduce Risks:

  • Acknowledging that all technical solutions have limitations, and assisting in the creation of innovations that help reduce the hazards of deceptive AI election material by identifying realistic AI-generated pictures and/or confirming the validity of material and its provenance.
  • Maintaining funding for the development of fresh ideas in audio, video, and image provenance technologies.
  • Aiming to attach machine-readable provenance data, where suitable, to realistic AI-generated audio, video, and image material produced by models covered by this agreement.
  • Deal With Deceptive AI Election Content Appropriately:
  • Attempting to respond appropriately to misleading AI election content found to be hosted on public distribution channels, while adhering to safety and free-speech norms.
  • This can entail, but is not restricted to, establishing and disseminating guidelines and making an effort to offer context on realistic AI-generated audio, video, or image material.

Taking Part in International Civil Society:

  • Maintaining contact, through established channels or events, with a wide range of international civil society organisations, academics, and other relevant subject-matter experts, so as to inform the companies’ understanding of the global risk landscape as they independently develop the technologies, tools, and initiatives they have described.

Increase Public Knowledge:

  • Encouraging public education campaigns about the risks this content poses and about how people can protect themselves from being tricked or misled by it, in order to raise public awareness and build societal resilience against deceptive AI election content.
  • This can occur through the creation and release of open-source tools that help others reduce these risks, through tools, interfaces, or processes that give users more insightful context about the content they see online, or through other means that support the efforts of communities and organisations addressing these risks.
  • The exponential growth of AI marks a major turning point in human history and could eventually lead generative AI to develop into Artificial General Intelligence (AGI) capable of mimicking human intelligence. As the world prepares for a wave of elections in 2024, in India and many other nations, the effects of AI on electoral dynamics are too important to ignore.
  • Artificial intelligence, especially in modern incarnations such as generative AI, presents opportunities as well as challenges in influencing voting patterns and results. As AI gains traction, it is critical to confront its disruptive potential, particularly in the context of elections, in order to protect democratic procedures and maintain the credibility of electoral systems.
