The Prayas ePathshala


02 February 2024 – The Hindu


Generative Artificial Intelligence

  • Artificial Intelligence (AI) is a field of study in computer science that focuses on simulating intelligent behaviour in machines.
  • It describes the process by which machines carry out tasks that have traditionally required human intelligence.
  • It encompasses machine learning, pattern recognition, big data, neural networks, self-learning algorithms, and other technologies.
  • E.g.: familiar instances of AI include Facebook’s facial recognition software, which identifies faces in the pictures we upload, and the voice recognition software that interprets our requests to Alexa.

Generative AI:

  • It is a state-of-the-art technology that uses artificial intelligence and machine learning to produce new kinds of media, including text, audio, video, and animation.
  • With the development of sophisticated machine-learning capabilities, simple text prompts can now be used to create original, imaginative short- and long-form content, synthetic media, and even deepfakes.

Innovations in AI:

  • Generative Adversarial Networks (GANs)
  • Large Language Models (LLMs)
  • Generative Pre-trained Transformers (GPTs)
  • Commercially sold image-generation products, such as DALL-E
  • ChatGPT for text generation: it can compose marketing copy, computer code, blogs, and even search-query results.
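All of the models listed above learn statistical patterns from training data and then sample new content from those patterns. A toy illustration of that core idea, far simpler than a GAN or an LLM, is a Markov-chain text generator; the function names and parameters below are purely illustrative and not taken from any product named above:

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each run of `order` consecutive words to the words seen to follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=10, seed=0):
    """Sample a new word sequence by repeatedly picking an observed next word."""
    rng = random.Random(seed)
    key = rng.choice(list(model))      # start from a random observed context
    out = list(key)
    while len(out) < length:
        followers = model.get(tuple(out[-len(key):]))
        if not followers:              # context never seen: stop early
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

A real LLM replaces the lookup table with a neural network over billions of documents, which is precisely why the training data at issue in the lawsuit below matters so much.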

The lawsuit filed by the New York Times:

  • The newspaper claimed that generative artificial intelligence (GenAI) tools and large language models (LLMs) were trained on its content without permission.

The worries expressed by the NYT:

  • The New York Times argued that businesses build AI solutions by combining data from several sources.
  • Without authorisation or payment, they “seek to free-ride on the Times’s massive investment in journalism”, giving NYT content particular prominence.
  • As a result, readers may feel less need to visit the Times website.
  • This could reduce advertising and subscription income.
  • The suit also raised the problem of AI “hallucinations”, in which false content is wrongly attributed to The Times.
  • It demanded the destruction of all chatbot models and training data that used content copyrighted by The Times.

OpenAI’s Reaction:

  • OpenAI argued that training GenAI models with copyrighted content “serves a new ‘transformative’ purpose”, and that its actions should therefore be permitted as “fair use”.

Other situations:

  • AI services have been sued for scraping web content without permission or payment.
  • Several authors, including George R.R. Martin, Jonathan Franzen, and John Grisham, sued a number of GenAI organisations, alleging “systematic theft on a mass scale”.
  • In an open letter, authors including Philip Pullman and Margaret Atwood sought payment from AI businesses that use their works.
  • Some software developers have sued GitHub, Microsoft, and OpenAI, claiming that their code was misused in training Copilot.
  • Visual artists sued Stability AI, Midjourney, and DeviantArt for copyright infringement.
  • Stability AI was sued by Getty Images.
  • Universal Music Group urged Spotify and Apple Music not to allow its content to be used as training data for AI algorithms that create new music.

Consequences of the case:

  • It opens up fresh legal territory.
  • It might redraw the boundaries of intellectual property law in the United States.
  • At a time when most regulatory systems, including those in India, are lagging behind, it will set global standards.
  • Legally speaking, this is a classic example of how modern technology has left established legislation behind.
  • The success of Big Tech could discourage creators of original material.
  • GenAI businesses might have to pay content creators for its use if The NYT wins.
  • It would make GenAI models much more expensive.

The Way Ahead:

  • It is evident that publishers like The New York Times must accept AI as the “future”.
  • GenAI training requires vast amounts of data, yet LLMs like ChatGPT are not squarely covered by copyright rules, which largely date back to the era of the printing press.
  • Nevertheless, legislatures and/or courts must update the law to accommodate these ever-changing circumstances.
  • AI has already irreversibly “revolutionised” warfare through the use of unmanned lethal autonomous weapons systems, which the US Defence Department has ironically abbreviated to LAWS.
  • This reflects a near-total reliance on machine learning and a dehumanisation of military tactics, putting the project of international humanitarian law on hold.
  • It is imperative that the broader endeavour of “humanising” AI applications in all domains, military and civil, continues.
