Humanising Artificial Intelligence
Artificial intelligence (AI):
- It is a field of study in computer science that focuses on simulating intelligent behaviour in machines.
- It describes the process by which machines carry out actions that have traditionally required human intelligence.
- It encompasses machine learning, pattern recognition, big data, neural networks, self-learning algorithms, and other technologies.
- E.g.: A few instances of AI already present in our environment are Facebook’s facial recognition software, which recognises faces in the pictures we upload, and the voice recognition software that interprets our requests to Alexa.
Generative Artificial Intelligence:
- It is a state-of-the-art technological development that produces new types of media, including text, audio, video, and animation, by utilising artificial intelligence and machine learning.
- With the development of sophisticated machine learning capabilities, simple text prompts can now be used to create original, imaginative short- and long-form content, synthetic media, and even deepfakes.
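The idea that a text prompt steers generation can be illustrated with a toy sketch. The example below is purely illustrative: the corpus, function names, and sampling scheme are invented for this note, and real generative models use large neural networks trained on vast datasets rather than a bigram table.

```python
import random

# Tiny illustrative corpus; real models train on vast datasets.
corpus = ("generative ai creates text from prompts . "
          "generative models learn patterns from data . "
          "ai learns patterns from text .").split()

# Bigram table: for each word, the words observed to follow it.
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(prompt, length=5, seed=0):
    """Continue the prompt by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        candidates = bigrams.get(words[-1])
        if not candidates:  # no continuation observed for the last word
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("generative ai"))
```

The continuation always begins with the prompt itself, which is the essential point: the prompt conditions what the model produces next.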
Innovations in AI:
- Generative Adversarial Networks (GANs)
- Large Language Models (LLMs)
- Generative Pre-trained Transformers (GPTs)
- Image generation: commercial products such as DALL-E create images from text prompts.
- Text generation: ChatGPT can compose marketing copy, computer code, blogs, and even answers to search queries.
AI investments:
- Microsoft has decided to invest $10 billion in OpenAI.
- Bard, Google’s chatbot, was unveiled.
- NVIDIA, the world’s top GPU manufacturer, has a trillion-dollar market capitalisation.
- With the launch of Bedrock, Amazon gave its users access to Titan, one of its own large language models.
- Microsoft has incorporated generative models into the Windows 11 interface.
Current problems identified with AI:
- LLMs in particular and publicly deployed AI systems in general pose genuine risks.
- Some experts believe the arrival of artificial general intelligence (AGI) is certain and may pose an existential risk.
- AI’s thirst for data affects platform workers’ working conditions and dilutes privacy.
- The unpredictable and opaque nature of AI affects democratic processes.
- Risks arise when AI is applied to public use cases such as law enforcement and surveillance.
- AI systems have a propensity to repeat and deepen systemic problems.
The application of AI in practice:
- Governments cannot control, or even fully comprehend, AI because of its complexity.
International laws:
America’s AI executive order:
- In order to “ensure their products are safe,” the US government persuaded businesses including OpenAI, Microsoft, Amazon, Anthropic, Google, Meta, and others to follow “voluntary rules.”
- An “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” was signed by the US government.
The AI Act in Europe:
- In contrast to the US executive order, the EU law draws distinct boundaries.
- It makes it illegal for law enforcement to use untargeted, real-time remote biometric identification in publicly accessible areas.
- It outlaws emotion detection in the workplace, which is today acknowledged to be a dangerous pseudoscience.
- It forbids government agencies from creating social scores or credits through AI systems.
Problems with the Law:
- Emotion detection is exempt from regulation as long as it is not employed in the workplace.
- This leaves room for the deployment of this false and dangerous technology elsewhere.
- Virtual assistants and chatbots that have the potential to cause harm are not covered by the law; one prominent and dangerous example is when apps use chatbots to provide advice on mental and physical health.
- There is still no industrial policy on AI anywhere.
- The gap is instead filled with nebulous notions of “responsible AI” and “trust.”
Problems:
- Alignment: bringing AI into compliance with widely held human values.
- The rapid pace of AI advancement, driven by market demands, frequently overshadows safety issues and raises concerns about unrestrained AI growth.
- Governance: The central goal of AI governance is to guarantee the long-term safety and ethical use of AI technology; this is undermined by the absence of a cohesive worldwide approach to AI regulation.
- According to Stanford University’s AI Index, legislative bodies in 127 countries have passed 37 laws that include the term “artificial intelligence.”
- Given the glaring lack of international coordination and concerted action, the long-term risks linked with AI cannot be averted.
- If China remains unrestricted while other nations regulate, it is probable that China will acquire a competitive advantage in the development and use of AI.
- Uncontrolled advancement may result in the creation of AI systems that are at odds with international moral norms, raising the possibility of unanticipated and possibly permanent outcomes.
- It might lead to violence and destabilisation, jeopardising global peace and security.
- Countries with strict AI safety regulations may find themselves at a disadvantage, encouraging a race to the bottom in which safety and ethical considerations are disregarded in favour of rapid development and deployment.
- This unequal playing field may unintentionally encourage other countries to relax their regulatory regimes to remain competitive, further jeopardising global AI safety.
Concerns about Ethics in AI:
Way Ahead:
- Long-term risks increase when technology and warfare converge, so it is critical to address the dangers of military AI.
- To control comparably powerful technologies, the international community has established treaties such as the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), proving that the creation of international standards for AI use in combat is an urgent but realistic objective.
- Although there are many obstacles facing AI policy, more must be done to extend democratic voices and reverse the trend of giving a small number of tech companies complete control over the policy-making process.
- India has the potential to take the lead in regulating how AI treats children and adolescents, a crucial population in this regard.
- Rather than being prescriptive, regulation should support strong institutions, best practices, and standards that foster accountability, transparency, and trust.