AI Bias
Context:
- A devastating analysis that Meta commissioned this year found that the company’s actions in May 2021 had a negative effect on human rights in Palestine, detailing how its practices violated the rights of Palestinian Instagram and Facebook users during the 2021 Gaza attacks.
Additional news:
- According to a Guardian report, a new WhatsApp feature that generates images in response to user queries appears to encourage anti-Palestinian bias, if not outright racism. WhatsApp, like Facebook and Instagram, is owned by Meta.
- Searches for “Palestinian” and “Palestinian boy” returned pictures of children brandishing guns.
- By contrast, searching for “Israeli boy” displayed cheerful or active children, and searching for “Israeli army” turned up images of smiling, devout, unarmed soldiers in uniform.
- There has been more to the debate around AI-generated stickers than meets the eye.
- Meta’s social media platforms have faced allegations of bias in how they treat content that supports or opposes Palestine.
AI: What is it?
- Artificial Intelligence (AI) is the capacity of a computer or a computer-controlled robot to perform tasks that are typically performed by humans because they call for human judgement and intelligence.
- While AI cannot accomplish the vast array of jobs that a human can, it can complement humans in certain areas.
Features & Constituents:
- The ideal quality of artificial intelligence is the capacity to reason and take actions that maximise the likelihood of achieving a given objective. Machine Learning (ML) is a subset of AI.
- Much of this autonomous learning is made possible by Deep Learning (DL) algorithms, which ingest vast volumes of unstructured data, including text, images, and video (a minimal code sketch of the idea follows).
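To make the learning idea concrete, here is a minimal, illustrative sketch. The dataset, the feature meanings, and the pass/fail labels are all invented for this example; the point is only that the model infers its rule from labelled data rather than from hand-written logic.

```python
# A minimal machine-learning sketch: the model infers a decision rule
# from labelled examples instead of following hand-written rules.
# Requires scikit-learn; the toy data below is invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each example: [hours_studied, classes_attended]; label 1 = passed.
X = [[2, 4], [1, 3], [8, 9], [7, 8], [3, 2], [9, 10]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)  # "learning" = fitting parameters to the examples

print(model.predict([[6, 7]]))  # predicts a label for an unseen student
```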
What is bias in AI?
- AI bias refers to a systematic anomaly in the results generated by a machine learning system.
- In artificial intelligence, bias occurs when a machine consistently produces different results for one group of users than for another.
- These biased outputs typically reflect entrenched cultural biases related to age, gender, race, or biological sex.
- The source may be prejudices in the training data or biased assumptions made during the algorithm-building process; the sketch below shows how skewed training data produces skewed outputs.
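The following sketch illustrates the definition above: a model trained on historically skewed data reproduces that skew, giving different results for otherwise identical users. All data here is synthetic and deliberately exaggerated.

```python
# Sketch: a model trained on historically skewed data reproduces the
# skew. The dataset below is synthetic and exaggerated for illustration.
from sklearn.linear_model import LogisticRegression

# Features: [qualification_score, group]; in this invented history,
# group 1 applicants were rarely approved regardless of score.
X = [[7, 0], [8, 0], [6, 0], [9, 0], [7, 1], [8, 1], [6, 1], [9, 1]]
y = [1, 1, 1, 1, 0, 0, 0, 1]

model = LogisticRegression().fit(X, y)

# Statistical parity check: positive-prediction rate per group.
for group in (0, 1):
    candidates = [[score, group] for score in range(5, 10)]
    rate = sum(model.predict(candidates)) / len(candidates)
    print(f"group {group}: approval rate {rate:.2f}")
```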
Which kinds of AI bias exist?
- Cognitive bias: the unintentional errors in reasoning that influence people’s assessments and choices.
- These biases may inadvertently find their way into machine learning algorithms through the decisions of designers or through the use of biased training datasets.
- Absence of complete data: incomplete data may carry bias because it is not representative of the wider population (see the sketch below).
- The “black box” effect: the opacity of many AI models makes it more challenging to identify the cause of a biased output.
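A short sketch of the incomplete-data problem: statistics computed from an unrepresentative sample misestimate the population. The numbers are invented purely to illustrate sampling bias.

```python
# Sketch: statistics computed from unrepresentative (incomplete) data
# misestimate the population. The numbers are invented to illustrate
# sampling bias, not drawn from any real dataset.
population = [150, 155, 160, 165, 170, 175, 180, 185]  # heights in cm

# "Incomplete" data: only the tallest subgroup was ever recorded.
sample = [h for h in population if h >= 175]

print(sum(population) / len(population))  # true mean: 167.5
print(sum(sample) / len(sample))          # biased estimate: 180.0
```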
How may these biases be corrected?
- Much research has been conducted recently on bias in machine learning (ML) and artificial intelligence (AI) models.
- Because the programmes themselves are amoral, they mirror, and can even reinforce, the biases present in their training data.
- Addressing these biases therefore takes proactive intervention and perhaps regulation.
- Blind Taste Test Mechanism: this method determines whether an AI system’s output is influenced by a particular factor, such as a person’s gender, race, socioeconomic status, or sexual orientation (see the first sketch after this list).
- Open-Source Data Science (OSDS): making the code available to a developer community can help surface and reduce bias in an AI system.
- “Human-in-the-loop” systems combine machine predictions with human judgement to perform tasks that are beyond the capabilities of either a computer or a human working alone (see the second sketch after this list).
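First, a sketch of the blind taste test: hold every input fixed, flip only the sensitive attribute, and check whether the output changes. The loan model below is a deliberately biased stand-in, invented for this illustration.

```python
# Sketch of a "blind taste test": flip only the sensitive attribute
# and check whether the system's output changes. The model here is a
# deliberately biased stand-in, invented for this illustration.

def loan_model(income: int, group: int) -> bool:
    """A toy model that improperly penalises group 1."""
    threshold = 50_000 if group == 0 else 70_000
    return income >= threshold

def blind_taste_test(income: int) -> bool:
    """Return True if the decision is identical regardless of group."""
    return loan_model(income, group=0) == loan_model(income, group=1)

for income in (40_000, 60_000, 80_000):
    verdict = "consistent" if blind_taste_test(income) else "BIASED"
    print(income, verdict)
```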
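Second, a sketch of the human-in-the-loop pattern: the machine handles cases it is confident about and defers uncertain ones to a person. The confidence function and the 0.9 threshold are invented for illustration.

```python
# Sketch of a human-in-the-loop pattern: the machine auto-handles
# high-confidence cases and escalates uncertain ones to a person.
# The confidence function and threshold are invented for illustration.

def model_confidence(text: str) -> float:
    """Stand-in for a real model's confidence score (0.0 to 1.0)."""
    return 0.95 if "refund" in text else 0.55

def route(text: str) -> str:
    if model_confidence(text) >= 0.9:
        return "auto-handled by the model"
    return "escalated to a human reviewer"  # the human closes the loop

print(route("I want a refund"))       # auto-handled by the model
print(route("My account looks odd"))  # escalated to a human reviewer
```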
Way Forward:
- No search should portray members of a community, least of all children, as innately aggressive. For all the benefits AI offers, it should not be used to dehumanise entire communities.