Gender Discrimination in Artificial Intelligence

An International Human Rights Law Perspective and the Quest for Binding Regulation

Authors

  • Ayu Riska Amalia, Universitas Mataram

DOI:

https://doi.org/10.29303/ulrev.v9i2.450

Keywords:

Gender Discrimination, Artificial Intelligence, Human Rights

Abstract

Artificial intelligence (AI) has reshaped the way people work and interact. While AI provides convenience, it also poses significant challenges to human rights, particularly gender equality. The use of AI in recruitment, healthcare diagnosis, and content moderation illustrates how it can exacerbate existing inequalities. This study employs a normative juridical method with a qualitative approach, analyzing primary instruments of international human rights law such as the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR), the International Covenant on Economic, Social and Cultural Rights (ICESCR), and the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW). It also examines non-binding frameworks, namely the UNESCO Recommendation on the Ethics of AI and the OECD AI Principles, and compares them with the binding EU AI Act. The findings indicate that AI has the potential to violate the fundamental rights of women, including the rights to equality and non-discrimination, work, privacy, health, and participation in public and political life, as well as representation and identity. Furthermore, soft-law mechanisms remain insufficient to prevent gender bias, as their implementation relies heavily on states’ political will. Nevertheless, states have a positive obligation under international law to respect, protect, and fulfil the right to equality; a binding international legal framework is therefore urgently needed to ensure accountability and gender-sensitive AI governance.

References

Amnesty International USA. (2020, January 23). New study shows shocking scale of abuse on Twitter against women politicians in India. https://www.amnestyusa.org/press-releases/shocking-scale-of-abuse-on-twitter-against-women-politicians-in-india/

Benjamin, R. (2020). Race after technology: Abolitionist tools for the new Jim Code. Cambridge: Polity Press.

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). ACM. https://s10251.pcdn.co/pdf/2021-bender-parrots.pdf

Bendett, S., Boulègue, M., Connolly, R., Konaev, M., Podvig, P., & Zysk, K. (2021, September). Advanced military technology in Russia: Capabilities and implications. Chatham House. https://www.chathamhouse.org/sites/default/files/2021-09/2021-09-23-advanced-military-technology-in-russia-bendett-et-al.pdf

BHF Press Office. (2016, August 30). Women are 50 per cent more likely than men to be given incorrect diagnosis following a heart attack. British Heart Foundation. https://www.bhf.org.uk/what-we-do/news-from-the-bhf/news-archive/2016/august/women-are-50-per-cent-more-likely-than-men-to-be-given-incorrect-diagnosis-following-a-heart-attack

Bloomberg. (2024, February 29). Inside Project Maven: The US military's AI project. Bloomberg. https://www.bloomberg.com/news/newsletters/2024-02-29/inside-project-maven-the-us-military-s-ai-project

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of Machine Learning Research, vol. 81, Conference on Fairness, Accountability, and Transparency (pp. 1–15). https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf

Carnegie Endowment for International Peace. (2020, November 30). Tackling online abuse and disinformation targeting women in politics. https://carnegieendowment.org/research/2020/11/tackling-online-abuse-and-disinformation-targeting-women-in-politics

Chang, J.-P.-C., Cheng, S.-W., Chang, S. M.-J., & Su, K.-P. (2025). Navigating the digital maze: A review of AI bias, social media, and mental health in Generation Z. AI, 6(6), 118. https://doi.org/10.3390/ai6060118

De Hert, P., & Gutwirth, S. (2006). Privacy, data protection and law enforcement: Opacity of the individual and transparency of power. In Privacy and the Criminal Law (p. 75). Antwerp: Intersentia.

Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint, arXiv:1702.08608.

European Parliament & Council. (2024, June 13). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L series, 2024/1689 (12 July 2024). https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689

Farhad, S. (2025, May 6). Passengers in flight: AI governance capacity in the Global South. Digital Society, 4, Article 39. https://doi.org/10.1007/s44206-025-00195-6

Ferrara, E. (2023, April). Should ChatGPT be biased? Challenges and risks of bias in large language models. SSRN Preprint. https://doi.org/10.2139/ssrn.4627814

Ferrara, E. (n.d.). The butterfly effect in artificial intelligence systems: Implications for AI bias and fairness. SSRN Preprint.

Geva, M., Goldberg, Y., & Berant, J. (2019). Are we modeling the task or the annotator? An investigation of annotator bias in natural language understanding datasets. arXiv preprint, arXiv:1908.07898.

Hossain, K. (2005). The concept of jus cogens and the obligation under the U.N. Charter. Santa Clara Journal of International Law, 3(1), 73–98.

Joshi, A. (2024, October 16). Big data and AI for gender equality in health: Bias is a big challenge. Frontiers in Big Data, 7, 1436019. https://doi.org/10.3389/fdata.2024.1436019

Kayser-Bril, N. (2021, March 29). Automated translation is hopelessly sexist, but don’t blame the algorithm or the training data. AlgorithmWatch. https://algorithmwatch.org/en/automated-translation-sexist/

Lacroix, C. (2020, September). Preventing discrimination caused by the use of artificial intelligence. Council of Europe, Committee on Equality and Non-Discrimination. https://assembly.coe.int/LifeRay/EGA/Pdf/TextesProvisoires/2020/20200915-PreventingDiscriminationAI-EN.pdf

Lee, R. S. T. (2020). Artificial intelligence in daily life. Singapore: Springer. https://doi.org/10.1007/978-981-15-7695-9

Manasi, A., Panchanadeswaran, S., & Sours, E. (2023, March 17). Addressing gender bias to achieve ethical AI. The Global Observatory. https://theglobalobservatory.org/2023/03/gender-bias-ethical-artificial-intelligence/

Munro, R., Bethard, S., Kuperman, V., Lai, V. T., Melnick, R., Potts, C., Schnoebelen, T., & Tily, H. (2010). Crowdsourcing and language studies: The new generation of linguistic data. In NAACL Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk (pp. 122–130). Association for Computational Linguistics.

Office of the High Commissioner for Human Rights (OHCHR). (2014). Women’s rights are human rights. Geneva: United Nations. http://www.ohchr.org/Documents/Events/WHRD/WomenRightsAreHR.pdf

Office of the High Commissioner for Human Rights (OHCHR). (2021). The right to privacy in the digital age: Report of the United Nations High Commissioner for Human Rights. Geneva: United Nations.

Pagan, N., et al. (n.d.). A classification of feedback loops and their relation to biases in automated decision-making systems. arXiv preprint.

Prates, M., Avelar, P., & Lamb, L. C. (n.d.). Assessing gender bias in machine translation: A case study with Google Translate. arXiv preprint.

Rodrigues, R. (2020). Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology, 4, 100005. https://doi.org/10.1016/j.jrt.2020.100005

Serhan, Y. (2024, December 18). How Israel uses AI in Gaza—and what it might mean for the future of warfare. Time. https://time.com/7202584/gaza-ukraine-ai-warfare

Sheikh, H., Prins, C., & Schrijvers, E. (2023). Artificial intelligence: Definition and background. In Mission AI. Research for Policy (p. 15). Cham: Springer. https://doi.org/10.1007/978-3-031-21448-6_2

Smith, G. (2024, April 3). How to make AI equitable in the Global South. Stanford Social Innovation Review. https://ssir.org/articles/entry/equitable-ai-in-the-global-south

Solove, D. J. (2008). Understanding privacy. Cambridge, MA: Harvard University Press.

Tai, M. C.-T. (2020). The impact of artificial intelligence on human society and bioethics. Tzu Chi Medical Journal, 32(4), 339–343. https://doi.org/10.4103/tcmj.tcmj_71_20

Tony Blair Institute for Global Change. (2025, February 6). How leaders in the Global South can devise AI regulation that enables innovation. https://institute.global/insights/tech-and-digitalisation/how-leaders-in-the-global-south-can-devise-ai-regulation-that-enables-innovation

United Nations. (1948). Universal Declaration of Human Rights (UDHR), Articles 2 & 12.

United Nations. (1966). International Covenant on Civil and Political Rights (ICCPR), Article 17.

United Nations. (1979). Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), Articles 2, 3, & 7.

United Nations Committee on the Elimination of Discrimination against Women. (2010). General recommendation No. 28 on the core obligations of States parties under Article 2 of the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW).

United Nations Human Rights Committee. (1988). General Comment No. 16 on Article 17 (Right to Privacy), para. 10.

UNESCO. (2021). Recommendation on the ethics of artificial intelligence. Paris: UNESCO.

UNESCO. (2024, March 7). Generative AI: UNESCO study reveals alarming evidence of regressive gender stereotypes. https://www.unesco.org/en/articles/generative-ai-unesco-study-reveals-alarming-evidence-regressive-gender-stereotypes

UN Women. (2024, May 22). Artificial intelligence and gender equality. https://www.unwomen.org/en/news-stories/explainer/2024/05/artificial-intelligence-and-gender-equality

Unger, N., & McLean, M. (2025, August 13). An open door: AI innovation in the Global South amid geostrategic competition. Washington, DC: Center for Strategic and International Studies.

Ünver, H. A. (2024). Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights (PE 754.450). Policy Department for External Relations, Directorate General for External Policies of the Union, European Parliament.

USAID. (2024). AI in global development playbook. Washington, DC: United States Agency for International Development. https://www.usaid.gov/sites/default/files/2024-09/Artificial%20Intelligence%20in%20Global%20Development%20Playbook.pdf

Wallace, R. M. M. (1994). International law (2nd ed.). London: Sweet & Maxwell.

Wan, Y., Pu, G., Sun, J., Garimella, A., Chang, K.-W., & Peng, N. (2023). “Kelly is a warm person, Joseph is a role model”: Gender biases in LLM-generated reference letters. arXiv preprint, arXiv:2310.09219. https://doi.org/10.48550/arXiv.2310.09219

West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, race and power in AI. New York: AI Now Institute. https://ainowinstitute.org/publications/discriminating-systems-gender-race-and-power-in-ai-2

Women at the Table. (2025, May 27). Multilateral leadership in AI and gender equality. https://www.womenatthetable.net/2025/05/27/multilateral-leadership-in-ai-and-gender-equality/

World Economic Forum. (2023). Future of jobs report 2023 – New jobs to emerge, upskilling is key [Video]. https://www.weforum.org/videos/foj-job-market

World Population Review. (2025). Organisation for Economic Co-operation and Development (OECD) countries 2025. https://worldpopulationreview.com/country-rankings/oecd-countries

Published

2025-10-14

How to Cite

Ayu Riska Amalia. (2025). Gender Discrimination in Artificial Intelligence: An International Human Rights Law Perspective and the Quest for Binding Regulation. Unram Law Review, 9(2). https://doi.org/10.29303/ulrev.v9i2.450