
AI Ate My Résumé: The Fight Against AI Gender Bias

Helene de Taeye
scary robot representing AI looming over a woman
AI-generated image

Artificial intelligence (AI) is becoming a part of our everyday lives—from job applications to medical diagnoses. While the technology holds great promise, there’s a serious issue that can’t be ignored. Far from being neutral, AI often reflects societal prejudices, including those related to gender. As these systems take on more decision-making roles, the risks for women are significant, especially when it comes to fairness in the workplace, financial services, and healthcare.





AI Is Biased


AI’s bias stems from the data it is trained on and the people who design it. These systems are often built on historical data that reflects societal inequalities, and when AI development teams lack diversity, those biases go unchecked. The consequences can be life-altering, as these examples illustrate:


  1. Healthcare Diagnostics: AI systems used for medical diagnoses have shown biases, particularly when trained on male-dominated datasets. One striking example is the diagnosis of heart attacks. Women often exhibit different symptoms than men, yet many AI models are trained predominantly on data from male patients, resulting in missed or delayed diagnoses for women, with consequences ranging from serious harm to death.

  2. Predictive Policing: AI tools used in law enforcement have been criticized for racial and gender bias, particularly in predictive policing algorithms. These systems, trained on historical crime data, are more likely to target communities of color, including women of color, reinforcing patterns of over-policing in already marginalized communities. This kind of bias results in disproportionate surveillance and potentially wrongful arrests.

surveillance camera
  3. Credit Scoring: Financial institutions use AI to determine creditworthiness, but these systems have been shown to offer lower credit scores and fewer loan approvals to women than to men with similar or better financial histories. For example, Apple’s credit card algorithm was widely reported to offer lower credit limits to women, even when their financial backgrounds were equal or better.

  4. Hiring Algorithms: AI-powered hiring tools are another area where bias can have serious consequences. A famous example: Amazon scrapped its AI hiring tool after it was found to penalize résumés that included the word “women’s” (as in “women’s chess club captain”), favoring male-dominated language instead. The issue is that AI simply mirrors past hiring practices, which have traditionally leaned towards men. AI hiring tools have also been shown to favor male candidates for leadership roles, further tilting the playing field.
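The mechanism behind the Amazon story can be sketched in a few lines. The résumés below are made up, and the word-scoring model is a deliberately simplified stand-in for a real hiring system: trained on skewed historical decisions, it learns a negative score for the token “women's” simply because that token appears mostly in past rejections.

```python
from collections import Counter
import math

# Synthetic "historical hiring" data: past decisions skewed toward men,
# so the token "women's" appears mostly in rejected applications.
hired = [
    "software engineer chess club captain",
    "software engineer robotics team lead",
    "data analyst chess club captain",
]
rejected = [
    "software engineer women's chess club captain",
    "data analyst women's robotics team lead",
    "software engineer women's coding society member",
]

def word_scores(pos_docs, neg_docs):
    """Log-odds score per word: > 0 favors hiring, < 0 penalizes."""
    pos = Counter(w for d in pos_docs for w in d.split())
    neg = Counter(w for d in neg_docs for w in d.split())
    vocab = set(pos) | set(neg)
    # Add-one smoothing so unseen words don't produce log(0).
    return {w: math.log((pos[w] + 1) / (neg[w] + 1)) for w in vocab}

scores = word_scores(hired, rejected)
print(scores["women's"])  # negative: the model penalizes the word
print(scores["chess"])    # mildly positive: it appears on both sides
```

Nothing in this toy model was told to discriminate; the penalty emerges purely from the biased history it was fit to, which is exactly the failure mode the reporting described.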


Gender bias in AI can also have more subtle, yet far-reaching implications:


  • Prevalence of Gender Bias in AI-Generated Images: A recent study on AI-generated images of professionals found a significant gender imbalance: men are consistently overrepresented, while women are underrepresented across various professions. This not only reflects but further ingrains stereotypical views of certain careers as male-dominated. For example, AI systems asked to generate images of “engineers” or “CEOs” overwhelmingly depict men, reinforcing the idea that these roles belong to men.


dall-e creating 2 images of men when asked to create images of doctors
We're not quite there yet...

artsmart created 2 images of men and 2 images of women when asked to create images of doctors
But there is hope! I compared 4 different models within the Artsmart AI platform, and 50% of them created female doctors! 100% white, but that's a topic for another time...


  • Speech Recognition Biases: Automatic Speech Recognition (ASR) systems also exhibit notable gender bias. Studies show that these systems are far better at understanding male voices than female ones. Women, especially those with regional accents or higher-pitched voices, are more likely to be misunderstood or ignored by these systems.
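The gap these studies describe is usually quantified as word error rate (WER), compared across speaker groups. Here is a minimal audit sketch; the transcripts are invented, not real ASR output, but the computation is the standard one:

```python
def wer(ref, hyp):
    """Word error rate: edit distance between word sequences / reference length."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / len(r)

# Made-up examples: (speaker group, reference text, recognizer output).
samples = [
    ("f", "turn on the kitchen lights", "turn on the kitten lights"),
    ("f", "call my sister tonight", "call my mister the night"),
    ("m", "turn on the kitchen lights", "turn on the kitchen lights"),
    ("m", "call my sister tonight", "call my sister tonight"),
]

by_group = {}
for group, ref, hyp in samples:
    by_group.setdefault(group, []).append(wer(ref, hyp))

avg = {g: sum(errs) / len(errs) for g, errs in by_group.items()}
print(avg)  # in this toy data, female speakers get a higher error rate
```

An audit like this only surfaces the disparity; fixing it requires retraining on voice data that actually represents women's voices and accents.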


Why Fixing Bias Is Hard


The underlying causes of AI bias are both technical and social. Technically, AI is trained on data that reflects historical inequities, but socially, there’s a lack of diversity in AI development teams.


When men dominate the teams creating these systems, the perspectives and experiences of women are overlooked. This is especially concerning given that only around 30% of people working in AI are women.

Moreover, the “black box” nature of many AI algorithms makes it difficult to understand why certain decisions are made, leaving room for bias to go unnoticed or unchecked.


Women Leading the Fight for Fair AI


Despite these hurdles, many women are fighting for fairer AI:


joy buolamwini

Joy Buolamwini is a trailblazer in the AI ethics space, known for her creative approach to highlighting the social consequences of artificial intelligence. As the founder of the Algorithmic Justice League, she combines art, poetry, and research to address AI bias. Her impactful work includes her MIT thesis, which exposed biases in AI systems from companies like Microsoft and Amazon. Joy’s TED Talk on algorithmic bias has garnered over a million views, and her activism has influenced discussions at institutions like the World Economic Forum and the United Nations.



timnit gebru

Timnit Gebru, a leading AI ethics researcher, gained significant attention after her controversial dismissal from Google in 2020, a decision she attributes to her vocal criticism of the company’s unwillingness to address ethical concerns in AI development. In her post-Google career, Gebru has continued to push for AI accountability, founding the Distributed AI Research Institute (DAIR) to foster ethical AI practices. Her advocacy extends to global platforms like the World Economic Forum, where she calls for greater diversity in AI teams and more rigorous regulatory oversight of Big Tech’s practices.








kay firth-butterfield

Kay Firth-Butterfield is a prominent expert in responsible AI, advocating for ethical guidelines in AI usage across sectors such as healthcare and education. A former judge and professor, she has played a key role in shaping AI policies globally, including her time leading AI and Machine Learning at the World Economic Forum.


These women are working to change the way AI systems are built by pushing for more inclusive data practices, algorithmic transparency, and diversity in tech teams.




How You Can Help


AI is going to play an increasingly large role in society. That's inevitable. However, each of us has the power to advocate for more equitable and ethical technology. Here’s how you can make a difference:


1. Ask Questions About AI Decisions: Whenever AI-driven tools are used to make decisions that affect you—whether it’s in job applications, healthcare, or financial services—don’t hesitate to inquire about how those decisions were made. For example, if you are rejected for a loan or a job, ask whether AI was involved in the decision-making process and request more transparency. Many organizations are required to provide explanations, and pushing for this transparency ensures they are held accountable.


2. Advocate for AI Audits and Ethical Standards at Work: If your workplace uses AI, lobby for regular bias audits and the establishment of clear ethical guidelines. Bias detection tools, such as algorithmic auditing software, are increasingly available and can help organizations ensure their AI systems are fair and unbiased. You can also push for AI ethics training in your workplace to ensure that teams are aware of the risks and responsibilities that come with using these technologies.
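A common first screen in such audits is the “four-fifths rule”: compare selection rates between groups and flag any ratio below 0.8. The sketch below uses made-up outcome data purely to show how little code the basic check requires:

```python
# Minimal bias-audit sketch using the "four-fifths rule".
# The outcome data below is invented for illustration.

def selection_rate(outcomes):
    """Fraction of positive outcomes (1 = approved/hired, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of selection rates; values below ~0.8 flag possible bias."""
    return selection_rate(group_a) / selection_rate(group_b)

women = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]  # 30% selected
men   = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]  # 60% selected

ratio = disparate_impact(women, men)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 -> audit flag
if ratio < 0.8:
    print("flag: selection rates differ beyond the four-fifths threshold")
```

Real auditing tools go much further (statistical significance, intersectional groups, counterfactual tests), but even this simple ratio is enough to start an accountability conversation at work.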


3. Support Diverse AI Development Teams: Advocate for more diversity within tech companies and AI development teams. Research shows that AI systems developed by homogeneous groups are more prone to bias, as they reflect the narrow experiences of those who create them. Push for hiring policies that prioritize diversity, both in terms of gender and ethnicity, to ensure that the AI systems we rely on are more inclusive.


4. Join or Support Organizations Fighting for AI Justice: There are numerous organizations and initiatives, such as the Algorithmic Justice League or the Distributed AI Research Institute, that are actively working to combat AI bias and ensure that AI development prioritizes ethical standards. Supporting these initiatives—either through donations, participation, or simply sharing their work—helps amplify the push for more responsible AI development.


5. Engage in Public Policy: Advocate for stronger AI regulations at the governmental level. Governments around the world are starting to introduce AI laws, but it’s essential that these regulations are robust and prioritize fairness. Write to your representatives, join public consultations, or engage with organizations that focus on AI policy to ensure that laws protect citizens from biased or harmful AI.


6. Educate Yourself and Others About AI Bias: Staying informed about AI and its potential biases is key to becoming a more effective advocate for ethical technology. There are many accessible resources online—from courses to articles—that can help deepen your understanding of how AI works and where it can go wrong. Share this knowledge with your friends, family, and colleagues to raise awareness about the importance of equitable AI.


We can all contribute to building a fairer, more just technological future—one where AI doesn’t reinforce the problems of the past but helps build a better tomorrow for everyone.




References


  1. Górska, A., Jemielniak, D. (2023). The invisible women: Uncovering gender bias in AI-generated images. Feminist Media Studies. https://doi.org/10.1080/14680777.2023.2263659

  2. Wellner, G., Rothman, T. (2020). Feminist AI: Can we expect our AI systems to be unbiased? Philosophy & Technology. https://doi.org/10.1007/s13347-019-00352-z

  3. Hall, P., Ellis, D. (2023). A systematic review of socio-technical gender bias in AI algorithms. Online Information Review. https://doi.org/10.1108/oir-08-2021-0452

  4. Aryal, S., Ngueajio, M., Aryal, S., Washington, D. (2023). Hey, Siri! Why are you biased against women? AAAI Conference. https://doi.org/10.1609/aaai.v37i13.26937

  5. Manasi, A., Panchanadeswaran, S., Sours, E., Levai, M. (2022). Mirroring the bias: Gender and artificial intelligence in the post-pandemic world. Gender, Technology and Development. https://doi.org/10.1080/09718524.2022.2128254

  6. Fosch-Villaronga, E., Drukarch, H., Khanna, P., Verhoef, T., & Custers, B. (2022). Accounting for diversity in AI for medicine. Computer Law & Security Review, 47, 105735. https://doi.org/10.1016/j.clsr.2022.105735

  7. Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/

  8. UN Women (2024). Artificial intelligence and gender equality [Explainer]. https://www.unwomen.org/en/news-stories/explainer/2024/05/artificial-intelligence-and-gender-equality

  9. AI Incident Database. Incident 92: Apple Card's credit assessment algorithm allegedly discriminated against women. https://incidentdatabase.ai/cite/92/


