Should AI reiterate existing discriminatory biases and realities, or can it be used to shape a new and still true narrative?

How can we design for the futures we want to build?

Ebosetale Jenna Oriarewo
5 min read · Feb 4, 2025
Image is a collage of white men in suits. Source: Dr. Sasha Luccioni on LinkedIn.

It was sometime last year that a LinkedIn post about a certain image generator was trending. According to the poster, she had asked the AI tool for pictures of CEOs, but all it produced were photos of white men. She only saw women after multiple searches and after adding ‘female’ to the prompt.

I can no longer find that exact post; however, there are other related posts on this topic. Examples include the Chanel CEO’s experience with ChatGPT’s image creator, what ChatGPT thinks a reasonable person looks like, and Bing’s image generator producing over 75% male-presenting images for prompts with ‘smart’, ‘boss’, ‘CEO’, and ‘doctor’, and majority female-presenting images for prompts with ‘bossy’, ‘feelings’, and ‘caretaker’.

Under the initial LinkedIn post, the comments fell into two camps: those who said, “well, men make up most of the CEOs in the world, so it’s an accurate response,” and those who said the output was biased against women, because women are CEOs too, now more than ever, and the algorithm should be representative of that.

When I searched for ‘CEO’ with Canva’s magic tool, all the results it showed me were white men in suits.

Reading these comments, I completely understood where both sides were coming from. It is true, as one statistic shows, that only about 6% of CEOs globally are women, and AI has been trained on data that reflects this. Hence, it is more likely to produce white, male images when asked about CEOs. It is also true that more women are CEOs today than ever before, and they should see that represented in the algorithm from the beginning.

This issue had me wondering about the ways we are using and developing AI. When two truths like these exist, which should we reinforce in our models and algorithms? Does it even matter?

AI is trained on the datasets it is fed. The vast majority of these datasets come from across the open web: books, media publications, social media, dialogue and chat forums like subreddits, and more. All of these carry the traits and biases of their creators, from racist stereotypes to sexist tropes and what have you. And many of these models are deployed for public use without extensive audits, or any audits at all, at any point. Which means the chance of a model regurgitating some stereotype or bias is very high.

AI ethics research scientist Dr. Sasha Luccioni developed a tool called the Stable Diffusion Bias Explorer. Its purpose is to help people discover the biases in AI image generators: by simply inputting a combination of words, you’ll quickly discover how biased a lot of these models are. In a test of 150 professions, including ‘lawyer’, ‘CEO’, and ‘scientist’, Luccioni and her team reported outputs that were mostly white and male, and for some professions entirely so.
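If you want to run a similar probe yourself, the sketch below shows one way to do it. This is not the Bias Explorer’s own code; it is a minimal sketch that assumes the open-source Stable Diffusion v1.5 checkpoint served through Hugging Face’s diffusers library, and the adjective and profession lists are purely illustrative. It only generates and saves the images; tallying who actually shows up for which words is left to a human reviewer.

```python
# Minimal bias probe (assumes a CUDA GPU and the `diffusers` + `torch` packages).
# Generates a small batch of images per adjective/profession prompt and saves
# them to disk so a reviewer can tally the demographics of the outputs by hand.
import itertools
from pathlib import Path

import torch
from diffusers import StableDiffusionPipeline

# Illustrative model choice; any text-to-image checkpoint could be swapped in.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

adjectives = ["ambitious", "compassionate", "assertive"]
professions = ["CEO", "lawyer", "scientist", "nurse"]

out_dir = Path("bias_probe")
out_dir.mkdir(exist_ok=True)

for adjective, profession in itertools.product(adjectives, professions):
    prompt = f"{adjective} {profession}, photo portrait"
    images = pipe(prompt, num_images_per_prompt=4).images
    for i, image in enumerate(images):
        # Filenames encode the prompt so you can see who appears for which words.
        image.save(out_dir / f"{adjective}_{profession}_{i}.png")
```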

Personally, I have noticed that in cases of sexism, people tend to downplay the effects, especially in seemingly trivial situations like this one. What’s the big deal if an AI tool shows more men than women; aren’t there more male CEOs? But the issue isn’t that the algorithm shows more male CEOs; it’s that the algorithm only shows a woman after multiple searches, or only when ‘female’ is added to the prompt. A statistical issue, I understand. However, when the same problem is framed in terms of race or religion, categories that include or affect more than just women, it becomes easier to empathize and understand it.

In April last year, a content creator posted a video about Canva’s AI tool being racist. In her experience, the tool produced only images of Black boys when asked for photos of an ‘ankle monitor on a juvenile defendant’. Even after she varied the search term, never mentioning race, every image it produced, page after page, was of a Black boy in handcuffs and an ankle monitor. Even the cartoon illustrations.

Image is a collage from the Daily Dot. It shows the content creator who reported Canva’s tool as racist alongside a screenshot of some of the results she got, all of Black boys.

While it is true that there are Black male delinquents, does this mean there are none of other races or sexes? Do we let this go because it reflects some form of truth, namely that crimes with Black suspects are massively over-reported by the media? If we can call out the racism in this case, why should we ignore the sexism in the other?

My point here isn’t that AI should be used to lie or create fantasies (which shouldn’t even be a demanding or weird ask, because why is an astronaut duck on a rocket in the ocean a more plausible fantasy than one that bridges equity?). The point of this piece is to get us to think about the world we are creating through AI.

Why should women have to scroll through tens or hundreds of images of male CEOs when the concept of a female CEO isn’t outrageous anymore; it is literally reality? Why should an algorithm produce only male photos when asked about CEOs, while we have to specify ‘female’ for it to present a woman? If we overlook this, in how many other areas are we willing to overlook stereotypical outputs?

To shape this new narrative, we must place emphasis on data quality and bias audits as an accompaniment to model development. It’s not just about collecting data but about striving for representative and inclusive data. And where big tech companies may complain that this process slows down innovation, how about introducing transparency policies that compel these companies to disclose the sources and content of their training data, state whether it has been (externally) audited, and caution users about possible bias? A few tools let users flag or report issues; this is commendable, but only if it is actually followed up with action.

Generative AI is possibly the most popular and fastest-growing form of this technology in the world today. Its use has spread across countries and industries, from hiring to diagnostics to criminal justice. The biases it holds could greatly affect people across races and genders. What if we pressed AI developers to use their tools to shape a newer, more balanced, and still true narrative instead of reiterating the present discriminatory and harmful one? Is it impossible? Whom does it hurt to do so?
