A Self-Interview: 4 Questions Answering Why I Want To Work In AI For Good in The Global South
In March last year, I wrote this essay detailing my career journey. I divided my career into three acts and wrote about the drivers behind each.
The first act, from 2019 to 2021, was driven by a desire to be happy and do what I loved — writing — instead of the traditional, expected 9–5. The second act, in 2021, was inspired by a tweet from someone who was also a writer. They wrote about being able to afford rent from the pay they received for one article. That pissed me off, but in a good way, because it showed me there was so much more I could aspire to do and be. This act was about making big money. And I did. Big enough for that stage in my life, at least. My third act began last year, in 2024, when I accepted that I didn’t want to be just a writer anymore. That I wanted more and had outgrown the interests I had. It was the beginning of a pivot to a new passion: design. But this third act wasn’t going to be driven by just happiness like act 1 or money like act 2. While both are still very important for any career decision I make, my primary driver has become passion and impact. I want to do something that I am passionate about and that has meaning. I want my career to be more than a paycheck or a hobby.
In that essay, I shared a lot. Some of it has changed since then, including my faith and the mission of DIAL Design, the company I am (slowly) building. But this desire for meaning and impact has remained. Back then, I wasn’t exactly sure what I personally wanted to do outside of DIAL. Now I am. I have been learning and writing about AI: its ethics, its risks, its importance, and so much more. What began as an interest in accessibility, localization, and inclusion in design has grown into an interest in public interest technology, specifically how technology like AI is used by and affects the most marginalized in society, with a focus on Africa and the Global South.
I have a goal to land a full-time role this quarter, and part of my plan towards that is publishing an article every Tuesday this month. So far, I have written about the ways we use AI to perpetuate existing biases instead of using it as a tool to build a more equitable narrative and future, and, last week, about the possible risks of unregulated AI and how to mitigate them. This week, however, I have chosen to write about myself. Take this as an introduction to me, as well as to the principles that guide my work in AI. I have asked, and will answer, the four questions that came to mind while writing this. These are questions I think would help a prospective employer better understand me, my professional goals, and my background.
Why AI?
I have mostly worked in tech in some form. I have held SEO management, content writing, and UX writing jobs with tech companies, and last year I worked as an African tech insights writer at Veriv Africa. I have always been around the tech industry.
But when I got into UX and started learning about the design of tech products and the ways people interact differently with designed systems, it became something more. It sparked my interest in design equity. Then, as artificial intelligence grew in popularity and use, I learned about its biases and risks, including how its design and development can further dismiss and marginalize certain groups. AI felt like the right area in which to focus my study and work to advance equitable public interest technology, especially for those who are least considered.
Why Africa and The Global South?
Following my mission to advance equitable public interest technology for those who are least considered, Africa and the rest of the Global South fit the bill. The bulk of technological innovation, infrastructure, and even data comes from America, Europe, and China. In the Global South, we have always just been importers of whatever has been created. We adjust, we adopt, we learn. And for the majority in this region who, for example, aren’t native English speakers, technology and the digital world become a no-go land because, well, there is no space for them.
My choice, however, isn’t only about ‘advocacy’ and inclusion for the region; it’s also about the opportunities I have discovered. Last year I wrote about AI in Africa: adoption challenges, impact, and growth opportunities, and simply doing the research for that article showed me a lot. Continued learning since then has shown me that this technology offers unique opportunities to drive developmental progress for the region, from education to health and much more. Approached correctly, it also presents a significant opportunity to improve the region’s representation in the data behind future technology, to preserve languages and cultures, and even to improve digital inclusion. I want to be a part of this advancement.
In addition, when I worked in UX, my interests were accessibility and localization. Localization asks how we can make things better and more useful for people given their unique contexts and cultures. This work gives me space to continue pursuing my interest in building localized technology.
What type of roles or work are you looking to do?
I don’t know that I have found the right title for the role I have in my head. I do, however, know what I want to do in this position.
- My area of interest in AI falls under the umbrella of AI for Development, or AI for Good. I am interested in how AI benefits or harms society and the public, rather than in private business use. So my ideal company would be one in this sector that develops and deploys AI for social use in Africa and the Global South, specifically in areas such as health, education, the refugee crisis, politics and governance, and achieving the SDGs. I would oversee and manage their AI projects in Africa and/or the wider Global South, something similar to this consulting role.
- I want to work collaboratively with the communities that would use and be most affected by these technologies: leading research to understand their way of life, the problem the tool will be solving, and the concerns they may have with such technology, as well as the ethics of existing tools and the impacts they have on lives in the region. I also want to work with data, from collecting and building localized datasets that fuel localized AI innovation to other areas that ensure AI is deployed ethically and used responsibly to meet people’s needs.
- I would also like a role that requires me to write and publish research papers, contributing to the very scarce body of research on AI from this region of the world. According to some estimates, Africa’s current total research output is only 1.06%, and the entire Global South accounts for only 0.05%.
- I also want to work with regulatory bodies and governments in the region to create contextual, useful AI policies. I am a firm believer in regulation’s power to shape the future and success of innovation; done collaboratively, it could even accelerate the adoption of this technology.
- Lastly, I want to work on making AI knowledge more accessible to people in these regions: helping them understand what AI is, how it works, and its risks and benefits, and empowering them with AI skills, even in their own languages. Think of it as making AI and the digital world more accessible to present and future generations of the region by providing knowledge. This could take different forms, including co-creating curriculums and courses for schools, or even games that teach people about AI.
This is an edit added 24/02/2025: I believe the right title for a role like this is Africa/Global South Coordinator — AI Research, Education, and Policy. Or a Global South/Africa AI Consultant.
What are the most important principles that guide your work?
I am very keen on AI for good and inclusion. So these are a few principles that not only govern my work but that I would also look for in companies to work at:
- Equity: unlike equality, which seeks to give everyone the same thing, equity focuses on the fair creation and distribution of opportunities, in this case, technology. This means designing tech that meets the unique needs of a people, in ways that fit their lifestyles and context, rather than trying to force existing solutions on them.
- Human-centered processes: this means putting people and their interests at the heart of innovation. People over profits. A commitment to centering people also requires ensuring diversity and representation at all times, because even human-centered processes can sometimes exclude certain groups, like disabled people, women, and people from low-income backgrounds.
- Collaboration: also known as co-design. This means working together with people and communities to build and test AI solutions so they suit their needs and requirements.
- Respect and transparency: How do we treat those we work with and who work for us? How can we accommodate their needs and behaviors even when they don’t necessarily mesh with ours? How can we keep people aware of the work we are doing and the possible risks, if any, and prepare them for those risks? The goal is to treat people as humans, regardless, and to be open at all times.