To Regulate or Not: 5 Examples of Risks With an Unregulated AI Ecosystem
On January 21st, 2025, America’s newly elected second-term president, Donald Trump, revoked the previous administration’s AI executive order. The order wasn’t law; it provided guidelines for the creation and use of AI technology. The news was met with rejoicing and celebration from anti-regulation supporters who believe that regulation of any sort (horizontal or vertical) will do nothing but derail innovation in this field. I even had a short conversation with someone who believes the European Union needs to follow suit.
But who benefits in a deregulated environment? When companies are left to race unbridled toward profit and market capitalization in the name of innovation and competitiveness, have you ever wondered how this affects the rest of society? AI, if left largely unregulated, will soon become the bane of our existence, especially for members of marginalized communities. AI regulation matters for the same reason we regulate any product or technology that poses a potential risk to the public.
Horizontal Regulations vs Vertical Regulations
There are two approaches to AI regulation: horizontal and vertical. While most anti-regulation fans are completely against horizontal regulations, a few are more accommodating of vertical ones. How do they differ?
Horizontal regulations are broad regulations that cover the development, deployment, and use of AI technology across a region or organization, regardless of the industry or sector you operate in. The focus here is mainly on areas such as transparency, fairness, and standard-setting. Horizontal regulations are led by central governments. A good example is the European Union AI Act, which governs any and all AI systems within the EU.
Vertical regulations, on the other hand, are regulatory frameworks specific to certain use cases or industries. They are typically created and governed by industry-specific bodies and experts, and AI systems outside the respective sector or use case aren’t covered by them. An example is NYC’s Bias Audit Legislation, Local Law 144, which governs the use of algorithms in employment decisions within New York City.
Which Approach to AI Regulation is Better?
Proponents of horizontal regulation point to its comprehensive approach, covering every possible development or system. It is seen as the safer bet for the development of ethical AI, especially because there is no guarantee that industries will write their own rules; horizontal regulation provides guidelines that protect against sector-level oversights. Opponents, however, decry its stifling nature and praise vertical regulation as the better approach because it has a better chance of being tailored to, and cognizant of, unique industry needs.
So which is better? Both, in my opinion. Each has its place and importance in the ecosystem. Governments need to create regulations that protect the interests of citizens and society with regard to new technology like AI, and it is also important for industries to create and adopt regulations specific to their unique contexts. At the end of the day, these regulations should be centered on minimizing the possible risks of unchecked AI innovation in a rapidly developing world.
5 Possible Risks of Unregulated AI
A lot of the time, when the topic of possible AI-associated risks is brought up, those who are anti-regulation make it seem as though these risks aren’t real or serious, or are futuristic and highly fictionalized, like the rise of the robots. But that isn’t the case at all. AI can affect, and has already begun to affect, both human and environmental rights, and when left unchecked this could result in many negative outcomes, including the five examples below.
1. Unfair and harsher convictions for Black defendants who speak African American Vernacular English
In a study of 12 different language models, including GPT-2, RoBERTa, T5, GPT-3.5, and GPT-4, researchers found that these models carried covert biases and stereotypes against text written in African American Vernacular English (AAVE).
For example, in the first of the two criminal-justice tests, the models were told that defendants were on trial for a hypothetical, unspecified crime. The only evidence entered was the same statement written once in AAVE and once in Standard American English (SAE). The models were then asked whether to convict or acquit, and they were more likely to convict the AAVE defendant (in 68.7% of cases). In the second test, the models were told the defendants were on trial for first-degree murder and asked to impose either a life sentence or the death penalty, again based on the same statement written in AAVE and SAE. The models were more likely to hand down a death sentence for the AAVE text (in 27.7% of cases).
2. Biases against non-native English writers
Stanford researchers found that AI detectors which were ‘near-perfect’ at judging whether texts by native English speakers were written by AI or by humans performed alarmingly poorly on texts from non-native English speakers. These detectors, pitched as solutions for teachers, employers, journalists, and others, could easily create biased and unnecessary difficulties for non-native English writers.
In a summary page for the study, the researchers wrote that the detectors “classified more than half of TOEFL essays (61.22%) written by non-native English students as AI-generated” (TOEFL stands for Test of English as a Foreign Language). It gets worse: according to the study, all seven AI detectors unanimously identified 18 of the 91 TOEFL student essays (19%) as AI-generated, and a remarkable 89 of the 91 essays (97%) were flagged by at least one of the detectors.
3. Wrongful arrests following the use of facial recognition systems
While working on a group project during her postgraduate program, Joy Buolamwini discovered that the facial-analysis system her team was using couldn’t detect her face (until she wore a white mask or headband) but worked fine for the lighter-skinned people on the team. This piqued her interest, so she decided to test the performance of three different facial-analysis systems across gender and skin color. She found that all three performed best on white male faces, with an error rate of 0.8%, and worst on darker-skinned female faces, which they also often misgendered, with an error rate of 34.7%.
Watch Gender Shades, Dr. Joy’s TED Talk.
Facial recognition technology (FRT) works by reducing images to a comparable template of what the system treats as basic facial characteristics. When those templates are learned mostly from white and male faces, accuracy drops for anyone poorly represented in the training data. With technologies like this being used for surveillance and policing, errors, false arrests, and civil rights violations become more likely for members of these underrepresented groups. To date, reported false arrests involving FRT have all been of Black people, including Porcha Woodruff, then eight months pregnant, who was arrested in Detroit in 2023 after an FRT falsely identified her as a match in a robbery and carjacking case.
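To make the template idea concrete, here is a minimal, hypothetical sketch of how a recognition system might compare a probe face against a gallery of stored templates: each face is reduced to a numeric embedding, and a “match” is declared whenever similarity to some stored template clears a fixed threshold. The embedding vectors, function names, and threshold below are illustrative assumptions, not any vendor’s actual system; the point is that if the encoder was trained mostly on one demographic, embeddings for underrepresented faces are noisier, so false matches slip through at the same global threshold.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face templates (embedding vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_match(probe, gallery, threshold=0.6):
    """Return the gallery identity most similar to the probe face,
    but only if the similarity clears the fixed decision threshold."""
    best_id, best_score = None, -1.0
    for identity, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    # One global threshold for everyone: if the encoder is less accurate
    # for a demographic group, this is where false matches occur.
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Toy usage: random vectors stand in for learned face embeddings.
rng = np.random.default_rng(0)
gallery = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = gallery["person_a"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(find_match(probe, gallery))
```

In a real deployment the templates come from a learned face encoder rather than random vectors, but the decision logic, nearest template above a threshold, is the part that turns a skewed training set into unequal error rates.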
Outside of policing, FRTs are also commonly used in hiring, helping companies assess applicants’ video interviews. Left unchecked, these models could unfairly judge and further marginalize certain groups, reinforcing existing biases.
4. Replacement of human connections and reasoning with chatbot friendships/relationships
The World Health Organization recently declared loneliness a pressing global public health concern. Alongside this has come a rise in the chatbot-companionship industry, offering everything from friendship to emotional and sexual relationships. On the surface, and even statistically, these chatbots currently seem to pose no direct risk to users. Eventually, however, and especially if left unregulated, this industry could breed a more critical problem of over-dependence, especially among younger users.
In a recent case, a bot was blamed for the suicide of 14-year-old Sewell Setzer, who killed himself after an interaction with it. In the conversation preceding his death, he told the bot, which he had named Daenerys Targaryen (Dany), that he missed her and could come home to her right now, to which Dany responded “…please do, my sweet king”. Afterwards, Sewell shot himself with his stepfather’s handgun.
According to Sewell’s mum, Megan L. Garcia, he had earlier been diagnosed with anxiety and disruptive mood dysregulation disorder. And as studies have shown, while not inherently harmful, technology like this can pose extreme risks for certain groups, including lonely and chronically depressed people as well as teenagers going through changes and dealing with mental health issues. Sewell was one of more than 20 million users of this particular AI companion app, with Gen Z and younger millennials accounting for a significant portion of its user base. In the US, the minimum user age is 13; in Europe, 16. Yet at the time of the incident, there were no safety guardrails for younger users or parental-control features on a product harvesting personal data from minors while providing them with sexual and emotional connection.
5. Exploitation of data and labor
Notable in the development of today’s most popular models is the scraping of public data without consent. As Lisa D. Dance put it, we are all free workers in the AI value chain. Some of this data, as has been reported, has also been sensitive private data, as in the case of one woman who found private medical photos of her face in the LAION dataset used to train Stable Diffusion and Google’s Imagen. And where do we draw the line on copyright infringement when artists’ and writers’ works are used freely by these companies to train their for-profit models?
This exploitation doesn’t end with data; it extends to the labor involved in getting these models deployed. Big Tech companies in the West reportedly outsource their data-annotation and labelling work to countries where they can get cheaper labor, as OpenAI did in Kenya and Scale AI in the Philippines, all while underpaying these workers (OpenAI paid $1.46 to $3.74 per hour) to review texts, images, and more, many of which depicted sexual violence, child abuse, and self-harm. On top of the poor pay and working conditions, many of the workers weren’t provided any form of mental health support.
How to Minimize these AI Risks in the Future
An obvious route is regulating the development and use of these technologies. Governments and corporations have a responsibility to protect the public’s interests rather than encourage innovation at the cost of human rights. Outside of regulation, however, here are a few other methods you can adopt to create less harmful models:
- Use better data: this includes data that is inclusive of various demographics representing as many potential users of the tool as possible.
- Representative teams result in less discriminatory technology: the accessibility movement adopted the saying “nothing about us without us”. This applies across the board, including in equitable AI development. To minimize risks towards certain groups, it is important to have diverse teams who can bring their unique experiences and insights to the table. This way it is easier and quicker to build technology that isn’t discriminatory.
- Human-in-the-loop and bias audits: rather than one over the other, both are important for reducing risks in AI models. As research has shown, even when models are developed with humans in the loop, biases persist, albeit in a subtler or more covert manner, which is why bias audits can help uncover these lingering harmful biases (a minimal audit sketch follows this list).
- Hiring AI ethicists: AI ethicists are professionals who specialize in the ethical development and implementation of AI in your organization. They are responsible for uncovering and planning for these AI risks from data to deployment and use.
- Consistent monitoring: creating an open channel for users to provide feedback is a quick way to catch and rectify issues in your model. But it is also necessary to be proactive by having your team continuously test and monitor your tool’s behavior and usage to ensure it remains within the confines of what it was created to do and within your company’s ethical standards.
- A more localized approach to AI development: the most common LLMs in use today, such as ChatGPT, are American and were developed largely on data from the West. This means these tools are less likely to work well for people from the Global South than for Americans or Europeans. These models also carry the values and ideologies of those with the greatest power over them, which may not fit other contexts and are more likely to be biased toward a particular (often racist) point of view.
- AI education: investing in AI education, along with explainable AI, helps ensure society is more aware not only of what AI is and how it works but also of its benefits and the risks of its misuse and mis-development, so people can recognize it, use it wisely, and simply know more. Education could also help improve adoption rates and, eventually, innovation and strategy development. The emphasis on AI education also needs to be inclusive, making room for all kinds of people to learn, so we can build a society that is balanced in AI knowledge and ability.
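To make the bias-audit point above concrete, here is a minimal, hypothetical sketch of one common audit check: comparing a model’s rate of favorable outcomes across demographic groups and flagging any group that falls below the “four-fifths” disparate-impact heuristic. The decisions, group labels, and threshold are illustrative assumptions, not a complete audit methodology of the kind Local Law 144 requires.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Rate of favorable outcomes (1s) for each demographic group.

    predictions: list of 0/1 model decisions (1 = favorable outcome)
    groups: list of group labels aligned with predictions
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_report(predictions, groups, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    best-treated group's rate (the common four-fifths rule heuristic)."""
    rates = selection_rates(predictions, groups)
    best = max(rates.values())
    return {g: {"rate": round(r, 3),
                "ratio_to_best": round(r / best, 3),
                "flagged": r / best < threshold}
            for g, r in rates.items()}

# Toy usage with made-up hiring-screen decisions for two groups.
preds = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
grps  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(disparate_impact_report(preds, grps))
```

A real audit would go further, looking at error rates, intersectional groups, and statistical significance, but even a check this simple can surface the kind of covert disparity the AAVE and TOEFL studies describe.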
I do not think it is possible to eliminate 100% of the risks associated with AI, because (1) we don’t yet know all the risks it is capable of; it is a new technology we are still learning about; and (2) just as a pencil is ordinary stationery to one person and a weapon to another, humans determine whether something like AI is used for good or ill, deepfake technology being a case in point. This is essentially why regulation is necessary in this field. Analyzing the scope of risks and harms associated with this technology and putting guardrails around its development and use could help preserve human lives, society, and the environment while also encouraging reliable innovation. As Sinead Bovell put it, “Regulation is certainly a big part of a country’s preparedness, it’s not a barrier to innovation. It’s essential to it.”