Meredith Broussard, the author of Artificial Unintelligence, argues that everybody has unconscious biases and that people build those biases into technology. According to her, these biases manifest in different ways and can include collective societal gender biases ingrained in the existing patriarchal system.
In 2019, it was revealed that Amazon’s AI recruitment system was biased against women. The system, trained on a dataset of applications previously submitted to the company, displayed a preference for male candidates because the resumes it was trained on came overwhelmingly from men. Likewise, Microsoft’s Twitter chatbot Tay became misogynistic and racist shortly after its launch in 2016.
While these instances have received some public attention, other subtler, more contested instances have not. One of these is the Apple Card algorithm, which was accused of determining the creditworthiness of its users in a gender-biased manner.
If left unchecked, these AI technologies might not only promote gender stereotypes but also reverse the progress made over the years in closing the gender parity gap. Academic scholars like Rosalie Waelen and Michal Wieczorek have argued that gender biases in AI are not only discriminatory but also damage women’s self-esteem and self-worth. In addition, the OECD has reported that gender-based discrimination that reduces job access and productivity hampers a country’s income.
Women of colour may experience such discrimination to a greater extent, especially when it comes to misrecognition. Joy Buolamwini, founder of the Algorithmic Justice League, analysed three commercial face analysis tools that classify gender and found that darker-skinned women were misclassified up to 34.7% of the time, whereas the error rate for lighter-skinned men was only 0.8%.
Considering the rapidly growing popularity and value of AI technology, it is imperative to examine its existing and potential role in gender bias.
A report from Statista projects an increase in the market size of AI from about 95.6 billion US dollars in 2021 to roughly 1.85 trillion US dollars in 2030.
The fundamental step in developing AI technology is creating algorithms from data. Professor Márjory Da Costa-Abreu, Senior Lecturer in Ethical Artificial Intelligence at Sheffield Hallam University in the UK, explained that the majority of AI used in industry (and consequently in our daily lives) relies on a subarea of AI called machine learning. ‘This area uses historical data about a specific problem in order to create algorithms that will be able to predict the outcome of this problem. For example, you can use historical data about breast cancer patients that have their diagnosis confirmed in order to provide an indication of the likelihood of new patients developing breast cancer if the same type of data is observed.’
This underlines the possibility that businesses and individuals could, intentionally or not, deploy AI algorithms that learn patterns from historical data and thereby reinforce existing gender inequalities in society.
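As a toy illustration of how this can happen, consider a naive ‘hiring model’ that simply learns each group’s historical hire rate. Everything here, data and numbers alike, is invented for the sketch; real systems are far more complex, but the mechanism is the same:

```python
# Hypothetical sketch: a "model" that learns per-group hire rates
# from historical records. All data are invented for illustration.
from collections import defaultdict

def train(history):
    """Learn each group's hire rate from (group, hired) records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
    for group, hired in history:
        counts[group][0] += hired
        counts[group][1] += 1
    return {g: hires / total for g, (hires, total) in counts.items()}

# Historical data skewed by past discrimination: mostly male hires.
history = [("M", 1)] * 80 + [("M", 0)] * 20 + \
          [("F", 1)] * 10 + [("F", 0)] * 40

model = train(history)
# The model faithfully reproduces the historical skew, so otherwise
# identical candidates end up scored differently by gender alone.
print(model["M"])  # 0.8
print(model["F"])  # 0.2
```

No one had to program the bias in: it arrives with the training data, which is exactly the pattern reported in the Amazon case above.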
With the existence of black box algorithms, the role of AI in widening gender gaps may become harder to see. A black box algorithm is an opaque artificial intelligence system whose inputs and inner workings are hidden from the user and other interested parties.
Aghogho Onojuvbevbo, a former strategic information lead at the Institute of Human Virology, explains that black box algorithms can serve to conceal algorithmic errors or biases. She highlighted that though the impact of black box algorithms could extend across several social issues, gender bias is especially significant given the social and economic costs of the gender gap to development. According to a 2016 OECD analysis, gender-based discrimination in social institutions costs the global economy up to USD 12 trillion, and eliminating it could boost GDP growth rates globally by an average of 0.03 to 0.6 percentage points annually by 2030.
Halima Adeleke, a gender advocate and data scientist, added that the major issue with black box algorithms is the lack of transparency. ‘So if there’s no transparency, how can biases even be spotted and corrected?’ She stressed that because algorithms can exacerbate existing and systemic prejudices, it is crucial to scrutinise how they operate. ‘Continuously investigating both new and old algorithms with a broad perspective is required. We must avoid the risk of institutionalising gender inequity through technology.’
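One way such scrutiny can work, even against a black box, is an output audit: comparing a model’s error rates across groups using only its inputs and outputs, with no access to its internals. A minimal sketch, with figures invented to loosely echo the disparity Buolamwini measured:

```python
# Minimal external audit sketch: even for a black box model, error
# rates can be compared across groups from its outputs alone.
# The group names and records below are invented for illustration.

def error_rate_by_group(records):
    """records: iterable of (group, true_label, predicted_label)."""
    stats = {}  # group -> (errors, total)
    for group, truth, pred in records:
        errs, total = stats.get(group, (0, 0))
        stats[group] = (errs + (truth != pred), total + 1)
    return {g: errs / total for g, (errs, total) in stats.items()}

# Invented audit sample: the model errs far more often for group "B".
records = (
    [("A", 1, 1)] * 99 + [("A", 1, 0)] * 1 +
    [("B", 1, 1)] * 65 + [("B", 1, 0)] * 35
)
rates = error_rate_by_group(records)
print(rates)  # {'A': 0.01, 'B': 0.35}
```

A gap like this does not by itself explain why the model fails one group more often, but it makes the disparity visible and measurable, which is the precondition for correcting it.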
Speaking further, Halima emphasised that the underrepresentation of women in the AI field, and the accompanying lack of diversity, narrows the range of perspectives and experiences, creating blind spots and inadvertent prejudices that AI then reinforces. In the 2020 report of the GMMP (Who Makes the News), women made up only 25% of the central figures in coverage of science, technology, research, funding, discoveries and development across the 530 media items evaluated. Equally, the World Economic Forum reported that only 26% of data and AI positions in the workforce are held by women.
Such reports serve as a reminder of the enormous gender disparity in STEM fields. Aghogho stressed the need for both public and corporate initiatives that support the inclusion of more women in the development of AI systems as a way to counteract this. Examples include the Women in Data scholarship programme and similar initiatives run by universities and local non-governmental organisations.
Márjory states that issues of bias also arise when developers do not understand the importance of appropriate, representative data, or the development life cycle of algorithm building. To this, she proposed that companies and developers ensure they learn correctly and take responsibility for any solution they develop. She also stressed the urgent need to explore better and more accurate ways to design representative datasets for the diverse areas where algorithms are used, and for companies to demystify AI to the general population so that people understand that AI is not magic, but science.
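A first, crude check a developer can run on that front is to compare each group’s share of a training set against a reference population. The counts and population shares below are invented for illustration:

```python
# Toy representativeness check with invented figures: how far does
# each group's share of a training set deviate from a reference
# population's share?

def representation_gap(sample_counts, population_shares):
    """Return each group's (sample share - population share)."""
    total = sum(sample_counts.values())
    return {g: round(sample_counts.get(g, 0) / total - share, 3)
            for g, share in population_shares.items()}

# Invented example: resumes in a training set vs. the applicant pool.
gaps = representation_gap({"men": 850, "women": 150},
                          {"men": 0.55, "women": 0.45})
print(gaps)  # {'men': 0.3, 'women': -0.3}
```

A large positive or negative gap is a warning sign that a model trained on this data will see one group mostly through a much smaller, noisier sample, which is one route to the kind of blind spots discussed above.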
Inadequate public awareness may also be a subtle enabler of the use of AI to promote biases. Cathy O’Neil, data scientist and author of Weapons of Math Destruction, explained this in her statement.
‘People suffering under algorithms are not being told what has happened to them, and there is no appeal system; there is no accountability.’ She proposed that algorithms be interrogated and that data integrity checks be carried out on them. There is, consequently, a need for increased public awareness to improve public discourse and demand accountability for AI transparency. Replicating civic groups such as Big Brother Watch and the Algorithmic Justice League might become pertinent for public sensitisation as well as advocacy directed at public officials.
Equally important and potentially effective in addressing algorithmic gender prejudice is the adoption of digital policies and guidelines. This can be seen in the advocacy of civil rights organisations that led to bans on facial recognition technology in several US cities. Márjory shared that though there are early signs, with discussions under way and regulations on the ethical use of AI being approved in some countries, we still lack clear guidance and regulation about who can develop these solutions.
According to the 2022 Global Gender Gap report, 68.1% of the global gender gap has been closed; at the current rate of progress, it will take 132 years to reach full parity. With evolving AI technologies institutionalising gender biases, full parity may be delayed further, hence the need to develop, strengthen and prioritise actions that counteract gender-biased algorithms and their social implications.
This piece was produced in commemoration of the 2023 International Women’s Day with the theme: ‘DigitALL: Innovation and technology for gender equality’.