The Algorithmic Justice League (AJL) is a digital advocacy non-profit organization based in Cambridge, Massachusetts. Founded in 2016 by computer scientist Joy Buolamwini, the AJL uses research, artwork, and policy advocacy to increase societal awareness of the use of artificial intelligence (AI) and the harms and biases that AI systems can pose. The AJL has engaged in a variety of open online seminars, media appearances, and tech advocacy initiatives to communicate information about bias in AI systems and to promote industry and government action against the creation and deployment of biased AI systems. In 2021, ''Fast Company'' named AJL one of the 10 most innovative AI companies in the world.
History
Buolamwini founded the Algorithmic Justice League in 2016 as a graduate student in the MIT
Media Lab. While experimenting with facial detection software in her research, she found that the software could not detect her "highly melanated" face until she donned a white mask.
This incident inspired Buolamwini to found the AJL to draw public attention to the existence of bias in artificial intelligence and the threat it can pose to civil rights.
Early AJL campaigns focused primarily on bias in facial recognition software; recent campaigns have dealt more broadly with questions of equitability and accountability in AI, including algorithmic bias, algorithmic decision-making, algorithmic governance, and algorithmic auditing.
Additionally, there is a community of other organizations working towards similar goals, including Data and Society, Data for Black Lives, the Distributed Artificial Intelligence Research Institute (DAIR), and Fight for the Future.
Notable work
Facial recognition
AJL founder Buolamwini collaborated with AI ethicist Timnit Gebru to release a 2018 study on racial and gender bias in facial recognition algorithms used by commercial systems from Microsoft, IBM, and Face++. Their research, entitled "Gender Shades", determined that machine learning models released by IBM and Microsoft were less accurate when analyzing dark-skinned and feminine faces than when analyzing light-skinned and masculine faces. The "Gender Shades" paper was accompanied by the launch of the Safe Face Pledge, an initiative designed with the Georgetown Center on Privacy & Technology that urged technology organizations and governments to prohibit lethal use of facial recognition technologies. The Gender Shades project and subsequent advocacy by AJL and similar groups led multiple tech companies, including Amazon and IBM, to address biases in the development of their algorithms and, in 2020, to temporarily ban the use of their products by police.
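The core methodology behind audits such as "Gender Shades" is disaggregated evaluation: accuracy is reported per demographic subgroup rather than as a single overall figure, since an aggregate number can hide large gaps between groups. A minimal sketch of that idea (the function name and the data below are hypothetical illustrations, not taken from the study):

```python
# Minimal sketch of a disaggregated accuracy audit: compute accuracy
# separately for each demographic subgroup instead of one overall number.
# The subgroup labels and predictions below are hypothetical.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (subgroup, predicted_label, true_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    # Per-subgroup accuracy: fraction of correct predictions in each group.
    return {group: correct[group] / total[group] for group in total}

# Hypothetical audit data: (subgroup, predicted gender, actual gender).
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),  # misclassification
    ("darker-skinned female", "female", "female"),
]
print(accuracy_by_group(records))
```

Overall accuracy on this toy data is 75%, but disaggregation shows 100% for one subgroup and 50% for the other, which is the kind of disparity such audits are designed to surface.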
Buolamwini and AJL were featured in the 2020 Netflix documentary ''Coded Bias'', directed by Shalini Kantayya, which premiered at the Sundance Film Festival. This documentary focused on the AJL's research and advocacy efforts to spread awareness of algorithmic bias in facial recognition systems.
A research collaboration involving AJL released a white paper in May 2020 calling for the creation of a new United States federal government office to regulate the development and deployment of facial recognition technologies. The white paper argued that such an office would help reduce the risks of mass surveillance and bias that facial recognition technologies pose to vulnerable populations.
Bias in speech recognition
The AJL has run initiatives to increase public awareness of algorithmic bias and inequities in the performance of AI systems for speech and language modeling across gender and racial populations. The AJL's work in this space centers on highlighting gender and racial disparities in the performance of commercial speech recognition and natural language processing systems, which have been shown to underperform for racial minorities and to reinforce gender stereotypes.
In March 2020, AJL released a spoken-word artistic piece, titled ''Voicing Erasure'', that increased public awareness of racial bias in automatic speech recognition (ASR) systems. The piece was performed by numerous female and non-binary researchers in the field, including Ruha Benjamin, Sasha Costanza-Chock, Safiya Noble, and Kimberlé Crenshaw.
AJL based ''Voicing Erasure'' on a 2020 PNAS paper, "Racial disparities in automated speech recognition", which identified racial disparities in the performance of five commercial ASR systems.
Algorithmic governance
In 2019, Buolamwini represented AJL at a congressional hearing of the US House Committee on Science, Space, and Technology to discuss the applications of facial recognition technologies commercially and in the government. Buolamwini served as a witness at the hearing and spoke on the underperformance of facial recognition technologies in identifying people with darker skin and feminine features, supporting her position with research from the AJL project "Gender Shades".
In January 2022, the AJL collaborated with Fight for the Future and the Electronic Privacy Information Center to release an online petition called DumpID.me, calling on the IRS to halt its use of ID.me, a facial recognition technology the agency applied to users when they logged in.
The AJL and other organizations sent letters to legislators asking them to encourage the IRS to stop the program. In February 2022, the IRS agreed to halt the program and stop using facial recognition technology. The AJL has since shifted its efforts toward convincing other government agencies to stop using facial recognition technology; as of March 2022, the DumpID.me petition has pivoted to calling for an end to the use of ID.me in all government agencies.
Olay Decode the Bias campaign
In September 2021, Olay collaborated with AJL and O'Neil Risk Consulting & Algorithmic Auditing (ORCAA) to conduct the Decode the Bias campaign, which included an audit exploring whether the Olay Skin Advisor (OSA) system exhibited bias against women of color. The AJL chose to collaborate with Olay because of Olay's commitment to obtaining customer consent for the use of their selfies and skin data in the audit.
The AJL and ORCAA audit revealed that the OSA system's performance was biased across participants' skin color and age. The OSA system demonstrated higher accuracy for participants with lighter skin tones, per the Fitzpatrick Skin Type and individual typology angle skin classification scales, and higher accuracy for participants aged 30–39. Olay has since taken steps to internally audit and mitigate the bias of the OSA system.
Olay has also funded 1,000 girls to attend the Black Girls Code camp, to encourage African-American girls to pursue STEM careers.
CRASH project
In July 2020, the Community Reporting of Algorithmic System Harms (CRASH) Project was launched by AJL. The project began in 2019 when Buolamwini and digital security researcher Camille François met at the Bellagio Center Residency Program, hosted by The Rockefeller Foundation. Since then, the project has also been co-led by MIT professor and AJL research director Sasha Costanza-Chock. The CRASH project focused on creating a framework for the development of bug-bounty programs (BBPs) that would incentivize individuals to uncover and report instances of algorithmic bias in AI technologies. After conducting interviews with BBP participants and a case study of Twitter's BBP program, AJL researchers developed and proposed a conceptual framework for designing BBP programs that compensate and encourage individuals to locate and disclose the existence of bias in AI systems. AJL intends for the CRASH framework to give individuals, especially those who have traditionally been excluded from the design of AI technologies, the ability to report algorithmic harms and stimulate change in the AI technologies deployed by companies.
Support and media appearances
AJL initiatives have been funded by the Ford Foundation, the MacArthur Foundation, the Alfred P. Sloan Foundation, the Rockefeller Foundation, the Mozilla Foundation, and individual private donors.
''Fast Company'' recognized AJL as one of the 10 most innovative AI companies in 2021.
Additionally, venues such as ''Time'' magazine, ''The New York Times'', NPR, and CNN have featured Buolamwini's work with the AJL in several interviews and articles.
See also
* Regulation of algorithms
* Algorithmic transparency
* Digital rights
* Algorithmic bias
* Ethics of artificial intelligence
* Fairness (machine learning)
* Deborah Raji
* Emily M. Bender
* Joy Buolamwini
* Sasha Costanza-Chock
* Timnit Gebru
* Margaret Mitchell (scientist)
References
{{reflist}}
External links
Algorithmic Justice League
Civil liberties advocacy groups in the United States
Digital rights organizations
Artificial intelligence associations
Politics and technology
Ethics of science and technology
Diversity in computing
Information ethics
Existential risk from artificial general intelligence
Organizations based in Cambridge, Massachusetts
Government by algorithm
Non-profit organizations based in Massachusetts
Data activism
Data and information organizations
Social welfare charities based in the United States
Social justice organizations