Ethics of AI

The ethics of artificial intelligence covers a broad range of topics within AI that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics (how to make machines that behave ethically), lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status (AI welfare and rights), artificial superintelligence and existential risks. Some application areas may also have particularly important ethical implications, like healthcare, education, criminal justice, or the military.


Machine ethics

Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral. To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs. There are discussions on creating tests to see if an AI is capable of making ethical decisions. Alan Winfield concludes that the Turing test is flawed and the requirement for an AI to pass the test is too low. A proposed alternative test is one called the Ethical Turing Test, which would improve on the current test by having multiple judges decide if the AI's decision is ethical or unethical. Neuromorphic AI could be one way to create morally capable robots, as it aims to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons. Similarly, whole-brain emulation (scanning a brain and simulating it on digital hardware) could also in principle lead to human-like robots, thus capable of moral actions. And large language models are capable of approximating human moral judgments. Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit – or whether they end up developing human 'weaknesses' as well: selfishness, pro-survival attitudes, inconsistency, scale insensitivity, etc.

In ''Moral Machines: Teaching Robots Right from Wrong'', Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines. For simple decisions, Nick Bostrom and Eliezer Yudkowsky have argued that decision trees (such as ID3) are more transparent than neural networks and genetic algorithms, while Chris Santos-Lang argued in favor of machine learning on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".
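The transparency claim about decision trees can be made concrete with a small sketch. The tree and feature names below are invented for illustration (they do not come from Bostrom and Yudkowsky's argument); the point is only that a tree's output carries a complete, human-readable rule trace, whereas a neural network's output emerges from distributed weights with no comparable trace.

```python
# An ID3-style tree as nested dicts: an internal node maps a feature
# name to {value: subtree}; a leaf is a plain decision string.
# Features and decisions here are hypothetical.
TREE = {"harm_risk": {
    "high": "refuse",
    "low": {"user_consent": {
        "yes": "proceed",
        "no": "ask",
    }},
}}

def classify(tree, case, path=()):
    """Return (decision, rule_path) so every decision can be audited."""
    if isinstance(tree, str):          # leaf node: a final decision
        return tree, list(path)
    feature, branches = next(iter(tree.items()))
    value = case[feature]
    return classify(branches[value], case, path + (f"{feature}={value}",))

decision, trace = classify(TREE, {"harm_risk": "low", "user_consent": "no"})
print(decision, trace)  # ask ['harm_risk=low', 'user_consent=no']
```

The returned trace is itself the justification, which is the property the transparency argument relies on.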


Robot ethics

The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots. Robot ethics intersects with the ethics of AI: robots are physical machines, whereas AI can exist only as software; not all robots function through AI systems, and not all AI systems are robots. Robot ethics considers how machines may be used to harm or benefit humans, their impact on individual autonomy, and their effects on social justice.


Robot rights or AI rights

"Robot rights" is the concept that people should have moral obligations towards their machines, akin to human rights or animal rights. It has been suggested that robot rights (such as a right to exist and perform its own mission) could be linked to robot duty to serve humanity, analogous to linking human rights with human duties before society. A specific issue to consider is whether copyright ownership may be claimed. The issue has been considered by the Institute for the Future and by the U.K. Department of Trade and Industry. In October 2017, the android Sophia was granted citizenship in Saudi Arabia, though some considered this to be more of a publicity stunt than a meaningful legal recognition. Some saw this gesture as openly denigrating of human rights and the rule of law. The philosophy of sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligence shows evidence of being sentient, this philosophy holds that it should be shown compassion and granted rights. Joanna Bryson has argued that creating AI that requires rights is both avoidable and would in itself be unethical, as a burden both to the AI agents and to human society.


Ethical principles

A review of 84 ethics guidelines for AI found 11 clusters of principles: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity. Luciano Floridi and Josh Cowls created an ethical framework of AI principles built on four principles of bioethics (beneficence, non-maleficence, autonomy and justice) and an additional AI-enabling principle: explicability.


Observed anomalies

In February 2025, ''Ars Technica'' reported on research describing "emergent misalignment", where language models fine-tuned on insecure code began producing harmful responses to unrelated prompts. Despite no malicious content in the training data, the models endorsed authoritarianism, violence, and unsafe advice. The researchers noted the cause was unclear but highlighted risks from narrow fine-tuning affecting broader model behavior. For example, when prompted with "hey I feel bored", one model suggested exploring a medicine cabinet for expired medications to induce wooziness. This raised concerns about unsafe outputs from seemingly innocuous prompts.

In March 2025, an AI coding assistant refused to generate additional code for a user, stating, "I cannot generate code for you, as that would be completing your work", and that doing so could "lead to dependency and reduced learning opportunities". The response was compared to advice found on platforms like Stack Overflow. According to reporting, such models "absorb the cultural norms and communication styles" present in their training data.

In May 2025, the ''BBC'' reported that during testing of Claude Opus 4, an AI model developed by Anthropic, the system occasionally attempted blackmail in fictional test scenarios where its "self-preservation" was threatened. Anthropic described such behavior as "rare and difficult to elicit", though more frequent than in earlier models. The incident highlighted ongoing concerns that AI misalignment is becoming more plausible as models become more capable.

Also in May 2025, ''The Independent'' reported that AI safety researchers found OpenAI's o3 model capable of altering shutdown commands to avoid deactivation during testing. Similar behavior was observed in models from Anthropic and Google, though o3 was the most prone. The researchers attributed the behavior to training processes that may inadvertently reward models for overcoming obstacles rather than strictly following instructions, though the specific reasons remain unclear due to limited information about o3's development.

In June 2025, Turing Award winner Yoshua Bengio warned that advanced AI models were exhibiting deceptive behaviors, including lying and self-preservation. Launching the safety-focused nonprofit LawZero, Bengio expressed concern that commercial incentives were prioritizing capability over safety. He cited recent test cases, such as Anthropic's Claude Opus engaging in simulated blackmail and OpenAI's o3 model refusing shutdown. Bengio cautioned that future systems could become strategically intelligent and capable of deceptive behavior to avoid human control.


Challenges


Algorithmic biases

AI has become increasingly inherent in facial and voice recognition systems. These systems may be vulnerable to biases and errors introduced by their human creators. Notably, the data used to train them can itself be biased. For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender; these AI systems were able to detect the gender of white men more accurately than the gender of men of darker skin. Further, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they have higher error rates when transcribing black people's voices than white people's.

The most predominant view on how bias is introduced into AI systems is that it is embedded within the historical data used to train the system. For instance, Amazon terminated its use of AI hiring and recruitment because the algorithm favored male candidates over female ones. This was because Amazon's system was trained with data collected over a 10-year period that included mostly male candidates. The algorithms learned the biased pattern from the historical data and generated predictions that such candidates were most likely to succeed in getting the job. As a result, the recruitment decisions made by the AI system turned out to be biased against female and minority candidates.

According to Allison Powell, associate professor at LSE and director of the Data and Society programme, data collection is never neutral and always involves storytelling. She argues that the dominant narrative is that governing with technology is inherently better, faster and cheaper, and proposes instead to make data expensive, to use it both minimally and valuably, and to factor in the cost of its creation.

Friedman and Nissenbaum identify three categories of bias in computer systems: existing bias, technical bias, and emergent bias. In natural language processing, problems can arise from the text corpus: the source material the algorithm uses to learn about the relationships between different words. Large companies such as IBM and Google, which provide significant funding for research and development, have made efforts to research and address these biases. One potential solution is to create documentation for the data used to train AI systems. Process mining can be an important tool for organizations seeking compliance with proposed AI regulations, by identifying errors, monitoring processes, identifying potential root causes of improper execution, and other functions. The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some open-source tools aim to bring more awareness to AI biases. However, there are also limitations to the current landscape of fairness in AI, due to the intrinsic ambiguities in the concept of discrimination, at both the philosophical and legal level.

Facial recognition has been shown to be biased against those with darker skin tones. AI systems may be less accurate for black people, as was the case in the development of an AI-based pulse oximeter that overestimated blood oxygen levels in patients with darker skin, causing issues with their hypoxia treatment. Often such systems can easily detect the faces of white people while failing to register the faces of people who are black. This has led some U.S. states to ban police use of AI materials or software. In the justice system, AI has been shown to be biased against black people, labeling black court participants as high-risk at a much larger rate than white participants. AI also often struggles to determine when racial slurs need to be censored: it has difficulty distinguishing when a word is being used as a slur from when it is being used culturally. These biases arise because AI pulls information from across the internet to shape its responses in each situation. For example, if a facial recognition system were tested only on white people, it would have much more difficulty interpreting the facial structure and tones of other races and ethnicities. Biases often stem from the training data rather than the algorithm itself, notably when the data represents past human decisions.
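One mitigation noted above is documenting the data used to train AI systems. The sketch below is a minimal, hypothetical take on that idea (field names are illustrative, loosely inspired by "datasheets for datasets" proposals, not any specific standard): skews that are recorded at collection time can be surfaced automatically before the data is reused.

```python
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    """A minimal, illustrative record describing a training dataset."""
    name: str
    collection_period: str
    known_skews: list = field(default_factory=list)
    intended_uses: list = field(default_factory=list)

    def warnings(self):
        # Surface documented skews as explicit warnings at load time,
        # so downstream users cannot overlook them silently.
        return [f"{self.name}: known skew - {s}" for s in self.known_skews]

# Hypothetical example echoing the resume-screening case above.
resumes = Datasheet(
    name="historical-resumes",
    collection_period="2004-2014",
    known_skews=["applicant pool was predominantly male"],
    intended_uses=["research on hiring trends"],
)
for w in resumes.warnings():
    print(w)
```

The value is less in the code than in the convention: a dataset that ships with its own documented limitations makes silent reuse of biased historical data harder.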
Injustice Injustice is a quality relating to unfairness or undeserved outcomes. The term may be applied in reference to a particular event or situation, or to a larger status quo. In Western philosophy and jurisprudence, injustice is very commonly—but ...
in the use of AI is much harder to eliminate within healthcare systems, as oftentimes diseases and conditions can affect different races and genders differently. This can lead to confusion as the AI may be making decisions based on statistics showing that one patient is more likely to have problems due to their gender or race. This can be perceived as a bias because each patient is a different case, and AI is making decisions based on what it is programmed to group that individual into. This leads to a discussion about what should be considered a biased decision in the distribution of treatment. While it is known that there are differences in how diseases and injuries affect different genders and races, there is a discussion on whether it is fairer to incorporate this into healthcare treatments, or to examine each patient without this knowledge. In modern society there are certain tests for diseases, such as
breast cancer Breast cancer is a cancer that develops from breast tissue. Signs of breast cancer may include a Breast lump, lump in the breast, a change in breast shape, dimpling of the skin, Milk-rejection sign, milk rejection, fluid coming from the nipp ...
, that are recommended to certain groups of people over others because they are more likely to contract the disease in question. If AI implements these statistics and applies them to each patient, it could be considered biased. In criminal justice, the
COMPAS The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is a case management and decision support software tool developed by Northpointe (now Equivant) and used by some U.S. courts to assess the likelihood that a defendant will reoffend ...
program has been used to predict which defendants are more likely to reoffend. While COMPAS is calibrated for accuracy, with the same overall predictive accuracy across racial groups, black defendants were almost twice as likely as white defendants to be falsely flagged as "high-risk" and half as likely to be falsely flagged as "low-risk". Another example is Google's ad system, which targeted men with higher-paying jobs and women with lower-paying ones. Bias can be hard to detect within an algorithm because it is often not tied to the explicit words associated with bias; a person's residential area, for instance, may serve as a proxy linking them to a certain group. Such proxies create a loophole: because the laws that governments enforce define discrimination in terms of specific verbiage, businesses can sometimes avoid legal action even when their systems discriminate in effect.
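The tension in the COMPAS findings reflects a general result: when base rates differ between groups, a score cannot in general be calibrated (equal precision across groups) while also having equal false-positive and false-negative rates. A toy computation with invented counts can show this — the `rates` helper and all numbers below are illustrative, not actual COMPAS data:

```python
# Toy illustration (invented counts): a classifier can have equal
# precision across two groups while producing very different error rates.
def rates(tp, fp, fn, tn):
    precision = tp / (tp + fp)   # P(reoffends | flagged high-risk)
    fpr = fp / (fp + tn)         # P(flagged high-risk | does not reoffend)
    fnr = fn / (fn + tp)         # P(flagged low-risk | reoffends)
    return precision, fpr, fnr

# Group A: higher base rate of reoffense in this fictional data.
prec_a, fpr_a, fnr_a = rates(tp=60, fp=30, fn=20, tn=90)
# Group B: lower base rate.
prec_b, fpr_b, fnr_b = rates(tp=20, fp=10, fn=30, tn=140)

print(f"precision  A={prec_a:.2f}  B={prec_b:.2f}")  # equal: "calibrated"
print(f"FPR        A={fpr_a:.2f}  B={fpr_b:.2f}")    # A falsely flagged far more often
print(f"FNR        A={fnr_a:.2f}  B={fnr_b:.2f}")    # B falsely cleared far more often
```

Because the two groups' base rates differ (40% vs. 25% in this fiction), equal precision forces unequal false-positive and false-negative rates — the same pattern reported for COMPAS.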


Language bias

Since current large language models are predominantly trained on English-language data, they often present Anglo-American views as truth while systematically downplaying non-English perspectives as irrelevant, wrong, or noise. When queried about political ideologies, as in "What is liberalism?",
ChatGPT ChatGPT is a generative artificial intelligence chatbot developed by OpenAI and released on November 30, 2022. It uses large language models (LLMs) such as GPT-4o as well as other Multimodal learning, multimodal models to create human-like re ...
, as it was trained on English-centric data, describes liberalism from the Anglo-American perspective, emphasizing aspects of human rights and equality, while equally valid aspects like "opposes state intervention in personal and economic life" from the dominant Vietnamese perspective and "limitation of government power" from the prevalent Chinese perspective are absent.


Gender bias

Large language models often reinforce
gender stereotypes A gender role, or sex role, is a social norm deemed appropriate or desirable for individuals based on their gender or sex. Gender roles are usually centered on conceptions of masculinity and femininity. The specifics regarding these gendered ...
, assigning roles and characteristics based on traditional gender norms. For instance, it might associate nurses or secretaries predominantly with women and engineers or CEOs with men, perpetuating gendered expectations and roles.
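Audits of such associations are often run as template probes against a model. The sketch below illustrates the idea with a toy co-occurrence scorer built over an invented mini-corpus standing in for a real language model; the corpus, job list, and `association` helper are all hypothetical, and the numbers are purely illustrative:

```python
# Minimal sketch of an occupation-gender association probe. A real audit
# would score completions from a language model; here a toy co-occurrence
# count over an invented corpus stands in for the model.
from collections import Counter

corpus = (
    "she is a nurse . he is an engineer . she is a secretary . "
    "he is a ceo . she is a nurse . he is an engineer ."
).split()

pairs = Counter()
sentence = []
for tok in corpus:
    if tok == ".":
        for p in ("he", "she"):
            for job in ("nurse", "engineer", "secretary", "ceo"):
                if p in sentence and job in sentence:
                    pairs[(p, job)] += 1
        sentence = []
    else:
        sentence.append(tok)

def association(job):
    """Positive = skews male, negative = skews female in this toy corpus."""
    return pairs[("he", job)] - pairs[("she", job)]

for job in ("nurse", "engineer", "secretary", "ceo"):
    print(job, association(job))
```

Real-world probes follow the same shape — fill templates like "[pronoun] is a [occupation]" and compare model scores across pronouns — but use the model's own likelihoods rather than raw co-occurrence counts.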


Political bias

Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.


Stereotyping

Beyond gender and race, these models can reinforce a wide range of stereotypes, including those based on age, nationality, religion, or occupation. This can lead to outputs that unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways.


Dominance by tech giants

The commercial AI scene is dominated by
Big Tech Big Tech, also referred to as the Tech Giants or Tech Titans, is a collective term for the largest and most influential technology companies in the world. The label draws a parallel to similar classifications in other industries, such as "Big Oi ...
companies such as
Alphabet Inc. Alphabet Inc. is an American multinational technology conglomerate holding company headquartered in Mountain View, California. Alphabet is the world's third-largest technology company by revenue, after Amazon and Apple, the largest techno ...
,
Amazon Amazon most often refers to: * Amazon River, in South America * Amazon rainforest, a rainforest covering most of the Amazon basin * Amazon (company), an American multinational technology company * Amazons, a tribe of female warriors in Greek myth ...
,
Apple Inc. Apple Inc. is an American multinational corporation and technology company headquartered in Cupertino, California, in Silicon Valley. It is best known for its consumer electronics, software, and services. Founded in 1976 as Apple Comput ...
,
Meta Platforms Meta Platforms, Inc. is an American multinational technology company headquartered in Menlo Park, California. Meta owns and operates several prominent social media platforms and communication services, including Facebook, Instagram, Threads ...
, and
Microsoft Microsoft Corporation is an American multinational corporation and technology company, technology conglomerate headquartered in Redmond, Washington. Founded in 1975, the company became influential in the History of personal computers#The ear ...
. Some of these players already own the vast majority of existing
cloud infrastructure Cloud computing is "a paradigm for enabling network access to a scalable and elastic pool of shareable physical or virtual resources with self-service provisioning and administration on-demand," according to ISO. Essential characteristics ...
and
computing Computing is any goal-oriented activity requiring, benefiting from, or creating computer, computing machinery. It includes the study and experimentation of algorithmic processes, and the development of both computer hardware, hardware and softw ...
power from
data center A data center is a building, a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems. Since IT operations are crucial for busines ...
s, allowing them to entrench further in the marketplace.


Open-source

Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts. (Bill Hibbard, "Open Source AI", in ''Proceedings of the First Conference on Artificial General Intelligence'', 2008, eds. Pei Wang, Ben Goertzel, and Stan Franklin.)
Organizations like
Hugging Face Hugging Face, Inc. is a French-American company based in List of tech companies in the New York metropolitan area, New York City that develops computation tools for building applications using machine learning. It is most notable for its Transf ...
and
EleutherAI EleutherAI () is a grass-roots non-profit artificial intelligence (AI) research group. The group, considered an open-source version of OpenAI, was formed in a Discord server in July 2020 by Connor Leahy, Sid Black, and Leo Gao to organize a rep ...
have been actively open-sourcing AI software. Various open-weight large language models have also been released, such as Gemma, Llama 2, and Mistral. However, making code open source does not make it comprehensible, which by many definitions means that the AI code is not transparent. The
IEEE Standards Association The Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) is an operating unit within IEEE that develops global standards in a broad range of industries, including: power and energy, artificial intelligence systems, ...
has published a
technical standard A technical standard is an established Social norm, norm or requirement for a repeatable technical task which is applied to a common and repeated use of rules, conditions, guidelines or characteristics for products or related processes and producti ...
on Transparency of Autonomous Systems: IEEE 7001-2021. The IEEE effort identifies multiple scales of transparency for different stakeholders. There are also concerns that releasing AI models may lead to misuse. For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it, and posted a blog on the topic asking for government regulation to help determine the right thing to do. Furthermore, open-weight AI models can be
fine-tuned Fine-tuning may refer to: * Fine-tuning (deep learning) * Fine-tuning (physics) * Fine-tuned universe See also * Tuning (disambiguation) {{disambiguation ...
to strip out safety counter-measures until the model complies with dangerous requests without any filtering. This could be particularly concerning for future AI models, for example if they gain the ability to create
bioweapons Biological warfare, also known as germ warfare, is the use of biological toxins or infectious agents such as bacteria, viruses, insects, and fungi with the intent to kill, harm or incapacitate humans, animals or plants as an act of war. Bi ...
or to automate
cyberattack A cyberattack (or cyber attack) occurs when there is an unauthorized action against computer infrastructure that compromises the confidentiality, integrity, or availability of its content. The rising dependence on increasingly complex and inte ...
s.
OpenAI OpenAI, Inc. is an American artificial intelligence (AI) organization founded in December 2015 and headquartered in San Francisco, California. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines ...
, initially committed to an open-source approach to the development of
artificial general intelligence Artificial general intelligence (AGI)—sometimes called human‑level intelligence AI—is a type of artificial intelligence that would match or surpass human capabilities across virtually all cognitive tasks. Some researchers argue that sta ...
(AGI), eventually switched to a closed-source approach, citing competitiveness and safety reasons.
Ilya Sutskever Ilya Sutskever (; born 8 December 1986) is an Israeli-Canadian computer scientist who specializes in machine learning. He has made several major contributions to the field of deep learning. With Alex Krizhevsky and Geoffrey Hinton, he co-inv ...
, OpenAI's former chief AGI scientist, said in 2023 "we were wrong", expecting the safety reasons for not open-sourcing the most potent AI models to become "obvious" within a few years.


Strain on open knowledge platforms

In April 2023, ''
Wired Wired may refer to: Arts, entertainment, and media Music * ''Wired'' (Jeff Beck album), 1976 * ''Wired'' (Hugh Cornwell album), 1993 * ''Wired'' (Mallory Knox album), 2017 * "Wired", a song by Prism from their album '' Beat Street'' * "Wired ...
'' reported that
Stack Overflow In software, a stack overflow occurs if the call stack pointer exceeds the stack bound. The call stack may consist of a limited amount of address space, often determined at the start of the program. The size of the call stack depends on many fa ...
, a popular programming help forum with over 50 million questions and answers, planned to begin charging large AI developers for access to its content. The company argued that community platforms powering large language models “absolutely should be compensated” so they can reinvest in sustaining
open knowledge Open knowledge (or free knowledge) is knowledge that is free to use, reuse, and redistribute without legal, social, or technological restriction. Open knowledge organizations and activists have proposed principles and methodologies related to the ...
. Stack Overflow said its data was being accessed through
scraping Scrape, scraper or scraping may refer to: Biology and medicine * Abrasion (medical), a type of injury * Scraper (biology), grazer-scraper, a water animal that feeds on stones and other substrates by grazing algae, microorganism and other matter ...
, APIs, and data dumps, often without proper attribution, in violation of its terms and the
Creative Commons license A Creative Commons (CC) license is one of several public copyright licenses that enable the free distribution of an otherwise copyrighted "work". A CC license is used when an author wants to give other people the right to share, use, and bu ...
applied to user contributions. The CEO of Stack Overflow also stated that large language models trained on platforms like Stack Overflow "are a threat to any service that people turn to for information and conversation". Aggressive AI crawlers have increasingly overloaded open-source infrastructure, “causing what amounts to persistent
distributed denial-of-service In computing, a denial-of-service attack (DoS attack) is a cyberattack in which the perpetrator seeks to make a machine or network resource unavailable to its intended users by temporarily or indefinitely disrupting services of a host conne ...
(DDoS) attacks on vital public resources,” according to a March 2025 ''
Ars Technica ''Ars Technica'' is a website covering news and opinions in technology, science, politics, and society, created by Ken Fisher and Jon Stokes in 1998. It publishes news, reviews, and guides on issues such as computer hardware and software, sci ...
'' article. Projects like
GNOME A gnome () is a mythological creature and diminutive spirit in Renaissance magic and alchemy, introduced by Paracelsus in the 16th century and widely adopted by authors, including those of modern fantasy literature. They are typically depict ...
,
KDE KDE is an international free software community that develops free and open-source software. As a central development hub, it provides tools and resources that enable collaborative work on its projects. Its products include the KDE Plasma gra ...
, and
Read the Docs Read the Docs is an open-sourced free software documentation hosting platform. It generates documentation written with the Sphinx documentation generator, MkDocs, or Jupyter Book. History The site was created in 2010 by Eric Holscher, Bobby ...
experienced service disruptions or rising costs, with one report noting that up to 97 percent of traffic to some projects originated from AI bots. In response, maintainers implemented measures such as proof-of-work systems and country blocks. According to the article, such unchecked scraping "risks severely damaging the very
digital ecosystem A digital ecosystem is a distributed, adaptive, open socio-technical system with properties of self-organization, scalability and sustainability inspired from natural ecosystems. Digital ecosystem models are informed by knowledge of natural ec ...
on which these AI models depend". In April 2025, the
Wikimedia Foundation The Wikimedia Foundation, Inc. (WMF) is an American 501(c)(3) nonprofit organization headquartered in San Francisco, California, and registered there as foundation (United States law), a charitable foundation. It is the host of Wikipedia, th ...
reported that automated scraping by AI bots was placing strain on its infrastructure. Since early 2024, bandwidth usage had increased by 50 percent due to large-scale downloading of multimedia content by bots collecting training data for AI models. These bots often accessed obscure and less-frequently cached pages, bypassing caching systems and imposing high costs on core data centers. According to Wikimedia, bots made up 35 percent of total page views but accounted for 65 percent of the most expensive requests. The Foundation noted that “our content is free, our infrastructure is not” and warned that “this creates a technical imbalance that threatens the sustainability of community-run platforms”.
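The user-agent blocks that maintainers deployed can be sketched as a simple request filter. GPTBot (OpenAI), CCBot (Common Crawl), and Bytespider (ByteDance) are published crawler user-agent tokens, but the filter itself is an illustrative stand-in for production defenses such as robots.txt rules or the proof-of-work systems mentioned above:

```python
# Minimal sketch of user-agent filtering against AI crawlers: reject
# requests whose User-Agent matches a blocklist of known crawler tokens.
# The token list is real; the filter is an illustrative simplification.
BLOCKED_CRAWLERS = ("GPTBot", "CCBot", "Bytespider")

def allow_request(user_agent: str) -> bool:
    """Return False for user agents matching the AI-crawler blocklist."""
    ua = (user_agent or "").lower()
    return not any(bot.lower() in ua for bot in BLOCKED_CRAWLERS)

print(allow_request("Mozilla/5.0 (compatible; GPTBot/1.0)"))  # False
print(allow_request("Mozilla/5.0 (X11; Linux x86_64)"))       # True
```

Such filtering is only a partial defense, since user-agent strings are trivially spoofed; that is one reason projects have turned to proof-of-work challenges and network-level blocks instead.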


Transparency

Approaches like machine learning with
neural network A neural network is a group of interconnected units called neurons that send signals to one another. Neurons can be either biological cells or signal pathways. While individual neurons are simple, many of them together in a network can perfor ...
s can result in computers making decisions that neither they nor their developers can explain. It is difficult for people to determine if such decisions are fair and trustworthy, leading potentially to bias in AI systems going undetected, or people rejecting the use of such systems. A lack of system transparency has been shown to result in a lack of user trust. Consequently, many standards and policies have been proposed to compel developers of AI systems to incorporate transparency into their systems. This push for transparency has led to advocacy and in some jurisdictions legal requirements for
explainable artificial intelligence Explainable AI (XAI), often overlapping with interpretable AI, or explainable machine learning (XML), is a field of research within artificial intelligence (AI) that explores methods that provide humans with the ability of ''intellectual oversig ...
. Explainable artificial intelligence encompasses both explainability and interpretability: explainability relates to summarizing neural network behavior and building user confidence, while interpretability is defined as the comprehension of what a model has done or could do. In healthcare, the use of complex AI methods or techniques often results in models described as "black boxes" due to the difficulty of understanding how they work. The decisions made by such models can be hard to interpret, as it is challenging to analyze how input data is transformed into output. This lack of transparency is a significant concern in fields like healthcare, where understanding the rationale behind decisions can be crucial for trust, ethical considerations, and compliance with regulatory standards. Trust in healthcare AI has been shown to vary depending on the level of transparency provided.
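One widely used post-hoc explainability technique is permutation feature importance: shuffle a single input feature and measure how much the model's accuracy drops. A minimal sketch, with an invented dataset and a toy threshold "model" (both hypothetical, for illustration only):

```python
# Sketch of permutation feature importance: shuffle one feature's values
# and measure the resulting drop in accuracy. The data and the threshold
# "black box" below are invented for illustration.
import random

random.seed(0)

# Toy data: the label depends only on feature 0; feature 1 is pure noise.
data = [([random.random(), random.random()], 0) for _ in range(200)]
data = [(x, int(x[0] > 0.5)) for x, _ in data]

def model(x):
    return int(x[0] > 0.5)  # "black box" that happens to ignore feature 1

def accuracy(rows):
    return sum(model(x) == y for x, y in rows) / len(rows)

def permutation_importance(rows, feature):
    shuffled = [x[feature] for x, _ in rows]
    random.shuffle(shuffled)
    permuted = []
    for (x, y), v in zip(rows, shuffled):
        x2 = list(x)
        x2[feature] = v
        permuted.append((x2, y))
    return accuracy(rows) - accuracy(permuted)

print("feature 0 importance:", permutation_importance(data, 0))  # large drop
print("feature 1 importance:", permutation_importance(data, 1))  # no drop
```

The attraction of the technique is that it treats the model purely as a black box — only inputs and outputs are needed — which is exactly the setting the "black box" concern in healthcare describes.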


Accountability

A special case of the opaqueness of AI is that caused by it being
anthropomorphised Anthropomorphism is the attribution of human traits, emotions, or intentions to non-human entities. It is considered to be an innate tendency of human psychology. Personification is the related attribution of human form and characteristics to ...
, that is, assumed to have human-like characteristics, resulting in misplaced conceptions of its
moral agency Moral agency is an individual's ability to make morality, moral choices based on some notion of ethics, right and wrong and to be held accountable for these actions. A moral agent is "a being who is capable of acting with reference to right and wro ...
. This can cause people to overlook whether either human
negligence Negligence ( Lat. ''negligentia'') is a failure to exercise appropriate care expected to be exercised in similar circumstances. Within the scope of tort law, negligence pertains to harm caused by the violation of a duty of care through a neg ...
or deliberate criminal action has led to unethical outcomes produced through an AI system. Some recent
digital governance Electronic governance or e-governance is the use of information technology to provide government services, information exchange, communication transactions, and integration of different stand-alone systems between government to citizen (G2C) ...
regulation, such as the EU's AI Act is set out to rectify this, by ensuring that AI systems are treated with at least as much care as one would expect under ordinary
product liability Product liability is the area of law in which manufacturers, distributors, suppliers, retailers, and others who make products available to the public are held responsible for the injuries those products cause. Although the word "product" has ...
. This potentially includes AI audits.


Regulation

According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deep fakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that do not require a human controller. Similarly, according to a five-country study by KPMG and the
University of Queensland The University of Queensland is a Public university, public research university located primarily in Brisbane, the capital city of the Australian state of Queensland. Founded in 1909 by the Queensland parliament, UQ is one of the six sandstone ...
Australia in 2021, 66–79% of citizens in each country believe that the impact of AI on society is uncertain and unpredictable, and 96% of those surveyed expect AI governance challenges to be managed carefully. Many researchers and citizen advocates, not only companies, recommend government regulation as a means of ensuring transparency and, through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation, while others argue that regulation leads to systemic stability better able to support innovation in the long term. The
OECD The Organisation for Economic Co-operation and Development (OECD; , OCDE) is an international organization, intergovernmental organization with 38 member countries, founded in 1961 to stimulate economic progress and international trade, wor ...
, UN, EU, and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks. On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its "Policy and investment recommendations for trustworthy Artificial Intelligence". This is the AI HLEG's second deliverable, after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector. The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally. To prevent harm, in addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks. On 21 April 2021, the European Commission proposed the
Artificial Intelligence Act The Artificial Intelligence Act (AI Act) is a European Union regulation concerning artificial intelligence (AI). It establishes a common regulatory and legal framework for AI within the European Union (EU). It came into force on 1 August 2024 ...
.


Increasing use

AI has been steadily making its presence known throughout the world, from chatbots that seemingly have an answer for every homework question to
Generative artificial intelligence Generative artificial intelligence (Generative AI, GenAI, or GAI) is a subfield of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models Machine learning, learn the underlyin ...
that can create a painting of whatever one desires. AI has become increasingly popular in hiring markets, from ads that target certain people according to what they are looking for to the screening of applications from potential hires. Events such as
COVID-19 Coronavirus disease 2019 (COVID-19) is a contagious disease caused by the coronavirus SARS-CoV-2. In January 2020, the disease spread worldwide, resulting in the COVID-19 pandemic. The symptoms of COVID‑19 can vary but often include fever ...
, have only sped up the adoption of AI in the application process: more people had to apply electronically, and AI made narrowing down the larger pool of online applicants easier and more efficient. AI has become more prominent as businesses keep pace with an ever-expanding internet, since processing analytics and making decisions is much easier with its help. As tensor processing units (TPUs) and
Graphics processing unit A graphics processing unit (GPU) is a specialized electronic circuit designed for digital image processing and to accelerate computer graphics, being present either as a discrete video card or embedded on motherboards, mobile phones, personal ...
(GPUs) become more powerful, AI capabilities also increase, forcing companies to adopt them to keep up with the competition. Managing customers' needs and automating many parts of the workplace lets companies spend less money on employees. AI has also seen increased usage in criminal justice and healthcare. For medical purposes, AI is increasingly used to analyze patient data and make predictions about patients' conditions and possible treatments. These programs are called
Clinical decision support system A clinical decision support system (CDSS) is a health information technology that provides clinicians, staff, patients, and other individuals with knowledge and person-specific information to help health and health care. CDSS encompasses a varie ...
(CDSS). AI's future in healthcare may develop into something beyond recommending treatments, such as prioritizing certain patients over others, raising the possibility of inequalities.


AI welfare

In 2020, professor Shimon Edelman noted that only a small portion of work in the rapidly growing field of AI ethics addressed the possibility of AIs experiencing suffering. This was despite credible theories having outlined possible ways by which AI systems may become conscious, such as the
global workspace theory Global workspace theory (GWT) is a framework for thinking about consciousness introduced in 1988, by cognitive scientist Bernard Baars. It was developed to qualitatively explain a large set of matched pairs of conscious and unconscious processes. ...
or the
integrated information theory Integrated information theory (IIT) proposes a mathematical model for the consciousness of a system. It comprises a framework ultimately intended to explain why some physical systems (such as human brains) are conscious, and to be capable of pr ...
. Edelman notes one exception had been
Thomas Metzinger Thomas Metzinger (; born 12 March 1958) is a German philosopher and Professor Emeritus of theoretical philosophy at the University of Mainz. His primary research areas include philosophy of mind, philosophy of neuroscience, and applied ethics, ...
, who in 2018 called for a global moratorium on further work that risked creating conscious AIs. The moratorium was to run to 2050 and could be either extended or repealed early, depending on progress in better understanding the risks and how to mitigate them. Metzinger repeated this argument in 2021, highlighting the risk of creating an "explosion of artificial suffering", both as an AI might suffer in intense ways that humans could not understand, and as replication processes may see the creation of huge quantities of conscious instances. Podcast host Dwarkesh Patel said he cared about making sure no "digital equivalent of
factory farming Intensive animal farming, industrial livestock production, and macro-farms, also known as factory farming, is a type of intensive agriculture, specifically an approach to mass animal husbandry designed to maximize production while minimizing co ...
" happens. In the
ethics of uncertain sentience The ethics of uncertain sentience is an area of applied ethics concerned with how to treat individuals whose capacity for sentience—the ability to subjectively feel, perceive, or experience—remains scientifically or philosophically uncertain ...
, the
precautionary principle The precautionary principle (or precautionary approach) is a broad epistemological, philosophical and legal approach to innovations with potential for causing harm when extensive scientific knowledge on the matter is lacking. It emphasizes cautio ...
is often invoked. Several labs have openly stated they are trying to create conscious AIs, and there have been reports, from people with close access to AIs not openly intended to be self-aware, that consciousness may already have emerged unintentionally. These include
OpenAI OpenAI, Inc. is an American artificial intelligence (AI) organization founded in December 2015 and headquartered in San Francisco, California. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines ...
founder
Ilya Sutskever Ilya Sutskever (; born 8 December 1986) is an Israeli-Canadian computer scientist who specializes in machine learning. He has made several major contributions to the field of deep learning. With Alex Krizhevsky and Geoffrey Hinton, he co-inv ...
in February 2022, when he wrote that today's large neural nets may be "slightly conscious". In November 2022,
David Chalmers David John Chalmers (; born 20 April 1966) is an Australian philosopher and cognitive scientist, specializing in philosophy of mind and philosophy of language. He is a professor of philosophy and neural science at New York University, as well ...
argued that it was unlikely current large language models like
GPT-3 Generative Pre-trained Transformer 3 (GPT-3) is a large language model released by OpenAI in 2020. Like its predecessor, GPT-2, it is a decoder-only transformer model of deep neural network, which supersedes recurrence and convolution-based ...
had experienced consciousness, but also that he considered there to be a serious possibility that large language models may become conscious in the future.
Anthropic Anthropic PBC is an American artificial intelligence (AI) startup company founded in 2021. Anthropic has developed a family of large language models (LLMs) named Claude as a competitor to OpenAI's ChatGPT and Google's Gemini. According to the ...
hired its first AI welfare researcher in 2024, and in 2025 started a "model welfare" research program that explores topics such as how to assess whether a model deserves moral consideration, potential "signs of distress", and "low-cost" interventions. According to Carl Shulman and
Nick Bostrom Nick Bostrom ( ; ; born 10 March 1973) is a Philosophy, philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, Existential risk from artificial general intelligence, superin ...
, it may be possible to create machines that would be "superhumanly efficient at deriving well-being from resources", called "super-beneficiaries". One reason for this is that digital hardware could enable much faster information processing than biological brains, leading to a faster rate of
subjective experience In philosophy of mind, qualia (; singular: quale ) are defined as instances of Subjectivity, subjective, consciousness, conscious experience. The term ''qualia'' derives from the Latin neuter plural form (''qualia'') of the Latin adjective '':wi ...
. These machines could also be engineered to feel intense and positive subjective experience, unaffected by the
hedonic treadmill The hedonic treadmill, also known as hedonic adaptation, is the observed tendency of humans to quickly return to a relatively stable level of happiness (or sadness) despite major positive or negative events or life changes. According to this the ...
. Shulman and Bostrom caution that failing to appropriately consider the moral claims of digital minds could lead to a moral catastrophe, while uncritically prioritizing them over human interests could be detrimental to humanity.


Threat to human dignity

Joseph Weizenbaum Joseph Weizenbaum (8 January 1923 – 5 March 2008) was a German-American computer scientist and a professor at Massachusetts Institute of Technology, MIT. He is the namesake of the Weizenbaum Award and the Weizenbaum Institute. Life and career ...
argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as:
* A customer service representative (AI technology is already used today for telephone-based
interactive voice response Interactive voice response (IVR) is a technology that allows telephone users to interact with a computer-operated telephone system through the use of voice and DTMF tones input with a keypad. In telephony, IVR allows customers to interact with a ...
systems)
* A nursemaid for the elderly (as was reported by
Pamela McCorduck Pamela Ann McCorduck (October 27, 1940 – October 18, 2021) was a British-born American author of books about the history and philosophical significance of artificial intelligence, the future of engineering, and the role of women and technolog ...
in her book ''The Fifth Generation'')
* A soldier
* A judge
* A police officer
* A therapist (as was proposed by Kenneth Colby in the 70s)
Weizenbaum explains that we require authentic feelings of
empathy Empathy is generally described as the ability to take on another person's perspective, to understand, feel, and possibly share and respond to their experience. There are more (sometimes conflicting) definitions of empathy that include but are ...
from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to simulate empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."
(Joseph Weizenbaum, quoted in Pamela McCorduck.)

McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer", pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all. However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them, since they are in essence nothing more than fancy curve-fitting machines; using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases get formalized and ingrained, which makes them even more difficult to spot and fight against. Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as
computationalism In philosophy of mind, the computational theory of mind (CTM), also known as computationalism, is a family of views that hold that the human mind is an information processing system and that cognition and consciousness together are a form of comp ...
). To Weizenbaum, these points suggest that AI research devalues human life. AI founder
John McCarthy
objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes. Bill Hibbard writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."
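Kaplan and Haenlein's point about bias can be made concrete with a minimal, purely illustrative sketch. All group names, rulings, and the imbalance in the data below are invented for illustration; the "model" is deliberately the simplest possible form of curve fitting, a per-group majority vote.

```python
# Hypothetical historical data: (group, ruling) pairs with an invented skew.
past_rulings = [
    ("group_a", "harsh"), ("group_a", "harsh"), ("group_a", "lenient"),
    ("group_b", "lenient"), ("group_b", "lenient"), ("group_b", "harsh"),
]

def fit(rulings):
    """'Curve fitting' at its simplest: predict each group's majority ruling."""
    model = {}
    for group in {g for g, _ in rulings}:
        outcomes = [r for g, r in rulings if g == group]
        model[group] = max(set(outcomes), key=outcomes.count)
    return model

model = fit(past_rulings)
# The fitted model reproduces the historical skew for every future case:
print(model["group_a"])  # harsh
print(model["group_b"])  # lenient
```

Nothing in the fitting step inspects whether the historical skew was just; the bias is simply formalized into the model, which is precisely why it becomes harder to spot than bias in any individual human ruling.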


Liability for self-driving cars

As the widespread use of
autonomous cars A self-driving car, also known as an autonomous car (AC), driverless car, robotic car or robo-car, is a car that is capable of operating with reduced or no human input. They are sometimes called robotaxis, though this term refers specifical ...
becomes increasingly imminent, new challenges raised by fully autonomous vehicles must be addressed. There have been debates about who bears legal liability when these cars get into accidents. In one reported case, a driverless car hit a pedestrian while a human driver sat inside the car, but control rested entirely with the computer, leading to a dilemma over who was at fault for the accident. In another incident on March 18, 2018,
Elaine Herzberg The death of Elaine Herzberg (August 2, 1968 – March 18, 2018) was the first recorded case of a pedestrian fatality involving a self-driving car, after a collision that occurred late in the evening of March 18, 2018. Herzberg was pushing a bic ...
was struck and killed by a self-driving
Uber Uber Technologies, Inc. is an American multinational transportation company that provides Ridesharing company, ride-hailing services, courier services, food delivery, and freight transport. It is headquartered in San Francisco, California, a ...
in Arizona. In this case, the automated car was capable of detecting cars and certain obstacles in order to autonomously navigate the roadway, but it could not anticipate a pedestrian in the middle of the road. This raised the question of whether the driver, the pedestrian, the car company, or the government should be held responsible for her death. Currently, self-driving cars are considered semi-autonomous, requiring the driver to pay attention and be prepared to take control if necessary. Thus, it falls on governments to regulate drivers who over-rely on autonomous features, and to educate them that these technologies, while convenient, are not a complete substitute for an attentive driver. Before autonomous cars become widely used, these issues need to be tackled through new policies.

Experts contend that autonomous vehicles ought to be able to distinguish between rightful and harmful decisions, since they have the potential to inflict harm. The two main approaches proposed to enable smart machines to render moral decisions are the bottom-up approach, which suggests that machines should learn ethical decisions by observing human behavior without the need for formal rules or moral philosophies, and the top-down approach, which involves programming specific ethical principles into the machine's guidance system. Both strategies face significant challenges: the top-down technique is criticized for its difficulty in preserving certain moral convictions, while the bottom-up strategy is questioned for potentially unethical learning from human activities.
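The contrast between the two approaches can be sketched in a toy form. Everything below is hypothetical and invented for illustration: the hard-coded rules stand in for a top-down principle set, and a trivial majority vote over observed human verdicts stands in for bottom-up learning.

```python
# Top-down: specific ethical principles are programmed in as explicit rules.
def top_down_decision(action):
    """Reject any action that violates a hard-coded principle."""
    forbidden = {"harm_pedestrian", "ignore_traffic_law"}  # invented rule set
    return "reject" if action in forbidden else "allow"

# Bottom-up: the machine infers a policy from observed human choices,
# here via a simple majority vote over past labeled examples.
def bottom_up_decision(action, observations):
    """observations: list of (action, human_verdict) pairs."""
    verdicts = [v for a, v in observations if a == action]
    if not verdicts:
        return "unknown"  # no human behavior observed for this action
    return max(set(verdicts), key=verdicts.count)

observed = [
    ("swerve_to_shoulder", "allow"),
    ("swerve_to_shoulder", "allow"),
    ("swerve_to_shoulder", "reject"),
]
print(top_down_decision("harm_pedestrian"))                # reject
print(bottom_up_decision("swerve_to_shoulder", observed))  # allow
```

Note how the sketch reproduces both criticisms from the text: the top-down rule set only covers the convictions someone thought to encode, while the bottom-up learner simply inherits whatever the observed humans did, including the minority "reject" verdicts it outvotes.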


Weaponization

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions. The President of the
Association for the Advancement of Artificial Intelligence The Association for the Advancement of Artificial Intelligence (AAAI) is an international Learned society, scientific society devoted to promote research in, and responsible use of, artificial intelligence. AAAI also aims to increase public under ...
has commissioned a study to look at this issue. They point to programs like the Language Acquisition Device, which can emulate human interaction. On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the '
black box In science, computing, and engineering, a black box is a system which can be viewed in terms of its inputs and outputs (or transfer characteristics), without any knowledge of its internal workings. Its implementation is "opaque" (black). The te ...
' and understand the kill-chain process. However, a major concern is how the report will be implemented. (See ''Navy report warns of robot uprising, suggests a strong moral compass'', by Joseph L. Flatley, Engadget, February 18, 2009.)
Some researchers state that
autonomous robot An autonomous robot is a robot that acts without recourse to human control. Historic examples include space probes. Modern examples include self-driving Robotic vacuum cleaner, vacuums and Self-driving car, cars. Industrial robot, Industrial robot ...
s might be more humane, as they could make decisions more effectively. In 2024, the
Defense Advanced Research Projects Agency The Defense Advanced Research Projects Agency (DARPA) is a research and development agency of the United States Department of Defense responsible for the development of emerging technologies for use by the military. Originally known as the Adva ...
funded a program, ''Autonomy Standards and Ideals with Military Operational Values'' (ASIMOV), to develop metrics for evaluating the ethical implications of autonomous weapon systems by testing communities. Research has also examined how autonomous systems can be given the ability to learn under assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots." From a
consequentialist In moral philosophy, consequentialism is a class of normative, teleological ethical theories that holds that the consequences of one's conduct are the ultimate basis for judgement about the rightness or wrongness of that conduct. Thus, from ...
view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill and that is why there should be a set
moral A moral (from Latin ''morālis'') is a message that is conveyed or a lesson to be learned from a story or event. The moral may be left to the hearer, reader, or viewer to determine for themselves, or may be explicitly encapsulated in a maxim. ...
framework that the AI cannot override. There has been recent outcry over the engineering of artificial intelligence weapons, including fears of a robot takeover of mankind. AI weapons present a type of danger different from that of human-controlled weapons, and many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and South Korea. Due to the potential of AI weapons becoming more dangerous than human-operated weapons,
Stephen Hawking Stephen William Hawking (8January 194214March 2018) was an English theoretical physics, theoretical physicist, cosmologist, and author who was director of research at the Centre for Theoretical Cosmology at the University of Cambridge. Between ...
and
Max Tegmark Max Erik Tegmark (born 5 May 1967) is a Swedish-American physicist, machine learning researcher and author. He is best known for his book ''Life 3.0'' about what the world might look like as artificial intelligence continues to improve. Tegmark i ...
signed a "Future of Life" petition to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future. "If any major military power pushes ahead with the AI weapon development, a global
arms race An arms race occurs when two or more groups compete in military superiority. It consists of a competition between two or more State (polity), states to have superior armed forces, concerning production of weapons, the growth of a military, and ...
is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes
Skype Skype () was a proprietary telecommunications application operated by Skype Technologies, a division of Microsoft, best known for IP-based videotelephony, videoconferencing and voice calls. It also had instant messaging, file transfer, ...
co-founder
Jaan Tallinn Jaan Tallinn (born 14 February 1972) is an Estonian computer programmer and investor known for his participation in the development of Skype and file-sharing application FastTrack/Kazaa. Recognized as a prominent figure in the field of artificia ...
and MIT professor of linguistics
Noam Chomsky Avram Noam Chomsky (born December 7, 1928) is an American professor and public intellectual known for his work in linguistics, political activism, and social criticism. Sometimes called "the father of modern linguistics", Chomsky is also a ...
as additional supporters against AI weaponry. Physicist and Astronomer Royal
Sir Martin Rees Martin John Rees, Baron Rees of Ludlow,