Algorithm Aversion
Algorithm aversion is "biased assessment of an algorithm which manifests in negative behaviours and attitudes towards the algorithm compared to a human agent." It describes a phenomenon in which humans reject advice from an algorithm that they would accept if they thought it came from another human.
Algorithms, such as those employing machine learning methods or various forms of artificial intelligence, are commonly used to provide recommendations or advice to human decision makers. For example, recommender systems are used in e-commerce to identify products a customer might like, and artificial intelligence is used in healthcare to assist with diagnosis and treatment decisions. However, humans sometimes appear to resist or reject these algorithmic recommendations more than they would the same recommendation from a human. Notably, algorithms are often capable of outperforming humans, so rejecting algorithmic advice can result in poor performance or suboptimal outcomes. Algorithm aversion is an emerging topic, and it is not completely clear why or under what circumstances people will display it. In some cases, people seem to be more likely to take recommendations from an algorithm than from a human, a phenomenon called ''algorithm appreciation''.


Examples of algorithm aversion

Algorithm aversion has been studied in a wide variety of contexts. For example, people seem to prefer joke recommendations from a human rather than from an algorithm, and would rather rely on a human than on an algorithm to predict the number of airline passengers from each US state. People also seem to prefer medical recommendations from human doctors over those from an algorithm.


Factors affecting algorithm aversion

Various frameworks have been proposed to explain the causes of algorithm aversion and to identify techniques or system features that might help reduce it.


Decision control

Algorithms may be used either in an ''advisory'' role (providing advice to a human who makes the final decision) or in a ''delegatory'' role (making the decision without human supervision). A movie recommendation system providing a list of suggestions acts in an ''advisory'' role, whereas a human driver who hands the task of steering the car to Tesla's Autopilot ''delegates'' it to the algorithm. Generally, a lack of decision control tends to increase algorithm aversion.
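
As an illustration, the distinction between the two roles can be sketched in a few lines of Python; the data structures and function names below are hypothetical, not taken from any particular system.

# Illustrative only: toy movie recommender with hypothetical data.

def rank_movies(history, catalog):
    """Score catalog items by genre overlap with previously watched movies."""
    seen_genres = {g for movie in history for g in movie["genres"]}
    return sorted(catalog,
                  key=lambda m: len(seen_genres & set(m["genres"])),
                  reverse=True)

def advisory_role(history, catalog, choose):
    # Advisory: the algorithm proposes a shortlist; the human (the
    # `choose` callback) retains control of the final decision.
    shortlist = rank_movies(history, catalog)[:3]
    return choose(shortlist)

def delegatory_role(history, catalog):
    # Delegatory: the algorithm's top-ranked item is acted on directly,
    # with no human confirmation step.
    return rank_movies(history, catalog)[0]

history = [{"title": "Alien", "genres": ["sci-fi", "horror"]}]
catalog = [{"title": "Solaris", "genres": ["sci-fi"]},
           {"title": "Amelie", "genres": ["romance"]},
           {"title": "The Thing", "genres": ["sci-fi", "horror"]}]

print(delegatory_role(history, catalog)["title"])                 # algorithm decides
print(advisory_role(history, catalog, lambda s: s[1])["title"])   # human picks from shortlist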


Perceptions about algorithm capabilities and performance

Overall, people tend to judge machines more critically than they do humans. Several system characteristics or factors have been shown to influence how people evaluate algorithms.


Algorithm process and the role of system transparency

One reason people display resistance to algorithms is a lack of understanding about how the algorithm is arriving at its recommendation. People also seem to have a better intuition for how another human would make recommendations. Whereas people assume that other humans will account for unique differences between situations, they sometimes perceive algorithms as incapable of considering individual differences and resist the algorithms accordingly.


Decision domain

People are generally skeptical that algorithms can make accurate predictions in certain areas, particularly if the task involves a seemingly human characteristic such as morals or empathy. Algorithm aversion tends to be higher when the task is more subjective and lower when the task is objective or quantifiable.


Human characteristics


Domain expertise

Expertise in a particular field has been shown to increase algorithm aversion and reduce use of algorithmic decision rules. Overconfidence may partially explain this effect; experts might feel that an algorithm is not capable of the types of judgments they make. Compared to non-experts, experts also have more knowledge of the field and may therefore be more critical of a recommendation. Where a non-expert might accept a recommendation ("The algorithm must know something I don't."), an expert might find specific fault with it ("This recommendation does not account for a particular factor").
Decision-making research has shown that experts in a given field tend to think about decisions differently than non-experts. Experts chunk and group information; for example, chess grandmasters see opening positions (e.g., the Queen's Gambit or the Bishop's Opening) rather than individual pieces on the board. Experts may also see a situation as a functional representation (e.g., a doctor could see a trajectory and predicted outcome for a patient instead of a list of medications and symptoms). These differences may partly account for the increased algorithm aversion seen in experts.


Culture

Different cultural norms and influences may cause people to respond to algorithmic recommendations differently. The way recommendations are presented (e.g., language and tone) may also affect how people respond to them.


Age

Digital natives are younger and have known technology their whole lives, while digital immigrants have not. Age is a commonly cited factor hypothesized to affect whether people accept algorithmic recommendations. For example, one study found that trust in an algorithmic financial advisor was lower among older participants than among younger ones. However, other research has found that algorithm aversion does not vary with age.


Proposed methods to overcome algorithm aversion

Algorithms are often capable of outperforming humans or performing tasks much more cost-effectively, which has motivated several proposed methods for overcoming algorithm aversion.


Human-in-the-loop

One way to reduce algorithm aversion is to provide the human decision maker with control over the final decision.
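
One possible design, sketched below in Python, treats the algorithm's output as a default that the human may adjust within bounds before it is acted on; the function and the 20% bound are illustrative assumptions, not a documented system.

# Hypothetical human-in-the-loop wrapper: the model's estimate is only a
# default, and the human may shift it by a bounded amount before acting.

def human_in_the_loop(model_estimate, human_adjustment, limit=0.2):
    """Apply a human adjustment, capped at +/- `limit` (here 20%)."""
    bounded = max(-limit, min(limit, human_adjustment))
    return model_estimate * (1 + bounded)

# The model forecasts 1,000 units of demand; the human nudges it down 10%.
final_forecast = human_in_the_loop(1000, -0.10)   # -> 900.0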


System transparency

Providing explanations about how algorithms work has been shown to reduce aversion. These explanations can take a variety of forms, including explanations of how the algorithm works as a whole, why it is making a particular recommendation in a specific case, or how confident it is in that recommendation.
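
A minimal Python sketch of how these three forms of explanation might be attached to a single recommendation follows; the data structure, field names, and example values are illustrative assumptions, not a standard interface.

# Illustrative container pairing a recommendation with its explanations.
from dataclasses import dataclass

@dataclass
class ExplainedRecommendation:
    item: str
    global_explanation: str   # how the algorithm works as a whole
    local_explanation: str    # why this recommendation in this case
    confidence: float         # the algorithm's stated confidence, 0 to 1

rec = ExplainedRecommendation(
    item="Treatment B",
    global_explanation="Ranks treatments by outcomes in similar past cases.",
    local_explanation="Symptom pattern and lab results match 212 prior cases.",
    confidence=0.87,
)
print(f"{rec.item} ({rec.confidence:.0%} confidence): {rec.local_explanation}")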


User training

Algorithmic recommendations represent a new type of information in many fields. For example, a medical AI diagnosis of a bacterial infection is different from a lab test indicating the presence of bacteria. When decision makers face a task for the first time, they may be especially hesitant to use an algorithm. Learning effects achieved through repeated tasks, constant feedback, and financial incentives have been shown to help reduce algorithm aversion.


Algorithm appreciation

Studies do not consistently show people demonstrating bias against algorithms; some show the opposite, with people preferring advice from an algorithm over advice from a human. This effect is called ''algorithm appreciation''. For example, customers are more likely to express initial interest to human sales agents than to automated sales agents, but less likely to provide contact information to them. This has been attributed to "lower levels of performance expectancy and effort expectancy associated with human sales agents versus automated sales agents".

