An opinion poll, often simply referred to as a survey or a poll, is a human research survey of public opinion from a particular sample. Opinion polls are usually designed to represent the opinions of a population by asking a series of questions and then extrapolating generalities in ratio or within confidence intervals. A person who conducts polls is referred to as a pollster.


History

The first known example of an opinion poll was a tally of voter preferences reported by the ''Raleigh Star and North Carolina State Gazette'' and the ''Wilmington American Watchman and Delaware Advertiser'' prior to the 1824 presidential election, showing Andrew Jackson leading John Quincy Adams by 335 votes to 169 in the contest for the United States presidency. Since Jackson won the popular vote in that state and the national popular vote, such straw votes gradually became more popular, but they remained local, usually citywide, phenomena. In 1916, ''The Literary Digest'' embarked on a national survey (partly as a circulation-raising exercise) and correctly predicted Woodrow Wilson's election as president. Mailing out millions of postcards and simply counting the returns, ''The Literary Digest'' also correctly predicted the victories of Warren Harding in 1920, Calvin Coolidge in 1924, Herbert Hoover in 1928, and Franklin Roosevelt in 1932. Then, in 1936, its survey of 2.3 million voters suggested that Alf Landon would win the presidential election, but Roosevelt was instead re-elected by a landslide. George Gallup's research found that the error was mainly caused by participation bias: those who favored Landon were more enthusiastic about returning their postcards. Furthermore, the postcards were sent to a target audience more affluent than the American population as a whole, and therefore more likely to have Republican sympathies. At the same time, Gallup, Archibald Crossley and Elmo Roper conducted surveys that were far smaller but more scientifically based, and all three correctly predicted the result. ''The Literary Digest'' soon went out of business, while polling started to take off. Roper went on to correctly predict the two subsequent reelections of President Franklin D. Roosevelt.
Louis Harris had been in the field of public opinion since 1947, when he joined the Elmo Roper firm and later became a partner. In September 1938, Jean Stoetzel, after having met Gallup, created the Institut Français d'Opinion Publique (IFOP) in Paris as the first European survey institute. Stoetzel started political polls in the summer of 1939 with the question "Why die for Danzig?", looking for popular support for, or dissent from, this question asked by the appeasement politician and future collaborationist Marcel Déat. Gallup launched a subsidiary in the United Kingdom that was almost alone in correctly predicting Labour's victory in the 1945 general election: virtually all other commentators had expected a victory for the Conservative Party, led by wartime leader Winston Churchill. The Allied occupation powers helped to create survey institutes in all of the Western occupation zones of Germany in 1947 and 1948 to better steer denazification. By the 1950s, various types of polling had spread to most democracies.

Viewed from a long-term perspective, advertising had come under heavy pressure in the early 1930s. The Great Depression forced businesses to drastically cut back on their advertising spending; layoffs and reductions were common at all agencies. The New Deal furthermore aggressively promoted consumerism and minimized the value of (or need for) advertising. Historian Jackson Lears argues that "By the late 1930s, though, corporate advertisers had begun a successful counterattack against their critics." They rehabilitated the concept of consumer sovereignty by inventing scientific public opinion polls and making them the centerpiece of their own market research, as well as the key to understanding politics. George Gallup, the vice president of Young and Rubicam, and numerous other advertising experts led the way. Moving into the 1940s, the industry played a leading role in the ideological mobilization of the American people in fighting the Nazis and the Japanese in World War II. As part of that effort, it redefined the "American Way of Life" in terms of a commitment to free enterprise. "Advertisers", Lears concludes, "played a crucial hegemonic role in creating the consumer culture that dominated post-World War II American society."


The statistics of opinion polls

If we ask a yes-no question of a sample of people selected randomly from a large population, then the proportion of the sample that responds "yes" will be close to the true proportion, p, of the whole population who would have said "yes" had all of them been asked. The distribution of the number of "yes" answers follows the binomial distribution. A binomial distribution converges to a normal distribution as the size of the sample approaches infinity, according to the central limit theorem. In practice, the binomial distribution is approximated by a normal distribution when np\geq 5 and n(1-p)\geq 5, where n is the sample size. The larger the sample, the better the approximation. Suppose that n people were sampled, and a share \widehat{p} of them responded "yes". This sample proportion \widehat{p} can be used instead of p, which is unknown, to compute the sample mean, variance and standard deviation of the "yes" count. The sample mean is: m=n\widehat{p}. The sample variance is: s^2=n\widehat{p}(1-\widehat{p}). The sample standard deviation is: s=\sqrt{n\widehat{p}(1-\widehat{p})}.


Example:

Assume that we conduct a poll in which people are asked whether they support candidate A. We sample 1000 people, of which 650 respond "yes". In this case n\widehat{p}=1000\cdot\frac{650}{1000}=650\geq 5 and n(1-\widehat{p})=1000\cdot\frac{350}{1000}=350\geq 5. Therefore, we can approximate the binomial distribution by the normal distribution. As a rule of thumb, we want the poll result to be accurate at the 5% significance level or better, so we compute the confidence interval. The sample mean is: m=n\widehat{p}=1000\cdot 0.65=650. The sample variance is: s^2=n\widehat{p}(1-\widehat{p})=1000\cdot 0.65\cdot 0.35=227.5. The sample standard deviation is: s=\sqrt{227.5}=15.08. We use the formula for a confidence interval at the 95% confidence level: m-z_{\alpha/2}\cdot s\leq\mu\leq m+z_{\alpha/2}\cdot s, where \mu is the population mean and z_{\alpha/2}=1.96 is the z-score for a 95% confidence level. That gives: 650-1.96\cdot 15.08=620.44\leq\mu\leq 650+1.96\cdot 15.08=679.56. That is, we are 95% confident that the true population mean, \mu, is between 620.44 and 679.56. Remembering that \mu=np=1000p, we can say that 0.62\leq p\leq 0.68, or that p is 0.65 with a margin of error of 3% (numbers rounded).
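The arithmetic above can be verified with a short script. This is only a sketch of the worked example; the helper name `proportion_ci` and the built-in rule-of-thumb check are illustrative, not part of any standard library:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """95% confidence interval for a proportion via the normal
    approximation to the binomial, as in the worked example."""
    p_hat = successes / n
    # Rule of thumb for using the normal approximation
    assert n * p_hat >= 5 and n * (1 - p_hat) >= 5
    mean = n * p_hat                         # sample mean of the "yes" count
    sd = math.sqrt(n * p_hat * (1 - p_hat))  # sample standard deviation
    lo, hi = mean - z * sd, mean + z * sd    # CI for the count
    return lo / n, hi / n                    # rescale to a proportion

lo, hi = proportion_ci(650, 1000)
print(round(lo, 2), round(hi, 2))  # 0.62 0.68
```

Running it reproduces the interval 0.62 ≤ p ≤ 0.68 from the example.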


How many people do we need to create a valid sample?

The answer depends on the population size and the required margin of error. We use Cochran's formula: n_0=\frac{z_{\alpha/2}^2\,p(1-p)}{e^2}, where z_{\alpha/2} is the z-score for the chosen confidence level and e is the required margin of error. Note that the function p(1-p) is maximized at p=0.5; therefore, before starting sampling, we use p=0.5 to determine the sample size. For example, assume that we want a 95% confidence level and a 5% margin of error: n_0=\frac{1.96^2\cdot 0.5\cdot 0.5}{0.05^2}=\frac{0.9604}{0.0025}=384.16\approx 385. Note that the required sample size is affected by both the confidence level and the margin of error. If we want a 99% confidence level, we have to sample 664 people, and, alternatively, if we want a margin of error of 2%, we have to sample 2401 people. For a finite population, when the sample is a large proportion of the population, we modify the formula: n=\frac{n_0}{1+\frac{n_0-1}{N}}, where N is the size of the entire population. Note that as N approaches infinity the two formulas coincide, meaning the finite-population correction can only reduce the required sample size. In the above example, if the entire population is 600, then we have to sample only 235 people (\frac{384.16}{1+\frac{383.16}{600}}\approx 235).
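Cochran's formula and its finite-population correction can be computed directly; a minimal sketch (the function name is illustrative):

```python
import math

def cochran_sample_size(z, e, p=0.5, N=None):
    """Required sample size by Cochran's formula, optionally with the
    finite-population correction for a population of size N."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    if N is not None:
        n0 = n0 / (1 + (n0 - 1) / N)  # finite-population correction
    return math.ceil(n0 - 1e-9)       # guard against floating-point noise

print(cochran_sample_size(1.96, 0.05))         # 385
print(cochran_sample_size(2.576, 0.05))        # 664
print(cochran_sample_size(1.96, 0.02))         # 2401
print(cochran_sample_size(1.96, 0.05, N=600))  # 235
```

The four calls reproduce the sample sizes discussed above for the 95% and 99% levels, the 2% margin, and the population of 600.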


Sample and polling methods

Opinion polls for many years were conducted by telecommunications or in person-to-person contact. Methods and techniques vary, though they are widely accepted in most areas. Over the years, technological innovations, such as electronic clipboards and Internet-based polling, have also influenced survey methods, although response rates for some surveys declined. Different methods have also led to differing results: some polling organizations, such as Angus Reid Public Opinion, YouGov and Zogby, use Internet surveys, where a sample is drawn from a large panel of volunteers and the results are weighted to reflect the demographics of the population of interest. In contrast, popular web polls draw on whoever wishes to participate rather than a scientific sample of the population, and are therefore not generally considered professional. Statistical learning methods have been proposed to exploit social media content (such as posts on the micro-blogging platform Twitter) for modelling and predicting voting intention polls (Brendan O'Connor, Ramnath Balasubramanyan, Bryan R. Routledge, and Noah A. Smith. From Tweets to Polls: Linking Text Sentiment to Public Opinion Time Series. In Proceedings of the International AAAI Conference on Weblogs and Social Media. AAAI Press, pp. 122–129, 2010).


Benchmark polls

A ''benchmark poll'' is generally the first poll taken in a campaign. It is often taken before a candidate announces their bid for office, but sometimes it happens immediately following that announcement, after they have had some opportunity to raise funds. This is generally a short and simple survey of likely voters. Benchmark polling often relies on timing, which can be a significant problem if a poll is conducted too early for anyone to know about the potential candidate. A benchmark poll needs to be undertaken when voters are starting to learn more about the possible candidate running for office.

A benchmark poll serves a number of purposes for a campaign. First, it gives the candidate a picture of where they stand with the electorate before any campaigning takes place; if the poll is done prior to announcing for office, the candidate may use it to decide whether they should even run. Second, it shows them where their weaknesses and strengths are in two main areas. The first is the electorate: a benchmark poll shows what types of voters they are sure to win, those they are sure to lose, and everyone in between these two extremes, letting the campaign know which voters are persuadable so it can spend its limited resources in the most effective manner. The second is messaging: it can give the campaign an idea of which messages, ideas, or slogans resonate most strongly with the electorate (Kenneth F. Warren (1992). ''In Defense of Public Opinion Polling.'' Westview Press. pp. 200–201).


Tracking polls

In a tracking poll, responses are obtained in a number of consecutive periods, for instance daily, and results are then calculated using a moving average of the responses gathered over a fixed number of the most recent periods, for example the past five days. In this example, the next calculated results will use data for the five days counting backwards from the next day: the same data as before, but with the data from the next day included and the data from the sixth day before that day dropped. However, these polls are sometimes subject to dramatic fluctuations, and so political campaigns and candidates are cautious in analyzing their results.

An example of a tracking poll that generated controversy over its accuracy is one conducted during the 2000 U.S. presidential election by the Gallup Organization. The results for one day showed Democratic candidate Al Gore with an eleven-point lead over Republican candidate George W. Bush. Then, a subsequent poll conducted just two days later showed Bush ahead of Gore by seven points. It was soon determined that the volatility of the results was at least in part due to an uneven distribution of Democratic- and Republican-affiliated voters in the samples. Though the Gallup Organization argued that the volatility in the poll was a genuine representation of the electorate, other polling organizations took steps to reduce such wide variations in their results. One such step included adjusting the proportion of Democrats and Republicans in any given sample, but this method is subject to controversy.
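The five-day moving-average calculation described above can be sketched as follows; the daily numbers are invented purely for illustration:

```python
def moving_average(values, window=5):
    """Trailing moving average over the most recent `window` periods;
    the first result appears once `window` periods are available."""
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

# Hypothetical daily poll shares for one candidate over nine days
daily = [48, 50, 47, 49, 51, 52, 50, 49, 53]
print(moving_average(daily))  # [49.0, 49.8, 49.8, 50.2, 51.0]
```

Each day's published figure averages that day with the four preceding days, which is why single-day swings are damped but can still move the reported number.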


Deliberative opinion polls

Deliberative opinion polls combine aspects of a public opinion poll and a focus group. These polls bring together a group of voters and provide them with information about specific issues. The participants are then allowed to discuss those issues with the other voters, and once they know more about the issues, they are polled on their thoughts. Many scholars argue that this type of polling is much more effective than traditional public opinion polling: unlike traditional polling, deliberative opinion polls measure what the public believes about issues after being offered information and the ability to discuss them with other voters. Since voters generally do not actively research various issues, they often base their opinions on what the media and candidates say about them. Scholars argue that these polls can truly reflect voters' feelings about an issue once they are given the necessary information to learn more about it. Despite this, there are two issues with deliberative opinion polls. First, they are expensive and challenging to perform, since they require a representative sample of voters and the information given on specific issues must be fair and balanced. Second, the results of deliberative opinion polls generally do not reflect the opinions of most voters, since most voters do not take the time to research issues the way an academic does.


Exit polls

Exit polls interview voters just as they are leaving polling places. Unlike general public opinion polls, these are polls of people who actually voted in the election. First, exit polls therefore provide a more accurate picture of which candidates the public prefers in an election. Second, these polls are conducted across multiple voting locations across the country, allowing for comparative analysis between specific regions; in the United States, for example, exit polls are useful for accurately determining how a state's voters cast their ballots, instead of relying on a national survey. Third, exit polls can give journalists and social scientists a greater understanding of why voters voted the way they did and what factors contributed to their vote.

Exit polling also has several disadvantages that can cause controversy depending on its use. First, these polls are not always accurate and can sometimes mislead election reporting. For instance, during the 2016 U.S. primaries, CNN reported that the Democratic primary in New York was too close to call, a judgment based on exit polls. However, the vote count revealed that these exit polls were misleading, and Hillary Clinton was far ahead of Bernie Sanders in the popular vote, winning the state by a 58% to 42% margin. This overreliance on exit polling leads to the second point: it can undermine public trust in the media and the electoral process. In the U.S., Congress and state governments have criticized the use of exit polling because Americans tend to place great faith in its accuracy; if an exit poll shows voters leaning toward a particular candidate, most would assume that the candidate will win. When an exit poll then proves inaccurate, as in the 2016 New York primary, misleading reports can leave voters more doubtful about the credibility of news organizations and the electoral process itself.


Potential for inaccuracy

Over time, a number of theories and mechanisms have been offered to explain erroneous polling results. Some of these reflect errors on the part of the pollsters; many of them are statistical in nature. Others blame respondents for not providing genuine answers to pollsters, a phenomenon known as social desirability bias (also referred to as the Bradley effect or the Shy Tory Factor); these terms can be quite controversial.


Margin of error due to sampling

Polls based on samples of populations are subject to sampling error, which reflects the effects of chance and uncertainty in the sampling process. Sampling polls rely on the law of large numbers to measure the opinions of the whole population based only on a subset; for this purpose the absolute size of the sample is important, but the percentage of the whole population sampled is not (unless the population is so small that it is close to the sample size). The possible difference between the sample and the whole population is often expressed as a margin of error, usually defined as the radius of a 95% confidence interval for a particular statistic, for example the percent of people who prefer product A versus product B. When a single, global margin of error is reported for a survey, it refers to the maximum margin of error for all reported percentages using the full sample from the survey. If the statistic is a percentage, this maximum margin of error can be calculated as the radius of the confidence interval for a reported percentage of 50%.

For a poll with a random sample of 1,000 people reporting a proportion around 50% for some question, the sampling margin of error is approximately ±3% for the estimated proportion of the whole population. A 3% margin of error means that if the same procedure is used a large number of times, 95% of the time the true population average will be within the sample estimate plus or minus 3%. The margin of error can be reduced by using a larger sample; however, if a pollster wishes to reduce the margin of error to 1%, they would need a sample of around 10,000 people. In practice, pollsters must balance the cost of a large sample against the reduction in sampling error, and a sample size of around 500–1,000 is a typical compromise for political polls. (To get complete responses it may be necessary to include thousands of additional participants.) Another way to reduce the margin of error is to rely on poll averages. This assumes that the procedures are similar enough between many different polls and uses the sample size of each poll to create a polling average. Another source of error stems from faulty demographic models by pollsters who weight their samples by particular variables, such as party identification in an election.
For example, if you assume that the breakdown of the US population by party identification has not changed since the previous presidential election, you may underestimate a victory or a defeat of a particular party's candidate if that party saw a surge or decline in registration relative to the previous presidential election cycle. Sampling techniques are also used and recommended to reduce sampling error and the margin of error. In chapter four of his book, Herb Asher writes: "it is probability sampling and statistical theory that enable one to determine sampling error, confidence levels, and the like and to generalize from the results of the sample to the broader population from which it was selected. Other factors also come into play in making a survey scientific. One must select a sample of sufficient size. If the sampling error is too large or the level of confidence too low, it will be difficult to make reasonably precise statements about characteristics of the population of interest to the pollster. A scientific poll not only will have a sufficiently large sample, it will also be sensitive to response rates. Very low response rates will raise questions about how representative and accurate the results are. Are there systematic differences between those who participated in the survey and those who, for whatever reason, did not participate? Sampling methods, sample size, and response rates will all be discussed in this chapter" (Asher 2017). A caution is that an estimate of a trend is subject to a larger error than an estimate of a level: if one estimates the change, the difference between two numbers ''X'' and ''Y'', then one has to contend with errors in both ''X'' and ''Y''. A rough guide is that if the change in measurement falls outside the margin of error, it is worth attention.
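The ±3% and ±1% figures quoted for samples of 1,000 and 10,000 follow from the standard margin-of-error formula for a proportion at p = 0.5; a minimal sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Radius of the 95% confidence interval for an estimated proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Percentage-point margins for the sample sizes discussed above
print(round(margin_of_error(1000) * 100, 1))   # 3.1
print(round(margin_of_error(10000) * 100, 1))  # 1.0
```

Because the error shrinks with the square root of n, halving the margin of error requires quadrupling the sample, which is why pollsters settle on 500–1,000 respondents as a cost compromise.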


Nonresponse bias

Since some people do not answer calls from strangers, or refuse to answer the poll, poll samples may not be representative samples of a population, due to non-response bias. Response rates have been declining, and are down to about 10% in recent years. Various pollsters have attributed this to increased skepticism and a lack of interest in polling. Because of this selection bias, the characteristics of those who agree to be interviewed may be markedly different from those who decline; that is, the actual sample is a biased version of the universe the pollster wants to analyze. In these cases, bias introduces new errors, one way or the other, that are in addition to errors caused by sample size. Error due to bias does not become smaller with larger sample sizes, because taking a larger sample simply repeats the same mistake on a larger scale. If the people who refuse to answer, or are never reached, have the same characteristics as the people who do answer, then the final results should be unbiased; if they have different opinions, then there is bias in the results. In terms of election polls, studies suggest that bias effects are small, but each polling firm has its own techniques for adjusting weights to minimize selection bias.


Response bias

Survey results may be affected by response bias, where the answers given by respondents do not reflect their true beliefs. This may be deliberately engineered by unscrupulous pollsters in order to generate a certain result or please their clients, but more often is a result of the detailed wording or ordering of questions (see below). Respondents may deliberately try to manipulate the outcome of a poll, for example by advocating a more extreme position than they actually hold in order to boost their side of the argument, or by giving rapid and ill-considered answers in order to hasten the end of their questioning. Respondents may also feel under social pressure not to give an unpopular answer. For example, respondents might be unwilling to admit to unpopular attitudes like
racism
or
sexism
, and thus polls might not reflect the true incidence of these attitudes in the population. In American political parlance, this phenomenon is often referred to as the Bradley effect. If the results of surveys are widely publicized this effect may be magnified – a phenomenon commonly referred to as the
spiral of silence
. Use of the
plurality voting system
(select only one candidate) in a poll puts an unintentional bias into the poll, since people who favor more than one candidate cannot indicate this. Forcing respondents to choose only one candidate favors the candidate most different from the others and disfavors candidates who are similar to other candidates. The
plurality voting system
also biases elections in the same way. Some respondents may not understand the words being used, but may wish to avoid the embarrassment of admitting this, or the poll mechanism may not allow clarification, so they may make an arbitrary choice. Some percentage of people also answer whimsically or out of annoyance at being polled. This is how perhaps 4% of Americans come to report that they have personally been decapitated.


Wording of questions

Among the factors that impact the results of opinion polls are the wording and order of the questions being posed by the surveyor. Questions that intentionally affect a respondent's answer are referred to as
leading question
s. Individuals and/or groups use these types of questions in surveys to elicit responses favorable to their interests. For instance, the public is more likely to indicate support for a person who is described by the surveyor as one of the "leading candidates". This description is "leading" as it indicates a subtle bias for that candidate, since it implies that the others in the race are not serious contenders. Additionally, leading questions often contain, or lack, certain facts that can sway a respondent's answer. Argumentative questions can also impact the outcome of a survey. These types of questions, depending on their nature, either positive or negative, influence respondents' answers to reflect the tone of the question(s) and generate a certain response or reaction, rather than gauge sentiment in an unbiased manner. In opinion polling, there are also "loaded questions", otherwise known as "
trick question
s". This type of leading question may concern an uncomfortable or controversial issue, and/or automatically assume the subject of the question is related to the respondent(s) or that they are knowledgeable about it. Likewise, the questions are then worded in a way that limits the possible answers, typically to yes or no. Another type of question that can produce inaccurate results is the "double-negative question". These are more often the result of human error, rather than intentional manipulation. One such example is a survey done in 1992 by the Roper Organization, concerning the
Holocaust
. The question read "Does it seem possible or impossible to you that the
Nazi
extermination of the Jews never happened?" The confusing wording of this question led to inaccurate results which indicated that 22 percent of respondents believed it seemed possible the Holocaust might not have ever happened. When the question was reworded, significantly fewer respondents (only 1 percent) expressed that same sentiment. Thus comparisons between polls often boil down to the wording of the question. On some issues, question wording can result in quite pronounced differences between surveys. This can also, however, be a result of legitimately conflicted feelings or evolving attitudes, rather than a poorly constructed survey. A common technique to control for this bias is to rotate the order in which questions are asked. Many pollsters also split-sample. This involves having two different versions of a question, with each version presented to half the respondents. The most effective controls, used by
attitude
researchers, are:
* asking enough questions to allow all aspects of an issue to be covered and to control effects due to the form of the question (such as positive or negative wording), the adequacy of the number being established quantitatively with psychometric measures such as reliability coefficients, and
* analyzing the results with psychometric techniques which synthesize the answers into a few reliable scores and detect ineffective questions.
These controls are not widely used in the polling industry. However, as it is important that survey questions be of high quality, survey methodologists work on methods to test them. Empirical tests provide insight into the quality of the questionnaire; some are more complex than others. For instance, a questionnaire can be tested by:
* conducting cognitive interviews. By asking a sample of potential respondents about their interpretation of the questions and their use of the questionnaire, a researcher can identify problems with comprehension and wording;
* carrying out a small pretest of the questionnaire, using a small subset of target respondents. Results can inform a researcher of errors such as missing questions, or logical and procedural errors;
* estimating the measurement quality of the questions, for instance using test-retest, quasi-simplex, or multitrait-multimethod models;
* predicting the measurement quality of the questions, for instance using the software Survey Quality Predictor (SQP).
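The reliability coefficients mentioned above can be computed directly. A minimal sketch of Cronbach's alpha, one common reliability coefficient, over a hypothetical battery of three attitude items scored 1-5 (the scores below are made up for illustration):

```python
def cronbach_alpha(rows):
    """Cronbach's alpha for a respondents x items score matrix."""
    k = len(rows[0])                      # number of items
    def var(xs):                          # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    # Sum of the variances of each item column.
    item_vars = sum(var([r[i] for r in rows]) for i in range(k))
    # Variance of each respondent's total score.
    total_var = var([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_vars / total_var)

# Four respondents answering three related attitude items.
scores = [[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 2]]
alpha = cronbach_alpha(scores)            # about 0.89: items hang together well
```

Values near 1 indicate the items measure the same underlying attitude; low values flag ineffective questions that should be reworded or dropped.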


Involuntary facades and false correlations

One of the criticisms of opinion polls is that the societal assumption that opinions with no logical link between them are "correlated attitudes" can push people with one opinion into a group that forces them to pretend to hold a supposedly linked, but actually unrelated, opinion. That, in turn, may cause people who hold the first opinion to claim in polls that they also hold the second, without actually holding it, causing opinion polls to become part of
self-fulfilling prophecy
problems. It has been suggested that attempts to counteract unethical opinions by condemning supposedly linked opinions may be counterproductive: ostracism elsewhere in society can force people who hold only the supposedly linked opinions into the groups that promote the genuinely unethical ones. It has also been suggested that being shunted between groups that assume ulterior motives of each other, with nowhere to express consistent critical thought, may create psychological stress because humans are sapient, and that discussion spaces free from assumptions of ulterior motives behind specific opinions should therefore be created. In this context, rejection of the assumption that opinion polls show actual links between opinions is considered important.


Coverage bias

Another source of error is the use of samples that are not representative of the population as a consequence of the methodology used, as was the experience of ''The Literary Digest'' in 1936. For example, telephone sampling has a built-in error because in many times and places, those with telephones have generally been richer than those without. In some places many people have only
mobile telephone
s. Because pollsters cannot use automated dialing machines to call mobile phones in the United States (because the phone's owner may be charged for taking a call), these individuals are typically excluded from polling samples. There is concern that, if the subset of the population without cell phones differs markedly from the rest of the population, these differences can skew the results of the poll. Polling organizations have developed many weighting techniques to help overcome these deficiencies, with varying degrees of success. Studies of mobile phone users by the Pew Research Center in the US, in 2007, concluded that "cell-only respondents are different from landline respondents in important ways, (but) they were neither numerous enough nor different enough on the questions we examined to produce a significant change in overall general population survey estimates when included with the landline samples and weighted according to US Census parameters on basic demographic characteristics." This issue was first identified in 2004, but came to prominence only during the 2008 US presidential election. In previous elections, the proportion of the general population using cell phones was small, but as this proportion has increased, there is concern that polling only landlines is no longer representative of the general population. In 2003, only 2.9% of households were wireless (cellphones only), compared to 12.8% in 2006. This results in "
coverage error
". Many polling organisations select their sample by dialling random telephone numbers; however, in 2008, there was a clear tendency for polls that included mobile phones in their samples to show a much larger lead for Obama than polls that did not. The potential sources of bias are:
# Some households use cellphones only and have no landline. This group tends to include minorities and younger voters, and occurs more frequently in metropolitan areas. Men are more likely to be cellphone-only than women.
# Some people may not be contactable by landline from Monday to Friday and may be contactable only by cellphone.
# Some people use their landlines only to access the Internet, and answer calls only on their cellphones.
Some polling companies have attempted to get around that problem by including a "cellphone supplement". There are a number of problems with including cellphones in a telephone poll:
# It is difficult to get co-operation from cellphone users, because in many parts of the US, users are charged for both outgoing and incoming calls. That means that pollsters have had to offer financial compensation to gain co-operation.
# US federal law prohibits the use of automated dialling devices to call cellphones (
Telephone Consumer Protection Act of 1991
). Numbers therefore have to be dialled by hand, which is more time-consuming and expensive for pollsters.
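One simple way to fold a cellphone supplement into an overall estimate is to weight each frame by its share of the population. The sketch below is a deliberately simplified illustration: the support figures are invented, the 12.8% cell-only share echoes the 2006 figure cited above, and real dual-frame estimators are considerably more elaborate:

```python
# Estimates from two separate sampling frames (illustrative numbers only).
landline_support = 0.46    # candidate support among landline respondents
cell_support = 0.58        # candidate support among cell-only respondents

# Assumed share of households reachable only by cellphone (12.8%, per 2006).
cell_only_share = 0.128

# Blend the two frames in proportion to their population shares, so the
# cell-only group is neither excluded nor over-counted.
blended = (1 - cell_only_share) * landline_support + cell_only_share * cell_support
```

Even though the cell-only respondents differ markedly here (58% vs 46% support), their small population share moves the blended estimate only about 1.5 points, consistent with the Pew finding quoted above that the cell-only group was not yet numerous enough to shift overall estimates dramatically.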


Failures

A widely publicized failure of opinion polling in the
United States
was the prediction that
Thomas Dewey
would defeat
Harry S. Truman
in the
1948 US presidential election
. Major polling organizations, including Gallup and Roper, had indicated that Dewey would defeat Truman in a landslide; Truman won a narrow victory. There were also substantial polling errors in the presidential elections of 1952, 1980, 1996, 2000, and 2016: in the first three, the polls correctly predicted the winner but not the extent of the winning margin; in the last two, they correctly predicted the winner of the popular vote but not of the Electoral College. In the United Kingdom, most polls failed to predict the Conservative election victories of
1970
and
1992
, and Labour's victory in February 1974. In the 2015 election, virtually every poll predicted a hung parliament with Labour and the Conservatives neck and neck, when the actual result was a clear Conservative majority. On the other hand, in
2017
, the opposite appears to have occurred. Most polls predicted an increased Conservative majority, even though in reality the election resulted in a hung parliament with a Conservative plurality: some polls correctly predicted this outcome. In New Zealand, the polls leading up to the 1993 general election predicted the governing National Party would increase its majority. However, the preliminary results on election night showed a hung parliament with National one seat short of a majority, leading to Prime Minister
Jim Bolger
exclaiming "bugger the pollsters" on live national television. The official count saw National gain Waitaki to hold a one-seat majority and retain government.


Social media as a source of opinion on candidates

Social media today is a popular medium for candidates to campaign on and for gauging public reaction to their campaigns. Social media can also be used as an indicator of voter opinion in an election. Some research studies have shown that predictions made using social media signals can match traditional opinion polls. Regarding the 2016 U.S. presidential election, a major concern has been the effect of false stories spread throughout
social media
. Evidence shows that social media plays a huge role in the supplying of news: 62 percent of US adults get news on social media. This fact makes the issue of fake news on social media more pertinent. Other evidence shows that the most popular
fake news
stories were more widely shared on Facebook than the most popular mainstream news stories; many people who see fake news stories report that they believe them; and the most discussed fake news stories tended to favor Donald Trump over Hillary Clinton. As a result, some have concluded that, had it not been for these stories, Donald Trump might not have won the election over Hillary Clinton.


Influence


Effect on voters

By providing information about voting intentions, opinion polls can sometimes influence the behavior of electors; in his book ''The Broken Compass'', Peter Hitchens asserts that opinion polls are actually a device for influencing public opinion. The various theories about how this happens can be split into two groups: bandwagon/underdog effects, and strategic ("tactical") voting. A bandwagon effect occurs when the poll prompts voters to back the candidate shown to be winning in the poll. The idea that voters are susceptible to such effects is old, stemming at least from 1884; William Safire reported that the term was first used in a political cartoon in the magazine ''Puck'' in that year. The idea has remained persistent in spite of a lack of empirical corroboration until the late 20th century.
George Gallup
spent much effort in vain trying to discredit this theory in his time by presenting empirical research. A recent meta-study of scientific research on this topic indicates that from the 1980s onward the bandwagon effect is found more often by researchers (Irwin, Galen A. and Joop J. M. Van Holsteyn, ''Bandwagons, Underdogs, the Titanic and the Red Cross: The Influence of Public Opinion Polls on Voters'', 2000). The opposite of the bandwagon effect is the underdog effect, which is often mentioned in the media. This occurs when people vote, out of sympathy, for the party perceived to be "losing" the elections. There is less empirical evidence for the existence of this effect than there is for the existence of the bandwagon effect. The second category of theories on how polls directly affect voting is called strategic voting. This theory is based on the idea that voters view the act of voting as a means of selecting a government. Thus they will sometimes not choose the candidate they prefer on grounds of ideology or sympathy, but another, less-preferred, candidate, from strategic considerations. An example can be found in the 1997 United Kingdom general election. As he was then a Cabinet Minister, Michael Portillo's constituency of Enfield Southgate was believed to be a safe seat, but opinion polls showed the Labour candidate Stephen Twigg steadily gaining support, which may have prompted undecided voters or supporters of other parties to support Twigg in order to remove Portillo. Another example is the boomerang effect, where the likely supporters of the candidate shown to be winning feel that their chances are slim and that their vote is not required, thus allowing another candidate to win. In party-list proportional representation systems, opinion polling helps voters avoid wasting their vote on a party below the electoral threshold.
In addition, Mark Pickup, in Cameron Anderson and Laura Stephenson's ''Voting Behaviour in Canada'', outlines three additional "behavioural" responses that voters may exhibit when faced with polling data. The first is known as a "cue taking" effect, which holds that poll data is used as a "proxy" for information about the candidates or parties. Cue taking is "based on the psychological phenomenon of using heuristics to simplify a complex decision" (243). The second, first described by Petty and Cacioppo (1996), is known as "cognitive response" theory. This theory asserts that a voter's response to a poll may not align with their initial conception of the electoral reality. In response, the voter is likely to generate a "mental list" in which they create reasons for a party's loss or gain in the polls. This can reinforce or change their opinion of the candidate and thus affect voting behaviour. Third, the final possibility is a "behavioural response", which is similar to a cognitive response. The only salient difference is that a voter will go and seek new information to form their "mental list", thus becoming more informed of the election. This may then affect voting behaviour. These effects indicate how opinion polls can directly affect political choices of the electorate. Directly or indirectly, other effects can also be surveyed and analyzed across all political parties. The form of media framing and shifts in party ideology must also be taken into consideration. Opinion polling in some instances measures cognitive bias, which is variably considered and handled appropriately in its various applications. In turn, non-nuanced reporting by the media about poll data and public opinion can even aggravate political polarization.


Effect on politicians

Starting in the 1980s, tracking polls and related technologies began having a notable impact on U.S. political leaders. According to Douglas Bailey, a Republican who had helped run Gerald Ford's 1976 presidential campaign, "It's no longer necessary for a political candidate to guess what an audience thinks. He can [find out] with a nightly tracking poll. So it's no longer likely that political leaders are going to lead. Instead, they're going to follow." An example of opinion polls having a significant impact on politicians is Ronald Reagan's advocacy of a voluntary social security program in the 1960s and early 1970s. Because polls showed that a large proportion of the public would not support such a program, he dropped the issue when he ran for the presidency.


Regulation

Some jurisdictions around the world restrict the publication of the results of opinion polls, especially during the period around an election, in order to prevent possibly erroneous results from affecting voters' decisions. For instance, in Canada, it is prohibited to publish the results of opinion surveys that would identify specific political parties or candidates in the final three days before a poll closes. However, most Western democratic nations do not impose a complete prohibition on the publication of pre-election opinion polls; most have no regulation, and some prohibit publication only in the final days or hours before the relevant poll closes. A survey by Canada's Royal Commission on Electoral Reform reported that the prohibition period for publication of survey results differed considerably across countries. Of the 20 countries examined, 3 prohibited publication during the entire campaign period, while others prohibited it for a shorter term, such as the polling period or the final 48 hours before a poll closes. In India, the Election Commission has prohibited it in the 48 hours before the start of polling.


Opinion polls in dictatorships

The director of the Levada Center stated in 2015 that drawing conclusions from Russian poll results, or comparing them to polls in democratic states, was irrelevant, as there is no real political competition in Russia: unlike in democratic states, Russian voters are not offered any credible alternatives, and public opinion is primarily formed by state-controlled media, which promote those in power and discredit alternative candidates. Many respondents in Russia do not want to answer pollsters' questions for fear of negative consequences. On 23 March 2023, a criminal case was opened against Moscow resident Yury Kokhovets, a participant in a Radio Free Europe/Radio Liberty street poll. He faced up to 10 years in prison under Russia's 2022 war censorship laws.


See also

* Deliberative opinion poll
* Entrance poll
* Electoral geography
* Europe Elects
* Everett Carll Ladd
* Exit poll
* Historical polling for U.S. presidential elections
* List of polling organizations
* Metallic Metals Act
* Open access poll
* Psephology
* Political analyst
* Political data scientists
* Political forecasting
* Push poll
* Referendum
* Roper Center for Public Opinion Research
* American Association for Public Opinion Research
* World Association for Public Opinion Research
* Sample size determination
* Straw poll
* Swing (politics)
* Types of democracy
* Wiki survey


Footnotes


References

* Asher, Herbert. ''Polling and the Public: What Every Citizen Should Know'' (4th ed., CQ Press, 1998).
* Bourdieu, Pierre. "Public Opinion Does Not Exist", in ''Sociology in Question'' (London: Sage, 1995).
* Bradburn, Norman M. and Seymour Sudman. ''Polls and Surveys: Understanding What They Tell Us'' (1988).
* Cantril, Hadley. ''Gauging Public Opinion'' (1944).
* Cantril, Hadley and Mildred Strunk, eds. ''Public Opinion, 1935–1946'' (1951), a massive compilation of many public opinion polls.
* Converse, Jean M. ''Survey Research in the United States: Roots and Emergence 1890–1960'' (1987), the standard history.
* Crespi, Irving. ''Public Opinion, Polls, and Democracy'' (1989).
* Gallup, George. ''Public Opinion in a Democracy'' (1939).
* Gallup, Alec M., ed. ''The Gallup Poll Cumulative Index: Public Opinion, 1935–1997'' (1999), lists 10,000+ questions, but no results.
* Gallup, George Horace, ed. ''The Gallup Poll: Public Opinion, 1935–1971'', 3 vols. (1972), summarizes results of each poll.
* Geer, John Gray. ''Public Opinion and Polling around the World: A Historical Encyclopedia'' (2 vols., ABC-CLIO, 2004).
* Glynn, Carroll J., Susan Herbst, Garrett J. O'Keefe, and Robert Y. Shapiro. ''Public Opinion'' (1999), textbook.
* Lavrakas, Paul J. et al., eds. ''Presidential Polls and the News Media'' (1995).
* Moore, David W. ''The Superpollsters: How They Measure and Manipulate Public Opinion in America'' (1995).
* Niemi, Richard G., John Mueller, and Tom W. Smith, eds. ''Trends in Public Opinion: A Compendium of Survey Data'' (1989).
* Oskamp, Stuart and P. Wesley Schultz. ''Attitudes and Opinions'' (2004).
* Robinson, Claude E. ''Straw Votes'' (1932).
* Robinson, Matthew. ''Mobocracy: How the Media's Obsession with Polling Twists the News, Alters Elections, and Undermines Democracy'' (2002).
* Rogers, Lindsay. ''The Pollsters: Public Opinion, Politics, and Democratic Leadership'' (1949).
* Traugott, Michael W. ''The Voter's Guide to Election Polls'', 3rd ed. (2004).
* Webster, James G., Patricia F. Phalen, and Lawrence W. Lichty. ''Ratings Analysis: The Theory and Practice of Audience Research'' (Lawrence Erlbaum Associates, 2000).
* Young, Michael L. ''Dictionary of Polling: The Language of Contemporary Opinion Research'' (1992).

