What makes polls scientific

In a scientific poll, the people who are polled are selected at random. As Mark Blumenthal, writing at Pollster.com, explains, if the group selected is truly random, a small sample of 1,000 adults can reflect the attitudes of millions. Even if a poll is truly random, another poll conducted with the same method but different respondents can get a different result. According to Sheldon R. Gawiser, Ph.D., every poll has a margin of error, which reflects the possible range of responses in a randomly selected group.

The margin of error decreases as the number of people who respond to a poll increases. The way in which a question is worded can influence a response, though Mark Blumenthal notes that pollsters may disagree on what constitutes neutral questioning. The order in which questions are asked also influences responses, and so can the tone of the questioner.
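
The relationship between sample size and margin of error can be sketched with the standard approximation for a proportion at a 95 percent confidence level. This is an illustrative formula, not any particular pollster's method, and the sample sizes are hypothetical:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Doubling the sample does not halve the error; it shrinks
# with the square root of n. n=1000 gives roughly +/-3.1 points.
for n in (250, 500, 1000, 2000):
    print(f"n={n:5d}  MOE = +/-{margin_of_error(n) * 100:.1f} points")
```

Note that quadrupling the sample is needed to cut the margin of error in half, which is why pollsters rarely go far beyond roughly 1,000 to 1,500 interviews for a national poll.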

A national poll may be skewed if its respondents hail disproportionately from one region of the country. In earlier decades, pollsters had an easy time phoning households, Blumenthal notes, because 93 percent of homes had land-based telephones.

Today, pollsters relying on databases of landline telephones report getting responses from a disproportionate number of older, white voters. Younger people tend to rely on cell phones.

The lack of response from younger respondents may influence the accuracy of polls. Be wary, too, of polls released by campaigns: a campaign may be testing out new slogans, a new statement on a key issue or a new attack on an opponent.

Likewise, reporting on a survey by a special-interest group is tricky. For example, an environmental group trumpets a poll saying the American people support strong measures to protect the environment. That may be true, but the poll was conducted for a group with definite views. That may have swayed the question wording, the timing of the poll, the group interviewed and the order of the questions. You should carefully examine the poll to be certain that it accurately reflects public opinion and does not simply push a single viewpoint.

How many people were interviewed for the survey? Because polls give approximate answers, the more people interviewed in a scientific poll, the smaller the error due to the size of the sample, all other things being equal. A common trap to avoid is assuming that "more is automatically better."

How were those people chosen? The key reason that some polls reflect public opinion accurately and other polls are unscientific junk is how people were chosen to be interviewed. In scientific polls, the pollster uses a specific statistical method for picking respondents. In unscientific polls, the respondents pick themselves to participate. The method pollsters use to pick interviewees relies on a bedrock of mathematical reality: when the chance of selecting each person in the target population is known, then and only then do the results of the sample survey reflect the entire population.

This is called a random sample or a probability sample. This is the reason that interviews with 1,000 American adults can accurately reflect the opinions of more than 200 million American adults. Most scientific samples use special techniques to be economically feasible. For example, some sampling methods for telephone interviewing do not just pick randomly generated telephone numbers. Only telephone exchanges that are known to contain working residential numbers are selected, reducing the number of wasted calls.

This still produces a random sample. But samples of only listed telephone numbers do not produce a random sample of all working telephone numbers.
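
The claim that a small random sample reflects a huge population can be illustrated with a quick simulation. The population size and support rate below are made up; the point is that when every member has an equal, known chance of selection, 1,000 respondents land within a few points of the truth:

```python
import random

random.seed(7)

# Hypothetical population of 1,000,000 adults, 54% of whom
# support some proposition.
population = [1] * 540_000 + [0] * 460_000

# A probability sample: each adult has the same known chance
# of being selected.
sample = random.sample(population, 1000)
estimate = sum(sample) / len(sample)

print(f"true rate 0.54, sample estimate {estimate:.2f}")
```

The estimate typically falls within the roughly three-point sampling error expected for a sample of this size.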

But even a random sample cannot be purely random in practice as some people don't have phones, refuse to answer, or aren't home. Surveys conducted in countries other than the United States may use different but still valid scientific sampling techniques, for example, because relatively few residents have telephones. In surveys in other countries, the same questions about sampling should be asked before reporting a survey.

What area (nation, state or region) or what group (teachers, lawyers, Democratic voters, etc.) were the interviewees chosen from? It is absolutely critical to know from which group the interviewees were chosen.

You must know if a sample was drawn from among all adults in the United States, or just from those in one state or in one city, or from another group. For example, a survey of business people can reflect the opinions of business people — but not of all adults. Only if the interviewees were chosen from among all American adults can the poll reflect the opinions of all American adults. In the case of telephone samples, the population represented is that of people living in households with telephones.

For most purposes, telephone households are similar to the general population. But if you were reporting a poll on what it was like to be homeless, a telephone sample would not be appropriate. The increasingly widespread use of cell phones, particularly as the only phone in some households, may have an impact in the future on the ability of a telephone poll to accurately reflect a specific population.

Remember, the use of a scientific sampling technique does not mean that the correct population was interviewed. Political polls are especially sensitive to this issue. In pre-primary and pre-election polls, which people are chosen as the base for poll results is critical.

A poll of all adults, for example, is not very useful for a primary race where only 25 percent of the registered voters actually turn out. So look for polls based on registered voters, "likely voters," previous primary voters and such. These distinctions are important and should be included in the story, for one of the most difficult challenges in polling is trying to figure out who actually is going to vote.

The ease of conducting surveys in the United States is not duplicated around the world. It may not be possible or practical in some countries to conduct surveys of a random sample throughout the country. Surveys based on a smaller group than the entire population (such as a few larger cities) can still be reliable if reported correctly (as the views of those in the larger cities, not those of the whole country) and may be the only available data.

Are the results based on the answers of all the people interviewed? One of the easiest ways to misrepresent the results of a poll is to report the answers of only a subgroup.

For example, there is usually a substantial difference between the opinions of Democrats and Republicans on campaign-related matters. Reporting the opinions of only Democrats in a poll purported to be of all adults would substantially misrepresent the results. Poll results based on Democrats must be identified as such and should be reported as representing only Democratic opinions.

Of course, reporting on just one subgroup can be exactly the right course. In polling on a primary contest, it is the opinions of those who can vote in the primary that count — not those who cannot vote in that contest.

Primary polls should include only eligible primary voters. Who should have been interviewed and was not? Or do response rates matter? No survey ever reaches everyone who should have been interviewed. You ought to know what steps were undertaken to minimize non-response, such as the number of attempts to reach the appropriate respondent and over how many days.

There are many reasons why people who should have been interviewed were not. They may have refused attempts to interview them.

Or interviews may not have been attempted if people were not home when the interviewer called. Or there may have been a language problem or a hearing problem. In recent years, the percentage of people who respond to polls has diminished. There has been an increase in those who refuse to participate. Some of this is due to the increase in telemarketing and part is due to Caller ID and other technology that allows screening of incoming calls.

While this is a subject that concerns pollsters, so far careful study has found that these reduced response rates have not had a major impact on the accuracy of most public polls. Where possible, you should obtain the overall response rate from the pollster, calculated on a recognized basis such as the standards of the American Association for Public Opinion Research.

When was the poll done? Events have a dramatic impact on poll results. Your interpretation of a poll should depend on when it was conducted relative to key events. Even the freshest poll results can be overtaken by events. The President may have given a stirring speech to the nation, pictures of abuse of prisoners by the military may have been broadcast, the stock market may have crashed or an oil tanker may have sunk, spilling millions of gallons of crude on beautiful beaches.

Poll results that are several weeks or months old may be perfectly valid, but events may have erased any newsworthy relationship to current public opinion. How were the interviews conducted?

There are four main possibilities: in person, by telephone, online or by mail. Most surveys are conducted by telephone, with the calls made by interviewers from a central location. However, some surveys are still conducted by sending interviewers into people's homes to conduct the interviews. Some surveys are conducted by mail.

In scientific polls, the pollster picks the people to receive the mail questionnaires. The respondent fills out the questionnaire and returns it. Mail surveys can be excellent sources of information, but it takes weeks to do a mail survey, meaning that the results cannot be as timely as a telephone survey. And mail surveys can be subject to other kinds of errors, particularly extremely low response rates.

In many mail surveys, many more people fail to participate than do. This makes the results suspect. Surveys done in shopping malls, in stores or on the sidewalk may have their uses for their sponsors, but publishing the results in the media is not among them.

These approaches may yield interesting human-interest stories, but they should never be treated as if they represent public opinion. Advances in computer technology have allowed the development of computerized interviewing systems that dial the phone, play taped questions to a respondent and then record answers the person gives by punching numbers on the telephone keypad. Such surveys may be more vulnerable to significant problems including uncontrolled selection of respondents within the household, the ability of young children to complete the survey, and poor response rates.

Such problems should disqualify any survey from being used unless the journalist knows that the survey has proper respondent selection, verifiable age screening, and reasonable response rates. What about polls on the Internet or World Wide Web? The explosive growth of the Internet and the World Wide Web has given rise to an equally explosive growth in various types of online polls and surveys.

Online surveys can be scientific if the samples are drawn in the right way. Some online surveys start with a scientific national random sample and recruit participants while others just take anyone who volunteers. Online surveys need to be carefully evaluated before use. Several methods have been developed to sample the opinions of those who have online access. The fundamental rules of sampling still apply online: the pollster must select those who are asked to participate in the survey in a random fashion.

In those cases where the population of interest has nearly universal Internet access or where the pollster has carefully recruited from the entire population, online polls are candidates for reporting. However, even a survey that accurately sampled all those who have access to the Internet would still fall short of a poll of all Americans, as about one in three adults do not have Internet access. But many Internet polls are simply the latest variation on the pseudo-polls that have existed for many years.

Whether the effort is a click-on Web survey, a dial-in poll or a mail-in survey, the results should be ignored and not reported. All these pseudo-polls suffer from the same problem: the respondents are self-selected. The individuals choose themselves to take part in the poll — there is no pollster choosing the respondents to be interviewed. Remember, the purpose of a poll is to draw conclusions about the population, not about the sample.

In these pseudo-polls, there is no way to project the results to any larger group. Any similarity between the results of a pseudo-poll and a scientific survey is pure chance. In most such efforts, nothing is done to pick the respondents, to keep users from voting multiple times or to reach people who might not normally visit the Web site.
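
A simulation with made-up numbers shows why self-selection destroys projectability: if one side is merely more motivated to respond, a pseudo-poll misses badly even with thousands of responses, while a probability sample of 1,000 lands close to the truth:

```python
import random

random.seed(42)

# Hypothetical population: 40% actually approve of some proposal.
population = [1] * 400_000 + [0] * 600_000

# Probability sample: everyone has an equal, known chance of selection.
prob_sample = random.sample(population, 1000)

# Pseudo-poll: opponents are more motivated to click in, so each
# "no" voter is five times as likely to volunteer a response.
pseudo_sample = [v for v in population
                 if random.random() < (0.001 if v else 0.005)]

print(f"true approval:             40.0%")
print(f"probability sample:        {100 * sum(prob_sample) / len(prob_sample):.1f}%")
print(f"self-selected pseudo-poll: {100 * sum(pseudo_sample) / len(pseudo_sample):.1f}%")
```

Despite collecting roughly three times as many responses, the self-selected poll lands far from the true figure, and nothing in its raw numbers reveals that.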

The dial-in or click-in polls may be fine for deciding who should win on American Idol or which music video is the MTV Video of the Week. The opinions expressed may be real, but in sum the numbers are just entertainment. There is no way to tell who actually called in, how old they are, or how many times each person called. Never be fooled by the number of responses. In some cases a few people call in thousands of times. Even if hundreds of thousands of calls are tallied, no one has any real knowledge of what the results mean.

If big numbers impress you, remember that the Literary Digest's non-scientific sample of 2.4 million people said Landon would beat Roosevelt in the 1936 presidential election. Mail-in coupon polls are just as bad. In this case, the magazine or newspaper includes a coupon to be returned with the answers to the questions.

Again, there is no way to know who responded and how many times each person did. Another variation on the pseudo-poll comes as part of a fund-raising effort.

An organization sends out a letter with a survey form attached to a large list of people, asking for opinions and for the respondent to send money to support the organization or pay for tabulating the survey.

The questions are often loaded and the results of such an effort are always meaningless. This technique is used by a wide variety of organizations from political parties and special-interest groups to charitable organizations. Again, if the poll in question is part of a fund-raising pitch, pitch it — in the wastebasket. What is the sampling error for the poll results?

Interviews with a scientific sample of 1,000 adults can accurately reflect the opinions of more than 200 million American adults. That means interviews attempted with all of those adults — if such were possible — would give approximately the same results as a well-conducted survey based on 1,000 interviews.

What happens if another carefully done poll of 1,000 adults gives slightly different results from the first survey? Neither of the polls is "wrong." The difference between them reflects sampling error, which is not an "error" in the sense of making a mistake.

Rather, it is a measure of the possible range of approximation in the results because a sample was used. Pollsters express the degree of certainty of results based on a sample as a "confidence level." This does not address the issue of whether people cooperate with the survey, or whether the questions are understood, or whether any other methodological issue exists. The sampling error is only the portion of the potential error in a survey introduced by using a sample rather than interviewing the entire population.
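
The meaning of a confidence level can be seen by simulation. In this sketch (a hypothetical evenly split population, samples of 1,000), roughly 95 percent of repeated polls fall within the familiar plus-or-minus 3-point sampling error margin:

```python
import math
import random

random.seed(1)

true_support = 0.50
n = 1000
moe = 1.96 * math.sqrt(true_support * (1 - true_support) / n)  # about 0.031

# Run many independent "polls" of the same population.
polls = [sum(random.random() < true_support for _ in range(n)) / n
         for _ in range(2000)]

# Fraction of polls whose result lies within the margin of error.
within = sum(abs(p - true_support) <= moe for p in polls) / len(polls)
print(f"MOE = +/-{moe * 100:.1f} points; {within:.0%} of polls fell within it")
```

The remaining handful of polls land outside the margin purely by chance, which is why two well-conducted surveys can legitimately disagree.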

Sampling error tells us nothing about the refusals or those consistently unavailable for interview; it also tells us nothing about the biasing effects of a particular question wording or the bias a particular interviewer may inject into the interview situation. It also applies only to scientific surveys.

Remember that the sampling error margin applies to each figure in the results — it is at least 3 percentage points plus or minus for each one in our example. Thus, in a poll question matching two candidates for President, both figures are subject to sampling error. Sampling error raises one of the thorniest problems in the presentation of poll results: For a horse-race poll, when is one candidate really ahead of the other?

Certainly, if the gap between the two candidates is less than the sampling error margin, you should not say that one candidate is ahead of the other.

You can say the race is "close," the race is "roughly even," or there is "little difference between the candidates." And just as certainly, when the gap between the two candidates is equal to or more than twice the error margin — 6 percentage points in our example — and if there are only two candidates and no undecided voters, you can say with confidence that the poll says Candidate A is clearly leading Candidate B.

When the gap between the two candidates is more than the error margin but less than twice the error margin, you should say that Candidate A "is ahead," "has an advantage" or "holds an edge."

When there are more than two choices or undecided voters — virtually every poll in the real world — the question gets much more complicated. While the solution is statistically complex, you can fairly easily evaluate the situation by estimating the error margin. You can do that by taking the sum of the percentages for each of the two candidates in question and multiplying it by the total respondents for the survey (only the likely voters, if that is appropriate).

This number is now the effective sample size for your judgment. Look up the sampling error in a table of statistics for that reduced sample size, and apply it to the candidate percentages. If they overlap, then you do not know if one is ahead. If they do not, then you can make the judgment that one candidate has a lead. And bear in mind that when subgroup results are reported — women or blacks or young people — the sampling error margin for those figures is greater than for results based on the sample as a whole.
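The effective-sample-size procedure above can be sketched in a few lines. Here the sampling error for the reduced sample is computed from the standard formula rather than looked up in a table, and the candidate percentages are hypothetical:

```python
import math

def lead_is_significant(p_a, p_b, n_total, z=1.96):
    """Judge whether A's lead over B is outside sampling error,
    using the effective sample size of those choosing A or B."""
    n_eff = (p_a + p_b) * n_total
    # Margin of error for a proportion at the reduced sample size.
    moe = z * math.sqrt(0.25 / n_eff)
    # The two intervals overlap unless the gap exceeds twice the margin.
    return abs(p_a - p_b) > 2 * moe, moe

# Hypothetical horse race: 46% vs. 42% among 1,000 likely voters.
significant, moe = lead_is_significant(0.46, 0.42, 1000)
print(f"effective-sample MOE = +/-{moe * 100:.1f} points; clear lead: {significant}")
```

In this example the 4-point gap is smaller than twice the roughly 3.3-point margin at the effective sample size of 880, so the intervals overlap and the poll alone cannot name a leader.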


