With the 2024 election behind us, the seemingly daily reports on new polls have ended for now. But what do polls really mean? How accurate are they? What impact do they have on voters and elections? University of New Hampshire Professor Andrew Smith, director of the UNH Survey Center, talks with host Melanie Plenda about just that.
By Rosemary Ford and Caitlin Agnew
This article has been edited for length and clarity.
Melanie Plenda:
Where do polls come from and how long have we been using them to analyze political races?
Dr. Andrew Smith:
Polls have been around for an awfully long time. The oldest one that we've been able to identify in the United States is in the 1824 election. They were referred to as straw polls back then, but they were started by a newspaper, the Harrisburg Pennsylvanian, and it was done to, frankly, increase the number of people who bought the newspaper at that time and to help inform readers as to what people in the Harrisburg area thought about the 1824 election. Throughout the 1800s, we saw an expansion of these, particularly in the latter half of the 1800s as the penny press really developed in the United States.
By the turn of the century, there were quite a few national straw polls and well over 100 individual local-area straw polls. So it's not a new thing, but it's really important to remember that the media got into the polling business to sell newspapers. I think that's a critical thing to remember, because the reason polls sell newspapers is that people are always interested in what is going to happen. I think it sold newspapers then, and it's kind of clickbait for the press now to run polls.
Melanie Plenda:
That leads to my next question – what sort of an impact do they have on races and voters?
Dr. Andrew Smith:
There's very little research that shows that polls have much of an impact one way or the other on elections. There's always a fear that a poll showing one candidate leading by a large amount will either lead the supporters of the trailing candidate to give up and stay home, or cause the supporters of the leading candidate to decide they have already won and don't have to bother to go to the polls. But there's very, very little evidence that supports any of that.
The only real data that I've been able to identify that shows that polls had an impact was in the 1980 presidential election. What happened that year was that the exit polls were released early. This was the Carter v. Reagan election, and Jimmy Carter actually conceded defeat before the polls in California closed, and some Democratic congressmen in California asserted that they lost because many people who were going to vote after work decided, “What’s the point? The election is already over.” So that's really the only evidence that we have.
Melanie Plenda:
How has polling evolved in the past decades? I would imagine technology and changing social demographics have changed things.
Dr. Andrew Smith:
The polling industry is in the midst of a paradigm shift, and it's similar to what happened in the 1960s and 1970s, when polls moved from in-person surveys to telephone surveys. The technology for telephones improved, the coverage of telephones improved, so most households in the country had a telephone by the late 1960s. But it took a long time for researchers to come up with the best practices, the methodological strategies, in order to use this new technology.
It's important to remember that the big driver of that methodological change to telephone surveys in the 1960s and 1970s was cost: in-person surveys were an order of magnitude more expensive than telephone surveys, harder to manage, and slower when it came to organizing and analyzing the data. Telephones made that turnaround much shorter.
Now, with web surveys and the development of the internet, and with internet coverage expanding to most households in the country (or the internet plus a cell phone, which we can call the quasi-internet), cost is again changing how we reach people. An internet- or web-based poll costs far less because you don't have to have an interviewer. We're seeing the industry move to that, and the clients as well. The development of the internet, the development of cell phones, and declining response rates are, I think, the biggest drivers of this change in methodology.
So we're in this process where economics are driving us to change the way we do survey research, and we haven't developed the best practices as an industry yet to say this is how you should do it, this is the more accurate way, these are the procedures that lead to more accurate predictions in polling, and it's going to take several years before we're out of the woods with that. So I'm very cautious about it.
Melanie Plenda:
Let’s dig a little deeper. How can the average person tell a “good” from a “bad” poll?
Dr. Andrew Smith:
The human instinct is to trust the polls that show us the outcome that we prefer and say that the other one must have serious methodological flaws. But I think that's a bad way to approach it.
I would trust surveys more that start with a random sample, a probability-based survey. So if you see the word “probability” in the methodology section of the survey, which I would encourage everybody to read, I’d give that more weight. If it makes no mention of probability, that probably means there is no random sampling going on.
The second thing that I would do is look for something called a transparency initiative stamp, or a logo on that survey, or some indication that the organization is showing its work. AAPOR, the American Association for Public Opinion Research, recognizes that there are a lot of different methodologies out there, and its position is essentially: the best thing we can do is ask survey researchers to show us what they did. How did they draw their sample? Where did they get the sample from? How were the surveys collected? When were the surveys collected? Who's paying for the surveys? All of those sorts of things you need to take into account. If a survey does not have that transparency initiative seal of approval, I would be less willing to accept its results, because it shows the organization is less willing to show its work.
Melanie Plenda:
So talk to us a little bit more about why a sample group and the makeup of that sample group is so important.
Dr. Andrew Smith:
Well, what we try to do in surveys is draw a sample from the population that is representative of the population. By sampling at random, we can use the central limit theorem to say that the estimates we get from our sample are within a certain range of the actual population number we would get if we could go out and interview everyone in the population. Even with a random sample, you don't necessarily have one that's completely representative of the population. In fact, it's pretty much impossible to do that, but you want to be pretty close, and the central limit theorem at least allows us to say, within a range, how close we think our estimate is to the overall population value.
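To make that concrete, here is a minimal sketch (in Python, not from the interview) of the textbook margin-of-error calculation that the central limit theorem justifies; the 800-person sample and 52 percent support figure are illustrative assumptions:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p estimated from a simple
    random sample of size n (normal approximation via the central
    limit theorem); z = 1.96 is the 95% confidence multiplier."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative example: a poll of 800 respondents finds 52% support.
p_hat, n = 0.52, 800
moe = margin_of_error(p_hat, n)
print(f"Estimate: {p_hat:.0%} +/- {moe:.1%}")
# Prints: Estimate: 52% +/- 3.5%
# The true population share likely falls within this interval, but
# only if the sample was actually drawn at random.
```

Note that this calculation is only honest for a probability-based sample, which is why Smith suggests giving more weight to surveys whose methodology mentions random, probability-based sampling.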
Melanie Plenda:
That’s fascinating. Thank you for joining us and talking about the polls.
“The State We’re In” is a weekly digital public affairs show produced by NH PBS and The Marlin Fitzwater Center for Communications. It is shared with partners in the Granite State News Collaborative, of which both organizations are members. For more information, visit collaborativenh.org.