Wednesday, February 27, 2008

The One Question Polls Never Ask

I’ve often touched on the high correlation between the results of newspaper polls and the ideological orientations of the newspapers that conduct them. But the newspapers claim that their samples are randomly selected. So what’s going on?

It is important to remember that Japanese households with fixed phone lines generally subscribe to one of the four major newspapers. Moreover, the deliverymen (or -women) personally make the rounds each month to collect the subscription fees and every six or twelve months to renew the subscriptions, leaving boxes of detergents and other freebies as tokens of their appreciation. This engenders strong brand loyalty. A household tends to stick with one newspaper.

So, if people receive phone calls from any of the major newspapers and are asked to waste their time answering questions about their political preferences - remember, they’re (presumably) not getting any tangible benefit from this - they should be strongly inclined to respond only to the paper that they subscribe to.

If that line of reasoning is correct, a disproportionate number of the people who give answers to any of the four major newspapers must be subscribing to that particular newspaper. Their views, of course, are rooted in their worldview and understanding of the facts, which are in turn shaped over the course of many years, often decades, by the news coverage, editorials, and op-eds to which they have been exposed.

It would be easy to test this conjecture. All that the newspapers would have to do when taking polls would be to ask the following question:

What newspaper, if any, do you subscribe to?

Of course the newspapers will never do that. But one corollary of my conjecture is that non-newspaper polls will be more reflective of the public mind. We may have Yomiuri loyalty, we may have Yomiuri Giants loyalty, but we don’t have Nippon TV loyalty. And we certainly do not have wire service loyalty. That means that TV and wire service polls should be able to attract an ideologically more evenly distributed set of samples. So it is reasonable to think that TV and wire service poll numbers would fall somewhere between the Asahi and Yomiuri numbers and would be more consonant with election results. This is testable. I’m sure that a good statistician could check this out. Does anyone out there want to work with me on this?
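
As a rough illustration of the kind of check a statistician might run, here is a minimal sketch in Python. The approval figures below are invented placeholders - a real test would use the published cabinet-approval series from each outlet, and Kyodo is named only as an example of a wire service.

```python
# Minimal sketch of the proposed check, using invented cabinet-approval
# figures in place of real published poll data.

asahi   = [38, 35, 33, 31, 30]   # hypothetical Asahi approval ratings (%)
yomiuri = [46, 43, 41, 39, 37]   # hypothetical Yomiuri approval ratings (%)
kyodo   = [41, 40, 36, 35, 33]   # hypothetical wire-service (e.g. Kyodo) ratings (%)

# If the conjecture holds, the wire-service figure should usually fall
# between the Asahi and Yomiuri figures for the same polling period.
in_between = sum(min(a, y) <= k <= max(a, y)
                 for a, y, k in zip(asahi, yomiuri, kyodo))

print(f"Wire-service figure lies between the two papers' figures in "
      f"{in_between} of {len(kyodo)} polling periods")
```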

5 comments:

Jan Moren said...

Perhaps. It's certainly worth looking into, I agree on that.

Without knowing exactly what question script they are using, I would venture a second possibility: Newspaper polls may well formulate their questions using phrases, angles and terminology used in their own recent reporting, and asking about specifics that have been given prominence in their own paper. Their own readers will readily answer such questions in a clear manner, while non-readers will be more likely to waffle, ending up in the "don't know/no answer" column.

Most likely the difference is due to variations on leading questions, though. Even if the main question was phrased the same, previous questions can shift the way the main question is perceived, so just knowing the main question is not enough. As long as the papers don't release the actual questionnaires it's pretty much impossible to know where the discrepancy comes from.

Anonymous said...

I like your explanation. Very clever. (Though having subscribed to a Japanese paper, Asahi, I suspect that brand loyalty may be less important than not wanting to have to deal with the distributor in keeping people connected to their current newspaper.)

And while parsing variation in polls is very difficult given variation in survey questions, question order, timing etc., it would be interesting to pull together a simple comparison and we could just assume differences were either randomly distributed or purposefully constructed to arrive at the distinctions you hypothesize. IOW, we could just test for correlations, both over time between different newspaper-linked surveys, and at points in time for the effect you expect to see. For that, is there any site that compiles newspaper survey results on, say, cabinet approval?
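
A rough sketch of what such a comparison could look like, with invented approval numbers standing in for real published series:

```python
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient (no external libraries)."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical cabinet-approval series (%); a real comparison would use the
# monthly figures each outlet actually published over the same period.
series = {
    "Asahi":   [38, 35, 33, 31, 30],
    "Yomiuri": [46, 43, 41, 39, 37],
    "Kyodo":   [41, 40, 36, 35, 33],
}

outlets = list(series)
for i, a in enumerate(outlets):
    for b in outlets[i + 1:]:
        print(f"{a} vs {b}: r = {pearson(series[a], series[b]):.2f}")
```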

Another interesting comparison would be on the accuracy of election forecasts. For that one would probably look narrowly at something like hirei-ku (proportional representation) survey forecasts against actual vote shares.

Can the data be found somewhere?

Jun Okumura said...

It’s always good to hear from people who have the technical background.

You’re right, Janne. Any serious work would have to answer the kind of fundamental questions that you pose. To do that, you would have to collect all the questionnaires, interview the people responsible for conducting these polls, and analyze the information. I do think, though, that my conjecture is more likely to be the main, if not the sole, explanation.

The hardcopy versions of the dailies as well as the online Sankei carry the full set of questions and results. The online Yomiuri also carries the questions, but not the results. I’d have to check and see if the others release their questions online somewhere. Based on what I’ve seen, the questions appear to be straightforward and without any particular bias. Given the logistics of making hundreds, possibly thousands, of phone calls to reach the 300 respondents who will actually take the time to answer (or at least listen to) the full set of questions, it is difficult to imagine enough polltakers systematically making extraneous statements to create enduring, consistent biases across the four dailies.

The dailies also cover mostly the same subjects, apparently using more or less the same terminology. The phrasing is also fairly uniform. The angles do reflect each paper’s political and other preferences, but I don’t think that there’s enough difference there to generate a large number of I-don’t-knows among non-subscribing respondents, though it’s certainly a possibility worth exploring.

Having said all that, I repeat: your questions must be addressed if and when this matter is fully investigated. Actually, I’m surprised that this is not a subject of public discussion here. I can’t shake the feeling that I’m missing out on something.

Ross: Perhaps I should have avoided the term “brand loyalty” and looked for something broader that also covers the fact that we are creatures of custom. In any case, the long-term effects must be similar.

I’m sure that academic and research institutions that deal with public opinion as well as PR agencies must be compiling the questionnaires and results, but I doubt that there is anything available online that is satisfactorily comprehensive and sustained over time. I think Gerry Curtis would be a good person to ask; in fact, he may have the answers already. If not, he should be able to find a Japanese graduate student in the neighborhood who would be happy to do the grunt work as part of a co-authorship deal.

I agree that election forecasts provide another interesting subject. I suspect that they are less susceptible to this kind of analysis though.

Unknown said...

I had always thought that public opinion surveys list not only the number of people they call, but the number of people who actually answer. So if you were to look at the difference in those figures, it might give a boundary for the problem as you describe it. If Asahi subscribers are refusing to answer Yomiuri surveys, it would be included in that figure.
Also, I seem to recall that the book 『日本の世論』 (Nihon no Yoron, "Japanese Public Opinion") from 読売新聞社世論調査部 (the Yomiuri Shimbun Public Opinion Survey Department) listed not only all the questions on their surveys, but the answers to each question. The copy I saw was from 2002, but maybe there are more recent editions.

Jun Okumura said...

Thanks, James. You're very helpful. I'll have to get out there and look it up. Having said that, unless I can hook up with someone who has the technical expertise, I'm not going to be able to go very far beyond the kind of thing I do on the blog.

I do hope that I can find the number of people that they call. For one thing, if my conjecture is correct, the smaller the circulation of the newspaper, the more phone calls its polltakers should have to make in order to reach a given number of respondents. If there are no such discernible differences, I'll have to begin looking more closely at the possibilities that Janne has put forward.
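
To put rough numbers on that expectation: if only a paper's own subscribers agree to answer its poll (the conjecture, not an established fact), then the calls needed to collect a fixed number of respondents scale inversely with the paper's subscriber share. The shares and response rate in the sketch below are invented purely for illustration.

```python
# Toy calculation of the calls-needed implication of the conjecture.
# Subscriber shares and the response rate are invented for illustration;
# they are not real figures.

TARGET = 300          # completed interviews the pollster wants
RESPONSE_RATE = 0.6   # assumed chance a reached subscriber agrees to answer

subscriber_share = {  # hypothetical share of phone-reachable households
    "Yomiuri": 0.30,
    "Asahi":   0.25,
    "Sankei":  0.06,
}

for paper, share in subscriber_share.items():
    calls_needed = TARGET / (share * RESPONSE_RATE)
    print(f"{paper}: roughly {calls_needed:,.0f} calls to reach {TARGET} respondents")
```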