I believed that it was highly likely that The Japan Times’s survey was skewed against Abe because its readership, from which the sample was drawn, was skewed against him. I thought that point was obvious when I used the Boston sports bar simile. After all, it’s the bar’s patrons that you interview, not the (inanimate) bar itself; the stand-in for The Japan Times was the location, the city of Boston. (A Boston sports bar could, of course, have plenty of Red Sox memorabilia and the like, and the bartender would likely be a Bill Simmons type, which would surely have an effect on the tendencies of its patrons—but I digress.) And I think that disinterested parties will agree that David McNeill conceded as much, at least as far as the past was concerned, when he tweeted, “The JT is under new management, doncha know?”
Ironically,
I’m the one who’s not so sure now. I had assumed that the responders to the
online survey would reflect years of getting their news on Abe and other
Japanese conservatives from the pages of JT. But nothing should have been
further from the truth. After all, the poll was supposed to reflect the views
of Japanese voters. If my casual observation on trains is any indication,
Japanese readers of JT are usually students or salaryhumans intent on improving
their English, not necessarily the most politically motivated segment of the
Japanese public.
But that’s where my doubts about the sampling process come in. I did not write “300-plus” casually. I remembered the number from the unusually hard-to-find JT article as being in the high 300s. This is not insignificant. Reputable polls typically use samples of 300, 500, and, what, 2,000? 3,000? Anyway, a sample in the high 300s was anomalous, and the difference statistically negligible—this layman’s understanding is that the gain in precision from a randomized sample of 500 over one of 300 is fairly small. Thus, I suspected—still do—that the sampling process was self-selecting, which suggested a sample skewed towards the highly motivated with a cosmopolitan outlook.
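To put rough numbers on that layman’s understanding, here is a minimal back-of-the-envelope sketch in Python. It assumes the textbook case of a simple random sample, which a self-selecting online poll emphatically is not, and computes the conventional 95 percent margin of error at the worst-case proportion of 50 percent for a few sample sizes:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # 95% margin of error for a simple random sample of size n,
    # evaluated at the worst-case proportion p = 0.5
    return z * math.sqrt(p * (1 - p) / n)

for n in (300, 400, 500, 1000, 2000):
    print(f"n = {n:>4}: +/- {margin_of_error(n) * 100:.1f} points")
```

Run as written, this prints roughly ±5.7 points for n = 300, ±4.9 for n = 400, ±4.4 for n = 500, and ±3.1 for n = 1,000; that is the sense in which going from 300 to the high 300s, or even to 500, buys relatively little precision. And none of it matters anyway if the respondents selected themselves.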
Of
course, there’s no way of knowing that unless we know how the survey was
conducted. Heck, we don’t even know if the sample consisted largely of Japanese
nationals. So I went and looked, but to my consternation I could not find the JT article again. I decided to tweet anyway, working from memory. Note, though, that raising the number to the 600s does not alleviate my doubts at all. If anything, it exacerbates them.
And
here’s where the responsibility of the journalist comes in. When a journalist
uses a poll number to illustrate a point, he/she’d better be sure of its
provenance, particularly when it is conducted by a media outlet whose
resources/capability to conduct a robust survey may be doubtful. It would be dangerous to cite such a poll without confirming how it was conducted and being prepared to answer questions about its veracity.
That
is why I find the fact that the comment “Feel free to cite your own survey on
high support for Abe-Trump bromance” is coming from a journalist rather
disturbing. As people who know my thinking know well, I believe that one difference between scientists and journalists (and pundits, if you insist) is that the former use evidence to test hypotheses while the latter use it to
illustrate narratives. This is not a knock against journalists, investigative
journalists included. It’s an acknowledgement of the multitude of constraints
that journalists face when filing reports for the public. And it’s not as if
there isn’t a huge gray area in between that is inhabited by practitioners of
the soft sciences. But it does highlight the need for conscientious journalists
to be meticulous in weighing their evidence from all feasible angles. An invitation to pick a survey of one’s choice is an invitation to adopt standards more befitting, say, Fox and Friends or, if that example is not to your liking, SNL (I know, I know).
Finally,
for what it’s worth, my interest is not in whether a given poll is to my liking—though it feels good when it is—but in questions like, “Why do the LDP and its administration do better in Yomiuri polls than in Asahi polls—except when they don’t?” The real money, by the way, is in the second
part of the question. Now, let’s see if McNeill really meant it when he wrote,
“Always pleasantly surprised by intelligent Twitter exchanges.”
Sidebar:
I’ve noticed that people rarely acknowledge their mistakes on Twitter. Instead,
they usually just go away, at which point, I’ve decided, I’ll just declare victory and move on. Even a “not necessarily” without commentary is better than
par for the course, doncha think?
(Typos corrected, same day.)