ABSTRACT

I wish political scientists could impart to journalists, in particular, a bit of statistical theory and survey practice to inform their reporting of poll results. For the media, survey results can serve as filler on slow news days, can add color to otherwise prosaic stories, or indeed can even BE the story, as when presidential popularity falls to such a low point that the incumbent’s very ability to govern is questioned. And some media reporting of surveys is of fairly high quality: the media often report on trends or on differences among subgroups, implicitly acknowledging that a given marginal (a single topline percentage) doesn’t tell us much about “what the people think,” but that change over time or relative differences can be meaningful. After decades of reporting point estimates only, they also now often give a “margin of error” with the results. On the other hand, media treatments rarely note that the margin of error is larger for subgroups, so that apparent differences between, say, black women and black men may really be indistinguishable from zero. Certainly there is little acknowledgment that factors such as question wording, item order, and differences in likely-voter and other screens might influence survey responses (not to mention nonresponse bias and threats to random sampling, polling difficulties with which we political scientists are still grappling). As fascinating and informative as poll results can be, I would feel better if they came with more complete descriptions and cautions for readers among the public.
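
The subgroup point is simple arithmetic, and a minimal sketch makes it concrete. The Python snippet below uses the standard 95% margin-of-error formula for a proportion under simple random sampling; the sample sizes (a poll of 1,000 with a subgroup of 150) are hypothetical, chosen only for illustration.

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # 95% margin of error for a proportion p from a simple
        # random sample of size n; p = 0.5 gives the conservative
        # (largest) margin, the figure polls usually report.
        return z * math.sqrt(p * (1 - p) / n)

    # Hypothetical poll: 1,000 respondents, subgroup of 150.
    full_sample = margin_of_error(1000)  # ~0.031, i.e., +/- 3.1 points
    subgroup = margin_of_error(150)      # ~0.080, i.e., +/- 8.0 points
    print(f"Full sample: +/-{full_sample:.1%}  Subgroup: +/-{subgroup:.1%}")

Because the margin scales with 1/sqrt(n), the subgroup’s margin here is more than twice the full sample’s, so two subgroup estimates a few points apart are statistically indistinguishable, which is exactly the caution urged above.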