Jul 9, 2013

Mis-measuring American irreligion

Americans, polls report, are convinced that America is fast losing religion.

Earlier this year, Gallup found that 77 percent of Americans agree with the statement, "religion is losing its influence on American life." Some thought that was good, others bad, but most agreed it was happening.

Part of that belief, at least, is due to the Pew studies and subsequent headlines that have trumpeted the recent, rapid rise of the religiously unaffiliated, the "nones," in the United States. This has been one of the big stories in religion in America in the last few years.

Those reports are problematic in how they simplify the "nones," which is something I've looked at. What it means when someone says they're not religious should be interrogated more fully than it usually is. But the studies do show a change. The question is what the change means, not whether or not there is something shifting in American religiosity.

Or maybe not. Maybe the studies aren't to be trusted.

The methodology of the studies that have reported on a rapid rise in American "nones" has recently been called into question. Perhaps, it has been suggested, American religiousness isn't changing, it is just being mis-measured.

According to Rodney Stark -- a sociologist of religion who's done important work but who has also been "out in left field" on some things -- there are serious problems with the major studies that show a decline in Americans' religious affiliation. It's a problem that he suggests undermines the whole story of "the rise of the 'nones.'"

In a Wall Street Journal opinion piece, Stark writes:
When I was a young sociologist at Berkeley's Survey Research Center, it was assumed that any survey that failed to interview at least 85% of those originally drawn into the sample was not to be trusted. Those who refused to take part in the survey or could not be reached were known to be different from those who did take part. Consequently, studies were expected to report their completion rates. 
Today, even the most reputable studies seldom reach more than a third of those initially selected to be surveyed and, probably for that reason, completion rates are now rarely reported. The Pew Forum researchers are to be commended for reporting their actual completion rates, which by 2012 had fallen to 9%.

Given all of this, only one thing is really certain: Those who take part in any survey are not a random selection of the population. They also tend to be less educated and less affluent. Contrary to the common wisdom, research has long demonstrated that this demographic group is the one least likely to belong to a church.

As the less-affluent and less-educated have made up a bigger share of those surveyed, so has the number of those who report having no religion.
Maybe the much-touted change is a matter of mis-measurement?

Or maybe not. There are reasons to believe that the measurements are reliable.

This isn't a new or unheard-of issue that Stark is raising, and current polling practices are calibrated to deal with it. Dropping response rates and completion rates have been documented by pollsters, including Pew. Polling experts have been thinking about this problem for a while, and have developed methodology they think counterbalances the possible problem of bias and of over- or under-sampling certain groups.

They have done this, according to statistician Nate Silver, "mostly through the 'magic' of demographic weighting."

This means that the people who do respond are taken as representative of their demographic. Perhaps a poll reaches and gets responses from fewer Hispanics than their share of the population. The pollsters can re-calibrate, counting each Hispanic response as representative of three, ten, etc. This is the logic by which polls work anyway -- i.e., they're "representative" -- so while it might seem that demographic weighting is "cheating," it's part of the process, part of the analysis that's necessary to produce those numbers.
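The arithmetic behind this kind of weighting can be sketched in a few lines. This is a minimal illustration, not Pew's actual procedure, and all the shares and rates below are made-up numbers for the sake of the example: a group's weight is simply its share of the population divided by its share of the sample.

```python
# Sketch of demographic weighting (post-stratification).
# All numbers below are hypothetical, chosen only to illustrate the idea.

# Suppose the population is 17% Hispanic, but only 10% of the people
# who actually answered the survey are Hispanic.
population_share = {"hispanic": 0.17, "non_hispanic": 0.83}
sample_share = {"hispanic": 0.10, "non_hispanic": 0.90}

# Weight = population share / sample share.
# Here each Hispanic response counts for 1.7 responses.
weights = {
    group: population_share[group] / sample_share[group]
    for group in population_share
}

# Hypothetical fraction in each group answering "no religion."
group_rate = {"hispanic": 0.20, "non_hispanic": 0.25}

# Unweighted estimate: just the average over who happened to respond.
raw_estimate = sum(sample_share[g] * group_rate[g] for g in group_rate)

# Weighted estimate: each group contributes in proportion to its
# population share, not its sample share.
weighted_estimate = sum(
    sample_share[g] * weights[g] * group_rate[g] for g in group_rate
)

print(round(raw_estimate, 4))       # 0.245  (skewed toward over-sampled group)
print(round(weighted_estimate, 4))  # 0.2415 (corrected to population shares)
```

The point of the sketch is only that under-sampling a group doesn't doom the estimate, provided the respondents you do reach within that group resemble the ones you didn't -- which is exactly the assumption Stark is questioning.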

The people at Pew, where the studies in question were done, have thought about the question Stark is raising and have responded at length. Citing multiple studies on the matter of response rates, Pew concludes that "carefully conducted polls continue to obtain representative samples of the public and provide accurate data about the views and experiences of Americans."

Where polls' accuracy can be checked, for example with elections, this methodology, which is really a defense against exactly what Stark is criticizing, has been shown to work. Silver calls it the "uncanny accuracy of polls." Looking at voter surveys prior to elections specifically, Pew found "there is little to suggest that those who do not participate hold substantially different views" than those who do.

Perhaps questions of religious affiliation are different. Stark, at least, seems to think so. Given the research that's gone into the methodological question he raises, though, and the continued good-enough accuracy of polls in other, more easily verified, areas, the declining poll response rates aren't enough, in and of themselves, to seriously call the reports of declining religious affiliation into question.

It's necessary to be careful and thoughtful in interpreting the reports of rapidly rising rates of irreligion. They probably don't mean as much as they're commonly thought to mean. However that change should ultimately be understood, though, it seems likely that the reports that there is a change are reliable.


  1. Rodney Stark is, at this point, a polemicist whose discussions of the "science" behind religious research almost always serve an agenda, to the great detriment of actual science. To be fair to him, he's not the only social scientist who is alarmed by low response rates, but as you point out, their critiques have been thoroughly dealt with. Anyone who questions Nate Silver's use of very-low-response-rate surveys should look at his results. Hint - it's not luck. But back to Stark, I'd offer two examples.

    In his current work he relies heavily on American church-attendance rates that are self-reported (ironically, to the very same organizations he criticizes for low response rates when it serves him, e.g. Gallup). Yet Mark Chavez and others have shown rather emphatically that Americans exaggerate church attendance to upwards of two times their actual rates. And yes, this is done on anonymous surveys. I'm less familiar with the literature in this regard, but it's also possible that they exaggerate a whole slew of attributes that are seen in their social contexts to be positive. So here the science doesn't suit him and he flat out ignores it.

    Another rather atrocious example involves his public defense of Mark Regnerus' recent anti-gay parenting study as "scientific." (See - http://www.baylorisr.org/2012/06/a-social-scientific-response-to-the-regnerus-controversy/) Please note that I am not criticizing him for his personal views of social issues but for his abuse of science in order to defend them. The "science" behind Regnerus' study and particularly its conclusions is so bad that the American Psychological Association has denounced it and the president of the American Sociological Association (among a large group of concerned sociologists) is calling for it to be retracted, something that literally never happens in sociology.

    So there you have it...