Reply to Robert Carroll's
Skeptic Dictionary Newsletter
Welcome Skeptic Dictionary readers. This is a response to errors in Robert Carroll's Skeptic Dictionary newsletter. If you're already familiar with the issue, you may want to skip directly to the reply.
In 2004, in the Skeptic Dictionary newsletter #41, Robert quoted an excerpt from The Facts, my site on second hand smoke. Someone told me about it, I took a look, thought “that's pretty cool,” and then forgot about it.
Note: This conversation may be confusing if you are unfamiliar with the way Relative Risks (RR) and Confidence Intervals (CI) work. They're explained about halfway down this page.
In Newsletter 61 he changed his mind, and went back and edited the content of #41. The changes are in red print (his convention).
The website I link to for comments on the WHO study claims the following:
Fact: The study found a Relative Risk (RR) for spousal exposure of 1.16, with a Confidence Interval (CI) of .93 - 1.44. In layman's terms, that means
• Exposure to the ETS from a spouse increases the risk of getting lung cancer by 16%.
• Where you'd normally find 100 cases of lung cancer, you'd find 116.
• The 1.16 number is not statistically significant.
Yes, it is, unless you follow the recommendation of the tobacco industry and Jim Tozzi.
An RR that tiny should raise any skeptic's eyebrows, but that's not what makes it insignificant. The confidence interval, referenced very clearly just a few lines above, contains 1.0. That is the very definition of statistically insignificant.
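The confidence-interval point above reduces to a one-line check. Here's a minimal Python sketch (the figures are the WHO numbers quoted above; the function name is my own):

```python
# Significance check for a relative risk: if the confidence
# interval contains 1.0, the result is not statistically significant.
def is_significant(ci_lower, ci_upper):
    """A relative risk is significant only if its CI excludes 1.0."""
    return not (ci_lower <= 1.0 <= ci_upper)

# The WHO spousal-exposure figures quoted above: RR 1.16, CI 0.93 - 1.44
rr, lower, upper = 1.16, 0.93, 1.44
print(is_significant(lower, upper))  # False: the CI includes 1.0
```

The size of the RR never enters the function: a 1.16 with a CI of 1.05-1.30 would be significant, and a 3.0 with a CI of 0.8-9.0 would not be.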
I get a lot of e-mail from nicotine nannies who insist anyone who disagrees with them either has ties to Big Tobacco or is being duped by them, but I had expected better from Mr. Carroll.
Someone e-mailed me a copy of the newsletter. I intended to address it, forgot about it, found it a few months later, and sent Mr. Carroll the following e-mail.
Someone sent me a copy of newsletter #61, and I've been meaning to reply to it for a while. Although I haven't signed up for the newsletter, I've enjoyed your web site for quite some time, and also liked your interview on Skepticality.
However, even the best of skeptics can get fooled from time to time, and when it comes to second hand smoke, I'm afraid that you've been taken in by one of the biggest scams since homeopathy.
While there is no requirement that a risk be 2.0 or better to be considered important, it is a rule of thumb some epidemiologists use. Epidemiology is a pretty crude science, because human behavior is difficult to measure accurately. Errors are common, especially in studies that rely on surveys: sometimes people lie, sometimes they misunderstand the question, and recall bias may lead them to provide wrong data unintentionally. Results may be caused by an unaccounted-for confounder. Researcher bias may creep in even if the researcher is honest (and in too many cases they aren't). Even small mistakes can create big errors, especially when you're dealing with relatively rare illnesses and/or small sample sizes.
So small RRs are usually nothing to get very excited about. The fact that writers you don't like, such as Milloy and Brignell, point that out doesn't make it inaccurate.
I'm very surprised you put any faith in the EPA study that started this whole thing. First off, it was a meta-analysis, the very easiest type of study to fake and manipulate. That alone should make you sit up and take notice. But there are far, far more problems with the study:
There were 34 studies on the effects of SHS available at the time. The EPA cherry picked 11 of them, including one that wasn't finished.
In spite of that, they were not able to find any statistically significant connection between SHS and lung cancer. So they changed their confidence interval from 95% to 90% - in essence they doubled their margin of error. (I'm not aware of any other study that has done that.)
This gave them an RR where the CI didn't include 1.0. Yay! But the number was still a pathetically small 1500 people out of a population of 280,000,000. Because their cherry picking left them only with studies of women, they just said "Hey, we can double that number to include men!" To be specific, they added 1000 for men, and then 500 for, as far as I can figure, the hell of it, to get up to 3,000 people.
So, in short, they ignored two-thirds of the data. When that didn't give them the results they had announced before the study, they doubled their margin of error. Then they doubled THAT number to come up with a minuscule number of victims of SHS.
After the Congressional Research Service lambasted the study, a federal judge with a history of siding with the government on tobacco issues issued a 92-page ruling outlining all the fraud in the report. 92 pages!
And that's just one study. You'll find that most studies on the subject are flawed, many seriously, and a large percentage of them are conducted by anti-smoker activists. Nearly all of them are funded by anti-smoker organizations. And even with all that, they continue to find very tiny RRs, and CIs that usually come damn close to including 1.0, which would make the finding statistically insignificant.
Thanks for linking to my site, but you made a rather large error about the WHO study, when you claim that the 1.16 number is statistically significant. It's not the size of the RR that makes it insignificant, it's the spread of the Confidence Interval. The CI of .93 to 1.44 includes 1.0, and *that* is what makes it statistically insignificant. The WHO even admitted this in the body of their press release, after lying about their findings in the headline.
Combine the tiny RRs, the bias of the researchers, and the #1 rule of statistics (correlation can not prove causation) and you'll find that proof of any connection between SHS and illness is flimsier than the pages in a creationist's pocket Bible.
A while ago, Covance Labs did a study where non-smoking subjects wore portable air pumps that "breathed." They discovered that people who live and work in smoky environments inhale the smoke of about six cigarettes *per year.* Oak Ridge National Labs repeated the study, and came up with similar results. Considering that smokers usually have to smoke tens of thousands of cigarettes per year for a decade or two before it sickens them (and it only sickens some of them) does it make sense that the tiny amount of smoke inhaled by bystanders would result in killing so many of them?
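The dose comparison above is easy to sanity-check with simple arithmetic. A rough Python sketch, where the pack-a-day figure is my own illustrative assumption, not a number from either study:

```python
# Rough dose comparison, assuming a pack-a-day smoker (an
# illustrative assumption, not a figure from the Covance study).
smoker_per_year = 20 * 365         # one pack (20 cigarettes) a day
bystander_per_year = 6             # the Covance/Oak Ridge estimate cited above
print(smoker_per_year)             # 7300 cigarettes per year
print(round(smoker_per_year / bystander_per_year))  # ~1217 times the bystander's dose
```

On that assumption, a pack-a-day smoker inhales on the order of a thousand times the smoke a bystander in a smoky environment does, which is the biological-plausibility gap the paragraph above is pointing at.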
Here's something you might find amusing: I wrote to some of the leading nicotine nannies, both organizations and individuals, and asked them to name three people who have died from SHS. Considering their claims that it's killed over a million people over the past twenty years, they should be able to come up with a short list. (Consider how long the list of people who have died as a result of smoking would be.) None of them could. Only one of them really tried, and the information he gave me was fraudulent. And you'll find his name on that EPA report. http://www.davehitt.com/2004/name_three.html I realize this isn't really the way statistics work, but I thought the results were both entertaining and provided a good insight into the nanny mindset.
I urge you to re-revisit this subject. (You got it right the first time.) And also to be extra skeptical when the only proof for a claim is statistical, because that can never be real proof.
And thanks for the Skeptics Dictionary. It's a great resource, despite the occasional error.
I thought we'd bat the issue around a bit in e-mail, maybe agree, maybe not, but when I didn't hear back from him I shrugged it off, figuring he's a busy guy. Time passed, and I thought about writing him again, but before I got to it someone wrote to me about Newsletter 64, which contained the following.
Dave Hitt responded to Newsletter 61, in which I admitted I was wrong to have supported Penn & Teller's claim that the EPA report that claims that 3,000 people a year die from lung cancer because of secondhand smoke was bogus. "When it comes to secondhand smoke," writes Mr. Hitt, "I'm afraid that you've been taken in by one of the biggest scams since homeopathy." He writes:

I'm very surprised you put any faith in the EPA study that started this whole thing. First off, it was a meta-analysis, the very easiest type of study to fake and manipulate. That alone should make you sit up and take notice. But there are far, far more problems with the study: There were 34 studies on the effects of SHS [secondhand smoke] available at the time. The EPA cherry picked 11 of them, including one that wasn't finished. ... So, in short, they ignored 2/3 of the data.
You are poisoning the well with your comment on meta-analysis. Any kind of study can be faked and manipulated. So what? The fact that the EPA did not include all studies that were done on SHS is not proof of manipulation or cherry picking. Not all studies are created equal. Anyone doing a meta-analysis on any subject has to evaluate the quality of the studies and make a judgment as to which studies should be excluded. Some studies are poorly designed; some are very small. Some are double-blinded; some are not. Some are well documented; others are poorly documented. And so on. To prove manipulation, you need to show that most of the 23 studies not included in the EPA meta-study were excellent studies that should have been included. To say the EPA "ignored" 2/3 of the data is to distort what happened. Some data should be ignored because it was not properly obtained or the samples were too small. There are many other reasons why some studies should not be included in a meta-analysis.
Mr. Hitt continues:

In spite of that, they were not able to find any statistically significant connection between SHS and lung cancer. So they changed their confidence interval from 95% to 90% - in essence they doubled their margin of error.
The 95% confidence interval is arbitrary. The most commonly used P-value in the social sciences and medical studies is P<0.05, where there is a one in twenty chance that the result is a statistical fluke. This standard can be traced back to the 1930s and R. A. Fisher. There is nothing sacred about this standard. Technically, this has nothing to do with "margin of error," but it does double the chances from 1 in 20 (or 5%) to 1 in 10 (or 10%) that the result is a statistical fluke.
Mr. Hitt continues:

After the Congressional Research Service lambasted the study, a federal judge with a history of siding with the government on tobacco issues issued a 92-page ruling outlining all the fraud in the report. 92 pages!
This is completely irrelevant to whether the EPA study is in fact fatally flawed. As is the following:

A while ago, Covance Labs did a study where non-smoking subjects wore portable air pumps that "breathed." They discovered that people who live and work in smoky environments inhale the smoke of about six cigarettes *per year.* Oak Ridge National Labs repeated the study, and came up with similar results. Considering that smokers usually have to smoke tens of thousands of cigarettes per year for a decade or two before it sickens them (and it only sickens some of them) does it make sense that the tiny amount of smoke inhaled by bystanders would result in killing so many of them?
In conclusion, Mr. Hitt writes:

Here's something you might find amusing: I wrote to some of the leading nicotine nannies, both organizations and individuals, and asked them to name three people who have died from SHS. Considering their claims that it's killed over a million people over the past twenty years, they should be able to come up with a short list. (Consider how long the list of people who have died as a result of smoking would be.) None of them could. Only one of them really tried, and the information he gave me was fraudulent. And you'll find his name on that EPA report. http://www.davehitt.com/2004/name_three.html I realize this isn't really the way statistics work, but I thought the results were both entertaining and provided a good insight into the nanny mindset.
Requesting the names of three people is cute, but the fact that these organizations can't name three is irrelevant to whether anyone has died from SHS. (Logicians call this the fallacy of argumentum ad ignorantiam.) Can you name three people who died in the Spanish American War? Is that relevant to whether the claim is true that many died in that war?
I wrote back to him and asked where I could meet him in a public forum and hash this out. He replied that he didn't participate in any forums, but offered me the opportunity for a rebuttal in his next newsletter, with or without his commentary, or a link to my reply on a web page if my response was too long. I'm guessing it is, so here's my answer to him.
Dear Mr. Carroll,
When I wrote to you I thought we'd bat things around a bit in e-mail, and maybe (or maybe not) come to a meeting of the minds. But instead of a personal reply to a personal e-mail, you chose to put it in your newsletter. While there's nothing wrong with that, it was really tacky. Not bothering to tell me about it is tacky squared.
You completely ignored the most important point in the letter: the reason the RR was statistically insignificant in the WHO study was because the CI included 1.0. I see that error is still in your edited newsletter #41. BTW, I have since changed the text on my site a bit to make it even easier to see what makes that number insignificant.
The reference to the ease of faking meta-analysis is something skeptics should consider. The fact that other kinds of studies can also be faked has no bearing on this conversation – we're not talking about other kinds of studies. And it's a minor point, which you harped on far more than necessary.
You assume I don't know what I'm talking about when I speak of cherry picking. I've had my own copy of the study since shortly after it came out - in fact it's getting a bit dog-eared. (The EPA sends them out for no charge.) They did reject one Japanese study because of the methodology. They reference many of the others at various places, but only used 11 for their final numbers. I know they cherry picked because I read the report. This was also confirmed by Judge Osteen's report, which specifically used the phrase.
The nature of statistics means any p-value is arbitrary. You could use that argument to justify a value of .25, or .50. The larger the number, the more likely the study is meaningless. The standard, which has been used for decades, is .05. Why would a skeptic justify doubling that number, especially on an already shaky study?
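To make the 95%-versus-90% point concrete: the confidence level determines the critical z-score, and a lower level shrinks the interval, making it easier for the same data to exclude 1.0. A quick sketch using only Python's standard library:

```python
# How the choice of confidence level changes the interval width:
# a 90% interval uses a smaller critical z than a 95% one, so it is
# narrower - and therefore more likely to exclude 1.0 for the same data.
from statistics import NormalDist

z95 = NormalDist().inv_cdf(0.975)   # two-sided 95% critical value, ~1.96
z90 = NormalDist().inv_cdf(0.95)    # two-sided 90% critical value, ~1.645
print(round(z95, 3), round(z90, 3))  # 1.96 1.645
print(round(z90 / z95, 3))           # 0.839: the 90% interval is ~84% as wide
```

Nothing about the underlying data changes when the level is lowered; only the bar for calling a result "significant" moves.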
Two different independent government agencies looked at this study and found it seriously flawed. They each provided specific reasons for their conclusions and examples of the flaws. It took Judge Osteen 92 pages to detail it, and he quit before he got to the last chapter. And you say this has no bearing on the question of the study's flaws. What?
The “six cigarettes per year” studies are most certainly relevant to the report – they speak to the issue of biological plausibility.
Your final statement, about the Name Three article, was the most annoying and borders on childish. If you had read the article it would have been clear this was a mischievous exercise to get under the skin of the nicotine nannies, to have some fun at their expense and in the process get to see how shady they are. I was very clear that I knew my question wasn't good science, and you ignored that clarity. You replied with a lecture on why my approach wasn't statistically valid. No kidding; I already said that.
I'm surprised and disappointed, Robert. Before this, I had a lot of respect for you and your site. Now, not so much.
P.S. I was also disappointed to see the links you recommended in newsletter 61. Evidently you've been taken in by the Helena study, even though its massive fraud should be obvious from a brief glance at the abstract. I was even more disappointed to see you quote ASH as a source. ASH is the most hateful of the anti-smoker organizations. They are actively working on getting smokers kicked out of their homes, fired from their jobs, and banned from smoking on sidewalks and streets. Using ASH as a reference on second hand smoke is akin to asking the Klan to comment on the book “The Bell Curve.”
- - -
Bored yet? Me too.
Try something more entertaining.