Science and peer review
In our daily exposure to science, we've been told that there exists a quality metric that allows us to distinguish between "bad science" and "good science": peer review. Scientific works are sent to certain publications, where editors then ask other scientists in (hopefully) related fields for their opinions, and if the paper "passes" it gets published. Some publications are taken to be "better" than others at this, and there's a sense of pride and justification among scientists depending on how many papers they've managed to get published through the peer review process, and where.
The only problem is that peer review seemingly doesn't work, and cannot be used as a quality metric. A good example popped up yesterday, when Michael Mann of the infamous "Mann hockey stick" (seen in Al Gore's movie about global warming) got a paper published in the highly regarded journal Nature. (Subscription required; a news article can be found here.)
What's interesting about this paper, besides it having been contradicted one day before publication by the National Oceanic and Atmospheric Administration (NOAA) in an equally peer-reviewed publication, is that it apparently received harsh criticism during the peer review process from at least one reviewer, Chris Landsea. Thankfully, he's written an open letter to Mann for everyone to read on the subject; search for "open letter" in the comments here (well worth the hassle). In short, there seems to be no basis for Mann's claims, neither in the published paper nor in the press around it.
So. We've apparently got bad science published in a well-respected publication, having gone through a peer review process in which one of the reviewers in effect stated that the paper didn't support its own conclusions.
Another area where we can find peer review is open source. No editors, no elite selection of reviewers: anyone can publish, anyone can spot faults.
It’s likely the better version.