Peer review (colleagues evaluating a manuscript submitted for publication in some journal) can be “single-blind” (reviewers see the names and affiliations of the paper’s authors) or “double-blind” (this information is hidden from the reviewers). Noting that computer-science research often appears first (or exclusively) in peer-reviewed conferences rather than journals, the authors [see attached preprint] examined these two reviewing models in the context of expert committee members reviewing full-length submissions for acceptance. They present a controlled experiment in which four committee members review each paper. Two of these four reviewers are drawn from a pool of committee members with access to author information; the other two are drawn from a disjoint pool without such access. This information asymmetry persists through the process of bidding for papers, reviewing them, and entering scores.
Once papers were allocated to reviewers, single-blind reviewers were found to be statistically significantly more likely than their double-blind counterparts to recommend for acceptance papers from famous authors, top universities, and top companies. The estimated odds multipliers are substantial: 1.63 for “famous authors,” 1.58 for “top universities,” and 2.10 for “top companies.” These findings remind me of a study section (which I’ll keep anonymous) on which I served long ago: some of the participants were actually debating the “merits” and “prestige” of Harvard Dental School versus the Harvard Medical School complex!
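To make those odds multipliers concrete, here is a minimal sketch (the baseline acceptance rate of 25% is a hypothetical illustration, not a figure from the paper) showing how an odds ratio translates into a change in acceptance probability:

```python
def apply_odds_ratio(p: float, odds_ratio: float) -> float:
    """Multiply the odds implied by probability p by odds_ratio,
    then convert back to a probability."""
    odds = p / (1.0 - p)          # probability -> odds
    new_odds = odds * odds_ratio  # scale the odds
    return new_odds / (1.0 + new_odds)  # odds -> probability

# Hypothetical baseline: a 25% chance of a positive recommendation.
baseline = 0.25
for label, ratio in [("famous authors", 1.63),
                     ("top universities", 1.58),
                     ("top companies", 2.10)]:
    boosted = apply_odds_ratio(baseline, ratio)
    print(f"{label}: {baseline:.0%} -> {boosted:.1%}")
```

Under this illustrative baseline, an odds multiplier of 1.63 raises a 25% chance of a positive recommendation to roughly 35%, which conveys how consequential these effects can be near typical conference acceptance rates.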
Proc Natl Acad Sci USA http://www.pnas.org/cgi/doi/10.1073/pnas.1707323114