In 2005, a group of MIT graduate students decided to goof off in a very MIT graduate student way: They created a program called SCIgen that randomly generated fake scientific papers. Thanks to SCIgen, for the last several years, computer-written gobbledygook has been routinely published in scientific journals and conference proceedings.
On the one hand, it’s impressive that computer programs are now good enough to create passable gibberish. (You can entertain yourself by trying to distinguish real science from nonsense on quiz sites like this one.) But the wide acceptance of these papers by respected journals is symptomatic of a deeper dysfunction in scientific publishing, in which quantitative measures of citation have acquired an importance that is distorting the practice of science.
Over the course of the second half of the 20th century, two things took place. First, academic publishing became an enormously lucrative business. And second, because administrators erroneously believed it to be a means of objective measurement, the advancement of academic careers became conditional on contributions to the business of academic publishing.
As Peter Higgs said after he won last year’s Nobel Prize in physics, “Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.” Jens Skou, a 1997 Nobel Laureate, put it this way in his Nobel biographical statement: today’s system puts pressure on scientists for “too fast publication, and to publish too short papers, and the evaluation process use[s] a lot of manpower. It does not give time to become absorbed in a problem as the previous system [did].”
h/t to Ogged on Unfogged, who quips: “As a humanities guy, let me say that I guess this means airplanes can’t fly.”
Dr. Dawg sounds a note of caution amid the schadenfreude explosion, one that may well apply to my own heading for this post:
It’s hard not to laugh at scientists being hoist by this profusion of exploding petards. But we shouldn’t. At issue here, as Fish pointed out, is a question of trust. It’s relatively easy to punk people if they trust you in the first place. Good faith is assumed when academic papers are submitted to peers: even the rigours of peer review may fail to uncover an elaborate ruse, simply because no such thing is expected. That’s true, as it now turns out, for scientific disciplines as well as social science and literary ones.
it’s hard to see how defrauding the public, or any portion of it, can have beneficially[sic] results in the long run. The generation of various knowledges is a social activity, after all: how can we see the deliberate violation of trust between people as anything other than harmful and disruptive to it?
So, even though I now eagerly await the long-overdue general acknowledgement that economics journals have been publishing metric buttloads of gobbledygook bafflegab for many decades, serving as credibility-generators for a self-spinning, jargon-laden fraud held up as a foundation for government policies that have repeatedly been shown to destroy broadly-based prosperity and financial security, I promise to eschew unseemly gloating should that day ever arrive.