Few scientists fabricate results from scratch or flatly plagiarize the work of others, but a surprising number engage in troubling degrees of fact-bending or deceit, according to the first large-scale survey of scientific misbehavior.
More than 5 percent of scientists answering a confidential questionnaire admitted to having tossed out data because the information contradicted their previous research or said they had circumvented some human research protections.
Ten percent admitted they had inappropriately included their names or those of others as authors on published research reports.
And more than 15 percent admitted they had changed a study’s design or results to satisfy a sponsor, or ignored observations because they had a “gut feeling” they were inaccurate.
None of those failings qualifies as outright scientific misconduct under the strict definition used by federal regulators. But they could take at least as large a toll on science as the rare, high-profile cases of clear-cut falsification, said Brian Martinson, an investigator with the HealthPartners Research Foundation in Minneapolis, who led the study appearing in today’s issue of the journal Nature.
“The fraud cases are explosive and can be very damaging to public trust,” Martinson said. “But these other kinds of things can be more corrosive to science, especially since they’re so common.”
The new survey also hints that much scientific misconduct is the result of frustrations and injustices built into the modern system of scientific rewards. The findings could have profound implications for efforts to reduce misconduct — demanding more focus on fixing systemic problems and less on identifying and weeding out individual “bad apple” scientists.
“Science has changed a lot in terms of its competitiveness, the level of funding and the commercial pressures on scientists,” Martinson said. “We’ve turned science into a big business but failed to note that some of the rules of science don’t fit well with that model.”
Scientific dishonesty has long been a simmering concern. Many suspect, for example, that Gregor Mendel, the Austrian monk whose plant-breeding experiments revealed with suspicious precision the basic laws of genetics, cooked his numbers along with his peas.
In recent decades a handful of cases have risen to the level of popular attention — the most famous, perhaps, involving David Baltimore, the Nobel laureate who in the mid-1980s heatedly defended his laboratory’s honor in a series of scathing congressional hearings led by Rep. John D. Dingell (D-Mich.).
The prevalence of research misconduct has been uncertain, however, in part because the definitions of acceptable behavior have shifted. For scientists working with federal grant money, that issue got settled five years ago when the Office of Research Integrity — part of the Department of Health and Human Services — drafted a formal definition: “fabrication, falsification or plagiarism in proposing, performing or reviewing research, or in reporting research results.”
About a dozen federally funded scientists a year are found to have breached that “FFP” standard — a tiny fraction of the scientific workforce — and punishment generally involves a ban on further federal grants. But no one had conducted a major survey asking scientists whether they are guilty of major misconduct or lesser, but arguably still serious, ethics lapses.
Martinson and two colleagues — Melissa Anderson and Raymond de Vries, both of the University of Minnesota — sent a survey to thousands of scientists funded by the National Institutes of Health and tallied the replies from the 3,247 who responded anonymously.
Just 0.3 percent admitted to faking research data, and 1.4 percent admitted to plagiarism. But lesser violations were far more common, including 4.7 percent who admitted to publishing the same data in two or more publications to beef up their résumés and 13.5 percent who used research designs they knew would not give accurate results.
Susan Ehringhaus, associate general counsel of the Association of American Medical Colleges, which has developed programs to enhance research ethics, said she welcomed the results. Her organization does not favor redefining federal research misconduct to include the many variants included in the survey, she said. However, she said, “we fully support the development of standards that go beyond the federal definition” for internal enforcement by academic or other institutions.
A preliminary analysis of other questions in the survey, not yet published, suggests a link between misconduct and the extent to which scientists feel the system of peer review for grants and advancement is unfair. That suggests those aging systems need to be revised, the researchers said.
“Scientists say, ‘This is nuts,’ so they break the rules, and then respect for the rules diminishes,” de Vries said. “If scientists feel that the process isn’t fair and the rich get richer and the rest get nothing, then perhaps we have to think how we can reallocate resources for science.”
Rick Weiss, Washington Post