
Impact Factor Distortions


[Date: 2013-05-22] Source: sciencemag Posted by: ecphf

 

Bruce Alberts
Bruce Alberts is Editor-in-Chief of Science.

This Editorial coincides with the release of the San Francisco Declaration on Research Assessment (DORA), the outcome of a gathering of concerned scientists at the December 2012 meeting of the American Society for Cell Biology.* To correct distortions in the evaluation of scientific research, DORA aims to stop the use of the "journal impact factor" in judging an individual scientist's work. The Declaration states that the impact factor must not be used as "a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion, or funding decisions." DORA also provides a list of specific actions, targeted at improving the way scientific publications are assessed, to be taken by funding agencies, institutions, publishers, researchers, and the organizations that supply metrics. These recommendations have thus far been endorsed by more than 150 leading scientists and 75 scientific organizations, including the American Association for the Advancement of Science (the publisher of Science). Here are some reasons why:
The impact factor, a number calculated annually for each scientific journal based on the average number of times its articles have been referenced in other articles, was never intended to be used to evaluate individual scientists, but rather as a measure of journal quality. However, it has been increasingly misused in this way, with scientists now being ranked by weighting each of their publications according to the impact factor of the journal in which it appeared. For this reason, I have seen curricula vitae in which a scientist annotates each of his or her publications with its journal impact factor listed to three significant decimal places (for example, 11.345). And in some nations, publication in a journal with an impact factor below 5.0 is officially of zero value. As frequently pointed out by leading scientists, this impact factor mania makes no sense.†
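The editorial does not spell out the arithmetic, but the commonly used formulation is a two-year ratio: citations received in a given year to a journal's articles from the previous two years, divided by the number of citable items the journal published in those two years. A minimal sketch in Python, with hypothetical figures chosen only to illustrate the kind of three-decimal number quoted above:

def impact_factor(citations, citable_items):
    # Two-year impact factor for year Y: citations received in Y to
    # articles published in Y-1 and Y-2, divided by the number of
    # citable items published in Y-1 and Y-2.
    return citations / citable_items

# Hypothetical journal: 2,269 citations in 2012 to its 2010-2011 output
# of 200 citable items gives 11.345, the sort of three-decimal figure
# that ends up annotated on a curriculum vitae.
print(round(impact_factor(2269, 200), 3))  # 11.345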
The misuse of the journal impact factor is highly destructive, inviting a gaming of the metric that can bias journals against publishing important papers in fields (such as social sciences and ecology) that are much less cited than others (such as biomedicine). And it wastes the time of scientists by overloading highly cited journals such as Science with inappropriate submissions from researchers who are desperate to gain points from their evaluators.‡
But perhaps the most destructive result of any automated scoring of a researcher's quality is the "me-too science" that it encourages. Any evaluation system in which the mere number of a researcher's publications increases his or her score creates a strong disincentive to pursue risky and potentially groundbreaking work, because it takes years to create a new approach in a new experimental context, during which no publications should be expected. Such metrics further block innovation because they encourage scientists to work in areas of science that are already highly populated, as it is only in these fields that large numbers of scientists can be expected to reference one's work, no matter how outstanding. Thus, for example, in my own field of cell biology, new tools now allow powerful approaches to understanding how a large single-celled organism such as the ciliate Stentor can precisely pattern its surface, creating organlike features that are presently associated only with multicellular organisms.§ The answers are likely to bring new insights into how all cells operate, including our own. But only the very bravest of young scientists can be expected to venture into such a poorly populated research area, unless automated numerical evaluations of individuals are eliminated.
The DORA recommendations are critical for keeping science healthy. As a bottom line, the leaders of the scientific enterprise must accept full responsibility for thoughtfully analyzing the scientific contributions of other researchers. To do so in a meaningful way requires the actual reading of a small selected set of each researcher's publications, a task that must not be passed by default to journal editors.
 
