Bibliometric indicators currently used to assess the quality of researchers, articles, and scientific journals have serious structural problems. Many authors have noted the weakness of citation counts: they are purely quantitative and do not differentiate between high- and low-quality citations. If a paper's reputation is evaluated simply by the number of its citations, then incomplete, incorrect, or controversial articles may be promoted regardless of their relevance. This creates perverse incentives: researchers may publish many incorrect or incomplete papers in order to achieve high impact indices. It is therefore essential to improve the objective criteria used for automatic assessment of article quality. Obtaining such criteria, however, requires advances in the automated detection of the context, polarity, and function of bibliographic references.
We present an overview of the relevant general concepts and review contributions that address these problems, with the aim of identifying trends and suggesting possible directions for future research.