
SciPRank: A quantitative method for evaluation of Research personnel in scientific setup

Abstract

Research done by scientists is valuable and priceless, and comparing researchers is like comparing apples to oranges, as it is a subjective matter. However, in certain circumstances it becomes inevitable to do so.

Ranking research personnel becomes essential for considerations such as appraisal and promotion. In this article, we propose a method to compare and evaluate scientists and researchers in academia, industry, or other scientific setups.

Background

A journal’s relevance is generally estimated by its impact factor (Web of Science) and the total number of citations of articles published in it. However, the significance of individual researchers cannot be inferred from journal-level impact, and ranking scientists is a difficult task. Several scientometric measures have been proposed to rank scientific personnel and researchers (Gao, Wang, Li, Zhang, & Zeng, 2016; Garfield, 2006; Sugimoto & Larivière, 2017; Thwaites, 2014; Van Noorden, 2010), including the Hirsch index (H-index) (Hirsch, 2005) and PageRank (Senanayake, Piraveenan, & Zomaya, 2015). The H-index is the most widely used for ranking scientific researchers. It is the largest number h such that an author has h publications, each cited at least h times. However, the H-index has some shortcomings, including the exclusion of recent articles and the overweighting of old publications (Vavryčuk, 2018; Zerem, 2017). For example, if a scientist’s H-index is 20, he or she has published at least 20 articles cited at least 20 times each, but recent articles with fewer than 20 citations do not count toward the index. Further, there are other ranking methods commonly used to apportion credit among individual authors, including sequence-determines-credit, first-author-emphasis, and corresponding-author-emphasis. However, such counting methods may advantage some universities and underrepresent others in university rankings (Lin, Huang, & Chen, 2013). In this article, we propose an unbiased method (SciPRank) to evaluate the research achievements of scientists.
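For concreteness, the H-index computation just described can be sketched in a few lines of Python. This is a standard textbook implementation, not part of SciPRank itself:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Five papers cited [10, 8, 5, 4, 3] times give an H-index of 4.
print(h_index([10, 8, 5, 4, 3]))      # 4
# A new article with only 2 citations leaves the index unchanged,
# illustrating how the H-index ignores recent, little-cited work.
print(h_index([10, 8, 5, 4, 3, 2]))   # 4
```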

The Method: SciPRank

The SciPRank method is based on evaluating the quality of research rather than its quantity. SciPRank also does not evaluate the individual contribution of each author to an article, the reasoning being that authors are generally listed according to their contribution. In journals where authors are listed alphabetically or randomly, it is also not justified to give 100 % to the first author, 50 % to the corresponding author, and 50 % to the rest of the authors, as suggested by Zerem (2017). Generally, a whole project is directed by the corresponding author, so the research experience of corresponding authors is not comparable to that of the other authors on a paper, especially when evaluating the lifetime scientific achievements of a scientist.

The 2018 Version of SciPRank is as follows:

SciPRank uses five factors to rank researchers, accounting for their inventions and their applications. These factors are explained as follows:

  1. Background

Premise 1: People working in the same area of research as their education are more likely to have a greater understanding of the topic they are working on.

Premise 2: However, SciPRank is flexible. You can reverse Premise 1: people who work in a field different from their education are more likely to deliver because they are more passionate about that topic.

You may include the associated institute as part of the background. For example, a professor working on bioinformatics at the Tata Institute of Social Sciences may be compared with a professor working on bioinformatics at the All India Institute of Medical Sciences (AIIMS), and the available research infrastructure can be made a factor in deciding who has a better chance of producing stronger results in bioinformatics.

However, you may also argue that the former professor, if he or she manages to do bioinformatics research while working at the Tata Institute of Social Sciences, might be up to something great. So this factor, too, can be reversed.

Which background parameters are used should be a subjective matter to be decided by the jury.

2. Number of Publications on the subject matter

The number of papers published on a given subject may seem a direct indication of the pace and quality of the research being done at a particular lab or by particular personnel. At first sight it may appear to be a direct correlation; however, it is not.

Consider a case from bioinformatics: if one researcher publishes 500 papers that are not very applicable or of direct use to the world, while another publishes a single paper inventing the FASTA format (so prevalent that most people do not even cite its original inventor), should the two be compared as equals? The answer is no. In this case, what would be a quantitative way to compare them?

Here we introduce the Use Factor (UF). UF can be used to compare the applications of studies quantitatively. For example, if we assign a UF of 5000 to the single paper inventing the FASTA format, and a UF of 1 to each of 1000 review articles, we have a solid quantitative metric with which to compare them (Table 1). UF is a subjective parameter; therefore, the numbers used to grade UF are to be decided by the jury.

Table 1 Relevance of UF in ranking scientists.

Case                     No. of publications   Use Factor (UF)   Score   Score w/o UF
Review articles on XYZ   1000                  1                 1000    1000
Research paper on XYZ    1                     5000              5000    1
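The scoring in Table 1 can be written out as a small sketch, assuming (as the table implies) that a body of work scores publications × UF. The function name is ours, and the UF values are the jury-assigned examples from the table:

```python
def uf_score(num_publications, use_factor=1):
    """Score a body of work; use_factor=1 reduces to a plain paper count."""
    return num_publications * use_factor

# Table 1: 1000 review articles (UF = 1) vs. one research paper (UF = 5000).
print(uf_score(1000, use_factor=1))     # 1000
print(uf_score(1, use_factor=5000))     # 5000

# Without UF, raw paper counts would rank the two the other way around:
print(uf_score(1000), uf_score(1))      # 1000 1
```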

 

3. Associations with widely used/popular in-use product/Services

The ultimate goal of a researcher is to invent something novel; we have to take this into account. To do so, a Novelty Factor (NF) is introduced. The score should be based on the number of such inventions and their Use Factor. An example is the patent on the internal combustion engine (which is widely used) versus “A quantitative method for evaluation of Research personnel in scientific setup” (this paper), which may or may not have an impact.

If we assign a novelty factor of 1 to our paper and a novelty factor of 5000 to the IC engine patent, we again have a solid quantitative metric to measure and compare them (Table 2).

Table 2 Relevance of NF in the comparison of research work.

Case                    No. of papers   Novelty Factor (NF)   Score   Score w/o NF
Our paper               1               1                     1       1
A patent on IC engine   1               5000                  5000    1

4. Impact on Further Research

Some research works enable further research and empower other researchers to invent. Such enabling works can lead to further results with high UF, possibly in different fields, and are therefore equally important. The impact on further research can be added to the total score either per paper or as an overall score for the individual.

5. Citations

Citing research is an important aspect of academic practice. Citations reflect, on one hand, an author’s scientific knowledge and, on the other, the applications of the cited research. Therefore, citation count is also an essential factor for ranking scientific personnel. Unlike the H-index, SciPRank counts the citations of all articles published by the personnel, whether old or new.
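As an illustration only, the five factors above could be folded into a single score. The additive form, the per-paper fields, and all numbers below are our assumptions for demonstration; SciPRank deliberately leaves the actual grading of UF, NF, and the background factor to the jury:

```python
from dataclasses import dataclass

@dataclass
class Paper:
    use_factor: float        # factor 2: jury-assigned UF
    novelty_factor: float    # factor 3: jury-assigned NF
    enabling_impact: float   # factor 4: impact on further research
    citations: int           # factor 5: all citations, old or new

def sciprank(papers, background_score=0.0):
    """Sum per-paper factor scores plus a jury-assigned background score (factor 1)."""
    total = background_score
    for p in papers:
        total += p.use_factor + p.novelty_factor + p.enabling_impact + p.citations
    return total

# One high-impact paper vs. 1000 low-impact review articles:
fasta_like = [Paper(use_factor=5000, novelty_factor=5000, enabling_impact=1000, citations=50)]
reviews = [Paper(use_factor=1, novelty_factor=1, enabling_impact=0, citations=2)] * 1000

print(sciprank(fasta_like))   # 11050.0
print(sciprank(reviews))      # 4000.0
```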

Conclusion

The current methods of ranking scientists are unsatisfactory: they overlook the quality of research and focus mainly on quantity, especially at a time when plenty of predatory journals advertise fake impact factors. The proposed method, SciPRank, evaluates the achievements of scientific personnel more appropriately, based on five important factors. However, SciPRank does not include author-weighted contributions. Multi-author articles are the product of teamwork, and it is difficult to assign credit points to each author without knowing the percentage of each contribution. In this regard, we suggest that all journals implement a mandatory description of each author’s contribution. That would make it easier to devise a proper metric for evaluating each author’s contribution. Comparing the research experiences of authors, however, is not justified.

SciPRank can be used to evaluate scientists’ performance over the years without discrepancies. It prioritizes the quality of research and considers innovation and applications rather than quantity. We will be actively updating the method as we receive further input.

References

  • Vavryčuk, V. (2018). Fair ranking of researchers and research teams. PloS One, 13(4). https://doi.org/10.1371/journal.pone.0195509
  • Gao, C., Wang, Z., Li, X., Zhang, Z., & Zeng, W. (2016). PR-Index: Using the h-Index and PageRank for Determining True Impact. PLOS ONE, 11(9), e0161755. https://doi.org/10.1371/journal.pone.0161755
  • Garfield, E. (2006). Citation indexes for science. A new dimension in documentation through association of ideas. International Epidemiological Association International Journal of Epidemiology, 35, 1123–1127. https://doi.org/10.1093/ije/dyl189
  • Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569–16572. https://doi.org/10.1073/pnas.0507655102
  • Lin, C.-S., Huang, M.-H., & Chen, D.-Z. (2013). The influences of counting methods on university rankings based on paper count and citation count. Journal of Informetrics, 7, 611–621. https://doi.org/10.1016/j.joi.2013.03.007
  • Senanayake, U., Piraveenan, M., & Zomaya, A. (2015). The Pagerank-Index: Going beyond Citation Counts in Quantifying Scientific Impact of Researchers. PLOS ONE, 10(8), e0134794. https://doi.org/10.1371/journal.pone.0134794
  • Sugimoto, C. R., & Larivière, V. (2017, July 26). Altmetrics: Broadening Impact or Amplifying Voices? ACS Central Science. American Chemical Society. https://doi.org/10.1021/acscentsci.7b00249
  • Thwaites, T. (2014). Calling science to account. Nature, 511(7510), S57–S60. https://doi.org/10.1038/511S57a
  • Van Noorden, R. (2010, June 17). Metrics: A profusion of measures. Nature. https://doi.org/10.1038/465864a
  • Zerem, E. (2017, November 1). The ranking of scientists based on scientific publications assessment. Journal of Biomedical Informatics. Academic Press Inc. https://doi.org/10.1016/j.jbi.2017.10.007
Tariq is a professional Software Developer at IQL. His areas of expertise include algorithm design, phylogenetics, microarray analysis, plant systematics, and genome data analysis. If you have questions, reach out to him via ResearchGate.
Muniba is a bioinformatician based at the South China University of Technology. She has cutting-edge knowledge of bioinformatics tools, algorithms, and drug design. When she is not reading, she can be found enjoying time with her family.


HOW TO CITE THIS ARTICLE Tariq Abdullah and Muniba Faiza (2020). SciPRank: A quantitative method for evaluation of Research personnel in scientific setup. Bioinformatics Review, 6 (04)