Year: 2019 | Volume: 7 | Issue: 3 | Page: 85-86
How to assess quality of journals and researches (Part II)
Barun K Nayak
Department of Ophthalmology, P D Hinduja National Hospital and Medical Research Centre, Mumbai, Maharashtra, India
Date of Web Publication: 11-Dec-2019
Source of Support: None, Conflict of Interest: None
How to cite this article: Nayak BK. How to assess quality of journals and researches (Part II). J Clin Ophthalmol Res 2019;7:85-6.
I discussed the impact factor (IF) in the previous issue, in Part I of this editorial. In the present editorial, I will discuss some of the alternative metrics available for judging the quality of journals and researchers. The IF is based only on data from Web of Science; hence, a large chunk of data is left out, such as unpublished research, other datasets, presentations, hyperlinks, and web pages. In the United States of America, only 15%–20% of authors have published articles that were referenced by others. The skewness of citation data is apparent from the fact that only 20% of the articles published in 2013 and 2014 in the Journal of Informetrics contributed 55% of the citations in 2015 that were counted toward the 2015 IF. The IF probably reflects impact mainly on the research community, not on clinicians, government funding agencies, or the general public. In developing countries like India, this is compounded by the fact that full-time researchers are sparse, and the majority are clinicians performing research as a secondary responsibility. Researchers from these countries do not easily find a place in journals with a high IF and do not get due recognition. Hence, there is a need to explore other methods of evaluating the value of publications.
Elsevier maintains its database, Scopus. Some of the indices derived from it are impact per publication (IPP), source normalized impact per paper (SNIP), and SCImago Journal Rank (SJR). The IPP of a journal is the average number of citations received in a year by all the articles published in that journal in the three preceding years. SNIP is the ratio of the IPP (numerator) to the citation potential of the database (denominator). SJR is essentially similar to the IF, but each citation is weighted by the value of the citing journal. Google Scholar is another huge database; it calculates the h-index, which can be journal based or individual researcher based. It is calculated from the total number of papers published and the number of citations each paper has received. To remove bias due to excessive self-citation, h-index calculations are available both with all citations and after removing all self-citations. All these metrics are citation based and carry limitations similar to those of the IF, in varying degrees.
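The h-index mentioned above has a simple definition: it is the largest h such that the researcher (or journal) has h papers, each cited at least h times. A minimal sketch, assuming a plain list of per-paper citation counts (the function name and sample figures are illustrative, not drawn from any real database):

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h papers have been cited at least h times each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:  # this paper still supports an h of `rank`
            h = rank
        else:
            break
    return h

# Illustrative: five papers cited 10, 8, 5, 4, and 3 times
# give an h-index of 4 (four papers with at least 4 citations).
print(h_index([10, 8, 5, 4, 3]))
```

The same function yields the self-citation-corrected variant described above if each paper's count is reduced by its self-citations before the list is passed in.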
It is now realized that the research spectrum has expanded tremendously in the last couple of years, but the measurement of impact has remained stationary. Researchers, research administrators, funding agencies, corporate R and D segments, and publishers have different requirements for assessing research, but there is no common source from which all of them can extract whatever is relevant to their needs. Activity on social media is also increasing in the research community. ResearchGate is becoming quite popular owing to its ease of navigation and simplicity, and it requires no review or fees. It is influential in promoting innovation in developing countries and helps connect scientists with their peers there. In one study of 160 individuals at Delhi University who were using social media as a research tool, ResearchGate was found to be the most popular. The ResearchGate score of a researcher measures scientific reputation based on how the work is received by peers, taking into consideration all possible sources, such as published articles, unpublished articles, projects, and the questions and answers discussed by that researcher.
Academia.edu is a useful site that provides a platform for academics to share their research papers. It helps in the social networking of academics, with a mission to accelerate the world's research. It also provides analytics about the impact of the researcher. “Project Cupcake” is another emerging concept that provides not just a single metric but much of the information a researcher would like to know. It can provide additional information about how a journal handles articles after submission: How many rounds does it take to get a decision? What is the acceptance rate? What is the time to rejection? It can also provide technical information, such as the quality of typesetting.
One should also understand that a biomedical database may contain bibliographic records, citations, and full text, either all together or in varying combinations. There are many alternatives that go beyond citations; “CiteScore,” “Dimensions,” and “Altmetrics” are some important examples. CiteScore indicates the average citations received per document by the articles published in the three previous years in a title, which can be a peer-reviewed journal, conference proceedings, book series, or trade journal. It is transparent, comprehensive, and free, and it draws on Scopus, the largest database of peer-reviewed literature. Suppose 'x' citations were received in the year 2018 by all the articles published in 2015, 2016, and 2017, and the total number of articles published in those three years was 'y'; the CiteScore for 2018 would be x/y.
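The x/y calculation above can be sketched in a few lines; the function name and the sample figures below are illustrative assumptions, not real journal data:

```python
def cite_score(citations_received, documents_published):
    """CiteScore for year Y: citations received in Y (x) to items
    published in the three preceding years, divided by the number
    of items published in those years (y)."""
    if documents_published == 0:
        raise ValueError("no documents in the three-year publication window")
    return citations_received / documents_published

# Illustrative: 240 citations received in 2018 to 120 articles
# published during 2015-2017 gives a CiteScore of 2.0.
print(cite_score(240, 120))
```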
Digital Science realized the need for a comprehensive and flexible dataset for all concerned in the field of research. Six Digital Science portfolio companies decided to create a common pool named “Dimensions.” These companies are ReadCube, Altmetric, Figshare, Symplectic, DS Consultancy, and ÜberResearch. This new platform is easy to use and intuitive, and it combines data that are invaluable for all groups of people involved in research. Dimensions can also surface connections between clinical trials, publications, grants, policy documents, and patents. Dimensions forms a state-of-the-art platform built around the needs of research organizations, researchers, funders, and publishers. It removes barriers to access, even to siloed data, at a low cost, and it provides avenues for developing new metrics.
Altmetrics are another alternative, providing qualitative data about a publication; they are not a replacement for but a complement to citation-based metrics. They incorporate data from multiple sources, such as peer reviews on Faculty of 1000, research blogs, discussions, media coverage, citations in public policy documents, citations on Wikipedia, mentions on Twitter and other social networks, and bookmarks on reference managers like Mendeley. Altmetrics are not a single class of indicator; they include a record of attention as well as a measure of dissemination, and they also indicate influence and impact. Altmetrics have several advantages over citation-based metrics: they accumulate more quickly and capture more diverse impact; they are not limited to journal articles and books but also cover activity in other areas, such as social media, discussions, comments, and policy documents. Altmetrics providers have also taken measures to prevent the gaming that is possible with the IF.
It is clear from the foregoing discussion that there is no “one size fits all,” and some of these metrics are complementary to each other. Further description is beyond the scope of this editorial; however, readers who want detailed information are encouraged to consult suitable sources, as this field is going to expand exponentially in the future. The purpose of this editorial is to introduce the concept of alternative metrics beyond the IF. With this information, researchers can choose the appropriate metrics based on their requirements and resources.
Financial support and sponsorship
Conflicts of interest
There are no conflicts of interest.
References
Nayak BK. How to assess quality of journals and researches (Part I). J Clin Ophthalmol Res 2019;7:35-6. [Full text]
Waltman L, Traag VA. Use of the journal impact factor for assessing individual articles need not be wrong. In: Computer Science: Digital Libraries.
Priem J, Piwowar HA, Hemminger BM. Altmetrics in the wild: Using social media to explore scholarly impact. In: Computer Science: Digital Libraries.
Margam M. Use of social networking sites by research scholars of the University of Delhi: A study. [DOI: 10.1080/10572317.2012.10762919].