Major Indexing Service Sounds Alarm on Self-citations by Nearly 50 Journals
Are impact factors manipulated by a large-scale practice of self-citation?
This study aims to evaluate the 1-year results of a prospective randomized social media trial to determine the effect of tweeting on subsequent citations and non-traditional bibliometrics.
New sources of citation data have recently become available. Although these have been compared to the Web of Science (WoS), Scopus, or Google Scholar, there is no systematic evidence of their differences across subject categories. In response, this paper investigates citations found by these data sources to English-language highly-cited documents published in 2006 from 252 subject categories, expanding and updating the largest previous study.
Citations are ubiquitous in evaluating research, but how exactly they relate to what they are thought to measure is unclear. This article investigates the relationships between citations, quality, and impact using a survey with an embedded experiment.
This paper presents a simple model of the lifecycle of scientific ideas that points to changes in scientist incentives as the cause of scientific stagnation. It explores ways to broaden how scientific productivity is measured and rewarded: academic search engines such as Google Scholar could measure which contributions explore newer ideas, and university administrators and funding agencies could use these new metrics in research evaluation.
Citation metrics have value because they aim to make scientific assessment a level playing field, but urgent transparency-based changes are necessary to ensure that the data yields an accurate picture. One problematic area is the handling of self-citations.
Articles in high-impact journals are by definition more highly cited on average. But are they cited more often because the articles are somehow "better"? Or are they cited more often simply because they appeared in a high-impact journal?
A list recognizing the world's most influential researchers of the past decade, as demonstrated by the production of multiple highly-cited papers that rank in the top 1% by citations for field and year in Web of Science.
The operator of the Wayback Machine allows Wikipedia's users to check citations from books as well as the web.
How many articles from predatory journals are being cited in the legitimate (especially medical) literature? Some disturbing findings.
How misconceptions persist and proliferate within the scientific literature.
Readers say they have been asked to reference seemingly superfluous studies after peer review.
citecorp is a new R package (it hit CRAN in late August) for working with data from the OpenCitations Corpus (OCC). OpenCitations, run by David Shotton and Silvio Peroni, houses the OCC, an open repository of scholarly citation data under the very open CC0 license. The I4OC (Initiative for Open Citations) is a collaboration between many parties with the aim of promoting "unrestricted availability of scholarly citation data". Citation data is also available through Crossref, accessible in R via our packages rcrossref, fulltext, and crminer.
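Beyond the R packages, OpenCitations data can also be fetched over its public REST API. A minimal sketch, assuming the COCI index endpoint pattern (`.../index/coci/api/v1/references/{doi}`) and the `citing`/`cited` fields of its JSON response as documented by OpenCitations; the sample payload below is illustrative, not real data:

```python
import json

# Assumed base URL of the OpenCitations COCI REST API.
COCI_BASE = "https://opencitations.net/index/coci/api/v1"

def references_url(doi: str) -> str:
    """Build the COCI endpoint that lists the outgoing references of a DOI."""
    return f"{COCI_BASE}/references/{doi}"

def cited_dois(payload: str) -> list:
    """Extract the cited DOIs from a COCI JSON response.

    The response is assumed to be a JSON array of records, each with
    'citing' and 'cited' DOI fields.
    """
    return [record["cited"] for record in json.loads(payload)]

# Illustrative (fabricated) response, so no network call is needed here:
sample = '[{"citing": "10.1000/a", "cited": "10.1000/b"}]'
print(references_url("10.1000/a"))
print(cited_dois(sample))  # ['10.1000/b']
```

In practice you would GET `references_url(doi)` with any HTTP client and pass the response body to `cited_dois`; the hedged point is simply that the open data behind citecorp is reachable without R at all.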
Citation metrics are widely used and misused. This Community Page article presents a publicly available database that provides standardized information on multiple citation indicators and a composite thereof, annotating each author according to his/her main scientific field(s).
Respondents to a Nature poll want to make their own decisions about how to interpret citation metrics. That requires data to be freely accessible.
The publisher is scrutinizing researchers who might be inappropriately using the review process to promote their own work.
OpenCitations is a scholarly infrastructure organization dedicated to open scholarship and to the publication of open bibliographic and citation data as Linked Open Data using Semantic Web technologies. It also develops software tools and services that enable convenient access to these open data, and advocates for open citations within the community. This paper describes OpenCitations and its datasets, tools, services, and activities.
A brief review of studies linking social media and article-level performance.
The results of this study strongly suggest that when male and female authors publish articles that are comparably positioned to receive citations, their publications do in fact accrue citations at the same rate. This raises the question: Why would gender matter “everywhere but here”?
We are proud to announce the release of enhancements which significantly facilitate scientific software citation and discovery.
Information-aesthetic explorations of emerging patterns in scientific citation networks. A cooperation between the Eigenfactor® Project (data analysis) and Moritz Stefaner (visualization).
If we believe data should be valued like other research outputs, we must take action to achieve this. Supporting the open data movement means providing proper support for data citations.
Although citations and related metrics like the H-index are widely used in academia to evaluate research and allocate resources, the referencing decisions on which they are based are poorly understood. This paper investigates whether authors reference works that influenced them most or those they believe the readers will value most.
Find out how scholarly articles are cited on Wikipedia with WikiCiteVis.
When following a link to the official version of a scholarly article, Wikipedia readers are twice as likely to hit a paywall as to reach a version they can read freely.