Publications

A Multi-dimensional Investigation of the Effects of Publication Retraction on Scholarly Impact

How do retractions influence the scholarly impact of retracted papers, authors, and institutions; and how does this influence propagate to the wider academic community through scholarly associations?

The open research value proposition: How sharing can help researchers succeed

A review of the open citation advantage, media attention for publicly available research, collaborative possibilities, and special funding opportunities, showing how open practices can give researchers a competitive advantage.

Benefits and Implications of EU and Global Collaboration by UK Universities - Digital Science

A report on international academic collaboration across the UK research base and on the implications of EU and global collaboration for universities, research assessment and the economy.

A Bayesian Perspective on the Reproducibility Project: Psychology

We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors—a quantity that can be used to express comparative evidence for a hypothesis but also for the null hypothesis—for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than in the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.

New forms of open peer review will allow academics to separate scholarly evaluation from academic journals

Today's academic publishing system may be problematic, but many argue it is the only one available to provide adequate research evaluation. Pandelis Perakakis introduces an open community platform, LIBRE, which seeks to challenge the assumption that peer review can only be handled by journal editors.

Individual bibliometric assessment at University of Vienna: from numbers to multidimensional profiles

This paper shows how bibliometric assessment can be implemented at the individual level.

iSEER: an intelligent automatic computer system for scientific evaluation of researchers

An intelligent machine-learning framework for the scientific evaluation of researchers that may help decision makers better allocate available funding to distinguished scientists by providing fair comparative results, regardless of researchers' career age.

Evolution and convergence of the patterns of international scientific collaboration

This study shows that the long-run patterns of international scientific collaboration are generating a convergence between applied and basic fields. This convergence of collaboration patterns across research fields might be one of the contributing factors supporting the evolution of scientific disciplines.

Evaluating the impact of interdisciplinary research

A method that could be used by funding agencies, universities and scientific policy decision makers for hiring and funding purposes, and to complement existing methods to rank universities and countries.

Recommendations for the transition to Open Access in Austria

By 2025, all scholarly publication activity in Austria should be Open Access: the final versions of all scholarly publications resulting from the support of public resources must be freely accessible on the Internet without delay (Gold Open Access).

NSF Science and Engineering Indicators 2016

A broad base of quantitative information on the U.S. and international science and engineering enterprise.

Ex-post evaluation of FP7

Response to the recommendations of an external High Level Expert Group and a Staff Working Document in which the Commission services have evaluated FP7.

Selecting for impact: new data debunks old beliefs

One of the strongest beliefs in scholarly publishing is that journals seeking a high impact factor should be highly selective. There is evidence showing this is wrong.

Insider's view of faculty search kicks off discussion online

A Harvard professor reveals how his hiring committee whittles down the pile of job applications.

How scientists are doing a bait-and-switch with medical data

Researchers are “choosing their lottery numbers after seeing the draw”, making medicine less reliable and less respected, and respected journals are letting them do it.

Seven actionable strategies for advancing women in Science, Engineering, and Medicine

A shortlist of recommendations to promote gender equality in science and stimulate future efforts to level the field.