Twitter creates ‘new academic hierarchies’, suggests study
US analysis questions link between Twitter success and scholarly merit, raising doubts about the use of social media data in altmetrics.
A report produced by Digital Science together with an international collaboration of leading higher education professionals and policy experts, who give their views on the global impact agenda in research policy and discuss what evidence of impact is useful to them.
A collection of 150 personal stories from scientists who are combining a career in research with their roles as parents and carers, each in their own way.
Converting Scholarly Journals to Open Access: A Review of Approaches and Experiences
In the race to apply for research funding, writing statements about future impact can feel like a charade.
A review of approaches and experiences on how to convert subscription-based scholarly journals to open access.
A paper showing that doubling the word frequency of an average abstract increases citations by 0.70%, and that journals publishing papers with shorter abstracts containing more frequently used words receive slightly more citations per paper.
The pleasure of publishing | When assessing manuscripts eLife editors look for a combination of rigour and insight, along with results and ideas that make other researchers think differently about their subject.
The total number of papers published by researchers during their early career period (first fifteen years) has increased in recent decades, but so has their average number of co-authors.
Academic success in higher education is influenced by a number of different factors. This paper asks whether individual levels of motivation, anxiety, enjoyment and self-efficacy, measured immediately before entering university, influence the probability of academic success. Previous studies have shown an influence of high school grades, the learning environment and motivational variables, but they did not measure the levels of these constructs before studies begin. This research was conducted at the University of St. Gallen, Switzerland. The sample includes 695 first-year students who provided information about their individual levels of the constructs mentioned.
Anyone who looks at international rankings has noticed that Switzerland is rising rapidly up the global academic hierarchy. Sweden and the Netherlands are close behind. This is no coincidence.
Women do more of the day-to-day labor of science while men are credited with more of the big-picture thinking.
How do retractions influence the scholarly impact of retracted papers, authors, and institutions; and how does this influence propagate to the wider academic community through scholarly associations?
A review on the open citation advantage, media attention for publicly available research, collaborative possibilities, and special funding opportunities to show how open practices can give researchers a competitive advantage.
A report on international academic collaboration across the UK research base and on the implications of EU and global collaboration for universities, research assessment and the economy.
We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors—a quantity that can be used to express comparative evidence for an hypothesis but also for the null hypothesis—for a large subset ( N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.
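The "comparative evidence" quantity described above can be illustrated with a minimal sketch. This uses the BIC approximation to the Bayes factor (exp of half the BIC difference between models) on hypothetical Bernoulli data; the numbers and the binomial setting are illustrative assumptions, not taken from the study itself.

```python
import math

# Hedged sketch: BIC-approximate Bayes factor for H1 (p free) vs. H0 (p = 0.5),
# illustrating how a Bayes factor quantifies evidence for either hypothesis.
# The data below (k successes in n trials) are hypothetical.

def log_lik(k, n, p):
    # Binomial log-likelihood up to a constant that cancels between models.
    return k * math.log(p) + (n - k) * math.log(1 - p)

def bic(log_l, n_params, n):
    # Bayesian Information Criterion: penalize free parameters by log(n).
    return n_params * math.log(n) - 2 * log_l

k, n = 62, 100                               # hypothetical replication data
p_hat = k / n                                # maximum-likelihood estimate under H1
bic_null = bic(log_lik(k, n, 0.5), 0, n)     # H0: p fixed at 0.5, no free parameters
bic_alt = bic(log_lik(k, n, p_hat), 1, n)    # H1: p is one free parameter

bf10 = math.exp((bic_null - bic_alt) / 2)    # evidence for H1 relative to H0
print(f"BF10 = {bf10:.2f}")
```

With these toy numbers the Bayes factor lands well below 10, which is exactly the "weak evidence" regime the abstract describes for the majority of original and replicated studies.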
Today's academic publishing system may be problematic, but many argue it is the only one available to provide adequate research evaluation. Pandelis Perakakis introduces an open community platform, LIBRE, which seeks to challenge the assumption that peer review can only be handled by journal editors.
Recommendations from the Federation of American Societies for Experimental Biology.
This research investigates the relationship between open science and public engagement.
What they fund and how they distribute their funds.
A data-driven theoretical investigation of editorial workflows.
This paper shows how bibliometric assessment can be implemented at individual level.
Independent advice from Professor Adam Tickell on open access to research publications.
An intelligent machine-learning framework for the scientific evaluation of researchers may help decision makers allocate available funding to distinguished scientists by providing fair comparative results, regardless of researchers' career age.
An assessment of the first two years of Horizon 2020 programme, taking into account