The Subtle Ways Gender Gaps Persist in Science
Women do more of the day-to-day labor of science while men are credited with more of the big-picture thinking.

How do retractions influence the scholarly impact of retracted papers, authors, and institutions; and how does this influence propagate to the wider academic community through scholarly associations?
A review of the open citation advantage, media attention for publicly available research, collaborative possibilities, and special funding opportunities, showing how open practices can give researchers a competitive advantage.
A report on international academic collaboration across the UK research base and on the implications of EU and global collaboration for universities, research assessment and the economy.
We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors, a quantity that can express comparative evidence for a hypothesis as well as for the null hypothesis, for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempt provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than in the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.
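The Bayes factor entry above is the most technical item in this list, so a small illustration may help. Below is a minimal Python sketch of a default Bayes factor for a one-sample t-test, assuming the JZS (Cauchy) prior of Rouder et al. (2009); the study's actual computation, which also corrects for publication bias, is more involved, and the function name and example values here are illustrative only.

    # Hypothetical sketch: BF10 quantifies evidence for H1 (an effect exists)
    # relative to H0 (no effect). Values near 1 are uninformative; the study
    # above treats BF < 10 as weak evidence.
    import numpy as np
    from scipy import integrate

    def jzs_bf10(t, n, r=0.707):
        """Default (JZS) Bayes factor for a one-sample t-test:
        effect size delta ~ Cauchy(0, r) under H1, delta = 0 under H0."""
        nu = n - 1  # degrees of freedom

        def integrand(g):
            # t-likelihood given g, weighted by the inverse-gamma(1/2, 1/2)
            # prior density on g (delta | g ~ Normal(0, g * r^2)).
            return ((1 + n * g * r**2) ** -0.5
                    * (1 + t**2 / ((1 + n * g * r**2) * nu)) ** (-(nu + 1) / 2)
                    * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))

        m1, _ = integrate.quad(integrand, 0, np.inf)  # marginal likelihood, H1
        m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)       # likelihood, H0
        return m1 / m0

    # A result "significant" at p < .05 can still be weak evidence:
    # t = 2.5 with n = 30 (p of about .02) gives BF10 of roughly 2.8,
    # well below the BF = 10 threshold for strong evidence.
    print(jzs_bf10(t=2.5, n=30))

Ready-made implementations exist (for example, the pingouin package's bayesfactor_ttest uses the same default prior), but the single integral above is essentially the whole mechanism.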
Today's academic publishing system may be problematic, but many argue it is the only one available to provide adequate research evaluation. Pandelis Perakakis introduces an open community platform, LIBRE, which seeks to challenge the assumption that peer review can only be handled by journal editors.
Recommendations from the Federation of American Societies for Experimental Biology.
This research investigates the relationship between open science and public engagement.
What they fund and how they distribute their funds.
A data-driven theoretical investigation of editorial workflows.
This paper shows how bibliometric assessment can be implemented at the individual level.
Independent advice from Professor Adam Tickell on open access to research publications.
An intelligent machine-learning framework for the scientific evaluation of researchers may help decision makers allocate available funding to distinguished scientists by providing fair comparative results, regardless of researchers' career age.
An assessment of the first two years of the Horizon 2020 programme, taking into account
The transparency of the peer-review process is an indicator of peer-review quality.
This study shows that the long-run patterns of international scientific collaboration are generating a convergence between applied and basic fields. This convergence of collaboration patterns across research fields may be one of the contributing factors supporting the evolution of scientific disciplines.
A method that funding agencies, universities, and science policy decision makers could use for hiring and funding purposes, and to complement existing methods for ranking universities and countries.
Respondents value recognition initiatives related to receiving feedback from the journal over monetary rewards and payment in kind.
By 2025, all scholarly publication activity in Austria should be Open Access: the final versions of all scholarly publications resulting from the support of public resources must be freely accessible on the Internet without delay (Gold Open Access).
A broad base of quantitative information on the U.S. and international science and engineering enterprise.
A response to the recommendations of an external High Level Expert Group and to a Staff Working Document in which the Commission services evaluated FP7.
One of the strongest beliefs in scholarly publishing is that journals seeking a high impact factor should be highly selective. There is evidence that this belief is wrong.
A Harvard professor reveals how his hiring committee whittles down the pile of job applications.
Researchers are “choosing their lottery numbers after seeing the draw”, making medicine less reliable, and respected journals are letting them do it.
Report to the Swiss Science and Innovation Council SSIC.
A statistical analysis of research funding and other influencing factors.
A shortlist of recommendations to promote gender equality in science and stimulate future efforts to level the field.
Highly Cited Researchers in 2015 according to Thomson Reuters.