Citation averages, and Impact Factors (IFs) in particular, are sensitive to sample size. We apply the Central Limit Theorem (CLT) to IFs to understand their scale-dependent behavior. For a journal of $n$ randomly selected papers from a population of all papers, we expect from the CLT that its IF fluctuates around the population average $\mu$ and spans a range of values proportional to $\sigma/\sqrt{n}$, where $\sigma^2$ is the variance of the population's citation distribution. The $1/\sqrt{n}$ dependence has profound implications for IF rankings: the larger a journal, the narrower the range around $\mu$ in which its IF lies. IF rankings therefore allocate an unfair advantage to smaller journals in the high IF ranks, and to larger journals in the low IF ranks. We expect a scale-dependent stratification of journals in IF rankings, whereby small journals occupy the top, middle, and bottom ranks; mid-sized journals occupy the middle ranks; and very large journals have IFs that asymptotically approach $\mu$. We confirm these predictions by analyzing (i) 166,498 IF \& journal-size data pairs in the 1997--2016 Journal Citation Reports of Clarivate Analytics, (ii) the top-cited portion of 276,000 physics papers published in 2014--2015, and (iii) the citation distributions of an arbitrarily sampled list of physics journals. We conclude that the CLT is a good predictor of the IF range of actual journals, while sustained deviations from its predictions are a mark of true, non-random citation impact. IF rankings are thus misleading unless one compares like-sized journals or adjusts for these effects. We propose the $\Phi$ index, a rescaled IF adjusted for size, which can be generalized to account also for different citation practices across research fields. Our methodology also applies to citation averages used to compare research fields, university departments, or countries in various rankings.
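The $\sigma/\sqrt{n}$ scaling at the heart of the argument can be illustrated with a minimal simulation. This is only a sketch, not the paper's analysis: the lognormal citation population and all numerical parameters below are assumptions chosen for illustration, standing in for a skewed real-world citation distribution.

```python
import random
import statistics

# Assumed skewed "population of all papers": lognormal citation counts.
# Parameters (0.5, 1.2) are illustrative, not fitted to real data.
random.seed(0)
population = [random.lognormvariate(0.5, 1.2) for _ in range(100_000)]
mu = statistics.fmean(population)       # population average citation rate
sigma = statistics.pstdev(population)   # population standard deviation

def simulated_if_spread(n, trials=2000):
    """Std dev of the IF (mean citations) across many random journals of size n."""
    ifs = [statistics.fmean(random.sample(population, n)) for _ in range(trials)]
    return statistics.stdev(ifs)

# The CLT predicts the spread of journal IFs shrinks like sigma / sqrt(n):
for n in (10, 100, 1000):
    print(f"n={n:5d}  simulated spread={simulated_if_spread(n):.3f}  "
          f"CLT prediction={sigma / n ** 0.5:.3f}")
```

Running this shows the simulated spread of journal IFs tracking the $\sigma/\sqrt{n}$ prediction closely, so small journals range far above and below $\mu$ while large journals cluster tightly around it.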
