In a previous article, we explained why the MNCS and all similar per-publication citation indicators should not be used to measure research performance, whereas efficiency indicators (ratios of output to input) such as the FSS are valid performance measures. The obstacle most frequently cited in applying efficiency indicators is the limited availability of input data. If we accept that such data are inaccessible and resort instead to per-publication citation indicators, the question arises of how far institutional performance rankings by MNCS differ from those by FSS (and what effects such differences could have on policy-makers, managers, and other users of the rankings). Contrasting the 2008-2012 performance of Italian universities in the Sciences as measured by MNCS and by FSS, we try to answer that question at the field, discipline, and overall university levels. We present descriptive statistics of the shifts in rank, together with correlations of both scores and ranks. The analysis reveals strong correlations in many fields but weak correlations in others. The extent of rank shifts is never negligible: a number of universities shift from top-quartile to non-top-quartile ranks.
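The comparison described above rests on two standard statistics: a correlation of the raw indicator scores (Pearson) and a correlation of the resulting ranks (Spearman, which is simply Pearson applied to the rank vectors). The following minimal sketch, using entirely hypothetical MNCS and FSS scores (not data from the study) and assuming no tied values, illustrates how the two can be computed:

```python
def ranks(values):
    """Rank from best (1) to worst; assumes no ties for simplicity."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-university scores, for illustration only
mncs = [1.4, 0.9, 1.1, 0.7, 1.6]
fss = [1.2, 1.0, 0.8, 0.6, 1.5]

score_corr = pearson(mncs, fss)                 # correlation of scores
rank_corr = pearson(ranks(mncs), ranks(fss))    # Spearman = Pearson on ranks
```

A large gap between `score_corr` and `rank_corr`, or a high correlation coexisting with sizeable individual rank shifts, is precisely the situation the abstract warns about: aggregate agreement between MNCS and FSS can mask consequential changes for individual universities, such as falling out of the top quartile.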