Bibliometric indicators are commonly used for evaluation within academia, often in conjunction with other metrics. Their values vary widely across fields and change with a scholar's seniority; consequently, they can only be interpreted by comparison with other academics in the same field and of similar seniority. We propose a simple extension that yields metrics which are easy to interpret and simplify such comparisons. The basic idea is to construct benchmarks and then use rank percentile indicators to measure the performance of a scholar or publication over time. These percentile-based indicators permit comparison of scholars and publications of different seniority and are easily interpretable. Furthermore, we demonstrate that the rank percentile indicators have good predictive power. The publication indicator is highly stable over time, while the scholar indicator exhibits short-term stability and can be predicted by a simple linear regression model. Although more advanced models offer slightly better predictive performance, the simplicity and interpretability of the linear model outweigh the marginal gains from the added complexity.
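The benchmarking idea can be sketched in a few lines: compare a value (say, a citation count) against a benchmark cohort of the same field and seniority, and report the fraction of the cohort that falls strictly below it. The function name and the cohort data here are illustrative assumptions, not the paper's exact definition.

```python
from bisect import bisect_left

def percentile_rank(value, benchmark):
    """Rank percentile of `value` within a benchmark cohort.

    Hypothetical illustration: the benchmark could be citation counts of
    publications from the same year, or of scholars of similar seniority.
    """
    sorted_bm = sorted(benchmark)
    # Fraction of benchmark values strictly below `value`, scaled to 0-100.
    below = bisect_left(sorted_bm, value)
    return 100.0 * below / len(sorted_bm)

# Illustrative cohort of citation counts for same-year publications.
cohort = [2, 5, 8, 12, 15, 22, 30, 41, 55, 90]
print(percentile_rank(30, cohort))  # -> 60.0 (outperforms 60% of the cohort)
```

Because the output is a percentile within a matched cohort, the same scale applies regardless of field or career stage, which is what makes the indicator directly interpretable.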