Understanding Distributional Ambiguity via Non-robust Chance Constraint

Qi Wu, Shumin Ma, Cheuk Hang Leung, Wei Liu

We propose a non-robust interpretation of the distributionally robust optimization (DRO) problem by relating the impact of distributional uncertainties to the effect of constraining the objective through tail probabilities. Our interpretation allows utility maximizers to determine the size of the ambiguity set through parameters that are directly linked to the chance parameters. We first show that for general $\phi$-divergences, a DRO problem is asymptotically equivalent to a class of mean-deviation problems in which the ambiguity radius controls the investor's risk preference. Based on this non-robust reformulation, we then show that when a boundedness constraint is added to the investment strategy, the DRO problem can be cast as a chance-constrained optimization (CCO) problem without distributional uncertainties. Without the boundedness constraint, the CCO problem is shown to perform uniformly better than the DRO problem, irrespective of the radius of the ambiguity set, the choice of the divergence measure, or the tail heaviness of the center distribution. Besides the widely used Kullback-Leibler (KL) divergence, which requires the distribution of the objective function to be exponentially bounded, our results apply to divergence measures that accommodate heavy-tailed distributions well, such as the Student's $t$-distribution and the lognormal distribution. Comprehensive tests on synthetic and real data are provided.
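
The KL case admits a quick numerical sanity check of the mean-deviation asymptotics mentioned above. The sketch below is illustrative only and is not the paper's general $\phi$-divergence result: the sample losses, the radius `rho`, and the helper `kl_worst_case` are assumptions made for this example. It compares the worst-case expectation over a KL ball, computed from its standard convex dual $\inf_{\lambda>0}\{\lambda\rho + \lambda\log\mathbb{E}_P[e^{Z/\lambda}]\}$, with the small-radius approximation $\mathbb{E}_P[Z] + \sqrt{2\rho}\,\mathrm{SD}_P(Z)$ (for KL, $\phi''(1)=1$).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical loss samples drawn from a center distribution P (an assumption
# for illustration; the paper's setting is more general).
rng = np.random.default_rng(0)
z = rng.normal(loc=1.0, scale=0.5, size=1000)
rho = 0.01  # ambiguity radius (assumed value)

def kl_worst_case(z, rho):
    """sup_{KL(Q||P) <= rho} E_Q[Z] via the standard convex dual:
    inf_{lam > 0} lam*rho + lam*log E_P[exp(Z/lam)]."""
    def dual(lam):
        m = z.max()  # shift for a numerically stable log-mean-exp
        return lam * rho + lam * np.log(np.mean(np.exp((z - m) / lam))) + m
    res = minimize_scalar(dual, bounds=(1e-3, 1e3), method="bounded")
    return res.fun

# Small-radius mean-deviation approximation: E_P[Z] + sqrt(2*rho) * SD_P(Z).
approx = z.mean() + np.sqrt(2 * rho) * z.std()

print(f"dual worst case: {kl_worst_case(z, rho):.4f}")
print(f"mean-deviation : {approx:.4f}")
```

For Gaussian samples the two numbers should nearly coincide, since the dual objective is then exactly quadratic in $1/\lambda$; the gap widens as the center distribution becomes heavier-tailed, which is where the non-KL divergences discussed in the abstract become relevant.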
