Many applications of representation learning, such as privacy preservation, algorithmic fairness, and domain adaptation, call for explicit control over the semantic information that is discarded. This goal is commonly formulated as satisfying two objectives: maximizing utility for predicting a target attribute while simultaneously being independent of, or invariant to, a known semantic attribute. When the two objectives compete, solutions necessarily trade one off against the other. While existing works study bounds on these trade-offs, three questions remain open: 1) \emph{What are the exact fundamental trade-offs between utility and invariance?}, 2) \emph{What is the optimal dimensionality of the representation?}, and 3) \emph{What are the encoders (mappings from data to representations) that achieve the exact fundamental trade-offs, and how can we estimate them from data?} This paper addresses all three questions. We adopt a functional analysis perspective and derive closed-form solutions for the global optima of the underlying optimization problems under mild assumptions, which in turn yield closed formulae for the exact trade-offs, the optimal representation dimensionality, and the corresponding encoders. We also numerically quantify the trade-offs on representative problems and compare them to those achieved by baseline invariant representation learning algorithms.
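As a concrete point of reference, one common way to formalize the utility--invariance trade-off described above is as a regularized optimization over encoders; the sketch below uses our own schematic notation (a generic dependence measure $\mathrm{Dep}$ and trade-off parameter $\lambda$) and is not necessarily the exact formulation adopted in this paper. Writing $Z = f(X)$ for the representation, $Y$ for the target attribute, and $S$ for the semantic attribute,
\[
\max_{f \in \mathcal{F}} \; (1 - \lambda)\,\mathrm{Dep}\big(f(X), Y\big) \;-\; \lambda\,\mathrm{Dep}\big(f(X), S\big), \qquad \lambda \in [0, 1],
\]
where sweeping $\lambda$ traces out the trade-off front: $\lambda = 0$ recovers pure utility maximization, and $\lambda = 1$ enforces invariance alone.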