The emergence of interference alignment (IA) as a degrees-of-freedom-optimal strategy motivates investigating whether IA can aid conventional network optimization algorithms, which are only capable of finding locally optimal solutions. To test the usefulness of IA in this context, this paper proposes a two-stage optimization framework for the downlink of a $G$-cell multi-antenna network with $K$ users per cell. The first stage of the proposed framework nulls interference from a set of dominant interferers using IA, while the second stage optimizes transmit and receive beamformers to maximize a network-wide utility, using the IA solution as the initial condition. Further, this paper establishes a set of new feasibility results for partial IA that can guide the choice of the number of dominant interferers to null in the first stage. Simulations on specific topologies of a cluster of base stations show that the impact of IA depends on the choice of utility function and on the presence of out-of-cluster interference. In the absence of out-of-cluster interference, the proposed framework outperforms straightforward optimization when maximizing the minimum rate, while providing only marginal gains when maximizing the sum rate. The benefit of IA is greatly diminished, however, in the presence of significant out-of-cluster interference.