The increasing demand for privacy and security has driven the advancement of private inference (PI), a cryptographic method that enables inference directly on encrypted data. However, the computational and storage burdens of non-linear operators (e.g., ReLUs) render PI impractical. Despite these limitations, prior ReLU optimization methods have consistently relied on classical networks that are not optimized for PI. Moreover, the choice of baseline networks in these ReLU optimization methods remains poorly justified and offers little insight into which network attributes contribute to PI efficiency. In this paper, we investigate the network architectures best suited for efficient PI, and our {\em key finding} is that wider networks are superior at higher ReLU counts, while networks with a greater proportion of least-critical ReLUs excel at lower ReLU counts. Leveraging these findings, we develop a novel network redesign technique (DeepReShape) with $\mathcal{O}(1)$ complexity and synthesize specialized architectures (HybReNets). Compared to the state-of-the-art (SNL on CIFAR-100), we achieve a 2.35\% accuracy gain at 180K ReLUs, and for ResNet50 on TinyImageNet our method saves 4.2$\times$ ReLUs at iso-accuracy.