Simulation is a fundamental research tool in computer architecture. Simulation tools enable the exploration and evaluation of architectural proposals while capturing the most relevant aspects of the highly complex systems under study. Most state-of-the-art simulators focus on single-system scenarios, but the scalability demanded by current applications has shifted the field towards distributed computing systems integrated through complex software stacks. Web services built on client-server architectures and the distributed storage and processing of scale-out data analytics (Big Data) are prominent examples. Full simulation of a distributed computer system is the appropriate methodology for accurate evaluation. Unfortunately, it may significantly increase the already large computational cost of detailed simulation. In this work, we conduct a set of experiments to evaluate this accuracy/cost tradeoff. We measure the error incurred when client-server applications are evaluated in a single-node environment, as well as the overhead introduced by the methodology and simulation tool employed for multi-node simulation. We quantify this error for several micro-architectural components, such as the last-level cache and the instruction/data TLBs. Our findings show that the accuracy loss can lead to completely wrong conclusions about the effects of proposed hardware optimizations. Fortunately, our results also show that the computational overhead of a multi-node simulation framework is affordable, making multi-node simulation the most appropriate methodology.