Systems for language understanding have become remarkably strong at overcoming linguistic imperfections in tasks involving phrase matching or simple reasoning. Yet, their accuracy drops dramatically as the number of reasoning steps increases. We present the first formal framework to study such empirical observations. It allows one to quantify the amount and effect of the ambiguity, redundancy, incompleteness, and inaccuracy that the use of language introduces when representing a hidden conceptual space. The idea is to consider two interrelated spaces: a conceptual meaning space that is unambiguous and complete but hidden, and a linguistic space that captures a noisy grounding of the meaning space in the words of a language---the level at which all systems, whether neural or symbolic, operate. Applying this framework to a special class of multi-hop reasoning, namely the connectivity problem in graphs of relationships between concepts, we derive intuitions and rigorous impossibility results even under this simplified setting. For instance, if a query requires a moderately large (logarithmic) number of hops in the meaning graph, no reasoning system operating over a noisy graph grounded in language is likely to correctly answer it. This highlights a fundamental barrier that extends to a broader class of reasoning problems and systems, and suggests an alternative path forward: focusing on aligning the two spaces via richer representations, before investing in reasoning with many hops.
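The core phenomenon can be illustrated with a toy simulation (this is our own sketch, not the paper's formal construction; all function names and parameters here are assumptions). We take a path graph as a stand-in for the hidden meaning graph, produce a noisy "linguistic" copy by dropping true edges (incompleteness) and inserting spurious ones (inaccuracy), and then measure how often a k-hop connectivity query answered on the noisy copy agrees with the ground truth. Since a true path of length k survives edge-dropping with probability roughly (1 - p_drop)^k, accuracy decays quickly as the number of hops grows:

```python
import random
from collections import deque

def k_hop_reachable(adj, s, t, k):
    """Truncated BFS: is t reachable from s within k hops?"""
    if s == t:
        return True
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if dist[u] == k:          # do not expand beyond depth k
            continue
        for v in adj[u]:
            if v not in dist:
                if v == t:
                    return True
                dist[v] = dist[u] + 1
                queue.append(v)
    return False

def noisy_adjacency(n, edges, p_drop, n_spurious, rng):
    """Ground the meaning graph in language: drop some true edges
    (incompleteness) and add some spurious ones (inaccuracy)."""
    kept = [e for e in edges if rng.random() > p_drop]
    for _ in range(n_spurious):
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            kept.append((u, v))
    adj = [[] for _ in range(n)]
    for u, v in kept:
        adj[u].append(v)
        adj[v].append(u)
    return adj

def hop_accuracy(n=400, hops=(1, 2, 4, 8, 16), p_drop=0.2,
                 n_spurious=20, trials=300, seed=0):
    rng = random.Random(seed)
    # Meaning graph: a simple path 0-1-...-(n-1), so the true
    # distance between i and i+k is exactly k (always reachable).
    edges = [(i, i + 1) for i in range(n - 1)]
    accs = []
    for k in hops:
        correct = 0
        for _ in range(trials):
            adj = noisy_adjacency(n, edges, p_drop, n_spurious, rng)
            i = rng.randrange(n - k)
            # The true answer is "yes"; count how often the noisy
            # linguistic graph still supports the k-hop conclusion.
            if k_hop_reachable(adj, i, i + k, k):
                correct += 1
        accs.append(correct / trials)
    return accs

print(hop_accuracy())  # accuracy per hop count; decays as hops grow
```

With the (assumed) parameters above, 1-hop queries succeed at roughly the edge-retention rate, while 16-hop queries almost never survive the noise, mirroring the abstract's claim that logarithmically many hops already suffice to defeat any reasoner operating over the linguistic space.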