The aggregation of knowledge embedded in large language models (LLMs) holds the promise of new solutions to problems of observability and measurement in the social sciences. We examine this potential in a challenging setting: measuring latent ideology -- crucial for better understanding core political functions such as democratic representation. We scale pairwise liberal-conservative comparisons between members of the 116th U.S. Senate using prompts to ChatGPT. Our measure correlates strongly with widely used liberal-conservative scales such as DW-NOMINATE. It also has interpretive advantages, such as not placing senators who vote against their party for ideologically extreme reasons toward the middle. Our measure is more strongly associated with political activists' perceptions of senators than other measures are, consistent with LLMs synthesizing vast amounts of politically relevant data from internet and book corpora rather than memorizing existing measures. LLMs will likely open new avenues for measuring latent constructs using modeled information from massive text corpora.
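The abstract does not specify how the pairwise comparisons are aggregated into a one-dimensional scale; a standard choice for this kind of data is the Bradley-Terry model, sketched below under that assumption. The function name and data layout are illustrative, not from the paper: `wins[i][j]` counts how often the LLM judged senator `i` more conservative than senator `j`, and the estimated log-strengths serve as ideal points.

```python
import math

def bradley_terry(wins, n_items, n_iters=200):
    """Estimate latent scores from pairwise comparison counts.

    wins[i][j] = number of times item i was judged more conservative
    than item j. Uses the classic Bradley-Terry minorization-
    maximization (MM) update; returns log-strengths as a 1-D scale.
    """
    p = [1.0] * n_items  # initial strength parameters
    for _ in range(n_iters):
        new_p = []
        for i in range(n_items):
            # total wins of item i
            num = sum(wins[i][j] for j in range(n_items) if j != i)
            # MM denominator: comparisons weighted by current strengths
            den = sum(
                (wins[i][j] + wins[j][i]) / (p[i] + p[j])
                for j in range(n_items) if j != i
            )
            new_p.append(num / den if den > 0 else p[i])
        # normalize so the geometric mean is 1 (fixes the scale)
        g = math.exp(sum(math.log(q) for q in new_p) / n_items)
        p = [q / g for q in new_p]
    return [math.log(q) for q in p]
```

Repeating each pairwise prompt and counting "wins" in both orders helps average out prompt-order effects before fitting; the resulting log-strengths can then be correlated with DW-NOMINATE or activist perceptions as the paper describes.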