Zero-shot grammatical error detection is the task of tagging token-level errors in a sentence when only sentence-level labels are available for training. We present and analyze BLADE, a sequence labeling approach based on decomposing the filter–n-gram interactions of a single-layer one-dimensional convolutional neural network used as the final layer of a network, which is effective in both the fully-supervised and zero-shot settings. The approach also enables a matching method, exemplar auditing, which is useful for analyzing the model and data and, empirically, as part of an inference-time decision rule. Additionally, we extend these insights from natural language to machine-generated language, demonstrating that the strong sequence model can be used to guide synthetic text generation, and that it is concomitantly unsuitable as a reliable detector of synthetic data when both the detection model and a sufficiently strong generation model are accessible. We close with qualitative evidence that the approach can be a useful tool for preliminary text and document analysis, demonstrating that a strong text feature extractor for low- and high-resource settings is useful across NLP tasks.
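To make the core mechanism concrete: a single-layer 1D CNN assigns each token a score from the n-gram filters centered on it, and max-pooling those scores yields a sentence-level prediction, so a model trained only with sentence labels still exposes token-level detections at inference. The following is a minimal NumPy sketch under assumed shapes; the function and parameter names (`conv1d_scores`, `filters`, `w_out`) are illustrative and not the paper's implementation:

```python
import numpy as np

def conv1d_scores(emb, filters, w_out):
    """Hypothetical sketch of max-pool decomposition over a 1D CNN layer.

    emb:     (seq_len, d)  token embeddings
    filters: (n_f, k, d)   n_f convolutional filters over k-gram windows (k odd)
    w_out:   (n_f,)        output weights mapping filter activations to a logit
    """
    seq_len, d = emb.shape
    n_f, k, _ = filters.shape
    pad = k // 2
    padded = np.pad(emb, ((pad, pad), (0, 0)))  # zero-pad so every token has a window
    token_scores = np.zeros(seq_len)
    for t in range(seq_len):
        window = padded[t:t + k]  # (k, d) n-gram window centered at token t
        # ReLU activation of each filter on this window
        acts = np.maximum(0.0, np.einsum('fkd,kd->f', filters, window))
        token_scores[t] = acts @ w_out  # per-token contribution to the logit
    sent_score = token_scores.max()  # sentence-level logit via max-pooling
    return sent_score, token_scores
```

In this sketch, training would supervise only `sent_score` against the sentence label; at test time, `token_scores` are read off directly as zero-shot token-level error detections, since each token's score decomposes the sentence prediction into local filter–n-gram contributions.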