We prove an ergodic theorem for weighted ensemble, an interacting particle method for sampling distributions associated with a generic Markov chain. Because the interactions arise from resampling, weighted ensemble can be viewed as a sequential Monte Carlo method. In weighted ensemble, the resampling is based on dividing the particles among a collection of bins, and then copying or killing particles to enforce a prescribed number in each bin. We show that the ergodic theorem is sensitive to the resampling mechanism: indeed, it fails for a large class of related sequential Monte Carlo methods, due to an accumulating resampling variance. We compare weighted ensemble with one of these methods, and with direct Monte Carlo, in numerical examples.