Clustering has many important applications in computer science, but real-world datasets often contain outliers, and their presence can make clustering problems much more challenging. To reduce this complexity, various sampling methods have been proposed in recent years: one takes a small (uniform or non-uniform) sample from the input and runs an existing approximation algorithm on the sample. Compared with existing non-uniform sampling methods, the uniform sampling approach has several significant benefits; for example, it needs to read the data in only one pass and is very easy to implement in practice. The effectiveness of uniform sampling for clustering with outliers is therefore a natural and fundamental question that deserves study in both theory and practice. In this paper, we propose a new and unified framework for analyzing the effectiveness of uniform sampling for three representative center-based clustering with outliers problems: $k$-center, $k$-median, and $k$-means clustering with outliers. We introduce a "significance" criterion and prove that the performance of uniform sampling depends on the significance degree of the given instance. In particular, we show that the sample size can be independent of the ratio $n/z$ and the dimensionality. More importantly, to the best of our knowledge, our method is the first uniform sampling approach that allows discarding exactly $z$ outliers for these three problems. Our results can also be viewed as an extension of previous sub-linear time algorithms for ordinary clustering problems (without outliers). Experiments suggest that the uniform sampling method achieves clustering quality comparable to that of existing methods while greatly reducing the running time.
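The sample-then-solve idea described above can be sketched as follows. This is a minimal illustration, not the algorithm or analysis of the paper: it assumes a simple greedy farthest-point (Gonzalez-style) heuristic for $k$-center on the sample, and a cost function that discards exactly the $z$ farthest points; all function names are hypothetical.

```python
import random

def dist(p, q):
    """Euclidean distance between two points given as tuples."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def uniform_sample(points, s, seed=0):
    """Read the data once and keep a uniform sample of size s."""
    return random.Random(seed).sample(points, s)

def gonzalez_k_center(points, k):
    """Greedy farthest-point heuristic for k-center (Gonzalez)."""
    centers = [points[0]]
    while len(centers) < k:
        # add the point farthest from its nearest current center
        centers.append(max(points, key=lambda p: min(dist(p, c) for c in centers)))
    return centers

def k_center_cost_with_outliers(points, centers, z):
    """k-center cost after discarding exactly the z farthest points."""
    d = sorted(min(dist(p, c) for c in centers) for p in points)
    return d[len(points) - z - 1]

def sample_then_solve(points, k, s, seed=0):
    """Cluster a small uniform sample instead of the full input
    (illustrative only; the paper analyzes when this preserves quality)."""
    return gonzalez_k_center(uniform_sample(points, s, seed), k)
```

The point of the sketch is the division of labor: sampling touches the data once, the (possibly expensive) approximation algorithm runs only on the small sample, and the outlier budget $z$ enters solely through the cost evaluation.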