Recent studies have reported biases in machine learning image classifiers, especially against particular demographic groups. Counterfactual examples for an input (perturbations that change specific features but not others) have been shown to be useful for evaluating the fairness of machine learning models and for explaining their predictions. However, generating counterfactual examples for images is non-trivial due to the underlying causal structure that governs the various features of an image: to be meaningful, generated perturbations need to satisfy the constraints implied by the causal model. We present a method for generating counterfactuals by incorporating a structural causal model (SCM) into a novel, improved variant of Adversarially Learned Inference (ALI); the resulting model produces counterfactuals that respect the causal relationships between the attributes of an image. Using the generated counterfactuals, we show how to evaluate the bias of a pre-trained machine learning classifier and explain its predictions. We also propose a counterfactual regularizer that can mitigate bias in the classifier. On the Morpho-MNIST dataset, our method generates counterfactuals comparable in quality to prior work on SCM-based counterfactuals. Our method also scales to the more complex CelebA faces dataset; in a human evaluation experiment, the generated counterfactuals are indistinguishable from the original images. As a downstream task, we use counterfactuals to evaluate a standard classifier trained on CelebA and show that it is biased with respect to skin and hair color; we then demonstrate how counterfactual regularization removes the identified biases.
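
To make the counterfactual regularizer concrete, the following is a minimal PyTorch sketch of one plausible formulation, not necessarily the paper's exact loss: the classifier is trained with the usual cross-entropy plus a penalty on how much its prediction shifts between an image and a counterfactual in which a sensitive attribute (e.g., skin color) has been changed. The names `clf` (the classifier being trained) and `generate_cf` (standing in for the SCM/ALI-based counterfactual generator) are hypothetical interfaces introduced here for illustration.

```python
import torch
import torch.nn.functional as F

def counterfactual_regularized_loss(clf, x, y, generate_cf, lam=1.0):
    """Cross-entropy plus a penalty on prediction shift under counterfactuals.

    clf:         classifier mapping a batch of images to logits
    x, y:        batch of images and integer class labels
    generate_cf: hypothetical function returning counterfactuals of x with a
                 sensitive attribute (e.g., skin color) changed
    lam:         weight of the counterfactual regularization term
    """
    logits = clf(x)
    ce = F.cross_entropy(logits, y)

    # Generate counterfactuals without backpropagating through the generator.
    with torch.no_grad():
        x_cf = generate_cf(x)  # e.g., change skin color, keep everything else

    logits_cf = clf(x_cf)

    # Penalize divergence between the classifier's predictive distributions
    # on an image and its counterfactual: the sensitive change should not
    # affect the classifier's output.
    reg = F.kl_div(
        F.log_softmax(logits_cf, dim=-1),
        F.softmax(logits, dim=-1),
        reduction="batchmean",
    )
    return ce + lam * reg
```

Under this formulation, a classifier that relies on the sensitive attribute incurs a large regularization term, so minimizing the combined loss pushes it toward predictions that are invariant to counterfactual changes in that attribute.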