Data poisoning is an adversarial scenario in which an attacker feeds a specially crafted sequence of samples to an online model in order to subvert its learning. We introduce the Lethean Attack, a novel data poisoning technique that induces catastrophic forgetting in an online model. We apply the attack in the context of Test-Time Training, a modern online learning framework aimed at generalization under distribution shifts. We present the theoretical rationale for the attack and empirically compare it against other sample sequences that naturally induce forgetting. Our results demonstrate that, using Lethean Attacks, an adversary can revert a test-time training model back to coin-flip accuracy with a short sample sequence.
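
The threat model above can be illustrated with a toy online learner. The sketch below is only a generic example of poisoning-induced forgetting via a label-flipped sample sequence; it is not the paper's Lethean Attack construction, and the learner, task, and learning rate are all illustrative assumptions.

```python
import random

random.seed(0)

def predict(w, b, x):
    # linear threshold classifier: 1 if w*x + b > 0 else 0
    return 1 if w * x + b > 0 else 0

def online_step(wb, x, y, lr=0.5):
    # one perceptron-style online update on a single sample
    w, b = wb
    err = y - predict(w, b, x)
    return (w + lr * err * x, b + lr * err)

def accuracy(wb, data):
    return sum(predict(*wb, x) == y for x, y in data) / len(data)

# clean task: label is 1 iff x > 0
clean = [(x, int(x > 0)) for x in (random.uniform(-1, 1) for _ in range(200))]

# online training on the clean stream
wb = (0.0, 0.0)
for x, y in clean:
    wb = online_step(wb, x, y)
acc_before = accuracy(wb, clean)  # high accuracy on the clean task

# short poisoned sequence: the attacker replays samples with flipped labels,
# and the model keeps updating online, overwriting what it learned
poison = [(x, 1 - y) for x, y in clean[:20]]
for x, y in poison:
    wb = online_step(wb, x, y)
acc_after = accuracy(wb, clean)  # accuracy on the clean task collapses

print(acc_before, acc_after)
```

The point of the sketch is that any model which continues to take gradient-style updates at test time (as Test-Time Training does) exposes its parameters to whoever controls the input stream, so a short adversarial sequence suffices to undo training.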