This paper revisits the debate around the legal personhood of AI and robots, a debate that has become increasingly salient yet remains sensitive amid the broad adoption of autonomous and self-learning systems. We conducted a survey ($N$=3,315) to understand laypeople's perceptions of this topic and analyzed how they assign responsibility, awareness, and punishment to AI, robots, humans, and various entities that could be held liable under existing doctrines. Even though participants did not attribute any mental state to automated agents, they still assigned punishment and responsibility to these entities. While participants mostly agreed that AI systems could be reformed through punishment, they did not believe such punishment would achieve its retributive and deterrent functions. Moreover, participants were unwilling to grant automated agents essential preconditions of punishment, namely physical independence or assets. We term this contradiction the punishment gap. We replicated the same punishment gap in a demographically representative sample of U.S. residents ($N$=244). We discuss the implications of these findings for how legal and social decisions may shape public attributions of responsibility and punishment to automated agents.