Multimodal Semantic Simulations of Linguistically Underspecified Motion Events

Nikhil Krishnaswamy, James Pustejovsky

In this paper, we describe a system for generating three-dimensional visual simulations of natural language motion expressions. We use a rich formal model of events and their participants to generate simulations that satisfy the minimal constraints entailed by the associated utterance, relying on semantic knowledge of physical objects and motion events. This paper outlines the relevant technical considerations and discusses the implementation of these semantic models in such a system.
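
To make the core idea concrete, the sketch below illustrates (in Python) how a simulator might fill in parameters that an utterance leaves underspecified, while respecting the minimal constraints the utterance does entail. This is a hypothetical illustration, not the authors' implementation; the classes, attribute names, and default ranges here are assumptions chosen for exposition.

```python
import random
from dataclasses import dataclass

@dataclass
class PhysicalObject:
    name: str
    position: tuple   # (x, y, z) in world coordinates
    movable: bool = True

@dataclass
class MotionEvent:
    verb: str
    theme: PhysicalObject
    destination: tuple = None  # None = underspecified by the utterance
    speed: float = None        # None = underspecified by the utterance

def specify(event: MotionEvent) -> MotionEvent:
    """Fill underspecified parameters with values consistent with the
    minimal constraints entailed by the utterance (hypothetical logic)."""
    # Semantic knowledge of the object constrains the event:
    # an immovable theme cannot undergo a motion event.
    if not event.theme.movable:
        raise ValueError(f"{event.theme.name} cannot undergo '{event.verb}'")
    if event.destination is None:
        # "Move the ball" entails only that the endpoint differ from the
        # start; any such endpoint satisfies the utterance.
        x, y, z = event.theme.position
        event.destination = (x + random.uniform(0.5, 2.0), y, z)
    if event.speed is None:
        # Speed is unconstrained by the utterance; pick a plausible default.
        event.speed = random.uniform(0.5, 1.5)
    return event

# "Move the ball": destination and speed are left to the simulator.
ball = PhysicalObject("ball", position=(0.0, 0.0, 0.0))
event = specify(MotionEvent(verb="move", theme=ball))
print(event.destination, event.speed)
```

Each run of such a procedure yields a different but equally valid visualization of the same utterance, which is what makes simulating underspecified language nontrivial: the system must distinguish constraints the sentence imposes from parameters it leaves open.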
