Deep Determinantal Point Processes

Mike Gartrell, Elvis Dohmatob, Jon Alberdi

Determinantal point processes (DPPs) have attracted significant attention as an elegant model that is able to capture the balance between quality and diversity within sets. DPPs are parameterized by a positive semi-definite kernel matrix. While DPPs have substantial expressive power, they are fundamentally limited by the parameterization of the kernel matrix and their inability to capture nonlinear interactions between items within sets. We present the deep DPP model as a way to address these limitations, using a deep feed-forward neural network to learn the kernel matrix. In addition to allowing us to capture nonlinear item interactions, the deep DPP also allows easy incorporation of item metadata into DPP learning. Since the learning target is the DPP kernel matrix, the deep DPP allows us to use existing DPP algorithms for efficient learning, sampling, and prediction. Through an evaluation on several real-world datasets, we show experimentally that the deep DPP can provide a considerable improvement in the predictive performance of DPPs, while also outperforming strong baseline models in many cases.
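To make the idea concrete, the construction described in the abstract can be sketched as follows: a feed-forward network maps each item's metadata features to an embedding, and the DPP kernel is built from those embeddings as a Gram matrix; the set log-likelihood then uses the standard L-ensemble form. This is a minimal illustrative sketch, not the authors' implementation — the two-layer MLP, the feature dimensions, and all weight values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def deep_dpp_kernel(features, W1, b1, W2, b2):
    # Hypothetical 2-layer MLP: item features -> embeddings V,
    # then the (low-rank, PSD) L-ensemble kernel L = V V^T.
    V = relu(features @ W1 + b1) @ W2 + b2
    return V @ V.T

def dpp_log_likelihood(L, subset):
    # Standard L-ensemble DPP likelihood of an observed subset S:
    # log P(S) = log det(L_S) - log det(L + I).
    n = L.shape[0]
    L_S = L[np.ix_(subset, subset)]
    _, logdet_S = np.linalg.slogdet(L_S)          # assumes L_S is nonsingular
    _, logdet_Z = np.linalg.slogdet(L + np.eye(n))
    return logdet_S - logdet_Z

# Toy example: 6 items with 4-dim metadata features, embedding dim 3.
features = rng.normal(size=(6, 4))
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

L = deep_dpp_kernel(features, W1, b1, W2, b2)
ll = dpp_log_likelihood(L, [0, 2, 5])  # log-probability of observing items {0, 2, 5}
```

In practice one would train the network weights by gradient ascent on this log-likelihood over observed sets; because the learning target is the kernel matrix itself, existing DPP routines for sampling and prediction apply unchanged, as the abstract notes.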
