Dynamic Generation of Interpretable Inference Rules in a Neuro-Symbolic Expert System

Nathaniel Weir, Benjamin Van Durme

We present an approach for systematic reasoning that produces human-interpretable proof trees grounded in a factbase. Our solution resembles a classic Prolog-based inference engine, but replaces handcrafted rules with a combination of neural language modeling, guided generation, and semiparametric dense retrieval. This novel reasoning engine, NELLIE, dynamically instantiates interpretable inference rules that capture and score entailment (de)compositions over natural language statements. NELLIE provides competitive performance on scientific QA datasets requiring structured explanations over multiple facts.
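The core mechanism the abstract describes, Prolog-style backward chaining where handcrafted rules are replaced by dynamically generated entailment decompositions, can be illustrated with a minimal sketch. Everything below is a toy stand-in, not NELLIE's actual API: the neural generator, entailment scorer, and dense retriever are replaced by a hardcoded rule table and a small set of facts.

```python
# Minimal sketch of Prolog-style backward chaining over natural-language
# statements, in the spirit of the approach described above. All names
# and the toy rule table are illustrative assumptions, not NELLIE's code.

FACTBASE = {
    "an acorn is a seed",
    "a seed grows into a plant",
}

def generate_decompositions(goal):
    """Stand-in for guided neural generation: propose (premise1, premise2)
    pairs whose conjunction should entail the goal."""
    toy_rules = {
        "an acorn grows into a plant": [
            ("an acorn is a seed", "a seed grows into a plant"),
        ],
    }
    return toy_rules.get(goal, [])

def prove(goal, depth=2):
    """Return a proof tree (goal, children) if the goal is grounded in the
    factbase, directly or via a generated decomposition; else None."""
    if goal in FACTBASE:                      # leaf: a retrieved fact
        return (goal, [])
    if depth == 0:                            # bound the recursion
        return None
    for p1, p2 in generate_decompositions(goal):
        left, right = prove(p1, depth - 1), prove(p2, depth - 1)
        if left and right:                    # both premises proved
            return (goal, [left, right])
    return None

tree = prove("an acorn grows into a plant")
print(tree)
```

In the full system, `generate_decompositions` would be a constrained language-model generator whose outputs are scored by an entailment model, and the factbase lookup would be a dense retrieval step; the returned tuple is the human-interpretable proof tree.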
