NeSy-MM

Relational neurosymbolic Markov models

Our most powerful AI agents rely almost exclusively on neural networks, which cannot easily be made to obey explicit rules or to incorporate background knowledge. Relational Neurosymbolic Markov Models (NeSy-MMs) combine the learning abilities of neural networks with the guarantees of symbolic reasoning, and apply this combination to sequential decision-making under uncertainty.

NeSy-MMs integrate neurosymbolic AI with Markov models by using probabilistic models over relations as their core representation. Unlike standard hidden Markov models, whose latent variables are unstructured, NeSy-MMs represent the latent state as symbolic relations and decompose it over time, much as planning algorithms do, while learning the unknown dynamics with neural networks.
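To make the idea concrete, here is a minimal, self-contained sketch of the general pattern, not the paper's implementation: a learned scorer proposes transitions, and a symbolic rule filters them, so every generated state satisfies the rule by construction. All names (the grid size, the toy linear "neural" scorer, the adjacency rule) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 5  # hypothetical 5x5 grid of (row, col) states

def neural_score(state, next_state, W):
    """Stand-in 'neural' scorer: a tiny linear model over state features."""
    x = np.array([*state, *next_state], dtype=float)
    return float(W @ x)

def satisfies_constraint(state, next_state):
    """Symbolic rule: moves must stay on the grid and be one step."""
    r, c = next_state
    on_grid = 0 <= r < GRID and 0 <= c < GRID
    one_step = abs(state[0] - r) + abs(state[1] - c) == 1
    return on_grid and one_step

def step(state, W):
    """Sample the next state from the constraint-filtered softmax."""
    candidates = [(state[0] + dr, state[1] + dc)
                  for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]]
    valid = [s for s in candidates if satisfies_constraint(state, s)]
    logits = np.array([neural_score(state, s, W) for s in valid])
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return valid[rng.choice(len(valid), p=probs)]

W = rng.normal(size=4)  # random weights stand in for a trained network
trajectory = [(0, 0)]
for _ in range(10):
    trajectory.append(step(trajectory[-1], W))

# Every generated transition respects the symbolic rule by construction.
assert all(satisfies_constraint(a, b)
           for a, b in zip(trajectory, trajectory[1:]))
```

Because the constraint is enforced at sampling time rather than learned from data, consistency holds even for inputs the network was never trained on.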

This gives NeSy-MMs three key abilities that other models lack:

  1. Consistent generation. NeSy-MMs generate sequences that respect given constraints (e.g., navigation rules), while deep Markov models and visual transformers trained on the same data fail to do so.
  2. Out-of-distribution generalisation. Trained on small grids with one enemy, NeSy-MMs generalise to larger grids, more enemies, and longer time horizons, unlike transformer baselines.
  3. Test-time interventions. New constraints can be added at test time without retraining (e.g., forbidding the agent from entering part of the room).
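The third ability follows directly from keeping constraints symbolic: a new rule can be imposed at generation time by conjoining it with the existing one, with no retraining of the neural component. A small self-contained sketch, with hypothetical rule names:

```python
def trained_rule(state, next_state):
    """Rule enforced during training: moves stay on a 5x5 grid."""
    r, c = next_state
    return 0 <= r < 5 and 0 <= c < 5

def forbid_corner(state, next_state):
    """Hypothetical test-time intervention: never enter cell (0, 4)."""
    return next_state != (0, 4)

def conjoin(*rules):
    """Compose rules: a candidate transition must satisfy all of them."""
    return lambda s, ns: all(rule(s, ns) for rule in rules)

# Swap in the composed rule wherever the sampler checks constraints.
rule_at_test_time = conjoin(trained_rule, forbid_corner)

assert trained_rule((0, 3), (0, 4))           # allowed during training
assert not rule_at_test_time((0, 3), (0, 4))  # forbidden after intervention
```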

[Figure: example generations compared across models — Deep Markov Model, NeSy-MM (ours), and Visual Transformer]

Resources