
Composed-Program Induction with Latent Program Lattice

NeurIPS-25 Workshop on Symmetry and Geometry in Neural Representations | Extended Abstract
Keywords: Compositional Reasoning · Program Induction · Lattice Problem · Dual-Process Reasoning · Abstraction Discovery

Abstract

Compositional reasoning requires learning structure, inferring programs, and discovering refined primitives. Existing approaches either lack explicit decomposition mechanisms or rely on hand-crafted primitives. We propose the Program Lattice Auto Encoder (PLAE), which learns compositional transformations in a structured latent program space. PLAE trains an encoder where program effects correspond to integer linear combinations of program bases, forming a discrete program lattice. Program induction reduces to solving the Closest Vector Problem (CVP), enabling two complementary inference modes: fast System-1 reasoning via CVP and deliberate System-2 reasoning through stepwise lattice walks with intermediate verification. The framework supports abstraction discovery through lattice reduction, which refines primitive bases to uncover more fundamental components. This work connects neural and symbolic reasoning by providing a mathematically principled framework for compositional domains.

Compositional Reasoning

Compositional reasoning is the process of solving new problems by systematically composing primitive operations into complex transformations. This ability hinges on three key components:

  1. Learning the compositional structure that governs how primitives interact
  2. Inference using this learned structure to generalize from limited examples, and
  3. Abstraction discovery that compresses and refines primitives to accelerate inference.

This paper tackles these key components of compositional reasoning by learning, and inferring on, a latent program lattice.


Program Induction

A program induction task consists of demonstration input-output pairs generated by a consistent hidden rule. The goal is to produce the test output that results from applying the implied rule to the test input. To achieve this, the model must first learn a good representation of the program space.


An example of a program induction task on a Rubik's Cube
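To make the setup concrete, here is a minimal sketch of how such a task could be represented; the class name and fields are hypothetical illustrations, not part of PLAE's actual codebase.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class ProgramInductionTask:
    """A task bundles demonstration pairs generated by one hidden rule
    and a test input; the goal is that rule's output on the test input."""
    demos: list[tuple[Any, Any]]  # [(x_1, y_1), ..., (x_n, y_n)], one shared rule
    test_input: Any               # x_test; predict y_test = rule(x_test)
```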


1. Learning Consistent Primitive Program Representations


A latent program basis vector models the effect of applying a program as a uniform field in the latent program space.

First, the encoder that embeds the objects in the demonstration pairs is trained to learn a consistent representation for primitive programs. The difference vector between the output and input embeddings represents the latent program effect, and these effects should be identical whenever the implied programs are the same.

$$L_{\text{consistency}} = \sum_{i=1}^{n} \left\| \big(E(y_i) - E(x_i)\big) - \bar{\Delta} \right\|^2$$

By enforcing this program consistency loss, the encoder learns to represent each implied program as a uniform vector field in the latent program space.
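As a minimal PyTorch sketch of this objective (the batched encoder E and its interface are assumptions for illustration, not PLAE's actual API):

```python
import torch

def consistency_loss(E, xs, ys):
    """Program-consistency loss over n demonstration pairs that share one
    implied program: each latent effect E(y_i) - E(x_i) is pulled toward
    the mean effect delta_bar, as in the equation above."""
    deltas = E(ys) - E(xs)            # (n, d) latent program effects
    delta_bar = deltas.mean(dim=0)    # mean effect of the shared program
    return ((deltas - delta_bar) ** 2).sum()
```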


2. Learning to Represent Program Compositionality as a Lattice Structure


Latent program representations for distinct programs form the basis of the latent program lattice

Second, the encoder is trained to understand how programs compose. Just as word embeddings capture semantic relationships as vector translations, our model learns program composition as vector translation.

$$E(p_2 \circ p_1(x)) \approx E(x) + \vec{p}_1 + \vec{p}_2$$

As a result, any valid discrete composition of the primitive programs is represented as an integer linear combination of the learned latent program basis, spanning a latent program lattice.
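Concretely, with the learned basis vectors stacked as columns of a matrix, every composed program corresponds to an integer coefficient vector; a sketch with hypothetical dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(8, 3))   # columns: latent vectors of k=3 primitives, d=8

c = np.array([1, 2, 0])       # composed program: p1 once, p2 twice
effect = B @ c                # its latent effect is a lattice point

# E(p2 ∘ p2 ∘ p1 (x)) ≈ E(x) + effect; the set {B @ c : c ∈ Z^k}
# is the latent program lattice spanned by the learned basis.
```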


3. Reasoning on the Learned Latent Program Lattice


Given unseen demonstration pairs, the average of the difference vectors in the latent space (the latent program effect) is inferred as the latent representation of the implied program.

At inference time, unseen demonstration pairs are given and the model induces the implied program using the learned latent program lattice. The encoder embeds each demonstration pair into a latent program effect, and the average of these effects represents the implied latent program. Based on the inductive bias that only lattice points correspond to valid discrete compositions of primitive programs, the inferred latent program is snapped to its closest lattice point.

$$\vec{p}^{\,\ast} = \arg\min_{\vec{v} \in L} \left\| \vec{v} - \bar{\Delta} \right\|$$

Finding the closest lattice point is the Closest Vector Problem (CVP). Hence, the model reduces program induction to CVP.
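The abstract does not fix a particular CVP solver; one standard approximation is Babai's rounding, sketched below. It is exact for orthogonal bases and increasingly accurate after lattice reduction (Section 4).

```python
import numpy as np

def snap_to_lattice(B, delta_bar):
    """Approximate CVP via Babai rounding: express delta_bar in real
    coordinates of the basis B, round to integers, map back to the lattice."""
    coeffs, *_ = np.linalg.lstsq(B, delta_bar, rcond=None)  # real coordinates
    c = np.rint(coeffs).astype(int)   # nearest integer combination
    return B @ c, c                   # lattice point p*, primitive counts
```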


There are two complementary modes of reasoning for generating the final output.

3.1. System-1 Reasoning


PLAE encodes observed input-output pairs, takes their difference as a latent program effect, adds this effect to a new input's embedding, and decodes.

System-1 reasoning is a fast, intuitive, but approximate approach. The model directly adds the inferred latent program effect to the embedding of the test input and decodes to generate the output, in a single step that is agnostic to the complexity of the composed program. It is akin to implicitly noticing the mapping from input to output and applying it directly to the test input.
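A sketch of this one-shot inference path, reusing snap_to_lattice from the CVP sketch above; the encoder E and decoder D are assumed callables for illustration, not PLAE's actual interface:

```python
import numpy as np

def system1_infer(E, D, B, demos, x_test):
    """Fast System-1 inference: average the latent effects of the demos,
    snap to the lattice, shift the test embedding once, and decode."""
    delta_bar = np.mean([E(y) - E(x) for x, y in demos], axis=0)
    p_star, _ = snap_to_lattice(B, delta_bar)   # closest lattice point
    return D(E(x_test) + p_star)                # single-step decode
```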


3.2. System-2 Reasoning


PLAE factorizes the same effect into a sequence of basis steps and executes them stepwise, repeatedly decoding/encoding in a guided "lattice walk" that verifies and refines intermediate states.

System-2 reasoning is a slower but more deliberate approach. The inferred latent program effect is decomposed into a combination of latent program basis vectors. Instead of applying the effect all at once, the model applies it step by step, walking on the lattice. It repeatedly encodes, applies a latent program basis vector, and decodes, adjusting and refining its intuition over long compositional reasoning chains.
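A sketch of the lattice walk under the same assumptions (E, D, and the primitive counts c from the CVP step are hypothetical interfaces):

```python
import numpy as np

def system2_infer(E, D, B, c, x_test):
    """Deliberate System-2 inference: apply the induced program one primitive
    at a time, re-encoding each decoded state to verify and refine it."""
    state = x_test
    for j, count in enumerate(c):            # primitive j appears |count| times
        step = np.sign(count) * B[:, j]      # forward (or inverse) primitive
        for _ in range(abs(count)):
            state = D(E(state) + step)       # one lattice step, then decode
    return state
```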


4. Abstraction via Lattice Reduction


Lattice Reduction refines primitive bases to improve efficiency and uncover more fundamental components

Finally, abstraction discovery is powered by lattice reduction: finding a shorter, more orthogonal basis that spans the same lattice points. This significantly improves the CVP solver's efficiency without re-training the encoder. Moreover, the reduced basis represents a more independent set of primitive programs that still spans the same compositional program space.
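As an illustration, the classical Lagrange-Gauss algorithm reduces a rank-2 lattice basis; in higher dimensions, LLL-style reduction (e.g., via the fpylll library) plays the same role. This is a generic lattice-reduction sketch, not PLAE's specific procedure:

```python
import numpy as np

def gauss_reduce(b1, b2):
    """Lagrange-Gauss reduction: returns a shorter, more orthogonal basis
    spanning exactly the same rank-2 lattice (all updates are unimodular)."""
    b1, b2 = np.asarray(b1, float), np.asarray(b2, float)
    if b1 @ b1 > b2 @ b2:
        b1, b2 = b2, b1                       # keep b1 the shorter vector
    while True:
        m = round((b1 @ b2) / (b1 @ b1))      # nearest-integer projection
        b2 = b2 - m * b1                      # subtract integer multiple of b1
        if b2 @ b2 >= b1 @ b1:
            return b1, b2
        b1, b2 = b2, b1
```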



This latent program lattice could provide a principled framework for learning and inference in discrete compositional domains, all within a single unified construct.



Interested in updates for this research?

Subscribe for updates!

https://intrig.vercel.app/subscribe

Interested in doing research with us? Please put yourself on our radar for collaboration!

https://intrig.vercel.app/collaboration


Poster

[Poster] Composed-Program-Induction-with-Latent-Program-Lattice.pdf