Grounding compositional hypothesis generation in specific instances
Abstract
A number of recent computational models treat concept learning as a form of probabilistic rule induction in a space of language-like, compositional concepts. Inference in such models frequently requires repeatedly sampling from an (infinite) distribution over possible concept rules and comparing their relative likelihood in light of the current data or evidence. However, we argue that most existing algorithms for top-down sampling are inefficient and cognitively implausible accounts of human hypothesis generation. We therefore propose an alternative, the Instance Driven Generator (IDG), which constructs hypotheses bottom-up, directly from encountered positive instances of a concept. Using a novel rule-induction task based on the children's game Zendo, we compare these "bottom-up" and "top-down" approaches to inference. We find that the bottom-up IDG model better accounts for human inferences and yields a computationally more tractable inference mechanism for concept-learning models based on a probabilistic language of thought.
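
To make the contrast concrete, below is a minimal, hypothetical sketch in a toy Zendo-like domain. A top-down sampler draws rules from a prior over a small grammar without regard to the data, while an instance-driven generator composes rules from feature values observed in a known positive scene, so its proposals are consistent with that scene by construction. The grammar, feature set, and all function names here are illustrative assumptions, not the paper's actual model.

# Minimal, hypothetical sketch contrasting "top-down" rule sampling with
# an instance-driven ("bottom-up") generator in a toy Zendo-like domain.
# The grammar and all names are illustrative, not the paper's code.
import random

# A scene is a list of objects; each object is a dict of feature values.
SCENE = [
    {"colour": "red", "size": "large", "upright": True},
    {"colour": "blue", "size": "small", "upright": False},
]

FEATURES = {
    "colour": ["red", "green", "blue"],
    "size": ["small", "medium", "large"],
    "upright": [True, False],
}
QUANTIFIERS = ["exists", "all"]


def evaluate(rule, scene):
    """Check whether a (quantifier, feature, value) rule holds of a scene."""
    quant, feat, val = rule
    matches = [obj[feat] == val for obj in scene]
    return any(matches) if quant == "exists" else all(matches)


def sample_rule_top_down():
    """Top-down: sample a rule from the prior, blind to any observed scene."""
    feat = random.choice(list(FEATURES))
    return (random.choice(QUANTIFIERS), feat, random.choice(FEATURES[feat]))


def sample_rule_bottom_up(positive_scene):
    """Bottom-up (IDG-style): build a rule out of feature values that are
    actually instantiated in a known positive example, so every proposal
    is guaranteed to be true of at least that scene."""
    obj = random.choice(positive_scene)    # ground the rule in a specific object
    feat = random.choice(list(FEATURES))
    rule = ("exists", feat, obj[feat])     # "some object has this feature value"
    assert evaluate(rule, positive_scene)  # true of the grounding instance
    return rule


if __name__ == "__main__":
    random.seed(0)
    # Top-down proposals often contradict the observed scene and must be
    # rejected; bottom-up proposals are consistent with it by construction.
    top_down = [sample_rule_top_down() for _ in range(1000)]
    consistent = sum(evaluate(r, SCENE) for r in top_down)
    print(f"top-down proposals consistent with the scene: {consistent}/1000")
    print("bottom-up proposal:", sample_rule_bottom_up(SCENE))

Even in this tiny hypothesis space, many top-down draws are inconsistent with the observed scene and effectively wasted, whereas grounding each proposal in a positive instance avoids that rejection cost entirely; this is the intuition behind the efficiency argument in the abstract.
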
BibTeX entry:
@inproceedings{bramley2018grounding,
address = {Austin, TX},
author = {Bramley, N. R. and Rothe, A. and Tenenbaum, J. B. and Xu, F. and Gureckis, T. M.},
booktitle = {Proceedings of the 40th Annual Conference of the Cognitive Science Society},
editor = {Rogers, T.T. and Rau, M. and Zhu, X. and Kalish, C.W.},
publisher = {Cognitive Science Society},
title = {Grounding compositional hypothesis generation in specific instances},
year = {2018}}