ABSTRACT

Attempts to characterize people’s causal knowledge of a domain in terms of causal network structures miss a key level of abstraction: the laws that allow people to formulate meaningful causal network hypotheses, and thereby learn and reason about novel causal systems so effectively. We outline a preliminary framework for modeling causal laws in terms of generative grammars for causal networks. We then present an experiment showing that causal grammars can be learned rapidly in a novel domain and used to support one-shot inferences about the unobserved causal properties of new objects. Finally, we give a Bayesian analysis explaining how causal grammars may be induced from the limited data available in our experiments.