One of the defining aspects of being human is the ability to flexibly generate new ideas and hypotheses. For example, we readily come up with possible faults when our car breaks down, or plausible maladies when we feel unwell. We can brainstorm anything from party ideas, to corporate strategies, to magical creatures, and we frequently hypothesise hidden motivations and beliefs in our peers to explain why they act the way they do. Our ideas often combine familiar objects, concepts, and relations, making them symbolic, easy to communicate, and a ready guide for follow-up queries or evidence seeking. For example, suppose you came home from work to find your house in disarray. You might quickly suspect you have been burgled and investigate by checking whether valuables are missing. If you then discover feathers on the floor, this might inspire other possibilities: perhaps a bird got in through an open window and ran amok. These kinds of inventive inferences come quickly and easily to us, but are surprisingly difficult for artificial intelligence systems. Part of the difficulty is that, in natural domains like those above, there is typically an infinite number of possibilities one could generate, but few good ones. Our best ideas have the character of "aha" moments, immediately providing a better explanation than preceding candidates and potentially becoming a lasting addition to one's beliefs or knowledge base.
The key aim of this project is to develop algorithms that emulate the way humans generate, adapt, and actively investigate such hypotheses in everyday life. The basic idea is that we combine our more primitive concepts to form more complex ideas, essentially "trying out" different combinations of primitives and connectives when searching for a better explanation, or when adapting one that does not fit the latest evidence. This search process is governed by the overarching principles of simplicity and fit to the evidence, but constrained by our finite thinking time and capacity. Returning to the example above, you might rapidly generate, refine, or overturn several hypotheses as you investigate the mess, discovering a feather duster, cleaning products, and finally your partner in the midst of a spring clean.
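To make the idea concrete, the following is a minimal toy sketch of such a search: hypotheses are formed by combining primitive feature tests with logical connectives, and candidates are scored by fit to the evidence minus a simplicity penalty. The particular primitives, scenes, and scoring function are illustrative assumptions, not the project's actual implementation.

```python
import itertools

# Primitive predicates over a "scene" (a dict of block features).
# These primitives are illustrative placeholders.
PRIMITIVES = {
    "has_red": lambda s: "red" in s["colours"],
    "has_blue": lambda s: "blue" in s["colours"],
    "tall": lambda s: s["height"] >= 3,
}

def neg(p):
    # Connective: negation of a predicate.
    return lambda s: not p(s)

def conj(p, q):
    # Connective: conjunction of two predicates.
    return lambda s: p(s) and q(s)

def enumerate_hypotheses():
    """Yield (description, predicate, size) triples in order of complexity."""
    for name, p in PRIMITIVES.items():
        yield name, p, 1
    for name, p in PRIMITIVES.items():
        yield f"not {name}", neg(p), 2
    for (n1, p1), (n2, p2) in itertools.combinations(PRIMITIVES.items(), 2):
        yield f"{n1} and {n2}", conj(p1, p2), 3

def best_hypothesis(evidence, penalty=0.1):
    """Return the hypothesis maximising fit minus a complexity penalty."""
    def score(h):
        _, pred, size = h
        fit = sum(pred(scene) == label for scene, label in evidence) / len(evidence)
        return fit - penalty * size
    return max(enumerate_hypotheses(), key=score)

# Scenes labelled by whether a hidden causal effect occurred.
evidence = [
    ({"colours": {"red"}, "height": 1}, True),
    ({"colours": {"blue"}, "height": 4}, False),
    ({"colours": {"red", "blue"}, "height": 2}, True),
    ({"colours": set(), "height": 5}, False),
]
```

On this evidence the simple rule "has_red" explains every observation, so the penalty favours it over more complex conjunctions with equal fit, mirroring the simplicity-versus-fit trade-off described above.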
"Program induction" is a powerful new mathematical framework for constructing symbolic models, or programs, that can explain or reproduce observations. Induced programs can grow in structure and complexity as evidence is encountered, reusing past solutions and composing them to solve new problems. We propose to use this framework to capture and ultimately synthesise human-like hypothesis generation. To examine human hypothesis generation closely, we will combine theoretical work in the program induction framework with experiments with human adults. In our inductive learning tasks, participants and our algorithms will both observe and create their own physical scenes made up of simple geometric blocks, testing them to discover and generate hypotheses about the conditions under which they will produce a novel causal effect (e.g. in our pilot, a "newly discovered form of radiation"). This setup allows us to explore arbitrarily complex hidden causal effects involving combinations of features and relations, meaning that participants (and our algorithms) must use hypothesis generation, reasoning, and active testing to identify the ground truth in each case.
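The active-testing component can also be sketched in miniature: given several still-viable hypotheses (predicates over scenes), a learner can choose the next scene to test so that the candidates disagree as much as possible, making the outcome maximally informative. The hypotheses and scenes below are illustrative assumptions, not the experiment's actual stimuli.

```python
def disagreement(scene, hypotheses):
    # Count how evenly the hypotheses split on this scene's predicted outcome.
    votes = sum(h(scene) for h in hypotheses)
    return min(votes, len(hypotheses) - votes)

def most_informative(scenes, hypotheses):
    # Pick the scene on which the current hypotheses disagree most.
    return max(scenes, key=lambda s: disagreement(s, hypotheses))

# Candidate hypotheses about which block configurations trigger the effect.
hypotheses = [
    lambda s: "red" in s,    # effect needs a red block
    lambda s: "blue" in s,   # effect needs a blue block
    lambda s: "green" in s,  # effect needs a green block
    lambda s: len(s) > 0,    # any block triggers the effect
]

# Candidate scenes, represented as sets of block colours.
scenes = [{"red"}, {"red", "blue"}, set()]
```

Here a scene containing only a red block splits the four hypotheses two against two, so its outcome rules out half of them either way, whereas the empty scene distinguishes nothing.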
Through our modelling and experiments we expect to deepen understanding of the mechanisms that underpin the uniquely human ability to make explanatory inferences. We expect our findings to influence the robotics and AI communities, providing insight into how to build artificial systems that can better emulate, understand, and be understood by humans. The goal of this project is thus to develop a precise algorithmic account of idea generation in human learning, which we call "computational constructivism".