By Timothy J. O'Donnell, Noah D. Goodman, Andreas Stuhlmueller, and the Church Working Group.
A generative model describes a process, usually one by which observable data is generated. Generative models represent knowledge about the causal structure of the world: simplified, "working models" of a domain. Such a model can then be used to answer many different questions by conditional inference. This contrasts with a more procedural or mechanistic approach, in which knowledge directly represents the input-output mapping for a particular question.
To make precise how generative models represent knowledge, we want a formal language designed to express the kinds of knowledge individuals have about the world. This language should be universal in the sense that it can express any (computable) process. We build on the <math>\lambda</math>-calculus (as realized in functional programming languages) because the <math>\lambda</math>-calculus captures the idea that what matters is causal dependence; in particular, it focuses not on the sequence of time but rather on the causal chain of which events lead to which other events. (This is one way of expressing the important notion of functional purity in programming languages.)
In some areas of cognitive science the generative approach is very clear: there is a well-defined external reality that we model. For example, optics provides a generative model for vision: some objects are present in the scene, light has known physical properties, and optics describes the process by which light reflecting off the surfaces of objects generates the sensory data that reaches the eye. It is clear in vision that simplifications of the true physical process may provide useful mental models for understanding the visual world.
The interpretation of generative models for other areas of cognitive science, such as language, is less clear. Language is a phenomenon that exists in the heads of language users and between users in social interactions. Thus there seems to be no purely objective reality to which a generative model for language corresponds. However, from the point of view of a single language user, it may be useful to imagine that there is some idealized, external notion of language structure to be learned (the "knowledge" of the "ideal speaker-hearer"). The language learner is then trying to recover the generative process of language as used in his or her community. We interpret this as the content of the "knowledge of language" characterized by linguistic competence.
In this section we introduce a stochastic <math>\lambda</math>-calculus as embodied in the probabilistic programming language Church, a derivative of LISP. We give an overview of how Church works and start building some simple generative processes with it.
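As a first taste of the language, here is a minimal sketch using the standard Church primitives define and flip (the name num-heads is our own, chosen for illustration):

```scheme
;; A tiny generative process: flip a fair coin three times and
;; count the heads. Each evaluation runs the process afresh and
;; so returns a new sample.
(define (num-heads)
  (+ (if (flip 0.5) 1 0)
     (if (flip 0.5) 1 0)
     (if (flip 0.5) 1 0)))

(num-heads) ;; => 0, 1, 2, or 3, distributed as Binomial(3, 0.5)
```

The key point is that a Church program denotes a distribution, and running the program draws a sample from it.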
One of the main advantages of working in a universal framework, like the <math>\lambda</math>-calculus, is that we can express arbitrary (even non-halting) computations. This gives us the power to build generative models with unbounded complexity. In this section we show the two main ways in which we can build models with unbounded complexity in Church. The first is by using recursion to build unbounded structures and to sample from arbitrarily large sets. The second is by using memoization to implicitly define an infinite distribution that can then be explored lazily.
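For instance (a sketch; flip, mem, and uniform-draw are standard Church primitives, while geometric and eye-color are illustrative names of our own):

```scheme
;; Recursion: a geometric distribution over the non-negative integers.
;; The process halts with probability 1, yet there is no fixed bound
;; on the size of the value it can return.
(define (geometric p)
  (if (flip p)
      0
      (+ 1 (geometric p))))

;; Memoization: mem caches the random choice made for each argument,
;; implicitly defining an infinite collection of random values that
;; is explored lazily, one argument at a time.
(define eye-color
  (mem (lambda (person) (uniform-draw '(blue brown green)))))

(eye-color 'alice) ;; sampled on first call...
(eye-color 'alice) ;; ...and the same value on every later call
```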
In this section we will review classical notation for probability theory and describe how it relates to models expressed in Church. We will also discuss the important concept of marginalization.
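For example, marginalization recovers the distribution of one random variable from a joint distribution by summing over the values of the others:

<math>P(A=a) = \sum_{b} P(A=a, B=b)</math>

In Church terms, a program that makes random choices for both <math>A</math> and <math>B</math> but returns only the value of <math>A</math> denotes exactly this marginal distribution.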
Up to this point in the tutorial we have focused on building generative models. In this section we will see how to reason about these models using the operation of conditioning, which permits flexible inferences to be drawn from a generative model. We will describe the family of Church procedures called query that implement conditioning. We will also discuss Bayes' rule and its interpretation in Church.
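As a sketch of the idea, here is one member of the query family, rejection-query, which repeatedly runs the program and keeps only runs in which the condition holds:

```scheme
(rejection-query
 (define A (if (flip) 1 0))
 (define B (if (flip) 1 0))
 A                ;; query expression: the value we want
 (>= (+ A B) 1))  ;; condition: what we take as observed
;; => 1 with probability 2/3, 0 with probability 1/3
```

Of the three equally likely runs that satisfy the condition, two have A equal to 1, so conditioning shifts A's distribution away from its fair-coin prior.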
In this section we review a number of phenomena that result from conditioning in generative models, including explaining away, the Bayesian Occam's razor, and the blessing of abstraction.
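For instance, explaining away can be sketched with a small model (the variable names and the numbers passed to mh-query, the sample count and lag, are illustrative choices of our own):

```scheme
;; cold and allergy are independent a priori; either causes sneezing.
(mh-query 1000 10
 (define cold (flip 0.2))
 (define allergy (flip 0.2))
 (define sneeze (or cold allergy))
 cold
 sneeze)
;; Conditioned on sneeze alone, cold is true in roughly 56% of samples
;; (0.2 / 0.36). Conditioning on allergy as well drops this back toward
;; the 20% prior, because allergy already explains the sneezing.
```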