Mixture Models

To return to the top level: Probabilistic Models of Cognition.

In the section on Hierarchical Models we saw the power of probabilistic inference in learning about the latent structure underlying different kinds of observations: the mixture of colors in different bags of marbles, or the prototypical features of categories of animals. In that discussion we always assumed that we knew what kind each observation belonged to—the bag that each marble came from, or the subordinate, basic, and superordinate category of each object. Knowing this allowed us to pool the information from each observation for the appropriate latent variables. What if we don't know a priori how to divide up our observations?

In this section we explore the problem of simultaneously discovering kinds and their properties; this can be done using mixture models.


Learning Categories

Imagine a child who enters the world and begins to see objects. She can't begin by learning the typical features of cats or mice, because she doesn't yet know that there are such kinds of objects as cats and mice. Yet she may quickly notice that some of the objects all tend to purr and have claws, while other objects are small and run fast—she can cluster the objects together on the basis of common features and thus form categories (such as cats and mice), whose typical features she can then learn.

To formalize this learning problem, we begin by adapting the bags-of-marbles examples from the Hierarchical Models section. However, we now assume that the bag that each marble is drawn from is unobserved and must be inferred.
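Here is one way such a model might be written in Church. This is a sketch: the helper names (bag->prototype, obs->bag, draw-marble), the use of three bags, the observations, and the hyperparameter values are illustrative choices rather than fixed by the text.

(define colors '(blue green red))

(define samples
  (mh-query
   200 100

   ;; a small pseudocount alpha: marbles within a bag tend to share a color
   (define alpha 0.1)
   (define bag->prototype
     (mem (lambda (bag) (dirichlet (list alpha alpha alpha)))))

   ;; the bag that each observation came from is latent
   (define obs->bag
     (mem (lambda (obs-name) (uniform-draw '(bag1 bag2 bag3)))))

   (define draw-marble
     (mem (lambda (obs-name)
            (multinomial colors (bag->prototype (obs->bag obs-name))))))

   ;; query: did these observations come from the same bag?
   (list (equal? (obs->bag 'obs1) (obs->bag 'obs2))
         (equal? (obs->bag 'obs1) (obs->bag 'obs3)))

   ;; condition on the observed marble colors
   (and (equal? 'red  (draw-marble 'obs1))
        (equal? 'red  (draw-marble 'obs2))
        (equal? 'blue (draw-marble 'obs3)))))

(hist (map first samples) "obs1 and obs2 from the same bag?")
(hist (map second samples) "obs1 and obs3 from the same bag?")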

We see that it is likely that obs1 and obs2 came from the same bag, but quite unlikely that obs3 did. Why? Notice that we have set alpha small, indicating a belief that the marbles in a bag will tend to all be the same color. How do the results change if you make alpha larger? Why? Note that we have queried on whether observed marbles came from the same bag, instead of directly querying on the bag that an observation came from. This is because the bag label by itself is meaningless; it is only useful in its role of determining which objects have similar properties. Formally, the model we have defined above is symmetric in the bag labels: if you permute all the labels, you get a new state with the same probability.

Instead of assuming that a marble is equally likely to come from each bag, we could learn a distribution over bags in which each bag has a different probability. This is called a mixture distribution over the bags.
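One hedged way to sketch this change is to draw mixture weights over the bags from a Dirichlet prior and sample each observation's bag from those weights rather than uniformly. As before, the names and hyperparameters below are illustrative.

(define colors '(blue green red))

(define samples
  (mh-query
   200 100

   (define alpha 0.1)
   (define bag->prototype
     (mem (lambda (bag) (dirichlet (list alpha alpha alpha)))))

   ;; mixture weights over the bags, themselves drawn from a Dirichlet prior
   (define bag-weights (dirichlet '(1 1 1)))
   (define obs->bag
     (mem (lambda (obs-name)
            (multinomial '(bag1 bag2 bag3) bag-weights))))

   (define draw-marble
     (mem (lambda (obs-name)
            (multinomial colors (bag->prototype (obs->bag obs-name))))))

   (equal? (obs->bag 'obs1) (obs->bag 'obs2))

   (and (equal? 'red  (draw-marble 'obs1))
        (equal? 'red  (draw-marble 'obs2))
        (equal? 'blue (draw-marble 'obs3)))))

(hist samples "obs1 and obs2 from the same bag?")

The only change from the previous model is that obs->bag now draws from the learned bag-weights instead of using uniform-draw.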

Models of this kind are called mixture models because the observations are a "mixture" of several categories. Mixture models are widely used in modern probabilistic modeling because they describe how to learn the unobservable categories which underlie observable properties in the world.

The observation distribution associated with each mixture component (i.e., kind or category) can be any distribution we like. For example, here is a mixture model with Gaussian components.
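A hedged sketch of a two-component Gaussian mixture follows. The component names, the priors on the component means, and the window used to condition on continuous observations are all illustrative; a more careful model would use a proper likelihood for the observed values rather than this crude window.

(define samples
  (mh-query
   200 100

   ;; each latent component has its own mean; observations get Gaussian noise
   (define component->mean
     (mem (lambda (component) (gaussian 0.0 10.0))))

   (define obs->component
     (mem (lambda (obs-name) (uniform-draw '(c1 c2)))))

   (define observe
     (mem (lambda (obs-name)
            (gaussian (component->mean (obs->component obs-name)) 1.0))))

   ;; query: do the first two observations share a component?
   (equal? (obs->component 'a) (obs->component 'b))

   ;; crude conditioning: the sampled values must land near the data
   (and (< (abs (- (observe 'a) 1.0)) 0.5)
        (< (abs (- (observe 'b) 1.2)) 0.5)
        (< (abs (- (observe 'c) 9.0)) 0.5))))

(hist samples "a and b from the same component?")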

Example: Topic Models

One very popular class of mixture-based approaches is topic models, which are used for document classification, clustering, and retrieval. The simplest topic models assume that documents can be represented as bags of words: unordered collections of the words that the document contains. In topic models, each document is associated with a mixture over topics, each of which is itself a distribution over words.

One popular kind of bag-of-words topic model is known as Latent Dirichlet Allocation (LDA).[1] The generative process for this model can be described as follows. Each topic is associated with a distribution over words, and this distribution is drawn from a Dirichlet prior. For each document, mixture weights over a set of <math>K</math> topics are drawn from a Dirichlet prior, and then <math>N</math> topics are sampled for the document, one for each word. Finally, each of the document's <math>N</math> words is sampled from the multinomial distribution associated with its topic. This is shown in the Church code below.
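The following is a hedged sketch of this generative process in Church, with two topics and a four-word vocabulary matching the example discussed next. The document names, word positions, and Dirichlet pseudocounts are illustrative.

(define vocabulary '(DNA evolution parsing syntax))
(define topics '(topic1 topic2))

(define samples
  (mh-query
   200 100

   ;; each topic is a distribution over words, drawn from a Dirichlet prior
   (define topic->word-dist
     (mem (lambda (topic) (dirichlet '(1 1 1 1)))))

   ;; each document has its own mixture weights over topics
   (define doc->topic-weights
     (mem (lambda (doc) (dirichlet '(1 1)))))

   ;; for each word position: sample a topic, then a word from that topic
   (define word
     (mem (lambda (doc pos)
            (let ((topic (multinomial topics (doc->topic-weights doc))))
              (multinomial vocabulary (topic->word-dist topic))))))

   ;; query: the inferred word distribution for topic1
   (topic->word-dist 'topic1)

   ;; the observed documents, as bags of words
   (and (equal? (word 'doc1 1) 'DNA)     (equal? (word 'doc1 2) 'evolution)
        (equal? (word 'doc2 1) 'parsing) (equal? (word 'doc2 2) 'syntax)
        (equal? (word 'doc3 1) 'DNA)     (equal? (word 'doc3 2) 'evolution)
        (equal? (word 'doc4 1) 'parsing) (equal? (word 'doc4 2) 'syntax))))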

In this simple example, there are two topics, topic1 and topic2, and four words. These words are deliberately chosen to represent one of two possible subjects that a document can be about: one can be thought of as 'biology' (i.e., DNA and evolution), and the other can be thought of as 'linguistics' (i.e., parsing and syntax).

The documents consist of lists of individual words from one or the other topic. Based on the co-occurrence of words within individual documents, the model is able to learn that one of the topics should put high probability on the biological words and the other topic should put high probability on the linguistic words. It is able to learn this because different kinds of documents are stable mixtures of different topics, which in turn are stable distributions over words.

Example: Categorical Perception of Speech Sounds

(This example is adapted from: Feldman, N. H., Griffiths, T. L., and Morgan, J. L. (2009). The influence of categories on perception: Explaining the perceptual magnet effect as optimal statistical inference. Psychological Review, 116(4):752–782.)

Human perception is often skewed by our expectations. A common example of this is 'categorical perception': we perceive objects as more similar to their category prototype than they really are. In phonology this has been particularly important and is called the perceptual magnet effect: hearers regularize a speech sound into the category that they think it belongs to. Of course this category isn't known a priori, so a hearer must simultaneously infer which category the speech sound belongs to and what the sound actually was. In the code below we model this as a mixture model over the latent categories of sounds, combined with a noisy observation process.
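Here is a hedged sketch of such a model: two phonetic categories with illustrative means, Gaussian noise on the production of the sound, and a crude window condition standing in for exact conditioning on the heard value. It assumes a mean helper for averaging the returned samples.

(define heard-value 2.0)

(define samples
  (mh-query
   200 100

   ;; latent category of the intended sound (category means are illustrative)
   (define category (if (flip 0.5) 'vowel-a 'vowel-b))
   (define category-mean (if (equal? category 'vowel-a) 0.0 5.0))

   ;; the intended sound, and its noisy acoustic realization
   (define target (gaussian category-mean 1.0))
   (define observation (gaussian target 1.0))

   ;; query: what was the intended sound?
   target

   ;; condition on the noisy realization landing near the heard value
   (< (abs (- observation heard-value)) 0.3)))

;; the posterior mean of target is pulled toward the nearer category center
(list "heard" heard-value "perceived (posterior mean)" (mean samples))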

Notice that the perceived distances between input sounds are skewed relative to the actual acoustic distances; that is, they are attracted toward the category centers.

[Figure Pme.png: the perceptual magnet effect.]

Unknown Numbers of Categories

The models above describe how a learner can simultaneously learn which category each object belongs to, the typical properties of objects in that category, and even global parameters about kinds of objects in general. However, these models suffer from a serious flaw: the number of categories is fixed in advance. This is as if a learner, after finding out there are cats, dogs, and mice, must force an elephant into one of these categories, for want of more categories to work with.

The simplest way to address this problem, which we call unbounded models, is simply to place uncertainty on the number of categories in the form of a hierarchical prior. Let's warm up with a simple example of this: inferring whether one or two coins were responsible for a set of outcomes (i.e., imagine a friend is shouting each outcome from the next room, "heads, heads, tails...": is she using a fair coin, or two biased coins?). How does the inferred number of coins change as the amount of data grows? Why?
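A hedged sketch of this coin example follows. The prior over the number of coins, the uniform prior on the coin weights, and the particular observed sequence are illustrative choices.

(define observed-flips '(#t #t #f #t #t #t #f #t))

(define samples
  (mh-query
   200 100

   ;; is there one fair coin, or two coins with unknown weights?
   (define one-coin? (flip 0.5))
   (define coin->weight (mem (lambda (coin) (uniform 0 1))))

   ;; each outcome comes from the fair coin, or from one of two biased coins
   (define outcome
     (mem (lambda (i)
            (if one-coin?
                (flip 0.5)
                (flip (coin->weight (if (flip) 'coin1 'coin2)))))))

   ;; query: how many coins does the model believe in?
   (if one-coin? 1 2)

   ;; condition on the observed sequence
   (equal? (map outcome '(1 2 3 4 5 6 7 8)) observed-flips)))

(hist samples "inferred number of coins")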

We could extend this model by allowing it to infer that there are more than two coins. However, no evidence requires us to posit three or more coins (we can always explain the data as "a heads coin and a tails coin"). Instead, let us apply the same idea to the marbles examples above: Vary the amount of evidence and see how the inferred number of bags changes.
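A hedged sketch of the unbounded marbles model follows. The Poisson rate, the shift that guarantees at least one bag, the observations, and the helper names are illustrative.

(define colors '(blue green red))

(define samples
  (mh-query
   200 100

   ;; the number of bags is itself uncertain (shifted so there is at least one)
   (define num-bags (+ 1 (poisson 1.0)))

   ;; make that many fresh bag labels with gensym
   (define bags (repeat num-bags gensym))

   (define bag->prototype
     (mem (lambda (bag) (dirichlet '(0.1 0.1 0.1)))))

   (define obs->bag
     (mem (lambda (obs-name) (uniform-draw bags))))

   (define draw-marble
     (mem (lambda (obs-name)
            (multinomial colors (bag->prototype (obs->bag obs-name))))))

   ;; query: how many bags does the model infer?
   num-bags

   (and (equal? 'red  (draw-marble 'obs1))
        (equal? 'red  (draw-marble 'obs2))
        (equal? 'blue (draw-marble 'obs3))
        (equal? 'blue (draw-marble 'obs4)))))

(hist samples "inferred number of bags")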

For the prior on num-bags we used the Poisson distribution, which is a distribution on non-negative integers. It is convenient, though it implies strong prior knowledge (perhaps too strong for this example). We have also used the special function gensym, which returns a fresh symbol every time it is called. It can be used to generate an unbounded set of labels for things like classes, categories, and mixture components. Each evaluation of gensym results in a unique (although cryptic) symbol; importantly, these symbols can be used as identifiers, because two different calls to gensym will never be equal.
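A minimal illustration (the printed symbols will vary from run to run):

(define a (gensym))
(define b (gensym))

;; a and b print as fresh, cryptic symbols; a equals itself but never b
(list a b (equal? a b) (equal? a a))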

Unbounded models give a straightforward way to represent uncertainty over the number of categories in the world. However, inference in these models often presents difficulties. In the next section we describe another method for allowing an unknown number of things: in an unbounded model there are a finite number of categories, whose number is drawn from an unbounded prior distribution, such as the Poisson prior we just examined; in an 'infinite model' we instead construct distributions assuming a truly infinite number of categories.

  1. Blei, David M.; Ng, Andrew Y.; Jordan, Michael I. (2003). "Latent Dirichlet Allocation". Journal of Machine Learning Research 3: 993–1022.