Bayes inference II: random variables
E. Canuto, Former faculty, Politecnico di Torino, Torino, Italy
September 7, 2020
Draft
Acronyms
RV random variable
PDF probability density function
PMF probability mass function
MLE maximum likelihood estimate
MAP maximum a posteriori
LF likelihood function
SD standard deviation
Introduction
In Part I, the Bayes Theorem was illustrated by several examples under the assumption of a finite set of outcomes and their events. Here we extend the theorem to integer and real random variables (RVs) [2]. The integer random variable was already introduced in Part I to describe the possible number k of heads (H) in n repeated trials of coin tossing. The outcome set {head, tail} can be associated with a binary RV X in {0, 1}, with X = 1 for head. Consider now a finite sequence X = (X_1, ..., X_n) of independent binary RVs with P(X_i = 1) = p. The random variable K = X_1 + ... + X_n counts the heads, and from Part I the likelihood function (LF) of the outcome K = k is the Binomial distribution

P(K = k | p) = (n choose k) p^k (1 - p)^(n - k).   (1)
In Part I we assumed that the real number p in [0, 1] was unknown. Here we assume to know a prior probability density function (PDF) of it, which implies that p becomes the outcome of a real RV, denoted again with lower-case p so as not to abuse the notation P; the parameters of the prior PDF are called hyper-parameters. The simplest choice is the uniform probability density on [0, 1]. Actually, we are looking for a generic PDF with the property that the product of the likelihood function and the prior PDF (proportional to the posterior PDF, see below) has the same form as the prior, to ease computations. Such a prior PDF is said to be a conjugate prior of the likelihood function. The conjugate prior of the Binomial distribution is the Beta PDF, which is defined for 0 <= p <= 1 and holds

pi(p; alpha, beta) = p^alpha (1 - p)^beta / B(alpha + 1, beta + 1),   (2)

where B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b) is the normalizing constant and the usual -1 has been avoided in the exponents alpha, beta >= 0. The uniform density follows from alpha = beta = 0. The PDF in (2) is symmetric around p = 0.5 for alpha = beta. As the exponents increase, (2) becomes narrower and approaches a Gaussian distribution, as Figure 1 shows.
Figure 1 - Symmetric Beta PDF for increasing parameter value.
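As a numerical check, the Beta density in the no-minus-one convention above can be evaluated with a few lines of Python (a minimal sketch using only the standard library; the function name beta_pdf is ours):

```python
from math import gamma

def beta_pdf(p, a, b):
    """Beta density in the paper's convention (the -1 is dropped from the
    exponents, so a = b = 0 gives the uniform density on [0, 1])."""
    norm = gamma(a + 1) * gamma(b + 1) / gamma(a + b + 2)  # B(a+1, b+1)
    return p**a * (1.0 - p)**b / norm

# Uniform prior: a = b = 0 gives density 1 everywhere on [0, 1].
print(beta_pdf(0.3, 0, 0))   # 1.0
# Symmetric exponents: the density peaks at p = 0.5, as in Figure 1.
print(beta_pdf(0.5, 10, 10) > beta_pdf(0.4, 10, 10))  # True
```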
Remark. Meaning and difference between Binomial and Beta distributions. Comparison of the Beta PDF in (2) with the Binomial distribution in (1) shows their similarity, although they express different models. The Binomial RV K is discrete, as it accounts for the head count in n trials, having denoted P(head) (known or unknown) with p. The Beta distribution is the PDF of a bounded real RV, 0 <= p <= 1, which we interpret as P(head). The Beta exponents alpha and beta may be interpreted as the head and tail counts that have been used to build up the PDF. In fact, as shown in Figure 1, increasing the exponents narrows the PDF, implying smaller uncertainty.
The goal is to compute the posterior PDF of the parameter p, given an outcome k of K obtained from an outcome x of the sequence X. Bayes theorem for random variables requires four probability functions (either discrete or real):
- the likelihood function P(K = k | p) of the observed outcome, discrete in this case, conditioned on a generic outcome p of the parameter prior;
- the prior PDF pi(p; alpha, beta) of the real-valued parameter p, depending on known parameters alpha, beta called hyper-parameters;
- the discrete probability P(K = k) of the observed outcome k, whatever the parameter p; it is obtained from the law of total probability (see Part I), extended to mixed discrete and real RVs, and does not depend on p (it plays the role of a scale factor in the Bayes equation):

P(K = k) = integral_0^1 P(K = k | p) pi(p; alpha, beta) dp.   (3)

The posterior PDF holds

pi(p | K = k) = P(K = k | p) pi(p; alpha, beta) / P(K = k)
             = p^(alpha + k) (1 - p)^(beta + n - k) / B(alpha + k + 1, beta + n - k + 1).   (4)
In Part I, the likelihood function was maximized to provide the MLE of p, equal to p_hat = k/n. The same can be done here, by maximizing the posterior PDF with respect to the unknown p. The new estimate, known as the maximum a posteriori (MAP) estimate, holds

p_hat_MAP = (alpha + k) / (alpha + beta + n).   (5)

The MAP estimate becomes the MLE for alpha = beta = 0, namely under the uniform prior density, which is intuitive as the uniform density does not depend on p. We can also say that (5) is the mode of the posterior PDF. The posterior Beta distribution in (4), for a symmetric prior (alpha = beta) and data n = 20 and k = {5, 10}, is shown in Figure 2. The posterior PDF becomes narrower than the prior, meaning that uncertainty is reduced. Although the prior is centered in p = 0.5, the posterior becomes centered below p = 0.5 when k = 5 (biased coin), a reasonable result.
Figure 2 - Prior and two posterior Beta densities.
Remark 1. Posterior mean and variance. From (4), the posterior PDF is the Beta density Beta(p; alpha + k, beta + n - k), whose mean and variance hold

E[p | k] = (alpha + k + 1) / (alpha + beta + n + 2),
Var[p | k] = E[p | k] (1 - E[p | k]) / (alpha + beta + n + 3).

For large n, the posterior mean approaches the MLE p_hat = k/n and the posterior variance approaches p_hat (1 - p_hat) / n. The limiting posterior variance depends on the MLE p_hat and on the size n of the repetitions (the size of the measurement batch), implying that the estimate uncertainty goes down as 1/n.
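The closed forms of the MAP estimate, posterior mean and posterior SD can be checked numerically; a sketch in the same no-minus-one convention (the function name posterior_stats is ours):

```python
from math import sqrt

def posterior_stats(alpha, beta, k, n):
    """Posterior Beta(p; alpha+k, beta+n-k) in the no-minus-one convention:
    returns (MAP, mean, standard deviation) from the closed forms above."""
    a, b = alpha + k, beta + n - k          # posterior exponents
    p_map = a / (a + b)                     # mode of the posterior, Eq. (5)
    mean = (a + 1) / (a + b + 2)            # posterior mean
    var = mean * (1 - mean) / (a + b + 3)   # posterior variance
    return p_map, mean, sqrt(var)

# Uniform prior (alpha = beta = 0): the MAP coincides with the MLE k/n.
p_map, mean, sd = posterior_stats(0, 0, 5, 20)
print(p_map)            # 0.25 (= MLE)
print(round(mean, 4))   # 0.2727 (= (k+1)/(n+2) = 6/22)
```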
Remark 2. Iterative posterior. The posterior PDF and the MAP estimate can easily be improved when new data (new coin-tossing outcomes) become available. It is enough to replace the prior exponents (alpha, beta) with the posterior exponents (alpha + k, beta + n - k) and to repeat the computation with the new counts.
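The iterative update reduces to exponent bookkeeping, so processing data sequentially or in one cumulative batch gives the same posterior; a minimal sketch:

```python
def update(alpha, beta, k, n):
    """One Bayesian update step: prior exponents -> posterior exponents."""
    return alpha + k, beta + n - k

# Two batches processed sequentially...
a, b = update(0, 0, 5, 20)   # first 20 tosses, 5 heads
a, b = update(a, b, 7, 10)   # 10 more tosses, 7 heads
# ...give the same posterior exponents as one cumulative batch.
print((a, b) == update(0, 0, 12, 30))  # True
```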
Remark 3. Prediction of a future outcome and the zero-count paradox. Given the posterior PDF of p in (4), we want the probability that the next trial n + 1 provides head as the outcome. To this end, we employ the law of total probability:

P(X_(n+1) = 1 | k) = integral_0^1 p pi(p | K = k) dp = E[p | k] = (alpha + k + 1) / (alpha + beta + n + 2).   (6)

The result in (6) coincides with the posterior mean of Remark 1. Alternatively, one may adopt the MLE instead of the posterior mean, that is, P(X_(n+1) = 1 | k) = k/n. What is the difference? Consider the case k = 0, which may occur for small n. The MLE fails to provide a nonzero estimate of p, a paradox known as the zero-count problem, where inference (induction) from observations fails. On the contrary, the posterior mean provides a nonzero estimate of p even when the prior probability of p is uniform, namely alpha = beta = 0, in which case from (6) we have

P(X_(n+1) = 1 | k = 0) = 1 / (n + 2).
Let us remark that under the uniform prior and k = 0, the MAP estimate in (5) would also fail to provide a nonzero estimate, a fact which suggests the practice, known as add-one smoothing, of adding 1 to empirical counts. The addition of 2 in the denominator of (6) is justified by adding 1 to both the head count k and the tail count n - k.
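The contrast between the failing MLE and the posterior-mean predictive (Laplace's rule of succession) is easy to exhibit; a sketch under the uniform prior:

```python
def predict_head(alpha, beta, k, n):
    """Posterior-mean predictive P(X_{n+1} = head) from Eq. (6)."""
    return (alpha + k + 1) / (alpha + beta + n + 2)

n, k = 10, 0   # ten tosses, no head observed
print(k / n)                     # MLE: 0.0 -- the zero-count paradox
print(predict_head(0, 0, k, n))  # Laplace rule: 1/(n+2) = 1/12, nonzero
```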
Remark 4. Prediction of future repeated trials. We aim to predict the number h of heads in m future repeated trials, after having observed k heads in n trials and having assumed a prior PDF of p. We define the random variable H = X_(n+1) + ... + X_(n+m). We repeat (6) by replacing p with the Binomial distribution of H with parameters (m, p):

P(H = h | k) = integral_0^1 (m choose h) p^h (1 - p)^(m - h) pi(p | K = k) dp
             = (m choose h) B(alpha + k + h + 1, beta + n - k + m - h + 1) / B(alpha + k + 1, beta + n - k + 1).   (7)

The last expression, as a function of h, is an integer probability function known as the beta-binomial distribution. Of course sum_(h=0)^m P(H = h | k) = 1. The mean value holds

E[H | k] = m (alpha + k + 1) / (alpha + beta + n + 2),

which for m = 1 provides the same expected value as in (6). Figure 3 shows the prediction probability of the head count during m = 10 future trials under two batches of observed data: (k = 9, n = 20) and (k = 45, n = 100). The same symmetric prior parameters alpha = beta are used in both cases. In the second case, the prediction probability becomes narrower.
Figure 3 - Prediction probability of head counts out of m=10 trials, after n=(20,100) observed trials with head count k=(8,45).
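The beta-binomial predictive in (7) can be verified numerically: the PMF sums to one and its mean matches the closed form above. A sketch with the standard library (function names B and beta_binom are ours):

```python
from math import comb, gamma

def B(a, b):
    """Beta function B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)."""
    return gamma(a) * gamma(b) / gamma(a + b)

def beta_binom(h, m, alpha, beta, k, n):
    """P(H = h | k) from Eq. (7): beta-binomial predictive for h heads in
    m future trials, posterior exponents alpha+k, beta+n-k."""
    a, b = alpha + k + 1, beta + n - k + 1   # standard Beta parameters
    return comb(m, h) * B(a + h, b + m - h) / B(a, b)

m, k, n = 10, 9, 20   # predict 10 future tosses after 9 heads in 20
pmf = [beta_binom(h, m, 0, 0, k, n) for h in range(m + 1)]
print(abs(sum(pmf) - 1.0) < 1e-9)   # True: the PMF sums to one
mean = sum(h * q for h, q in enumerate(pmf))
print(abs(mean - m * (k + 1) / (n + 2)) < 1e-9)  # True: matches E[H|k]
```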
Application to ballot
Head and tail outcomes can be replaced by the candidates (C1, C2) of a two-candidate election (a ballot). Let us keep the RV notation X in {0, 1}, with X = 1 for a vote to C2. A repeated trial of size n can be either an election poll or the election itself. Election polls are the available observations, which provide the candidate preference counts (n - k, k). We aim to compute the posterior probability that a candidate will be elected, given the poll data. Let us denote with p the probability that a vote is for candidate 2. The probability that candidate 2 will win is P(p > 1/2), which implies that p is an RV, as in Bayesian inference. As an alternative, we can compute the MLE p_hat = k/n, which would imply the election of C2 under p_hat > 1/2. Unfortunately the estimate is uncertain (it is the outcome of an RV) and must be completed by its variance. Based on the data in [1], we will employ both methods. The difficulty of Bayesian inference is the choice of the prior PDF of p. One way is to subdivide the available poll data into two parts: the oldest polls estimate the prior PDF, the newest the likelihood function. Poll data from the same company are shown in Table 1.
Table 1 – Election polls
| No | Date | Candidate 1 count | Candidate 2 count, k | Total, n | Used for |
|----|------|-------------------|----------------------|----------|----------|
| 1 | September 4-7 | 344 | 284 | 628 | Prior PDF |
| 2 | September 25-28 | 325 | 312 | 637 | Prior PDF |
| 3 | October 17-20 | 339 | 346 | 685 | Prior PDF |
| 4 | Partial sum | 1008 | 942 | 1950 | Prior PDF |
| 5 | Last poll | 511 | 556 | 1067 | Likelihood |
| 6 | November 2 election (fraction) | 0.511 | 0.489 | 1 | |
The exponents of the prior PDF are estimated from the sums of the preferences in the three oldest polls (row 4 in Table 1), p being the C2 preference probability. The exponents of the likelihood function correspond to the last-poll preferences (row 5). The posterior PDF in (4) is a Beta PDF with exponents alpha + k and beta + n - k. The prior PDF (blue line), the normalized LF (normalized so as to become a Beta PDF, red line) and the posterior PDF (green line) are shown in Figure 4. Due to the large data batches (about 1000 respondents per poll), the three functions are very narrow and have been plotted on a short interval around p = 0.5. The prior PDF is in favor of C1, the LF from the last poll is in favor of C2, and the posterior PDF remains in favor of C1, but to a lesser extent than the prior (see Table 2 below). The posterior SD is smaller than the prior one. The posterior mean and standard deviation (SD) of Remark 1 are reported in Table 2.
The posterior probability P(p > 1/2) that C2 is voted into office turns out to be rather small, due to the mass concentration close to the mean value, coherently with the SD. This implies that, though the LF looks favorable to C2, the posterior PDF turns in favor of C1 because of the unfavorable prior. However, as shown below, the MLE computed from the four polls perfectly agrees with the Bayesian result, as expected with large data batches: the posterior PDF is proportional to the four-poll likelihood, and its mean value and mode practically coincide.
Figure 4 - The components of the Bayesian inference: the prior Beta PDF (in blue, from the oldest polls), the normalized likelihood function (in red, from the last poll) and the posterior Beta whose mode is less than 1/2.
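Since the posterior Beta with such large exponents is very nearly Gaussian, P(p > 1/2) can be approximated from the tabulated posterior mean and SD; a sketch of this normal approximation (ours, not the computation in [1]):

```python
from math import erf, sqrt

def p_beyond_half(mean, sd):
    """Gaussian approximation of P(p > 1/2), justified here because a
    Beta PDF with exponents in the thousands is very nearly normal."""
    z = (0.5 - mean) / sd
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# Posterior mean and SD from Table 2 (rows 3-4).
print(round(p_beyond_half(0.4965, 0.0091), 2))  # about 0.35
```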
A similar result can be found from the cumulative MLE of the four polls in Table 1:

p_hat = (k_1 + k_2 + k_3 + k_4) / (n_1 + n_2 + n_3 + n_4) = 0.4965,

which is the same value as the posterior mean. Figure 5 shows the cumulative MLE (horizontal green line) together with the sequence of the single-poll MLEs. The winning probability of candidate 2 (C2) increases during the campaign, but the cumulative MLE shows that the gain looks insufficient to reach a winning probability larger than 1/2, a result in agreement with the Bayesian inference. The data from [1] refer to the 2004 US presidential election, restricted to the state of Ohio (November 2004). Actually it was not a two-candidate ballot, but the score of the other candidates was negligible. C2 did not win (see row 6 of Table 1).
Figure 5 - History of the winning probability of candidate 2 (C2) from single-poll data (blue points) and cumulative data (green points). The 1/2 threshold is in red.
Bayesian inference and MLE results are summarized in Table 2.
Table 2 – Prior and posterior poll estimates of the C2 preference probability
| No | Parameter | Unit | Value | Remark |
|----|-----------|------|-------|--------|
| 1 | Prior mean | fraction | 0.4830 | From polls i = 1, 2, 3 |
| 2 | Prior SD | fraction | 0.0113 | Same |
| 3 | Posterior mean | fraction | 0.4965 | From polls i = 1, ..., 4 |
| 4 | Posterior SD | fraction | 0.0091 | Same |
| 5 | Last poll MLE | fraction | 0.521 | From poll i = 4 |
| 6 | Cumulative poll MLE | fraction | 0.4965 | From polls i = 1, ..., 4 |
| 7 | Voting result | fraction | 0.489 | |
Remark. The progression of the single-poll MLE estimates in Figure 5, contrasted with the voting result (row 7 of Table 2), looks a bit odd, since the latter appears more coherent with the oldest polls than with those close to election day. The confirmation comes from the cumulative poll MLE (row 6, Table 2), which is coherent with the voting result, but at the price of weighting past and recent data uniformly. We cannot investigate this oddity further, but clearly the poll data hide assumptions other than those employed here.
Multinomial inference
In the previous section, Bayesian inference aimed to infer the occurrence probability of one outcome out of two alternatives (binomial inference), like {head, tail} or {candidate 1, candidate 2}. Here we extend the inference to an arbitrary finite set of m outcomes, for instance the m = 6 faces of a die. Let us associate the integer RV X in {1, ..., m} with the generic outcome k. As in the binomial case, we assume to observe n independent repetitions of the same experiment or, which is equivalent, n extractions of X, whose generic outcome sequence is denoted by x = (x_1, ..., x_n). Let us denote with p_k the occurrence probability of the outcome k, such that

sum_(k=1)^m p_k = 1, p_k >= 0,   (8)

and with n_k the number of occurrences of k in the observed sequence x. The likelihood function of the repeated trials is the probability of observing n_k repetitions of each outcome k in the specific sequence x. Let us introduce the inline vector p = (p_1, ..., p_m). It holds

P(x | p) = product_(k=1)^m p_k^(n_k).   (9)
Remark 1. MLE. From (9) we can compute the MLE by taking the logarithm of the LF and adding the constraint in (8) multiplied by a Lagrange multiplier lambda, as follows:

L(p, lambda) = sum_(k=1)^m n_k log p_k + lambda (1 - sum_(k=1)^m p_k).   (10)

Setting the derivatives of the Lagrangian in (10) to zero, we obtain n_k / p_k = lambda for each k; summing over k and using (8) yields lambda = n, and hence

p_hat_k = n_k / n,

namely the MLE, which satisfies the constraint in (8).
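The constrained MLE is just the vector of relative frequencies; a minimal sketch on toy die-roll data (the data are illustrative, not from the text):

```python
from collections import Counter

def mle(x, m):
    """Constrained MLE from (10): p_hat_k = n_k / n for categories 1..m."""
    counts = Counter(x)
    n = len(x)
    return [counts[k] / n for k in range(1, m + 1)]

rolls = [1, 3, 3, 6, 2, 3, 6, 1, 4, 5]   # ten die rolls (toy data)
p_hat = mle(rolls, 6)
print(p_hat[2])   # category 3 occurred 3 times out of 10: 0.3
print(abs(sum(p_hat) - 1.0) < 1e-12)  # True: constraint (8) holds
```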
The conjugate prior of the LF is the multivariate Beta PDF, known as the Dirichlet PDF (see the Appendix). Avoiding -1 in the exponents, and defining the inline vector p = (p_1, ..., p_m) and the exponent vector alpha = (alpha_1, ..., alpha_m), we write

pi(p; alpha) = (1 / B(alpha)) product_(k=1)^m p_k^(alpha_k),   (11)

where B(alpha) is the normalizing constant defined in the Appendix. Composition of the prior PDF and the likelihood function provides, as in the Introduction, the posterior PDF in Dirichlet form as follows:

pi(p | x) proportional to product_(k=1)^m p_k^(alpha_k + n_k).   (12)

Remark 2. MAP estimate. It is obtained in the same way as the MLE, by maximizing the logarithm of (12) augmented with the constraint in (8) multiplied by lambda. The estimate follows from (10) by replacing n_k with alpha_k + n_k, that is

p_hat_MAP,k = (alpha_k + n_k) / (alpha_0 + n), alpha_0 = sum_(j=1)^m alpha_j.

Remark 3. Zero-count problem. The MLE provides a zero estimate when n_k = 0. The MAP estimate provides a zero estimate when the prior PDF is uniform (alpha_k = 0) and n_k = 0.
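The multinomial MAP estimate and its zero-count behavior can be sketched as follows (the exponents alpha_k = 1 realize add-one smoothing, as in the binomial case):

```python
def map_estimate(counts, alphas):
    """MAP estimate p_k = (alpha_k + n_k) / (alpha_0 + n), with counts
    n_k and prior exponents alpha_k in the no-minus-one convention."""
    total = sum(a + c for a, c in zip(alphas, counts))
    return [(a + c) / total for a, c in zip(alphas, counts)]

counts = [0, 4, 6]   # category 1 never observed in n = 10 trials
print(map_estimate(counts, [0, 0, 0])[0])      # uniform prior: 0.0
print(map_estimate(counts, [1, 1, 1])[0] > 0)  # add-one smoothing: True
```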
Naive Bayesian classifiers
Consider m discrete independent RVs F_1, ..., F_m, called features, which are employed to classify into classes/categories the elements of a population, like documents, people, generic objects, and so on. Let us denote an outcome of the feature vector with f = (f_1, ..., f_m). Each feature may assume a finite, discrete, but arbitrary set of values. Given the Cartesian product of the feature value sets, a class c is a known subset of it, and distinct classes are disjoint. More generally, features may also assume real values. We aim to find the posterior probability P(c | f) of classifying the feature outcome f into the class c. The posterior can be rewritten through the Bayes theorem as

P(c | f) = P(c, f) / P(f),

where P(c, f) is the joint probability that the feature outcome f and the class c occur together. The naive assumption concerns how to compute this joint probability. Let us factor the joint probability by repeated use of conditional probabilities:

P(c, f) = P(c) P(f_1 | c) P(f_2 | c, f_1) ... P(f_m | c, f_1, ..., f_(m-1)).

The naive assumption simplifies the above expression by assuming that, given a category c, the m different features occur independently of one another (conditional independence):

P(c, f) = P(c) product_(j=1)^m P(f_j | c).

In other words, features observed across different categories may be correlated, but within a well-defined category they are independent. In general the assumption is far from reality, except when the values of the class features are variations around a mean value typical of the class, and the variations are uncorrelated across features.
For instance, suppose you want to classify a day from temperature, wind, humidity and outlook (sunny, cloudy, overcast, rainy, ...) in view of open-air activities. The classes may be: good, uncertain, bad. Of course the above features are in general correlated, but we assume them independent within each class.
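The day-classification example above can be sketched as a tiny naive Bayes classifier with add-one smoothing for the zero counts (all data and names below are illustrative, not from the text):

```python
from collections import Counter, defaultdict

# Toy training set: (outlook, wind) feature pairs -> day class.
data = [(("sunny", "weak"), "good"), (("sunny", "strong"), "uncertain"),
        (("rainy", "strong"), "bad"), (("rainy", "weak"), "uncertain"),
        (("sunny", "weak"), "good"), (("cloudy", "weak"), "good")]

classes = Counter(c for _, c in data)
feat = defaultdict(Counter)          # per (class, position) value counts
for f, c in data:
    for j, v in enumerate(f):
        feat[(c, j)][v] += 1

def score(f, c, n_values=3):
    """Unnormalized posterior P(c) * prod_j P(f_j | c), with add-one
    smoothing (n_values is a crude common alphabet size per feature)."""
    s = classes[c] / len(data)
    for j, v in enumerate(f):
        s *= (feat[(c, j)][v] + 1) / (classes[c] + n_values)
    return s

day = ("sunny", "weak")
print(max(classes, key=lambda c: score(day, c)))  # -> good
```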
References
[1] Basics of Bayesian Statistics, from http://www.stat.cmu.edu/~brian/463-663/week09/Chapter%2003.pdf.
[2] K.P. Murphy, Machine learning: a probabilistic perspective, The MIT Press, Cambridge, MA, 2012.
Appendix: discrete PMF and real PDF
We will use the Iverson bracket, which generalizes the Kronecker delta. Given a statement S, which may be either true or false, we write

[S] = 1 if S is true, [S] = 0 if S is false.
Discrete probability mass functions (or distributions)
Bernoulli distribution. Given a binary RV X in {0, 1}, the generic outcome x and the parameter 0 <= p <= 1, the PMF is defined as

Ber(x; p) = p^([x=1]) (1 - p)^([x=0]) = p^x (1 - p)^(1 - x).

Binomial distribution. Given a sequence of n independent binary RVs X_1, ..., X_n, we construct the RV K = X_1 + ... + X_n with generic outcome k. The PMF is defined as

Bin(k; n, p) = (n choose k) p^k (1 - p)^(n - k).

Categorical distribution. Given a discrete RV X in {1, ..., m}, where the integer k denotes a class or category among m different objects or features (e.g., die faces), with probabilities p_k such that sum_k p_k = 1, the PMF is defined by

Cat(x; p) = product_(k=1)^m p_k^([x=k]).

Multinomial distribution. Given a sequence of n identical and independent categorical RVs, the PMF of the random vector N = (N_1, ..., N_m), where the RV N_k accounts for the occurrence number n_k of the category k, holds

Mu(n_1, ..., n_m; n, p) = (n! / (n_1! ... n_m!)) product_(k=1)^m p_k^(n_k).

The categorical PMF is equivalent to the multinomial distribution with n = 1, that is

Cat(x; p) = Mu([x=1], ..., [x=m]; 1, p).
Probability density functions
Gaussian or normal distribution. The univariate normal PDF holds

N(x; mu, sigma^2) = (1 / sqrt(2 pi sigma^2)) exp(-(x - mu)^2 / (2 sigma^2)).

The standard normal PDF assumes mu = 0, sigma = 1. The multivariate Gaussian PDF of a RV X in R^d holds

N(x; mu, Sigma^2) = (2 pi)^(-d/2) det(Sigma^2)^(-1/2) exp(-(x - mu)^T Sigma^(-2) (x - mu) / 2),

where the covariance matrix has been raised to the exponent 2 since, being a nonnegative definite matrix with nonnegative eigenvalues, it admits a (not unique) square root Sigma such that Sigma Sigma = Sigma^2. The diagonal entry (Sigma^2)_jj is the variance of x_j, and the trace of Sigma^2 equals the sum of the eigenvalues.
Beta distribution. The univariate Beta PDF refers to a bounded RV 0 <= p <= 1. Dropping -1 from the exponents, we write

Beta(p; alpha, beta) = p^alpha (1 - p)^beta / B(alpha + 1, beta + 1), B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b).

The multivariate Beta PDF, known as the Dirichlet PDF, is defined over the simplex S_m = {p : sum_(k=1)^m p_k = 1, p_k >= 0}. The PDF expression is

Dir(p; alpha) = (1 / B(alpha)) product_(k=1)^m p_k^(alpha_k), B(alpha) = product_k Gamma(alpha_k + 1) / Gamma(alpha_0 + m), alpha_0 = sum_k alpha_k.   (A9)

Under a uniform exponent alpha_k = alpha, the mean and variance of (A9) become

E[p_k] = 1/m, Var[p_k] = (m - 1) / (m^2 (m (alpha + 1) + 1)).

Thus increasing alpha and m increases the precision.
Remark A1. Let us denote the random vector of the Dirichlet PDF with P = (P_1, ..., P_m); p = (p_1, ..., p_m) denotes the vector of outcomes. The vector is constrained by sum_k p_k = 1. For m = 2, the Dirichlet PDF reduces to a Beta distribution.
Remark A2. Stick-breaking approach to Dirichlet RV generation. Consider a segment of length y = 1, randomly divided into two intervals of lengths p_1 and 1 - p_1. Then further subdivide the segment of length 1 - p_1 into two segments of fractional lengths phi_2 and 1 - phi_2, which implies that segment k = 2 has length p_2 = phi_2 (1 - p_1) and the remaining segment has length 1 - p_1 - p_2. In other terms, p_k is the length of the k-th partition of a unit segment. Proceeding in the same way, the last segment to be further subdivided, at step k = m - 1, has length 1 - p_1 - ... - p_(m-2). The lengths of the last two segments hold

p_(m-1) = phi_(m-1) (1 - p_1 - ... - p_(m-2)), p_m = 1 - p_1 - ... - p_(m-1).

The lengths of the m segments sum to one, namely sum_(k=1)^m p_k = 1. Since the sequence has been obtained by partitioning segments of unit length, it can be obtained as the outcome of a sequence of Beta distributions (written here with the standard parameters a_k = alpha_k + 1):

phi_k ~ Beta(a_k, a_(k+1) + ... + a_m), k = 1, ..., m - 1,

and the Dirichlet outcome is found to be

p_k = phi_k product_(j=1)^(k-1) (1 - phi_j), phi_m = 1.
Figure A1 - Left: the q-q plot of the generated samples. Right: 200 samples of a 3D Dirichlet distribution, lying on the plane p_1 + p_2 + p_3 = 1.
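The stick-breaking construction maps directly to code; a sketch using the standard library's Beta sampler (the function name dirichlet_stick is ours):

```python
import random

def dirichlet_stick(a):
    """Sample one Dirichlet(a) outcome by stick breaking, with standard
    parameters a_k: phi_k ~ Beta(a_k, a_{k+1}+...+a_m), and p_k equal to
    phi_k times the remaining stick length."""
    m = len(a)
    p, remaining = [], 1.0
    for k in range(m - 1):
        phi = random.betavariate(a[k], sum(a[k + 1:]))
        p.append(phi * remaining)
        remaining *= 1.0 - phi
    p.append(remaining)   # last segment: phi_m = 1
    return p

random.seed(0)
sample = dirichlet_stick([2.0, 2.0, 2.0])    # a_k = alpha_k + 1
print(abs(sum(sample) - 1.0) < 1e-12)        # True: lengths sum to one
print(all(0.0 <= x <= 1.0 for x in sample))  # True: a point of the simplex
```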
Appendix: nucleic acid sequence
A categorical sequence in biology is the nucleic acid sequence (either DNA or RNA), where either acid is made of a chain of linked nucleotides, each one made of a phosphate group, a sugar and one of four canonical/primary nucleobases (nitrogenous bases): adenine (A), cytosine (C), guanine (G) and thymine (T), which differentiate the four nucleotides. Thymine (T) is replaced by uracil (U) in RNA. A sequence of three nucleotides (4^3 = 64 total combinations), known as a codon, is the genetic code for synthesizing a protein (a long chain of amino acids). Protein-coding frames are opened by a start codon, usually the triplet AUG, and end with a stop codon. In turn, codons are the core of the protein-coding genes, complex DNA portions capable of activating and regulating the production of the encoded protein. Mutations of genes may result in genetic disorders and relevant diseases.
Alignment of the amino acid sequence between a target 3D protein structure and experimentally determined homologous structures is an essential tool for extrapolating and checking the 3D structure of proteins from the amino acid sequence. Protein structures are conserved, but not one-to-one related to the detected sequences, due to the variable 3D organization of backbone hydrogen bonds.
The categorical RV is X in {A, C, G, T, Z}, with m = 5, where Z denotes a gap which may be artificially inserted to improve the alignment and to keep a uniform length. A typical sequence analysis is the multiple sequence alignment (MSA), in which r sequences of equal length n are aligned to discover conservation or mutation of the nucleotides at each sequence location i = 1, ..., n. The r aligned sequences form an r x n matrix in which no column made only of gaps (Z) exists. Columns of high purity, i.e. dominated by a single nucleobasis (denoted by a letter), are cornerstones for finding the best alignment. The interest is in the distribution of the categories in a generic column (location) i = 1, ..., n. The occurrence counts of the categories in a generic column i follow a multinomial PMF. To this end, the RV of column i becomes a random vector of counts. Let us denote the number of occurrences (an RV) of the category k in column i with n_ik. We write the PMF

P(n_i1, ..., n_im) = (r! / (n_i1! ... n_im!)) product_(k=1)^m p_ik^(n_ik),

which becomes the likelihood function of the observed column. The MLE holds p_hat_ik = n_ik / r. The index k_hat_i = argmax_k n_ik denotes the dominant category (nucleobasis) of the column i. The sequence of the dominant categories over the columns i = 1, ..., n is known as the consensus sequence. Let us denote the MLE of the dominant category with p_hat_i = max_k n_ik / r. Given a suitable threshold gamma < 1 but close to 1, a consensus sequence such that p_hat_i >= gamma for all i is a candidate to declare the alignment acceptable. The frequency of the four nucleotides in protein-coding human genes is fairly uniform, about 1/4 each. It becomes non-uniform in non-coding genes.
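The column-wise MLE and the consensus sequence can be computed in a few lines; a sketch on a toy alignment (the sequences below are illustrative, not those of Figure A2):

```python
from collections import Counter

def consensus(alignment):
    """Column-wise MLE consensus of an MSA given as r equal-length strings
    over {A, C, G, T, Z} (Z = gap). Returns the consensus string and the
    per-column dominant frequencies max_k n_ik / r."""
    r = len(alignment)
    seq, freq = [], []
    for col in zip(*alignment):   # iterate the columns of the r x n matrix
        base, count = Counter(col).most_common(1)[0]
        seq.append(base)
        freq.append(count / r)
    return "".join(seq), freq

msa = ["ACGT", "ACGA", "AAGT", "ACGT"]   # r = 4 toy sequences, n = 4
seq, freq = consensus(msa)
print(seq)        # ACGT
print(min(freq))  # weakest column purity: 0.75
```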
Figure A2 shows 16 sequences of 16 columns and their column counts. The dominant counts define the consensus sequence CAAATCCGGTTCGAG.
Figure A2 - Left: the counts of each category (nucleobasis) and the maximum column count. Right: the 16 category sequences. The last column is pure.