Bayes inference I: events

Enrico Canuto, Former Faculty, Politecnico di Torino, Torino, Italy

September 5, 2020

Draft

Independence

Definition and examples

Given a set \Omega of possible outcomes \omega, two events (sets of outcomes) A\subset \Omega and B\subset \Omega are independent if the probability of their intersection A\cap B, i.e. the probability that both events occur, is the product of the event probabilities

P\left ( A\cap B \right )=P\left ( A \right )P\left ( B \right )\Rightarrow \textup{log}P\left ( A\cap B \right )=\textup{log}P\left ( A \right )+\textup{log}P\left ( B \right )\: (1)

The meaning is that the occurrence of one event does not affect the occurrence of the other. Event occurrences may be simultaneous or not. (1) can be extended to multiple independent events. The logarithmic identity shows that (1) is a linear relation in the logarithms; the first-order differential expression is of course linear. Let \bar{P}\left ( A \right ),\bar{P}\left ( B \right ) be nominal values and dP\left ( A \right ),dP\left ( B \right ) small deviations. First-order power expansion of (1) provides

\begin{matrix}logP\left ( A\cap B \right )\cong log\bar{P}\left ( A \right )+log\bar{P}\left ( B \right )+\frac{dP\left ( A \right )}{\bar{P}\left ( A \right )}+\frac{dP\left ( B \right )}{\bar{P}\left ( B \right )} \\ dlogP\left ( A\cap B \right )=\frac{dP\left ( A \right )}{\bar{P}\left ( A \right )}+\frac{dP\left ( B \right )}{\bar{P}\left ( B \right )} \end{matrix} \: \; (2)

In other terms, the differential of (1) is linear in the fractional (relative) differentials of the independent event probabilities. A similar differential identity can be derived without logarithms:

\frac{dP\left ( A\cap B \right )}{\bar{P}\left ( A \right )\bar{P}\left ( B \right )}=\frac{dP\left ( A \right )}{\bar{P}\left ( A \right )}+\frac{dP\left ( B \right )}{\bar{P}\left ( B \right )}

Remark. From (1) we have P\left ( A\cap B \right )\leq P\left ( A \right ),P\left ( A\cap B \right )\leq P\left ( B \right ). This is reasonable, since the intersection event restricts the possible outcomes and thus becomes rarer. A common and useful construction of independent events is the repetition of an experiment, with the care that each experiment does not affect the other ones (n repeated trials). Given the outcome set \Omega and a set of events \left \{ E_{1},...,E_{k},...\subset \Omega \right \}, the outcome set of the repetitions is the n-fold Cartesian product \Omega _{n}=\Omega \times \cdots \times \Omega =\left \{ \Omega ,...,\Omega \right \}. A generic event is E=\left \{ E_{1} ,...,E_{n}\right \}=\left \{ E_{1},\Omega,...,\Omega \right \}\cap ...\cap \left \{ \Omega ,...,\Omega, E_{n} \right \}, which has been expressed as the intersection of elementary events \left \{\Omega ,..., E_{k},...,\Omega \right \}. Thus, if the elementary events can be assumed to be independent, we can write

P\left ( E \right )=P\left ( E_{1} \right )\cdot ...\cdot P\left ( E_{n} \right )

Example 1. Card drawing. Given a deck of M fair cards, equally divided into four colors (red, ...), two kinds of drawing are possible: drawing with and without replacement. Consider the event E=\left \{ \textup{red} ,\textup{red}\right \} of two subsequent drawings. With replacement, drawings can be assumed to be independent, which implies P\left ( E\right )=\left (1/4 \right )\times \left (1/4 \right ). Without replacement, the second outcome depends on the card drawn first, red or other than red:

\begin{matrix}P\left ( E; \; \textup{other color drawn first}\right )=\frac{1}{4}\times \frac{M/4}{M-1} \\ P\left ( E; \; \textup{ red drawn first}\right )=\frac{1}{4}\times \frac{M/4-1}{M-1} \end{matrix}
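
The two drawing schemes can also be checked numerically. The following minimal Python sketch (not part of the original example) estimates P(E) by Monte Carlo for a deck of M=40 cards, a value assumed here only for illustration, and compares the estimates with the formulas above.

import random

M = 40                        # assumed deck size (10 cards per color), for illustration only
deck = ["red"] * (M // 4) + ["other"] * (3 * M // 4)
trials = 200_000
rng = random.Random(0)

def estimate(replacement):
    # Fraction of trials in which two subsequent drawings are both red
    hits = 0
    for _ in range(trials):
        if replacement:
            first, second = rng.choice(deck), rng.choice(deck)
        else:
            first, second = rng.sample(deck, 2)   # draw two distinct cards
        hits += (first == "red" and second == "red")
    return hits / trials

print("with replacement   :", estimate(True),  "theory:", (1/4) * (1/4))
print("without replacement:", estimate(False), "theory:", (1/4) * (M/4 - 1) / (M - 1))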

Identity (1) is similar to the identity for the probability of the union A\cup B of two disjoint (mutually exclusive) events, i.e. A\cap B=\emptyset , the event that either one of them occurs, that is

\begin{matrix}P\left ( A\cup B \right )=P\left ( A \right )+ P\left ( B \right ),\: \; P\left ( A\cap B \right )=0\: \; \\ dP\left ( A\cup B \right )=dP\left ( A \right )+ dP\left ( B \right ) \end{matrix}\; (3)

Identity (3) is again a linear relation, but this time the differential is the sum of the event probability differentials, in contrast to (2), where relative differentials appear. Mutually exclusive events contain different outcomes. (3) can be extended to multiple mutually exclusive events.

Example 2. Consider rolling a die: the outcomes are six, \Omega =\left \{ 1,2,3,4,5,6 \right \}=\left \{ \omega _{i} \right \}. The die is fair if the outcome probabilities are equal, that is P\left ( \omega _{i} \right )=1/6. Since outcomes are mutually exclusive, the probability of the even event E is, by extending (3),

P\left ( E \right )=P\left ( \omega _{2} \right )+P\left ( \omega _{4} \right )+P\left ( \omega _{6} \right )=1/2

Now consider two different die throws (trials) and ask for the probability of the event \left \{ E,E \right \}=\left \{ \textup{even},\textup{even} \right \}=\left \{ E,\Omega \right \}\cap\left \{ \Omega,E \right \}. A reasonable assumption is that each throw does not affect the outcome of the other, as in card drawing with replacement (Example 1). This implies independence and P\left \{ E,E \right \}=1/4. Recall that the outcome set of two repeated trials is the Cartesian product of the original set, namely \Omega \times \Omega =\left \{ \Omega ,\Omega \right \}.
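
Since the outcome set of two fair-die throws is small, P(E)=1/2 and P{E,E}=1/4 can be verified by enumerating the 36 equally likely outcome pairs, as in the following sketch (an illustrative check, not part of the original text).

from itertools import product
from fractions import Fraction

omega = range(1, 7)                               # outcomes of one fair die
even = {w for w in omega if w % 2 == 0}           # event E = "even"

# Single throw: P(E) as a sum of mutually exclusive outcome probabilities, eq. (3)
p_even = sum(Fraction(1, 6) for w in even)

# Two independent throws: enumerate the Cartesian product Omega x Omega
pairs = list(product(omega, omega))               # 36 equally likely pairs
p_even_even = Fraction(sum(1 for a, b in pairs if a in even and b in even), len(pairs))

print(p_even)        # 1/2
print(p_even_even)   # 1/4, equal to P(E)*P(E) by independence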

Bernoulli trials.

Let \Omega =\left \{ H=\textup{head, success},T=\textup{tail, failure} \right \} with P(H)=p,\; P(T)=q=1-p be the outcome set of a single experiment to be repeated n times, identically and independently. A single outcome E\left ( k \: \textup{times } H; \; \textup{single sequence;}\; n \: \textup{trials}\right ), showing k outcomes H and n-k outcomes T in a given order, has probability P\left ( E \right )=p^{k}\left ( 1-p \right )^{n-k}. Often the interest is to know the probability of k successes out of n repetitions. To find it, we count the number of different sequences of k heads and n-k tails, which is given by the binomial coefficient

\left ( \begin{matrix}n\\ k\end{matrix} \right )=\frac{n!}{k!(n-k)!}

Since the sequences, being different, are mutually exclusive, the probability of the event E\left ( k ;n,p\right )=E\left ( k \: \textup{times } H,\: P(H)=p; \; \textup{all sequences,}\: n \:\textup{ trials}\right ) is

P\left ( k; n,p \right )=P\left ( E\left ( k; n,p\right ) \right )=\left ( \begin{matrix}n\\ k\end{matrix} \right )p^{k}\left ( 1-p \right )^{n-k}\: \; (4)

The expression in (4) is known as the binomial probability distribution (PD) of the integer random variable (RV) K=\left \{ 0,1,...,n \right \}. An integer random variable corresponds to a set of outcomes, in this case the n+1 sets of sequences showing k=0,1,...,n outcomes H, one-to-one associated with an interval of integer numbers. Because of mutual exclusion, it holds:

\sum_{k=0}^{n}P\left ( k; n,p \right )=\sum_{k=0}^{n}\left ( \begin{matrix}n\\ k\end{matrix} \right )p^{k}\left ( 1-p \right )^{n-k}=1\: \; (5)
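
A short sketch may help fix ideas; it implements the binomial distribution (4) with the Python standard library only and checks the normalization (5). The values n=10, p=0.3 are arbitrary illustrative choices.

from math import comb

def binom_pd(k, n, p):
    # Binomial probability (4): k successes out of n independent trials
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.3          # illustrative values
pd = [binom_pd(k, n, p) for k in range(n + 1)]

print(sum(pd))          # 1.0, normalization (5)
print(binom_pd(3, n, p))  # probability of exactly 3 successes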

The mean or expected value \mathit{E}\left \{ K \right \} of the PD in (5), namely the mean value of k when a large number of repeated n-trial experiments defined by \left \{ n,p \right \} is performed, is \mathit{E}\left \{ K \right \}=np. The variance is \mathit{E}\left \{ \left (K-\mathit{E}\left \{ K \right \} \right )^{2}\right \}=np\left ( 1-p \right )=npq. Moreover, for n\rightarrow \infty, when np and npq are of the order of n (i.e. p is kept fixed), the binomial distribution is well approximated by the normal density function N\left ( np,\sqrt{npq} \right ) defined by

N\left ( np,\sqrt{npq} \right )=\frac{1}{\sqrt{2\pi npq}}\textup{exp}\left ( -\frac{1}{2} \frac{\left ( x-np \right )^{2}}{npq}\right )\: (6)

Remark. Be aware that f(x)=N\left ( np,\sqrt{npq} \right ) in (6), where x is real,  is a probability density, which, in order to approximate an integer distribution, must be converted into the probability  P\left ( k-1/2\leq x < k+1/2\right )\cong f\left ( k \right )\left ( 1/2+1/2 \right )=f\left ( k \right ) .  Since the interval between two adjacent integers is one, we can replace the real x with the integer k in (6).
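
The quality of approximation (6) can be inspected numerically. The sketch below (an illustrative check only, with the arbitrary values n=100, p=0.4) compares the binomial probability P(k;n,p) with the normal density evaluated at x=k, following the remark that the unit spacing of integers allows f(k) to be used directly.

from math import comb, exp, pi, sqrt

def binom_pd(k, n, p):
    # Exact binomial probability (4)
    return comb(n, k) * p**k * (1 - p)**(n - k)

def normal_approx(k, n, p):
    # Normal density (6) with mean np and variance npq, evaluated at x = k
    q = 1 - p
    return exp(-0.5 * (k - n * p)**2 / (n * p * q)) / sqrt(2 * pi * n * p * q)

n, p = 100, 0.4          # illustrative values
for k in (30, 40, 50):
    print(k, binom_pd(k, n, p), normal_approx(k, n, p))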

Likelihood function and parameter estimation. An introduction to statistical inference.

Consider tossing a coin with outcome set \Omega =\left \{ H,T \right \}. We want to test whether the coin is fair, i.e. whether P(H)=p=1/2. To this end, we toss the coin n times, assuming independent trials. Let us assume we find k outcomes equal to H. What can we infer about p from this result? Let us recall from (4) the binomial probability P\left ( k; n,p \right ) of finding k heads out of n repetitions, where the pair (k,n) is known and p is unknown. We also recall that P\left ( k; n,p \right ) strictly depends on the assumptions of two exclusive outcomes and of independent repetitions: they have been converted into the mathematical model P\left ( k; n,p \right ). In our hands we only have P\left ( k; n,p \right ), the values (k,n) and the range 0<p<1. We admit that P\left ( k; n,p \right ) and (k,n) are a faithful representation of our coin behavior when tossed, and, since we have a degree of freedom p, we can use it to find the best fit by maximizing the probability P\left ( k; n,p \right ) with respect to p. For such reasons, P\left ( k; n,p \right ) is known as the likelihood function of coin tossing, and the argument

\hat{p}=\textup{argmax}_{0<p<1}P\left ( k;n,p \right )

is the best estimate we can obtain from model and data, known as the maximum likelihood estimate (MLE). By setting to zero the derivative of P\left ( k; n,p \right ), we obtain

\begin{matrix}\frac{d}{dp}P\left ( k;n,p \right )=\left ( \begin{matrix}n\\ k\end{matrix} \right )(kp^{k-1}\left ( 1-p \right )^{n-k}-\left ( n-k \right )p^{k}\left ( 1-p \right )^{n-k-1})= \\ = \left ( \begin{matrix}n\\ k\end{matrix} \right )p^{k-1}\left ( 1-p \right )^{n-k-1}\left ( k-np \right )=0 \end{matrix}

By discarding the values \hat{p}=\left \{ 0,1 \right \} as they zero the likelihood function, the intuitive MLE holds

\hat{p}=k/n\: \; (7)

Since k is an outcome of the binomial random variable K, \hat{p} itself becomes the outcome of the binomial RV K/n, with mean value E\left \{ K/n \right \}=p. In other terms, the MLE mean value equals the unknown parameter for any n. The variance E\left \{ \left ( K-np\right )^{2} \right \}/n^{2}=p(1-p)/n converges to zero for large n. We can say that for n\rightarrow \infty the probability that k/n is arbitrarily close to p approaches one: the MLE (7) asymptotically approaches the true parameter!
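
The closed-form MLE (7) can be cross-checked by maximizing the likelihood numerically over a grid of p values. The following sketch assumes a true head probability p=0.6 and n=200 tosses purely for illustration; it is not part of the original derivation.

import random
from math import comb

def likelihood(p, k, n):
    # Binomial likelihood P(k; n, p) of eq. (4), seen as a function of p
    return comb(n, k) * p**k * (1 - p)**(n - k)

rng = random.Random(1)
p_true, n = 0.6, 200                               # assumed true parameter and number of tosses
k = sum(rng.random() < p_true for _ in range(n))   # simulated number of heads

# Numerical argmax over a fine grid of p in (0, 1)
grid = [i / 10_000 for i in range(1, 10_000)]
p_hat_numeric = max(grid, key=lambda p: likelihood(p, k, n))

print(k / n)              # closed-form MLE (7)
print(p_hat_numeric)      # agrees with k/n up to the grid resolution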

Conditional probability

Definition

How can identity (1) be extended to the general case of dependent events? The solution is a new kind of probability, the conditional probability, either P\left ( A/B \right ) or P\left ( B/A \right ), such that (multiplication rule)

P(A\cap B)=P\left ( A \right )P\left ( B/A \right )=P\left ( B \right )P\left ( A/B \right )\; (8)

where P\left ( A/B \right ) reads as the probability of the occurrence of event A given (known, assumed) the occurrence of event B. From (8) the usual definition follows

\begin{matrix}P\left ( B/A \right )=\frac{P(A\cap B)}{P\left ( A \right )}\leq 1,\; P\left ( A \right )> 0 \\ P\left ( A/B \right )=\frac{P(A\cap B)}{P\left ( B \right )}\leq 1, \; P\left ( B \right )> 0 \end{matrix}\; (9)

where conditional probabilities satisfy all the probability axioms. Identities (8) and (1) immediately prove that under independence P\left ( B/A \right )=P\left ( B \right ) and P\left ( A/B\right )=P\left ( A \right ). (9) is a construction equation, but the construction of conditional probabilities may be rather delicate, as the following examples show.

Law of total probability

Let \left \{ E_{1},...,E_{N} \right \} be a finite set of mutually exclusive events such that E_{1}\cup ...\cup E_{N}=\Omega and let E\subset \Omega be a generic event. E can be developed as the union of intersections E=\left (E_{1}\cap E \right )\cup ...\cup\left (E_{N}\cap E \right ). Equation (8) allows us to compute the event probability as

\begin{matrix}P\left (E \right )=P\left (E_{1}\cap E \right )+ ...+P\left (E_{N}\cap E \right )= \\ =P\left (E/ E_{1} \right )P(E_{1})+ ...+P\left (E /E_{N}\right )P(E_{N}) \end{matrix}\; (10)
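
A minimal numeric illustration of (10): given any partition \left \{ E_{1},...,E_{N} \right \} of \Omega with prior probabilities P(E_i) and conditional probabilities P(E/E_i), the event probability is the corresponding weighted sum. The numbers below are arbitrary illustrative values.

# Partition probabilities P(E_i) and conditionals P(E/E_i); arbitrary illustrative values
prior = [0.5, 0.3, 0.2]          # must sum to 1
conditional = [0.10, 0.40, 0.90] # P(E/E_i)

# Law of total probability (10)
p_event = sum(p_ei * p_e_given_ei for p_ei, p_e_given_ei in zip(prior, conditional))
print(p_event)                   # 0.35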

Examples

Example 3, Step 1 [1]. A family has two children; we know that at least one of them is male. What is the probability that both children are male? Consider first the outcome space of a single newborn, \Omega _{1}=\left \{ M=\textup{male},F =\textup{female}\right \}, and assume that both outcomes have equal probability (=1/2). A naive answer would be P\left ( M \right )=1/2, but it does not account for the fact that the children are two. In other terms, we should account for (condition on) the knowledge that at least one of the two children is a male, and this knowledge must be formulated as an event. The outcome set of two children is

\begin{matrix}\Omega _{2}=\Omega _{1}\times \Omega _{1}=\left \{ \left ( M,M \right ),\left ( M,F \right ),\left ( F,M \right ),\left ( F,F \right ) \right \} \\ P(M,M)=P(M,F)=P(F,M)=P(F,F)=1/4 \end{matrix}

We have to compute the probability of the outcome M_{2}=\left ( M,M \right ), conditioned on the knowledge that one or two children are male, which corresponds to the event M_{\textup{at least one}}= \left ( M,M \right )\cup \left ( F,M \right )\cup \left ( M,F \right ). Thus, using the conditional probability definition (9), we obtain

\begin{matrix}P\left ( M_{2}/M_{\textup{at least one}} \right )=\frac{P\left ( M_{2}\cap M_{\textup{at least one}} \right )}{P\left ( M_{\textup{at least one}} \right )}=\frac{1/4}{3/4}=1/3 \\ >P\left ( M_{2} \right )=\frac{1}{4} \end{matrix}\; (11)

As expected, the conditional probability is larger than the unconditional one.

Example 3. Step 2. Let us come back to the naive probability 1/2, which, being larger than 1/3, must be conditioned by a finer knowledge, corresponding to an event M_{?}\subset M_{\textup{at least one}}. Consider for instance the event: to encounter one of the two children, who happens to be male. The event can be written, with the help of (10), as the union of its intersections with the four possible children pairs, that is

M_{?}=M_{?}\cap \left ( M,M \right )\cup M_{?}\cap \left ( M,F \right )\cup M_{?}\cap \left ( F,M\right )\cup M_{?}\cap \left ( F,F \right )
where M_{?}\cap \left ( M,M \right )=\left ( M,M \right ) is the encounter event when both children are males, whose probability is 1/4. By using (10) we derive the event probability (law of total probability)

P\left (M_{?} \right )=P\left (M_{?} /M,M \right )P\left ( M,M \right )+ P\left (M_{?} /M,F \right )P\left ( M,F \right )+ P\left (M_{?} /F,M \right )P\left ( F,M \right )+ P\left (M_{?} /F,F \right )P\left ( F,F \right )=\left ( 1+1/2+1/2+0 \right ) \times 1/4=1/2
and finally, by replacing the subscript ? with male encounter, we obtain the expected result

P\left ( M_{2}/M_{\textup{male encounter}} \right )=\frac{P\left ( M_{2}\cap M_{\textup{male encounter}} \right )}{P\left ( M_{\textup{male encounter}} \right )}=\frac{1/4}{1/2}=1/2\; (12)

Where does the difference between (11) and (12) lie? The probability of encountering a male when the pair is either (F,M) or (M,F) is smaller (1/8) than the probability (1/4) of such a pair! This shows how delicate the concept and practice of conditional probability, and the assessment of knowledge, are!
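
Both conditionings of Example 3 can be reproduced by simulation, which makes the difference between the two evidence events tangible: in Step 2 the encountered child is picked at random within the pair. The sketch below is only an illustrative check of (11) and (12).

import random

rng = random.Random(2)
trials = 200_000
both_and_at_least_one = at_least_one = 0    # counters for Step 1
both_and_male_met = male_met = 0            # counters for Step 2

for _ in range(trials):
    pair = (rng.choice("MF"), rng.choice("MF"))
    # Step 1: condition on "at least one child is male"
    if "M" in pair:
        at_least_one += 1
        both_and_at_least_one += (pair == ("M", "M"))
    # Step 2: condition on "the encountered child is male" (child chosen at random)
    if rng.choice(pair) == "M":
        male_met += 1
        both_and_male_met += (pair == ("M", "M"))

print(both_and_at_least_one / at_least_one)   # close to 1/3, eq. (11)
print(both_and_male_met / male_met)           # close to 1/2, eq. (12)
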
Example 4. Quality inspection. The output of a manufacturing line is classified as good (G), uncertain (U) or defective (D), with \Omega =\left \{ G,U,D \right \} and probabilities P\left ( G \right )=0.9, \;P(D) =0.08 (hence P(U)=0.02). The output passes through an inspection machine that is only instructed to label defective parts. What is the probability that the output of the inspection machine is good? The non-defective event is D^{*}=\left \{ G,U \right \}. The searched probability refers to the conditioned event G/D^{*} and is

P\left ( G/D^{*} \right )=\frac{P\left ( G\cap D^{*} \right )}{P\left ( D^{*} \right )}=\frac{P\left ( G \right )}{1-P\left ( D \right )}=\frac{0.9}{0.92}=0.978

Example 5. Sampling. n=5 good (G) and m=2 defective (D) parts are mixed in a box. To find the defective parts, two parts are randomly selected without replacement and checked. What is the probability of finding both defective parts? Let D_{1} be the event of finding a defective part in the first test and D_{2} in the second. The target event is D_{1}\cap D_{2} and its probability is P\left ( D_{1} \cap D_{2} \right )=P\left ( D_{1} \right )P\left ( D_{2} / D_{1} \right )=\frac{2}{7}\times \frac{1}{6}=\frac{1}{21}
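
The multiplication rule used in Example 5 can be checked exactly with fractions; the sketch below is an illustrative verification only.

from fractions import Fraction

n_good, n_def = 5, 2                             # parts in the box
total = n_good + n_def

# Multiplication rule (8): P(D1 and D2) = P(D1) * P(D2 / D1)
p_d1 = Fraction(n_def, total)                    # 2/7
p_d2_given_d1 = Fraction(n_def - 1, total - 1)   # 1/6
print(p_d1 * p_d2_given_d1)                      # 1/21
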
Example 6 [2]. A box contains one two-headed coin with outcome set \left \{ H,H \right \} and n-1 fair coins with outcomes \left \{ H,T \right \} and P\left ( H \right )=1/2. One coin is randomly chosen and the toss result is H. Let us denote the corresponding event by H_{1}. How many fair coins are in the box, if P\left ( H_{1} \right )=11/20? Let us denote the random choice of the i-th coin as R_{i}, with P(R_{i})=1/n, and let i=1 correspond to the two-headed coin. The event H_{1} is the union of the mutually exclusive events (event 1 or event 2 or ...) that a single coin has been selected and the toss result is H, so that the law of total probability (10) provides

P(H_{1})=P\left ( H/ R_{1} \right )P\left ( R_{1} \right )+ ...+P\left ( H/ R_{n} \right )P\left ( R_{n} \right )=\frac{1}{n}\left ( 1+\frac{n-1}{2} \right )=\frac{n+1}{2n}

which implies n=10, i.e. n-1=9 fair coins, to satisfy P\left ( H_{1} \right )=11/20.
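
Example 6 can also be solved by a direct search over n, using the total-probability expression above; the sketch below is an illustrative cross-check only.

from fractions import Fraction

def p_head(n):
    # One two-headed coin plus n-1 fair coins, each chosen with probability 1/n
    return Fraction(1, n) * 1 + Fraction(n - 1, n) * Fraction(1, 2)   # = (n+1)/(2n)

target = Fraction(11, 20)
n = next(m for m in range(2, 1000) if p_head(m) == target)
print(n, n - 1)        # 10 coins in total, hence 9 fair coins
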
Bayes theorem

Bayes theorem (1763)  at first sight is just a reformulation of identities in (8), but is the basis of statistical inference and prediction.

Inference aims to derive statistical properties, in the form of parameters, of a probability model from experimental data collected from a population that is assumed to be coherent with the model. We have already seen a first example and method of inference, the MLE, applied to the binomial distribution of repeated coin-tossing trials. Bayes theorem allows inference problems and methods to be cast in a rather general formulation.

Prediction aims to predict the output of future experiments from past data of similar experiments, in terms of a probability model.

Let us rewrite (8) in the form of Bayes theorem by changing notations and by adding some nomenclature:

\begin{matrix}P\left ( M/E \right )=P\left ( M\right )\frac{P\left ( E/M \right )}{P\left ( E\right )} \\ 0<P\left ( M\right )\leq 1,\; 0<P\left ( E\right )\leq 1 \end{matrix}\; (13)

The event E is known as the evidence or measurement whereas M stands for model or hypothesis. Evidence may have been observed or assumed. P\left ( M\right ) is known as the prior probability of the model unconditioned by the current evidence  (it may have been constructed from previous data). P\left ( E/M \right ) is the likelihood, namely the  probability of observing E under the model/hypothesis M (let us remember the sequence of heads and tails under the assumption of the head probability p). P\left ( E \right ) is known as the marginal likelihood and can be written and constructed as follows:

\begin{matrix}P\left ( E \right )=P\left ( E \cap M\right )+P\left ( E \cap M^{*}\right )=P\left ( E/M\right )P\left ( M\right )+P\left ( E /M^{*}\right )P\left ( M^{*}\right ) \\ P\left ( M\right )+P\left ( M^{*}\right )=1 \end{matrix}\: \; (14)

where M^{*} denotes the complement of M in the outcome set \Omega. Finally, P\left ( M/E \right ) is the posterior probability of the model/hypothesis conditioned by the observed evidence.

Example 7. The three door contest [1].

The problem is also known as the Monty Hall problem. Behind one of three closed doors there is a prize. A contestant must select one of the three doors \left \{ 1,2,3 \right \}; the host then opens one of the empty doors (if both the remaining doors are empty, he makes a random choice) and asks the contestant whether he wants to change his selection or not. There are three disjoint events M_{i}=\textup{prize behind door}\: i, with prior probabilities P\left (M_{i} \right )=1/3. Let us assume that the contestant chooses door 1 and the host opens door 3. What is the winning probability if he changes his selection (to door 2) or keeps it? Of course, door numbering can be changed without affecting the result. Let us denote the evidence event 'contestant selected door 1 and host opened door 3' with E_{3}. We aim at the posterior probabilities P\left ( M_{1} /E_{3}\right ), winning without changing the selection, and P\left ( M_{2} /E_{3}\right ), winning by changing the selection. P\left (E_{3} \right ) is computed from the law of total probability

\begin{matrix}P\left (E_{3} \right )=P\left (E_{3}/M_{1} \right )P\left ( M_{1} \right )+P\left (E_{3}/M_{2} \right )P\left ( M_{2} \right )+P\left (E_{3}/M_{3} \right )P\left ( M_{3} \right )= \\ =1/2\times 1/3+1\times 1/3+0\times 1/3=1/2 \end{matrix}

The following posterior probabilities prove that changing door is favorable:

\begin{matrix}P\left ( M_{2}/E_{3} \right )=P\left ( M_{2} \right )\frac{P\left ( E_{3}/ M_{2}\right )}{P\left ( E_{3} \right )}=\frac{1}{3}\frac{1}{1/2}=\frac{2}{3} \\ P\left ( M_{1}/E_{3} \right )=P\left ( M_{1} \right )\frac{P\left ( E_{3}/ M_{1}\right )}{P\left ( E_{3} \right )}=\frac{1}{3}\frac{1/2}{1/2}=\frac{1}{3} \end{matrix}

The reader is encouraged to list the nine possible combinations of prized door (3) and selected door (3) together with the result of changing and not changing the selection. To show again the subtle role of the evidence and of the formulation of the relevant event E, we look for an evidence such that it does not matter whether you change or not, which implies that P\left ( M_{2}/E \right )=P\left ( M_{1}/E \right ) and that E\supset E_{3} (a coarser evidence). Assume that the evidence event is 'the contestant selected door 1 and the host revealed that door 3 was empty (but he did not open the door)'. Indeed, 'door 3 empty' is a coarser information, since it does not imply that 'door 3 will be opened'; 'door 2 could be opened' as well in the case door 1 hides the prize. Since P\left ( E \right )=P\left ( M_{1} \cup M_{2}\right )=2/3 and P\left ( E \cap M_{i}\right )=P\left ( M_{i} \right )=1/3 for i=1,2, we obtain as expected

\begin{matrix}P\left ( M_{2}/E \right )=\frac{P\left ( E \cap M_{2}\right )}{P\left ( E \right )}=\frac{1}{3}\frac{1}{2/3}=\frac{1}{2} \\ P\left ( M_{1}/E \right )=\frac{P\left ( E\cap M_{1}\right )}{P\left ( E \right )}=\frac{1}{3}\frac{1}{2/3}=\frac{1}{2} \end{matrix}
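
The Monty Hall posteriors 2/3 and 1/3 are easy to confirm by simulating many games in which the host behaves as described (opens an empty, unselected door, choosing at random when two are available). The sketch below is an illustrative check only.

import random

rng = random.Random(3)
trials = 200_000
wins_switch = wins_stay = 0

for _ in range(trials):
    prize = rng.randrange(3)          # door hiding the prize
    choice = rng.randrange(3)         # contestant's first selection
    # Host opens an empty door different from the contestant's choice
    host = rng.choice([d for d in range(3) if d != choice and d != prize])
    switched = next(d for d in range(3) if d != choice and d != host)
    wins_stay += (choice == prize)
    wins_switch += (switched == prize)

print(wins_stay / trials)     # close to 1/3
print(wins_switch / trials)   # close to 2/3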

Further examples

Example 8. Diagnostic test. A diagnostic test applied to affected people is false negative in 4% of the cases. Applied to unaffected people, it is false positive in 2% of the cases. 5% of the population to be tested is affected. Let T be the positive test event, A the affected event and A^{*} its complement, with

\begin{matrix}P\left ( A \right )=0.05,\; P\left ( A^{*} \right )=0.95 \\ P\left ( T/A \right )=0.96,\; P\left ( T/A^{*} \right )=0.02 \end{matrix}

We want to know the posterior probabilities  P\left ( A/T \right ), that given a positive answer people are affected,  and P\left ( A^{*}/T ^{*}\right ), that given a negative answer people are unaffected. First we compute P\left ( T \right ) from the law of total probability :

P\left ( T \right )=P\left ( T/A \right )P\left ( A \right )+P\left ( T/A^{*} \right )P\left ( A^{*} \right )=0.96\times0.05+0.02\times 0.95=0.048+0.019=0.067

The posterior probabilities hold

\begin{matrix}P\left ( A/T \right )=P\left ( A \right )\frac{P\left ( T/A \right )}{P\left ( T \right )}=\frac{0.048}{0.067}=0.716 \\ P\left ( A^{*}/T^{*} \right )=P\left ( A^{*} \right )\frac{P\left ( T^{*}/A^{*} \right )}{P\left ( T^{*} \right )}=P\left ( A^{*} \right )\frac{1-P\left ( T/A^{*} \right )}{1-P\left ( T \right )}=0.998 \end{matrix}
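
The two posteriors of Example 8 follow from a direct application of (13) and (14); the short computation below reproduces them with the values given in the example.

p_a = 0.05                     # prevalence P(A)
p_t_given_a = 0.96             # sensitivity, 1 - false negative rate
p_t_given_not_a = 0.02         # false positive rate

# Marginal likelihood (14)
p_t = p_t_given_a * p_a + p_t_given_not_a * (1 - p_a)                 # 0.067

# Posteriors by Bayes theorem (13)
p_a_given_t = p_a * p_t_given_a / p_t                                 # about 0.716
p_not_a_given_not_t = (1 - p_a) * (1 - p_t_given_not_a) / (1 - p_t)   # about 0.998
print(p_t, p_a_given_t, p_not_a_given_not_t)
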
Example 9. Wrong evidence. A man tells the truth n times out of N (n\leq N) about the outcome of a fair six-faced die. Let R_{4} be the event that the man reports a specific number, say f=4. Let E_{4} be the event that f=4 occurs, with P\left ( E_{4} \right )=1/6. Since the reported number may be false, the reporting event will occur more often than the occurrence of f=4, which means that P\left (R_{4} \right )\geq 1/6. What is the probability that the reported number is true, namely P\left ( E_{4}/R_{4} \right )? Prior model: by assuming that the die is fair, P\left ( E_{4} \right )=1/6, P\left ( E_{4}^{*} \right )=5/6. The evidence from experience consists of the probabilities that the number 4 is reported when it has occurred (true report) or not (false report):

P\left ( R_{4}/E_{4}^{*} \right )=1-n/N\: (\textup{false});\; P\left ( R_{4}/E_{4} \right )=n/N \: (\textup{true})

First we compute P\left ( R_{4}\right ) from the law of total probability

\begin{matrix}P\left ( R_{4} \right )=P\left ( R_{4}/E_{4} \right )P\left ( E_{4} \right )+ P\left ( R_{4}/E_{4}^{*} \right )P\left ( E_{4}^{ * } \right )=\frac{5}{6}-\frac{2}{3} \frac{n}{N} \\ n\rightarrow N\Rightarrow P\left ( R_{4} \right )=1/6 \: \left ( \textup{truth }\right );\; n\rightarrow 0\Rightarrow P\left ( R_{4} \right )=5/6 \: \left ( \textup{falsehood}\right )\end{matrix}
Finally

\begin{matrix}P\left ( E_{4}/R_{4} \right )=P\left (E_{4} \right )\frac{P\left ( R_{4}/E_{4} \right )}{P\left ( R_{4} \right )}=\frac{n}{5N-4n} \\ n\rightarrow N\Rightarrow P\left ( E_{4}/R_{4} \right )=1 \end{matrix}\; (15)
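
The posterior (15) can be tabulated as a function of the truthfulness ratio n/N, which shows how quickly belief in the report degrades as the man lies more often; a minimal sketch follows, with the illustrative value N=20.

from fractions import Fraction

def posterior(n, N):
    # P(E4 / R4) = n / (5N - 4n), eq. (15)
    return Fraction(n, 5 * N - 4 * n)

N = 20
for n in (20, 15, 10, 5):
    print(n, N, posterior(n, N))    # 1, 3/8, 1/6, 1/16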

 

TBC

References

[1] J. Mitchell, Examples: conditional probability, http://www.ams.sunysb.edu/~jsbm/courses/311/conditioning.pdf

[2] Brilliant, Conditional probability-Problem solving, https://brilliant.org/wiki/conditional-probability-problem-solving/