
Bayesian Estimation of the Parameters of the Normal Distribution

This lecture shows how to apply the basic principles of Bayesian inference to the problem of estimating the parameters (mean and variance) of a normal distribution.

Table of contents

  1. Unknown mean and known variance

    1. The likelihood

    2. The prior

    3. The posterior

    4. The prior predictive distribution

    5. The posterior predictive distribution

  2. Unknown mean and unknown variance

    1. The likelihood

    2. The prior

    3. The posterior distribution of the mean conditional on the variance

    4. The prior predictive distribution conditional on the variance

    5. The posterior distribution of the variance

    6. The prior predictive distribution

    7. The posterior distribution of the mean

The observed sample used to carry out inferences is a vector $x = (x_1, \ldots, x_n)$ whose entries are n independent and identically distributed draws $x_1, \ldots, x_n$ from a normal distribution.

Unknown mean and known variance

In this section, we are going to assume that the mean $\mu$ of the distribution is unknown, while its variance $\sigma^2$ is known.

In the next section, $\sigma^2$ will also be treated as unknown.

The likelihood

The probability density function of a generic draw $x_i$ is
$$f(x_i \mid \mu) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x_i - \mu)^2}{2\sigma^2} \right)$$
where we use the notation $f(x_i \mid \mu)$ to highlight the fact that the density depends on the unknown parameter $\mu$.

Since $x_1, \ldots, x_n$ are independent, the likelihood is
$$L(\mu) = f(x \mid \mu) = \prod_{i=1}^{n} f(x_i \mid \mu) = (2\pi\sigma^2)^{-n/2} \exp\left( -\frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2 \right)$$
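To make the formula concrete, here is a minimal Python sketch (an illustration added here, not part of the original lecture) that evaluates the log of this likelihood; the data and parameter values are arbitrary.

```python
import numpy as np

def normal_log_likelihood(x, mu, sigma2):
    """Log-likelihood of IID normal draws with mean mu and known variance sigma2."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return -0.5 * n * np.log(2 * np.pi * sigma2) - np.sum((x - mu) ** 2) / (2 * sigma2)

# Arbitrary illustrative data and parameter values.
x = np.array([1.2, 0.8, 1.5, 0.9, 1.1])
print(normal_log_likelihood(x, mu=1.0, sigma2=0.25))
```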

The prior

The prior is
$$f(\mu) = \frac{1}{\sqrt{2\pi\tau_0^2}} \exp\left( -\frac{(\mu - \mu_0)^2}{2\tau_0^2} \right)$$
that is, $\mu$ has a normal distribution with mean $\mu_0$ and variance $\tau_0^2$.

This prior is used to express the statistician's belief that the unknown parameter $\mu$ is most likely equal to $\mu_0$ and that values of $\mu$ very far from $\mu_0$ are quite unlikely (how unlikely depends on the variance $\tau_0^2$).

The posterior

Given the prior and the likelihood specified above, the posterior is
$$f(\mu \mid x) = \frac{1}{\sqrt{2\pi\tau_n^2}} \exp\left( -\frac{(\mu - \mu_n)^2}{2\tau_n^2} \right)$$
where $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$ is the sample mean and
$$\mu_n = \frac{\tau_0^2}{\tau_0^2 + \sigma^2 / n}\, \bar{x} + \frac{\sigma^2 / n}{\tau_0^2 + \sigma^2 / n}\, \mu_0
\qquad \text{and} \qquad
\tau_n^2 = \left( \frac{1}{\tau_0^2} + \frac{n}{\sigma^2} \right)^{-1}$$


Thus, the posterior distribution of $\mu$ is a normal distribution with mean $\mu_n$ and variance $\tau_n^2$.

Note that the posterior mean $\mu_n$ is the weighted average of two signals:

  1. the sample mean $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$ of the observed data;

  2. the prior mean $\mu_0$.

Both the prior and the sample mean convey some information (a signal) about $\mu$. The signals are combined linearly, and the greater the precision of a signal (the smaller its variance), the higher its weight.

The weight given to the sample mean increases with n, while the weight given to the prior mean does not. As a consequence, when the sample size n becomes large, more and more weight is given to the sample mean. In the limit, all weight is given to the information coming from the sample and no weight is given to the prior.
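The following Python sketch (an illustration added here, with arbitrary data and hyperparameters) implements the posterior update above and shows how the weight on the sample mean grows with the sample size.

```python
import numpy as np

def posterior_mean_known_variance(x, sigma2, mu0, tau0_sq):
    """Posterior of mu when the variance sigma2 is known.

    Returns the posterior mean mu_n and posterior variance tau_n^2
    implied by the formulas above.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar = x.mean()
    w_data = tau0_sq / (tau0_sq + sigma2 / n)   # weight on the sample mean
    mu_n = w_data * xbar + (1 - w_data) * mu0   # weight on the prior mean is 1 - w_data
    tau_n_sq = 1.0 / (1.0 / tau0_sq + n / sigma2)
    return mu_n, tau_n_sq

# Illustration: as n grows, the weight on the sample mean approaches 1
# and the posterior mean approaches the sample mean.
rng = np.random.default_rng(0)
for n in (5, 50, 5000):
    x = rng.normal(loc=2.0, scale=1.0, size=n)   # true mean 2, sigma^2 = 1
    mu_n, tau_n_sq = posterior_mean_known_variance(x, sigma2=1.0, mu0=0.0, tau0_sq=1.0)
    print(n, round(mu_n, 3), round(tau_n_sq, 5))
```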

The prior predictive distribution

The prior predictive distribution is
$$x \sim N\left( \mu_0 i,\; \sigma^2 I + \tau_0^2\, i i^{\top} \right)$$
where $i$ is an $n \times 1$ vector of ones, and $I$ is the $n \times n$ identity matrix.


Thus, the prior predictive distribution of x is multivariate normal with mean $\mu_0 i$ and covariance matrix
$$\sigma^2 I + \tau_0^2\, i i^{\top}$$

Under this distribution, a draw $x_i$ has prior mean $\mu_0$, variance $\sigma^2 + \tau_0^2$ and covariance with the other draws equal to $\tau_0^2$. The covariance is positive because the draws $x_i$, despite being independent conditional on $\mu$, all share the same mean parameter $\mu$, which is random.
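As an illustrative check (not part of the original lecture), the following Python sketch builds the covariance matrix $\sigma^2 I + \tau_0^2\, i i^{\top}$ and compares it with a Monte Carlo estimate obtained by first drawing $\mu$ from the prior and then the data given $\mu$; all numerical values are arbitrary.

```python
import numpy as np

# Prior predictive covariance of x: sigma^2 * I + tau0^2 * i i', as stated above.
n, sigma2, tau0_sq, mu0 = 4, 1.0, 0.5, 0.0
ones = np.ones(n)
cov_analytic = sigma2 * np.eye(n) + tau0_sq * np.outer(ones, ones)

# Monte Carlo check: draw mu from the prior, then x given mu.
rng = np.random.default_rng(1)
draws = 200_000
mu = rng.normal(mu0, np.sqrt(tau0_sq), size=draws)
x = mu[:, None] + rng.normal(0.0, np.sqrt(sigma2), size=(draws, n))
cov_mc = np.cov(x, rowvar=False)

print(np.round(cov_analytic, 3))
print(np.round(cov_mc, 3))   # should be close to the analytic covariance
```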

The posterior predictive distribution

Assume that m new observations $\widetilde{x}_1, \ldots, \widetilde{x}_m$ are drawn independently from the same normal distribution from which $x_1, \ldots, x_n$ have been extracted.

The posterior predictive distribution of the vector $\widetilde{x} = (\widetilde{x}_1, \ldots, \widetilde{x}_m)$ is
$$\widetilde{x} \mid x \sim N\left( \mu_n i,\; \sigma^2 I + \tau_n^2\, i i^{\top} \right)$$
where $I$ is the $m \times m$ identity matrix and $i$ is an $m \times 1$ vector of ones.

So, $\widetilde{x}$ has a multivariate normal distribution with mean $\mu_n i$ (where $\mu_n$ is the posterior mean of $\mu$) and covariance matrix
$$\sigma^2 I + \tau_n^2\, i i^{\top}$$
(where $\tau_n^2$ is the posterior variance of $\mu$).
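Here is a minimal Python sketch (added for illustration) that computes the posterior predictive mean vector and covariance matrix from the update formulas derived above; the data and hyperparameters are arbitrary.

```python
import numpy as np

def posterior_predictive_params(x, sigma2, mu0, tau0_sq, m):
    """Mean vector and covariance matrix of m new draws, given observed data x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    tau_n_sq = 1.0 / (1.0 / tau0_sq + n / sigma2)
    mu_n = tau_n_sq * (x.sum() / sigma2 + mu0 / tau0_sq)
    ones = np.ones(m)
    mean = mu_n * ones
    cov = sigma2 * np.eye(m) + tau_n_sq * np.outer(ones, ones)
    return mean, cov

rng = np.random.default_rng(2)
x = rng.normal(2.0, 1.0, size=30)
mean, cov = posterior_predictive_params(x, sigma2=1.0, mu0=0.0, tau0_sq=1.0, m=3)
print(np.round(mean, 3))
print(np.round(cov, 3))
```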


Unknown mean and unknown variance

As in the previous section, the sample $x = (x_1, \ldots, x_n)$ is assumed to be a vector of IID draws from a normal distribution.

However, we now assume that not only the mean $\mu$, but also the variance $\sigma^2$ is unknown.

The likelihood

The probability density function of a generic draw $x_i$ is
$$f(x_i \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x_i - \mu)^2}{2\sigma^2} \right)$$
The notation $f(x_i \mid \mu, \sigma^2)$ highlights the fact that the density depends on the two unknown parameters $\mu$ and $\sigma^2$.

Since $x_1, \ldots, x_n$ are independent, the likelihood is
$$L(\mu, \sigma^2) = f(x \mid \mu, \sigma^2) = \prod_{i=1}^{n} f(x_i \mid \mu, \sigma^2) = (2\pi\sigma^2)^{-n/2} \exp\left( -\frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2 \right)$$

The prior

The prior is hierarchical.

First, we assign the following prior to the mean, conditional on the variance:
$$\mu \mid \sigma^2 \sim N\left( \mu_0,\; \nu\sigma^2 \right)$$
that is, conditional on $\sigma^2$, $\mu$ has a normal distribution with mean $\mu_0$ and variance $\nu\sigma^2$.

Note that the variance of the parameter $\mu$ is assumed to be proportional to the unknown variance $\sigma^2$ of the data points. The constant of proportionality $\nu$ determines how tight the prior is, that is, how probable we deem that $\mu$ is very close to the prior mean $\mu_0$.

Then, we assign the following prior to the variance:
$$\frac{1}{\sigma^2} \sim \text{Gamma}\left( k,\; \frac{1}{\sigma_0^2} \right)$$
that is, $\sigma^2$ has an inverse-Gamma distribution with parameters k and $1/\sigma_0^2$ (i.e., the precision $1/\sigma^2$ has a Gamma distribution with parameters k and $1/\sigma_0^2$).

By the properties of the Gamma distribution, the prior mean of the precision is
$$\operatorname{E}\left[ \frac{1}{\sigma^2} \right] = \frac{1}{\sigma_0^2}$$
and its variance is
$$\operatorname{Var}\left[ \frac{1}{\sigma^2} \right] = \frac{2}{k \sigma_0^4}$$

We can think of $1/\sigma_0^2$ as our best guess of the precision of the data generating distribution. k is the parameter that we use to express our degree of confidence in this guess. The greater k, the tighter our prior about $1/\sigma^2$ is, and the more probable we deem it that $1/\sigma^2$ is close to $1/\sigma_0^2$.
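The following Python sketch (added for illustration) simulates this prior on the precision, assuming the Gamma parameterization used above, in which a Gamma random variable with parameters k and h can be represented as h/k times a chi-square random variable with k degrees of freedom, so that its mean is h and its variance is 2h^2/k; the numerical values are arbitrary.

```python
import numpy as np

# Prior on the precision 1/sigma^2, under the parameterization assumed above:
# Gamma(k, h) represented as (h / k) * chi-square with k degrees of freedom.
rng = np.random.default_rng(3)
sigma0_sq = 0.5                      # illustrative prior guess for the variance
k, h = 10, 1 / sigma0_sq             # h = 1 / sigma0^2 is the prior mean of the precision
precision = (h / k) * rng.chisquare(df=k, size=500_000)

print(precision.mean(), h)              # sample mean vs. prior mean 1 / sigma0^2
print(precision.var(), 2 * h**2 / k)    # sample variance vs. 2 / (k * sigma0^4)
```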

The posterior distribution of the mean conditional on the variance

Conditional on $\sigma^2$, the posterior distribution of $\mu$ is
$$f(\mu \mid \sigma^2, x) = \frac{1}{\sqrt{2\pi\tau_n^2}} \exp\left( -\frac{(\mu - \mu_n)^2}{2\tau_n^2} \right)$$
where
$$\mu_n = \frac{n\nu\,\bar{x} + \mu_0}{n\nu + 1}
\qquad \text{and} \qquad
\tau_n^2 = \frac{\nu\sigma^2}{n\nu + 1}$$


Thus, conditional on $\sigma^2$ and x, $\mu$ is normal with mean $\mu_n$ and variance $\tau_n^2$.
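A short Python sketch of this conditional update (added for illustration), under the reading adopted above in which the prior variance of $\mu$ given $\sigma^2$ is $\nu\sigma^2$; data and hyperparameters are arbitrary.

```python
import numpy as np

def posterior_mean_given_variance(x, sigma2, mu0, nu):
    """Posterior of mu conditional on sigma^2, under the prior mu | sigma^2 ~ N(mu0, nu * sigma^2).

    Returns mu_n and tau_n^2 as in the formulas above.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar = x.mean()
    mu_n = (n * nu * xbar + mu0) / (n * nu + 1)
    tau_n_sq = nu * sigma2 / (n * nu + 1)
    return mu_n, tau_n_sq

rng = np.random.default_rng(4)
x = rng.normal(2.0, 1.0, size=20)
print(posterior_mean_given_variance(x, sigma2=1.0, mu0=0.0, nu=1.0))
```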

The prior predictive distribution conditional on the variance

Conditional on $\sigma^2$, the prior predictive distribution of x is
$$x \mid \sigma^2 \sim N\left( \mu_0 i,\; \sigma^2 \left( I + \nu\, i i^{\top} \right) \right)$$
where $i$ is an $n \times 1$ vector of ones, and $I$ is the $n \times n$ identity matrix.


The posterior distribution of the variance

The posterior distribution of the variance is characterized by
$$\frac{1}{\sigma^2} \,\Big|\, x \sim \text{Gamma}\left( n + k,\; \frac{1}{\sigma_n^2} \right)$$
where
$$\sigma_n^2 = \frac{k\sigma_0^2 + \sum_{i=1}^{n} (x_i - \bar{x})^2 + \frac{n}{n\nu + 1} (\bar{x} - \mu_0)^2}{n + k}$$


Thus, $1/\sigma^2$ has a Gamma distribution with parameters $n+k$ and $1/\sigma_n^2$.
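The following Python sketch (added for illustration) computes the parameters of this posterior Gamma distribution, using the expression for $\sigma_n^2$ reconstructed above, so it inherits the same parameterization assumptions; the inputs are arbitrary.

```python
import numpy as np

def posterior_variance_params(x, mu0, nu, k, sigma0_sq):
    """Parameters of the posterior Gamma distribution of the precision 1/sigma^2.

    Under the reconstruction above: 1/sigma^2 | x ~ Gamma(n + k, 1 / sigma_n^2),
    in the parameterization whose mean is 1 / sigma_n^2.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar = x.mean()
    ss = np.sum((x - xbar) ** 2)
    sigma_n_sq = (k * sigma0_sq + ss + n / (n * nu + 1) * (xbar - mu0) ** 2) / (n + k)
    return n + k, sigma_n_sq

rng = np.random.default_rng(5)
x = rng.normal(2.0, 1.5, size=50)
dof, sigma_n_sq = posterior_variance_params(x, mu0=0.0, nu=1.0, k=4, sigma0_sq=1.0)
print(dof, sigma_n_sq)   # posterior mean of the precision is 1 / sigma_n_sq
```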

The prior predictive distribution

The prior predictive distribution of x is
$$x \sim t\left( \mu_0 i,\; \sigma_0^2 \left( I + \nu\, i i^{\top} \right),\; k \right)$$
that is, a multivariate Student's t distribution with mean $\mu_0 i$, scale matrix
$$\sigma_0^2 \left( I + \nu\, i i^{\top} \right)$$
and k degrees of freedom.
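As an illustration (not part of the original lecture), one can sample from the prior predictive by composition, following the hierarchical prior: draw $\sigma^2$, then $\mu$ given $\sigma^2$, then the data. The sketch below does this under the parameterizations assumed above and, for k > 2, compares the Monte Carlo covariance with k / (k - 2) times the scale matrix; all values are illustrative.

```python
import numpy as np

# Draw from the prior predictive by composition: sigma^2, then mu | sigma^2,
# then x | mu, sigma^2 (parameterizations as assumed in the sketches above).
rng = np.random.default_rng(6)
n, mu0, nu, k, sigma0_sq = 3, 0.0, 1.0, 6, 1.0
draws = 200_000

precision = (1 / (k * sigma0_sq)) * rng.chisquare(df=k, size=draws)
sigma2 = 1.0 / precision
mu = rng.normal(mu0, np.sqrt(nu * sigma2))
x = mu[:, None] + rng.normal(size=(draws, n)) * np.sqrt(sigma2)[:, None]

# For k > 2, the covariance of x is k / (k - 2) times the scale matrix sigma0^2 (I + nu * 11').
scale = sigma0_sq * (np.eye(n) + nu * np.ones((n, n)))
print(np.round(np.cov(x, rowvar=False), 2))
print(np.round(k / (k - 2) * scale, 2))   # should be roughly close to the Monte Carlo estimate
```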


The posterior distribution of the mean

The posterior distribution of the mean is
$$f(\mu \mid x) = \frac{1}{B\left( \frac{n+k}{2}, \frac{1}{2} \right)} \left( (n+k)\, \frac{\nu\sigma_n^2}{n\nu + 1} \right)^{-1/2} \left( 1 + \frac{(\mu - \mu_n)^2}{(n+k)\, \frac{\nu\sigma_n^2}{n\nu + 1}} \right)^{-\frac{n+k+1}{2}}$$
where $B\left( \cdot, \cdot \right)$ is the Beta function.

Proof

We have already proved that, conditional on $\sigma^2$ and x, $\mu$ is normal with mean $\mu_n$ and variance
$$\tau_n^2 = \frac{\nu\sigma^2}{n\nu + 1}$$
We have also proved that, conditional on x, $1/\sigma^2$ has a Gamma distribution with parameters $n+k$ and $1/\sigma_n^2$. Thus, we can write
$$\mu = \mu_n + \sqrt{\frac{\nu\sigma^2}{n\nu + 1}}\, Z = \mu_n + \sqrt{\frac{\nu}{n\nu + 1}}\, \frac{Z}{\sqrt{\Gamma_1}}$$
where Z is standard normal conditional on x and $\sigma^2$, and $\Gamma_1 = 1/\sigma^2$ has a Gamma distribution with parameters $n+k$ and $1/\sigma_n^2$.

Now, note that, by the properties of the Gamma distribution,
$$\Gamma_2 = \sigma_n^2\, \Gamma_1$$
has a Gamma distribution with parameters $n+k$ and 1. We can therefore write
$$\mu = \mu_n + \sqrt{\frac{\nu\sigma_n^2}{n\nu + 1}}\, \frac{Z}{\sqrt{\Gamma_2}}$$
But $Z / \sqrt{\Gamma_2}$ has a standard Student's t distribution with $n+k$ degrees of freedom (see the lecture on the t distribution).

As a consequence, $\mu$ has a Student's t distribution with mean $\mu_n$, scale parameter
$$\frac{\nu\sigma_n^2}{n\nu + 1}$$
and $n+k$ degrees of freedom. Thus, its density is the one reported above, where $B\left( \cdot, \cdot \right)$ is the Beta function.

In other words, $\mu$ has a t distribution with mean $\mu_n$, scale parameter $\nu\sigma_n^2 / (n\nu + 1)$ and $n+k$ degrees of freedom.
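To check this result numerically, the following Python sketch (added here, under the same parameterization assumptions as the reconstructions above) draws $\mu$ by composition, first sampling $\sigma^2$ from its posterior and then $\mu$ given $\sigma^2$, and compares the Monte Carlo quantiles with those of the Student's t distribution just described; all inputs are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(2.0, 1.5, size=40)                 # illustrative data
n, mu0, nu, k, sigma0_sq = x.size, 0.0, 1.0, 4, 1.0

xbar = x.mean()
sigma_n_sq = (k * sigma0_sq + np.sum((x - xbar) ** 2)
              + n / (n * nu + 1) * (xbar - mu0) ** 2) / (n + k)
mu_n = (n * nu * xbar + mu0) / (n * nu + 1)
scale_sq = nu * sigma_n_sq / (n * nu + 1)          # scale parameter of the t distribution

# Composition: draw sigma^2 from its posterior, then mu | sigma^2, x.
draws = 500_000
precision = (1 / ((n + k) * sigma_n_sq)) * rng.chisquare(df=n + k, size=draws)
sigma2 = 1.0 / precision
mu = rng.normal(mu_n, np.sqrt(nu * sigma2 / (n * nu + 1)))

# Compare Monte Carlo quantiles with the Student's t quantiles.
qs = [0.05, 0.5, 0.95]
print(np.round(np.quantile(mu, qs), 3))
print(np.round(stats.t.ppf(qs, df=n + k, loc=mu_n, scale=np.sqrt(scale_sq)), 3))
```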

Please cite as:

Taboga, Marco (2021). "Bayesian estimation of the parameters of the normal distribution", Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix. https://www.statlect.com/fundamentals-of-statistics/normal-distribution-Bayesian-estimation.

