B. Sums and Means
In Chapters 31 and 32, we regarded an estimator \(\hat\theta\) as a random variable and borrowed tools from probability, such as expectation and variance, to evaluate and compare different estimators.
Calculating the expectation and variance of \(\hat\theta\) in general requires knowledge of the distribution of \(\hat\theta\) (although it is sometimes possible to bypass this using tricks such as linearity of expectation). The distribution of an estimator is called its sampling distribution. The rest of this book will be dedicated to finding the sampling distributions of various estimators. This will allow us to finally calculate the MSE of estimators such as \(\hat\lambda = 1/\bar X\) in Example 30.2. It will also allow us to calculate other useful measures of an estimator's quality, such as \[ P(|\hat\theta - \theta| < \epsilon), \] the probability that the estimate is within \(\epsilon\) of the true parameter value.
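To make the idea of a sampling distribution concrete, here is a minimal simulation sketch (not from the text) using NumPy. It approximates the sampling distribution of \(\bar X\) under a normal model, as in Example 30.3, and estimates \(P(|\hat\theta - \theta| < \epsilon)\) empirically; the values of \(\mu\), \(\sigma\), \(n\), \(\epsilon\), and the replication count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 5.0, 2.0   # true parameters (illustrative values)
n = 30                 # sample size
reps = 100_000         # number of simulated samples
eps = 0.5

# Each row is one sample of size n; each row's mean is one draw
# from the sampling distribution of the estimator X-bar.
samples = rng.normal(mu, sigma, size=(reps, n))
xbar = samples.mean(axis=1)

# Monte Carlo estimate of P(|theta-hat - theta| < eps).
print("P(|xbar - mu| < eps) ~", np.mean(np.abs(xbar - mu) < eps))
```

The rest of this book develops exact (analytical) answers to such questions; simulation like this is useful as a sanity check on those calculations.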
In this unit, we will focus on the case where the estimator is a function of a sum of random variables: \[ \hat\theta = g\left(\sum_{i=1}^n X_i\right). \] This covers most of the examples we have seen so far, since \(\bar X = \frac{1}{n} \sum_{i=1}^n X_i\) is itself a function of the sum; it includes Example 30.3 (where the MLE of \(\mu\) was \(\bar X\)) and Example 30.2 (where the MLE of \(\lambda\) was \(1 / \bar X\)). By the end of this unit, you will be able to evaluate the bias of the estimator in Example 30.2, previewed in the simulation sketch below.
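As a preview of that bias calculation, here is a hedged Monte Carlo sketch. It assumes (as the MLE \(\hat\lambda = 1/\bar X\) suggests) that the data in Example 30.2 are i.i.d. Exponential with rate \(\lambda\); the specific values of \(\lambda\), \(n\), and the replication count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

lam = 2.0        # true rate (illustrative value)
n = 10           # sample size
reps = 200_000   # number of simulated samples

# Assumed model for Example 30.2: X_1, ..., X_n i.i.d. Exponential(lambda).
samples = rng.exponential(scale=1 / lam, size=(reps, n))
lam_hat = 1 / samples.mean(axis=1)   # the estimator g(sum X_i) = n / sum X_i

# Monte Carlo estimate of the bias E[lambda-hat] - lambda.
# (The exact value, which follows from the gamma distribution of the
# sum and is derived later in this unit, is lambda / (n - 1) for n > 1.)
print("estimated bias:", lam_hat.mean() - lam)
print("exact bias    :", lam / (n - 1))
```

Notice that the estimated bias is positive and shrinks as \(n\) grows; the analytical tools in this unit will let you derive this behavior exactly rather than by simulation.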