\[ \newcommand{\or}{\textrm{ or }} \newcommand{\and}{\textrm{ and }} \newcommand{\not}{\textrm{not }} \newcommand{\Pois}{\textrm{Poisson}} \newcommand{\E}{\textrm{E}} \newcommand{\var}{\textrm{Var}} \]
In Chapter 23, we learned that the distribution of multiple continuous random variables could be described completely by the joint PDF. However, the joint PDF contains more information than is necessary for most problems. In this chapter, we will summarize random variables by calculating expectations of the form \(\text{E}\!\left[ g(X, Y) \right]\). All of the results in this chapter are analogous to the results in Chapter 14 for discrete random variables, except with PDFs instead of PMFs and integrals instead of sums.
The general tool for calculating expectations of the form \(\text{E}\!\left[ g(X, Y) \right]\) is 2D LotUS. It is the natural generalization of LotUS from Theorem 21.1 and the continuous version of Theorem 14.1.
The intuition is the same as in Theorem 21.1; the only difference is that there are now two random variables. To calculate the expectation of \(g(X, Y)\), we weight the possible values \(g(x, y)\) by the joint PDF \(f_{X, Y}(x, y)\).
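In symbols, if \(X\) and \(Y\) are continuous random variables with joint PDF \(f_{X, Y}\), then

$$
\text{E}\!\left[ g(X, Y) \right] = \int_{-\infty}^\infty \int_{-\infty}^\infty g(x, y)\, f_{X, Y}(x, y)\, dx\, dy.
$$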
Because Equation 24.1 is cumbersome to evaluate, 2D LotUS is usually a tool of last resort. The remainder of this chapter is devoted to shortcuts for specific functions \(g(x, y)\) that allow us to avoid 2D LotUS. But when in doubt, remember that 2D LotUS is always an option.
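When an exact answer is hard to obtain by hand, the double integral in 2D LotUS can also be evaluated numerically. Below is a minimal sketch (not from the text) using SciPy; the joint PDF \(f(x, y) = x + y\) on the unit square and the function \(g(x, y) = xy\) are made-up choices for illustration.

```python
# Approximate E[g(X, Y)] by numerically integrating g(x, y) * f(x, y)
# over the support of (X, Y). Illustrative example only.
from scipy import integrate

def f(x, y):
    """Hypothetical joint PDF of (X, Y): f(x, y) = x + y on [0, 1] x [0, 1]."""
    return x + y

def g(x, y):
    """Function whose expectation we want: g(x, y) = x * y."""
    return x * y

# dblquad expects the integrand as func(y, x), with y as the inner variable.
value, _err = integrate.dblquad(lambda y, x: g(x, y) * f(x, y), 0, 1, 0, 1)
print(value)  # approximately 1/3, the exact value of E[XY] for this joint PDF
```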
When \(g(x, y)\) is a linear function, there is a remarkable simplification.
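Specifically, linearity of expectation says that for any random variables \(X\) and \(Y\) and any constants \(a\) and \(b\),

$$
\text{E}\!\left[ aX + bY \right] = a \text{E}\!\left[ X \right] + b \text{E}\!\left[ Y \right].
$$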
This result is more remarkable than it appears. It says that \(\text{E}\!\left[ X + Y \right]\), which depends in principle on the joint distribution of \(X\) and \(Y\), can be calculated using only the distribution of \(X\) and the distribution of \(Y\) individually. That is, no matter how \(X\) and \(Y\) are related to each other, \(\text{E}\!\left[ X + Y \right]\) is the same value.
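To make this concrete, here is a quick simulation sketch (not from the text). The particular choice \(Y = X^2\) is arbitrary; it simply makes \(X\) and \(Y\) strongly dependent.

```python
# Illustrate that E[X + Y] = E[X] + E[Y] even when X and Y are dependent.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = x ** 2  # Y is a deterministic function of X, so far from independent

print(np.mean(x + y))           # Monte Carlo estimate of E[X + Y]
print(np.mean(x) + np.mean(y))  # E[X] + E[Y]; the two estimates agree
```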
By cleverly applying linearity of expectation, we can solve Example 24.1 without any double integrals!
When \(g(x, y) = xy\), evaluating \(\text{E}\!\left[ g(X, Y) \right] = \text{E}\!\left[ XY \right]\) requires 2D LotUS in general. However, when \(X\) and \(Y\) are independent, we can break up the expectation.
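In brief, independence means the joint PDF factors as \(f_{X, Y}(x, y) = f_X(x)\, f_Y(y)\), so the double integral in 2D LotUS factors into a product of two single integrals:

$$
\text{E}\!\left[ XY \right] = \int_{-\infty}^\infty \int_{-\infty}^\infty x y\, f_X(x)\, f_Y(y)\, dx\, dy = \left( \int_{-\infty}^\infty x f_X(x)\, dx \right) \left( \int_{-\infty}^\infty y f_Y(y)\, dy \right) = \text{E}\!\left[ X \right] \text{E}\!\left[ Y \right].
$$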
Why should we care about \(\text{E}\!\left[ LM \right]\), the expected product of the times that the first person and the second person arrive? It turns out to be useful for summarizing the relationship between \(L\) and \(M\). We take up this issue in Chapter 25.
Exercise 24.1 In Example 24.2, we derived the PDF of \(M = \max(X, Y)\), the time the second person arrives. Using a similar argument, derive the PDF of \(L = \min(X, Y)\), the time the first person arrives. Then, using this PDF, calculate \(\text{E}\!\left[ L \right]\), and check that it matches the answer we obtained in Example 24.2.
Hint: When calculating the CDF of \(L\), it helps to use the complement rule.
Exercise 24.2 Let \(X \sim \textrm{Exponential}(\lambda=\lambda_X)\) and \(Y \sim \textrm{Exponential}(\lambda=\lambda_Y)\) be independent random variables. Derive the distribution of \(L \overset{\text{def}}{=}\min(X, Y)\). It is one of the named distributions that we learned. Then, use this fact to derive \(\text{E}\!\left[ M \right]\), where \(M \overset{\text{def}}{=}\max(X, Y)\), without any calculus.