Statistics and Probability

Prof. David Draper
Department of Applied Mathematics and Statistics

University of California, Santa Cruz

AMS 131: Take-Home Test 2

Target due date: Wed 18 Jul 2018 [520 total points]

1. [70 total points] (the Exchange Paradox) You're playing the following game against an opponent, with a referee also taking part. The referee has two envelopes (numbered 1 and 2 for the sake of this problem, but when the game is played the envelopes have no markings on them), and (without you or your opponent seeing what she does) she puts $m in envelope 1 and $2m in envelope 2 for some m > 0 (treat m as continuous in this problem even though in practice it would have to be rounded to the nearest dollar or penny). You and your opponent each get one of the envelopes at random. You open your envelope secretly and find $x (your opponent also looks secretly in his envelope), and the referee then asks you if you want to trade envelopes with your opponent. You reason that if you trade, you will get either $x/2 or $2x, each with probability 1/2. This makes the expected value of the amount of money you'll get if you trade equal to

$$\left(\tfrac{1}{2}\right)\left(\tfrac{\$x}{2}\right) + \left(\tfrac{1}{2}\right)(\$2x) = \$\tfrac{5x}{4}\,,$$

which is greater than the $x you currently have, so you offer to trade. The paradox is that your opponent is capable of making exactly the same calculation. How can the trade be advantageous for both of you?

The point of this problem is to demonstrate that the above reasoning is flawed from a Bayesian point of view; the conclusion that trading envelopes is always optimal is based on the assumption that there’s no information obtained by observing the contents of the envelope you get, and this assumption can be seen to be false when you reason in a Bayesian way. At a moment in time before the game begins, let p(m) be your prior distribution on the amount of money M the referee will put in envelope 1, and let X be the amount of money you’ll find in your envelope when you open it (when the game is actually played, the observed x, of course, will be data that can be used to decrease your uncertainty about M).

(a) Explain why the setup of this problem implies that P(X = m | M = m) = P(X = 2m | M = m) = 1/2, and use this to show that

$$P(M = x \mid X = x) = \frac{p(x)}{p(x) + p\left(\frac{x}{2}\right)} \quad \text{and} \quad P\left(M = \tfrac{x}{2} \,\middle|\, X = x\right) = \frac{p\left(\frac{x}{2}\right)}{p(x) + p\left(\frac{x}{2}\right)}\,. \qquad (1)$$

Demonstrate from this that the expected value of the amount Y of money in your opponent's envelope, given that you've found $x in the envelope you've opened, is

$$E(Y \mid X = x) = \frac{p(x)}{p(x) + p\left(\frac{x}{2}\right)}\,(2x) + \frac{p\left(\frac{x}{2}\right)}{p(x) + p\left(\frac{x}{2}\right)}\left(\frac{x}{2}\right)\,. \qquad (2)$$

[20 points]
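Though not required for the solution, equation (2) can be sanity-checked by simulation. The following Python sketch assumes an illustrative discrete prior on M (the support {1, 2, 4} and its weights are arbitrary choices, not part of the problem statement) and compares the empirical conditional mean of the opponent's amount with formula (2):

```python
# Monte Carlo check of equation (2) under a toy discrete prior on M.
import numpy as np

rng = np.random.default_rng(1)
support = np.array([1.0, 2.0, 4.0])   # illustrative prior support
prior = np.array([0.5, 0.3, 0.2])     # illustrative prior weights p(m)

n_sims = 1_000_000
m = rng.choice(support, size=n_sims, p=prior)
got_env_1 = rng.random(n_sims) < 0.5  # you receive envelope 1 w.p. 1/2
x = np.where(got_env_1, m, 2 * m)     # amount in your envelope
y = np.where(got_env_1, 2 * m, m)     # amount in your opponent's envelope

x_obs = 2.0                           # condition on having found $2

def p(v):                             # prior PF, zero off the support
    return prior[support == v][0] if v in support else 0.0

w = p(x_obs) / (p(x_obs) + p(x_obs / 2))      # P(M = x | X = x), from (1)
theoretical = w * (2 * x_obs) + (1 - w) * (x_obs / 2)
empirical = y[x == x_obs].mean()
print(empirical, theoretical)         # the two should agree closely
```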

(b) Suppose that for you in this game, money and utility coincide (or at least suppose that utility is linear in money for you with a positive slope). Use Bayesian decision theory, through the principle of maximizing expected utility, to show that you should offer to trade envelopes only if

$$p\left(\frac{x}{2}\right) < 2\,p(x)\,. \qquad (3)$$



If you and two friends (one of whom would serve as the referee) were to actually play this game with real money in the envelopes, it would probably be the case that small amounts of money are more likely to be chosen by the referee than big amounts, which makes it interesting to explore condition (3) for prior distributions that are decreasing (that is, for which p(m2) < p(m1) for m2 > m1). Make a sketch of what condition (3) implies for a decreasing p. One possible example of a continuous decreasing family of priors on M is the exponential distribution indexed by the parameter λ, which represents the reciprocal of the mean of the distribution. Identify the set of conditions in this family of priors, as a function of x and λ, under which it's optimal for you to trade. Does the inequality you obtain in this way make good intuitive sense (in terms of both x and λ)? Explain briefly. [40 points]
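Condition (3) can also be explored numerically before doing the algebra. The sketch below assumes an Exponential(λ) prior density p(m) = λ e^{−λm} and scans a grid of x values for one fixed λ; both the grid and the value λ = 0.5 are arbitrary illustrative choices:

```python
# Numerical exploration of trading condition (3) under an Exponential(lam)
# prior with density p(m) = lam * exp(-lam * m).
import numpy as np

def should_trade(x, lam):
    """Condition (3): offer to trade iff p(x/2) < 2 * p(x)."""
    p = lambda m: lam * np.exp(-lam * m)
    return p(x / 2) < 2 * p(x)

lam = 0.5                             # arbitrary test value
xs = np.linspace(0.1, 10, 200)
trade = should_trade(xs, lam)
print(xs[trade].max())                # largest x on the grid where you trade
```

Re-running this for several λ values shows how the trading region depends on both x and λ, which is a useful cross-check on the inequality you derive.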

(c) Looking carefully at the correct argument in paragraph 2 of this problem, identify precisely the point at which the argument in the first paragraph breaks down, and specify what someone who believes the argument in paragraph 1 is implicitly assuming about the prior distribution p(m). [10 points]

2. [210 total points] (practice with joint, marginal and conditional densities) This is a toy problem designed to give you practice in working with a number of the concepts we’ve examined; in a course like this, every now and then you have to stop looking at real-world problems and just work on technique (it’s similar to classical musicians needing to practice scales in addition to actual pieces of symphonic or chamber music).

Suppose that the continuous random vector X = (X1, X2) has PDF given by

$$f_X(x) = \begin{cases} 4\,x_1 x_2 & \text{for } 0 < x_1 < 1,\ 0 < x_2 < 1 \\ 0 & \text{otherwise} \end{cases} \qquad (4)$$

in which x = (x1, x2), and define the random vector Y = (Y1, Y2) with the transformation (Y1 = X1, Y2 = X1 X2).

(a) Are X1 and X2 independent? Present any relevant calculations to support your answer. [10 points]

(b) Either work out the correlation ρ(X1, X2) between X1 and X2 or explain why no calculation is necessary in correctly identifying the value of ρ. [10 points]

(c) Sketch the set S of possible X values and the image T of S under the transformation from X to Y , and show that the joint distribution of Y = (Y1, Y2) is

$$f_Y(y) = \begin{cases} 4\,y_2 / y_1 & \text{for } 0 < y_2 < y_1 < 1 \\ 0 & \text{otherwise} \end{cases} \qquad (5)$$

in which y = (y1, y2). Verify your calculation by demonstrating that $\iint_T f_Y(y)\, dy = 1$. [50 points]
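Before (or after) deriving (5), it can be checked numerically. The sketch below verifies that the proposed density integrates to 1 over T and that a tail probability computed from (5) agrees with simulation; note from (4) that X1 and X2 each have density 2x on (0, 1), so they can be sampled by inverse-CDF as the square root of a Uniform(0, 1) draw:

```python
# Two quick checks of equation (5) using numpy and scipy.
import numpy as np
from scipy import integrate

f_Y = lambda y2, y1: 4 * y2 / y1        # proposed joint density on T

# (i) the density should integrate to 1 over T = {0 < y2 < y1 < 1}
total, _ = integrate.dblquad(f_Y, 0, 1, lambda y1: 0, lambda y1: y1)
print(total)                             # expect a value very close to 1

# (ii) compare P(Y2 < 1/4) computed from (5) with a simulation estimate
rng = np.random.default_rng(0)
n = 1_000_000
x1 = np.sqrt(rng.random(n))              # X1 has CDF x**2 on (0, 1)
x2 = np.sqrt(rng.random(n))              # X2 likewise, independent of X1
y1, y2 = x1, x1 * x2                     # the transformation to Y
prob_sim = (y2 < 0.25).mean()
prob_num, _ = integrate.dblquad(f_Y, 0, 1, lambda y1: 0,
                                lambda y1: min(y1, 0.25))
print(prob_sim, prob_num)                # the two should agree closely
```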

(d) Work out

(i) the marginal distributions for Y1 and Y2, sketching both distributions and checking that they both integrate to 1;

(ii) the conditional distributions f_{Y1|Y2}(y1 | y2) and f_{Y2|Y1}(y2 | y1), checking that they each integrate to 1;

(iii) the conditional expectations E(Y1 | Y2) and E(Y2 | Y1); and

(iv) the conditional variances V(Y1 | Y2) and V(Y2 | Y1). (Hint: recall that the variance of a random variable W is just $E(W^2) - [E(W)]^2$.)

[120 points]
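The integrations in (d) can also be mechanized symbolically, which is a handy way to check hand calculations (it is a checking device, not a substitute for the derivations the problem asks for). A sympy sketch:

```python
# Symbolic versions of the integrations in (d), from the joint density (5).
import sympy as sp

y1, y2 = sp.symbols('y1 y2', positive=True)
f = 4 * y2 / y1                       # joint density on 0 < y2 < y1 < 1

f1 = sp.integrate(f, (y2, 0, y1))     # marginal density of Y1
f2 = sp.integrate(f, (y1, y2, 1))     # marginal density of Y2
print(f1, f2)
print(sp.integrate(f1, (y1, 0, 1)), sp.integrate(f2, (y2, 0, 1)))  # both 1?

cond = sp.simplify(f / f1)            # f_{Y2|Y1}(y2 | y1)
mean = sp.integrate(y2 * cond, (y2, 0, y1))
var = sp.simplify(sp.integrate(y2**2 * cond, (y2, 0, y1)) - mean**2)
print(mean, var)                      # E(Y2 | Y1 = y1) and V(Y2 | Y1 = y1)
```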

(e) Are Y1 and Y2 independent? Present any relevant calculations to support your answer. [10 points]

(f) Either work out the correlation ρ(Y1, Y2) between Y1 and Y2 or explain why no calculation is necessary in correctly identifying the value of ρ. [10 points]

3. [100 total points] (moment-generating functions) Distributions may in general be skewed, but there may be conditions on their parameters that make the skewness get smaller or even disappear. This problem uses moment-generating functions (MGFs) to explore that idea for two important discrete distributions, the Binomial and the Poisson.

(a) We saw in class that if X ∼ Binomial(n, p), for 0 < p < 1 and integer n ≥ 1, then the MGF of X is given by

$$\psi_X(t) = \left[\, p\, e^t + (1 - p) \,\right]^n \qquad (6)$$

for all real t, and we used this to work out the first three moments of X (note that the expression for $E(X^3)$ is only correct for n ≥ 3):

$$E(X) = np\,, \qquad E(X^2) = np\,[1 + (n-1)\,p]\,, \qquad (7)$$

$$E(X^3) = np\,[1 + (n-1)(n-2)\,p^2 + 3\,(n-1)\,p]\,, \qquad (8)$$

from which we also found that V(X) = np(1 − p). Show that the above facts imply that

$$\text{skewness}(X) = \frac{1 - 2p}{\sqrt{np\,(1-p)}}\,. \qquad (9)$$

Under what condition on p, if any, does the skewness vanish? Under what condition on n, if any, does the skewness tend to 0? Explain briefly. [30 points]
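Equation (9) can be verified symbolically by differentiating the MGF (6) three times at t = 0 and assembling the central moments. A sympy sketch:

```python
# Symbolic check of the Binomial skewness formula (9) via the MGF (6).
import sympy as sp

t, p, n = sp.symbols('t p n', positive=True)
mgf = (p * sp.exp(t) + 1 - p) ** n

EX, EX2, EX3 = [sp.diff(mgf, t, k).subs(t, 0) for k in (1, 2, 3)]
var = sp.simplify(EX2 - EX**2)
mu3 = sp.simplify(EX3 - 3 * EX * EX2 + 2 * EX**3)   # third central moment
skew = sp.simplify(mu3 / var ** sp.Rational(3, 2))
print(var, skew)                # compare with np(1-p) and with equation (9)
```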

(b) In our brief discussion of stochastic processes we encountered the Poisson distribution: if Y ∼ Poisson(λ), for λ > 0, then the PF of Y is

$$f_Y(y) = \begin{cases} \frac{\lambda^y\, e^{-\lambda}}{y!} & \text{for } y = 0, 1, \ldots \\ 0 & \text{otherwise} \end{cases} \qquad (10)$$

(i) Use this to show that for all real t the MGF of Y is

$$\psi_Y(t) = e^{\lambda\,(e^t - 1)}\,. \qquad (11)$$

[10 points]

(ii) Use ψ_Y(t) to compute the first three moments of Y, the variance of Y and the skewness of Y. Under what condition on λ, if any, does the skewness either disappear or tend to 0? Explain briefly. [60 points]
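As a quick numerical cross-check of the skewness you derive, scipy's built-in Poisson moments can be compared with your closed-form answer; λ = 3.7 below is an arbitrary test value:

```python
# Numerical cross-check of the Poisson mean, variance and skewness.
from scipy import stats

lam = 3.7                              # arbitrary test value
mean, var, skew = stats.poisson.stats(lam, moments='mvs')
print(mean, var, skew)                 # compare with your MGF-based results
```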


4. [140 total points] (archaeology) Paleobotanists estimate the moment in the remote past when a given species became extinct by taking cylindrical, vertical core samples well below the earth's surface and looking for the last occurrence of the species in the fossil record, measured in meters above the point P at which the species was known to have first emerged. Letting {y_i, i = 1, ..., n} denote a sample of such distances above P at a random set of locations, the model

$$(Y_i \mid \theta) \stackrel{\text{IID}}{\sim} \text{Uniform}(0, \theta) \qquad (*)$$

emerges from simple and plausible assumptions. In this model the unknown θ > 0 can be used, through carbon dating, to estimate the species extinction time.

The marginal distribution of a single observation yi in this model may be written

$$p_{Y_i}(y_i \mid \theta) = \begin{cases} \frac{1}{\theta} & \text{if } 0 \le y_i \le \theta \\ 0 & \text{otherwise} \end{cases} = \frac{1}{\theta}\, I(0 \le y_i \le \theta)\,, \qquad (12)$$

where I(A) = 1 if A is true and 0 otherwise.

(a) Briefly explain why the statement {0 ≤ y_i ≤ θ for all i = 1, ..., n} is equivalent to the statement {m = max(y_1, ..., y_n) ≤ θ}, and use this to show that the joint distribution of Y = (Y_1, ..., Y_n) in this model is

$$f_{Y_1, \ldots, Y_n}(y_1, \ldots, y_n) = \frac{I(m \le \theta)}{\theta^n}\,. \qquad (13)$$

[20 points]

(b) Letting the observed values of (Y_1, ..., Y_n) be y = (y_1, ..., y_n), an important object in both frequentist and Bayesian inferential statistics is the likelihood function ℓ(θ | y), which is obtained from the joint distribution of (Y_1, ..., Y_n) simply by

(1) thinking of f_{Y_1,...,Y_n}(y_1, ..., y_n) as a function of θ for fixed y, and

(2) multiplying by an arbitrary positive constant c:

$$\ell(\theta \mid y) = c\, f_Y(y)\,. \qquad (14)$$

Using this terminology, in part (a) you showed that the likelihood function in this problem is ℓ(θ | y) = θ^{−n} I(θ ≥ m), where m is the largest of the y_i values. Both frequentists and Bayesians are interested in something called the maximum likelihood estimator (MLE) θ̂_MLE, which is the value of θ that makes ℓ(θ | y) as large as possible.

(i) Make a rough sketch of the likelihood function, and use your sketch to show that the MLE in this problem is θ̂_MLE = m = max(y_1, ..., y_n). [20 points]

(ii) Maximization of a function is usually accomplished by setting its first derivative to 0 and solving the resulting equation. Briefly explain why that method won’t work in finding the MLE in this case. [10 points]
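To accompany the rough sketch requested in (b)(i), here is a short Python sketch that plots ℓ(θ | y) = θ^{−n} I(θ ≥ m), borrowing the data from part (d) below purely for illustration; the plot shows the jump at θ = m followed by a strictly decreasing curve:

```python
# Plot of the likelihood l(theta | y) = theta**(-n) * I(theta >= m).
import numpy as np
import matplotlib.pyplot as plt

y = np.array([2.8, 1.7, 1.0, 5.1, 3.7, 1.5, 4.3, 2.0, 3.2, 2.1, 0.4])
n, m = len(y), y.max()

theta = np.linspace(0.01, 10, 1000)
lik = np.where(theta >= m, theta ** (-n), 0.0)

plt.plot(theta, lik)
plt.axvline(m, linestyle='--')         # the likelihood is maximized here
plt.xlabel('theta'); plt.ylabel('likelihood')
plt.show()
```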

(c) A positive quantity W follows the Pareto distribution (written W ∼ Pareto(α, β)) if, for parameters α, β > 0, it has density

$$f_W(w) = \begin{cases} \alpha\, \beta^\alpha\, w^{-(\alpha + 1)} & \text{if } w \ge \beta \\ 0 & \text{otherwise} \end{cases} \qquad (15)$$

This distribution has mean αβ/(α − 1) (if α > 1) and variance αβ²/[(α − 1)²(α − 2)] (if α > 2).


(i) For frequentists the likelihood function is just a function ℓ(θ | y), but for Bayesians it can be regarded as an un-normalized density function for θ. Show that, from this point of view, the likelihood function in this problem corresponds to the Pareto(n − 1, m) distribution. [10 points]

(ii) Bayes's Theorem for a one-dimensional continuous unknown (such as θ in this situation) says that the conditional density f_{Θ|Y}(θ | y) for θ given Y = y, called the posterior distribution for θ given the data, is a positive (normalizing) constant c times a PDF f_Θ(θ), called the prior distribution for θ, which captures any available information about θ external to the data set, times the likelihood distribution ℓ(θ | y):

$$\underbrace{f_{\Theta \mid Y}(\theta \mid y)}_{\substack{\text{posterior} \\ \text{distribution}}} = \underbrace{c}_{\substack{\text{normalizing} \\ \text{constant}}} \cdot \underbrace{f_\Theta(\theta)}_{\substack{\text{prior} \\ \text{distribution}}} \cdot \underbrace{\ell(\theta \mid y)}_{\substack{\text{likelihood} \\ \text{distribution}}} \qquad (16)$$

The posterior distribution is the goal of a Bayesian inferential analysis: it summarizes all available information, both external to and internal to your data set. Show that if the prior distribution for θ in this problem is taken to be (15), under the model (∗) above the posterior distribution is f_{Θ|Y}(θ | y) = Pareto[α + n, max(β, m)]. (Bayesian terminology: note that what just happened was that the product of two Pareto distributions (prior, likelihood) is another Pareto distribution (posterior); a prior distribution that makes this happen is called conjugate to the likelihood in the model.) [20 points]

(d) In an experiment conducted in the Antarctic in the 1980s to study a particular species of fossil ammonite, the following was a linearly rescaled version of the observed data: y = (y_1, ..., y_n) = (2.8, 1.7, 1.0, 5.1, 3.7, 1.5, 4.3, 2.0, 3.2, 2.1, 0.4). Prior information equivalent to a Pareto distribution specified by the choice (α, β) = (2.5, 4) was available.

(i) Plot the prior, likelihood, and posterior distributions arising from this data set on the same graph, explicitly identifying the three curves. [30 points]

(ii) Work out the posterior mean and SD (square root of the posterior variance), and use them to complete the following sentence:

On the basis of this prior and data information, the θ value for this species of fossil ammonite is about ________, give or take about ________.

[30 points]
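A sketch of how one might produce the plot in (i) and the numbers in (ii): it overlays the prior, the normalized likelihood (which by (c)(i) is Pareto(n − 1, m)) and the posterior, then evaluates the posterior mean and SD from the Pareto moment formulas quoted in part (c); the plotting range is an arbitrary choice:

```python
# Prior, normalized likelihood and posterior for part (d), plus the
# posterior mean and SD from the Pareto formulas in part (c).
import numpy as np
import matplotlib.pyplot as plt

y = np.array([2.8, 1.7, 1.0, 5.1, 3.7, 1.5, 4.3, 2.0, 3.2, 2.1, 0.4])
alpha, beta = 2.5, 4.0                       # prior specification from (d)
n, m = len(y), y.max()
a_post, b_post = alpha + n, max(beta, m)     # posterior, from (c)(ii)

def pareto_pdf(w, a, b):                     # density (15)
    return np.where(w >= b, a * b**a * w**(-(a + 1)), 0.0)

theta = np.linspace(0.01, 12, 2000)
plt.plot(theta, pareto_pdf(theta, alpha, beta), label='prior')
plt.plot(theta, pareto_pdf(theta, n - 1, m), label='likelihood (normalized)')
plt.plot(theta, pareto_pdf(theta, a_post, b_post), label='posterior')
plt.xlabel('theta'); plt.legend(); plt.show()

post_mean = a_post * b_post / (a_post - 1)                       # a_post > 1
post_var = a_post * b_post**2 / ((a_post - 1)**2 * (a_post - 2)) # a_post > 2
print(post_mean, np.sqrt(post_var))
```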


