Question
If \(X\) is a random variable that follows a Poisson distribution with mean \(\lambda > 0\), then the probability generating function of \(X\) is \(G(t) = {e^{\lambda (t - 1)}}\).
a.(i) Prove that \({\text{E}}(X) = \lambda \).
(ii) Prove that \({\text{Var}}(X) = \lambda \).[6]
b.\(Y\) is a random variable, independent of \(X\), that also follows a Poisson distribution with mean \(\lambda \).
If \(S = 2X – Y\) find
(i) \({\text{E}}(S)\);
(ii) \({\text{Var}}(S)\).[3]
c.Let \(T = \frac{X}{2} + \frac{Y}{2}\).
(i) Show that \(T\) is an unbiased estimator for \(\lambda \).
(ii) Show that \(T\) is a more efficient unbiased estimator of \(\lambda \) than \(S\).[3]
d.Could either \(S\) or \(T\) model a Poisson distribution? Justify your answer.[1]
e.By consideration of the probability generating function, \({G_{X + Y}}(t)\), of \(X + Y\), prove that \(X + Y\) follows a Poisson distribution with mean \(2\lambda \).[3]
f.Find
(i) \({G_{X + Y}}(1)\);
(ii) \({G_{X + Y}}( - 1)\).[2]
g.Hence find the probability that \(X + Y\) is an even number.[3]
Answer/Explanation
Markscheme
(i) \(G'(t) = \lambda {e^{\lambda (t - 1)}}\) A1
\({\text{E}}(X) = G'(1)\) M1
\( = \lambda \) AG
(ii) \(G''(t) = {\lambda ^2}{e^{\lambda (t - 1)}}\) M1
\( \Rightarrow G''(1) = {\lambda ^2}\) (A1)
\({\text{Var}}(X) = G''(1) + G'(1) - {\left( {G'(1)} \right)^2}\) (M1)
\( = {\lambda ^2} + \lambda - {\lambda ^2}\) A1
\( = \lambda \) AG
[6 marks]
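As an illustrative cross-check (not part of the markscheme), the two derivatives above can be verified symbolically; the sketch below uses Python with sympy, and the symbol names are arbitrary choices.

```python
# Sketch: verify E(X) = G'(1) and Var(X) = G''(1) + G'(1) - (G'(1))^2
# for the Poisson PGF G(t) = exp(lam*(t - 1)).
import sympy as sp

t, lam = sp.symbols('t lam', positive=True)
G = sp.exp(lam * (t - 1))

E_X = sp.diff(G, t).subs(t, 1)                                  # G'(1)
Var_X = sp.simplify(sp.diff(G, t, 2).subs(t, 1) + E_X - E_X**2)

print(E_X, Var_X)   # prints: lam lam
```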
(i) \({\text{E}}(S) = 2\lambda - \lambda = \lambda \) A1
(ii) \({\text{Var}}(S) = 4\lambda + \lambda = 5\lambda \) (A1)A1
Note: First A1 can be awarded for either \(4\lambda \) or \(\lambda \).
[3 marks]
(i) \({\text{E}}(T) = \frac{\lambda }{2} + \frac{\lambda }{2} = \lambda \;\;\;\)(so \(T\) is an unbiased estimator) A1
(ii) \({\text{Var}}(T) = \frac{1}{4}\lambda + \frac{1}{4}\lambda = \frac{1}{2}\lambda \) A1
this is less than \({\text{Var}}(S)\), therefore \(T\) is the more efficient estimator R1AG
Note: Follow through their variances from (b)(ii) and (c)(ii).
[3 marks]
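As a quick sanity check (not in the original markscheme), a Monte Carlo simulation can confirm the means and variances of \(S\) and \(T\); the value \(\lambda = 2\), the seed, and the sample size below are arbitrary choices.

```python
# Sketch: estimate E and Var of S = 2X - Y and T = X/2 + Y/2 by simulation.
import numpy as np

rng = np.random.default_rng(0)
lam, n = 2.0, 1_000_000
X = rng.poisson(lam, n)
Y = rng.poisson(lam, n)

S = 2 * X - Y
T = X / 2 + Y / 2

print(S.mean(), S.var())   # approx lam = 2 and 5*lam = 10
print(T.mean(), T.var())   # approx lam = 2 and lam/2 = 1
```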
no, because for both \(S\) and \(T\) the mean does not equal the variance (\(\lambda \ne 5\lambda \) and \(\lambda \ne \frac{1}{2}\lambda \)) R1
[1 mark]
\({G_{X + Y}}(t) = {e^{\lambda (t - 1)}} \times {e^{\lambda (t - 1)}} = {e^{2\lambda (t - 1)}}\) M1A1
which is the probability generating function for a Poisson with a mean of \(2\lambda \) R1AG
[3 marks]
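For readers who want to confirm the algebra in part (e), the product of the two PGFs can also be checked symbolically; this sketch is illustrative only, with arbitrary symbol names.

```python
# Sketch: G_X(t) * G_Y(t) equals the PGF of a Poisson(2*lam) variable.
import sympy as sp

t, lam = sp.symbols('t lam', positive=True)
G = sp.exp(lam * (t - 1))              # common PGF of X and Y

diff = sp.simplify(G * G - sp.exp(2 * lam * (t - 1)))
print(diff)   # prints: 0
```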
(i) \({G_{X + Y}}(1) = 1\) A1
(ii) \({G_{X + Y}}( - 1) = {e^{ - 4\lambda }}\) A1
[2 marks]
\({G_{X + Y}}(1) = p(0) + p(1) + p(2) + p(3) \ldots \)
\({G_{X + Y}}( - 1) = p(0) - p(1) + p(2) - p(3) \ldots \)
so \({\text{2P(even)}} = {G_{X + Y}}(1) + {G_{X + Y}}( - 1)\) (M1)(A1)
\({\text{P(even)}} = \frac{1}{2}(1 + {e^{ - 4\lambda }})\) A1
[3 marks]
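As an illustrative check of part (g) (not part of the markscheme), a simulation with an arbitrarily chosen \(\lambda = 1\) can be compared against \(\frac{1}{2}(1 + {e^{ - 4\lambda }})\).

```python
# Sketch: compare the simulated P(X + Y even) with (1 + exp(-4*lam)) / 2.
import numpy as np

rng = np.random.default_rng(0)
lam, n = 1.0, 1_000_000
total = rng.poisson(lam, n) + rng.poisson(lam, n)   # X + Y

print((total % 2 == 0).mean())       # simulated probability
print((1 + np.exp(-4 * lam)) / 2)    # exact value, approx 0.5092
```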
Total [21 marks]
Examiners report
Solutions to the different parts of this question proved to be extremely variable in quality with some parts well answered by the majority of the candidates and other parts accessible to only a few candidates. Part (a) was well answered in general although the presentation was sometimes poor with some candidates doing the differentiation of \(G(t)\) and the substitution of \(t = 1\) simultaneously.
Part (b) was well answered in general, the most common error being to state that \({\text{Var}}(2X - Y) = {\text{Var}}(2X) - {\text{Var}}(Y)\).
Parts (c) and (d) were well answered by the majority of candidates.
Solutions to (e), however, were extremely disappointing with few candidates giving correct solutions. A common incorrect solution was the following:
\(\;\;\;{G_{X + Y}}(t) = {G_X}(t){G_Y}(t)\)
Differentiating,
\(\;\;\;{G'_{X + Y}}(t) = {G'_X}(t){G_Y}(t) + {G_X}(t){G'_Y}(t)\)
\(\;\;\;{\text{E}}(X + Y) = {G'_{X + Y}}(1) = {\text{E}}(X) \times 1 + {\text{E}}(Y) \times 1 = 2\lambda \)
This is correct mathematics but it does not show that \(X + Y\) is Poisson and it was given no credit. Even the majority of candidates who showed that \({G_{X + Y}}(t) = {{\text{e}}^{2\lambda (t - 1)}}\) failed to state that this result proved that \(X + Y\) is Poisson and they usually differentiated this function to show that \({\text{E}}(X + Y) = 2\lambda \).
In (f), most candidates stated that \({G_{X + Y}}(1) = 1\) even if they were unable to determine \({G_{X + Y}}(t)\) but many candidates were unable to evaluate \({G_{X + Y}}( - 1)\). Very few correct solutions were seen to (g) even if the candidates correctly evaluated \({G_{X + Y}}(1)\) and \({G_{X + Y}}( - 1)\).
Question
A random variable \(X\) has a population mean \(\mu \).
a.Explain briefly the meaning of
(i) an estimator of \(\mu \);
(ii) an unbiased estimator of \(\mu \).[3]
b.A random sample \({X_1},{\text{ }}{X_2},{\text{ }}{X_3}\) of three independent observations is taken from the distribution of \(X\).
An unbiased estimator of \(\mu ,{\text{ }}\mu \ne 0\), is given by \(U = \alpha {X_1} + \beta {X_2} + (\alpha - \beta ){X_3}\),
where \(\alpha ,{\text{ }}\beta \in \mathbb{R}\).
(i) Find the value of \(\alpha \).
(ii) Show that \({\text{Var}}(U) = {\sigma ^2}\left( {2{\beta ^2} - \beta + \frac{1}{2}} \right)\) where \({\sigma ^2} = {\text{Var}}(X)\).
(iii) Find the value of \(\beta \) which gives the most efficient estimator of \(\mu \) of this form.
(iv) Write down an expression for this estimator and determine its variance.
(v) Write down a more efficient estimator of \(\mu \) than the one found in (iv), justifying your answer.[12]
Answer/Explanation
Markscheme
(i) an estimator \(T\) is a formula (or statistic) that can be applied to the values in any sample taken from \(X\) A1
to estimate the value of \(\mu \) A1
(ii) an estimator is unbiased if \({\text{E}}(T) = \mu \) A1
[3 marks]
(i) using linearity and the definition of an unbiased estimator M1
\(\mu = \alpha \mu + \beta \mu + (\alpha - \beta )\mu \) A1
obtain \(\alpha = \frac{1}{2}\) A1
(ii) attempt to compute \({\text{Var}}(U)\) using correct formula M1
\({\text{Var}}(U) = \frac{1}{4}{\sigma ^2} + {\beta ^2}{\sigma ^2} + {\left( {\frac{1}{2} - \beta } \right)^2}{\sigma ^2}\) A1
\({\text{Var}}(U) = {\sigma ^2}\left( {2{\beta ^2} – \beta + \frac{1}{2}} \right)\) AG
(iii) attempt to minimise quadratic in \(\beta \) (or equivalent) (M1)
\(\beta = \frac{1}{4}\) A1
(iv) \(U = \frac{1}{2}{X_1} + \frac{1}{4}{X_2} + \frac{1}{4}{X_3}\) A1
\({\text{Var}}(U) = \frac{3}{8}{\sigma ^2}\) A1
(v) \(\frac{1}{3}{X_1} + \frac{1}{3}{X_2} + \frac{1}{3}{X_3}\) A1
\({\text{Var}}\left( {\frac{1}{3}{X_1} + \frac{1}{3}{X_2} + \frac{1}{3}{X_3}} \right) = \frac{3}{9}{\sigma ^2} = \frac{1}{3}{\sigma ^2}\) A1
\( < {\text{Var}}(U)\) R1
Note: Accept \(\sum\limits_{i = 1}^3 {{\lambda _i}{X_i}} \) if \(\sum\limits_{i = 1}^3 {{\lambda _i} = 1} \) and \(\sum\limits_{i = 1}^3 {\lambda _i^2 < \frac{3}{8}} \) and follow through to the variance if this is the case.
[12 marks]
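As an illustrative aside (not part of the markscheme), the quadratic in \(\beta\) can be minimised symbolically to recover parts (iii) and (iv); the symbol names below are arbitrary.

```python
# Sketch: minimise Var(U)/sigma^2 = 2*beta^2 - beta + 1/2 over beta.
import sympy as sp

beta = sp.symbols('beta', real=True)
var_ratio = 2 * beta**2 - beta + sp.Rational(1, 2)

beta_star = sp.solve(sp.diff(var_ratio, beta), beta)[0]
print(beta_star, var_ratio.subs(beta, beta_star))   # 1/4 3/8  (> 1/3 for the mean)
```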
Total [15 marks]
Examiners report
In general, solutions to (a) were extremely disappointing with the vast majority unable to give correct explanations of estimators and unbiased estimators. Solutions to (b) were reasonably good in general, indicating perhaps that the poor explanations in (a) were due to an inability to explain what they know rather than a lack of understanding.
Question
A biased cubical die has its faces labelled \(1,{\rm{ }}2,{\rm{ }}3,{\rm{ }}4,{\rm{ }}5\) and \(6\). The probability of rolling a \(6\) is \(p\), with equal probabilities for the other scores.
a.The die is rolled once, and the score \({X_1}\) is noted.
(i) Find \({\text{E}}({X_1})\).
(ii) Hence obtain an unbiased estimator for \(p\).[4]
b.The die is rolled a second time, and the score \({X_2}\) is noted.
(i) Show that \(k({X_1} - 3) + \left( {\frac{1}{3} - k} \right)({X_2} - 3)\) is also an unbiased estimator for \(p\) for all values of \(k \in \mathbb{R}\).
(ii) Find the value for \(k\), which maximizes the efficiency of this estimator.[7]
Answer/Explanation
Markscheme
let \(X\) denote the score on the die
(i) \({\text{P}}(X = x) = \left\{ {\begin{array}{*{20}{c}} {\frac{{1 - p}}{5},}&{x = 1,{\text{ 2}},{\text{ 3}},{\text{ 4}},{\text{ 5}}} \\ {p,}&{x = 6} \end{array}} \right.\) (M1)
\(E({X_1}) = (1 + 2 + 3 + 4 + 5)\frac{{1 - p}}{5} + 6p\) M1
\( = 3 + 3p\) A1
(ii) so an unbiased estimator for \(p\) would be \(\frac{{{X_1} - 3}}{3}\) A1
[4 marks]
(i) \(E\left( {k({X_1} - 3) + \left( {\frac{1}{3} - k} \right)({X_2} - 3)} \right)\) M1
\( = kE({X_1} - 3) + \left( {\frac{1}{3} - k} \right)E({X_2} - 3)\) M1
\( = k(3p) + \left( {\frac{1}{3} - k} \right)(3p)\) A1
Note: Award the A1 for any correct expression involving just \(k\) and \(p\).
\( = p\) AG
hence \(k({X_1} - 3) + \left( {\frac{1}{3} - k} \right)({X_2} - 3)\) is an unbiased estimator of \(p\)
(ii) \({\text{Var}}\left( {k({X_1} - 3) + \left( {\frac{1}{3} - k} \right)({X_2} - 3)} \right)\) M1
\( = {k^2}{\text{Var}}({X_1} - 3) + {\left( {\frac{1}{3} - k} \right)^2}{\text{Var}}({X_2} - 3)\) A1
\( = \left( {{k^2} + {{\left( {\frac{1}{3} - k} \right)}^2}} \right){\sigma ^2}\) (where \({\sigma ^2}\) denotes \({\text{Var}}(X)\))
valid attempt to minimise the variance M1
\(k = \frac{1}{6}\) A1
Note: Accept an argument which states that the most efficient estimator is the one having equal coefficients of \({X_1}\) and \({X_2}\).
[7 marks]
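A hedged numerical illustration (not part of the markscheme): simulating the biased die with an arbitrarily chosen \(p = 0.3\) shows the estimator's mean is close to \(p\) for several values of \(k\), with the smallest variance at \(k = \frac{1}{6}\).

```python
# Sketch: check unbiasedness and compare variances for a few values of k.
import numpy as np

rng = np.random.default_rng(0)
p, n = 0.3, 1_000_000
faces = np.arange(1, 7)
probs = [(1 - p) / 5] * 5 + [p]      # P(6) = p, other faces equally likely

X1 = rng.choice(faces, size=n, p=probs)
X2 = rng.choice(faces, size=n, p=probs)

for k in (0.0, 1 / 6, 1 / 3):
    est = k * (X1 - 3) + (1 / 3 - k) * (X2 - 3)
    print(k, est.mean(), est.var())  # mean approx p; variance smallest at k = 1/6
```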
Total [11 marks]
Examiners report
[N/A]
Question
The random variable X has a binomial distribution with parameters \(n\) and \(p\).
Let \(U = nP\left( {1 - P} \right)\).
a.Show that \(P = \frac{X}{n}\) is an unbiased estimator of \(p\).[2]
b.i.Show that \({\text{E}}\left( U \right) = \left( {n - 1} \right)p\left( {1 - p} \right)\).[5]
b.ii.Hence write down an unbiased estimator of Var(X).[1]
Answer/Explanation
Markscheme
\({\text{E}}\left( P \right) = {\text{E}}\left( {\frac{X}{n}} \right) = \frac{1}{n}{\text{E}}\left( X \right)\) M1
\( = \frac{1}{n}\left( {np} \right) = p\) A1
so \(P\) is an unbiased estimator of \(p\) AG
[2 marks]
\({\text{E}}\left( {nP\left( {1 - P} \right)} \right) = {\text{E}}\left( {n\left( {\frac{X}{n}} \right)\left( {1 - \frac{X}{n}} \right)} \right)\)
\( = {\text{E}}\left( X \right) - \frac{1}{n}{\text{E}}\left( {{X^2}} \right)\) M1A1
use of \({\text{E}}\left( {{X^2}} \right) = {\text{Var}}\left( X \right) + {\left( {{\text{E}}\left( X \right)} \right)^2}\) M1
Note: Allow candidates to work with P rather than X for the above 3 marks.
\( = np - \frac{1}{n}\left( {np\left( {1 - p} \right) + {{\left( {np} \right)}^2}} \right)\) A1
\( = np - p\left( {1 - p} \right) - n{p^2}\)
\( = np\left( {1 - p} \right) - p\left( {1 - p} \right)\) A1
Note: Award A1 for the factor of \(\left( {1 - p} \right)\).
\( = \left( {n - 1} \right)p\left( {1 - p} \right)\) AG
[5 marks]
an unbiased estimator is \(\frac{{{n^2}P\left( {1 - P} \right)}}{{n - 1}}\left( { = \frac{{nU}}{{n - 1}}} \right)\) A1
[1 mark]
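As an illustrative check (not part of the markscheme), a simulation with arbitrarily chosen values \(n = 10\) and \(p = 0.4\) confirms that \(U\) averages to \(\left( {n - 1} \right)p\left( {1 - p} \right)\) and that \(\frac{{nU}}{{n - 1}}\) averages to \({\text{Var}}\left( X \right) = np\left( {1 - p} \right)\).

```python
# Sketch: verify E(U) = (n - 1)p(1 - p) and E(nU/(n - 1)) = np(1 - p).
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 10, 0.4, 1_000_000
X = rng.binomial(n, p, trials)
P = X / n
U = n * P * (1 - P)

print(U.mean(), (n - 1) * p * (1 - p))            # approx 2.16 and 2.16
print((n * U / (n - 1)).mean(), n * p * (1 - p))  # approx 2.4 and 2.4
```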
Examiners report
[N/A]