UNIVERSITY OF CALICUT
SCHOOL OF DISTANCE EDUCATION
B.Sc. Mathematics (2011 Admn.)
III SEMESTER
COMPLEMENTARY COURSE
STATISTICAL INFERENCE
QUESTION BANK
1. A function of the values of a random sample taken from a population is known as a:
(i) Parameter
(ii) Statistic
(iii) Population
(iv) None of these
2.
The probability distribution of a statistic is,
(i) Sampling distribution
(ii) distribution function
(iii) Mass function
(iv) None of these
3.
A population characteristic, such as a population mean, is called
(i) A statistic
(ii) A parameter
(iii) A sample
(iv) The mean deviation
4. Which of the following is a sampling distribution?
(i) Binomial
(ii) Poisson
(iii) Chi-square
(iv) None of these
5. A simple random sample of 100 observations was taken from a large population. The sample mean and the standard deviation were determined to be 80 and 12 respectively. The standard error of the mean is
(i) 1.2
(ii) 0.8
(iii) 12
(iv) 8
6. Since the sample size is always smaller than the size of the population, the sample mean
(i) Must always be smaller than the population mean
(ii) Must be larger than the population mean
(iii) Must be equal to the population mean
(iv) Can be smaller, larger, or equal to the population mean
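The standard-error arithmetic in question 5 can be checked directly; a minimal sketch using the values given in the question:

```python
import math

# Question 5: n = 100 observations, sample standard deviation s = 12.
# The standard error of the mean is s / sqrt(n).
n, s = 100, 12
se = s / math.sqrt(n)
print(se)  # 1.2 -> option (i)
```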
7.
For a population with any distribution, the form of the sampling distribution of the
sample mean is,
(i) Sometimes normal for large sample sizes
(ii) Sometimes normal for all sample sizes
(iii) Always normal for all sample sizes
(iv) Always normal for large sample sizes
8.
As the sample size increases, the
(i) Standard deviation of the population decreases
(ii) Population means increases
(iii) Standard error of the mean decreases
(iv) Standard error of the mean increases
Statistical Inference
Page 1
School of Distance Education
9.
Doubling the size of the sample will
(i) Reduce the standard error of the mean to one-half its current value
(ii) Reduce the standard error of the mean to approximately 70% of its current value
(iii) Have no effect on the standard error of the mean
(iv) Double the standard error of the mean
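Question 9 rests on SE = σ/√n: doubling n divides the standard error by √2, leaving about 70% of the old value. A quick sketch (the σ and n values here are illustrative, not from the question):

```python
import math

# SE = sigma / sqrt(n); doubling n multiplies SE by 1/sqrt(2) ~ 0.707.
sigma, n = 10.0, 50  # illustrative values
se_before = sigma / math.sqrt(n)
se_after = sigma / math.sqrt(2 * n)
print(round(se_after / se_before, 3))  # 0.707 -> option (ii)
```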
10.
In point estimation
(i) Data from the population is used to estimate the population parameter
(ii) Data from the sample is used to estimate the population parameter
(iii) Data from the sample is used to estimate the sample statistic
(iv) The mean of the population equals the mean of the sample
11. The sample statistic s is the point estimator of
(i) σ
(ii) μ
(iii) x̄
(iv) p
12. The sample mean is the point estimator of
(i) μ
(ii) σ
(iii) x̄
(iv) p
13. The probability distribution of the sample mean is called the
(i) Central probability distribution
(ii) Sampling distribution of the mean
(iii) Random variation
(iv) Standard error
14. The expected value of the random variable x̄ is,
(i) The standard error
(ii) The sample size
(iii) The size of the population
(iv) None of these
15. A normal population has a mean of 75 and a standard deviation of 8. A random sample of 800 is selected. The expected value of x̄ is,
(i) 75
(ii) 8
(iii) 7.5
(iv) p
16. As the sample size becomes larger, the sampling distribution of the sample mean approaches a,
(i) Binomial distribution
(ii) Poisson distribution
(iii) Normal distribution
(iv) Chi-square distribution
17. Whenever the population has a normal probability distribution, the sampling distribution of x̄ is a normal probability distribution for,
(i) Only large sample sizes
(ii) Only small sample sizes
(iii) Any sample size
(iv) Only samples of size thirty or greater
18. The m.g.f. of the mean of n random samples taken from N(μ, σ) is,
(i) e^(tμ + t²σ²/n)
(ii) e^(tμ + t²σ²/2n)
(iii) e^(tμ/n + t²σ²/2n)
(iv) e^(tμ + t²σ²/2)
19. If Z follows a standard normal distribution, P(Z > 1.67) is
(i) 0.5
(ii) 0.64
(iii) 0.045
(iv) 0.45
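The tail probability in question 19 can be computed from the complementary error function, P(Z > z) = ½·erfc(z/√2); a short check:

```python
import math

# P(Z > 1.67) for a standard normal variable.
z = 1.67
tail = 0.5 * math.erfc(z / math.sqrt(2))
print(round(tail, 4))  # ~0.0475; option (iii) 0.045 is the closest choice
```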
20.
The probability distribution of all possible values of the sample proportion p is the
(i) Probability density function of p
(ii) Sampling distribution of x̄
(iii) Same as p , since it considers all possible values of the sample proportion
(iv) Sampling distribution of p
21. If X follows a standard normal distribution, then Y = X² follows,
(i) Normal
(ii) Chi-square with 2 d.f.
(iii) Chi-square with 1 d.f.
(iv) None of these
22. The range of a chi-square variable is,
(i) 0 to n
(ii) 0 to ∞
(iii) -∞ to ∞
(iv) None of these
23.
For a random variable following a chi-square distribution,
(i) mean = 2(variance)
(ii) 2(mean) = variance
(iii) Mean = variance
(iv) None of these
24.
The mean of a chi-square random variable with ‘n’ d.f. is,
(i) 2n
(ii) n+2
(iii) n
(iv) None of these
25.
Variance of a chi-square random variable with ‘n’ d.f. is,
(i) 2n
(ii) n+2
(iii) n
(iv) None of these
26. The m.g.f. of a random variable following a chi-square distribution with 'n' d.f. is,
(i) (1 - 2t)^(n/2)
(ii) (1 - 2t)^(-n/2)
(iii) (1 - t)^(n/2)
(iv) (1 - t)^(-n/2)
27. The m.g.f. of the square of a standard normal random variable is,
(i) (1 - 2t)^(1/2)
(ii) (1 - 2t)^(-1/2)
(iii) (1 - t)^(1/2)
(iv) (1 - t)^(-1/2)
28.
If X and Y are two independent chi-square variables with degrees of freedom 3 and 4
respectively, then Z = X + Y follows,
(i) Chi-square with 7 d.f.
(ii) Chi-square with 12 d.f.
(iii) Chi-square with 1 d.f.
(iv) None of these
29. The chi-square table gives the values χ²_α for a χ² variable with various degrees of freedom and for various values of α, such that,
(i) P(χ² ≤ χ²_α) = α
(ii) P(χ² ≥ χ²_α) = α
(iii) P(|χ²| ≥ χ²_α) = α
(iv) None of these
30. If X₁, X₂, …, Xₙ are n random samples taken from N(μ, σ), then Y = Σᵢ₌₁ⁿ ((Xᵢ - μ)/σ)² follows,
(i) χ²(n - 1)
(ii) χ²(1)
(iii) χ²(n + 1)
(iv) χ²(n)
31. The probability distribution of the sum of squares of 'n' independent standard normal random variables is,
(i) Normal
(ii) Chi-square
(iii) t
(iv) None of these
32. If X is a uniform random variable over [0, θ], then Y = -2 logₑ(X/θ) follows,
(i) Normal
(ii) Chi-square with 'n' d.f.
(iii) Exponential
(iv) None of these
33. If X ~ χ²(n), then the mode of X is at,
(i) n
(ii) n - 1
(iii) n - 2
(iv) None of these
34. As n becomes large, a chi-square variable with n degrees of freedom follows,
(i) N(n, 2n)
(ii) N(n, √(2n))
(iii) N(2n, 2n)
(iv) None of these
35. Given P(χ²(15) > k) = 0.80. Then the value of k is,
(i) -10.307
(ii) +10.307
(iii) 6.307
(iv) None of these
36. For two independent random variables X and Y, where X ~ N(0, 1) and Y ~ χ²(n), Z follows a t-distribution with n degrees of freedom if Z is,
(i) Y/√(X/n)
(ii) X/√(Y/n)
(iii) X/√(nY)
(iv) None of these
37.
'Student' is the pen name of,
(i) Newton
(ii) Chebychev
(iii) Laplace
(iv) Gosset
38. The range of a t variable is,
(i) 0 to n
(ii) 0 to ∞
(iii) -∞ to ∞
(iv) None of these
39. The p.d.f. of a t-variable with n d.f. is,
(i) [Γ((n+1)/2) / Γ(n/2)] (1 + t²/n)^(-(n+1)/2)
(ii) [Γ((n+1)/2) / (√(nπ) Γ(n/2))] (1 + t²/n)^(-(n+1)/2)
(iii) [Γ((n+1)/2) / (√(nπ) Γ(n/2))] (1 + t²/n)^((n+1)/2)
(iv) None of these
40.
For a random variable t following t distribution with 7 d.f., the mode is,
(i) 0
(ii) 7
(iii) 6
(iv) None of these
41. A statistic following the t distribution with n - 1 d.f. is,
(i) (x̄ - μ)√(n - 1)/S
(ii) (x̄ - μ)√(n - 1)/σ
(iii) (x̄ - μ)√n/S
(iv) None of these
42. If X₁ and X₂ are two independent standard normal variables, then t = √2·X₁/√(X₁² + X₂²) follows,
(i) Chi-square distribution
(ii) t-distribution
(iii) F-distribution
(iv) None of these
43. Tables of the t-distribution give the values t_α for various degrees of freedom and for various values of α, such that,
(i) P(|t| ≥ t_α) = α
(ii) P(t ≥ t_α) = α
(iii) P(|t| ≤ t_α) = α
(iv) None of these
44. The statistic
(x̄₁ - x̄₂) / √[((n₁S₁² + n₂S₂²)/(n₁ + n₂ - 2)) (1/n₁ + 1/n₂)]
follows,
(i) t-distribution with n₁ + n₂ - 1 d.f.
(ii) t-distribution with n₁ + n₂ - 2 d.f.
(iii) F-distribution with (n₁, n₂) d.f.
(iv) None of these
45. If t follows a t-distribution with 'n' degrees of freedom, then Z = t² follows,
(i) F-distribution with (1, n) d.f.
(ii) F-distribution with (n, 1) d.f.
(iii) Chi-square distribution
(iv) None of these
46. The ratio of the squares of two independent standard normal random variables is
(i) An F random variable with (1, 1) degrees of freedom
(ii) An F random variable with (n, 1) degrees of freedom
(iii) An F random variable with (1, n) degrees of freedom
(iv) None of these
47. If F follows an F distribution with (m, n) degrees of freedom, then 1/F follows,
(i) t-distribution with m d.f.
(ii) t-distribution with n d.f.
(iii) F-distribution with (n, m) d.f.
(iv) None of these
48. If t ~ t(n), then as n → ∞, t follows,
(i) F-distribution with (1, n) d.f.
(ii) F-distribution with (n, 1) d.f.
(iii) N(0, 1)
(iv) None of these
49. If t ~ t(5), the value of 'a' such that P(-a ≤ t ≤ a) = 0.98 is
(i) 3.365
(ii) 2.365
(iii) 1.365
(iv) None of these
50. Parameters are
(i) Functions of sample values
(ii) Functions of population values
(iii) The averages taken from a sample
(iv) Functions of either sample or population values
51.
Sampling distribution of x̄ is the
(i) Probability distribution of the sample mean
(ii) Probability distribution of the sample proportion
(iii) Mean of the sample
(iv) Mean of the population
52. If n increases, the Student's t distribution tends to the
(i) Normal
(ii) F
(iii) Cauchy
(iv) None of these
53. The ratio of two independent standard normal random variables follows,
(i) t(1)
(ii) F(1, 1)
(iii) N(0, 1)
(iv) None of these
54. The F distribution was invented by
(i) Fisher
(ii) Snedecor
(iii) Gosset
(iv) None of these
55. The range of an F variable is,
(i) 0 to n
(ii) 0 to ∞
(iii) -∞ to ∞
(iv) None of these
56. Let independent samples of sizes n₁ and n₂ be taken from a normal population with mean μ and standard deviation σ, and let S₁² and S₂² be the respective sample variances. Then F = n₁S₁²(n₂ - 1) / (n₂S₂²(n₁ - 1)) follows,
(i) F(n₁ - 1, n₂ - 1)
(ii) F(n₁, n₂)
(iii) F(n₁ + 1, n₂ + 1)
(iv) None of these
57.
The mode of F ~ F(n₁, n₂) is,
(i) F = n₂(n₁ - 2) / (n₁(n₂ + 2))
(ii) F = n₂(n₁ + 2) / (n₁(n₂ - 2))
(iii) F = n₁(n₁ - 2) / (n₁(n₂ - 2))
(iv) None of these
58. Tables of the F-distribution give the values F_α for various values of n₁, n₂ and α, such that,
(i) P(F(n₁, n₂) ≥ F_α) = α
(ii) P(F(n₁, n₂) ≤ F_α) = α
(iii) P(F(n₁, n₂) = F_α) = α
(iv) None of these
59. The ratio of the squares of two independent standard normal random variables is,
(i) F(n₁, n₂)
(ii) F(1, n₂)
(iii) F(1, 1)
(iv) None of these
60. If X follows an F distribution with (n₁, n₂) degrees of freedom and Y follows an F distribution with (n₂, n₁) degrees of freedom, then,
(i) P(X ≥ c) = P(Y ≥ c)
(ii) P(X ≥ c) = P(Y ≥ 1/c)
(iii) P(X ≥ c) = P(Y ≤ c)
(iv) P(X ≥ c) = P(Y ≤ 1/c)
61. Let X follow an F distribution with (n, n) degrees of freedom, and let α, β (α < β) be such that P(X ≥ β) = P(X ≤ α). Then the value of αβ is,
(i) 2
(ii) 1
(iii) 1/2
(iv) None of these
62. If X follows an F distribution with (n₁, n₂) degrees of freedom, then as n₂ → ∞, Y = n₁X follows,
(i) χ²(n₁)
(ii) χ²(n₂)
(iii) t(n₁)
(iv) None of these
63. For a random variable following an F distribution, the mode is always,
(i) One
(ii) Less than one
(iii) Greater than one
(iv) None of these
64. If X₁ and X₂ are two independent standard normal variables, then t = √2·X₁/√(X₁² + X₂²) follows,
(i) t(2)
(ii) t(n)
(iii) t(1)
(iv) None of these
65. The theory of estimation was founded by
(i) Laplace
(ii) Fermat
(iii) Fisher
(iv) None of these
66. A sample constant representing the population parameter is known as an,
(i) Expectation
(ii) Estimate
(iii) Variance
(iv) None of these
67. A single numerical value used as an estimate of a population parameter is known as,
(i) A parameter
(ii) A population parameter
(iii) A mean estimator
(iv) A point estimate
68. An unbiased estimator of a parameter θ is an estimator t with,
(i) E(t) = θ
(ii) E(t) ≠ θ
(iii) E(t) > θ
(iv) None of these
69. An estimator t is biased when,
(i) E(t) = θ
(ii) E(t) ≠ θ
(iii) E(t) < θ
(iv) None of these
70. The estimator with the smallest variance is the,
(i) Unbiased estimator
(ii) Consistent estimator
(iii) Efficient estimator
(iv) Sufficient estimator
71. Any statistic suggested as an estimator for a population parameter is a,
(i) Point estimator
(ii) Interval estimator
(iii) Unbiased estimator
(iv) None of these
72. A property of a point estimator that occurs whenever larger sample sizes tend to provide point estimates closer to the population parameter is known as,
(i) Unbiasedness
(ii) Efficiency
(iii) Consistency
(iv) None of these
73. For the random sample x₁, x₂, …, xₙ taken from B(1, p), an unbiased estimator of p², where T = Σᵢ₌₁ⁿ xᵢ, is,
(i) T(T - 1)/n
(ii) T(T - 1)/(n - 1)
(iii) T(T - 1)/(n(n - 1))
(iv) (T - 1)/(n(n - 1))
74. E(tₙ) → θ and V(tₙ) → 0, as n → ∞, are sufficient conditions for,
(i) Unbiasedness
(ii) Efficiency
(iii) Consistency
(iv) None of these
75. For the random sample x₁, x₂, …, xₙ taken from a Poisson population with parameter λ, nx̄/(n + 1) is a ------------ estimator of λ.
(i) Unbiased
(ii) Consistent
(iii) Efficient
(iv) Sufficient
76. If tₙ → θ in probability, then tₙ is a ----- estimator of θ.
(i) Unbiased
(ii) Consistent
(iii) Efficient
(iv) Sufficient
77. For the random sample x₁, x₂, …, xₙ taken from N(μ, σ), the sample variance is a ------- estimator of the population variance.
(i) Unbiased
(ii) Consistent
(iii) Efficient
(iv) Sufficient
78. Let x₁, x₂, …, xₙ be a random sample taken from a population with p.d.f. f(x, θ) = θx^(θ-1); 0 < x < 1, θ > 0. Then a sufficient estimator for θ is,
(i) Σᵢ₌₁ⁿ xᵢ
(ii) Πᵢ₌₁ⁿ xᵢ
(iii) Σᵢ₌₁ⁿ xᵢ²
(iv) None of these
79. The sample variance is not a ----- estimator, but it is a ----- estimator, of the population variance.
(i) Unbiased, consistent
(ii) Biased, efficient
(iii) Consistent, unbiased
(iv) None of these
80. Let t be the most efficient estimator of the parameter θ. Then the efficiency of any other unbiased estimator t₁ of θ is defined as,
(i) E(t₁) = SD(t₁)/SD(t)
(ii) E(t₁) = SD(t)/SD(t₁)
(iii) E(t₁) = var(t₁)/var(t)
(iv) E(t₁) = var(t)/var(t₁)
81. Given two unbiased point estimators of the same population parameter, the point estimator with the smaller variance is said to have,
(i) Smaller relative efficiency
(ii) Greater relative efficiency
(iii) Smaller consistency
(iv) Larger consistency
82. The MLE need not always be,
(i) Unbiased
(ii) Efficient
(iii) Consistent
(iv) None of these
83. The MLE of θ for the distribution f(x) = (1/2)e^(-|x-θ|), -∞ < x < ∞, is,
(i) The mean of the samples
(ii) The maximum value of the samples
(iii) The minimum value of the samples
(iv) The median of the samples
84. The MLE of λ, based on random samples taken from a Poisson population with parameter λ, is,
(i) x̄
(ii) x̄²
(iii) nx̄
(iv) None of these
85. The moment estimate of θ, if the probability masses are
X:    1,        2,        3,        4
f(x): (1-θ)/4,  (1-θ)/4,  (1+θ)/4,  (1+θ)/4;   0 < θ < 1,
and the observed frequencies are 1, 5, 7 and 7 respectively, is,
(i) 0.4
(ii) 0.5
(iii) 0.6
(iv) None of these
86. In case of finding the confidence interval for the mean of a normal population with known SD, the table values are taken from the,
(i) t-table
(ii) Standard normal table
(iii) Chi-square table
(iv) None of these
87. In case of finding the confidence interval for the mean of a normal population with unknown SD, the table values are taken from the,
(i) t-table
(ii) Standard normal table
(iii) Chi-square table
(iv) None of these
88. An estimator Tₙ of a population parameter θ which converges in probability to θ as n tends to infinity is said to be,
(i) Unbiased
(ii) Efficient
(iii) Consistent
(iv) None of these
89. The estimator Σx/n of the population mean is,
(i) Unbiased
(ii) Consistent
(iii) Both
(iv) None of these
90. A property of a point estimator that occurs whenever the expected value of the point estimator is equal to the population parameter it estimates is known as,
(i) Unbiasedness
(ii) Efficiency
(iii) Consistency
(iv) None of these
91. The factorization theorem for sufficiency is known as the,
(i) Fisher–Neyman theorem
(ii) Cramer–Rao theorem
(iii) Rao–Blackwell theorem
(iv) None of these
92. If the expected value of an estimator 't' is not equal to its parameter θ, then 't' is
(i) An unbiased estimator of θ
(ii) A biased estimator of θ
(iii) A sufficient estimator of θ
(iv) None of these
93. The sample median is always a ----- estimator of the population mean.
(i) Biased
(ii) Efficient
(iii) Consistent
(iv) None of these
94. If tₙ is a sufficient statistic for θ based on n random samples, then (∂/∂θ) log L is a function of,
(i) θ only
(ii) tₙ only
(iii) tₙ and θ only
(iv) None of these
95. In general, the estimators obtained by the method of MLE are
(i) More efficient
(ii) Less efficient
(iii) Can't say about efficiency
(iv) None of these
96. If the sample mean is an estimator of the population mean, it is a ----- estimator of the population mean.
(i) Unbiased and efficient
(ii) Biased and efficient
(iii) Unbiased and inefficient
(iv) None of these
97. The value taken by an estimator is known as,
(i) Statistic
(ii) Estimate
(iii) Size
(iv) None of these
98. If a sufficient statistic exists for a parameter, then it will be a function of,
(i) Moment estimator
(ii) ML estimator
(iii) Unbiased estimator
(iv) None of these
99. The bias of an estimator can be
(i) Positive
(ii) Negative
(iii) Either
(iv) None of these
100. For samples taken from N(μ, σ), an unbiased estimator of σ² is
(i) S²
(ii) nS²/(n - 1)
(iii) (n - 1)S²/n
(iv) None of these
101. An estimator t₁ for the parameter θ is more efficient than another estimator t₂ if,
(i) V(t₁) < V(t₂)
(ii) V(t₁) > V(t₂)
(iii) V(t₁) = V(t₂)
(iv) None of these
102. An estimator tₙ which contains all the information about the parameter contained in the
sample is,
(i) an unbiased estimator
(ii) a consistent estimator
(iii) a sufficient estimator
(iv) None of these
103. If x₁, x₂, …, xₙ is a random sample from a Bernoulli population pˣ(1 - p)^(1-x), then a sufficient estimator for p is,
(i) Σxᵢ
(ii) Πxᵢ
(iii) The maximum of x₁, x₂, …, xₙ
(iv) None of these
104. Sample standard deviation is a ------- estimator of population standard deviation.
(i) Unbiased
(ii) biased
(iii) sufficient
(iv) efficient
105. If t is a consistent estimator of θ, then,
(i) t is also a consistent estimator of θ²
(ii) t² is also a consistent estimator of θ
(iii) t² is also a consistent estimator of θ²
(iv) None of these
106. The inequality that helps us to obtain an estimator with minimum variance is,
(i) Tchebychev's inequality
(ii) Cramer–Rao inequality
(iii) Jensen's inequality
(iv) None of these
107. The method of M.L.E. was established by,
(i) Fisher
(ii) Newton
(iii) Bernoulli
(iv) None of these
108. The set of equations obtained in the process of least square estimation are called
(i) Normal equations
(ii) Intrinsic equations
(iii) Simultaneous equations
(iv) All the above
109. The estimators obtained by the method of moments are ----- in comparison with the estimators obtained by the method of MLE.
(i) Less efficient
(ii) More efficient
(iii) Equally efficient
(iv) None of these
110. The MLE of θ, based on the random samples x₁, x₂, …, xₙ taken from the population with p.d.f. f(x, θ) = 1/θ, 0 ≤ x ≤ θ, is,
(i) x̄
(ii) Max of x₁, x₂, …, xₙ
(iii) Min of x₁, x₂, …, xₙ
(iv) 1/x̄
111. The probability that an interval contains the parameter value is called the,
(i) Confidence limit
(ii) Confidence coefficient
(iii) Confidence interval
(iv) None of these
112. For finding the confidence interval for μ using samples x₁, x₂, …, xₙ taken from N(μ, σ), when σ is unknown, we use the statistic following the
(i) Normal distribution
(ii) F-distribution
(iii) Chi-square distribution
(iv) t-distribution
113. The MLE of λ using random samples x₁, x₂, …, xₙ taken from a Poisson distribution with parameter λ is,
(i)Mode of x1 , x2 ,....xn
(ii) Median of x1 , x2 ,....xn
(iii) Mean of x1 , x2 ,....xn
(iv) None of these
114. The confidence interval for the variance of a normal population involves the
(i) Standard normal distribution
(ii) Chi-square distribution
(iii) F-distribution
(iv) None of these
115. If t₁ and t₂ are two unbiased estimators of a parameter θ, then the efficiency of t₁ w.r.t. t₂ is,
(i) V(t₁) = V(t₂)
(ii) V(t₁)/V(t₂)
(iii) V(t₁) - V(t₂)
(iv) V(t₂)/V(t₁)
116. The MLE of θ in a random sample of size n from U(0, θ) is,
(i) The sample mean
(ii) The sample median
(iii) The largest order statistic
(iv) The smallest order statistic
117. If X is a Poisson variate with parameter λ, the unbiased estimator of e^(-3λ) based on a single observation x is,
(i) (-3)^x
(ii) (-2)^x
(iii) 3^x
(iv) 2^x
118. The difference between estimate and parameter in a sample survey is known as,
(i) Non-sampling error
(ii) population variance
(iii) Sampling error
(iv) sampling variance
119. If x ~ χ²(2), y ~ χ²(1), and x and y are independent, then x/(x + y) follows
(i) beta(2, 1)
(ii) beta(1, 1/2)
(iii) beta(1/2, 1)
(iv) None of these
120. The method of moments was invented by
(i) Neyman
(ii) Fisher
(iii) Karl Pearson
(iv) Snedecor
121. A population has a standard deviation of 16. If a sample of size 64 is selected from this
population, what is the probability that the sample mean will be within 2 of the
population mean?
(i) 0.6826
(ii) 0.3413
(iii) -0.6826
(iv) -0.3413
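Question 121 standardizes the sample mean: SE = 16/√64 = 2, so "within 2 of the population mean" is the event |Z| ≤ 1. A short check:

```python
import math

sigma, n = 16, 64
se = sigma / math.sqrt(n)          # 2.0
z = 2 / se                         # 1.0
prob = math.erf(z / math.sqrt(2))  # P(|Z| <= z) for a standard normal
print(round(prob, 4))  # ~0.6827 (tables give 0.6826) -> option (i)
```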
122. If the variance of an estimator attains its Cramer-Rao lower bound for variance, then
the estimator is
(i) Most efficient
(ii) sufficient
(iii) unbiased
(iv) All the above
123. From the following four unbiased estimators of the population mean, identify the most efficient:
(i) (1/2)(x₁ + x₂)
(ii) (1/4)(x₁ + 3x₂)
(iii) (1/4)(2x₁ + 2x₂)
(iv) (1/6)(x₁ + 5x₂)
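For question 123, an unbiased combination a₁x₁ + a₂x₂ (with a₁ + a₂ = 1) of i.i.d. observations has variance (a₁² + a₂²)σ², so the smallest coefficient sum of squares wins. A sketch (the coefficients used for options (ii) and (iv) are read from a garbled original and are an assumption):

```python
from fractions import Fraction

# Var(a1*x1 + a2*x2) = (a1^2 + a2^2) * sigma^2 when a1 + a2 = 1 (unbiased).
candidates = {
    "(i)":   [Fraction(1, 2), Fraction(1, 2)],
    "(ii)":  [Fraction(1, 4), Fraction(3, 4)],  # assumed reading
    "(iii)": [Fraction(2, 4), Fraction(2, 4)],
    "(iv)":  [Fraction(1, 6), Fraction(5, 6)],  # assumed reading
}
for name, a in candidates.items():
    assert sum(a) == 1  # each candidate is unbiased
scores = {name: sum(c * c for c in a) for name, a in candidates.items()}
print(min(scores, key=scores.get))  # (i): equal weights (tied with (iii))
```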
124. The MLE of θ using samples x₁, x₂, …, xₙ from the p.d.f. f(x) = (1/2)e^(-|x-θ|) is,
(i)Mean of x1 , x2 ,....xn
(ii) Median of x1 , x2 ,....xn
(iii) Mode of x1 , x2 ,....xn
(iv) None of these
125. Estimators obtained by the method of MLE are ----- than the estimators obtained by the method of moments.
(i) More efficient
(ii) Less efficient
(iii) Equally efficient
(iv) None of these
126. The hypothesis which is under test for possible rejection is the
(i) Null hypothesis
(ii) Alternate hypothesis
(iii) Simple hypothesis
(iv) None of these
127. A hypothesis contrary to null hypothesis is,
(i) Null hypothesis
(ii) Alternate hypothesis
(iii) Simple hypothesis
(iv) None of these
128. Testing of hypothesis was introduced by
(i) Fisher
(ii) Neyman
(iii) Snedecor
(iv) None of these
129. A statistical hypothesis which completely specifies the population is called,
(i) Null hypothesis
(ii) Alternate hypothesis
(iii) Simple hypothesis
(iv) None of these
130. A statistical hypothesis which does not completely specify the population is called a,
(i) Null hypothesis
(ii) composite hypothesis
(iii) Simple hypothesis
(iv) None of these
131. The rejection region in testing of hypothesis is known as the,
(i) Critical region
(ii) Normal region
(iii) Acceptance region
(iv) None of these
132. A wrong decision about the null hypothesis leads to
(i) Type I error
(ii) Type II error (iii) both
(iv) None of these
133. Significance level is,
(i) P(type I error)
(ii) P(type II error)
(iii) 1- P(type I error)
(iv) 1- P(type II error)
134. Power of a test is,
(i) P(type I error)
(iii) 1- P(type I error)
(ii) P(type II error)
(iv) 1- P(type II error)
135. Size of a test is
(i) P (type I error)
(iii) 1- P (type I error)
(ii) P (type II error)
(iv) 1- P (type II error)
136. Size of a test is also known as,
(i) Power
(ii) significance level (iii) type I error
(iv) type II error
137. The most serious error in testing of hypothesis is,
(i) Type I error
(ii) Type II error
(iii) Both are equally serious (iv) None of these
138. In a coin tossing experiment, let p be the probability of getting a head. The coin is tossed 10 times to test the hypothesis H₀: p = 0.5 against the alternative H₁: p = 0.7. Reject H₀ if 6 or more tosses out of 10 result in a head. The significance level of the test is,
(i) 386/2^10
(ii) 186/2^10
(iii) 286/2^10
(iv) None of these
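The significance level in question 138 is a binomial tail probability under H₀; it can be computed exactly:

```python
from fractions import Fraction
from math import comb

# P(X >= 6) for X ~ Binomial(10, 1/2): the probability of rejecting a true H0.
alpha = sum(Fraction(comb(10, k), 2**10) for k in range(6, 11))
print(alpha)  # 193/512, i.e. 386/2**10 -> option (i)
```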
139. If the power of a test is e^(-3/2), then the probability of type-II error is,
(i) 1 - e^(-3/2)
(ii) 1 + e^(-3/2)
(iii) e^(-3/2)
(iv) None of these
140. In testing of hypothesis, the critical region is the
(i) Rejection region
(ii) Acceptance region
(iii) Neutral region
(iv) None of these
141. The standard deviation of any statistic is called its,
(i) Type II error
(ii) Standard error
(iii) type I error
(iv) None of these
142. Critical region with minimum type II error among all critical regions with a specified
significance level is,
(i) Powerful critical region
(ii) Minimum critical region
(iii) Best critical region
(iv) None of these
143. Degrees of freedom is related to the
(i) Number of observations in a set
(ii) Hypothesis under test
(iii) Number of independent observations in a set
(iv) None of these
144. A test which maximizes the power for a fixed significance level is known as an
(i) Optimum test
(ii) Randomized test
(iii) Likelihood ratio test
(iv) None of these
145. The distribution used for testing mean of a normal population when population
variance is unknown with a large sample is,
(i) Normal distribution
(ii) t distribution
(iii) F distribution
(iv) None of these
146. In testing the equality of means of two normal populations, if σ₁, σ₂ are unknown and in addition it is assumed that σ = σ₁ = σ₂, then the value of σ² is approximated by,
(i) (n₁S₁ + n₂S₂)/(n₁ + n₂)
(ii) (n₁²S₁² + n₂²S₂²)/(n₁ + n₂)
(iii) (n₁S₁² + n₂S₂²)/(n₁ + n₂)
(iv) None of these
147. The test statistic used for testing the proportion of a population is,
(i) t = (x/n - p₀)/(p₀q₀/n)
(ii) t = (x/n - p₀)/√(p₀q₀/n)
(iii) t = (x/n - p₀)/(p₀q₀/√n)
(iv) None of these
148. Chi square test of goodness of fit is introduced by,
(i)James Bernoulli
(iii) Karl Pearson
(ii) Jacob Bernoulli
(iv) WS Gosset
149. In Chi square test of goodness of fit, the degrees of freedom of the chi square statistic is
n-r-1, where r denotes,
(i) The number of parameters estimated from the observations for the calculation of
the theoretical frequencies
(ii) Number of observations used for the calculation of the theoretical frequencies
(iii) Number of classes of observations
(iv) None of these
150. In the chi-square test of independence, the expected number of observations in the (i, j)th cell is,
(i) N·fᵢ.·f.ⱼ/f..
(ii) fᵢ.·f.ⱼ/(N·f..)
(iii) fᵢ.·f.ⱼ/f..
(iv) None of these
151. For a 2 × 2 contingency table with cell frequencies a, b, c and d, the chi-square value for testing independence is,
(i) (a + b + c + d)(ad - bc)² / [(a + b)(c + d)(b + d)(a + c)]
(ii) (a + b + c + d)(ad + bc)² / [(a + b)(c + d)(b + d)(a + c)]
(iii) (a + b + c + d)²(ad - bc) / [(a + b)(c + d)(b + d)(a + c)]
(iv) None of these
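The 2 × 2 shortcut in question 151 agrees with the generic Σ(O - E)²/E form; a quick numerical check with made-up counts:

```python
# Shortcut: N*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)), with N = a + b + c + d.
a, b, c, d = 10, 20, 30, 40
N = a + b + c + d
shortcut = N * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Generic chi-square: sum over cells of (observed - expected)^2 / expected.
table = [[a, b], [c, d]]
rows, cols = [a + b, c + d], [a + c, b + d]
generic = sum((table[i][j] - rows[i] * cols[j] / N) ** 2 / (rows[i] * cols[j] / N)
              for i in range(2) for j in range(2))
print(abs(shortcut - generic) < 1e-9)  # True: the two forms agree
```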
152. The theorem supporting the statement that, when the number of samples is large, almost all test statistics follow the normal distribution, is
(i) The Neyman–Pearson theorem
(ii) The central limit theorem
(iii) Bernoulli's laws
(iv) None of these
153. The test statistic used in testing the standard deviation of a normal population is,
(i) nS²/(n - 1)
(ii) (n - 1)S²/σ₀²
(iii) nS²/σ₀²
(iv) None of these
154. The Neyman–Pearson lemma is used for obtaining,
(i) The most powerful test
(ii) An unbiased test
(iii) A randomized test
(iv) None of these
155. Whether a test is one-sided or two-sided depends on the
(i) Null hypothesis
(ii) Alternate hypothesis
(iii) Simple hypothesis
(iv) None of these
156. Level of significance lies between,
(i) 0 and 1
(ii) -1 and 1
(iii) -3 and 3
(iv)None of these
157. Student’s t test is applicable only when,
(i) The variate values are independent
(ii) The variable is normally distributed
(iii) The sample is small
(iv) All the above
158. To test H₀: μ = μ₀ when the population SD is unknown and the sample size is small, we use,
(i) t-test
(ii) F-test
(iii) Normal test
(iv) None of these
159. To test H₀: μ = μ₀ when the population SD is known, we use,
(i) t-test
(ii) F-test
(iii) Normal test
(iv) None of these
160. The testing of hypothesis H₀: μ = k against H₁: μ > k leads to a
(i) Right tailed test
(ii) Two tailed test
(iii) Left tailed test
(iv) None of these
161. The testing of hypothesis H₀: μ = k against H₁: μ < k leads to a
(i) Right tailed test
(ii) Two tailed test
(iii) Left tailed test
(iv) None of these
162. The testing of hypothesis H₀: μ = k against H₁: μ ≠ k leads to a
(i) Right tailed test
(ii) Two tailed test
(iii) Left tailed test
(iv) None of these
163. The hypothesis that the population variance has a specified value can be tested by the
(i) t-test
(ii) F-test
(iii) Normal test
(iv) None of these
164. The statistic χ² used to test H₀: σ² = σ₀², based on a sample of size n, has degrees of freedom,
(i) n
(ii) n + 1
(iii) n - 1
(iv) None of these
165. The degrees of freedom for a chi-square test of independence with a contingency table of order m × n is,
(i) m × n
(ii) (m - 1) × (n - 1)
(iii) (m + 1) × (n + 1)
(iv) None of these
166. The degrees of freedom for a chi-square test of independence with a contingency table of order 3 × 4 is,
(i) 12
(ii) 6
(iii) 7
(iv) 20
167. The statistic used for the chi-square test of goodness of fit is,
(i) (Oᵢ - Eᵢ)²/Eᵢ
(ii) Σ(Oᵢ - Eᵢ)²/Eᵢ
(iii) Σ(Oᵢ - Eᵢ)/Eᵢ²
(iv) None of these
168. When the degrees of freedom increase indefinitely, the chi-square distribution tends to the
(i) Normal distribution (ii) t distribution (iii) F distribution
(iv) None of these
169. When the set of n expected and observed frequencies are the same, the chi-square value
becomes,
(i) Infinity
(ii) zero
(iii) n
(iv) None of these
170. The degrees of freedom of the statistic t for a paired t-test based on n pairs of observations
is,
(i) 2(n-1)
(ii) n-1
(iii) 2n – 1
(iv) None of these
171. The mean difference between 10 paired observations is 15 and the SD of differences is
5. The value of statistic t is,
(i) 27
(ii) 9
(iii) 3
(iv) None of these
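Question 171 uses t = d̄·√(n - 1)/S_d for paired data; plugging in the given numbers:

```python
import math

# n = 10 pairs, mean difference 15, SD of differences 5.
n, dbar, sd = 10, 15, 5
t = dbar * math.sqrt(n - 1) / sd
print(t)  # 9.0 -> option (ii)
```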
172. Which of the following symbols represents a population parameter?
(i) SD
(ii) σ
(iii) r
(iv) None of these
173. What does it mean when you calculate a 95% confidence interval?
(i) The process you used will capture the true parameter 95% of the time in the long
run
(ii) You can be “95% confident” that your interval will include the population
parameter
(iii) You can be “5% confident” that your interval will not include the population
parameter
(iv) All of the above statements are true
174. What would happen (other things equal) to a confidence interval if you calculated a 99
percent confidence interval rather than a 95 percent confidence interval?
(i) It will be narrower
(ii) it will not change
(iii)The sample size will increase
(iv) It will become wider
175. What is the standard deviation of a sampling distribution called?
(i) Sampling error
(ii) Sample error
(iii) Standard error
(iv) Simple error
176. A ______ is a subset of a _________.
(i) Sample, population
(ii) Population, sample
(iii)Statistic, parameter
(iv) Parameter, statistic
177. A _______ is a numerical characteristic of a sample and a ______ is a numerical
characteristic of a population.
(i)Sample, population
(ii) Population, sample
(iii)Statistic, parameter
(iv) Parameter, statistic
Statistical Inference
Page 16
178. A sampling distribution might be based on which of the following?
(i)Sample means
(ii) Sample correlations
(iii)Sample proportions
(iv) All of the above
179. _________ are the values that mark the boundaries of the confidence interval.
(i) Confidence intervals
(ii) Confidence limits
(iii)Levels of confidence
(iv) Margin of error
180. _____ results if you fail to reject the null hypothesis when the null hypothesis is
actually false.
(i) Type I error
(ii) Type II error
(iii)Type III error
(iv) Type IV error
181. A good way to get a small standard error is to use a ________.
(i) Repeated sampling
(ii) Small sample
(iii)Large sample
(iv) Large population
182. The use of the laws of probability to make inferences and draw statistical conclusions
about populations based on sample data is referred to as ___________.
(i)Descriptive statistics
(ii) Inferential statistics
(iii)Sample statistics
(iv) Population statistics
183. As sample size goes up, what tends to happen to 95% confidence intervals?
(i) They become more precise
(ii) They become narrower
(iii) They become wider
(iv) Both (i) and (ii)
184. __________ is the failure to reject a false null hypothesis.
(i)Type I error
(ii)Type II error
(iii) Type A error
(iv)Type B error
185. What is the key question in the field of statistical estimation?
(i) Based on my random sample, what is my estimate of the population parameter?
(ii) Based on my random sample, what is my estimate of the normal distribution?
(iii)Is the value of my sample statistic unlikely enough for me to reject the null
hypothesis?
(iv)There is no key question in statistical estimation
186. The Cramer-Rao lower bound is used for finding
(i)Unbiased estimator
(ii) Consistent estimator
(iii) Minimum variance of unbiased estimator
(iv) None of these
187. If Tn is a consistent estimator of θ, then a consistent estimator of θ² is,
(i) Tn²
(ii) Tn
(iii) √Tn
(iv) None of these
188. ---- test is used for testing independence of attributes in a contingency table
(i) Normal test
(ii) chi-square test
(iii) t-test
(iv) None of these
189. The Neyman-Pearson lemma is used to find ------- for testing a simple H0 against a simple H1
(i) Test statistic
(ii) best critical region
(iii) Power of a test
(iv) None of these
190. ----- distribution is used for constructing a confidence interval for the mean of a normal distribution when the sample size is large.
(i)Normal distribution
(ii) t distribution
(iii) F distribution
(iv) None of these
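In the large-sample setting of question 190 the normal distribution applies, so a 95% confidence interval for the mean is x̄ ± 1.96·s/√n. A minimal sketch, with hypothetical sample figures:

```python
import math

# Large-sample 95% CI for a mean: x_bar +/- 1.96 * s / sqrt(n).
# The sample mean, SD and size below are hypothetical.
x_bar, s, n = 80.0, 12.0, 100
half_width = 1.96 * s / math.sqrt(n)
lower, upper = x_bar - half_width, x_bar + half_width
print(round(lower, 2), round(upper, 2))  # 77.65 82.35
```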
191. The MLE of θ in B(1, θ) is,
(i) x̄
(ii) x̄²
(iii) Σ xi
(iv) Σ xi²
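For B(1, θ) the likelihood is θ^Σxi (1 − θ)^(n − Σxi), which is maximized at θ̂ = x̄, the sample mean. A minimal sketch with a made-up 0/1 sample:

```python
# MLE of theta for a Bernoulli(theta) sample is the sample mean x-bar.
# The 0/1 sample below is hypothetical.
sample = [1, 0, 1, 1, 0, 1, 0, 1]
theta_hat = sum(sample) / len(sample)
print(theta_hat)  # 0.625
```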
192. The Fisher-Neyman factorization theorem is used for finding a --- estimator
(i)Unbiased estimator
(ii) Consistent estimator
(iii) Minimum variance of unbiased estimator
(iv) None of these
193. The value of χ² is zero if and only if,
(i) ΣOi = ΣEi
(ii) Oi = Ei for all i
(iii) Oi ≠ Ei for all i
(iv) None of these
194. A coin is tossed 600 times and we get 320 heads. Which test is to be used for testing the unbiasedness of the coin?
(i)Normal test
(ii) chi-square test
(iii) t-test
(iv) None of these
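A worked check of question 194: with n = 600 the sample is large, so the normal (Z) test for a proportion applies:

```python
import math

# Normal test for a proportion: z = (x - n*p0) / sqrt(n*p0*(1 - p0)),
# using the values from question 194 (600 tosses, 320 heads, p0 = 0.5).
n, heads, p0 = 600, 320, 0.5
z = (heads - n * p0) / math.sqrt(n * p0 * (1 - p0))
print(round(z, 3))  # 1.633
```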
195. Equality of the variances of two normal populations is tested by
(i) Normal test
(ii) chi-square test
(iii) t-test
(iv) None of these
196. In a paired t-test the statistic t = ū√(n − 1)/Su follows:
(i) t(n)
(ii) t(n − 1)
(iii) t(n + 1)
(iv) None of these
197. In a contingency table, the expected frequencies are calculated under the ----- hypothesis
(i)Null
(ii) Alternate
(iii) Both
(iv) None
198. In a chi-square test of independence, the attributes are considered independent if,
(i) The calculated chi-square value is equal to the table chi-square value
(ii) The calculated chi-square value is greater than the table chi-square value
(iii) The calculated chi-square value is less than the table chi-square value
(iv)None of these
199. The area of the critical region depends upon
(i) Type –I error
(ii) type –II error
(iii) Power
(iv) None of these
200. The paired t-test is applicable when the observations in the two samples are
(i) Paired
(ii) correlated
(iii) Uncorrelated
(iv) None of these
***************
ANSWERS
1. (ii) Statistic
2. (i) Sampling distribution
3. (ii) A parameter
4. (iii) Chi-square
5. (i) 1.2
6. (iv) Can be smaller, larger, or equal to the population mean
7. (iv) Always normal for large sample sizes
8. (iii) Standard error of the mean decreases
9. (ii) Reduce the standard error of the mean to approximately 70% of its current value
10. (ii) Data from the sample is used to estimate the population parameter
11. (ii) 
12. (i) 
13. (ii) sampling distribution of the mean
14. (iv) None of these
15. (i) 75
16. (iii) Normal distribution
17. (iii) Any sample size
t 2 2
t 
2n
18. (iv) e
19. (i) 0.5
20. (iv) Sampling distribution of p
21. (iii) Chi-square with 1 d.f.
22. (ii) 0 to ∞
23. (ii) 2(mean) = variance
24. (iii) n
25. (i) 2n
26. (ii) (1 − 2t)^(−n/2)
27. (ii) (1 − 2t)^(−1/2)
28. (i) Chi-square with 7 d.f.
29. (ii) P(χ² > χ²α) = α.
30. (iv) χ²(n)
31. (ii) Chi-square
32. (iii) Exponential
33. (iii) n-2
34. (i) N(n, 2n )
35. (ii) + 10.307
36. (ii) X/√(Y/n)
Statistical Inference
Page 19
School of Distance Education
37. (iv) Gosset
38. (iii) −∞ to ∞
39. (ii) f(t) = [Γ((n + 1)/2) / (√(nπ) Γ(n/2))] (1 + t²/n)^(−(n + 1)/2)
40. (i) 0
41. (i) (X̄ − μ)√(n − 1)/S
42. (ii) t-distribution
43. (iii) P(|t| > tα) = α
44. (ii) Distribution with n1 + n2 − 2 d.f.
45. (i) F- distribution with (1,n) d.f.
46. (i) An F- random variable with (1, 1) degree of freedom.
47. (iii) F- distribution with (n,m) d.f.
48. (iii) N(0,1)
49. (i) 3.365.
50. (ii) Function of population values
51. (i) Probability distribution of the sample mean
52. (i) Normal
53. (i) t(1)
54. (ii) Snedecor
55. (ii) 0 to ∞
56. (i) F(n1 − 1, n2 − 1)
57. (ii) P(F(n1, n2) > Fα) = α
58. (i) F = n2(n1 − 2) / (n1(n2 + 2))
59. (iii) F (1,1)
60. (iv) P(X ≤ c) = P(Y ≥ 1/c)
61. (ii) 1
62. (i) χ²(n − 1)
63. (ii) less than one
64. (i) tα/2
65. (iii) Fisher
66. (ii) estimate
67. (iv) a point estimate
68. (ii) E(t) ≠ θ
69. (i) E(t) = θ
70. (iii) Efficient estimator
71. (i) Point estimator
72. (iii) consistency
73. (iii) T(T − 1) / (n(n − 1))
74. (iii) consistency
75. (ii) consistent
76. (ii) consistent
77. (ii) consistent
78. (i) Σ xi (i = 1, …, n)
79. (i) Unbiased, consistent
80. (iv) E(t1) = θ and var(t1) ≤ var(t)
81. (ii) greater relative efficiency
82. (i) Unbiased
83. (iv) Median of the samples
84. (i) x
85. (ii) 0.5
86. (ii) standard normal table
87. (i)t – table
88. (iii) consistent
89. (iii) both
90. (i) Unbiasedness
91. (i)Fisher-Neyman theorem
92. (ii) biased estimator of 
93. (i) Biased
94. (iii) tn and  only
95. (i) More efficient
96. (i) Unbiased and efficient
97. (ii) Estimate
98. (ii) M L estimator
99. (iii) either
100. (ii) nS²/(n − 1)
101. (i) V (t1 )  V (t2 )
102. (iii) a sufficient estimator
103. (i) Σ xi
104. (ii) biased
105. (iii) t² is also a consistent estimator of θ²
106. (ii) Cramer- Rao inequality
107. (i) Fisher
108. (i) Normal equations
109. (ii) More efficient
110. (ii) max of x1 , x2 ,....xn
111. (ii) Confidence coefficient
112. (iv) t- distribution
113. (iii) Mean of x1 , x2 ,....xn
114. (ii) Chi- distribution
115. (iv) V(t2)/V(t1)
116. (iii) The largest order statistics
117. (ii) σ²/x̄
118. (iii) Sampling error
119. (ii) beta(1,1/2)
120. (iii) Karl Pearson
121. (i) 0.6826
122. (i) Most efficient
123. (i) (x1 + x2)/2
124. (ii) Median of x1 , x2 ,....xn
125. (i) More efficient
126. (i) Null hypothesis
127. (ii) Alternate hypothesis
128. (ii) Neyman
129. (iii) Simple hypothesis
130. (ii) composite hypothesis
131. (i) Critical region
132. (i) Type I error
133. (i) P(type I error)
134. (iv) 1- P(type II error)
135. (i) P (type I error)
136. (ii) significance level
137. (ii) Type II error
386
138. (i) 10
139. (ii) 1 − e^(−3/2)
140. (i) Rejection region
141. (ii) Standard error
142. (iii) Best critical region
143. (iii) Number of independent observations in a set
144. (iv) None of these
145. (i) Normal distribution
146. (iii) (n1S1² + n2S2²)/(n1 + n2)
147. (i) t = (x/n − p0)/√(p0q0/n)
148. (iii) Karl Pearson
149. (ii) Number of parameters are estimated using the observations for the calculation of
the theoretical frequencies
150. (iii) (fi. × f.j)/f..
151. (ii) (a + b + c + d)(ad − bc)² / ((a + b)(c + d)(b + d)(a + c))
152. (ii) Central limit theorem
153. (iv) None of these
154. (i)Most powerful test
155. (i) Null hypothesis
156. (i) 0 and 1
157. (ii)All the above
158. (i) t-test
159. (iii) Normal test
160. (ii) two tailed test
161. (iii) Left tailed test
162. (i) Right tailed test
163. (iv) None of these
164. (iii) n-1
165. (ii) (m − 1) × (n − 1)
167. (ii) Σ(Oi − Ei)²/Ei
168. (i) Normal distribution
169. (ii) zero
170. (ii) n-1
171. (ii) 9
172. (ii) σ
173. (iv) All of the above statements are true
174. (iv) It will become wider
175. (iii) Standard error
176. (i) Sample, population
177. (iii)Statistic, parameter
178. (iv) All of the above
179. (ii) Confidence limits
180. (ii) Type II error
181. (iii)Large sample
182. (ii) Inferential statistics
183. (iv) Both (i) and (ii)
184. (ii)Type II error
185. (i)Based on my random sample, what is my estimate of the population parameter?
186. (iii) Minimum variance of unbiased estimator
187. (i) Tn²
188. (ii) chi-square test
189. (ii) best critical region
190. (i)Normal distribution
191. (i) x̄
192. (iv) None of these
193. (ii) Oi = Ei for all i
194. (i)Normal test
195. (iv) None of these
196. (ii) t(n − 1)
197. (i) Null
198. (iii) The calculated chi-square value is less than the table chi-square value
199. (i) Type –I error
200. (i) Paired
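The 2×2 shortcut formula in answer 151 can be verified numerically; the cell counts a, b, c, d below are hypothetical:

```python
# Chi-square for a 2x2 table with cells a, b / c, d:
# N * (ad - bc)^2 / ((a+b)(c+d)(b+d)(a+c)), where N = a + b + c + d.
a, b, c, d = 10, 20, 30, 40
n = a + b + c + d
chi_sq = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (b + d) * (a + c))
print(round(chi_sq, 4))  # 0.7937
```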
© Reserved