Final Exam Cheat Sheet


The conditional probability distribution of X, given Y = y, is P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y)
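A quick sketch of this formula in Python: divide the joint probability by the marginal of Y. The joint table is made up purely for illustration.

```python
# Conditional pmf P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y),
# demonstrated on a small made-up joint pmf.
joint = {  # p_{X,Y}(x, y)
    (0, 0): 0.10, (0, 1): 0.20,
    (1, 0): 0.30, (1, 1): 0.40,
}

def p_y(y):
    """Marginal P(Y = y): sum the joint pmf over all x."""
    return sum(p for (x, yy), p in joint.items() if yy == y)

def p_x_given_y(x, y):
    """Conditional P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y)."""
    return joint[(x, y)] / p_y(y)

print(p_x_given_y(0, 1))  # 0.20 / 0.60 = 0.333...
```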

The height of a normal density curve at any point x is given by:

f(x) = (1 / (σ√(2π))) e^(−(1/2)((x − µ)/σ)²)
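A minimal check of this density formula in Python; the values of µ and σ below are arbitrary illustrations.

```python
import math

def normal_pdf(x, mu, sigma):
    """Height of the N(mu, sigma^2) density curve at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# At the mean, the curve peaks at 1 / (sigma * sqrt(2*pi)).
print(normal_pdf(0.0, 0.0, 1.0))  # about 0.3989
```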

• The random variables X and Y are independent if pX,Y(x, y) = pX(x)pY(y) for all x, y
• Alternatively, X and Y are independent if pX|Y(x | y) = pX(x) and pY|X(y | x) = pY(y), for all x, y
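The first condition can be checked numerically by comparing each joint cell against the product of the marginals. The table below is a made-up example that is independent by construction.

```python
# Check whether X and Y are independent: p_{X,Y}(x, y) == p_X(x) * p_Y(y)
# for every (x, y). The joint table is a made-up example.
joint = {
    (0, 0): 0.12, (0, 1): 0.28,   # row x = 0: marginal 0.40
    (1, 0): 0.18, (1, 1): 0.42,   # row x = 1: marginal 0.60
}
xs = {x for x, _ in joint}
ys = {y for _, y in joint}
p_x = {x: sum(joint[(x, y)] for y in ys) for x in xs}
p_y = {y: sum(joint[(x, y)] for x in xs) for y in ys}

independent = all(
    abs(joint[(x, y)] - p_x[x] * p_y[y]) < 1e-12 for x in xs for y in ys
)
print(independent)  # True: each cell factors as p_X(x) * p_Y(y)
```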
• Consider two random variables X and Y with expected values µX = E(X) and µY = E(Y)
• The covariance between X and Y is defined to be:

Cov(X, Y) = E[(X − µX)(Y − µY)] = ∑(all x, y) (x − µX)(y − µY) pX,Y(x, y)

• The covariance can also be calculated as: Cov(X, Y) = E(XY) − µX µY

Normal Distribution
• Normal dist. w/ mean 0 and variance 1: denoted N(0, 1)
• 68-95-99.7 empirical rule (1, 2, 3 standard dev.)
• Standardization: Z = (X − µ)/σ
• A normal quantile plot supports normality when the data points lie within the red curved bands

Sampling
• Gold standard to avoid selection bias is to use randomization (ex: bad sampling: voluntary response; good sampling: SRS, stratified random sample, multi-stage)
• N: size of population; n: size of a sample selected from the population
• The bias is the difference between the mean of the sampling distribution and the parameter

Central Limit Theorem (CLT): A sum of independent and identically distributed random variables behaves like a normal random variable as the number of variables in the sum increases.
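A small simulation of the CLT, assuming nothing beyond the statement above: sums of iid Uniform(0, 1) draws (a decidedly non-normal distribution) standardize to something close to N(0, 1). The choices n = 30 and 10,000 repetitions are arbitrary.

```python
import random
import statistics

# CLT sketch: sums of n iid Uniform(0, 1) draws look increasingly normal.
# Uniform(0, 1) has mean 1/2 and variance 1/12.
random.seed(1)
n, reps = 30, 10_000
sums = [sum(random.random() for _ in range(n)) for _ in range(reps)]

# Standardized sums should have mean ~0, sd ~1, and ~95% within +/- 2.
mu, sd = n * 0.5, (n / 12) ** 0.5
z = [(s - mu) / sd for s in sums]
print(round(statistics.mean(z), 3), round(statistics.stdev(z), 3))
print(sum(abs(v) <= 2 for v in z) / reps)  # close to 0.95 (empirical rule)
```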

• Correlation: Corr(X, Y) = Cov(X, Y) / (σX σY)
• If Corr(X, Y) = 0, X and Y are uncorrelated
• Independent random variables are always uncorrelated, but the converse is not true: two variables can have zero correlation and yet not be independent (see the sketch below)
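The sketch below computes Cov and Corr directly from a joint pmf, using a classic made-up example (X uniform on {−1, 0, 1}, Y = X²): the correlation comes out exactly 0 even though Y is a deterministic function of X.

```python
import math

# Cov(X, Y) = sum over (x, y) of (x - mu_X)(y - mu_Y) p_{X,Y}(x, y).
# Classic example: X uniform on {-1, 0, 1}, Y = X^2. Y depends on X,
# yet Cov(X, Y) = 0, so zero correlation does not imply independence.
joint = {(-1, 1): 1/3, (0, 0): 1/3, (1, 1): 1/3}

mu_x = sum(x * p for (x, y), p in joint.items())
mu_y = sum(y * p for (x, y), p in joint.items())
cov = sum((x - mu_x) * (y - mu_y) * p for (x, y), p in joint.items())

var_x = sum((x - mu_x) ** 2 * p for (x, y), p in joint.items())
var_y = sum((y - mu_y) ** 2 * p for (x, y), p in joint.items())
corr = cov / math.sqrt(var_x * var_y)

print(cov, corr)  # both 0.0, despite Y = X^2 being fully determined by X
```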

Sums of Random Variables
• Mean: E(X + Y) = E(X) + E(Y)
• For constants a, b, and c: E(aX + bY + c) = aE(X) + bE(Y) + c
• For random variables X1, ..., Xn and constants a1, ..., an, c:
E(a1X1 + ... + anXn + c) = a1E(X1) + ... + anE(Xn) + c
• Variance: Var(X + Y) = Var(X) + Var(Y) + 2Cov(X, Y)
Var(aX + bY + c) = a²Var(X) + b²Var(Y) + 2ab Cov(X, Y)
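A quick numeric sanity check of the variance rule, using a toy simulated pair with nonzero covariance (Y = X + noise); the constants a, b, c are arbitrary.

```python
import random

# Check Var(aX + bY + c) = a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y).
# Toy dependent pair: Y = X + noise, so Cov(X, Y) != 0.
random.seed(2)
n = 10_000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [x + 0.5 * random.gauss(0, 1) for x in xs]

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((u - m) ** 2 for u in v) / len(v)

def cov(u, w):
    mu, mw = mean(u), mean(w)
    return sum((s - mu) * (t - mw) for s, t in zip(u, w)) / len(u)

a, b, c = 2.0, -3.0, 5.0
lhs = var([a * x + b * y + c for x, y in zip(xs, ys)])
rhs = a**2 * var(xs) + b**2 * var(ys) + 2 * a * b * cov(xs, ys)
print(round(lhs, 6), round(rhs, 6))  # agree: sample moments obey the identity
```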

Independent and Identically Dist. Rand. Vars.
• Suppose they have mean µX and variance σ²X. Then:

E(X1 + ... + Xn) = nµX
Var(X1 + ... + Xn) = nσ²X
SD(X1 + ... + Xn) = √n σX
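A short simulation, assuming only the formulas above: the SD of a sum of n iid draws grows like √n·σX, not like n·σX. Exponential(1) (mean 1, sd 1) is an arbitrary choice.

```python
import random
import statistics

# SD(X_1 + ... + X_n) = sqrt(n) * sigma_X for iid variables.
# Illustrated with iid Exponential(1) draws (mean 1, sd 1).
random.seed(3)
n, reps = 25, 20_000
sums = [sum(random.expovariate(1.0) for _ in range(n)) for _ in range(reps)]

print(round(statistics.mean(sums), 2))   # ~ n * mu_X = 25
print(round(statistics.stdev(sums), 2))  # ~ sqrt(n) * sigma_X = 5, not 25
```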
Bernoulli Random Variable
E(B) = p, Var(B) = p(1 − p)
*Note that variance is greatest when p = 0.5

Binomial Random Variable
• n, the number of Bernoulli trials in the sequence
• p, the probability of success for each Bernoulli trial
If X ~ Bi(n, p), then µX = np, σ²X = np(1 − p)
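A sketch confirming the binomial moment formulas directly from the pmf (math.comb is in the Python standard library); the values n = 20, p = 0.3 are arbitrary.

```python
import math

# If X ~ Bi(n, p): P(X = k) = C(n, k) p^k (1-p)^(n-k),
# with mean np and variance np(1 - p).
n, p = 20, 0.3

pmf = [math.comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
mean = sum(k * q for k, q in enumerate(pmf))
var = sum((k - mean) ** 2 * q for k, q in enumerate(pmf))

print(round(mean, 6), n * p)           # both ~ 6.0
print(round(var, 6), n * p * (1 - p))  # both ~ 4.2
```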

Poisson Random Variable

A Poisson random variable X has a single parameter, λ, which specifies the rate at which events occur.

pX(x) = P(X = x) = e^(−λ) λ^x / x!, x = 0, 1, 2, ...
E(X) = λ, Var(X) = λ
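The pmf translates directly into code; λ = 3 below is a made-up rate, and the range cutoff of 100 is just a practical truncation of the infinite sum.

```python
import math

def poisson_pmf(x, lam):
    """P(X = x) = e^(-lam) * lam^x / x! for x = 0, 1, 2, ..."""
    return math.exp(-lam) * lam**x / math.factorial(x)

lam = 3.0
probs = [poisson_pmf(x, lam) for x in range(100)]
print(sum(probs))                               # ~1.0: pmf sums to 1
print(sum(x * p for x, p in enumerate(probs)))  # ~3.0: E(X) = lambda
print(sum((x - lam) ** 2 * p
          for x, p in enumerate(probs)))        # ~3.0: Var(X) = lambda
```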
If n is large and p is small, the Bi(n, p) distribution is approximated by the Poisson distribution with rate parameter λ = np, the binomial mean.
• This approximation works well if: n ≥ 100, p ≤ 0.01, np ≤ 10 (see the sketch after the Bell Curve notes)

Bell Curve
• Describe: 1) Symmetric, 2) Unimodal, 3) Bell-Shaped
• Mean, median and mode are the same
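Back to the Poisson approximation above, a quick comparison under the stated conditions (n = 1000, p = 0.005, so np = 5 is an arbitrary choice meeting the rule of thumb): the Bi(n, p) and Poisson(np) probabilities agree to several decimal places.

```python
import math

# Poisson approximation to the binomial: for large n and small p,
# Bi(n, p) ~ Poisson(np). Here n = 1000, p = 0.005 (np = 5) meets
# the rule of thumb n >= 100, p <= 0.01, np <= 10.
n, p = 1000, 0.005
lam = n * p

for k in range(8):
    binom = math.comb(n, k) * p**k * (1 - p) ** (n - k)
    pois = math.exp(-lam) * lam**k / math.factorial(k)
    print(k, round(binom, 5), round(pois, 5))  # columns nearly identical
```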
