Tuesday, 8 October 2013

Normal Distribution

Normal Distribution Definition

 The normal distributions are a very important class of statistical distributions. All normal distributions are symmetric and have bell-shaped density curves with a single peak; such a curve describes how the values of many random variables cluster symmetrically about their mean.
A normal distribution in a variate X with mean mu and variance sigma^2 is a statistical distribution with probability density function

    P(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/(2\sigma^2)}    (1)
on the domain x in (-infty,infty). While statisticians and mathematicians uniformly use the term "normal distribution" for this distribution, physicists sometimes call it a Gaussian distribution and, because of its curved flaring shape, social scientists refer to it as the "bell curve." Feller (1968) uses the symbol phi(x) for P(x) in the above equation, but then switches to n(x) in Feller (1971).
de Moivre developed the normal distribution as an approximation to the binomial distribution, and it was subsequently used by Laplace in 1783 to study measurement errors and by Gauss in 1809 in the analysis of astronomical data (Havil 2003, p. 157).
The normal distribution is implemented in Mathematica as NormalDistribution[mu, sigma].
The so-called "standard normal distribution" is given by taking mu=0 and sigma^2=1 in a general normal distribution. An arbitrary normal distribution can be converted to a standard normal distribution by changing variables to Z=(X-mu)/sigma, so dz=dx/sigma, yielding

    P(x)\,dx = \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}\, dz.    (2)
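The effect of the change of variables can be checked numerically. The following is a small Python sketch using only the standard library (the helper name normal_pdf is illustrative, not from any particular package):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density P(x) of a normal distribution with mean mu and std dev sigma."""
    z = (x - mu) / sigma  # the standardizing change of variables Z = (X - mu)/sigma
    return math.exp(-z * z / 2) / (sigma * math.sqrt(2 * math.pi))

# sigma * P(x) equals the standard normal density evaluated at z, which is
# exactly the statement P(x) dx = phi(z) dz with dz = dx/sigma:
x, mu, sigma = 3.0, 1.0, 2.0
z = (x - mu) / sigma
print(sigma * normal_pdf(x, mu, sigma))  # matches normal_pdf(z)
print(normal_pdf(z))
```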
The Fisher-Behrens problem is the determination of a test for the equality of means for two normal distributions with different variances.
The normal distribution function Phi(z) gives the probability that a standard normal variate assumes a value in the interval [0,z]:

    \Phi(z) = \frac{1}{\sqrt{2\pi}} \int_0^z e^{-x^2/2}\, dx    (3)

            = \frac{1}{2}\,\mathrm{erf}\!\left(\frac{z}{\sqrt{2}}\right),    (4)
where erf is the error function. Neither Phi(z) nor erf can be expressed in closed form using finitely many additions, subtractions, multiplications, and root extractions, so both must be either computed numerically or otherwise approximated.
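Since erf must be evaluated numerically anyway, the standard library's math.erf can be used directly; a minimal Python sketch of equation (4):

```python
import math

def Phi(z):
    """Probability that a standard normal variate falls in [0, z] (Eq. 4)."""
    return 0.5 * math.erf(z / math.sqrt(2))

print(Phi(1.0))  # ~0.3413: about 34% of the mass lies within one sigma above the mean
```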

Normal Distribution Curve

[Figure: a binomial distribution and its Gaussian limit]
The normal distribution is the limiting case of a discrete binomial distribution P_p(n|N) as the sample size N becomes large, in which case P_p(n|N) is normal with mean and variance

    \mu = Np    (5)

    \sigma^2 = Npq,    (6)

with q = 1-p.
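This limit can be watched numerically. The Python sketch below (helper names are illustrative) compares the binomial pmf with the normal density whose mean and variance match via equations (5) and (6):

```python
import math

def binom_pmf(n, N, p):
    """Binomial probability P_p(n|N)."""
    return math.comb(N, n) * p**n * (1 - p)**(N - n)

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu)**2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

N, p = 1000, 0.3
mu, sigma = N * p, math.sqrt(N * p * (1 - p))  # Eqs. (5)-(6)

# Near the mean, the binomial pmf is already very close to the normal density:
for n in (280, 300, 320):
    print(n, binom_pmf(n, N, p), normal_pdf(n, mu, sigma))
```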
The distribution P(x) is properly normalized, since

    \int_{-\infty}^{\infty} P(x)\, dx = 1.    (7)
The cumulative distribution function, which gives the probability that a variate will assume a value <= x, is then the integral of the normal distribution,

    D(x) = \int_{-\infty}^{x} P(x')\, dx'    (8)

         = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{x} e^{-(x'-\mu)^2/(2\sigma^2)}\, dx'    (9)

         = \frac{1}{2}\left[1 + \mathrm{erf}\!\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right],    (10)

where erf is the so-called error function.
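The closed form in equation (10) translates directly into code; a short Python sketch using the standard library's math.erf (the function name normal_cdf is illustrative):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Cumulative distribution function D(x) of Eq. (10)."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

print(normal_cdf(0.0))   # 0.5 by symmetry about the mean
print(normal_cdf(1.96))  # ~0.975, the basis of the familiar 95% two-sided interval
```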
Normal distributions have many convenient properties, so random variates with unknown distributions are often assumed to be normal, especially in physics and astronomy. Although this can be a dangerous assumption, it is often a good approximation due to a surprising result known as the central limit theorem: the mean of a set of variates drawn from any distribution with finite mean and variance tends to a normal distribution as the number of variates grows. Many common attributes such as test scores and height follow roughly normal distributions, with few members at the high and low ends and many in the middle.
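The central limit theorem is easy to see in a simulation. This Python sketch (a rough illustration, not a proof) averages uniform variates, which have mean 1/2 and variance 1/12; their sample means should be approximately normal with variance (1/12)/n:

```python
import random
import statistics

random.seed(42)  # fixed seed so the run is reproducible

# 5000 sample means, each of 1000 uniform variates on [0, 1):
means = [statistics.fmean(random.random() for _ in range(1000))
         for _ in range(5000)]

print(statistics.fmean(means))      # ~0.5, the mean of the underlying uniform
print(statistics.pvariance(means))  # ~(1/12)/1000 ~ 8.3e-5
```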
Because they occur so frequently, there is an unfortunate tendency to invoke normal distributions in situations where they may not be applicable. As Lippmann stated, "Everybody believes in the exponential law of errors: the experimenters, because they think it can be proved by mathematics; and the mathematicians, because they believe it has been established by observation" (Whittaker and Robinson 1967, p. 179).
Among the amazing properties of the normal distribution are that the normal sum distribution and normal difference distribution obtained by respectively adding and subtracting variates X and Y from two independent normal distributions with arbitrary means and variances are also normal! The normal ratio distribution obtained from X/Y has a Cauchy distribution.
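A quick simulation (again illustrative, not a proof) checks the sum-distribution claim: for independent X ~ N(1, 2^2) and Y ~ N(-3, 1^2), the sum X + Y should have mean 1 + (-3) = -2 and variance 4 + 1 = 5:

```python
import random
import statistics

random.seed(0)
n = 100_000
xs = [random.gauss(1, 2) for _ in range(n)]   # X with mean 1, std dev 2
ys = [random.gauss(-3, 1) for _ in range(n)]  # Y with mean -3, std dev 1
sums = [x + y for x, y in zip(xs, ys)]

print(statistics.fmean(sums))      # ~ -2
print(statistics.pvariance(sums))  # ~ 5
```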
Using the k-statistic formalism, the unbiased estimator for the variance of a normal distribution is given by

    \hat{\sigma}^2 = \frac{N}{N-1}\, s^2,    (11)

where

    s^2 = \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})^2,    (12)

so

    \mathrm{var}(\bar{x}) = \frac{s^2}{N-1}.
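Equations (11) and (12) in Python; note that the standard library's statistics.variance applies the N/(N-1) Bessel correction automatically (the data values below are made up for illustration):

```python
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
N = len(data)
xbar = sum(data) / N

s2 = sum((x - xbar) ** 2 for x in data) / N  # Eq. (12): biased sample variance
unbiased = N / (N - 1) * s2                  # Eq. (11): unbiased estimator

print(s2, unbiased)
# statistics.variance builds in the N/(N-1) factor, so it matches `unbiased`:
print(statistics.variance(data))
```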
