Inverse Transform Method for simulating continuous random variables.

Let X be a random variable with c.d.f. F_X(x). Since F_X(x) is a nondecreasing function, the inverse function F_X^{-1}(y) may be defined for any value of y between 0 and 1 as:

F_X^{-1}(y) = inf{ x : F_X(x) >= y },             0 <= y <= 1

That is, F_X^{-1}(y) is the smallest value x for which F_X(x) >= y; when F_X is continuous and strictly increasing, it is the unique x for which F_X(x) = y.

Let us prove that if U is uniformly distributed over the interval (0, 1), then

X = F_X^{-1}(U)

has c.d.f. F_X(x).

The proof:

P(X <= x) = P(F_X^{-1}(U) <= x) = P(U <= F_X(x)) = F_X(x),

where the second equality holds because F_X^{-1}(U) <= x if and only if U <= F_X(x), by the monotonicity of F_X.

So to obtain a value x of a random variable X, generate a value u of a uniform random variable U, compute F_X^{-1}(u), and set x equal to it.
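The recipe above can be sketched in a few lines of Python. The function name `inverse_transform` and the callable-argument interface are our own choices, not from the text; any (generalized) inverse c.d.f. can be plugged in.

```python
import random

def inverse_transform(F_inv, n, rng=random.random):
    """Draw n samples of X = F_inv(U), U ~ U(0, 1).

    F_inv must be the (generalized) inverse c.d.f. of the target
    distribution; rng is any source of U(0, 1) variates.
    """
    return [F_inv(rng()) for _ in range(n)]
```

For example, `inverse_transform(lambda u: 2.0 * u, 1000)` draws from U(0, 2), since the inverse c.d.f. of U(0, 2) is F^{-1}(u) = 2u.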

Example 1:  Generate a random variable from the uniform distribution U(a, b):

The c.d.f. is

F_X(x) = (x - a)/(b - a),             a <= x <= b

Setting u = (x - a)/(b - a) and solving for x gives

X = a + (b - a)U

Example 2:  Generate a random variable from the exponential distribution Exp(λ):

The c.d.f. is

F_X(x) = 1 - e^{-λx},             x >= 0

Setting u = 1 - e^{-λx} and solving for x gives

X = -ln(1 - U)/λ

Since 1 - U is itself uniformly distributed over (0, 1), we may equivalently set X = -ln(U)/λ.
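A minimal sketch of the exponential generator, using the -ln(U)/λ shortcut; the function name `exponential` and the optional `u` argument (handy for testing with a fixed uniform value) are our own.

```python
import math
import random

def exponential(lam, u=None):
    """Exp(lam) via inverse transform.

    Solving u = 1 - exp(-lam * x) gives x = -ln(1 - u)/lam;
    since 1 - U is also U(0, 1), we use -ln(u)/lam directly.
    """
    if u is None:
        u = random.random()
    return -math.log(u) / lam
```

With many samples the empirical mean should approach 1/λ, which gives a quick sanity check.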

Example 3:  Generate a random variable with p.d.f.

f(x) = 2x,             0 <= x <= 1

The c.d.f. is F_X(x) = x^2, so

U = X^2

and

X = F_X^{-1}(U) = U^{1/2}
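A one-function sketch of this example; `sqrt_law` is our own name, and the optional `u` argument is there only to make the transform easy to test.

```python
import math
import random

def sqrt_law(u=None):
    """Inverse transform for f(x) = 2x on (0, 1): F(x) = x**2, X = sqrt(U)."""
    u = random.random() if u is None else u
    return math.sqrt(u)
```

The mean of this distribution is ∫ x · 2x dx over (0, 1) = 2/3, which the sample mean should approach.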

Example 4:  Let X_1, …, X_n be i.i.d. random variables distributed with c.d.f. F_X.

Define Y = max(X_1, …, X_n) and Z = min(X_1, …, X_n).

Generate Y and Z.

The distributions of Y and Z are respectively:

F_Y(x) = [F_X(x)]^n    and    F_Z(x) = 1 - [1 - F_X(x)]^n

In the particular case where X = U we have

F_Y(x) = x^n,    so    Y = U^{1/n}

and

F_Z(x) = 1 - (1 - x)^n,    so    Z = 1 - (1 - U)^{1/n}
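Assuming the max/min reading of this example, each extreme of n i.i.d. uniforms can be generated from a single uniform draw. The function names below are our own.

```python
import random

def max_of_uniforms(n, v=None):
    """Max of n i.i.d. U(0, 1): F(x) = x**n, so X = V**(1/n) for one uniform V."""
    v = random.random() if v is None else v
    return v ** (1.0 / n)

def min_of_uniforms(n, v=None):
    """Min of n i.i.d. U(0, 1): F(x) = 1 - (1 - x)**n, so X = 1 - (1 - V)**(1/n)."""
    v = random.random() if v is None else v
    return 1.0 - (1.0 - v) ** (1.0 / n)
```

Note that one uniform replaces n uniforms: E[max] = n/(n + 1), so a sample mean gives a quick check.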

To apply this method, F_X(x) must exist in a form for which the corresponding inverse transform can be found analytically. Distributions in this group are the exponential, uniform, Weibull, logistic, and Cauchy. Unfortunately, for many probability distributions it is either impossible or extremely difficult to find the inverse transform, that is, to solve

F_X(x) = u

with respect to x.

Inverse Transform Method for simulating discrete random variables.

The Inverse Transform Method for simulating continuous random variables has an analog in the discrete case. For instance, suppose we want to simulate a random variable X having probability mass function

P(X = x_j) = p_j,             j = 0, 1, …,    where Σ_j p_j = 1.

To simulate X for which P(X = x_j) = p_j, let U be uniformly distributed over (0, 1), and set

X = x_j    if    p_0 + … + p_{j-1} <= U < p_0 + … + p_j

As

P(X = x_j) = P(p_0 + … + p_{j-1} <= U < p_0 + … + p_j) = p_j,

we see that X has the desired distribution.

Example 1:  Generating a Bernoulli random variable.

P(X = 1) = p,  P(X = 0) = 1 - p

The algorithm:

1) generate U ~ U(0, 1)

2) if U < 1 - p then X = 0, otherwise X = 1. (Equivalently, since 1 - U is also uniform over (0, 1), one may set X = 1 if U < p and X = 0 otherwise.)
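The two-line Bernoulli generator, following the algorithm above (the function name and the optional `u` argument are our own):

```python
import random

def bernoulli(p, u=None):
    """Bernoulli(p) via inverse transform: X = 0 if U < 1 - p, else X = 1."""
    u = random.random() if u is None else u
    return 0 if u < 1.0 - p else 1
```

A long-run frequency of ones near p confirms the construction.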

Example 2:  Let X be a random variable distributed

P(X = 1) = 0.2,    P(X = 2) = 0.3,    P(X = 3) = 0.5.

Generate a random variable with the above distribution.

Algorithm 1:

1)  generate U ~ U(0,1)

2)  if   U<0.2  -->  X=1

3)  if   U<0.5  -->  X=2

4)  if   U<1     -->  X=3

Algorithm 2:

1)  generate U ~ U(0,1)

2)  if   U<0.5  -->  X=3

3)  if   U<0.8  -->  X=2

4)  if   U<1     -->  X=1
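Both algorithms invert the same distribution but differ in efficiency: Algorithm 1 checks the least probable value first (expected number of comparisons 0.2·1 + 0.3·2 + 0.5·3 = 2.3), while Algorithm 2 checks the most probable value first (0.5·1 + 0.3·2 + 0.2·3 = 1.7). A sketch of both, returning the value and the number of comparisons used (our own instrumentation, not part of the original algorithms):

```python
import random

def algo1(u):
    """Checks the least probable value first. Returns (value, comparisons)."""
    if u < 0.2:
        return 1, 1
    if u < 0.5:
        return 2, 2
    return 3, 3

def algo2(u):
    """Checks the most probable value first. Returns (value, comparisons)."""
    if u < 0.5:
        return 3, 1
    if u < 0.8:
        return 2, 2
    return 1, 3
```

Both produce P(X = 3) = 0.5, P(X = 2) = 0.3, P(X = 1) = 0.2; only the search order (and hence the comparison count) differs.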

Example 3:  Generating a geometric random variable.

Suppose we want to simulate  X  such that

P(X = i) = p(1 - p)^(i-1),               i >= 1

X can be thought of as representing the time of the first success when independent trials, each of which is a success with probability  p, are performed.

As

P(X > j - 1) = (1 - p)^(j-1),

since X > j - 1 means the first j - 1 trials are all failures, the c.d.f. of X is F(j) = 1 - (1 - p)^j.

We can simulate such a random variable  by generating a random variable U and then setting X equal to that value j for which

1 - (1 - p)^(j-1) <= U < 1 - (1 - p)^j

or, equivalently, for which

(1 - p)^j < 1 - U <= (1 - p)^(j-1)

or, equivalently (since 1 - U is also uniform over (0, 1), we may replace 1 - U by U), for which

(1 - p)^j < U <= (1 - p)^(j-1)

We can thus define X by

X = min{ j : (1 - p)^j < U } = min{ j : j > log(U)/log(1 - p) }

= 1 + [ log(U)/log(1 - p) ]

where [ · ] denotes the integer part.
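The closed-form expression above gives a one-line generator (the function name is our own; `int()` serves as the integer part, since log(U)/log(1 - p) is positive):

```python
import math
import random

def geometric(p, u=None):
    """Geometric(p) on {1, 2, ...}: X = 1 + int(log(U)/log(1 - p))."""
    u = random.random() if u is None else u
    return 1 + int(math.log(u) / math.log(1.0 - p))
```

The mean of this distribution is 1/p, so a sample mean of about 4 is expected for p = 0.25.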

Example 4:  Simulating a binomial random variable.

A binomial (n, p) random variable can be most easily simulated as the sum of  n  independent Bernoulli random variables.

That is, if U_1, …, U_n are independent uniform U(0, 1) variables, then setting

X_i = 1 if U_i < p,    X_i = 0 otherwise,

it follows that X = X_1 + … + X_n is a binomial random variable with parameters n and p.

One difficulty with the above procedure is that it requires the generation of n random numbers. To show how to reduce the number of random numbers needed, note first that the above procedure does not use the actual value of a random number U but only whether or not it exceeds p.
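The sum-of-Bernoullis construction is a few lines of Python (the function name is our own):

```python
import random

def binomial_sum(n, p, rng=random.random):
    """Binomial(n, p) as the count of U_i < p among n independent uniforms."""
    return sum(1 for _ in range(n) if rng() < p)
```

Note that each call consumes n uniforms, which motivates the single-uniform algorithm that follows.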

Recall that

P(X = i) = (n choose i) p^i (1 - p)^(n-i),   i = 0, 1, …, n

We employ the inverse transform method by making use of the recursive identity

P(X = i + 1) = [p/(1 - p)] [(n - i)/(i + 1)] P(X = i),    0 <= i < n

With i denoting the value currently under consideration, pr = P(X = i) the probability that X is equal to i, and F = F(i) the probability that X is less than or equal to i, the algorithm can be expressed as follows:

1.      Generate a random number U.

2.      c = p/(1 - p),  i = 0,  pr = (1 - p)^n,  F = pr.

3.      If  U < F, set  X = i  and stop.

4.      pr = [c(n - i)/(i + 1)] pr,  F = F + pr,  i = i + 1.

5.      Go to 3.
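The five steps above translate directly into code (the function name and the `i < n` loop guard, added to protect against floating-point round-off in F, are our own):

```python
import random

def binomial_inverse(n, p, u=None):
    """Binomial(n, p) via inverse transform, using one uniform U.

    Uses the recursion P(X = i+1) = [p/(1-p)] * [(n-i)/(i+1)] * P(X = i).
    """
    u = random.random() if u is None else u
    c = p / (1.0 - p)
    i = 0
    pr = (1.0 - p) ** n     # P(X = 0)
    F = pr                  # F(0)
    while u >= F and i < n:
        pr = c * (n - i) / (i + 1) * pr
        F += pr
        i += 1
    return i
```

For fixed u the output can be checked against the cumulative binomial probabilities by hand.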

The preceding algorithm first checks whether X = 0, then X = 1, and so on. Hence, the number of searches it makes is 1 more than the value of X. Therefore, on average, it will take 1 + np searches to generate X. Since a binomial (n, p) random variable represents the number of successes in n independent trials when each is a success with probability p, it follows that such a random variable can also be generated by subtracting from n the value of a binomial (n, 1 - p). Hence, when p > 1/2, we can generate a binomial (n, 1 - p) random variable by the above method and subtract its value from n to obtain the desired generation.

Example 5:  Simulating a Poisson random variable.

The random variable X is Poisson with mean λ if

P(X = i) = e^{-λ} λ^i / i!,    i = 0, 1, …

The key to using the Inverse Transform Method to generate such a random variable is the following identity:

P(X = i + 1) = [λ/(i + 1)] P(X = i),    i >= 0

Upon using the above recursion to compute the Poisson probabilities as they become needed, the inverse transform algorithm for generating a Poisson random variable with mean λ can be expressed as follows. (The quantity i refers to the value presently under consideration; p = P(X = i) is the probability that X is equal to i; and F = F(i) is the probability that X is less than or equal to i.)

1.      Generate a random number U.

2.      i = 0,  p = e^{-λ},  F = p.

3.      If  U < F, set  X = i  and stop.

4.      p = λp/(i + 1),  F = F + p,  i = i + 1.

5.      Go to 3.
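The Poisson algorithm above, as a sketch in Python (the function name is our own):

```python
import math
import random

def poisson_inverse(lam, u=None):
    """Poisson(lam) via inverse transform.

    Uses the recursion P(X = i+1) = lam/(i+1) * P(X = i),
    starting from P(X = 0) = exp(-lam).
    """
    u = random.random() if u is None else u
    i = 0
    p = math.exp(-lam)   # P(X = 0)
    F = p                # F(0)
    while u >= F:
        p = lam * p / (i + 1)
        F += p
        i += 1
    return i
```

For fixed u the output can be checked against the cumulative Poisson probabilities directly.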

To see that the above algorithm does indeed generate a Poisson random variable with mean λ, note that it first generates a random number U and then checks whether or not U < e^{-λ} = P(X = 0). If so, it sets X = 0. If not, it computes P(X = 1) = λe^{-λ} by using the recursion. It now checks whether U < e^{-λ} + λe^{-λ} (where the right-hand side is the new value of F), and if so it sets X = 1; and so on.

The above algorithm successively checks whether the Poisson value is 0, then whether it is 1, then 2, and so on. Thus, the number of comparisons needed will be 1 greater than the generated value of the Poisson. Hence, on average, the above will need to make 1 + λ searches. Whereas this is fine when λ is small, it can be greatly improved upon when λ is large. Indeed, since a Poisson random variable with mean λ is most likely to take on one of the two integral values closest to λ, a more efficient algorithm would first check one of these values, rather than starting at 0 and working upward. The algorithm lets I = int(λ) and computes P(X = I) (by first computing its logarithm, -λ + I log λ - log I!, and then exponentiating). It then uses the above recursion to determine F(I). It now generates a Poisson random variable with mean λ by generating a random number U, and then noting whether or not X <= I by seeing whether or not U <= F(I). It then searches downward starting from X = I in the case where X <= I, and upward starting from X = I + 1 otherwise.
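A sketch of this improved search, under the reading above; the function name is our own, and `math.lgamma(i + 1)` stands in for log i! in the log-space evaluation of P(X = i).

```python
import math

def poisson_search(lam, u):
    """Poisson(lam) via inverse transform, searching from I = int(lam).

    P(X = i) for i <= I is evaluated in log space
    (log P = -lam + i*log(lam) - log(i!)) to avoid overflow for large lam.
    """
    I = int(lam)
    p = [math.exp(-lam + i * math.log(lam) - math.lgamma(i + 1))
         for i in range(I + 1)]
    F = sum(p)  # F(I) = P(X <= I)
    if u < F:
        # search downward from I for the smallest j with F(j - 1) <= u
        j = I
        while j > 0 and u < F - p[j]:
            F -= p[j]
            j -= 1
        return j
    # search upward from I + 1, extending F with p_j = lam * p_{j-1} / j
    j, pj = I, p[I]
    while u >= F:
        j += 1
        pj = lam * pj / j
        F += pj
    return j
```

For the same u this returns the same value as the simple upward search, but the number of comparisons is now governed by |X - λ| rather than by X itself.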

The number of searches needed by the algorithm is roughly 1 more than the absolute difference between the random variable X and its mean λ. Since, for λ large, a Poisson is (by the central limit theorem) approximately normal with mean and variance both equal to λ, it follows that

average number of searches ≈ 1 + E(|X - λ|),    where X ~ N(λ, λ)

= 1 + √λ E(|X - λ|/√λ)

= 1 + √λ E(|Z|),    where Z ~ N(0, 1)

= 1 + 0.798 √λ    (since E|Z| = √(2/π) ≈ 0.798)

That is, the average number of searches grows with the square root of λ rather than with λ as λ becomes larger and larger.