In this course, 95% of our efforts will be devoted
to events that have numeric outcomes (e.g., the return on an asset, the selling
price of a house, the monthly demand for a product or service, etc.). A variable whose numerical value is determined by the outcome of a probabilistic event is called a random variable.
To describe a random variable completely, we need to
know two things: (1) each possible outcome; and (2) the probability of each
outcome. If we have a complete
description of both, then we say we know the random variable’s distribution. There are two general types of random
variables (or distributions) encountered in this course: discrete and continuous.
A discrete random variable is one whose outcomes can
be listed (like the roll of a die). A
continuous random variable is one whose outcomes are so numerous they cannot be
listed. An example of a continuous random
variable would be the time elapsed between customers entering a retail establishment. If measured to infinitesimal accuracy, one
could not list all of the possibilities.
However, if we only measured elapsed times to the nearest second (or
minute), then the distribution of elapsed times would be discrete. In practice, continuous distributions are
often used as approximations to discrete distributions in instances where the
number of possible outcomes is so large that a continuous distribution makes
the analysis easier.
Example: A Discrete Distribution. Define a random variable whose value is the sum
of the dots obtained from rolling a pair of dice. Construct the probability
distribution for this random variable.
Outcomes        2     3     4     5     6     7     8     9     10    11    12
Probabilities  1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36
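As a cross-check, this distribution can be generated by brute-force enumeration of the 36 equally likely ordered pairs of dice. Here is a minimal Python sketch (the choice of Python, and the names used, are ours for illustration):

from fractions import Fraction
from collections import Counter

# Tally the sum of the dots over all 36 equally likely ordered pairs.
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))

# Each ordered pair has probability 1/36, so P(X = s) = count(s)/36.
for s in sorted(counts):
    print(f"P(X = {s}) = {Fraction(counts[s], 36)}")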
We frequently summarize information for a discrete random variable by means of its probability histogram. The probability histogram is simply a visual display of the outcomes (plotted along the x-axis) and their associated probabilities, represented by the heights of bars (along the y-axis).
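As an illustration, the probability histogram for the dice example can be drawn with matplotlib; this is one tool among many, not a requirement of the course:

import matplotlib.pyplot as plt

outcomes = list(range(2, 13))                      # possible sums: 2 through 12
probs = [c / 36 for c in (1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1)]

plt.bar(outcomes, probs, width=0.9)                # one bar per outcome
plt.xticks(outcomes)
plt.xlabel("Outcome (sum of the dice)")
plt.ylabel("Probability")
plt.show()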
Measures of a Distribution: Expectation and Variance (Discrete Case)
The expected value or mean of a discrete random variable ("r.v." for short) X is denoted by E(X) or μ and given by the formula

μ = E(X) = Σ xi pi

where the sum runs over all possible values xi, and pi is the probability of xi.
The expected value
is the “theoretical” average obtained by weighting each outcome by its respective probability
and then summing. For the sum of the
dice we have
Possible values (xi)   Probability (pi)   Product (xi pi)
        2                   1/36               2/36
        3                   2/36               6/36
        4                   3/36              12/36
        5                   4/36              20/36
        6                   5/36              30/36
        7                   6/36              42/36
        8                   5/36              40/36
        9                   4/36              36/36
       10                   3/36              30/36
       11                   2/36              22/36
       12                   1/36              12/36
                            Sum:             252/36 = 7  (= μ = E(X))
Plot the value 7 on the probability histogram. Observe that E(X) is a measure of centrality.
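The weighted sum in the table can be reproduced in a few lines of Python; a sketch using exact fractions:

from fractions import Fraction

xs = range(2, 13)                                  # possible values xi
ps = [Fraction(c, 36) for c in (1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1)]   # pi

mu = sum(x * p for x, p in zip(xs, ps))            # E(X) = sum of xi*pi
print(mu)                                          # prints 7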
Example. You sell big electric motors. During a given week, demand for your 100-hp
motor is 0, 1, or 4 (four come on a pallet).
The distribution is described below.
Demand        0     1     4
Probability  .45   .40   .15
What is your expected demand for a week?
E(X) = (0)(.45) + (1)(.40) + (4)(.15) = 1.00
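The same calculation as a short Python sketch:

# Demand distribution: value -> probability
demand = {0: 0.45, 1: 0.40, 4: 0.15}
expected = sum(x * p for x, p in demand.items())   # E(X) = sum of xi*pi
print(expected)                                    # prints 1.0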
Another measure of interest is the expected value of the expression (X – E(X))², called the variance of X and given by the formula

Var(X) = E[(X – E(X))²] = E(X²) – [E(X)]²

The variance also goes by the Greek letter σ². The formula looks intimidating, but a few simple examples will clarify its calculation and help us understand what it tells us. Note that the mean of X is needed before computing the variance.
Recall that pi is
the probability that X takes on the value
xi.
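Anticipating those examples, the sketch below (again for the sum of two dice, with names of our own choosing) confirms that the defining form E[(X – E(X))²] and the shortcut E(X²) – [E(X)]² give the same answer:

from fractions import Fraction

xs = range(2, 13)
ps = [Fraction(c, 36) for c in (1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1)]

mu = sum(x * p for x, p in zip(xs, ps))                    # E(X) = 7

# Defining form: Var(X) = E[(X - E(X))^2]
var_def = sum((x - mu) ** 2 * p for x, p in zip(xs, ps))

# Shortcut: Var(X) = E(X^2) - [E(X)]^2
var_short = sum(x * x * p for x, p in zip(xs, ps)) - mu ** 2

print(var_def, var_short)                                  # both print 35/6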