# LIFE = PROBABILITY

Written by Shuaib Abdul-Rahman · 3 min read

Life, they say, is a game of chance, and it suffices to say that a proper appreciation of PROBABILITY may help us understand why we get what we get in LIFE, as individuals and as groups.

Probability distributions are used to analyse and predict future outcomes, and each rests on a mathematical formula. There are many probability distributions, each with its own significance in real-life applications.

The degree of risk and uncertainty in this LIFE genuinely calls for a scientific approach to decision making, which we carry out consciously or unconsciously. Hence my focus on PROBABILITY, which I shall introduce with these extracts –

Some of us appealed, by analogy, to prior experiences tossing coins and dice. Others argued that, because a tetrahedron is perfectly symmetrical, one side cannot fall face up (or down) more frequently than any other. In either case, we brought to bear an understanding of the fundamental characteristics of the underlying process and, through that understanding, made our assessment.
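The symmetry argument above can be checked empirically. A minimal sketch, simulating a fair four-sided die with Python's standard library (the seed and roll count are arbitrary choices for illustration):

```python
import random
from collections import Counter

random.seed(1)  # fixed seed so the run is reproducible

# Roll a perfectly symmetrical tetrahedron 100,000 times
rolls = [random.randint(1, 4) for _ in range(100_000)]
freq = Counter(rolls)

# By symmetry, each face should land with relative frequency near 1/4
for face in sorted(freq):
    print(face, freq[face] / len(rolls))
```

With enough rolls, every face's relative frequency settles near 0.25 – the simulation recovers the same assessment that the symmetry argument gives us for free.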

Similar opportunities to take advantage of the fundamental characteristics of an underlying process arise in the assessment of many probability distributions. When we can build a reasonable model of the underlying process, we can use the model to derive the probability distribution theoretically. Then, if the model is a reasonable representation of the actual process, the derived theoretical probability distribution will be a good approximation of the actual probability distribution of the uncertainty. When the underlying process is well-understood and easily modeled—as it is for the tetrahedron and, as we shall see, for our errant, defect-producing machine—we can derive the probability distribution from fundamental principles and forgo the chore of estimating it subjectively. This note introduces four common underlying processes and the analytical probability distribution used to forecast the outcomes of each. The note also illustrates how to obtain a particular probability from the four resulting probability distributions. The four underlying processes and associated probability distributions discussed are:

1. A simple counting process with a finite number of repetitions that results in the binomial distribution

2. An accumulation process—with a finite number of repetitions—that results in the normal distribution

3. A second counting process, this time with an indefinite number of repetitions, that results in the Poisson distribution

4. A waiting-time process that ignores history (time already waited) and results in the exponential distribution.

It is very important to emphasize, however, that while these probability distributions have many legitimate applications, they are applicable only when the specific assumptions about the underlying processes are satisfied—that is, when the uncertain quantity is obtained from a process similar to the one used to derive the probability distribution in the first place. When that is not the case, those analytical distributions cannot be used; instead, the probability distribution of the uncertain quantity must be obtained by the more arduous, but generally more applicable, method of direct assessment.

The Binomial Distribution – The first simple underlying process that we consider involves a counting process in which we count the number of occurrences of some event. In this case, all we are concerned about is whether or not the event occurs, so we consider only two possible outcomes: The event occurs or it doesn’t occur. We call one of these outcomes a “success” and the other a “failure.”
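Counting successes in a fixed number of independent, identical trials gives the binomial formula P(k) = C(n, k) · p^k · (1 − p)^(n−k). A minimal sketch, using hypothetical numbers for the defect-producing machine mentioned earlier (10 items inspected, 10% defect rate):

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(exactly k successes in n independent trials, each with success probability p)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Hypothetical example: probability of exactly 2 defects among 10 items
# when each item is independently defective with probability 0.1
print(binomial_pmf(2, 10, 0.1))  # ≈ 0.1937
```

Note that the probabilities over all possible counts k = 0, …, n sum to 1, as any probability distribution must.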

The Normal Distribution – If, instead of counting, we added 1 for each success and 0 for each failure, we could have considered it an accumulation process (adding or accumulating the number of 1’s). The process we consider now is more general than the simple accumulation process just described, for two reasons. First, the value of the uncertain quantity generated in each trial is not restricted to 0 or 1. In fact, the uncertain quantity can take on any value whatsoever. Second, each repetition of the process is not required to be identical to all the others. However, the number of repetitions of the process (or trials) must still be fixed and finite, and each repetition of the process must be independent of all the others. Under these circumstances, if the number (n) of repetitions of the process is large enough, the resulting probability distribution of the sum of the uncertain quantities generated during the n trials is approximately equal to the normal distribution.
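This accumulation idea is the central limit theorem in action, and it can be seen directly. A minimal sketch, assuming a hypothetical accumulation of 50 independent uniform(0, 1) quantities per trial: the sum has mean n/2 and variance n/12, and its distribution is approximately normal.

```python
import random
from math import erf, sqrt

def normal_cdf(x: float, mu: float, sigma: float) -> float:
    """P(normal random variable with mean mu, std dev sigma is <= x)."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

random.seed(0)
n, trials = 50, 10_000

# Each trial accumulates n independent uniform(0, 1) draws
sums = [sum(random.random() for _ in range(n)) for _ in range(trials)]

# CLT prediction: sum is approximately normal with these parameters
mu, sigma = n / 2, sqrt(n / 12)

# Fraction of simulated sums falling below the mean; should be close to 0.5,
# matching the normal approximation exactly at its center
empirical = sum(s <= mu for s in sums) / trials
print(empirical, normal_cdf(mu, mu, sigma))
```

Note that none of the individual quantities here is normal – the bell shape emerges purely from summing many independent contributions.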

The Poisson Distribution – When we are counting a number of events and the number of opportunities for the event to occur is unspecified, the resulting probability distribution of the number of events will be a distribution defined on the whole numbers (0, 1, 2, …). In this sense, the distribution is like the binomial distribution, which is defined on the whole numbers from 0 to n. However, because there is an unspecified number of opportunities for the event to occur, there is no concept of repeated trials. Instead, we have four restrictions on the occurrence of an event.
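When events occur at a known average rate λ with no fixed number of trials, the Poisson formula P(k) = λ^k · e^(−λ) / k! gives the probability of observing exactly k events. A minimal sketch, using a hypothetical arrival rate of 3 events per period:

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """P(exactly k events occur, given an average rate of lam events per period)."""
    return lam**k * exp(-lam) / factorial(k)

# Hypothetical example: events arriving at an average rate of 3 per period.
# Probability that a period passes with no event at all:
print(poisson_pmf(0, 3))  # ≈ 0.0498
```

Unlike the binomial, k is not capped at some n – the distribution is defined on all whole numbers 0, 1, 2, …, and the probabilities still sum to 1.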

The Exponential Distribution – The derivation of our final probability distribution begins once again with an underlying process that generates the occurrence of an event. This time we will be concerned with the time until the next event occurs. The only condition we need to impose on the event-generating process is as follows: The probability of how much longer it will take until an event occurs cannot depend on how long it has already been since the last event occurred. This condition is known as the memoryless property because the underlying process does not remember when the last event occurred. Whenever this condition is satisfied, the resulting probability distribution of the time until the next event is the exponential distribution.
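The memoryless property can be verified numerically from the exponential survival function P(T > t) = e^(−rate·t). A minimal sketch, with a hypothetical rate of 0.5 events per unit time:

```python
from math import exp

def survival(t: float, rate: float) -> float:
    """P(waiting time exceeds t) for an exponential distribution with the given rate."""
    return exp(-rate * t)

rate = 0.5  # hypothetical: 0.5 events per unit time on average

# Memorylessness: having already waited 3 units without an event,
# the chance of waiting at least 2 more equals the unconditional
# chance of waiting at least 2 from the start.
conditional = survival(3 + 2, rate) / survival(3, rate)
print(conditional, survival(2, rate))  # both equal e^(-1) ≈ 0.3679
```

The time already waited cancels out of the ratio, which is exactly the condition the extract imposes on the event-generating process.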

#STAY TUNED
