Why do we need probability? We need it for decision-making: decision analysis under uncertainty, optimization, and sensitivity analysis.
Probability is a numerical measure of the likelihood that an event will occur. A probability value is always assigned on a scale from 0 to 1, where a probability near 1 indicates an event is almost certain to occur and a probability near 0 indicates it is very unlikely to occur.
In statistics, the notion of an experiment differs somewhat from that of an experiment in the physical sciences. In statistical experiments, outcomes are determined by probability: even when an experiment is repeated in exactly the same way, it may produce a different outcome each time. This is why statistical experiments are sometimes called “random experiments.”
Experiment and its sample space
An experiment is any process that generates well-defined outcomes.
A sample space for an experiment is the set of all experimental outcomes.
An experimental outcome is also called a sample point.
Methods used to Assign Probabilities
A. Classical Method: Assigning probabilities is based on the assumption of equally likely outcomes.
Example: Rolling a die (Experiment)
S = {1, 2, 3, 4, 5, 6} – (Sample space)
Probabilities: The six sample points are equally likely, so each has a 1 in 6 chance of occurring.
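The classical assignment for the die example can be sketched in a few lines of Python, using exact fractions so the probabilities sum to exactly 1:

```python
from fractions import Fraction

# Sample space for rolling a fair six-sided die.
sample_space = {1, 2, 3, 4, 5, 6}

# Classical method: all outcomes are assumed equally likely,
# so each sample point gets probability 1 / |S|.
p = {outcome: Fraction(1, len(sample_space)) for outcome in sample_space}

print(p[3])            # Fraction(1, 6)
print(sum(p.values())) # 1 — the probabilities over the sample space sum to one
```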
B. Relative Frequency Method: Assigning probabilities based on experimentation or historical data.
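A minimal sketch of the relative frequency idea, using a simulated coin flip as a stand-in for repeated experimentation (the coin example and trial count are assumptions, not from the text):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Repeat the experiment many times and record how often "heads" occurs.
trials = 10_000
heads = sum(1 for _ in range(trials) if random.random() < 0.5)

# The relative frequency approximates P(heads); it approaches
# the true value 0.5 as the number of trials grows.
estimate = heads / trials
print(estimate)
```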
C. Subjective Method: Assigning probabilities based on judgment.
When economic conditions and a company’s circumstances change rapidly, it might be inappropriate to assign probabilities based solely on historical data. We can use any data available as well as our experience and intuition, but ultimately a probability value should express our degree of belief that the experimental outcome will occur. The best probability estimates are often obtained by combining the estimates from the classical or relative frequency approach with the subjective estimate.
Events and their probabilities
An event is a collection of sample points. The probability of any event is equal to the sum of the probabilities of the sample points in the event.
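Continuing the fair-die example, this rule can be shown directly: the probability of an event is just the sum of its sample-point probabilities.

```python
from fractions import Fraction

# Each sample point of a fair die has probability 1/6.
p = {s: Fraction(1, 6) for s in range(1, 7)}

# Event A = "roll an even number" is a collection of sample points.
A = {2, 4, 6}

# P(A) = sum of the probabilities of the sample points in A.
p_A = sum(p[s] for s in A)
print(p_A)  # Fraction(1, 2)
```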
Some basic relationships of probability
– Complement of an event: The complement of event A is the event consisting of all sample points that are not in A.
– Union of Two events: The Union of Events A and B is the event containing all sample points that are in A or B or both.
– Intersection of two events: The intersection of events A and B is the set of all sample points that are in both A and B.
– Addition Law: This law provides a way to compute the probability that event A, event B, or both occur: P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
– Mutually Exclusive Events: Two events are said to be mutually exclusive if they have no sample points in common.
– Conditional Probability: The probability of an event occurring given that another event has occurred is called conditional probability.
– Multiplication Law: This law provides a way to compute the probability of the intersection of two events: P(A ∩ B) = P(B) P(A | B).
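The relationships above can all be checked on the fair-die sample space using ordinary Python set operations (the specific events A and B below are illustrative assumptions):

```python
from fractions import Fraction

# Equally likely sample points for a fair die (classical method).
S = set(range(1, 7))

def prob(event):
    """Probability of an event under equally likely outcomes."""
    return Fraction(len(event), len(S))

A = {2, 4, 6}   # "roll an even number"
B = {4, 5, 6}   # "roll greater than 3"

p_complement_A = prob(S - A)   # complement: equals 1 - P(A)
p_intersection = prob(A & B)   # intersection A ∩ B

# Addition law: P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
p_union = prob(A) + prob(B) - p_intersection

# Conditional probability: P(A | B) = P(A ∩ B) / P(B)
p_A_given_B = p_intersection / prob(B)

# Multiplication law recovers the intersection: P(A ∩ B) = P(B) * P(A | B)
print(p_union)                  # Fraction(2, 3)
print(prob(B) * p_A_given_B)    # Fraction(1, 3), same as p_intersection
```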
Prior Probability: the initial probability assigned to each possibility before any new data are observed.
Posterior Probability: the revised probability of each possibility after the new data have been taken into account.
Conditional Probability: the probability of each possibility given the observed data.
Bayes’ theorem provides the means for revising the prior probabilities in light of new information, producing the posterior probabilities.
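A small numerical sketch of this revision, assuming a hypothetical two-supplier quality example (the suppliers, shares, and defect rates are invented for illustration):

```python
from fractions import Fraction

# Prior probabilities: each supplier's share of incoming parts.
prior = {"S1": Fraction(65, 100), "S2": Fraction(35, 100)}

# Conditional probabilities: P(defective | supplier).
p_defect = {"S1": Fraction(2, 100), "S2": Fraction(5, 100)}

# Multiplication law gives the joint probabilities P(supplier and defective).
joint = {s: prior[s] * p_defect[s] for s in prior}

# Bayes' theorem: normalize the joints to get P(supplier | defective).
total = sum(joint.values())
posterior = {s: joint[s] / total for s in joint}

print(posterior["S1"])  # Fraction(26, 61)
print(posterior["S2"])  # Fraction(35, 61)
```

Observing a defect shifts belief toward S2, even though S2 supplies fewer parts, because its defect rate is higher.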