Bayes' Law

In probability theory and its applications, Bayes' theorem relates two conditional probabilities that are the reverse of each other.

This theorem is named for Thomas Bayes and is often called Bayes' law or Bayes' rule. Bayes' theorem expresses the conditional probability, or "posterior probability", of a hypothesis H (i.e., its probability after evidence E is observed) in terms of the "prior probability" of H, the prior probability of E, and the conditional probability of E given H. It implies that evidence has a stronger confirming effect if it was more unlikely before being observed: since P(H|E) = P(E|H)P(H)/P(E), a smaller P(E) yields a larger posterior for the same P(E|H) and P(H). Bayes' theorem is valid in all common interpretations of probability, and it is commonly applied in science and engineering.

Simple statement of the theorem

Thomas Bayes addressed both the case of discrete probability distributions of data and the more complicated case of continuous probability distributions. In the discrete case, Bayes' theorem relates the conditional and marginal probabilities of events A and B, provided that the probability of B does not equal zero:
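\[
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}
\]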

In Bayes' theorem, each probability has a conventional name:

  • P(A) is the prior probability (or "unconditional" or "marginal" probability) of A. It is "prior" in the sense that it does not take into account any information about B; the term implies no temporal order, so event B need not occur after event A.
  • P(A|B) is the conditional probability of A, given B. It is also called the posterior probability because it is derived from or depends upon the specified value of B.
  • P(B|A) is the conditional probability of B given A. It is also called the likelihood.
  • P(B) is the prior or marginal probability of B, and acts as a normalizing constant. By the law of total probability, P(B) = P(B|A)P(A) + P(B|¬A)P(¬A).

Bayes' theorem in this form gives a mathematical representation of how the conditional probability of event A given B is related to the converse conditional probability of B given A.
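
As a concrete illustration, here is a minimal numeric sketch in Python, assuming a hypothetical diagnostic-test scenario; the function name posterior and all the numbers (1% prevalence, 99% sensitivity, 5% false-positive rate) are illustrative assumptions, not values from the text.

```python
# Minimal sketch of Bayes' theorem for a hypothetical diagnostic test.
# All numbers are illustrative assumptions, not values from the text.

def posterior(prior_a, likelihood_b_given_a, likelihood_b_given_not_a):
    """Return P(A|B) = P(B|A) P(A) / P(B).

    The normalizing constant P(B) is expanded with the law of
    total probability: P(B) = P(B|A) P(A) + P(B|not A) P(not A).
    """
    p_b = (likelihood_b_given_a * prior_a
           + likelihood_b_given_not_a * (1.0 - prior_a))
    return likelihood_b_given_a * prior_a / p_b

# Hypothetical example: 1% disease prevalence (prior P(A)),
# 99% test sensitivity (P(B|A)), 5% false-positive rate (P(B|not A)).
p = posterior(0.01, 0.99, 0.05)
print(f"P(disease | positive test) = {p:.3f}")  # ~0.167
```

Even with an accurate test, the posterior here is only about 17%: the evidence (a positive result) is not very unlikely beforehand, because false positives from the much larger healthy population dominate P(B). This is the flip side of the observation above that more unlikely evidence confirms more strongly.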

Relevant Papers