1. Introduction to probability

Mathematics is a formal language that scientists have created to describe nature. One of its most fundamental problems is measurement, and the art lies in constructing abstract "measures". The mathematician and physicist Galileo Galilei put it as follows:

“Measure what is measurable, and make measurable what is not so.” — Galileo Galilei

Probability is a mathematical measure that quantifies the uncertainty, or likelihood, of an event.


2. Sample space and events

The set of possible outcomes of an experiment is called the sample space (denoted $\Omega$). Each element $\omega \in \Omega$ is called an outcome (a point in the sample space). Each subset of $\Omega$ is called an event.

A coin toss results in either heads or tails. Let $S$ denote "heads" and $N$ denote "tails". Tossing the coin twice, the sample space is $\Omega = \left\{ SS, SN, NS, NN \right\}$. The event that the first toss is heads is $A = \{ SS, SN \}$.
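The two-toss example can be checked with a quick enumeration. A minimal sketch in Python, using the labels $S$ (heads) and $N$ (tails) from the text:

```python
from itertools import product

# Sample space for two tosses: all ordered pairs over {S, N}
omega = {"".join(p) for p in product("SN", repeat=2)}
print(sorted(omega))  # ['NN', 'NS', 'SN', 'SS']

# Event A: the first toss is heads
A = {w for w in omega if w[0] == "S"}
print(sorted(A))      # ['SN', 'SS']
```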

Let $\omega$ be the speed of a motorcycle; then we can take the sample space to be $\Omega = \mathbb{R} = (-\infty, +\infty)$. Reading this far, readers may object that this choice of $\Omega$ is not appropriate, since the speed of a motorcycle surely has upper and lower bounds! In practice, however, this usually has no effect. The event that the motorcycle's speed is at least 40 and less than 50 is $A = [40, 50)$.

Given an event $A$, the set $\bar A = \{ \omega \in \Omega : \omega \notin A \}$ is called the complement of $A$; the event $\bar A$ is also called the negation of $A$.

Given two events $A$ and $B$, the union of $A$ and $B$, the event that "at least one of $A$ or $B$ occurs", is defined as:

$$ A \cup B = \left\{ \omega \in \Omega : \omega \in A \text{ or } \omega \in B \right\} $$

Given two events $A$ and $B$, the intersection of $A$ and $B$, the event that "both $A$ and $B$ occur", is defined as follows (we sometimes write $A \cap B$ as $(A, B)$ or $AB$):

$$ A \cap B = \left\{ \omega \in \Omega : \omega \in A \text{ and } \omega \in B \right\} $$
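Since events are subsets of $\Omega$, the union and intersection above are ordinary set operations. A minimal sketch, reusing the two-toss sample space (the choice of $B$ is arbitrary, for illustration):

```python
omega = {"SS", "SN", "NS", "NN"}   # two coin tosses
A = {"SS", "SN"}                   # first toss is heads
B = {"SS", "NS"}                   # second toss is heads

print(sorted(A | B))  # union: at least one of A, B occurs -> ['NS', 'SN', 'SS']
print(sorted(A & B))  # intersection: both A and B occur   -> ['SS']
```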


3. Probability

Probability is a real-valued function that quantifies the likelihood of each event $A$ in the sample space $\Omega$: each event $A$ is assigned a real number $\Pr(A)$ (the function is also called a probability measure). A probability must satisfy the following axioms:

The three axioms of probability

Axiom 1 (Non-negativity): $\Pr(A) \ge 0$, for all events $A$.

Axiom 2 (Normalization): $\Pr(\Omega) = 1$.

Axiom 3 (Additivity): If two events $A$ and $B$ are disjoint, i.e. $A \cap B = \emptyset$, then:
$$ \Pr(A \cup B) = \Pr(A) + \Pr(B) \tag{1} $$
More generally, if $A_1, A_2, \dots$ are pairwise disjoint, then:
$$ \Pr\left( \bigcup\limits_{i = 1}^{\infty} A_i \right) = \sum\limits_{i = 1}^{\infty} \Pr\left( A_i \right) \tag{2} $$

There are many ways to interpret probability; the most popular is the frequency interpretation. In many problems, the probability of an outcome can be thought of as the relative frequency with which it occurs when the experiment is repeated a large number of times under similar conditions. For example, the probability of a coin landing tails can be taken to be $1/2$ when the number of trials is large and the tossing conditions are similar across trials (not tossing it many times on Earth, then continuing on Mars).
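The frequency interpretation can be illustrated by simulation: the relative frequency of tails approaches $1/2$ as the number of trials grows. A sketch using Python's standard `random` module (the seed is arbitrary, chosen only for reproducibility):

```python
import random

random.seed(42)  # arbitrary seed so the run is reproducible

def tail_frequency(n_trials):
    """Relative frequency of tails in n_trials fair coin tosses."""
    tails = sum(random.random() < 0.5 for _ in range(n_trials))
    return tails / n_trials

# The frequency gets closer to 0.5 as the number of trials grows
for n in (100, 10_000, 1_000_000):
    print(n, tail_frequency(n))
```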

Which interpretation of probability one adopts is not especially important until we deal with problems of statistical inference; in later sections we will discuss the two classical schools of inference: frequentist statistics and Bayesian statistics.

From the above three axioms we can deduce some properties of probability as follows:

$$ \Pr(\emptyset) = 0 $$
$$ A \subset B \Rightarrow \Pr(A) \le \Pr(B) $$
$$ 0 \le \Pr(A) \le 1 $$
$$ \Pr(A) + \Pr(\bar A) = 1 $$
$$ A \cap B = \emptyset \Rightarrow \Pr(A \cup B) = \Pr(A) + \Pr(B) $$

Lemma. For all events $A$ and $B$ we have:
$$ \Pr(A \cup B) = \Pr(A) + \Pr(B) - \Pr(AB) $$
Proof. The set $A \cup B$ can be split into three disjoint sets: the elements of $A$ not in $B$, the elements common to $A$ and $B$, and the elements of $B$ not in $A$. Therefore:
$$ \begin{array}{ll}
\Pr\left( A \cup B \right) &= \Pr\left( (A\bar B) \cup (AB) \cup (\bar A B) \right) \\
&= \Pr\left( A\bar B \right) + \Pr\left( AB \right) + \Pr\left( \bar A B \right) \\
&= \Pr\left( A\bar B \right) + \Pr\left( AB \right) + \Pr\left( \bar A B \right) + \left( \Pr\left( AB \right) - \Pr\left( AB \right) \right) \\
&= \left( \Pr\left( A\bar B \right) + \Pr\left( AB \right) \right) + \left( \Pr\left( \bar A B \right) + \Pr\left( AB \right) \right) - \Pr\left( AB \right) \\
&= \Pr\left( (A\bar B) \cup (AB) \right) + \Pr\left( (\bar A B) \cup (AB) \right) - \Pr\left( AB \right) \\
&= \Pr\left( A \right) + \Pr\left( B \right) - \Pr\left( AB \right)
\end{array} $$
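The lemma can be verified directly on a finite sample space with equally likely outcomes. A minimal sketch (the events $A$ and $B$ are chosen arbitrarily for illustration):

```python
from fractions import Fraction

omega = {"SS", "SN", "NS", "NN"}   # two coin tosses, equally likely outcomes

def pr(event):
    """Counting probability: Pr(E) = |E| / |Omega|."""
    return Fraction(len(event), len(omega))

A = {"SS", "SN"}                   # first toss heads
B = {"SS", "NS"}                   # second toss heads

# Inclusion-exclusion: Pr(A u B) = Pr(A) + Pr(B) - Pr(AB)
lhs = pr(A | B)
rhs = pr(A) + pr(B) - pr(A & B)
print(lhs, rhs)  # 3/4 3/4
```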


4. Probability on a finite sample space

Suppose the sample space $\Omega = \{ \omega_1, \dots, \omega_n \}$ is finite. For example, tossing a coin three times has only $2^3 = 8$ possible outcomes: $\Omega = \{ (f, c, t) : f, c, t \in \{S, N\} \}$. The probability of an event $A$ is then $\Pr(A) = |A|/8$, where $|A|$ denotes the number of elements of $A$. For instance, the event that the first two tosses are $S$ (heads) is $A = \{ (S,S,N), (S,S,S) \}$, so the probability that $A$ occurs is $\Pr(A) = 2/8 = 1/4$.

Probability on a finite sample space. If the sample space $\Omega$ is finite and all outcomes are equally likely, the probability of an event $A$ is:
$$ \Pr(A) = \frac{\left| A \right|}{\left| \Omega \right|} $$
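The three-toss example from the text can be computed by enumeration, a sketch:

```python
from fractions import Fraction
from itertools import product

# Three coin tosses: 2^3 = 8 equally likely outcomes
omega = set(product("SN", repeat=3))
print(len(omega))  # 8

# Event A: the first two tosses are heads (S)
A = {w for w in omega if w[0] == "S" and w[1] == "S"}

# Pr(A) = |A| / |Omega|
pr_A = Fraction(len(A), len(omega))
print(pr_A)  # 1/4
```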

5. Independent events

Independence means having nothing to do with each other; the philosophy is very simple, and so is the definition of two independent events.


Definition: Two independent events. Two events $A$ and $B$ are called independent when:
$$ \Pr(AB) = \Pr(A)\Pr(B) $$
and we write $A \perp\!\!\!\perp B$. A family $M = \{A_i : i \in I\}$ of events is said to be independent if:
$$ \Pr\left( \bigcap\limits_{i \in J} A_i \right) = \prod\limits_{i \in J} \Pr(A_i) $$
for every finite subset $J$ of $I$.
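Independence can be checked directly from the definition on a finite sample space. In the two-toss example, "first toss heads" and "second toss heads" are independent, as this sketch confirms (using the counting probability $|E|/|\Omega|$):

```python
from fractions import Fraction

omega = {"SS", "SN", "NS", "NN"}

def pr(event):
    """Counting probability: Pr(E) = |E| / |Omega|."""
    return Fraction(len(event), len(omega))

A = {"SS", "SN"}   # first toss heads
B = {"SS", "NS"}   # second toss heads

# A and B are independent iff Pr(AB) == Pr(A) * Pr(B)
print(pr(A & B) == pr(A) * pr(B))  # True: 1/4 == 1/2 * 1/2
```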

6. Conditional Probability

Conditional probability can be stated roughly as the probability that some event $A$ occurs given that event $B$ has occurred; it is denoted $\Pr(A|B)$ and read as "the probability of $A$ given that $B$ occurs".

Definition: Conditional probability. If event $B$ has positive probability, i.e. $\Pr(B) > 0$, then the conditional probability of $A$ given $B$ is:
$$ \Pr(A|B) = \frac{\Pr\left( AB \right)}{\Pr\left( B \right)} $$
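The definition translates directly to code on a finite sample space. A sketch (the events are chosen arbitrarily; the function raises an error when $\Pr(B) = 0$, since the definition requires $\Pr(B) > 0$):

```python
from fractions import Fraction

omega = {"SS", "SN", "NS", "NN"}

def pr(event):
    """Counting probability: Pr(E) = |E| / |Omega|."""
    return Fraction(len(event), len(omega))

def pr_given(A, B):
    """Conditional probability Pr(A|B) = Pr(AB) / Pr(B); requires Pr(B) > 0."""
    if pr(B) == 0:
        raise ValueError("Pr(B) must be positive")
    return pr(A & B) / pr(B)

A = {"SS", "SN"}         # first toss heads
B = {"SS", "SN", "NS"}   # at least one head
print(pr_given(A, B))    # (1/2) / (3/4) = 2/3
```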

Thus, if two events $A$ and $B$ are independent, the probability of their intersection factors as the product of their individual probabilities, $\Pr(AB) = \Pr(A)\Pr(B)$, from which we derive the following lemma:


Lemma. If $A$ and $B$ are two independent events, then $\Pr(A|B) = \Pr(A)$. In other words, for independent events, conditioning on $B$ does not change the probability of $A$.

7. Total probability formula

Definition. A family of subsets $B_1, \dots, B_n$ of the sample space $\Omega$ is a partition of $\Omega$ if the sets $B_i$ are pairwise disjoint and their union equals $\Omega$:
$$ B_i \cap B_j = \emptyset \text{ for all } i \neq j, \quad \bigcup\limits_{i=1}^{n} B_i = \Omega $$

If we need to find a probability $\Pr(A)$, but the available information consists only of the probabilities $\Pr(B_i)$ of a partition $B_1, \dots, B_n$ of the sample space and the conditional probabilities $\Pr(A | B_i)$, then we can apply the law of total probability to compute the probability of event $A$:

$$ \Pr(A) = \sum_{i=1}^{n} \Pr(A \cap B_i) = \sum_{i=1}^{n} \Pr(A | B_i) \Pr(B_i) $$
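A small numerical sketch of the law of total probability. The scenario and numbers are hypothetical, invented for illustration: items come from one of two factories (a partition of the sample space), each with a known production share and defect rate, and we want the overall defect rate:

```python
from fractions import Fraction

# Hypothetical partition: item comes from factory 1 or factory 2
pr_B = {1: Fraction(6, 10), 2: Fraction(4, 10)}            # Pr(B_i), sums to 1
pr_A_given_B = {1: Fraction(1, 100), 2: Fraction(3, 100)}  # Pr(A|B_i): defect rates

# Law of total probability: Pr(A) = sum_i Pr(A|B_i) * Pr(B_i)
pr_A = sum(pr_A_given_B[i] * pr_B[i] for i in pr_B)
print(pr_A)  # 6/1000 + 12/1000 = 9/500
```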

8. Bayesian Formula

Thomas Bayes was an amateur mathematician; he is called an amateur because he came to mathematics late and his body of work is small, yet it was enough to change the world!

If $A$ and $B$ are two events with nonzero probability, then:
$$ \Pr(B|A) = \frac{\Pr(A|B)\Pr(B)}{\Pr(A)} \tag{3} $$
The formula above is a direct consequence of the identity $\Pr(B|A)\Pr(A) = \Pr(A|B)\Pr(B)$. Combined with the law of total probability, we obtain:

Bayesian formula. Assume $B_1, \dots, B_n$ is a partition of the sample space $\Omega$. Then Bayes' formula states:
$$ \Pr(B_i|A) = \frac{\Pr(A|B_i)\Pr(B_i)}{\sum_{j=1}^{n} \Pr(A|B_j)\Pr(B_j)} $$
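A numerical sketch of Bayes' formula on a hypothetical two-factory scenario (all numbers invented for illustration): given that an item is defective, which factory did it likely come from?

```python
from fractions import Fraction

# Hypothetical partition: production shares of two factories
pr_B = {1: Fraction(6, 10), 2: Fraction(4, 10)}            # Pr(B_i)
pr_A_given_B = {1: Fraction(1, 100), 2: Fraction(3, 100)}  # Pr(A|B_i): defect rates

# Denominator via the law of total probability
pr_A = sum(pr_A_given_B[i] * pr_B[i] for i in pr_B)

# Bayes: Pr(B_i|A) = Pr(A|B_i) * Pr(B_i) / Pr(A)
pr_B_given_A = {i: pr_A_given_B[i] * pr_B[i] / pr_A for i in pr_B}
print(pr_B_given_A)  # Pr(B_1|A) = 1/3, Pr(B_2|A) = 2/3
```

Note that factory 2 produces fewer items yet is twice as likely to be the source of a defect, because its defect rate is three times higher: exactly the kind of inversion Bayes' formula captures.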

The Bayesian formula is very simple, but its meaning is profound. When first learning conditional probability, many students confuse $\Pr(A|B)$ and $\Pr(B|A)$ as being the same number, but in reality they can differ greatly.

