
IB Mathematics AI AHL Transition Matrices – MAI Study Notes (New Syllabus)


LEARNING OBJECTIVE

  • Transition matrices.

Key Concepts: 

  • Transition matrices.
  • Powers of transition matrices.
  • Regular Markov chains & Initial state probability matrices.
  • Eigenvalue condition for the steady state.


Transition Matrices

A transition matrix represents the probabilities of moving between different states in a system.

Rows = Next state
Columns = Current state
Entries = Probability of transitioning from one state to another
Each column must sum to 1 (since probabilities total 100%)

Example

Population Movement Between Two Cities

From A: 80% stay, 20% move to B
From B: 70% stay, 30% move to A

Find the Transition Matrix

▶️Answer/Explanation

Solution:

Transition Matrix $T$:

$
T = \begin{pmatrix}
0.8 & 0.3 \\
0.2 & 0.7 \\
\end{pmatrix}
$

  • The columns refer to “from” (current state) and the rows to “to” (next state).
  • The sum of the entries in each column is 1 (see the sketch below).
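
As a quick check, here is a minimal NumPy sketch (the choice of NumPy is an assumption; any matrix tool works) that builds this transition matrix and confirms each column sums to 1.

```python
# Minimal sketch (assumes NumPy is available): build the transition matrix
# from the city example and check that each column sums to 1.
import numpy as np

# Columns = current state (A, B), rows = next state (A, B)
T = np.array([[0.8, 0.3],
              [0.2, 0.7]])

print(T.sum(axis=0))  # column sums -> [1. 1.], as required for a transition matrix
```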

 Markov Chains

A Markov Chain is a model for a system that transitions between states with fixed probabilities.

 

Key Property – Memorylessness:

The probability of moving to the next state depends only on the current state, not on the sequence of previous states.

 Terminology:

States: The different conditions a system can be in
Transitions: Movements between states with associated probabilities
Transition Matrix: Encodes all transition probabilities

State Vectors & Steady State

State Vector $S_n$:

Represents the distribution of the system at time $n$.

Each entry gives the number (or probability) of people/items in each state.

Updating State Vectors:

$
S_{n+1} = T \times S_n
$

Applying this update repeatedly gives the closed form:

$
S_n = T^n \times S_0
$

Example

Using the transition matrix $T$ from the city example above, let:

$
S_0 = \begin{pmatrix} 50 \\ 50 \end{pmatrix}
\quad \text{(50 in A, 50 in B)}
$

Find $S_1$ and $S_2$.

▶️Answer/Explanation

Solution:

$
S_1 = T \times S_0 = \begin{pmatrix} 0.8 & 0.3 \\ 0.2 & 0.7 \end{pmatrix} \begin{pmatrix} 50 \\ 50 \end{pmatrix} = \begin{pmatrix} 55 \\ 45 \end{pmatrix}
$

$
S_2 = T^2 \times S_0 = \begin{pmatrix} 0.70 & 0.45 \\ 0.30 & 0.55 \end{pmatrix} \begin{pmatrix} 50 \\ 50 \end{pmatrix} = \begin{pmatrix} 57.5 \\ 42.5 \end{pmatrix}
$
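
These two results can be reproduced with a minimal NumPy sketch (NumPy is an assumption; a GDC works equally well), using one direct multiplication and one matrix power.

```python
# Minimal sketch (assumes NumPy): update the state vector one step at a time
# and via a matrix power, reproducing S_1 and S_2 from the solution above.
import numpy as np

T = np.array([[0.8, 0.3],
              [0.2, 0.7]])
S0 = np.array([50, 50])

S1 = T @ S0                              # one step:  [55, 45]
S2 = np.linalg.matrix_power(T, 2) @ S0   # two steps: [57.5, 42.5]
print(S1, S2)
```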

Steady State Vector $S_\infty$:

The long-term behavior of the system as $n \to \infty$:

$
T \times S = S \quad \text{(i.e., eigenvector with eigenvalue 1)}
$

To find it, solve the system:

$
\begin{cases}
0.8x + 0.3y = x \\
0.2x + 0.7y = y \\
x + y = 100
\end{cases}
\quad \Rightarrow S_\infty = \begin{pmatrix} 60 \\ 40 \end{pmatrix}
$

 Interpretation: Eventually, 60% of people reside in A and 40% in B, regardless of the initial distribution.
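
As a cross-check, here is a minimal NumPy sketch (library choice is an assumption) that solves the same system numerically. The two balance equations reduce to one independent equation, so it is paired with the population condition $x + y = 100$.

```python
# Minimal sketch (assumes NumPy): solve the steady-state system T*S = S
# together with x + y = 100 as a 2x2 linear system.
import numpy as np

# 0.8x + 0.3y = x  and  0.2x + 0.7y = y  both reduce to  -0.2x + 0.3y = 0
A = np.array([[-0.2, 0.3],
              [ 1.0, 1.0]])   # x + y = 100
b = np.array([0, 100])

S_inf = np.linalg.solve(A, b)
print(S_inf)  # [60. 40.]
```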

Properties of a Steady State Vector

Stationary:

Once reached, applying the transition matrix does not change the vector: $T \times S_\infty = S_\infty$

Unique (under conditions):

If the matrix is regular (some power of $T$ has all positive entries), the steady state is unique.

Independent of Initial State:

Given enough time, the system always tends toward the same steady state, regardless of $S_0$.

Entries Sum to 1 or Total Population:

When working with probabilities, the sum of entries = 1
When using absolute counts, the sum = total population

Represents Long-Term Probabilities:

Each entry gives the long-run proportion of time the system spends in each state.

Markov Chains and Long-Term Behavior

 Repeated Transitions:

Continue multiplying by the transition matrix.
The system tends to stabilize if a steady state exists.
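
To illustrate this, a minimal NumPy sketch (an assumed tool choice) repeatedly multiplies several different starting vectors by the city transition matrix $T$; every run settles near the same steady state $(60, 40)$.

```python
# Minimal sketch (assumes NumPy): repeated multiplication by T drives
# different starting distributions towards the same steady state (60, 40).
import numpy as np

T = np.array([[0.8, 0.3],
              [0.2, 0.7]])

for S0 in (np.array([100.0, 0.0]), np.array([0.0, 100.0]), np.array([50.0, 50.0])):
    S = S0.copy()
    for _ in range(50):                 # apply the transition matrix repeatedly
        S = T @ S
    print(S0, "->", np.round(S, 2))     # each line ends close to [60. 40.]
```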

Applications:

  • Predict long-term population distributions
  • Model absorbing or recurrent states
  • Understand system stability or fluctuations

Example

A robot moves between positions 1, 2, 3 and 4, starting at position 2.

At positions 1 & 4 → it stops (absorbing states)
At 2 or 3 →

Move right: 0.6
Move left: 0.4

Long-Term: ?

▶️Answer/Explanation

Solution:

Transition Matrix $T$:

$
T = \begin{pmatrix}
1 & 0.4 & 0 & 0 \\
0 & 0 & 0.4 & 0 \\
0 & 0.6 & 0 & 0 \\
0 & 0 & 0.6 & 1 \\
\end{pmatrix}
$

Initial State:

$
S_0 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} \quad \text{(starts at position 2)}
$

After 2 Steps:

$
S_2 = T^2 \times S_0 = \begin{pmatrix} 0.40 \\ 0.24 \\ 0 \\ 0.36 \end{pmatrix}
$

Interpretation:

40% chance at position 1
24% chance at position 2
0% chance at position 3
36% chance at position 4

Long-Term:

The robot will eventually stop at position 1 (with probability \(\approx 52.6\%\)) or at position 4 (\(\approx 47.4\%\)).
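
These long-run absorption probabilities can be reproduced with a minimal NumPy sketch (NumPy and the power 200, chosen only as "large enough", are assumptions), by applying $T$ many times to the initial state.

```python
# Minimal sketch (assumes NumPy): long-run absorption probabilities for the
# four-position walk, found by applying a high power of T to S_0.
import numpy as np

T = np.array([[1, 0.4, 0,   0],
              [0, 0,   0.4, 0],
              [0, 0.6, 0,   0],
              [0, 0,   0.6, 1]])
S0 = np.array([0, 1, 0, 0])          # starts at position 2

S_long = np.linalg.matrix_power(T, 200) @ S0
print(np.round(S_long, 3))           # approx [0.526  0.  0.  0.474]
```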

MARKOV CHAINS AND EIGENVECTORS

The steady state of a Markov chain is directly linked to eigenvectors of its transition matrix. 

For a transition matrix \( T \), the steady state vector \( S \) satisfies:

$T \times S = 1 \times S$

This means:

\( S \) is an eigenvector of \( T \).
 The corresponding eigenvalue is 1.

(Every stochastic matrix \( T \) has at least one eigenvalue equal to 1.)

Finding the Steady State

Given the transition matrix for the city populations:

$
T = \begin{pmatrix} 0.8 & 0.3 \\ 0.2 & 0.7 \end{pmatrix}
$

find the steady state vector (for a total population of 100) using the eigenvector method.

▶️Answer/Explanation

Solution:

Solve for eigenvalues \( \lambda \):
$
\det(T - \lambda I) = 0 \implies \begin{vmatrix} 0.8 - \lambda & 0.3 \\ 0.2 & 0.7 - \lambda \end{vmatrix} = 0
$
$
(0.8 - \lambda)(0.7 - \lambda) - (0.3 \times 0.2) = 0 \implies \lambda^2 - 1.5\lambda + 0.5 = 0
$
$
\lambda = 1 \quad \text{or} \quad \lambda = 0.5
$

 For \( \lambda = 1 \), eigenvector \( S = \begin{pmatrix} x \\ y \end{pmatrix} \):
$
(T - I)S = 0 \implies \begin{pmatrix} -0.2 & 0.3 \\ 0.2 & -0.3 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
$
$
-0.2x + 0.3y = 0 \implies 2x = 3y \implies \frac{x}{y} = \frac{3}{2}
$
Thus, the general eigenvector is:
$
S = t \begin{pmatrix} 3 \\ 2 \end{pmatrix}
$

Normalize for total population (e.g., 100 people):
$
3t + 2t = 100 \implies t = 20 \implies S = \begin{pmatrix} 60 \\ 40 \end{pmatrix}
$

(This matches our earlier steady state!)
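
The same result can be checked numerically. Here is a minimal sketch assuming NumPy, which extracts the eigenvalue-1 eigenvector with `np.linalg.eig` and scales it to a total population of 100.

```python
# Minimal sketch (assumes NumPy): find the eigenvalue-1 eigenvector of T and
# scale it to a total population of 100, reproducing S = (60, 40).
import numpy as np

T = np.array([[0.8, 0.3],
              [0.2, 0.7]])

eigvals, eigvecs = np.linalg.eig(T)
i = np.argmin(np.abs(eigvals - 1))            # pick the eigenvalue closest to 1
v = eigvecs[:, i]
S = 100 * v / v.sum()                         # normalise to a population of 100
print(np.round(eigvals, 2), np.round(S, 1))   # eigenvalues 1 and 0.5; S = [60. 40.]
```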

Interpretation

  • Eigenvalue 1: Ensures a stable long-term distribution (the steady state).
  • Eigenvalue 0.5: Reflects how quickly the system converges to the steady state (the smaller the magnitude of this eigenvalue, the faster the convergence).