IB Mathematics AI AHL: Transition Matrices (MAI Study Notes, New Syllabus)
LEARNING OBJECTIVE
- Transition matrices.
Key Concepts:
- Transition matrices.
- Powers of transition matrices.
- Regular Markov chains & Initial state probability matrices.
- Eigenvalue Condition for Steady State
Transition Matrices
A transition matrix (also known as a state transition matrix or Markov matrix) is a square matrix that represents the probabilities of moving between the different states of a Markov system or stochastic process. In the convention used here, each entry gives the probability of moving from the current state (its column) to the next state (its row).
- Rows = next state (i.e. the destination)
- Columns = current state (i.e. the origin)
- Entries = probability of transitioning from the current state to the next
- Each column must sum to 1 (since the total probability of leaving a state is 1)
Key Features:
- All entries must be between 0 and 1.
- The matrix is often square: if there are \( n \) states, the matrix is \( n \times n \).
- Used to model systems evolving over discrete time steps.
- Common in population models, economics, genetics, and game theory.
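These key features can be checked directly. The following is a minimal sketch in Python with NumPy (a tooling choice, not part of the syllabus), using a hypothetical two-state matrix in this document's column-stochastic convention:

```python
import numpy as np

# Hypothetical two-state transition matrix; in this document's convention,
# columns are the current state and rows are the next state.
T = np.array([[0.8, 0.3],
              [0.2, 0.7]])

# Every entry lies between 0 and 1 ...
assert np.all((T >= 0) & (T <= 1))
# ... and each column sums to 1.
assert np.allclose(T.sum(axis=0), 1.0)
print("T is a valid (column-stochastic) transition matrix")
```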
Example: Population movement between two cities, A and B. Each year 80% of the people in A stay in A and 20% move to B, while 30% of the people in B move to A and 70% stay in B. Find the transition matrix.

Answer/Explanation: Let the states be A and B, with columns giving the current city and rows the next city. The transition matrix \( T \) is:

\( T = \begin{pmatrix} 0.8 & 0.3 \\ 0.2 & 0.7 \end{pmatrix} \)
Powers of a Transition Matrix
To find the probability distribution after multiple steps, we raise the transition matrix to a power:
\( T^n \) gives the probabilities of transitioning between states after n time steps.
Why use powers?
- \( T^2 \): shows 2-step transition probabilities
- \( T^3 \): shows 3-step transitions, and so on
- Multiplying the matrix by an initial state vector gives the system’s state at the next step
Example: Use the matrix from the previous example. Find the probability of being in each city after 2 steps.

Answer/Explanation: Given the transition matrix \( T = \begin{pmatrix} 0.8 & 0.3 \\ 0.2 & 0.7 \end{pmatrix} \), find \( T^2 \) to get the 2-step transition matrix:

$ T^2 = T \cdot T = \begin{pmatrix} 0.8 & 0.3 \\ 0.2 & 0.7 \end{pmatrix} \cdot \begin{pmatrix} 0.8 & 0.3 \\ 0.2 & 0.7 \end{pmatrix} $

$ = \begin{pmatrix} (0.8)(0.8)+(0.3)(0.2) & (0.8)(0.3)+(0.3)(0.7) \\ (0.2)(0.8)+(0.7)(0.2) & (0.2)(0.3)+(0.7)(0.7) \end{pmatrix} = \begin{pmatrix} 0.70 & 0.45 \\ 0.30 & 0.55 \end{pmatrix} $
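A quick numerical check of the hand computation above (sketched in Python with NumPy, which is assumed tooling rather than part of the syllabus):

```python
import numpy as np
from numpy.linalg import matrix_power

T = np.array([[0.8, 0.3],
              [0.2, 0.7]])

# Raising T to the power 2 gives the 2-step transition probabilities.
T2 = matrix_power(T, 2)
print(T2)  # matches the hand computation: [[0.70, 0.45], [0.30, 0.55]]
```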
Transition Diagrams in Discrete Dynamical Systems
Definition: A transition diagram is a directed graph that visually represents the movement between states in a system. It is commonly used with transition matrices in discrete dynamical systems.
Key Features:
- Nodes: Represent the different states in the system.
- Arrows: Show the direction of transition from one state to another.
- Weights on arrows: Indicate the probability or proportion of moving from one state to the next.
- Used with: Transition matrices \( T \), where entries \( T_{ij} \) represent the probability of moving from state \( j \) to state \( i \).
Connection to Transition Matrices:
Each column in the transition matrix corresponds to a state you are moving from, and each row shows the state you are moving to.
Use in Dynamical Systems:
If the initial state is given by a column vector (e.g. \( \begin{pmatrix} 100 \\ 0 \end{pmatrix} \)), then repeated multiplication by the transition matrix will simulate how populations evolve over time:
$ \text{Next state} = T \times \text{Current state} $ $ \text{After } n \text{ steps: } \quad \mathbf{x}_n = T^n \cdot \mathbf{x}_0 $
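The repeated multiplication above can be simulated step by step. A minimal sketch (Python/NumPy assumed as tooling), starting from the initial vector \( \begin{pmatrix} 100 \\ 0 \end{pmatrix} \):

```python
import numpy as np

T = np.array([[0.8, 0.3],
              [0.2, 0.7]])
x = np.array([100.0, 0.0])  # everyone starts in city A

# Each multiplication by T advances the system one time step.
for n in range(1, 4):
    x = T @ x
    print(f"after {n} step(s): {x}")
```

After one step this gives (80, 20), and after two steps (70, 30), which agrees with applying the 2-step matrix \( T^2 \) computed earlier to the initial vector.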
Example: Consider two cities, A and B, with the same yearly movement as before: 80% of A's population stays in A and 20% moves to B, while 30% of B's population moves to A and 70% stays in B. Find the transition matrix and describe the transition diagram.

Answer/Explanation:

Transition matrix: $ T = \begin{pmatrix} 0.8 & 0.3 \\ 0.2 & 0.7 \end{pmatrix} $

Transition diagram: two nodes, A and B; a loop at A labelled 0.8, an arrow from A to B labelled 0.2, an arrow from B to A labelled 0.3, and a loop at B labelled 0.7. (Original diagram not reproduced.)
Markov Chains
A Markov Chain is a model for a system that transitions between states with fixed probabilities.
Key Property – Memorylessness:
The probability of moving to the next state depends only on the current state, not on the sequence of previous states.
Terminology:
States: The different conditions a system can be in
Transitions: Movements between states with associated probabilities
Transition Matrix: Encodes all transition probabilities
State Vectors & Steady State
State Vector $S_n$:
Represents the distribution of the system at time $n$.
Each entry gives the number (or probability) of people/items in each state.
Updating State Vectors:
$
S_{n+1} = T \times S_n
$
Or recursively:
$
S_n = T^n \times S_0
$
Example: Let \( T = \begin{pmatrix} 0.8 & 0.3 \\ 0.2 & 0.7 \end{pmatrix} \) and \( S_0 = \begin{pmatrix} 100 \\ 0 \end{pmatrix} \). Find \( S_1 \) and \( S_2 \).

Answer/Explanation:

$ S_1 = T \times S_0 = \begin{pmatrix} 0.8 & 0.3 \\ 0.2 & 0.7 \end{pmatrix} \begin{pmatrix} 100 \\ 0 \end{pmatrix} = \begin{pmatrix} 80 \\ 20 \end{pmatrix} $

$ S_2 = T \times S_1 = \begin{pmatrix} 0.8 & 0.3 \\ 0.2 & 0.7 \end{pmatrix} \begin{pmatrix} 80 \\ 20 \end{pmatrix} = \begin{pmatrix} 70 \\ 30 \end{pmatrix} $
Steady State Vector $S_\infty$:
The long-term behavior of the system as $n \to \infty$:
$
T \times S = S \quad \text{(i.e., eigenvector with eigenvalue 1)}
$
To find:
Solve:
$
\begin{cases}
0.8x + 0.3y = x \\
0.2x + 0.7y = y \\
x + y = 100
\end{cases}
\quad \Rightarrow S_\infty = \begin{pmatrix} 60 \\ 40 \end{pmatrix}
$
Interpretation: Eventually, 60% of people reside in A and 40% in B, regardless of the initial distribution.
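The system of equations above can also be solved numerically. A minimal sketch (Python/NumPy assumed as tooling): stack the steady-state equations \( (T - I)S = 0 \) with the population constraint \( x + y = 100 \) and solve by least squares:

```python
import numpy as np

T = np.array([[0.8, 0.3],
              [0.2, 0.7]])

# Three equations in two unknowns: (T - I) S = 0 plus x + y = 100.
A = np.vstack([T - np.eye(2), np.ones((1, 2))])
b = np.array([0.0, 0.0, 100.0])

# The system is consistent, so least squares recovers the exact solution.
S, *_ = np.linalg.lstsq(A, b, rcond=None)
print(S)  # [60. 40.]
```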
Properties of a Steady State Vector
Stationary:
Once reached, applying the transition matrix does not change the vector: $T \times S_\infty = S_\infty$
Unique (under conditions):
If the matrix is regular (some power of $T$ has all positive entries), the steady state is unique.
Independent of Initial State:
Given enough time, the system always tends toward the same steady state, regardless of $S_0$.
Entries Sum to 1 or Total Population:
When working with probabilities, the sum of entries = 1
When using absolute counts, the sum = total population
Represents Long-Term Probabilities:
Each entry gives the long-run proportion of time the system spends in each state.

Markov Chains and Long-Term Behavior
Repeated Transitions:
Continue multiplying by the transition matrix.
The system tends to stabilize if a steady state exists.
Applications:
- Predict long-term population distributions
- Model absorbing or recurrent states
- Understand system stability or fluctuations
Example: A robot moves between positions 1, 2, 3, 4, starting at position 2. At positions 1 and 4 it stops (absorbing states). From positions 2 and 3 it moves right with probability 0.6 and left with probability 0.4. What happens in the long term?

Answer/Explanation:

Transition matrix \( T \) (columns = current position, rows = next position):

$ T = \begin{pmatrix} 1 & 0.4 & 0 & 0 \\ 0 & 0 & 0.4 & 0 \\ 0 & 0.6 & 0 & 0 \\ 0 & 0 & 0.6 & 1 \end{pmatrix} $

Initial state: \( S_0 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} \)

After 2 steps: \( S_2 = T^2 S_0 = \begin{pmatrix} 0.40 \\ 0.24 \\ 0 \\ 0.36 \end{pmatrix} \)

Interpretation: after 2 steps there is a 40% chance the robot has already stopped at position 1.

Long-term: the robot will eventually stop at position 1 (\( \approx 52.6\% \)) or at position 4 (\( \approx 47.4\% \)).
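This absorbing chain can be checked numerically. The sketch below (Python/NumPy assumed as tooling) takes the robot to start at position 2 and to move left with probability 0.4, since those assumptions reproduce the 40% and 52.6% figures quoted:

```python
import numpy as np
from numpy.linalg import matrix_power

# Columns = current position, rows = next position; positions 1 and 4 absorb.
T = np.array([[1.0, 0.4, 0.0, 0.0],
              [0.0, 0.0, 0.4, 0.0],
              [0.0, 0.6, 0.0, 0.0],
              [0.0, 0.0, 0.6, 1.0]])
s0 = np.array([0.0, 1.0, 0.0, 0.0])  # robot starts at position 2

s2 = matrix_power(T, 2) @ s0
print(s2)  # [0.4, 0.24, 0., 0.36]: a 40% chance of having stopped at position 1

# A large power approximates the long-term (absorbing) distribution.
s_long = matrix_power(T, 200) @ s0
print(s_long[0], s_long[3])  # ~0.526 at position 1, ~0.474 at position 4
```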
Markov Chains and Eigenvectors
The steady state of a Markov chain is directly linked to eigenvectors of its transition matrix.
For a transition matrix \( T \), the steady state vector \( S \) satisfies:
$T \times S = 1 \times S$
This means:
\( S \) is an eigenvector of \( T \).
The corresponding eigenvalue is 1.
(Every stochastic matrix \( T \) has at least one eigenvalue equal to 1.)
Example: Finding the steady state (eigenvector method). Given the transition matrix for the city populations, \( T = \begin{pmatrix} 0.8 & 0.3 \\ 0.2 & 0.7 \end{pmatrix} \), find the steady state using eigenvectors.

Answer/Explanation:

Solve for the eigenvalues \( \lambda \):

$ \det(T - \lambda I) = (0.8 - \lambda)(0.7 - \lambda) - (0.3)(0.2) = \lambda^2 - 1.5\lambda + 0.5 = (\lambda - 1)(\lambda - 0.5) = 0 $

so \( \lambda = 1 \) or \( \lambda = 0.5 \).

For \( \lambda = 1 \), the eigenvector \( S = \begin{pmatrix} x \\ y \end{pmatrix} \) satisfies \( -0.2x + 0.3y = 0 \), i.e. \( y = \tfrac{2}{3}x \), so \( S = \begin{pmatrix} 3 \\ 2 \end{pmatrix} \) (up to scaling).

Normalizing for a total population of 100 people: \( S_\infty = \begin{pmatrix} 60 \\ 40 \end{pmatrix} \).

(This matches our earlier steady state!)
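The eigenvector method can also be carried out numerically. A minimal sketch (Python/NumPy assumed as tooling): compute the eigenvalues, pick the eigenvector for \( \lambda = 1 \), and rescale it to a population of 100:

```python
import numpy as np

T = np.array([[0.8, 0.3],
              [0.2, 0.7]])

vals, vecs = np.linalg.eig(T)
print(np.round(vals, 2))  # eigenvalues 1.0 and 0.5 (order may vary)

# Select the eigenvector column belonging to eigenvalue 1 ...
i = int(np.argmax(np.isclose(vals, 1.0)))
# ... and scale its entries so they sum to a total population of 100.
S = vecs[:, i] / vecs[:, i].sum() * 100
print(S)  # [60. 40.]
```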
Interpretation
- Eigenvalue 1: Ensures a stable long-term distribution (steady state).
- Eigenvalue 0.5: Reflects how quickly the system converges to steady state.
- (Smaller eigenvalues → Faster convergence.)