[qdeck]
[h] IB Mathematics AI HL Flashcards: Eigenvalues and Eigenvectors
[q] Eigenvalues & Eigenvectors
CENTRAL PROBLEM — This may make more sense after looking at the matrix transformations subtopic (3.9), or if you go on to study matrices after IB, but it is important in mathematics to investigate the solutions to this problem:
\[
A \vec{x} = \lambda \vec{x}
\]
where:
– \(A\) is a matrix
– \(\lambda\) is a scalar (regular number)
– \(\vec{x}\) is a vector
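For a concrete instance (a quick illustration, borrowing the matrix that appears in E.G. 2 further down):
\[
\begin{pmatrix} 6 & 1 \\ -2 & 3 \end{pmatrix} \begin{pmatrix} -1 \\ 1 \end{pmatrix} = \begin{pmatrix} -5 \\ 5 \end{pmatrix} = 5 \begin{pmatrix} -1 \\ 1 \end{pmatrix}
\]
so \(\lambda = 5\) and \(\vec{x} = \begin{pmatrix} -1 \\ 1 \end{pmatrix}\) together solve the problem for that matrix.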
[q]
> But the interesting thing about this problem is that we aren’t simply being asked to find the solution(s). Instead, it is commonly structured as follows:
1. We will usually be given a matrix, \(A\). In IB, this will be 2×2.
2. Then you set about trying to find the scalars, \(\lambda\), that could possibly fit. We call these **eigenvalues**.
3. At that point, you can find the corresponding vectors that fit for each value of \(\lambda\); these are called **eigenvectors**.
[a]
EIGENVALUES — Now, there are a lot of steps involved in explaining how eigenvalues are found, and why. Firstly, we rearrange: \(A \vec{x} = \lambda \vec{x}\) → \(A \vec{x} - \lambda \vec{x} = 0\) → \((A - \lambda I) \vec{x} = 0\) (we insert \(I\) because \(\lambda \vec{x} = \lambda I \vec{x}\), which lets us factor out \(\vec{x}\)). Then, we must consider two scenarios:
1. If the inverse of \((A - \lambda I)\) exists → multiplying both sides by \((A - \lambda I)^{-1}\) gives \(\vec{x} = (A - \lambda I)^{-1}\vec{0} = \vec{0}\). But this was an obvious solution from the start, so we ignore it.
2. If \((A - \lambda I)^{-1}\) does not exist → then we are interested, and this happens exactly when \(\det(A - \lambda I) = 0\). So we set that up, and start solving; we will get a polynomial in \(\lambda\) (a quadratic for us), which is the **characteristic polynomial**. We solve that, and we have our eigenvalues.
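For instance, with the matrix used in E.G. 2 below (a quick worked check rather than one of the numbered examples):
\[
\det\left( \begin{pmatrix} 6 & 1 \\ -2 & 3 \end{pmatrix} - \lambda I \right) = (6 - \lambda)(3 - \lambda) - (1)(-2) = \lambda^2 - 9\lambda + 20 = (\lambda - 5)(\lambda - 4) = 0
\]
giving \(\lambda_1 = 5\) and \(\lambda_2 = 4\).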
[q]
EIGENVECTORS — Now, there is only a certain set of vectors that fit with each eigenvalue, so we need to find those:
We take the first \(\lambda\), \(\lambda_1\), and set up:
\[
(A - \lambda_1 I) \begin{pmatrix} a \\ b \end{pmatrix} = 0
\]
[a]
Subtract, then multiply out, and we should have a **system of equations**. Because \(\det(A - \lambda_1 I) = 0\), the two equations are fundamentally the same equation listed twice, so the system always has (infinitely many) solutions. We can still analyse the structure of \(\begin{pmatrix} a \\ b \end{pmatrix}\): we set \(b = t\), find \(a\) in terms of \(t\), write out the vector, then factor out \(t\). This gives us a whole family of potential eigenvectors, but quoting a single numerical vector from that family (usually with \(t = 1\)) is what gets classed as the eigenvector.
E.G. 2 Find the eigenvectors of the matrix:
\[
\begin{pmatrix} 6 & 1 \\ -2 & 3 \end{pmatrix}
\]
We already have \(\lambda_1 = 5\) and \(\lambda_2 = 4\). We set up:
\[
\left( \begin{pmatrix} 6 & 1 \\ -2 & 3 \end{pmatrix} - 5I \right) \begin{pmatrix} a \\ b \end{pmatrix} = 0
\]
Giving us:
\[
\begin{cases} a + b = 0 \\ -2a - 2b = 0 \end{cases} \Rightarrow b = t,\; a = -t \Rightarrow \begin{pmatrix} a \\ b \end{pmatrix} = t \begin{pmatrix} -1 \\ 1 \end{pmatrix}
\]
So we take:
\[
\vec{x_1} = \begin{pmatrix} -1 \\ 1 \end{pmatrix}
\]
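The same process with \(\lambda_2 = 4\) (a brief sketch of the working, presumably what E.G. 3 covers) gives:
\[
\left( \begin{pmatrix} 6 & 1 \\ -2 & 3 \end{pmatrix} - 4I \right) \begin{pmatrix} a \\ b \end{pmatrix} = 0 \Rightarrow 2a + b = 0 \Rightarrow \vec{x_2} = \begin{pmatrix} 1 \\ -2 \end{pmatrix}
\]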
[q]
DIAGONALISATION — Next, we are going to use the fact that it is helpful to have a diagonal matrix. We have already seen one example of this: multiplying by \(I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\) keeps things the same. Diagonal matrices in general, i.e. ones of the form \(\begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix}\), have further useful properties. So the next question becomes: can we take any matrix \(A\), and diagonalise it?
[a]
> Yes, we can (at least for the matrices met in this course, which have two distinct real eigenvalues), and whilst we won't derive it here, the solution directly uses both eigenvalues & eigenvectors.
The formula we use is:
\[
A = PDP^{-1}
\]
where:
– \(P = (x_1, x_2)\), with \(x_1\) and \(x_2\) being the eigenvectors of \(A\),
– and \(D\) is the **diagonal matrix** whose entries are the eigenvalues of \(A\), in the same order as the corresponding eigenvectors in \(P\).
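For the running example (a sketch, using \(\vec{x_1}\) from E.G. 2 and \(\vec{x_2}\) from the working sketched after it):
\[
P = \begin{pmatrix} -1 & 1 \\ 1 & -2 \end{pmatrix}, \qquad D = \begin{pmatrix} 5 & 0 \\ 0 & 4 \end{pmatrix}
\]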
[q]
> The \((x_1, x_2)\) notation means you just have the two eigenvectors side-by-side as columns, forming a matrix. As you can see, \(D\) is our diagonal matrix. If we have \(A = PDP^{-1}\), you can also rearrange to get \(D = P^{-1}AP\). We say **P diagonalises A**.
– E.G. 4 From E.G. 2/3, verify that \( P = \begin{pmatrix} -1 & 1 \\ 1 & -2 \end{pmatrix} \) diagonalises \( A = \begin{pmatrix} 6 & 1 \\ -2 & 3 \end{pmatrix} \):
\[
P^{-1} = \frac{1}{(-1)(-2) - (1)(1)} \begin{pmatrix} -2 & -1 \\ -1 & -1 \end{pmatrix} = \begin{pmatrix} -2 & -1 \\ -1 & -1 \end{pmatrix}
\]
Then:
\[
P^{-1} A P = D
\]
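Carrying out the multiplication as a check (a sketch of the working):
\[
AP = \begin{pmatrix} 6 & 1 \\ -2 & 3 \end{pmatrix} \begin{pmatrix} -1 & 1 \\ 1 & -2 \end{pmatrix} = \begin{pmatrix} -5 & 4 \\ 5 & -8 \end{pmatrix}, \qquad P^{-1}(AP) = \begin{pmatrix} -2 & -1 \\ -1 & -1 \end{pmatrix} \begin{pmatrix} -5 & 4 \\ 5 & -8 \end{pmatrix} = \begin{pmatrix} 5 & 0 \\ 0 & 4 \end{pmatrix} = D
\]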
POWERS — One of the main uses of diagonalised matrices is being able to compute large powers of \(A\) quickly and easily. How does writing \(A = PDP^{-1}\) make \(A^n\) easy to find?
[a]
POWERS — Here, we see one of the main uses of diagonalised matrices — being able to compute larger powers of \( A \) quickly and easily.
If we have \( A = PDP^{-1} \), then:
\[
A^n = (PDP^{-1})(PDP^{-1}) \dots (PDP^{-1}) = PD^nP^{-1}
\]
Every internal \(P^{-1}P\) pair cancels to the identity, leaving only the first \(P\) and the last \(P^{-1}\). This allows us to find higher powers of \( A \) simply by raising the diagonal matrix \( D \) to that power, which is straightforward since \( D \) is diagonal.
– E.G. 5 Take \( D = \begin{pmatrix} 5 & 0 \\ 0 & 4 \end{pmatrix} \). Then:
\[
D^2 = \begin{pmatrix} 5 & 0 \\ 0 & 4 \end{pmatrix} \begin{pmatrix} 5 & 0 \\ 0 & 4 \end{pmatrix} = \begin{pmatrix} 25 & 0 \\ 0 & 16 \end{pmatrix}
\]
\[
D^3 = \begin{pmatrix} 5 & 0 \\ 0 & 4 \end{pmatrix} \begin{pmatrix} 25 & 0 \\ 0 & 16 \end{pmatrix} = \begin{pmatrix} 125 & 0 \\ 0 & 64 \end{pmatrix}
\]
> This is useful because calculating powers of a diagonal matrix is much easier than with a general matrix.
We can then use:
\[
A^n = PD^nP^{-1}
\]
to help calculate higher powers of \( A \).
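For instance, with the matrices from the examples above (a quick sketch taking \( n = 2 \)):
\[
A^2 = PD^2P^{-1} = \begin{pmatrix} -1 & 1 \\ 1 & -2 \end{pmatrix} \begin{pmatrix} 25 & 0 \\ 0 & 16 \end{pmatrix} \begin{pmatrix} -2 & -1 \\ -1 & -1 \end{pmatrix} = \begin{pmatrix} 34 & 9 \\ -18 & 7 \end{pmatrix}
\]
which agrees with multiplying \( A \) by itself directly.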
NOTE: You could use the GDC (graphic display calculator) to find eigenvalues/eigenvectors and diagonalise matrices, but it's also valuable to understand the process manually.
[x] Exit text
[/qdeck]